Probabilistic Path Enumeration in Directed Acyclic Graphs
Given a directed acyclic graph (DAG) consisting of $V$ vertices and $E$ edges, each edge $e_k$ possesses an independent activation probability $p_k$. The objective is to compute the expected number of functional paths originating from vertex $0$ and terminating at vertex $V-1$. A path is considered functional if all edges traversed along it are successfully activated.
Analytical Approach
The expectation can be derived through topological processing. For any vertex $u$, the expected count of functional paths reaching $u$ is the sum, over its predecessors $w$, of the expected count at $w$ multiplied by the activation probability of the edge $(w, u)$; this follows from linearity of expectation. By initializing the source vertex with an expectation of $1.0$, a standard topological traversal propagates these values across the DAG. In-degree tracking guarantees that a vertex is processed only after all its incoming dependencies have been resolved, so each vertex's expectation is final before it is propagated onward, and no path is counted twice.
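Before reading the full program, the recurrence can be sanity-checked on a toy instance. The sketch below is a minimal illustration, not the submission code: the graph layout and probabilities are invented for the example, and vertices are assumed to be numbered in topological order.

```cpp
#include <vector>
#include <utility>

// Expected number of functional 0 -> (V-1) paths. edges[u] lists (v, p) pairs;
// vertices are assumed numbered so every edge goes from lower to higher index.
double expected_path_count(const std::vector<std::vector<std::pair<int, double>>>& edges) {
    std::vector<double> exp_paths(edges.size(), 0.0);
    exp_paths[0] = 1.0;  // exactly one (trivial) path reaches the source
    for (std::size_t u = 0; u < edges.size(); ++u)
        for (const auto& [v, p] : edges[u])
            exp_paths[v] += exp_paths[u] * p;  // linearity of expectation
    return exp_paths.back();
}
```

With edges $0 \to 1$ ($p = 0.5$), $0 \to 2$ ($p = 0.25$), and $1 \to 2$ ($p = 0.5$), the direct edge contributes $0.25$ and the two-hop path $0.5 \times 0.5 = 0.25$, totalling $0.5$.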
Implementation
#include <iostream>
#include <vector>
#include <queue>
#include <iomanip>
using namespace std;

const int MAX_V = 100005;

vector<pair<int, double>> adjacency[MAX_V];
int in_degree[MAX_V];
double expected_paths[MAX_V];

int main() {
    int v_count, e_count;
    cin >> v_count >> e_count;
    for (int i = 0; i < e_count; ++i) {
        int from, to;
        double prob;
        cin >> from >> to >> prob;
        adjacency[from].push_back({to, prob});
        in_degree[to]++;
    }
    // Seed the traversal with every source vertex; only vertex 0 carries weight.
    queue<int> process_queue;
    for (int i = 0; i < v_count; ++i) {
        if (in_degree[i] == 0) {
            process_queue.push(i);
        }
    }
    expected_paths[0] = 1.0;  // exactly one (trivial) path reaches the source
    while (!process_queue.empty()) {
        int curr = process_queue.front();
        process_queue.pop();
        for (auto& edge : adjacency[curr]) {
            int next = edge.first;
            double p = edge.second;
            // Linearity of expectation: every path into curr extends with probability p.
            expected_paths[next] += expected_paths[curr] * p;
            if (--in_degree[next] == 0) {
                process_queue.push(next);
            }
        }
    }
    cout << fixed << setprecision(2) << expected_paths[v_count - 1] << endl;
    return 0;
}
Bisection Validity via Split-and-Merge
Determine the number of valid subsets within a multiset of $N$ integers that can be partitioned into two disjoint sub-multisets $A$ and $B$ such that the sum of elements in $A$ equals the sum of elements in $B$. Empty subsets are excluded from the final tally.
Analytical Approach
For a subset to be bisectable, the difference between the cumulative weights of its two partitions must be zero. Direct enumeration of all $3^N$ assignments (each element skipped, placed in $A$, or placed in $B$) is computationally prohibitive for large $N$. By bifurcating the dataset into two halves, we apply a meet-in-the-middle strategy. The first half generates all possible assignment configurations, mapping each net weight difference $\Delta$ to the bitmasks of selected elements. The second half iterates over its configurations, querying the map for the complementary difference $-\Delta$. The union of bitmasks from a complementary pair yields a valid bisectable subset, which is then recorded in a boolean validity array.
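To make the signed-difference formulation concrete, here is a brute-force reference (exponential, so viable only for tiny subsets; the function name is illustrative) that decides a single subset by trying every $A$/$B$ assignment:

```cpp
#include <vector>
#include <cstddef>

// A subset is bisectable iff its elements admit a +/- signing with zero total:
// '+' elements form A, '-' elements form B, and sum(A) == sum(B) exactly when
// the signed sum (the delta of the approach above) vanishes.
bool bisectable(const std::vector<long long>& subset) {
    std::size_t n = subset.size();
    for (unsigned long long signs = 0; signs < (1ULL << n); ++signs) {
        long long delta = 0;
        for (std::size_t i = 0; i < n; ++i)
            delta += ((signs >> i) & 1) ? subset[i] : -subset[i];
        if (delta == 0) return !subset.empty();  // exclude the empty subset
    }
    return false;
}
```

The meet-in-the-middle code reaches the same verdict for every mask, but shares the $2$-half enumerations across all subsets instead of re-deciding each one from scratch.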
Implementation
#include <iostream>
#include <vector>
#include <unordered_map>
using namespace std;

const int MAX_N = 25;

long long elements[MAX_N + 1];  // elements are 1-indexed below
bool is_valid[1 << MAX_N];
unordered_map<long long, vector<int>> delta_registry;

// Each element is skipped, placed in A (+delta), or placed in B (-delta).
// Bit (depth - 1) of the mask marks element `depth` as selected.
void generate_left(int depth, long long delta, int bitmask, int limit) {
    if (depth > limit) {
        delta_registry[delta].push_back(bitmask);
        return;
    }
    generate_left(depth + 1, delta, bitmask, limit);
    generate_left(depth + 1, delta + elements[depth], bitmask | (1 << (depth - 1)), limit);
    generate_left(depth + 1, delta - elements[depth], bitmask | (1 << (depth - 1)), limit);
}

void generate_right(int depth, long long delta, int bitmask, int total) {
    if (depth > total) {
        // A combined delta of zero means sum(A) == sum(B) for the merged masks.
        auto it = delta_registry.find(-delta);
        if (it != delta_registry.end()) {
            for (int prev_mask : it->second) {
                is_valid[prev_mask | bitmask] = true;
            }
        }
        return;
    }
    generate_right(depth + 1, delta, bitmask, total);
    generate_right(depth + 1, delta + elements[depth], bitmask | (1 << (depth - 1)), total);
    generate_right(depth + 1, delta - elements[depth], bitmask | (1 << (depth - 1)), total);
}

int main() {
    int n;
    cin >> n;
    for (int i = 1; i <= n; ++i) cin >> elements[i];
    int mid = n / 2;
    generate_left(1, 0, 0, mid);
    generate_right(mid + 1, 0, 0, n);
    int valid_count = 0;
    // Mask 0 (the empty subset) also satisfies delta == 0 but is excluded.
    for (int i = 1; i < (1 << n); ++i) {
        if (is_valid[i]) valid_count++;
    }
    cout << valid_count << endl;
    return 0;
}
Segment Tree Modulo Optimization
An array $A$ of length $N$ undergoes $M$ operations: point updates setting $A[idx] = val$, range modulus operations applying $A[i] \bmod k$ for $i \in [l, r]$, and range maximum queries. The challenge lies in executing range modulus efficiently without degrading to $O(N)$ per operation.
Analytical Approach
A segment tree maintains the maximum value within each interval. A modulus operation $\bmod\ k$ only affects elements that are at least $k$. If the maximum value in a subtree is less than $k$, the operation has no effect and can be pruned entirely. When descending into a segment where the maximum is at least $k$, we propagate the modulus down to the leaf nodes, updating their values and aggregating the new maximums upwards. Since $x \bmod k < x / 2$ whenever $k \le x$, each element can be effectively reduced only $O(\log \max A)$ times, so the pruning keeps the amortized cost low even though a single operation may descend to many leaves.
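The halving fact underpinning the bound: when $k \le x$, either $k \le x/2$, so $x \bmod k < k \le x/2$, or $k > x/2$, so $x \bmod k = x - k < x/2$. A small sketch (function name and values are illustrative) that mirrors the tree's max-based prune on a single element:

```cpp
#include <vector>

// Counts how many of the given modulus operations actually change x,
// skipping those the segment tree would prune (x < k). Each applied
// modulus at least halves x, so the count is O(log x).
int effective_mod_count(long long x, const std::vector<long long>& mods) {
    int applied = 0;
    for (long long k : mods) {
        if (x >= k) {  // the tree descends only when the subtree max is >= k
            x %= k;
            ++applied;
        }
    }
    return applied;
}
```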
Implementation
#include <iostream>
#include <algorithm>
#include <climits>
using namespace std;

const int MAX_SIZE = 100005;

long long tree_nodes[4 * MAX_SIZE];
long long arr_data[MAX_SIZE];

void construct(int node, int left, int right) {
    if (left == right) {
        tree_nodes[node] = arr_data[left];
        return;
    }
    int mid = (left + right) / 2;
    construct(2 * node, left, mid);
    construct(2 * node + 1, mid + 1, right);
    tree_nodes[node] = max(tree_nodes[2 * node], tree_nodes[2 * node + 1]);
}

void modify_point(int node, int left, int right, int idx, long long val) {
    if (left == right) {
        tree_nodes[node] = val;
        return;
    }
    int mid = (left + right) / 2;
    if (idx <= mid) modify_point(2 * node, left, mid, idx, val);
    else modify_point(2 * node + 1, mid + 1, right, idx, val);
    tree_nodes[node] = max(tree_nodes[2 * node], tree_nodes[2 * node + 1]);
}

void apply_modulo(int node, int left, int right, int q_left, int q_right, long long mod_val) {
    // Prune: no element in this subtree reaches mod_val, so the modulus is a no-op.
    if (tree_nodes[node] < mod_val) return;
    if (left == right) {
        tree_nodes[node] %= mod_val;
        return;
    }
    int mid = (left + right) / 2;
    if (q_left <= mid) apply_modulo(2 * node, left, mid, q_left, q_right, mod_val);
    if (q_right > mid) apply_modulo(2 * node + 1, mid + 1, right, q_left, q_right, mod_val);
    tree_nodes[node] = max(tree_nodes[2 * node], tree_nodes[2 * node + 1]);
}

long long query_max(int node, int left, int right, int q_left, int q_right) {
    if (q_left <= left && right <= q_right) return tree_nodes[node];
    int mid = (left + right) / 2;
    long long res = LLONG_MIN;  // safe identity even for negative array values
    if (q_left <= mid) res = max(res, query_max(2 * node, left, mid, q_left, q_right));
    if (q_right > mid) res = max(res, query_max(2 * node + 1, mid + 1, right, q_left, q_right));
    return res;
}

int main() {
    // Driver sketch: the statement fixes no input format, so the operation
    // encoding below (1 = point set, 2 = range modulo, 3 = range max) is assumed.
    int n, m;
    cin >> n >> m;
    for (int i = 1; i <= n; ++i) cin >> arr_data[i];
    construct(1, 1, n);
    while (m--) {
        int op;
        cin >> op;
        if (op == 1) {
            int idx; long long val;
            cin >> idx >> val;
            modify_point(1, 1, n, idx, val);
        } else if (op == 2) {
            int l, r; long long k;
            cin >> l >> r >> k;
            apply_modulo(1, 1, n, l, r, k);
        } else {
            int l, r;
            cin >> l >> r;
            cout << query_max(1, 1, n, l, r) << '\n';
        }
    }
    return 0;
}
Permutation Conjugacy and Cycle Decomposition
Given two permutations $P$ and $Q$ of size $N$, determine if a permutation $H$ exists such that $H \times P \times H^{-1} = Q$. If such a mapping exists, output any valid $H$; otherwise, indicate impossibility.
Analytical Approach
The equation implies that $P$ and $Q$ must share the same cycle type, i.e., the same multiset of cycle lengths. If $P$ contains a cycle of length $L$, $Q$ must contain a corresponding cycle of length $L$. The algorithm proceeds by extracting all cycles from $Q$ and grouping them by length into buckets. Subsequently, it decomposes $P$ into cycles. For each cycle in $P$ of length $L$, it retrieves an arbitrary unused cycle of length $L$ from the $Q$-buckets and constructs a mapping $H$ that links the respective cyclic elements in order. If any cycle in $P$ cannot find a matching length in $Q$, the conjugacy permutation $H$ does not exist.
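Since $H \times P \times H^{-1} = Q$ is equivalent to $H(P(x)) = Q(H(x))$ for every $x$, any candidate $H$ can be verified without computing $H^{-1}$. A minimal 0-indexed checker (the function name is illustrative):

```cpp
#include <vector>
#include <cstddef>

// Returns true iff H conjugates P into Q, i.e. H(P(x)) == Q(H(x)) for all x.
bool conjugates(const std::vector<int>& h,
                const std::vector<int>& p,
                const std::vector<int>& q) {
    for (std::size_t x = 0; x < p.size(); ++x)
        if (h[p[x]] != q[h[x]])
            return false;
    return true;
}
```

For example, with $P = (0\,1\,2)$ and $Q = (0\,2\,1)$, mapping $P$'s cycle order onto $Q$'s gives $H = [0, 2, 1]$, which the checker accepts.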
Implementation
#include <iostream>
#include <vector>
#include <queue>
#include <algorithm>
using namespace std;

const int MAX_N = 100005;

int perm_p[MAX_N], perm_q[MAX_N], mapping_h[MAX_N];
bool explored[MAX_N];
queue<int> cycle_buckets[MAX_N];  // cycle_buckets[len]: start vertices of Q-cycles of that length

int main() {
    int n;
    cin >> n;
    // Convert the 1-based input values to 0-based indices.
    for (int i = 0; i < n; ++i) { cin >> perm_p[i]; perm_p[i]--; }
    for (int i = 0; i < n; ++i) { cin >> perm_q[i]; perm_q[i]--; }
    // Bucket Q's cycles by length.
    for (int i = 0; i < n; ++i) {
        if (!explored[i]) {
            int curr = i, len = 0;
            while (!explored[curr]) {
                explored[curr] = true;
                curr = perm_q[curr];
                len++;
            }
            cycle_buckets[len].push(i);
        }
    }
    fill(explored, explored + n, false);
    bool feasible = true;
    for (int i = 0; i < n && feasible; ++i) {
        if (!explored[i]) {
            vector<int> p_cycle;
            int curr = i, len = 0;
            while (!explored[curr]) {
                explored[curr] = true;
                p_cycle.push_back(curr);
                curr = perm_p[curr];
                len++;
            }
            if (cycle_buckets[len].empty()) {
                feasible = false;  // no Q-cycle of matching length remains
            } else {
                int q_start = cycle_buckets[len].front();
                cycle_buckets[len].pop();
                // Align the cycles: H maps the k-th element of the P-cycle to
                // the k-th element of the Q-cycle, giving H(P(x)) = Q(H(x)).
                int q_curr = q_start;
                for (int p_elem : p_cycle) {
                    mapping_h[p_elem] = q_curr;
                    q_curr = perm_q[q_curr];
                }
            }
        }
    }
    if (!feasible) {
        cout << "NO" << endl;
    } else {
        cout << "YES" << endl;
        for (int i = 0; i < n; ++i) cout << mapping_h[i] + 1 << " ";
        cout << endl;
    }
    return 0;
}
Coprime Base Enumeration for LCM Constraints
For parameters $Limit, X, Y$, compute the aggregate sum of $|A \times X - B \times Y|$ across all ordered positive integer pairs $(A, B)$ satisfying $LCM(A, B) \le Limit$. The Least Common Multiple constraint necessitates an efficient divisor-based decomposition rather than naive iteration.
Analytical Approach
Let $G = GCD(A, B)$ and represent the coprime bases as $A = G \times i$, $B = G \times j$ where $GCD(i, j) = 1$. The LCM constraint translates to $i \times j \times G \le Limit$. By iterating over all coprime pairs $(i, j)$ with $i \le j$ and $i \times j \le Limit$, we compute a fundamental weight $W_{i,j} = |i \times X - j \times Y| + [i \ne j] \times |i \times Y - j \times X|$, where the second term accounts for the mirrored ordered pair $(j, i)$. For each base pair, the valid scaling factors $G$ range up to $\lfloor Limit / (i \times j) \rfloor$, and the weight of the pair $(G \times i, G \times j)$ scales linearly with $G$, so the total contribution is $W_{i,j} \times \sum_{G=1}^{\lfloor Limit / (i \times j) \rfloor} G$. The enumeration visits roughly $\sum_i Limit / i = O(Limit \log Limit)$ candidate pairs, each with a GCD check, plus a closed-form triangular sum per coprime pair.
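For validating the decomposition on small parameters, a direct enumeration over all ordered pairs is straightforward (usable only for tiny $Limit$; the names are illustrative):

```cpp
#include <cstdlib>

long long gcd_ll(long long a, long long b) { return b == 0 ? a : gcd_ll(b, a % b); }

// Brute-force reference: iterate every ordered pair (A, B) with
// LCM(A, B) <= limit and accumulate |A*X - B*Y| directly.
long long brute_lcm_sum(long long limit, long long x, long long y) {
    long long total = 0;
    for (long long a = 1; a <= limit; ++a)
        for (long long b = 1; b <= limit; ++b)
            if (a / gcd_ll(a, b) * b <= limit)  // lcm(a, b), divided first to avoid overflow
                total += std::llabs(a * x - b * y);
    return total;
}
```

For instance, $Limit = 2$, $X = Y = 1$ admits the pairs $(1,1), (1,2), (2,1), (2,2)$, contributing $0 + 1 + 1 + 0 = 2$.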
Implementation
#include <iostream>
#include <cstdlib>
using namespace std;

long long compute_gcd(long long a, long long b) {
    return b == 0 ? a : compute_gcd(b, a % b);
}

// Sum of 1..n, accumulating the linear scaling factors G in closed form.
long long triangular_sum(long long n) {
    return n * (n + 1) / 2;
}

int main() {
    long long limit, x_coeff, y_coeff;
    cin >> limit >> x_coeff >> y_coeff;
    long long total_sum = 0;
    for (long long i = 1; i <= limit; ++i) {
        long long max_j = limit / i;
        for (long long j = i; j <= max_j; ++j) {
            if (compute_gcd(i, j) == 1) {
                // Weight of the base pair (i, j) plus its mirror (j, i) when distinct.
                long long core_diff = llabs(i * x_coeff - j * y_coeff);
                if (i != j) core_diff += llabs(i * y_coeff - j * x_coeff);
                long long base_lcm = i * j;
                long long max_scale = limit / base_lcm;  // admissible values of G
                total_sum += core_diff * triangular_sum(max_scale);
            }
        }
    }
    cout << total_sum << endl;
    return 0;
}