Problem A: Reachable Sums via Step Sizes
Tags: Dynamic Programming, Knapsack Variation
Approach
Given a maximum limit n and two step values a and b, the objective is to determine the largest integer not exceeding n that can be written as a non-negative combination i*a + j*b. Since the value range is bounded, a boolean dynamic programming array efficiently tracks reachable sums. Initialize reachable[0] as true, then sweep upward, marking reachable[v] as true whenever reachable[v - a] or reachable[v - b] is already true. The implementation below does this in two passes, one per step size; this is equivalent because each pass sweeps in increasing order, so marks can chain within a pass. After populating the state table, find the highest reachable value. The result is the difference between n and this maximum reachable sum. The time complexity is linear in the limit, O(n).
Implementation
#include <iostream>
#include <vector>
#include <algorithm>
int main() {
    int limit, step_x, step_y;
    if (!(std::cin >> limit >> step_x >> step_y)) return 0;

    std::vector<bool> can_reach(limit + 1, false);
    can_reach[0] = true;
    // First pass: mark every multiple of step_x.
    for (int val = step_x; val <= limit; ++val) {
        if (can_reach[val - step_x]) can_reach[val] = true;
    }
    // Second pass: from each reachable sum, add any number of step_y's.
    // Sweeping in increasing order lets marks chain within the pass.
    for (int val = step_y; val <= limit; ++val) {
        if (can_reach[val - step_y]) can_reach[val] = true;
    }
    int max_val = 0;
    for (int i = 1; i <= limit; ++i) {
        if (can_reach[i]) max_val = i;
    }
    std::cout << (limit - max_val) << "\n";
    return 0;
}
Problem B: Pattern Matching with Character Filtering
Tags: String Processing, KMP Algorithm
Approach
The task requires locating a pattern string within a source string after stripping a fixed set of special characters. First, store the forbidden characters in a hash set. Traverse the source string, appending only allowed characters to a filtered buffer. Once sanitized, apply the Knuth-Morris-Pratt (KMP) algorithm to search for the pattern. Precompute the longest-prefix-suffix (LPS) table for the pattern so that, on a mismatch, the pattern pointer can fall back without rescanning the text; the fallbacks are amortized constant time. Traverse the filtered text, advancing both pointers on a match and consulting the LPS table on a mismatch. If the pattern pointer reaches the pattern length, a match exists. The overall complexity is linear in the combined string lengths, O(|text| + |pattern|).
Implementation
#include <iostream>
#include <string>
#include <vector>
#include <unordered_set>
int main() {
    int len_a, len_b;
    std::cin >> len_a >> len_b;  // declared lengths of text and pattern
    std::string raw_text, pattern;
    std::cin >> raw_text >> pattern;

    std::unordered_set<char> special = {
        '0','1','2','3','4','5','6','7','8','9',
        '+','-','*','|',',','.','~','!','@','#',
        '$','%','^','&','(',')','[',']','{','}',
        '"',';',':','?','<','>','\\','/'
    };

    // Keep only the characters that are not in the forbidden set.
    std::string filtered;
    filtered.reserve(len_a);
    for (char c : raw_text) {
        if (special.find(c) == special.end()) {
            filtered.push_back(c);
        }
    }

    int m = pattern.length();
    int n = filtered.length();
    if (m == 0 || n < m) {
        std::cout << "NO\n";
        return 0;
    }

    // LPS table: lps[i] = length of the longest proper prefix of
    // pattern[0..i] that is also a suffix of it.
    std::vector<int> lps(m, 0);
    for (int i = 1, len = 0; i < m; ) {
        if (pattern[i] == pattern[len]) {
            lps[i++] = ++len;
        } else if (len > 0) {
            len = lps[len - 1];
        } else {
            lps[i++] = 0;
        }
    }

    // Standard KMP scan over the filtered text.
    for (int i = 0, j = 0; i < n; ) {
        if (filtered[i] == pattern[j]) {
            i++; j++;
        }
        if (j == m) {
            std::cout << "YES\n";
            return 0;
        } else if (i < n && filtered[i] != pattern[j]) {
            if (j > 0) j = lps[j - 1];
            else i++;
        }
    }
    std::cout << "NO\n";
    return 0;
}
Problem C: Array-Backed Doubly Linked List Operations
Tags: Data Structures, Simulation
Approach
This problem simulates dynamic neighbor queries and node removals in a sequence. A traditional pointer-based linked list introduces allocation overhead, so an array-backed doubly linked structure is a simpler fit. Maintain two arrays, prev_node and next_node, where indices serve as node identifiers. Initialize them to reflect a linear sequence from 1 to n. For a removal, update the neighbors' pointers to bypass the target node, unlinking it in O(1) time. For a query, read the prev_node array directly. Boundary checks ensure pointers stay within the valid range. With k operations, the simulation runs in O(k) total time.
Implementation
#include <iostream>
#include <vector>
int main() {
    int n, queries;
    std::cin >> n >> queries;

    // Indices 0 and n + 1 act as "before the head" / "after the tail".
    std::vector<int> prev_node(n + 2), next_node(n + 2);
    for (int i = 1; i <= n; ++i) {
        prev_node[i] = i - 1;
        next_node[i] = i + 1;
    }
    while (queries--) {
        int type, target;
        std::cin >> type >> target;
        if (type == 2) {
            // Query: predecessor of target (0 if none).
            std::cout << prev_node[target] << "\n";
        } else {
            // Removal: splice target out by linking its neighbors together.
            int left = prev_node[target];
            int right = next_node[target];
            if (left >= 1) next_node[left] = right;
            if (right <= n) prev_node[right] = left;
        }
    }
    return 0;
}
Problem D: Impartial Game Analysis with Modulo Invariants
Tags: Game Theory, Mathematical Induction
Approach
Two players alternately remove stones from two piles; a player who cannot make a valid move loses. Analyzing terminal states shows that (1, 1) is a losing position for the player to move. Any position containing a pile of size 1 or 2, other than (1, 1), is winning for the player to move. For larger piles, the game exhibits a periodic invariant modulo 3. When the piles are unequal, the second player can mirror moves so that both piles shrink by 3 each round, making the outcome depend on min(a, b) % 3: if the minimum pile is divisible by 3, the second player wins; otherwise, the first player forces a win. When the piles are equal, the symmetry breaks differently: the second player wins unless the pile size is 2 modulo 3. This classification gives an O(1) answer per test case.
Implementation
#include <iostream>
#include <algorithm>
void solve() {
    long long pile1, pile2;
    std::cin >> pile1 >> pile2;
    if (pile1 == 1 && pile2 == 1) {
        // The unique terminal losing position for the player to move.
        std::cout << "niumei\n";
        return;
    }
    if (pile1 <= 2 || pile2 <= 2) {
        std::cout << "niuniu\n";
        return;
    }
    long long smaller = std::min(pile1, pile2);
    if (pile1 != pile2) {
        // Unequal piles: outcome depends on min(a, b) mod 3.
        std::cout << (smaller % 3 == 0 ? "niumei\n" : "niuniu\n");
    } else {
        // Equal piles: first player wins only when the size is 2 mod 3.
        std::cout << (pile1 % 3 == 2 ? "niuniu\n" : "niumei\n");
    }
}
int main() {
    int t;
    std::cin >> t;
    while (t--) solve();
    return 0;
}
Problem E: Greedy Sequence Construction for Target Pair Counts
Tags: Constructive Algorithms, Greedy Strategy
Approach
The goal is to construct a permutation of length n containing exactly k valid pairs, where a pair (i, j) with i < j is valid if arr[i] - arr[j] is a power of two. A value realizes its full contribution when it precedes every smaller value: define contribution[x] as the number of values y < x where x - y is a power of two, which equals the count of powers of two at most x - 1. If k exceeds the sum of all contributions, no answer exists. Otherwise, iterate from n down to 1. If contribution[val] fits within the remaining k, append val to a descending segment and subtract its contribution from k; otherwise, append it to an ascending segment, where it contributes zero additional pairs. Because contribution[x] increases by at most one between consecutive values, this greedy partition always lands on exactly k. Output the descending segment followed by the skipped values in increasing order. Complexity is O(n).
Implementation
#include <iostream>
#include <vector>
#include <algorithm>
int main() {
    int n;
    long long k;
    std::cin >> n >> k;

    // contribution[x] = number of y < x with x - y a power of two.
    std::vector<int> contribution(n + 1, 0);
    for (int i = 1; i <= n; i <<= 1) {
        if (i + 1 <= n) contribution[i + 1] = 1;
    }
    for (int i = 1; i <= n; ++i) {
        contribution[i] += contribution[i - 1];
    }

    long long max_pairs = 0;
    for (int i = 1; i <= n; ++i) max_pairs += contribution[i];
    if (k > max_pairs) {
        std::cout << "-1\n";
        return 0;
    }

    // Greedily decide, from n down to 1, whether each value's
    // contribution still fits into the remaining k.
    std::vector<int> descending_seq, ascending_seq;
    for (int val = n; val >= 1; --val) {
        if (k >= contribution[val]) {
            descending_seq.push_back(val);
            k -= contribution[val];
        } else {
            ascending_seq.push_back(val);
        }
    }
    for (int x : descending_seq) std::cout << x << " ";
    // Skipped values were collected in decreasing order; print them reversed.
    for (auto it = ascending_seq.rbegin(); it != ascending_seq.rend(); ++it) {
        std::cout << *it << " ";
    }
    std::cout << "\n";
    return 0;
}
Problem F: Parallel Task Scheduling on Hierarchical Structures
Tags: Tree DP, Greedy Scheduling
Approach
Each node in a tree represents a task with an execution cost, and child tasks must complete before their parent begins. Two processors are available, so tasks from independent subtrees can run in parallel, but a single processor cannot handle two tasks simultaneously. This models a two-processor scheduling problem on a dependency tree. Perform a post-order traversal to compute two metrics per node: subtree_weight (sum of all costs in the subtree) and completion_time (minimum time to finish the subtree). For a given node, the parallel execution of its children is bottlenecked by either the slowest child subtree or half the total child weight rounded up, whichever is larger. The node's completion time equals this bottleneck plus its own cost. This aggregation yields optimal parallel utilization, and the algorithm runs in O(n) time.
Implementation
#include <iostream>
#include <vector>
#include <algorithm>
const int MAXN = 1000005;
std::vector<int> children[MAXN];
long long node_cost[MAXN];
long long subtree_weight[MAXN];
long long completion_time[MAXN];
void compute_schedule(int u) {
    long long max_child_time = 0;
    long long total_child_weight = 0;
    for (int v : children[u]) {
        compute_schedule(v);  // note: recursion depth can reach n on a chain
        total_child_weight += subtree_weight[v];
        max_child_time = std::max(max_child_time, completion_time[v]);
    }
    subtree_weight[u] = total_child_weight + node_cost[u];
    // Children finish no earlier than the slowest child subtree and no
    // earlier than half the total child work, rounded up.
    long long parallel_limit = (total_child_weight + 1) / 2;
    completion_time[u] = std::max(max_child_time, parallel_limit) + node_cost[u];
}
int main() {
    std::ios::sync_with_stdio(false);  // fast input for up to 10^6 nodes
    std::cin.tie(nullptr);
    int n;
    std::cin >> n;
    for (int i = 1; i <= n; ++i) {
        std::cin >> node_cost[i];
    }
    for (int i = 2; i <= n; ++i) {
        int parent;
        std::cin >> parent;
        children[parent].push_back(i);
    }
    compute_schedule(1);
    std::cout << completion_time[1] << "\n";
    return 0;
}