- CPU Utilization Threshold Alert
Monitor processor load and trigger a notification when usage exceeds a defined limit. This implementation calculates active CPU time by subtracting the idle percentage reported by top.
#!/usr/bin/env bash
ALERT_LIMIT=75
CURRENT_LOAD=$(top -bn1 | awk '/^%Cpu/ {print 100 - $8}')
if awk "BEGIN {exit !($CURRENT_LOAD > $ALERT_LIMIT)}"; then
    echo "Warning: CPU utilization has reached ${CURRENT_LOAD}%"
    # Integration point for notification webhooks or logging
fi
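The "integration point" comment above could be filled in with a webhook call. The sketch below is a minimal, hypothetical example: the payload shape and the WEBHOOK_URL variable are placeholders for whatever notification service you actually use.

```shell
# Hypothetical webhook notifier; adjust the JSON shape to your service.
build_alert_payload() {
    # %% emits a literal percent sign in printf format strings.
    printf '{"text":"CPU utilization alert: %s%%"}' "$1"
}

build_alert_payload 91.3

# Delivery would look something like (WEBHOOK_URL is an assumed variable):
# curl -s -X POST -H 'Content-Type: application/json' \
#      -d "$(build_alert_payload "$CURRENT_LOAD")" "$WEBHOOK_URL"
```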
- Automated Directory Archival
Create timestamped compressed archives of critical directories. The script validates the source path before execution and isolates the archive name generation for better readability.
#!/usr/bin/env bash
TARGET_DIR="/var/data/project"
ARCHIVE_DEST="/mnt/backups"
TIMESTAMP=$(date +"%F_%H-%M-%S")
ARCHIVE_NAME="proj_backup_${TIMESTAMP}.tar.gz"
if [[ -d "$TARGET_DIR" ]]; then
    tar -czf "${ARCHIVE_DEST}/${ARCHIVE_NAME}" -C "$(dirname "$TARGET_DIR")" "$(basename "$TARGET_DIR")"
    echo "Archive created: ${ARCHIVE_NAME}"
else
    echo "Error: Source directory missing." >&2
    exit 1
fi
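A cheap sanity check after creating an archive is to list its contents with tar -tzf, which fails if the file is truncated or corrupt. The self-contained sketch below demonstrates this against a throwaway directory rather than the real backup paths.

```shell
# Build a tiny archive in a temp dir, then verify it by listing its contents.
WORK=$(mktemp -d)
mkdir -p "$WORK/project"
echo "data" > "$WORK/project/file.txt"
tar -czf "$WORK/proj.tar.gz" -C "$WORK" "project"

# -t lists entries without extracting; a nonzero exit means a damaged archive.
LISTING=$(tar -tzf "$WORK/proj.tar.gz")
echo "$LISTING"

rm -rf "$WORK"
```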
- Log Retention Cleanup
Automatically purge outdated log files to conserve storage. The script uses find with execution batching for efficient file removal.
#!/usr/bin/env bash
LOG_PATH="/var/log/custom_app"
RETENTION_DAYS=14
find "$LOG_PATH" -type f -name "*.log" -mtime +${RETENTION_DAYS} -exec rm -f {} +
echo "Purged logs older than ${RETENTION_DAYS} days from ${LOG_PATH}"
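Before enabling a deletion job like this, it is worth previewing the matches: swapping -exec rm for -print shows exactly what would be removed. The sketch below uses a temp directory and GNU touch -d to simulate old and fresh log files.

```shell
# Simulate a log directory with one stale and one recent file.
LOG_DIR=$(mktemp -d)
touch -d "20 days ago" "$LOG_DIR/old.log"   # GNU touch: back-date the mtime
touch "$LOG_DIR/new.log"

# Same predicates as the cleanup job, but -print is a harmless dry run.
WOULD_REMOVE=$(find "$LOG_DIR" -type f -name "*.log" -mtime +14 -print)
echo "$WOULD_REMOVE"

rm -rf "$LOG_DIR"
```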
- Daemon Status Verification
Check the operational state of a critical service and attempt an automatic recovery if it has stopped. System logs are updated via logger for audit trails.
#!/usr/bin/env bash
DAEMON_NAME="postgresql"
if ! systemctl is-active --quiet "${DAEMON_NAME}"; then
    logger -t svc_monitor "Critical: ${DAEMON_NAME} is inactive. Attempting restart."
    systemctl restart "${DAEMON_NAME}"
fi
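A watchdog like this is only useful if it runs on a schedule. One common approach is a crontab entry; the script path below is a hypothetical install location, not something the original specifies.

```shell
# crontab -e  (run the check every five minutes; path is an assumed example)
*/5 * * * * /usr/local/bin/daemon_check.sh
```

On systemd hosts, a timer unit is an alternative with better logging, but the cron line above is the smallest working setup.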
- Synchronized Multi-Node Deployment
Distribute application builds across a cluster of servers using rsync. The loop iterates through a predefined node array and mirrors the source directory while removing stale files on the remote targets.
#!/usr/bin/env bash
NODE_LIST=("web-01" "web-02" "web-03")
SRC_PATH="./build/dist/"
REMOTE_PATH="/opt/webapp/current/"
for HOST in "${NODE_LIST[@]}"; do
    echo "Syncing assets to ${HOST}..."
    rsync -az --delete "$SRC_PATH" "admin@${HOST}:${REMOTE_PATH}"
done
- Storage Capacity Warning
Parse filesystem usage statistics and flag partitions that breach a capacity threshold. The -P flag forces POSIX-compliant df output, which keeps each filesystem on a single line for reliable column extraction.
#!/usr/bin/env bash
CAPACITY_LIMIT=85
df -P | tail -n +2 | while read -r fs blocks used avail percent mount; do
    usage_val=${percent%\%}
    if (( usage_val > CAPACITY_LIMIT )); then
        echo "Storage warning: ${mount} is ${usage_val}% full"
    fi
done
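The same threshold logic can be collapsed into a single awk pass, which avoids the subshell that a piped while loop creates. The sketch below feeds awk a captured sample of POSIX df output so the parsing is visible end to end.

```shell
CAPACITY_LIMIT=85

# Sample POSIX (-P) df output: one filesystem per line, capacity in column 5.
sample_df='Filesystem 1024-blocks Used Available Capacity Mounted
/dev/sda1 100 90 10 90% /
/dev/sdb1 100 40 60 40% /data'

WARNINGS=$(printf '%s\n' "$sample_df" | tail -n +2 |
    awk -v limit="$CAPACITY_LIMIT" '{
        gsub(/%/, "", $5)                 # strip the % so $5 compares numerically
        if ($5 + 0 > limit) print "Storage warning: " $6 " is " $5 "% full"
    }')
echo "$WARNINGS"
```

Against live data, replace the printf with `df -P`.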
- Batch Service Lifecycle Management
Control multiple related daemons simultaneously. The operation (start, stop, or restart) is passed as a command-line argument, defaulting to restart if omitted.
#!/usr/bin/env bash
SVC_GROUP=("redis" "memcached" "rabbitmq-server")
OPERATION="${1:-restart}"
for SVC in "${SVC_GROUP[@]}"; do
    echo "Executing ${OPERATION} on ${SVC}..."
    systemctl "${OPERATION}" "${SVC}" 2>/dev/null || echo "Failed to control ${SVC}"
done
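Since the operation comes straight from $1 and is handed to systemctl, validating it against an allow-list prevents typos (or hostile input) from reaching systemctl. This guard is an addition, not part of the original script; the accepted verb list is an assumption you can extend.

```shell
# Reject anything that is not an expected systemctl verb.
validate_op() {
    case "$1" in
        start|stop|restart|status) return 0 ;;
        *) echo "Unsupported operation: $1" >&2; return 64 ;;
    esac
}

OPERATION="${1:-restart}"
validate_op "$OPERATION" && echo "Operation accepted: ${OPERATION}"
```

Place the guard before the service loop so an invalid verb aborts the whole run.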
- Remote Credential Rotation
Securely update passwords across multiple hosts using an inventory file and expect for interactive SSH handling. New credentials are generated with openssl and logged for future reference.
#!/usr/bin/env bash
HOST_FILE="servers.csv" # Format: ip,user,old_pass,port
CRED_LOG="updated_creds.csv"
# NOTE: chpasswd requires root privileges on the remote host.
while IFS=',' read -r node_addr login_name current_pwd ssh_port; do
    [[ "$node_addr" =~ ^#.*$ || -z "$node_addr" ]] && continue
    fresh_pwd=$(openssl rand -base64 12)
    /usr/bin/expect <<EOF
set timeout 10
spawn ssh -o StrictHostKeyChecking=no -p $ssh_port ${login_name}@${node_addr} "echo '${login_name}:${fresh_pwd}' | chpasswd"
expect {
    "password:" { send "${current_pwd}\r"; exp_continue }
    eof
}
EOF
    echo "${node_addr},${login_name},${fresh_pwd}" >> "$CRED_LOG"
done < "$HOST_FILE"
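Because the credential log ends up holding plaintext passwords, it should be created with owner-only permissions before the first write, not chmod'ed after the fact. This hardening step is an addition to the script above.

```shell
CRED_LOG="updated_creds.csv"

# Create the log (if absent) and lock it to the owner before any password lands in it.
touch "$CRED_LOG"
chmod 600 "$CRED_LOG"

stat -c '%a' "$CRED_LOG"   # 600  (GNU stat)
```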
- Tabular Data Transformation
Extract and filter specific columns from a CSV dataset. This implementation uses awk to skip headers, apply numeric conditions, and output a streamlined report.
#!/usr/bin/env bash
INPUT_CSV="raw_metrics.csv"
OUTPUT_CSV="filtered_report.csv"
awk -F',' 'NR>1 && $4>100 {print $1","$4}' "$INPUT_CSV" > "$OUTPUT_CSV"
echo "Transformation complete. Output written to ${OUTPUT_CSV}"
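To make the awk predicates concrete: NR>1 skips the header row, and $4>100 keeps only rows whose fourth column exceeds 100. The sketch below runs the identical one-liner against a small inline dataset (the column names are illustrative, not from the original).

```shell
# Inline sample CSV: header plus two data rows.
METRICS=$(mktemp)
printf '%s\n' 'host,cpu,mem,reqs' 'web-01,20,512,250' 'web-02,15,256,80' > "$METRICS"

# Keep column 1 and column 4 for rows where column 4 exceeds 100.
RESULT=$(awk -F',' 'NR>1 && $4>100 {print $1","$4}' "$METRICS")
echo "$RESULT"

rm -f "$METRICS"
```

Only web-01 survives the filter, since 250 > 100 but 80 is not.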
- API Endpoint Health Validation
Verify the availability of a web service by inspecting the HTTP response code. The script enforces a connection timeout and evaluates success based on standard 2xx status ranges.
#!/usr/bin/env bash
ENDPOINT="https://api.internal.service/health"
TIMEOUT_SEC=5
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" --max-time $TIMEOUT_SEC "$ENDPOINT")
if [[ "$HTTP_CODE" -ge 200 && "$HTTP_CODE" -lt 300 ]]; then
    echo "Health check successful: Received HTTP ${HTTP_CODE}"
else
    echo "Health check failed: Received HTTP ${HTTP_CODE}" >&2
    exit 1
fi
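Health endpoints fail transiently, so a single probe can be noisy. A small generic retry wrapper, sketched below, can be put in front of the curl check; the attempt count and the one-second pause are arbitrary choices, not values from the original.

```shell
# retry <attempts> <command...>: rerun until success or attempts are exhausted.
retry() {
    local attempts="$1"; shift
    local n
    for ((n = 1; n <= attempts; n++)); do
        "$@" && return 0
        sleep 1   # fixed back-off; swap for exponential if needed
    done
    return 1
}

# Demo with a trivially succeeding command; a real probe might be:
# retry 3 curl -sf --max-time 5 "$ENDPOINT"
retry 3 true && echo "endpoint reachable"
```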
- Routine System Inspection
Generate a consolidated diagnostic report covering storage, memory, and recent kernel errors. Output is redirected to a timestamped file for historical tracking.
#!/usr/bin/env bash
REPORT_FILE="/tmp/sys_inspection_$(date +%F).txt"
{
    echo "=== System Inspection Report ==="
    echo "Generated: $(date)"
    echo -e "\n[Storage Usage]"
    df -h | grep -E '^/dev/'
    echo -e "\n[Memory Overview]"
    free -m
    echo -e "\n[Recent Kernel Errors]"
    journalctl -k --priority=err --since "24 hours ago" --no-pager | tail -n 5
} > "$REPORT_FILE"
cat "$REPORT_FILE"
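Daily reports in /tmp accumulate, so a pruning step pairs naturally with the generator. The sketch below keeps the two most recent reports by name (the date-stamped filenames sort lexically); it runs against a temp directory here, and the "keep 2" count is an arbitrary example.

```shell
# Simulate three dated reports, then prune all but the newest two.
REPORT_DIR=$(mktemp -d)
for d in 01 02 03; do
    touch "$REPORT_DIR/sys_inspection_2024-01-$d.txt"
done

# sort -r puts newest names first; tail -n +3 selects everything past the top two.
ls -1 "$REPORT_DIR" | sort -r | tail -n +3 |
    sed "s|^|$REPORT_DIR/|" | xargs -r rm -f

REMAINING=$(ls -1 "$REPORT_DIR")
echo "$REMAINING"

rm -rf "$REPORT_DIR"
```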
- Performance Baseline Capture
Record a snapshot of system resource consumption for capacity planning. The script aggregates load averages, virtual memory statistics, and top resource-consuming processes into a structured log.
#!/usr/bin/env bash
SNAPSHOT_DIR="/var/log/perf_baselines"
mkdir -p "$SNAPSHOT_DIR"
OUT_FILE="${SNAPSHOT_DIR}/baseline_$(date +%Y%m%d_%H%M).log"
{
    echo "--- CPU & Load ---"
    uptime
    vmstat 1 3
    echo -e "\n--- Memory Allocation ---"
    free -h
    echo -e "\n--- Top Resource Consumers ---"
    ps -eo pid,user,%cpu,%mem,comm --sort=-%cpu | head -n 11
} > "$OUT_FILE"
echo "Performance snapshot saved to ${OUT_FILE}"
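When these baselines feed capacity planning, the load averages usually need to be extracted back out of the uptime line. The sketch below parses the 1-minute load average from a captured sample line (the sample values are illustrative).

```shell
# A captured uptime line; field positions vary, so split on the label instead.
sample=' 10:15:01 up 12 days,  3:02,  2 users,  load average: 0.42, 0.35, 0.30'

# Everything after "load average: " is a comma-separated triple; take the first.
LOAD1=$(printf '%s\n' "$sample" | awk -F'load average: ' '{split($2, a, ", "); print a[1]}')
echo "$LOAD1"
```

Splitting on the "load average: " label is more robust than counting whitespace fields, since the uptime prefix changes shape with system uptime and user count.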