Data integrity is a critical component of server administration. Relying on manual intervention often leads to inconsistent recovery points. To mitigate this risk, an automated solution utilizing a Bash script provides a reliable mechanism for performing consistent dumps, compressing the output, and securely replicating archives to a remote server.
Implementation Script
The following script connects to the MySQL instance, enumerates the user-defined schemas while excluding internal system databases, and pipes mysqldump through gzip for efficient storage. It concludes by transferring the artifacts to a designated off-site location via SCP.
#!/bin/bash
# Fail the dump pipeline if mysqldump errors, not only if gzip does
set -o pipefail
# Database Configuration
SQL_USER="admin_user"
SQL_PASS="secure_password"
# Local Storage Paths
LOCAL_STORAGE="/var/backups/mysql"
LOG_FILE="/var/log/db_backup_$(date +%Y%m%d).log"
# Remote Replication Settings
REMOTE_USER="sync_user"
REMOTE_HOST="backup.example.com"
REMOTE_PATH="/archive/mysql_dumps"
# Network Timeout (seconds)
TIMEOUT_LIMIT=15
# Timestamp Generation
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
# Initialize Environment
mkdir -p "$LOCAL_STORAGE"
mkdir -p "$(dirname "$LOG_FILE")"
# Identify User Databases (Excluding System Schemas)
DB_LIST=$(mysql -u"$SQL_USER" -p"$SQL_PASS" -e "SHOW DATABASES;" | grep -Ev "^(Database|information_schema|performance_schema|mysql|sys)$")
# Array to hold successfully created archives
TRANSFER_QUEUE=()
# Iterative Backup Process
for SCHEMA in $DB_LIST; do
    ARCHIVE_PATH="${LOCAL_STORAGE}/${SCHEMA}_${TIMESTAMP}.sql.gz"
    echo "[$TIMESTAMP] Initiating dump for schema: $SCHEMA" >> "$LOG_FILE"
    # Execute Dump with Single-Transaction for InnoDB consistency
    if mysqldump -u"$SQL_USER" -p"$SQL_PASS" --single-transaction --databases "$SCHEMA" | gzip > "$ARCHIVE_PATH"; then
        echo "[$TIMESTAMP] Compression successful: $ARCHIVE_PATH" >> "$LOG_FILE"
        TRANSFER_QUEUE+=("$ARCHIVE_PATH")
    else
        echo "[$TIMESTAMP] Critical error backing up $SCHEMA. Proceeding to next schema." >> "$LOG_FILE"
    fi
done
# Remote Synchronization Logic
if [[ -n "$REMOTE_HOST" ]] && [[ ${#TRANSFER_QUEUE[@]} -gt 0 ]]; then
    # Ensure Remote Directory Exists
    if ssh -o ConnectTimeout="$TIMEOUT_LIMIT" "$REMOTE_USER@$REMOTE_HOST" "mkdir -p $REMOTE_PATH"; then
        echo "[$TIMESTAMP] Uploading ${#TRANSFER_QUEUE[@]} files to remote host..." >> "$LOG_FILE"
        # Batch Transfer via SCP; purge local copies only if the upload succeeds
        if scp -o ConnectTimeout="$TIMEOUT_LIMIT" "${TRANSFER_QUEUE[@]}" "$REMOTE_USER@$REMOTE_HOST:$REMOTE_PATH"; then
            echo "[$TIMESTAMP] Upload complete. Purging local archives..." >> "$LOG_FILE"
            # Clean up local files to save space
            for FILE in "${TRANSFER_QUEUE[@]}"; do
                rm -f "$FILE"
            done
            echo "[$TIMESTAMP] Local cleanup finished." >> "$LOG_FILE"
        else
            echo "[$TIMESTAMP] SCP transfer failed. Retaining local archives." >> "$LOG_FILE"
        fi
    else
        echo "[$TIMESTAMP] SSH connection failed. Aborting transfer." >> "$LOG_FILE"
    fi
fi
echo "[$TIMESTAMP] Backup cycle finalized." >> "$LOG_FILE"
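One caveat with the script as written: passing the password via -p on the command line can expose it to other local users through the process list. A hedged alternative is to store credentials in a MySQL option file readable only by the backup user; the filename and location below are illustrative, not part of the original script.

```shell
# Sketch: keep credentials out of the process list with a MySQL option
# file (path is illustrative; a location such as /etc/mysql/ is typical).
CRED_FILE="./backup_credentials.cnf"
cat > "$CRED_FILE" <<'EOF'
[client]
user=admin_user
password=secure_password
EOF
# Restrict the file to its owner so the password is not world-readable
chmod 600 "$CRED_FILE"
# The dump invocation would then become:
#   mysqldump --defaults-extra-file="$CRED_FILE" --single-transaction ...
```

Note that --defaults-extra-file must appear before any other option on the mysqldump command line.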
Configuring Passwordless SSH
To ensure the automation runs without interactive prompts, SSH public-key authentication must be established between the database server and the remote storage server.
Generating Key Pairs
On the database server, generate an RSA key pair if one does not already exist. Execute the following command and accept the default file locations:
ssh-keygen -t rsa -b 4096
This generates a private key (id_rsa) and a public key (id_rsa.pub).
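Because this setup is meant to run unattended, the key pair can also be generated non-interactively. The sketch below uses a scratch directory for illustration; real keys belong in ~/.ssh, and an empty passphrase is a deliberate trade-off for automation.

```shell
# Non-interactive key generation, as an automation job might perform it.
# KEY_DIR is a scratch location for illustration only.
KEY_DIR="./demo_ssh"
mkdir -p "$KEY_DIR"
chmod 700 "$KEY_DIR"
# -N "" sets an empty passphrase; -q suppresses the interactive prompts
ssh-keygen -t rsa -b 4096 -N "" -f "$KEY_DIR/id_rsa" -q
```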
Distributing the Public Key
Deploy the public key to the remote server by appending it to the authorized_keys file of the target user. The ssh-copy-id utility streamlines this process:
ssh-copy-id -i ~/.ssh/id_rsa.pub sync_user@backup.example.com
You will be prompted for the remote user's password once. Upon successful execution, future connections will not require a password.
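If ssh-copy-id is not available, the same result can be achieved manually: append the public key to the target account's authorized_keys and enforce strict permissions. The sketch below demonstrates those steps against a local stand-in directory; the key string and paths are placeholders, and on a real system the directory would be the remote user's ~/.ssh.

```shell
# What ssh-copy-id does on the remote side, demonstrated locally.
# PUB_KEY is placeholder key material; TARGET_SSH_DIR stands in for
# the remote user's ~/.ssh directory.
PUB_KEY="ssh-rsa AAAAB3NzaC1yc2Eexample sync_user@dbserver"
TARGET_SSH_DIR="./demo_remote_home/.ssh"
mkdir -p "$TARGET_SSH_DIR"
chmod 700 "$TARGET_SSH_DIR"
echo "$PUB_KEY" >> "$TARGET_SSH_DIR/authorized_keys"
chmod 600 "$TARGET_SSH_DIR/authorized_keys"
```

The chmod steps matter: sshd refuses key authentication when ~/.ssh or authorized_keys are group- or world-writable.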
Validation
Verify the configuration by attempting to log in to the remote server:
ssh sync_user@backup.example.com
If the shell grants access without a password prompt, the infrastructure is ready for automated script execution.
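For unattended use, a stricter check is to force batch mode so a missing or misconfigured key fails immediately rather than falling back to a password prompt that would hang the script. This check is a suggested addition, not part of the original setup; the host and user are those used in the script above.

```shell
# BatchMode=yes disables password prompts entirely: the connection either
# authenticates with the key or fails fast, which is what a cron job needs.
if ssh -o BatchMode=yes -o ConnectTimeout=5 sync_user@backup.example.com true 2>/dev/null; then
    KEY_AUTH_STATUS="ok"
else
    KEY_AUTH_STATUS="missing"
fi
echo "key auth check: $KEY_AUTH_STATUS"
```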