Backup & Recovery Guide
Complete guide for backing up and recovering the SBM CRM Platform.
Backup Strategy
Backup Types
- Database Backups - Daily full backups, with hourly point-in-time coverage via WAL archiving
- File Backups - Daily backups of uploaded files
- Configuration Backups - Weekly backups of configuration files
- Log Backups - Monthly archives of application logs
Backup Schedule
- Database: Daily at 2:00 AM
- Files: Daily at 3:00 AM
- Configuration: Weekly on Sunday at 1:00 AM
- Retention: 30 days for daily, 90 days for weekly
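This schedule maps to crontab entries like the following; the script paths for the file and configuration jobs are illustrative (this guide creates the database script at the path shown, the other two names are assumptions):

```
# Database: daily full backup at 2:00 AM
0 2 * * * /usr/local/bin/backup-sbmcrm-db.sh >> /var/log/sbmcrm/backup.log 2>&1
# Files: daily at 3:00 AM
0 3 * * * /usr/local/bin/backup-sbmcrm-files.sh >> /var/log/sbmcrm/backup.log 2>&1
# Configuration: weekly on Sunday at 1:00 AM
0 1 * * 0 /usr/local/bin/backup-sbmcrm-config.sh >> /var/log/sbmcrm/backup.log 2>&1
```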
Database Backups
Automated Backup Script
Create /usr/local/bin/backup-sbmcrm-db.sh and make it executable (chmod +x /usr/local/bin/backup-sbmcrm-db.sh):
#!/bin/bash
set -euo pipefail
# Configuration
BACKUP_DIR="/backups/sbmcrm/database"
DATE=$(date +%Y%m%d_%H%M%S)
DB_NAME="sbmcrm_production"
DB_USER="sbmcrm"
DB_HOST="localhost"
RETENTION_DAYS=30
# Create backup directory
mkdir -p "$BACKUP_DIR"
# Perform backup (custom format, -F c, is already compressed internally;
# the gzip step below adds little but keeps a uniform .gz naming scheme)
pg_dump -h "$DB_HOST" -U "$DB_USER" -F c -b -v -f "$BACKUP_DIR/sbmcrm_$DATE.dump" "$DB_NAME"
# Compress backup
gzip "$BACKUP_DIR/sbmcrm_$DATE.dump"
# Remove backups older than the retention window
find "$BACKUP_DIR" -name "sbmcrm_*.dump.gz" -mtime +"$RETENTION_DAYS" -delete
# Upload to S3 (optional)
aws s3 cp "$BACKUP_DIR/sbmcrm_$DATE.dump.gz" "s3://your-backup-bucket/database/"
echo "Backup completed: sbmcrm_$DATE.dump.gz"
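A checksum recorded at backup time makes later corruption detectable independently of pg_restore. A minimal sketch, assuming helper names of my own (not part of the platform):

```shell
#!/bin/bash
# record_checksum FILE: write a SHA-256 digest next to FILE
record_checksum() {
  sha256sum "$1" > "$1.sha256"
}
# verify_checksum FILE: fail if FILE no longer matches its recorded digest
verify_checksum() {
  sha256sum -c "$1.sha256"
}
```

The backup script above could call `record_checksum "$BACKUP_DIR/sbmcrm_$DATE.dump.gz"` as its final step.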
Set Up Cron Job
# Edit crontab
crontab -e
# Add daily backup at 2 AM
0 2 * * * /usr/local/bin/backup-sbmcrm-db.sh >> /var/log/sbmcrm/backup.log 2>&1
Manual Backup
# Full backup
pg_dump -h localhost -U sbmcrm -F c -b -v -f backup.dump sbmcrm_production
# Compressed backup
pg_dump -h localhost -U sbmcrm sbmcrm_production | gzip > backup.sql.gz
# Backup specific table
pg_dump -h localhost -U sbmcrm -t customers sbmcrm_production > customers_backup.sql
File Backups
Backup Uploaded Files
#!/bin/bash
set -euo pipefail
BACKUP_DIR="/backups/sbmcrm/files"
DATE=$(date +%Y%m%d_%H%M%S)
UPLOAD_DIR="/var/www/sbmcrm/uploads"
RETENTION_DAYS=30
mkdir -p "$BACKUP_DIR"
# Create tar archive
tar -czf "$BACKUP_DIR/uploads_$DATE.tar.gz" -C "$UPLOAD_DIR" .
# Remove old backups
find "$BACKUP_DIR" -name "uploads_*.tar.gz" -mtime +"$RETENTION_DAYS" -delete
# Upload to S3
aws s3 cp "$BACKUP_DIR/uploads_$DATE.tar.gz" "s3://your-backup-bucket/files/"
Configuration Backups
Backup Configuration Files
#!/bin/bash
set -euo pipefail
BACKUP_DIR="/backups/sbmcrm/config"
DATE=$(date +%Y%m%d)
CONFIG_DIR="/opt/sbmcrm/config"
mkdir -p "$BACKUP_DIR"
# Backup configuration
tar -czf "$BACKUP_DIR/config_$DATE.tar.gz" -C "$CONFIG_DIR" .
# Keep last 90 days
find "$BACKUP_DIR" -name "config_*.tar.gz" -mtime +90 -delete
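Before restoring configuration over a live system, it is safer to unpack the archive into a staging directory and diff it against the running config first. A sketch (the helper name is an assumption):

```shell
#!/bin/bash
# stage_config ARCHIVE: unpack a config backup into a fresh temp
# directory and print that directory's path
stage_config() {
  local stage
  stage=$(mktemp -d)
  tar -xzf "$1" -C "$stage"
  echo "$stage"
}
```

Usage: `diff -r "$(stage_config config_20240120.tar.gz)" /opt/sbmcrm/config` shows what a restore would change before anything is overwritten.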
Recovery Procedures
Database Recovery
Full Database Restore
# Stop application
sudo systemctl stop sbmcrm-api
# Drop existing database (WARNING: This deletes all data)
dropdb -h localhost -U sbmcrm sbmcrm_production
# Create new database
createdb -h localhost -U sbmcrm sbmcrm_production
# Restore from backup (gunzip the file first if it was gzipped by the backup script)
pg_restore -h localhost -U sbmcrm -d sbmcrm_production backup.dump
# Or from a compressed plain-SQL backup
gunzip < backup.sql.gz | psql -h localhost -U sbmcrm sbmcrm_production
# Start application
sudo systemctl start sbmcrm-api
Point-in-Time Recovery
For point-in-time recovery, use WAL archiving:
# Enable WAL archiving in postgresql.conf
wal_level = replica
archive_mode = on
archive_command = 'cp %p /backups/wal/%f'
# Take a base backup ahead of time as the recovery starting point
pg_basebackup -D /var/lib/postgresql/basebackup -Ft -z -P
# To recover, restore that base backup, then replay the archived WAL
# files up to the target time before starting PostgreSQL
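Replaying to a point in time is configured on the restored server; on PostgreSQL 12+ the relevant settings look roughly like this (the timestamp and WAL path are illustrative):

```
# postgresql.conf on the restored server; also create an empty
# recovery.signal file in the data directory before starting PostgreSQL
restore_command = 'cp /backups/wal/%f %p'
recovery_target_time = '2024-01-20 14:30:00'
recovery_target_action = 'promote'
```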
File Recovery
# Extract all files from a backup
tar -xzf uploads_20240120.tar.gz -C /var/www/sbmcrm/uploads
# Restore a specific file (note -C must come before the member path;
# paths inside the archive are relative, e.g. ./path/to/file.jpg)
tar -xzf uploads_20240120.tar.gz -C /var/www/sbmcrm/uploads ./path/to/file.jpg
Backup Verification
Verify Backup Integrity
# Test database backup
pg_restore --list backup.dump
# Verify file backup
tar -tzf uploads_20240120.tar.gz | head -10
Automated Verification
#!/bin/bash
# Usage: verify-backup.sh /path/to/backup.dump
BACKUP_FILE="$1"
if [ -z "$BACKUP_FILE" ]; then
  echo "Usage: $0 <backup.dump>"
  exit 1
fi
# Verify database backup (custom-format dumps only; gunzip .gz files first)
if pg_restore --list "$BACKUP_FILE" > /dev/null 2>&1; then
  echo "Database backup is valid"
else
  echo "Database backup is corrupted!"
  exit 1
fi
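The same idea extends to the tar archives. A sketch that treats unreadable or empty archives as failures (the function name and output format are assumptions):

```shell
#!/bin/bash
# verify_archive FILE: succeed only if FILE is a readable, non-empty tar.gz
verify_archive() {
  local count
  count=$(tar -tzf "$1" 2>/dev/null | wc -l)
  if [ "$count" -gt 0 ]; then
    echo "Archive $1 is valid ($count entries)"
  else
    echo "Archive $1 is empty or corrupted!" >&2
    return 1
  fi
}
```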
Disaster Recovery Plan
Recovery Time Objectives (RTO)
- Critical Systems: 4 hours
- Non-Critical Systems: 24 hours
Recovery Point Objectives (RPO)
- Database: 1 hour (hourly WAL archiving)
- Files: 24 hours (daily backups)
Recovery Steps
1. Assess Damage
   - Identify what needs to be recovered
   - Determine recovery point
2. Prepare Environment
   - Set up new servers if needed
   - Restore base system
3. Restore Backups
   - Restore database
   - Restore files
   - Restore configuration
4. Verify System
   - Test application functionality
   - Verify data integrity
   - Check service health
5. Resume Operations
   - Start services
   - Monitor for issues
   - Notify stakeholders
Backup Storage
Local Storage
- Fast access for quick recovery
- Limited retention
- Vulnerable to local disasters
Cloud Storage (S3, Azure Blob)
- Off-site protection
- Scalable storage
- Cost-effective for long retention
Hybrid Approach
- Recent backups: Local storage
- Older backups: Cloud storage
- Best of both worlds
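With S3, the hybrid approach can be implemented with a lifecycle rule that transitions older objects to a colder storage class. The prefix, day counts, and storage class below are illustrative; the rule is applied with `aws s3api put-bucket-lifecycle-configuration`:

```json
{
  "Rules": [{
    "ID": "archive-old-backups",
    "Status": "Enabled",
    "Filter": {"Prefix": "database/"},
    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
    "Expiration": {"Days": 365}
  }]
}
```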
Monitoring
Backup Monitoring
Set up alerts for:
- Backup failures
- Backup size anomalies
- Backup duration issues
- Storage space warnings
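A freshness check that could feed these alerts, usable from cron or a monitoring agent (the function name, directory, and threshold are placeholders):

```shell
#!/bin/bash
# check_backup_age DIR MAX_HOURS: alert when DIR has no file newer than MAX_HOURS
check_backup_age() {
  local dir="$1" max_hours="$2"
  if find "$dir" -type f -mmin -"$((max_hours * 60))" | grep -q .; then
    echo "OK: recent backup found in $dir"
  else
    echo "ALERT: no backup in $dir within the last $max_hours hours" >&2
    return 1
  fi
}
```

For example, `check_backup_age /backups/sbmcrm/database 26` allows a couple of hours of slack around the daily 2:00 AM run.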
Health Checks
# Check last backup time
ls -lt /backups/sbmcrm/database/ | head -2
# Check backup size
du -sh /backups/sbmcrm/database/
# List stale backups (find truncates age to whole days, so -mtime +1
# matches files at least two days old)
find /backups/sbmcrm/database/ -mtime +1 -name "*.dump.gz"
Best Practices
- Test Restores Regularly - Monthly restore tests
- Multiple Backup Locations - Local + cloud
- Encrypt Backups - Protect sensitive data
- Document Procedures - Clear recovery steps
- Monitor Backups - Automated monitoring
- Version Control - Track backup versions
- Retention Policy - Clear retention rules
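For the "Encrypt Backups" practice, one option is symmetric encryption with OpenSSL. A sketch assuming a key file readable only by the backup user (the helper names are illustrative):

```shell
#!/bin/bash
# encrypt_backup FILE KEYFILE: write FILE.enc encrypted with AES-256-CBC
encrypt_backup() {
  openssl enc -aes-256-cbc -pbkdf2 -salt -in "$1" -out "$1.enc" -pass "file:$2"
}
# decrypt_backup FILE.enc KEYFILE: recover the original file next to it
decrypt_backup() {
  openssl enc -d -aes-256-cbc -pbkdf2 -in "$1" -out "${1%.enc}" -pass "file:$2"
}
```

Encrypt before uploading off-site, and store the key file outside the backup location itself.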