Database Setup
Complete guide for setting up and managing the SBM CRM Platform database.
Database Requirements
- PostgreSQL: Version 13 or later
- Storage: Minimum 50GB (varies by data volume)
- Backup Storage: 2x database size recommended
Initial Setup
Create Database
# Connect as the postgres superuser
sudo -u postgres psql
-- Create the database
CREATE DATABASE sbmcrm_production;
-- Create the application user
CREATE USER sbmcrm WITH PASSWORD 'your_secure_password';
-- Grant database privileges
GRANT ALL PRIVILEGES ON DATABASE sbmcrm_production TO sbmcrm;
-- Connect to the new database
\c sbmcrm_production
-- Grant schema privileges (required so the app user can create tables)
GRANT ALL ON SCHEMA public TO sbmcrm;
-- Exit psql
\q
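With the database and role in place, the application needs matching connection settings. Assuming the API reads a DATABASE_URL-style connection string (check your deployment's environment configuration; the variable name here is illustrative), it would look like this:
# Illustrative environment variable; adjust to the platform's actual config keys
DATABASE_URL=postgres://sbmcrm:your_secure_password@localhost:5432/sbmcrm_production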
Run Migrations
# Run all migrations
npm run migrate:production
# Or using Docker
docker-compose exec api npm run migrate
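To confirm the migrations applied, list the tables as the application user; the core tables described below (plus any bookkeeping table your migration tool creates) should be present:
# List tables in the production database
psql -U sbmcrm -h localhost -d sbmcrm_production -c '\dt'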
Database Schema
Core Tables
- customers - Customer profiles
- points_transactions - Points earned/redeemed
- rewards - Reward catalog
- campaigns - Marketing campaigns
- segments - Customer segments
- notifications - Notification history
- bookings - Booking records
Indexes
Key indexes are created automatically:
-- Customer indexes
CREATE INDEX idx_customers_email ON customers(email);
CREATE INDEX idx_customers_tier ON customers(tier);
CREATE INDEX idx_customers_created_at ON customers(created_at);
-- Points transactions
CREATE INDEX idx_points_customer_id ON points_transactions(customer_id);
CREATE INDEX idx_points_created_at ON points_transactions(created_at);
CREATE INDEX idx_points_type ON points_transactions(type);
-- Campaign participants
CREATE INDEX idx_campaign_participants_campaign ON campaign_participants(campaign_id);
CREATE INDEX idx_campaign_participants_customer ON campaign_participants(customer_id);
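If you later need to add an index to a large table that is already serving traffic, CREATE INDEX CONCURRENTLY avoids holding a long write lock (note that it cannot run inside a transaction block). The table and column below are illustrative; confirm they exist in your schema:
-- Illustrative example: build an index without blocking writes
CREATE INDEX CONCURRENTLY idx_bookings_customer_id ON bookings(customer_id);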
Database Maintenance
Vacuum and Analyze
Run VACUUM ANALYZE regularly to reclaim dead-row space and keep the planner's statistics current:
-- Vacuum database
VACUUM ANALYZE;
-- Vacuum specific table
VACUUM ANALYZE customers;
-- Full vacuum (requires exclusive lock)
VACUUM FULL;
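Autovacuum normally handles routine vacuuming; before scheduling manual maintenance, it is worth checking whether it is keeping up:
-- Tables with the most dead rows and their last (auto)vacuum/analyze times
SELECT
relname,
n_dead_tup,
last_autovacuum,
last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;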
Check Database Size
SELECT
pg_database.datname,
pg_size_pretty(pg_database_size(pg_database.datname)) AS size
FROM pg_database
WHERE datname = 'sbmcrm_production';
Check Table Sizes
SELECT
schemaname,
tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
Performance Tuning
PostgreSQL Configuration
Edit /etc/postgresql/13/main/postgresql.conf:
# Memory settings
shared_buffers = 4GB
effective_cache_size = 12GB
work_mem = 64MB
maintenance_work_mem = 1GB
# Connection settings
max_connections = 200
max_worker_processes = 8
# Query planner
random_page_cost = 1.1
effective_io_concurrency = 200
# WAL settings
wal_buffers = 16MB
checkpoint_completion_target = 0.9
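These values are sized for a dedicated server with roughly 16GB of RAM (shared_buffers around 25% of RAM, effective_cache_size around 75%) and SSD storage; scale them to your hardware. Settings such as shared_buffers and max_connections only take effect after a restart:
# Apply settings that require a restart
sudo systemctl restart postgresql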
Connection Pooling
Use PgBouncer for connection pooling:
[databases]
sbmcrm_production = host=localhost port=5432 dbname=sbmcrm_production
[pgbouncer]
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
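The snippet above omits the listener and authentication settings PgBouncer also requires; a minimal sketch (the auth method and file path are assumptions, adjust to your install) looks like this, after which the application should connect to port 6432 instead of 5432:
; Additional settings under the [pgbouncer] section
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt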
Backup and Restore
Automated Backups
Set up a cron job for daily backups:
# Add to crontab
0 2 * * * /usr/local/bin/backup-sbmcrm-db.sh
Backup script:
#!/bin/bash
BACKUP_DIR="/backups/sbmcrm"
DATE=$(date +%Y%m%d_%H%M%S)
mkdir -p "$BACKUP_DIR"
pg_dump -U sbmcrm -h localhost sbmcrm_production | gzip > "$BACKUP_DIR/sbmcrm_$DATE.sql.gz"
# Keep only the last 30 days of backups
find "$BACKUP_DIR" -name "sbmcrm_*.sql.gz" -mtime +30 -delete
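Because cron cannot answer a password prompt, give the user that runs the job a ~/.pgpass file (it must be mode 0600) and make sure the script itself is executable; the credentials below mirror the user created earlier:
# Create ~/.pgpass for the user that runs the cron job
# Format: hostname:port:database:username:password
echo 'localhost:5432:sbmcrm_production:sbmcrm:your_secure_password' > ~/.pgpass
chmod 600 ~/.pgpass
# Make the backup script executable
chmod +x /usr/local/bin/backup-sbmcrm-db.sh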
Manual Backup
# Full backup
pg_dump -U sbmcrm -h localhost sbmcrm_production > backup.sql
# Compressed backup
pg_dump -U sbmcrm -h localhost sbmcrm_production | gzip > backup.sql.gz
# Backup specific table
pg_dump -U sbmcrm -h localhost -t customers sbmcrm_production > customers_backup.sql
Restore
# Restore from backup
psql -U sbmcrm -h localhost sbmcrm_production < backup.sql
# Restore from compressed backup
gunzip < backup.sql.gz | psql -U sbmcrm -h localhost sbmcrm_production
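For larger databases, PostgreSQL's custom dump format is often more practical than plain SQL because pg_restore can run in parallel and restore individual tables:
# Custom-format dump
pg_dump -U sbmcrm -h localhost -Fc sbmcrm_production > backup.dump
# Restore with pg_restore (-j 4 runs four parallel jobs; add --clean to drop existing objects first)
pg_restore -U sbmcrm -h localhost -d sbmcrm_production -j 4 backup.dump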
Replication
Set Up Read Replica
- Configure primary server
- Set up streaming replication
- Configure replica server
- Start replication
See PostgreSQL documentation for detailed replication setup.
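As a rough sketch of what those steps involve on PostgreSQL 13 (hostnames, the replication role name, addresses, and paths below are placeholders for your environment):
# On the primary: postgresql.conf
wal_level = replica
max_wal_senders = 10
wal_keep_size = 1GB

-- On the primary: create a dedicated replication role
CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'replica_password';

# On the primary: pg_hba.conf (allow the replica's address;
# use scram-sha-256 instead of md5 if password_encryption is set to it)
host replication replicator 10.0.0.2/32 md5

# On the replica: clone the primary into an empty data directory, then start PostgreSQL
pg_basebackup -h primary-host -U replicator -D /var/lib/postgresql/13/main -P -R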
Monitoring
Check Active Connections
SELECT
state,
wait_event_type,
count(*) AS connections
FROM pg_stat_activity
WHERE datname = 'sbmcrm_production'
GROUP BY state, wait_event_type;
Check Slow Queries
-- Requires the pg_stat_statements extension (enable it as shown below)
SELECT
query,
calls,
total_exec_time,
mean_exec_time,
max_exec_time
FROM pg_stat_statements
WHERE query NOT LIKE '%pg_stat%'
ORDER BY mean_exec_time DESC
LIMIT 10;
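pg_stat_statements is an extension and is not enabled by default; it must be preloaded and then created once in the database:
# postgresql.conf (requires a restart)
shared_preload_libraries = 'pg_stat_statements'

-- Then, connected to the database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;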
Check Database Locks
SELECT
locktype,
relation::regclass,
mode,
granted
FROM pg_locks
WHERE relation IS NOT NULL;
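When a lock shows up as not granted, pg_blocking_pids() identifies which sessions are holding it up:
-- Sessions that are currently blocked, and the PIDs blocking them
SELECT
pid,
pg_blocking_pids(pid) AS blocked_by,
state,
left(query, 100) AS query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;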
Troubleshooting
Connection Issues
# Check PostgreSQL is running
sudo systemctl status postgresql
# Check connection
psql -U sbmcrm -h localhost -d sbmcrm_production
# Check PostgreSQL logs
sudo tail -f /var/log/postgresql/postgresql-*.log
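If the service is running but connections are rejected, check that pg_hba.conf allows the sbmcrm user (the path below assumes the default Debian/Ubuntu layout for PostgreSQL 13) and reload after editing:
# /etc/postgresql/13/main/pg_hba.conf - allow local TCP connections for the app user
# (use scram-sha-256 instead of md5 if password_encryption is set to it)
host    sbmcrm_production    sbmcrm    127.0.0.1/32    md5
# Reload the configuration
sudo systemctl reload postgresql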
Performance Issues
- Check for long-running queries (see the query below)
- Analyze query execution plans
- Review index usage
- Check for table bloat
- Monitor connection pool usage
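For the first item, a query like the following surfaces statements that have been running for more than a few minutes:
-- Queries active for longer than 5 minutes
SELECT
pid,
now() - query_start AS duration,
state,
left(query, 100) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
AND now() - query_start > interval '5 minutes'
ORDER BY duration DESC;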
Disk Space
-- Check database size
SELECT pg_size_pretty(pg_database_size('sbmcrm_production'));
-- Checking available disk space requires the filesystem, not SQL:
-- find the data directory, then run df on it from the shell
SHOW data_directory;
-- df -h /var/lib/postgresql/13/main