# Backup and Security: Protecting Factory Data From Loss and Breach

## Backup Strategies: The 3-2-1 Rule
Data loss in an industrial environment means lost production records, compliance violations, and weeks of manual reconstruction. The 3-2-1 rule is the gold standard:
- 3 copies of your data (original + 2 backups)
- 2 different storage media (local disk + cloud)
- 1 copy offsite (a different physical location)
| Type | Description | Frequency |
|---|---|---|
| Full | Complete copy of all data | Weekly |
| Incremental | Only changes since last backup | Daily |
| Transaction log | Database write-ahead logs | Continuous |
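The full/incremental distinction in the table can be demonstrated with GNU tar's snapshot-file support — one simple way to take incremental backups without database tooling. This is a minimal sketch with throwaway paths, assuming GNU tar (the `--listed-incremental` flag is not in BSD tar):

```shell
#!/bin/sh
# Sketch: full vs. incremental backups via GNU tar snapshot files.
# All paths are throwaway; adapt the targets for real data.
set -eu
WORK=$(mktemp -d)
mkdir -p "$WORK/data"
echo "reading-1" > "$WORK/data/monday.csv"

# Full backup: the snapshot file records what was archived and when.
tar czf "$WORK/full.tar.gz" --listed-incremental="$WORK/snapshot" -C "$WORK" data

# Next day, one new file arrives.
echo "reading-2" > "$WORK/data/tuesday.csv"

# Incremental backup: only changes since the snapshot are stored.
tar czf "$WORK/incr.tar.gz" --listed-incremental="$WORK/snapshot" -C "$WORK" data

# The incremental archive contains tuesday.csv but not monday.csv,
# which is what keeps daily backups small.
LISTING=$(tar tzf "$WORK/incr.tar.gz")
echo "$LISTING"
rm -rf "$WORK"
```

Restoring then means replaying the full backup followed by each incremental in order.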
## Database Backups: SurrealDB and PostgreSQL

### SurrealDB

```bash
# Export
surreal export --conn http://localhost:8000 \
  --user root --pass factory123 \
  --ns factory --db production \
  --output /backups/surrealdb/factory-$(date +%Y%m%d).surql

# Restore
surreal import --conn http://localhost:8000 \
  --user root --pass factory123 \
  --ns factory --db production \
  /backups/surrealdb/factory-20250415.surql
```
### PostgreSQL

```bash
# Custom-format dump (-Fc) supports selective and parallel restore
pg_dump -h localhost -U factory_user -Fc production_db > /backups/postgres/factory-$(date +%Y%m%d).dump

# Restore
pg_restore -h localhost -U factory_user -d production_db /backups/postgres/factory-20250415.dump
```
### Docker Volume Backup

```bash
# Stop the database first so the volume is in a consistent state
docker compose stop surrealdb
docker run --rm -v factory-monitor_surreal-data:/data -v /backups:/backup \
  alpine tar czf /backup/surreal-volume-$(date +%Y%m%d).tar.gz -C /data .
docker compose start surrealdb
```
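Before restarting the service it is worth confirming that the archive is actually readable; discovering a truncated tarball during a disaster is too late. A minimal sketch of the check, using a throwaway directory in place of the real volume dump:

```shell
#!/bin/sh
# Sketch: verify a backup tarball is intact before trusting it.
# Uses a throwaway directory in place of the real volume archive.
set -eu
WORK=$(mktemp -d)
mkdir -p "$WORK/data"
echo "sensor,value" > "$WORK/data/readings.csv"
tar czf "$WORK/volume.tar.gz" -C "$WORK/data" .

# gzip -t checks the compression layer; tar -t walks every member
# and fails on truncation.
if gzip -t "$WORK/volume.tar.gz" && tar tzf "$WORK/volume.tar.gz" > /dev/null; then
    RESULT="archive OK"
else
    RESULT="archive CORRUPT"
fi
echo "$RESULT"
rm -rf "$WORK"
```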
## Encrypting Backups
Unencrypted backups are a liability. Encrypt every backup before it leaves the server.
### GPG

```bash
# Encrypt with the recipient's public key
gpg --encrypt --recipient backup@drmachine.io /backups/factory-20250415.surql

# Decrypt (requires the matching private key)
gpg --decrypt /backups/factory-20250415.surql.gpg > /tmp/restore.surql
```
### OpenSSL

```bash
# Encrypt (prompts for a passphrase; -pbkdf2 strengthens key derivation)
openssl enc -aes-256-cbc -salt -pbkdf2 \
  -in /backups/factory-20250415.tar.gz \
  -out /backups/factory-20250415.tar.gz.enc

# Decrypt
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in /backups/factory-20250415.tar.gz.enc \
  -out /backups/factory-20250415.tar.gz
```
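Whichever tool you choose, verify the round trip before deleting any plaintext. A self-contained sketch (throwaway passphrase for illustration; in production read the passphrase from a root-owned key file with `-pass file:`):

```shell
#!/bin/sh
# Sketch: verify an encrypt/decrypt round trip with checksums.
set -eu
WORK=$(mktemp -d)
echo "factory backup payload" > "$WORK/backup.tar.gz"

# Encrypt, then decrypt to a separate file.
openssl enc -aes-256-cbc -salt -pbkdf2 -pass pass:demo-only \
    -in "$WORK/backup.tar.gz" -out "$WORK/backup.tar.gz.enc"
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo-only \
    -in "$WORK/backup.tar.gz.enc" -out "$WORK/restored.tar.gz"

# Checksums must match, or the backup is unusable.
SUM_A=$(sha256sum "$WORK/backup.tar.gz" | cut -d' ' -f1)
SUM_B=$(sha256sum "$WORK/restored.tar.gz" | cut -d' ' -f1)
if [ "$SUM_A" = "$SUM_B" ]; then
    echo "round trip OK"
fi
rm -rf "$WORK"
```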
### Upload to Cloud

```bash
rclone copy /backups/factory-20250415.tar.gz.enc remote:factory-backups/daily/
```
## Key and Secret Management

### Environment Files with Strict Permissions

```bash
cat > /opt/factory-monitor/.env << 'EOF'
DATABASE_URL=ws://localhost:8000
DB_USER=root
DB_PASS=secure_factory_password_2025
EOF

# Owner read/write only; no group or world access
chmod 600 /opt/factory-monitor/.env
chown factory:factory /opt/factory-monitor/.env
```
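A deploy or health-check script can enforce these permissions instead of trusting them. The sketch below uses GNU `stat -c` (Linux; macOS spells it `stat -f '%Lp'`) and a temporary file as a stand-in for the real `.env`:

```shell
#!/bin/sh
# Sketch: fail loudly if a secrets file is group- or world-readable.
# The temp file stands in for /opt/factory-monitor/.env.
set -eu
ENV_FILE=$(mktemp)
echo "DB_PASS=example" > "$ENV_FILE"
chmod 600 "$ENV_FILE"

# GNU stat prints the octal mode, e.g. 600
MODE=$(stat -c '%a' "$ENV_FILE")
if [ "$MODE" = "600" ]; then
    echo "permissions OK"
else
    echo "WARNING: $ENV_FILE is mode $MODE, expected 600" >&2
fi
rm -f "$ENV_FILE"
```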
### Docker Secrets

```yaml
services:
  factory-app:
    secrets: [db_password, api_key]
    environment:
      - DB_PASS_FILE=/run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
  api_key:
    file: ./secrets/api_key.txt
```
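The `_FILE` suffix is only a convention: the application, or its entrypoint script, must expand it into the real variable. A sketch of that expansion step, with a temp file standing in for `/run/secrets/db_password`:

```shell
#!/bin/sh
# Sketch of the *_FILE convention used by many official Docker images:
# if DB_PASS_FILE points at a secret file, load its contents into DB_PASS.
set -eu

SECRET_FILE=$(mktemp)            # stands in for /run/secrets/db_password
printf 'secure_factory_password_2025' > "$SECRET_FILE"
DB_PASS_FILE="$SECRET_FILE"

# The expansion an entrypoint would run before exec'ing the app.
if [ -n "${DB_PASS_FILE:-}" ] && [ -f "$DB_PASS_FILE" ]; then
    DB_PASS=$(cat "$DB_PASS_FILE")
    export DB_PASS
fi
echo "DB_PASS loaded: ${#DB_PASS} characters"
rm -f "$SECRET_FILE"
```

The advantage over plain environment variables is that the secret never appears in `docker inspect` output or the image history.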
### Rotating Secrets

```bash
# Generate a new password and apply it to SurrealDB
NEW_PASS=$(openssl rand -base64 32)
surreal sql --conn http://localhost:8000 --user root --pass "$OLD_PASS" \
  "DEFINE USER root ON ROOT PASSWORD '$NEW_PASS'"

# Update the env file and restart the service to pick it up.
# Use | as the sed delimiter: base64 output can contain /.
sed -i "s|DB_PASS=.*|DB_PASS=$NEW_PASS|" /opt/factory-monitor/.env
sudo systemctl restart factory-monitor
```
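Because `sed -i` rewrites the file in place, a crash mid-rotation can leave a corrupt `.env`. A more careful sketch writes a temp copy and atomically renames it, demonstrated here against a throwaway file rather than the real path:

```shell
#!/bin/sh
# Sketch: rotate a secret in an env file atomically (temp file + mv),
# so a crash mid-write never leaves a half-updated file.
set -eu
ENV_FILE=$(mktemp)               # stands in for /opt/factory-monitor/.env
printf 'DATABASE_URL=ws://localhost:8000\nDB_PASS=old_password\n' > "$ENV_FILE"

NEW_PASS=$(openssl rand -base64 32)
TMP=$(mktemp)
# | delimiter because base64 output can contain /
sed "s|DB_PASS=.*|DB_PASS=$NEW_PASS|" "$ENV_FILE" > "$TMP"
chmod 600 "$TMP"
mv "$TMP" "$ENV_FILE"            # rename on the same filesystem is atomic

ROTATED=$(grep '^DB_PASS=' "$ENV_FILE")
rm -f "$ENV_FILE"
```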
## Disaster Recovery Plan
A backup is useless if you cannot restore from it. Define your RTO (Recovery Time Objective: how quickly the system must be back online) and RPO (Recovery Point Objective: how much data loss is acceptable). For example, a nightly backup at 02:00 means a worst-case RPO of about 24 hours; continuous transaction-log shipping can shrink it to minutes.
### Recovery Checklist
1. [ ] Provision a new server or use standby
2. [ ] Install Docker and Docker Compose
3. [ ] Restore the latest backup
4. [ ] Verify data integrity
5. [ ] Update DNS records if IP changed
6. [ ] Re-establish VPN tunnels
7. [ ] Verify monitoring and alerts
8. [ ] Notify the operations team
### Test Recovery Quarterly

```bash
# Spin up an isolated test stack on a different port
docker compose -f docker-compose.test.yml up -d

# Restore the latest backup into it
surreal import --conn http://localhost:9000 --user root --pass test123 \
  --ns factory --db production /backups/latest/factory-latest.surql

# Sanity-check the restored row count
surreal sql --conn http://localhost:9000 --user root --pass test123 \
  "SELECT count() FROM sensor_readings GROUP ALL"

# Tear down, discarding the test volumes
docker compose -f docker-compose.test.yml down -v
```
## Practical Example: Automated Backup System for Factory Data

```bash
#!/bin/bash
# /opt/scripts/backup-factory.sh
set -euo pipefail

BACKUP_DIR="/backups/$(date +%Y%m%d)"
RETENTION_DAYS=30
mkdir -p "$BACKUP_DIR"

# Load DB_PASS from the env file; cron runs with a minimal environment
source /opt/factory-monitor/.env

# 1. Export SurrealDB
surreal export --conn http://localhost:8000 --user root --pass "$DB_PASS" \
  --ns factory --db production --output "$BACKUP_DIR/surrealdb.surql"

# 2. Backup Docker volumes
docker run --rm -v factory-monitor_surreal-data:/data -v "$BACKUP_DIR":/backup \
  alpine tar czf /backup/volumes.tar.gz -C /data .

# 3. Backup configs
tar czf "$BACKUP_DIR/configs.tar.gz" /opt/factory-monitor/docker-compose.yml \
  /opt/factory-monitor/.env /etc/wireguard/wg0.conf

# 4. Encrypt
tar czf - "$BACKUP_DIR" | openssl enc -aes-256-cbc -salt -pbkdf2 \
  -pass file:/opt/secrets/backup-key -out "/backups/factory-$(date +%Y%m%d).tar.gz.enc"

# 5. Upload to cloud
rclone copy "/backups/factory-$(date +%Y%m%d).tar.gz.enc" remote:factory-backups/daily/

# 6. Clean old backups
find /backups -name "*.enc" -mtime +"$RETENTION_DAYS" -delete
rm -rf "$BACKUP_DIR"
```
### Cron Entry

```bash
# Run nightly at 02:00 and append output to a log
0 2 * * * /opt/scripts/backup-factory.sh >> /var/log/backup.log 2>&1
```
## Summary
Data protection requires a layered approach: the 3-2-1 rule ensures copies survive any single failure, encryption protects data at rest and in transit, proper secret management prevents credential leaks, and a tested disaster recovery plan ensures you can actually restore when needed. This concludes the Docker and DevOps series covering containerization, orchestration, Linux administration, CI/CD, monitoring, networking, and data protection for industrial environments.