Hi,
I'm trying to mirror some MS Access databases, and I've successfully set up a backup job that copies only the changes (a delta copy) to S3 every 10 minutes. This works well: AWS inbound traffic is free, and the storage cost for the full copy plus deltas is minimal. Restores, however, are expensive. I'm restoring to a server every 10 minutes, and the restore job appears to delete the file and overwrite it with a fresh copy from S3. The zipped DB is about 40 MB, so restoring every 10 minutes generates about 5 GB of outbound traffic per day, and growing.
I guess it might be too much to expect delta updates on the restore side, but is there any other way to optimise the restores? For instance, if the file hasn't been modified since the last restore, can the job be set to skip it? In our case the restored copy sits in a staging area, and after each restore a PowerShell job copies the restored DB into the production area, so the staged copy is never touched. The restore job should therefore be able to compare its timestamp with the copy on S3.
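To illustrate what I mean, here's a rough sketch in Python of the skip-if-unchanged check I'm hoping the restore job could do. The function and path names are mine, purely hypothetical; a real version would presumably get the S3 object's LastModified from a HEAD request rather than take it as a parameter:

```python
# Hypothetical sketch: only re-download when the S3 copy is newer than
# the staged local copy. In practice the remote timestamp would come
# from an S3 HEAD request on the backed-up object.
import os
from datetime import datetime, timezone

def needs_restore(local_path: str, remote_last_modified: datetime) -> bool:
    """Return True only if the S3 copy is newer than the staged file."""
    if not os.path.exists(local_path):
        return True  # nothing staged yet, so we must download
    local_mtime = datetime.fromtimestamp(
        os.path.getmtime(local_path), tz=timezone.utc
    )
    return remote_last_modified > local_mtime
```

Since the staged copy is never touched after a restore, its mtime should be a reliable baseline for this comparison.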
Our aim is to mirror the on-premise Access DBs to a number of load-balanced cloud servers, like this. All traffic is one-way:
BACKUP:
ON-PREMISE--->S3
RESTORE:
S3 ---> CLOUD SERVER1
    +-> CLOUD SERVER2
    +-> CLOUD SERVER3
Any assistance would be appreciated.
Thanks,
Neil