Hi,
I've recently upgraded to NVMe SSDs, hoping that this would vastly improve the backup performance of large files as well.
My setup is:
- Skylake Core i7 mobile CPU (4 physical cores, 8 logical cores with Hyper-Threading)
- 16 GB RAM
- 2 x Samsung 960 EVO NVMe SSD (one contains source data, the other one is the backup medium)
- Windows 10 and Syncovery 7.85h 64bit
My scenario is:
- Synthetic backup of a 60 GB VM disk image from one SSD to the other
After the initial backup, only block-level increments are written, as expected. What I've noticed is that while reading the two files (the original and the backup) to compare them and generate the delta, Syncovery reads at approx. 160 MB/s (as shown by Windows Resource Monitor), whereas the SSD is capable of ~3 GB/s sequential reads. The CPU is only about 16% loaded, with no single core maxed out (i.e. all cores sit at roughly 16% according to Resource Monitor).
Now, yes, these SSDs are not very fast at writing. But my test involved just starting the VM and shutting it down again to generate a small block-level difference between the two disk images, i.e. the total delta written by Syncovery after this test is only approx. 5 MB.
My question is: what is the bottleneck here? I was expecting the two 60 GB files to be compared _very_ quickly.
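For reference, I verified the raw sequential read speed outside of Syncovery with a quick sketch like the one below (the path in the comment is just a placeholder for my VM image; note the OS file cache can inflate the number on repeated runs):

```python
import time

CHUNK = 8 * 1024 * 1024  # 8 MiB reads, large enough to approach sequential throughput


def read_speed(path):
    """Read a file sequentially from start to end and return MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6  # bytes per second -> MB/s


# Placeholder path to the 60 GB VM disk image:
# print(f"{read_speed('D:/vm/disk.vhdx'):.0f} MB/s")
```

Reading the image this way easily exceeds 160 MB/s, so the raw disk path does not seem to be the limit.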
What exactly happens during the file-comparison phase of a synthetic backup? What kind of reading is involved (sequential?), and what kind of writing? Is the comparison multi-threaded (e.g. multiple chunks compared in parallel)?
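I don't know Syncovery's internals, but naively I would expect the comparison to look roughly like this: read both files sequentially in matching fixed-size blocks and compare them (the 4 MiB block size and the hashing are purely my assumptions, not anything from the product docs):

```python
import hashlib

CHUNK = 4 * 1024 * 1024  # assumed block size


def changed_blocks(src_path, dst_path):
    """Yield indices of fixed-size blocks whose contents differ between two files."""
    with open(src_path, "rb") as src, open(dst_path, "rb") as dst:
        index = 0
        while True:
            a = src.read(CHUNK)
            b = dst.read(CHUNK)
            if not a and not b:
                break
            # Comparing digests rather than raw bytes mimics what a
            # block-level delta tool might store per block.
            if hashlib.sha256(a).digest() != hashlib.sha256(b).digest():
                yield index
            index += 1
```

Something like this should be completely sequential and trivially parallelizable across block ranges, which is why the 160 MB/s at 16% CPU surprises me.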
Thanks