I have a pretty decent internet connection (150 Mbps downstream, 15 Mbps upstream). In my case, I have several hundred thousand small files and a few thousand large files. While uploading large files to Google Drive, Syncovery saturates my upstream bandwidth, but with small files my usage falls below 1%. The reason is that Syncovery caps me at a maximum of 10 simultaneous upload threads.
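(Back-of-the-envelope, with guessed rather than measured latencies: if each small file is only a few KB but each upload spends a second or two in API round-trips, 10 threads move on the order of 10–20 KB/s combined, which is roughly 1% of a 15 Mbps (~1.9 MB/s) upstream link.)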
The problem I'm having is that I would like my uploads to go as fast as possible, but I also want the ability to pause them in order to use the bandwidth for something else. Currently, when I pause Syncovery, it waits until every in-flight file has finished uploading. So if the 10 files being uploaded are small, it pauses very quickly, but if they are large it might take a couple of hours to stop (depending on how big the files are). In essence, uploading multiple big files simultaneously renders the pause feature useless.
My question is: is there a way to optimize my particular case?
To be honest, I could live without the pause feature; what matters most to me is having Syncovery use my full bandwidth to reduce upload times.
I suppose my ideal scenario would be a setting where files get categorized as small, medium, and large based on configurable size thresholds that I could fine-tune. Let's say:
small = < 1 MB
medium = 1–5 MB
large = > 5 MB
Now, suppose we had 100 threads, but each thread would only grab files matching its category: 1 thread uploading large files, 5 threads medium files, and 94 threads small files. In this scenario you'd always be uploading a large file (thus maximizing your bandwidth), and you could also fine-tune how many small files are transmitted simultaneously. It seems to me that for really small files the API calls take longer than the actual transfer, so being able to tune these thread counts would be optimal. A sketch of what I have in mind follows.
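To illustrate the idea (this is just a Python sketch of the scheduling scheme I'm describing, not anything Syncovery actually does; the thresholds, worker counts, and the `upload()` stub are all made up):

```python
import concurrent.futures
import os

# Hypothetical size thresholds in bytes, mirroring the buckets above;
# in my ideal scenario these would be tunable settings.
SMALL_MAX = 1 * 1024 * 1024   # < 1 MB
MEDIUM_MAX = 5 * 1024 * 1024  # 1-5 MB

# One worker pool per size class: many workers for small files
# (per-call API overhead dominates), few for large ones
# (raw bandwidth dominates).
POOLS = {
    "small": concurrent.futures.ThreadPoolExecutor(max_workers=94),
    "medium": concurrent.futures.ThreadPoolExecutor(max_workers=5),
    "large": concurrent.futures.ThreadPoolExecutor(max_workers=1),
}

def bucket(path: str) -> str:
    """Classify a file into a size bucket."""
    size = os.path.getsize(path)
    if size < SMALL_MAX:
        return "small"
    if size <= MEDIUM_MAX:
        return "medium"
    return "large"

def upload(path: str) -> None:
    """Placeholder for the real upload call (e.g. a Drive API client)."""
    ...

def schedule(paths):
    # Hand each file to the pool matching its size class, so a
    # long-running large upload never occupies a small-file worker.
    return [POOLS[bucket(p)].submit(upload, p) for p in paths]
```

With something like this, the single large-file worker keeps the pipe full while the 94 small-file workers amortize the per-call API overhead, and each bucket's worker count could be tuned independently.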