FTP over SSL is good and fast.
SFTP is secure and straightforward.
But there is one peculiarity: synchronization over these protocols is not fully consistent.
The proposal is simple: add the ability to synchronize using FreeFileSync's own dedicated protocol.
Since my programming skills do not include C++, the language FreeFileSync is written in, I will only briefly outline ideas that could be used for this kind of protocol.
1. One side is the server and the other is the client.
2. When the protocol is first initialized, each party generates a self-signed RSA-2048 certificate, which is subsequently used for mutual authentication of the parties.
3. The server opens at least one main TCP port plus a configurable number of additional TCP and UDP ports (specified in the settings).
3.1 When the corresponding option is enabled, the server can forward these ports through the router via UPnP.
4. The client connects to the server via the main TCP port.
4.1 On connection, both the client and the server display a confirmation prompt showing the peer certificate's fingerprint, color-coded for easier visual comparison (split the fingerprint into 6-character groups, interpret each group as an HTML color, and underline that group in the resulting color).
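The color-coding idea above could be sketched like this (a sketch only: I assume a SHA-256 fingerprint, and that trailing hex digits which don't fill a complete 6-character group are dropped; the actual HTML rendering is up to the UI):

```python
import hashlib

def fingerprint_colors(cert_der: bytes):
    """Split a certificate's SHA-256 fingerprint into 6-hex-digit groups
    and interpret each group as an HTML #RRGGBB color for underlining."""
    fp = hashlib.sha256(cert_der).hexdigest()  # 64 hex characters
    # Keep only complete 6-character groups (the last 4 digits are dropped).
    groups = [fp[i:i + 6] for i in range(0, len(fp) - len(fp) % 6, 6)]
    # Each group is shown as text and underlined with the color it encodes.
    return [(g, "#" + g) for g in groups]

# Example: color-code a dummy "certificate"
for text, color in fingerprint_colors(b"example certificate bytes"):
    print(f"{text} underlined with {color}")
```

Because both sides derive the colors from the same fingerprint, a human comparing the two prompts only has to match a short sequence of colors instead of 64 hex digits.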
4.2 If the request is accepted, the client asks the server for its file metadata; it may be enough to simply transfer the database file.
4.2.1 Optionally, during the initial path listing the server could split every file into blocks of size N (specified in the settings) and compute a hash for each block. Collisions of cheap hashes need to be handled separately.
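Item 4.2.1 could look roughly like the following sketch. The choice of CRC-32 as the "cheap" hash and SHA-256 as the collision tiebreaker is my assumption, not part of the proposal: the cheap hash is used for quick comparison, and when two cheap hashes match, the strong hash decides whether the blocks are really identical.

```python
import hashlib
import zlib

BLOCK_SIZE = 4 * 1024 * 1024  # block size N, configurable in the settings

def index_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Split a file's content into fixed-size blocks and record, for each
    block, a cheap hash (CRC-32) plus a strong hash (SHA-256) used only
    to resolve cheap-hash collisions."""
    index = []
    for offset in range(0, len(data), block_size):
        block = data[offset:offset + block_size]
        index.append({
            "offset": offset,
            "crc32": zlib.crc32(block),
            "sha256": hashlib.sha256(block).hexdigest(),
        })
    return index
```

In a real implementation the data would be read from disk in a streaming fashion rather than held in memory; the structure of the per-block records is what matters here.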
5. After the remote file data is received and compared with the local side, the result is shown to the user.
6. When a transfer is initiated, the data is split into blocks that are sent over the opened TCP and UDP ports; FreeFileSync must decide on its own which transport is more effective at any given moment - the user only sets the maximum thresholds.
6.1 The transmission channels must be AES-encrypted (faster than a full SSL handshake), and UDP must support retransmission of missing blocks.
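A minimal sketch of the retransmission bookkeeping for the UDP channel in 6.1 (no sockets or encryption here, just the receiver-side logic for detecting missing blocks and building the re-request; all names are illustrative):

```python
class BlockReceiver:
    """Tracks which blocks of a transfer have arrived over UDP and
    which block numbers must be re-requested from the sender."""

    def __init__(self, total_blocks: int):
        self.total_blocks = total_blocks
        self.received = {}  # block number -> payload

    def on_block(self, number: int, payload: bytes) -> None:
        """Record an arriving block; duplicates simply overwrite."""
        self.received[number] = payload

    def missing(self) -> list[int]:
        """Block numbers to include in the next retransmission request."""
        return [n for n in range(self.total_blocks) if n not in self.received]

    def complete(self) -> bool:
        return len(self.received) == self.total_blocks
```

The receiver would periodically send `missing()` back to the sender until `complete()` is true, which is the usual NACK-based pattern for reliable delivery over UDP.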
6.2 During initial content indexing, the same block may turn out to belong to several files. In that case the block must be transmitted once and written to each location.
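The "transmit once, write several times" idea in 6.2 boils down to a reverse index from block hash to every (file, offset) location that needs the block. A sketch, assuming SHA-256 block hashes and an in-memory store standing in for the destination files:

```python
import hashlib
from collections import defaultdict

def build_block_targets(files: dict[str, bytes], block_size: int):
    """Map each unique block hash to all (path, offset) locations that
    need it, so a block shared by several files is sent only once."""
    targets = defaultdict(list)
    for path, data in files.items():
        for offset in range(0, len(data), block_size):
            digest = hashlib.sha256(data[offset:offset + block_size]).hexdigest()
            targets[digest].append((path, offset))
    return targets

def apply_block(targets, store, digest, payload: bytes) -> None:
    """Write one received block into every file location that needs it."""
    for path, offset in targets[digest]:
        buf = store.setdefault(path, bytearray())
        if len(buf) < offset + len(payload):
            buf.extend(b"\x00" * (offset + len(payload) - len(buf)))
        buf[offset:offset + len(payload)] = payload
```

With this index, a block that appears in ten files crosses the wire once and is fanned out locally, which is exactly the saving 6.2 is after.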
P.S. An explanation of why this opportunity interests me: I need to transfer about 174 terabytes of data, of very different types, from one continent to another. Both sides use Windows, but the specifics of the data do not allow the use of well-known protocols. Using a VPN is also not an option - local regulators either block it completely or interfere so radically that transfers at sane speeds become impossible. A temporary solution is to use SFTP and FTPS, but with active use, operators begin to throttle those as well.
P.P.S. This text was translated from my local language using Google Translate. I ask you to be understanding about the strangeness of the translation.
Thread: [Feature request] Custom synchronization protocol
Original poster: Posts: 1, Joined: 3 Feb 2024

Reply (Posts: 4074, Joined: 11 Jun 2019):
This will never, and shouldn't ever, be something added to FFS. That is such a unique and complex case, outside the scope of FFS' design. Also, you basically described FTP exactly in your proposal anyway. If you're under such monitoring, where traffic is interfered with, there is no way to transfer that amount of data concurrently. You'll probably be best off chunking the data and transferring it in sections on your own. UDP is used specifically to decrease latency at the cost of data loss, so retransmitting bad blocks while using it is like using a shotgun to hit a target 1000 yards away. The point is that you don't care if some packets are lost in order to maintain real-time latency, such as in VoIP.