CloudBerry S3 Backup is a powerful Windows program that automates backup and restore to Amazon S3 cloud storage. One feature that causes some confusion among our users is its ability to break larger files into chunks. Chunking makes data transfer faster, more efficient, and more reliable. The main drawback, however, is that the files remain chunked on Amazon S3, so you need CloudBerry Backup to get those files back. In this post, we answer some of the common questions and shed some light on the chunking implementation details.
Q. Are the encrypted files (.chunk / .map) stored in some form of open/standard format?
A. This is our own format.
The chunks (names ending with "..chunk..<number>") are data files: just a split data stream.
The definition map-file (name ending with "..chunk..map") is an XML file that describes the data stream. It contains information about the compression and encryption algorithms used.
Q. Can I use any S3 client product to browse, download files, and then decrypt the files by some generic decryption utility (supplying the correct passphrase of course)?
A. You can download all chunks ("..chunk..<number>") and combine them into a single file in the appropriate sequence. This single file is a compressed and encrypted stream (if you used those options).
- If neither compression nor encryption was used, this single file is the original unchunked file.
- If only compression was used, you can decompress it with WinRAR, WinZip, or any other tool that understands the GZIP compression format. Just add ".gz" to the file name: if the file name is "file.doc", rename it to "file.doc.gz", because a gz-file doesn't carry the original file name.
In this case, the map-file records which compression algorithm was used. Currently, only GZip compression is supported.
- If encryption is used, the map-file contains similar information describing the encryption settings.
All encryption algorithms that we use are standard and supported by the Microsoft .NET Framework: AES, DES, 3DES, and RC2.
The map-file also specifies the key size in bits.
For all algorithms, we use CBC mode and the PKCS7 padding scheme. The initialization vector is stored in the map-file (base64-encoded).
The map-file also stores a base64-encoded SHA1 hash of the encryption key (not to be confused with the password). It is used when you enter an incorrect password on download, so you don't need to download, say, a whole 1 GByte file just to learn that the password is wrong.
For key generation, we use the MS .NET Rfc2898DeriveBytes() class with a zero salt, which implements the PKCS #5 standard PBKDF2 function (see http://www.ietf.org/rfc/).
So if you have a tool that accepts the initialization vector and can generate an encryption key using PBKDF2 (password-based key derivation), you might be able to decrypt this single file.
- If the file was both compressed and encrypted, you have to decrypt it first and then decompress it.
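The key-derivation step above can be sketched in Python's standard library. Note that the salt length and iteration count below are assumptions for illustration only; the actual parameters are defined by the backup tool, so treat this as a starting point rather than a drop-in decryptor:

```python
import base64
import hashlib

def derive_key(password: str, key_bits: int, iterations: int = 1000) -> bytes:
    """Derive an encryption key from a password with PBKDF2 (PKCS #5),
    analogous to .NET's Rfc2898DeriveBytes with a zero salt.

    ASSUMPTIONS: an 8-byte zero salt and 1000 iterations are illustrative
    guesses, not confirmed CloudBerry parameters.
    """
    salt = bytes(8)  # "zero salt" -- assumed here to be 8 zero bytes
    return hashlib.pbkdf2_hmac("sha1", password.encode("utf-8"),
                               salt, iterations, dklen=key_bits // 8)

def key_check(key: bytes) -> str:
    """Base64-encoded SHA1 hash of the derived key, comparable to the
    hash stored in the map-file for early wrong-password detection."""
    return base64.b64encode(hashlib.sha1(key).digest()).decode("ascii")

key = derive_key("my passphrase", key_bits=256)
print(len(key))        # 32 bytes for a 256-bit key
print(key_check(key))  # compare against the hash in the map-file
```

Comparing the output of `key_check()` against the hash in the map-file tells you whether the password is right before you decrypt anything.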
So the general algorithm is as follows:
1. Download with any S3 tool all chunks: filename..chunk..1, filename..chunk..2, ... , filename..chunk..N
2. Combine these files into a single one by appending them in sequence: 1, 2, ..., N.
You can do it with some advanced file managers like FAR or Total Commander.
3. Download map-file: filename..chunk..map
4. If the file is encrypted, decrypt the single file using information from the map.
5. If the file is compressed, decompress it with any tool that supports the gzip format. For many tools, it's best to add the ".gz" extension first.
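The combine and decompress steps (2 and 5 above) can be sketched in Python. The file names here are hypothetical, and decryption (step 4) is omitted since its exact parameters come from the map-file:

```python
import glob
import gzip

def reassemble(prefix: str, out_path: str) -> None:
    """Concatenate prefix..chunk..1 .. prefix..chunk..N in numeric order
    into a single file (step 2). The "..chunk..map" file is skipped
    because the glob pattern requires the suffix to start with a digit."""
    chunks = sorted(glob.glob(prefix + "..chunk..[0-9]*"),
                    key=lambda p: int(p.rsplit("..", 1)[1]))
    with open(out_path, "wb") as out:
        for chunk in chunks:
            with open(chunk, "rb") as part:
                out.write(part.read())

def decompress(in_path: str, out_path: str) -> None:
    """Undo the GZip compression (step 5); equivalent to renaming the
    combined file to *.gz and running gunzip on it."""
    with gzip.open(in_path, "rb") as src, open(out_path, "wb") as dst:
        dst.write(src.read())
```

If the map-file shows the backup was encrypted, decrypt the reassembled stream before calling `decompress()`.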
Q. Am I required to use a CloudBerry product to decrypt the files? If so, is CloudBerry S3 Explorer compatible with the storage/encryption format used by WHS Backup?
A. It's possible to get the files back with CloudBerry Explorer Pro (in chunk transparency mode). Read about CloudBerry Explorer chunking support here.
Q. Is the source code used by WHS Backup for encryption/decryption publicly available?
A. We are going to release a freeware tool for decrypting/decompressing chunked files (that are already downloaded to a local computer) and make its source code available. Stay tuned.
As always, we would be happy to hear your feedback, and you are welcome to post a comment.
Note: this post applies to CloudBerry Backup 1.3 and later.