CloudBerry Dedup Server FAQ
Q: What’s new in CloudBerry Dedup Server 2.0?
We simplified data transfer: you no longer need intermediate network shares for client backups; all data now goes directly to the dedup service.
Q: Which storage services does CloudBerry Dedup Server support?
It works with most major cloud-based storage platforms, including Amazon Web Services, Microsoft Azure, Google Cloud, and many others.
Q: Does Dedup server support compression and encryption?
Yes, it supports compression and AES-256 encryption.
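As a generic illustration (not the product's actual pipeline), backup tools typically compress data before encrypting it, because well-encrypted output is indistinguishable from random bytes and no longer compresses. The sketch below uses `zlib` for compression and `os.urandom` as a stand-in for AES-256 ciphertext:

```python
import os
import zlib

# Repetitive plaintext compresses well.
plaintext = b"backup block contents " * 500
compressed = zlib.compress(plaintext)
assert len(compressed) < len(plaintext)

# Ciphertext from a strong cipher such as AES-256 looks like random
# bytes; os.urandom stands in for it here. Compressing it gains
# nothing, which is why the pipeline compresses first, then encrypts.
pseudo_ciphertext = os.urandom(len(plaintext))
assert len(zlib.compress(pseudo_ciphertext)) >= len(pseudo_ciphertext)
```

The same ordering applies regardless of the cipher; only the compression step benefits from the redundancy in the original data.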
Q: Does CloudBerry Dedup Server maintain connection with cloud storage?
No. Connections with cloud storage are established and closed on demand when uploading and restoring files.
Q: Does CloudBerry Backup maintain connection with CloudBerry Dedup Server?
Not in the current version; connections are established on demand. However, we are working on features that may require (or benefit from) a persistent connection.
Q: Does CloudBerry Dedup Server have a PostgreSQL database that keeps track of everything related to deduplication?
A PostgreSQL database is used to store deduplication metadata. The database is an essential component, and losing it means losing all backed-up data, because the original files cannot be reconstructed from deduplicated data without it.
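A minimal sketch of why that metadata is indispensable (hypothetical structures, not CloudBerry's actual schema): the block store holds only content-addressed chunks, and only the metadata records which hashes, in which order, make up each file:

```python
import hashlib

BLOCK_SIZE = 4  # tiny for illustration; real systems use KB/MB blocks

block_store = {}  # hash -> unique block bytes (what lands in the cloud)
metadata = {}     # filename -> ordered list of block hashes (the database's role)

def backup(name, data):
    hashes = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        block_store.setdefault(h, block)  # each unique block stored once
        hashes.append(h)
    metadata[name] = hashes

def restore(name):
    # Without the metadata map, the stored blocks are just an unordered
    # pile of chunks: nothing says which file they belong to or in what
    # order they should be reassembled.
    return b"".join(block_store[h] for h in metadata[name])

backup("a.txt", b"abcdabcdxyz")
assert restore("a.txt") == b"abcdabcdxyz"
assert len(block_store) == 2  # "abcd" stored once, plus "xyz"
```

Deleting `metadata` while keeping `block_store` leaves the data intact but unrecoverable, which is exactly the failure mode described above.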
Q: What will be the size of the Dedup 'intermediate' storage in the future? Is it a copy of the data stored in the cloud, or only an index related to the cloud storage, yet much smaller? How much space do I need to allocate for the 'intermediate' storage compared to the overall size of the backed-up data before and after deduplication?
The intermediate storage is currently used only to combine sets of blocks (called mediasets) before uploading them to the cloud. The mediaset size is limited to 4 GB by default. Once a 4 GB file is ready, it goes to the cloud and is then removed from the intermediate storage. We have also added a force-upload timer, which allows Dedup to send data to the storage without checking its size. If you want to use this option, go to the Advanced tab of the Dedup Server settings.
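The accumulate-then-flush behavior can be sketched like this (a simplified model with hypothetical names; the real service uses a 4 GB limit and a configurable timer):

```python
import time

class Mediaset:
    """Accumulates blocks, flushing when a size limit or timer is reached."""

    def __init__(self, size_limit, force_upload_seconds):
        self.size_limit = size_limit
        self.force_upload_seconds = force_upload_seconds
        self.blocks = []
        self.size = 0
        self.started = time.monotonic()
        self.uploaded = []  # stands in for cloud storage

    def add(self, block):
        self.blocks.append(block)
        self.size += len(block)
        if (self.size >= self.size_limit or
                time.monotonic() - self.started >= self.force_upload_seconds):
            self.flush()

    def flush(self):
        if self.blocks:
            self.uploaded.append(b"".join(self.blocks))  # "upload", then
            self.blocks, self.size = [], 0               # clear local copy
            self.started = time.monotonic()

# Tiny limits for illustration (the real default is 4 GB):
ms = Mediaset(size_limit=10, force_upload_seconds=60)
ms.add(b"aaaa")
ms.add(b"bbbb")
ms.add(b"cccc")  # crosses the 10-byte limit, so the set is flushed
assert ms.uploaded == [b"aaaabbbbcccc"]
assert ms.size == 0
```

The force-upload timer matters for slowly changing datasets: without it, a partially filled mediaset could sit in the intermediate storage indefinitely.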
Q: What is the function of the Dedup Server 'repository backup'? Is it a backup of the actual data sent by CloudBerry Backup clients, a backup of the server's configuration, or a backup of the 'intermediate' storage?
The repository backup backs up the PostgreSQL database that contains all dedup metadata. Without this information, all deduplicated data is useless.
Q: If my computers go up in flames, how do I set up CloudBerry Dedup Server on a new computer to restore data from the cloud? How do I set up CloudBerry Backup on a new computer to restore the data backed up from the previous computer, which is dead and gone?
In case of disaster, proceed as follows.
On the server:
- Install the Dedup Server on a new computer
- Run the “Restore CloudBerry Dedup Server” shortcut
- Configure your cloud account
- Run a repository restore
- Start the CloudBerry Dedup Server service
On the client computer:
- Install CloudBerry Backup
- Configure a new Dedup Server account and click 'Advanced Settings'
- Select the name of your crashed computer as a Backup Prefix
Then the repository synchronization process will start; once it completes, you'll be able to see the backed-up files on the Backup Storage tab.