

Trac Migration edited this page May 18, 2019 · 1 revision

Feature Requests

Add notes on changes you'd like to see in a future release below - there is no commitment to add these, but they stand a better chance if they're documented here. It is a good idea to add your name to any feature request in case a developer wants to discuss it further.

Local Mode

Make it possible to avoid network sockets entirely, by having the client communicate with the Box Backup server through pipes after connecting to the server via ssh. rsync has this option, and it would make Box Backup less dependent on server configurations - many servers don't allow listening or persistent processes.

Serverless backup to a local (removable) drive. There are a lot of people out there who could use a client-only backup solution because they don't have the resources or know-how to get a dedicated backup machine up and running.

Changing Flags

Add the possibility to change the flags of files on the server, for example to unmark a file as deleted (this already exists as the previously undocumented bbackupquery command 'undelete', now documented in 0.11 - chris).

Versioned Repository

Time-based repository access would allow

  • Rolling back a repository (for a specific client) to a specific date/time. This would be useful in case large-scale changes were accidentally 'backed up' (eg if a backed-up filesystem is unmounted and bbackupd tells the server all its contents are deleted). (StuartH)
  • bbackupquery to view and restore a repository "as at" a specified time. (StuartH)

This requires changing the repository structure, and hence breaking backwards compatibility with older clients. It's being considered for 0.12. However it will be hugely invasive and require a massive amount of developer time and testing, and therefore the prospects are not good at the moment (Chris).

Mass Deletion Protection

Protection against an unmounted filesystem causing large scale 'deletion' of things within the repository because the client can no longer find it - particularly important for external drives (which may be switched off accidentally, or connected to a laptop and lose power whilst the laptop continues to run), or for backing up network-mounted drives that may become disconnected. (StuartH)

Extending the preceding feature: Permit backing up of remote drives, either mapped to drive letters (like Z:\files) or using UNC paths (\\remotebox\files) (Andrei reports that he is doing this already). First test if the remote filesystem exists at all, and if not, mark objects with a new flag; the new flag protects the missing files for a configurable number of days before deletion; the new flag is removed if the files appear again within that period. (PeteJ 2006-07-20)

Box Backup has a setting (currently only in trunk) for how long to wait before deleting unused root directory entries. You could set this to zero to disable deleting them forever. The following would change the default time of 172800 seconds (2 days) to "never delete".

DeleteRedundantLocationsAfter = 0

Logging Client Bandwidth Use

At the server side, keeping track of amount of traffic generated

Isn't this supported in Box Backup 0.10 already? Found in /var/log/box (in bytes?):

May 16 00:06:11 localhost bbstored[27361]: \
    Certificate CN: BACKUP-10002033
May 16 00:06:40 localhost bbstored[27361]: \
    Connection statistics for BACKUP-10002033: IN=55898 OUT=335383 TOTAL=391281

Does it make sense, and if so, would it be fairly easy to calculate and add the average transfer rate (in KB/s), like:

May 16 00:06:40 localhost bbstored[27361]: \
    Connection statistics for BACKUP-10002033: IN=55898(2) OUT=335383(12) TOTAL=391281
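If added, the rate could be derived from the byte counters and the connection start/end timestamps already present in the log. A minimal sketch in Python (purely illustrative - the `kbps` helper is invented here, and bbstored itself is C++):

```python
def kbps(byte_count, start_time, end_time):
    """Average transfer rate in KB/s over the connection's lifetime."""
    seconds = end_time - start_time
    if seconds <= 0:
        return 0.0
    return byte_count / seconds / 1024.0

# The example connection above ran from 00:06:11 to 00:06:40 (29 seconds):
print(round(kbps(55898, 11, 40)))   # IN: about 2 KB/s
print(round(kbps(335383, 11, 40)))  # OUT: about 11 KB/s
```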

Backup Limits

Don't back up files bigger than X kbytes

IPv6 support

Self-explanatory.

bbackupquery ease-of-use

Readline tab completion for text interface (commands, filenames, options)

Special file support

Handle device files, unix sockets and named pipes. (I can't see why you'd want to back up unix sockets -- Chris).

Account Locking

Allow an account to be marked as "frozen" so that no store changes can be made (but read-only bbackupquery sessions are allowed). This would be useful when recovery etc. is to be done, as it prevents changes to the backup store until recovery is complete. This would be like the current "write lock", but would persist across bbstored restarts until explicitly cleared. (StuartH)

Administration Scaling

To facilitate automation in multi-client environments (submitted by PeteJ):

  • Add some general client-account- and server-health checks to housekeeping (so lazy server admin doesn't have to code and maintain bbstoreaccounts cron jobs). Log good and bad results for easy log grep-ing.
  • Add "all" option to "bbstoreaccounts check".

SNMP Support

Add SNMP trap support to client and server. IANA registration is free, and this would make it much easier to centrally manage/monitor activities/problems/thresholds with clients and servers.

I don't think this belongs in bbackupd or bbstored. I think you can do it with external scripts. Anyone is welcome to write and contribute them, and negotiate which features are required in bbackupd or bbstored to support them. (Chris)


Performance

Work better with large directories (check nasty cases with lots and lots of files)

Work better with files with many different block sizes. Currently the diffing process has to reread the entire file for each block size. Rereading a multi-gigabyte file 128 times for 128 different block sizes frankly sucks. (see ticket #45)

When multiple CPUs are present, diff files in more than one thread

bbstored writes some files extremely often - in particular the files associated with directories holding a lot of files being compared. Watching one for a while: it makes a (similar) copy of the file with the extension ".rfwX", which grows from 0 to the size of the original ".rfw" file, then starts over again. This file is probably rewritten for every file in the client directory. Since we have SVN repositories with several thousand revisions (and therefore entries in the db/revs/ directory), this file is written very often and seems to be a bottleneck.

AIX Support

Support for AIX 5 and HP-UX 11 (I will be happy to do it as well) (Stefan). Someone with access to these platforms needs to port the code and offer to maintain it. See ticket #4 (Chris)

Automatic Compare

Add an automated periodic compare to the client. Maybe a daily quick compare and a weekly late-night full (or nearly so) compare. (PJ) This belongs in cron/bbackupquery, not bbackupd. Maybe install the cron job automatically. (Chris)

Client to Server Logging

Perhaps we could add a protocol command that the client sends to the server to request it to log a message (Chris). For example, indicate "not uploading changes to files" (for "Exceeded storage limits on server" or any other reason) on the server side for admin convenience. This should appear in "bbstoreaccounts check", "bbstoreaccounts info" and /var/log/box, and maybe even in /var/log/messages; use an "error" or "warning" string for easy grep-ing (PJ). We have to be careful about denial of service (log flooding) against the server by the client (Chris). Also, have the client send its Box Backup version number to the server upon logging in, and add the client version number to "bbstoreaccounts info" output (PeteJ).

Lazy and Snapshot Modes

Lazy mode during the day, with a snapshot at the end? Make sure it works well with machines which aren't powered on all the time. (You can already do this with lazy mode + bbackupctl sync.)


Housekeeping Optimisation

Only perform housekeeping if there have been changes since the last housekeeping run that successfully completed (as I'm not sure it's necessary unless the store has actually changed) (StuartH) (it's not necessary - Chris)

Web Interface

A web interface for bbackupquery.

  • I've just begun work on a PHP web interface to bbackupquery. My intention is to have a complete system where users can access all of the files stored on the server, deleted and old versions included. I'm looking for some help on this one - if anyone's interested in helping with this project, contact me: kiall - @t - (Project will be GPL'd)

The start of a web interface for bbstoreaccounts is in contrib/bbadmin.

Improved Verification

Allow for end-to-end server data verification with bbackupquery compare -aq (quick compare), without having to re-download all data (as is the case with bbackupquery compare -a). The feature would require bbstored (server) to read the actual data blocks off its storage media and compare them against checksums both stored locally and sent up from bbackupquery (client). The only undetected failure condition in such a scenario would require both a server data block and a server checksum to be corrupted in such a way as to match a client checksum - the chances of this are beyond astronomical. (Gary)

Age Control

Add options in bbackupd.conf for minage & maxage (only back up new files or only back up older files)

I would like to second this request, but with a twist. I would like a BackupIfNewerThan = (date) variable for each BackupLocation in the client bbackupd.conf. For example, for a client with a very large fileset, they can use other backup tools to take duplicate snapshots to other media (DVDs, tapes, external hard drives) and send them to two off-site locations, and then set BackupIfNewerThan = (snapshot date) and capture all the files that change after that snapshot. (PeteJ 2007-10-19).

Windows file locking

Sometimes Box Backup locks business users out of their larger important files for uncomfortably long periods of time, because the upload to the backup store is (reasonably) slow. To minimize this downtime, perhaps a feature could be added wherein Box Backup first creates a quick local temporary copy of files over a configurable size, then backs up that temporary copy and deletes it immediately afterwards. (PeteJ 2006-09-21)

Inotify support

Use Inotify on Linux to reduce overhead in detecting updated files.

Default Exclusions

Can we add a section to bbackupd.conf called, say, "DefaultExclusions", that would apply to all BackupLocations, which would be overridden by any BackupLocation-specific exclusions listed under the BackupLocations? That way the client-wide exclusions wouldn't have to be typed repetitively and kept in sync separately for each BackupLocation. "Exclusions" here of course implies Includes as well. (PeteJ 2006-12-11)
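A hypothetical sketch of what this could look like, following the block syntax of the proposed example later on this page (the DefaultExclusions block name is invented here, not an existing option):

```
DefaultExclusions {
  ExcludeFilesRegex = \.(tmp|bak)$
  ExcludeDir = /var/spool/squid
}
```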

Backup Time Control

Add a client-side bbackupd.conf option to only perform backups during certain hours of the day. We wanted to change a Windows laptop client from lazy to snapshot mode, but realized we had to create a new client account to do that and, well, we were too lazy to do that. So we're using two scheduled tasks to run "net start boxbackup" and "net stop boxbackup". Maybe something like "BackupHoursWindow = 20:00-04:00". (PeteJ 2007-02-25)

(You can already do this with snapshot mode + kill)
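The existing SyncAllowScript hook can also approximate this in lazy mode. A sketch (the window logic is illustrative, and assumes the documented behaviour that printing a number asks bbackupd to wait that many seconds before trying again, while printing "now" allows the sync):

```python
#!/usr/bin/env python
# Hypothetical SyncAllowScript enforcing a 20:00-04:00 backup window.
import datetime

def seconds_until_window(now, start_hour=20, end_hour=4):
    """Return 0 if `now` is inside the window, else seconds until it opens."""
    inside = now.hour >= start_hour or now.hour < end_hour  # wraps midnight
    if inside:
        return 0
    opens = now.replace(hour=start_hour, minute=0, second=0, microsecond=0)
    return int((opens - now).total_seconds())

if __name__ == "__main__":
    wait = seconds_until_window(datetime.datetime.now())
    print("now" if wait == 0 else wait)
```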

Server-side Statistics

Add a server-side "bbstoreaccounts info" output line indicating the number of blocks and Mb used by only the current versions of client files (not overhead, not directories, not old, not deleted, not old-and-deleted). Note that some files are obviously counted twice in the current info output (PeteJ 2007-02-25):

                 Blocks used: 12178542 (47572.43Mb)
    Blocks used by old files: 9052490 (35361.29Mb)
Blocks used by deleted files: 10209326 (39880.18Mb)

Add a similar client-side "bbackupquery -q usage quit" output line.

Key Checking

On logging in, compare a hash of the key material with a record on the server, to give a better error when you use the wrong key file.
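A sketch of the comparison, assuming a SHA-256 fingerprint (the hash choice and the `key_fingerprint` helper are assumptions, not from the source):

```python
import hashlib

def key_fingerprint(key_bytes):
    """SHA-256 fingerprint of the key material. Safe to record server-side,
    since the hash does not reveal the key itself."""
    return hashlib.sha256(key_bytes).hexdigest()

# On login the client would send key_fingerprint(key material) and the
# server compares it with the recorded value, producing a clear "wrong key
# file" error on mismatch instead of a confusing failure later on.
```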

Default Configuration File

In bbackupd.conf, include all possible configuration values, each with a good descriptive comment (e.g. giving the unit of the value: seconds, bytes, ...). (I believe this is now done by bbackupd-config -- Chris)

In bbackupd.conf, it'd be nice if we could enter these values in minutes or hours: (PeteJ 2007-05-15)

UpdateStoreInterval = 3600
MinimumFileAge = 21600
MaxUploadWait = 86400
MaximumDiffingTime = 600
KeepAliveTime = 630
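One way to do this without breaking existing configs is to accept an optional unit suffix; a hypothetical sketch of the parsing (the suffix syntax is invented here):

```python
def parse_interval(value):
    """Parse '3600', '60m', '6h' or '1d' into seconds (hypothetical syntax).
    A bare number keeps today's meaning: seconds."""
    units = {"s": 1, "m": 60, "h": 3600, "d": 86400}
    value = value.strip()
    if value[-1].lower() in units:
        return int(value[:-1]) * units[value[-1].lower()]
    return int(value)

# MinimumFileAge = 6h would then mean 21600 seconds, exactly as today.
print(parse_interval("6h"))
```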

In bbackupd.conf, it would be nice if the various application file locations could use variables, for easier editing. Maybe a drive (for Windows) or device variable, and a path variable. For example, on Windows:

KeysFile = D:\Program Files\Box Backup\10004015-FileEncKeys.raw

could be changed to something like:

BBDRIVE = D:
BBPATH = Program Files\Box Backup
KeysFile = %BBDRIVE%\%BBPATH%\10004015-FileEncKeys.raw

Here's a list of application files that could take advantage of this feature:

KeysFile = D:\Program Files\Box Backup\10004015-FileEncKeys.raw
CertificateFile = D:\Program Files\Box Backup\10004015-cert.pem
PrivateKeyFile = D:\Program Files\Box Backup\10004015-key.pem
TrustedCAsFile = D:\Program Files\Box Backup\serverCA.pem
DataDirectory = D:\Program Files\Box Backup\bbackupd
# NotifyScript requires cygwin at the moment...
NotifyScript = D:\Program Files\Box Backup\
SyncAllowScript = D:\Program Files\Box Backup\SyncAllowScript.bat
StoreObjectInfoFile = D:\Program Files\Box Backup\bbackupd\bbackupd.dat
PidFile = D:\Program Files\Box Backup\bbackupd\
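A sketch of the %NAME% substitution the request implies (the token syntax follows the example above; the `expand` helper is hypothetical):

```python
import re

def expand(value, variables):
    """Replace %NAME% tokens in a config value (hypothetical syntax)."""
    return re.sub(r"%([A-Za-z0-9_]+)%",
                  lambda m: variables[m.group(1)], value)

settings = {"BBDRIVE": "D:", "BBPATH": r"\Program Files\Box Backup"}
print(expand(r"%BBDRIVE%%BBPATH%\10004015-FileEncKeys.raw", settings))
# D:\Program Files\Box Backup\10004015-FileEncKeys.raw
```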

Similarly, it'd be nice to be able to use the same long list of exclusions in multiple BackupLocations using a variable of some sort (or maybe even reading from an external file, or, egads, maybe even downloading from a web service of some sort - a variant of my 2006-12-11 feature request above). (bbackupd.conf doesn't need to become a programming language. Use m4/perl/python -- Chris)

Redundant Servers

In bbackupd.conf, it might help make for easy geographic redundancy by being able to enter 2 or more StoreHostname records, with the client either backing up to both of them each cycle, or maybe even just alternating between them if that is acceptable to the user. (PeteJ 2007-05-16)

It's important to note that in the default configuration, our backups are as many as 24 hours old anyway, with 24 being a fairly arbitrary number.

To get first-level protection similar to the current single StoreHostname, the user would need to speed up one or more of the bbackupd.conf timers, perhaps increasing the average client-side outbound network load. Note that as the number of clients and the storage on a server increase, it becomes a quite large outbound bandwidth load on the server to rsync all those gigabytes across the country (especially when a new large client is added), so it may be preferable in many cases just to have the client suffer the extra outbound load.

See thread. As of that writing, apparently 2 client services can run concurrently on a single machine using 2 different bbackupd.conf files; I'll need to test that on Windows. (it is supported on Windows -- Chris)

Test Mode

Add a test mode to the client. Maybe run through everything, but don't send files to the server. (PeteJ 2007-05-17)

Timeouts Exceeded

Does the current version log when MaximumDiffingTime and KeepAliveTime times are exceeded? Would help find troublesome files. (PeteJ 2007-05-17) (I believe it does -- Chris)

ID to Name Mapping

Provide a way to reverse map object IDs on the store to local filenames on the client, wherever they may be (Johann Glaser 2007-08-29)

Client Wildcards

  • Please add the feature to use "list" with filenames (e.g. list -dots *.txt) and with subdirectories (list -dots db/) and combined (list -dots */db/*.txt). (Johann Glaser 2007-08-29)
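Standard shell-style wildcard matching would cover these cases; a sketch of how a listing could be filtered (`match_listing` is a hypothetical helper, not a bbackupquery function):

```python
import fnmatch

def match_listing(names, pattern):
    """Filter a directory listing the way `list -dots *.txt` might."""
    return [n for n in names if fnmatch.fnmatch(n, pattern)]

print(match_listing(["notes.txt", "db", "image.png"], "*.txt"))
# ['notes.txt']
```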

Log Statistics

The current syslog information from bbackupd about file statistics is only understandable by insiders.

bbackupd20269: File statistics: total file size uploaded 53958832, bytes already on server 42146186, encoded size 7533195

Please change the output to state the following items:

  • total file count (on the client)
  • total file size (on the client)
  • number of files already on server
  • number of new files
  • number of deleted files
  • number of files which have changed
  • split up by how the changes were detected (date, size, checksum, ?)
  • total size of changes (only the differences) and new files
  • total file size uploaded (is the same?)
  • bytes already on server
  • encoded size

Backup order, pre- and post-scripts, specialize for sub-directories

Offer directives to specify files/directories in a user-defined order, which are then backed up in that order.

The first issue for that is: What to do with files which are not captured by any of the regexps?

The second issue is what to do with files which are captured multiple times - they should be backed up only once. The "best" way would be to use the least-specific directive; e.g. in the example below, "./current" and "./lastfile" are both also captured by "[.]/.*", but should be excluded from it.

For easier configuration I propose the following additional directive which specifies the order within one subdirectory.

  BackupFilesOrder = (reverse) (any|sorted|alphabetical|numeric|timestamp|size)
  • "any" uses the files as they come from the filesystem
  • "sorted" should be a normal ASCII-code sort
  • "alphabetical" should consider the locale to sort
  • "numeric" sorts numbers by their value, e.g. "2" is before "10"
  • "timestamp" sorts the files by their mtime
  • "size" sorts the files by their size
  • "reverse" reverses the order

To specify whether files or directories should be mixed or separated, I propose the following directive:

  BackupSubdirs = (mixed|first|last)
  • "mixed" considers them in the order coming from the filesystem
  • "first" backs up subdirectories before the files in the directory
  • "last" backs up the subdirectories after all files in the directory
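A sketch of how two of the proposed orderings could behave (the `order_files` helper and its semantics are taken from the directive descriptions above, not from existing code):

```python
import re

def order_files(names, mode, reverse=False):
    """Sketch of the proposed BackupFilesOrder directive (hypothetical)."""
    if mode == "sorted":       # plain ASCII-code sort
        names = sorted(names)
    elif mode == "numeric":    # '2' sorts before '10'
        names = sorted(names, key=lambda n: [int(t) if t.isdigit() else t
                                             for t in re.split(r"(\d+)", n)])
    return list(reversed(names)) if reverse else list(names)

print(order_files(["10", "2", "1"], "numeric"))                # ['1', '2', '10']
print(order_files(["10", "2", "1"], "numeric", reverse=True))  # ['10', '2', '1']
```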

It would be even better if these directives could be applied to specific subdirectories and groups of files, e.g.

BackupLocations {
  data {
    Path = /data
    ExcludeDir = /data/mysql
    Subdirectory {
      PathRegex = /data/svn/repositories/[a-zA-Z0-9]+/db/
      BackupSubdirs = last
      BackupFilesOrder = sorted
      PreScript = /backup/scripts/bb-svn-full-pre
      Subdirectory {
        Path = ./current
        Priority = 100
      }
      Subdirectory {
        PathRegex = [.]/.*
        Priority = 300
        Subdirectory {
          PathRegex = [.]/(revs|revprops)/.*
          BackupFilesOrder = reverse numeric
        }
      }
      Subdirectory {
        Path = ./lastfile
        Priority = 500
      }
    }
  }
}
I don't like the "Subdirectory" directive because it should also be applicable to files or a list of files, but I couldn't think of another one.

This is unbelievably complex and unimplementable. Box Backup doesn't need to do this. We should have an interface to an external program that can tell us whether or not to back up a particular file, and then you can implement whatever policy you like. (Chris)

Configuration File/Installation Issues

Please consider renaming bbackupd.conf in the installation parcels to, perhaps, bbackupd.template.conf, so that an already-present one is not overwritten when upgrading. (Technically, maybe it shouldn't be called bbackupd.conf anyway, because it is not, and maybe cannot ever be, a valid config file.) I frequently upgrade several different clients, and have to handle this one file specially to protect it each time. This concept might apply to the server side, too. (PeteJ 2007-10-17) (it's not hard to make your own parcels - chris)


Please consider changing the default timers in bbackupd.conf. I think they are too aggressive, wasting bandwidth and storage. I think they should be set to be more "like tape", so we get closer to just one copy of each file each day. (PeteJ 2007-10-19) which timers and what values would you suggest? (chris)

Key Changes

Inspired by this month's Debian Debacle (PeteJ 2008-05-30):

  • Provide a mechanism for the client to optionally download new keys and/or new software from the trusted server (download files, stop service, remove service, backup old files, unpack new files, install service, check sanity, start service). (this can be done as an external script, someone needs to write it -- chris)
  • On the server, provide a configurable delay between client login attempts, to slow down brute-force attacks.
  • It would be fun to add a simple port-knocking feature for additional obfuscation (KNOCK=22221,22211,22222). (this can be done as a simple daemon + total account lockout feature)

OpenBSD Port

Make an OpenBSD "port" for easy installation, when the system is stable enough for general use. (it is now - chris)

Bandwidth Throttling

Bandwidth throttling with timed limits (low priority - your OS or router should be able to do this for you)

1. Symbolic Names for Hosts

In addition to using a hex number to identify backup clients, a mapping must be put in place between the account number and a name the user (backup admin) determines. This name can be up to 255 characters long (RFC 1034).

For backwards compatibility, account numbers must still be accessible and usable just as before. If needed, command line switches can be used to denote one or the other. Additionally, for management purposes, it should be possible to explicitly set the account number (as well as the symbolic name) during the creation of a new client.

While the name can be arbitrary, the installation process should attempt to determine the full domain name of the host, and use that value as the default.

The bbackupd installation process should be able to use both the symbolic name and the account ID. Certificate files, etc. will be generated using the symbolic name, if available. Otherwise it will fall back to the account ID.

2. Client Groups

To support distinguishing between groups of boxes being backed up, the concept of client groups should be implemented. A group is a collection of clients, and no group can be a member of another group.

A group has a 'group administrator' associated with it. Messages related to group member accounts that would go to the backup administrator if there were no groups will also go to the group administrator.

The 'bbstoreaccounts' executable will add functionality to manage groups, including getting statistics from a group. The statistics will be the cumulative values of the same data as for a single client, with the addition of the following:

  • List of group members
  • ?

Note that if a client is a member of multiple groups, its statistical data will be counted in each group.

Code and configuration should be implemented to support optional quotas on groups. Each group will have hard and soft limits, much like client accounts.

I'm not sure what the consequences of exceeding a group quota should be? Aggressive housekeeping on all members, when a client hits the group ceiling? Something else?

3. Client Monitoring

The client should be able to send 'heart beat' messages to the bbstored server.

Configuration information for heart beat is kept on the bbstored server. It includes:

For each client:

  • on/off switch. Clients can be monitored, but are not required to be. This is especially useful for mobile users, who are not connected to the internet all the time.
  • Heart beat interval. How often the client sends heart beat information. Given in seconds. Default is 900.

Heart beat messages could be transmitted whenever a client connects to the server for backing up, rather than on a separate schedule. Snapshot backups should transmit the data as well, to be able to track when the last backup was made. It would be preferable if the interval were a separate number as described in the previous paragraph; this would give more consistent data about clients that use snapshot backups or have long backup intervals - often done (at least by me) to avoid the client machine becoming sluggish every hour when the disk is scanned for eligible files.

Heart beat messages will not interrupt long-running syncs or restores (large files), but will be sent as close to the scheduled interval as possible, to ensure that as few false error alerts as possible go out to administrators.

When bbackupd starts, it will register with the bbstored server, and request its configuration information. It will use this to send the messages at the appointed times.

Also, bbstored will create a record of the now running bbackupd (in memory, mmap, or whatever works best), to hold the data for the statistics, as well as to ensure that only clients that have registered are being monitored. Snapshot backups will not register, but rather data will be kept about the timestamps, etc. of the last backup.

When a bbackupd daemon completes an orderly shutdown, it will 'de-register' itself from the bbstored service, to ensure that no false 'down alerts' go out. However, if bbackupd dies as a result of some failure, the record on the bbstored server will remain, and eventually cause alerts to go out to the backup administrator, and the group owner for a given group.

The heart beat packet contains the following information:

  • Host identifier (name and/or account number)
  • bbackupd version number
  • backup type for last backup performed (lazy/snapshot)
  • uptime (ie. how long has bbackupd been running on this host)
  • time stamp of last connection (not necessarily any files uploaded)
  • timestamps of last sync (when was the last file uploaded)
  • Number of bytes synced since last heart beat message
  • Number of bytes restored since last heart beat message
  • any significant errors that have occurred since last heart beat.
  • ?

On the server side, a daemon (most likely bbstored) receives these heart beat messages, and keeps track of the status of all clients. It will keep a running counter of the byte-count statistics for the client, as well as a log of the significant errors.

When a bbackupd client daemon dies unexpectedly, the bbstored server will notice that there is no heart beat message from the client after approximately 2 x the heart beat interval. It will then notify backup administrators using the existing notification mechanism, or one very much like it. This mechanism should support notification of a 'group owner', for clients that are in a group.
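The detection rule (no heart beat for roughly twice the interval) is simple to express. A sketch, with invented client names and the proposed 900-second default interval:

```python
import time

def overdue_clients(last_beat, intervals, now=None):
    """Clients whose last heart beat is older than 2x their interval --
    the detection rule described above. A sketch, not bbstored code."""
    now = time.time() if now is None else now
    return [c for c, t in last_beat.items()
            if now - t > 2 * intervals.get(c, 900)]  # default interval 900s

beats = {"BACKUP-10002033": 1000.0, "BACKUP-10002034": 2900.0}
print(overdue_clients(beats, {"BACKUP-10002033": 900}, now=3000.0))
# ['BACKUP-10002033']
```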

When a significant error occurs, and is logged with the server, a similar notification mechanism will be used to notify the backup administrator.

Optionally, the statistics information can be stored in a database for billing/auditing purposes.

A utility (possibly an updated bbstoreaccounts) will be needed to display this information in ways that will be useful to administrators. For individual accounts this information could include:

  • time/date of last successful sync
  • duration of last successful sync
  • ???

All this can be done on the server by an external process monitoring logs (Chris)

4. Space Use Reporting

Reporting of space consumption is needed at several levels:

  • The entire bbstored server (all RAID volumes being used for backups).
  • Each Volume. Ensuring that one single volume isn't bearing the brunt of the load, as well as for planning purposes.
  • By Group. This relates to item #2 in my list. It has very similar reporting requirements to the individual client, with the same additions as described in the Group section.
  • By Individual. This is already available in the current version.

5. Account Database

The ability to store the client account information in a database is crucial to the stability and scalability of the system. Change from the current text files to a database.

Implement support for storing the client account information for multiple Box servers in one database.

6. Interaction With the Rest of the World

Interfacing in an easy way with other systems for monitoring and reporting purposes. In addition to nicely formatted output, every command should have an option to format its output for human or for script consumption. This data could then be used by products like Nagios.

Already done for bbstoreaccounts and bbackupquery usage. Where else is it needed? (Chris)

7. Account Migration Tools

It should be possible to move a client account from one Box server to another.

When the move is complete (not before), the old bbstored server should either redirect (preferred) or proxy the requests to the 'new' server, so the client can continue operations unaffected by the change.

rsync + dns + account lockout (chris)

8. Server Redundancy (grabbed from message by Ben on 9/24/04)

Design objectives

  • Failure means the server cannot be contacted by the client. If a server can be contacted by another server but not by the client, then that server must still be considered down.
  • No central server. The objective above means the server choice must be made by the client.
  • A misbehaving client should not cause the stores to lose synchronisation.
  • Assume that all servers have the same amount of disc space, and identical disc configuration.
  • Allow choice of primary and secondary on a per-account basis.
  • Any connection can be dropped at any time, and the stores should be left in a workable, if non-optimal, state.
  • As simple as possible. Avoid keeping large amounts of state about the accounts on another server.

8.1 Server Groups

The client store marker is defined to change at the end of every sync from the client (if and only if data changed), and should increase each time the store is updated. This allows the servers in a group to determine easily whether they are in sync, and which holds the latest version.

Stores are grouped. Each server is a peer within the group.

On login, the server returns a list of all other servers in the group. The client records this list on disc.

When the client needs to obtain a connection to a store, it uses the following algorithm:

Let S = last server successfully connected
Let P = primary server
Do {
    Attempt to connect to S
    If(S == P and S is not connected)
        Try connecting to P again
} While(S is not connected and not all servers have been tried)

If(S is not connected)
    Start process again

Let CSM_S = client store marker from S

If(S != P)
    Attempt to connect to P again, but with a short timeout this time
    If(P is connected)
        Let CSM_P = client store marker from P
        If(CSM_P == expected client store marker)
            Disconnect S
            S = P
        Else
            Disconnect P

This algorithm ensures that the client prefers to connect to the primary server, but will keep talking to the secondary server for as long as it's available and is at a later state than the primary store. (This gives time for the data to be transferred from the secondary to the primary, and avoids repeat uploads of data.)

Servers within a group use the fast sync protocol to update accounts on a regular basis.

8.2 Observations

The servers are simply peers. The primary server for an account is chosen merely by configuring the client.

If the servers simply use best efforts to keep each other up to date, the client will automatically choose the best server to contact.

Using the existing methods of handling unexpected changes to the client store marker, it doesn't matter whether a server is out of date or not. The existing code handles this occurrence perfectly.

The servers do not need to check whether other servers are down. That fact is actually irrelevant, because it's the client's view of availability which is important.

8.3 Accounts

The accounts database must be identical on each machine. bbstoreaccounts will need to push changes to all servers. It will probably be necessary to change the account database to store the limits within the database itself, rather than in the stores of data. This is desirable anyway.

Note: If another server is down, it won't be possible to update the account database.

Alternatively, servers could update each other with changes to the accounts database on a lazy basis. This might cause issues with housekeeping unnecessarily deleting files which have to be

8.4 Fast Sync Protocol

Compare client store markers. End if they are the same. Otherwise, the server with the greater number becomes the source, and the lesser the target.

Zero client store marker on target.

Send stream of deleted (by housekeeping) object IDs from source to target. Target deletes the objects immediately.

Send stream of object ID + hash of directories on source server to the target.

For each directory on the target server which doesn't exist, or doesn't have the right hash...

  • check objects exist, and transfer them
  • write directory, only if all the objects are correct
  • check for patches. Attempt to transfer by patch if new version exists

Each server records the client store marker it expects on the remote server. If that marker is not as expected, then the contents of the directories are checked as well, sending MD5 hashes across. This allows recovery from partial syncs. [This should probably be optimised for the case where there's an empty store at one end.]

When an object is uploaded, the "last object ID used" value for that account should be kept within the acceptable range to allow recovery when syncing with the client.

Write new client store marker on target.

If a client connects during a fast sync, then that fast sync will be aborted to give the client the lock on the account.
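The directory-comparison step of the protocol can be sketched as below. This is a simplified in-memory model under stated assumptions: the real stores would stream listings and verify objects before writing directories, and the `dir_hash`/`fast_sync` helpers are hypothetical names, not Box Backup functions.

```python
import hashlib


def dir_hash(entries):
    """Hash a directory listing so two stores can compare it cheaply.

    Sorting makes the hash independent of listing order.
    """
    h = hashlib.md5()
    for object_id, content in sorted(entries.items()):
        h.update(f"{object_id}:{content}".encode())
    return h.hexdigest()


def fast_sync(source, target):
    """Copy directories whose hash differs from source to target.

    `source` and `target` map directory id -> {object_id: content}.
    Returns the ids of the directories that were transferred.
    """
    transferred = []
    for dir_id, entries in source.items():
        if dir_id not in target or dir_hash(target[dir_id]) != dir_hash(entries):
            # in the real protocol: check the objects exist, transfer
            # them, and write the directory only once they are correct
            target[dir_id] = dict(entries)
            transferred.append(dir_id)
    return transferred
```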

8.5 Optimised Fast Sync

It's undesirable for the fast sync to check every directory when it doesn't have to. During a sync with a client, the store:

  • Keeps a list of changed directories by writing to disc (and flushing) every time a directory is saved back to disc.
  • Keeps patches from previous versions to send to the remote store.
  • Connects to the remote stores after the backup, and uses the fast sync to send the changes over.

This will allow short-cuts to be taken when syncing, and changes sent by patch.

The cache of patches will need to be managed, deleting them when they are transferred to a peer or are too old.

8.6 Housekeeping

Deleted objects need to be kept in sync too. Housekeeping takes place independently on each server. Since housekeeping is a deterministic process, this should not delete different files on different servers.

A list of deleted objects is kept on each server during the housekeeping process.

In the unlikely event that a server deletes an object that the source server doesn't, this object will be retrieved in the next fast sync. This is unlikely to happen because clients only add data.

In practice, housekeeping on non-primary servers should never need to delete an object in that account.
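Why determinism keeps independent servers in step can be shown with a small sketch. The oldest-first policy and the helper below are assumptions for illustration; Box Backup's real housekeeping applies its own rules, but the same principle holds for any deterministic ordering.

```python
def housekeeping_deletions(objects, size_limit):
    """Pick objects to delete until the account fits its size limit.

    `objects` maps object id -> (age, size).  Because candidates are
    sorted by (-age, id), the choice is deterministic: two servers
    holding identical stores will delete exactly the same objects.
    """
    total = sum(size for _, size in objects.values())
    deleted = []
    for obj_id, (age, size) in sorted(
            objects.items(), key=lambda kv: (-kv[1][0], kv[0])):
        if total <= size_limit:     # account now fits: stop deleting
            break
        total -= size
        deleted.append(obj_id)      # oldest objects go first
    return deleted
```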

9 Pseudo-Clustering of Servers (from Ben on 9/27/04)

It has just occurred to me that using the built-in software RAID, a limited form of redundant servers could be created. Someone suggested this on the list a while back, and I've only just realised the implications.

All you need are three identical servers. On each server, compose the RAID file sets from the local hard drives and the two hard drives from the other servers (mount the discs using NFS or something.)

Run the bbstored daemon on each, and use round-robin DNS with a low TTL to send clients to different machines.

It should then "just work". If any machine goes down, then the software RAID will kick in and no-one will notice, apart from the administrator who will notice the log messages.

The changes required are:

  • Add communications between bbstored servers so that a client can log in even if another server is housekeeping that account.
  • Account database syncing between servers.
  • Raid file disc set restoration tools need to be written (this is still currently lacking -- right now you have to move the existing files away in case they're needed, then blank every account and wait until the clients have uploaded everything again.)
  • Efficiency: write the raidfile daemon to offload RAID work, and write the temporary files to the local filesystem only.

The advantage over the previous plan is that most of the work is already done -- none of the above is a particularly significant amount of effort. The disadvantage is that it limits clusters to three machines which are connected to each other with fast network connections. However, it is a rather neat and simple solution.

10. No SSL/TLS on the Wire

It should be an option to turn off SSL/TLS after the initial handshake, to lower the overhead of the protocol.
