This repository has been archived by the owner on Jul 18, 2024. It is now read-only.

Commit

Doc updates

alfpark committed Jun 3, 2017
1 parent 29c2f78 commit bb914fa
Showing 4 changed files with 16 additions and 17 deletions.
README.md (6 changes: 3 additions & 3 deletions)
@@ -25,14 +25,14 @@ from Azure Blob and File Storage
* `replica` mode allows replication of a file across multiple destinations
including to multiple storage accounts
* Client-side encryption support
- * Support all blob types for both upload and download
+ * Support all Azure Blob types and Azure Files for both upload and download
* Advanced skip options for rsync-like operations
* Store/restore POSIX filemode and uid/gid
* Support for reading/pipe from `stdin`
* Support for reading from blob snapshots
* Configurable one-shot block upload support
* Configurable chunk size for both upload and download
- * Automatic block blob size adjustment for uploading
+ * Automatic block size selection for block blob uploading
* Automatic uploading of VHD/VHDX files as page blobs
* Include and exclude filtering support
* Rsync-like delete support
@@ -46,7 +46,7 @@ the [installation guide](https://github.com/Azure/blobxfer/blob/master/docs/01-i
on how to install `blobxfer`.

## Documentation
- Please refer to the [blobxfer Documentation](https://github.com/Azure/blobxfer/blob/master/docs)
+ Please refer to the [`blobxfer` documentation](https://github.com/Azure/blobxfer/blob/master/docs)
for more details and usage information.

## Change Log
docs/10-cli-usage.md (15 changes: 7 additions & 8 deletions)
@@ -69,9 +69,6 @@ recursively uploaded or downloaded.
* `--remote-path` is the remote Azure path. This path must contain the
Blob container or File share at the beginning, e.g., `mycontainer/vdir`
* `--resume-file` specifies the resume file to write to.
- * `--storage-account` specifies the storage account to use. This can be
- optionally provided through an environment variable `BLOBXFER_STORAGE_ACCOUNT`
- instead.
* `--timeout` is the integral timeout value in seconds to use.
* `-h` or `--help` can be passed at every command level to receive context
sensitive help.
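
For illustration, a download that combines several of these general options might look like the following sketch; the account name, SAS token, and paths are placeholders, `--resume-file` names a local file to record resume state, and `--timeout` is the integral timeout in seconds described above.

```shell
blobxfer download --storage-account mystorageaccount --sas "mysastoken" --remote-path mycontainer/vdir --local-path /tmp/mydownload --resume-file /tmp/blobxfer-resume.db --timeout 30
```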
@@ -101,13 +98,15 @@ to/from Azure Storage.
### Connection
* `--endpoint` is the Azure Storage endpoint to connect to; the default is
Azure Public regions, or `core.windows.net`.
- * `--storage-account` is the storage account to connect to.
+ * `--storage-account` specifies the storage account to use. This can be
+ optionally provided through an environment variable `BLOBXFER_STORAGE_ACCOUNT`
+ instead.

### Encryption
* `--rsa-private-key` is the RSA private key in PEM format to use. This can
be provided for uploads but must be specified to decrypt encrypted remote
- entities. This can be optionally provided through an environment variable
- `BLOBXFER_RSA_PRIVATE_KEY`.
+ entities for downloads. This can be optionally provided through an environment
+ variable `BLOBXFER_RSA_PRIVATE_KEY`.
* `--rsa-private-key-passphrase` is the RSA private key passphrase. This can
be optionally provided through an environment variable
`BLOBXFER_RSA_PRIVATE_KEY_PASSPHRASE`.
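
As a hedged sketch of the environment-variable alternatives noted above, the storage account and RSA private key settings can be supplied through `BLOBXFER_STORAGE_ACCOUNT`, `BLOBXFER_RSA_PRIVATE_KEY`, and `BLOBXFER_RSA_PRIVATE_KEY_PASSPHRASE` instead of their command-line options; all values below are placeholders.

```shell
# placeholder values; these environment variables substitute for the
# --storage-account, --rsa-private-key and --rsa-private-key-passphrase options
export BLOBXFER_STORAGE_ACCOUNT=mystorageaccount
export BLOBXFER_RSA_PRIVATE_KEY=~/myprivatekey.pem
export BLOBXFER_RSA_PRIVATE_KEY_PASSPHRASE='mypassphrase'
# download and decrypt encrypted remote entities without repeating those options
blobxfer download --sas "mysastoken" --remote-path mycontainer --local-path .
```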
@@ -166,7 +165,7 @@ file path. The default is `1`.
### `download` Examples
#### Download an Entire Encrypted Blob Container to Current Working Directory
```shell
- blobxfer download --storage-account mystorageaccount --sas "mysastoken" --remote-path mycontainer --local-path . --rsa-public-key ~/mypubkey.pem
+ blobxfer download --storage-account mystorageaccount --sas "mysastoken" --remote-path mycontainer --local-path . --rsa-private-key ~/myprivatekey.pem
```

#### Download an Entire File Share to Designated Path and Skip On Filesize Matches
@@ -197,7 +196,7 @@ blobxfer download --config myconfig.yaml
### `upload` Examples
#### Upload Current Working Directory as Encrypted Block Blobs Non-recursively
```shell
- blobxfer upload --storage-account mystorageaccount --sas "mysastoken" --remote-path mycontainer --local-path . --rsa-private-key ~/myprivatekey.pem --no-recursive
+ blobxfer upload --storage-account mystorageaccount --sas "mysastoken" --remote-path mycontainer --local-path . --no-recursive --rsa-public-key ~/mypubkey.pem
```

#### Upload Specific Path Recursively to a File Share, Store File MD5 and POSIX File Attributes to a File Share and Exclude Some Files
docs/30-vectored-io.md (8 changes: 4 additions & 4 deletions)
@@ -18,7 +18,7 @@ how to download from multiple sources.

The logic is fairly simple in how this is accomplished. Each source file
has portions of the file read from disk, buffered in memory and then
- replicated across multiple storage accounts.
+ replicated across all specified destinations.

```
Whole File +---------------------+
@@ -51,9 +51,9 @@ configuration file to define multiple destinations.
### Stripe
`stripe` mode will splice a file into multiple chunks and scatter these
chunks across the destinations specified. These destinations can be
- different containers within the same storage account or even containers
- distributed across multiple storage accounts if single storage account
- bandwidth limits are insufficient.
+ a single or multiple containers within the same storage account or even
+ containers distributed across multiple storage accounts if single storage
+ account bandwidth limits are insufficient.

`blobxfer` will slice the source file into multiple chunks where the
`stripe_chunk_size_bytes` is the stripe width of each chunk. This parameter
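
To make the striping arithmetic above concrete, a hypothetical 4 GiB source file with a 1 GiB `stripe_chunk_size_bytes` would be sliced into 4 chunks (ceiling division) that are then scattered across the specified destinations; the sizes are illustrative only.

```shell
# illustrative only: chunk count for a given file size and stripe width
file_size=$((4 * 1024 * 1024 * 1024))
stripe_chunk_size_bytes=$((1024 * 1024 * 1024))
echo $(( (file_size + stripe_chunk_size_bytes - 1) / stripe_chunk_size_bytes ))  # prints 4
```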
docs/98-performance-considerations.md (4 changes: 2 additions & 2 deletions)
@@ -11,8 +11,8 @@ File shares and Storage Account types (GRS, LRS, ZRS, etc).
maximum performance according to your system and network characteristics.
* Disk threads: concurrency in reading (uploads) and writing (downloads) to
disk is controlled by the number of disk threads.
- * Transfer threads: concurrency in the number of threads from/to Azure
- Storage is controlled by the number of transfer threads.
+ * Transfer threads: concurrency in the number of threads transferring
+ from/to Azure Storage is controlled by the number of transfer threads.
* MD5 processes: MD5 computation, used to potentially omit transfers when
`skip_on` `md5_match` is specified, is offloaded to the specified number
of processors.
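
As a hedged tuning example, the sketch below assumes the disk thread, transfer thread, and MD5 process counts are exposed as `--disk-threads`, `--transfer-threads`, and `--md5-processes` options; verify the exact option names with `blobxfer upload --help` for your version, and treat the counts as arbitrary starting points to benchmark against your own system.

```shell
# assumed option names; confirm with `blobxfer upload --help`
blobxfer upload --storage-account mystorageaccount --sas "mysastoken" --remote-path mycontainer --local-path /data --disk-threads 8 --transfer-threads 16 --md5-processes 4
```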
