This repository was archived by the owner on Jul 18, 2024. It is now read-only.
Merged
13 changes: 3 additions & 10 deletions .travis.yml
Original file line number Diff line number Diff line change
@@ -2,21 +2,14 @@ language: python
cache: pip
python:
- 2.7
- 3.3
- 3.4
- 3.5
- 3.6
install:
- travis_retry pip install --upgrade pip
- travis_retry pip install --upgrade setuptools wheel coveralls
- travis_retry pip install --upgrade setuptools wheel
- travis_retry pip install --upgrade coveralls tox-travis
script:
- |
if [[ $TRAVIS_PYTHON_VERSION == '2.7' ]] || [[ $TRAVIS_PYTHON_VERSION == '3.3' ]] || [[ $TRAVIS_PYTHON_VERSION == '3.4' ]]; then
travis_retry pip install tox
tox -e $(echo py$TRAVIS_PYTHON_VERSION | tr -d .)
else
travis_retry pip install tox-travis
tox
fi
- tox
after_success:
- coveralls --rcfile=.coveragerc --verbose
24 changes: 23 additions & 1 deletion CHANGELOG.md
@@ -2,6 +2,27 @@

## [Unreleased]

## [1.0.0b1] - 2017-08-28
### Added
- Cross-mode synchronous copy support
- Duplicate detection (different local source paths mapping to the same
destination) on upload

### Changed
- Python 3.3 is no longer supported (due to `cryptography` dropping support
for 3.3).
- `--strip-components` now defaults to `0`
- `timeout_sec` YAML property is now named `timeout` and is a complex property
  comprising `connect` and `read` values expressed in seconds
- Test coverage improved
- Dependencies updated to latest

### Fixed
- Properly merge CLI options with YAML config options. You can now override
most YAML config settings with CLI options at runtime.
- Issue with zero-byte uploads
- Check for max page blob size

## [1.0.0a5] - 2017-06-09
### Added
- Synchronous copy support with the `synccopy` command. This command supports
@@ -210,7 +231,8 @@ usage documentation carefully when upgrading from 0.12.1.
`--no-skiponmatch`.
- 0.8.2: performance regression fixes

[Unreleased]: https://github.com/Azure/blobxfer/compare/1.0.0a5...HEAD
[Unreleased]: https://github.com/Azure/blobxfer/compare/1.0.0b1...HEAD
[1.0.0b1]: https://github.com/Azure/blobxfer/compare/1.0.0a5...1.0.0b1
[1.0.0a5]: https://github.com/Azure/blobxfer/compare/1.0.0a4...1.0.0a5
[1.0.0a4]: https://github.com/Azure/blobxfer/compare/0.12.1...1.0.0a4
[0.12.1]: https://github.com/Azure/blobxfer/compare/0.12.0...0.12.1
8 changes: 8 additions & 0 deletions CODE_OF_CONDUCT.md
@@ -0,0 +1,8 @@
# Code of Conduct

This project has adopted the
[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the
[Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [<opencode@microsoft.com>](mailto:opencode@microsoft.com) with any
additional questions or comments.
10 changes: 1 addition & 9 deletions CONTRIBUTING.md
@@ -1,12 +1,4 @@
Contributing Code
-----------------

This project has adopted the
[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the
[Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any
additional questions or comments.
# Contributing

If you would like to contribute to this project, please view the
[Microsoft Contribution guidelines](https://azure.github.io/guidelines/).
16 changes: 6 additions & 10 deletions README.md
@@ -24,8 +24,8 @@ from Azure Blob and File Storage
throughput limits
* `replica` mode allows replication of a file across multiple destinations
including to multiple storage accounts
* Synchronous copy with replication support (including block-level copies
for Block blobs)
* Synchronous copy with cross-mode replication support (including block-level
copies for Block blobs)
* Client-side encryption support
* Support all Azure Blob types and Azure Files for both upload and download
* Advanced skip options for rsync-like operations
@@ -39,6 +39,7 @@ for Block blobs)
* Include and exclude filtering support
* Rsync-like delete support
* No clobber support in either direction
* Automatic content type tagging
* File logging support

## Installation
@@ -56,11 +57,6 @@ For recent changes, please refer to the
[CHANGELOG.md](https://github.com/Azure/blobxfer/blob/master/CHANGELOG.md)
file.

------------------------------------------------------------------------

This project has adopted the
[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the
[Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [<opencode@microsoft.com>](mailto:opencode@microsoft.com) with any
additional questions or comments.
* * *
Please see this project's [Code of Conduct](CODE_OF_CONDUCT.md) and
[Contributing](CONTRIBUTING.md) guidelines.
9 changes: 3 additions & 6 deletions blobxfer/__init__.py
@@ -30,15 +30,12 @@
azure.storage._constants.USER_AGENT_STRING = 'blobxfer/{} {}'.format(
__version__, azure.storage._constants.USER_AGENT_STRING)

# monkeypatch SOCKET_TIMEOUT value in Azure Storage SDK
azure.storage._constants.SOCKET_TIMEOUT = (5, 300)

# set stdin source
if sys.version_info >= (3, 0):
if sys.version_info >= (3, 0): # noqa
STDIN = sys.stdin.buffer
else:
else: # noqa
# set stdin to binary mode on Windows
if sys.platform == 'win32': # noqa
if sys.platform == 'win32':
import msvcrt
import os
msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
17 changes: 8 additions & 9 deletions blobxfer/models/download.py
@@ -191,7 +191,7 @@ class Descriptor(object):
_AES_BLOCKSIZE = blobxfer.models.crypto.AES256_BLOCKSIZE_BYTES

def __init__(self, lpath, ase, options, resume_mgr):
# type: (Descriptior, pathlib.Path,
# type: (Descriptor, pathlib.Path,
# blobxfer.models.azure.StorageEntity,
# blobxfer.models.options.Download,
# blobxfer.operations.resume.DownloadResumeManager) -> None
@@ -321,10 +321,10 @@ def compute_allocated_size(size, is_encrypted):
size //
blobxfer.models.download.Descriptor._AES_BLOCKSIZE - 1
) * blobxfer.models.download.Descriptor._AES_BLOCKSIZE
if allocatesize < 0:
raise RuntimeError('allocatesize is negative')
else:
allocatesize = size
if allocatesize < 0:
allocatesize = 0
else:
allocatesize = 0
return allocatesize
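The hunk above clamps a negative allocated size to zero instead of raising. A minimal standalone sketch of the apparent new behavior (a hedged reconstruction from the visible diff lines, with the AES block size inlined; the real code reads it from `blobxfer.models.crypto`):

```python
AES_BLOCKSIZE = 16  # assumption: AES256_BLOCKSIZE_BYTES

def compute_allocated_size(size, is_encrypted):
    # encrypted payloads carry at least one block of padding, so the
    # allocated plaintext size is one AES block smaller, clamped at zero
    if is_encrypted:
        allocatesize = (size // AES_BLOCKSIZE - 1) * AES_BLOCKSIZE
        if allocatesize < 0:
            allocatesize = 0
    else:
        allocatesize = size
    return allocatesize

print(compute_allocated_size(32, True))   # → 16
print(compute_allocated_size(0, True))    # → 0 (zero-byte case no longer raises)
print(compute_allocated_size(100, False)) # → 100
```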
@@ -364,10 +364,9 @@ def convert_vectored_io_slice_to_final_path_name(local_path, ase):
:rtype: pathlib.Path
:return: converted final path
"""
name = local_path.name
name = blobxfer.models.metadata.\
remove_vectored_io_slice_suffix_from_name(
name, ase.vectored_io.slice_id)
local_path.name, ase.vectored_io.slice_id)
_tmp = list(local_path.parts[:-1])
_tmp.append(name)
return pathlib.Path(*_tmp)
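The refactor above inlines the name lookup but keeps the same pattern: strip the vectored IO slice suffix from the file name, then rebuild the path with the cleaned final component. A self-contained sketch (the `.bxslice-{id}` suffix format and `remove_slice_suffix` helper are assumptions for illustration, standing in for `remove_vectored_io_slice_suffix_from_name`):

```python
import pathlib

def remove_slice_suffix(name, slice_id):
    # hypothetical stand-in for the real metadata helper
    suffix = '.bxslice-{}'.format(slice_id)
    return name[:-len(suffix)] if name.endswith(suffix) else name

def to_final_path(local_path, slice_id):
    # strip the slice suffix, then swap the final path component
    name = remove_slice_suffix(local_path.name, slice_id)
    parts = list(local_path.parts[:-1])
    parts.append(name)
    return pathlib.PurePosixPath(*parts)

print(to_final_path(pathlib.PurePosixPath('data/file.bin.bxslice-0'), 0))
# → data/file.bin
```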
@@ -460,7 +459,7 @@ def _resume(self):
return None
self._allocate_disk_space()
# check if final path exists
if not self.final_path.exists():
if not self.final_path.exists(): # noqa
logger.warning('download path {} does not exist'.format(
self.final_path))
return None
@@ -493,7 +492,7 @@ def _resume(self):
if rr.md5hexdigest != hexdigest:
logger.warning(
'MD5 mismatch resume={} computed={} for {}'.format(
rr.md5hexdigest, hexdigest, self.final_path))
rr.md5hexdigest, hexdigest, self.final_path))
# reset hasher
self.md5 = blobxfer.util.new_md5_hasher()
return None
@@ -768,12 +767,12 @@ def _restore_file_attributes(self):
if self._ase.file_attributes is None:
return
# set file uid/gid and mode
if blobxfer.util.on_windows():
if blobxfer.util.on_windows(): # noqa
# TODO not implemented yet
pass
else:
self.final_path.chmod(int(self._ase.file_attributes.mode, 8))
if os.getuid() == 0:
if os.getuid() == 0: # noqa
os.chown(
str(self.final_path),
self._ase.file_attributes.uid,
12 changes: 0 additions & 12 deletions blobxfer/models/metadata.py
@@ -164,18 +164,6 @@ def fileattr_from_metadata(md):
return fileattr


def restore_fileattr(path, metadata):
# type: (pathlib.Path, dict) -> None
"""Restore file attributes from metadata
:param pathlib.Path path: path to modify
:param dict metadata: existing metadata dict
"""
if blobxfer.util.on_windows():
logger.warning(
'file attributes store/restore on Windows is not supported yet')
raise NotImplementedError()


def create_vectored_io_next_entry(ase):
# type: (blobxfer.models.azure.StorageEntity) -> str
"""Create Vectored IO next entry id
51 changes: 48 additions & 3 deletions blobxfer/models/options.py
@@ -43,6 +43,8 @@

# create logger
logger = logging.getLogger(__name__)
# global defines
_DEFAULT_REQUESTS_TIMEOUT = (3.1, 12.1)

# named tuples
VectoredIo = collections.namedtuple(
@@ -95,13 +97,56 @@
SyncCopy = collections.namedtuple(
'SyncCopy', [
'delete_extraneous_destination',
'dest_mode',
'mode',
'overwrite',
'recursive',
]
)


class Timeout(object):
"""Timeout Options"""
def __init__(self, connect, read):
"""Ctor for Timeout options
:param Timeout self: this
:param float connect: connect timeout
:param float read: read timeout
"""
if connect is None or connect <= 0:
self._connect = _DEFAULT_REQUESTS_TIMEOUT[0]
else:
self._connect = connect
if read is None or read <= 0:
self._read = _DEFAULT_REQUESTS_TIMEOUT[1]
else:
self._read = read

@property
def connect(self):
"""Connect timeout
:rtype: float
:return: connect timeout
"""
return self._connect

@property
def read(self):
"""Read timeout
:rtype: float
:return: read timeout
"""
return self._read

@property
def timeout(self):
"""Timeout property in requests format
:rtype: tuple
:return: (connect, read) timeout tuple
"""
return (self._connect, self._read)


class Concurrency(object):
"""Concurrency Options"""
def __init__(
@@ -157,14 +202,14 @@ class General(object):
"""General Options"""
def __init__(
self, concurrency, log_file=None, progress_bar=True,
resume_file=None, timeout_sec=None, verbose=False):
resume_file=None, timeout=None, verbose=False):
"""Ctor for General Options
:param General self: this
:param Concurrency concurrency: concurrency options
:param bool progress_bar: progress bar
:param str log_file: log file
:param str resume_file: resume file
:param int timeout_sec: timeout in seconds
:param Timeout timeout: timeout options
:param bool verbose: verbose output
"""
if concurrency is None:
@@ -176,5 +221,5 @@ def __init__(
self.resume_file = pathlib.Path(resume_file)
else:
self.resume_file = None
self.timeout_sec = timeout_sec
self.timeout = timeout
self.verbose = verbose
9 changes: 3 additions & 6 deletions blobxfer/models/resume.py
@@ -156,8 +156,7 @@ def __repr__(self):
'next_integrity_chunk={} completed={} md5={}>').format(
self.final_path, self.length, self.chunk_size,
self.next_integrity_chunk, self.completed,
self.md5hexdigest,
)
self.md5hexdigest)


class Upload(object):
@@ -295,8 +294,7 @@ def __repr__(self):
'md5={}>').format(
self.local_path, self.length, self.chunk_size,
self.total_chunks, self.completed_chunks, self.completed,
self.md5hexdigest,
)
self.md5hexdigest)


class SyncCopy(object):
@@ -428,5 +426,4 @@ def __repr__(self):
return ('SyncCopy<length={} chunk_size={} total_chunks={} '
'completed_chunks={} completed={}>').format(
self.length, self.chunk_size, self.total_chunks,
self.completed_chunks, self.completed,
)
self.completed_chunks, self.completed)