Release 10.17.0 (#2029)
* Add mitigation for weird NtQuerySecurityObject behavior on NAS sources (#1872)

* Add check for 0 length, attempt to validate the returned object.

* Change to grabbing real SD length

* Add comment describing issue

* Prevent infinite loop upon listing failure

* Fix GCP error checking

* Fix GCP disable

* Fix bad URL delete (#1892)

* Manipulate URLs safely

* Fix folder deletion test

* Prevent infinite loop upon listing failure

* Fix GCP error checking

* Fix GCP disable

* Fail when errors listing/clearing bucket

* Update MacOS testing pipeline (#1896)

* fixing small typo (,) in help of jobs clean (#1899)

* Microsoft mandatory file

* fixing small typo (,) in help of jobs clean

Co-authored-by: microsoft-github-policy-service[bot] <77245923+microsoft-github-policy-service[bot]@users.noreply.github.com>
Co-authored-by: Mohit Sharma <65536214+mohsha-msft@users.noreply.github.com>

* Implement MD OAuth testing (#1859)

* Implement MD OAuth testing

* Handle async on RevokeAccess, handle job cancel/failure better

* Prevent parallel testing of managed disks

* lint check

* Prevent infinite loop upon listing failure

* Fix GCP error checking

* Fix GCP disable

* Fail when errors listing/clearing bucket

* Add env vars

* Avoid revoking MD access, as it can be shared.

* Fix intermittent failures

* Disable MD OAuth testing temporarily.

* Add "all" to documentation (#1902)

* 10.16.1 patch notes (#1913)

* Add bugfixes to change log.

* Correct wording & punctuation

* Correct version

* Export Successfully Updated bytes (#1884)

* Add info in error message for mkdir on Log/Plan (#1883)

* Microsoft mandatory file

* Add info in error message for mkdir on Log/Plan

Co-authored-by: microsoft-github-policy-service[bot] <77245923+microsoft-github-policy-service[bot]@users.noreply.github.com>
Co-authored-by: Mohit Sharma <65536214+mohsha-msft@users.noreply.github.com>

* Fix fixupTokenJson (#1890)

* Microsoft mandatory file

* Fix fixupTokenJson

Co-authored-by: microsoft-github-policy-service[bot] <77245923+microsoft-github-policy-service[bot]@users.noreply.github.com>
Co-authored-by: Mohit Sharma <65536214+mohsha-msft@users.noreply.github.com>
Co-authored-by: Adam Orosz <adam.orosz@neotechnology.com>

* Do not log request/response for container creation error (#1893)

* Expose AZCOPY_DOWNLOAD_TO_TEMP_PATH environment variable. (#1895)

* Slice against the correct string (#1927)

* UX improvement: avoid crash when copying S2S with user delegation SAS (#1932)

* Fix bad build + Prevent bad builds in the future (#1917)

* Fix bad build + Prevent bad builds in the future

* Add Windows build

* Make sync use last write time for Azure Files (#1930)

* Make sync use last write time for Azure Files

* Implement test

* 10.16.2 Changelog (#1948)

* Update azcopy version

* Fixed a bug where preserve permissions would not work with OAuth

* Added CODEOWNERS file

* Fixed issue where CPK would not be injected on retries

* remove OAuth from test

* Updated version check string to indicate current AzCopy version (#1969)

* added codeowner

* Enhance job summary with details about file/folders (#1952)

* Add flag to disable version check (#1950)

* darwin arm64

* Update golang version to 1.19.2 (#1925)

* enable cgo

* added tests

* Minor fixes: More in description (#1968)

* Echo auto-login failure if any

* Update help for sync command to use trailing slash on directories

* azcopy fails to copy a 12TB file to Storage containers in Dev.

The logic calculates the proper blockSize when one is not provided; due to the uint32 cast, it cannot give the proper blockSize when the file size is between 50000 * (8 * 1024 * 1024) * X + 1 and 50000 * (8 * 1024 * 1024) * X + 49999. It should return a 16MB blockSize instead of 8MB.

Accommodated the changes suggested by Narasimha Kulkarni
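
A minimal sketch of the block-size selection described above (illustrative names, not the actual AzCopy code): keep doubling the default 8 MiB block size until the whole file fits within the 50,000-block limit, doing all arithmetic in int64 so no uint32 cast can truncate the comparison.

package main

import "fmt"

const (
	maxBlockCount    = 50000           // Azure Blob service limit on blocks per blob
	defaultBlockSize = 8 * 1024 * 1024 // 8 MiB starting point
)

// chooseBlockSize doubles the block size until ceil(fileSize/blockSize)
// fits within maxBlockCount. All math stays in int64.
func chooseBlockSize(fileSize int64) int64 {
	blockSize := int64(defaultBlockSize)
	for (fileSize+blockSize-1)/blockSize > maxBlockCount {
		blockSize *= 2
	}
	return blockSize
}

func main() {
	justOver := int64(maxBlockCount)*defaultBlockSize + 1
	fmt.Println(chooseBlockSize(justOver)) // prints 16777216 (16 MiB), not 8 MiB
}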

* Added extra logging when switching endpoints

* Enable support for preserving SMB info on Linux. (#1723)

* Microsoft mandatory file

* Enable support for preserving SMB info on Linux.

Implemented the GetSDDL/PutSDDL and GetSMBProperties/PutSMBProperties
methods for Linux using extended attributes.
The following are the xattrs we use for fetching/setting the various
required info.

// Extended Attribute (xattr) keys for fetching various information from Linux cifs client.
const (
        CIFS_XATTR_CREATETIME     = "user.cifs.creationtime" // File creation time.
        CIFS_XATTR_ATTRIB         = "user.cifs.dosattrib"    // FileAttributes.
        CIFS_XATTR_CIFS_ACL       = "system.cifs_acl"        // DACL only.
        CIFS_XATTR_CIFS_NTSD      = "system.cifs_ntsd"       // Owner, Group, DACL.
        CIFS_XATTR_CIFS_NTSD_FULL = "system.cifs_ntsd_full"  // Owner, Group, DACL, SACL.
)
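
As a hedged illustration (not code from this change), reading one of these xattrs from a Linux cifs mount can be done with golang.org/x/sys/unix; the path below is a placeholder.

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// getXattr fetches the raw value of an extended attribute: the first call
// asks only for the size, the second fills the buffer.
func getXattr(path, key string) ([]byte, error) {
	size, err := unix.Getxattr(path, key, nil)
	if err != nil {
		return nil, err
	}
	buf := make([]byte, size)
	n, err := unix.Getxattr(path, key, buf)
	if err != nil {
		return nil, err
	}
	return buf[:n], nil
}

func main() {
	// Hypothetical file on a cifs mount; CIFS_XATTR_CIFS_NTSD from the list above.
	sd, err := getXattr("/mnt/share/file.txt", "system.cifs_ntsd")
	fmt.Println(len(sd), err)
}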

The majority of the changes are in sddl/sddlHelper_linux.go, which implements
the following Win32 APIs for dealing with SIDs.

	ConvertSecurityDescriptorToStringSecurityDescriptorW
	ConvertStringSecurityDescriptorToSecurityDescriptorW
	ConvertSidToStringSidW
	ConvertStringSidToSidW
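
For context, a rough sketch of what ConvertSidToStringSidW produces (illustrative Go, not the sddlHelper_linux.go implementation): a binary SID's revision, 48-bit identifier authority, and 32-bit little-endian sub-authorities rendered as an "S-R-I-S-..." string.

package main

import (
	"encoding/binary"
	"fmt"
)

// sidToString renders a binary SID in the familiar "S-1-5-21-..." form.
func sidToString(sid []byte) (string, error) {
	if len(sid) < 8 {
		return "", fmt.Errorf("SID too short")
	}
	revision := sid[0]
	subAuthCount := int(sid[1])
	if len(sid) < 8+4*subAuthCount {
		return "", fmt.Errorf("SID truncated")
	}
	// Bytes 2..7 hold the 48-bit identifier authority, big-endian.
	var authority uint64
	for _, b := range sid[2:8] {
		authority = authority<<8 | uint64(b)
	}
	s := fmt.Sprintf("S-%d-%d", revision, authority)
	// Each sub-authority is a 32-bit little-endian value.
	for i := 0; i < subAuthCount; i++ {
		s += fmt.Sprintf("-%d", binary.LittleEndian.Uint32(sid[8+4*i:]))
	}
	return s, nil
}

func main() {
	// "Everyone" (S-1-1-0): revision 1, authority 1, one sub-authority of 0.
	everyone := []byte{1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0}
	fmt.Println(sidToString(everyone)) // S-1-1-0 <nil>
}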

Note: I have skipped Object ACE support in sddl/sddlHelper_linux.go as
      those should not be used for filesystem properties, only AD object
      properties.
      Can someone confirm this?

TBD:
Conditional SID

* Audited, fixed, tested support for "No ACL"/NO_ACCESS_CONTROL and ACL w/o any ACE

Tested the following cases:

c:\Users\natomar\Downloads>cd testacl

// This has "No ACLs" and everyone should be allowed access.
c:\Users\natomar\Downloads\testacl>touch NO_ACCESS_CONTROL.txt
c:\Users\natomar\Downloads\testacl>cacls NO_ACCESS_CONTROL.txt /S:D:NO_ACCESS_CONTROL
Are you sure (Y/N)?y
processed file: c:\Users\natomar\Downloads\testacl\NO_ACCESS_CONTROL.txt

// This has "No ACLs" and everyone should be allowed access.
// It additionally has the "P" (protected) flag set, but that won't have
// any effect as that just prevents ACE inheritance but this ACL will
// not have any ACLs due to the NO_ACCESS_CONTROL flag.
c:\Users\natomar\Downloads\testacl>touch PNO_ACCESS_CONTROL.txt
c:\Users\natomar\Downloads\testacl>cacls PNO_ACCESS_CONTROL.txt /S:D:PNO_ACCESS_CONTROL
Are you sure (Y/N)?y
processed file: c:\Users\natomar\Downloads\testacl\PNO_ACCESS_CONTROL.txt

// This should set a DACL with no ACEs, but since "P" is not set it
// inherits ACEs from the parent dir.
c:\Users\natomar\Downloads\testacl>touch empty_d.txt
c:\Users\natomar\Downloads\testacl>cacls empty_d.txt /S:D:
Are you sure (Y/N)?y
processed file: c:\Users\natomar\Downloads\testacl\empty_d.txt

// This should set a DACL with no ACEs; since "P" is set it
// doesn't inherit ACEs from the parent dir and hence this will block
// all users.
c:\Users\natomar\Downloads\testacl>touch empty_d_with_p.txt
c:\Users\natomar\Downloads\testacl>cacls empty_d_with_p.txt /S:D:P
Are you sure (Y/N)?y
processed file: c:\Users\natomar\Downloads\testacl\empty_d_with_p.txt

* Don't fail outright for ACL revision 4.

Our supported ACL types must carry ACL revision 2 as per the doc

https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-dtyp/20233ed8-a6c6-4097-aafa-dd545ed24428

but I've seen some dirs with ACL revision 4 whose ACE types are still
supported ones. So instead of failing upfront, let it fail later with an
unsupported ACE type error.

Also hexadecimal aceRights are more commonly seen than I expected, so
removing a log.

* Minor fix after running azcopy on a large dir.

This was something I had doubts about. Now that we've hit a real-world
issue because of it, it's all clear :-)

* Some minor updates after the rebase to the latest AzCopy.

* Set default value of flag preserve-smb-info to true on Windows and false on other OS

(cherry picked from commit ac5bedb)

Co-authored-by: microsoft-github-policy-service[bot] <77245923+microsoft-github-policy-service[bot]@users.noreply.github.com>
Co-authored-by: Nagendra Tomar <Nagendra.Tomar@microsoft.com>

* Added log indicating a sub-directory is being enqueued (#1999)

* Log sync deletions to scanning logger (#2000)

* ieproxy fix

* remove cgo

* fix

* fix

* fix

* more testing

* more testing

* more testing

* more testing

* mod tidy

* mod tidy

* more testing

* Added codespell (#2008)

* Added codespell

* Fixed initial codespell errors

* Fix format in codespell.yml

* Added s3 url parts

* Added CodeQL (#2009)

* Added linting file

* Upgrade codeql to v2

* Fix incorrect conversion between integer types

* Fix GCP URL parts

* Fix for rare infinite loop on mutex acquisition (#2012)

* small fix

* removed test

* Added trivy file (#2015)

* Added trivy file

* renamed trivy

* Improve debug-ability of e2e tests by uploading logs of failed jobs (#1898)

* Upload testing logs to storage account on failed test

* Handle as pipeline artifact instead

* mkdirall

* copy plan files too

* Fix failing tests

* Change overwrite to affect any "locked in"/completed state

* Fail copy job if single blob does not exist (#1981)

* Job fail if single file does not exist

* fixed change

* fail only on a single file not existing

* fail on file not found

* fail on file not found

* fail on file not found

* cleanup

* added tests

* cleanup

* removed test

* Correct odd behavior around folder overwrites (#1961)

* Fix files sync by determining which LMT to use via smb properties flag (#1958)

* Fix files sync by determining which LMT to use via smb properties flag

* Implement testing for LMT switch

* Fix testing

* Limit SMB testing to SMB-compatible environment

* Enforce SMB LMT for Linux/MacOS test of SMB LMT preference
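
A hedged sketch of the decision described above (illustrative names, not AzCopy's actual types): when the preserve-smb-info flag is on, sync compares the SMB last-write-time instead of the service's last-modified time.

package main

import (
	"fmt"
	"time"
)

// lastModifiedTimeForSync picks which timestamp drives the sync comparison.
func lastModifiedTimeForSync(serviceLMT, smbLastWriteTime time.Time, preserveSMBInfo bool) time.Time {
	if preserveSMBInfo && !smbLastWriteTime.IsZero() {
		// Azure Files: uploads/downloads preserve the SMB last-write-time,
		// so comparing against it avoids spurious re-transfers.
		return smbLastWriteTime
	}
	return serviceLMT
}

func main() {
	svc := time.Now()
	smb := svc.Add(-time.Hour)
	fmt.Println(lastModifiedTimeForSync(svc, smb, true).Equal(smb)) // true
}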

* Fix metadata parsing (#1953)

* Fix metadata parsing

* rework metadata parsing to be more robust; add test

* Fix comment lines

* Codespell :|
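
As an illustration of the kind of robustness meant here (a sketch assuming metadata arrives as "key=value;key=value" pairs, not the actual AzCopy parser): split on ';' and then on the first '=' only, so values that themselves contain '=' survive.

package main

import (
	"fmt"
	"strings"
)

// parseMetadata turns "key=value;key2=value2" into a map, tolerating '='
// inside values and rejecting pairs with an empty key.
func parseMetadata(raw string) (map[string]string, error) {
	meta := make(map[string]string)
	if raw == "" {
		return meta, nil
	}
	for _, pair := range strings.Split(raw, ";") {
		key, value, found := strings.Cut(pair, "=")
		if !found || key == "" {
			return nil, fmt.Errorf("invalid metadata pair %q", pair)
		}
		meta[key] = value
	}
	return meta, nil
}

func main() {
	m, err := parseMetadata("author=jane;formula=a=b+c")
	fmt.Println(m, err) // map[author:jane formula:a=b+c] <nil>
}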

* Fix ADLSG2 intermittent failure (#1901)

* Fix ADLSG2 intermittent failure

* Add test

* Reduce code dupe

* Fix build errors

* Fix infinite loop maybe?

* Store source token and pass to other threads (#1996)

* Store source token

* testing

* failing pipe

* cleanup

* test logger

* fix test failure

* fix 2

* fix

* sync fix

* cleanup check

* Hash based sync (#2020)

* Implement hash based sync for MD5

* Implement testing

* Ensure folders are handled properly in HBS & Test S2S

* Add skip/process logging

* Include generic xattr syncmeta application

* Fix 0-size blobs

* Fix core testing

* Revert "Include generic xattr syncmeta application"

This reverts commit fba55e4.

* Warn on no hash @ source, remove MHP

* Comments

* Comments

* Copy properties from Source (#1964)

* Copy properties from Source

* Remove unnecessary ws changes

* Preserve UNIX properties

* Move entity type to Overwrite option

* Add python suite

* Review comments

* Fix test

* Release notes and version update (#2028)

Co-authored-by: adreed-msft <49764384+adreed-msft@users.noreply.github.com>
Co-authored-by: mstenz <mstenz-design@web.de>
Co-authored-by: microsoft-github-policy-service[bot] <77245923+microsoft-github-policy-service[bot]@users.noreply.github.com>
Co-authored-by: Mohit Sharma <65536214+mohsha-msft@users.noreply.github.com>
Co-authored-by: Adele Reed <adreed@microsoft.com>
Co-authored-by: Karla Saur <1703543+ksaur@users.noreply.github.com>
Co-authored-by: adam-orosz <106535811+adam-orosz@users.noreply.github.com>
Co-authored-by: Adam Orosz <adam.orosz@neotechnology.com>
Co-authored-by: Ze Qian Zhang <zezha@microsoft.com>
Co-authored-by: Gauri Prasad <gapra@microsoft.com>
Co-authored-by: Gauri Prasad <51212198+gapra-msft@users.noreply.github.com>
Co-authored-by: Tamer Sherif <tasherif@microsoft.com>
Co-authored-by: Tamer Sherif <69483382+tasherif-msft@users.noreply.github.com>
Co-authored-by: reshmav18 <73923840+reshmav18@users.noreply.github.com>
Co-authored-by: linuxsmiths <linuxsmiths@gmail.com>
Co-authored-by: Nagendra Tomar <Nagendra.Tomar@microsoft.com>
17 people committed Jan 23, 2023
1 parent a10fdd0 commit 108dbdd
Showing 133 changed files with 4,766 additions and 620 deletions.
1 change: 1 addition & 0 deletions .github/CODEOWNERS
@@ -0,0 +1 @@
* @gapra-msft @adreed-msft @nakulkar-msft @siminsavani-msft @vibhansa-msft @tasherif-msft
67 changes: 67 additions & 0 deletions .github/workflows/codeql-analysis.yml
@@ -0,0 +1,67 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"

on:
  pull_request:
    branches: [ main, dev ]
  push:
    branches: [ main, dev ]

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write

    strategy:
      fail-fast: false
      matrix:
        language: [ 'go' ]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
        # Learn more about CodeQL language support at https://git.io/codeql-language-support

    steps:
    - name: Checkout repository
      uses: actions/checkout@v2

    # Initializes the CodeQL tools for scanning.
    - name: Initialize CodeQL
      uses: github/codeql-action/init@v2
      with:
        languages: ${{ matrix.language }}
        # If you wish to specify custom queries, you can do so here or in a config file.
        # By default, queries listed here will override any specified in a config file.
        # Prefix the list here with "+" to use these queries and those in the config file.
        # queries: ./path/to/local/query, your-org/your-repo/queries@main

    # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
    # If this step fails, then you should remove it and run the build manually (see below)
    - name: Autobuild
      uses: github/codeql-action/autobuild@v2

    # ℹ️ Command-line programs to run using the OS shell.
    # 📚 https://git.io/JvXDl

    # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
    #    and modify them (or add more) to build your code if your project
    #    uses a compiled language

    #- run: |
    #   make bootstrap
    #   make release

    - name: Perform CodeQL Analysis
      uses: github/codeql-action/analyze@v2
24 changes: 24 additions & 0 deletions .github/workflows/codespell.yml
@@ -0,0 +1,24 @@
# GitHub Action to automate the identification of common misspellings in text files.
# https://github.com/codespell-project/actions-codespell
# https://github.com/codespell-project/codespell
name: codespell
on:
  push:
    branches:
      - dev
      - main
  pull_request:
    branches:
      - dev
      - main
jobs:
  codespell:
    name: Check for spelling errors
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: codespell-project/actions-codespell@master
        with:
          check_filenames: true
          skip: ./sddl/sddlPortable_test.go,./sddl/sddlHelper_linux.go
          ignore_words_list: "resue,pase,cancl,cacl,froms"
54 changes: 54 additions & 0 deletions .github/workflows/trivy.yml
@@ -0,0 +1,54 @@
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

name: trivy

on:
  push:
    branches: [ "main", "dev" ]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [ "main", "dev" ]
  schedule:
    - cron: '31 19 * * 1'

permissions:
  contents: read

jobs:
  build:
    permissions:
      contents: read # for actions/checkout to fetch code
      security-events: write # for github/codeql-action/upload-sarif to upload SARIF results
      actions: read # only required for a private repository by github/codeql-action/upload-sarif to get the Action run status

    name: Build
    runs-on: "ubuntu-22.04"

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Build AzCopy
        run: |
          go build -o azcopy
          ls -l
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: fs
          scan-ref: './azcopy'
          ignore-unfixed: true
          format: 'sarif'
          output: 'trivy-results-binary.sarif'
          severity: 'CRITICAL,HIGH,MEDIUM,LOW'

      - name: List Issues
        run: |
          cat trivy-results-binary.sarif
      - name: Upload Trivy scan results to GitHub Security tab
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: 'trivy-results-binary.sarif'
31 changes: 22 additions & 9 deletions ChangeLog.md
@@ -1,6 +1,19 @@

# Change Log

## Version 10.17.0

### New features

1. Added support for hash-based sync. AzCopy sync can now take two new flags, `--compare-hash` and `--missing-hash-policy=Generate`, with which the user can transfer only the files that differ in their MD5 hash.
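
   For illustration, a sync invocation using these flags might look like the following (placeholder path and SAS, and assuming `MD5` as the `--compare-hash` value):

       azcopy sync "/local/dir" "https://account.blob.core.windows.net/container?<SAS>" --compare-hash=MD5 --missing-hash-policy=Generate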

### Bug fixes
1. Fixed [issue 1994](https://github.com/Azure/azure-storage-azcopy/pull/1994): Error in calculation of block size
2. Fixed [issue 1957](https://github.com/Azure/azure-storage-azcopy/pull/1957): Repeated Authentication token refresh
3. Fixed [issue 1870](https://github.com/Azure/azure-storage-azcopy/pull/1870): Fixed issue where CPK would not be injected on retries
4. Fixed [issue 1946](https://github.com/Azure/azure-storage-azcopy/issues/1946): Fixed Metadata parsing
5. Fixed [issue 1931](https://github.com/Azure/azure-storage-azcopy/issues/1931)

## Version 10.16.2

### Bug Fixes
@@ -35,7 +48,7 @@
1. Fixed [issue 1506](https://github.com/Azure/azure-storage-azcopy/issues/1506): Added input watcher to resolve issue since job could not be resumed.
2. Fixed [issue 1794](https://github.com/Azure/azure-storage-azcopy/issues/1794): Moved log-level to root.go so log-level arguments do not get ignored.
3. Fixed [issue 1824](https://github.com/Azure/azure-storage-azcopy/issues/1824): Avoid creating .azcopy under HOME if plan/log location is specified elsewhere.
4. Fixed [isue 1830](https://github.com/Azure/azure-storage-azcopy/issues/1830), [issue 1412](https://github.com/Azure/azure-storage-azcopy/issues/1418), and [issue 873](https://github.com/Azure/azure-storage-azcopy/issues/873): Improved error message for when AzCopy cannot determine if source is directory.
4. Fixed [issue 1830](https://github.com/Azure/azure-storage-azcopy/issues/1830), [issue 1412](https://github.com/Azure/azure-storage-azcopy/issues/1418), and [issue 873](https://github.com/Azure/azure-storage-azcopy/issues/873): Improved error message for when AzCopy cannot determine if source is directory.
5. Fixed [issue 1777](https://github.com/Azure/azure-storage-azcopy/issues/1777): Fixed job list to handle respective output-type correctly.
6. Fixed win64 alignment issue.

@@ -191,7 +204,7 @@

### New features
1. Added option to [disable parallel blob listing](https://github.com/Azure/azure-storage-azcopy/pull/1263)
1. Added support for uploading [large files](https://github.com/Azure/azure-storage-azcopy/pull/1254/files) upto 4TiB. Please refer the [public documentation](https://docs.microsoft.com/en-us/rest/api/storageservices/create-file) for more information
1. Added support for uploading [large files](https://github.com/Azure/azure-storage-azcopy/pull/1254/files) up to 4TiB. Please refer the [public documentation](https://docs.microsoft.com/en-us/rest/api/storageservices/create-file) for more information
1. Added support for `include-before`flag. Refer [this](https://github.com/Azure/azure-storage-azcopy/issues/1075) for more information

### Bug fixes
@@ -469,7 +482,7 @@ disallowed because none (other than include-path) are respected.

1. The `*` character is no longer supported as a wildcard in URLs, except for the two exceptions
noted below. It remains supported in local file paths.
1. The first execption is that `/*` is still allowed at the very end of the "path" section of a
1. The first exception is that `/*` is still allowed at the very end of the "path" section of a
URL. This is illustrated by the difference between these two source URLs:
`https://account/container/virtual?SAS` and
`https://account/container/virtualDir/*?SAS`. The former copies the virtual directory
@@ -501,7 +514,7 @@ disallowed because none (other than include-path) are respected.
1. Percent complete is displayed as each job runs.
1. VHD files are auto-detected as page blobs.
1. A new benchmark mode allows quick and easy performance benchmarking of your network connection to
Blob Storage. Run AzCopy with the paramaters `bench --help` for details. This feature is in
Blob Storage. Run AzCopy with the parameters `bench --help` for details. This feature is in
Preview status.
1. The location for AzCopy's "plan" files can be specified with the environment variable
`AZCOPY_JOB_PLAN_LOCATION`. (If you move the plan files and also move the log files using the existing
Expand All @@ -520,7 +533,7 @@ disallowed because none (other than include-path) are respected.
1. Memory usage can be controlled by setting the new environment variable `AZCOPY_BUFFER_GB`.
Decimal values are supported. Actual usage will be the value specified, plus some overhead.
1. An extra integrity check has been added: the length of the
completed desination file is checked against that of the source.
completed destination file is checked against that of the source.
1. When downloading, AzCopy can automatically decompress blobs (or Azure Files) that have a
`Content-Encoding` of `gzip` or `deflate`. To enable this behaviour, supply the `--decompress`
parameter.
@@ -685,21 +698,21 @@ information, including those needed to set the new headers.

1. For creating MD5 hashes when uploading, version 10.x now has the OPPOSITE default to version
AzCopy 8.x. Specifically, as of version 10.0.9, MD5 hashes are NOT created by default. To create
Content-MD5 hashs when uploading, you must now specify `--put-md5` on the command line.
Content-MD5 hashes when uploading, you must now specify `--put-md5` on the command line.

### New features

1. Can migrate data directly from Amazon Web Services (AWS). In this high-performance data path
the data is read directly from AWS by the Azure Storage service. It does not need to pass through
the machine running AzCopy. The copy happens syncronously, so you can see its exact progress.
the machine running AzCopy. The copy happens synchronously, so you can see its exact progress.
1. Can migrate data directly from Azure Files or Azure Blobs (any blob type) to Azure Blobs (any
blob type). In this high-performance data path the data is read directly from the source by the
Azure Storage service. It does not need to pass through the machine running AzCopy. The copy
happens syncronously, so you can see its exact progress.
happens synchronously, so you can see its exact progress.
1. Sync command prompts with 4 options about deleting unneeded files from the target: Yes, No, All or
None. (Deletion only happens if the `--delete-destination` flag is specified).
1. Can download to /dev/null. This throws the data away - but is useful for testing raw network
performance unconstrained by disk; and also for validing MD5 hashes in bulk (when run in a cloud
performance unconstrained by disk; and also for validating MD5 hashes in bulk (when run in a cloud
VM in the same region as the Storage account)

### Bug fixes
2 changes: 1 addition & 1 deletion azbfs/parsing_urls.go
@@ -20,7 +20,7 @@ type BfsURLParts struct {
isIPEndpointStyle bool // Ex: "https://ip/accountname/filesystem"
}

// isIPEndpointStyle checkes if URL's host is IP, in this case the storage account endpoint will be composed as:
// isIPEndpointStyle checks if URL's host is IP, in this case the storage account endpoint will be composed as:
// http(s)://IP(:port)/storageaccount/share(||container||etc)/...
func isIPEndpointStyle(url url.URL) bool {
return net.ParseIP(url.Host) != nil
2 changes: 1 addition & 1 deletion azbfs/zc_credential_token.go
@@ -25,7 +25,7 @@ type TokenCredential interface {
// indicating how long the TokenCredential object should wait before calling your tokenRefresher function again.
func NewTokenCredential(initialToken string, tokenRefresher func(credential TokenCredential) time.Duration) TokenCredential {
tc := &tokenCredential{}
tc.SetToken(initialToken) // We dont' set it above to guarantee atomicity
tc.SetToken(initialToken) // We don't set it above to guarantee atomicity
if tokenRefresher == nil {
return tc // If no callback specified, return the simple tokenCredential
}
35 changes: 27 additions & 8 deletions azure-pipelines.yml
@@ -29,10 +29,10 @@ jobs:
env:
GO111MODULE: 'on'
inputs:
version: '1.17.9'
version: '1.19.2'

- script: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s v1.43.0
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s v1.46.2
echo 'Installation complete'
./bin/golangci-lint --version
./bin/golangci-lint run e2etest
@@ -83,7 +83,12 @@
- script: |
go build -o "$(Build.ArtifactStagingDirectory)/azcopy_darwin_amd64"
displayName: 'Generate MacOS Build'
displayName: 'Generate MacOS Build with AMD64'
condition: eq(variables.type, 'mac-os')
- script: |
GOARCH=arm64 CGO_ENABLED=1 go build -o "$(Build.ArtifactStagingDirectory)/azcopy_darwin_arm64"
displayName: 'Generate MacOS Build with ARM64'
condition: eq(variables.type, 'mac-os')
- task: PublishBuildArtifacts@1
@@ -116,7 +121,7 @@
steps:
- task: GoTool@0
inputs:
version: '1.17.9'
version: '1.19.2'

# Running E2E Tests on Linux - AMD64
- script: |
Expand All @@ -134,6 +139,7 @@ jobs:
AZCOPY_E2E_CLIENT_SECRET: $(AZCOPY_SPA_CLIENT_SECRET)
AZCOPY_E2E_CLASSIC_ACCOUNT_NAME: $(AZCOPY_E2E_CLASSIC_ACCOUNT_NAME)
AZCOPY_E2E_CLASSIC_ACCOUNT_KEY: $(AZCOPY_E2E_CLASSIC_ACCOUNT_KEY)
AZCOPY_E2E_LOG_OUTPUT: '$(System.DefaultWorkingDirectory)/logs'
AZCOPY_E2E_OAUTH_MANAGED_DISK_CONFIG: $(AZCOPY_E2E_OAUTH_MANAGED_DISK_CONFIG)
AZCOPY_E2E_STD_MANAGED_DISK_CONFIG: $(AZCOPY_E2E_STD_MANAGED_DISK_CONFIG)
CPK_ENCRYPTION_KEY: $(CPK_ENCRYPTION_KEY)
Expand All @@ -157,6 +163,7 @@ jobs:
AZCOPY_E2E_CLIENT_SECRET: $(AZCOPY_SPA_CLIENT_SECRET)
AZCOPY_E2E_CLASSIC_ACCOUNT_NAME: $(AZCOPY_E2E_CLASSIC_ACCOUNT_NAME)
AZCOPY_E2E_CLASSIC_ACCOUNT_KEY: $(AZCOPY_E2E_CLASSIC_ACCOUNT_KEY)
AZCOPY_E2E_LOG_OUTPUT: '$(System.DefaultWorkingDirectory)/logs'
AZCOPY_E2E_OAUTH_MANAGED_DISK_CONFIG: $(AZCOPY_E2E_OAUTH_MANAGED_DISK_CONFIG)
AZCOPY_E2E_STD_MANAGED_DISK_CONFIG: $(AZCOPY_E2E_STD_MANAGED_DISK_CONFIG)
CPK_ENCRYPTION_KEY: $(CPK_ENCRYPTION_KEY)
Expand All @@ -182,13 +189,21 @@ jobs:
AZCOPY_E2E_CLIENT_SECRET: $(AZCOPY_SPA_CLIENT_SECRET)
AZCOPY_E2E_CLASSIC_ACCOUNT_NAME: $(AZCOPY_E2E_CLASSIC_ACCOUNT_NAME)
AZCOPY_E2E_CLASSIC_ACCOUNT_KEY: $(AZCOPY_E2E_CLASSIC_ACCOUNT_KEY)
AZCOPY_E2E_LOG_OUTPUT: '$(System.DefaultWorkingDirectory)/logs'
AZCOPY_E2E_OAUTH_MANAGED_DISK_CONFIG: $(AZCOPY_E2E_OAUTH_MANAGED_DISK_CONFIG)
AZCOPY_E2E_STD_MANAGED_DISK_CONFIG: $(AZCOPY_E2E_STD_MANAGED_DISK_CONFIG)
CPK_ENCRYPTION_KEY: $(CPK_ENCRYPTION_KEY)
CPK_ENCRYPTION_KEY_SHA256: $(CPK_ENCRYPTION_KEY_SHA256)
displayName: 'E2E Test MacOs'
displayName: 'E2E Test MacOs AMD64'
condition: eq(variables.type, 'mac-os')
- task: PublishBuildArtifacts@1
displayName: 'Publish logs'
condition: succeededOrFailed()
inputs:
pathToPublish: '$(System.DefaultWorkingDirectory)/logs'
artifactName: logs

- job: Test_On_Ubuntu
variables:
isMutexSet: 'false'
Expand All @@ -204,18 +219,22 @@ jobs:
- task: GoTool@0
name: 'Set_up_Golang'
inputs:
version: '1.17.9'
version: '1.19.2'
- task: DownloadSecureFile@1
name: ciGCSServiceAccountKey
displayName: 'Download GCS Service Account Key'
inputs:
secureFile: 'ci-gcs-dev.json'
- script: |
pip install azure-storage-blob==12.12.0
# set the variable to indicate that the mutex is being acquired
# note: we set it before acquiring the mutex to ensure we release the mutex.
# setting this after can result in an un-broken mutex if someone cancels the pipeline after we acquire the
# mutex but before we set this variable.
# setting this before will always work since it is valid to break an un-acquired mutex.
echo '##vso[task.setvariable variable=isMutexSet]true'
# acquire the mutex before running live tests to avoid conflicts
python ./tool_distributed_mutex.py lock "$(MUTEX_URL)"
# set the variable to indicate that the mutex was actually acquired
echo '##vso[task.setvariable variable=isMutexSet]true'
name: 'Acquire_the_distributed_mutex'
- script: |
# run unit test and build executable
2 changes: 1 addition & 1 deletion cmd/benchmark.go
@@ -273,15 +273,15 @@ func (h benchmarkSourceHelper) FromUrl(s string) (fileCount uint, bytesPerFile i
pieces[0] = strings.Split(pieces[0], "=")[1]
pieces[1] = strings.Split(pieces[1], "=")[1]
pieces[2] = strings.Split(pieces[2], "=")[1]
fc, err := strconv.ParseUint(pieces[0], 10, 64)
fc, err := strconv.ParseUint(pieces[0], 10, 32)
if err != nil {
return 0, 0, 0, err
}
bpf, err := strconv.ParseInt(pieces[1], 10, 64)
if err != nil {
return 0, 0, 0, err
}
nf, err := strconv.ParseUint(pieces[2], 10, 64)
nf, err := strconv.ParseUint(pieces[2], 10, 32)
if err != nil {
return 0, 0, 0, err
}