10.16.2 release (#1949)
* Add mitigation for weird NtQuerySecurityObject behavior on NAS sources (#1872)

* Add check for 0 length, attempt to validate the returned object.

* Change to grabbing real SD length

* Add comment describing issue

* Prevent infinite loop upon listing failure

* Fix GCP error checking

* Fix GCP disable

* Fix bad URL delete (#1892)

* Manipulate URLs safely

* Fix folder deletion test

* Prevent infinite loop upon listing failure

* Fix GCP error checking

* Fix GCP disable

* Fail when errors listing/clearing bucket

* Update MacOS testing pipeline (#1896)

* fixing small typo (,) in help of jobs clean (#1899)

* Microsoft mandatory file

* fixing small typo (,) in help of jobs clean

Co-authored-by: microsoft-github-policy-service[bot] <77245923+microsoft-github-policy-service[bot]@users.noreply.github.com>
Co-authored-by: Mohit Sharma <65536214+mohsha-msft@users.noreply.github.com>

* Implement MD OAuth testing (#1859)

* Implement MD OAuth testing

* Handle async on RevokeAccess, handle job cancel/failure better

* Prevent parallel testing of managed disks

* lint check

* Prevent infinite loop upon listing failure

* Fix GCP error checking

* Fix GCP disable

* Fail when errors listing/clearing bucket

* Add env vars

* Avoid revoking MD access, as it can be shared.

* Fix intermittent failures

* Disable MD OAuth testing temporarily.

* Add "all" to documentation (#1902)

* 10.16.1 patch notes (#1913)

* Add bugfixes to change log.

* Correct wording & punctuation

* Correct version

* Export Successfully Updated bytes (#1884)

* Add info in error message for mkdir on Log/Plan (#1883)

* Microsoft mandatory file

* Add info in error message for mkdir on Log/Plan

Co-authored-by: microsoft-github-policy-service[bot] <77245923+microsoft-github-policy-service[bot]@users.noreply.github.com>
Co-authored-by: Mohit Sharma <65536214+mohsha-msft@users.noreply.github.com>

* Fix fixupTokenJson (#1890)

* Microsoft mandatory file

* Fix fixupTokenJson

Co-authored-by: microsoft-github-policy-service[bot] <77245923+microsoft-github-policy-service[bot]@users.noreply.github.com>
Co-authored-by: Mohit Sharma <65536214+mohsha-msft@users.noreply.github.com>
Co-authored-by: Adam Orosz <adam.orosz@neotechnology.com>

* Do not log request/response for container creation error (#1893)

* Expose AZCOPY_DOWNLOAD_TO_TEMP_PATH environment variable. (#1895)

* Slice against the correct string (#1927)

* UX improvement: avoid crash when copying S2S with user delegation SAS (#1932)

* Fix bad build + Prevent bad builds in the future (#1917)

* Fix bad build + Prevent bad builds in the future

* Add Windows build

* Make sync use last write time for Azure Files (#1930)

* Make sync use last write time for Azure Files

* Implement test

* 10.16.2 Changelog (#1948)

* Update azcopy version

Co-authored-by: mstenz <mstenz-design@web.de>
Co-authored-by: microsoft-github-policy-service[bot] <77245923+microsoft-github-policy-service[bot]@users.noreply.github.com>
Co-authored-by: Mohit Sharma <65536214+mohsha-msft@users.noreply.github.com>
Co-authored-by: Narasimha Kulkarni <nakulkar@microsoft.com>
Co-authored-by: Karla Saur <1703543+ksaur@users.noreply.github.com>
Co-authored-by: adam-orosz <106535811+adam-orosz@users.noreply.github.com>
Co-authored-by: Adam Orosz <adam.orosz@neotechnology.com>
Co-authored-by: Ze Qian Zhang <zezha@microsoft.com>
9 people committed Nov 7, 2022
1 parent 4185948 commit a10fdd0
Showing 18 changed files with 142 additions and 36 deletions.
8 changes: 8 additions & 0 deletions ChangeLog.md
@@ -1,6 +1,14 @@

# Change Log

## Version 10.16.2

### Bug Fixes

1. Fixed an issue where sync would always re-download files as we were comparing against the service LMT, not the SMB LMT
2. Fixed a crash when copying objects service to service using a user delegation SAS token
3. Fixed a crash when deleting folders that may have a raw path string

## Version 10.16.1

### Documentation changes
24 changes: 23 additions & 1 deletion azure-pipelines.yml
@@ -48,15 +48,37 @@ jobs:
- script: |
GOARCH=amd64 GOOS=linux go build -o "$(Build.ArtifactStagingDirectory)/azcopy_linux_amd64"
displayName: 'Generate Linux AMD64'
condition: eq(variables.type, 'linux')
- script: |
GOARCH=amd64 GOOS=linux go build -tags "se_integration" -o "$(Build.ArtifactStagingDirectory)/azcopy_linux_se_amd64"
displayName: 'Generate Linux AMD64 SE Integration'
condition: eq(variables.type, 'linux')
- script: |
GOARCH=arm64 GOOS=linux go build -o "$(Build.ArtifactStagingDirectory)/azcopy_linux_arm64"
displayName: 'Generate Linux ARM64'
condition: eq(variables.type, 'linux')
- script: |
GOARCH=amd64 GOOS=windows go build -o "$(Build.ArtifactStagingDirectory)/azcopy_windows_amd64.exe"
displayName: 'Generate Windows AMD64'
condition: eq(variables.type, 'linux')
- script: |
GOARCH=386 GOOS=windows go build -o "$(Build.ArtifactStagingDirectory)/azcopy_windows_386.exe"
displayName: 'Generate Windows i386'
condition: eq(variables.type, 'linux')
- script: |
GOARCH=arm GOARM=7 GOOS=windows go build -o "$(Build.ArtifactStagingDirectory)/azcopy_windows_v7_arm.exe"
displayName: 'Generate Windows ARM'
condition: eq(variables.type, 'linux')
- script: |
cp NOTICE.txt $(Build.ArtifactStagingDirectory)
displayName: 'Generate Linux And Windows Build'
displayName: 'Copy NOTICE.txt'
condition: eq(variables.type, 'linux')
- script: |
4 changes: 2 additions & 2 deletions cmd/copy.go
Expand Up @@ -943,8 +943,8 @@ func areBothLocationsSMBAware(fromTo common.FromTo) bool {
func areBothLocationsPOSIXAware(fromTo common.FromTo) bool {
// POSIX properties are stored in blob metadata-- They don't need a special persistence strategy for BlobBlob.
return runtime.GOOS == "linux" && (
// fromTo == common.EFromTo.BlobLocal() || TODO
fromTo == common.EFromTo.LocalBlob()) ||
// fromTo == common.EFromTo.BlobLocal() || TODO
fromTo == common.EFromTo.LocalBlob()) ||
fromTo == common.EFromTo.BlobBlob()
}

15 changes: 9 additions & 6 deletions cmd/copyEnumeratorInit.go
Expand Up @@ -163,14 +163,15 @@ func (cca *CookedCopyCmdArgs) initEnumerator(jobPartOrder common.CopyJobPartOrde
// only create the destination container in S2S scenarios
if cca.FromTo.From().IsRemote() && dstContainerName != "" { // if the destination has a explicit container name
// Attempt to create the container. If we fail, fail silently.
err = cca.createDstContainer(dstContainerName, cca.Destination, ctx, existingContainers, azcopyLogVerbosity)
err = cca.createDstContainer(dstContainerName, cca.Destination, ctx, existingContainers, common.ELogLevel.None())

// check against seenFailedContainers so we don't spam the job log with initialization failed errors
if _, ok := seenFailedContainers[dstContainerName]; err != nil && jobsAdmin.JobsAdmin != nil && !ok {
logDstContainerCreateFailureOnce.Do(func() {
glcm.Info("Failed to create one or more destination container(s). Your transfers may still succeed if the container already exists.")
})
jobsAdmin.JobsAdmin.LogToJobLog(fmt.Sprintf("failed to initialize destination container %s; the transfer will continue (but be wary it may fail): %s", dstContainerName, err), pipeline.LogWarning)
jobsAdmin.JobsAdmin.LogToJobLog(fmt.Sprintf("Failed to create destination container %s. The transfer will continue if the container exists", dstContainerName), pipeline.LogWarning)
jobsAdmin.JobsAdmin.LogToJobLog(fmt.Sprintf("Error %s", err), pipeline.LogDebug)
seenFailedContainers[dstContainerName] = true
}
} else if cca.FromTo.From().IsRemote() { // if the destination has implicit container names
@@ -197,7 +198,7 @@ func (cca *CookedCopyCmdArgs) initEnumerator(jobPartOrder common.CopyJobPartOrde
continue
}

err = cca.createDstContainer(bucketName, cca.Destination, ctx, existingContainers, azcopyLogVerbosity)
err = cca.createDstContainer(bucketName, cca.Destination, ctx, existingContainers, common.ELogLevel.None())

// if JobsAdmin is nil, we're probably in testing mode.
// As a result, container creation failures are expected as we don't give the SAS tokens adequate permissions.
@@ -206,7 +207,8 @@ func (cca *CookedCopyCmdArgs) initEnumerator(jobPartOrder common.CopyJobPartOrde
logDstContainerCreateFailureOnce.Do(func() {
glcm.Info("Failed to create one or more destination container(s). Your transfers may still succeed if the container already exists.")
})
jobsAdmin.JobsAdmin.LogToJobLog(fmt.Sprintf("failed to initialize destination container %s; the transfer will continue (but be wary it may fail): %s", bucketName, err), pipeline.LogWarning)
jobsAdmin.JobsAdmin.LogToJobLog(fmt.Sprintf("failed to initialize destination container %s; the transfer will continue (but be wary it may fail).", bucketName), pipeline.LogWarning)
jobsAdmin.JobsAdmin.LogToJobLog(fmt.Sprintf("Error %s", err), pipeline.LogDebug)
seenFailedContainers[bucketName] = true
}
}
@@ -220,13 +222,14 @@ func (cca *CookedCopyCmdArgs) initEnumerator(jobPartOrder common.CopyJobPartOrde
resName, err := containerResolver.ResolveName(cName)

if err == nil {
err = cca.createDstContainer(resName, cca.Destination, ctx, existingContainers, azcopyLogVerbosity)
err = cca.createDstContainer(resName, cca.Destination, ctx, existingContainers, common.ELogLevel.None())

if _, ok := seenFailedContainers[dstContainerName]; err != nil && jobsAdmin.JobsAdmin != nil && !ok {
logDstContainerCreateFailureOnce.Do(func() {
glcm.Info("Failed to create one or more destination container(s). Your transfers may still succeed if the container already exists.")
})
jobsAdmin.JobsAdmin.LogToJobLog(fmt.Sprintf("failed to initialize destination container %s; the transfer will continue (but be wary it may fail): %s", dstContainerName, err), pipeline.LogWarning)
jobsAdmin.JobsAdmin.LogToJobLog(fmt.Sprintf("failed to initialize destination container %s; the transfer will continue (but be wary it may fail).", resName), pipeline.LogWarning)
jobsAdmin.JobsAdmin.LogToJobLog(fmt.Sprintf("Error %s", err), pipeline.LogDebug)
seenFailedContainers[dstContainerName] = true
}
}
28 changes: 20 additions & 8 deletions cmd/zc_enumerator.go
@@ -47,12 +47,13 @@ import (
// we can add more properties if needed, as this is easily extensible
// ** DO NOT instantiate directly, always use newStoredObject ** (to make sure its fully populated and any preprocessor method runs)
type StoredObject struct {
name string
entityType common.EntityType
lastModifiedTime time.Time
size int64
md5 []byte
blobType azblob.BlobType // will be "None" when unknown or not applicable
name string
entityType common.EntityType
lastModifiedTime time.Time
smbLastModifiedTime time.Time
size int64
md5 []byte
blobType azblob.BlobType // will be "None" when unknown or not applicable

// all of these will be empty when unknown or not applicable.
contentDisposition string
@@ -92,7 +93,16 @@ type StoredObject struct {
}

func (s *StoredObject) isMoreRecentThan(storedObject2 StoredObject) bool {
return s.lastModifiedTime.After(storedObject2.lastModifiedTime)
lmtA := s.lastModifiedTime
if !s.smbLastModifiedTime.IsZero() {
lmtA = s.smbLastModifiedTime
}
lmtB := storedObject2.lastModifiedTime
if !storedObject2.smbLastModifiedTime.IsZero() {
lmtB = storedObject2.smbLastModifiedTime
}

return lmtA.After(lmtB)
}

func (s *StoredObject) isSingleSourceFile() bool {
@@ -569,7 +579,9 @@ func InitResourceTraverser(resource common.ResourceString, location common.Locat
type objectProcessor func(storedObject StoredObject) error

// TODO: consider making objectMorpher an interface, not a func, and having newStoredObject take an array of them, instead of just one
// Might be easier to debug
//
// Might be easier to debug
//
// modifies a StoredObject, but does NOT process it. Used for modifications, such as pre-pending a parent path
type objectMorpher func(storedObject *StoredObject)

16 changes: 13 additions & 3 deletions cmd/zc_traverser_file.go
@@ -87,6 +87,9 @@ func (t *fileTraverser) Traverse(preprocessor objectMorpher, processor objectPro
targetURLParts.ShareName,
)

smbLastWriteTime, _ := time.Parse(azfile.ISO8601, fileProperties.FileLastWriteTime()) // no need to worry about error since we'll only check against it if it's non-zero for sync
storedObject.smbLastModifiedTime = smbLastWriteTime

if t.incrementEnumerationCounter != nil {
t.incrementEnumerationCounter(common.EEntityType.File())
}
@@ -111,6 +114,7 @@ func (t *fileTraverser) Traverse(preprocessor objectMorpher, processor objectPro

// We need to omit some properties if we don't get properties
lmt := time.Time{}
smbLMT := time.Time{}
var contentProps contentPropsProvider = noContentProps
var meta common.Metadata = nil

@@ -124,6 +128,7 @@ func (t *fileTraverser) Traverse(preprocessor objectMorpher, processor objectPro
}, err
}
lmt = fullProperties.LastModified()
smbLMT, _ = time.Parse(azfile.ISO8601, fullProperties.FileLastWriteTime())
if f.entityType == common.EEntityType.File() {
contentProps = fullProperties.(*azfile.FileGetPropertiesResponse) // only files have content props. Folders don't.
// Get an up-to-date size, because it's documented that the size returned by the listing might not be up-to-date,
@@ -135,7 +140,7 @@ func (t *fileTraverser) Traverse(preprocessor objectMorpher, processor objectPro
}
meta = common.FromAzFileMetadataToCommonMetadata(fullProperties.NewMetadata())
}
return newStoredObject(
obj := newStoredObject(
preprocessor,
getObjectNameOnly(f.name),
relativePath,
@@ -146,7 +151,11 @@ func (t *fileTraverser) Traverse(preprocessor objectMorpher, processor objectPro
noBlobProps,
meta,
targetURLParts.ShareName,
), nil
)

obj.smbLastModifiedTime = smbLMT

return obj, nil
}

processStoredObject := func(s StoredObject) error {
@@ -262,7 +271,7 @@ func newFileTraverser(rawURL *url.URL, p pipeline.Pipeline, ctx context.Context,
return
}

// allows polymorphic treatment of folders and files
// allows polymorphic treatment of folders and files
type azfileEntity struct {
name string
contentLength int64
@@ -301,4 +310,5 @@ func newAzFileRootFolderEntity(rootDir azfile.DirectoryURL, name string) azfileE
type azfilePropertiesAdapter interface {
NewMetadata() azfile.Metadata
LastModified() time.Time
FileLastWriteTime() string
}
1 change: 1 addition & 0 deletions common/environment.go
@@ -70,6 +70,7 @@ var VisibleEnvironmentVariables = []EnvironmentVariable{
EEnvironmentVariable.CPKEncryptionKeySHA256(),
EEnvironmentVariable.DisableSyslog(),
EEnvironmentVariable.MimeMapping(),
EEnvironmentVariable.DownloadToTempPath(),
}

var EEnvironmentVariable = EnvironmentVariable{}
2 changes: 1 addition & 1 deletion common/folderDeletionManager.go
@@ -130,7 +130,7 @@ func (s *standardFolderDeletionManager) getParent(u *url.URL) (*url.URL, bool) {
out := s.clean(u)
out.Path = out.Path[:strings.LastIndex(out.Path, "/")]
if out.RawPath != "" {
out.RawPath = out.Path[:strings.LastIndex(out.RawPath, "/")]
out.RawPath = out.RawPath[:strings.LastIndex(out.RawPath, "/")]
}
return out, true
}
5 changes: 5 additions & 0 deletions common/oauthTokenManager.go
@@ -774,6 +774,11 @@ func fixupTokenJson(bytes []byte) []byte {
separatorString := `"not_before":"`
stringSlice := strings.Split(byteSliceToString, separatorString)

// OIDC token issuer returns an integer for "not_before" and not a string
if len(stringSlice) == 1 {
return bytes
}

if stringSlice[1][0] != '"' {
return bytes
}
2 changes: 1 addition & 1 deletion common/version.go
@@ -1,6 +1,6 @@
package common

const AzcopyVersion = "10.16.1"
const AzcopyVersion = "10.16.2"
const UserAgent = "AzCopy/" + AzcopyVersion
const S3ImportUserAgent = "S3Import " + UserAgent
const GCPImportUserAgent = "GCPImport " + UserAgent
5 changes: 5 additions & 0 deletions e2etest/declarativeResourceAdapters.go
@@ -111,6 +111,11 @@ func (a filesResourceAdapter) toHeaders(c asserter, share azfile.ShareURL) azfil
headers.SMBProperties.FileAttributes = &attribs
}

if a.obj.creationProperties.lastWriteTime != nil {
lwt := *a.obj.creationProperties.lastWriteTime
headers.SMBProperties.FileLastWriteTime = &lwt
}

props := a.obj.creationProperties.contentHeaders
if props == nil {
return headers
20 changes: 13 additions & 7 deletions e2etest/scenario_helpers.go
@@ -123,6 +123,9 @@ func (s scenarioHelper) generateLocalFilesFromList(c asserter, options *generate
if file.creationProperties.smbPermissionsSddl != nil {
osScenarioHelper{}.setFileSDDLString(c, filepath.Join(options.dirPath, file.name), *file.creationProperties.smbPermissionsSddl)
}
if file.creationProperties.lastWriteTime != nil {
c.AssertNoErr(os.Chtimes(filepath.Join(options.dirPath, file.name), time.Now(), *file.creationProperties.lastWriteTime), "set times")
}
} else {
sourceData, err := s.generateLocalFile(
filepath.Join(options.dirPath, file.name),
@@ -141,6 +144,9 @@
if file.creationProperties.smbPermissionsSddl != nil {
osScenarioHelper{}.setFileSDDLString(c, filepath.Join(options.dirPath, file.name), *file.creationProperties.smbPermissionsSddl)
}
if file.creationProperties.lastWriteTime != nil {
c.AssertNoErr(os.Chtimes(filepath.Join(options.dirPath, file.name), time.Now(), *file.creationProperties.lastWriteTime), "set times")
}
}
}

@@ -690,7 +696,7 @@ func (scenarioHelper) generateAzureFilesFromList(c asserter, options *generateAz
c.AssertNoErr(err)
}

if f.creationProperties.smbPermissionsSddl != nil || f.creationProperties.smbAttributes != nil {
if f.creationProperties.smbPermissionsSddl != nil || f.creationProperties.smbAttributes != nil || f.creationProperties.lastWriteTime != nil {
_, err := dir.SetProperties(ctx, ad.toHeaders(c, options.shareURL).SMBProperties)
c.AssertNoErr(err)

@@ -743,7 +749,12 @@ func (scenarioHelper) generateAzureFilesFromList(c asserter, options *generateAz
c.AssertNoErr(err)
c.Assert(cResp.StatusCode(), equals(), 201)

if f.creationProperties.smbPermissionsSddl != nil || f.creationProperties.smbAttributes != nil {
_, err = file.UploadRange(context.Background(), 0, contentR, nil)
if err == nil {
c.Failed()
}

if f.creationProperties.smbPermissionsSddl != nil || f.creationProperties.smbAttributes != nil || f.creationProperties.lastWriteTime != nil {
/*
via Jason Shay:
Providing securityKey/SDDL during 'PUT File' and 'PUT Properties' can and will provide different results/semantics.
@@ -773,11 +784,6 @@ func (scenarioHelper) generateAzureFilesFromList(c asserter, options *generateAz
}
}

_, err = file.UploadRange(context.Background(), 0, contentR, nil)
if err == nil {
c.Failed()
}

// TODO: do we want to put some random content into it?
}
}
29 changes: 28 additions & 1 deletion e2etest/zt_preserve_smb_properties_test.go
@@ -83,8 +83,8 @@ func TestProperties_SMBPermissionsSDDLPreserved(t *testing.T) {
}

// TODO: add some tests (or modify the above) to make assertions about case preservation (or not) in metadata
// See https://github.com/Azure/azure-storage-azcopy/issues/113 (which incidentally, I'm not observing in the tests above, for reasons unknown)
//
// See https://github.com/Azure/azure-storage-azcopy/issues/113 (which incidentally, I'm not observing in the tests above, for reasons unknown)
func TestProperties_SMBDates(t *testing.T) {
RunScenarios(t, eOperation.CopyAndSync(), eTestFromTo.Other(common.EFromTo.LocalFile(), common.EFromTo.FileLocal()), eValidate.Auto(), anonymousAuthOnly, anonymousAuthOnly, params{
recursive: true,
@@ -265,3 +265,30 @@ func TestProperties_SMBWithCopyWithShareRoot(t *testing.T) {
"",
)
}

func TestProperties_SMBTimes(t *testing.T) {
RunScenarios(
t,
eOperation.Sync(),
eTestFromTo.Other(common.EFromTo.FileLocal()),
eValidate.Auto(),
anonymousAuthOnly,
anonymousAuthOnly,
params{
recursive: true,
preserveSMBInfo: true,
},
nil,
testFiles{
defaultSize: "1K",

shouldSkip: []interface{}{
folder("", with{lastWriteTime: time.Now().Add(-time.Hour)}), // If the fix worked, these should not be overwritten.
f("asdf.txt", with{lastWriteTime: time.Now().Add(-time.Hour)}), // If the fix did not work, we'll be relying upon the service's "real" LMT, which is not what we persisted, and an hour ahead of our files.
},
},
EAccountType.Standard(),
EAccountType.Standard(),
"",
)
}
2 changes: 1 addition & 1 deletion jobsAdmin/JobsAdmin.go
@@ -545,7 +545,7 @@ func (ja *jobsAdmin) LogToJobLog(msg string, level pipeline.LogLevel) {
if level <= pipeline.LogWarning {
prefix = fmt.Sprintf("%s: ", common.LogLevel(level)) // so readers can find serious ones, but information ones still look uncluttered without INFO:
}
ja.jobLogger.Log(pipeline.LogWarning, prefix+msg) // use LogError here, so that it forces these to get logged, even if user is running at warning level instead of Info. They won't have "warning" prefix, if Info level was passed in to MessagesForJobLog
ja.jobLogger.Log(level, prefix+msg)
}

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
