Fix uploading zipped tiff (should set needsConversion=true) #7856

Merged 7 commits on Jun 11, 2024
Changes from 5 commits
1 change: 1 addition & 0 deletions CHANGELOG.unreleased.md
@@ -24,6 +24,7 @@ For upgrade instructions, please check the [migration guide](MIGRATIONS.released
### Fixed
- Fixed a bug where brushing on a fallback segmentation with active mapping and with segment index file would lead to failed saves. [#7833](https://github.com/scalableminds/webknossos/pull/7833)
- Fixed a bug where sometimes old mismatching javascript code would be served after upgrades. [#7854](https://github.com/scalableminds/webknossos/pull/7854)
+- Fixed a bug where dataset uploads of zipped tiff data via the UI would be rejected. [#7856](https://github.com/scalableminds/webknossos/pull/7856)

### Removed

1 change: 1 addition & 0 deletions conf/messages
@@ -109,6 +109,7 @@ dataset.upload.linkRestricted=Can only link layers of datasets that are either p
dataset.upload.invalidLinkedLayers=Could not link all requested layers
dataset.upload.noFiles=Tried to finish upload with no files. May be a retry of a failed finish request, see previous errors.
dataset.upload.storageExceeded=Cannot upload dataset because the storage quota of the organization is exceeded.
+dataset.upload.finishFailed=Failed to finalize dataset upload.
dataset.explore.failed.readFile=Failed to read remote file
dataset.explore.magDtypeMismatch=Element class must be the same for all mags of a layer. Got {0}
dataset.explore.autoAdd.failed=Failed to automatically import the explored dataset.
6 changes: 3 additions & 3 deletions frontend/javascripts/admin/dataset/dataset_upload_view.tsx
@@ -511,9 +511,9 @@ class DatasetUploadView extends React.Component<PropsWithFormAndRouter, State> {
zipFile: [],
});
}
- // We return here since not more than 1 zip archive is supported anyway. This is guarded
- // against via form validation.
- return;
+ // The loop breaks here in case of zip because at most one zip archive is supported anyway.
+ // Form validation takes care of that assertion.
+ break;
}
}

@@ -195,7 +195,7 @@ class DataSourceController @Inject()(
result <- accessTokenService.validateAccess(UserAccessRequest.writeDataSource(dataSourceId),
urlOrHeaderToken(token, request)) {
for {
- (dataSourceId, datasetSizeBytes) <- uploadService.finishUpload(request.body)
+ (dataSourceId, datasetSizeBytes) <- uploadService.finishUpload(request.body) ?~> "finishUpload.failed"
_ <- remoteWebknossosClient.reportUpload(
dataSourceId,
datasetSizeBytes,
@@ -290,7 +290,8 @@ class UploadService @Inject()(dataSourceRepository: DataSourceRepository,
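For context on the ?~> operator added above: it chains a descriptive message onto a failing Fox so that callers see a meaningful error instead of a bare exception. Below is a minimal sketch of that chaining idea; SimpleFox and its fields are hypothetical stand-ins for illustration, not the actual webknossos Fox API.

```scala
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical stand-in for the Fox error monad, for illustration only.
final case class SimpleFox[+A](result: Future[Either[String, A]]) {
  // Prepend a context message if the wrapped computation failed, analogous to:
  //   uploadService.finishUpload(request.body) ?~> "finishUpload.failed"
  def ?~>(context: String)(implicit ec: ExecutionContext): SimpleFox[A] =
    SimpleFox(result.map {
      case Left(error) => Left(s"$context <- $error")
      case success     => success
    })
}
```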
} yield explored)
.toList)
combinedLayers = exploreLocalLayerService.makeLayerNamesUnique(dataSources.flatMap(_.dataLayers))
- dataSource = GenericDataSource[DataLayer](dataSourceId, combinedLayers, dataSources.head.scale)
+ firstExploredDatasource <- dataSources.headOption.toFox
+ dataSource = GenericDataSource[DataLayer](dataSourceId, combinedLayers, firstExploredDatasource.scale)
Comment on lines +293 to +294
Contributor:

What is the advantage of this change? To me the code semantically reads the same, just that the check for whether dataSources has at least one item is more explicit. Or am I misunderstanding something here?

@fm3 (Member, Author) replied on Jun 7, 2024:

head on an empty list raises an exception, which bypasses some of our error handling. headOption.toFox creates a Fox.failure that will yield nicer error messages in our context.
I'd say head is always a code smell in our codebase, unless it is directly wrapped in a tryo (but then headOption should always be possible instead), or we are absolutely certain that the list will not be empty.

path <- Fox.runIf(combinedLayers.nonEmpty)(
exploreLocalLayerService.writeLocalDatasourceProperties(dataSource, path))
} yield path
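To make the point from the review thread concrete, here is a small self-contained sketch of why headOption composes with a monadic error channel while head does not. It uses Either as a stand-in for Fox, which is an assumption for illustration.

```scala
// Either stands in for Fox here; the real headOption.toFox works analogously.
val dataSources: List[Double] = List.empty

// dataSources.head would throw NoSuchElementException, escaping any
// monadic error handling in the surrounding for-comprehension.

// headOption keeps the empty case as a value that we can turn into a
// descriptive failure instead:
val scale: Either[String, Double] =
  dataSources.headOption.toRight("Expected at least one explored data source")
// scale == Left("Expected at least one explored data source")
```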
@@ -418,7 +419,8 @@ class UploadService @Inject()(dataSourceRepository: DataSourceRepository,

private def looksLikeN5Multilayer(dataSourceDir: Path): Box[Boolean] =
for {
- _ <- containsMatchingFile(List(FILENAME_ATTRIBUTES_JSON), dataSourceDir, 1) // root attributes.json
+ matchingFileIsPresent <- containsMatchingFile(List(FILENAME_ATTRIBUTES_JSON), dataSourceDir, 1) // root attributes.json
+ _ <- bool2Box(matchingFileIsPresent)
directories <- PathUtils.listDirectories(dataSourceDir, silent = false)
detectedLayerBoxes = directories.map(looksLikeN5MultiscalesLayer)
_ <- bool2Box(detectedLayerBoxes.forall(_.openOr(false)))
@@ -433,7 +435,8 @@ class UploadService @Inject()(dataSourceRepository: DataSourceRepository,
// - directories 0 to n
// - attributes.json (N5Header, dimension, compression,...)
for {
- _ <- containsMatchingFile(List(FILENAME_ATTRIBUTES_JSON), dataSourceDir, 1) // root attributes.json
+ matchingFileIsPresent <- containsMatchingFile(List(FILENAME_ATTRIBUTES_JSON), dataSourceDir, 1) // root attributes.json
+ _ <- bool2Box(matchingFileIsPresent)
datasetDir <- PathUtils.listDirectories(dataSourceDir, silent = false).map(_.headOption)
scaleDirs <- datasetDir.map(PathUtils.listDirectories(_, silent = false)).getOrElse(Full(Seq.empty))
_ <- bool2Box(scaleDirs.length == 1) // Must be 1, otherwise it is a multiscale dataset
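Both hunks above fix the same subtle pitfall: containsMatchingFile returns a Box[Boolean], and Full(false) is still a successful Box, so binding it to _ never short-circuited the for-comprehension; the presence check was silently ignored. Binding the Boolean and passing it through bool2Box turns false into a failure. A self-contained sketch of the pattern, using Option as a stand-in for Box and a hypothetical containsMatchingFile:

```scala
// Option stands in for net.liftweb Box; the helpers are simplified assumptions.
def bool2Box(b: Boolean): Option[Unit] =
  if (b) Some(()) else None // false now short-circuits the for-comprehension

def containsMatchingFile(fileNames: List[String]): Option[Boolean] =
  Some(fileNames.contains("attributes.json")) // can succeed with value false

def looksLikeN5(fileNames: List[String]): Option[Unit] =
  for {
    matchingFileIsPresent <- containsMatchingFile(fileNames) // Some(false) does NOT stop here
    _ <- bool2Box(matchingFileIsPresent)                     // ...but this does
  } yield ()

// looksLikeN5(List("attributes.json")) == Some(())
// looksLikeN5(List("other.json"))      == None
```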