
Fix dragging annotation into a different orga’s dataset #7816

Merged: 5 commits, May 27, 2024
1 change: 1 addition & 0 deletions CHANGELOG.unreleased.md
@@ -16,6 +16,7 @@ For upgrade instructions, please check the [migration guide](MIGRATIONS.released
- Updated several backend dependencies for optimized stability and performance. [#7782](https://github.com/scalableminds/webknossos/pull/7782)
- Voxelytics workflows can be searched by name and hash. [#7790](https://github.com/scalableminds/webknossos/pull/7790)
- If a self-hosted WEBKNOSSOS instance has not been updated for six months or more, a closable banner proposes an upgrade to webknossos.org. [#7768](https://github.com/scalableminds/webknossos/pull/7768)
- Uploading an annotation into a dataset that it was not created for now also works if the dataset is in a different organization. [#7816](https://github.com/scalableminds/webknossos/pull/7816)

### Changed
- Non-admin or -manager users can no longer start long-running jobs that create datasets. This includes annotation materialization and AI inferrals. [#7753](https://github.com/scalableminds/webknossos/pull/7753)
7 changes: 6 additions & 1 deletion app/controllers/AnnotationIOController.scala
@@ -103,9 +103,14 @@ class AnnotationIOController @Inject()(
request.body.dataParts("createGroupForEachFile").headOption.contains("true")
val overwritingDatasetName: Option[String] =
request.body.dataParts.get("datasetName").flatMap(_.headOption)
val overwritingOrganizationName: Option[String] =
request.body.dataParts.get("organizationName").flatMap(_.headOption)
val attachedFiles = request.body.files.map(f => (f.ref.path.toFile, f.filename))
val parsedFiles =
annotationUploadService.extractFromFiles(attachedFiles, useZipName = true, overwritingDatasetName)
annotationUploadService.extractFromFiles(attachedFiles,
useZipName = true,
overwritingDatasetName,
overwritingOrganizationName)
val parsedFilesWrapped =
annotationUploadService.wrapOrPrefixGroups(parsedFiles.parseResults, shouldCreateGroupForEachFile)
val parseResultsFiltered: List[NmlParseResult] = parsedFilesWrapped.filter(_.succeeded)
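The two dataParts lookups in the controller above share one pattern: turn an optional, possibly empty multipart field into an `Option[String]`. A minimal standalone sketch of that pattern, with a plain `Map` standing in for Play's `dataParts` (names here are illustrative, not the real controller):

```scala
object DataPartsDemo {
  // Play's multipart dataParts is a Map[String, Seq[String]]; the controller
  // turns an optional, possibly empty part into an Option[String] by taking
  // the first value, if any.
  def firstPart(dataParts: Map[String, Seq[String]], key: String): Option[String] =
    dataParts.get(key).flatMap(_.headOption)

  def main(args: Array[String]): Unit = {
    val parts = Map("datasetName" -> Seq("ds1"), "empty" -> Seq.empty[String])
    println(firstPart(parts, "datasetName"))      // Some(ds1)
    println(firstPart(parts, "empty"))            // None (key present, no values)
    println(firstPart(parts, "organizationName")) // None (key absent)
  }
}
```

The `flatMap(_.headOption)` collapses both the missing-key and the empty-sequence cases into `None`, which is why the controller can treat an absent form field and an empty one identically.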
46 changes: 37 additions & 9 deletions app/models/annotation/AnnotationUploadService.scala
@@ -25,8 +25,9 @@ class AnnotationUploadService @Inject()(tempFileService: TempFileService) extend
private def extractFromNmlFile(file: File,
name: String,
overwritingDatasetName: Option[String],
overwritingOrganizationName: Option[String],
isTaskUpload: Boolean)(implicit m: MessagesProvider): NmlParseResult =
extractFromNml(new FileInputStream(file), name, overwritingDatasetName, isTaskUpload)
extractFromNml(new FileInputStream(file), name, overwritingDatasetName, overwritingOrganizationName, isTaskUpload)

private def formatChain(chain: Box[Failure]): String = chain match {
case Full(failure) =>
@@ -37,9 +38,10 @@
private def extractFromNml(inputStream: InputStream,
name: String,
overwritingDatasetName: Option[String],
overwritingOrganizationName: Option[String],
isTaskUpload: Boolean,
basePath: Option[String] = None)(implicit m: MessagesProvider): NmlParseResult =
NmlParser.parse(name, inputStream, overwritingDatasetName, isTaskUpload, basePath) match {
NmlParser.parse(name, inputStream, overwritingDatasetName, overwritingOrganizationName, isTaskUpload, basePath) match {
case Full((skeletonTracing, uploadedVolumeLayers, description, wkUrl)) =>
NmlParseSuccess(name, skeletonTracing, uploadedVolumeLayers, description, wkUrl)
case Failure(msg, _, chain) => NmlParseFailure(name, msg + chain.map(_ => formatChain(chain)).getOrElse(""))
@@ -50,6 +52,7 @@ class AnnotationUploadService @Inject()(tempFileService: TempFileService) extend
zipFileName: Option[String],
useZipName: Boolean,
overwritingDatasetName: Option[String],
overwritingOrganizationName: Option[String],
isTaskUpload: Boolean)(implicit m: MessagesProvider): MultiNmlParseResult = {
val name = zipFileName getOrElse file.getName
var otherFiles = Map.empty[String, File]
Expand All @@ -58,7 +61,12 @@ class AnnotationUploadService @Inject()(tempFileService: TempFileService) extend
ZipIO.withUnziped(file) { (filename, inputStream) =>
if (filename.toString.endsWith(".nml")) {
val result =
extractFromNml(inputStream, filename.toString, overwritingDatasetName, isTaskUpload, Some(file.getPath))
extractFromNml(inputStream,
filename.toString,
overwritingDatasetName,
overwritingOrganizationName,
isTaskUpload,
Some(file.getPath))
parseResults ::= (if (useZipName) result.withName(name) else result)
} else {
val tempFile: Path = tempFileService.create(file.getPath.replaceAll("/", "_") + filename.toString)
@@ -136,6 +144,7 @@ class AnnotationUploadService @Inject()(tempFileService: TempFileService) extend
def extractFromFiles(files: Seq[(File, String)],
useZipName: Boolean,
overwritingDatasetName: Option[String] = None,
overwritingOrganizationName: Option[String] = None,
isTaskUpload: Boolean = false)(implicit m: MessagesProvider): MultiNmlParseResult =
files.foldLeft(NmlResults.MultiNmlParseResult()) {
case (acc, (file, name)) =>
@@ -145,30 +154,49 @@ class AnnotationUploadService @Inject()(tempFileService: TempFileService) extend
if (allZips)
acc.combineWith(
extractFromFiles(
extractFromZip(file, Some(name), useZipName, overwritingDatasetName, isTaskUpload).otherFiles.toSeq
.map(tuple => (tuple._2, tuple._1)),
extractFromZip(file,
Some(name),
useZipName,
overwritingDatasetName,
overwritingOrganizationName,
isTaskUpload).otherFiles.toSeq.map(tuple => (tuple._2, tuple._1)),
useZipName,
overwritingDatasetName,
overwritingOrganizationName,
isTaskUpload
))
else
acc.combineWith(extractFromFile(file, name, useZipName, overwritingDatasetName, isTaskUpload))
acc.combineWith(
extractFromFile(file,
name,
useZipName,
overwritingDatasetName,
overwritingOrganizationName,
isTaskUpload))
case _ => acc
} else
acc.combineWith(extractFromFile(file, name, useZipName, overwritingDatasetName, isTaskUpload))
acc.combineWith(
extractFromFile(file, name, useZipName, overwritingDatasetName, overwritingOrganizationName, isTaskUpload))
}

private def extractFromFile(file: File,
fileName: String,
useZipName: Boolean,
overwritingDatasetName: Option[String],
overwritingOrganizationName: Option[String],
isTaskUpload: Boolean)(implicit m: MessagesProvider): MultiNmlParseResult =
if (fileName.endsWith(".zip")) {
logger.trace("Extracting from Zip file")
extractFromZip(file, Some(fileName), useZipName, overwritingDatasetName, isTaskUpload)
extractFromZip(file,
Some(fileName),
useZipName,
overwritingDatasetName,
overwritingOrganizationName,
isTaskUpload)
} else {
logger.trace("Extracting from Nml file")
val parseResult = extractFromNmlFile(file, fileName, overwritingDatasetName, isTaskUpload)
val parseResult =
extractFromNmlFile(file, fileName, overwritingDatasetName, overwritingOrganizationName, isTaskUpload)
MultiNmlParseResult(List(parseResult), Map.empty)
}

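The `extractFromFiles` fold above accumulates per-file parse results into one combined result, recursing into zips as it goes. The accumulation shape can be sketched with a toy result type (`MultiResult` here is a hypothetical stand-in, not the real `NmlResults.MultiNmlParseResult`):

```scala
// Toy model of the accumulation in extractFromFiles: each file yields a
// result, and results are folded into one combined value via combineWith.
final case class MultiResult(names: List[String]) {
  def combineWith(other: MultiResult): MultiResult = MultiResult(names ++ other.names)
}

object FoldDemo {
  def extractOne(name: String): MultiResult = MultiResult(List(name))

  def extractAll(files: Seq[String]): MultiResult =
    files.foldLeft(MultiResult(Nil)) { (acc, f) => acc.combineWith(extractOne(f)) }

  def main(args: Array[String]): Unit =
    println(extractAll(Seq("a.nml", "b.zip")).names) // List(a.nml, b.zip)
}
```

Starting from an empty accumulator and combining left to right preserves file order, which matters in the real service because zip contents are re-fed through the same fold.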
3 changes: 2 additions & 1 deletion app/models/annotation/nml/NmlParser.scala
@@ -41,6 +41,7 @@ object NmlParser extends LazyLogging with ProtoGeometryImplicits with ColorGener
def parse(name: String,
nmlInputStream: InputStream,
overwritingDatasetName: Option[String],
overwritingOrganizationName: Option[String],
isTaskUpload: Boolean,
basePath: Option[String] = None)(
implicit m: MessagesProvider): Box[(Option[SkeletonTracing], List[UploadedVolumeLayer], String, Option[String])] =
@@ -62,7 +63,7 @@ object NmlParser extends LazyLogging with ProtoGeometryImplicits with ColorGener
_ <- TreeValidator.validateTrees(treesSplit, treeGroupsAfterSplit, branchPoints, comments)
additionalAxisProtos <- parseAdditionalAxes(parameters \ "additionalAxes")
datasetName = overwritingDatasetName.getOrElse(parseDatasetName(parameters \ "experiment"))
organizationName = if (overwritingDatasetName.isDefined) None
organizationName = if (overwritingDatasetName.isDefined) overwritingOrganizationName
else parseOrganizationName(parameters \ "experiment")
Review comment on lines +66 to 67 (Contributor):

Shouldn't the if-check test whether overwritingOrganizationName is defined rather than overwritingDatasetName? Or is there another place that ensures that whenever overwritingDatasetName is defined, overwritingOrganizationName is also defined?

And maybe the same syntax as in the line above can be used here as well, since in my opinion the code is more readable that way. Or is there a semantic difference between my suggestion and your code?

Suggested change:
-      organizationName = if (overwritingDatasetName.isDefined) overwritingOrganizationName
-                         else parseOrganizationName(parameters \ "experiment")
+      organizationName = overwritingOrganizationName.getOrElse(parseOrganizationName(parameters \ "experiment"))

Reply (Member, author):

Yes, there is in fact a difference. I want the overwritingOrganizationName, even if it is None, whenever the overwritingDatasetName is defined.

This case will probably not happen, because the frontend sends both or neither. But from the backend's point of view, the organizationName from the NML becomes invalid once the datasetName is overwritten (this was already the case before this PR). So at that moment I want to yield None, so that it is later filled by the disambiguation logic via the dataset name, and not by the organizationName from the NML.
} yield {
val description = parseDescription(parameters \ "experiment")
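The semantic difference discussed in the review above can be checked with a small standalone sketch; `parseOrganizationNameFromNml` is a hypothetical stand-in returning a fixed value, not the real NML parser, and the reviewer's `getOrElse` suggestion is modeled with `orElse` to keep both variants as `Option[String]`:

```scala
object OrganizationFallbackDemo {
  // Hypothetical stand-in for parsing the organization out of the NML file.
  def parseOrganizationNameFromNml(): Option[String] = Some("orgFromNml")

  // Variant merged in this PR: once the dataset name is overwritten, the
  // NML's organization is treated as invalid, so the overwriting value is
  // used even when it is None.
  def mergedVariant(overwritingDatasetName: Option[String],
                    overwritingOrganizationName: Option[String]): Option[String] =
    if (overwritingDatasetName.isDefined) overwritingOrganizationName
    else parseOrganizationNameFromNml()

  // Reviewer's suggestion (modeled with orElse): always falls back to the
  // NML's organization when no overwriting organization is supplied.
  def suggestedVariant(overwritingDatasetName: Option[String],
                       overwritingOrganizationName: Option[String]): Option[String] =
    overwritingOrganizationName.orElse(parseOrganizationNameFromNml())

  def main(args: Array[String]): Unit = {
    // Dataset name overwritten but no organization supplied: the two
    // variants disagree, which is exactly the case the author's reply covers.
    println(mergedVariant(Some("newDataset"), None))    // None
    println(suggestedVariant(Some("newDataset"), None)) // Some(orgFromNml)
  }
}
```

With both overwriting values present, or both absent, the two variants agree; they diverge only in the dataset-overwritten-without-organization case, where the merged variant deliberately yields None so the later disambiguation-by-dataset-name logic can fill it in.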
@@ -270,6 +270,7 @@ class TracingLayoutView extends React.PureComponent<PropsWithRouter, State> {
nmlFile: files,
createGroupForEachFile,
datasetName: this.props.datasetName,
organizationName: this.props.organization,
},
});
this.props.history.push(`/annotations/${response.annotation.typ}/${response.annotation.id}`);
7 changes: 6 additions & 1 deletion test/backend/NMLUnitTestSuite.scala
@@ -45,7 +45,12 @@ class NMLUnitTestSuite extends PlaySpec {
val os = new ByteArrayOutputStream()
Await.result(nmlFunctionStream.writeTo(os)(scala.concurrent.ExecutionContext.global), Duration.Inf)
val array = os.toByteArray
NmlParser.parse("", new ByteArrayInputStream(array), None, isTaskUpload = true, None)
NmlParser.parse("",
new ByteArrayInputStream(array),
overwritingDatasetName = None,
overwritingOrganizationName = None,
isTaskUpload = true,
basePath = None)
}

def isParseSuccessful(