Engine Placer #357
Merged
Commits (11)
- 6a49659 Use placer for engine upload (nagem)
- c551769 Add start to engine placer tests (nagem)
- 44b3f5d Use base upload for engine upload (nagem)
- a45223c Clean up update hierarcy logic (nagem)
- 882ca9c Add engine acquisition test (nagem)
- b1824be Add additional tests for engine placer (nagem)
- 4f8aa48 Add test for metadata without files (nagem)
- 833a91f Cleanup debug logs (nagem)
- 09b163f Only explicitly set timestamp (nagem)
- bd06eeb Add test for subsequent file upload (nagem)
- 740b531 Fix bug with file metadata (nagem)
```diff
@@ -53,6 +53,5 @@
       "additionalProperties": false
     }
   },
-  "required": ["acquisition"],
   "additionalProperties": false
 }
```
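The hunk above drops `"required": ["acquisition"]` from the engine metadata schema, so a metadata payload without an `acquisition` key can now validate (the PR also adds a test for metadata without files). As a minimal hand-rolled illustration of what JSON Schema's `required` keyword enforces (a stand-in, not the project's validator; the real schema is much larger):

```python
def missing_required(schema, payload):
    """Return the schema-'required' keys absent from a payload dict."""
    return [key for key in schema.get('required', []) if key not in payload]

# Hypothetical schemas mirroring the before/after of the hunk above.
old_schema = {'required': ['acquisition'], 'additionalProperties': False}
new_schema = {'additionalProperties': False}

print(missing_required(old_schema, {}))  # the old schema flags 'acquisition' as missing
print(missing_required(new_schema, {}))  # the new schema has no required keys
```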
```diff
@@ -134,7 +134,6 @@ def process_upload(request, strategy, container_type=None, id=None, origin=None,
     if placer.sse and not response:
         raise Exception("Programmer error: response required")
     elif placer.sse:
-        log.debug('SSE')
         response.headers['Content-Type'] = 'text/event-stream; charset=utf-8'
         response.headers['Connection'] = 'keep-alive'
         response.app_iter = placer.finalize()
```

Review comment on this hunk: 🎉
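For context on the `text/event-stream` headers above: the placer's `finalize()` iterator is served to the client as server-sent events. A generic sketch of SSE framing (the message strings are hypothetical, not this codebase's actual event payloads):

```python
def sse_frames(messages):
    """Frame an iterable of strings as text/event-stream data chunks."""
    for msg in messages:
        # Each SSE event is a 'data: <payload>' line terminated by a blank line.
        yield 'data: %s\n\n' % msg

frames = list(sse_frames(['upload received', 'processing complete']))
```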
```diff
@@ -207,92 +206,51 @@ def engine(self):
         """
         .. http:post:: /api/engine

-            Confirm endpoint is ready for requests
+            Default behavior:
+                Uploads a list of files sent as file1, file2, etc to a existing
+                container and updates fields of the files, the container and it's
+                parents as specified in the metadata fileformfield using the
+                engine placer class
+            When ``level`` is ``analysis``:
+                Uploads a list of files to an existing analysis object, marking
+                all files as ``output=true`` using the job-based analyses placer
+                class

-            :query level: container_type
-            :query id: container_id
-            :query job: job_id
+            :param level: one of ``project``, ``session``, ``acquisition``, ``analysis``
+            :type level: string
+
+            :param id: Container ID
+            :type id: string
+
+            :param id: Job ID
+            :type id: string
+
+            :statuscode 200: no error
+            :statuscode 400: Target container ``level`` is required
+            :statuscode 400: Level must be ``project``, ``session``, ``acquisition``, ``analysis``
+            :statuscode 400: Target container ``id`` is required
+            :statuscode 402: Uploads must be from an authorized drone

-            :statuscode 400: describe me
-            :statuscode 402: describe me
-            :statuscode 404: describe me
         """

+        if not self.superuser_request:
+            self.abort(402, 'uploads must be from an authorized drone')
+
         level = self.get_param('level')
         if level is None:
-            self.abort(404, 'container level is required')
-        cont_id = self.get_param('id')
-        if not cont_id:
-            self.abort(404, 'container id is required')
+            self.abort(400, 'container level is required')
+        if level not in ['analysis', 'acquisition', 'session', 'project']:
+            self.abort(400, 'container level must be analysis, acquisition, session or project.')
+        cid = self.get_param('id')
+        if not cid:
+            self.abort(400, 'container id is required')
         else:
-            cont_id = bson.ObjectId(cont_id)
-        if level not in ['acquisition', 'analysis']:
-            self.abort(404, 'engine uploads are supported only at the acquisition or analysis level')
+            cid = bson.ObjectId(cid)

         if level == 'analysis':
             context = {'job_id': self.get_param('job')}
-            return process_upload(self.request, Strategy.analysis_job, origin=self.origin, container_type=level, id=cont_id, context=context)
-
-        if not self.superuser_request:
-            self.abort(402, 'uploads must be from an authorized drone')
-        with tempfile.TemporaryDirectory(prefix='.tmp', dir=config.get_item('persistent', 'data_path')) as tempdir_path:
-            try:
-                file_store = files.MultiFileStore(self.request, tempdir_path)
-            except files.FileStoreException as e:
-                self.abort(400, str(e))
-            if not file_store.metadata:
-                self.abort(400, 'metadata is missing')
-            payload_schema_uri = util.schema_uri('input', 'enginemetadata.json')
-            metadata_validator = validators.from_schema_path(payload_schema_uri)
-            metadata_validator(file_store.metadata, 'POST')
-            file_infos = file_store.metadata['acquisition'].pop('files', [])
-            now = datetime.datetime.utcnow()
-            try:
-                acquisition_obj = hierarchy.update_container_hierarchy(file_store.metadata, cont_id, level)
-            except APIStorageException as e:
-                self.abort(400, e.message)
-            # move the files before updating the database
-            for name, parsed_file in file_store.files.items():
-                fileinfo = parsed_file.info
-                target_path = os.path.join(config.get_item('persistent', 'data_path'), util.path_from_hash(fileinfo['hash']))
-                files.move_file(parsed_file.path, target_path)
-            # merge infos from the actual file and from the metadata
-            merged_files = hierarchy.merge_fileinfos(file_store.files, file_infos)
-            # update the fileinfo in mongo if a file already exists
-            for f in acquisition_obj['files']:
-                merged_file = merged_files.get(f['name'])
-                if merged_file:
-                    fileinfo = merged_file.info
-                    fileinfo['modified'] = now
-                    acquisition_obj = hierarchy.update_fileinfo('acquisitions', acquisition_obj['_id'], fileinfo)
-                    fileinfo['existing'] = True
-            # create the missing fileinfo in mongo
-            for name, merged_file in merged_files.items():
-                fileinfo = merged_file.info
-                # if the file exists we don't need to create it
-                # skip update fileinfo for files that don't have a path
-                if not fileinfo.get('existing') and merged_file.path:
-                    fileinfo['mimetype'] = fileinfo.get('mimetype') or util.guess_mimetype(name)
-                    fileinfo['created'] = now
-                    fileinfo['modified'] = now
-                    fileinfo['origin'] = self.origin
-                    acquisition_obj = hierarchy.add_fileinfo('acquisitions', acquisition_obj['_id'], fileinfo)
-
-            for f in acquisition_obj['files']:
-                if f['name'] in file_store.files:
-                    file_ = {
-                        'name': f['name'],
-                        'hash': f['hash'],
-                        'type': f.get('type'),
-                        'measurements': f.get('measurements', []),
-                        'mimetype': f.get('mimetype')
-                    }
-                    rules.create_jobs(config.db, acquisition_obj, 'acquisition', file_)
-            return [{'name': k, 'hash': v.info.get('hash'), 'size': v.info.get('size')} for k, v in merged_files.items()]
+            return process_upload(self.request, Strategy.analysis_job, origin=self.origin, container_type=level, id=cid, context=context)
+        else:
+            return process_upload(self.request, Strategy.engine, container_type=level, id=cid, origin=self.origin)

     def clean_packfile_tokens(self):
         """
```
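The rewritten handler deletes the inline file handling and defers it all to `process_upload`, keeping only parameter validation in `engine()`. That validation can be sketched in isolation (`UploadError` and `validate_engine_params` are stand-ins for the handler's `self.abort` flow, and the `bson.ObjectId` coercion is omitted):

```python
VALID_LEVELS = ('analysis', 'acquisition', 'session', 'project')

class UploadError(Exception):
    """Stand-in for self.abort(status, message)."""
    def __init__(self, status, message):
        super().__init__(message)
        self.status = status

def validate_engine_params(params):
    """Check the 'level' and 'id' query params the way the new handler does."""
    level = params.get('level')
    if level is None:
        raise UploadError(400, 'container level is required')
    if level not in VALID_LEVELS:
        raise UploadError(400, 'container level must be analysis, acquisition, session or project.')
    cid = params.get('id')
    if not cid:
        raise UploadError(400, 'container id is required')
    return level, cid
```

With these checks passing, the handler routes `analysis` uploads through the job-based analysis strategy and every other level through `Strategy.engine`.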
Review comment: Could you add a docstring with a few sentences describing what this does? I'm not sure if I've ever understood what (exactly) this function does without reverse-engineering it each time I read it 😄
Review comment: Same comment applies to the other diff'd functions in this file