Jkmarx/remove old node maps api (#2279)
* Remove node relationship model and references.

* Remove node_set model and references.

* Bump version (#2264)

* Bump version

* Change name to match expectations of on_startup.py

* Update tests to work with new IGV image version

* Update assertion with new Solr mock info

* Assert that `VisualizationError` is raised now that `launch()` is being called directly
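A check like the one described in this commit can be sketched in plain `unittest` (the `VisualizationError` class and `launch()` below are stand-ins, not the actual Refinery signatures):

```python
import unittest

class VisualizationError(Exception):
    """Stand-in for the error class named in the commit above."""

def launch():
    # Hypothetical direct launch that fails outside a running deployment.
    raise VisualizationError("visualization container unavailable")

class LaunchTests(unittest.TestCase):
    def test_launch_raises(self):
        # Calling launch() directly should surface the error, not hide it.
        with self.assertRaises(VisualizationError):
            launch()

suite = unittest.TestLoader().loadTestsFromTestCase(LaunchTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```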

* Scottx611x/set workflow tool analysis library_id (#2280)

* Set Analysis.library_id

* Test that WorkflowTool.analysis.library_id is set when we create a Galaxy Library

* Jkmarx/analysis tab url fix (#2276)

* Add facet name.

* Remove error caused from double encoding.

* Refactor facet_name method.

* Update test.

* Remove data_set_query which is not needed.

* Update analysis complete url.

* Update urls to all analysis links.

* Remove consoles.

* Remove self targets, unnecessary.

* Fix typo in `generate_tool_definitions` mgmt command (#2270)

* Mock USER_FILES_COLUMNS (#2282)

Tests pass locally. Fix #2235.
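Mocking a module-level setting so tests don't depend on its real value can be sketched as follows (the `settings` namespace and column names are illustrative stand-ins, not Refinery's actual configuration):

```python
import types
from unittest import mock

# Hypothetical stand-in for a settings object holding USER_FILES_COLUMNS;
# the real value lives in the project's Django settings module.
settings = types.SimpleNamespace(USER_FILES_COLUMNS="name,filetype,sample")

def visible_columns():
    # Code under test reads the setting at call time, so patching works.
    return settings.USER_FILES_COLUMNS.split(",")

with mock.patch.object(settings, "USER_FILES_COLUMNS", "uuid"):
    patched = visible_columns()   # sees the mocked value
restored = visible_columns()      # original value is restored on exit

print(patched, restored)  # → ['uuid'] ['name', 'filetype', 'sample']
```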

* Remove unused JS global. Fix #1765. (#2283)

* Scottx611x/tool manager test coverage (#2271)

* Add test for case where specified docker image has no version

* Add test coverage for `FileTypeValidationErrors`

* Add test coverage for ToolDefinitions with bad filetypes specified

* Add test coverage for `get_workflows()`

* `get_queryset()` shouldn't return an HTTPResponse;
The exception handling here also might mask the importance of the GuardianError.
Letting it propagate might not be a bad idea since this is one of those "shouldn't happen" scenarios, and knowing that something is wrong ASAP would be great.
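The trade-off described in this commit message — let a "shouldn't happen" error surface instead of converting it into a response — can be sketched in plain Python (the `GuardianError` class and function names are stand-ins, not the actual tool_manager code):

```python
class GuardianError(Exception):
    """Stand-in for django-guardian's error mentioned above."""

def fetch_objects():
    # Hypothetical permission-aware lookup that fails unexpectedly.
    raise GuardianError("object-level permission check failed")

def get_queryset_masking():
    # Anti-pattern: swallow the error and hand back a response-like
    # object from get_queryset(), which callers expect to be a queryset.
    try:
        return fetch_objects()
    except GuardianError:
        return {"status": 500}  # silently wrong type; real cause is lost

def get_queryset_propagating():
    # Preferred here: let the unexpected error propagate so the failure
    # is noticed as soon as possible.
    return fetch_objects()

masked = get_queryset_masking()
try:
    get_queryset_propagating()
    propagated = False
except GuardianError:
    propagated = True
print(type(masked).__name__, propagated)  # → dict True
```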

* Fix typos

* Remove comma

* Be more clear as to my intentions with the overriding of this setting

* Factor out commonalities between these tests

* Fix test

* Remove unused JS global. Fix #1765. (#2287)

* Remove extra scrollbar caused by auto class on exterior panel. (#2297)

* Mccalluc/misc travis log cleanup (#2302)

* geckodriver download progress can be 1000 lines in the log

* FF download also wastes space... but what about the unzip?

* Remove "PhantomJS 2.1.1 (Linux 0.0.0): Executed 638 of 648 SUCCESS" messages

* Last commit was in entirely the wrong place to quiet the JS tests
(There should be a better way to make them quiet, but it looks like we choose between this or a new package like mostly-quiet-grunt)

* Adding grep also confuses the result status

* Feels like there should be a better way, but this cleans up the output without confusing the status (#2303)

* Jkmarx/data set 1 cleanup (#2278)

* Remove analysis-launch module.

* Remove from unit test.

* Remove node-relationship module.

* Remove node-mapping module and add third party dependency to refinery.js.

* Clean up data-set-nav and remove references.

* Update unit tests.

* Remove select workflow module.

* Remove igv module.

* Fix typo.

* Remove solr_table_view.

* Remove analysis view.

* Remove solr_facet view.

* Remove pivot matrix.

* Remove data_set_configurator.

* Remove IGV references.

* Remove the file browser one table and some unnecessary code.

* Remove unneeded code since there's only one view.

* Remove float thread code.

* Remove float thread dependencies.

* Remove matrix styling.

* Remove old help modals, not dealing with provvis.

* Remove tipsy.

* Remove tipsy dependency in angular.

* Remove commented code which references old node sets.

* Remove node_set_manager reference.

* Remove node-set-manager references.

* Remove lingering merge tag.

* Add node set comments.

* Fix comment typo.

* Remove dependency that was added in the wrong location and tracked.

* Fix height error.

* Revert data_set.html.

* Jkmarx/provvis tab integration (#2289)

* Add provvis tab.

* Separate out provvis module initialization.

* Move ctrls to own file.

* Move directives to own files.

* Move initialization module, ctrl, & directives to own files.

* Add provvis module.

* Fix page redirect.

* Fix info-panel routing.

* Fix bug with panel groups redirect.

* Better solution to nav tab.

* Add padding to top of infobar.

* Add refinery-base class to btns.

* Remove modal for popover.

* Remove provenance view and template.

* Cosmetic changes.

* Prevent pre-generation from content.js when loading the data-set page.

* Add bootbox.

* Fix height issue.

* Fix missing closing tag.

* Fix outdent.

* Fix tabbing.
jkmarx committed Nov 1, 2017
1 parent 1076d57 commit 4f055bf
Showing 47 changed files with 647 additions and 5,364 deletions.
34 changes: 2 additions & 32 deletions refinery/analysis_manager/tests.py
@@ -15,12 +15,10 @@
from analysis_manager.models import AnalysisStatus
from analysis_manager.tasks import (_check_galaxy_history_state, _get_analysis,
_get_analysis_status, run_analysis)
from analysis_manager.utils import (_fetch_node_relationship, _fetch_node_set,
fetch_objects_required_for_analysis,
from analysis_manager.utils import (fetch_objects_required_for_analysis,
validate_analysis_config)
from analysis_manager.views import analysis_status, run
from core.models import (Analysis, DataSet, NodeRelationship, NodeSet, Project,
Workflow, WorkflowEngine)
from core.models import Analysis, DataSet, Project, Workflow, WorkflowEngine
from data_set_manager.models import Assay
from factory_boy.utils import (create_dataset_with_necessary_models,
make_analyses_with_single_dataset)
@@ -190,34 +188,6 @@ def test_fetch_objects_required_for_analyses_bad_dataset(self):
)
self.assertIn("Couldn't fetch DataSet", context.exception.message)

def test_fetch_nodeset_valid_uuid(self):
nodeset = NodeSet.objects.create(
study=self.study,
assay=self.assay
)
self.assertEqual(
nodeset,
_fetch_node_set(nodeset.uuid)
)

def test_fetch_nodeset_invalid_uuid(self):
with self.assertRaises(RuntimeError):
_fetch_node_set(str(uuid.uuid4()))

def test_fetch_node_relationship_valid_uuid(self):
node_relationship = NodeRelationship.objects.create(
study=self.study,
assay=self.assay
)
self.assertEqual(
node_relationship,
_fetch_node_relationship(node_relationship.uuid)
)

def test_fetch_node_relationship_invalid_uuid(self):
with self.assertRaises(RuntimeError):
_fetch_node_relationship(str(uuid.uuid4()))


class AnalysisViewsTests(AnalysisManagerTestBase, ToolManagerTestBase):
"""
50 changes: 1 addition & 49 deletions refinery/analysis_manager/utils.py
@@ -13,8 +13,7 @@
import requests
from requests.packages.urllib3.exceptions import HTTPError

from core.models import (Analysis, NodeRelationship, NodeSet, Study, Workflow,
WorkflowDataInputMap)
from core.models import Analysis, Study, Workflow, WorkflowDataInputMap
from core.utils import get_aware_local_time
import tool_manager

@@ -237,50 +236,3 @@ def _create_analysis_name(current_workflow):
current_workflow.name,
get_aware_local_time().strftime("%Y-%m-%d @ %H:%M:%S")
)


def _fetch_node_relationship(node_relationship_uuid):
"""
Fetches a NodeRelationship instance from a given UUID
:param node_relationship_uuid: UUID String
:return: <NodeRelationship>
:raises: RuntimeError
"""
try:
return NodeRelationship.objects.get(uuid=node_relationship_uuid)
except(NodeRelationship.DoesNotExist,
NodeRelationship.MultipleObjectsReturned) as e:
raise RuntimeError(
"Couldn't fetch NodeRelationship from UUID: {} {}"
.format(node_relationship_uuid, e)
)


def _fetch_node_set(node_set_uuid):
"""
Fetches a NodeSet instance from a given UUID
:param node_set_uuid: UUID String
:return: <NodeSet>
:raises: RuntimeError
"""
try:
return NodeSet.objects.get(uuid=node_set_uuid)
except(NodeSet.DoesNotExist, NodeSet.MultipleObjectsReturned) as e:
raise RuntimeError(
"Couldn't fetch NodeSet from UUID: {} {}".format(node_set_uuid, e)
)


def _fetch_solr_uuids(nodeset_instance):
"""
Fetches solr_uuids from a given NodeSet instance
:param nodeset_instance: <NodeSet> instance
:return: list of UUIDs corresponding to Nodes indexed in Solr
"""
curr_node_dict = json.loads(nodeset_instance.solr_query_components)
return get_solr_results(
nodeset_instance.solr_query,
only_uuids=True,
selected_mode=curr_node_dict['documentSelectionBlacklistMode'],
selected_nodes=curr_node_dict['documentSelection']
)
12 changes: 3 additions & 9 deletions refinery/config/urls.py
@@ -14,11 +14,9 @@
from config.utils import RouterCombiner
from core.api import (AnalysisResource, DataSetResource, ExtendedGroupResource,
GroupManagementResource, InvitationResource,
NodePairResource, NodeRelationshipResource, NodeResource,
NodeSetListResource, NodeSetResource, ProjectResource,
StatisticsResource, UserAuthenticationResource,
UserProfileResource, WorkflowInputRelationshipsResource,
WorkflowResource)
NodeResource, ProjectResource, StatisticsResource,
UserAuthenticationResource, UserProfileResource,
WorkflowInputRelationshipsResource, WorkflowResource)
from core.forms import RegistrationFormWithCustomFields
from core.models import AuthenticationFormUsernameOrEmail, DataSet
from core.urls import core_router
@@ -53,10 +51,6 @@
v1_api.register(DataSetResource())
v1_api.register(AttributeOrderResource())
v1_api.register(NodeResource())
v1_api.register(NodeSetResource())
v1_api.register(NodeSetListResource())
v1_api.register(NodePairResource())
v1_api.register(NodeRelationshipResource())
v1_api.register(WorkflowResource())
v1_api.register(WorkflowInputRelationshipsResource())
v1_api.register(StatisticsResource())
20 changes: 1 addition & 19 deletions refinery/core/admin.py
@@ -11,8 +11,7 @@

from core.models import (Analysis, AnalysisNodeConnection, AnalysisResult,
DataSet, DiskQuota, Download, ExtendedGroup,
InvestigationLink, Invitation, NodePair,
NodeRelationship, NodeSet, Ontology, Project,
InvestigationLink, Invitation, Ontology, Project,
SiteProfile, Tutorials, UserProfile, Workflow,
WorkflowDataInput, WorkflowDataInputMap,
WorkflowEngine, WorkflowFilesDL,
@@ -139,20 +138,6 @@ class ExtendedGroupAdmin(GuardedModelAdmin, ForeignKeyAutocompleteAdmin):
'member_list', 'perm_list', 'can_edit']


class NodePairAdmin(GuardedModelAdmin, ForeignKeyAutocompleteAdmin):
list_display = ['id', 'uuid', 'node1', 'node2', 'group']


class NodeRelationshipAdmin(GuardedModelAdmin, ForeignKeyAutocompleteAdmin):
list_display = ['__unicode__', 'id', 'type', 'node_set_1',
'node_set_2', 'study', 'assay', 'is_current']


class NodeSetAdmin(GuardedModelAdmin, ForeignKeyAutocompleteAdmin):
list_display = ['__unicode__', 'id', 'node_count', 'is_implicit',
'study', 'assay', 'is_current']


class UserProfileAdmin(GuardedModelAdmin):
list_display = ['__unicode__', 'id', 'uuid', 'user', 'affiliation',
'catch_all_project', 'login_count',
@@ -201,9 +186,6 @@ class SiteProfileAdmin(GuardedModelAdmin):
admin.site.register(AnalysisResult, AnalysisResultAdmin)
admin.site.register(AnalysisNodeConnection, AnalysisNodeConnectionAdmin)
admin.site.register(DiskQuota, DiskQuotaAdmin)
admin.site.register(NodePair, NodePairAdmin)
admin.site.register(NodeRelationship, NodeRelationshipAdmin)
admin.site.register(NodeSet, NodeSetAdmin)
admin.site.register(Invitation, InvitationAdmin)
admin.site.register(UserProfile, UserProfileAdmin)
admin.site.register(Tutorials, TutorialsAdmin)
193 changes: 2 additions & 191 deletions refinery/core/api.py
@@ -8,7 +8,6 @@
from datetime import timedelta
import json
import logging
import re
from sets import Set
import uuid

@@ -33,16 +32,15 @@
from tastypie.authorization import Authorization
from tastypie.bundle import Bundle
from tastypie.constants import ALL, ALL_WITH_RELATIONS
from tastypie.exceptions import ImmediateHttpResponse, NotFound, Unauthorized
from tastypie.exceptions import ImmediateHttpResponse, NotFound
from tastypie.http import (HttpAccepted, HttpBadRequest, HttpCreated,
HttpForbidden, HttpGone, HttpMethodNotAllowed,
HttpNoContent, HttpNotFound, HttpUnauthorized)
from tastypie.resources import ModelResource, Resource
from tastypie.utils import trailing_slash

from core.models import (Analysis, DataSet, ExtendedGroup, GroupManagement,
Invitation, NodePair, NodeRelationship, NodeSet,
Project, ResourceStatistics, Tutorials,
Invitation, Project, ResourceStatistics, Tutorials,
UserAuthentication, UserProfile, Workflow,
WorkflowInputRelationships)
from core.utils import get_data_sets_annotations, get_resources_for_user
@@ -1119,193 +1117,6 @@ def dehydrate(self, bundle):
bundle.data['file_import_status'] = file_item.get_import_status()
return bundle

# def get_object_list(self, request):
# """
# Temporarily removed for performance reasons (and not required
# without authorization)
# Get all nodes available to the current user (via dataset)
# Temp workaround due to Node being not Ownable
#
# """
# user = request.user
# perm = 'read_%s' % DataSet._meta.module_name
# if (user.is_authenticated()):
# allowed_datasets = get_objects_for_user(user, perm, DataSet)
# else:
# allowed_datasets = get_objects_for_group(
# ExtendedGroup.objects.public_group(), perm, DataSet)
# # get list of node UUIDs that belong to all datasets available to
# # the current user
# all_allowed_studies = []
# for dataset in allowed_datasets:
# dataset_studies = dataset.get_investigation().study_set.all()
# all_allowed_studies.extend(
# [study for study in dataset_studies]
# )
# allowed_nodes = []
# for study in all_allowed_studies:
# allowed_nodes.extend(study.node_set.all().values('uuid'))
# # filter nodes using that list
# return super(NodeResource, self).get_object_list(request).filter(
# uuid__in=[node['uuid'] for node in allowed_nodes])


class NodeSetResource(ModelResource):
# https://github.com/toastdriven/django-tastypie/pull/538
# https://github.com/toastdriven/django-tastypie/issues/526
# Once the above has been integrated into a tastypie release branch remove
# NodeSetListResource and use "use_in" instead
# nodes = fields.ToManyField(NodeResource, 'nodes', use_in="detail" )

solr_query = fields.CharField(attribute='solr_query', null=True)
solr_query_components = fields.CharField(
attribute='solr_query_components', null=True)
node_count = fields.IntegerField(attribute='node_count', null=True)
is_implicit = fields.BooleanField(attribute='is_implicit', default=False)
study = fields.ToOneField(StudyResource, 'study')
assay = fields.ToOneField(AssayResource, 'assay')

class Meta:
# create node count attribute on the fly - node_count field has to be
# defined on resource
queryset = NodeSet.objects.all().order_by('-is_current', 'name')
resource_name = 'nodeset'
detail_uri_name = 'uuid' # for using UUIDs instead of pk in URIs
authentication = SessionAuthentication()
authorization = Authorization()
fields = [
'is_current', 'name', 'summary', 'assay', 'study', 'uuid',
'is_implicit', 'node_count', 'solr_query', 'solr_query_components'
]
ordering = [
'is_current', 'name', 'summary', 'assay', 'study', 'uuid',
'is_implicit', 'node_count', 'solr_query', 'solr_query_components'
]
allowed_methods = ["get", "post", "put"]
filtering = {
"study": ALL_WITH_RELATIONS, "assay": ALL_WITH_RELATIONS,
"uuid": ALL
}
# jQuery treats a 201 as an error for data type "JSON"
always_return_data = True

def prepend_urls(self):
return [
url((r"^(?P<resource_name>%s)/(?P<uuid>" + UUID_RE + r")/$") %
self._meta.resource_name,
self.wrap_view('dispatch_detail'),
name="api_dispatch_detail"),
]

def obj_create(self, bundle, **kwargs):
"""Create a new NodeSet instance and assign current user as owner if
current user has read permission on the data set referenced by the new
NodeSet
"""
# get the Study specified by the UUID in the new NodeSet
study_uri = bundle.data['study']
match = re.search(
UUID_RE,
study_uri)
study_uuid = match.group()
try:
study = Study.objects.get(uuid=study_uuid)
except Study.DoesNotExist:
logger.error("Study '%s' does not exist", study_uuid)
self.unauthorized_result(
Unauthorized("You are not allowed to create a new NodeSet."))
# look up the dataset via InvestigationLink relationship
# an investigation is only associated with a single dataset even though
# InvestigationLink is a many to many relationship
try:
dataset = \
study.investigation.investigationlink_set.all()[0].data_set
except IndexError:
logger.error("Data set not found in study '%s'", study.uuid)
self.unauthorized_result(
Unauthorized("You are not allowed to create a new NodeSet."))
permission = "read_%s" % dataset._meta.module_name
if not bundle.request.user.has_perm(permission, dataset):
self.unauthorized_result(
Unauthorized("You are not allowed to create a new NodeSet."))
# if user has the read permission on the data set
# continue with creating the new NodeSet instance
bundle = super(NodeSetResource, self).obj_create(bundle, **kwargs)
bundle.obj.set_owner(bundle.request.user)
return bundle


class NodeSetListResource(ModelResource):
study = fields.ToOneField(StudyResource, 'study')
assay = fields.ToOneField(AssayResource, 'assay')
node_count = fields.IntegerField(attribute='node_count', readonly=True)
is_implicit = fields.BooleanField(attribute='is_implicit', default=False)

class Meta:
# create node count attribute on the fly - node_count field has to be
# defined on resource
queryset = NodeSet.objects.all().order_by('-is_current', 'name')
# NG: introduced to get correct resource IDs
detail_resource_name = 'nodeset'
resource_name = 'nodesetlist'
detail_uri_name = 'uuid' # for using UUIDs instead of pk in URIs
authentication = SessionAuthentication()
authorization = Authorization()
fields = ['is_current', 'name', 'summary', 'assay', 'study', 'uuid']
allowed_methods = ["get"]
filtering = {"study": ALL_WITH_RELATIONS, "assay": ALL_WITH_RELATIONS}
ordering = ['is_current', 'name', 'node_count']

def dehydrate(self, bundle):
# replace resource URI to point to the nodeset resource instead of the
# nodesetlist resource
bundle.data['resource_uri'] = bundle.data['resource_uri'].replace(
self._meta.resource_name, self._meta.detail_resource_name)
return bundle


class NodePairResource(ModelResource):
node1 = fields.ToOneField(NodeResource, 'node1')
node2 = fields.ToOneField(NodeResource, 'node2', null=True)
group = fields.CharField(attribute='group', null=True)

class Meta:
detail_allowed_methods = ['get', 'post', 'delete', 'put', 'patch']
queryset = NodePair.objects.all()
detail_resource_name = 'nodepair'
resource_name = 'nodepair'
detail_uri_name = 'uuid'
authentication = SessionAuthentication()
authorization = Authorization()
# for use with AngularJS $resources: returns newly created object upon
# POST (in addition to the location response header)
always_return_data = True


class NodeRelationshipResource(ModelResource):
name = fields.CharField(attribute='name', null=True)
type = fields.CharField(attribute='type', null=True)
# , full=True), if you need each attribute for each nodepair
node_pairs = fields.ToManyField(NodePairResource, 'node_pairs')
study = fields.ToOneField(StudyResource, 'study')
assay = fields.ToOneField(AssayResource, 'assay')

class Meta:
detail_allowed_methods = ['get', 'post', 'delete', 'put', 'patch']
queryset = NodeRelationship.objects.all().order_by('-is_current',
'name')
detail_resource_name = 'noderelationship'
resource_name = 'noderelationship'
detail_uri_name = 'uuid'
authentication = SessionAuthentication()
authorization = Authorization()
# for use with AngularJS $resources: returns newly created object upon
# POST (in addition to the location response header)
always_return_data = True
# fields = ['type', 'study', 'assay', 'node_pairs']
ordering = ['is_current', 'name', 'type', 'node_pairs']
filtering = {'study': ALL_WITH_RELATIONS, 'assay': ALL_WITH_RELATIONS}


class StatisticsResource(Resource):
user = fields.IntegerField(attribute='user')
