Release candidate November 1st, 2017 (#2373)

* replace printable for try/except utf-8

* add sleeps for tests

* moving wait_for_prep_information_job

* just leaving the changes on this pr

* Fixing tests

* Reverting changes

* dist: precise

* addressing @josenavas comments

* fix-2212

* Adding branch description to Contributing (#2256)

* Adding branch description

* Adding spaces

* Adding timeline

* upgrading software

* mv r_client to qiita

* from redis import Redis

* rm moi and ipython

* Move qiita_db/private.py -> qiita_ware/private_plugin.py

* _system_call as system_call

* flake8

* rm create_raw_data

* Transferring VAMPS submission to internal job

* rm other unnecessary files

* addressing @josenavas comment

* Fixing merge conflicts

* Adding tests to private plugin

* rm wrapper.py

* Removing import

* Fixing import

* Modifying GUI to use the plugin

* moving qiita pet in .travis.yml

* Transfer submit to vamps (#2265)

* Move qiita_db/private.py -> qiita_ware/private_plugin.py

* Transferring VAMPS submission to internal job

* Fixing import

* some other .travis.yml fixes

* addressing @josenavas comment

* Adding success test

* adding some methods

* flake8

* Transfer copy raw data (#2267)

* Move qiita_db/private.py -> qiita_ware/private_plugin.py

* Transferring VAMPS submission to internal job

* Fixing merge conflicts

* Adding tests to private plugin

* Removing import

* Fixing import

* Modifying GUI to use the plugin

* Adding success test

* change imports

* adding delete_artifact and create_sample_template

* fixing errors and gui

* fix delete error

* ENH: Make jobs list modal

With some recent changes to the position of the navigation bar, the
header of the jobs list is cut off from the screen. With this patch in
place, the jobs list is shown as a modal window, so this is no longer a
problem.

* Transfer update delete templates (#2274)

* Moving update_sample_template

* Transfer update_sample_template

* Porting update prep template

* Moving delete sample or column

* Removing tests

* Removing dispatchable and its tests

* Updating interface to use the new functionality

* Adapting the prep template GUI

* Submitting jobs

* Fixing tests

* Removing qiita_ware/context.py

* flake8ing

* Fixing _system_call

* Safeguarding the call to rollback

* Unmasking more errors

* Forcing different connections on different processes

* Moving job completion to internal plugin structure

* Removing unused code

* Forcing the creation of a new transaction on the jobs

* Fixing tests

* forcing the commit

* Fixing all tests

* Addressing @antgonza's comments

* Addressing @antgonza's comment

* Addressing @ElDeveloper's comments

* BUG: Fix job updates

* Fixing the redis DB (#2277)

* Fixing the redis DB

* Addressing @antgonza's comments

* redbiom install

* redbiom to install_requires

* mv moi-ws to qiita_websocket

* init commit

* addressing @wasade comment

* rm webdis.log

* cleaning code for initial review

* install latest redbiom

* fix test

* ENH: Change phrasing of upload text (#2281)

* ENH: Change phrasing of upload text

As per a user's request.

* Fix typo in docs

Fixes #2259

* addressing @wasade comments and adding other tests

* flake8

* addressing @josenavas comments

* fix #2258

* fix #2258

* fix #858 (#2286)

* addressing @ElDeveloper and @josenavas comments

* @ElDeveloper :|

* redbiom now adds per sample studies to analysis

* jobs-list-as-modal

* addressing @josenavas comments

* rm () from update_processing_job_data

* add prints to review errors

* rm prints

* Fix 2190 (#2292)

* Adding patch to add the 'name' parameter to all validate commands

* Adding the parameter 'name' automatically and adding a test for it

* Removing extra blank line

* Adding dflt value to artifact name

* Edited the wrong file

* Adding user defined name at creation time

* Fixing test

* add is_from_analysis to artifact_handlers (#2293)

* add is_from_analysis to artifact_handlers

* fix test_post_metadata

* fix error

* WIP: rm sudo from travis

* adding sed for config file

* fix sed

* rm &

* cat redis.conf

* using local redis.conf

* # protected-mode yes

* # supervised no

* redis-server --port 7777 &

* Fix 1293 (#2291)

* fix #1293

* flake8

* fix errors

* addressing @ElDeveloper comments

* rm redis.conf

* fix awaiting_approval list bug

* Fix #2276 (#2294)

* Fix #2276

* Factoring out generate nginx directory file list

* Factoring out the nginx file list writing

* Factoring out generating the file list of an artifact

* Factoring out the header setting

* Addressing @antgonza's comment

* Addressing @wasade's comments

* Fixing patch

* fix error

* Fixing failing test

* fix #2214

* addressing @josenavas comment

* fix #2209

* fix #2331

* fix #2326 (#2328)

* fix #2326

* addressing @ElDeveloper comment

* fix #2226 (#2330)

* fix #2226

* addressing @ElDeveloper comment

* fix #2336

* fix #2316

* Fixes 2269

* addressing @josenavas comments

* Fixes 2038 (#2349)

* fixes #2038

* Adding test

* Trying to debug

* Checking value

* More debugging

* Undo changes

* Fixing failure

* fix #2333

* addressing @josenavas

* fixes #2245 (#2350)

* fixes #2245

* Addressing @antgonza's comments

* Fixing test

* Fixing Qiita installation (#2362)

* Sync-ing with master (#2367)

* fix calls to system_call and ebi submissions

* fixing errors

* fix if state == submitting:

* just raise error

* EBISubmissionError -> ComputeError

* fix #2084 (#2365)

* fix #2125 (#2366)

* fix #2125

* fix error

* fix #2364

* fix #1812

* add tests

* flake8

* populating ProcessingJob.create True

* fix more errors

* addressing comments from @josenavas and @stephanieorch

* mv ProcessingJob.create True around

* Partial #2237 (#2368)

* fix calls to system_call and ebi submissions

* fixing errors

* fix if state == submitting:

* just raise error

* EBISubmissionError -> ComputeError

* Sorting values

* Case insensitive sorting

* Addressing @ElDeveloper's comments

* fix-1591 (#2370)

* fix-1591

* removing warning ATTN @josenavas, fix tests

* addressing @ElDeveloper and @josenavas comments

* Fix 2230 (#2372)

* fix calls to system_call and ebi submissions

* fixing errors

* fix if state == submitting:

* just raise error

* EBISubmissionError -> ComputeError

* Fix #2230 solved using modal in order to prevent large file download

* Function moved in order to keep order. Some details fixed.

* addressing @wasade and @ElDeveloper comments

* fixing errors

* fix GUI and errors

* addressing @ElDeveloper comments

* rm artifacts from parameters listing

* fix flake8

* fix ilike quote params

* addressing @josenavas comment and adding test for job without children

* fix errors

* addressing @josenavas comment

* Fixing network labels (#2376)

* Fixing network labels

* Fixing error

* Update redbiom.html

* Update redbiom.html

* Update redbiom.html

* Patch 61 - transfer all parameters to str (#2379)

* Patch 61 - transfer all parameters to str

* Fixing errors

* rm lower from redbiom

* fixing small details and adding emp_release1

* fixing
josenavas authored and antgonza committed Oct 31, 2017
1 parent 9e10d9c commit fabee6b
Showing 54 changed files with 800 additions and 346 deletions.
2 changes: 1 addition & 1 deletion .travis.yml
@@ -28,7 +28,7 @@ install:
   # install a few of the dependencies that pip would otherwise try to install
   # when intalling scikit-bio
   - travis_retry conda create --yes -n qiita python=$PYTHON_VERSION pip nose flake8
-    pyzmq networkx pyparsing natsort mock future libgfortran seaborn nltk
+    pyzmq 'networkx<2.0' pyparsing natsort mock future libgfortran seaborn nltk
     'pandas>=0.18' 'matplotlib>=1.1.0' 'scipy>0.13.0' 'numpy>=1.7' 'h5py>=2.3.1'
   - source activate qiita
   - pip install -U pip
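The only functional change above is pinning networkx below 2.0 in the conda environment. For background (not part of this commit), networkx 2.0 changed several graph accessors from returning lists to returning view objects, which breaks callers that expect list behaviour; a minimal sketch of code that tolerates both, using an illustrative two-node graph:

```python
import networkx as nx

g = nx.DiGraph()
g.add_edge("artifact-1", "artifact-2")

# networkx 1.x returns plain lists from nodes()/edges(); networkx 2.x returns
# view objects, so code that slices or indexes the result directly can break.
# Materializing the views keeps the calling code working on either version.
nodes = list(g.nodes())
edges = list(g.edges())
print(nodes[0], len(edges))
```
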
16 changes: 9 additions & 7 deletions qiita_db/analysis.py
@@ -178,7 +178,7 @@ def create(cls, owner, name, description, from_default=False,
                      'analysis': a_id,
                      'merge_dup_sample_ids': merge_duplicated_sample_ids})
             job = qdb.processing_job.ProcessingJob.create(
-                owner, params)
+                owner, params, True)
             sql = """INSERT INTO qiita.analysis_processing_job
                         (analysis_id, processing_job_id)
                     VALUES (%s, %s)"""
@@ -429,15 +429,17 @@ def mapping_file(self):
         Returns
         -------
-        str or None
-            full filepath to the mapping file or None if not generated
+        int or None
+            The filepath id of the analysis mapping file or None
+            if not generated
         """
-        fp = [fp for _, fp, fp_type in qdb.util.retrieve_filepaths(
-            "analysis_filepath", "analysis_id", self._id)
-            if fp_type == 'plain_text']
+        fp = [fp_id
+              for fp_id, _, fp_type in qdb.util.retrieve_filepaths(
+                  "analysis_filepath", "analysis_id", self._id)
+              if fp_type == 'plain_text']
 
         if fp:
-            # returning the actual path vs. an array
+            # returning the actual filepath id vs. an array
             return fp[0]
         else:
             return None
3 changes: 2 additions & 1 deletion qiita_db/handlers/analysis.py
@@ -59,7 +59,8 @@ def get(self, analysis_id):
         """
         with qdb.sql_connection.TRN:
             a = _get_analysis(analysis_id)
-            mf_fp = a.mapping_file
+            mf_fp = qdb.util.get_filepath_information(
+                a.mapping_file)['fullpath']
             response = None
             if mf_fp is not None:
                 df = qdb.metadata_template.util.load_template_to_dataframe(
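Taken together, the two hunks above change `mapping_file` to return the filepath id of the analysis mapping file instead of the path itself, so callers now resolve the id through `qdb.util.get_filepath_information`. A minimal caller-side sketch (the analysis id and the None guard are illustrative assumptions, not from the diff):

```python
import qiita_db as qdb

analysis = qdb.analysis.Analysis(1)      # hypothetical analysis id
mf_id = analysis.mapping_file            # now a filepath id, or None

if mf_id is not None:
    # resolve the filepath id to the actual path before loading the template
    mf_fp = qdb.util.get_filepath_information(mf_id)['fullpath']
    df = qdb.metadata_template.util.load_template_to_dataframe(mf_fp)
```
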
2 changes: 1 addition & 1 deletion qiita_db/handlers/processing_job.py
@@ -163,7 +163,7 @@ def post(self):
             params = qdb.software.Parameters.load(cmd, json_str=params_dict)
 
             job = qdb.processing_job.ProcessingJob.create(
-                qdb.user.User(user), params)
+                qdb.user.User(user), params, True)
 
             if status:
                 job._set_status(status)
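This handler, like `Analysis.create` above, now passes a third positional argument (`True`) to `ProcessingJob.create`; the diff does not show the parameter's name, so it is treated here simply as a new boolean flag. A rough sketch of the request-handling flow, with the command id, JSON payload, and user email as illustrative assumptions:

```python
import qiita_db as qdb

cmd = qdb.software.Command(1)             # hypothetical command id
params_dict = '{"template": 1}'           # hypothetical JSON payload
params = qdb.software.Parameters.load(cmd, json_str=params_dict)

# Create the job for the requesting user; the trailing True mirrors the new
# positional flag added at these call sites (its name isn't visible here).
job = qdb.processing_job.ProcessingJob.create(
    qdb.user.User('test@foo.bar'), params, True)
```
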
4 changes: 2 additions & 2 deletions qiita_db/handlers/tests/test_artifact.py
@@ -103,8 +103,8 @@ def test_get_artifact(self):
                'prep_information': [],
                'study': None,
                'analysis': 1,
-               'processing_parameters': {'biom_table': 8, 'depth': 9000,
-                                         'subsample_multinomial': False},
+               'processing_parameters': {'biom_table': '8', 'depth': '9000',
+                                         'subsample_multinomial': 'False'},
                'files': exp_fps}
         obs = loads(obs.body)
         # The timestamp is genreated at patch time, so we can't check for it
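The expected values above change from native JSON types to their string forms, matching the "Patch 61 - transfer all parameters to str" entry in the commit log. A small sketch of the normalization this implies (the helper name is hypothetical, not from this commit):

```python
def parameters_to_str(values):
    """Return a copy of a processing-parameters dict with every value as str."""
    return {name: str(value) for name, value in values.items()}


print(parameters_to_str(
    {'biom_table': 8, 'depth': 9000, 'subsample_multinomial': False}))
# {'biom_table': '8', 'depth': '9000', 'subsample_multinomial': 'False'}
```
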
4 changes: 1 addition & 3 deletions qiita_db/metadata_template/test/test_prep_template.py
@@ -887,9 +887,7 @@ def test_create(self):
     def test_create_already_prefixed_samples(self):
         """Creates a new PrepTemplate"""
         fp_count = qdb.util.get_count('qiita.filepath')
-        pt = npt.assert_warns(
-            qdb.exceptions.QiitaDBWarning,
-            qdb.metadata_template.prep_template.PrepTemplate.create,
+        pt = qdb.metadata_template.prep_template.PrepTemplate.create(
             self.metadata_prefixed, self.test_study, self.data_type)
         self._common_creation_checks(pt, fp_count)
 
4 changes: 1 addition & 3 deletions qiita_db/metadata_template/test/test_sample_template.py
@@ -1084,9 +1084,7 @@ def test_create_str_prefixes(self):
 
     def test_create_already_prefixed_samples(self):
         """Creates a new SampleTemplate with the samples already prefixed"""
-        st = npt.assert_warns(
-            qdb.exceptions.QiitaDBWarning,
-            qdb.metadata_template.sample_template.SampleTemplate.create,
+        st = qdb.metadata_template.sample_template.SampleTemplate.create(
             self.metadata_prefixed, self.new_study)
         new_id = self.new_study.id
         # The returned object has the correct id
24 changes: 22 additions & 2 deletions qiita_db/metadata_template/test/test_util.py
@@ -8,6 +8,7 @@
 
 from six import StringIO
 from unittest import TestCase, main
+import warnings
 
 import numpy.testing as npt
 import pandas as pd
@@ -36,12 +37,31 @@ def test_prefix_sample_names_with_id(self):
         }
         exp_df = pd.DataFrame.from_dict(exp_metadata_dict, orient='index',
                                         dtype=str)
-        qdb.metadata_template.util.prefix_sample_names_with_id(
-            self.metadata_map, 1)
+        with warnings.catch_warnings(record=True) as warn:
+            qdb.metadata_template.util.prefix_sample_names_with_id(
+                self.metadata_map, 1)
+            self.assertEqual(len(warn), 0)
         self.metadata_map.sort_index(inplace=True)
         exp_df.sort_index(inplace=True)
         assert_frame_equal(self.metadata_map, exp_df)
 
+        # test that it only prefixes the samples that are needed
+        metadata_dict = {
+            'Sample1': {'int_col': 1, 'float_col': 2.1, 'str_col': 'str1'},
+            '1.Sample2': {'int_col': 2, 'float_col': 3.1, 'str_col': '200'},
+            'Sample3': {'int_col': 3, 'float_col': 3, 'str_col': 'string30'},
+        }
+        metadata_map = pd.DataFrame.from_dict(
+            metadata_dict, orient='index', dtype=str)
+        with warnings.catch_warnings(record=True) as warn:
+            qdb.metadata_template.util.prefix_sample_names_with_id(
+                metadata_map, 1)
+            self.assertEqual(len(warn), 1)
+            self.assertEqual(str(warn[0].message), 'Some of the samples were '
+                             'already prefixed with the study id.')
+        metadata_map.sort_index(inplace=True)
+        assert_frame_equal(metadata_map, exp_df)
+
     def test_load_template_to_dataframe(self):
         obs = qdb.metadata_template.util.load_template_to_dataframe(
             StringIO(EXP_SAMPLE_TEMPLATE))
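The updated tests swap `npt.assert_warns` for explicit warning capture: `warnings.catch_warnings(record=True)` collects every warning raised inside the block so a test can assert on both the count and the message. The pattern in isolation (the `simplefilter` call is an extra precaution, not in the test above):

```python
import warnings

with warnings.catch_warnings(record=True) as warn:
    warnings.simplefilter("always")  # make sure no warning is filtered out
    warnings.warn("Some of the samples were already prefixed with the "
                  "study id.")

assert len(warn) == 1
assert "already prefixed" in str(warn[0].message)
```
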
42 changes: 19 additions & 23 deletions qiita_db/metadata_template/util.py
@@ -34,29 +34,25 @@ def prefix_sample_names_with_id(md_template, study_id):
     study_id : int
         The study to which the metadata belongs to
     """
-    # Get all the prefixes of the index, defined as any string before a '.'
-    prefixes = {idx.split('.', 1)[0] for idx in md_template.index}
-    # If the samples have been already prefixed with the study id, the prefixes
-    # set will contain only one element and it will be the str representation
-    # of the study id
-    if len(prefixes) == 1 and prefixes.pop() == str(study_id):
-        # The samples were already prefixed with the study id
-        warnings.warn("Sample names were already prefixed with the study id.",
-                      qdb.exceptions.QiitaDBWarning)
-    else:
-        # Create a new pandas series in which all the values are the study_id
-        # and it is indexed as the metadata template
-        study_ids = pd.Series([str(study_id)] * len(md_template.index),
-                              index=md_template.index)
-        # Create a new column on the metadata template that includes the
-        # metadata template indexes prefixed with the study id
-        md_template['sample_name_with_id'] = (study_ids + '.' +
-                                              md_template.index.values)
-        md_template.index = md_template.sample_name_with_id
-        del md_template['sample_name_with_id']
-        # The original metadata template had the index column unnamed - remove
-        # the name of the index for consistency
-        md_template.index.name = None
+    # loop over the samples and prefix those that aren't prefixed
+    md_template['qiita_sample_name_with_id'] = pd.Series(
+        [idx if idx.split('.', 1)[0] == str(study_id)
+         else '%d.%s' % (study_id, idx)
+         for idx in md_template.index], index=md_template.index)
+
+    # get the rows that are gonna change
+    changes = len(md_template.index[
+        md_template['qiita_sample_name_with_id'] != md_template.index])
+    if changes != 0 and changes != len(md_template.index):
+        warnings.warn(
+            "Some of the samples were already prefixed with the study id.",
+            qdb.exceptions.QiitaDBWarning)
+
+    md_template.index = md_template.qiita_sample_name_with_id
+    del md_template['qiita_sample_name_with_id']
+    # The original metadata template had the index column unnamed -> remove
+    # the name of the index for consistency
+    md_template.index.name = None
 
 
 def load_template_to_dataframe(fn, index='sample_name'):
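The rewritten `prefix_sample_names_with_id` prefixes only the sample names that still lack the study id and warns only when the input mixes prefixed and unprefixed names (prefixing everything, or nothing, stays silent). A self-contained sketch of the same logic on a toy frame, with the QiitaDB-specific warning class dropped:

```python
import warnings

import pandas as pd


def prefix_sample_names_with_id(md_template, study_id):
    """Prefix every unprefixed index entry of md_template with '<study_id>.'."""
    # keep already-prefixed names, prefix the rest
    md_template['qiita_sample_name_with_id'] = pd.Series(
        [idx if idx.split('.', 1)[0] == str(study_id)
         else '%d.%s' % (study_id, idx)
         for idx in md_template.index], index=md_template.index)

    # warn only when the input was a mix of prefixed and unprefixed names
    changes = len(md_template.index[
        md_template['qiita_sample_name_with_id'] != md_template.index])
    if changes != 0 and changes != len(md_template.index):
        warnings.warn(
            "Some of the samples were already prefixed with the study id.")

    md_template.index = md_template.qiita_sample_name_with_id
    del md_template['qiita_sample_name_with_id']
    md_template.index.name = None


md = pd.DataFrame({'str_col': ['str1', '200']},
                  index=['Sample1', '1.Sample2'])
prefix_sample_names_with_id(md, 1)
print(list(md.index))  # ['1.Sample1', '1.Sample2'], plus a mixed-prefix warning
```
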