After upgrade to release_16.01: jobs are no longer run #1789

Closed
rekado opened this issue Feb 24, 2016 · 48 comments

@rekado

rekado commented Feb 24, 2016

Hi,

we've upgraded our Galaxy instances to release 16.01 and since then have found that none of our tools work any longer. One test case is to upload a text file with the "upload1" tool. In the logs we see that a job is created, and we also see it in the database. The job remains in the "new" state without changes.

Upon starting Galaxy we see that the main Galaxy Queue Worker is initialized to run on our postgresql database, which contains the job records, and we see that 4 LocalRunner workers are started.

The uploaded files are in fact created in new_file_path as upload_file_data_0Sf_99 (and have the expected contents), but in the job_working_directory they only appear as zero-sized files.

Could you please give us a hint as to what's going on here?

@borauyar

Following up on rekado's post, below is the output from the paster.log file when an upload job is executed:
The job seems to run, but in the history (and also in the admin interface) the jobs are queued and waiting to run.

galaxy.tools DEBUG 2016-02-24 16:15:07,036 Validated and populated state for tool request (38.443 ms)
galaxy.tools.actions.upload DEBUG 2016-02-24 16:15:07,070 Persisted uploads (15.362 ms)
galaxy.tools.actions.upload DEBUG 2016-02-24 16:15:07,170 Checked and cleaned uploads (99.831 ms)
galaxy.tools.actions.upload_common INFO 2016-02-24 16:15:07,195 tool upload1 created job id 1127
galaxy.tools.actions.upload DEBUG 2016-02-24 16:15:07,239 Created upload job (68.360 ms)
galaxy.tools.execute DEBUG 2016-02-24 16:15:07,239 Tool [upload1] created job [1127] (184.070 ms)
galaxy.tools.execute DEBUG 2016-02-24 16:15:07,248 Executed all jobs for tool request: (211.845 ms)

@jmchilton
Member

  • Do you have a job_conf.xml config file? (If yes, can you post it?)
  • Are you starting multiple processes? If yes - how (uwsgi, supervisor, run.sh with GALAXY_RUN_ALL)?

A quick note about the line:

galaxy.tools.execute DEBUG 2016-02-24 16:15:07,248 Executed all jobs for tool request: (211.845 ms)

It is a bit misleading in retrospect. It just means the API request to run the tool has been executed; the job is probably sitting in the 'new' state, ready for a job handler to pick it up and execute it.
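For what it's worth, you can confirm this directly against the Galaxy database; a quick check along these lines (column list per the standard schema, trimmed for readability) shows whether the job is still sitting in 'new' and which handler, if any, it has been assigned to:

-- Show the most recent jobs with their state and assigned handler;
-- a job stuck in 'new' has not been picked up by a handler yet.
SELECT id, create_time, tool_id, state, handler
FROM job
ORDER BY id DESC
LIMIT 5;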

@natefoo natefoo self-assigned this Feb 26, 2016
@rekado
Author

rekado commented Feb 27, 2016

This is our job_conf.xml:

<?xml version="1.0"?>
<!-- A sample job config that explicitly configures job running the way it is configured by default (if there is no explicit config). -->
<job_conf>
    <plugins>
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <destinations>
        <destination id="local" runner="local"/>
    </destinations>
</job_conf>

I don't know what you mean by the second question (multiple processes of what?). We just start Galaxy with run.sh; we did not explicitly set GALAXY_RUN_ALL.

@bgruening
Member

@rekado it would be nice if you could provide a gist of your galaxy.ini file. Do you have nginx or Apache running? Can you post that config as well?
Have you updated your config files from the *.sample files after upgrading?
Any other changes to your settings? What does git diff show?

@rekado
Author

rekado commented Mar 2, 2016

@bgruening This is our configuration file: https://gist.github.com/rekado/726e7d34033cde9f83d8
(I replaced private information with "SECRET" and removed comments for clarity.)

I was not involved in the upgrade, but it seems that the config files have not been changed after the upgrade. Are there release notes that show me what config keys should be added or changed?

There are no changes in our git checkout. We use an unmodified checkout of the release_16.01 branch.

@dyusuf

dyusuf commented Mar 15, 2016

@bgruening hi Bjoern, we are still stuck on the same problem. The issue now haunts both our dev and production servers. The last resort would be to install v16.01 from scratch; that might solve the problem, but it would cause data loss for our users. Could you continue with some suggestions?

@mvdbeek
Member

mvdbeek commented Mar 15, 2016

I don't really see any obvious problem with your config files. Are there any jobs listed as running/new in the admin section? Can you kill those?
What you could try is following the instructions for setting up multiple job handlers. While that does not directly solve your problem, it might be good enough to get your instances back online:
https://wiki.galaxyproject.org/Admin/Config/Performance/Scaling#Job_Handler.28s.29
You should be able to just copy the example into galaxy.ini and adjust job_conf.xml, and then you'll need to start Galaxy with GALAXY_RUN_ALL=1 ./run.sh

@rekado
Author

rekado commented Mar 15, 2016

We have plenty of jobs in "new" status; they are just never executed.
I'll try to use multiple job handlers next and will report back.

@bgruening
Member

@rekado can you run this query against your DB:

select * from workflow_invocation where state = 'ready' or state = 'new';

@rekado
Author

rekado commented Mar 15, 2016

On our dev system this query returns no rows. They are all in "scheduled" state.

@bgruening
Member

@rekado can you try something like this:

[server:web0]
use = egg:Paste#http
port = 8081
host = 0.0.0.0
use_threadpool = true
threadpool_workers = 7

[server:web1]
use = egg:Paste#http
port = 8082
host = 0.0.0.0
use_threadpool = true
threadpool_workers = 7

[server:web2]
use = egg:Paste#http
port = 8083
host = 0.0.0.0
use_threadpool = true
threadpool_workers = 7

[server:web3]
use = egg:Paste#http
port = 8084
host = 0.0.0.0
use_threadpool = true
threadpool_workers = 7

[server:handler0]
use = egg:Paste#http
port = 8085
host = 0.0.0.0
use_threadpool = true
threadpool_workers = 10

[server:handler1]
use = egg:Paste#http
port = 8086
host = 0.0.0.0
use_threadpool = true
threadpool_workers = 10

[server:handler2]
use = egg:Paste#http
port = 8087
host = 0.0.0.0
use_threadpool = true
threadpool_workers = 10

[server:handler3]
use = egg:Paste#http
port = 8088
host = 0.0.0.0
use_threadpool = true
threadpool_workers = 10

@rekado
Author

rekado commented Mar 15, 2016

I tried that just now, restarted Galaxy, and submitted a new upload job, but there's no change in behaviour. We use the following systemd unit to provide the Galaxy service:

[Unit]
Description=MDC galaxy server
Documentation=https://wiki.galaxyproject.org/Admin/Config/
After=network.target remote-fs.target nss-lookup.target

[Service]
PIDFile=/galaxy_server/galaxy/paster.pid
WorkingDirectory=/galaxy_server/galaxy
ExecStart=/bin/bash -c '/galaxy_server/galaxy/run.sh --daemon'
ExecReload=/bin/bash -c '/galaxy_server/galaxy/run.sh restart'
ExecStop=/bin/bash -c '/galaxy_server/galaxy/run.sh --stop-daemon'
Type=forking
User=galaxy_server
Group=galaxy_server_usr

[Install]
WantedBy=multi-user.target

I should also note that workflow_invocation does not hold any record after 2016-01-14, so none of the new jobs (including the upload job I just created) appear there.

@bgruening
Member

What do you see in the logs during tool execution? This is really weird - I have now migrated 5 Galaxy instances in Germany and did not see this issue at all.

@rekado
Author

rekado commented Mar 15, 2016

@bgruening

This is what I see in /galaxy_server/galaxy/paster.log when I upload a file:

141.80.xxx.xxx - - [15/Mar/2016:17:09:54 +0200] "GET /galaxy/welcome HTTP/1.0" 302 - "https://galaxy-dev.mdc-berlin.net/galaxy/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:42.0) Gecko/20100101 Firefox/42.0"
141.80.xxx.xxx - - [15/Mar/2016:17:09:54 +0200] "GET /galaxy/api/histories/d0bfe935d0f5258d/contents?dataset_details=807261053283f8ac HTTP/1.0" 200 - "https://galaxy-dev.mdc-berlin.net/galaxy/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:42.0) Gecko/20100101 Firefox/42.0"
141.80.xxx.xxx - - [15/Mar/2016:17:09:58 +0200] "GET /galaxy/api/histories/d0bfe935d0f5258d/contents HTTP/1.0" 200 - "https://galaxy-dev.mdc-berlin.net/galaxy/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:42.0) Gecko/20100101 Firefox/42.0"
141.80.xxx.xxx - - [15/Mar/2016:17:10:02 +0200] "GET /galaxy/api/histories/d0bfe935d0f5258d/contents HTTP/1.0" 200 - "https://galaxy-dev.mdc-berlin.net/galaxy/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:42.0) Gecko/20100101 Firefox/42.0"
141.80.xxx.xxx - - [15/Mar/2016:17:10:06 +0200] "GET /galaxy/api/histories/d0bfe935d0f5258d/contents HTTP/1.0" 200 - "https://galaxy-dev.mdc-berlin.net/galaxy/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:42.0) Gecko/20100101 Firefox/42.0"
galaxy.tools DEBUG 2016-03-15 17:10:09,160 Validated and populated state for tool request (34.529 ms)
galaxy.tools.actions.upload DEBUG 2016-03-15 17:10:09,200 Persisted uploads (19.550 ms)
galaxy.tools.actions.upload DEBUG 2016-03-15 17:10:09,272 Checked and cleaned uploads (71.889 ms)
galaxy.tools.actions.upload_common INFO 2016-03-15 17:10:09,289 tool upload1 created job id 1115
galaxy.tools.actions.upload DEBUG 2016-03-15 17:10:09,324 Created upload job (52.029 ms)
galaxy.tools.execute DEBUG 2016-03-15 17:10:09,324 Tool [upload1] created job [1115] (143.908 ms)
galaxy.tools.execute DEBUG 2016-03-15 17:10:09,330 Executed all jobs for tool request: (170.521 ms)
141.80.xxx.xxx - - [15/Mar/2016:17:10:09 +0200] "POST /galaxy/api/tools HTTP/1.0" 200 - "https://galaxy-dev.mdc-berlin.net/galaxy/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:42.0) Gecko/20100101 Firefox/42.0"
141.80.xxx.xxx - - [15/Mar/2016:17:10:09 +0200] "GET /galaxy/api/histories/d0bfe935d0f5258d/contents HTTP/1.0" 200 - "https://galaxy-dev.mdc-berlin.net/galaxy/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:42.0) Gecko/20100101 Firefox/42.0"

Are there other logs that would give me more information?

@bgruening
Member

If you run multiple handlers and workers, you will have one log file for every handler and worker.

@rekado
Author

rekado commented Mar 15, 2016

Then I'd have to start with GALAXY_RUN_ALL set to 1, right? We don't do this at the moment (see systemd unit above).

@bgruening
Member

I have never used systemd; maybe you can test without it at first.

GALAXY_RUN_ALL=1 sh run.sh --daemon will start your instance.

@rekado
Author

rekado commented Mar 16, 2016

With GALAXY_RUN_ALL=1 all web handlers are started, but I see no change in the behaviour. What should I expect when using the configuration at #1789 (comment)?

It just spins up more HTTP handlers. But the problem we have seems to be that no jobs are actually run in the background. The web interface works just fine.

@bgruening
Member

The jobs should be distributed to the handlers, and they pass them over to the scheduler. What do you see in the logs if you start a job? I'm wondering how this system ever worked in production if you have never set up handlers.

@rekado
Author

rekado commented Mar 16, 2016

I see nothing interesting at all in the logs.

I added the above snippet below the existing [server:main] declaration, and when I upload something to Galaxy (forwarded via nginx to the main web server instance running on localhost:8080) I get this in the handler logs:

[galaxy_server@bimsb-galaxy-dev:/galaxy_server/galaxy] (1022) $ tail -f handler*.log
==> handler0.log <==
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:16,994 added url, path to static middleware: /plugins/visualizations/graphview/static, ./config/plugins/visualizations/graphview/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:16,994 added url, path to static middleware: /plugins/visualizations/graphviz/static, ./config/plugins/visualizations/graphviz/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:16,994 added url, path to static middleware: /plugins/visualizations/scatterplot/static, ./config/plugins/visualizations/scatterplot/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:16,995 added url, path to static middleware: /plugins/interactive_environments/bam_iobio/static, /galaxy_server/galaxy/config/plugins/interactive_environments/bam_iobio/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:16,995 added url, path to static middleware: /plugins/interactive_environments/ipython/static, /galaxy_server/galaxy/config/plugins/interactive_environments/ipython/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:16,995 added url, path to static middleware: /plugins/interactive_environments/jupyter/static, /galaxy_server/galaxy/config/plugins/interactive_environments/jupyter/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:16,995 added url, path to static middleware: /plugins/interactive_environments/rstudio/static, /galaxy_server/galaxy/config/plugins/interactive_environments/rstudio/static
galaxy.queue_worker INFO 2016-03-16 11:06:16,995 Binding and starting galaxy control worker for handler0
Starting server in PID 2135.
serving on 0.0.0.0:8085 view at http://127.0.0.1:8085

==> handler1.log <==
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,209 added url, path to static middleware: /plugins/visualizations/graphview/static, ./config/plugins/visualizations/graphview/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,209 added url, path to static middleware: /plugins/visualizations/graphviz/static, ./config/plugins/visualizations/graphviz/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,209 added url, path to static middleware: /plugins/visualizations/scatterplot/static, ./config/plugins/visualizations/scatterplot/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,209 added url, path to static middleware: /plugins/interactive_environments/bam_iobio/static, /galaxy_server/galaxy/config/plugins/interactive_environments/bam_iobio/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,209 added url, path to static middleware: /plugins/interactive_environments/ipython/static, /galaxy_server/galaxy/config/plugins/interactive_environments/ipython/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,210 added url, path to static middleware: /plugins/interactive_environments/jupyter/static, /galaxy_server/galaxy/config/plugins/interactive_environments/jupyter/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,210 added url, path to static middleware: /plugins/interactive_environments/rstudio/static, /galaxy_server/galaxy/config/plugins/interactive_environments/rstudio/static
galaxy.queue_worker INFO 2016-03-16 11:06:17,210 Binding and starting galaxy control worker for handler1
Starting server in PID 2144.
serving on 0.0.0.0:8086 view at http://127.0.0.1:8086

==> handler2.log <==
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,521 added url, path to static middleware: /plugins/visualizations/graphview/static, ./config/plugins/visualizations/graphview/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,521 added url, path to static middleware: /plugins/visualizations/graphviz/static, ./config/plugins/visualizations/graphviz/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,521 added url, path to static middleware: /plugins/visualizations/scatterplot/static, ./config/plugins/visualizations/scatterplot/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,521 added url, path to static middleware: /plugins/interactive_environments/bam_iobio/static, /galaxy_server/galaxy/config/plugins/interactive_environments/bam_iobio/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,521 added url, path to static middleware: /plugins/interactive_environments/ipython/static, /galaxy_server/galaxy/config/plugins/interactive_environments/ipython/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,522 added url, path to static middleware: /plugins/interactive_environments/jupyter/static, /galaxy_server/galaxy/config/plugins/interactive_environments/jupyter/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,522 added url, path to static middleware: /plugins/interactive_environments/rstudio/static, /galaxy_server/galaxy/config/plugins/interactive_environments/rstudio/static
galaxy.queue_worker INFO 2016-03-16 11:06:17,522 Binding and starting galaxy control worker for handler2
Starting server in PID 2153.
serving on 0.0.0.0:8087 view at http://127.0.0.1:8087

==> handler3.log <==
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,514 added url, path to static middleware: /plugins/visualizations/graphview/static, ./config/plugins/visualizations/graphview/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,514 added url, path to static middleware: /plugins/visualizations/graphviz/static, ./config/plugins/visualizations/graphviz/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,515 added url, path to static middleware: /plugins/visualizations/scatterplot/static, ./config/plugins/visualizations/scatterplot/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,515 added url, path to static middleware: /plugins/interactive_environments/bam_iobio/static, /galaxy_server/galaxy/config/plugins/interactive_environments/bam_iobio/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,515 added url, path to static middleware: /plugins/interactive_environments/ipython/static, /galaxy_server/galaxy/config/plugins/interactive_environments/ipython/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,515 added url, path to static middleware: /plugins/interactive_environments/jupyter/static, /galaxy_server/galaxy/config/plugins/interactive_environments/jupyter/static
galaxy.webapps.galaxy.buildapp DEBUG 2016-03-16 11:06:17,515 added url, path to static middleware: /plugins/interactive_environments/rstudio/static, /galaxy_server/galaxy/config/plugins/interactive_environments/rstudio/static
galaxy.queue_worker INFO 2016-03-16 11:06:17,516 Binding and starting galaxy control worker for handler3
Starting server in PID 2162.
serving on 0.0.0.0:8088 view at http://127.0.0.1:8088

Nothing at all is output beyond that.

The rest of the galaxy.ini has not been changed (here's the link to our config again https://gist.github.com/rekado/726e7d34033cde9f83d8).

@mvdbeek
Member

mvdbeek commented Mar 16, 2016

Okay, so there is no communication with the handlers, which leads me to think that your job_conf.xml
is not okay. Following @bgruening's example, did you add

<handlers default="handlers">
    <handler id="handler0" tags="handlers"/>
    <handler id="handler1" tags="handlers"/>
    <handler id="handler2" tags="handlers"/>
    <handler id="handler3" tags="handlers"/>
</handlers>

?

Also, there is no reference to job_config_file in the galaxy.ini;
I guess it wouldn't hurt to be explicit and add the path like so:

job_config_file = config/job_conf.xml

@rekado
Author

rekado commented Mar 16, 2016

I added both of these things, but I don't see anything in the handler logs.

How is Galaxy supposed to communicate with the workers? I see that the handlers listen on local ports --- does communication go over the network? What's the messaging mechanism here?

@bgruening
Member

@rekado last resort until we need @natefoo ... call me ;)

@jmchilton
Member

The messaging happens via polling the database - all the handlers should be watching the database for jobs assigned to them.
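In essence, each handler periodically runs a query along these lines against the job table for jobs assigned to it (a simplified sketch; the real query also joins against the input datasets and the user table):

-- Simplified form of a handler's polling query: new jobs assigned to this handler.
SELECT id, tool_id, state, handler
FROM job
WHERE state = 'new' AND handler = 'handler0'
ORDER BY id;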

@natefoo
Member

natefoo commented Mar 16, 2016

Please check the job record in the database. It should be assigned a handler. e.g.:

galaxy_test=> SELECT id, create_time, update_time, tool_id, tool_version, state, command_line, runner_name, handler, destination_id, destination_params FROM JOB ORDER BY ID DESC LIMIT 1;
   id   |        create_time         |        update_time         |      tool_id       | tool_version | state | command_line | runner_name |    handler    | destination_id | destination_params 
--------+----------------------------+----------------------------+--------------------+--------------+-------+--------------+-------------+---------------+----------------+--------------------
 520626 | 2016-02-21 14:59:33.100898 | 2016-02-21 14:59:33.100923 | ucsc_table_direct1 | 1.0.0        | new   |              |             | test_handler1 |                | 
(1 row)

@rekado
Author

rekado commented Mar 16, 2016

The latest job does in fact have a handler assigned to it:

galaxy_server=> SELECT id, create_time, update_time, tool_id, tool_version, state, command_line, runner_name, handler, destination_id, destination_params FROM JOB ORDER BY ID DESC LIMIT 1;
  id  |        create_time         |        update_time         | tool_id | tool_version | state | command_line | runner_name | handler  | destination_id | destination_params 
------+----------------------------+----------------------------+---------+--------------+-------+--------------+-------------+----------+----------------+--------------------
 1120 | 2016-03-16 13:15:47.987302 | 2016-03-16 13:15:48.035144 | upload1 | 1.1.4        | new   |              |             | handler3 |                | 
(1 row)

@natefoo
Member

natefoo commented Mar 16, 2016

Can you also provide the contents of handler3.log beginning with the most recent start? There should be output corresponding to loading job plugins if the process believes that it is a handler.

@rekado
Author

rekado commented Mar 17, 2016

Here's the full contents of handler3.log since restarting it.

https://gist.github.com/rekado/f826cfbaa981c347a612

@natefoo
Member

natefoo commented Mar 17, 2016

It is starting as a handler:

galaxy.jobs.manager DEBUG 2016-03-16 14:14:40,200 Starting job handler
galaxy.jobs INFO 2016-03-16 14:14:40,200 Handler 'handler3' will load all configured runner plugins
galaxy.jobs.runners.state_handler_factory DEBUG 2016-03-16 14:14:40,201 Loaded 'failure' state handler from module galaxy.jobs.runners.state_handlers.resubmit
galaxy.jobs.runners DEBUG 2016-03-16 14:14:40,201 Starting 4 LocalRunner workers
galaxy.jobs DEBUG 2016-03-16 14:14:40,202 Loaded job runner 'galaxy.jobs.runners.local:LocalJobRunner' as 'local'
galaxy.jobs.handler DEBUG 2016-03-16 14:14:40,202 Loaded job runners plugins: local
galaxy.jobs.handler INFO 2016-03-16 14:14:40,202 job handler stop queue started
galaxy.jobs.handler INFO 2016-03-16 14:14:40,213 job handler queue started

Can you set database_engine_option_echo = True in galaxy.ini and restart one of the handlers? This will yield a lot of output but once the server has started it will show whether or not the handler is even performing the query looking for jobs that are ready to run.

@rekado
Author

rekado commented Mar 17, 2016

That's a lot of output, but it does seem to perform the queries. I cannot see from the output whether the queries were successful:

....
2016-03-17 15:03:34,126 INFO sqlalchemy.engine.base.Engine SELECT job.id AS job_id, job.create_time AS job_create_time, job.update_time AS job_update_time, job.history_id AS job_history_id, job.library_folder_id AS job_library_folder_id, job.tool_id AS job_tool_id, job.tool_version AS job_tool_version, job.state AS job_state, job.info AS job_info, job.command_line AS job_command_line, job.param_filename AS job_param_filename, job.runner_name AS job_runner_name_1, job.stdout AS job_stdout, job.stderr AS job_stderr, job.exit_code AS job_exit_code, job.traceback AS job_traceback, job.session_id AS job_session_id, job.user_id AS job_user_id, job.job_runner_name AS job_job_runner_name, job.job_runner_external_id AS job_job_runner_external_id, job.destination_id AS job_destination_id, job.destination_params AS job_destination_params, job.object_store_id AS job_object_store_id, job.imported AS job_imported, job.params AS job_params, job.handler AS job_handler 
FROM job 
WHERE job.state = %(state_1)s AND job.handler = %(handler_1)s
sqlalchemy.engine.base.Engine INFO 2016-03-17 15:03:34,126 SELECT job.id AS job_id, job.create_time AS job_create_time, job.update_time AS job_update_time, job.history_id AS job_history_id, job.library_folder_id AS job_library_folder_id, job.tool_id AS job_tool_id, job.tool_version AS job_tool_version, job.state AS job_state, job.info AS job_info, job.command_line AS job_command_line, job.param_filename AS job_param_filename, job.runner_name AS job_runner_name_1, job.stdout AS job_stdout, job.stderr AS job_stderr, job.exit_code AS job_exit_code, job.traceback AS job_traceback, job.session_id AS job_session_id, job.user_id AS job_user_id, job.job_runner_name AS job_job_runner_name, job.job_runner_external_id AS job_job_runner_external_id, job.destination_id AS job_destination_id, job.destination_params AS job_destination_params, job.object_store_id AS job_object_store_id, job.imported AS job_imported, job.params AS job_params, job.handler AS job_handler 
FROM job 
WHERE job.state = %(state_1)s AND job.handler = %(handler_1)s
2016-03-17 15:03:34,126 INFO sqlalchemy.engine.base.Engine {'state_1': 'deleted_new', 'handler_1': 'handler3'}
sqlalchemy.engine.base.Engine INFO 2016-03-17 15:03:34,126 {'state_1': 'deleted_new', 'handler_1': 'handler3'}
2016-03-17 15:03:34,776 INFO sqlalchemy.engine.base.Engine SELECT job.id AS job_id, job.create_time AS job_create_time, job.update_time AS job_update_time, job.history_id AS job_history_id, job.library_folder_id AS job_library_folder_id, job.tool_id AS job_tool_id, job.tool_version AS job_tool_version, job.state AS job_state, job.info AS job_info, job.command_line AS job_command_line, job.param_filename AS job_param_filename, job.runner_name AS job_runner_name_1, job.stdout AS job_stdout, job.stderr AS job_stderr, job.exit_code AS job_exit_code, job.traceback AS job_traceback, job.session_id AS job_session_id, job.user_id AS job_user_id, job.job_runner_name AS job_job_runner_name, job.job_runner_external_id AS job_job_runner_external_id, job.destination_id AS job_destination_id, job.destination_params AS job_destination_params, job.object_store_id AS job_object_store_id, job.imported AS job_imported, job.params AS job_params, job.handler AS job_handler 
FROM job LEFT OUTER JOIN galaxy_user ON galaxy_user.id = job.user_id 
WHERE job.state = %(state_1)s AND (job.user_id IS NULL OR galaxy_user.active = true) AND job.handler = %(handler_1)s AND job.id NOT IN (SELECT job.id 
FROM job JOIN job_to_input_dataset ON job.id = job_to_input_dataset.job_id JOIN history_dataset_association ON history_dataset_association.id = job_to_input_dataset.dataset_id JOIN dataset ON dataset.id = history_dataset_association.dataset_id 
WHERE job.state = %(state_2)s AND (history_dataset_association.state = %(_state_1)s OR history_dataset_association.deleted = true OR dataset.state != %(state_3)s OR dataset.deleted = true)) AND job.id NOT IN (SELECT job.id 
FROM job JOIN job_to_input_library_dataset ON job.id = job_to_input_library_dataset.job_id JOIN library_dataset_dataset_association ON library_dataset_dataset_association.id = job_to_input_library_dataset.ldda_id JOIN dataset ON dataset.id = library_dataset_dataset_association.dataset_id 
WHERE job.state = %(state_4)s AND (library_dataset_dataset_association.state IS NOT NULL OR library_dataset_dataset_association.deleted = true OR dataset.state != %(state_5)s OR dataset.deleted = true)) ORDER BY job.id
sqlalchemy.engine.base.Engine INFO 2016-03-17 15:03:34,776 SELECT job.id AS job_id, job.create_time AS job_create_time, job.update_time AS job_update_time, job.history_id AS job_history_id, job.library_folder_id AS job_library_folder_id, job.tool_id AS job_tool_id, job.tool_version AS job_tool_version, job.state AS job_state, job.info AS job_info, job.command_line AS job_command_line, job.param_filename AS job_param_filename, job.runner_name AS job_runner_name_1, job.stdout AS job_stdout, job.stderr AS job_stderr, job.exit_code AS job_exit_code, job.traceback AS job_traceback, job.session_id AS job_session_id, job.user_id AS job_user_id, job.job_runner_name AS job_job_runner_name, job.job_runner_external_id AS job_job_runner_external_id, job.destination_id AS job_destination_id, job.destination_params AS job_destination_params, job.object_store_id AS job_object_store_id, job.imported AS job_imported, job.params AS job_params, job.handler AS job_handler 
FROM job LEFT OUTER JOIN galaxy_user ON galaxy_user.id = job.user_id 
WHERE job.state = %(state_1)s AND (job.user_id IS NULL OR galaxy_user.active = true) AND job.handler = %(handler_1)s AND job.id NOT IN (SELECT job.id 
FROM job JOIN job_to_input_dataset ON job.id = job_to_input_dataset.job_id JOIN history_dataset_association ON history_dataset_association.id = job_to_input_dataset.dataset_id JOIN dataset ON dataset.id = history_dataset_association.dataset_id 
WHERE job.state = %(state_2)s AND (history_dataset_association.state = %(_state_1)s OR history_dataset_association.deleted = true OR dataset.state != %(state_3)s OR dataset.deleted = true)) AND job.id NOT IN (SELECT job.id 
FROM job JOIN job_to_input_library_dataset ON job.id = job_to_input_library_dataset.job_id JOIN library_dataset_dataset_association ON library_dataset_dataset_association.id = job_to_input_library_dataset.ldda_id JOIN dataset ON dataset.id = library_dataset_dataset_association.dataset_id 
WHERE job.state = %(state_4)s AND (library_dataset_dataset_association.state IS NOT NULL OR library_dataset_dataset_association.deleted = true OR dataset.state != %(state_5)s OR dataset.deleted = true)) ORDER BY job.id
2016-03-17 15:03:34,777 INFO sqlalchemy.engine.base.Engine {'state_4': 'new', 'handler_1': 'handler3', 'state_1': 'new', '_state_1': 'failed_metadata', 'state_2': 'new', 'state_5': 'ok', 'state_3': 'ok'}
sqlalchemy.engine.base.Engine INFO 2016-03-17 15:03:34,777 {'state_4': 'new', 'handler_1': 'handler3', 'state_1': 'new', '_state_1': 'failed_metadata', 'state_2': 'new', 'state_5': 'ok', 'state_3': 'ok'}
2016-03-17 15:03:34,786 INFO sqlalchemy.engine.base.Engine SELECT job.id AS job_id, job.create_time AS job_create_time, job.update_time AS job_update_time, job.history_id AS job_history_id, job.library_folder_id AS job_library_folder_id, job.tool_id AS job_tool_id, job.tool_version AS job_tool_version, job.state AS job_state, job.info AS job_info, job.command_line AS job_command_line, job.param_filename AS job_param_filename, job.runner_name AS job_runner_name_1, job.stdout AS job_stdout, job.stderr AS job_stderr, job.exit_code AS job_exit_code, job.traceback AS job_traceback, job.session_id AS job_session_id, job.user_id AS job_user_id, job.job_runner_name AS job_job_runner_name, job.job_runner_external_id AS job_job_runner_external_id, job.destination_id AS job_destination_id, job.destination_params AS job_destination_params, job.object_store_id AS job_object_store_id, job.imported AS job_imported, job.params AS job_params, job.handler AS job_handler 
FROM job 
WHERE job.state = %(state_1)s AND job.handler = %(handler_1)s ORDER BY job.id
sqlalchemy.engine.base.Engine INFO 2016-03-17 15:03:34,786 SELECT job.id AS job_id, job.create_time AS job_create_time, job.update_time AS job_update_time, job.history_id AS job_history_id, job.library_folder_id AS job_library_folder_id, job.tool_id AS job_tool_id, job.tool_version AS job_tool_version, job.state AS job_state, job.info AS job_info, job.command_line AS job_command_line, job.param_filename AS job_param_filename, job.runner_name AS job_runner_name_1, job.stdout AS job_stdout, job.stderr AS job_stderr, job.exit_code AS job_exit_code, job.traceback AS job_traceback, job.session_id AS job_session_id, job.user_id AS job_user_id, job.job_runner_name AS job_job_runner_name, job.job_runner_external_id AS job_job_runner_external_id, job.destination_id AS job_destination_id, job.destination_params AS job_destination_params, job.object_store_id AS job_object_store_id, job.imported AS job_imported, job.params AS job_params, job.handler AS job_handler 
FROM job 
WHERE job.state = %(state_1)s AND job.handler = %(handler_1)s ORDER BY job.id
2016-03-17 15:03:34,786 INFO sqlalchemy.engine.base.Engine {'state_1': 'resubmitted', 'handler_1': 'handler3'}
sqlalchemy.engine.base.Engine INFO 2016-03-17 15:03:34,786 {'state_1': 'resubmitted', 'handler_1': 'handler3'}
2016-03-17 15:03:35,097 INFO sqlalchemy.engine.base.Engine SELECT workflow_invocation.id AS workflow_invocation_id, workflow_invocation.create_time AS workflow_invocation_create_time, workflow_invocation.update_time AS workflow_invocation_update_time, workflow_invocation.workflow_id AS workflow_invocation_workflow_id, workflow_invocation.state AS workflow_invocation_state, workflow_invocation.scheduler AS workflow_invocation_scheduler, workflow_invocation.handler AS workflow_invocation_handler, workflow_invocation.uuid AS workflow_invocation_uuid, workflow_invocation.history_id AS workflow_invocation_history_id, workflow_invocation_step_1.id AS workflow_invocation_step_1_id, workflow_invocation_step_1.create_time AS workflow_invocation_step_1_create_time, workflow_invocation_step_1.update_time AS workflow_invocation_step_1_update_time, workflow_invocation_step_1.workflow_invocation_id AS workflow_invocation_step_1_workflow_invocation_id, workflow_invocation_step_1.workflow_step_id AS workflow_invocation_step_1_workflow_step_id, workflow_invocation_step_1.job_id AS workflow_invocation_step_1_job_id, workflow_invocation_step_1.action AS workflow_invocation_step_1_action 
FROM workflow_invocation LEFT OUTER JOIN workflow_invocation_step AS workflow_invocation_step_1 ON workflow_invocation.id = workflow_invocation_step_1.workflow_invocation_id 
WHERE (workflow_invocation.state = %(state_1)s OR workflow_invocation.state = %(state_2)s) AND workflow_invocation.scheduler = %(scheduler_1)s AND workflow_invocation.handler = %(handler_1)s
sqlalchemy.engine.base.Engine INFO 2016-03-17 15:03:35,097 SELECT workflow_invocation.id AS workflow_invocation_id, workflow_invocation.create_time AS workflow_invocation_create_time, workflow_invocation.update_time AS workflow_invocation_update_time, workflow_invocation.workflow_id AS workflow_invocation_workflow_id, workflow_invocation.state AS workflow_invocation_state, workflow_invocation.scheduler AS workflow_invocation_scheduler, workflow_invocation.handler AS workflow_invocation_handler, workflow_invocation.uuid AS workflow_invocation_uuid, workflow_invocation.history_id AS workflow_invocation_history_id, workflow_invocation_step_1.id AS workflow_invocation_step_1_id, workflow_invocation_step_1.create_time AS workflow_invocation_step_1_create_time, workflow_invocation_step_1.update_time AS workflow_invocation_step_1_update_time, workflow_invocation_step_1.workflow_invocation_id AS workflow_invocation_step_1_workflow_invocation_id, workflow_invocation_step_1.workflow_step_id AS workflow_invocation_step_1_workflow_step_id, workflow_invocation_step_1.job_id AS workflow_invocation_step_1_job_id, workflow_invocation_step_1.action AS workflow_invocation_step_1_action 
FROM workflow_invocation LEFT OUTER JOIN workflow_invocation_step AS workflow_invocation_step_1 ON workflow_invocation.id = workflow_invocation_step_1.workflow_invocation_id 
WHERE (workflow_invocation.state = %(state_1)s OR workflow_invocation.state = %(state_2)s) AND workflow_invocation.scheduler = %(scheduler_1)s AND workflow_invocation.handler = %(handler_1)s
2016-03-17 15:03:35,098 INFO sqlalchemy.engine.base.Engine {'state_1': 'new', 'handler_1': 'handler3', 'state_2': 'ready', 'scheduler_1': 'core'}
sqlalchemy.engine.base.Engine INFO 2016-03-17 15:03:35,098 {'state_1': 'new', 'handler_1': 'handler3', 'state_2': 'ready', 'scheduler_1': 'core'}
2016-03-17 15:03:35,131 INFO sqlalchemy.engine.base.Engine SELECT job.id AS job_id, job.create_time AS job_create_time, job.update_time AS job_update_time, job.history_id AS job_history_id, job.library_folder_id AS job_library_folder_id, job.tool_id AS job_tool_id, job.tool_version AS job_tool_version, job.state AS job_state, job.info AS job_info, job.command_line AS job_command_line, job.param_filename AS job_param_filename, job.runner_name AS job_runner_name_1, job.stdout AS job_stdout, job.stderr AS job_stderr, job.exit_code AS job_exit_code, job.traceback AS job_traceback, job.session_id AS job_session_id, job.user_id AS job_user_id, job.job_runner_name AS job_job_runner_name, job.job_runner_external_id AS job_job_runner_external_id, job.destination_id AS job_destination_id, job.destination_params AS job_destination_params, job.object_store_id AS job_object_store_id, job.imported AS job_imported, job.params AS job_params, job.handler AS job_handler 
FROM job 
WHERE job.state = %(state_1)s AND job.handler = %(handler_1)s
sqlalchemy.engine.base.Engine INFO 2016-03-17 15:03:35,131 SELECT job.id AS job_id, job.create_time AS job_create_time, job.update_time AS job_update_time, job.history_id AS job_history_id, job.library_folder_id AS job_library_folder_id, job.tool_id AS job_tool_id, job.tool_version AS job_tool_version, job.state AS job_state, job.info AS job_info, job.command_line AS job_command_line, job.param_filename AS job_param_filename, job.runner_name AS job_runner_name_1, job.stdout AS job_stdout, job.stderr AS job_stderr, job.exit_code AS job_exit_code, job.traceback AS job_traceback, job.session_id AS job_session_id, job.user_id AS job_user_id, job.job_runner_name AS job_job_runner_name, job.job_runner_external_id AS job_job_runner_external_id, job.destination_id AS job_destination_id, job.destination_params AS job_destination_params, job.object_store_id AS job_object_store_id, job.imported AS job_imported, job.params AS job_params, job.handler AS job_handler 
FROM job 
WHERE job.state = %(state_1)s AND job.handler = %(handler_1)s
2016-03-17 15:03:35,131 INFO sqlalchemy.engine.base.Engine {'state_1': 'deleted_new', 'handler_1': 'handler3'}
sqlalchemy.engine.base.Engine INFO 2016-03-17 15:03:35,131 {'state_1': 'deleted_new', 'handler_1': 'handler3'}
2016-03-17 15:03:35,799 INFO sqlalchemy.engine.base.Engine SELECT job.id AS job_id, job.create_time AS job_create_time, job.update_time AS job_update_time, job.history_id AS job_history_id, job.library_folder_id AS job_library_folder_id, job.tool_id AS job_tool_id, job.tool_version AS job_tool_version, job.state AS job_state, job.info AS job_info, job.command_line AS job_command_line, job.param_filename AS job_param_filename, job.runner_name AS job_runner_name_1, job.stdout AS job_stdout, job.stderr AS job_stderr, job.exit_code AS job_exit_code, job.traceback AS job_traceback, job.session_id AS job_session_id, job.user_id AS job_user_id, job.job_runner_name AS job_job_runner_name, job.job_runner_external_id AS job_job_runner_external_id, job.destination_id AS job_destination_id, job.destination_params AS job_destination_params, job.object_store_id AS job_object_store_id, job.imported AS job_imported, job.params AS job_params, job.handler AS job_handler 
FROM job LEFT OUTER JOIN galaxy_user ON galaxy_user.id = job.user_id 
WHERE job.state = %(state_1)s AND (job.user_id IS NULL OR galaxy_user.active = true) AND job.handler = %(handler_1)s AND job.id NOT IN (SELECT job.id 
FROM job JOIN job_to_input_dataset ON job.id = job_to_input_dataset.job_id JOIN history_dataset_association ON history_dataset_association.id = job_to_input_dataset.dataset_id JOIN dataset ON dataset.id = history_dataset_association.dataset_id 
WHERE job.state = %(state_2)s AND (history_dataset_association.state = %(_state_1)s OR history_dataset_association.deleted = true OR dataset.state != %(state_3)s OR dataset.deleted = true)) AND job.id NOT IN (SELECT job.id 
FROM job JOIN job_to_input_library_dataset ON job.id = job_to_input_library_dataset.job_id JOIN library_dataset_dataset_association ON library_dataset_dataset_association.id = job_to_input_library_dataset.ldda_id JOIN dataset ON dataset.id = library_dataset_dataset_association.dataset_id 
WHERE job.state = %(state_4)s AND (library_dataset_dataset_association.state IS NOT NULL OR library_dataset_dataset_association.deleted = true OR dataset.state != %(state_5)s OR dataset.deleted = true)) ORDER BY job.id
sqlalchemy.engine.base.Engine INFO 2016-03-17 15:03:35,799 SELECT job.id AS job_id, job.create_time AS job_create_time, job.update_time AS job_update_time, job.history_id AS job_history_id, job.library_folder_id AS job_library_folder_id, job.tool_id AS job_tool_id, job.tool_version AS job_tool_version, job.state AS job_state, job.info AS job_info, job.command_line AS job_command_line, job.param_filename AS job_param_filename, job.runner_name AS job_runner_name_1, job.stdout AS job_stdout, job.stderr AS job_stderr, job.exit_code AS job_exit_code, job.traceback AS job_traceback, job.session_id AS job_session_id, job.user_id AS job_user_id, job.job_runner_name AS job_job_runner_name, job.job_runner_external_id AS job_job_runner_external_id, job.destination_id AS job_destination_id, job.destination_params AS job_destination_params, job.object_store_id AS job_object_store_id, job.imported AS job_imported, job.params AS job_params, job.handler AS job_handler 
FROM job LEFT OUTER JOIN galaxy_user ON galaxy_user.id = job.user_id 
WHERE job.state = %(state_1)s AND (job.user_id IS NULL OR galaxy_user.active = true) AND job.handler = %(handler_1)s AND job.id NOT IN (SELECT job.id 
FROM job JOIN job_to_input_dataset ON job.id = job_to_input_dataset.job_id JOIN history_dataset_association ON history_dataset_association.id = job_to_input_dataset.dataset_id JOIN dataset ON dataset.id = history_dataset_association.dataset_id 
WHERE job.state = %(state_2)s AND (history_dataset_association.state = %(_state_1)s OR history_dataset_association.deleted = true OR dataset.state != %(state_3)s OR dataset.deleted = true)) AND job.id NOT IN (SELECT job.id 
FROM job JOIN job_to_input_library_dataset ON job.id = job_to_input_library_dataset.job_id JOIN library_dataset_dataset_association ON library_dataset_dataset_association.id = job_to_input_library_dataset.ldda_id JOIN dataset ON dataset.id = library_dataset_dataset_association.dataset_id 
WHERE job.state = %(state_4)s AND (library_dataset_dataset_association.state IS NOT NULL OR library_dataset_dataset_association.deleted = true OR dataset.state != %(state_5)s OR dataset.deleted = true)) ORDER BY job.id
...

Here's the record for that last job:

psql (9.2.14)
Type "help" for help.

galaxy_server=> SELECT id, create_time, update_time, tool_id, tool_version, state, command_line, runner_name, handler, destination_id, destination_params FROM JOB ORDER BY ID DESC LIMIT 1;
  id  |        create_time         |        update_time         | tool_id | tool_version | state | command_line | runner_name | handler  | destination_id | destination_params 
------+----------------------------+----------------------------+---------+--------------+-------+--------------+-------------+----------+----------------+--------------------
 1121 | 2016-03-17 14:03:34.147511 | 2016-03-17 14:03:34.184615 | upload1 | 1.1.4        | new   |              |             | handler3 |                | 
(1 row)

galaxy_server=> 

I notice that the timestamp is off by one hour.

@natefoo
Member

natefoo commented Mar 17, 2016

Ahh, a clue in that output - by any chance are the users missing the active flag on their records in the galaxy_user table? I see that you have activation turned on.
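A quick way to check (assuming PostgreSQL; adjust the column list to your schema):

-- List accounts that are not flagged as active; jobs submitted by these
-- users are skipped by the handler's job query when activation is enabled.
SELECT id, email, active
FROM galaxy_user
WHERE active = false;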

@martenson
Member

And even if they have the correct flag, please turn off activation and see if anything changes.

@rekado
Author

rekado commented Mar 17, 2016

Yes!! The users' active fields were all set to f --- with the exception of the user record of the admin who performed the upgrade.

There's another error I'm getting now (encoding related?) that's failing the jobs, but at least the jobs are finally executed!

Thank you so much for your support!
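(For anyone hitting the same problem: re-activating the affected accounts amounts to an update along these lines. This is only a sketch; review which rows it touches before running it against your own database.)

-- Re-activate accounts left inactive after the upgrade; narrow the WHERE
-- clause if some accounts are meant to stay deactivated.
UPDATE galaxy_user
SET active = true
WHERE active = false;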

@martenson
Member

@rekado when we introduced the activation feature (~2 yrs ago?), part of the DB migration script set all existing users to active automatically - maybe it failed on your DB for some reason.
I am glad you resolved your problem, thank you for using Galaxy!
(please create a new issue for the encoding error if you want assistance :)

special thanks to @jmchilton @mvdbeek @bgruening and @natefoo !

p.s. @dyusuf does the fix work for you too?

@bgruening
Member

@rekado as a quick workaround, try setting your locale to UTF-8.

@bgruening
Member

@dyusuf and @rekado are working in the same group afaik.

@dyusuf

dyusuf commented Mar 19, 2016

@martenson @jmchilton @mvdbeek @bgruening @natefoo thank you greatly for the troubleshooting, it saved our day.

Just a quick comment.

It is hard to debug this type of error without experts like you. In addition to admin issues, I often encounter problems while developing Galaxy tools for which I always have to send an SOS to @bgruening, since it is not easy for me to find solutions in the Galaxy documentation.

To make Galaxy admins and tool developers more independent in troubleshooting, would it be possible to improve the Galaxy documentation in some way, or to provide some other means? If you think it is needed and can provide support, I would certainly like to invest some effort in it.

@mvdbeek
Member

mvdbeek commented Mar 19, 2016

All documentation efforts are very welcome. On the one hand there is https://wiki.galaxyproject.org/Admin, and there is also some code-specific documentation in https://github.com/galaxyproject/galaxy/tree/dev/doc.

I think this particular issue is very tricky, but perhaps we could log a warning if a job is submitted by a user who hasn't been activated.

@dyusuf

dyusuf commented Mar 20, 2016

@martenson thanks for the info!

@natefoo
Member

natefoo commented Mar 22, 2016

@dyusuf There should be a banner at the top of Galaxy indicating that the account has not been activated. Was this not the case for these users?

@rekado
Author

rekado commented Mar 22, 2016

@natefoo There is no banner shown for users that are deactivated. We just set the "active" field back to "f" for one user and we do not see any hint in the UI. The account is marked as inactive only in the admin interface.

@rekado
Author

rekado commented Mar 22, 2016

Oh, I misunderstood. This has just been added by merging @mvdbeek's pull request.

@natefoo
Member

natefoo commented Mar 22, 2016

@martenson Did something get broken here with the masthead message?

@mvdbeek
Member

mvdbeek commented Mar 22, 2016

Oh, I misunderstood. This has just been added by merging @mvdbeek's pull request.

Now I may be misunderstanding things, but my PR introduces a warning message printed to the console in case user_activation is on, a job is submitted, and the user is not active or anonymous. I believe @natefoo is talking about pre-existing web interface functionality. (I have not used the user activation feature, so I can't help any further.)

@martenson
Member

@natefoo I assume they just do not have a message set up (according to their config - #1789 (comment))

@martenson
Member

check out https://wiki.galaxyproject.org/Admin/UserAccounts for details

@nsoranzo
Member

@martenson I think inactivity_box_content in lib/galaxy/config.py (and in lib/galaxy/webapps/tool_shed/config.py) should have the same default value that is present in config/galaxy.ini.sample, not None.

@martenson
Member

@nsoranzo agreed
