Tracing could not be found. #5680

Closed
paulobaptista opened this issue Aug 19, 2021 · 10 comments

@paulobaptista

Context

A user is trying to view a Shared Annotation.

Version 20.12.0

Expected Behavior

Other Shared Annotations are viewable.

Current Behavior

User reported that when viewing a Shared Annotation they receive an error.

Initialization error. Please refresh the page to retry. If the error persists, please contact an administrator. [object Object]

See this in dev tools:

https://connectomics.clps.brown.edu/tracings/skeleton/340d7b8d-4e06-46e1-a508-415dcddeaecd?token=arS9Er3e9YvaL6Fa_DTUHA
Status 400

and

https://api.airbrake.io/api/v3/projects/empty/notices?key=empty
Status 400

I see this in the webknossos logs:

2021-08-19 14:13:52,683 [WARN] com.scalableminds.webknossos.tracingstore.controllers.SkeletonTracingController - Answering 400 at /tracings/skeleton/340d7b8d-4e06-46e1-a508-415dcddeaecd?token=wv9iZ8Zcgt-xNdV8jBWfhQ – {"messages":[{"error":"Tracing couldn't be found"}]}

Steps to Reproduce the bug

  1. Log in
  2. Click Shared Annotations
  3. Click @ View under Actions column for the f70f4c | @archival annotation
  4. Wait a few seconds and get the error

Your Environment for bug

Production environment

  • Chrome 92.0.4515.131 (Official Build) (x86_64)
  • Server running Red Hat 7.9, Docker 19.03.5
  • Version of WebKnossos (Release or Commit): 20.12.0
@philippotto
Member

Hi @paulobaptista, thank you for creating this issue. Since you are using version 20.12.0, I recommend updating to the latest version, which is 21.07.0 (in a couple of days there will also be 21.08.0). A lot has changed in the last months, and chances are that the bug is simply fixed by the upgrade.
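
For reference, a minimal sketch of such an upgrade, assuming the standard docker-compose setup from the webKnossos repository where the image tag is selected via the DOCKER_TAG variable (verify the variable name against your own compose file):

# Pull and start the new release; DOCKER_TAG is an assumption based on
# the default compose file from the webKnossos repository.
DOCKER_TAG=21.07.0 docker-compose pull webknossos
DOCKER_TAG=21.07.0 docker-compose up -d webknossos

It is a good idea to back up your Postgres database before upgrading across several releases.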

@hotzenklotz
Member

FYI, we keep a changelog of all changes, bug fixes, new features, and update notes: https://github.com/scalableminds/webknossos/blob/master/CHANGELOG.released.md

@philippotto
Member

philippotto commented Aug 23, 2021

Good point. Note that 21.02.0 and 21.03.0 both contain fixes related to sharing tokens.

@paulobaptista
Author

paulobaptista commented Aug 23, 2021

Hi

My first attempt at migrating from 20.12 to 21.07 failed. :(

Here were my steps:

The container came up, but it asked me to create a new organization.

What should I do about these lines in docker-compose?

Do I need the integrated datastore?

# the following lines disable the integrated datastore:
# - -Dplay.modules.enabled-="com.scalableminds.webknossos.datastore.DataStoreModule"
# - -Dplay.http.router="noDS.Routes"

I also disabled these sections. Are they needed? The 20.12 setup didn't have these containers. My guess is that they were covered by the integrated datastore and are not needed.

# webknossos-datastore:

# webknossos-tracingstore:

Do I keep -Dhttp.uri=http://webknossos-datastore:9090 the same, or do I need to update it with the PUBLIC_HOST url? Is this for backend communication between containers, or do the clients connect directly to these ports?

@fm3
Member

fm3 commented Aug 24, 2021

Hi @paulobaptista

webKnossos can either be run with its own datastore (the default) or without one, if a standalone datastore is also active. This has not changed from 20.12 to 21.07.
What has changed is the naming of the config entries that control which of the two happens (compare the config changes table).
So it would be important to find out which of the two setups you used before updating. I will assume you used the internal store, which is the default as described in the webKnossos repository (and has been since before 2020).

Note that the # sign in docker-compose marks a line as commented out, so if it was commented out before, you don’t need to activate it now. If you start the containers with docker-compose up webknossos, the docker-compose lines under the services webknossos-tracingstore and webknossos-datastore respectively will not be used.

webKnossos now asking you to create a new organization and not recognizing your old one sounds to me like a problem with the database connection. Are there any hints about this in the console output of the webKnossos backend?

Also, does the sql command select * from webknossos.organizations; yield a result (to double-check that the db data is still valid)?
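
For example (a minimal sketch; the postgres service name, user, and database name are assumptions based on the default docker-compose setup, so adjust them to your deployment):

# Run the check inside the postgres container; service, user, and
# database names are assumptions based on the default setup.
docker-compose exec postgres psql -U postgres -d webknossos \
  -c "SELECT * FROM webknossos.organizations;"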

I hope this helps!

@paulobaptista
Author

Hi...

I made progress in my test environment and updated to 21.07.

In my production environment, I hit this error:

at fossildb:7155. Reply: SERVING
2021-08-25 15:34:58,586 [INFO] Startup - Executing Startup
2021-08-25 15:34:58,597 [INFO] Startup - Running ensure_db.sh with POSTGRES_URL Some(jdbc:postgresql://postgres/webknossos)
2021-08-25 15:34:58,894 [INFO] Startup - Database already exists
2021-08-25 15:34:59,085 [INFO] Startup - Schema already exists
2021-08-25 15:34:59,088 [INFO] Startup - Running diff_schema.js tools/postgres/schema.sql DB
2021-08-25 15:35:01,271 [ERROR] Startup - Database schema does not fit to schema.sql!
Wrong dbName
{ [InternalError: Script exit: Normal] message: 'Script exit: Normal', fileName: '' }

2021-08-25 15:35:01,368 [INFO] com.zaxxer.hikari.HikariDataSource - slick.db - Started.
2021-08-25 15:35:01,971 [INFO] controllers.InitialDataService - Inserting local datastore
2021-08-25 15:35:02,054 [ERROR] models.binary.DataStoreDAO - SQL Error: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "datastores_pkey"
Detail: Key (name)=(localhost) already exists.
2021-08-25 15:35:02,055 [DEBUG] models.binary.DataStoreDAO - Caused by query:
[insert into webknossos.dataStores(name, url, publicUrl, key, isScratch, isDeleted, isForeign, isConnector, allowsUpload)
values(?, ?, ?, ?, ?, ?, ?, ?, ?)]
2021-08-25 15:35:02,058 [INFO] Startup - No initial data inserted: SQL Failure: ERROR: duplicate key value violates unique constraint "datastores_pkey"
Detail: Key (name)=(localhost) already exists.

Also

2021-08-25 15:41:11,518 [ERROR] Startup - Database schema does not fit to schema.sql!
Creating DB wk_tmp_aqg8ll30
psql:tools/postgres/schema.sql:1: NOTICE: schema "webknossos" does not exist, skipping
CLEANUP: remove
{ [AssertionError: rimraf: missing path]
name: 'AssertionError',
actual: '',
expected: true,
operator: '==',
message: 'rimraf: missing path',
generatedMessage: false,
stack: [Getter/Setter] }
CLEANUP: DROP DATABASE wk_tmp_aqg8ll30
{ [InternalError: Script exit: Normal] message: 'Script exit: Normal', fileName: '' }

I can log in and use the application.

We are still experiencing an issue with viewing a Shared Annotation.

Initialization error. Please refresh the page to retry. If the error persists, please contact an administrator. [object Object]

I get a 400 for https://connectomics.clps.brown.edu/tracings/skeleton/340d7b8d-4e06-46e1-a508-415dcddeaecd?token=moU-veKFRUKJ00K0SvAY9A

and

a 400 for https://api.airbrake.io/api/v3/projects/insert-valid-projectID-here/notices?key=insert-valid-projectKey-here

@fm3
Member

fm3 commented Aug 30, 2021

Thanks for your reply and good to hear that more things are working now!

As for the two kinds of errors on startup, I’d say both can “safely” be ignored:

  • The first (“duplicate key”) stems from webKnossos trying to initialize the internal datastore on startup. Somehow the information that this already happened seems to be lost (this likely means that the http.uri in the config does not match the url field in the database relation webknossos.dataStores; see the sketch after this list). This attempt to re-initialize fails, but has no further effect.
  • The second (“Database schema does not fit to schema.sql”) means that the script that checks whether the database schema is correct cannot run. So this assertion does not pass, but this, too, will not interrupt normal operation of webKnossos. If you ran all the sql evolutions for the respective release, the schema should match.
    Both checks work for my setup, so I am afraid that without concrete steps to reproduce this, I don’t know how to debug it remotely.
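
To check the first point, compare the configured http.uri with what is stored in the database, e.g. (a sketch; the column names are taken from the insert statement in your log, while the service, user, and database names are assumptions based on the default setup):

# Show the url/publicUrl that webKnossos has stored for its datastores,
# to compare against the http.uri in your config.
docker-compose exec postgres psql -U postgres -d webknossos \
  -c "SELECT name, url, publicUrl FROM webknossos.dataStores;"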

Another note on the airbrake error 400 – this is expected, as your webKnossos instance is presumably not connected to our airbrake monitoring, so nothing to worry about here either.

Now for the interesting part, the failing skeleton tracing request. There are several mechanisms involved where a bug could come from. In the worst case, the tracing was never correctly saved and is not present in the database. But since you specifically mentioned that this is a shared annotation, I would hope that the problem is instead with the access permission checks.

  • Is the owner/author of the annotation still able to load it normally?
  • Are you (or the user that attempts to view it) in the team the author selected for sharing?
  • Does the url for the annotation contain “Explorational”? (As opposed to “Task”/“TaskType”/“Project”?)

@paulobaptista
Author

Hi!

OK, great, thanks for the feedback. After restarting the fossildb, I can now view the shared annotation.

There is a problem with the fossildb though.

I see the sst files growing in size. How can we debug this?
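
A simple loop like the following (a sketch using standard coreutils; the path is relative to the deployment directory) can track the growth:

# Record disk usage of the root filesystem and the fossildb folder
# every 15 minutes.
while true; do
  date
  df -h /
  du -sm fossildb/
  sleep 900
done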

Size of root folder over time

Tue Sep 7 14:07:18 EDT 2021
/dev/sda2 59G 38G 19G 68% /

Tue Sep 7 14:23:24 EDT 2021
/dev/sda2 59G 54G 2.2G 97% /

Size of persistent fossildb folder in MB
Tue Sep 7 14:11:03 EDT 2021
33885 fossildb/

Tue Sep 7 14:23:50 EDT 2021
47232 fossildb/

SST files are being created often

-rw-r--r--. 1 polkitd ssh_keys 1123897489 Sep 7 14:21 002717.sst
-rw-r--r--. 1 polkitd ssh_keys 1103230144 Sep 7 14:22 002718.sst
-rw-r--r--. 1 polkitd ssh_keys 1158262211 Sep 7 14:23 002719.sst
-rw-r--r--. 1 polkitd ssh_keys 82733 Sep 7 14:23 LOG
-rw-r--r--. 1 polkitd ssh_keys 1079242756 Sep 7 14:24 002720.sst

@paulobaptista
Author

After restarting fossildb, the space is cleared up:

du -sm data
49442 data

docker-compose restart fossildb
Restarting webknossos_fossildb_1 ... done
du -sm data
28546 data

@fm3
Member

fm3 commented Sep 14, 2021

Hi @paulobaptista
the RocksDB backend used in fossildb is optimized for fast reads/writes and is known to create fairly large amounts of files, which are then periodically compacted. If the default ratio is too aggressive for your file system, it is possible to configure it with a RocksDB options file, see scalableminds/fossildb#20.
We have not really experimented with different compaction strategies, so I can’t recommend specific settings.
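For illustration, such an options file could make compaction kick in earlier, e.g. (a sketch only; the values are examples rather than recommendations, and how fossildb picks the file up is described in the linked issue):

# Write a RocksDB options file; level0_file_num_compaction_trigger and
# max_background_compactions are standard RocksDB options. The [Version]
# section should match your RocksDB version.
cat > rocksdb-options.ini <<'EOF'
[Version]
  rocksdb_version=6.14.5
  options_file_version=1.1
[DBOptions]
  max_background_compactions=4
[CFOptions "default"]
  level0_file_num_compaction_trigger=2
EOF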
There is also a specific compactAllData grpc request you can send to fossildb, but it is probably not trivial to set up a grpc client.
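If you do want to try it, grpcurl saves you from writing a client, e.g. (a sketch, assuming fossildb listens on port 7155 as in your logs and has grpc server reflection enabled; the fully qualified method name below is a guess, so take it from the list output):

# Discover the exposed grpc services (requires server reflection):
grpcurl -plaintext localhost:7155 list
# Then trigger compaction (method name is an assumption):
grpcurl -plaintext -d '{}' localhost:7155 com.scalableminds.fossildb.proto.FossilDB/CompactAllData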
I’m closing this issue as your tracing can now be found. If you experience further problems, feel free to reopen this or a new issue.

@fm3 fm3 closed this as completed Sep 14, 2021