848 move check feed listeners to async service #1245
This moves the listeners to an async service, removing the need for the `time.sleep` hack that was previously required after indexing a file.
To do this, I used an async service and added a new event type - a 'file indexed' event that the message_listener subscribes to.
Making this work required changing the RabbitMQ dependency from a `BlockingChannel` to an `AbstractChannel`. I tested that files trigger extractors when added/indexed, and that manual submission of datasets and files to extractors still works.
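The core change can be sketched with a minimal, dependency-free asyncio example (all names here are illustrative, not the actual code): instead of sleeping a fixed time after indexing, the indexer emits a "file indexed" event and an async listener reacts exactly when indexing finishes.

```python
import asyncio

# Hypothetical sketch of the event-driven flow replacing the time.sleep hack.
# In the real PR the transport is RabbitMQ; an asyncio.Queue stands in here.

async def index_file(file_id, queue):
    await asyncio.sleep(0)  # stand-in for the real indexing call
    # Emit the new 'file indexed' event once indexing is actually done.
    await queue.put({"event": "file_indexed", "file_id": file_id})

async def message_listener(queue, handled):
    msg = await queue.get()
    if msg["event"] == "file_indexed":
        handled.append(msg["file_id"])  # real code would call check_feed_listeners

async def main():
    queue = asyncio.Queue()
    handled = []
    await asyncio.gather(index_file("f1", queue), message_listener(queue, handled))
    return handled

print(asyncio.run(main()))  # prints ['f1']
```

The point of the pattern is that the listener blocks on the event rather than guessing how long indexing takes, which is why the fixed sleep is no longer needed.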
To test this:
1. Comment out the message_listener section in `docker-compose.dev.yml`.
2. Run the dependencies with `docker-dev.sh up` as usual.
3. Run the message_listener from PyCharm in debug mode.
4. Run the backend in debug mode, and run the frontend as usual.
Test the following:
- Create a dataset and add a file. This should create an event that the message_listener picks up when the file is indexed, which in turn should trigger check_feed_listeners and any extractors that should run on the file.
- To verify that everything extractor-related works as before, manually submit files and datasets to extractors.
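For the first check, the listener-side dispatch on the new event type might look roughly like this (check_feed_listeners is from this PR; the other names and the message shape are assumptions for illustration):

```python
# Hypothetical dispatch sketch: route incoming messages on event type and
# trigger the feed check for the new 'file indexed' event.

def check_feed_listeners(file_id):
    """Stand-in for the real feed check that submits matching extractors."""
    return f"checked feeds for {file_id}"

def handle_message(msg):
    # 'file_indexed' is the new event type added in this PR; anything else
    # falls through to the existing handlers (omitted here).
    if msg.get("event") == "file_indexed":
        return check_feed_listeners(msg["file_id"])
    return None

print(handle_message({"event": "file_indexed", "file_id": "abc123"}))
```

If the dataset-and-file test above works, you should see check_feed_listeners fire once per indexed file, with no delay loop in between.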