
Allow building with thin module Docker containers #69

Open · mikegerber opened this issue Feb 25, 2020 · 28 comments
Labels: enhancement (New feature or request)

@mikegerber (Contributor)

In #68, @bertsky wrote:

But the real problem is that TF2 dependencies are lurking everywhere, so we will very soon have the unacceptable state that no catch-all venv (satisfying both TF1 and TF2 modules) is possible anymore. By then, a new solution needs to be in place, which (at least partially) isolates venvs from each other again.

@mikegerber (Contributor, Author)

I think OCR-D has to decide whether it wants a "have it all" environment. If so, either separate venvs per processor or some kind of Docker setup could be the solution.

I'd go for a Docker solution because that also solves dependency issues outside of the Python world. And it is actually possible to do this without over-engineering too much. For example, I use a one-liner that just runs a container with the right UID and working directory in place.

@mikegerber (Contributor, Author)

```sh
docker run --rm -t --user `id -u`:`id -g` --mount type=bind,src="$(pwd)",target=/data ocrd_example_processor
```

@bertsky (Collaborator) commented Feb 26, 2020

I think OCR-D has to decide whether it wants a "have it all" environment. If so, either separate venvs per processor or some kind of Docker setup could be the solution.

As long as we don't have REST integration, all-in-one is all we have. Isolation within that approach will always depend on demand and interdependencies. Whether we then delegate to local venvs or thin Docker containers is indeed another degree of freedom (which ideally we should also leave as a user choice here).

I'd go for a Docker solution because that also solves dependency issues outside of the Python world.

Yes, but it may also increase space and time cost unnecessarily under some circumstances (depending on which modules are enabled and what platform the host runs on). So I'd really like that to ultimately be the user's decision.

And it is actually possible to do this without over-engineering too much. For example, I use a one-liner that just runs a container with the right UID and working directory in place.

Yes, of course: once we wrap the processor CLIs in another shell script layer, we can make that a local venv or a Docker run again. And for the latter we only need to ensure we pass on all the arguments. (Perhaps we could even avoid spinning up a new container with each invocation by using --attach somehow.)
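For illustration, such a wrapper could look roughly like this (paths, names and the image's entry point are hypothetical, just to sketch the venv-or-Docker delegation):

```sh
#!/bin/bash
# Hypothetical wrapper installed as e.g. venv/bin/ocrd-example-processor:
# delegate to a dedicated sub-venv if present, otherwise to a thin Docker container.
SUBVENV="$HOME/.local/sub-venvs/example"   # hypothetical sub-venv location
if [ -x "$SUBVENV/bin/ocrd-example-processor" ]; then
  exec "$SUBVENV/bin/ocrd-example-processor" "$@"
else
  # assumes the image exposes the processor CLI as its command
  exec docker run --rm -t --user "$(id -u):$(id -g)" \
    --mount type=bind,src="$PWD",target=/data \
    ocrd_example_processor ocrd-example-processor "$@"
fi
```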

The only thing that troubles me with delegating to thin module Docker containers is that we more or less surrender version control. It's really difficult to do that with Docker images solely based on digest numbers. But we could of course petition module providers to use the docker tag mechanism in a certain way.

@kba (Member) commented Jun 25, 2020

Unfortunately, we now have a situation where both tensorflow (TF2) and tensorflow-gpu (TF1) can be installed side by side, so scripts won't fail at startup anymore; instead, the customary import tensorflow as tf will lead to runtime errors if the script (or its dependencies!) expects the TF1 API but actually gets the TF2 API.

After discussing with @bertsky I see no alternative to isolated venvs. Implementing this in the Makefile is tedious but doable. However for our Docker builds, we need to decide on a mechanism to create an entry-point "venv broker" script that activates the right environment or similar. I'm stumped on how we can sensibly support ocrd process in this scenario.

But the situation right now, with runtime errors instead of startup failures, is unacceptable, so I see no alternative to package isolation. If anyone else does see an alternative, I'd be happy to hear it.

@stweil (Collaborator) commented Jun 25, 2020

Are there plans to upgrade models and software to TF 2? I think it would help to get an overview of all processors which still use TF 1, with an estimate of whether and when they will run with TF 2 and who is responsible for that. And we should have all TF 1 based processors in their own group TF1_EXECUTABLES in the ocrd_all Makefile.

As soon as there is a separate group of TF 1 executables, the Makefile implementation could be straightforward by calling make recursively. We'd have the normal venv with all other processors. And we'd have a 2nd virtual environment, maybe simply a subdirectory of the other one, with the TF 1 processors. So all TF 1 processors would be in venv/tf1/bin. We'd have a generic executable venv/bin/tf1 which can be linked to venv/bin/processor_x and which simply runs venv/tf1/bin/processor_x, passing all arguments and nearly all environment variables unmodified. Only PATH and VIRTUAL_ENV would be fixed.
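A minimal sketch of that generic delegator could look like this (assuming the layout described above):

```sh
#!/bin/bash
# Sketch of the proposed venv/bin/tf1 delegator: symlinked as venv/bin/<processor_x>,
# it re-executes the same processor name from the nested TF1 venv,
# fixing only PATH and VIRTUAL_ENV and passing all arguments through unmodified.
venv="$(cd "$(dirname "$0")/.." && pwd)"
export VIRTUAL_ENV="$venv/tf1"
export PATH="$VIRTUAL_ENV/bin:$PATH"
exec "$VIRTUAL_ENV/bin/$(basename "$0")" "$@"
```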

@kba (Member) commented Jun 25, 2020

From the direct requirements:

| project | tf1 | tf2 |
| --- | --- | --- |
| cor-asv-ann | | |
| ocrd_anybaseocr | | |
| ocrd_calamari | | |
| ocrd_keraslm | | |
| sbb_textline_detector | | |

But we'll also have to check transitive dependencies.

@stweil (Collaborator) commented Jun 25, 2020

ocrd_pc_segmentation depends indirectly on tensorflow<2.1.0,>=2.0.0. Again I have the problem that there is no prebuilt TF2 in that range for Python 3.8: it is not available for Python 3.8, and it conflicts with tensorflow 2.2.0, which is available.

@stweil (Collaborator) commented Jun 25, 2020

Because of the close relationship between Python version(s) and the available prebuilt TensorFlow versions, we must be prepared that TF1 might require a different (= older) Python version than TF2. I already have that situation when I want to build on a recent Linux distribution, and because of a bug in Debian / Ubuntu it is currently not possible to create a Python 3.7 venv when Python 3.8 is also installed. This is of course not relevant for the old Ubuntu which is currently our reference platform.

@stweil (Collaborator) commented Jun 25, 2020

I'm stumped on how we can sensibly support ocrd process in this scenario.

I did not look into the details of the code. Does it spawn processes for the individual steps (then it should work), or do all steps run in the same Python process?

@bertsky (Collaborator) commented Jun 25, 2020

@stweil

Are there plans to upgrade models and software to TF 2? I think it would help to get an overview of all processors which still use TF 1, with an estimate of whether and when they will run with TF 2 and who is responsible for that

The README is up-to-date w.r.t. that.

TF2 migration can be less or more effort, depending on what part of the API the module (or its dependency) relies on. TF offers an upgrade script to rewrite the code to use tf.compat.v1 where necessary, but that is not always enough (or fully automatic). True (native) TF2 migration also involves model retraining.

Plus TF 2.2 brings even more breaking changes (as you observed for ocrd_pc_segmentation already).

Since we have modules like ocrd_calamari (which depends on calamari_ocr which depends on TF1 with no visible migration activity) and ocrd_segment (which has a tool that depends on maskrcnn which depends on TF1 and is not maintained at the moment), this cannot be forced. (And forcing it would probably mean a lot of work for cor-asv-ann and ocrd_keraslm, too.)

As soon as there is a separate group of TF 1 executables, the Makefile implementation could be straightforward by calling make recursively. We'd have the normal venv with all other processors. And we'd have a 2nd virtual environment, maybe simply a subdirectory of the other one, with the TF 1 processors. So all TF 1 processors would be in venv/tf1/bin. We'd have a generic executable venv/bin/tf1 which can be linked to venv/bin/processor_x and which simply runs venv/tf1/bin/processor_x, passing all arguments and nearly all environment variables unmodified. Only PATH and VIRTUAL_ENV would be fixed.

This would work, but why make an exception for TF? We have seen other conflicting dependencies already, and we know that pip is as bad as it gets in resolving these. See above for broader recipes, including delegation to thin Docker containers (possibly even behind REST in the future).

@kba

Implementing this in the Makefile is tedious but doable. However for our Docker builds, we need to decide on a mechanism to create an entry-point "venv broker" script that activates the right environment or similar. I'm stumped on how we can sensibly support ocrd process in this scenario.

The recipe is simple (and has already been discussed above): the top-level PATH directory (which will be a tiny venv merely for ocrd CLI, or /usr/local/bin in a Docker image) contains only shell scripts (created automatically at install-time) which then delegate to the right venv or Docker container. (Which modules are grouped together can be organised centrally in the Makefile.) Why shouldn't we allow venvs inside Docker images?

@kba (Member) commented Jun 26, 2020

Why shouldn't we allow venvs inside Docker images?

I was not clear: That can also be done with a mechanism like the one you describe. No reason not to use venv in a Docker image, on the contrary.

I'm stumped on how we can sensibly support ocrd process in this scenario.

After sleeping on it, I am indeed unstumped. I misremembered ocrd process to use run_processor, but it obviously uses run_cli, so the processors are run in their own Python interpreter (or bash or compiled code), which can be set at install time as we're discussing.

@stweil (Collaborator) commented Jun 26, 2020

Oh, what a mess! I now tried an installation of ocrd_all with a second venv for sbb_textline_detector as an example of a TF1 processor. The build worked, no obvious conflicts, tensorflow-gpu 1.15.3 correctly installed for sbb_textline_detector. But it also installed Keras 2.4.3. And here is the result when running sbb_textline_detector --help:

ImportError: Keras requires TensorFlow 2.2 or higher. Install TensorFlow via `pip install tensorflow`

@mikegerber, I am afraid that requirements.txt should be more precise regarding the required Keras version.

@bertsky (Collaborator) commented Jun 26, 2020

But it also installed Keras 2.4.3. And here is the result when running sbb_textline_detector --help:

ImportError: Keras requires TensorFlow 2.2 or higher. Install TensorFlow via `pip install tensorflow`

@mikegerber, I am afraid that requirements.txt should be more precise regarding the required Keras version.

@stweil please report to the respective repo (yes, probably keras < 2.4, maybe even keras < 2.3)!

@stweil (Collaborator) commented Jun 26, 2020

please report to the respective repo

See qurator-spk/sbb_textline_detection#34

@stweil (Collaborator) commented Jun 26, 2020

@bertsky, ocrd_keraslm must also limit the Keras version. It currently uses keras >= 2.3.1.

@mikegerber (Contributor, Author)

The only thing that troubles me with delegating to thin module Docker containers is that we more or less surrender version control. It's really difficult to do that with Docker images solely based on digest numbers. But we could of course petition module providers to use the docker tag mechanism in a certain way.

(Sorry for the late reply to this, I'm reviewing open issues.) I'm trying to understand this and I think you're saying that going from, for example, ocrd_calamari==1.2.3 to the Docker container ocrd_calamari b32098d882d6 would surrender version control? If so, then yes, tagging that container with version 1.2.3 is absolutely required and would have to be done by CI/CD. The same goes for a hypothetical AppImage (my Q&D example just sets the version from pip output).
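For example (image and version names purely illustrative), the CI job could simply tag the build:

```sh
# illustrative: tag the module image with its release version during CI
docker build -t ocrd_calamari:1.2.3 .
docker tag ocrd_calamari:1.2.3 ocrd_calamari:latest
```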

@bertsky (Collaborator) commented Aug 14, 2020

The only thing that troubles me with delegating to thin module Docker containers is that we more or less surrender version control. It's really difficult to do that with Docker images solely based on digest numbers. But we could of course petition module providers to use the docker tag mechanism in a certain way.

I'm trying to understand this and I think you're saying that going from, for example, ocrd_calamari==1.2.3 to the Docker container ocrd_calamari b32098d882d6 would surrender version control?

This is not about containers, but images. And digest numbers are the only reliable identification that Docker images get unconditionally (without extra steps at build time). But then digest numbers would have to be mapped to the git submodule commits that ocrd_all already uses, which seems unmanageable to me. So practically, I guess everyone would just try to get the most recent image and pray they never have to go backwards.

If so, then yes, tagging that container with version 1.2.3 is absolutely required and would have to be done by CI/CD.

ocrd_all is more fine-grained than version numbers / release tags – it manages the submodules' commits. So if you replace version with commit then yes, that's what I mean. All Docker builds need to automatically include their git revisions.

With that in place, and with some script foo, we could selectively exchange native installations with thin Docker containers per module as needed – to the point where all modules are Dockerized, so the top level (be it native or Docker itself) becomes thin itself.

@bertsky (Collaborator) commented Aug 21, 2020

Anyway, with #118 ff. the original issue (different TF requirements) has been solved. The topic has since moved on to how we integrate/compose thin module Docker images (where available) as an alternative, without giving up version control.

I therefore suggest renaming the issue to Allow building with thin module Docker containers.

@kba kba changed the title Resolve the conflicting requirements issues Allow building with thin module Docker containers Oct 2, 2020
@kba kba added the enhancement New feature or request label Oct 2, 2020
@bertsky (Collaborator) commented Oct 8, 2020

And it is actually possible to do this without over-engineering too much. For example, I use a one-liner that just runs a container with the right UID and working directory in place.

Yes, of course: once we wrap the processor CLIs in another shell script layer, we can make that a local venv or a Docker run again. And for the latter we only need to ensure we pass on all the arguments.

Since we now have a script mechanism in place delegating to sub-venvs, we could start delegating to thin Docker containers. But we have to consider that we would be calling Docker containers from Docker containers. It's doable, but needs to be accounted for – in particular, the existing mount points and bind mounts need to be passed on. (The situation is different for @mikegerber's solution IIUC, because its outer layer is native, not Docker.)

ocrd_all is more fine-grained than version numbers / release tags – it manages the submodules' commits. So if you replace version with commit then yes, that's what I mean. All Docker builds need to automatically include their git revisions.

With that in place, and with some script foo, we could selectively exchange native installations with thin Docker containers per module as needed – to the point where all modules are Dockerized, so the top level (be it native or Docker itself) becomes thin itself.

So maybe we should start by devising a scheme for including the git version number into all thin/module images. ocrd/all already uses these labels:

  • org.label-schema.vcs-ref
  • org.label-schema.vcs-url
  • org.label-schema.build-date

Let's extend that to all existing submodule images, i.e.

  • ocrd/core
  • ocrd/olena
  • ocrd/im6convert
  • ocrd/pagetopdf
  • ocrd/tesserocr
  • ocrd/fileformat
  • ocrd/anybaseocr
  • ocrd/cis (currently flobar/ocrd_cis)
  • ocrd/pc_segmentation (currently ls6uniwue/ocrd_pixelclassifier_segmentation)
  • ocrd/sbb_textline_detector (currently not on Dockerhub, as this is built differently than in the module itself)

Then we can follow up with a PR here that inserts the docker (pull and) run for the revision of the respective submodule into the CLI delegator script.
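For illustration, a module build that records its revision via the labels listed above could look like this (image name taken from the list; values computed at build time):

```sh
docker build \
  --label org.label-schema.vcs-ref="$(git rev-parse HEAD)" \
  --label org.label-schema.vcs-url="$(git remote get-url origin)" \
  --label org.label-schema.build-date="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  -t ocrd/tesserocr .
```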

@mikegerber (Contributor, Author)

(The situation is different for @mikegerber's solution IIUC, because its outer layer is native, not Docker.)

Yes, the containers are intended to provide dependency-isolated processors to the native/host-side workflow script ("the outer layer").

@bertsky (Collaborator) commented Feb 21, 2022

Besides spinning up multiple CLI-only containers (somehow) sharing volumes, we could also integrate containers as true network services, but merely by installing a thin OpenSSH server layer on top of each module's CLI offerings. This was done for ocrd/all in https://github.com/bertsky/ocrd_controller, but the same recipe could be used for each module.

@bertsky (Collaborator) commented Jun 21, 2023

Besides spinning up multiple CLI-only containers (somehow) sharing volumes, we could also integrate containers as true network services, but merely by installing a thin OpenSSH server layer on top of each module's CLI offerings. This was done for ocrd/all in https://github.com/bertsky/ocrd_controller, but the same recipe could be used for each module.

Not necessary anymore: we now have the possibility to build network services for module processors, either as Processing Workers or as Processor Servers. Both options could be predefined in a docker-compose.yml, each using the same (module) image but differing entry points (i.e. command attribute):

  • ocrd network processing-worker EXECUTABLE-NAME --database MONGO-URL --queue RABBITMQ-URL, or
  • ocrd network processor-server EXECUTABLE-NAME --database MONGO-URL --address HOST:PORT

Thus, in ocrd_all, make all could then aggregate and generate two global docker-compose.yml files (one for each mode, i.e. docker-compose.processing-server.yml and docker-compose.processor-servers.yml) which simply delegate to all the services in the modules, e.g.

```yaml
...
  ocrd-tesserocr-recognize:
    extends:
      file: ocrd_tesserocr/docker-compose.yml
      service: ocrd-tesserocr-recognize
    command: ocrd network processing-worker ocrd-tesserocr-recognize --database $MONGO_URL --queue $RABBITMQ_URL
    depends_on:
      - ocrd-processing-server
      - ocrd-mongo-db
      - ocrd-rabbit-mq
...
```

```yaml
...
  ocrd-tesserocr-recognize:
    extends:
      file: ocrd_tesserocr/docker-compose.yml
      service: ocrd-tesserocr-recognize
    command: ocrd network processor-server ocrd-tesserocr-recognize --database $MONGO_URL --address ocrd-tesserocr-recognize:80
    depends_on:
      - ocrd-mongo-db
...
```

where configuration (i.e. setting environment variables) can happen via .env mechanism or shell.
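For instance, a minimal .env next to the generated Compose files might look like this (host names match the service names above, ports are the usual defaults; actual values are deployment-specific):

```sh
MONGO_URL=mongodb://ocrd-mongo-db:27017
RABBITMQ_URL=amqp://ocrd-rabbit-mq:5672
```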

Now, what's left is generating CLI entry points that delegate to each respective REST endpoint:

```make
ifneq ($(findstring ocrd_tesserocr, $(OCRD_MODULES)),)
OCRD_TESSEROCR := $(BIN)/ocrd-tesserocr-binarize
OCRD_TESSEROCR += $(BIN)/ocrd-tesserocr-crop
OCRD_TESSEROCR += $(BIN)/ocrd-tesserocr-deskew
OCRD_TESSEROCR += $(BIN)/ocrd-tesserocr-recognize
OCRD_TESSEROCR += $(BIN)/ocrd-tesserocr-segment-line
OCRD_TESSEROCR += $(BIN)/ocrd-tesserocr-segment-region
OCRD_TESSEROCR += $(BIN)/ocrd-tesserocr-segment-word
OCRD_EXECUTABLES += $(OCRD_TESSEROCR)
$(OCRD_TESSEROCR): ocrd_tesserocr
	$(file >$@,$(call delegator,$(@F)))
	chmod +x $@
endif
```

...

```make
define delegator
#!/bin/bash
ocrd network client processing process $(1) "$$@"
endef
```

So this would create executable files like venv/bin/ocrd-tesserocr-recognize behaving like the true CLI but really just passing arguments to the server (that must have been started before via docker compose start).

This whole approach could replace both the sub-venvs and pip install rules. Of course, it requires that each module provides a Dockerfile (and docker-compose.yml, perhaps even prebuilt images on the registry).

I am not sure I have the full picture of what we should do, though. Thoughts @kba ?

@kba (Member) commented Jun 21, 2023

I am not sure I have the full picture of what we should do, though. Thoughts @kba ?

This looks fairly complete to me. I'm currently trying out the updated ocrd_network in core and will post a proof of concept based on docker-compose like you propose.

@joschrew

How would you address the problem of getting the workspace to be processed to the Processing Worker? Currently, when running ocrd in Docker, the current directory is volume-mounted into /data of the container.
But when core is installed in a venv and the processor calls are delegated to Processing Workers, the container is already running. Because of that, you cannot invoke the processors on workspaces from anywhere you want (as you could with ocrd installed in a venv), because the volume mounting must already have been done.

@bertsky (Collaborator) commented Jun 29, 2023

@joschrew
I don't understand your logic here. You could easily bind-mount an NFS share or other distributed storage on the Processing Server, the Processing Worker and the client side, so IMO accessibility is not the issue.

But we do have to talk about efficiency, in fact we already did – last time was when conceptualising the METS Server. There in particular, I laid out the existing (in the sense of currently available) implicit transfer model (backed by ad-hoc Workspace.download_file calls of the image URLs) vs. some future (in the sense of still unimplemented) explicit transfer model (prior, queued download to fast local storage). In my understanding, the latter is what is assumed in the /workspace part of https://github.com/OCR-D/ocrd-webapi-implementation, i.e. data ingest at workspace definition (but I could be wrong and IIRC there was also a discussion whether it should be allowed to just clone from a METS instead of uploading a complete ocrd-zip).

So these questions unfortunately hinge on a lot of unfinished business:

@bertsky (Collaborator) commented Aug 1, 2024

So these questions unfortunately hinge on a lot of unfinished business:

By now, everything needed has been completed:

– implemented and already in productive use.

– some issues remain with Resolver.workspace_from_url (and cloning on the command line), and bashlib processors do not have the automatic download yet; but as long as there are workarounds, implicit (in-processor, URL-based) or explicit (before and after processing, via OCRD-ZIP or file copying) data transfers are feasible, so transparent network storage is not strictly required (though the setup may not be trivial).

  • how to do data pipelining (page by page or workspace by workspace) in the Processing Server and Workflow Server

– the Processing Server now schedules jobs and manages their interdependencies, both for page ranges and page_wise mode; the Processing Worker or Processor Server can be multiscalar and distributed.

We still need some more client CLIs to reduce complexity (like avoiding curl scripting), IMO.

However, https://github.com/joschrew/workflow-endpoint-usage-example already contains a complete, flexibly configurable deployment, generating Compose files and using ocrd-all-tool.json – but it is still based on ocrd/all (fat image) for the services. So AFAICS we now need

  • delegating to public module images (thin containers) to be pulled (or locally checked out Dockerfiles to be built) instead
  • generating processor CLI scripts that mimic the old non-networked processor CLIs under the network regime
  • integrating this into ocrd_all itself; since most logic is already in Python which is more flexible and easier to maintain than make and bash code, the Makefile should be stripped down to just a few rules calling the Python scripts; a make all should basically
    1. clone all modules (for their ocrd-tool.json, optionally for their Dockerfile and complete build context), i.e. make modules as it is now
    2. generate the ocrd-all-tool.json by joining all ocrd-tool.json files
      not sure how to deal with ocrd-all-module-dir.json (since resmgr cannot currently handle remote processor installations)
    3. generate an accumulated docker-compose.yml (by generating per-module configs and server+db configs and including everything)
    4. generate and install non-native CLIs for all processors that are just network clients
    5. perhaps even docker compose up -d the whole thing (or reserve that for a separate target like make start)

@bertsky (Collaborator) commented Oct 10, 2024

@joschrew @kba

not sure how to deal with ocrd-all-module-dir.json (since resmgr cannot currently handle remote processor installations)

I gave this – and the larger subject of volumes and resource locations – some thought:

Current situation

In core we provide 4 dynamically configurable locations for every processor:

  • cwd: current working directory, at the time of processor invocation
  • data: $XDG_DATA_HOME/ocrd-resources where we default the env variable to $HOME/.local/share
  • system: /usr/local/share/ocrd-resources
  • module: module-dependent (usually Python distribution directory of the pkg)

At runtime, Processor.resolve_resource will look up resource names in that order, unless the processor's (ocrd-tool-configured) resource_locations restricts the allowable locations. In ocrd_tesserocr, it's just module due to Tesseract's single tessdata directory.
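In shell terms, the lookup order amounts to roughly the following (a rough sketch only; core's actual implementation is in Python, and the per-executable subpaths are illustrative):

```sh
# try cwd, data, system, then module location, in that order
resolve_resource () {
  local exe="$1" name="$2" dir
  for dir in "$PWD" \
             "${XDG_DATA_HOME:-$HOME/.local/share}/ocrd-resources/$exe" \
             "/usr/local/share/ocrd-resources/$exe" \
             "$("$exe" --dump-module-dir)"; do
    if [ -e "$dir/$name" ]; then echo "$dir/$name"; return 0; fi
  done
  return 1
}
```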

Because of that, resmgr has to acknowledge the same resource locations for each processor. That entails looking up the resource locations (by calling the processor runtime with --dump-json) and the module directory (by calling the processor runtime with --dump-module-dir) – both for listing what is installed and for downloading.

To short-circuit the dynamic calls (which have significant latency, esp. if resmgr must do it for *), we came up with the idea of precomputing both kinds of information, hence ocrd-all-tool.json and ocrd-all-module-dir.json. If these files can be found in ocrd's distribution package directory (i.e. ocrd.__path__[0]), they will be used for lookup. (And of course, if the respective tool is missing, this falls back to dynamic calls.)

Now, in a native installation of ocrd_all, we simply install all tools and

  1. concatenate all the ocrd-tool.json files (see ocrd-all-tool.py, and the jq sketch below)
  2. make a single invocation of the processor to dump and store the module dir (see ocrd-all-module-dir.py)
  3. copy the two resulting files into the distribution package directory of ocrd to "install" them (overwriting the default in core which naturally only covers ocrd-dummy)
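Step 1 could be approximated with jq along these lines (the submodule layout and the exact output format of ocrd-all-tool.json are assumptions here):

```sh
# merge every submodule's "tools" section into a single ocrd-all-tool.json
jq -s 'map(.tools) | add' */ocrd-tool.json > ocrd-all-tool.json
```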

In our Docker rules for the fat container, we also did this as part of the build recipe. But we then added a few tricks making it easier for users to have persistent volumes for their models (including both the pre-installed ones and any user-downloaded ones):

  • We preset XDG_DATA_HOME=/usr/local/share (conflating the data and system locations and avoiding ambiguous HOME) and XDG_CONFIG_HOME=/usr/local/share/ocrd-resources for the user database (i.e. the same path where system and data will be looked up now).
  • We moved and symlinked /usr/local/share/ocrd-resources to /models as an abbreviation. That means to OCR-D the path is still XDG_DATA_HOME = data = system location, but actually (in the filesystem) it is under /models, which the user gets encouraged (via Setup Guide and Readme) to put into a named volume whenever they run the image (see the sketch below).
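In shell terms, the effect of these two tricks is roughly the following (a sketch of the outcome, not the literal Dockerfile content):

```sh
export XDG_DATA_HOME=/usr/local/share
export XDG_CONFIG_HOME=/usr/local/share/ocrd-resources
# move the resources out of the way and keep the original path as a symlink
mv /usr/local/share/ocrd-resources /models
ln -s /models /usr/local/share/ocrd-resources
```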

This covers all cases, including ocrd_tesserocr, which additionally uses the same trick to hide away its module location /usr/local/share/tessdata, conflating it with /usr/local/share/ocrd-resources/ocrd-tesserocr-recognize = data = system location. (Since for ocrd-tesserocr resmgr must download to the module location only, this simplifies persistence, because it's the same single path to mount. All other modules with resources in the module location are effectively "read-only", i.e. it suffices to download to the data location, so the module location does not need to be conflated/exposed.)

Future solution

For slim containers, there will no longer be a single Dockerfile, and we cannot expect all modules to agree on the same "trick" alias in their local Dockerfile. Moreover, since services have to be defined in a docker-compose.yml that is generated (from the available git modules and/or the scenario config) anyway, we can also generate the named volume path and environment variables for them as we like. So we don't need the /models alias – the correct full path for the data=system location can be passed in and volume-mounted flexibly. For ocrd-tesserocr, the trick conflating module=data location is still needed, but it is already in that slim Dockerfile anyway.

Now, what does that mean for ocrd-all-tool.json and ocrd-all-module-dir.json installed into core?

First of all, we do not have a single distribution target anymore, but a bunch of images, each with their own ocrd installation inside. As they are built from their local Dockerfiles, they cannot know anything about the other modules at build time. So where we really need to see all modules at once (i.e. in the Processing Server / deployer), we have to bind-mount the generated (concatenated) ocrd-all-tool.json into the container's ocrd. For the individual module service containers, however, it would make sense to prepackage their respective ocrd-tool.json as ocrd-all-tool.json. That would have to happen in each individual Dockerfile (as a final step):

```dockerfile
RUN cat ocrd-tool.json | jq .tools[] > `python -c "import ocrd; print(ocrd.__path__[0])"`/ocrd-all-tool.json
```

Finally, for ocrd-all-module-dir.json, the question is how and when OcrdResourceManager should get used:

  1. On the ocrd resmgr CLI, which could be run in a native installation of core or via one of the Docker images as a single-use container: Here we cannot even see the installed modules (as they are spread across a multitude of images), so there is no way for dynamic lookup of the module location. And even if we did provide some precomputed location, how could we be sure that it is mounted at the same place as the respective module's container? So IMO the solution must be to avoid (or prohibit) using the module location entirely. For resmgr download, we must only use data location and the user/admin (by way of their operational knowledge) must make sure this matches the path actually used by the respective module's allowed resource-location. (E.g. in ocrd-tesserocr, we must mount their module location's named volume as data location in resmgr.) Worse, for resmgr list-installed we cannot even see the pre-installed resources (unless they are exposed to the shared volume as in ocrd-tesserocr's case). So the only way to get a correct answer from list-installed is running resmgr on the very module image that resources are queried for. Which by itself defeats use of *.

  2. On some new server API for resmgr. For example, we could add /discovery/processor/resources (with methods GET for listing and describing, POST for uploading from the client or the registered URL) to the Processing Server. This in turn would delegate to its deployed modules (for lack of a more precise word): if using Processor Servers, these should have corresponding endpoints to delegate to. But for Processing Workers, I'm afraid one would have to send requests about resources (describe, list-installed, download) via the queues...

So it really is complicated, and we don't have a good concept how to query and install processor resources in networked ocrd_all.

@kba argued elsewhere that his workaround is to ocrd resmgr download ocrd-processor "*" for every ocrd-processor in every image once, prior to startup. But some processors have many huge models (think ocrd-detectron2-segment), and that would still make it hard for unregistered resources like newly trained or unpublished models (as is often the case for text recognition). So that's both too much (in terms of required space) and too little (in terms of available models) at the same time.

Opinions?

@bertsky (Collaborator) commented Nov 12, 2024

Elaborating a bit on option 2: of course, the (generated) docker-compose.yml for each module could also provide an additional server entry point – a simple REST API wrapper for resmgr CLI. Its (generated) volume and variable config would have to match the respective Processing Worker (or Processor Server) to be used. But the local resmgr would not need to "know" anything beyond what it can see in its thin container – a local ocrd-all-tool.json and ocrd-all-module-dir.json precomputed for the processors of that module at build time, plus the filesystem in that container and mounted volumes.

In addition, to get the same central resmgr user experience (for all processor executables at the same time), one would still need

  • either a single server (with resmgr-like endpoints or even providing some /discovery/processor/resources) which delegates to the individual resmgr servers,
  • or an intelligent resmgr client doing the same.

Regardless, crucially, this central component needs to know about all the deployed resmgr services – essentially holding a mapping from processor executables to module resmgr server host-port pairs. This could be generated along with the docker-compose.yml (in a new format like ocrd-all-module-dir.json), or the latter could even be parsed directly.
