
Single Process Docker Image & Server Adjustments #88

Merged
meksor merged 26 commits into main from enhancement/single-process-dkrimg on May 16, 2024

Conversation

@meksor (Contributor) commented May 3, 2024

Adjusts the Docker image to use only a single operating system process. Since replication will be handled by Docker Compose or Kubernetes in the future, installing and running gunicorn to manage multiple server processes is not necessary anymore...

If a user really wants to run multiple workers in a single pod, they can start the server with: ixmp4 server start --workers=4
More info in DEVELOPING.md.

Updates the Docker workflow to publish to registry.iiasa.ac.at as well.
Also fixes the broken server logging config.
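
For illustration, a rough sketch of what a single-process `ixmp4 server start --workers=N` boils down to: uvicorn can spawn worker processes itself, so gunicorn is no longer needed in the image. The import string, port, and log config path below are placeholders, not the actual ixmp4 CLI code.

```python
# Sketch only: not the real ixmp4 CLI code; names and paths are placeholders.
import uvicorn


def start(host: str = "0.0.0.0", port: int = 9000, workers: int = 1) -> None:
    uvicorn.run(
        "ixmp4.server:app",        # assumed import string for the FastAPI app
        host=host,
        port=port,
        workers=workers,           # > 1 only if you really want several workers in one pod
        log_config="conf/logging/server.conf",  # placeholder path to the logging config
    )


if __name__ == "__main__":
    start(workers=4)  # equivalent in spirit to `ixmp4 server start --workers=4`
```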

@meksor meksor changed the title Single Process Docker Image Single Process Docker Image & Server Adjustments May 3, 2024
@glatterf42 (Member) left a comment


I don't have much experience with Docker, but these changes look good to me since the tests pass.
One quick question about .github/workflows/push_tag_event.json: what is this used for?

However, I'm pretty sure I don't like reverting to server.conf. In #83, we migrated to server.json because dictConfig has been the recommended approach since Python 3.2. As far as I understood back then, the main issue was that the fileConfig format requires settings for the root logger, which override the regular settings of downstream loggers.
In theory, both formats provide the same functionality, so whatever fixes you made in this server.conf file can also be made in server.json.
Finally, if you don't like the format of server.json, the dictionary in there can be created in whichever way you like, even as part of a code file, so we don't need to keep this particular format/file around. However, we should use the dictConfig method to avoid overwriting the root logger. Whichever solution we choose, we might want to streamline the other logging/config files as well.
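
For reference, a minimal illustration of the dictConfig approach described above (not the project's actual server.json contents): a named logger is configured without having to define a root logger, which is the part the fileConfig format forces on you.

```python
# Minimal illustration, not the project's actual logging config.
import logging
import logging.config

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "simple": {"format": "%(asctime)s %(levelname)s %(name)s: %(message)s"},
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "simple"},
    },
    "loggers": {
        # Only the "ixmp4" logger is configured; no "root" section is required,
        # unlike the fileConfig format, which insists on a [logger_root] section.
        "ixmp4": {"handlers": ["console"], "level": "INFO", "propagate": False},
    },
}

logging.config.dictConfig(LOGGING)
logging.getLogger("ixmp4").info("dictConfig example is working")
```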

@meksor (Contributor, Author) commented May 14, 2024

push_tag_event.json is a test payload used to test the GitHub action locally...

server.conf: I reverted this because you broke it! I don't know exactly what the pros and cons of these formats are, but saying "it's recommended" and then migrating it incorrectly is not good. If we deploy now, we basically lose all logging functionality on the server...

If you spend a little time looking into how this file is used, you will see that I never explicitly set up the logger with this file. I just pass the path to uvicorn and it does the rest. This is straight from an old version of the FastAPI docs, which was released well after Python 3.2, so it can't be that bad.

> In theory, both formats provide the same functionality, so whatever fixes you made in this server.conf file can also be made in server.json.

If this is correct, please enlighten me, because I could not make it work or find any more information on this.
I'm open to a PR that does this correctly, but I have other priorities with this project, and this PR is specifically just trying to fix what got broken... Plus, we need it for the Kubernetes deployment...

@glatterf42 (Member)

I'm sorry, it was not my intention to break anything here. All tests were passing and the ones downstream passed (again) too, so I thought the migration was fine (which might speak to the need for another test to avoid this in the future). Could you please point me to the error messages in question? Otherwise, it's hard to pinpoint what went wrong.
From uvicorn's docs, I understand that using a .json file will cause dictConfig() to process that file, which is what I was aiming for.

@glatterf42 (Member)

uvicorn.run() passes its log_config parameter all the way to its Config.__init__() function, where self.configure_logging() is called. This function looks like it expects a dict, a .json file, or a .yaml/.yml file, all of which are processed via dictConfig(). All other formats get passed to fileConfig(), but only after pointing readers to this note: https://docs.python.org/3/library/logging.config.html#configuration-file-format
So my first suspicion would be, if logging is missing from the server loggers, that the log_level is not set (correctly). This setting might only affect uvicorn's own loggers, though.
As a final note for now, maybe uvicorn's default logging config can shed light on why ours doesn't work.
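
Paraphrased and simplified, the dispatch described above looks roughly like this (see uvicorn's source for the exact code):

```python
# Simplified paraphrase of uvicorn's Config.configure_logging() dispatch,
# not a verbatim copy of uvicorn's source.
import json
import logging.config


def configure_logging(log_config) -> None:
    if isinstance(log_config, dict):
        logging.config.dictConfig(log_config)            # already a dict
    elif str(log_config).endswith(".json"):
        with open(log_config) as file:
            logging.config.dictConfig(json.load(file))   # JSON -> dictConfig
    elif str(log_config).endswith((".yaml", ".yml")):
        import yaml                                      # only needed for YAML configs
        with open(log_config) as file:
            logging.config.dictConfig(yaml.safe_load(file))
    else:
        # Everything else (e.g. server.conf) goes to the INI-style fileConfig,
        # which requires a [logger_root] section and evaluates handler args.
        logging.config.fileConfig(log_config, disable_existing_loggers=False)
```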

@glatterf42 (Member)

Wait, could it be that I simply forgot to adapt server.py? I remember I was confused why the server.json file was not used anywhere. And it looks to me like we still pass ...server.conf to uvicorn.run(), while this file does not exist anymore. Let me try to push a commit.

* Pass this to uvicorn.run, too
@meksor (Contributor, Author) commented May 14, 2024

No. If you do that, the server will write to three files called something along the lines of 'os.getenv' in the wrong location.
The problem is that we need to figure out how the 'args' handler parameter is migrated properly. I have sunk two hours into this and not found a solution, so let's please not do it in this PR.
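
To make the pitfall concrete (the handler and env var names below are made up, not the project's actual config): fileConfig evaluates the handler args string, so an os.getenv(...) expression actually runs, while dictConfig treats values literally, which is how files named like 'os.getenv(...)' end up on disk.

```python
# Illustration of the migration pitfall, not the project's actual config files.
# The env var name "ACCESS_LOG_FILE" is invented for this example.
import logging.config

# fileConfig (.conf) evaluates the "args" string, so this works:
#   [handler_access]
#   class=FileHandler
#   args=(os.getenv("ACCESS_LOG_FILE", "access.log"),)

# dictConfig (.json) does not evaluate anything; the same expression written as
# a JSON string becomes the literal file name in the current working directory:
broken = {
    "version": 1,
    "handlers": {
        "access": {
            "class": "logging.FileHandler",
            "filename": 'os.getenv("ACCESS_LOG_FILE", "access.log")',
        },
    },
    "root": {"handlers": ["access"], "level": "INFO"},
}
# logging.config.dictConfig(broken)  # would create a file literally named os.getenv(...)
```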

@meksor (Contributor, Author) commented May 14, 2024

Regarding tests: yeah, there should probably be more logging tests, but I feel this is a big task and will probably mean that we often have to change these tests when we change code, so I'm not sure how best to go about this...
As an intermediate solution, one could add a simple test that calls the ixmp4 server start command and checks whether the server logs exist. But I also think this is out of scope for this PR.
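
A rough sketch of that intermediate test idea; the "IXMP4_LOG_DIR" variable, the log file pattern, and the timing are assumptions, not taken from the repository.

```python
# Sketch only: start the server via the CLI, wait briefly, check that log files
# appear. Env var name and *.log pattern are assumptions.
import os
import subprocess
import time
from pathlib import Path


def test_server_start_creates_log_files(tmp_path: Path) -> None:
    env = dict(os.environ, IXMP4_LOG_DIR=str(tmp_path))  # hypothetical env var
    proc = subprocess.Popen(["ixmp4", "server", "start"], env=env)
    try:
        time.sleep(5)  # give uvicorn time to boot and configure logging
        assert list(tmp_path.glob("*.log")), "expected at least one server log file"
    finally:
        proc.terminate()
        proc.wait(timeout=10)
```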

@meksor (Contributor, Author) commented May 14, 2024

Just saw you reverted the reversion here. If you now execute ixmp4 server start, you will see what I mean.

@glatterf42 (Member)

Thanks, I see now what you mean about files in the wrong location. I'd like one more try to fix it, though: I've moved the reading of the environment variables to conf/__init__ and read these variables in server.json now. This is based on this answer and some docs examples.
Locally, the logs are now again created where I would expect them; please confirm this on your system :)
We could also create an __init__ file in conf/logging and import the variables from there to keep the common __init__ clean. However, the whole way we go about these variables is sort of strange now, I think: we create an instance of Settings, which has a log_dir. From that, we store access_file etc. as environment variables, which we can then read from another location. It feels like this should be possible in a more elegant way, but I'm not entirely sure how we would go about it.
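
One plausible way to wire this up (the PR's actual mechanism may differ, and the module path and attribute names are assumptions): conf/__init__.py resolves the paths once, and server.json points at the resulting module attributes via dictConfig's ext:// resolver.

```python
# Sketch of ixmp4/conf/__init__.py, not the PR's actual code. The Settings
# stand-in, the env var name, and the attribute names are assumptions.
import os
from pathlib import Path


class Settings:  # stand-in for the project's real Settings class
    log_dir: Path = Path(os.getenv("IXMP4_LOG_DIR", "./log"))


settings = Settings()
access_file = str(settings.log_dir / "access.log")  # referenced from server.json

# server.json could then point at this attribute via dictConfig's external
# object resolver (shown as a comment to keep the example in one file):
#   "handlers": {
#     "access": {"class": "logging.FileHandler", "filename": "ext://ixmp4.conf.access_file"}
#   }
```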

@meksor (Contributor, Author) commented May 15, 2024

This stops working if someone swaps the initialization order of Settings and those variables; can we put this in the Settings class directly to avoid that?
Looks good though, thanks!

@glatterf42 (Member)

Do you mean like this?
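
If it helps, a minimal sketch of what "putting it in the Settings class directly" could look like, assuming a pydantic-settings style Settings class; field names and defaults are illustrative, and the commit in the PR may have done this differently (e.g. without the env-var overrides).

```python
# Minimal sketch, assuming a pydantic-settings style Settings class; not the
# PR's actual code.
from pathlib import Path

from pydantic import model_validator
from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    log_dir: Path = Path("./log")
    access_file: Path | None = None
    debug_file: Path | None = None

    @model_validator(mode="after")
    def derive_log_files(self) -> "Settings":
        # Derive the log file paths from log_dir unless set explicitly, so the
        # values cannot go stale if the initialization order ever changes.
        if self.access_file is None:
            self.access_file = self.log_dir / "access.log"
        if self.debug_file is None:
            self.debug_file = self.log_dir / "debug.log"
        return self
```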

@meksor (Contributor, Author) commented May 16, 2024

Hm, now you patched out the ability to set these via env variables... No biggie IMO, I don't think this is actually used.

@glatterf42 (Member) left a comment


Huh, I thought that was just there as a workaround to update the values in the config files. If we still need it, we can of course set both the env vars and Settings.access_file etc. If not, I think this is good to go.

@meksor meksor merged commit 599ba10 into main May 16, 2024
9 checks passed
@meksor meksor deleted the enhancement/single-process-dkrimg branch May 16, 2024 12:45