Not able to export via bin/kc.sh #28384
Comments
By spinning up another pod that is similar to the main one (but with
Thanks for reporting this issue, but as this is reported against an older and unsupported release we are not able to evaluate the issue. Please verify with the nightly build or the latest release. If the issue can be reproduced in the nightly build or latest release, add a comment with additional information; otherwise this issue will be automatically closed within 14 days.
Huh, I guess the Helm chart lags a version. Will do - thanks!
Recreated the application with the following Argo definition:
And re-verified behaviour:
That is correct, there is no stop command in the kc scripts. A kill -9 should stop the server.
@scubbo that logic is part of your version - there's no explicit option, it happens implicitly for import/export. Your problem is related to jgroups, not http serving. Your export command is likely picking up the cache / cache-stack from environment variables, and thus is trying to join the cache cluster. Can you try forcing local cache instead:
The keycloak operator sets --cache=local for a similar reason when doing an import. cc @vmuzikar if there is never a scenario where joining the cache makes sense with import / export, we could force this handling on the server side.
Looks like there's no such option:
Sorry I did not double check that before making the suggestion. The operator is actually using an environment variable, so try
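The operator's variable is elided above; assuming the standard KC_ prefix mapping for the cache option (the realm name and output path below are placeholders, not values from this thread), the suggestion was presumably along these lines:

```shell
# KC_CACHE maps to the --cache option; realm/file values are hypothetical
KC_CACHE=local /opt/bitnami/keycloak/bin/kc.sh export \
  --realm my-realm --file /tmp/my-realm.json
```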
Still no luck:
this export is still behaving as if cache=ispn and cache-stack=kubernetes are set. Can you confirm if those are otherwise being set via your env or are they already built into the optimized image? I should be able to reproduce what you are seeing from there.
Hi, I have tested the current nightly build and the export fails as below.
The current latest does allow exporting the master realm.
However, exporting any other realm causes an error.
Can someone kindly help please?
@suthagarht please open a separate issue / discussion. None of that is related to this issue.
@scubbo Tried this scenario with cache=ispn baked into the image - at that point even using an environment variable for the cache option does not work. The reason this works for the operator is that the import test cases are working against an unoptimized image, or for the custom image test - that's running in a separate pod so there's no port contention. I think I've seen other import/export issues due to picking up configuration from an optimized image, namely #15898. Some possible workarounds:
At the very least, import / export behavior with a shared optimized image is very confusing. @keycloak/cloud-native let's use this issue to come up with at least better documentation about the expectations, if not continue to refine the behavior to make it more sensible. ~priority-important
Appreciate your attention on this! Sorry I haven't been more responsive, day-job things 😞 I'll be able to give this more attention over the weekend. Here's a dump of
I didn't explicitly set either of those, but there are plenty of preset default env variables via the Helm chart:
I'll poke around more this evening or tomorrow.
What should be happening here is that KEYCLOAK_CACHE_STACK and KEYCLOAK_CACHE_TYPE are getting converted to arguments to a kc.sh build (Keycloak will only look for environment variables with the KC_ prefix). Then that optimized image will have those values persisted, and will be picked up by your export command. There is logic that will check for build time option changes when running an import / export - but only if it's a cli option, which cache is not. Also moving forward we're talking about making cache options runtime, instead of build time - see #27549 cc @mabartos - which would help here as well. See if the workarounds from the previous comment make sense. |
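As a minimal illustration of the prefix rule described above (the values and the Bitnami entrypoint behavior are as described in this thread; the commands themselves are hypothetical sketches):

```shell
# Read by Keycloak itself: KC_ prefix maps directly to the cache-stack option
KC_CACHE_STACK=kubernetes bin/kc.sh start

# NOT read by Keycloak directly; the Helm chart / Bitnami entrypoint
# translates KEYCLOAK_CACHE_STACK into build arguments before kc.sh runs,
# so the value ends up persisted in the optimized image
KEYCLOAK_CACHE_STACK=kubernetes bin/kc.sh start
```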
Can you expand on what "make more ports available" means? I'm guessing this means:
I was thinking that later versions were using a port range for the base port, but now I'm not so sure after looking at the infinispan configuration.
Yes adding -Djgroups.bind.port=7900 should do that.
Yes adding -Djgroups.bind.port=7900 is picked up via the export command, the log shows:
Which is expected to be at the base port + 50000. But at least for my local tests on main, which is using the default cache-stack, it is not actually connecting to port 7900 (nor the server to 7800).
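For reference, one way to pass that system property to the export run is via JAVA_OPTS_APPEND, which kc.sh appends to the JVM options; the realm and file names here are placeholders:

```shell
# Move the jgroups bind port away from the running server's port;
# realm/file values are hypothetical
JAVA_OPTS_APPEND="-Djgroups.bind.port=7900" \
  /opt/bitnami/keycloak/bin/kc.sh export --realm my-realm --file /tmp/my-realm.json
```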
OK - with an Argo app defined like so:
, the required parameter was passed to the
I.e. same behaviour as before (except it's now failing to bind to the new port). I had a look at how the image itself is built. The Dockerfile is here, and the entrypoint pulls a bunch of environment variables (including
Although I didn't have high hopes for this working (because, if I'm understanding your comments correctly, this value is getting set at image-creation time and not at runtime) I also tried explicitly setting the
So - while I am technically unblocked (I can export realms, by setting |
(Just for completeness - I tested removing the |
Unfortunately I spoke too soon - I'm able to carry out a manual command to export a realm, but not able to automate it:
I'm not sure if that qualifies as a Bitnami issue or a Keycloak one. The absence of |
I was able to get a separate Pod to successfully run a Realm Export by adding a |
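The exact override is elided above, but a separate export Pod generally looks something like the following hypothetical Job (the image tag, realm, paths, and names are all assumptions for illustration, not values from this thread):

```yaml
# Hypothetical one-off export Job; all names and values are placeholders
apiVersion: batch/v1
kind: Job
metadata:
  name: keycloak-realm-export
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: export
          image: bitnami/keycloak:23.0.7   # same image as the server pod
          # Override the entrypoint so the server itself never starts
          command: ["/opt/bitnami/keycloak/bin/kc.sh"]
          args: ["export", "--realm", "my-realm", "--file", "/export/my-realm.json"]
          env:
            - name: KC_CACHE      # force local cache so the job does not
              value: "local"      # try to join the jgroups cluster
          # ...plus the same DB env vars / volumes as the main deployment
```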
I'll try to address as much as I can from the preceding comments - let me know if I miss something.
Is there something you need to do in your environment to declare / allow for port usage? Unless I was really unlucky at picking an alternative port, I would expect that to work.
It's not using an explicit build. The start command will implicitly perform a build if needed. What I don't see at first glance is where KEYCLOAK_ env variables are being mapped to something that keycloak will use.
No, you would not want to do it this way because you don't want to use cache type local for the server. You can achieve the same thing by using the workaround of first performing a build, then doing your export. If you will not be restarting the keycloak process from within the same pod (which looks to be the case), you shouldn't need to worry about optimizing the image again back to what would be used by a regular start command. But of course using a separate pod makes this even clearer.
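A sketch of that build-then-export sequence inside the pod (paths and the realm name are placeholders):

```shell
# Re-run the build step with a local cache, then export;
# paths/realm values are hypothetical
cd /opt/bitnami/keycloak
bin/kc.sh build --cache=local
bin/kc.sh export --realm my-realm --file /tmp/my-realm.json
# No rebuild back to the original settings is needed if the server process
# in this pod won't be restarted (a fresh pod starts from the original image)
```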
Yes, this is effectively the operator flow for import when using a separate Pod. It is given all of the environment variables and command line arguments as would be given to the regular server process, but is scrubbed of the cache related settings so that local can be forced. What we should use this issue for is to rationalize exactly what import/export infers it needs - similar to what was done for the http serving port, it doesn't seem like it should ever need cache access nor the management interface.
Great, thanks. Sorry, I know I dumped a lot of information on you while investigating - appreciate the helpful responses and explanations (especially the explanation that I should have been trying to do an export from a separate Pod in the first place anyway 🙃)!
@vmuzikar @mabartos I have this marked as important because it seems there are issues with users trying to figure out what needs to be supplied to import / export. There are several cases now:
It seems like we should generally avoid any workarounds that require using an explicit build, or even relying upon an implicit build, when dealing with import / export - that gets pretty confusing within a single installation. Do you want to stick with important or lessen this?
@shawkins Do you see this as a bug? From my perspective I perceive it as a documentation issue (we could be more clear on the behavior), or a UX enhancement (to improve the behavior to be more intuitive). |
@vmuzikar I would go as far as to call it a bug if you need to adjust anything build time related - build vs run time is already confusing enough for our users; having to consider that you may need two separate build time profiles is even worse. Unless there are other possible build time options in play, getting the cache properties to runtime would make this more of a doc / UX issue.
The most prominent build time options relevant to export are the
Before reporting an issue
Area
dist/quarkus
Describe the bug
Replica of this issue - I'm not able to export realms via `kc.sh` on Keycloak 23.0.7 - the error reports an inability to bind to a port (full stack trace in "Actual behavior").
Version
23.0.7
Regression
Expected behavior
Realm information exported to output file
Actual behavior
Logs and stack trace as below:
How to Reproduce?
Install Keycloak as an ArgoCD Application via a Helm chart:
Create a realm via the UI.
Open a shell on the container, and attempt to export:
Anything else?
I note from here that "as part of [import/export, Keycloak] spins up the http server, so when that port is blocked (e.g. keycloak runs or another app has a binding) this could cause the error you are observing" - that is, that you can't export from a pod with a running server. That's unexpected! (though, in fairness, documented here)
The workaround mentioned in that discussion (`QUARKUS_HTTP_HOST_ENABLED=false ./bin/kc.sh ...`) also failed with a similar-looking stack trace.

I also can't see any way of temporarily stopping the server - `kill -9 <pid>` does not seem to do anything, and `./kc.sh --help` does not list a `stop` command.

It looks like the option introduced here is not in my version of `kc.sh` - I don't see such an option from `./kc.sh export --help`. In case I was misunderstanding that file, I also tried prepending `QUARKUS_HTTP_SERVER_ENABLED=false` or `HTTP_SERVER_ENABLED=false` to the `kc.sh` command (i.e. `QUARKUS_HTTP_SERVER_ENABLED=false /opt/bitnami/keycloak/bin/kc.sh ...`), to no avail.

`./kc.sh --version` gives: