Error Message when running register-dids.sh script #857
Comments
|
The gp error is gone now after making the suggested changes in the script, but when I try to run docker compose I get the following message: WARN[0000] The "POSTGRESQL2_PORT" variable is not set. Defaulting to a blank string. |
Note: Our VM is hosted in a public cloud, we have provided internet access to the VM, and there is no firewall in place. |
This means that the .env file is missing. What does the output of the register-dids.sh script say? You are probably running into another failure, so the .env file is never created and you are missing all the properties. You can of course do what the script does manually.
|
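The manual fallback described above starts with confirming the file is actually there; a minimal pre-flight sketch, assuming you run it from the scripts folder (function name and message texts are illustrative, not from the repo):

```shell
# Minimal sketch: docker compose reads .env from the current directory,
# so the WARN about unset variables usually means the file is absent or empty.
check_env() {
  # $1: path to the env file, e.g. ./.env in the scripts folder
  if [ -f "$1" ] && [ -s "$1" ]; then
    echo "env ok"
  else
    echo "env missing"
  fi
}
check_env ".env"
```

If this prints "env missing", re-run ./register-dids.sh (or populate the file by hand) before trying docker compose up again.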
I get this output when I run register-dids.sh [mytestlinux@myssi scripts]$ ./register-dids.sh |
Do I need to set this seed manually in the .env file? Please guide me, I am a little confused. Also, let me know: after running docker compose up, will everything be functional or do I need to do some other configuration? |
Ok, this looks like a successful run, so no, you should not do anything else. Still, your output above looks like there is no .env file, but this can also happen if you run docker compose from outside the scripts folder, for example. How do you start the compose file, and from where? |
I am trying to run docker compose up from the scripts folder. |
Can you double check if the .env file is there by running |
After running this command I am getting the same error. One more thing: the .env file is hidden, so I had to use ls -la for it to show up. |
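The hidden-file behavior noted here is plain ls semantics and can be demonstrated in isolation (the temp directory is just for illustration):

```shell
# Dotfiles such as .env are skipped by plain ls; -a (as in ls -la) lists them.
dir=$(mktemp -d)
touch "$dir/.env"
ls "$dir"      # prints nothing
ls -a "$dir"   # lists . .. .env
```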
Adjusted documentation so that the default is not always recompiling the bpa Signed-off-by: Philipp Etschel <philipp@etschel.net>
Does the .env file have any content? What is the full log output? |
I see the message build finished now, however I also get this message at the end. Is it done? If yes, what are the next steps? => CANCELED [ghcr.io/hyperledger-labs/business-partner-agent:local internal] load metadata for docker.io/library/e 0.0s |
Did you start with |
I get the below mentioned message when I run the command you suggested:
[mylinux@myssi scripts]$ docker compose --env-file .env -f docker-compose.yml up logs
[mylinux@ssi scripts]$ docker compose logs -f |
Weird, there should not be any Dockerfile involved. Try the following:
|
I performed the aforementioned steps and then ran "docker compose --env-file .env -f docker-compose.yml up". I am getting this message now:
[mylinux@ssi scripts]$ docker compose --env-file .env -f docker-compose.yml up logs
[mylinux@ssi scripts]$ docker compose logs -f
One more question: I am trying to run the Business Partner Agent with docker-compose on our VM hosted in the cloud, or should I follow the steps of the Public VM Deployment instead? |
The next thing you can try is to remove all containers and images and then start up again, same as above:
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)
docker rmi $(docker images -q) |
Thanks for all the help. All the containers are running fine except the aries cloud agent, for which I am getting this error message:
Error Message: [main] TRACE AriesClient - aca-py not ready yet, reason: Failed to connect to myssilab-aries-agent/192.168.77.3:11708
We have opened all the ports, so I don't really know what the issue is. Additional logs:
at org.hyperledger.bpa.impl.StartupTasks$ApplicationEventListener$onServiceStartedEvent1$Intercepted.onApplicationEvent(Unknown Source) ~[business-partner-agent.jar:?]
Do we also need to set these values? WARN[0000] The "ACAPY_ADMIN_URL_API_KEY" variable is not set. Defaulting to a blank string. Please confirm |
The log output is not enough to tell what's wrong. The "aca-py not ready yet" message can happen a couple of times depending on the startup order of the containers and is not in itself problematic. If the above exception happens because of a timeout, this can have two reasons.
ACAPY_ADMIN_CONFIG=--admin-insecure-mode
# Production setup (change the key to a generated secret one)
#ACAPY_ADMIN_URL_API_KEY=change-me
#ACAPY_ADMIN_CONFIG=--admin-api-key ${ACAPY_ADMIN_URL_API_KEY}
So if line 91ff in your .env file looks like the above, 2 is not the reason and it is probably 1, and you have to check your logs for the aca-py output. If you see the following:
aca-py comes up, and I need the full BPA stack trace to see what is going on. If not, then I need the aca-py part. |
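For the production setup shown above, the placeholder has to be swapped for a generated secret; one hedged way to generate such a value (the 16-byte length and helper name are assumptions, not a BPA requirement):

```shell
# Generate a random 32-character hex string suitable for
# ACAPY_ADMIN_URL_API_KEY; any sufficiently random secret works.
gen_key() {
  head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n'
}
echo "ACAPY_ADMIN_URL_API_KEY=$(gen_key)"
```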
It is exactly like this in my .env file, as you mentioned:
ACAPY_ADMIN_CONFIG=--admin-insecure-mode
# Production setup (change the key to a generated secret one)
#ACAPY_ADMIN_URL_API_KEY=change-me
How do I see the entire logs? |
If you start with docker compose up, the log is available in your console |
I did this, but I don't see any error here |
[+] Running 5/5 Everything is running and I did not get any error. Do I need to check the logs of the aries agent by running docker logs mylab-aries-agent? |
The containers were not properly stopped: docker compose down |
All the containers are up and running, but I am still unable to connect |
What do you mean by that?
I cannot help you if you do not give me more context, as I have no clue what you are doing, sorry |
|
Also, when I try to run the curl command it says failed to connect. |
I recommend using BPA's UI for this, as the proof template API works differently: it is not based on the proof-request, it is a template that renders a proof-request later on. If you want to reverse engineer and do everything with Postman, see what the browser sends to the backend controller and go from there. |
Or check the swagger docs for the REST API, e.g. http://localhost:8080/swagger-ui/ |
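Whether that swagger URL answers at all can be probed from the VM first; a sketch assuming curl is installed and the BPA listens on localhost:8080 as in the URL above (the helper name is illustrative):

```shell
# Prints "reachable" if the URL answers with a non-error HTTP status,
# otherwise "unreachable" (connection refused, timeout, or HTTP error).
check_http() {
  if curl -sf -o /dev/null --max-time 5 "$1"; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}
check_http "http://localhost:8080/swagger-ui/"
```

If this prints "unreachable" from the VM itself, the problem is the container or port mapping, not any cloud firewall.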
How do I access BPA's UI? Is it the BPA_WEBHOOK_URL? |
When I am using the below mentioned proof template, I get a success response in Postman, but the attribute values come up blank in the wallet when trying to verify it. { When I am using this template, it doesn't work; I get an error message Template
I am also unable to access the front end. |
|
Yes, I did the same thing that you are mentioning here: after creating the proof template, I sent the proof request. Step 1: Created a Schema. I am following the whole process but still unable to figure out why blank values are showing up when verifying the credential, while all the previous steps work fine. |
Did you map the port? |
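For context, "mapping the port" refers to a ports entry on the service in docker-compose.yml; a hypothetical fragment (the service name bpa and port 8080 are assumptions, check your own compose file):

```yaml
services:
  bpa:
    ports:
      # host:container - without this line the service is only reachable
      # from inside the docker network, not from the VM or the internet
      - "8080:8080"
```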
What do you mean by that, and where? The app? The BPA? Be aware that there is a difference between a proof request and the presentation. |
When I say the values are coming up blank, I mean blank in the wallet. I know a proof request and a template are different. Proof Request Sample:
And I am using the above one to create the proof template. |
It totally depends on the wallet app how it displays the proof request, with value restrictions or not. The important part is what the wallet app responds with. So if you have two credentials, one with name=test and one with name=other, and you send a proof request with a value restriction name=test, and the wallet app selects the one with name=test, and the BPA receives name=test, then everything works as expected. |
Exactly, that is how it should work. Whenever I am verifying the credential, it is verifying the same credentials present in the wallet for the respective schema id and credential id. Is there anything else that I need to look into for this issue? |
The issue is resolved now. I found that the PostgreSQL DB port was not open: credentials were appearing in the wallet, but the data was not being persisted, hence the values were coming up blank in the wallet at the time of verifying the credentials. After opening the port and restarting the container, everything started working fine. Thanks for helping me every time to dig further into the issue and find the root cause. One more question: I find the Lissi wallet slower than the esatus wallet; which works better, Lissi or esatus? |
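The root cause found here (a closed PostgreSQL port) can be checked directly from the BPA host; a sketch using bash's /dev/tcp, with localhost and 5432 as placeholder host/port to adjust to your compose file:

```shell
# Prints "open" if a TCP connection to host:port succeeds, else "closed".
# Requires bash (/dev/tcp is a bash feature, not POSIX sh).
check_port() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}
check_port localhost 5432
```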
If you look at the app store entries you see that the Lissi wallet is being more actively maintained. In the end it pretty much depends on your use case and with whom you want to interact in your ecosystem. As of now there is no app that does it all. So far the esatus wallet has been a basic but reliable app for everything anoncreds and Indy ledger related. The Lissi app does a bit more. |
I am facing intermittent issues: it works sometimes and sometimes it doesn't. I have also noticed that it works after restarting the containers. Now we are getting this issue: Error Message: 06:50:13.991 [default-nioEventLoopGroup-1-2] ERROR AcaPyAuthFetcher - aca-py webhook authentication failed. Configured bpa.webhook.key: @_:xxxxxxxx33On, received x-api-key header: null |
This means you are running something very old, is this intentional? This was removed quite a while ago. It also means you have configured the BPA for webhook security but did not do the same for the acapy. You need to check if |
Yes, it is set. Here is the docker compose yml section for the aries agent:
|
Like I said, this is all very old stuff and has already been fixed a while back. I believe there was an issue with acapy where the api key got lost after some time, or after an exception (I don't remember exactly), and after a restart it was set again. You either have to upgrade, or turn off webhook authentication. As long as you do not expose BPA's webhook URL to the internet this should be ok. Otherwise upgrade, because no one will support the stack you are running. |
Okay, got it. If I understand you correctly, you are saying that the aries cloud agent version present in the GitHub repository is "image: bcgovimages/aries-cloudagent:py36-1.16-1_0.7.5" while my code has "image: bcgovimages/aries-cloudagent:py36-1.16-1_0.7.0". Possible solutions to fix this issue:
Is that correct? |
Like always, I have no clue what you are doing, and like I said, it looks like you are running something very old, so just bumping up one version without considering the rest will cause other issues. So you have two options: |
|
Leave everything as it is and just set BPA_WEBHOOK_KEY= in your .env file: this is already set. |
What version of the bpa are you running? Or which commit? |
@etschelp |
Then you are all set, because this version does not use webhooks any more and the exception above can physically not happen, because the code that logs it is gone. If you are still seeing this exception, my guess is that you have built your own bpa image locally and tagged it as edge. To be sure, you can use the latest tagged stable version: ghcr.io/hyperledger-labs/business-partner-agent-new:0.12.0, but from what I have seen above, your docker compose file and your config will then not match anymore, and you will have to migrate. |
Yeah, I know. The weird thing is that I only get the error at the time of receiving the proof presentation api, Unable to receive proof data, and in the logs I get the api key error message, because of which the entire flow is not running. The strange part is that after a restart of the containers everything works fine for a day, and then the next day the same issue comes back. |
Ok, from the top: yes, acapy had a bug that reset the api key after a time; if you restart, it will work for a while and then the key will be gone again. To fix that you will have to upgrade acapy. BUT the webhook api key should not be needed at all, because it is removed from the latest BPA version, so you should not see this at all. This means you are not only running an old acapy version but also an old BPA version. I already wrote tons of hints on how to debug and fix this with docker; you just have to scroll up in this very long dialog. |
Okay, let me scroll up and look for the solution you mentioned in our thread.
|
Are you saying that the temporary fix would be to restart the containers for now, and the permanent fix is to upgrade the acapy and BPA versions, is that correct? |
Awaiting your reply |
That's what I said |
Discussed in #856
Originally posted by msingh1304 February 7, 2023
Hello Team,
We have completed the pre-requisites required for the installation, but are now facing an issue when trying to run the script.
Pre-requisites completed:
VM has been configured on the cloud.
Installation of the below mentioned tools on the VM has already been completed:
o docker
o docker-compose
o git
Current Impediment/Blocker:
When trying to run the script “register-dids.sh” as per the docs, we are getting the below mentioned message. Could you please confirm what this gp is used for and which package is required to run it? We could not find it in the pre-requisite list document.
Error Message: /usr/bin/which: no gp in (/home/linux/.local/bin:/home/linux/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin)
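The "which: no gp" output above is just a missing-binary probe; the same style of check can be run by hand for the listed pre-requisites (the tool list mirrors the one above, with gp added only because the script's probe asks for it):

```shell
# Probe each required tool the same way `which` does; command -v is the
# portable equivalent and prints nothing to stderr when a tool is absent.
for tool in docker docker-compose git gp; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```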