fix(test-tooling): fabric AIO image docker in docker support #279
Labels: bug (Something isn't working) · Fabric · good-first-issue (Good for newcomers) · Hacktoberfest (Hacktoberfest participants are welcome to take a stab at issues marked with this label) · help wanted (Extra attention is needed)
petermetz added the bug, good-first-issue, help wanted and Fabric labels on Sep 5, 2020
petermetz added the Hacktoberfest label on Oct 13, 2020
petermetz added a commit to petermetz/cacti that referenced this issue on Nov 18, 2020:

Fixes hyperledger-cacti#279

Note: Although this image now uses Docker in Docker (DinD), it does not completely solve the problem of randomized ports. As it turns out, there is a second issue with the randomized ports: Fabric's service discovery has no port-mapping feature, so we are not able to specify the association between the public and private ports, where the public (host) ports are the randomized ones and the private ports are the ones returned by the service discovery algorithm.

For example, if you start the new AIO image with randomized ports, it will run one of the peers on port 7051 of the AIO container, and the host will map that to something random, typically somewhere in the 30000 to 40000 port range. When the Fabric connector (which runs on the host machine) instantiates a Fabric Gateway object of the Fabric Node SDK, it performs service discovery, which describes the peer as listening on port 7051. From the network of the host (where the Fabric connector is), that port is not correct, because the real one is the randomized host port described above.

What we need to solve this is a way to inject our own port mappings into the service discovery mechanism, so that the Fabric-connector-related tests can run in parallel and the CI can stop being flaky.

Because of the above, this commit also skips the test cases that depend on the Fabric AIO image until they can be fixed by refactoring the test cases and the test ledger class to work with the new AIO image. Those changes will require some method deletions and renames, which makes them quite big on their own, so they will go in a separate commit that will also re-activate the test cases currently being skipped by this change.

Signed-off-by: Peter Somogyvari <peter.somogyvari@accenture.com>
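The port-mapping gap described above can be sketched as a translation step between what service discovery reports and what the host can actually reach. This is a hypothetical illustration, not part of the Fabric Node SDK: `translateEndpoint` and `PortMap` are invented names for the missing hook.

```typescript
// Hypothetical sketch of the missing port-mapping hook: rewrite endpoints
// reported by Fabric service discovery into host-reachable endpoints.
// Assumes scheme-less "host:port" endpoint strings.

type PortMap = Map<number, number>; // container port -> randomized host port

function translateEndpoint(endpoint: string, portMap: PortMap): string {
  const [host, portStr] = endpoint.split(":");
  const containerPort = Number(portStr);
  const hostPort = portMap.get(containerPort);
  // Leave the endpoint unchanged when no mapping is known for its port.
  return hostPort === undefined ? endpoint : `${host}:${hostPort}`;
}
```

With a map of `7051 -> 32771`, the discovered endpoint `peer0.org1.example.com:7051` would be rewritten to `peer0.org1.example.com:32771` before the connector dials it; endpoints with unmapped ports pass through untouched.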
petermetz added a commit to petermetz/cacti that referenced this issue on Dec 1, 2020, with the same commit message as above.
petermetz added a commit to petermetz/cacti that referenced this issue on Dec 11, 2020, with the same commit message as above.
petermetz added a commit to petermetz/cacti that referenced this issue on Dec 14, 2020, with the same commit message as above.
petermetz added a commit to petermetz/cacti that referenced this issue on Dec 15, 2020, with the same commit message as above.
petermetz added a commit that referenced this issue on Dec 15, 2020 ("Fixes #279"), with the same commit message as above.
ryjones pushed a commit that referenced this issue on Feb 1, 2023: Bump miow from 0.2.1 to 0.2.2 in /core/relay
Describe the bug

Right now we are forced to bind to host ports in the Fabric AIO image to make it work. This was a temporary measure (we were short on time and decided to defer tackling DinD support).

Expected behavior

Multiple Fabric AIO image containers should be able to run in parallel (right now this is not possible because of the fixed host port bindings). Having DinD would also let us avoid binding to the host's docker.sock: the chaincode containers that the peer launches upon contract instantiation would be hosted within the AIO image, making the host port binding problem go away.
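Once ports are published randomly, a test on the host has to discover which host port a given container port landed on, for example by parsing the output of the real `docker port <container> 7051/tcp` command (which prints lines like `0.0.0.0:32771`). A minimal parsing sketch, assuming that output shape; `parseHostPort` is an invented helper name, not project code:

```typescript
// Hypothetical helper: extract the randomized host port from the output
// of `docker port <container> <port>/tcp`. Only the parsing is shown;
// actually invoking docker is left to the caller.
function parseHostPort(dockerPortOutput: string): number {
  // docker may print one line per address family, e.g.
  //   0.0.0.0:32771
  //   :::32771
  // Both map to the same host port, so the first line suffices.
  const firstLine = dockerPortOutput.trim().split("\n")[0];
  const port = Number(firstLine.split(":").pop());
  if (!Number.isInteger(port)) {
    throw new Error(`Unparseable docker port output: ${dockerPortOutput}`);
  }
  return port;
}
```

A test ledger class could use a helper like this to learn the host port of the peer's 7051 endpoint after starting the AIO container with randomized port publishing, letting several containers run side by side without colliding.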
The text was updated successfully, but these errors were encountered: