Conversation
Fault injection testing shall not access the internet, so neither the https_proxy nor the no_proxy variable is required.
Signed-off-by: Tomasz Gromadzki <tomasz.gromadzki@hpe.com>
janekmi left a comment:
I understand this is only a temporary solution until we remove the proxy from our pipelines altogether, right?
```groovy
println "DAOS_NO_PROXY: $DAOS_NO_PROXY"
    ret_str += ' --build-arg DAOS_NO_PROXY="' + env.DAOS_NO_PROXY + '"'
}
if (!(env.STAGE_NAME?.contains('Fault injection'))) {
```
We should use an argument passed to dockerBuildArgs instead of triggering action based upon stage names.
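A minimal sketch of that idea, assuming a hypothetical `add_proxy` map parameter on `dockerBuildArgs` (the parameter name and signature are illustrative, not the actual library API): callers opt out of the proxy build args explicitly instead of the helper inspecting `env.STAGE_NAME`.

```groovy
// Hypothetical sketch: an explicit opt-out flag replaces the stage-name check.
// 'add_proxy' is an assumed parameter name, not the real dockerBuildArgs API.
String dockerBuildArgs(Map config = [:]) {
    boolean addProxy = config.containsKey('add_proxy') ? config['add_proxy'] : true
    String ret_str = ''
    if (addProxy && env.DAOS_NO_PROXY) {
        ret_str += ' --build-arg DAOS_NO_PROXY="' + env.DAOS_NO_PROXY + '"'
    }
    return ret_str
}

// A fault injection stage would then call:
//   dockerBuildArgs(add_proxy: false)
```

This keeps the knowledge of which stages need network access at the call site, where it belongs, rather than encoding it in string matching on stage names.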
I think this is going to be a phased approach: the first phase just gets things working again, and the next works on a plan, starting with documenting how external resources should be accessed in a portable and maintainable manner.
The first option to look for is whether there is an artifact server to be used, and we need to make the use of Artifactory or Nexus transparent. That means we also need to set it up for proxy mirroring, which gives the best performance and reliability for our lab.
It needs to be optional so that the code will work outside of our lab, and the code exposed to the user should hide whether Artifactory or Nexus is used. This will take a bit of planning and refactoring to roll in correctly.
The next option is for the code to check whether a proxy is configured to be used. That gets more complicated because the "noproxy" environment variable is unreliable. This option supports smaller-volume shops.
For both of these options we want to look for global configuration files that can be set up early in the script or node setup. Use of proxy environment variables should be avoided in production if at all possible: they have too broad a scope, and "noproxy" may not be sufficient to work around that issue. On a desktop system a script could look up the proxy server to do this configuration portably, because that is how the proxy for a web browser is typically configured. It is unknown whether this discovery would work in the lab.
And in the final case, where no specific proxy or artifact server is known to be in use, such as on a GitHub-hosted runner, we fall back to assuming direct access to the public Internet.
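The fallback order described in this comment could be sketched roughly as follows. This is a planning sketch only: `resolveExternalAccess` and `readGlobalConfig` are hypothetical names, and the real design would need to decide where the global configuration actually lives.

```groovy
// Hypothetical resolution order for reaching external resources:
//   1. a configured artifact server (Artifactory/Nexus, hidden behind one name)
//   2. a proxy found in global configuration files (not environment variables)
//   3. direct access to the public Internet (e.g. GitHub-hosted runners)
Map resolveExternalAccess() {
    // readGlobalConfig is an assumed helper that consults node/script-level
    // configuration files set up early in node provisioning.
    String artifactUrl = readGlobalConfig('artifact_server')
    if (artifactUrl) {
        return [mode: 'artifact-server', url: artifactUrl]
    }
    String proxyUrl = readGlobalConfig('proxy')
    if (proxyUrl) {
        return [mode: 'proxy', url: proxyUrl]
    }
    return [mode: 'direct', url: null]
}
```

Keeping the lookup behind a single function is what makes the Artifactory-vs-Nexus choice, and the proxy details, invisible to the rest of the pipeline code.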
Looks like the downstream testing still fails fault injection: https://jenkins-3.daos.hpc.amslabs.hpecorp.net/job/daos-stack/job/daos/job/ci-daos-stack-pipeline-lib-PR-508-master/1/pipeline-overview/?selected-node=854 ??
Yes, this is expected until daos-stack/daos#18024 lands on master. I do not see any other way to fix the problem we have.
SRE-3737 ci: HOT FIX Fault Injection without proxy
Fault injection testing doesn't access the internet, so you don't need to use the https_proxy or no_proxy variables.
This PR is verified by daos-stack/daos#18024.
This PR must land before daos-stack/daos#18024.