Describe the bug
Periscope cannot generate a blob name for uploads to storage when it is deployed in a namespace other than `aks-periscope`. I'm not sure if this is a known and accepted limitation, but I couldn't find it documented and it tripped me up.
The reason for the hard-coded namespace dependency is:
- We need to generate a blob with a timestamp that is unique to the deployment/run, but shared between all pods.
- The method for getting this timestamp is to list all pods in the `aks-periscope` namespace and take the creation timestamp of the last one.
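The logic described above can be sketched roughly as follows. This is a hypothetical illustration (Periscope itself is written in Go; the function name and the exact blob-name format derivation are assumptions based on the example name shown later in this issue):

```python
from datetime import datetime, timezone

def blob_name_from_pod_timestamps(creation_timestamps):
    """Hypothetical sketch of the current behaviour: take the creation
    timestamp of the most recently created pod in the (hard-coded)
    namespace and turn it into a blob name like '2022-03-28T21-26-02Z'.
    Colons are replaced with hyphens to keep the name path-safe."""
    latest = max(creation_timestamps)
    return latest.strftime("%Y-%m-%dT%H-%M-%SZ")

# Simulated pod creation timestamps from a namespace listing:
pods = [
    datetime(2022, 3, 28, 21, 25, 40, tzinfo=timezone.utc),
    datetime(2022, 3, 28, 21, 26, 2, tzinfo=timezone.utc),
]
print(blob_name_from_pod_timestamps(pods))  # → 2022-03-28T21-26-02Z
```

If the namespace listing returns nothing (because Periscope is running elsewhere), there is no timestamp to derive, which would explain the empty blob name.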
Aside from tying us to a particular namespace, this is vulnerable to timing inconsistencies if an additional pod is created after another pod starts executing this code path. It also prevents us from addressing an issue raised in #157 (being unable to re-run Periscope for an existing DaemonSet). So we should probably be looking for a different approach anyway.
A bit of careful planning might enable us to address both this and #157 together. If the timestamp can be passed to periscope from something external to the pod (like the contents of a mounted file that can be watched), it could solve all of the above as well as giving us a potential means to trigger further 'runs' of an existing DaemonSet.
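As a minimal sketch of that idea (all names here are hypothetical, and the file path would be whatever the mounted volume provides), each pod could read a shared run ID from a mounted file and re-trigger collection whenever the file changes:

```python
import os
import time

def read_run_id(path):
    """Hypothetical: a run ID (or timestamp) written to a mounted file
    by the deploying tool. Every pod reads the same value, so the blob
    name no longer depends on listing pods in a fixed namespace."""
    with open(path) as f:
        return f.read().strip()

def wait_for_new_run(path, last_mtime, poll_seconds=1.0):
    """Poll the file's modification time; when it changes, a new 'run'
    has been triggered, so an existing DaemonSet could collect again
    without being redeployed (the scenario raised in #157)."""
    while True:
        mtime = os.stat(path).st_mtime
        if mtime != last_mtime:
            return read_run_id(path), mtime
        time.sleep(poll_seconds)
```

A real implementation would likely use an inotify-style watch rather than polling, but the shape of the solution is the same: the run identity comes from outside the pod.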
To Reproduce
1. Deploy using the usual yaml resource specification, but change the namespace to (e.g.) `aks-periscope-test`.
2. Wait for the pods to be running.
3. Check storage in the Azure Portal. Assuming everything else was set up correctly, the logs should have uploaded successfully, but the blob name will be `<no name>`.
Expected behavior
The blob name should be something like `2022-03-28T21-26-02Z`.
Screenshots
Desktop (please complete the following information):
N/A
Additional context
N/A
This is related to an intermittent bug (which occurs more frequently on newer Windows clusters), in which the node folders in the output file structure are divided between more than one timestamped root container.
By using the DIAGNOSTIC_RUN_ID variable now being supplied to Periscope by consuming tools, we can fix both of these bugs.
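The fix could look something like this sketch (hypothetical function name; the fallback behaviour is an assumption, not a statement of what Periscope currently does):

```python
import os
from datetime import datetime, timezone

def resolve_run_id():
    """Sketch of the proposed fix: prefer the DIAGNOSTIC_RUN_ID value
    supplied by the consuming tool, so every pod (in any namespace)
    derives the same blob name for the same run. A fallback is only
    used when the variable is absent."""
    run_id = os.environ.get("DIAGNOSTIC_RUN_ID")
    if run_id:
        return run_id
    # Assumed fallback: derive a name from the current UTC time.
    return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H-%M-%SZ")
```

Because every pod sees the same `DIAGNOSTIC_RUN_ID`, this also prevents node folders from being split across multiple timestamped root containers.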