This repository has been archived by the owner on Jan 12, 2024. It is now read-only.
Describe the bug
Using `ValidateSequentialClassifier` with a model of length 0 (`SequentialModel(new ControlledRotation[0], new Double[0], 0.0)`) produces results that differ from using it with a model of length 1 that does nothing (`SequentialModel(ControlledRotation((0, new Int[0]), PauliY, 0), [0.0], 0.0)`). If I run it on a dataset that is classified perfectly by just encoding the data and measuring it without applying any gates in between (classes separated by the two diagonals x0 = x1 and x0 = -x1), the second model performs classification almost perfectly, but the first one gives a 0.58 error rate.
Expected behavior
I would expect an empty model to produce the same classification results as a model that has gates which don't modify the quantum state.
Additional context
I suspect this line: it computes the tolerance by dividing a number by the length of the model, which in this case is 0, and the resulting infinite tolerance can allow the state to be encoded as almost anything. I haven't had time to verify this, though. (Thanks @alexeib2 for helping me form this theory!) To fix it, we could set the tolerance to the numerator when the model is empty.
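To illustrate the theory, here is a minimal Python sketch of the suspected computation and the proposed guard. The function name and numerator value are hypothetical, not the library's actual code (the real implementation is in Q#/C#, where floating-point division by zero silently yields infinity rather than raising an error):

```python
import math

def encoding_tolerance(base_tolerance: float, n_gates: int) -> float:
    """Hypothetical reconstruction of the suspected tolerance computation.

    Dividing the base tolerance by the model length would give an
    unbounded tolerance for an empty model; the proposed fix falls
    back to the numerator itself in that case.
    """
    if n_gates == 0:
        # Proposed fix: use the numerator directly instead of
        # dividing by zero (which in .NET would produce +infinity,
        # letting the state encoding pass validation trivially).
        return base_tolerance
    return base_tolerance / n_gates

# With the guard, the tolerance stays finite even for an empty model:
print(encoding_tolerance(1e-3, 4))  # tolerance shared across 4 gates
print(encoding_tolerance(1e-3, 0))  # empty model: falls back to 1e-3
```

Under this guard, an empty model and a do-nothing model of length 1 would both validate the encoded state against a finite tolerance, which is consistent with the expected behavior described above.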
That would definitely do it, yeah. Thanks for raising this edge case; I'll make sure we fix the case where the model is entirely empty, as you say. Thanks!