This is a duplicate of m-labs/nmigen#333 that I've moved here since this seems to be the correct upstream repository.
When creating a Signal whose shape is an Enum-derived type, a decoder is automatically inferred: a built-in function that turns the Enum value and name into a nice human-readable string.
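To make the behavior concrete, here is a minimal plain-Python sketch of what such an inferred decoder does. The function name `make_enum_decoder` and the exact `NAME/value` output format are illustrative, not the library's actual internals:

```python
import enum

class Color(enum.Enum):
    RED = 0
    GREEN = 1
    BLUE = 2

def make_enum_decoder(enum_cls):
    """Return a decoder mapping a raw integer to 'NAME/value',
    roughly what the built-in inference produces (sketch only)."""
    def decode(value):
        try:
            member = enum_cls(value)
        except ValueError:
            return str(value)  # value outside the enum: show it raw
        return "{}/{}".format(member.name, member.value)
    return decode

decode = make_enum_decoder(Color)
print(decode(1))  # -> GREEN/1
print(decode(7))  # -> 7 (not a member)
```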
When defining a layout with a field's shape set to an Enum-derived type, though, this decoder inference doesn't happen. As far as I can see, this is because the Layout constructor casts every field's shape with Shape.cast, so the original Enum type is lost.
As a result, the Signal constructor receives an actual Shape object instead of an Enum, and the built-in Enum decoder can't be used.
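A small stand-in class illustrates where the information is lost. This `Shape` is not the library's implementation, just a sketch of the casting behavior described above:

```python
import enum

class Shape:
    """Tiny stand-in for the library's Shape class (illustrative)."""
    def __init__(self, width, signed=False):
        self.width = width
        self.signed = signed

    @classmethod
    def cast(cls, obj):
        if isinstance(obj, cls):
            return obj
        if isinstance(obj, type) and issubclass(obj, enum.Enum):
            # Width covers the largest member value; the Enum class
            # itself is discarded here -- this is the information loss.
            return cls(max(m.value for m in obj).bit_length())
        raise TypeError("cannot cast {!r} to a shape".format(obj))

class Color(enum.Enum):
    RED = 0
    GREEN = 1
    BLUE = 2

shape = Shape.cast(Color)
assert isinstance(shape, Shape) and shape.width == 2
# The kind of check Signal uses for decoder inference now fails:
assert not (isinstance(shape, type) and issubclass(shape, enum.Enum))
```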
I can think of a few ways to fix this, but wanted to hear the maintainers' thoughts to see if they have a preferred way of going about it.
My first idea is to not cast shapes in the Layout constructor, but instead check whether the user-provided type is a valid shape, probably by calling Shape.cast, discarding the value, and checking for an exception.
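Sketched against the same stand-in `Shape` from above (names like `check_shape` are hypothetical), the first idea would look roughly like this:

```python
import enum

class Shape:
    """Stand-in for the library's Shape (illustrative)."""
    def __init__(self, width):
        self.width = width

    @classmethod
    def cast(cls, obj):
        if isinstance(obj, cls):
            return obj
        if isinstance(obj, type) and issubclass(obj, enum.Enum):
            return cls(max(m.value for m in obj).bit_length())
        raise TypeError("cannot cast {!r} to a shape".format(obj))

def check_shape(shape):
    """Idea 1: validate by casting, but return the ORIGINAL object,
    so an Enum type survives all the way to the Signal constructor."""
    Shape.cast(shape)  # raises TypeError if invalid; result discarded
    return shape

class Color(enum.Enum):
    RED = 0
    GREEN = 1

assert check_shape(Color) is Color  # Enum type preserved
```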
The other idea is to add a field to Shape that stores the original type a given object was cast from. This would also require some extra logic in the Signal constructor: when checking for default decoders, it would have to check not only the shape's type but also the original type of cast Shape objects.
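The second idea could be sketched like this, again with a stand-in `Shape`; the field name `orig` and the helper `infer_decoder` are made up for illustration:

```python
import enum

class Shape:
    """Stand-in Shape carrying a hypothetical 'orig' field that
    remembers what it was cast from (field name is made up here)."""
    def __init__(self, width, orig=None):
        self.width = width
        self.orig = orig

    @classmethod
    def cast(cls, obj):
        if isinstance(obj, cls):
            return obj
        if isinstance(obj, type) and issubclass(obj, enum.Enum):
            return cls(max(m.value for m in obj).bit_length(), orig=obj)
        raise TypeError("cannot cast {!r} to a shape".format(obj))

def infer_decoder(shape):
    """Signal-side logic: look through a cast Shape to its origin
    before deciding whether the Enum decoder applies."""
    src = shape.orig if isinstance(shape, Shape) else shape
    if isinstance(src, type) and issubclass(src, enum.Enum):
        return lambda v: "{0.name}/{0.value}".format(src(v))
    return None

class Color(enum.Enum):
    RED = 0
    GREEN = 1

decoder = infer_decoder(Shape.cast(Color))
assert decoder is not None and decoder(1) == "GREEN/1"
```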
Curious to hear any thoughts on this.
Update: to test out the first idea, I simply removed the `shape = ` from line 50 of the above code, and it seems to behave as expected. If this is deemed an acceptable solution, I can create a PR.