Allow specification of custom default bit lengths #53
Here are two wacky ideas (not sure whether they are bad, good, or indifferent).

magic type alias

The default numeric type could be specified via a "special" type alias. Assume we had type aliases in our language (as I believe we did in OCaml land):
We could have a "special" type named
or something, which would result in the behavior we currently have. But in any scope (module, function, etc.) the programmer could shadow the existing
Here,

just enough width

One way to explain the current system is that it "promotes" static integer types to proper, dynamic integer types when they're assigned to a variable, and that the type system chooses For example:
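The "just enough width" rule amounts to computing the smallest bit count that can hold a constant. A minimal Python sketch of that calculation (illustrative only; `min_width` is a hypothetical helper, not part of the Fuse implementation):

```python
def min_width(value: int) -> int:
    """Smallest number of bits that can hold a non-negative constant."""
    if value < 0:
        raise ValueError("this sketch handles non-negative constants only")
    # int.bit_length() returns 0 for 0, so clamp to at least 1 bit.
    return max(1, value.bit_length())

# A `let x = c` would then give x the static width min_width(c):
print(min_width(0))    # 1
print(min_width(255))  # 8
print(min_width(256))  # 9
```

Under this rule, assigning 255 yields an 8-bit type, while 256 spills into 9 bits.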
The second one is quite easy to do. Is there a reason to prefer the first option over the second?
I think the latter makes the most sense? It seems nice and clean, and I can’t really think of any real problems with it. Let’s go for it!
Hi, I think variable bit lengths are great! I have some concerns about automatically inferring the sizes, though. These concerns are about enforcing bit width specification and making the automatic inference the default. First up, I agree with the concern @rachitnigam has about why 32 bits is the accepted default. I think a reasonable first take could be that the default bit width is byte aligned. But maybe 32 is chosen because it's a commonly accepted default range. I believe the idea from #46 is to reduce bit width to improve efficiency? I'm finding it a little hard to understand why automatic inference would help.
Also, we should certainly use the automatic inference mechanism for constants; I feel it's a perfect match there. Some additional notes:
More semantics questions:
Will
To clarify, I think @sa2257 means looking at every use of a variable x and inferring the lowest bit width that satisfies all uses. Something to keep in mind: bit width directly affects type checking and subtyping in the language, which makes it hard to treat bit width optimizations transparently. One can imagine an explicit version of the language with no subtyping, where everything is explicitly cast or moved into larger registers when needed for a computation with misaligned bit widths. This language might address @sa2257's concerns. Inference is just trying to alleviate the overhead of annotations, but it always needs to be sound and conservative with respect to the explicit language. That is to say, it should never allow a program that is not hardware realizable. @sa2257, can you write a full program for your suggestion? I’m not sure I fully understand how function-level type inference would work in that setting.
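The fully explicit, no-subtyping language described here could behave roughly as follows (an illustrative Python sketch; `Bits`, `cast`, and `check_add` are invented names, and the exact rejection policy is an assumption, not Fuse's checker):

```python
class Bits:
    """A type tagged with an explicit bit width, e.g. bit<32>."""
    def __init__(self, width: int):
        self.width = width
    def __repr__(self):
        return f"bit<{self.width}>"

def cast(v: Bits, width: int) -> Bits:
    # The explicit cast the programmer writes; nothing happens implicitly.
    return Bits(width)

def check_add(lhs: Bits, rhs: Bits) -> Bits:
    # No subtyping: operand widths must match exactly, or the program
    # is rejected and the programmer must insert an explicit cast.
    if lhs.width != rhs.width:
        raise TypeError(f"width mismatch: {lhs} + {rhs}; add an explicit cast")
    return Bits(lhs.width)

a, b = Bits(10), Bits(32)
print(check_add(cast(a, 32), b))  # bit<32>
```

Inference would then be sound as long as every program it accepts elaborates into this explicit form.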
Equivalent programs with explicit types
The short answer to “why infer the type of a variable to be ‘just big enough’ to hold the initial constant value?” is “we need some way to guess the appropriate type.” That is, unless we’re going to require explicit type annotations on all variables that hold integers, or do one of the more exotic proposals discussed here, we need to determine the type of

A global unification-based type inference strategy (i.e., one where we look at all the uses of a variable and try to guess a width that would fit all of them) would be convenient and a classic PL approach to the problem. But global type inference is trickier to realize than it seems! So even if we eventually decide to try it, I think we need a simpler strategy for now.

About a few of the specific questions from @sa2257 above:
A near-term change we could make, which really just sidesteps the problem, would be to remove any fancy special-casing and require explicit types on any variable you actually want to change. So
@sa2257 What's the consensus on this? Are you satisfied with saying something like "inference in Fuse is conservative, i.e., it will always infer the smallest bit width for a constant. If you would like a constant to be stored with a larger bit width, use explicit types for
I think I'm still not on the same page with you on bit width inference. My specific concern is why local variables are treated this way. I noted above
and it seems, @rachitnigam, that you are referring to constants?
Maybe what I'm missing is the implied notion that this is intended for constants? Taking a modified
My concern is why
I agree
Personally, I would rather have local variables' bit widths defined explicitly, as in Verilog, than inferred, since local variables are supposed to carry different values during operation. We can differentiate between constants and local variables.
What I naively have in mind are the magic types:
Or we can explicitly specify bit widths for variables. This is Verilog-like, but designers are fully aware of the widths, e.g., whether an overflow will occur.
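The Verilog-style policy of explicitly declared widths could pair with an overflow check like the following (a hypothetical sketch; `check_assign` and its error policy are assumptions for illustration, not existing Fuse behavior):

```python
def check_assign(declared_width: int, value: int) -> None:
    # Flag any write whose value cannot fit the declared width,
    # instead of silently widening or truncating.
    needed = max(1, value.bit_length())
    if needed > declared_width:
        raise OverflowError(
            f"value {value} needs {needed} bits, "
            f"but the variable is declared bit<{declared_width}>")

check_assign(8, 200)   # fits in bit<8>
try:
    check_assign(8, 300)
except OverflowError as e:
    print(e)
```

With explicit widths, overflow becomes the programmer's visible responsibility rather than an inference artifact.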
Yup, I agree. You can do this with or without the PR. The first program you wrote doesn't fully make sense from the type checker's perspective. To the type checker, both
Anyway, since you can always specify the types, I think your concerns are covered. Let's merge #102 for now and talk more about it as issues come up.
Things that are unclear to me:
So I think the answer is yes, we decide based on the operator? I was ignorant of this; my understanding was that (assuming the bit width is not specified) we truncate to fit the default-size int (in our current implementation). If we decide the bit width of
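"Truncate to fit the default-size int" would amount to masking off the high bits, as in hardware register assignment. A small illustrative sketch (32 as the default width is taken from the discussion above, not confirmed from the implementation):

```python
DEFAULT_WIDTH = 32  # assumed default, per the discussion of 32 bits

def truncate(value: int, width: int = DEFAULT_WIDTH) -> int:
    # Keep only the low `width` bits; everything above is dropped.
    return value & ((1 << width) - 1)

print(truncate(2**32 + 5))  # 5: bits above the default width are lost
print(truncate(300, 8))     # 44: 300 mod 256
```

Silent truncation like this is exactly the behavior that explicit widths or errors on mismatch would surface to the programmer.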
Yes, but
In my version, I'm imagining the inferred types as constants, i.e., we can't write to them.
bit<32>
No, the emitted code will say
That depends on the implementation of
Unclear what you meant by this.
Ah, I missed the
For an example of why this is considered bad design, look at the semantics of
Maybe one thing that would alleviate your concerns would be adding explicit casting to the language. If you want to disable all inference, you can simply use casts and explicit lets to do so.
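Explicit casting as proposed could follow the usual unsigned conventions: widening zero-extends (the value is unchanged) and narrowing truncates. A sketch under those assumed semantics (`cast_unsigned` is a hypothetical name, not a Fuse primitive):

```python
def cast_unsigned(value: int, from_width: int, to_width: int) -> int:
    # Precondition: the value actually fits its source width.
    assert 0 <= value < (1 << from_width), "value must fit its source width"
    if to_width >= from_width:
        return value                      # widening: zero-extend, no change
    return value & ((1 << to_width) - 1)  # narrowing: drop the high bits

print(cast_unsigned(300, 16, 32))  # 300
print(cast_unsigned(300, 16, 8))   # 44 (300 mod 256)
```

Because every width change is written out, a program using only casts and explicit lets leaves nothing for inference to decide.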
Constants are stores you can put a value in to use within a program; you can't write to that store within the program.
That's just an example. We can call
Isn't it 64, if we don't consider
I mean in the current translation. I agree we can handle it.
So we don't throw an error if the bit widths don't match? Okay.
Hm, I am finding it hard to distill what needs to happen in the language. @sa2257, can you create a proposal titled “Sized Numbers” and explain what you would like the language to do in every case where numbers are used? Assume nothing about numbers in the language as they are. Just describe your desired behavior along with example programs. Instead of me explaining each case, maybe it’ll help if I can probe your reasoning about numbers.
Perhaps one useful (albeit pessimistic) perspective here is that there is no perfect answer without some significant changes. We need to do something for assignments like
This list does not include what I called "significant changes" above, which would include global constraint-based type inference and the "magic type alias" approach. Given that, I think all of these simple options are unsatisfying in some way, meaning that there's no moral imperative to choose one over any other. The right path in that case is probably to pick one option arbitrarily and revisit as we gain experience with real-world usage.
Discussion on bitwidth inference in the Chisel paper. See section 10.4.
HardCamel requires wires to have the same bitwidth.
Closing this since #111 and explicit annotation on
We can open more issues about subtyping as they come up. In the limit, we might eventually remove subtyping if it is confusing enough.
@sampsyo said the following in #46:
This can be implemented in two ways: (1) parsing magic comments in Fuse files that specify bit lengths, or (2) making the default bit length a CLI option.