
[SCFToCalyx] Use sequential memories #5599

Merged: 9 commits into llvm:main on Jul 20, 2023

Conversation

andrewb1999 (Contributor)

Switches SCFToCalyx from using combinational read memories to sequential read memories. Sequential reads are much more realistic for hardware, so the plan is to eventually deprecate combinational read memories in the native Calyx compiler. Unfortunately, the same change cannot be made in LoopScheduleToCalyx yet, as it requires further changes to how we build the pipeline. A PR with those fixes for LoopScheduleToCalyx is in the works.
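For readers unfamiliar with the distinction, here is a minimal behavioral sketch in C++ of what the change means (illustrative names only, not CIRCT or Calyx code): a combinational-read memory produces data in the same cycle the address is presented, while a sequential-read memory latches the address when a read is requested and only provides valid data on a later cycle, signalled by a done flag.

#include <cstdint>
#include <vector>

struct CombReadMem {
  std::vector<uint32_t> storage;
  // Same-cycle read: the data is a pure function of the current address.
  uint32_t read(uint32_t addr) const { return storage[addr]; }
};

struct SeqReadMem {
  std::vector<uint32_t> storage;
  uint32_t latchedAddr = 0;
  bool readDone = false;

  // Cycle N: present the address and assert read enable.
  void assertRead(uint32_t addr) { latchedAddr = addr; readDone = false; }
  // Clock edge: the memory completes the read.
  void tick() { readDone = true; }
  // Cycle N+1 or later: data is only meaningful once readDone is high.
  uint32_t readData() const { return storage[latchedAddr]; }
};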

cgyurgyik (Member)

Nice!!

andrewb1999 (Contributor, Author)

I realized I wasn't handling lowering correctly when we access a memory multiple times. In that case we need to do the read and write it into a register for later use, which makes the read take two cycles. This is a reasonable tradeoff between frequency and latency.
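To make the tradeoff concrete, a small illustrative sketch (hypothetical names, not the pass's actual lowering code): when the same read value is needed more than once, the lowering latches read_data into a register once read_done is observed, so the read costs two cycles (memory access, then register write), but all later uses hit the register rather than the memory port.

#include <cstdint>

struct MultiUseRead {
  uint32_t valueReg = 0; // register that holds the latched read data
  bool valid = false;

  // Cycle 1: the control group asserts read_en with the address.
  // Cycle 2: once read_done is high, read_data is written into valueReg.
  void onReadDone(uint32_t readData) { valueReg = readData; valid = true; }

  // Every subsequent use reads the register; no further memory cycles needed.
  uint32_t use() const { return valueReg; }
};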

cgyurgyik (Member)

> I realized I wasn't handling lowering correctly when we access a memory multiple times. In that case we need to do the read and write it into a register for later use, which makes the read take two cycles. This is a reasonable tradeoff between frequency and latency.

LGTM. Maybe spell this tradeoff out explicitly in the code as well?

andrewb1999 (Contributor, Author)

> LGTM. Maybe spell this tradeoff out explicitly in the code as well?

Good point, added back some comments to clarify what's going on here.

Comment on lines +87 to +92
std::optional<Value> readData;
std::optional<Value> readEn;
std::optional<Value> readDone;
std::optional<Value> writeData;
std::optional<Value> writeEn;
std::optional<Value> writeDone;

Contributor

What's the reason for introducing all of these std::optional wrappers and getter methods, given that a Value is nullable?

andrewb1999 (Contributor, Author)

I guess that's a fair point; I just generally prefer being explicit about optionality when possible. The whole point of structuring things like this is to handle different kinds of memory interfaces (read-only, different kinds of read/write enable/done, etc.), so these values are genuinely optional at times, whereas I often think of a null Value as an error state. If this is bad style for CIRCT, let me know and I can change it.

Contributor

I think this is somewhat a matter of code style, but I personally prefer optionality made explicit, as it is here. You could use null Values, but if you ever pass a null Value to MLIR APIs you will quickly hit asserts or segfaults. Making optionality explicit like this lets the type system enforce that you're considering absence at the right places. This isn't totally fair, but the question is kind of like asking why Java introduced Optional given that references are nullable.

Contributor

It is, however, not without a cost, and LLVM tends to prefer not having redundant checking code to avoid "slowdown by a thousand minor inefficiencies".

Member

> LLVM tends to prefer not having redundant checking code to avoid "slowdown by a thousand minor inefficiencies".

Reference?

First, I too am in favor of using explicit Optional instead of depending on nullability; they have two different meanings. Second, I think we should prove this is actually causing a performance regression before talking about inefficiencies. (In general, I agree with Mike - it boils down to preference. There are arguments both for and against.)

andrewb1999 (Contributor, Author)

The worry about redundant checks is not really relevant in this case anyway, because we regularly have to check which ports a memory interface has in order to decide how to connect to a given interface. We're just using optional to check for this instead of explicitly checking for a null Value.
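A minimal sketch of that pattern (hypothetical names, not the PR's actual code): the presence check on the optional port is the same check the lowering has to make anyway, and the type system forces the absent-port case to be handled before the Value ever reaches an MLIR API.

#include <optional>
#include "mlir/IR/Value.h"

struct MemoryPortsSketch {
  std::optional<mlir::Value> readEn;  // absent if the interface has no read-enable port
  std::optional<mlir::Value> writeEn; // absent on read-only interfaces
};

// Hypothetical helper: decide how to connect based on which ports exist.
void connectReadSide(const MemoryPortsSketch &ports) {
  if (!ports.readEn) {
    // Combinational-style interface: there is no enable handshake to drive.
    return;
  }
  mlir::Value readEn = *ports.readEn; // known non-null from here on
  (void)readEn; // ... build the assignments that drive readEn here ...
}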

andrewb1999 (Contributor, Author)

Mike and I are in agreement here on making optional explicit. I'm going to go ahead and merge this.

andrewb1999 merged commit 3aa1821 into llvm:main on Jul 20, 2023
5 checks passed
andrewb1999 deleted the use-seq-memories branch on March 15, 2024