Write to an existing String buffer instead of allocating a new one #90
Great to hear that you like it :) Note that the [...]. The issue, then, would be with the [...]. In my experience, either of these solutions would make Maud less ergonomic to use. I also think that for most users this trade-off is not worth it. (Maybe my opinion is wrong here, given that these users also chose Rust!) So if we do end up adding a no-allocation option to Maud, I think it should at least be optional.
I wonder if we could use trait overloading to pick between the two approaches based on context. Something like this:

```rust
trait FromTemplate<F: FnOnce(&mut String)> {
    fn from_template(template: F) -> Self;
}

impl<F: FnOnce(&mut String)> FromTemplate<F> for Markup {
    fn from_template(template: F) -> Markup {
        let mut buffer = String::new();
        template(&mut buffer);
        PreEscaped(buffer)
    }
}

struct MarkupFn<F>(F);

impl<F: FnOnce(&mut String)> FromTemplate<F> for MarkupFn<F> {
    // ...
}

impl<F: FnOnce(&mut String)> Render for MarkupFn<F> {
    // ...
}
```

I'm not sure how the [...] would work out in practice, though.

I'm also wary of breaking control flow. For example, currently we can do this:

```rust
fn query_database() -> impl Iterator<Item = Result<DbRow, DbError>> { ... }

let result = html! {
    @for entry in query_database() {
        li (entry?)
    }
};
```

where the `?` propagates any error out of the enclosing function. If we wrap the generated code in a closure then this pattern will no longer work.
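The control-flow concern can be seen outside of Maud with a small standalone sketch (my own illustration, not Maud's generated code): `?` inside a plain function propagates to that function's caller, which is exactly what a closure wrapper would break, since `?` would then only return from the closure.

```rust
// Minimal sketch of why wrapping template code in a closure breaks `?`.
// Here `?` propagates the error out of render_direct itself.
fn render_direct(items: &[Result<i32, String>]) -> Result<String, String> {
    let mut buffer = String::new();
    for entry in items {
        // On an Err, this line returns early from render_direct.
        let value = entry.as_ref().map_err(|e| e.clone())?;
        buffer.push_str(&format!("<li>{}</li>", value));
    }
    Ok(buffer)
}

fn main() {
    let ok = render_direct(&[Ok(1), Ok(2)]);
    assert_eq!(ok.unwrap(), "<li>1</li><li>2</li>");

    let err = render_direct(&[Ok(1), Err("db error".to_string())]);
    assert!(err.is_err());
}
```

If the loop body instead lived inside a closure of type `FnOnce(&mut String)` returning `()`, the `?` would not even compile, because `?` requires the *enclosing function* to return a `Result` (or `Option`).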
I've not looked at how Maud is implemented, so this might be a stupid suggestion, but how about splitting `html!` into two different macros? For instance, [...]
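A toy sketch of the two-macro split (purely illustrative string-pushing stand-ins; the `html_to!` and `html!` below are not Maud's real implementations): the buffer-writing macro is the primitive, and the allocating macro is a thin wrapper over it.

```rust
// Hypothetical stand-in: append pre-rendered fragments to a buffer.
macro_rules! html_to {
    ($buffer:expr, $($content:expr),* $(,)?) => {
        $( $buffer.push_str($content); )*
    };
}

// The allocating form is defined in terms of the buffer-writing form.
macro_rules! html {
    ($($content:expr),* $(,)?) => {{
        let mut buffer = String::new();
        html_to!(buffer, $($content),*);
        buffer
    }};
}

fn main() {
    let markup = html!("<p>", "hello", "</p>");
    assert_eq!(markup, "<p>hello</p>");

    // The caller can also supply its own reusable buffer directly.
    let mut buffer = String::new();
    html_to!(buffer, "<p>", "reused", "</p>");
    assert_eq!(buffer, "<p>reused</p>");
}
```

The appeal of this layering is that only one macro needs to know how to generate rendering code; the allocating variant stays a two-line convenience.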
I agree for the solution with closures, but not for buffers. The best crates I've ever used let me know where allocations are made. This is more ergonomic: [...]
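The example appears to have been lost from the thread; a generic sketch of the explicit-buffer style being described, using `std::fmt::Write` (the function name here is hypothetical, not from the discussion):

```rust
use std::fmt::Write;

// Hypothetical render function: all writes go into the caller's
// buffer, so the caller controls exactly where allocation happens.
fn render_greeting(buffer: &mut String, name: &str) {
    write!(buffer, "<p>Hello, {}!</p>", name).unwrap();
}

fn main() {
    // The one allocation is visible at the call site.
    let mut buffer = String::with_capacity(1024);
    render_greeting(&mut buffer, "world");
    assert_eq!(buffer, "<p>Hello, world!</p>");
}
```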
Yep, I think you have a good point there. I'd be okay with adding an `html_to!` macro.
Okay -- since #92 has landed now, I'll be happy to take a PR to implement an `html_to!` macro.
@P-E-Meunier I've changed the title of the issue -- does this sound like what you're looking for? I haven't done much async I/O work in Rust, so I want to confirm that this addresses your use case.
Yes, it does address my use case (the goal is to allocate a single buffer for all pages served during the entire life of the process).
This could help lessen the need for #90, since it removes the need to mention a [...]
Hi, I found myself writing the following code:

```rust
impl maud::Render for MyLittleStructure {
    fn render(&self) -> maud::Markup {
        html! {
            .container {
                // ... non-trivial HTML stuff ...
            }
        }
    }
}
```

Since I'm rendering something like 16,000 divs, I worry that this is going to be a performance bottleneck sooner or later. I'd love to write something like the following:

```rust
impl maud::Render for MyLittleStructure {
    fn render_to(&self, buffer: &mut String) {
        html_to! { buffer,
            .container {
                // ... non-trivial HTML stuff ...
            }
        }
    }
}
```

Are you still accepting PRs for this feature? I think this could be done very easily with the current state of the codebase.
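One way the two methods could coexist, sketched as a standalone trait (an illustration of the idea, not Maud's actual `Render` trait): `render` can be given a default implementation on top of `render_to`, so implementors only write the buffer-based version and callers choose whichever form fits.

```rust
// Hedged sketch of a Render trait offering both styles.
trait Render {
    // The primitive: write into a caller-supplied buffer.
    fn render_to(&self, buffer: &mut String);

    // Convenience default, implemented in terms of render_to.
    fn render(&self) -> String {
        let mut buffer = String::new();
        self.render_to(&mut buffer);
        buffer
    }
}

struct MyLittleStructure;

impl Render for MyLittleStructure {
    fn render_to(&self, buffer: &mut String) {
        // Stand-in for what an html_to! expansion would emit.
        buffer.push_str("<div class=\"container\">...</div>");
    }
}

fn main() {
    // Render 3 items into one shared buffer: no per-item allocation.
    let mut buffer = String::with_capacity(4096);
    for _ in 0..3 {
        MyLittleStructure.render_to(&mut buffer);
    }
    assert_eq!(buffer.matches("<div").count(), 3);

    // The allocating form still works for one-off use.
    assert!(MyLittleStructure.render().contains("container"));
}
```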
So, I'm trying to use Maud for nest.pijul.com. So far I like it, but nest.pijul.com runs on exactly four threads, forked at the beginning, and is entirely nonblocking. More specifically: [...]

At least one buffer per connection is not really avoidable if you don't want to mix requests between clients, but the internals of a server should not need more than that.

So, in this context, Maud has the potential to allocate a single buffer per thread and reuse it between clients, because rendering is never interrupted by I/O. In a standard synchronous server I agree with your comments on benchmarks (you would need some number of buffers per client), but async I/O can allow constant memory use and a constant number of allocations.
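The per-thread reuse pattern described here can be sketched independently of Maud: clearing a `String` keeps its capacity, so a long-lived buffer reaches a steady state with no further allocations (the function and paths below are illustrative):

```rust
// One String per worker thread, cleared between requests.
// String::clear keeps the allocation, so same-sized responses
// never reallocate after warm-up.
fn handle_request(buffer: &mut String, path: &str) -> usize {
    buffer.clear(); // retains capacity
    buffer.push_str("<html><body>");
    buffer.push_str(path);
    buffer.push_str("</body></html>");
    buffer.len()
}

fn main() {
    let mut buffer = String::with_capacity(8192);

    handle_request(&mut buffer, "/first");
    let cap_after_first = buffer.capacity();

    handle_request(&mut buffer, "/second");
    // Capacity is reused between requests: no new allocation.
    assert_eq!(buffer.capacity(), cap_after_first);
    assert!(buffer.contains("/second"));
}
```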