Conversation

@krichprollsch (Member)

$ zig build run -- fetch  http://www.urlfilterdb.com/
debug(cli): Fetch mode: url http://www.urlfilterdb.com/, dump false
debug(browser): start js env
debug(browser): setup global env
debug(polyfill): load polyfill-fetch: undefined
debug(browser): inspector context created
debug: Inspector contextCreated called
debug(browser): starting GET http://www.urlfilterdb.com/
info(http_client): redirecting to: GET https://www.urlfilterdb.com/
info(browser): GET https://www.urlfilterdb.com/ 200
debug(browser): header content-type: text/html; charset=utf-8
debug(browser): parse html with charset utf-8
debug(browser): inspector context created
debug: Inspector contextCreated called
debug(browser): wait: OK

@krichprollsch requested a review from karlseguin April 3, 2025 16:25
@krichprollsch self-assigned this Apr 3, 2025
@krichprollsch changed the title from "browser: update urls when redirecting" to "browser: update urls after redirection" Apr 3, 2025

// update uri after eventual redirection
var buf: std.ArrayListUnmanaged(u8) = .{};
defer buf.deinit(arena);
@karlseguin (Collaborator)

We can treat the allocator like an ArenaAllocator or like a generic std.mem.Allocator.

If we treat it like an arena, we can remove unnecessary frees, which each carry a super-tiny performance cost.

If we treat it like an Allocator, we gain some flexibility with respect to changing the implementation.

It isn't a system-wide decision: there are places where we'll have an arena and want to treat it like a generic allocator, and there are places where we'll definitely want to take advantage of the arena's behavior. The reason I like to name these ArenaAllocators `arena` is that it's clear to the writer & reader what the behavior is.

TL;DR - The deinit is almost certainly a no-op and could be removed. It's up to you.
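For illustration only (not code from this PR), a minimal sketch of the two treatments, assuming the buffer is backed by a std.heap.ArenaAllocator exposed as `arena` like in the diff above:

```zig
const std = @import("std");

test "arena-backed buffer: the deinit is effectively optional" {
    var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
    defer arena_state.deinit(); // everything allocated below is released here at once
    const arena = arena_state.allocator();

    // Treated as a generic std.mem.Allocator: pair the allocation with a deinit.
    // Against an arena this free is almost certainly a no-op, but the code keeps
    // working if the allocator implementation ever changes.
    var buf: std.ArrayListUnmanaged(u8) = .{};
    defer buf.deinit(arena);
    try buf.appendSlice(arena, "https://www.urlfilterdb.com/");

    // Treated as an arena: skip the per-buffer free entirely and rely on
    // arena_state.deinit() to reclaim the memory.
    var scratch: std.ArrayListUnmanaged(u8) = .{};
    try scratch.appendSlice(arena, "https://www.urlfilterdb.com");

    try std.testing.expect(buf.items.len > scratch.items.len);
}
```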

var buf: std.ArrayListUnmanaged(u8) = .{};
defer buf.deinit(arena);

buf.clearRetainingCapacity();
@karlseguin (Collaborator)

Does nothing (buf was just initialized, so it's already empty).

.query = true,
.fragment = true,
}, buf.writer(arena));
self.rawuri = try buf.toOwnedSlice(arena);
@karlseguin (Collaborator)

I see that you re-use the buffer later, which I guess is why you copy the contents. But toOwnedSlice gives up (and may free) the underlying memory, so the list's buffer won't get re-used anyway. I think you can use buf.items to avoid the allocation + copy.
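As a hedged sketch of that trade-off (the `rawuri` name is borrowed from the diff above, everything else is assumed):

```zig
const std = @import("std");

test "toOwnedSlice hands over the buffer; items merely aliases it" {
    var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
    defer arena_state.deinit();
    const arena = arena_state.allocator();

    var buf: std.ArrayListUnmanaged(u8) = .{};
    try buf.appendSlice(arena, "https://www.urlfilterdb.com/");

    // Option A: toOwnedSlice shrinks (possibly reallocating + copying) the
    // buffer, returns it to the caller, and leaves the list empty, so the old
    // capacity is not re-used by later writes.
    const rawuri = try buf.toOwnedSlice(arena);
    try std.testing.expectEqual(@as(usize, 0), buf.items.len);

    // Option B: buf.items aliases the list's own buffer; no allocation and no
    // copy, but the slice stays valid only while the list is left untouched.
    try buf.appendSlice(arena, rawuri);
    const aliased = buf.items;
    try std.testing.expectEqualStrings(rawuri, aliased);
}
```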

try self.session.window.replaceLocation(&self.location);

// prepare origin value.
buf.clearRetainingCapacity();
@karlseguin (Collaborator)

As above, if you stick with toOwnedSlice(arena), then this line does nothing: toOwnedSlice(arena) already calls clearAndFree().

If, above, you instead decide to do:

self.rawuri = buf.items;

then here you'll want to reset the buffer:

buf = .{};

since clearRetainingCapacity() would keep re-using the same buffer and break self.rawuri.
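A small sketch of that failure mode and the fix, under the same assumptions as before (arena-backed allocator; the `rawuri` and origin values stand in for the fields in the diff):

```zig
const std = @import("std");

test "reset the list instead of re-using its buffer" {
    var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
    defer arena_state.deinit();
    const arena = arena_state.allocator();

    var buf: std.ArrayListUnmanaged(u8) = .{};
    try buf.appendSlice(arena, "https://www.urlfilterdb.com/");
    const rawuri = buf.items; // points into buf's buffer

    // With clearRetainingCapacity() the next write would overwrite rawuri's
    // bytes. Resetting the list detaches it from that buffer; the arena still
    // owns the bytes, so rawuri stays valid until the arena is deinitialized.
    buf = .{};

    try buf.appendSlice(arena, "https://www.urlfilterdb.com"); // e.g. the origin
    const origin = buf.items;

    try std.testing.expectEqualStrings("https://www.urlfilterdb.com/", rawuri);
    try std.testing.expectEqualStrings("https://www.urlfilterdb.com", origin);
}
```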

.scheme = true,
.authority = true,
}, buf.writer(arena));
self.origin = try buf.toOwnedSlice(arena);
@karlseguin (Collaborator)

There's also the option to use buf.items here.

@krichprollsch (Member, Author)

@karlseguin thanks! I fixed the buffer usage.

@karlseguin merged commit fab6ec9 into main Apr 4, 2025
12 checks passed
@github-actions bot locked and limited conversation to collaborators Apr 4, 2025
@krichprollsch deleted the redirect-url branch April 9, 2025 06:46