
make rama-cli squid-config compatible #128

Open
GlenDC opened this issue Mar 29, 2024 · 3 comments
Labels: mentor available, needs input
Comments


GlenDC commented Mar 29, 2024

For some use cases and users, a config file is good enough. Especially if we allow custom logic to also be controlled by the same config file, this can be a very powerful concept.

For v0.2 it would already be cool if the following config file worked:

acl SSL_ports port 443

acl Safe_ports port 80		# http
acl Safe_ports port 443		# https

acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_port 8080

This would also make the glendc/rama:latest Docker image useful out of the box for plenty of use cases.

The reference can be found here: https://www.squid-cache.org/Doc/config/

For whoever picks this up, it's a nice opportunity to get your hands dirty with something like https://docs.rs/winnow/latest/winnow/ if you haven't used it already.
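
As an illustration of how that could look, here is a minimal sketch (assuming winnow 0.6's &mut &str parser style; this is not actual rama code and the AclPortRule name is made up) that parses a single "acl <name> port <port>" line:

use winnow::ascii::{digit1, space1};
use winnow::prelude::*;
use winnow::token::take_while;

#[derive(Debug, PartialEq)]
struct AclPortRule {
    name: String,
    port: u16,
}

// parse e.g. `acl SSL_ports port 443` into an AclPortRule
fn acl_port_rule(input: &mut &str) -> PResult<AclPortRule> {
    // literal keyword followed by whitespace
    let _ = ("acl", space1).parse_next(input)?;
    // the ACL name: alphanumerics and underscores
    let name = take_while(1.., |c: char| c.is_alphanumeric() || c == '_').parse_next(input)?;
    let _ = (space1, "port", space1).parse_next(input)?;
    // the port number, parsed into a u16 via FromStr
    let port = digit1.parse_to().parse_next(input)?;
    Ok(AclPortRule {
        name: name.to_string(),
        port,
    })
}

fn main() {
    let mut line = "acl SSL_ports port 443";
    println!("{:?}", acl_port_rule(&mut line));
}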

@GlenDC GlenDC added good first issue Good for newcomers mentor available A mentor is available to help you through the issue. labels Mar 29, 2024
@GlenDC GlenDC modified the milestones: v0.2, v0.3 Mar 29, 2024

GlenDC commented Mar 29, 2024

Moving this to v0.3, because as fun as it is, it does seem like a distraction from what we really need for now. Once v0.2 is out, it's definitely something we can tackle for v0.3.


GlenDC commented Apr 3, 2024

Those who want to pick up this issue, be warned that it will require a decent time investment. There is a lot to be learned from it and pleasure to be taken from it, but it will also require plenty of work.

The code for the config will live in a new module, rama::config, and can be seen as an alternate interface to the Rust API. This will be the API that drives the rama binary tool, but besides that it should also be possible for people building proxies with rama to make use of this config module. Meaning that people should be able to expose their own middleware/services so they can be configured/used via the config.

This story is about getting that API started. In reality this interface is a never-ending story, just like the Rust API is, but we have to start somewhere, and for the config API that start is this story.

The steps that have to be completed for this story to be considered "resolved" (done):

  1. add a chapter about the rama binary tool in https://github.com/plabayo/rama/tree/main/docs/book/src. Here you are also to add a fully documented example of a config file that showcases all the possibilities. It is to serve as our roadmap for everything the config is meant to cover, so that we can ensure the parts we develop for this story are already prepared for the road ahead. Obviously things can still change a bit during the coming weeks and months as we evolve this, but ideally the rough lines are already clear at this point
  2. write a parser (using winnow) to parse the config into a Rust structure, so that the parsed structure (on success) can be used by others and ourselves alike to put together a proxy stack (see the sketch after this list). This step does not yet need to fully parse everything that we document / dream of in step (1). Instead it should only parse the parts that we'll already support in step (3)
  3. create the initial version of the rama binary tool (which lives in https://github.com/plabayo/rama/tree/main/rama-cli), which reads a config (using step (2)) and creates a proxy stack based on that config, and then also runs it
  4. add a couple of high-level tests for it (E2E tests)
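
To make step (2) a bit more concrete, here is a hypothetical sketch (not the actual rama::config API; all type names here are made up) of what the parsed Rust structure for the squid-style subset at the top of this issue could look like:

/// Parsed representation of the squid-style config subset above.
#[derive(Debug)]
pub struct Config {
    pub acls: Vec<Acl>,
    pub http_access: Vec<AccessRule>,
    pub http_ports: Vec<u16>,
}

/// An `acl <name> <kind> <value>` declaration.
#[derive(Debug)]
pub enum Acl {
    Port { name: String, port: u16 },
    Method { name: String, method: String },
}

/// An `http_access allow|deny <condition>...` rule.
#[derive(Debug)]
pub struct AccessRule {
    pub action: AccessAction,
    pub conditions: Vec<AclCondition>,
}

#[derive(Debug)]
pub enum AccessAction {
    Allow,
    Deny,
}

/// A reference to an ACL by name; a leading `!` in the source negates it.
#[derive(Debug)]
pub struct AclCondition {
    pub acl_name: String,
    pub negated: bool,
}

The parser from step (2) would only need to produce something of this shape; how it maps onto actual services is the job of step (3).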

These steps are to be done one by one, in order, with one branch/PR per step. They do not have to be done by the same person, but only one person/team should be active on this at a time. And whoever is already working on one step gets preference to also complete the step after it, if that is what they want.

At all steps feel free to involve me and report regularly. This way you are not alone in this and we can ensure that not too much work is done that shouldn't be done. It is important at all times that we stay aligned. I am also here for guidance, mentoring and assistance, either here (GitHub) or on Discord, or both.


GlenDC commented Apr 15, 2024

What attracted me to the squid config format initially was how it allows you to also declare variables (ACLs) and offers similar features, which makes for a pretty powerful config format.

A more traditional approach would be TOML or YAML. E.g. a much simpler format is something like Fly.io's:

[[services]]
  internal_port = 8080
  protocol = "tcp"

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20

  [[services.ports]]
    handlers = ["http"]
    port = "80"

  [[services.ports]]
    handlers = ["tls", "http"]
    port = "443"

Here you would still allow middleware to be configured in flexible ways, and you can map handlers to specific services (e.g. tls could mean the default TLS implementation, while we could also allow rustls specifically, http would be the default http proxy, etc.). Obviously this is a cloud web deployment config example, so it does not apply 1-to-1 to proxies, but it does show the idea.

This is in a way a lot simpler, as we can just parse the TOML with serde, and within the defined format also allow dynamic fields that components can register into.
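
As a rough sketch of that idea (an assumption, not rama code; the Config/Service/PortConfig names are made up and it relies on the serde and toml crates), unknown sections could be kept as raw TOML values so that registered middleware can deserialize its own part of the config later:

use std::collections::HashMap;
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Config {
    services: Vec<Service>,
}

#[derive(Debug, Deserialize)]
struct Service {
    internal_port: u16,
    protocol: String,
    ports: Vec<PortConfig>,
    // Anything else (e.g. `[services.concurrency]`) is kept as raw TOML,
    // so dynamically registered middleware can interpret it later.
    #[serde(flatten)]
    middleware: HashMap<String, toml::Value>,
}

#[derive(Debug, Deserialize)]
struct PortConfig {
    handlers: Vec<String>,
    port: String,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // hypothetical config file path
    let raw = std::fs::read_to_string("rama.toml")?;
    let cfg: Config = toml::from_str(&raw)?;
    println!("{cfg:#?}");
    Ok(())
}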

One could, for example, register a transport middleware (concurrency) to hook into the concurrency config property, e.g. (a simple sketch, don't take it as-is, unless you agree and want to take it further):

// register a (hypothetical) middleware for the "concurrency" config property,
// mapping its parsed config onto an actual layer
let mut cfg = rama::config::default();
cfg.register_middleware::<MyConcurrencyConfig>("concurrency", |concurrency_cfg: MyConcurrencyConfig| {
    ConcurrencyLayer::new(/* ... */)
});

// parse the config file and spawn one service per configured proxy
let proxy_services: Vec<_> = cfg.parse().await?.collect();
for proxy_service in proxy_services {
    /* spawn it */
}

There are of course still plenty of questions to answer even for such a much simpler approach:

  • how to differentiate between transport middleware and http middleware
  • how to allow for upstream middleware (http layer <-> /* middleware */ <-> target)
  • etc...

And of course, to bring us back to the original topic of a squid-inspired config: what does that format give us that would not be possible with a TOML-based format? Or, put in other words, what are the pros and cons of each approach?

@GlenDC GlenDC added needs input and removed good first issue Good for newcomers labels May 22, 2024