Allow using a different linker in deployment #1000

Open · 1 task done
that-ambuj opened this issue Jun 13, 2023 · 18 comments
Labels
S-Investigation This issue needs further investigation or design to figure out a solution T-Feature Request A request for a new feature

Comments

@that-ambuj

Describe the improvement

Hi, in my Rust projects I use a different linker such as lld or mold for faster compile times, because I code on a budget laptop and Rust is notorious for its slow default compile times.

Lately, I have started using https://shuttle.rs for personal projects via their CLI tool cargo-shuttle. The issue I'm facing is that when running cargo run or cargo build, cargo uses my .cargo/config.toml file to select a linker, which works out fine and gives me great compile times. But when I use cargo shuttle run to run the shuttle project locally, the project gets rebuilt using the slow, default gold linker (this is just an assumption), and it takes ~15s to rebuild after every change (I use cargo watch -cx "shuttle run" to reload on change detection). This makes debugging sessions, or just running a dev server while working, very annoying.

The binary built with plain cargo build or cargo run is unusable on its own, because it requires some options to be set and still doesn't behave the same as cargo shuttle run.

There might be two reasons for long compile times that I can think of:

  • cargo shuttle run does not respect the $HOME/.cargo/config.toml or the project/.cargo/config.toml file and uses the default linker every time.
  • cargo shuttle run builds the binary in release mode and release binaries are known for taking a long time to compile.

The improvements that can be done here are:

  • Respect the user's linker of choice from the cargo configuration.
  • Allow users to choose which build mode (debug or release) is used for locally run binaries.
  • Allow users to run the built binary as-is, with some default options set, without requiring the CLI to execute it.

Above are the best ideas that I could think of, but I'd really like to hear everyone's opinion and ideas about this.

Duplicate declaration

  • I have searched the issues and this improvement has not been requested before.
@that-ambuj that-ambuj added the T-Improvement Improvement or addition to existing features label Jun 13, 2023
@jonaro00
Member

Since the last release, #922 has been merged, which means that the locally installed cargo will be used instead of a bundled one. I think that might solve the issue you're facing. We can wait until 0.19 and see if it works.

@jonaro00 jonaro00 added A-cargo-shuttle S-Investigation This issue needs further investigation or design to figure out a solution labels Jun 13, 2023
@that-ambuj
Author

Thanks @jonaro00, I'll wait until 0.19; if the issue still persists I'll let you know, otherwise I'll close this issue.

@chesedo
Contributor

chesedo commented Jun 14, 2023

v0.19.0 should be released on Monday (19 Jun) 🤞

@jonaro00
Member

jonaro00 commented Jul 3, 2023

Sounds like this will need #928, which I'm addressing in #1050

@that-ambuj
Author

I tested this on v0.19, and cargo-shuttle did in fact read my .cargo/config.toml, but the build failed at deployment because the lld binary was not installed in the deployment container.

Maybe we can add a key to Shuttle.toml like required-packages, which would install the listed packages at deployment. The package names could be standardised against the Alpine or Debian package repositories.
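For example, something along these lines (purely a sketch; a required-packages key does not exist in Shuttle.toml today, and the package names are illustrative):

# Shuttle.toml (hypothetical)
name = "my-project"
required-packages = ["lld", "clang"]  # packages the deployer would install before building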

Above is just a suggestion but you may have a better idea.

@jonaro00
Member

jonaro00 commented Jul 4, 2023

Installing external dependencies is planned, not sure when it will eventually be released.

@jonaro00
Member

jonaro00 commented Aug 4, 2023

Similar to #703

@jonaro00 jonaro00 added T-Feature Request A request for a new feature and removed S-Investigation This issue needs further investigation or design to figure out a solution labels Aug 4, 2023
@jonaro00 jonaro00 added S-Blocked Implementing this is currently blocked by other issues and/or upstream dependencies S-Accepted This will be worked on and removed A-cargo-shuttle T-Improvement Improvement or addition to existing features labels Aug 17, 2023
@jonaro00
Member

@that-ambuj I just realized: you can get around this issue by keeping your .cargo/config.toml locally but adding it to .gitignore so that it does not get uploaded. Or is there some other hurdle that stops this from working?
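That is, keep the file but ignore it (assuming the config lives at the project root):

# .gitignore
.cargo/config.toml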

@that-ambuj
Author

Yeah, I have a global .cargo/config.toml in my home directory. It works perfectly fine in local builds, but the deployment builds crash when I don't .gitignore the config.toml, because the container looks for the lld (linker) binary, which does not come pre-installed. Due to this, I have to ignore the .cargo/config.toml, and the deployment builds are much slower.

@jonaro00
Member

We will consider allowing custom linkers in the upcoming building system. Not sure how possible it is though.

@jonaro00 jonaro00 changed the title Allow using a different linker or allow the compiled binary to be used independently instead of cargo shuttle run Allow using a different linker in deployment Sep 22, 2023
@jonaro00 jonaro00 added S-Investigation This issue needs further investigation or design to figure out a solution and removed S-Blocked Implementing this is currently blocked by other issues and/or upstream dependencies S-Accepted This will be worked on labels Sep 22, 2023
@that-ambuj
Author

I have two ideas right now:

  • We allow the users to specify a folder for binaries, and the deployment container copies them to /usr/bin on startup. However, this has the issue of version and architecture mismatches, along with the hassle of cross-compiling and copy-pasting binaries.
  • Or we can specify a package name and version from the Alpine package repo in Shuttle.toml, in a separate section for binaries such as [bins] (see the sketch after this list). The deployment container would then run apk add <package> on startup. This works provided we use an Alpine-based distro for the deployment container, but it is also possible with the Debian package repo.
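A sketch of what the second idea might look like (hypothetical; no [bins] section exists in Shuttle.toml today, and the versions are made up for illustration):

# Shuttle.toml (hypothetical)
[bins]
lld = "16.0.6-r0"   # apk add lld=16.0.6-r0 on container startup
mold = "1.11.0-r0"  # apk add mold=1.11.0-r0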

What do you think about this?

@jonaro00
Member

The first bullet is possible now, by gitignoring the binaries, declaring them as assets, and using a relative path to them, but as you say, it has the version and architecture issues.
We are planning something similar to the second bullet.

I should let you know that "hacking" the deployer to run apt install before deployment is possible, but not pretty. #703 (comment)

We could probably add something like lld pre-installed to the deployment container, if it's just a simple apt install.
Could you provide the cargo config showing how lld is used?

@that-ambuj
Author

In a .cargo/config.toml file inside your project or workspace root, you have to add:

[target.x86_64-unknown-linux-gnu]
linker = "clang"                                         # use clang as the linker driver
rustflags = ["-C", "link-arg=-fuse-ld=/path/to/ld.lld"]  # make clang invoke lld at this path

Please note that in one of the recent releases the binary has been renamed from lld to ld.lld, again highlighting a version mismatch issue.

I also want to mention https://github.com/rui314/mold, which is a Linux-only linker that is even faster than lld.
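The equivalent config for mold would be along these lines (a sketch; it assumes mold is installed at /usr/bin/mold and clang 12 or newer):

[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=/usr/bin/mold"]  # point clang at the mold binary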

@jonaro00
Member

Cool. Does the path /path/to/ld.lld need to be absolute, or can it be just ld.lld and found via PATH? It would be optimal to be able to use the same config file locally and in the deployer.

@that-ambuj
Author

that-ambuj commented Sep 22, 2023

I believe the path needs to be absolute.

In most cases the binaries are located in /usr/bin, so I don't think there'll be any exceptions.

@jonaro00
Member

jonaro00 commented Oct 1, 2023

@that-ambuj
Ok, then this sounds like a feasible feature to implement.
I'll try adding it in the next release, and then you can check whether it works. It does end up in /usr/bin/ld.lld.
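Based on that, a config that should work both locally and in the deployer would look like this (a sketch; it assumes lld is installed at /usr/bin/ld.lld on your machine as well):

[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=/usr/bin/ld.lld"]  # same absolute path in both environments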

@jonaro00
Member

jonaro00 commented Oct 2, 2023

@that-ambuj We're including lld and mold in 0.28.0. I tried compiling a project without and with them, but saw no difference in compile times. You can try with your project though! (soon)

@that-ambuj
Author

that-ambuj commented Oct 3, 2023

@jonaro00 Actually, there is a slight difference in build times when using lld or mold, but it may be less noticeable because of the overhead introduced by Docker. The bigger gains appear when rebuilding and redeploying a program.
