
Add Cortex-M targets to the compiler + binary releases of core #1645

Closed
wants to merge 5 commits

Conversation

japaric
Member

@japaric japaric commented Jun 8, 2016

It's long been known that one can build Rust programs for ARM Cortex-M microcontrollers.
This RFC proposes making these cross compilation targets more first-class by adding them to the
compiler and providing binary releases of standard crates like core for them.

Rendered

cc @rust-lang/tools
cc @hackndev

[compiler-rt] relatively easy. However, adding these targets to the compiler will make the setup
process simpler (fewer steps, less tooling required, etc.).

[extensive documentation]: http://japaric.github.io/copper
Member Author

(Beware: Still vaporware.)

@nrc nrc added the T-dev-tools Relevant to the development tools team, which will review and decide on the RFC. label Jun 8, 2016
@SimonSapin
Contributor

I’ve been meaning to try and get Rust running on a BBC micro:bit (when they become more available for sale) or a Teensy. They both have an ARM Cortex-M.

These CPUs seem popular in 32-bit microcontrollers. Implementing this RFC would significantly lower the barrier to entry to Rust for hardware hobbyists. (Think projects that might use C/C++ on an Arduino.)

@durka
Contributor

durka commented Jun 8, 2016

Rust runs on the Teensy already, actually.

@SimonSapin
Contributor

Yes, but getting to hello world (or a blinking LED) is fairly involved: you have to define your own target JSON file, compile libcore for it, etc. This RFC is about making all that easier.
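For readers who haven't gone through this, the manual setup being described starts with a custom target specification file passed to rustc via --target. A minimal sketch for a Cortex-M3-class part might look like the following (field names follow rustc's target spec JSON; the exact data-layout string and field set are illustrative and must match what your rustc version expects):

```json
{
    "llvm-target": "thumbv7m-none-eabi",
    "data-layout": "e-m:e-p:32:32-i64:64-v128:64:128-a:0:32-n32-S64",
    "arch": "arm",
    "os": "none",
    "target-endian": "little",
    "target-pointer-width": "32",
    "executables": true
}
```

After that you still have to cross-compile libcore against this spec, which is the part this RFC aims to eliminate.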

@JayKickliter

Implementing this RFC would significantly lower the barrier to entry to Rust for hardware hobbyists.

For a hobby project, I can live with having to build libcore and maintain my own toolchain. My coworkers, some of whom have never heard of Rust, do mind. I see this as lowering the barrier of entry for using Rust in commercial ARM-based embedded projects. This is the number one barrier I have to adopting Rust at my job.

*freestanding*. A freestanding crate is a crate that doesn't depend on an underlying OS, kernel or
runtime. Currently, this is the full list of freestanding crates:

- `alloc`
Member

Unfortunately I believe that alloc (and then transitively collections) depends on the libc crate right now, which may not be compile-able for these kinds of targets.

Although I've always wanted to remove the dependency here for this exact reason... We're just not quite poised yet to make that work. Also that being said it doesn't really affect this RFC, just a minor nit

Member Author

Hmm, the crates listed here compile just fine for Cortex-M processors. (See Xargo). I haven't (yet) tried to write a program (for these targets) that uses alloc + collections. But, liballoc doesn't seem to contain any undefined reference to a libc symbol (AFAICT):

$ arm-none-eabi-nm -u liballoc-bdc8710db91bbbe1.rlib

alloc-bdc8710db91bbbe1.0.o:
         U __aeabi_memcpy
         U __aeabi_unwind_cpp_pr0
         U __rust_allocate
         U __rust_deallocate
         U __rust_reallocate
         U __rust_reallocate_inplace
         U __rust_usable_size
         U _ZN4core9panicking5panic17h322e6888d03463ddE


__rust_allocate and friends are defined in alloc_system, which itself depends on libc for unix systems at least.

Member Author

__rust_allocate and friends are left undefined on purpose. That way one can define custom allocators.

@alexcrichton
Member

Thanks for the well written RFC @japaric! I'm pretty excited about making bare-metal development as easy as possible, and it definitely sounds like avoiding compiler-rt and custom targets is a great boon to doing just that right now.

My only hesitation on this would be figuring out the names of all these targets (there's quite a few!). I think it's fine to have a bunch, especially if this covers a massive portion of the embedded market. Between the detailed design and your alternative though, things get interesting. So today we actually do somewhat try to encode the CPU into the target itself, for example arm vs armv7 targets as well as i586 vs i686. The main motivation behind that is that dealing with -C target-cpu is a pain because it means that the standard library (in this case libcore) is probably compiled incorrectly for your own target as it's the "least common denominator". That may be ok for libcore as it's mostly generic, but it's still kinda a pain to remember how to set (e.g. you shouldn't require env vars to build a project in Cargo).

Along those lines I'd be in favor of adding a target-per-CPU that we want to support, but the naming definitely seems like it may get tricky. Do you know if there are canonical names for these targets elsewhere? For example do the gcc toolchains pick reasonable target names or are they all just configured slightly differently?

@japaric
Member Author

japaric commented Jun 9, 2016

Do you know if there are canonical names for these targets elsewhere? For example do the gcc toolchains pick reasonable target names or are they all just configured slightly differently?

gcc uses a single triple for all these CPUs: arm-none-eabi. And one "selects" a CPU by using a certain set of flags (e.g. -mcpu=cortex-m4 -mthumb -mfpu=fpv4-sp-d16 -mfloat-abi=softfp) when compiling. This works because building a project (program) usually involves building everything from source (the RTOS, peripheral libraries, DSP libraries, etc). In the case that one has to link a pre-compiled library like libm, Ubuntu packages like libnewlib-dev provide several pre-compiled variants of such libraries:

$ find /usr/lib/arm-none-eabi/newlib/ -name libm.a
/usr/lib/arm-none-eabi/newlib/armv7-ar/thumb/softfp/libm.a
/usr/lib/arm-none-eabi/newlib/armv7-ar/thumb/fpu/libm.a
/usr/lib/arm-none-eabi/newlib/armv7-ar/thumb/libm.a
/usr/lib/arm-none-eabi/newlib/armv6-m/libm.a
/usr/lib/arm-none-eabi/newlib/thumb/armv7-ar/fpu/vfpv3-d16/be/libm.a
/usr/lib/arm-none-eabi/newlib/thumb/libm.a
/usr/lib/arm-none-eabi/newlib/armv7e-m/softfp/libm.a
/usr/lib/arm-none-eabi/newlib/armv7e-m/fpu/libm.a
/usr/lib/arm-none-eabi/newlib/armv7e-m/libm.a
/usr/lib/arm-none-eabi/newlib/fpu/libm.a
/usr/lib/arm-none-eabi/newlib/armv7-m/libm.a
/usr/lib/arm-none-eabi/newlib/libm.a

So one simply links to the appropriate one.

Parenthesis: This sounds like a lot of setup steps, but embedded C development usually involves an IDE. When using an IDE, there's only a single setup step: you graphically select your microcontroller (this is even more specific than selecting a CPU!) and the IDE takes care of passing all the right flags to gcc, building and caching libraries, supplying the linker script and startup files, etc. For example, see the demo video here; you can skip to minute 2.

it means that the standard library (in this case libcore) is probably compiled incorrectly for your own target as it's the "least common denominator"

FWIW, I was looking at the differences between all the CPUs from the POV of LLVM optimizations (you can do this by grepping for "cortex-m" in Rust's LLVM sources). And, AFAICT, there doesn't seem to be any difference in codegen between the alternative thumbv6m-none-eabi target and the original cortex-m0/cortex-m0plus/cortex-m1 targets.

My only hesitation on this would be figuring out the names of all these targets~

If targets like cortex-m3 seem too far from our current naming convention, how about these targets:

  • thumbv6m-none-eabi:cortex-m0
  • thumbv6m-none-eabi:cortex-m0plus
  • thumbv7em-none-eabi:cortex-m4
  • thumbv7em-none-eabi:cortex-m4,+soft-float
  • thumbv7em-none-eabi:cortex-m4,+fp-only-sp

I know you have considered this format before, but for binary releases (e.g. x86_64-unknown-linux-gnu:abort would be a binary release of the standard library for x86_64-unknown-linux-gnu compiled with -C panic=abort).

@brson
Contributor

brson commented Jun 10, 2016

Most of this looks relatively uncontroversial to me (and may not need an RFC). The target triples you are specifying look really weird though: cortex-m0, cortex-m1, etc. These aren't even triples. Are they standardized in any way, or is this weakening the meaning of 'target triple' further?

From a quick skim the only other major change I see is the definition of a rust-core package, which will require a fair bit of infrastructure work to move to. Not sure offhand what I think of the exact composition of rust-core you've proposed, but I do think it's odd to call it core when it also contains other components. Might be worth disambiguating the two concepts. Though these are mostly unstable components so it wouldn't be unreasonable for us to change the composition as needed over time.

You are overriding rustup target add to mean add either rust-std or rust-core as appropriate. I think this makes sense to me but again I'll want to mull it over.

@Amanieu
Member

Amanieu commented Jun 10, 2016

Note that thumbv6m has no support for atomic operations at all. This means that liballoc and all crates that depend on it will not work on that architecture. You will only be able to use libcore with it.

@steveklabnik
Member

I love the idea of a rust-core component, but agree that the name is kinda wrong. If we didn't have libcore, it would make a lot of sense...

The Rust OSDev community has been battling with these issues for a while, and a way to say "here's a target.json and a precompiled libcore" and let rustup know about it would be amazing. https://github.com/intermezzOS/kernel/blob/master/src/Makefile#L52-L64 is what I'm doing right now, and it's way worse.

@Amanieu
Member

Amanieu commented Jun 10, 2016

Ideally you would just pass a target.json to rustup and have rustup build the standard library for that target from source. This would allow you to create a customized standard library which targets a specific CPU. The hard part however is dealing with the C dependencies of the standard library (jemalloc, compiler-rt and libbacktrace).

All bare-metal rust projects that I've seen so far seem to simply not bother linking in compiler-rt, and instead just use a bare libcore. While this will usually work, it can cause linker errors if you try to perform some operations that aren't supported natively by LLVM, such as 64-bit division on 32-bit ARM.
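To make that failure mode concrete: innocuous-looking arithmetic can lower to a compiler-rt routine. The function below is a made-up illustration; on a 32-bit ARM target the u64 division compiles to a call to __aeabi_uldivmod, so with a bare libcore and no compiler-rt the link fails with an undefined reference.

```rust
// Hypothetical example: on a 32-bit ARM target, LLVM lowers the 64-bit
// division below to a libcall (`__aeabi_uldivmod` under the ARM EABI)
// that compiler-rt normally provides. With a bare libcore and no
// compiler-rt, linking this produces an "undefined reference" error.
fn ticks_to_ms(ticks: u64, ticks_per_ms: u64) -> u64 {
    ticks / ticks_per_ms
}

fn main() {
    // On a hosted target this runs fine; the intrinsic comes from the
    // platform's compiler runtime.
    println!("{}", ticks_to_ms(48_000_000, 48_000));
}
```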

@steveklabnik
Member

@Amanieu many OS-devs also need to turn off floating point, which requires a custom patch to libcore, so unless that ends up landing upstream, rustup probably can't do it yet :/

@japaric
Member Author

japaric commented Jun 11, 2016

@brson

The target triples you are specifying look really weird though: cortex-m0, cortex-m1, etc. These aren't even triples. Are they standardized in any way, or is this weakening the meaning of 'target triple' further?

These "triples" directly match the CPU names. The alternatives section has more standard triples like thumbv7m-none-eabi -- these match LLVM's triples. GCC uses a single triple for all these targets: arm-none-eabi. Also see my and Alex's previous comments.

Not sure offhand what I think of the exact composition of rust-core you've proposed, but I do think it's odd to call it core when it also contains other components

Well, the rust-std component also contains more than just the std crate (and its dependencies); it also contains the test crate and all the rustc_* crates.

From a quick skim the only other major change I see is the definition of a rust-core package, which will require a fair bit of infrastructure work to move to.

Actually, I think we could just reuse the rust-std package to implement this RFC. Can the rust-std component contain a smaller set of crates than it does today for a few targets?

@Amanieu

Note that thumbv6m has no support for atomic operations at all.

Yeah :-/. You get an "undefined reference to __sync_val_compare_and_swap_4" error if you use anything that allocates, because of oom. In theory, the user could provide that function themselves but I don't know if there is any sensible implementation for it.

In the meantime, one could fork the alloc + collections crates to bubble up the oom as an error and use that. At least until the Allocator trait lands.

@Amanieu
Member

Amanieu commented Jun 11, 2016

@japaric

In theory, the user could provide that function themselves but I don't know if there is any sensible implementation for it.

Just disable interrupts while performing the operation and restore them afterwards.
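A sketch of the approach Amanieu describes, for the __sync_val_compare_and_swap_4 symbol mentioned above. This is a host-testable illustration only: the PRIMASK save/disable/restore that makes it atomic on a single-core Cortex-M is indicated by comments (on the real target it would be inline assembly, cpsid i plus a restore, and the function would be exported as #[no_mangle] extern "C").

```rust
// With interrupts masked, the read-modify-write below cannot be
// preempted on a single-core MCU, so it is atomic with respect to
// interrupt handlers even though thumbv6m has no atomic instructions.
pub unsafe fn sync_val_compare_and_swap_4(ptr: *mut u32, old: u32, new: u32) -> u32 {
    // Cortex-M: save PRIMASK, then `cpsid i` (disable interrupts).
    let observed = *ptr;
    if observed == old {
        *ptr = new; // swap only if the expected value was observed
    }
    // Cortex-M: restore the saved PRIMASK value.
    observed
}

fn main() {
    let mut word: u32 = 5;
    // Successful CAS: 5 was observed, so word becomes 9.
    let prev = unsafe { sync_val_compare_and_swap_4(&mut word, 5, 9) };
    assert_eq!((prev, word), (5, 9));
    // Failed CAS: 5 is no longer there, so word stays 9.
    let prev = unsafe { sync_val_compare_and_swap_4(&mut word, 5, 1) };
    assert_eq!((prev, word), (9, 9));
}
```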

@brson
Contributor

brson commented Jun 11, 2016

Well, the rust-std component also contains more than just the std crate (and its dependencies); it also contains the test crate and all the rustc_* crates.

I would consider that a bug.

Actually, I think we could just reuse the rust-std package to implement this RFC. Can the rust-std component contain a smaller set of crates than it does today for a few targets?

Possibly. We could just consider std to be whatever libraries come with the architecture.

@japaric
Member Author

japaric commented Jun 11, 2016

@Amanieu

If there's a sensible implementation and as that symbol is not in compiler-rt, do you think we should provide an implementation of that symbol in libcore (only for thumbv6m targets)?

@brson

I would consider that a bug.

But the rust-std is currently implemented to contain all the target libraries (see 1 and 2). We could add more components in the future as I suggested here; I guess that would be the bug fix ;-).

@brson
Contributor

brson commented Jun 15, 2016

But the rust-std is currently implemented to contain all the target libraries (see 1 and 2). We could add more components in the future as I suggested here; I guess that would be the bug fix ;-).

Hm, I guess as defined right now, I'd consider rust-std to just be all the 'target' bins, whatever they are (that's clearly what I was thinking at the time I created it), so it makes sense for core-only platforms to distribute 'core' in 'rust-std'. Perhaps in the future we can clean up the rustup package definitions, but it may not ever be important to distinguish the core package from the std package.

it's possible to package a rust-std component with a smaller set of crates so there is no need to
define a new rust-core component. We can always add a new rust-core component later on if desired.
@japaric
Member Author

japaric commented Jun 17, 2016

updated the RFC to re-use the existing rust-std component instead of defining a new rust-core component.

(I think GitHub is now notifying subscribers when a new commit is pushed, but just in case)

@alexcrichton
Member

I think that the content of all these targets seems reasonable, as does the distribution strategy of 'rust-std' just containing everything it can for that target (which in this case is not std, but oh well in terms of naming). As before, I'm still hesitant on naming, so I wanted to lay out three possible strategies (two of which are in this RFC) to have it all in one place:

CPU-based

Proposed in this RFC, this is a pretty ergonomic method of specifying the target to the compiler. Tools (including Cargo/rustup) can easily understand this and we'll "just be adding more targets". The downside of this approach, however, is that it's not necessarily scalable as we probably don't want a target-per-cpu. It's also a divergence from today's targets which at least have some semblance of being a triple and conveying information.

Triple-based

Listed in the alternatives section, this is not as ergonomic as the previous method but aligns better with today's "triples". The introduction of architectures like "thumbv7m" where the actual target_arch is "arm" is somewhat odd though. Not unheard of, though, as "i586" is mapped to "x86" and "armv7" is mapped to "arm"; just a little odd.

The downside of this, though, is that we don't have a great vector today to ship as many artifacts as before (they're all target-based, not target + flags based). Another downside is that to be "the most usable" you'll have to pass a bunch of other flags to get code to optimize correctly.

arm-none-eabi

This final alternative is to follow in the footsteps of gcc, which is to only have one target here and have it default to an absolutely minimal set of features. The downside of this is that such a target is basically unusable as-is, and it's not even worth shipping binaries for it as a result. Everyone using it would want to configure codegen options somehow to get better performing binaries. It may also be difficult to discover the set of flags you need to pass to the compiler if you're just starting out in this kind of development.

The benefit of this approach, however, is that it aligns very well with the triple terminology we have today. It also is quite scalable so long as we have the right infrastructure in place.


Of those three strategies, I think the latter two will require some form of rustup compiling std or something like that. In the triple-based and arm-none-eabi strategies you're very likely to want to customize std, in which case shipping artifacts won't be too useful. If rustup/cargo/something could compile std/core on-demand, however, this may not be too bad (compiling them doesn't take too long).

I somewhat lean towards the arm-none-eabi strategy due to how scalable it is. It's the hardest of all three as it requires developing this ephemeral idea of a "magical compile std on demand" infrastructure, but the payoff is possibly massive as it has ramifications for all targets, not just the embedded ones.

Given that these targets are at least usable today through custom json files, maybe that relieves pressure on our end enough to try to pursue this in the meantime?

@SimonSapin
Contributor

Given that these targets are at least usable today through custom json files, maybe that relieves pressure on our end enough to try to pursue this in the meantime?

(Take my opinion with a grain of salt since I haven’t actually tried to do this yet, but) I’d nitpick "not impossible" rather than "usable". The process of figuring out what to write in that JSON file, then fetching/patching/building your own libcore seems very manual and fragile at the moment. Far from Cargo’s goal of making builds reproducible as much as possible.

@brson
Contributor

brson commented Jul 12, 2016

I'm wary of loosening the definition of what we consider 'target triples' further. The problem of being able to specify all these platform variations is related to this thread about config-specific APIs, activating arbitrary cpu features. Also related to std-aware cargo which will need to understand that changing certain features may force a rebuild of std (e.g. when you activate SSE4 you rebuild std with SSE4).

It seems to me that we need a concept of a target + some additional configuration.

@nagisa
Member

nagisa commented Jul 13, 2016

It seems to me that we need a concept of a target + some additional configuration.

Exactly what the custom json target specs are. Perhaps we could improve this “additional configuration” in a way which would allow extending other target specifications, but otherwise it seems to me like we're already set in that regard. I do not disagree that the experience of using this “additional configuration” needs improvements, though.

@parched

parched commented Jul 14, 2016

I don't think using CPU based is a good idea as it doesn't scale well and rustc doesn't use it for any of the other architectures. Keeping with standard llvm triples we just need to add

  • thumbv6m-none-eabi for M0, M0+, M1
  • thumbv7m-none-eabi for M3
  • thumbv7em-none-eabi for M4, M7
  • thumbv7em-none-eabihf for M4F, M7F

If someone wants to add any CPU specific optimizations then it's the same problem/solution as for the other architectures, which we can deal with separately.

@japaric
Member Author

japaric commented Oct 2, 2016

Updates

I've sent a PR (rust-lang/rust#36874) that adds these target definitions (the thumbv* set) to the compiler.

A PR (rust-lang/rust#36794) that adds a panic-strategy field to targets specifications landed recently. Right now, all the existing targets default to panic=unwind but, with this new field, a target can now change its default to panic=abort. Regardless of that default, one can always pass -C panic=$STRATEGY to rustc to change the panic strategy.

The question is: Should these Cortex-M targets default to panic=abort? The rationale is that these targets, being oriented towards bare metal development, likely won't (ever?) have an unwinding implementation. If a developer does decide to implement one, they'll have to explicitly change the panic strategy (that's the downside of a panic=abort default). Also note that, because we likely won't have binary releases of core for these targets, the problem of mixing "panic" crates and "abort" crates is not a problem at all here: everything will be (re)compiled with the same panic strategy.

Second question: What should the default relocation-model be? Right now it defaults to pic, because that's what all targets default to, but I always find myself changing it to static (via RUSTFLAGS, see my previous comment) because pic injects a GOT (Global Offset Table) into the final binary and that breaks my initialization code: it somehow changes the values of _ebss, _sdata, etc., so my .data and .bss sections never get initialized or, worse, Flash memory gets "written to" (hard_fault). A cursory scan of Zinc and Tock reveals that they also set relocation-model to static in their targets. Should these new targets default to static? cc @alevy I know Tock is exploring dynamic loading using PIC (?). Would setting the default to static affect Tock negatively?
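For reference, both defaults under discussion are single fields in a target specification; a custom target JSON experimenting with the proposed values would contain (field names as used by target specs, including the panic-strategy field added in rust-lang/rust#36794):

```json
{
    "panic-strategy": "abort",
    "relocation-model": "static"
}
```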

@SimonSapin
Contributor

I don’t really understand what relocation-model: static means, but it turned out to be necessary for me on Cortex-M0 to avoid crashing when core::fmt calls a function pointer.

@eddyb
Member

eddyb commented Oct 2, 2016

@SimonSapin It means that no dynamic linker is needed to finalize imports (which are usually handled through the many kinds of relocations).
We default to PIC (position independent code) on mainstream OSes so that ASLR works, which means that there are 0 absolute addresses in dylibs and binaries - as they are unknown until dynamic loading.

@SimonSapin
Contributor

Ah never mind, I read @japaric’s last message as recommending pic for the default while it only describes the current default.

@alevy

alevy commented Oct 2, 2016

In principle I'm also 100% in favor of this.

@alevy I know Tock is exploring dynamic loading using PIC (?). Would setting the default to static affect Tock negatively?

Yes, but that's for a particular relocation model in userspace. For the most part, I haven't seen scenarios where it makes a ton of sense to use PIC for bare-metal.

What about soft-float? It's not clear to me whether to include it by default or whether it's easy enough to override. There is no strict boundary between Cortex-M4s, in particular, that support hardware floating point and those that don't (it's an optional feature in the architecture). Doing soft-float isn't a problem when needed, but at least the newlib version is fairly large (tens of KB), so when hardware floating point support is available it's much, much better to use it.

@whitequark
Member

Should these Cortex-M targets default to panic=abort?

Yes, absolutely. There are rare cases where one would want to have an unwinder on -M, but many -M targets have less RAM than the unwinder requires for its operation in the first place.

Second question: What should be the default of relocation-model?

Definitely static. -M targets don't have an MMU, so in general, even if there were startup code that could relocate them, they would always get relocated into one place (or one of two places, for SoCs that let you map either flash or RAM starting at zero). The complexity of such startup code, or the increase in overall code size, cannot be justified.

@japaric
Member Author

japaric commented Oct 2, 2016

@alevy @whitequark Thanks both for the input!

@alevy Sorry, I haven't updated the RFC body but we are leaning towards the alternative that defines 4 targets. Now, about your question, there are two targets you could use for Cortex-M4/7 devices: thumbv7em-none-eabi and thumbv7em-none-eabihf. The former defaults to FP ops via software routines (intrinsics) and uses the soft calling convention (FP values are passed via general purpose registers (e.g. R0)). The latter lowers FP ops to hardware instructions (single precision only by default) and uses the hard calling convention (FP values are passed via FP registers (e.g. S0)). Would these cover your use cases? You can always tweak them further using e.g. -C target-feature.

@japaric
Member Author

japaric commented Oct 5, 2016

The target definitions have landed upstream in rust-lang/rust#36874 🎉.

Thanks everyone for participating in the discussion! We couldn't have arrived at the current target definitions without everyone's input. ❤️

@alexcrichton what do we do with this RFC? It wasn't needed for just the target definitions (though it was very helpful!) and we are unlikely to do the binary release / rust-core component part. Should I update it to just talk about the new targets and then we can merge it in its updated form or should I just close it? (yes, I'd like to save myself the job of updating this ...)

@alexcrichton
Member

@japaric I'm personally ok with closing this as we tend to not have RFCs for new targets, let's see if this works...

@rfcbot fcp close

(as @japaric mentioned above)

@rfcbot
Collaborator

rfcbot commented Oct 5, 2016

Team member alexcrichton has proposed to close this. The next step is review by the rest of the tagged teams:

No concerns currently listed.
Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up!

See this document for info about what commands tagged team members can give me.

@nrc
Member

nrc commented Oct 6, 2016

I think in situations like this we don't need to wait for the bot - either the author or someone from the team could just close the RFC.

@alevy

alevy commented Oct 7, 2016

Not sure if this is the right place to keep talking about this; apologies if not...

I'm trying out this newly merged feature from nightly on Tock. It basically works, which is great, except by default it's missing a few linker arguments that seem generally important.

If I don't explicitly add...

  • ... -mfloat-abi=soft and -lm -lgcc I get link errors about missing floating point operations
  • ... -lc I get link errors about missing memcpy and memclr aeabi intrinsics. (by the way, if you use an outdated version of arm-none-eabi-newlib, like the one that comes standard from the Ubuntu and Debian repos, these symbols are just straight up missing. It was fixed years ago, but Debian just hasn't upgraded).

I think these should probably be there by default.

We don't have a platform with hardware floating point yet, but I tried compiling one of our cortex-m4 targets with the hardware floating point target for the heck of it. I wasn't able to. I ended up with a linker error:

 error: /home/alevy/hack/helena/tock/boards/storm/target/thumbv7em-none-eabi/release/storm uses VFP register arguments, /home/alevy/hack/helena/tock/boards/storm/target/thumbv7em-none-eabi/release/storm.0.o does not

I'm not familiar enough with hardware floating point to intuit what the problem is, but if there is, in general, support for getting floating point support right by default, I'll dig in further.

@whitequark
Member

... -lc I get link errors about missing memcpy and memclr aeabi intrinsics. (by the way, if you use an outdated version of arm-none-eabi-newlib, like the one that comes standard from the Ubuntu and Debian repos, these symbols are just straight up missing. It was fixed years ago, but Debian just hasn't upgraded).

IMO you should use rlibc for that, and not have libc a hard dependency on freestanding targets.
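For context, rlibc is essentially a crate of naive byte-loop implementations of the mem* routines that rustc-generated code assumes exist on every target. A sketch of the idea follows (rlibc exports this as #[no_mangle] pub extern "C" fn memcpy; it is renamed here so the sketch links cleanly against a host libc):

```rust
// The essence of what rlibc provides: a byte-by-byte memcpy that
// satisfies the `memcpy` calls rustc emits on freestanding targets.
pub unsafe fn memcpy_sketch(dest: *mut u8, src: *const u8, n: usize) -> *mut u8 {
    let mut i = 0;
    while i < n {
        *dest.add(i) = *src.add(i); // copy one byte at a time
        i += 1;
    }
    dest
}

fn main() {
    let src = *b"hello";
    let mut dst = [0u8; 5];
    unsafe { memcpy_sketch(dst.as_mut_ptr(), src.as_ptr(), 5) };
    assert_eq!(&dst, b"hello");
}
```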

@japaric
Member Author

japaric commented Oct 7, 2016

-mfloat-abi=soft

That's a gcc multilib thing that tells gcc to pick the libraries that were compiled with the soft float ABI. arm-none-eabi-gcc ships with several libc/libm/etc. binaries that were compiled with different profiles (soft ABI, hard ABI, no FPU, etc.); that -m flag tells gcc the right set of libraries to link against.

error: /home/alevy/hack/helena/tock/boards/storm/target/thumbv7em-none-eabi/release/storm uses VFP register arguments, /home/alevy/hack/helena/tock/boards/storm/target/thumbv7em-none-eabi/release/storm.0.o does not

Hmm, that sounds like you want to use the thumbv7em-none-eabihf target instead, or use the thumbv7em-none-eabi target but with an extra -C target-feature=+vfp4.

IMO, linker flags shouldn't be hard coded in a target definition. Linker flags should be supplied by Cargo / rust / user.

-lc -lm -lgcc

You can add all these linker flags using a build script. See this section of the related documentation. You want to use cargo:rustc-link-lib=static=c and so on.
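A minimal build script along those lines might look like this (a sketch; the library names come from the comment above, and whether static= is appropriate depends on how newlib is laid out on your system):

```rust
// build.rs sketch: Cargo parses `cargo:KEY=VALUE` lines from a build
// script's stdout; each `rustc-link-lib` directive becomes a `-l` flag
// on the final link.
fn link_directives() -> Vec<String> {
    ["c", "m", "gcc"]
        .iter()
        .map(|lib| format!("cargo:rustc-link-lib=static={}", lib))
        .collect()
}

fn main() {
    // In a real build.rs these lines are consumed by Cargo, not a human.
    for directive in link_directives() {
        println!("{}", directive);
    }
}
```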

-mfloat-abi=soft

Cargo can't inject this right now (cf. #1766) but you can add these using a .cargo/config file:

[target.thumbv7m-none-eabi]
rustflags = ["-C", "link-arg=-mfloat-abi=soft"]

@japaric
Member Author

japaric commented Oct 7, 2016

IMO you should use rlibc for that, and not have libc a hard dependency on freestanding targets.

@whitequark I think Tock has C components so they have to link to newlib. But, I agree that our targets shouldn't, by default, link to libc to support the 100% Rust use case. (You can't remove default linker flags using Cargo or rustc)

@alevy

alevy commented Oct 7, 2016

@japaric @whitequark there are no C components. rlibc seems like a reasonable fix for -lc specifically.

@japaric

Hmm, that sounds like you want to use the thumbv7em-none-eabihf target instead

Yes, that's exactly what I did.

IMO, linker flags shouldn't be hard coded in a target definition.

My point is that the targets, as they are, cannot actually compile a rust program with floating point operations. In both soft- and hard-floating point cases, some extra linker flags are needed.

I agree it's possible to get around this right now with .cargo/config files or by using the cargo rustc subcommand. However, I can't think of any case where you wouldn't want to have these linker flags (I don't think they ever hurt).

Linker flags should be supplied by Cargo / rust / user.

Are you saying you think the fix is to change Cargo's default for those targets? I'm not very familiar with how Cargo/rustc separate responsibilities, but if Cargo has precedent for doing that, it seems like a perfectly good solution to me.

@japaric
Member Author

japaric commented Oct 7, 2016

there are no C components. rlibc seems like a reasonable fix for -lc specifically.

Hmm, if there are no C components, why are you passing -lc, -lm, or -lgcc?

My point is that the targets, as they are, cannot actually compile a rust program with floating
point operations.

The only linker flag I need to use with any of these targets is -nostartfiles but that's unrelated
to floating point arithmetic; that flag is needed to not depend on any C code, which is my use case.
I can do floating point math with the thumbv7em-none-eabihf target without further changes or
extra linker flags. If I want to do floating point math with the thumbv7em-none-eabi then I need
to depend on rustc-builtins, which provides the missing intrinsics (__aeabi_fadd, etc.).
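To illustrate the mechanism being described (an illustration, not code from the thread): on a soft-float target like thumbv7em-none-eabi, a plain f32 addition is lowered by the compiler to a call to __aeabi_fadd, which is exactly the kind of symbol rustc-builtins supplies; on a hosted target the same code compiles to a hardware FPU add and just runs.

```rust
// On thumbv7em-none-eabi this addition lowers to a libcall (__aeabi_fadd),
// supplied by rustc-builtins; on hosted targets it is a single FPU add.
#[inline(never)] // keep it a real function so the lowering stays visible
fn add(a: f32, b: f32) -> f32 {
    a + b
}
```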

However, I can't think of any case where you wouldn't want to have these linker flags (I don't
think they ever hurt).

Those flags would make linking always fail if you don't have newlib installed, which is a totally
reasonable scenario if you don't depend on any C code.

Are you saying you think the fix is to change Cargo's default for those targets?

cargo, itself, does nothing extra when used with specific targets; it doesn't special-case them in
any way. What I meant is that, in my opinion, linker arguments shouldn't be hard-coded in a target
definition and that, instead, the user / crate author should specify any extra linker arguments they
want / need using link attributes (e.g. #[link(name = "..")] on an extern block), build scripts
(cargo:rustc-link-lib=..) or .cargo/config .
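As a sketch of the first of those mechanisms (assuming a hosted target where libm is available; the attribute goes on an extern block), linking a library and importing one of its symbols looks like:

```rust
// Link against libm and declare one of its symbols; the #[link] attribute
// is the in-source equivalent of passing -lm to the linker.
#[link(name = "m")]
extern "C" {
    fn sqrt(x: f64) -> f64;
}
```

Calling the imported function is unsafe, as with any FFI call: `unsafe { sqrt(9.0) }`.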

@alevy

alevy commented Oct 7, 2016

Why are you passing -lc, -lm or -lgcc?

Those libraries are necessary for compiler intrinsics required by ARM. The way GCC has decided to divide things up is to leave those intrinsics to newlib.

-lc is needed for things like memcpy, memclr, etc., which are emitted by rustc but not implemented by the compiler. -lm contains the actual definitions of the soft-float operations, again emitted by rustc but not implemented elsewhere. -lgcc provides certain wrappers around float operations like sqrt.

@japaric
Member Author

japaric commented Oct 7, 2016

Those libraries are necessary for compiler intrinsics

OK, if what you need are compiler intrinsics, then what you really want to use, instead of newlib, is "libcompiler-rt.a", which has recently been renamed to libcompiler-builtins.rlib. As there are no binary releases of that .rlib for these targets, you'll have to depend on rustc-builtins (possibly with its "c" Cargo feature enabled) to make Cargo compile the intrinsics for you.

As for the other symbols that are not compiler intrinsics: use rlibc to get memcpy et al., as @whitequark suggested, or, even easier, just enable rustc-builtins's "weak" Cargo feature. sqrt and other math functions are outside of what the no_std standard crates offer, so you'll have to either use a C implementation or look for a pure Rust implementation on crates.io (I'm not aware of any).
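For a flavor of what rlibc provides, here is a simplified sketch of such a memcpy (named memcpy_sketch here to avoid clashing with the host libc symbol; the real crate is #![no_std] and exports the function as `memcpy` via #[no_mangle] so the linker can resolve the calls rustc emits):

```rust
// Simplified rlibc-style memcpy sketch: a byte-by-byte copy with the C ABI.
// The real version is exported under the unmangled name `memcpy`.
pub unsafe extern "C" fn memcpy_sketch(dest: *mut u8, src: *const u8, n: usize) -> *mut u8 {
    let mut i = 0;
    while i < n {
        *dest.add(i) = *src.add(i);
        i += 1;
    }
    dest
}
```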

@JinShil
Contributor

JinShil commented Oct 15, 2016

Well, the rust-std component also contains more than just the std crate (and its dependencies); it also contains the test crate and all the rustc_* crates.

I would consider that a bug.

But the rust-std is currently implemented to contain all the target libraries (see 1 and 2). We could add more components in the future as I suggested here; I guess that would be the bug fix ;-).

Hm, I guess as defined right now, I'd consider rust-std to just be all the 'target' bins, whatever they are (that's clearly what I was thinking at the time I created it), so it makes sense for core-only platforms to distribute 'core' in 'rust-std'. Perhaps in the future we can clean up the rustup package definitions, but it may not ever be important to distinguish the core package from the std package.

Updated the RFC to re-use the existing rust-std component instead of defining a new rust-core component.

If that means what I think it means, it's going to get very confusing, especially with the current nomenclature already distinguishing between core and std, and with language features like #![no_std] and #![no_core] that are already recognized by the compiler.

@michaelwoerister
Member

@rfcbot reviewed

1 similar comment
@vadimcn
Contributor

vadimcn commented Oct 19, 2016

@rfcbot reviewed

@alexcrichton alexcrichton added the final-comment-period Will be merged/postponed/closed in ~10 calendar days unless new substantial objections are raised. label Oct 19, 2016
@alexcrichton
Member

🛎 This RFC is now entering its week-long final comment period for closing 🛎

@rfcbot
Collaborator

rfcbot commented Oct 19, 2016

All relevant subteam members have reviewed. No concerns remain.

@rfcbot
Collaborator

rfcbot commented Oct 29, 2016

The final comment period is now complete.

@alexcrichton
Member

Alright, thanks again for all the discussion everyone! Closing.
