
Device trees for platform configuration data #454

Closed
vwadekar opened this issue Feb 1, 2017 · 19 comments

@vwadekar commented Feb 1, 2017

Right now, we have all platforms providing data and configuration settings using C files and platform handlers. Is there a plan to add support for a Trusted Firmware dtb file and to convert drivers to use device trees? This would make porting to new platforms easier.

We are already facing this problem with the Tegra chips, as they use a lot of common code while the differences live in .c files. These differences have to be replicated per chip.

@danh-arm (Contributor) commented Feb 2, 2017

Hi @vwadekar

As discussed elsewhere (e.g. #420, ARM-software/arm-trusted-firmware#747), we are considering adding (optional) dtb support to TF. Initially this would be for basic firmware configuration rather than full device description, but it could be extended to cover your use-case. Currently we have a lot of other commitments so we don't plan to do this until the 2nd half of the year. As always, contributions are welcome.

Regards

Dan.

@vwadekar (Author) commented Feb 3, 2017

@danh-arm

Thanks. Since I don't know your roadmap and plans for adding dtb support, how do you think we should proceed?

I don't want to come up with a solution that works with our downstream code but has to be changed for upstream.

@danh-arm (Contributor) commented Feb 6, 2017

Hi @vwadekar. If you describe your proposed solution in this forum for us to comment, that would help avoid major rework during later upstreaming. Alternatively, if you want more details on our roadmap or how we think this might be implemented, contact us directly to set up a call.

@vwadekar (Author) commented Feb 8, 2017

@danh-arm

I don't have a concrete design, but these are some thoughts off the top of my head.

  • The dts should be part of the TF build flow and has to be very small due to TZRAM size limitations.
  • The generated dtb should be part of the blxx.bin in its own RO memory region (the blxx.ld scripts would need modifying). This avoids any extra integrity checks and decrypt/verify steps, and we won't have to map NS memory to access the dtb.
  • libfdt should be pruned to include only the required features; get feedback from the community.
  • Driver init functions should look for the presence of a dtb and read their settings from it. I don't know if we should enable/disable by looking at the "status" dts value, like the Linux kernel does. We might have to rely on drivers to look for this entry and bail out if it says "disabled" or is not present.
  • Drivers may fall back to the old way and use default settings if the dtb is not found.

This would mean we have a dependency on the dtc compiler, but that should be OK.
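
Taken together, the driver-facing part of this proposal could look roughly like the sketch below. This is purely illustrative: `uart_cfg`, `fake_dtb` and `uart_init_config` are hypothetical names, and the `fake_dtb` structure stands in for real libfdt queries (`fdt_check_header`, `fdt_path_offset`, `fdt_getprop`) against the embedded blob.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical compile-time defaults used when no DTB is present. */
struct uart_cfg {
    unsigned long base;
    unsigned int clock_hz;
    unsigned int baud;
};

static const struct uart_cfg uart_defaults = {
    .base = 0x09000000UL, .clock_hz = 24000000U, .baud = 115200U,
};

/* Stand-in for libfdt lookups; a real port would query the embedded
 * blob with fdt_path_offset() and fdt_getprop() instead. */
struct fake_dtb {
    int present;
    const char *status;      /* "okay" / "disabled", as in Linux DT */
    struct uart_cfg cfg;
};

/* Returns 0 on success and fills *out. No DTB at all triggers the
 * fallback to defaults; an explicit "disabled" status makes the
 * driver bail out with -1. */
int uart_init_config(const struct fake_dtb *dtb, struct uart_cfg *out)
{
    if (dtb == NULL || !dtb->present)
        goto fallback;
    if (dtb->status == NULL || strcmp(dtb->status, "okay") != 0)
        return -1;              /* driver disabled: bail out */
    *out = dtb->cfg;
    return 0;
fallback:
    *out = uart_defaults;       /* old behaviour: compile-time settings */
    return 0;
}
```

The key property is that platforms without a dtb keep working unchanged, which matches the last bullet above.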

@jwerner-chromium commented:

Sorry, I'm just lurking here but this discussion got me curious because it would be a major new direction for Trusted Firmware to take. Can you explain in a little more detail what use case you're trying to solve with the DTB, Varun? In particular, are you using the whole Trusted Firmware stack (BL1, BL2, BL31), and do you intend each of these stages to contain a DTB (the same one, or individual ones)?

Device trees can be used for two separate use cases: compile-time configuration (like hardware description and such) and run-time configuration by a previous boot stage (such as the amount of available memory that was probed during memory initialization). I get the impression that you're more interested in the former here to allow code reuse for slightly different hardware... which is a good thing, but I don't think device trees are really well suited for that since they add all that extra overhead of evaluating and parsing at runtime. A compile-time configuration system (like Kconfig) can usually solve the same problem much more efficiently. (I know the Linux kernel uses device trees for this kind of configuration, but the Linux kernel also tries to support use cases where the same compiled kernel binary can be deployed on different platforms, which I don't think makes sense to attempt for Trusted Firmware.)

I can also see value in a run-time configuration system (in fact we already have a primitive parameter passing mechanism on the coreboot-loaded rk3399 platform). But DTB is a pretty big hammer for that with quite a lot of overhead (both for the passing stage and the passed-to stage)... if you look at U-Boot I think you have a good example of how this can be over-done and make a lot of things more complicated and slower than they need to be. I wonder if a constrained environment like firmware wouldn't be better suited with something more lightweight. (We've just been passing linked lists of (, , ) in the plat_params_from_bl2 field for now, FWIW.)
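
For reference, the lightweight tagged-list idea mentioned above could be sketched as follows. This is a generic illustration, not the actual `plat_params_from_bl2` layout used on rk3399; all tag names are made up.

```c
#include <stdint.h>
#include <stddef.h>

/* A generic tagged parameter node: each node carries a type tag so the
 * receiving stage can skip tags it does not understand. */
struct bl_param {
    uint64_t type;              /* hypothetical tag, see defines below */
    uint64_t value;
    struct bl_param *next;
};

#define PARAM_DRAM_SIZE   0x1U
#define PARAM_SERIAL_BASE 0x2U

/* Walk the list and return the first value matching 'type'; returns
 * 'dflt' when the tag is absent, so a newer producer can add tags
 * without breaking an older consumer. */
uint64_t bl_param_get(const struct bl_param *head, uint64_t type,
                      uint64_t dflt)
{
    for (const struct bl_param *p = head; p != NULL; p = p->next)
        if (p->type == type)
            return p->value;
    return dflt;
}
```

Compared to a DTB, this costs a few dozen lines of code and no parsing library, at the price of being a private contract between the two stages.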

@danh-arm (Contributor) commented Feb 14, 2017

Hi @jwerner-chromium. You make some good points here. The initial use-cases we intend to enable support a limited form of runtime configuration. Several partners have expressed an interest in having a single set of firmware binaries run on a family of platforms. You are right that this potentially makes things a lot slower and more complicated. Therefore we expect that:

  • This would be an entirely optional mechanism and off by default.
  • Any parsing would be done at the BL1/BL2 stage. If BL31 needs any runtime configuration, these would be passed as params during BL31 init (as you suggest).

For the interested partners we've spoken to, ease of firmware installation is more important than boot time, so DTB seems a reasonable fit.

@vwadekar (Author) commented Feb 14, 2017

@danh-arm

Hi Dan,

Can you elaborate on "If BL31 needs any runtime configuration, these would be passed as params during BL31 init"?

We have custom BL1/BL2, so does this mean we have to come up with a way to pass config data from BL2 to BL31? We were planning on placing chip-specific register settings, IRQs, and memory configs in the dtb. If the parsing is not in BL31, it would be hard to pass all of this information to BL31 init.

@danh-arm (Contributor) commented:

Hi @vwadekar. At some stage during boot, there would need to be conversion from a DTB (or other config file) to an in-memory structure. I'm saying it would be better to do that conversion in transient boot code (BL1/BL2) rather than runtime resident code (BL31). Since this in-memory structure will need to be defined somewhere anyway, why would it be hard to pass this during BL31 init?

I guess your concern is that you would have to replicate any DTB parsing in your custom BL1/BL2? If so, my response would be "Your BL1/BL2 solution is under your control!".

@vwadekar (Author) commented Feb 14, 2017 via email

@danh-arm (Contributor) commented:

Hi @vwadekar. Typically we expect BL2 and BL31 to be updated at the same time, but I understand that's not the case for you. In your case I can see it's undesirable for BL2 to track struct updates that it has no interest in. But keeping BL31 as simple as possible has always been a design goal for TF, so what do we do?!

One solution might be for your platform to have another BL3X image that does the parsing before passing control to BL31. Those 2 images would be tightly coupled and the memory for the new BL3X image could potentially be reclaimed at runtime.

Alternatively, we have some patches coming soon that will enable dynamic translation table modification. Those patches will facilitate other changes we've been considering, including the ability to unmap transient boot code in BL31. If any DTB parsing code could be unmapped at runtime, that might make having the support in BL31 more palatable.

Anyway, I'm not ruling anything in or out right now - this is all open for discussion.

@vwadekar (Author) commented:

@danh-arm

I am inclined towards the second approach, dynamic translation table modification. Thanks

@achingupta commented:

Maybe this is a given but just want to say it out loud...

BL31 will be required to parse "some" description of resources, for itself or another payload, in the future. "Some" could be a DTB, an in-memory representation obtained from a previous BL stage, or a more lightweight mechanism to represent this information. The choice should be left to the platform code in BL31. More importantly, access to this information from the generic BL31 code should be hidden behind a layer of abstraction instead of invoking the description-specific functions directly.

In this case it would be the platform's responsibility to unmap any DTB parsing code. Does this make sense?
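
The abstraction layer suggested here could be sketched as an ops table that generic BL31 code calls, leaving the backend (DTB parsing, a struct from BL2, or compile-time constants) to the platform. Every name below is illustrative, not an existing TF interface.

```c
#include <stddef.h>
#include <stdint.h>

/* Generic BL31 code only ever calls the plat_desc_*() accessors and
 * never sees which backend produced the answers. */
struct plat_desc_ops {
    uint64_t (*get_dram_size)(void);
    uint32_t (*get_uart_irq)(void);
};

static const struct plat_desc_ops *desc_ops;

void plat_desc_register(const struct plat_desc_ops *ops) { desc_ops = ops; }

uint64_t plat_desc_dram_size(void)
{
    return (desc_ops != NULL) ? desc_ops->get_dram_size() : 0;
}

uint32_t plat_desc_uart_irq(void)
{
    return (desc_ops != NULL) ? desc_ops->get_uart_irq() : 0;
}

/* Example backend: compile-time constants. A DTB-parsing backend would
 * register a different ops table, and its parsing code could then be
 * unmapped once the answers are cached. */
static uint64_t static_dram(void) { return 0x80000000ULL; }
static uint32_t static_irq(void)  { return 33U; }
static const struct plat_desc_ops static_ops = { static_dram, static_irq };
```

A platform would call `plat_desc_register(&static_ops)` (or a DTB-backed equivalent) during its early init, before generic code needs the data.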

@vwadekar (Author) commented:

@achingupta

Yes, this makes sense

@jwerner-chromium commented:

Just quickly chiming in on this again: I think dynamic memory translation would be a nice feature (to reduce memory footprint, among other things; e.g. we recently had some trouble trying to pass data structures between the kernel and BL31 which this could make much easier), but it's not the be-all and end-all solution to BL31 bloat. Larger binary sizes still have negative effects on boot time even if you unmap the code again afterwards.

Keeping device tree parsing as a completely optional component sounds like a good solution, though. Drivers should then offer a more generic interface to receive their platform-specific instantiation data, and different platforms can decide whether they want to hardcode that, get it from a DTB or from some other passed-in data structure (maybe even a combination of those options).

@nlebayon commented:

Hi all,
I'm a newcomer to the TF domain, and I joined this conversation because my first TF activity is linked to device tree use. My scope is to put a DT mechanism in place on the BL2 side only (a further step might also use it for BL32, but not for the moment).

This would be a very light DTB, derived from our U-Boot "SPL" one (from our non-trusted boot chain), which would take care of basic system settings. That way, I would then be able to convert our drivers.

I have read through this conversation and understand that there are different ways to approach this subject; I picked up some interesting information, even if I don't have all the clues yet.

Would you have some time to give us an update on this study, so that we have some guidelines for a common implementation, i.e. one that would stay more or less compliant with the expected upstream design? And of course, if my contribution can help, that would be great.

If these discussions have continued in another thread, please let me know.
Best regards

@danh-arm (Contributor) commented:

Hi @nlebayon.

Thanks for your interest here. Unfortunately there hasn't been a great deal of progress on our side. Your intended usage sounds reasonable. We'd like to propose a design that caters for all the use-cases raised in this thread and at recent Linaro Connect conferences. We should have more to say on this by the next Connect (in September), although any additional input is welcome before then.

I think the overriding concern here is that any additional DT functionality in BL2 (or other BLs) is a platform choice and not imposed on all platforms. The other main concern is about the relationship between a DT for BL2 and other DTs in the system. We need to be very flexible here.

The interaction/relationship between U-Boot SPL and TF BL2 is also in scope. I expected platforms to use one or the other, but it sounds like you want to use both? Can you describe your boot flow in a bit more detail?

Regards

Dan.

@nlebayon commented:

Hi @danh-arm

Can you describe your boot flow in a bit more detail?

Globally, we have two independent flows: the normal boot chain and the trusted boot chain.
Normal boot chain = ROM code / U-Boot SPL / U-Boot / Kernel
Trusted boot chain = ROM code / A-TF / U-Boot / Kernel
So, as you said, we are using "one or the other".

But we would like to use the same DT content for both, to keep our implementations consistent. So the DT used by A-TF would be the same one used by U-Boot SPL.

For the moment, I have more or less applied the modifications proposed in https://github.com/ARM-software/arm-trusted-firmware/pull/747 to add the capability to build a dtb file. This lives in the generic part, but it only adds tooling and introduces no extra processing if it is not used.

In my platform directory, I have mapped the dtb file into the binary. I am currently finalizing this step and am not far from the end. The next step is to open the dtb from our platform primitives and run a unit test to validate access.

Any feedbacks or questions are welcome.
Best regards
Nicolas

@nlebayon commented:

Hi @danh-arm

Here is a little update on my progress.

I have now finalized the mapping of the DTB file (generated inside A-TF) into the binary. I'm now ready to put an access test in place in order to validate this. It took me some time because I also had to set up my environment on the platform, but now I'm able to build/flash/boot.

Back to the first DTC implementation proposal from @nmenon, I was wondering if it shouldn't be put inside our platform directory, as you said:

"any additional DT functionality in BL2 (or other BLs) is a platform choice and not imposed on all platforms"

What I propose is to implement it first only inside our platform sub-directory, since we will apparently be the first to use it. You could then decide at a later stage to move it to the generic part, as (we hope) platform-specific implementations multiply; having several different examples would also help in making the most proper decision about a generic implementation.

What do you think about this way to proceed?

Best Regards
Nicolas

@ghost commented Jan 31, 2019

This was solved when dynamic config support was introduced, right? https://github.com/ARM-software/arm-trusted-firmware/blob/master/docs/firmware-design.rst#dynamic-configuration-during-cold-boot

If not, reopen the issue.

@ghost ghost closed this as completed Jan 31, 2019