NETMF v5 vs Llilum #493

josesimoes opened this Issue Aug 11, 2016 · 26 comments



josesimoes commented Aug 11, 2016

I’ve asked this before, but considering the ongoing discussion on the vNEXT branch (see #491), I thought it made sense to bring it up again.

I'll start by saying that I'm not trying to pick a fight with anyone! I just want to bring different perspectives and ideas to the discussion, so it can be broader and better conclusions can arise from it.

I don't think anyone can argue with what is currently written in the v5 branch docs. It seems like a sound approach and a good plan to follow.
The thing is, none of that will get done in a couple of weekends. It will require a serious amount of effort and work hours.

What about Llilum? Isn't it supposed to be NETMF revised and augmented? NETMF on steroids? Isn't Llilum at a very advanced stage of development right now?
My point is: wouldn't all this great momentum and rally of effort that seems to be picking up in the NETMF community be better invested in Llilum, rather than in a new version of NETMF?

I’m a huge fan of NETMF, make no mistake! But, if Llilum is the next big thing, if we can get all the goodness of coding embedded systems in C#; have code portability and reuse; awesome experience and productivity with Visual Studio along with all the other good stuff that comes along, why not move there?

Despite being human, and therefore naturally averse to change, I have no problem with this particular one: I'll gladly move there and leave NETMF in the past.

To wrap this up I leave these questions for thought:

  • Is pouring that amount of effort into v5 the best course of action?
  • Does it make sense?
  • Aren't we just being stubborn by not letting NETMF go, trying to perpetuate it just… because…?
  • Wouldn't it be better or wiser to transfer all this energy to Llilum?
  • Or is Llilum a different kind of ‘thing’ that has its own place along with NETMF?

Please share your thoughts about this!


Stabilitronas commented Aug 11, 2016

Indeed, I'm having exactly the same doubts!..

I would personally vote for shifting focus to Llilum, mostly for one specific reason: much wider possibilities for what one can accomplish with C#. For example, say an MCU has an integrated temperature sensor and I want to use it in my application. What should I do? At the moment, there are two options:

  1. If I use vanilla NETMF, I have to recompile the whole firmware, which is, as we all know, not the easiest task in the world. Also, I have to use C++.
  2. If I use one of GHI's products, I can leverage their managed features (very slow and not suitable for some more sophisticated scenarios, like heavily loaded serial ports), or I can use RLP (which is very powerful, but needs a different IDE, has no debugging, and is still C/C++).

So, in both cases it's not easy to add capabilities that weren't originally included in the mainstream images. And, given the huge number of MCU capabilities that vendors compete on, I'm sure the mainstream firmware images will never have all of them implemented.

Now, surely this situation doesn't automatically call for Llilum. NETMF could also get some sort of native extension framework, but if one could code his/her drivers using Llilum in Visual Studio and C#, that would certainly make it easier to contribute. Maybe we could even use NuGet packages for native code development; what a world that would be!

Concerning the stubbornness of not letting NETMF go... Let's take a glance at what has happened in recent years. NETMF 4.4 was released almost a year ago, and we still have zero adoption. One cannot buy a board running 4.4. One can only get 4.3.1, which is 2.5 years old and actually doesn't differ much from 4.3, which is 3.5 years old. Oh, and VS2015 is still not officially supported, and we're about to get VS2017 (or whatever it will be named). We basically still have to use VS2013. That's where the NETMF world stands: a 3.5+ year-old environment. Given the advent of other options (like Windows IoT Core), I think this is a serious drawback.

So, NETMF is aging very, very quickly. Pretty much everyone agrees that its codebase is a very complicated one, and since we're going for an almost clean-sheet route anyway, I totally agree with @josesimoes: why not go straight for Llilum?

@Stabilitronas First things first: assuming Llilum is in a ready-to-continue-with state, I would second that too. I miss the ability to use the MCU to its full extent, definitely for time-critical things. Either Llilum, or a NETMF extension, or a good wiki/doc/blog with clear how-tos. My preferred choice IS Llilum, but the last time I checked it, it was IMHO not even ready for alpha tests. Plus, I read somewhere that they complement each other and can/will co-exist.

Now, for the points you address: I have no (or at least not many) issues with compiling the whole NETMF. Plus, if you want to do native stuff in Llilum, you're back in C/C++ too.

As for RLP, which is nice, but (and I consider that a big but) you have to use GHI boards. Nothing against GHI, they rock, but it is a dependency.

Now, as for not being able to buy boards with 4.4: well, I have had multiple of them, and they do run 4.4 with all supported UART, ADC, I2C, SPI, PWM, whatever the board provides, and I think there will be more comments to back that up.

And if you read the v5-dev branch docs, there is mention of moving to the new way of building for VS-Next, if I understood it correctly.

@lt72 and @smaillet-ms, can you please clear things up a bit on the Llilum and NETMF side of things, so that everyone in the discussion has a common base and, hopefully, an understanding of how both Llilum and NETMF go forward?

I am watching and reading. It's very exciting to see so much discussion on the future of embedded C# frameworks. I am very interested in what Steve and Lorenzo say about the differences between vNext and Llilum, and why there is a push for vNext instead of all efforts going to Llilum. I don't have an opinion yet until we hear more.

@tpspencer referenced this issue in roceh/stm32f7disco_llilum_lcdtest Aug 11, 2016: How to test this sample code? #1

@piwi1263 So where did you buy those 4.4 boards exactly?

piwi1263 commented Aug 11, 2016

@Stabilitronas Ask @IngenuityMicro; those boards were even bought last year and have been on 4.4 since around Q4/2015.

@piwi1263 No no, you misunderstood me. Can you give me a link where I could buy a NETMF 4.4 board online? I thought there were none, but maybe I'm missing something...

@Stabilitronas You still have to talk with @IngenuityMicro, or do you expect a link to a corporate board with 10K employees?

@Stabilitronas : we also provide a NETMF board that supports 4.3.1 or 4.4. ( )
But the fact that VS2015 has issues with NETMF does not help in promoting 4.4 :(

And I agree with all that @josesimoes and @piwi1263 have written ;)

@Becafuel Oooh nice, so it's not zero adoption after all!


smaillet-ms commented Aug 11, 2016

@Stabilitronas Can you clarify what you mean when you say VS2015 isn't officially supported? We released the 4.3 QFE2 to support VS2015, and v4.4 has supported VS2015 from the very beginning. VS "15" will also come along once that's officially released. The work done in 4.3 QFE2 split the VS integration from the SDK so that we could more easily add support for newer versions of VS as they come out, without forcing upgrades, and allow side-by-side installation. So I'm confused that someone would claim VS2015 isn't officially supported.

@smaillet-ms I'm sorry, my bad. I was unaware that Quail actually officially runs 4.4, so technically, yes, I can buy a 4.4 board and use VS2015 with it (although I don't know if they actually ship 4.4 rather than 4.3.1; their download site lists it as "beta"). That being said, GHI is still the major NETMF supplier, and they do not run 4.4, and it doesn't look like they will anytime soon. So, VS2015 is out of official support (very clearly stated in ) for the absolute majority of the NETMF boards I can buy online, not even talking about the boards that have already been bought up until now.

To sum up: GHI and VS2015, no official support; Microbus and VS2015, only on beta firmware. Anyone else I'm missing?


smaillet-ms commented Aug 11, 2016

One key point on the NETMF vs. Llilum aspect is that the largest set of changes can (and should) apply to both. The build infrastructure for vNext will be an evolution of the support for native code projects already in Llilum. The support in Llilum is functional but not ideal; it was built on early work from the VS team for supporting Android and GCC. With the introduction of Clang with MS CodeGen and other advances in the VS cross-platform tooling, we can clean that up and make a great story for both platforms (including native JTAG debugging directly in VS!). Furthermore, the PAL work, if designed with unification from the start, can serve as a common platform layer for both (including a much better and more consistent interop story for NETMF).

As to why bother with NETMF at all instead of just going to Llilum... well, that's the community driving that. As pointed out, Llilum is at a functional alpha state. However, like NETMF, it isn't a product and relies on volunteer time to take it further. It's a very complex piece of code with a history almost as old as NETMF's. Multiple representations of the code are processed: the original IL, the internal IR, the generated LLVM IR, and ultimately the final output of the LLVM codegen; so getting volunteers up to speed to where they can reliably contribute is a tough thing.

Thus, a vNext that can share most of the new effort with Llilum seems like a viable intermediate step. If no one else believes that to be true, it won't happen.

@smaillet-ms I like it. Sounds like parallel development (kinda) on both platforms. Thanks for the explanation.

miloush commented Aug 11, 2016

As for VS "15", until it gets supported, my VS15 branch should work in a VS15-only build environment and generate the VS15 SDK/support for MF.

Between MF and LL there are pros and cons for both, and it is probably a question of what your requirements are and what you are willing to give up for them. Personally, I wouldn't like to abandon either of the two for the other, but neither will have it easy with the amount of resources there seems to be, even less so if split.


Zero adoption of 4.4 is misleading at best.

As you know, there are 4.4 builds for a number of Discovery and Nucleo boards, which have been available for quite some time now, and, as Bec a Fuel has pointed out, the Quail board has a 4.4 port.

Also, when I was offering Oxygen boards for sale online last year, they came pre-flashed with 4.4, but unfortunately, due to moving between Europe and New Zealand multiple times over the last year, I had to take them offline.

I also know of several mature commercial projects running 4.4 on STM32F4s and Device Solutions Meridian SOMs.

Obviously a lot more can be done, and I will be gearing up ASAP to start supplying boards with the latest and greatest pre-installed.

I won't be using .NET MF, its speed is too slow (MF speed is too low!). This Llilum idea is good.


josesimoes commented Aug 13, 2016

I'm using v4.4 with VS 2015 from the day it was released. Apart from occasional (and expected) connection glitches it works just fine.

Regarding "boards with 4.4": agreed, there is no significant commercial availability of boards preloaded with a 4.4 image. But a board is 'just' a board. It will run 4.4, 4.3, or whatever good image you flash it with. The point here is being able to build that image.

Untitled86 commented Aug 15, 2016

I'm not on the list, and my opinion might be irrelevant, but I've helped bring three commercial products to market using NETMF. My primary concern about dropping NETMF for Llilum is losing dynamic evaluation. One of the primary reasons I'm still using NETMF instead of FreeRTOS is that I can allow my end-users to write "plugins" (essentially NETMF DLLs) which I dynamically load and execute off their SD card.
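For anyone unfamiliar with the pattern being described, here is a minimal sketch of a plugin loader of the kind NETMF makes possible. It is a hypothetical illustration, not code from this thread: the `IPlugin` interface, the file path, and the type name are all invented, and it assumes the plugin DLL has already been converted to a `.pe` file by the NETMF metadata processor (NETMF's `Assembly.Load` consumes `.pe` images, and its reflection subset is smaller than the desktop framework's). It only runs on a NETMF device with SD card support.

```csharp
using System;
using System.IO;
using System.Reflection;

// Hypothetical contract that both the host and each plugin reference.
public interface IPlugin
{
    void Run();
}

public static class PluginLoader
{
    // Loads a .pe file from the SD card and invokes a known type via
    // reflection. Path and type name are illustrative.
    public static void LoadAndRun(string pePath, string typeName)
    {
        // Read the raw .pe bytes; NETMF's Assembly.Load takes a byte[].
        byte[] raw = File.ReadAllBytes(pePath);
        Assembly plugin = Assembly.Load(raw);

        // Locate the plugin type and construct it through its
        // parameterless constructor.
        Type t = plugin.GetType(typeName);
        ConstructorInfo ctor = t.GetConstructor(new Type[0]);
        IPlugin instance = (IPlugin)ctor.Invoke(null);

        instance.Run();
    }
}

// Example use on-device:
//   PluginLoader.LoadAndRun(@"\SD\plugins\MyPlugin.pe", "MyPlugin.Main");
```

This is exactly the capability that an ahead-of-time compiled platform cannot offer as easily, which is why it keeps coming up in the NETMF vs. Llilum trade-off below.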

The way I've been using NETMF, when I need performance, I move a tiny bit of code into C++ and call it from C#. That's gotten me around all the performance issues.


smaillet-ms commented Aug 17, 2016

Dynamic evaluation is one of the main distinctions and one of the reasons why I think NETMF still has a life of its own, and thus my drive for a vNext. The plan for vNext will hopefully make writing that C++ code portion a bit easier.


cpfister commented Aug 19, 2016

I like the direction in which NETMF vNext appears to be heading, I also like the potential of Llilum, and there certainly could be some synergies between the two. Nevertheless, I'm not sure there are sufficient developer resources available for both, or even for one.


smaillet-ms commented Aug 22, 2016

The number of resources is limited by the number of people willing to commit the time and energy required. The plan for vNext recognizes that resources are limited and thus tries to achieve a "two for one sale" to the maximum extent possible (e.g. the bulk of the focus for NETMF vNext is also directly usable in Llilum). Without that, the effort doesn't make a lot of sense to me.


martincalsyn commented Aug 31, 2016

I've had my eyes elsewhere for a bit, so I missed this conversation. My personal strategy has been to focus on 4.4 for stuff that needs to work Right Now, but to invest in Llilum as the eventual solution for getting more performance, capability, and programmability out of a given MCU, that is, once Llilum is actually ready for prime time. Along those lines, I have been using 4.3 (GHI) and 4.4 (IngenuityMicro) for ongoing commercial projects; working to fix a couple of bugs in 4.4 (entered here as issues); and getting up to speed on the Llilum internals so that I can start contributing there as well. Llilum is where we'll get the modern language features and tools, and full speed out of the same hardware we're using today for NETMF.

As for untitled86's comment: the impact of losing certain dynamic codegen capabilities is probably smaller than you think. The capacity for dynamic loading can still be there; it is dynamic codegen that we lose (anything that uses Emit to generate code at runtime, which is different from the facilities you need for dynamic loading of assemblies). I've been living with this in Xamarin for some time, and the impact has been minimal. There's no reason the impact here has to be worse than it is for Xamarin AOT compilation. The code may not be there yet for dynamic assembly loading, but I don't think there's anything that precludes it happening. There are good reasons why dynamic codegen is problematic.


smaillet-ms commented Sep 1, 2016

@martincalsyn Dynamically loading code in Llilum is not really plausible. While .NET Native and the Xamarin AOT solutions do have support for that, it isn't without a price in footprint; for the platforms they target, it works for them. However, since Llilum targets small embedded devices, it uses a different approach. In particular, it uses aggressive whole-program optimization, at least at the .NET IL level, to eliminate unused code and data. That is, it performs inlining, strips out classes and methods that are not used, and lastly eliminates class data members that are not used. Thus, it isn't possible for an arbitrarily built module to be dynamically loaded and used, as classes, methods, and even fields may not exist in the host that tried to load it.
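To make the conflict concrete, here is a small illustration (invented for this discussion, not Llilum code) of why whole-program optimization and dynamic loading pull in opposite directions. All names are hypothetical.

```csharp
// The host application, compiled ahead of time with aggressive
// whole-program optimization.
public class Host
{
    // Never called from any code reachable at compile time, so a
    // whole-program optimizer is entitled to strip it from the image.
    public static int Scale(int x)
    {
        return x * 2;
    }

    public static void Main()
    {
        // Only this path is visible to the optimizer.
        System.Console.WriteLine("host running");
    }
}

// A separately built plugin, compiled against the *full* Host surface:
public class Plugin
{
    public static int Work()
    {
        // This compiles fine against Host's metadata, but at runtime the
        // call targets Host.Scale, which no longer exists in the
        // optimized image, so loading/binding the plugin must fail.
        return Host.Scale(21);
    }
}
```

The same reasoning applies to fields and whole classes: the optimizer cannot know what a yet-unseen module will reference, so anything unreferenced by the host itself is fair game for elimination.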

Not every concept from the PC programming world works in the embedded world... not before we have an i7 with 32 GB RAM and a 1 TB SSD on ONE chip costing less than €1.
I think there is something of a border between the two worlds. I was always unsure about the idea of going from PC to embedded with .NET...

Maybe "dynamic loading of code" is one of these impossible concepts.
I often thought about something similar (on an 8051 with 32 KB RAM and 256 KB of paged ROM), and I always ended up writing my own interpreter... and then dropped the idea because it was too much work (and impossible on the 8051 :) ).
But... today we have much bigger chips, .NETMF, and so much code out on the internet now, including some concepts of different language interpreters...


lt72 commented Sep 1, 2016

To me, dynamically loading code implies either performance deterioration or a JIT-like solution.
I never liked a JIT solution for embedded because of the obvious requirements on storage and additional code size; it just seems an approach that is bound to under-deliver on all counts. On the other hand, tainting LLILUM with some dynamic loading features that do not require a JIT is not impossible, but I wonder if it is necessary at all at this stage. It is definitely interesting as a theoretical discussion, but I would prefer moving forward with other feature discussions such as networking, security, code size/speed improvements, and so on...

@josesimoes: to answer your original questions, I do believe that if LLILUM had a networking solution at parity with NETMF, it would be possible to talk concretely about pilot products built on LLILUM. Certainly it is premature at this stage. I also believe that the missing pieces could easily enough be provided by the community in collaboration with Microsoft, where Microsoft would act as coordinator and eventually fix the code-generation/runtime issues, first and foremost. That is, given the current reality of project staffing, of which you are all aware.

As often happens in OSS projects, momentum can create opportunities and fast growth. I have opened a few issues with the label "help wanted"; I invite you to check them and share your thoughts.


cw2 commented Sep 1, 2016

@lt72 IMVHO the number one issue is to significantly reduce the complexity of the code transformation part of LLILUM. As @smaillet-ms explained above:

It's a very complex piece of code with a history almost as old as NETMF's. Multiple representations of the code are processed: the original IL, the internal IR, the generated LLVM IR, and ultimately the final output of the LLVM codegen; so getting volunteers up to speed to where they can reliably contribute is a tough thing.

Adding support for a new board is much easier than in NETMF, at least if it is an mbed one, but trying to modify anything in the IR-related code is really very hard (at least an order of magnitude harder than in NETMF, perhaps even more). IMHO this has to change, or the contributions will never come...
