Code Generation Other Targets #17

Closed

sirinath opened this issue Jun 23, 2014 · 2 comments

@sirinath

Is it possible to consider generating code for targets other than the .NET CLI, such as GPU, FPGA, Native/ASIC, JS, etc., with the ability to specify which parts are targeted to which backend, to get the best performance and versatility?

I think WebSharper does this for JS.

@polytypic
Member

@t0yv0 probably knows exactly what WebSharper does, but I believe it basically contains a compiler from a subset of F# to JS. Alea cuBase similarly compiles F# to CUDA. There are probably a few more examples where F# quotations are being compiled to some target other than CIL.
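To make the idea concrete, here is a minimal sketch of that general approach: pattern match on an F# quotation and emit code for some other target, here a JavaScript-like string. This is purely illustrative (the tiny toJs function and the handful of supported operators are just assumptions for the example); real translators such as WebSharper or Alea cuBase are of course vastly more complete.

```fsharp
open Microsoft.FSharp.Quotations
open Microsoft.FSharp.Quotations.Patterns
open Microsoft.FSharp.Quotations.DerivedPatterns

// Translate a tiny subset of F# quotations to a JavaScript-like string.
let rec toJs (e: Expr) : string =
  match e with
  | Value (v, _) -> string v
  | Var v -> v.Name
  | Lambda (v, body) -> sprintf "function (%s) { return %s; }" v.Name (toJs body)
  | SpecificCall <@ (+) @> (_, _, [a; b]) -> sprintf "(%s + %s)" (toJs a) (toJs b)
  | SpecificCall <@ (*) @> (_, _, [a; b]) -> sprintf "(%s * %s)" (toJs a) (toJs b)
  | Application (f, x) -> sprintf "(%s)(%s)" (toJs f) (toJs x)
  | _ -> failwithf "unsupported expression: %A" e

// toJs <@ fun x -> x * x + 1 @>
// => "function (x) { return ((x * x) + 1); }"
```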

So, it is certainly possible.

Do you have some more specific application in mind?

By compiling CML-style code to native, you could probably improve performance by up to about 10x for code that makes heavy use of message passing and lightweight threads. Other kinds of code might not improve much. (Hopefully, though, the JIT compiler will at some point implement escape analysis and be able to eliminate intermediate object allocations. That could give a 5x-10x improvement in some cases with Hopac, because it would eliminate a lot of GC pressure and GC write barriers. This estimate is based on an experiment I did with MLton.) It would be quite a bit of work to make a really good run-time environment for such a native CML system, even if you used something like LLVM to generate the native code.

Depending on what you really want, it might not be the best approach to compile arbitrary Hopac/CML code to a limited execution environment like a GPU. Note that when crossing boundaries between execution environments there are typically fairly large overheads. Without knowing exactly what you have in mind, I would likely compile some more limited programs to such a special environment and then implement only special-purpose message-passing abstractions to integrate with Hopac: something like a channel through which you could asynchronously send messages to a GPU kernel for processing and receive the results (see the sketch below). The GPU kernel itself would not necessarily be programmed with anything like CML.
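Here is a rough sketch of the shape of such a bridge, using Hopac channels and write-once variables. The exact constructors and signatures are from memory, and runKernelOnGpu is a made-up placeholder for whatever actually launches the kernel (e.g. via Alea cuBase or native interop), so treat this as an illustration rather than working code:

```fsharp
open Hopac

// Placeholder for the real kernel invocation; not CML-style code at all.
let runKernelOnGpu (input: float[]) : float[] =
  input |> Array.map (fun x -> x * x)

// Requests pair the input with a write-once IVar for the reply.
let gpuCh : Ch<float[] * IVar<float[]>> = Ch ()

// A server job that forever takes requests, runs the kernel and fills the reply.
let rec gpuServer () : Job<unit> = job {
  let! (input, reply) = Ch.take gpuCh
  do! IVar.fill reply (runKernelOnGpu input)
  return! gpuServer ()
}

// Client side: asynchronously send work to the "GPU" and wait for the result.
let offload (input: float[]) : Job<float[]> = job {
  let reply = IVar ()
  do! Ch.give gpuCh (input, reply)
  return! IVar.read reply
}

// Usage (schematically):
//   start (gpuServer ())
//   let result = run (offload [| 1.0; 2.0; 3.0 |])
```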

@sirinath
Author

The lowest-hanging fruit (though it may be quite a bit of work) may be to just optimise for the .NET platform itself, removing some of the costly abstractions and object creation. Somewhat tangential, but the following might be of some interest: http://scala-lms.github.io/, http://stanford-ppl.github.io/Delite/, https://github.com/tiarkrompf/scala-virtualized/wiki. One thing that I do not like about the LMS approach is that it cannot be done at compile time.

Also, with regard to the GPU, what you say is what I had in mind. You extract the portions of the code which can benefit from running in a GPU environment: even though execution there is slower, you make up for it with aggressive parallelisation. Similarly for FPGA in some cases.

With regard to FPGA, you can perhaps target the whole program, or a larger proportion of it, to be realised in hardware. Some interesting links: http://nuggle.me/ (not exactly what would be useful here), https://chisel.eecs.berkeley.edu/, http://www.bluespec.com/ (http://en.wikipedia.org/wiki/Bluespec), http://clash.ewi.utwente.nl/, http://www.myhdl.org/, http://raintown.org/lava/, http://en.wikipedia.org/wiki/Atom_(programming_language).

I don't like the idea of compiling directly to LLVM. It is too low-level to work with, and it depends on the portability of LLVM. Maybe you have to have an intermediate stage where you use a minimal language before it is translated to the lower level; the lower level can also be LLVM IR. Languages that use this multi-layer approach are Red/System and Red, and DSL-oriented languages like Racket / Redex, Shen, Qi and Scheme / Lisp variants; also see https://github.com/JeffBezanson/femtolisp (from the creator of the Julia language). In this case the pipeline would translate the F# AST into an intermediate representation conducive to translation to the target language, and from that into the target language itself (see the sketch below). Low-level languages can be presumed to have an AST representation as well, so the IR should stay close to an AST/TST representation and should be easily manipulable.
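To illustrate the shape I mean (everything here is made up for the example): a small IR defined in F# that stays close to an AST and is easy to manipulate, plus one backend that lowers it to concrete source text. A quotation-to-IR front end would sit in front of it, and other backends (LLVM IR, JS, an HDL) could be plugged in behind it.

```fsharp
// A deliberately tiny, hypothetical intermediate representation: backend-neutral
// and close to an AST, so it is easy to transform and to pretty-print.
type Ir =
  | Const of int
  | Local of string
  | BinOp of string * Ir * Ir   // e.g. "add", "mul"
  | Call  of string * Ir list

// One possible backend: lower the IR to C-like expression text.  Another
// backend could emit LLVM IR, JS or an HDL from the same Ir type.
let rec emitC (ir: Ir) : string =
  match ir with
  | Const n -> string n
  | Local x -> x
  | BinOp (op, a, b) ->
      let sym = (match op with "add" -> "+" | "mul" -> "*" | _ -> op)
      sprintf "(%s %s %s)" (emitC a) sym (emitC b)
  | Call (f, args) ->
      sprintf "%s(%s)" f (args |> List.map emitC |> String.concat ", ")

// emitC (BinOp ("add", BinOp ("mul", Local "x", Local "x"), Const 1))
// => "((x * x) + 1)"
```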

Another aspect that could be incorporated is verification (see Why3, WhyML) and fluent testing (like in http://www.pyret.org/), to check that the system behaves as expected and that the generated code is correct.
