This repository has been archived by the owner on Mar 25, 2018. It is now read-only.

Native FFI #3

Open
rauchg opened this issue Mar 2, 2015 · 34 comments
Comments

@rauchg

rauchg commented Mar 2, 2015

Essentially, supporting https://github.com/node-ffi/node-ffi directly (require('ffi')).
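
For context, a minimal sketch of what this looks like with node-ffi's current API (the libm example from its README):

```js
var ffi = require('ffi');

// Map each exported symbol to [return type, [argument types]].
var libm = ffi.Library('libm', {
  'ceil': [ 'double', [ 'double' ] ]
});

libm.ceil(1.5); // 2
```

Having this in core would mean require('ffi') works out of the box, with no native addon compile step.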

@seppo0010

+1

@TooTallNate

Definitely would eliminate a lot of the pain of users having to compile native addons (especially on Windows).

👍

@chrisdickinson

+1 on this. Might be worthwhile to ask the FFI maintainers from Python or Ruby if there are any pitfalls we should be aware of when bringing this into core.

@rauchg
Author

rauchg commented Mar 2, 2015

@chrisdickinson agreed. We could CC them on this thread even :D

@Qard
Member

Qard commented Mar 2, 2015

👍

@chrisdickinson

cc @arigo, who maintains Python's cffi package, and @tduehr, who maintains Ruby's ffi package. Are there any pitfalls we should be aware of before incorporating an FFI package into io.js core?

@piscisaureus

(y) although can we put the feature behind a flag for some time?

@mikeal
Contributor

mikeal commented Mar 2, 2015

Is this something we should get into V8?


@Qard
Member

Qard commented Mar 2, 2015

How would FFI interact with V8, which is heavily template-based? I've only ever seen FFIs for C before. Is it even possible, or would we need the C abstraction over V8 that has been suggested several times before?

@TooTallNate

How would FFI interact with V8

The same way it interacts with libuv (through C code)?

@tduehr

tduehr commented Mar 3, 2015

@Qard you have it backwards... V8 would be interacting with the FFI.

I'm not aware of any implementation pitfalls, mostly because I didn't create MRI's or JRuby's FFI implementations. I can help with runtime pitfalls somewhat... You're going to want to give users a way to make objects backed by C memory structures act as much as possible like any other JS object for allocation/deallocation purposes. For all other intents and purposes, interacting directly with the C call should carry C semantics. It should be up to the FFI user to implement more idiomatic wrappers for their chosen library; that is not an FFI implementation concern.
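
As a concrete sketch, the companion ref and ref-struct packages in the node-ffi ecosystem provide roughly this: a JS object backed by C memory whose allocation and deallocation are managed by the garbage collector like any other object.

```js
var ref = require('ref');
var StructType = require('ref-struct');

// Describe the layout of C's struct timeval from <sys/time.h>.
var Timeval = StructType({
  tv_sec:  ref.types.long,
  tv_usec: ref.types.long
});

// The instance is backed by a plain Buffer, so V8's GC owns its
// lifetime, while C code can read and write the underlying memory.
var tv = new Timeval();
tv.tv_sec = 42;
var ptr = tv.ref(); // Buffer pointing at the underlying C memory
```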

@arigo

arigo commented Mar 3, 2015

Hi there. Do you have anything more concrete to point me to? All I see is a vague question "what are the pitfalls of incorporating a Foreign Function Interface into xxx"... and I don't know what xxx is.

@Fishrock123

See related io.js PRs: nodejs/node#1865, nodejs/node#1762, nodejs/node#1750

@arigo

arigo commented Jun 15, 2015

Probably not a useful note, but since you asked me: as maintainer of Python CFFI, I feel that if the goal is to call C code, then the interface should strive for the same simplicity as C itself. Ideally, if a feature doesn't exist in C, it is not part of CFFI either (reading big- or little-endian numbers comes to mind).

CFFI comes with a minimal API: the user writes ffi.cdef("some C declarations") to declare functions and types (using the C syntax directly), and then uses ffi.new("foo_t *") or ffi.new("int[42]") to make a new C structure or array. You get a few operations on the "pointer" objects, like indexing and field access, and calls if they are pointers to functions. That's it for the core. This approach is completely different from Python's ctypes, even though it allows basically the same results.

(Another difference is the "API mode", a separate mode which produces C source code that must be compiled into a Python extension module; this has the huge advantage that you're working with C code at the level of C, including stuff like macros and structures-with-at-least-a-field-called-x; not at the level of the ABI. But this might not sell so well in the context of JS, for all I know.)

@tduehr

tduehr commented Jun 15, 2015

I concur. I often get requests for Ruby-based features that cannot be duplicated in the general case of the FFI/C layer. Wherever I make an API decision, I default to C behavior. This is an interface to a lower-level language; it should act like that lower-level language. You're making it easier to use a C library and removing the need to write the library hooks in C. For Ruby, this means the resulting gem is more portable between engine implementations and even versions of the same engine. It also means the user, the one implementing a C library wrapper using FFI, does not have to learn the internals of Ruby. The user has the information required to make their API align with language semantics, so C semantics should be expected by users.

@kobalicek

Am I the only one skeptical about this module here? :)

I don't like putting something like ffi into core. I think it's unsafe, it will probably never be stable, and I personally consider it a workaround, not a solution. Let's be honest here - the biggest problem in native module development is not the C++ code, it's breaking changes in V8 that ruin all the effort and turn into maintenance nightmares. Solutions like NaN provide short-term stability, but we will soon have NaN2, because the latest changes in V8 are so drastic that they cannot simply be solved by more macros and templates in NaN.

Now, back to FFI - I have the following concerns:

  1. It's unsafe - creating bindings usually means maintaining data structures, arrays, etc. FFI basically requires duplicating many things that are already in C header files. If someone gets this wrong, or a new version of a library changes something, it won't be visible until the FFI call runs, and it will crash node in the best case (see the sketch below). It also allows any module to crash the process, even on purpose.
  2. It solves only some use cases when dealing with C code. It cannot be used to bind C++ code anyway, and a lot of libraries are in C++. I guess Windows users would be the happiest here, as it would enable calling Win32 API functions without going native.
  3. It's very slow. I think node should focus on high performance and not add something inherently slow to the core.
  4. It will make it really easy to create sync bindings. I think node should try to avoid this as much as possible.

Guys, these are my 2 cents. I have nothing against having this library as a module, but I really think it shouldn't go into core.
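
To make concern 1 concrete, a deliberately broken node-ffi sketch: the JS-side declaration silently disagrees with the real C signature, and nothing catches the mismatch until the call runs.

```js
var ffi = require('ffi');

// WRONG on purpose: strlen() actually takes a char*, not an int,
// but the FFI has no C header to check this declaration against.
var libc = ffi.Library('libc', {
  'strlen': [ 'size_t', [ 'int' ] ]
});

// 12345 is passed where libc expects a valid pointer; strlen reads
// from that bogus address and typically segfaults the whole process.
libc.strlen(12345);
```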

@brendanashworth

.. the biggest problem in native module development is not the C++ code, it's breaking changes in V8 that ruin all the effort and turn into maintenance nightmares.

I agree! However, I see FFI in core as a solution to this (not as a workaround, as you suggest). If we can provide a stable FFI module that doesn't force users to go through the V8 API, it will allow us to foster a stable native ecosystem where we couldn't before. It also reduces the C++ tyranny of native modules: to people who don't like C++ (like me), writing a native module can be very daunting, whereas FFI lets you choose whichever language you prefer, like Rust.

It's very slow.

I haven't done any benchmarking, but I can't see it being significantly slower than V8 bindings, which are already slower than regular JavaScript calls.

It will be really easy to create sync bindings.

Yes, it is easy, but it also looks like it is even easier to make async calls with ffi than it is to make them with C++.

I don't know about security, but that seems like something that should be discussed.
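
For reference, node-ffi's documented async facility: every foreign function also exposes an .async variant that runs the C call on the libuv thread pool and calls back on the event loop.

```js
var ffi = require('ffi');

var libm = ffi.Library('libm', {
  'ceil': [ 'double', [ 'double' ] ]
});

// The synchronous form would be libm.ceil(1.5); the .async variant
// offloads the call to the thread pool instead of blocking.
libm.ceil.async(1.5, function (err, res) {
  if (err) throw err;
  console.log(res); // 2
});
```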

@rvagg
Member

rvagg commented Jun 24, 2015

This discussion really belongs back in io.js now.

I agree with the security concerns, as does @trevnorris, which is why we pushed for isolating the potentially unsafe operations into a separate core module, 'ffi', which could then be more easily isolated at runtime - perhaps with a runtime flag, perhaps with some monkey-patching to disable it if the user wanted. There's probably more discussion needed around that side of it over in #1865 before it lands.
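
As a sketch of the monkey-patching idea (assuming a hypothetical core module named 'ffi'), user code could disable it at startup via Node's internal but long-stable Module._load hook:

```js
// disable-ffi.js - preload this before any user code runs.
var Module = require('module');
var realLoad = Module._load;

// Intercept every require() and reject requests for 'ffi'.
Module._load = function (request, parent, isMain) {
  if (request === 'ffi') {
    throw new Error("'ffi' has been disabled in this process");
  }
  return realLoad.apply(this, arguments);
};
```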

@brendanashworth

I've moved the comments back to nodejs/node#1865, sorry - and a --no-ffi flag would be great.

@trevnorris

I can't see it being significantly slower than v8 bindings

It is significantly slower than using a native module.

@rvagg
Member

rvagg commented Jun 25, 2015

I think the hardware group attempted to do something with it for serialport (or similar) but bailed because it was uber-slow; at least, I have a vague memory of a recent issue about that.

@tduehr

tduehr commented Jun 25, 2015

It will be slower than using the native API.

The big wins are:

- Easy module development
- Modules built on FFI are immune to internal API changes
  - Corollary: modules built on FFI already work on other engines with FFI support

FFI in Ruby is often used as a stepping stone. A gem will be built with FFI initially; if it proves popular, the author or someone else will write native versions for the main Ruby engines. Usually the author writes a native MRI version, then someone else writes a Java version for JRuby.

@arigo

arigo commented Jun 26, 2015

With a good JIT, an FFI can be a lot faster than any alternative. This is the case for PyPy, and I'm sure V8 and others are similar. The JIT can basically generate raw C calls; that's the best performance possible. By contrast, needing to go through some custom C API layer to access the objects is slower (because it needs a stable API that presents objects in a way that doesn't change all the time), and it cannot be optimized by the JIT (for example, if you make a temporary "integer" object, the JIT usually removes it; but if it escapes through the C API layer, it can't).

@kobalicek

@arigo Without having FFI directly in V8, I think even a JIT won't help you. You will probably not be able to extract data from V8-allocated objects without doing it in C++. I remember there was a discussion in V8 about this; it wasn't strictly about FFI, but about accessing DOM properties directly from V8.

@trevnorris

A call into C++ from JS in recent V8 is a bit under 30 ns. Converting well-formed arguments takes single-digit nanoseconds. So even a non-trivial call to a native library going through the V8 API won't add more than 50 ns of overhead. The most expensive part will probably be converting the return value(s) to JS objects, which would exist in the FFI regardless.

I've written tests to verify this, and even an operation as simple as summing the values of a typed array with as few as 100 items is faster when passed to the native side.
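
A hedged sketch of the kind of comparison being described (the addon.sum binding is hypothetical and assumes a compiled C++ addon exposing sum(Float64Array); the actual test code is not shown in this thread):

```js
// bench.js - pure-JS sum vs. a hypothetical native binding.
var addon = require('./build/Release/addon'); // hypothetical addon

var data = new Float64Array(100);
for (var i = 0; i < data.length; i++) data[i] = i;

function sumJS(arr) {
  var total = 0;
  for (var j = 0; j < arr.length; j++) total += arr[j];
  return total;
}

console.time('js');
for (var n = 0; n < 1e6; n++) sumJS(data);
console.timeEnd('js');

console.time('native');
for (var m = 0; m < 1e6; m++) addon.sum(data);
console.timeEnd('native');
```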

@DemiMarie

@arigo makes an excellent point. The key thing to remember is that this requires the FFI to be tightly integrated with the JIT compiler, not shipped as a separate module. The JIT needs special knowledge of the FFI built into it. This is the case for PyPy and LuaJIT. That means this feature request really belongs with the V8 project, since only they can provide a performant FFI.

LuaJIT is a good example of how fast a good FFI can be. In LuaJIT, FFI data types are used routinely, because they are faster to access than Lua tables and take up less memory. (LuaJIT is also an example of how to write a fast VM in general, but that is beside the point).

@lygstate

This is a really important feature for letting Node.js communicate with the OS and other frequently used APIs.

@YurySolovyov

With the rise of frameworks like NW.js and Electron, the need to call into OS APIs has also increased, because that is often what most desktop apps need to do anyway.

@PaulBGD

PaulBGD commented Dec 6, 2016

It has been a while, has there been any work on this?

@ofrobots

ofrobots commented Dec 6, 2016

@matthewloring and I have been working on prototyping a fast dynamic FFI on top of the TurboFan JIT in V8. We're proving it out at the moment, and once we are confident it can work, we will share more details publicly.

@SuhairZain

Hi @ofrobots, any updates on the FFI you're working on?

@ebraminio

This seems to have progressed internally here and then here. There are some related patches on the project, some of which have been merged, but it seems the project is not finalized yet. I hope to see this soon.

@lygstate

So is there still no progress?

@DemiMarie

DemiMarie commented Nov 29, 2017 via email
