Feature request: providing pre-compiled binary modules for Node.js end users on Windows #1891

Mithgol opened this Issue Dec 16, 2011 · 51 comments



Since the fix of nodejs/node-v0.x-archive#1746 it became possible to use compiled .node binaries on Windows as file modules.

As @Benvie has said at node-ffi/node-ffi#30, providing some compiled binaries is the usual mode of things for Windows.

However, as @kkaefer has said at mapbox/node-sqlite3#48, there's no clear blessed path to distribute such binaries via npm.

Some of the existing Node.js modules are, in fact, binary, but they are built by npm for end users (not by their developers for packaging), and, as far as I know, currently none of those are intended to run on Windows.

Foreseeing a future in which not every JavaScript developer on Windows is ready to have the somewhat huge Visual C++ 2010 Express installed for building Node.js modules (even though that installation is free of charge), I suggest that npm introduce some package.json options to facilitate the provision of two versions of pre-compiled binaries (that is, a win32 version and a win64 version), pre-compiled by the module developers themselves.


A general selection of architecture / platform may be better, so that we can also distribute binaries for other platforms


All it would take is being able to specify platform subfolders or something.


as far as I know, currently none of those are intended to run on Windows.

Correction. Since at least November 2011 node-firebird-libfbclient provides a pre-compiled (and zip-packed) Windows package:


node-gyp takes care of the topics touched on in this discussion.

@luk- luk- closed this Mar 6, 2013

@st-luke can we re-open this to continue discussion on binary distributables? It's something that crosses the npm & node-gyp boundary. We need to come up with a standard solution for optionally distributing native modules for the various platforms supported by Node, particularly Windows where compiling is a major pain-point.

@timoxley is keen to get some movement on this and may be able to put in some time to prototype a potential solution. Will let him comment on the discussion we've been having about this and his thoughts on the topic.

A basic solution would need some way to substitute a node-gyp compile for a downloadable binary (perhaps from npm itself) that is differentiated at least by module version + arch + NODE_MODULE_VERSION (now available in process.versions.modules).

Where and how these binaries are assembled is an issue for discussion!

npm member

in our case, the issue is that we'd like to deploy binary code in multiple environments (windows/linux/darwin, mips/x86/x64/arm, etc.) run by non-developers, and/or on systems where setting up dev environments is not a simple task (i.e. Windows).

Ideally, npm would be able to download precompiled binaries based on the current environment; failing that, it would try manual compilation. Not sure how it would work though… some git tagging convention or urls specified in the package.json?

@luk- luk- reopened this Apr 25, 2013

@rvagg @timoxley I've also been thinking about this issue lately from a deployment point of view. I think precompiled binaries for platforms would be a very good thing (but also a difficult thing, considering the variables).

I think it would be useful to have a way to optionally pull down multiple precompiled binaries, not just for the module version / arch / NODE_MODULE_VERSION you are currently working with during development. This makes sense for checking into version control for deployment purposes, when you consider the amount of developers that write software on one OS and deploy on another.

npm member

Question: What's the average size for a compiled binary?

Downloading all binaries by default will introduce yet more delays to the already slow npm install process if binary sizes are significant and/or there's a large number of possible deploy targets.


/cc @TooTallNate @isaacs

I think one of the reasons this whole thing is getting held up is that we all want a travis-style build farm to do the building for us. But that problem is just too big to solve in the short term--consider a binary like couchbase which requires a recent build of libcouchbase to be available (i.e. it's not bundled), and on Windows it needs that to be in c:\libcouchbase. How is that going to work on a build farm? node-canvas is even more complicated because it needs cairo dev to be available, on Windows that means a whole gtk dev dist to be available, plus if you want png, jpg and gif support you need libs for each of those to be put in special locations. That's not practical at all for a build farm.

So instead, let's ignore that step for now and deal with it later; it's not necessary to make this work. Module authors can be responsible initially for building and distributing binaries; they'll likely get a lot of community support from people working on the various platforms. We just need to get the rest of it working.

Here's an initial proposal:

package.json & npm

Add a new key to package.json: "platformBinaries" that provides a mapping from architecture + platform + NODE_MODULE_VERSION to package names in npm that provide binaries for those platforms.

The key could be: process.arch + '-' + process.platform and npm would need to append process.versions.modules on the end of the value to get the actual package to download.



  "name": "leveldown",
  "platformBinaries": {
    "x64-linux": "leveldown-bin-x64-linux",
    "x86-linux": "leveldown-bin-x86-linux",
    "x64-win"  : "leveldown-bin-x64-win",
    "x86-win"  : "leveldown-bin-x86-win",
    "arm-linux": "leveldown-bin-arm-linux"

npm looks up the appropriate key for the current platform and adds '-' + process.versions.modules to the end, so it might end up with a package name: "leveldown-bin-x64-linux-11" that it will attempt to download.
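As a sketch of that lookup (the function name and fallback behaviour here are my assumptions, not part of the proposal's wording):

```javascript
// Hypothetical helper showing how npm could derive the binary package name
// from the "platformBinaries" map; the function name is assumed.
function binaryPackageName (pkg, arch, platform, moduleVersion) {
  var key = arch + '-' + platform                       // e.g. "x64-linux"
  var prefix = pkg.platformBinaries && pkg.platformBinaries[key]
  if (!prefix) return null                              // no mapping: fall back to node-gyp
  return prefix + '-' + moduleVersion                   // e.g. "leveldown-bin-x64-linux-11"
}

var pkg = { platformBinaries: { 'x64-linux': 'leveldown-bin-x64-linux' } }
console.log(binaryPackageName(pkg, 'x64', 'linux', '11'))
// → leveldown-bin-x64-linux-11
```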

If a platform binary can't be found from the "platformBinaries" mapping on the current platform & version, or a download fails (because the package isn't available, perhaps the author hasn't caught up with the current NODE_MODULE_VERSION), then invoke node-gyp as normal.

If the platform binary package is found in npm and successfully downloaded then it's installed as a normal dependency of that package. What is actually in the package is up to the module author but a convention would emerge, probably from the outset.

npm would probably also need a new command line argument for publishing these "platformBinaries" so that it can (1) publish a module with a name other than what's found in package.json (so you don't have to make a new package.json for each binary you publish, just publish a built version of the original) and (2) allow bundling of the ./build/ directory, which it would normally ignore.


bindings would likely become an important (yet still optional) part of the equation. It could be extended to search beyond ./build etc. to look into the dependencies for binaries.

It could attempt to do something like the following to find the binary module:

var pj = require('./package')
var pb = pj.platformBinaries && pj.platformBinaries[process.arch + '-' + process.platform]
// pb is now the platformBinary package prefix from package.json

if (pb) {
  // perhaps more logical combinations could be tried too
  try {
    return require(pb + '-' + process.versions.modules + '/build/Release/' + opts.bindings)
  } catch (e) { /* fall through */ }
  try {
    return require(pb + '-' + process.versions.modules + '/build/Debug/' + opts.bindings)
  } catch (e) { /* fall through */ }
}

So for the LevelDOWN example, where I do a require('bindings')('leveldown.node'), it'd end up trying to load (on x64 Linux with NODE_MODULE_VERSION 11):

leveldown-bin-x64-linux-11/build/Release/leveldown.node
leveldown-bin-x64-linux-11/build/Debug/leveldown.node
(note that using require to find the dep rather than directly looking in node_modules should account for the complexities of shared dependencies where the actual location isn't entirely predictable)

So what I'd end up doing is just npm publishing duplicate packages under these different "platformBinaries" names that are exactly the same as the original package but have builds in them.

Note that this structure would also allow you to check in your binaries to git for your deployment, if that's what you're in to. And you could have a binary for each platform you're deploying to happily coexisting because they are sitting in node_modules directories with different names.

more thoughts

Arch complexities

There are some complexities to a simple process.arch + process.platform combination for finding binaries:

  • what about different platform standard library versions? libc might be too old on the current platform for what's been made available for distribution for example.
  • what about architecture quirks? a generic 'arm' probably won't work, for example: a binary compiled for a Raspberry Pi won't work on a Kindle Paperwhite because the ARM processor has a different feature set. ARM is probably the craziest example here because there are so many ARM variants out in the wild that Node can be compiled for. Do we need an additional specifier, standardised in between arch and platform? 'arm-cortex_a8-linux', 'arm-kindle5-linux', etc. That list would have to be standardised and compiled into each node core binary.
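One way to sketch such an extra specifier (the key scheme here is purely illustrative, not a settled convention):

```javascript
// Hypothetical key builder with an optional arch-variant segment between
// arch and platform; the naming scheme is an assumption for illustration.
function platformKey (arch, platform, variant) {
  return variant
    ? [arch, variant, platform].join('-')   // e.g. "arm-cortex_a8-linux"
    : [arch, platform].join('-')            // e.g. "x64-linux"
}

console.log(platformKey('arm', 'linux', 'cortex_a8'))  // → arm-cortex_a8-linux
console.log(platformKey('x64', 'linux'))               // → x64-linux
```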

The question of Debug vs Release builds is a bit annoying; perhaps it'd be best to ignore Debug builds completely: you'd have to be running a Release version of node core in order for npm to even bother downloading binaries for you.

User confirmation

@tootallnate suggested on IRC that an npm confirmation might be in order:

i'd imagine a confirmation prompt the first time you download a binary from a particular "uploader"
"x-native-module binary will be downloaded, uploaded by @rvagg, do you accept?"

Node version compatibility

My current understanding is that NODE_MODULE_VERSION, exposed as process.versions.modules, is enough to match a binary addon to the current version of Node, even across minor versions. We had a native addon API change part of the way through 0.9 (module was made available as the second argument to Init(), in addition to exports as the first), so NODE_MODULE_VERSION was bumped to 11. Then Node 0.11 has a major V8 upgrade, and I think this is the main reason we've had a NODE_MODULE_VERSION bump to 12.

This should mean we don't need to involve anything else in the current node version to the naming scheme, as long as NODE_MODULE_VERSION is carefully managed we should be ok.
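For reference, the value is available at runtime on any Node build that exposes it:

```javascript
// process.versions.modules exposes NODE_MODULE_VERSION as a string,
// e.g. "11" on Node 0.10 and "12" on early Node 0.11 (values per the thread).
var abi = process.versions.modules
console.log('ABI version:', abi)
console.log('numeric:', parseInt(abi, 10))
```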



We have this exact need right now, as I think a lot of devs do too.

@rvagg has a pretty decent solution, the key being that there is an optional property in package.json that is considered before the regular build process, keeping it nice and backwards compatible but adding a massive slice of awesomeness to npm.

I agree with considering existing dynamically linked libraries out of the scope of this issue. We have weird and wonderful deps on our arm devices, but manage them tightly and expect other developers will be able to do the same.

However, this will require being able to change the location of the workers that do the building. Even if this binary hosting gets adopted by the npm registry, and they do the building, there will always be weird and wonderful deps that require controlling the worker's build process. In this case we will need another property (though these can be lumped into one object) with the URL prefix from which these binaries can be pulled.

npm member

Related: #731


@cliffano has done some quick checking of the registry to see what sort of numbers we're talking about.

Packages with "gypfile":true in their latest version (i.e. the "dist-tags":{"latest":"x.y.z"} version) are listed here: https://gist.github.com/cliffano/5465541

That's only 248 out of a total of 28,508 packages, or 0.87%. Tiny; a "build farm" could probably consist of a bunch of virtual machines on my desktop and some cross-compilers for ARM.

My guess is that anyone who hasn't upgraded to gyp by now probably doesn't have a very popular package and is perhaps unlikely to put in the effort to do so.


Having done a bit of hacking with node on the raspberry pi I can tell you this would be awesome. The option of accepting binary installation would make it much easier for those new to node to get started on this platform, removing the need to install the entire build suite then wait for minutes for stuff to build.

Also +1 for this making stuff easier in windows, getting and installing a compiler environment to use a native module is a nightmare.

As gcc has an existing list of architectures, it would probably be wise to stick to that list where possible (see the gcc "i386 and x86-64 Options" and "ARM Options" documentation).


Good stuff here guys! I'm liking the enthusiasm.

@rvagg Very good writeup. I think you make a lot of good points. I like your idea of utilizing the NODE_MODULE_VERSION define, but like you say, we're going to need to start being strict about updating it when the ABI changes (or when a dep's ABI changes: libuv, V8, etc.).

I think my main goal here would be that no action would need to be taken on the side of the existing 200-whatever modules with gyp files in them, because migrations suck and frankly I don't think it's necessary to specify anything in the package.json.

From the sounds of it, this feature should be completely built into npm. I think a single new npm publish-binary command and some new smarts built into npm install are all that's really needed.

npm publish-binary (the command name can change, I don't care) would essentially create a tarball of the module's current build directory, and package up the results (or maybe we just do the Release dir directly and ignore Debug builds like @rvagg suggested?), and upload them to the npm registry. The tarball being identified with the tags that @rvagg noted: process.arch, process.platform and NODE_MODULE_VERSION.

Ok ferry is here, gonna continue in another post...


So npm publish-binary would also note the npm user that uploaded the tarball. This will come into play for validation (giving permission) when a user downloads this package for a given module (npm install).

For npm install, it would download the main tarball for the module as usual. But before/instead of running node-gyp, it would first check to see if a compatible binary has been uploaded already, and if so it would download that tarball rather than running node-gyp rebuild. npm would unpack the binary tarball as the "build" dir of the installed module.

For the "user permission" thing, I'm thinking something along the lines of SSH, where it's a one-time grant of permission for a module+user combination, and npm can cache those entries in the .npmrc file or something.

The flow for a regular user installing a native module through npm install <module> would look something like (annotations inline):

~ (master) ∴ npm install time
npm http GET https://registry.npmjs.org/time
npm http 200 https://registry.npmjs.org/time  # download "time" module metadata
npm http GET https://registry.npmjs.org/time/-/time-0.9.2.tgz
npm http 200 https://registry.npmjs.org/time/-/time-0.9.2.tgz # download main "time" module tarball
# notice that there's a compatible binary uploaded, prompt user for install:
There is a binary for the "time" module uploaded by @TooTallNate (https://npmjs.org/~tootallnate).
Would you like to install it (yes/no)? yes
Warning: Permanently added user 'TooTallNate' to the list of valid publishers for the "time" module.
npm http GET https://registry.npmjs.org/time/-/time-0.9.2-darwin-x64-12.tgz
npm http 200 https://registry.npmjs.org/time/-/time-0.9.2-darwin-x64-12.tgz # binary tarball is downloaded
# ... rest of "time" deps are downloaded
~ (master) ∴ ls node_modules/time/build/Release/
linker.lock  obj.target/  time.node*     # precompiled 'time.node' binary is in-place and ready to rock-and-roll

Any comments?


I have some reservations about the potential liabilities of npm being responsible for the distribution of binaries, mostly it worries me from a license perspective.

Also I think you lose some flexibility for people who might want to use private packages but not necessarily an entire private npm repo.

So I would vote for platformBinaries: { '*-linux-*-32': 'http://cdn.example.com/mymodule-linux-32.tar.gz' }

I'm not entirely attached to the wild card mechanism.

I also think users will want more flexibility than making a decision on a target triple; they may need to break it down by Linux flavor and version. Trying to cover all those cases is obviously more work than what npm should concern itself with. Perhaps it would be easier to allow specifying a script that returns the platform name, which is subsequently matched against platformBinaries.

platformDetect: function () { return execSync('dpkg-architecture | grep DEB_HOST_GNU_TYPE | sed s#DEB_HOST_GNU_TYPE=##'); }


I have some reservations about the potential liabilities of npm being responsible for the distribution of binaries, mostly it worries me from a license perspective.

Can you expand on that? Are you worried about people uploading GPL'd binaries or something? I'd think that if/when that happens "the npm admins" (@isaacs) would simply remove the violating binary (DMCA-style? But you know, without the negative connotation).


It's not really the GPL so much as people distributing binaries that might not be allowed (or at least be legally uncertain) to be distributed in all regions, in the vein of a liblame, libdecss, or the old crypto export issue. Copyright is also an issue, but that exists today if people include resources in their sourceball.

Asking the repository administrators to be responsible for removing the offending packages seems like an unnecessary increase in responsibility and effort for them.


I really like the solution proposed by @TooTallNate

It would be a good idea to ensure developers have a method of including one or more licence files with the binary package, just to cover that facet of distribution. This could be as simple as a pre-publish post-build hook, but is probably better as an option, to ensure people remember.

On the copyright/licensing front, a take-down notice could apply to either code or binaries, so as long as this doesn't increase the workload of administrators and there is a method of including the relevant licences, it shouldn't be too bad.


@tjfontaine There are definitely libraries where precompiled binaries would just plain be out of the question (libmp3lame, libfaac, etc.) for legal reasons, so for those I suppose we introduce a new package.json field explicitly disallowing binaries of the module to be published. This would be similar in concept to the "publish": false property, but focused on publishing binary tarballs.

Maybe: "publishBinary": false. Then if somebody tries to npm publish-binary said package, npm would refuse and reject the command.


The publishBinary mechanism seems arbitrary, and I would expect it to result in people forking packages for the simple purpose of not including that field.

Also I question the sanity of using binaries built by others than the maintainers of the package -- even if it is opt in for the user installing the package -- as we know users are just going to hit 'Y' without considering the ramifications.

If the maintainer of the package isn't willing to build the package on that platform (or at least properly delegate authority to build on a given platform) there's probably a good chance they won't care about supporting that platform.


After discussing with @TooTallNate there seems to be some ambiguity in my position.

I have two issues, and they are separate from each other:

  • I have reservations about npm distributing binaries built by anyone, instead of the maintainer being responsible for the hosting of the binary packages
    • But as I'm not the person who would ultimately be accepting that liability for npm, whatever.
  • There needs to be a more customizable mechanism for picking the identifier for the platform
    • The naïve approach of picking a compiler triple (os, arch, bitness) will quickly fall apart when all dependencies aren't shipped with the module (or also found in npm)
    • also any suitably complex module (even with shipping those dependencies) may result in problems where they're looking for syscalls or a specific ABI from the OS

The second one I think is definitely something we should provide for, such that there's something akin to platformDetect that points to a script that can be invoked and output the identifier for this platform the module maintainer needs to differentiate on.


I should also point out that platformDetect would be optional, if your library works with the naïve approach more power to you.


After considering this for a bit: would it be better to have this feature outside of npm core initially, and simply add an extension point for npm subcommands similar to the mechanism supported by git?

This would enable a user to choose to install @TooTallNate's suggested subcommand via npm, along with their choice of implementation. As suggested, the process would be unchanged, with metadata about the availability of binary tarballs living in npm, but the archives themselves being hosted outside.

This would have the added bonus of enabling development of this idea outside of npm and leaving the door open to convergence of implementation once npm has bigger picture problems solved.

I agree with @isaacs these binaries should be built in a sandbox, but again this is up to those publishing the binaries.

I am interested to hear what others think of this idea.


Really awesome stuff so far, a lot of what @rvagg and @TooTallNate bring up is exactly what @timoxley and I were talking about on the train the other day.

npm member

So, here's what has to happen before this is even a conversation worth having:

  1. The package signing story has to be much more awesome, as opposed to being nonexistent like it is now.
  2. The package tarballs need to not live in the couchdb.

(1) is being investigated by someone already. (2) is planned, and not super complicated. Probably in a few weeks, I'll be able to crank out a little daemon that does this.


@isaacs did anything end up happening with package signing and tarballs?

@mscdex mscdex referenced this issue in mscdex/node-pcre Jun 27, 2013

Provide libpcre binaries #7


About binary modules: I don't think it's really needed.

There are some general-purpose package managers like dpkg or rpm. They are used by end users, and they need to have binary packages. Some Gentoo guys might argue, but I'm ok with it.

But npm is a package manager for a programming platform, like ruby gems or pip. It is NOT supposed to be used by end users. It is supposed to be used by developers. And developers without dev tools don't look so awesome you know.

And anyway, I never hear these issues from guys who are using Linux or Mac or BSD. If someone doesn't have python installed there, he can install it with one command (and remove it later just as easily if he wants). Only Windows seems to have that issue. So it can be easily solved by changing OS, right? :)


@st-luke about package signing:

Were there any discussions about package signing before? I'd love to read them. Especially about public keys and how to manage them... we would need something like a web of trust, right?

It looks easy to do: currently a package owner can upload a package-1.2.3.tgz.sig file to the same npm repository, and npm just needs to run gpg to verify this signature before unpacking... A simple patch will do, well, except for convincing developers to sign it; that's hard.

Anyway. If no one implements it, I'll start working on it in a few months. That's one of the features that will be in npm or will be in a fork of npm. That's how we need it here. :)


@rlidwka I'm using Linux and OS X as development and build machines.

The problem I have with native modules is that they are compiled on npm install,

  • on travisci where you have to start each build without npm cache, those native modules are getting recompiled on each build, which slows things down
  • I also have a jenkinsci installation with similar setup, either with fresh user workspace or fresh dynamic build slaves, there's no prior npm cache (x)

Each time the native module is compiled, it slows the build down and it also makes npm install more cpu-bound.
So this is definitely not just a problem for Windows.

I'm happy to use package manager like dpkg or rpm, but I'm still looking for a way how they'll help with installing the modules on this list https://gist.github.com/cliffano/5465541 as those modules are now.

(x) I'm willing to trade off compile time for network latency + downloads size because I have a private npm registry


@cliffano, you can try to compile it just once for every architecture you have, on one server, and distribute it everywhere you need. I believe it can be done. If travis doesn't support it, it isn't an npm issue anyway.

Do you have a private registry? Oh, good. You can download packages, compile them and publish them to your private registry under the same name. I think it would work; it should work, because an npm package is just a tarball in the end.

The issue is that somebody needs to compile these modules anyway. Module author can't do it because he likely doesn't have linux, macos, windows, bsd and sunos all available. So it has to be a user, there's nobody else to do the job.


@rlidwka Is it possible to republish the module using the same name? I thought only the maintainer can publish the module, so that means I'd have to either modify the authentication details of those users or modify the maintainer username of the modules on my private registry. Both would modify the document revision IDs and cause conflicts (Couch response code 409) on both the public_user and registry database replications.

"distribute it everywhere you need. I believe it can be done."
Isn't the registry that distribution mechanism (publish the binary once on the main registry and replicate everywhere)?

"Module author can't do it because he likely doesn't have linux, macos, windows, bsd and sunos all available."
Agreed. This is a problem for a lot of projects.
I think one of the above proposals mentions the use of travisci to do this, so it doesn't have to be the module author or the user who does it.


Is it possible to republish the module using the same name?

If you have your own private registry, you can do anything with it, right? Hmm... I don't know, I don't use CouchDB for this task. I'm developing this registry server instead. It can do such things easily, but it's far from being ready for production.

It also can be done theoretically using namespaces, but they are under "isaacs-doesnt-like" badge right now. :(

I think one of the above proposals mentions the use of travisci to do this, so it doesn't have to be the module author or the user who does it.

If travisci can build a package automatically, they could create a service on it like "choose module, choose platform/os, push a button, download compiled stuff". This would be a great step for implementing this feature, and npm can eventually accept a PR to use this service automatically. But I very much doubt travisci can build all packages reliably.


"If you have your own private registry, you can do anything with it, right?"
That's true for a custom node package manager, but not necessarily for a private npm registry.
Private npm registry still depends on design documents which will be replicated from main npm registry, and they need to be replicated in sequence as per registry database _changes.
e.g. if you have the latest registry design documents before you replicate the module documents, you'll see validation failures during replication due to older module names not adhering to newer validation rules.

Another solution to my non-travis build scenario is to have a cascading cache instead of the whole registry.
i.e. an npm install should keep a cache both on my local machine and in a company-wide cache.
The next time other dev or build machines need to install that module, it checks local cache first, then the cascading cache(s), before going all the way to the main registry.
This way the compiled binary can be installed once in the company-wide cache and get reused across machines without recompiling.
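The lookup order described above could be sketched like this (all names here are hypothetical; caches are modelled as plain objects):

```javascript
// Hypothetical cascading-cache resolution: local cache first, then the
// company-wide cache(s), then the main registry.
function resolveTarball (name, caches, registry) {
  for (var i = 0; i < caches.length; i++) {
    if (caches[i][name]) return caches[i][name]   // cache hit: no recompile
  }
  return registry[name]                            // miss everywhere: go to the registry
}

var local = {}
var companyWide = { 'leveldown-0.6.2.tgz': '/cache/leveldown-0.6.2.tgz' }
var registry = { 'leveldown-0.6.2.tgz': 'https://registry.example.com/leveldown-0.6.2.tgz' }
console.log(resolveTarball('leveldown-0.6.2.tgz', [local, companyWide], registry))
// → /cache/leveldown-0.6.2.tgz
```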

Sorry for hijacking this thread with a slightly OT discussion :).

Re travisci, yep, I've seen some false negative builds due to native module compilation failures, but I have high hopes that it will get stabler eventually.


/cc @glennblock
Rumour has it that you Azure guys are working in this area.

IMO this needs to start in userland and migrate in to npm if/when it's mature

@rvagg rvagg referenced this issue in nodejs/node-v0.x-archive Jul 27, 2013

Support for pre-compiled binary modules in npm. #4398


Hi @rvagg sorry for the delayed response. The rumour is indeed correct that we care greatly about improving the status quo of native node modules on Windows / Azure and we are working to improve it :-)

Your concerns are definitely noted. All I can say is the folks who will be doing the work are very familiar with npm... VERY familiar.


As far as node-sqlite3 I've recently moved to a homegrown solution from binary deployments. It is documented at https://github.com/developmentseed/node-sqlite3/wiki/Binaries. I plan to see how it goes for a month or so and then if things are working well start cleaning up the js code that implements things into a standalone module that others can help improve or comment on. I focused on getting this working for myself first and plan to skim back through many of the comments above as time permits to glean other methods that might be better.


Again frustrated that this was being done in secret and only announced now, why do we have so much of this from companies that operate in OSS environments?


why do we have so much of this from companies that operate in OSS environments?

If everyone announced things when projects are just starting, we probably would get flooded by them. So people announce them when projects are more or less ready.


And, I guess, a product on an earlier stage can itself be flooded with known issues opened on GitHub by mere bystanders out of concern.


"Hey guys, don't go too far down the path of making your own solutions, we're working on something here!"

I'm not the only one who's been tinkering around the edges here because of the apparent lack of activity by any of the major players.

npm member

For what it's worth, I and the other npm collaborators had no part in creating module foundry. That was all Nodejitsu and Microsoft, and they deserve all the credit, blame, frustration, elation, kudos, etc. From the point of view of npm or node core, it's not "official", and doesn't preclude anyone else from trying other things.

As is so often the case, stuff like this happening on the periphery of npm itself is really important. It helps by exploring the solution space in multiple different directions, and allows for more experimentation than a single centralized effort. It's still tbd whether module foundry will be a good fit, or what problems it'll face. Pre-compiling binary modules is a very challenging problem. There will likely be many iterations.

I agree that it would be nice to know about these things in advance of some big release, in smaller steps. But meh. Wasn't my call. Try it out, provide feedback to the people running it.


I plan to see how it goes for a month or so and then if things are working well start cleaning up the js code that implements things into a standalone module that others can help improve or comment on.

Binary deployment for node-sqlite3 has been going very well. So, I've moved ahead on creating a standalone tool to make this approach easier: node-pre-gyp. It is ready for use by any module maintainers interested in providing binaries for their module. Please check out https://github.com/springmeyer/node-pre-gyp and let me know what you think.

A key to my approach is that I lean on travis.ci for automating builds across bitness, node versions, and Linux/OS X. I'm finding that travis-as-build-farm is solid, but it misses windows and sunos support. So, I plan to investigate whether https://github.com/nodejitsu/module-foundry can help here and also push for travis windows support


Today node-pre-gyp has reached enough feature-completeness for me to close this issue.

@Mithgol Mithgol closed this Mar 15, 2014