nix copy uses too much memory #1681

Closed
ledettwy opened this Issue Nov 15, 2017 · 33 comments


ledettwy commented Nov 15, 2017

I'm running nix copy inside runInLinuxVM, and I notice that for any nontrivial closure, the VM runs out of memory during the copy. I left the VM memory at the default 512 MiB. I could obviously give the VM more memory, but that doesn't scale for copying complex derivations with many dependencies.

I suggest adding an option to load and copy the contents of the paths one at a time, or better yet, a way to specify an upper bound on the memory used while copying.
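
In the meantime, something along these lines seems to approximate copying one path at a time from the shell (a rough, untested sketch; it relies on nix-store --query --requisites listing dependencies before their referrers, and ssh://destination is a placeholder):

  $ nix-store --query --requisites ./result | while read -r p; do
      nix copy --to ssh://destination "$p"
    done

Each invocation then has only one NAR in flight, so peak memory is bounded by the largest single store path rather than the whole closure.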


Member

copumpkin commented Nov 15, 2017

Intuitively, it feels like this should be able to run in constant memory. What am I missing?


Member

lheckemann commented Feb 23, 2018

I'm encountering this issue with a single path: nix copy, nix-store --import, and a number of other commands I've tried all fail to import it. It would be great to know if there's any way at all I can import it…


ledettwy commented Mar 17, 2018

Possibly related to #1969? It looks like some patches have gone in recently that might improve things here: 48662d1, 3e6b194.

edolstra added a commit to edolstra/nix that referenced this issue Mar 26, 2018

Make 'nix copy --to daemon' run in constant memory (daemon side)
Continuation of 97002b6. This makes
the daemon use constant memory. For example, it reduces the daemon's
maximum RSS on

  $ nix copy --from ~/my-nix --to daemon /nix/store/1n7x0yv8vq6zi90hfmian84vdhd04bgp-blender-2.79a

from 264 MiB to 7 MiB.

We now use a TunnelSource to prevent the connection from ending up in
an undefined state if an exception is thrown while the NAR is being
sent.

Issue NixOS#1681.

edolstra added a commit to edolstra/nix that referenced this issue Mar 27, 2018

Make LocalBinaryCacheStore::narFromPath() run in constant memory
This reduces memory consumption of

  nix copy --from file://... --to ~/my-nix /nix/store/95cwv4q54dc6giaqv6q6p4r02ia2km35-blender-2.79

from 514 MiB to 18 MiB for an uncompressed binary cache, and from 192
MiB to 53 MiB for a bzipped binary cache. It may also be faster
because fetching can happen concurrently with decompression/writing.

Continuation of 48662d1.

Issue NixOS#1681.

edolstra added a commit to edolstra/nix that referenced this issue Mar 27, 2018

Make HttpBinaryCacheStore::narFromPath() run in constant memory
This reduces memory consumption of

  nix copy --from https://cache.nixos.org --to ~/my-nix /nix/store/95cwv4q54dc6giaqv6q6p4r02ia2km35-blender-2.79

from 176 MiB to 82 MiB. (The remaining memory is probably due to xz
decompression overhead.)

Issue NixOS#1681.
Issue NixOS#1969.

shlevy added the backlog label Apr 1, 2018


Ralith commented Apr 2, 2018

I see commits purporting to address this for a number of different cases, but none concerning uploads to an S3 bucket. Copying a 2.8 GB store path to an S3 bucket took nearly 4 GB of memory and more than twenty minutes at 100% CPU. Has that been fixed?

dtzWill added a commit to dtzWill/nix that referenced this issue Apr 4, 2018

Make LocalBinaryCacheStore::narFromPath() run in constant memory

dtzWill added a commit to dtzWill/nix that referenced this issue Apr 4, 2018

Make HttpBinaryCacheStore::narFromPath() run in constant memory

andrewchambers commented Apr 8, 2018

Hitting this issue trying to do something like: nixos-rebuild build; nix copy ./result --to ssh://low_ram_machine

@dtzWill will those experimental changes help with ssh copy?


Member

edolstra commented Apr 12, 2018

@Ralith I'm probably not going to make S3BinaryCacheStore do uploads in constant space. It might not even be supported by aws-sdk-cpp.

I assume the 100% CPU is caused by compression, which you can disable.
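
That is, via the compression parameter of the binary cache store URI, along these lines (a sketch; the bucket name and store path are placeholders):

  $ nix copy --to 's3://my-bucket?compression=none' /nix/store/...-some-path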


Member

copumpkin commented Apr 12, 2018

FWIW I too am another big-upload-to-S3 guy using nix copy 😄

It would surprise me if aws-sdk-cpp didn't support it, given that S3 supports almost arbitrarily large objects and multi-part uploads. If someone figured out how to implement it, would you accept the PR?


Ralith commented Apr 13, 2018

I assume the 100% CPU is caused by compression, which you can disable.

Even so, it seems very strange that it would take twenty minutes on my i7-4980HQ. 2.8 GB is big, but it's not that big.


Member

edolstra commented Apr 13, 2018

IIRC xz compression can easily take that long.


coretemp commented Apr 23, 2018

This is what I am seeing too:

a...........> copying path '/nix/store/fl3mcaqqk2vg0dmk01dfbs6nbm5skpzc-systemd-237' from 'https://cache.nixos.org'...
a...........> error: out of memory

The main problem I see is that it merely says "out of memory" instead of reporting how much it tried to allocate and how much was available before the allocation. Copying data should run in constant space, as others have already mentioned.

If compression causes higher memory requirements than necessary, that is a problem too, because it raises hosting costs for no reason other than the initial deployment.

Before the deployment, at least 300 MB was available on host a.


Contributor

dtzWill commented Apr 23, 2018

FWIW it looks like they do support streaming, at least for fetches:

https://sdk.amazonaws.com/cpp/api/LATEST/index.html

(Near the end, look for IOStreams.)

Hopefully upload has something similar.

Seconded re: xz compression taking that long. There's an option somewhere to enable parallel xz compression if you have idle cores; IIRC the result will be slightly bigger for the same compression level.
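
If memory serves, that's the parallel-compression parameter of the S3 binary cache store (it only applies to xz), used along these lines (a sketch; the bucket name and store path are placeholders):

  $ nix copy --to 's3://my-bucket?parallel-compression=true' /nix/store/...-some-path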

Anyway, if someone tackled the API spelunking, would it be welcome? Or is there a reason it would have problems or be a bad idea?

EDIT: oops, I think we already use the stream thing, although at a glance it looks like we pull it all into a string; that seems resolvable. Anyway, fetching from S3 is probably not as important.


Member

lheckemann commented May 2, 2018

As far as I can tell, the fixes in 2.0.1 still don't really fix the issue.


Member

edolstra commented May 3, 2018

@lheckemann IIRC we didn't cherry-pick any memory improvements into 2.0.1. You need master for some of the fixes, or my experimental branch for the rest.


Member

lheckemann commented May 3, 2018

Oh, that would explain it! Any chance they could be included in a 2.0.2 release? There have been so many complaints about this issue on IRC, and I've run into it myself more times than I'd like.


SebastianCallh commented May 29, 2018

Does "nixops deploy" use this? I get out-of-memory errors during deploys even though I have several gigabytes free (both disk and working memory), which is odd. Just wondering if this is addressed here or should be investigated further.


coretemp commented May 29, 2018

@SebastianCallh you don't say which machine runs out of memory, so I assume you don't realize the error is about the machine you are deploying to. The solution is to give that machine 512 MB of swap.
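
For a one-off, something like this on the target machine should work (a sketch; on NixOS you would declare swap via the swapDevices option instead):

  $ fallocate -l 512M /swapfile && chmod 600 /swapfile
  $ mkswap /swapfile && swapon /swapfile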

Perhaps I could commit some of my changes that fix this in an AWS environment where t2.nanos are used, but only if there is interest from people with commit access.


SebastianCallh commented May 29, 2018

@coretemp That was the machine I was referring to. The machine being deployed to has plenty of both disk and working memory to spare when the error occurs.

edolstra added a commit that referenced this issue May 30, 2018

Make 'nix copy --to daemon' run in constant memory (daemon side)

edolstra added a commit that referenced this issue May 30, 2018

Make LocalBinaryCacheStore::narFromPath() run in constant memory

edolstra added a commit that referenced this issue May 30, 2018

Make HttpBinaryCacheStore::narFromPath() run in constant memory

Contributor

nh2 commented Jun 2, 2018

@edolstra Can you post a summary here of which store code paths you have already fixed in master, which ones are fixed on your experimental branch (and which branch that is), and which ones are known not to work yet?

That would help a lot in figuring out what exactly to test.


Contributor

nh2 commented Jun 2, 2018

I'm on the latest nix commit 54b1c596435b0aaf3a2557652ad4bf74d5756514, which includes a couple of memory fixes from the last few days that are not yet in 2.0.4. But it doesn't work for me yet:

nixops deploy to a libvirtd VM (which has said latest nix) still fails with error: out of memory, even if I give the VM 2 GB of RAM, during the step copying 414 missing paths (5083.46 MiB) to ‘root@192.168.123.41’..., where it copies the paths up via SSH.

I can see in top/ps aux how the memory usage of nix-store --serve --write grows and grows, up to 50%, and then it crashes.

Here is a gdb dump of where it is while the memory is growing:

Thread 1 (Thread 0x7fc827166000 (LWP 917)):
#0  0x00007fc825750b1d in read () from target:/nix/store/2kcrj1ksd2a14bm5sky182fv2xwfhfap-glibc-2.26-131/lib/libpthread.so.0
#1  0x00007fc8262a1269 in nix::FdSource::readUnbuffered(unsigned char*, unsigned long) ()
   from target:/nix/store/s7fqa57f3z7p2wrimir3mz6wybqc0xfq-nix-2.1pre6148_a4aac7f/lib/libnixutil.so
#2  0x00007fc8262a04dd in nix::BufferedSource::read(unsigned char*, unsigned long) () from target:/nix/store/s7fqa57f3z7p2wrimir3mz6wybqc0xfq-nix-2.1pre6148_a4aac7f/lib/libnixutil.so
#3  0x00007fc8265cbc54 in nix::TeeSource::read(unsigned char*, unsigned long) () from target:/nix/store/s7fqa57f3z7p2wrimir3mz6wybqc0xfq-nix-2.1pre6148_a4aac7f/lib/libnixstore.so
#4  0x00007fc8262a0a88 in nix::Source::operator()(unsigned char*, unsigned long) () from target:/nix/store/s7fqa57f3z7p2wrimir3mz6wybqc0xfq-nix-2.1pre6148_a4aac7f/lib/libnixutil.so
#5  0x00007fc82627d294 in nix::parse(nix::ParseSink&, nix::Source&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
   from target:/nix/store/s7fqa57f3z7p2wrimir3mz6wybqc0xfq-nix-2.1pre6148_a4aac7f/lib/libnixutil.so
#6  0x00007fc82627db20 in nix::parse(nix::ParseSink&, nix::Source&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
   from target:/nix/store/s7fqa57f3z7p2wrimir3mz6wybqc0xfq-nix-2.1pre6148_a4aac7f/lib/libnixutil.so
#7  0x00007fc82627db20 in nix::parse(nix::ParseSink&, nix::Source&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
   from target:/nix/store/s7fqa57f3z7p2wrimir3mz6wybqc0xfq-nix-2.1pre6148_a4aac7f/lib/libnixutil.so
#8  0x00007fc82627e683 in nix::parseDump(nix::ParseSink&, nix::Source&) () from target:/nix/store/s7fqa57f3z7p2wrimir3mz6wybqc0xfq-nix-2.1pre6148_a4aac7f/lib/libnixutil.so
#9  0x00007fc8265ca7c8 in nix::Store::importPaths[abi:cxx11](nix::Source&, std::shared_ptr<nix::FSAccessor>, nix::CheckSigsFlag) ()
   from target:/nix/store/s7fqa57f3z7p2wrimir3mz6wybqc0xfq-nix-2.1pre6148_a4aac7f/lib/libnixstore.so
#10 0x000000000041ecec in opServe(std::__cxx11::list<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >, std::__cxx11::list<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >) ()
#11 0x000000000041822a in std::_Function_handler<void (), main::{lambda()#1}>::_M_invoke(std::_Any_data const&) ()
#12 0x00007fc8268e09c3 in nix::handleExceptions(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::function<void ()>) ()
   from target:/nix/store/s7fqa57f3z7p2wrimir3mz6wybqc0xfq-nix-2.1pre6148_a4aac7f/lib/libnixmain.so
#13 0x000000000040c49a in main ()
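
For reference, a backtrace like this can be captured from the running process with something like the following (a sketch; the pgrep pattern in particular is a guess):

  $ gdb --batch -ex bt -p "$(pgrep -f 'nix-store --serve')"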

Update: for your reading convenience, the same trace condensed:

#0  read ()
#1  nix::FdSource::readUnbuffered       (unsigned char*, unsigned long)
#2  nix::BufferedSource::read           (unsigned char*, unsigned long)
#3  nix::TeeSource::read                (unsigned char*, unsigned long)
#4  nix::Source::operator()             (unsigned char*, unsigned long)
#5  nix::parse                          (nix::ParseSink&, nix::Source&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
#6  nix::parse                          (nix::ParseSink&, nix::Source&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
#7  nix::parse                          (nix::ParseSink&, nix::Source&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
#8  nix::parseDump                      (nix::ParseSink&, nix::Source&)
#9  nix::Store::importPaths [abi:cxx11] (nix::Source&, std::shared_ptr<nix::FSAccessor>, nix::CheckSigsFlag)
#10 opServe                             (std::__cxx11::list<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >, std::__cxx11::list<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >)
#11 std::_Function_handler<void         (), main::{lambda()#1}>::_M_invoke(std::_Any_data const&)
#12 nix::handleExceptions               (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::function<void ()>)
#13 main ()

Maybe this helps figure out whether this code path should already have been moved to a streaming approach?


Contributor

nh2 commented Jun 2, 2018

I think the issue is here:

/* Extract the NAR from the source. */
TeeSink tee(source);
parseDump(tee, tee.source);

uint32_t magic = readInt(source);
if (magic != exportMagic)
    throw Error("Nix archive cannot be imported; wrong format");

ValidPathInfo info;
info.path = readStorePath(*this, source);

//Activity act(*logger, lvlInfo, format("importing path '%s'") % info.path);

info.references = readStorePaths<PathSet>(*this, source);

info.deriver = readString(source);
if (info.deriver != "") assertStorePath(info.deriver);

info.narHash = hashString(htSHA256, *tee.source.data);
info.narSize = tee.source.data->size();

// Ignore optional legacy signature.
if (readInt(source) == 1)
    readString(source);

addToStore(info, tee.source.data, NoRepair, checkSigs, accessor);

TeeSink is something that writes a copy of all the input data it reads into a std::string (tee.source.data); parseDump(tee, tee.source) does the reading.

Then all of that data is added to the store with addToStore(info, tee.source.data, ...), but in the meantime the entire NAR sits in memory.
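
To make the cost concrete, here is a minimal, self-contained sketch (not Nix code; a toy FNV-1a hash stands in for the SHA-256 that hashString computes) contrasting the buffering approach above with chunk-wise streaming:

#include <cstddef>
#include <cstdint>
#include <iostream>

// Toy stand-in for an incremental hash (Nix uses SHA-256).
struct Fnv1a {
    uint64_t h = 1469598103934665603ULL;
    void update(const char * data, size_t len) {
        for (size_t i = 0; i < len; ++i)
            h = (h ^ static_cast<unsigned char>(data[i])) * 1099511628211ULL;
    }
};

int main() {
    // Buffering (what the snippet above effectively does): slurp the whole
    // stream into one string, then hash it -- peak memory is O(NAR size):
    //   std::string all((std::istreambuf_iterator<char>(std::cin)), {});
    //
    // Streaming: hash fixed-size chunks as they arrive -- peak memory is O(1).
    Fnv1a hash;
    uint64_t narSize = 0;
    char buf[64 * 1024];
    while (std::cin.read(buf, sizeof buf) || std::cin.gcount() > 0) {
        size_t n = static_cast<size_t>(std::cin.gcount());
        hash.update(buf, n);
        narSize += n;
        // In a streaming importPaths(), each chunk would also be forwarded
        // to the store here, so narHash/narSize are computed on the fly
        // without ever retaining the full NAR.
    }
    std::cout << "narSize=" << narSize << " hash=" << std::hex << hash.h << "\n";
}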

nh2 added a commit to nh2/nix that referenced this issue Jun 3, 2018

importPaths: Don't copy imported NAR into memory.
Fixes `error: out of memory` of `nix-store --serve --write`
when receiving packages via SSH (and perhaps other sources).

See NixOS#1681 NixOS#1969 NixOS#1988 NixOS/nixpkgs#38808.

Performance improvement on `nix-store --import` of a 2.2 GB cudatoolkit closure:

When the store path already exists:
  Before:
    11.43user 2.94system 0:16.71elapsed 86%CPU (0avgtext+0avgdata 4204664maxresident)k
  After:
    10.82user 2.66system 0:20.14elapsed 66%CPU (0avgtext+0avgdata   12556maxresident)k
When the store path doesn't yet exist (after `nix-store --delete`):
  Before:
    11.15user 2.09system 0:13.26elapsed 99%CPU (0avgtext+0avgdata 4204732maxresident)k
  After:
     5.27user 1.48system 0:06.80elapsed 99%CPU (0avgtext+0avgdata   12032maxresident)k

The reduction is 4200 MB -> 12 MB RAM usage, and it also takes less time.

Contributor

nh2 commented Jun 3, 2018

PR that should fix it for nix-store --import, with a reduction from 4 GB to 12 MB of RAM for a 2.2 GB cudatoolkit closure: #2206

With instructions for conveniently trying the patch out: #2206 (comment)


Member

edolstra commented Jun 4, 2018

@nh2 The master branch now has all fixes except edolstra@c94b4fc because it's controversial.


Member

domenkozar commented Jun 19, 2018

Hopefully we can include this for Nix 2.0.5 :)


Member

edolstra commented Jun 20, 2018

@domenkozar Probably better to do a 2.1 release.


joepie91 commented Jun 20, 2018

@nh2 I can confirm that that patch solved the problem for me (and has not produced any unforeseen issues, as far as I can tell).

EDIT: Whoops, I meant to post this on NixOS/nixpkgs#38808 instead. NixOps is the context in which I'm seeing this problem.


Contributor

dtzWill commented Jun 22, 2018

Please don't release too many of the recent memory fixes until we've fixed #2203. Apologies if the proposed changes don't depend on the bits that broke nix log usage for paths built by Hydra; I just don't want us to accidentally end up with a release containing such a regression :).


coretemp commented Jul 6, 2018

edolstra/nix@c94b4fc is only controversial because, in many cases, it raises the cost of cloud resources without a good reason.

If it simply inspected the machine to check how much storage and/or memory is available, it could use that as a sensible default.

Another solution would be to run until it goes out of memory and then retry with the optimization applied automatically; that way it would always work. For people who want to squeeze out the last bits of performance, you could add variables (like the ones that already exist, though they probably need better names) to control this behavior. With that design, everyone would be happy. Similarly, you could have flags that optimize for deployment time (e.g. spend more cloud resources to save developer time).

As a guiding principle, I would like to see it acknowledged that increases in cloud resource cost are weighed heavily in implementation decisions.

In general, even if none of the suggestions above is implemented exactly, it is likely possible to create something non-controversial. The problem with the existing patch is that the variable one can control is an implementation detail, not a high-level policy.

Anton-Latukha added a commit to Anton-Latukha/nix that referenced this issue Jul 12, 2018

Make 'nix copy --to daemon' run in constant memory (daemon side)

Anton-Latukha added a commit to Anton-Latukha/nix that referenced this issue Jul 12, 2018

Make LocalBinaryCacheStore::narFromPath() run in constant memory

Anton-Latukha added a commit to Anton-Latukha/nix that referenced this issue Jul 12, 2018

Make HttpBinaryCacheStore::narFromPath() run in constant memory

Member

vcunat commented Jul 22, 2018

The OOM condition is rather hard to handle, as it depends on the host OS. Typically the OS will let you allocate too much and then invoke the OOM killer later, so you don't get a chance to react to the condition gracefully.
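
One way to at least turn the OOM killer into a catchable allocation failure (a sketch; overcommit behavior varies across systems, and ssh://destination and the path are placeholders) is to cap the process's address space, so allocations fail with std::bad_alloc and nix can report "error: out of memory" instead of being killed:

  $ (ulimit -v 524288; nix copy --to ssh://destination /nix/store/...-some-path)   # 512 MiB cap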


vaibhavsagar commented Sep 7, 2018

Has this been fixed in Nix 2.1?


coretemp commented Sep 18, 2018

Why is this critical issue not being addressed?

edolstra closed this Sep 18, 2018


Contributor

nh2 commented Sep 20, 2018

Has this been fixed in Nix 2.1?

@vaibhavsagar I think so.

Why is this critical issue not being addressed?

@coretemp It was addressed in 2825e05.


This comment was marked as disruptive content.

coretemp commented Sep 21, 2018

@nh2 Yes, I figured that out a few days ago, because I assumed it wasn't closed for nothing. I think it's sloppy and slightly rude to close an issue without referring to a commit. It is rude because it says to users (who are almost all software developers) "my time as a developer is more important than the time of N typically highly skilled software developers". That does not compute, I can tell you that. The fact that the software is provided for free does not change the economics. If every developer in the project behaved that way, you can see how efficiency would go down. So, there's my argument that it's rude and inefficient behavior.

In case you are wondering, I also considered sharing the same information you shared, but because of the negative atmosphere I chose not to.

I am using the feature, and while I have not tested the failure scenario myself (a from-scratch deployment used to show the problem), it does feel faster.

I don't really like people using negative emoticons. Clearly this was an important issue for everyone, and if no response is given in 12 days to a question (from someone else, even), it doesn't seem as if anyone cares.

Additionally, I provided design feedback, which may or may not have been taken into account in the final version (which does seem to reflect it). As such, I would like to ask everyone to stop talking in negative emoticons. If you have something to say, just say it.

I can share the expression we are using (which compiles a version with these fixes from source), but I wasn't able to get the overlay version of it working in 5 minutes, which I imagine is what the rest of you would be using.

@nh2 Thank you for setting the right example, though.


Member

domenkozar commented Sep 21, 2018

@coretemp please behave with respect and avoid ad hominem attacks; they do no good to anyone.

Nix is provided for free and comes with zero obligations on the developers. If you'd like professional support, I'd recommend contacting one of the consulting companies: https://nixos.org/nixos/support.html

I'm locking this issue, as nothing good can come of this. If there's a problem with the recent fix, please open another issue describing it.

NixOS locked this issue as too heated and limited conversation to collaborators Sep 21, 2018
