Major performance (I/O?) issue in /mnt/* and in ~ (home) #873

Open
Mika56 opened this Issue Aug 11, 2016 · 136 comments


Mika56 commented Aug 11, 2016

A brief description

As a Symfony developer, it has always been hard for me to get a stable, fast development environment. My current setup is Ubuntu running under VirtualBox (via Vagrant). While page generation is fast, my IDE accesses my PHP files through SMB, which is really (sometimes horribly) slow.
I'm now trying to use WSL to improve all of this. However, I'm having a major performance issue when using /mnt/* folders.
If I set up a Symfony project under /mnt/c, it is really slow. If I set it up under /home/mikael, it is very fast.

Expected results

Drives mounted under /mnt should be as fast as any other folder.

Actual results

With a new Symfony 3.1.3 project, generating the home page under /home/mikael takes between 100 ms and 130 ms.
The same project under /mnt/c/ takes between 1200 ms and 1500 ms.

Your Windows build number

10.0.14393.51

Steps / commands required to reproduce the error

# Install PHP5
$ sudo apt-get install -y php5 php5-json

# Download Symfony installer
$ sudo curl -LsS https://symfony.com/installer -o /usr/local/bin/symfony
$ sudo chmod a+x /usr/local/bin/symfony

# Download Symfony
$ cd
$ symfony new symfony_test

# Start Symfony
$ cd symfony_test
$ php bin/console server:run

Open your browser and go to http://127.0.0.1:8000/.
Once the page is loaded, refresh it (on first request, Symfony had to generate its cache).
Generation time is displayed on the bottom left

You can then do the same under /mnt/c/

$ cd /mnt/c/
$ symfony new symfony_test
$ cd symfony_test
$ php bin/console server:run

Additional information

I've added my dev folders as excluded folders in Windows Defender, as well as %LOCALAPPDATA%\lxss.
I've tried keeping my project in ~ and pointing my IDE to %LOCALAPPDATA%\lxss\home\mikael\, but as I later read, there is no supported way of editing WSL files.
WSL is installed in its default location under C (no strange junction or symlink), which is a healthy SSD.
My computer is joined to a domain, in case that has any influence.
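
For a quicker check that isolates filesystem overhead from Symfony itself, a small timing loop like the one below (a rough sketch; the paths and file count are arbitrary) shows the same kind of gap between the two locations:

$ mkdir -p ~/iotest /mnt/c/iotest
$ time bash -c 'for i in $(seq 1 1000); do echo x > ~/iotest/$i; done; rm -rf ~/iotest'
$ time bash -c 'for i in $(seq 1 1000); do echo x > /mnt/c/iotest/$i; done; rm -rf /mnt/c/iotest'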

@Mika56 Mika56 changed the title from Major performance (I/O?) issue to Major performance (I/O?) issue in /mnt/* Aug 11, 2016

fcicq commented Aug 12, 2016

With the current design of DrvFs, I suspect this problem will be hard to resolve. I recommend running an SSH server inside WSL and doing the editing over SSH to bypass some of the other issues.
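
A minimal sketch of that workaround, assuming a stock Ubuntu-on-WSL install (the port number is an arbitrary example):

$ sudo apt-get install -y openssh-server
$ echo 'Port 2222' | sudo tee -a /etc/ssh/sshd_config   # pick a port that does not clash with anything on the Windows side
$ sudo service ssh restart
# point the IDE's SFTP/remote-editing support at localhost:2222;
# older WSL builds may also need "UsePrivilegeSeparation no" in sshd_config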

from https://blogs.msdn.microsoft.com/wsl/2016/06/15/wsl-file-system-support/
"DrvFs also disables directory entry caching to ensure it always presents the correct, up-to-date information even if a Windows process has modified the contents of a directory."

fpqc commented Aug 12, 2016

Also @Mika56 the devs have said they are working on improving performance by redesigning parts of how file I/O works. That's probably why they haven't documented exactly how it works yet, since it is very much in flux at the moment.

ajaykagrawal commented Aug 12, 2016

there is no supported way of editing WSL files

I wonder how difficult it would be for popular IDEs and text editors to become WSL-aware and preserve the attributes of WSL files (the security model may have some inconsistencies, but I might be OK with that).


fpqc commented Aug 12, 2016

@ajaykagrawal Some text editors have the ability to save in place rather than overwrite. I heard someone say that they enabled that setting in one text editor or another and it worked.

ajaykagrawal commented Aug 12, 2016

Thanks @fpqc. I don't quite know what saving in place means, but I hope more editors are updated to have this ability.


Mika56 commented Aug 12, 2016

@fcicq I'm using an IDE for a reason :) I need my IDE, and mounting a folder through SSH, FTP, SMB or whatever would be no improvement on my current environment

fpqc commented Aug 12, 2016

@ajaykagrawal Basically, instead of writing a new file and saving it over the old one, the text editor computes a delta and applies it to the existing file; at least that's my guess.

Contributor

aseering commented Aug 12, 2016

In the old days, whenever you hit "save", text editors saved into your existing file. This was simple and efficient, but if your editor crashed or was killed mid-save, or if you ran out of disk space, or if anything else weird happened, you might end up with a corrupt, garbled mess that's half the old file and half the new file. Or it might just be completely empty / wipe out all of your data entirely, if your app implemented "save" with an initial truncate, which is easy to do by default on Linux at least.

These days, text editors almost always "save" by creating a temporary file in the same directory, writing the entire new file contents to the temporary file, and then moving the temporary file on top of the old file, overwriting the old file in the process. "Move" within a directory is guaranteed in most cases to be strictly atomic -- even if your computer loses power mid-Move, you'll always be left with either the new file or the old file; not a corrupted intermediate.

Clever, right? Saves lots of "I lost my data!" headaches. Thing is, if an application creates an arbitrary new file, there's no way for the OS to know that it's supposed to have the same magic permissions as some other existing file. That's what's breaking WSL.

Some editors let you go back to the old-style behavior. Has more risk, but avoids issues in some situations.
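
A minimal sketch of the two strategies in shell terms (a single file on one filesystem is assumed; the file name is arbitrary):

# old style: write straight into the existing file (fast, but a crash mid-write corrupts it)
$ printf '%s\n' "new contents" > notes.txt

# new style: write a temporary file in the same directory, then rename it over the original.
# rename() within one filesystem is atomic, but the new inode does not inherit the old
# file's extended attributes, which is what breaks WSL's metadata here.
$ printf '%s\n' "new contents" > notes.txt.tmp
$ mv notes.txt.tmp notes.txt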

fpqc commented Aug 12, 2016

@aseering it looks like Windows move preserves those attribs, or at least robocopy does?

Contributor

aseering commented Aug 12, 2016

Read more carefully: It does an excellent job of preserving the attributes of the wrong file :-)

[EDIT] I can't speak for robocopy, just regular move, but what you want in this scenario is the opposite of what you want in most scenarios.

fpqc commented Aug 13, 2016

@aseering yep, I am aware. I just thought that Windows command-line tools like robocopy preserve even hidden NTFS attributes (moving is different from editing a file, since the atomic operation is done at the filesystem level, I think).

Contributor

aseering commented Aug 13, 2016

@fpqc -- ah, I see what you're getting at. Yeah, applications can copy extended attributes, if they're programmed to do so correctly.

Adraesh commented Aug 16, 2016

Exactly the same "issue" as detailed above, as I am a Symfony dev as well.

My SF projects are all located in /mnt/d/.... My apache2 server (running on Ubuntu bash) has its root folder linked using a symlink, and the performance is awful...

This also leads to some issues with Symfony's cache system, btw.

baroso commented Sep 7, 2016

I've not noticed any performance issues with my IDE IntelliJ/PHPStorm, but the execution of PHP/Symfony code (running LAMP/Symfony with SuluCMS installed) is between 4-10 times slower (total execution time) than on a normal Ubuntu system ...

WSL 14915: (screenshots of first and second run timings)

Ubuntu 14.04: (screenshots of first and second run timings)

Why is it so slow?
Are there any workarounds, maybe moving the vendors folder of Symfony?

If you can fix this issue, it will be the best environment for developing true .NET and true PHP on Windows!!!
Thanks, keep up the great work!!!

baroso commented Oct 12, 2016

Is this issue being worked on?

fpqc commented Oct 12, 2016

@baroso Yep. It's one of their top priorities after getting all the web-programming stuff working.

nickjj commented Nov 19, 2016

@fpqc Is there another issue to watch to see progress?

Slow mount performance is the only thing holding me back from upgrading to Windows 10 and using WSL.

fpqc commented Nov 19, 2016

@nickjj All I know is that the devs have said that it is a top priority, but they haven't released details on how they are going to pull it off. It did sound like they have an idea of how to do it though.

pachkovsky commented Jan 24, 2017

Any updates on this?


fpqc commented Jan 24, 2017

nope. Mum's the word on this'n. Major performance improvements I think are targeted for RS3 not RS2.

nickjj commented Jan 24, 2017

What does RS3 mean in the grand scheme of things? Will it still make it into the next official big update?

fpqc commented Jan 24, 2017

RS3 Insider builds will probably be rolling out in May. Release in December?

nickjj commented Jan 24, 2017

Thanks. Guess I'll come back next year. Hopefully things are ironed out.

Using the insider's build is out of the question for me due to the severe breach of privacy that it entails.

Collaborator

bitcrazed commented Feb 13, 2017

@ajaykagrawal As a general rule of thumb, whenever the question "I wonder how difficult it would be" is asked, assume that the answer is "very, very difficult", and/or "takes an enormous amount of time with no guarantee of success" ;)

In this case, imagine trying to get the owner of every application that opens and saves text files to modify their apps to support a different way of opening the files without read locks, and writing changes to those files without destroying extended properties managed by an external process.

If the underlying issue was simple to solve, we'd likely have already solved it. However, yes, we are very aware of the issues and we do aim to work on improving Windows <--> Linux filesystem interop in the future.

Contributor

aseering commented Feb 26, 2017

I realize this is a hard problem, but I'm just reporting that I'm still (build 15042) finding DrvFs to be much slower than Linux for my common use cases, to a degree that impacts my productivity somewhat.

A good specific representative-example workload that I would like to see optimized is the Boost build process:

http://www.boost.org/doc/libs/1_61_0/more/getting_started/unix-variants.html#easy-build-and-install

The build steps should be heavily CPU-bound in the compiler; that's fine. But I would love to see the final ./b2 install command run faster. In my experience, it's many times slower on Windows than on Linux.

Note that Boost's build system is cross-platform. And the final install command is just as slow under regular Windows :-) It would be wonderful if the Windows version got faster too.
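
For reference, the steps on that page boil down to roughly the following (a sketch based on the linked 1.61 guide; the install prefix is just an example), with the final install step being the part that is painfully slow on DrvFs:

$ ./bootstrap.sh --prefix=/usr/local
$ sudo ./b2 install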

fpqc commented Feb 26, 2017

@aseering fwiw, the major performance improvements were not planned for RS2. Even 2 months ago the devs were saying it was planned for RS3.

Contributor

aseering commented Feb 26, 2017

@fpqc -- yep, I'm just keeping the ticket alive :-)

matthewrk commented Mar 10, 2017

Is there any workaround to this for the time being? I'm considering switching to Windows for my main OS (currently macOS), but this presents a roadblock for me: I'm seeing around 1s added to RoR response times, which makes development using WSL a bit too cumbersome.


xob commented Mar 10, 2017

@matthewrk Not really a viable workaround for everyone, but what I'm doing in the mean time is running unison to sync files from /mnt/c/pick/a/folder to ~/pick/a/folder, and then I'm running ruby (in my case) from ~/pick/a/folder instead of straight from /mnt/.

It's still slower than running a linux VM, but it's a lot better and good enough for my development needs.

matthewrk commented Mar 10, 2017

@xob thanks for the suggestion, that works pretty well! How did you get around the fact that everything in /mnt/c/ is owned by root? Once it's synced over to my ~/destination folder it's still root:root, so my app can't write its logs etc.


xob commented Mar 10, 2017

@matthewrk I have never had problems with root, but I believe that the reason I have never had problems with this is that I use unison with the -perms 0 parameter. If I recall correctly, that parameter means that permissions are not copied when files are synced, and gives the permissions to the user running unison instead.

If that doesn't do it, here's the full unison command that I run: unison -perms 0 -times -auto -batch ~/destination /mnt/c/source -repeat 5

Hope it helps!
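
A rough end-to-end sketch of this workaround (the project paths are illustrative, the sync interval is taken from the command above, and the last line just reuses the Symfony dev server from the original report):

$ sudo apt-get install -y unison
$ mkdir -p ~/myapp
# keep the fast ~/myapp copy in sync with the Windows-side copy, ignoring DrvFs permissions
$ unison -perms 0 -times -auto -batch ~/myapp /mnt/c/projects/myapp -repeat 5 &
# run the app from ~/myapp while editing the files under C:\projects\myapp as usual
$ cd ~/myapp && php bin/console server:run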

matthewrk commented Mar 10, 2017

@xob Thanks! It turns out that although the files are root:root, they're also 0777, but my sync was only persisting the owner, not the permissions, so with a little fix it should all be working. I appreciate your help :)


nickjj commented May 1, 2017

@xob Do you have any benchmarks on comparing unison to a Linux VM? What type of apps are you developing?

I'm using a Linux VM currently and it's all great, but since inotify works with WSL now I'm willing to try it again once the performance issues are cleared up (with unison for now).

xob commented May 1, 2017

@nickjj I work on a SaaS application developed in Ruby on Rails (Ruby 2.2.5, Rails 4.1.16).

As for benchmarks, I can easily compare the app running in development environment (no compressed assets or anything), on both Bash on Windows and in a Docker container (which is native Linux, granted it's Debian, not Ubuntu, but that shouldn't make much of a difference).

The benchmark I am doing is a measure of the time it takes to do a full-page refresh (ctrl+F5) on one page of our app, once logged in. The measure is taken 5 times, and here are the averages:
Bash on Windows, in /mnt/c: 32.322 seconds
Bash on Windows, in /home/user (unison): 16.552 seconds
Docker container (Debian): 5.208 seconds

According to the benchmark, at least in my case, using unison to run my server from /home/user makes my app twice as fast compared to running it from /mnt/c. However, native Linux (in my case Docker, but a VM would have similar results) is still about 3 times as fast as running from /home/user.

I hope that answers the question!

nickjj commented May 1, 2017

Thanks @xob, that clears it up but what hardware specs are you using to get those figures?

I'm using Docker too ("natively" on Linux (xubuntu GUI) with unity mode on VMware) and I'm very happy with the dev speed on my Rails apps. It's basically the same speed as a real native Linux box without VMWare.

Honestly, I don't think I could deal with a 15s wait on each page refresh. Now, of course that's tied into your app's size and computer specs but a 3x difference is pretty big. You have more patience than me!

I think I'll continue holding off on WSL and just see what MS comes up with later this year. I only want WSL because unity mode is no longer officially supported by VMWare for Linux guests, so I'm using an aged distro of xubuntu (14 LTS) since 16 does not work.

xob commented May 1, 2017

@nickjj The benchmarks were all run from the same machine. All of my app dependencies (database, etc) run in Docker containers, in all cases. The only difference between the tests is the rails server.

The hardware specs of my machine:
Intel Core i7 4770 CPU @ 3.40GHz
16GB RAM
Kingston SSDNow V300 SSD (240GB)

Not sure that the rest of the specs (which I am too lazy to look up) are relevant.

fpqc commented Mar 4, 2018

@bitcrazed I know MS usually doesn't do this, but I was wondering if you guys could post your current technical assessment of the file I/O bottleneck and maybe how you guys are attacking it? It's been a really long time since the last technical WSL blog entry.

oranja referenced this issue in oh-my-fish/oh-my-fish, Mar 16, 2018: OMF slows down WSL Ubuntu terminal. #598 (Closed)

Fabyao referenced this issue in Microsoft/linux-vm-tools, Mar 26, 2018: 16.04 - Configuration / Login Issue #15 (Closed)

csvan commented Apr 16, 2018

WSL is excellent overall, but this is a serious drawback. We mostly do Node development and deal with large install sets (60k+ files) and compilation targets. We noticed a major slowdown across the board after moving to WSL from Cygwin - even slower than running Ubuntu in a VM in some cases.

Thanks for all the hard work, will be great to see this resolved eventually.

fpqc commented Apr 16, 2018

@tara-raj I asked Rich a while ago, but since you're the new PM, do you think this could be arranged?

#873 (comment)

morgan-greywolf commented May 13, 2018

While you’re waiting on the fix, one way to work around the inability to modify files under WSL without losing filesystem attributes is to install unfsd under WSL, and then access the exported filesystem using Services for NFS on the Windows side. This only works on certain Windows 10 editions, such as Enterprise.

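A hedged sketch of how that setup might look (the export path, options, and drive letter are examples only; unfsd here is the unfs3 daemon, built from source as noted in the follow-up below):

$ sudo apt-get install -y rpcbind        # unfsd needs an RPC port mapper
$ echo '/home/mikael 127.0.0.1(rw,no_root_squash,insecure)' | sudo tee /etc/exports
$ sudo service rpcbind start
$ sudo unfsd                             # serves the exports defined in /etc/exports
# then, from an elevated Windows prompt with Client for NFS enabled:
#   mount -o anon \\127.0.0.1\home\mikael Z: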

fpqc commented May 13, 2018

Does that actually work now? @therealkenc ?

morgan-greywolf commented May 13, 2018

I use it all the time. Note that you have to use unfsd, not knfsd. Ubuntu dropped it around 16.04 so you have to compile it from source. You might find cases where it doesn’t work as unfsd does not support NFS locks. You also need an RPC port mapper, of course.


fpqc commented May 13, 2018

@morgan-greywolf Yeah, but Ken was trying to get it to work a while ago, but he found a lot of problems with it. I was just pinging him since I thought he might find it interesting.

Collaborator

therealkenc commented May 13, 2018

but Ken was trying to get it to work a while ago

At the time I was using nfs-ganesha (as opposed to unfsd3) and fuse-nfs (as opposed to Windows Pro/Server NFS). If Morgan's combination is working usefully that is good news.

But given that Spring Creators (aka Redstone 4 aka 1803 aka 17134) supports Linux metadata, it seems harder to justify going this route. I can't see it solving anyone's performance problems, per this issue topic. Which is one major reason I never took the experiment beyond PoC.

Since I am here; I am finding, anecdotally, that WSL performance has degraded since this issue was opened. Too much water under the bridge like hardware upgrades for me to make any definitive statements, and I don't have concrete metrics. But back in Fall of 2016 I was compiling Chrome under WSL and it was somewhat tolerable. On recent releases, setting aside that WSL craps the bed compiling Chrome, compiling stuff like systemd is too slow to bother. It is more productive to compile in a VM and copy over the binaries.

Member

benhillis commented May 13, 2018

Unfortunately, much of the new slowness is due to new Defender security features. We are working with them to figure out how they can be less of a hindrance to our perf.

Collaborator

therealkenc commented May 13, 2018

I conjecture, but can't prove, that make -j 8 doesn't multithread like it used to, and writes are being serialised. Does that ring any bells? I could be absolutely incorrect here because I haven't explored the theory to any extent. Mostly grasping (wildly) for other explanations. I turn off Defender as a matter of SOP. Or maybe Defender just doesn't listen too well when being told to go away. I allow for that possibility too.

thisguychris commented May 23, 2018

There's a new benchmark by Phoronix testing real-world usage. TL;DR: I/O is still abysmally slow.

https://www.phoronix.com/scan.php?page=article&item=windows-1804-wsl&num=6

This issue is from 2016, and it's now 2018. I do hope the WSL team shows us what's planned to address the file I/O issues, without being too technical.


andrey-skat commented May 23, 2018

Using Ruby/Node.js is very painful under WSL. But I think it's not only WSL itself but also NTFS + Defender + the Search indexer. When I run bundle install or npm i, these three slow everything down. Even deleting many small files using Explorer is veeery slow. Ubuntu on the same laptop is blazing fast.

PS: I moved from Ubuntu after many years of use when they killed Unity. Windows 10 is good, but I can't work normally because of speed issues. I don't like Mac, but now I'm thinking more about giving it a chance :(

BookOfGreg commented May 23, 2018

Is it sensible or possible to configure Defender + Search indexer to ignore WSL folders?
If so does it improve the performance much?


Collaborator

therealkenc commented May 23, 2018

Is it sensible or possible to configure Defender + Search indexer to ignore WSL folders?

Anecdotally yes here and here. There is a claim otherwise (no comment links in UserVoice, sigh; search on Algoinde) but I wouldn't vouch for that.

"Doesn't hurt"

oblitum commented May 23, 2018

Folder exceptions don't work; at least the last time I tried they didn't. I commented on that here.

Member

SvenGroot commented May 24, 2018

Trust me, we're well aware of the pain you feel due to the (lack of) speed when it comes to file system operations. The fundamental difficulty comes from Windows's entirely different I/O stack design, which unfortunately doesn't always lend itself well to performing Linux's I/O operations efficiently, especially given the constraint that we have to emulate Linux behavior exactly (unlike a Windows port of an application, like e.g. Git for Windows, which can work around some of the problems by adapting their behavior to follow the Windows model).

We are constantly attempting to improve our I/O performance, but there are a lot of players involved here. We in the WSL team are making changes to ensure the overhead of WSL's VFS layer and file system plugins (LxFs and DrvFs) is as low as possible. We're also working with the NTFS team to let them know our biggest pain points so they can optimize their code. And we're talking to people who own various inbox file system filters (such as Defender) to see how they can reduce their impact. At the same time, some of these teams (again, primarily Defender) are adding features and making security improvements that unfortunately sometimes decrease performance. It's a complicated balancing act. On top of that, for you, our customers, your performance will also be affected by any third-party file system filters you have installed on your system (e.g. third-party anti-virus).

Because of the complexity of our ecosystem, making changes can be a complicated and slow process, for both features and performance improvements. For example, in the Creator's Update we introduced a new file system API that allowed us to drastically improve the performance of "stat" calls, but we had to work with our filter community (both inbox and third-party) to make sure they were aware of the changes, had time to implement support for the new features, and that things wouldn't break if you have filters on your system that haven't been updated.

TL;DR: File system performance is hard, but we're continuously doing what we can to improve the situation. We're aware of your pain, and we're not ignoring the problem.

fpqc commented May 24, 2018

@SvenGroot Would be interested to see a blogpost on this!

Collaborator

therealkenc commented May 25, 2018

Placeholder ref #2626 (which is an inch from being duped) for when I get a chance to come back to this thread.

thisguychris commented May 25, 2018

@SvenGroot thank you for your detailed post. Is patching NTFS a feasible way to make WSL just as fast as native Linux? WSL has been reported to be even faster on compute benchmarks already; the only real bottleneck is I/O. I wish Microsoft would capitalize on their WSL momentum: if they fix I/O, there would probably be a significant number of switchers from other OSes to Windows.


thisguychris commented Jun 4, 2018

@SvenGroot and @tara-raj I hope you can post some technical assessment of the file I/O bottleneck as mentioned by @fpqc here: #873 (comment)

Is working with NFS a viable solution for the I/O bottleneck?


Collaborator

bitcrazed commented Jun 4, 2018

@thisguychris NFS is a distributed/network file system protocol, and won't solve the underlying file storage perf challenges @SvenGroot describes above.

As with most of the features we've blogged and posted over the last 2+ years, when we release noticeable performance improvements, we'll provide some description of what the issue was, and how it was addressed.

Until then, we appreciate your continued patience.

thisguychris commented Jun 5, 2018

hey @bitcrazed we missed you! thanks for the update. 👍


Collaborator

bitcrazed commented Jun 5, 2018

ROFL :) Thanks @thisguychris :) I am still here, but focusing more of my time on Console futures, having handed off most of the WSL futures responsibility onto @tara-raj.

Keep the feedback coming y'all - and thanks for all your continued support & patience ;)

patrikhuber commented Jul 12, 2018

@SvenGroot I understand there are all the components you mention involved, and there are quite complex interactions. However, from my perspective as a user, it looks to me like 99% of the issue(s) is coming from Windows Defender. And it's getting worse and worse. In Windows 7 and 8.1, I hardly ever saw any activity from the Windows Defender process in the Task Manager, in terms of either CPU or disk usage. When WSL was initially released (or the beta, or whatever it was), I think I also didn't see Windows Defender activity very often. However, over the past one/two/three years, I see massive Windows Defender process activity in the Task Manager as soon as I do anything remotely related to the filesystem, and it's slowing everything down massively, regular Windows usage as well as WSL. Unzip a file with WinRAR, and Windows Defender goes to 100% CPU on one core and 100% disk access (it presumably scans every file...). Write some files (e.g. images) to disk while running some research code in Visual Studio, and Windows Defender instantly goes up in CPU and disk usage and seems to check everything.

So it looks to me that the main culprit here is Windows Defender, because it checks everything and anything that has remotely to do with file access or writing, which takes up particularly many resources if lots of small files are involved. I can literally start any disk-intensive task and immediately watch Defender's disk access go up massively. That would also explain nicely why the only solution that really works for the WSL disk performance issues is to turn off Defender, after which everything immediately works flawlessly (and I do have a fast SSD). So I'd say we can sugarcoat this all we want, but the issue is just that Windows Defender aggressively checks everything you do, and as soon as you turn it off everything works great (except that it's not a workable solution to turn off protection completely...).

DarthSpock commented Jul 12, 2018

@patrikhuber Having direct hardware access would also help alleviate the problem. You don't see the same type of atrocities on Windows (in general), only in WSL. We just need a kernel API that allows WSL to talk to Windows drivers like everything else does. Even with Defender disabled, you still don't see native Windows perf for large compilations. It's more tolerable but that isn't the underlying issue. The hardware access is. That said, anything Defender can do (as well as any other third-party solution) to minimize its perf cost is still very much desired since it does still affect even Windows only stuff. It wasn't that long ago that a perf issue came up in VS Code that forced the CPU to be 100% utilized (on both windows and wsl) and they fixed it fairly quickly so it is reasonable to expect changes in Defender but it's a different type of beast that requires more careful attention and will take more time as a result.


oblitum commented Jul 12, 2018

@DarthSpock, as I commented long ago on this thread (that comment is already collapsed by default), Defender's real-time protection can make WSL almost unusable. Yes, Defender is not the only issue here, and Microsoft's I/O layer implementation for WSL isn't great either, but the degree to which Defender makes things worse is like 100/1. Defender ON with fast I/O is still unusable; Defender OFF with slow I/O is usable.

MikeGitb commented Jul 12, 2018

I'm with Patrik Huber here: the main problem is not the raw I/O performance of WSL but Defender.

Of course, there might be techniques to implement the WSL I/O in a way that is less impacted by Defender, but fundamentally, what we need is an easy way to COMPLETELY TURN OFF Defender for all WSL-related stuff - at least on a per-app basis.


DarthSpock commented Jul 12, 2018

COMPLETELY TURN OFF Defender for all WSL-related stuff - at least on a per-app basis

On Insider builds, pico processes are treated the same as Windows processes in Defender. In other words, you can create as many exceptions the "right" way as you like. Yet there are still performance issues for those able to use this new feature. I agree that I/O isn't terrible, but I also know that anybody who does development on a system with high-performance hardware isn't able to take advantage of that hardware. Take SSDs, for instance: the I/O of an SSD on WSL pales in comparison to Windows. So to say raw I/O perf of WSL isn't part of the issue here is illogical. I already said how Defender should improve, but that isn't the only thing needed. I am 100% certain that even if Defender were made somewhat more perf-friendly on WSL (there's no way it's going to go from 100% to 30%, for instance), people would still complain about the perf, and I think they would be right to do so.

Also, your suggestion to turn off Defender isn't a solution; it's a workaround you're requesting to be implemented. Anybody who cares about the security of their system isn't going to just "Turn Off Defender" if that's the only thing they're using. What needs to happen already has in Insiders: make Defender more Linux-process aware. The only other step I think they may want to, or should, take is to make Defender an optional component to be installed if so chosen (with a dialog box asking to install it when Windows detects no anti-malware product installed).

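A rough way to see the raw small-file I/O gap for yourself, independent of Defender settings (directory names are only examples; compare the two wall-clock times):

# Create and delete 1000 small files under the Linux home (VolFs) and under /mnt/c (DrvFs)
mkdir -p ~/io-test /mnt/c/io-test
time bash -c 'cd ~/io-test && for i in $(seq 1 1000); do echo x > "f$i"; done && rm -f f*'
time bash -c 'cd /mnt/c/io-test && for i in $(seq 1 1000); do echo x > "f$i"; done && rm -f f*'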

bmayen commented Jul 12, 2018

"fundamentally" the issue is I/O perf. It is exacerbated by Defender, but Defender isn't the fundamental cause of the problem.

Collaborator

bitcrazed commented Jul 12, 2018

@patrikhuber

"over the past one/two/three years, I see massive Windows Defender process activity … and it's slowing everything down massively, regular Windows usage as well as WSL".

Back when we released WSL, Defender didn't even know what WSL processes were and didn't know how to scan them or their activity. We've worked with the Defender team and several 3rd-party anti-malware teams to better understand WSL and how to more effectively scan and manage its activities. Because distros run atop the Windows I/O, process, memory, networking, etc. infrastructure, anti-malware and similar tools can monitor and manage Linux processes' activity in much the same way as Windows processes. This has actually enabled Linux/WSL to be used in many environments where it was previously not permitted due to the difficulty of managing complex and large-scale desktop environments.

That said, we are well aware that Defender does introduce overhead. And that the Windows file I/O infrastructure isn't as optimal as we'd like. And that the networking stack requires some features to light up WSL capabilities, etc. Many of these improvements are interdependent. Some tread on one another's toes. @tara-raj is working across several teams to remedy these various issues, within the confines of the Windows development process & schedules.

PLEASE BEAR WITH US WHILE WE WORK THROUGH THESE HUGELY COMPLEX ISSUES.

MikeGitb commented Jul 13, 2018

Anybody who cares about the security of their system isn't going to just "Turn Off Defender" if that's the only thing they're using.

I'm willing to bet that everyone who is using any kind of malware protection has some files or processes excluded from monitoring - be it configured by themselves or by the AV vendor. I'm just asking to make this mechanism easier and more effective (my impression in the past was that excluding WSL files and/or processes was much less effective than just turning off Defender completely).

I'm not saying it isn't a workaround, but as with most workarounds, it is something that could probably be implemented much more easily and quickly than the actual solution and would be "good enough" for many scenarios. It would certainly be better than the current state, where people either don't use WSL for "serious" work or turn off Defender completely (not just for their development-related workload).

That doesn't mean an actual solution should not also be developed (although I believe there will always remain a tradeoff between security and performance - you can't get malware protection for free - but hopefully good engineering can bring us to a point where performance is much better without lowering the security bar compared to the current state of affairs).

Anyway, a big thank you to @tara-raj for your effort in that regard and to @bitcrazed for chiming in.

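For completeness, the blunt workaround people describe - turning real-time monitoring off for the duration of an I/O-heavy task and back on afterwards - looks roughly like this when run from WSL via interop in an elevated shell (Set-MpPreference is the standard Defender cmdlet; doing this lowers protection for the whole machine, which is precisely the tradeoff being argued about here):

# Temporarily disable, then re-enable, Defender real-time monitoring (requires admin)
powershell.exe -NoProfile -Command "Set-MpPreference -DisableRealtimeMonitoring \$true"
# ... run the I/O-heavy workload ...
powershell.exe -NoProfile -Command "Set-MpPreference -DisableRealtimeMonitoring \$false"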

Contributor

aseering commented Jul 13, 2018

For what it's worth, I will second the perspective that I've seen file- and directory-based whitelists as a standard practice for achieving acceptable performance on dev machines with antivirus installed. Not just on Windows; on Mac and Linux as well, with a variety of different AV solutions.

I recognize that a "correct" solution here is a fundamentally hard and important problem, and I appreciate all your work on it! But, especially given that everyone is motivated to solve this problem and no one seems to have solved it yet, if Windows's whitelists could easily be made to yield comparable performance to disabling A/V completely when most or all I/O is to whitelisted files, I agree that this would be a very useful intermediate step. (I would kind of expect this to be much easier than improving disk performance generally, though I certainly could be wrong.)
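
As a concrete illustration of the whitelist approach (the directory is only a placeholder; Add-MpPreference is the standard cmdlet, and it needs an elevated shell):

# Whitelist a Windows-side project directory so Defender skips it during real-time scans
powershell.exe -NoProfile -Command "Add-MpPreference -ExclusionPath 'C:\dev\my-project'"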

oblitum commented Jul 13, 2018

Reminder that Defender's directory exclusions won't help here; only turning off Defender's real-time protection helps.

Collaborator

therealkenc commented Jul 13, 2018

directory exclusion won't help here

That's mostly the part I don't get, and I haven't commented because when one "doesn't get" something, one is usually some combination of ignorant and dangerous. A lot of ink has been spilled on pico processes and third-party filter drivers and VERY COMPLEX stuff. Perfectly valid points. Still left with (inside voice): yeahbutt why is Defender looking in the excluded directory in the first place?
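
One easy sanity check (from WSL via interop) is to confirm the exclusions are actually registered with Defender; whether Defender then skips DrvFs accesses under those paths is the part in question:

# List the exclusion paths and process names Defender currently knows about
powershell.exe -NoProfile -Command "Get-MpPreference | Select-Object ExclusionPath, ExclusionProcess"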
