test item scheduling does not schedule N items to N workers #40

Open
nchammas opened this Issue Jan 10, 2016 · 5 comments

nchammas commented Jan 10, 2016

I may just be misunderstanding how xdist works, but I can't seem to get all of the workers I requested to accept tasks in parallel:

$ py.test -v -n 8
================================ test session starts =================================
platform darwin -- Python 3.5.1, pytest-2.8.5, py-1.4.31, pluggy-0.3.1 -- .../pytest-xdist-testcase/venv/bin/python3
cachedir: .cache
rootdir: .../pytest-xdist-testcase, inifile: 
plugins: xdist-1.13.1
[gw0] darwin Python 3.5.1 cwd: .../pytest-xdist-testcase
[gw1] darwin Python 3.5.1 cwd: .../pytest-xdist-testcase
[gw2] darwin Python 3.5.1 cwd: .../pytest-xdist-testcase
[gw3] darwin Python 3.5.1 cwd: .../pytest-xdist-testcase
[gw4] darwin Python 3.5.1 cwd: .../pytest-xdist-testcase
[gw5] darwin Python 3.5.1 cwd: .../pytest-xdist-testcase
[gw6] darwin Python 3.5.1 cwd: .../pytest-xdist-testcase
[gw7] darwin Python 3.5.1 cwd: .../pytest-xdist-testcase
[gw0] Python 3.5.1 (default, Dec  7 2015, 21:59:10)  -- [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)]
[gw1] Python 3.5.1 (default, Dec  7 2015, 21:59:10)  -- [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)]
[gw2] Python 3.5.1 (default, Dec  7 2015, 21:59:10)  -- [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)]
[gw3] Python 3.5.1 (default, Dec  7 2015, 21:59:10)  -- [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)]
[gw4] Python 3.5.1 (default, Dec  7 2015, 21:59:10)  -- [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)]
[gw5] Python 3.5.1 (default, Dec  7 2015, 21:59:10)  -- [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)]
[gw6] Python 3.5.1 (default, Dec  7 2015, 21:59:10)  -- [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)]
[gw7] Python 3.5.1 (default, Dec  7 2015, 21:59:10)  -- [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)]
gw0 [8] / gw1 [8] / gw2 [8] / gw3 [8] / gw4 [8] / gw5 [8] / gw6 [8] / gw7 [8]
scheduling tests via LoadScheduling

test_tests.py::test_1[0] 
test_tests.py::test_1[2] 
test_tests.py::test_2[0] 
test_tests.py::test_2[2] 
[gw0] PASSED test_tests.py::test_1[2] 
[gw7] PASSED test_tests.py::test_1[0] 
[gw2] PASSED test_tests.py::test_2[0] 
[gw4] PASSED test_tests.py::test_2[2] 
test_tests.py::test_2[1] 
test_tests.py::test_1[3] 
test_tests.py::test_1[1] 
test_tests.py::test_2[3] 
[gw7] PASSED test_tests.py::test_1[1] 
[gw0] PASSED test_tests.py::test_1[3] 
[gw4] PASSED test_tests.py::test_2[3] 
[gw2] PASSED test_tests.py::test_2[1] 

============================= 8 passed in 64.12 seconds ==============================

Watching the output as it arrives, it looks like the second block of 4 tests doesn't start running until the first block of 4 completes.

Am I misunderstanding something? Why don't all 8 workers start running tests at the same time? Why doesn't gw1, for example, get any tests?
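
For reference, a minimal test file along these lines reproduces the run above. This is a hypothetical reconstruction, since the original test_tests.py is not shown; the sleep stands in for the I/O-bound work:

import time
import pytest

@pytest.mark.parametrize("i", range(4))
def test_1(i):
    time.sleep(30)  # stand-in for I/O-heavy work

@pytest.mark.parametrize("i", range(4))
def test_2(i):
    time.sleep(30)  # stand-in for I/O-heavy work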

nonatomiclabs commented Jan 11, 2016

Do you have 8 physical cores on your computer?

If not, I think it is normal to see only 4 processes active.

RonnyPfannschmidt commented Jan 11, 2016

It clearly starts 8 workers. However, due to a design problem that's scheduled to be fixed in the next bigger sprint, it distributes tests in batches of 2 items, so the scheduling is rather subpar.

For your own case, though, the problem will alleviate as the number of tests grows.

AFAIR this problem was noted on the py.test issue tracker already

It's also related to #17, #18, and #20.
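
To illustrate the batching problem (a simplified sketch, not xdist's actual scheduler code; the distribute helper is made up for illustration): with 8 collected tests handed out in fixed chunks of 2, only 4 of the 8 workers ever receive any work, matching the run above where only 4 gw* processes executed tests (which specific workers get picked varies in practice).

def distribute(items, workers, chunk=2):
    # Hand each worker one chunk of `chunk` items until the items run out;
    # any worker reached after that point gets nothing.
    assignments = {w: [] for w in workers}
    it = iter(items)
    for w in workers:
        batch = [x for _, x in zip(range(chunk), it)]
        if not batch:
            break
        assignments[w] = batch
    return assignments

workers = ["gw%d" % i for i in range(8)]
items = ["test_%d[%d]" % (n, i) for n in (1, 2) for i in range(4)]
result = distribute(items, workers)
for w in workers:
    print(w, result[w])
# gw0..gw3 each get 2 tests; gw4..gw7 stay idle.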

@RonnyPfannschmidt changed the title from "xdist doesn't spin up the requested number of workers" to "test item scheduling does not schedule N items to N workers" on Jan 11, 2016

nchammas commented Jan 11, 2016

@filaton

Do you have 8 physical cores on your computer?

If not, I think it is normal to see only 4 processes active.

I only have 2 virtual cores. I don't think this is normal, as @RonnyPfannschmidt's comment confirms. I expect the 8 workers to all start accepting tasks at the same time. This should have nothing to do with my physical cores.

Of course, if I run 8 tasks on a machine with 2 cores, that means there will be a lot of context switching between the tasks. That's fine with me since my tests are I/O-heavy.

@RonnyPfannschmidt

AFAIR this problem was noted on the py.test issue tracker already

Do you have the py.test issue number?

Going forward, should xdist issues be reported here or on the main py.test issue tracker? I only searched here before reporting this issue. Apologies if it's a dup. I think it would be better if xdist issues were tracked exclusively in the xdist repo.

nonatomiclabs commented Jan 11, 2016

@nchammas Sorry, my fault. In fact, I meant that if you have only 4 cores, it's normal for tests to run in groups of 4 (assuming they all take the same amount of time).
I'll avoid commenting on issues in the morning… ;)

RonnyPfannschmidt commented Jan 11, 2016

A current problem with xdist is that the runtest protocol needs a next item. The issue arises from a legacy problem, introduced when non-function-scoped fixtures/caches were added, and is very likely unfixable before pytest 3.0.
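
For context on the protocol constraint mentioned here: pytest's runtest hook receives the next item alongside the current one, so a worker needs at least two pending tests before it can safely run one, which fits the batches-of-2 behavior described above. A minimal conftest.py sketch of the hook signature (the print is purely illustrative):

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    # pytest needs `nextitem` up front so that, once `item` finishes, it can
    # decide which non-function-scoped fixtures (module/session) to tear down.
    # `nextitem` is None only for the very last test of the session.
    print("running %s, next: %s" % (item.nodeid, getattr(nextitem, "nodeid", None)))
    yield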
