
option to use deep storage for storing shuffle data #11507

Merged · 15 commits · Aug 13, 2021

Conversation

pjain1
Member

@pjain1 pjain1 commented Jul 28, 2021

Fixes #11297.

Description

Description and design in the proposal #11297


Key changed/added classes in this PR
  • *DataSegmentPusher
  • *ShuffleClient
  • *PartitionStat
  • *PartitionLocation
  • *IntermediaryDataManager

This PR has:

  • been self-reviewed.
  • added documentation for new or modified features or behaviors.
  • added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
  • added integration tests.
  • been tested in a test Druid cluster.

Contributor

@maytasm maytasm left a comment

Overall implementation looks good to me. Please add some unit tests before merging (especially on the new classes like DeepStorageShuffleClient, DeepStorageIntermediaryDataManager, etc.)

segment.getInterval(),
bucketNumberedShardSpec.getBucketId() // we must use the bucket ID instead of partition ID
);
return dataSegmentPusher.pushToPath(segmentDir, segment, SHUFFLE_DATA_DIR_PREFIX + "/" + partitionFilePath);
Contributor

The temporary zip file created will no longer be at taskConfig.getTaskTempDir(subTaskId).
Will this be a problem? Should we document this change?

Member Author

I think the path taskConfig.getTaskTempDir(subTaskId) is more relevant for local storage than for deep storage. For deep storage it would make sense to have a fixed, separate directory for shuffle data, so that either a Coordinator duty can clean it up or the directory can be marked for auto-cleanup. I can document this, and if required make the prefix configurable.

Contributor

taskConfig.getTaskTempDir(subTaskId) is not the location for storing the segment in local storage. The config points to the temp directory for the zipped file before it is moved to StorageLocation#path/supervisorTaskId/startTimeOfSegment/endTimeOfSegment/bucketIdOfSegment.

Deep storage also creates a zip file (same as local storage) before copying it over to the final deep storage location. However, this temporary zip file will not be created at taskConfig.getTaskTempDir(subTaskId); it will be created via File.createTempFile("druid", "index.zip").

I am not sure of the exact purpose of taskConfig.getTaskTempDir(subTaskId), but it no longer holds as the temporary zip file location before pushing to deep storage.
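For illustration only (this is not Druid code, and all paths are hypothetical), a small Java sketch of the difference between the two temp-file locations being discussed:

```java
import java.io.File;

// Illustrative sketch, not Druid's actual implementation: contrasts the two
// temp-zip locations discussed in this thread. All paths are hypothetical.
public class TempZipLocationSketch {
    public static void main(String[] args) throws Exception {
        // File.createTempFile with no directory argument places the file under
        // the JVM-wide java.io.tmpdir, outside any task-managed directory.
        File jvmTemp = File.createTempFile("druid", "index.zip");
        System.out.println("JVM temp dir file: " + jvmTemp.getAbsolutePath());

        // A file under taskConfig.getTaskTempDir(subTaskId) would instead live
        // inside the task's own directory, which is cleaned up automatically
        // when the task fails (hypothetical path below).
        File taskTemp = new File("/druid/task/subTaskId/temp", "index.zip");
        System.out.println("Task temp dir file: " + taskTemp.getPath());

        // The deep-storage manager deletes its temp zip after a successful push.
        jvmTemp.delete();
    }
}
```

The practical difference is who cleans the file up if the task dies mid-push: the task framework for the task dir, nobody (until OS tmp cleanup) for java.io.tmpdir.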

Member Author

I see what you are saying; the temporary zip file created by DeepStorageIntermediaryDataManager will be cleaned up after the push, so I don't think it matters.

Contributor

That was my thought too; however, I'm not sure why it is configurable and exposed as a taskConfig. I tried looking at the PR that added this taskConfig but am still not sure. Maybe @jihoonson can confirm? Thanks!

Contributor

Hmm, where will that temp file be created? My concerns are two-fold. In general, I prefer having all temp files in one place; it simplifies the problem of cleaning them up.

  1. How will the temp file be deleted if the task fails before it deletes the file? taskTempDir is cleaned up automatically after a task failure.
  2. Users could want to use a particular disk for ingestion temp files and allocate a reasonable amount of disk for taskDir.

Member Author

@pjain1 pjain1 Aug 5, 2021

My thought here is simple: tasks already push segments to deep storage, and they use some temporary space for that. I am just using the same mechanism, so the temp file will have the same lifecycle as a segment being pushed. Am I missing something here?

Contributor

@jihoonson jihoonson left a comment

@pjain1 thank you for the PR! It looks nice. I left a couple of comments especially on the interface design. Also please check the CI failure. It seems legit.

return Paths.get(getPartitionDir(supervisorTaskId, interval, bucketId), subTaskId).toString();
}

default String getPartitionDir(
Contributor

nit: maybe getPartitionDirPath()
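For reference, a minimal sketch (with hypothetical IDs, not the actual Druid interface) of the partition directory layout described in this thread, supervisorTaskId/startTime/endTime/bucketId, with the sub-task ID appended for the partition file:

```java
import java.nio.file.Paths;

// Illustrative sketch with hypothetical IDs; mirrors the path layout discussed
// in this review, not the actual Druid interface.
public class PartitionPathSketch {
    static String getPartitionDir(String supervisorTaskId, String start, String end, int bucketId) {
        // StorageLocation-relative layout: supervisorTaskId/startTime/endTime/bucketId
        return Paths.get(supervisorTaskId, start, end, String.valueOf(bucketId)).toString();
    }

    public static void main(String[] args) {
        String dir = getPartitionDir("supervisor_task_1", "2020-01-01", "2020-01-02", 0);
        // The partition file for one sub-task sits under the bucket directory.
        String partitionFile = Paths.get(dir, "sub_task_7").toString();
        System.out.println(partitionFile);
        // On a '/'-separator platform: supervisor_task_1/2020-01-01/2020-01-02/0/sub_task_7
    }
}
```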

@zachjsh
Contributor

zachjsh commented Jul 29, 2021

@pjain1 , thank you for your contribution! This is awesome. Would you please add unit tests for the DeepStorageIntermediaryDataManager, DeepStorageShuffleClient, DeepStoragePartitionLocation, and DeepStoragePartitionStat classes, and integration tests as well? We want to make sure that this feature does not break between releases. For the integration tests, you should be able to piggyback on the existing S3 deep storage integration tests.

- test of deep store intermediatery data manager
- change PartitionStat and PartitionLocation to interface
@pjain1
Member Author

pjain1 commented Aug 2, 2021

> Seems like only subTaskId and loadSpec are in use in this class which confuses me why this class needs other fields. I suggest to add a new interface for PartitionLocation and let DeepStoragePartitionLocation implement it instead of extending GenericPartitionLocation because GenericPartitionLocation is designed for local storage for shuffle. Since we already have an abstract class of the same name of PartitionLocation, you will have to rename it or use another name for the new interface.

@jihoonson I moved PartitionLocation to be an interface but realized there are other abstract classes, like PartialSegmentMergeTask, PartialSegmentMergeIOConfig, and PartialSegmentMergeIngestionSpec, similar to PartitionLocation, which would ideally also need to be made interfaces, each with two implementations based on the PartitionLocation class. However, I don't see what would be gained from interfacing these, as the logic in these classes will be the same for any PartitionLocation as of now; it seems like generalizing too much. I think we can keep DeepStoragePartitionLocation an extension of GenericPartitionLocation to avoid this problem, but some of the fields would be redundant. What do you think?

@@ -106,7 +111,7 @@ public DataSegment push(final File indexFilesDir, DataSegment segment, final boo
}

segment = segment.withSize(indexSize)
-                 .withLoadSpec(ImmutableMap.of("type", "c*", "key", key))
+                 .withLoadSpec(ImmutableMap.of("type", "c*", "key", storageDirSuffix))
Member Author

Thanks, it's a mistake.

@zachjsh
Contributor

zachjsh commented Aug 2, 2021

@pjain1, thanks again for your contribution. We'd like to help you accelerate this PR if possible. What is your timeline for getting this in, with proper unit tests and integration tests written? This week, two weeks, a month? What is a good email for us to use to communicate with you?

@pjain1
Member Author

pjain1 commented Aug 2, 2021

@zachjsh I want to get this in as soon as possible. I added a few tests and was adding more, but I need some clarification on this. My email is pjain1[at]apache[dot]org. Once things are clear I can add the unit tests, which should not take much time; I am not familiar with the IT test framework, so that may take some time. Overall I'm targeting to finish this week.

@zachjsh
Contributor

zachjsh commented Aug 2, 2021

> @zachjsh I want to get this in as soon as possible I added few tests and was adding more but need some clarification on this. My email is pjain1[at]apache[dot]org. Once things are clear I can add unit tests that should not take much time, I am not familiar with IT test framework so that may take some time so overall targeting it to finish this week.

Thanks for your quick reply @pjain1. Happy to hear that you are trying to get this in as soon as possible. We will work with you through any doubts or uncertainties that you have. I'll let @jihoonson reply to the comment that you linked in your previous message. As for adding an integration test for this, I think ITPerfectRollupParallelIndexTest is a good example test to base this on, or extend.

@jihoonson
Contributor

> Seems like only subTaskId and loadSpec are in use in this class which confuses me why this class needs other fields. I suggest to add a new interface for PartitionLocation and let DeepStoragePartitionLocation implement it instead of extending GenericPartitionLocation because GenericPartitionLocation is designed for local storage for shuffle. Since we already have an abstract class of the same name of PartitionLocation, you will have to rename it or use another name for the new interface.

> @jihoonson I moved PartitionLocation to be an interface but realized there are other abstract classes like PartialSegmentMergeTask, PartialSegmentMergeIOConfig, PartialSegmentMergeIngestionSpec similar to PartitionLocation which will ideally also need to be made interfaces and each having two implementations based on PartitionLocation class. However I don't see what will be gained from interfacing these as the logic for these classes will be same for any PartitionLocation as of now, seems like generalizing too much. I think we can keep DeepStoragePartitionLocation an extension of GenericPartitionLocation to avoid this problem but some of the fields would be redundant. What do you think ?

@pjain1 I wanted to understand what the difficulty is, so I took a stab at implementing what I suggested in my previous comment. The code is available in my branch. I haven't tested my change, but at least it compiles successfully.

PartialSegmentMergeTask, PartialSegmentMergeIOConfig, and PartialSegmentMergeIngestionSpec still remain abstract classes today for historical reasons (I created them as abstract but haven't cleaned them up). So I promoted PartialSegmentMergeIOConfig to be non-abstract; it now accepts a list of PartitionLocations instead of a list of extensions of PartitionLocation. ShuffleClient now needs to know what type of PartitionLocation to use. PartialSegmentMergeIngestionSpec is also no longer an abstract class. This way, you don't have to add new PartialSegmentMergeTask, PartialSegmentMergeIOConfig, and PartialSegmentMergeIngestionSpec implementations corresponding to DeepStoragePartitionLocation. I haven't touched PartitionStat in my branch, but I think a similar technique can apply. What do you think?
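To make the proposed shape concrete, here is a heavily simplified, hypothetical sketch of the interface-based design being discussed (the real classes carry more fields, Jackson annotations, and serde logic; nothing here is the actual Druid code):

```java
import java.util.Map;

// Hypothetical, simplified sketch of the design discussed above;
// not the actual Druid classes.
interface PartitionLocation {
    String getSubTaskId();
}

// Local-storage shuffle: the location points at a worker to fetch from.
class GenericPartitionLocation implements PartitionLocation {
    private final String host;       // worker to fetch the partition from
    private final String subTaskId;

    GenericPartitionLocation(String host, String subTaskId) {
        this.host = host;
        this.subTaskId = subTaskId;
    }

    @Override
    public String getSubTaskId() { return subTaskId; }
}

// Deep-storage shuffle: only the sub-task ID and a loadSpec are needed.
class DeepStoragePartitionLocation implements PartitionLocation {
    private final String subTaskId;
    private final Map<String, Object> loadSpec;

    DeepStoragePartitionLocation(String subTaskId, Map<String, Object> loadSpec) {
        this.subTaskId = subTaskId;
        this.loadSpec = loadSpec;
    }

    @Override
    public String getSubTaskId() { return subTaskId; }

    Map<String, Object> getLoadSpec() { return loadSpec; }
}

public class PartitionLocationSketch {
    public static void main(String[] args) {
        // A concrete PartialSegmentMergeIOConfig could then hold a
        // List<PartitionLocation> without caring about the implementation.
        PartitionLocation loc =
            new DeepStoragePartitionLocation("sub_task_7", Map.of("type", "local"));
        System.out.println(loc.getSubTaskId());
    }
}
```

The design point is that the merge task and its IOConfig only depend on the interface, so adding a new shuffle backend means adding one new PartitionLocation implementation rather than a parallel class hierarchy.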

@pjain1
Member Author

pjain1 commented Aug 3, 2021

@jihoonson All the classes with *Generic* in their names were tied to GenericPartitionLocation, but now I see we are OK with breaking that convention for a few classes so that they also work for DeepStoragePartitionLocation; that's fine, I can make that change. Thanks. BTW, making PartitionLocation and PartitionStat interfaces, along with other related changes, was already done here. Working on the task- and config-related changes.

@pjain1
Member Author

pjain1 commented Aug 3, 2021

@jihoonson I see in your branch you have made PartialSegmentMergeIngestionSpec a concrete class but also kept PartialGenericSegmentMergeIngestionSpec; is this for backwards-compatibility reasons?

@pjain1
Member Author

pjain1 commented Aug 5, 2021

added IT

@zachjsh
Contributor

zachjsh commented Aug 5, 2021

> added IT

thanks @pjain1 ! I noticed that you are only running with the MiddleManager; the existing rollup IT runs on both the MiddleManager and the Indexer. Any reason not to also run with the Indexer?

@pjain1
Member Author

pjain1 commented Aug 5, 2021

@zachjsh added a test with the Indexer. BTW, some checks are failing because of branch-coverage issues in GenericPartitionLocation, DeepStoragePartitionStat, and DeepStoragePartitionLocation, which are just POJOs, so I think it's OK to ignore the failures; these classes already have serde tests and are used in other tests.

@pjain1
Member Author

pjain1 commented Aug 6, 2021

Travis checks passed except for the test coverage issues mentioned in the comment above.

@zachjsh
Contributor

zachjsh commented Aug 9, 2021

> Travis checks passed except for test coverage issues as mentioned in above comment.

@pjain1 the second-phase integration tests did not run because of the coverage failure. I've manually started them just now. It looks like a lot of the missed branches are coming from equals methods. Can you add tests for these? You can use EqualsVerifier to make your life easier. It also looks like you have legitimate failures:

[ERROR] Errors: 
[ERROR] org.apache.druid.indexing.worker.shuffle.LocalIntermediaryDataManagerAutoCleanupTest.testCleanup(org.apache.druid.indexing.worker.shuffle.LocalIntermediaryDataManagerAutoCleanupTest)
[ERROR]   Run 1: LocalIntermediaryDataManagerAutoCleanupTest.testCleanup:128 » ISE Can't find l...
[ERROR]   Run 2: LocalIntermediaryDataManagerAutoCleanupTest.testCleanup:128 » ISE Can't find l...
[ERROR]   Run 3: LocalIntermediaryDataManagerAutoCleanupTest.testCleanup:128 » ISE Can't find l...
[ERROR]   Run 4: LocalIntermediaryDataManagerAutoCleanupTest.testCleanup:128 » ISE Can't find l...
[INFO] 
[ERROR] org.apache.druid.indexing.worker.shuffle.LocalIntermediaryDataManagerManualAddAndDeleteTest.deletePartitions(org.apache.druid.indexing.worker.shuffle.LocalIntermediaryDataManagerManualAddAndDeleteTest)
[ERROR]   Run 1: LocalIntermediaryDataManagerManualAddAndDeleteTest.deletePartitions:141 » ISE ...
[ERROR]   Run 2: LocalIntermediaryDataManagerManualAddAndDeleteTest.deletePartitions:141 » ISE ...
[ERROR]   Run 3: LocalIntermediaryDataManagerManualAddAndDeleteTest.deletePartitions:141 » ISE ...
[ERROR]   Run 4: LocalIntermediaryDataManagerManualAddAndDeleteTest.deletePartitions:141 » ISE ...
[INFO] 
[ERROR] org.apache.druid.indexing.worker.shuffle.LocalIntermediaryDataManagerManualAddAndDeleteTest.testAddRemoveAdd(org.apache.druid.indexing.worker.shuffle.LocalIntermediaryDataManagerManualAddAndDeleteTest)
[ERROR]   Run 1: LocalIntermediaryDataManagerManualAddAndDeleteTest.testAddRemoveAdd:165 » ISE ...
[ERROR]   Run 2: LocalIntermediaryDataManagerManualAddAndDeleteTest.testAddRemoveAdd:165 » ISE ...
[ERROR]   Run 3: LocalIntermediaryDataManagerManualAddAndDeleteTest.testAddRemoveAdd:165 » ISE ...
[ERROR]   Run 4: LocalIntermediaryDataManagerManualAddAndDeleteTest.testAddRemoveAdd:165 » ISE ...
[INFO] 
[ERROR] org.apache.druid.indexing.worker.shuffle.LocalIntermediaryDataManagerManualAddAndDeleteTest.testAddSegmentFailure(org.apache.druid.indexing.worker.shuffle.LocalIntermediaryDataManagerManualAddAndDeleteTest)
[ERROR]   Run 1: LocalIntermediaryDataManagerManualAddAndDeleteTest.testAddSegmentFailure:101 » ISE
[ERROR]   Run 2: LocalIntermediaryDataManagerManualAddAndDeleteTest.testAddSegmentFailure:101 » ISE
[ERROR]   Run 3: LocalIntermediaryDataManagerManualAddAndDeleteTest.testAddSegmentFailure:101 » ISE
[ERROR]   Run 4: LocalIntermediaryDataManagerManualAddAndDeleteTest.testAddSegmentFailure:101 » ISE
[INFO] 
[ERROR] org.apache.druid.indexing.worker.shuffle.LocalIntermediaryDataManagerManualAddAndDeleteTest.testFindPartitionFiles(org.apache.druid.indexing.worker.shuffle.LocalIntermediaryDataManagerManualAddAndDeleteTest)
[ERROR]   Run 1: LocalIntermediaryDataManagerManualAddAndDeleteTest.testFindPartitionFiles:119 » ISE
[ERROR]   Run 2: LocalIntermediaryDataManagerManualAddAndDeleteTest.testFindPartitionFiles:119 » ISE
[ERROR]   Run 3: LocalIntermediaryDataManagerManualAddAndDeleteTest.testFindPartitionFiles:119 » ISE
[ERROR]   Run 4: LocalIntermediaryDataManagerManualAddAndDeleteTest.testFindPartitionFiles:119 » ISE
[INFO] 
[ERROR] org.apache.druid.indexing.worker.shuffle.ShuffleResourceTest.testDeletePartitionWithValidParamsReturnOk(org.apache.druid.indexing.worker.shuffle.ShuffleResourceTest)
[ERROR]   Run 1: ShuffleResourceTest.testDeletePartitionWithValidParamsReturnOk:174 » ISE Can't...
[ERROR]   Run 2: ShuffleResourceTest.testDeletePartitionWithValidParamsReturnOk:174 » ISE Can't...
[ERROR]   Run 3: ShuffleResourceTest.testDeletePartitionWithValidParamsReturnOk:174 » ISE Can't...
[ERROR]   Run 4: ShuffleResourceTest.testDeletePartitionWithValidParamsReturnOk:174 » ISE Can't...
[INFO] 
[ERROR] org.apache.druid.indexing.worker.shuffle.ShuffleResourceTest.testGetPartitionWithValidParamsReturnOk(org.apache.druid.indexing.worker.shuffle.ShuffleResourceTest)
[ERROR]   Run 1: ShuffleResourceTest.testGetPartitionWithValidParamsReturnOk:144 » ISE Can't fi...
[ERROR]   Run 2: ShuffleResourceTest.testGetPartitionWithValidParamsReturnOk:144 » ISE Can't fi...
[ERROR]   Run 3: ShuffleResourceTest.testGetPartitionWithValidParamsReturnOk:144 » ISE Can't fi...
[ERROR]   Run 4: ShuffleResourceTest.testGetPartitionWithValidParamsReturnOk:144 » ISE Can't fi...

@pjain1
Member Author

pjain1 commented Aug 10, 2021

@zachjsh they ran once, but then I pushed a minor doc change so they did not run again. Anyway, now the docs job is failing because the spell check does not recognize UNNEST. This is unrelated to my PR, but I am going to add UNNEST to the spellings file for Travis.

@pjain1
Member Author

pjain1 commented Aug 11, 2021

Travis checks passed except for the test coverage failures mentioned here. @jihoonson @maytasm @zachjsh

@zachjsh
Contributor

zachjsh commented Aug 11, 2021

> travis checks passed except from test coverage failures as mentioned here @jihoonson @maytasm @zachjsh

@pjain1 it is also failing a test, and the stack trace shows classes that you've modified. If you look at the failing phase-1 test, you will see this:

[ERROR] testDeletePartitionWithValidParamsReturnOk(org.apache.druid.indexing.worker.shuffle.ShuffleResourceTest)  Time elapsed: 0.013 s  <<< ERROR!
org.apache.druid.java.util.common.ISE: Can't find location to handle segment[DataSegment{binaryVersion=0, id=datasource_2020-01-01T00:00:00.000Z_2020-01-02T00:00:00.000Z_version, loadSpec=null, dimensions=[], metrics=[], shardSpec=Mock for BucketNumberedShardSpec, hashCode: 338609667, lastCompactionState=null, size=10}]
	at org.apache.druid.indexing.worker.shuffle.LocalIntermediaryDataManager.addSegment(LocalIntermediaryDataManager.java:368)
	at org.apache.druid.indexing.worker.shuffle.ShuffleResourceTest.testDeletePartitionWithValidParamsReturnOk(ShuffleResourceTest.java:174)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:290)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)

Also, for the coverage: it looks like a lot of the missed branches are from equals methods. These can be tested using EqualsVerifier, as mentioned before.
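For context on why equals methods hurt branch coverage: a typical value-class equals has several branches that serde tests rarely exercise. A hypothetical sketch (the Stat class below is illustrative, not the actual DeepStoragePartitionStat) of those branches, which a single EqualsVerifier.forClass(...).verify() call covers automatically:

```java
import java.util.Objects;

// Hypothetical value class, not the actual DeepStoragePartitionStat: a generated
// equals() has branches for reference equality, null/class checks, and each field,
// which show up as missed branches unless tests hit them explicitly.
class Stat {
    final String dir;
    final int bucketId;

    Stat(String dir, int bucketId) {
        this.dir = dir;
        this.bucketId = bucketId;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;                                 // branch: same reference
        if (o == null || getClass() != o.getClass()) return false;  // branches: null, other class
        Stat s = (Stat) o;
        return bucketId == s.bucketId && Objects.equals(dir, s.dir); // branches: per field
    }

    @Override
    public int hashCode() { return Objects.hash(dir, bucketId); }
}

public class EqualsBranches {
    public static void main(String[] args) {
        Stat a = new Stat("dir", 0);
        System.out.println(a.equals(a));                  // true  (same reference)
        System.out.println(a.equals(null));               // false (null check)
        System.out.println(a.equals("not a Stat"));       // false (class check)
        System.out.println(a.equals(new Stat("dir", 1))); // false (field mismatch)
        System.out.println(a.equals(new Stat("dir", 0))); // true  (all fields equal)
    }
}
```

With the EqualsVerifier library this whole matrix collapses to one test, roughly `EqualsVerifier.forClass(Stat.class).usingGetClass().verify()`.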

@pjain1
Member Author

pjain1 commented Aug 12, 2021

@zachjsh thanks! I missed that, fixed it and added tests using EqualsVerifier, all checks passing now.

@zachjsh
Contributor

zachjsh commented Aug 13, 2021

> @zachjsh thanks! I missed that, fixed it and added tests using EqualsVerifier, all checks passing now.

Thanks for your hard work in getting all the tests and ITs working @pjain1! The change looks good to me, but I would like @maytasm and @jihoonson to take a final look. One more thing: can you provide the specific steps you took to manually verify this? The specific ingestion spec and data used would be very helpful. Thanks again!

@pjain1
Member Author

pjain1 commented Aug 13, 2021

@zachjsh I tested it by indexing the example Wikipedia data on a local Druid cluster with the hashed partition type and maxNumConcurrentSubTasks set to 3, with GCS as well as local deep storage.

Contributor

@zachjsh zachjsh left a comment

LGTM

Contributor

@jihoonson jihoonson left a comment

LGTM. Thanks @pjain1

@zachjsh
Contributor

zachjsh commented Aug 13, 2021

> @zachjsh I tested it by indexing example wikipedia data on local druid cluster with hashed partition type and setting Max num concurrent sub tasks to 3 with GCS as well as local deep storage.

@pjain1 , just to confirm, did you verify with perfect rollup option set to true?

@zachjsh zachjsh merged commit c7b4667 into apache:master Aug 13, 2021
@pjain1 pjain1 deleted the shuffle_deep_storage branch August 15, 2021 17:26
@pjain1
Member Author

pjain1 commented Aug 15, 2021

> > @zachjsh I tested it by indexing example wikipedia data on local druid cluster with hashed partition type and setting Max num concurrent sub tasks to 3 with GCS as well as local deep storage.
>
> @pjain1 , just to confirm, did you verify with perfect rollup option set to true?

yes

@clintropolis clintropolis added this to the 0.22.0 milestone Sep 3, 2021
Successfully merging this pull request may close these issues.

Using deep storage as intermediate store for shuffle tasks