
Commit

set dev version
lhoestq committed Sep 21, 2022
1 parent 6fc30c1 commit 98dec70
Showing 2 changed files with 3 additions and 3 deletions.
4 changes: 2 additions & 2 deletions setup.py
@@ -138,7 +138,7 @@
"openpyxl",
"py7zr",
"zstandard",
"bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz",
# "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz",
"sentencepiece", # bigbench requires t5 which requires seqio which requires sentencepiece
"rouge_score", # required by bigbench: bigbench.api.util.bb_utils > t5.evaluation.metrics > rouge_score
"sacremoses",
@@ -197,7 +197,7 @@

setup(
name="datasets",
version="2.5.0", # expected format is one of x.y.z.dev0, or x.y.z.rc1 or x.y.z (no to dashes, yes to dots)
version="2.5.1.dev0", # expected format is one of x.y.z.dev0, or x.y.z.rc1 or x.y.z (no to dashes, yes to dots)
description="HuggingFace community-driven open-source library of datasets",
long_description=open("README.md", encoding="utf-8").read(),
long_description_content_type="text/markdown",
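
The comment on the `version` line spells out the accepted release formats. As an illustration only (this helper does not exist in the repository), that rule can be expressed as a small regex check:

```python
# Editor's sketch, not repository code: validate a version string against the
# format described in setup.py -- x.y.z, x.y.z.dev0, or x.y.z.rc1 (dots, no dashes).
import re

_VERSION_RE = re.compile(r"^\d+\.\d+\.\d+(?:\.(?:dev|rc)\d+)?$")

def is_expected_format(version: str) -> bool:
    return _VERSION_RE.fullmatch(version) is not None

assert is_expected_format("2.5.0")
assert is_expected_format("2.5.1.dev0")
assert is_expected_format("2.5.0.rc1")
assert not is_expected_format("2.5.0-dev0")  # dashes are rejected
```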
2 changes: 1 addition & 1 deletion src/datasets/__init__.py
@@ -17,7 +17,7 @@
# pylint: enable=line-too-long
# pylint: disable=g-import-not-at-top,g-bad-import-order,wrong-import-position

__version__ = "2.5.0"
__version__ = "2.5.1.dev0"

import platform

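
After the 2.5.0 release, the main branch moves to a `.dev0` version. A minimal sketch (not from the repo, and relying on the third-party `packaging` library) of how such a version orders against the neighbouring releases under PEP 440:

```python
# Editor's sketch: PEP 440 ordering of the new dev version, e.g. for code that
# gates behaviour on datasets.__version__. Requires `pip install packaging`.
from packaging.version import Version

dev = Version("2.5.1.dev0")

assert dev > Version("2.5.0")   # newer than the 2.5.0 release this commit follows
assert dev < Version("2.5.1")   # but still older than a final 2.5.1
assert dev.is_devrelease
```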

1 comment on commit 98dec70

@github-actions

Show benchmarks

PyArrow==6.0.0

Show updated benchmarks!

Benchmark: benchmark_array_xd.json

| metric | new / old (diff) |
|---|---|
| read_batch_formatted_as_numpy after write_array2d | 0.010124 / 0.011353 (-0.001229) |
| read_batch_formatted_as_numpy after write_flattened_sequence | 0.005236 / 0.011008 (-0.005772) |
| read_batch_formatted_as_numpy after write_nested_sequence | 0.036114 / 0.038508 (-0.002394) |
| read_batch_unformated after write_array2d | 0.037390 / 0.023109 (0.014281) |
| read_batch_unformated after write_flattened_sequence | 0.373150 / 0.275898 (0.097252) |
| read_batch_unformated after write_nested_sequence | 0.424024 / 0.323480 (0.100544) |
| read_col_formatted_as_numpy after write_array2d | 0.007137 / 0.007986 (-0.000849) |
| read_col_formatted_as_numpy after write_flattened_sequence | 0.004154 / 0.004328 (-0.000175) |
| read_col_formatted_as_numpy after write_nested_sequence | 0.008610 / 0.004250 (0.004360) |
| read_col_unformated after write_array2d | 0.051898 / 0.037052 (0.014845) |
| read_col_unformated after write_flattened_sequence | 0.387924 / 0.258489 (0.129435) |
| read_col_unformated after write_nested_sequence | 0.434172 / 0.293841 (0.140331) |
| read_formatted_as_numpy after write_array2d | 0.045853 / 0.128546 (-0.082693) |
| read_formatted_as_numpy after write_flattened_sequence | 0.014842 / 0.075646 (-0.060805) |
| read_formatted_as_numpy after write_nested_sequence | 0.315230 / 0.419271 (-0.104041) |
| read_unformated after write_array2d | 0.062324 / 0.043533 (0.018791) |
| read_unformated after write_flattened_sequence | 0.372944 / 0.255139 (0.117805) |
| read_unformated after write_nested_sequence | 0.384924 / 0.283200 (0.101724) |
| write_array2d | 0.107040 / 0.141683 (-0.034643) |
| write_flattened_sequence | 1.744664 / 1.452155 (0.292510) |
| write_nested_sequence | 1.796225 / 1.492716 (0.303508) |

Benchmark: benchmark_getitem_100B.json

| metric | new / old (diff) |
|---|---|
| get_batch_of_1024_random_rows | 0.209990 / 0.018006 (0.191984) |
| get_batch_of_1024_rows | 0.517987 / 0.000490 (0.517497) |
| get_first_row | 0.010726 / 0.000200 (0.010526) |
| get_last_row | 0.000489 / 0.000054 (0.000434) |

Benchmark: benchmark_indices_mapping.json

| metric | new / old (diff) |
|---|---|
| select | 0.024574 / 0.037411 (-0.012838) |
| shard | 0.109142 / 0.014526 (0.094616) |
| shuffle | 0.118170 / 0.176557 (-0.058386) |
| sort | 0.159898 / 0.737135 (-0.577237) |
| train_test_split | 0.124920 / 0.296338 (-0.171419) |

Benchmark: benchmark_iterating.json

| metric | new / old (diff) |
|---|---|
| read 5000 | 0.596056 / 0.215209 (0.380847) |
| read 50000 | 5.969748 / 2.077655 (3.892093) |
| read_batch 50000 10 | 2.582409 / 1.504120 (1.078289) |
| read_batch 50000 100 | 2.043097 / 1.541195 (0.501903) |
| read_batch 50000 1000 | 2.069126 / 1.468490 (0.600636) |
| read_formatted numpy 5000 | 0.711613 / 4.584777 (-3.873164) |
| read_formatted pandas 5000 | 5.616273 / 3.745712 (1.870561) |
| read_formatted tensorflow 5000 | 4.926593 / 5.269862 (-0.343269) |
| read_formatted torch 5000 | 2.956911 / 4.565676 (-1.608765) |
| read_formatted_batch numpy 5000 10 | 0.089672 / 0.424275 (-0.334603) |
| read_formatted_batch numpy 5000 1000 | 0.013186 / 0.007607 (0.005579) |
| shuffled read 5000 | 0.755638 / 0.226044 (0.529594) |
| shuffled read 50000 | 7.361613 / 2.268929 (5.092684) |
| shuffled read_batch 50000 10 | 2.980805 / 55.444624 (-52.463820) |
| shuffled read_batch 50000 100 | 2.347406 / 6.876477 (-4.529071) |
| shuffled read_batch 50000 1000 | 2.431318 / 2.142072 (0.289245) |
| shuffled read_formatted numpy 5000 | 0.866082 / 4.805227 (-3.939145) |
| shuffled read_formatted_batch numpy 5000 10 | 0.180204 / 6.500664 (-6.320460) |
| shuffled read_formatted_batch numpy 5000 1000 | 0.079344 / 0.075469 (0.003875) |

Benchmark: benchmark_map_filter.json

| metric | new / old (diff) |
|---|---|
| filter | 1.824416 / 1.841788 (-0.017371) |
| map fast-tokenizer batched | 16.097278 / 8.074308 (8.022970) |
| map identity | 41.810681 / 10.191392 (31.619289) |
| map identity batched | 1.134898 / 0.680424 (0.454474) |
| map no-op batched | 0.669679 / 0.534201 (0.135479) |
| map no-op batched numpy | 0.473919 / 0.579283 (-0.105364) |
| map no-op batched pandas | 0.616556 / 0.434364 (0.182192) |
| map no-op batched pytorch | 0.354548 / 0.540337 (-0.185789) |
| map no-op batched tensorflow | 0.347648 / 1.386936 (-1.039288) |
PyArrow==latest
Show updated benchmarks!

Benchmark: benchmark_array_xd.json

| metric | new / old (diff) |
|---|---|
| read_batch_formatted_as_numpy after write_array2d | 0.007550 / 0.011353 (-0.003802) |
| read_batch_formatted_as_numpy after write_flattened_sequence | 0.004601 / 0.011008 (-0.006408) |
| read_batch_formatted_as_numpy after write_nested_sequence | 0.039943 / 0.038508 (0.001435) |
| read_batch_unformated after write_array2d | 0.033565 / 0.023109 (0.010456) |
| read_batch_unformated after write_flattened_sequence | 0.407770 / 0.275898 (0.131872) |
| read_batch_unformated after write_nested_sequence | 0.489830 / 0.323480 (0.166350) |
| read_col_formatted_as_numpy after write_array2d | 0.004742 / 0.007986 (-0.003243) |
| read_col_formatted_as_numpy after write_flattened_sequence | 0.007490 / 0.004328 (0.003161) |
| read_col_formatted_as_numpy after write_nested_sequence | 0.005713 / 0.004250 (0.001462) |
| read_col_unformated after write_array2d | 0.040869 / 0.037052 (0.003817) |
| read_col_unformated after write_flattened_sequence | 0.427993 / 0.258489 (0.169504) |
| read_col_unformated after write_nested_sequence | 0.479887 / 0.293841 (0.186046) |
| read_formatted_as_numpy after write_array2d | 0.041818 / 0.128546 (-0.086728) |
| read_formatted_as_numpy after write_flattened_sequence | 0.014317 / 0.075646 (-0.061329) |
| read_formatted_as_numpy after write_nested_sequence | 0.294013 / 0.419271 (-0.125259) |
| read_unformated after write_array2d | 0.066077 / 0.043533 (0.022544) |
| read_unformated after write_flattened_sequence | 0.413525 / 0.255139 (0.158386) |
| read_unformated after write_nested_sequence | 0.462495 / 0.283200 (0.179296) |
| write_array2d | 0.120000 / 0.141683 (-0.021683) |
| write_flattened_sequence | 1.691794 / 1.452155 (0.239640) |
| write_nested_sequence | 1.700566 / 1.492716 (0.207850) |

Benchmark: benchmark_getitem_100B.json

| metric | new / old (diff) |
|---|---|
| get_batch_of_1024_random_rows | 0.246530 / 0.018006 (0.228524) |
| get_batch_of_1024_rows | 0.506984 / 0.000490 (0.506495) |
| get_first_row | 0.005053 / 0.000200 (0.004853) |
| get_last_row | 0.000109 / 0.000054 (0.000054) |

Benchmark: benchmark_indices_mapping.json

| metric | new / old (diff) |
|---|---|
| select | 0.031298 / 0.037411 (-0.006114) |
| shard | 0.100416 / 0.014526 (0.085890) |
| shuffle | 0.119738 / 0.176557 (-0.056819) |
| sort | 0.162592 / 0.737135 (-0.574543) |
| train_test_split | 0.118323 / 0.296338 (-0.178016) |

Benchmark: benchmark_iterating.json

| metric | new / old (diff) |
|---|---|
| read 5000 | 0.680981 / 0.215209 (0.465772) |
| read 50000 | 6.847347 / 2.077655 (4.769692) |
| read_batch 50000 10 | 2.875341 / 1.504120 (1.371222) |
| read_batch 50000 100 | 2.443812 / 1.541195 (0.902617) |
| read_batch 50000 1000 | 2.476885 / 1.468490 (1.008395) |
| read_formatted numpy 5000 | 0.817750 / 4.584777 (-3.767027) |
| read_formatted pandas 5000 | 5.792381 / 3.745712 (2.046669) |
| read_formatted tensorflow 5000 | 5.659706 / 5.269862 (0.389844) |
| read_formatted torch 5000 | 2.430107 / 4.565676 (-2.135569) |
| read_formatted_batch numpy 5000 10 | 0.105447 / 0.424275 (-0.318828) |
| read_formatted_batch numpy 5000 1000 | 0.014732 / 0.007607 (0.007125) |
| shuffled read 5000 | 0.831453 / 0.226044 (0.605409) |
| shuffled read 50000 | 8.457021 / 2.268929 (6.188092) |
| shuffled read_batch 50000 10 | 3.351178 / 55.444624 (-52.093446) |
| shuffled read_batch 50000 100 | 2.634037 / 6.876477 (-4.242439) |
| shuffled read_batch 50000 1000 | 2.678499 / 2.142072 (0.536427) |
| shuffled read_formatted numpy 5000 | 0.906880 / 4.805227 (-3.898347) |
| shuffled read_formatted_batch numpy 5000 10 | 0.188219 / 6.500664 (-6.312445) |
| shuffled read_formatted_batch numpy 5000 1000 | 0.067170 / 0.075469 (-0.008299) |

Benchmark: benchmark_map_filter.json

| metric | new / old (diff) |
|---|---|
| filter | 1.967505 / 1.841788 (0.125717) |
| map fast-tokenizer batched | 16.333116 / 8.074308 (8.258808) |
| map identity | 41.613192 / 10.191392 (31.421800) |
| map identity batched | 1.178769 / 0.680424 (0.498345) |
| map no-op batched | 0.736678 / 0.534201 (0.202477) |
| map no-op batched numpy | 0.487311 / 0.579283 (-0.091972) |
| map no-op batched pandas | 0.590362 / 0.434364 (0.155998) |
| map no-op batched pytorch | 0.364916 / 0.540337 (-0.175422) |
| map no-op batched tensorflow | 0.349676 / 1.386936 (-1.037260) |
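
For context, the "new / old (diff)" columns above come from comparing two runs of the listed benchmark JSON files. A hedged sketch of how such a report could be recomputed, assuming each file is a flat metric-to-seconds mapping (the actual CI schema and the `*.old.json` filename are assumptions):

```python
# Editor's sketch, assuming a flat {metric_name: seconds} JSON layout for files
# such as benchmark_array_xd.json; the real CI format may differ.
import json

def load_metrics(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def diff_report(new_path: str, old_path: str) -> list:
    new, old = load_metrics(new_path), load_metrics(old_path)
    rows = []
    for name in sorted(new.keys() & old.keys()):
        delta = new[name] - old[name]
        rows.append(f"{name}: {new[name]:.6f} / {old[name]:.6f} ({delta:+.6f})")
    return rows

if __name__ == "__main__":
    # Hypothetical file names for the new and baseline runs.
    for row in diff_report("benchmark_array_xd.json", "benchmark_array_xd.old.json"):
        print(row)
```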

CML watermark
