Fix traffic_2008 dataset (#128)
ostreech1997 committed Nov 1, 2023
1 parent 54694d6 commit 2b56c37
Showing 3 changed files with 4 additions and 5 deletions.
5 changes: 2 additions & 3 deletions CHANGELOG.md
@@ -14,7 +14,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Add `traffic_2008` to internal datasets ([#94](https://github.com/etna-team/etna/pull/94))
- Add `traffic_2015` to internal datasets ([#100](https://github.com/etna-team/etna/pull/100))
- Add `tourism` to internal datasets ([#120](https://github.com/etna-team/etna/pull/120))
- Add `weather` to internal datasets ([#125](https://github.com/etna-team/etna/pull/125))
- Add `weather` to internal datasets ([#125](https://github.com/etna-team/etna/pull/125))

### Changed
-
@@ -23,9 +23,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
-

### Fixed
-
-
- Fix links from tinkoff-ai/etna to etna-team/etna ([#47](https://github.com/etna-team/etna/pull/47))
- Fix `traffic_2008` ([#128](https://github.com/etna-team/etna/pull/128))
-

### Removed
2 changes: 1 addition & 1 deletion etna/datasets/internal_datasets.py
@@ -301,7 +301,7 @@ def read_data(path: Path, part: str) -> np.ndarray:
with open(path, "r") as f:
if part in ("randperm", "stations_list"):
data = f.read().lstrip("[").rstrip("]\n").split(" ")
out = np.array(map(int, data)) if part == "randperm" else np.array(data)
out = np.array(list(map(int, data))) if part == "randperm" else np.array(data)
return out
else:
lines = []
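The one-line change above fixes a Python 3 pitfall: `map` returns a lazy iterator, and `np.array` wraps that iterator as a 0-dimensional object array instead of converting its values, so the `randperm` part was never parsed into a proper integer array. A minimal reproduction sketch (not part of the commit; assumes only NumPy):

```python
import numpy as np

data = ["1", "2", "3"]

# Old behaviour: np.array receives the map object itself and stores it
# as a 0-dimensional array with dtype=object.
broken = np.array(map(int, data))
print(broken.shape, broken.dtype)  # () object

# Fixed behaviour: materialising the iterator with list() first yields a
# proper 1-D integer array.
fixed = np.array(list(map(int, data)))
print(fixed.shape, fixed.dtype)  # (3,) int64 (int32 on some platforms)
```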
2 changes: 1 addition & 1 deletion tests/test_datasets/test_internal_datasets.py
@@ -209,7 +209,7 @@ def test_not_present_part():
],
)
def test_dataset_statistics(dataset_name, expected_shape, expected_min_date, expected_max_date, dataset_parts):
ts_full = load_dataset(dataset_name, parts="full")
ts_full = load_dataset(dataset_name, parts="full", rebuild_dataset=True)
ts_parts = load_dataset(dataset_name, parts=dataset_parts)
parts_rows = sum([ts.df.shape[0] for ts in ts_parts])
assert ts_full.df.shape == expected_shape
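Passing `rebuild_dataset=True` makes the test rebuild the dataset from the raw files instead of reusing a copy cached by an earlier run, so the parsing fix above is actually exercised. A hedged usage sketch (the import path matches the repository's test file; the caching behaviour described in the comments is an assumption based on this diff, and the dataset name is used only for illustration):

```python
from etna.datasets import load_dataset

# Assumed default: reuse a previously built local copy of the dataset if present.
ts_cached = load_dataset("traffic_2008_10T", parts="full")

# Force the dataset to be rebuilt from the raw source files, ignoring any cached
# copy, so changes to the parsing code (like this commit) take effect.
ts_fresh = load_dataset("traffic_2008_10T", parts="full", rebuild_dataset=True)
```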
