v0.2.0
Warning: This update introduces some API-breaking changes:
- Changed metadata backend, which breaks tables written in earlier versions
- Changed filtering for `cols` from `cols=['like', <pattern>]` to `cols={'like': <pattern>}`
- Changed filtering for `rows` from `rows=[<keyword>, <value>]` to `rows={<keyword>: <value>}`
- Changed `store.store_name` to `store.name`
- Changed `table.exists` to `table.exists()`
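The filter-syntax change above can be illustrated with plain Python literals. This sketch only shows the old and new argument shapes and does not touch a FeatherStore database; the pattern `'c%'`, the bound `10`, and the `migrate_filter` helper are illustrative, not part of the library:

```python
# Old (pre-0.2.0) list-based filters vs. new (0.2.0) dict-based filters.
# The values 'c%' and 10 are made-up examples.

# Column filtering: the keyword moves from the first list element to a dict key.
old_cols = ['like', 'c%']        # no longer accepted in 0.2.0
new_cols = {'like': 'c%'}        # 0.2.0 form: cols={'like': <pattern>}

# Row filtering: [<keyword>, <value>] becomes {<keyword>: <value>}.
old_rows = ['before', 10]        # no longer accepted in 0.2.0
new_rows = {'before': 10}        # 0.2.0 form: rows={<keyword>: <value>}

def migrate_filter(old):
    """Mechanically convert an old-style [keyword, value] filter to the new dict form."""
    keyword, value = old
    return {keyword: value}

print(migrate_filter(['like', 'c%']))   # {'like': 'c%'}
```

The same one-line conversion applies to both `rows` and `cols` arguments, since both moved from the two-element list form to the single-entry dict form.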
Enhancements:
- Restoring snapshots no longer overwrites existing stores or tables by default
- Added `errors` parameter to adjust this behavior
- Added `table.name`
- Added `is_connected()`
- Added `store_exists()`
- Added `database_exists()`
- Added `table.partition_size` read-only property
- Added `table.repartition(new_partition_size)` for re-partitioning a table to `new_partition_size`
- Dropping both `rows` and `cols` in `table.drop()` at the same time is now supported
- Added support for `numpy` datatypes in `table.astype()`
- The `rows` and `cols` arguments now support more sequence types than just lists
- `before`, `after`, and `between` are no longer invalid index values
- `like` is no longer an invalid column name
- `table.read(rows=[...])` now raises an exception when `rows` are not found in the table
- Improved `table.insert()` performance
- Made some exception messages clearer
Bugfixes:
- `connect(<Database>)` now correctly switches the connection to `<Database>` instead of staying connected to the old database
- Fixed `append` not working properly with the default index
- Fixed `read_pandas` not working with binary columns
- Fixed `read_pandas` not working with large string columns
- Fixed `read_pandas` not working with date32 and date64 columns
- Fixed a bug causing `insert` to sometimes delete a partition
- Fixed `Table.shape` not working
- Fixed `store.rename` not working
- Fixed predicate filtering keeping one row too many in special cases
- Fixed being able to write tables with non-string column names
- Fixed a performance bottleneck when making hidden files on Windows
Other:
- Updated dependency requirements to:
  - `polars[timezone]>=0.14.11`
  - `pyarrow>=7.0.0`