Specialised data types in R #22
I agree that this is the way to go, but a plugin system shouldn't impact performance. Currently, the values obtained from the database driver are coerced to their target type (integer, double, int64, string, logical, raw vector) as they arrive. Do you think the decision about the "right" target type can be made from metadata only, without fetching any rows (or after fetching only the first row)?

We could offer an interface that allows registration of column handlers for a particular DBI result class. Backends would then be expected to call these handlers with column metadata (as R objects), and each handler decides whether or not it can handle columns of this type. If yes, the handler returns an empty container (think

The data format for the column metadata and the raw values depends entirely on the backend and should match that of the underlying C library. We would obey registration order: younger handlers are called first. Handlers can also be provided at the connection or the result level. Backends could then also expose "default" built-in handlers for R's data types, and use the same mechanism to decide how to handle integers, blobs, times, etc.

If we don't care that much about performance and permit an extra copy operation and memory allocation, we could also package everything as lists of
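A minimal sketch of what such a registration interface could look like, in plain R. The API is hypothetical, not part of DBI: `register_column_handler()`, `find_column_handler()`, the `"PqResult"` class name, and the shape of the metadata object are all made-up for illustration.

```r
# Hypothetical handler registry: handlers are stored per DBI result class
# and consulted youngest-first, as proposed above.
.handlers <- new.env(parent = emptyenv())

register_column_handler <- function(result_class, handler) {
  existing <- get0(result_class, envir = .handlers, ifnotfound = list())
  # Prepend, so the most recently registered handler is asked first.
  assign(result_class, c(list(handler), existing), envir = .handlers)
  invisible(NULL)
}

find_column_handler <- function(result_class, column_meta) {
  for (h in get0(result_class, envir = .handlers, ifnotfound = list())) {
    container <- h(column_meta)  # a handler returns NULL for "not my type"
    if (!is.null(container)) return(container)
  }
  NULL  # fall back to the backend's built-in coercions
}

# Example: claim PostgreSQL date-array columns and return an empty list
# container for the backend to fill (the metadata shape mirrors what
# dbColumnInfo() reports in the RPostgres example below).
register_column_handler("PqResult", function(meta) {
  if (identical(meta$typname, "_date")) vector("list", 0L) else NULL
})
find_column_handler("PqResult", list(typname = "_date"))
#> list()
```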
@edzer: Do you think geometry columns can be handled by a plugin provided by sf?
Yes, no problem.
All roads lead to this thread! What is the state of array types in e.g. RPostgres?

```r
> demo <- tibble(reporting_period = list(1L, 2L, 3L),
                 rp_start = lapply(c("2019-11-01", "2020-11-01", "2021-11-01"), as.Date),
                 rp_end = lapply(c("2020-10-31", "2021-10-31", "2022-10-31"), as.Date))
> demo
# A tibble: 3 x 3
#   reporting_period rp_start   rp_end
#   <list>           <list>     <list>
# 1 <int [1]>        <date [1]> <date [1]>
# 2 <int [1]>        <date [1]> <date [1]>
# 3 <int [1]>        <date [1]> <date [1]>
> result <- dbSendQuery(con,
    "CREATE TABLE reporting_schedule (
       reporting_period INTEGER[] NOT NULL,
       rp_start DATE[] NOT NULL,
       rp_end DATE[] NOT NULL
     )")
> dbClearResult(result)
> dbAppendTable(con, "reporting_schedule", demo)
# Error: Lists must contain raw vectors or NULL
> dbDataType(con, demo)
# reporting_period         rp_start           rp_end
#          "BYTEA"          "BYTEA"          "BYTEA"
> result <- dbSendQuery(con, "SELECT * FROM reporting_schedule")
> dbColumnInfo(result)
#               name      type .oid .known .typname
# 1 reporting_period character 1007  FALSE    _int4
# 2         rp_start character 1182  FALSE    _date
# 3           rp_end character 1182  FALSE    _date
> dbClearResult(result)
```

Any suggestions how I can convince DBI to take a table of lists?
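For what it's worth, one workaround sidesteps the list-column limitation entirely: serialise each list column to PostgreSQL array-literal text and let the server cast it. This is a sketch, untested against the schema above; `to_pg_array()` is an ad-hoc helper, not a DBI or RPostgres function, and it relies on RPostgres's documented parameter binding (one execution per element of the `params` vectors).

```r
# Serialise an R list column into Postgres array literals like "{1,2,3}".
# format() is enough for integers and Dates; strings would need quoting.
to_pg_array <- function(col) {
  vapply(col, function(v) paste0("{", paste(format(v), collapse = ","), "}"),
         character(1))
}

dbExecute(
  con,
  "INSERT INTO reporting_schedule
   VALUES ($1::integer[], $2::date[], $3::date[])",
  params = list(
    to_pg_array(demo$reporting_period),
    to_pg_array(demo$rp_start),
    to_pg_array(demo$rp_end)
  )
)
```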
No updates here. This is a complex problem; leaving the issue open as a reminder.
With the availability of ALTREP, this is easier to design and implement efficiently:
Closing for now, will keep a bookmark to this issue.
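For readers who haven't met ALTREP: a tiny base-R illustration of the deferred materialisation it enables. Compact integer sequences are a built-in ALTREP class (R >= 3.5); a custom column type as discussed above would need the C-level ALTREP API.

```r
x <- 1:1e9      # an ALTREP compact sequence: represented by (from, to)
                # metadata, the billion integers are not allocated up front
x[123456789]    # individual elements are computed lazily, on access
#> [1] 123456789
```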
Efficiency gains with ALTREP should be quantified: what is the cost of allocating (and garbage-collecting) several list-of-lists columns in row-major order, relative to the other costs?
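One way to put numbers on this, as a rough sketch using the bench package. The row count and column shapes here are arbitrary choices for illustration, not taken from any real DBI workload.

```r
library(bench)

n <- 1e5  # number of rows; adjust to taste
mark(
  # row-major: one small list allocated per row, as a driver might produce
  row_major = lapply(seq_len(n), function(i) list(a = i, b = i + 0.5)),
  # column-major: two flat vectors allocated once
  col_major = list(a = seq_len(n), b = seq_len(n) + 0.5),
  check = FALSE
)
# The mem_alloc and n_gc columns of the result speak directly to the
# allocation and garbage-collection cost asked about above.
```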
DBI should offer some kind of plugin system that other packages can build upon by providing implementations for some of the more exotic data types in R. Throughout the DBI packages there are many open issues surrounding this problem. A selection:
r-dbi/DBI#199 enum types
r-dbi/DBI#179 uuid types
r-dbi/RPostgres#132 geometry types, sf
r-dbi/RPostgres#114 geometry types, sf
r-dbi/RPostgres#86 geometry types
r-dbi/bigrquery#145 array/struct types
r-dbi/RMariaDB#42 json
Other types that fall into this area include arbitrary-precision floats (Rmpfr, gmp), XML types (xml2), monetary types, etc.
Now, if geometry types are implemented for Postgres, that is great. But they are also available in MySQL/MariaDB, so it might be useful to consider these issues in a more general fashion. Furthermore, a type-by-type approach might not be sufficient: how could a user map a Postgres composite type if there is no inherent extensibility?

Unfortunately, I have no idea how to tackle such an issue. Maybe a pragmatic approach, where things such as composite types are simply not considered, is the best we can do. I was just hoping to get a discussion started on this topic.
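To make the composite-type point concrete: without some extensibility hook, a user is left hand-parsing PostgreSQL's textual composite literal. A simplified sketch follows; the `period` type and its fields are invented, and real composite output would also need quote and escape handling.

```r
# Parse a composite literal such as "(2019-11-01,2020-10-31)" into a
# named list of Dates. No quoting/escaping support; illustration only.
parse_period <- function(x) {
  inner  <- sub("^\\((.*)\\)$", "\\1", x)
  fields <- strsplit(inner, ",", fixed = TRUE)[[1]]
  list(start = as.Date(fields[[1]]), end = as.Date(fields[[2]]))
}

parse_period("(2019-11-01,2020-10-31)")
#> $start
#> [1] "2019-11-01"
#> $end
#> [1] "2020-10-31"
```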