# Update Polars to v0.36 #797
Two test cases relating to `DataFrame.join/3` are failing due to changes from #784?
@lkarthee it is fine to mirror the new outer join with Polars. The reason we are getting an exception is that the current join assumes all columns from both arguments will be in the output, but it seems Polars changed it to drop duplicate columns. You will have to mirror this logic in
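The two column policies being discussed can be sketched in plain Python (a toy illustration with names of my own choosing, not Explorer's or Polars' actual code): dropping the right side's clashing columns versus keeping them under a `_right` suffix.

```python
def outer_join_columns(left_cols, right_cols, drop_duplicates=True, suffix="_right"):
    # Compute the output column names of a join under the two policies:
    # drop the right side's clashing columns, or keep them suffixed.
    out = list(left_cols)
    for col in right_cols:
        if col not in left_cols:
            out.append(col)
        elif not drop_duplicates:
            out.append(col + suffix)
    return out
```

For example, joining columns `["L1", "L2"]` with `["L1", "R2"]` yields `["L1", "L2", "R2"]` under the drop policy and `["L1", "L2", "L1_right", "R2"]` under the suffix policy.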
@philss Is your sense that we can fix these issues on `main`? Or should we branch off `ps-bump-polars-to-v0.36`? (I probably won't have time to look into specifics until tonight.)
@billylanchantin I was thinking of keeping the work outside the
That's what I was thinking too. I just wanted to make sure :)
I think it's fine either way. If I tackle any of the pieces, I think I'll branch off yours. But others should feel free to do that one off `main`.
FYI, the We can use that as a reference :)
Logged a bug for |
@josevalim One question I have about the outer join: Polars returns new columns; should we forward them to Explorer?
`L1_right` in the case below?

```python
>>> df1.join(df2, on="L1", how="outer")
shape: (4, 4)
┌──────┬──────┬──────────┬──────┐
│ L1   ┆ L2   ┆ L1_right ┆ R2   │
│ ---  ┆ ---  ┆ ---      ┆ ---  │
│ str  ┆ i64  ┆ str      ┆ i64  │
╞══════╪══════╪══════════╪══════╡
│ a    ┆ 1    ┆ a        ┆ 7    │
│ b    ┆ 2    ┆ null     ┆ null │
│ c    ┆ 3    ┆ c        ┆ 8    │
│ null ┆ null ┆ d        ┆ 9    │
└──────┴──────┴──────────┴──────┘
```
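For reference, the non-coalescing output above can be reproduced with a toy pure-Python full outer join (an illustration only, not how Polars or Explorer implement it): the right-hand key column survives under a `_right` suffix instead of being merged into the left key.

```python
def full_outer_join(left, right, on, suffix="_right"):
    # Toy full outer join over lists of dicts, mimicking the
    # non-coalescing behavior: the right-hand key column survives
    # as on + suffix instead of being merged into the left key.
    right_cols = [c for c in (list(right[0]) if right else []) if c != on]
    left_cols = [c for c in (list(left[0]) if left else []) if c != on]
    out, matched = [], set()

    for lrow in left:
        hits = [i for i, r in enumerate(right) if r[on] == lrow[on]]
        for i in hits:
            matched.add(i)
            out.append({**lrow, on + suffix: right[i][on],
                        **{c: right[i][c] for c in right_cols}})
        if not hits:
            # Left row with no match: right-side columns become null.
            out.append({**lrow, on + suffix: None,
                        **{c: None for c in right_cols}})

    for i, rrow in enumerate(right):
        if i not in matched:
            # Right row with no match: left-side columns become null.
            out.append({on: None, **{c: None for c in left_cols},
                        on + suffix: rrow[on],
                        **{c: rrow[c] for c in right_cols}})
    return out
```

With `df1 = [{"L1": "a", "L2": 1}, {"L1": "b", "L2": 2}, {"L1": "c", "L2": 3}]` and `df2 = [{"L1": "a", "R2": 7}, {"L1": "c", "R2": 8}, {"L1": "d", "R2": 9}]`, this produces the same four rows as the Polars table above, including the unmatched `d` row with a null `L1`.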
Yes!
I have fixed the outer join in the latest PR. Can I fix the describe function? @philss or @billylanchantin, are you working on it?
@lkarthee please go ahead, I don't think any of them will reply soon due to timezones :)
@lkarthee Yeah, go for it! I actually tried yesterday morning, but I spun my wheels trying to make it "elegant". More than happy to let you take over :)

EDIT: Based on what I tried yesterday, my advice would be to just calculate what you need and move on. I kept trying to be clever with loops, but I couldn't make it work.
Thank you @billylanchantin. I have a draft Rust version (have to test more). I am exploring whether it can be implemented with DF.summarise() in Elixir. It took me down a rabbit hole trying to achieve this. Is there any way we can achieve it?
This is what I tried that didn't work (I hadn't gotten to percentiles yet):

```elixir
def describe(%DataFrame{} = df, _percentiles) do
  require Explorer.DataFrame, as: DF

  numeric_dtypes = Explorer.Shared.numeric_types()

  ordered_dtypes =
    List.flatten([
      # [:date, :string],
      [:date],
      numeric_dtypes,
      Explorer.Shared.datetime_types(),
      Explorer.Shared.duration_types()
    ])

  metrics = [
    count: %{dtypes: nil, fun: &Explorer.Series.n_distinct/1},
    nil_count: %{dtypes: nil, fun: &Explorer.Series.nil_count/1},
    mean: %{dtypes: numeric_dtypes, fun: &Explorer.Series.mean/1},
    std: %{dtypes: numeric_dtypes, fun: &Explorer.Series.standard_deviation/1},
    min: %{dtypes: ordered_dtypes, fun: &Explorer.Series.min/1},
    max: %{dtypes: ordered_dtypes, fun: &Explorer.Series.max/1}
  ]

  metric_dfs =
    for {_metric, %{dtypes: dtypes, fun: fun}} <- metrics do
      if dtypes == nil do
        DF.summarise(df, for(s <- across(), do: {s.name, ^fun.(s)}))
      else
        metric_df =
          DF.summarise(df, for(s <- across(), s.dtype in ^dtypes, do: {s.name, ^fun.(s)}))

        # Manually add `nil` to all non-computed columns.
        metric_df =
          Enum.reduce(df.names, metric_df, fn col, acc ->
            if col not in acc.names, do: DF.put(acc, col, [nil]), else: acc
          end)

        metric_df[df.names]
      end
    end

  metric_df =
    metric_dfs
    |> DF.concat_rows()
    |> DF.put(:describe, metrics |> Keyword.keys() |> Enum.map(&Atom.to_string/1))

  metric_df[["describe"] ++ df.names]
end
```

Which gives you (notice the string columns aren't handled right):

```elixir
# test/explorer/data_frame_test.exs:3321
df = DF.new(a: ["d", nil, "f"], b: [1, 2, 3], c: ["a", "b", "c"])
df1 = DF.describe(df)

# +-------------------------------------------+
# | Explorer DataFrame: [rows: 6, columns: 4] |
# +-------------+---------+---------+---------+
# | describe    | a       | b       | c       |
# | <string>    | <f64>   | <f64>   | <f64>   |
# +=============+=========+=========+=========+
# | count       | 3.0     | 3.0     | 3.0     |
# +-------------+---------+---------+---------+
# | nil_count   | 1.0     | 0.0     | 0.0     |
# +-------------+---------+---------+---------+
# | mean        |         | 2.0     |         |
# +-------------+---------+---------+---------+
# | std         |         | 1.0     |         |
# +-------------+---------+---------+---------+
# | min         |         | 1.0     |         |
# +-------------+---------+---------+---------+
# | max         |         | 3.0     |         |
# +-------------+---------+---------+---------+
```

The issue was that our One approach I thought of: use the
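The target shape can be sketched in plain Python (a toy stand-in using only the standard library; `describe` here is my own helper, not Explorer's API): numeric metrics are computed only for numeric columns, and the other cells are left as explicit nulls.

```python
import statistics

def describe(columns):
    # Toy describe: count/nil_count for every column, numeric metrics
    # only for numeric columns, None placeholders elsewhere.
    metric_names = ["count", "nil_count", "mean", "std", "min", "max"]
    out = {"describe": metric_names}
    for name, values in columns.items():
        present = [v for v in values if v is not None]
        numeric = present and all(isinstance(v, (int, float)) for v in present)
        row = [len(present), len(values) - len(present)]
        if numeric and len(present) >= 2:  # stdev needs at least 2 points
            row += [statistics.mean(present), statistics.stdev(present),
                    min(present), max(present)]
        else:
            row += [None, None, None, None]
        out[name] = row
    return out
```

For `{"a": ["d", None, "f"], "b": [1, 2, 3]}`, column `a` gets only count and nil_count, while column `b` gets the full set of metrics.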
This was apparently attempted by someone on the Polars side, but they said they didn't get the performance improvements they expected.
I got stuck on that too! You can see my workaround in my attempt. I don't know if there's a way we can do it easily on our side.
👍 for adding a null type. And I think it is safest to skip mean, std, min, and max for strings and other dtypes.
I have to expose this on `Series.nil_()`; still figuring out the cogs in the wheel. It's not working yet.

```rust
#[rustler::nif]
pub fn expr_nil_() -> ExExpr {
    ExExpr::new(Expr::Literal(LiteralValue::Null))
}
```

Below works for a df with numeric types; the pivot is pending. Exprs work, and currently the data is in columns.

```elixir
def describe(df, opts \\ []) do
  opts = Keyword.validate!(opts, percentiles: nil)

  if Enum.empty?(df.names) do
    raise ArgumentError, message: "cannot describe a DataFrame without any columns"
  end

  percentiles = process_percentiles(opts[:percentiles])

  numeric_dtypes = Shared.numeric_types()
  datetime_types = Shared.datetime_types()
  duration_types = Shared.duration_types()

  stat_cols = for {name, type} <- df.dtypes, type in numeric_dtypes, do: name

  min_max_cols =
    for {name, type} <- df.dtypes,
        type in numeric_dtypes or type in datetime_types or type in duration_types,
        do: name

  metrics = ["count", "null_count", "mean", "std", "min"]
  p_metrics = for p <- percentiles, do: "#{p * 100}%"
  # "max" goes last so the output rows follow the usual describe order.
  metrics = metrics ++ p_metrics ++ ["max"]

  df_metrics =
    summarise_with(df, fn x ->
      counts_exprs = Enum.map(df.names, &{"count:#{&1}", Series.count(x[&1])})
      nil_counts_exprs = Enum.map(df.names, &{"nil_count:#{&1}", Series.nil_count(x[&1])})

      percentile_exprs =
        for p <- percentiles, c <- df.names do
          name = "#{p}:#{c}"

          if c in stat_cols do
            {name, Series.quantile(x[c], p)}
          else
            # I wrote this in Rust and exposed it in expression.ex;
            # I still have to expose it on Series, I guess.
            {name, Series.nil_()}
          end
        end

      # TODO: handle Series.nil_() for below
      mean_exprs = for c <- stat_cols, do: {"mean:#{c}", Series.mean(x[c])}
      std_exprs = for c <- stat_cols, do: {"std:#{c}", Series.standard_deviation(x[c])}
      min_exprs = for c <- min_max_cols, do: {"min:#{c}", Series.min(x[c])}
      max_exprs = for c <- min_max_cols, do: {"max:#{c}", Series.max(x[c])}

      counts_exprs ++
        nil_counts_exprs ++
        mean_exprs ++ std_exprs ++ min_exprs ++ percentile_exprs ++ max_exprs
    end)

  # Reshape wide result
  row = head(df_metrics)
  # TODO: pivot columns to rows
end

def process_percentiles(nil), do: [0.25, 0.50, 0.75]

def process_percentiles(percentiles) do
  Enum.each(percentiles, fn p ->
    if p < 0 or p > 1 do
      raise ArgumentError, message: "percentiles must all be in the range [0, 1]"
    end
  end)

  Enum.sort(percentiles)
end
```
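The pending pivot step (one wide row keyed by `"metric:column"` strings, reshaped into one row per metric) can be sketched as follows (toy Python; the helper name and shapes are my own assumptions, not Explorer's code):

```python
def pivot_metrics(row, names, metrics):
    # Reshape a single wide row with "metric:column" keys into a
    # describe-style table: one list per original column, ordered
    # by metric.
    table = {"describe": list(metrics)}
    for name in names:
        table[name] = [row[f"{m}:{name}"] for m in metrics]
    return table
```

For example, `pivot_metrics({"count:b": 3, "null_count:b": 0, "mean:b": 2.0}, ["b"], ["count", "null_count", "mean"])` yields a table with a `describe` column listing the metrics and a `b` column listing their values.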
👍 to skipping order statistics on strings. While technically possible, I don't think people usually care. |
@billylanchantin Thank you for the pointers; I have completed the percentiles part. I have tried to mirror the Python code very closely. Hopefully I will figure out more about adding Series.nil_() tomorrow. @josevalim One way to go is to exclude non-stat columns from describe and revisit this after the null type PR? Only two metrics would be relevant for non-stat columns: count and nil_count.
Sounds good to me! |
Went ahead with the Rust function; only the percentiles and pivot logic is in Elixir. It fails if there is a non-numeric column in the data frame. Please review the PR; I will fix the rest in the next PR.
Implemented describe for stat_cols in Elixir, and tests are passing.
The work here is complete, so I'm closing. Thank you all for the contributions! 💜
The Polars team released version v0.36.2 of the Rust crates yesterday (2024-01-02), and we should bump the version on our side.

I started this work - branch is `ps-bump-polars-to-v0.36` - but I found some issues and things that were removed that we need to implement on our side:

- `Series.window_median/3` - done in Ps bump polars to v0.36 #798
- `Series.frequencies/1` - done in Ps bump polars to v0.36 #798
- `Series` comparison operations - done in Ps bump polars to v0.36 #798
- `DataFrame.join/3` using the `outer` strategy - done in Bump to v0.36.0 - fix join outer #802
- `DataFrame.describe/` - probably implement on our side - done in Bump to v0.36.0 - implement describe function #803

So I leave the issue open, and if anyone wants to work on it, feel free to do so.
I should finish #794 before going back here.