Merged
2 changes: 1 addition & 1 deletion docs/StardustDocs/topics/dataSources/ApacheArrow.md
@@ -25,7 +25,7 @@ and in [`%use dataframe`](SetupKotlinNotebook.md#integrate-kotlin-dataframe) for
 > {style="warning"}
 
 > Structured (nested) Arrow types such as Struct are not supported yet in Kotlin DataFrame.
-> See an issue: [Add inner / Struct type support in Arrow](https://github.com/Kotlin/dataframe/issues/536)
+> See the issue: [Add inner / Struct type support in Arrow](https://github.com/Kotlin/dataframe/issues/536)
 > {style="warning"}
 
 ## Read
4 changes: 2 additions & 2 deletions docs/StardustDocs/topics/dataSources/Parquet.md
@@ -23,7 +23,7 @@ Requires the [`dataframe-arrow` module](Modules.md#dataframe-arrow), which is in
 > {style="warning"}
 
 > Structured (nested) Arrow types such as Struct are not supported yet in Kotlin DataFrame.
-> See an issue: [Add inner / Struct type support in Arrow](https://github.com/Kotlin/dataframe/issues/536)
+> See the issue: [Add inner / Struct type support in Arrow](https://github.com/Kotlin/dataframe/issues/536)
 > {style="warning"}
 
 ## Reading Parquet Files
@@ -68,7 +68,7 @@ Dataset API to scan the data and materialize it as a Kotlin `DataFrame`.
 
 ```kotlin
 // Read from file paths (as strings)
-val df1 = DataFrame.readParquet("data/sales.parquet")
+val df = DataFrame.readParquet("data/sales.parquet")
 ```
 
 <!---FUN readParquetFilePath-->
@@ -1,14 +1,13 @@
 package org.jetbrains.kotlinx.dataframe.samples.io
 
 import io.kotest.matchers.shouldBe
-import java.io.File
-import java.nio.file.Path
-import java.nio.file.Paths
 import org.jetbrains.kotlinx.dataframe.DataFrame
 import org.jetbrains.kotlinx.dataframe.api.NullabilityOptions
-import org.junit.Test
 import org.jetbrains.kotlinx.dataframe.io.readParquet
 import org.jetbrains.kotlinx.dataframe.testParquet
+import org.junit.Test
+import java.io.File
+import java.nio.file.Paths
 
 class Parquet {
     @Test
@@ -56,7 +55,7 @@ class Parquet {
         val df = DataFrame.readParquet(
             file,
             nullability = NullabilityOptions.Infer,
-            batchSize = 64L * 1024
+            batchSize = 64L * 1024,
         )
         // SampleEnd
         df.rowsCount() shouldBe 300