
support compression for IPC #1855

Closed
wants to merge 13 commits

Conversation

liukun4515
Contributor

@liukun4515 liukun4515 commented Jun 13, 2022

Which issue does this PR close?

Closes #1709
Closes #70

Rationale for this change

What changes are included in this PR?

Are there any user-facing changes?

@github-actions github-actions bot added the arrow Changes to the arrow crate label Jun 13, 2022
arrow/src/ipc/reader.rs (outdated, resolved)
arrow/src/ipc/reader.rs (outdated, resolved)
arrow/src/ipc/reader.rs (outdated, resolved)
arrow/src/ipc/writer.rs (outdated, resolved)
arrow/src/ipc/writer.rs (outdated, resolved)
@liukun4515 liukun4515 marked this pull request as ready for review June 18, 2022 16:37
};
let len = data.len() as i64;
// TODO: we don't need to pad each buffer; we only need to pad the tail of the message body
// let pad_len = pad_to_8(len as u32);
Contributor Author

I read the format/IPC spec and understood that we don't need to align each buffer and pad zeros at the tail of each buffer; we only need to pad zeros at the tail of the message body.
@martin-g @alamb
Is there anything wrong with my understanding of the IPC format?
Please feel free to point it out.
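
For illustration, here is a minimal sketch (not the code in this PR) of padding only the tail of a message body to an 8-byte boundary; the pad_to_8 helper mirrors the one referenced in the snippet above, the rest is assumed:

fn pad_to_8(len: u32) -> usize {
    ((8 - (len % 8)) % 8) as usize
}

fn pad_message_body(body: &mut Vec<u8>) {
    // append zero bytes so the whole message body ends on an 8-byte boundary
    let pad = pad_to_8(body.len() as u32);
    body.extend(std::iter::repeat(0u8).take(pad));
}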

Contributor Author

If this makes sense to you, I will drop this commented-out code.

@codecov-commenter

codecov-commenter commented Jun 18, 2022

Codecov Report

Merging #1855 (7c6c2ee) into master (c7f89e1) will increase coverage by 0.04%.
The diff coverage is 85.47%.

@@            Coverage Diff             @@
##           master    #1855      +/-   ##
==========================================
+ Coverage   83.42%   83.47%   +0.04%     
==========================================
  Files         214      215       +1     
  Lines       57015    57286     +271     
==========================================
+ Hits        47567    47819     +252     
- Misses       9448     9467      +19     
Impacted Files Coverage Δ
arrow/src/ipc/compression/ipc_compression.rs 76.36% <76.36%> (ø)
arrow/src/ipc/writer.rs 82.19% <84.45%> (+0.40%) ⬆️
arrow/src/ipc/reader.rs 91.16% <98.18%> (+0.33%) ⬆️
parquet_derive/src/parquet_field.rs 65.53% <0.00%> (-0.69%) ⬇️
parquet/src/util/hash_util.rs 95.16% <0.00%> (-0.50%) ⬇️
arrow/src/array/array_dictionary.rs 91.53% <0.00%> (-0.39%) ⬇️
parquet/src/util/bit_util.rs 92.68% <0.00%> (-0.33%) ⬇️
arrow/src/record_batch.rs 93.89% <0.00%> (-0.11%) ⬇️
parquet/src/encodings/encoding.rs 93.62% <0.00%> (-0.03%) ⬇️
arrow/src/json/reader.rs 84.58% <0.00%> (ø)
... and 2 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@liukun4515
Contributor Author

Hi @andygrove @alamb, I have implemented the IPC compression, but the CI on AMD 32 fails.
I have no idea how to fix it; can you help me?

@liukun4515
Contributor Author

After this PR is merged, I will enable the integration test for 2.0.0-compression.

@liukun4515
Contributor Author

The integration test failed: https://github.com/apache/arrow-rs/runs/6952125461?check_suite_focus=true#step:6:15070
The cause is this C++ exception: https://github.com/apache/arrow/blob/4ade394a7a0fa22ecb8ef2e3b4dc8bb42e599274/cpp/src/arrow/ipc/reader.cc#L198

I'm confused about why we need to make each buffer 8-byte aligned.

@liukun4515
Contributor Author

Hi @andygrove @alamb, I have implemented the IPC compression, but the CI on AMD 32 fails. I have no idea how to fix it; can you help me?

@martin-g Could you please take a look at this CI problem?

@liukun4515
Contributor Author

Hi @andygrove @alamb, I have implemented the IPC compression, but the CI on AMD 32 fails. I have no idea how to fix it; can you help me?

@martin-g Could you please take a look at this CI problem?

I think the issue has been resolved.

@alamb
Contributor

alamb commented Jun 21, 2022

Thanks @liukun4515 -- I will try and find time tomorrow to review this PR

arrow/Cargo.toml Outdated
@@ -60,7 +63,7 @@ multiversion = { version = "0.6.1", default-features = false }
bitflags = { version = "1.2.1", default-features = false }

[features]
default = ["csv", "ipc", "test_utils"]
default = ["csv", "ipc", "test_utils", "zstd", "lz4"]
Member

@viirya viirya Jun 21, 2022

So by default we will include the zstd and lz4 dependencies? If we don't use ipc, or don't use compressed IPC, it seems these dependencies are not necessary?

Contributor

I agree -- I suggest something like

Suggested change
default = ["csv", "ipc", "test_utils", "zstd", "lz4"]
default = ["csv", "ipc", "test_utils"]

Contributor Author

Yes, the dependencies are included by default.
Buffer compression is a common protocol across all languages in the Arrow ecosystem; with compression/decompression built in, we can read files or streams coming from a compressed producer.
2.0.0 compression has already been implemented in Java, C++ and other languages, so if your Rust server receives a message compressed with this protocol, it can read it by default.

Contributor

@nevi-me nevi-me Jun 23, 2022

I'd be in favour of adding the deps to ipc rather than creating a separate ipc_compression feature. The inconvenience of a larger binary is a lesser problem than a runtime error from not supporting compression by omission.
The latter would need a rebuild.

Member

I have encountered an issue cross-compiling the lz4 crate on some platforms. We don't use ipc, so we can choose not to include the ipc feature at all. I'm wondering whether, if IPC compression can't be excluded, there might be issues like that, although this sounds like a corner case.

Contributor Author

I have added the feature ipc_compression = ["lz4", "zstd"] as an option, for anyone who wants to compile with the IPC compression feature.

arrow/src/ipc/reader.rs Show resolved Hide resolved
Comment on lines 84 to 85
// TODO consider the error result
#[cfg(any(feature = "zstd,lz4", test))]
Member

Hmm, if these features are not enabled, what will happen? I think an explicit error is necessary.

Contributor Author

I agree with you, @viirya, but I have no idea how to handle this.
Could you please give me some suggestions or advice?

Contributor Author

I follow the #[cfg(any(feature ... usage from other modules to resolve the error from CI:

cargo build --features=csv,ipc,simd,lz4,zstd --target wasm32-unknown-unknown

If I don't use this method, I can't pass the CI for the wasm32-unknown-unknown target.
I searched for the reason and found issue #180, which has the same problem.
@alamb

})
}
}
z => panic!("Unsupported ipc::MetadataVersion {:?}", z),
Member

ipc::MetadataVersion::V4 seems to also hit this error? But there is actually CompressionCodecType::NoCompression indicating that no compression is used.

Contributor

Also, given that this function returns a Result, it seems we could return a proper error here rather than panicking.
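
For example, a sketch of what that arm could look like (the exact error variant and message here are illustrative):

z => Err(ArrowError::InvalidArgumentError(format!(
    "Unsupported ipc::MetadataVersion {:?}",
    z
))),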

Contributor Author

I just copied some code from try_new.
try_new uses CompressionCodecType::NoCompression, the same as the older version.
try_new_with_compression is meant to enable compression, but I think I made a mistake:
I should use CompressionType instead of CompressionCodecType and make sure compression is enabled.

Comment on lines 1051 to 1054
#[cfg(any(feature = "zstd,lz4", test))]
compression_codec
.compress(buffer.as_slice(), &mut _compression_buffer)
.unwrap();
Member

I think we can prevent using the compression option in try_new_with_compression if these features are not included.

Contributor Author

@viirya Same as my reply above.
Maybe I can resolve this option/compile issue in a follow-up issue or pull request.
In this pull request, we can focus on the protocol of IPC compression.

// write the schema, set the written bytes to the schema + header
let encoded_message = data_gen.schema_to_bytes(schema, &write_options);
let (meta, data) = write_message(&mut writer, encoded_message, &write_options)?;
Ok(Self {
writer,
write_options,
schema: schema.clone(),
block_offsets: meta + data + 8,
block_offsets: meta + data + header_size,
Member

What is this for?

Contributor Author

try_new_with_options is used to write the schema to the file or stream.
The layout of the Arrow IPC file format is described at https://arrow.apache.org/docs/format/Columnar.html#ipc-file-format

First, we write
<magic number "ARROW1">
<empty padding bytes [to 8 byte boundary]>
The size of this part is 8 bytes, which is the value of header_size.
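
For reference, here is a rough sketch of that 8-byte prefix (6 bytes of magic plus 2 bytes of padding); the constant and function names are illustrative, not the PR's actual code:

const ARROW_MAGIC: [u8; 6] = *b"ARROW1";

fn write_file_header<W: std::io::Write>(writer: &mut W) -> std::io::Result<usize> {
    writer.write_all(&ARROW_MAGIC)?; // 6 bytes: the magic number "ARROW1"
    writer.write_all(&[0u8; 2])?;    // 2 padding bytes up to the 8-byte boundary
    Ok(8)                            // this 8 is the header_size mentioned above
}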

Contributor

@alamb alamb left a comment

Thanks @liukun4515 -- I think this is looking pretty good. The biggest thing I would like to sort out is the feature flags and keeping the compression code a bit more isolated. Otherwise this is looking close.

arrow/Cargo.toml Outdated
@@ -60,7 +63,7 @@ multiversion = { version = "0.6.1", default-features = false }
bitflags = { version = "1.2.1", default-features = false }

[features]
default = ["csv", "ipc", "test_utils"]
default = ["csv", "ipc", "test_utils", "zstd", "lz4"]
Contributor

I agree -- I suggest something like

Suggested change
default = ["csv", "ipc", "test_utils", "zstd", "lz4"]
default = ["csv", "ipc", "test_utils"]

arrow/Cargo.toml (resolved)
match self {
CompressionCodecType::Lz4Frame => {
let mut encoder = lz4::EncoderBuilder::new().build(output).unwrap();
encoder.write_all(input).unwrap();
Contributor

I suggest passing the errors back out of here directly (as compress returns a Result) rather than unwrap(), which will panic on error.
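
A sketch of what propagating the errors could look like for the arm shown above, assuming ArrowError converts from std::io::Error as it does elsewhere in the crate (the finish() handling here is illustrative):

CompressionCodecType::Lz4Frame => {
    let mut encoder = lz4::EncoderBuilder::new().build(output)?;
    encoder.write_all(input)?;
    // finish() returns the inner writer plus the result of flushing the frame
    let (_output, result) = encoder.finish();
    result?;
    Ok(())
}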

}
}

#[cfg(any(feature = "zstd,lz4", test))]
Contributor

I wonder if you can put this guard on the entire mod ipc_compression statement so that the whole module (including the tests) is not compiled unless that feature is active.
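
For example, something like this at the module declaration site (assuming the module is declared in arrow/src/ipc/mod.rs and using the ipc_compression feature name discussed later in this thread; the visibility is illustrative):

#[cfg(any(feature = "ipc_compression", test))]
pub(crate) mod ipc_compression;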

arrow/src/ipc/mod.rs (resolved)
_ => {
if metadata_version != ipc::MetadataVersion::V5 {
return Err(ArrowError::InvalidArgumentError(
"Compress buffer just support from metadata v5".to_string(),
Contributor

Suggested change
"Compress buffer just support from metadata v5".to_string(),
"Compression only supported in metadata v5 and above".to_string(),

Contributor Author

done

match batch_compression_type {
CompressionCodecType::NoCompression => {}
_ => {
if metadata_version != ipc::MetadataVersion::V5 {
Contributor

should this check be < instead of != to cover future versions?

Suggested change
if metadata_version != ipc::MetadataVersion::V5 {
if metadata_version < ipc::MetadataVersion::V5 {

Contributor Author

done

})
}
}
z => panic!("Unsupported ipc::MetadataVersion {:?}", z),
Contributor

Also, given that this function returns a Result, it seems we could return a proper error here rather than panicking.

}

impl IpcWriteOptions {
pub fn try_new_with_compression(
Contributor

I wonder if you could avoid the duplication here with more of a Builder style:

impl IpcWriteOptions {
  pub fn with_compression(mut self, batch_compression_type: CompressionCodecType) -> Result<Self> {
    .. // do checks here
    self.batch_compression_type = batch_compression_type;
    Ok(self)
  }
...
}

Then one could use it like:

  let options = IpcWriteOptions::try_new(8, false, ipc::MetadataVersion::v5)?
    .with_compression(CompressionCodecType::LZ4)?;
...

Contributor Author

The constructor may be refactored in a follow-up PR.

(buffer.as_slice(), origin_buffer_len as i64)
}
CompressionCodecType::Lz4Frame | CompressionCodecType::Zstd => {
if (origin_buffer_len as i64) == LENGTH_EMPTY_COMPRESSED_DATA {
Contributor

It seems like this code should throw a "Not supported" error (or panic if it is encountered without support compiled in).

@alamb
Contributor

alamb commented Jun 24, 2022

What is the status of this PR? Is it ready to go? Do we need to mess with the feature flags more?

@viirya
Member

viirya commented Jun 24, 2022

I think there are some comments that have not been addressed yet?

@liukun4515
Contributor Author

Hi @alamb @viirya,
I will continue this work next week, after I learn about the feature flags and a few other things.
I will mark the PR as a draft.
After the PR is ready and all the comments are addressed, I will reopen it.
Sorry for that.

@alamb alamb requested a review from viirya July 31, 2022 11:12
}
}
CompressionCodecType::NoCompression => Buffer::from(buf_data),
_ => {
Contributor Author

How should we handle this?
What if the Rust service is compiled without ipc_compression and receives a message that uses the IPC compression feature?

Contributor

I think returning an error is the correct way, but as you have identified above you can't do that without changing the signature to Result<Buffer> -- since decompression can fail, we probably need to make that change.
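
A sketch of what that fallback could look like once the function returns Result<Buffer> (the error variant and message here are illustrative):

_ => Err(ArrowError::InvalidArgumentError(
    "compressed IPC buffers are not supported in this build".to_string(),
)),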

Contributor Author

@alamb
I filed a new issue to track this and will submit a follow-up PR for it.

}
match compression_codec {
CompressionCodecType::Lz4Frame | CompressionCodecType::Zstd
if cfg!(feature = "ipc_compression") || cfg!(test) =>
Contributor Author

We need to check the compile options here, in case we receive an IPC message that uses the compression feature.

Ok(())
}
CompressionCodecType::Zstd => {
let mut encoder = zstd::Encoder::new(output, 0).unwrap();
Member

Should we pass the error out?

Contributor Author

Fixed; the error is now passed out.

Err(e) => Err(e.into()),
}
}
_ => Ok(input.len()),
Member

Is this for CompressionCodecType::NoCompression? If so, do we need to copy from input to output?

Contributor Author

Removed NoCompression from CompressionCodecType.

Contributor Author

Now it only handles the LZ4 and ZSTD branches.

Buffer::from(data)
} else {
// decompress data using the codec
let mut _uncompressed_buffer = Vec::new();
Member

We know decompressed_length at this point. Should we allocate the vector with enough capacity?

Contributor Author

Good point, we can initialize the vec with that capacity.

/// -1: indicate that the data that follows is not compressed
/// 0: indicate that there is no data
/// positive number: indicate the uncompressed length for the following data
fn read_uncompressed_size(buffer: &[u8]) -> i64 {
Member

#[inline]?

Contributor Author

done
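
For reference, a minimal sketch of the prefix read this thread is about -- the 8-byte little-endian length that precedes each compressed buffer; this is illustrative, not the PR's exact code:

#[inline]
fn read_uncompressed_size(buffer: &[u8]) -> i64 {
    let mut len_bytes = [0u8; 8];
    len_bytes.copy_from_slice(&buffer[..8]);
    i64::from_le_bytes(len_bytes)
}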

Some(compression) => match compression.codec() {
CompressionType::ZSTD => CompressionCodecType::Zstd,
CompressionType::LZ4_FRAME => CompressionCodecType::Lz4Frame,
_ => CompressionCodecType::NoCompression,
Member

Is there any other option?

Contributor Author

@liukun4515 liukun4515 Aug 1, 2022

No, the current version only supports LZ4 and zstd.

/// uncompressed length may be set to -1 to indicate that the data that
/// follows is not compressed, which can be useful for cases where
/// compression does not yield appreciable savings.
fn read_buffer(
Contributor Author

TODO:
We should change the output to Result<Buffer> and return an error message if the buffer can't be read.

Contributor

do you still intend to make this change? Or is it planned for a subsequent PR?

Contributor Author

}
CompressionCodecType::Lz4Frame | CompressionCodecType::Zstd => {
if (origin_buffer_len as i64) == LENGTH_EMPTY_COMPRESSED_DATA {
(buffer, 0)
Member

Suggested change
(buffer, 0)
(buffer, LENGTH_EMPTY_COMPRESSED_DATA)

Comment on lines +1285 to +1286
// padding and make offset 8 bytes aligned
let pad_len = pad_to_8(len as u32) as i64;
Member

Is the padding relative to the data len, or the total len?

Contributor Author

@liukun4515 liukun4515 Aug 1, 2022

Each buffer has two parts: the metadata, which stores the offset and the actual length of the data, and the data itself.
The actual length is the total_len.
The pad_len is only used to align the buffer.

@liukun4515 liukun4515 requested a review from viirya August 1, 2022 14:43
@liukun4515
Contributor Author

@viirya @alamb PTAL

Comment on lines +89 to +90
lz4 = { version = "1.23", default-features = false }
zstd = { version = "0.11", default-features = false }
Contributor

I don't think these need to be in dev-dependencies, do they, if they are already in the dependencies of the crate?

Contributor Author

@liukun4515 liukun4515 Aug 6, 2022

We can't remove lz4 and zstd from dev-dependencies.
ipc_compression is not in the default features, so without them we can't run cargo test against the lz4 and zstd libs.
But we need to run the IPC compression unit tests in CI.

Contributor Author

I tried removing these from dev-dependencies and running cargo test.
I got the following compile error:

error[E0433]: failed to resolve: use of undeclared crate or module `lz4`
  --> arrow/src/ipc/compression/ipc_compression.rs:57:39
   |
57 |                     let mut encoder = lz4::EncoderBuilder::new().build(output)?;
   |                                       ^^^ use of undeclared crate or module `lz4`

error[E0433]: failed to resolve: use of undeclared crate or module `zstd`
  --> arrow/src/ipc/compression/ipc_compression.rs:65:39
   |
65 |                     let mut encoder = zstd::Encoder::new(output, 0)?;
   |                                       ^^^^ use of undeclared crate or module `zstd`
   |
help: there is a crate or module with a similar name
   |
65 |                     let mut encoder = std::Encoder::new(output, 0)?;
   |                                       ~~~

@alamb

/// uncompressed length may be set to -1 to indicate that the data that
/// follows is not compressed, which can be useful for cases where
/// compression does not yield appreciable savings.
fn read_buffer(
Contributor

do you still intend to make this change? Or is it planned for a subsequent PR?

}
}
CompressionCodecType::NoCompression => Buffer::from(buf_data),
_ => {
Contributor

I think returning an error is the correct way, but as you have identified above you can't do that without changing the signature to Result<Buffer> -- since decompression can fail, we probably need to make that change.

// decompress data using the codec
let mut _uncompressed_buffer =
Vec::with_capacity(decompressed_length as usize);
let _input_data = &buf_data[(LENGTH_OF_PREFIX_DATA as usize)..];
Contributor

Something still isn't quite right with this code -- instead of gating the code on the test feature, I think the more typical pattern is to gate the entire test on the ipc_compression feature

So something like

    #[cfg(ipc_compression)]
    #[test]
    fn read_generated_streams_200() {
        let testdata = crate::util::test_util::arrow_test_data();
        let version = "2.0.0-compression";
...
}

Contributor Author

@liukun4515 liukun4515 Aug 6, 2022

#[cfg(any(feature = "ipc_compression", test))] -- this is not a test feature; it means either the ipc_compression feature or a test build.
It can only be compiled with feature="ipc_compression" or under test.

There is similar usage in parquet:

#[cfg(any(feature = "lz4", test))]

@alamb
Contributor

alamb commented Aug 6, 2022

@liukun4515 -- perhaps I can find some time to try and help with this PR. I will try to do so tomorrow

@liukun4515
Contributor Author

@liukun4515 -- perhaps I can find some time to try and help with this PR. I will try to do so tomorrow

Thank you, you can also ping me through the slack.

@alamb
Contributor

alamb commented Aug 6, 2022

Thank you, you can also ping me through the slack.

Thanks @liukun4515 -- I think I have gotten to the point where I can't offer any more specific suggestions on structure without trying it myself. I hope to try and rearrange the feature-flags and make a proposed PR to your branch.

I won't have a chance to do it until tomorrow however.

@alamb
Contributor

alamb commented Aug 7, 2022

@liukun4515 -- I made good progress on sorting out the feature flags. A draft PR is here liukun4515#1 in case you want to see the direction I am headed.

I sadly ran out of time today, but I will try and finish it up tomorrow.

@liukun4515
Contributor Author

@liukun4515 -- I made good progress on sorting out the feature flags. A draft PR is here liukun4515#1 in case you want to see the direction I am headed.

I sadly ran out of time today, but I will try and finish it up tomorrow.

Thanks for your help!!!
I will review your work tomorrow, because I am busy working on and discussing the decimal optimization.

@alamb
Contributor

alamb commented Aug 8, 2022

I will review your work tomorrow, because I am busy working on and discussing the decimal optimization.

Thank you -- I hope to push up an update shortly. It turned into a larger change than I was expecting

@alamb
Contributor

alamb commented Aug 8, 2022

@liukun4515

Here is my proposal (the PR to liukun4515#1 is now rendered terribly): #2369

@liukun4515
Contributor Author

Thanks for the cooperation, @alamb.
I will close this PR.

@liukun4515 liukun4515 closed this Aug 11, 2022
Labels
arrow Changes to the arrow crate
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Support IPC file compression
Create implementation of IPC RecordBatch body buffer compression from ARROW-300
7 participants