
CODEC-290: Base16 Input and Output Streams #46

Merged 14 commits into apache:master on Jun 30, 2020
Conversation

adamretter
Contributor

@adamretter adamretter commented Apr 19, 2020

Alongside the Commons Codec Base64 Input and Output Streams already provided, we have a need in our project (eXist-db) to be able to encode/decode Base16 (hexadecimal) with Input and Output Streams.

We have had similar code in our own project for a while, so I decided to generalise it and upstream it to commons codec.

This PR adds Base16InputStream and Base16OutputStream.

I have tried to accompany the functionality with 100% test coverage. However, there is a single very small branch that I can't seem to hit with JaCoCo. Is it a false positive? I am not sure whether it is important. If it needs to be solved, some guidance on how to reach that branch would be appreciated.

@coveralls

coveralls commented Apr 19, 2020

Coverage Status

Coverage increased (+0.1%) to 93.96% when pulling 427998a on adamretter:base16 into 7118544 on apache:master.

@garydgregory
Member

Hi @adamretter
Thank you for the PR!
I do not see support for org.apache.commons.codec.binary.BaseNCodec.isStrictDecoding(), which version 1.15 introduces for Base32 and Base64. The PR seems to silently ignore that setting, unless I am missing something.

@adamretter
Contributor Author

Thanks @garydgregory. From the API of BaseNCodec it was not at all clear that I had to support that. I can get that implemented soon.

Are there any other things that I should consider that are perhaps non obvious (i.e. not enforced through abstract methods)?

@aherbert
Contributor

Hi,

Further to Gary's note here is some info on strict decoding.

The strict decoding support applies to decodings where the length of the input characters to decode is incorrect. Strict decoding would reject these. Lenient decoding would allow it. Strict decoding is a contract that states the encoding can be round tripped via decode-encode to the same byte array (ignoring padding characters).

So for Base16 this is simple. If you have an even number of Hex characters your encoding is good and you can decode everything. If the decoding leaves a final trailing character you have leftover bits. Strict decoding would reject this. Lenient decoding would allow it by discarding the leftover bits.
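To make the strict/lenient distinction concrete, here is a minimal sketch (hypothetical names, not the commons-codec implementation) of how the two policies could treat an odd number of hex characters:

```java
// Hedged sketch: strict decoding rejects a trailing character,
// lenient decoding silently discards the leftover bits.
class Base16Sketch {
    static byte[] decode(final String hex, final boolean strict) {
        final int usable = hex.length() & ~1; // largest even prefix
        if (strict && usable != hex.length()) {
            throw new IllegalArgumentException("Strict decoding rejects trailing character");
        }
        final byte[] out = new byte[usable / 2];
        for (int i = 0; i < usable; i += 2) {
            final int hi = Character.digit(hex.charAt(i), 16);
            final int lo = Character.digit(hex.charAt(i + 1), 16);
            out[i / 2] = (byte) ((hi << 4) | lo);
        }
        return out; // lenient: any leftover trailing character was ignored
    }
}
```

So decoding "ABC" leniently yields the single byte 0xAB, while strict decoding throws, because "ABC" cannot be round tripped via decode-encode.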

@garydgregory
Member

through abstract methods)?

Nothing comes to mind.

@adamretter
Contributor Author

adamretter commented Apr 21, 2020

@garydgregory @aherbert Thanks for the feedback. I have pushed a commit which I think addresses the strict decoding thing now.

@garydgregory
Member

@aherbert How does it look to you?

Also, in a separate PR, should the toLowerCase setting be pulled up?

@aherbert
Contributor

I think this is not the way to go. I would not use the Hex class at all. Use a decode table as is done for Base32/64Codec. I would drop support for the Charset. This is only relevant when converting to/from String via char[] arrays. You should not even have to deal with char[] let alone allocate them as full length intermediates for every encode or decode of a byte[]. Plus there are extra public static methods added to Base16Codec which duplicate functionality from Hex.

Hex has 16 characters. I would expect to construct a Base16Codec with a boolean flag for upper or lowercase. In Base32 we support decode of Base32 or Base32 Hex alphabets. Base64 supports URL safe or standard MIME 64 alphabet. So I see Base16 supporting encoding to lower or upper case and decoding either, effectively making decoding case insensitive.

The BaseNCodec is purely about packing bytes using an alphabet of N characters expressed as UTF-8 characters, but specifically UTF-8 characters less than 127, so effectively the standard ASCII alphabet. This is enforced by the base class BaseNCodec, which converts Strings to/from bytes as UTF-8. Any extension of BaseNCodec should only deal with encoding/decoding between packed bytes and a byte alphabet (that maps in UTF-8 to a readable character set). I would expect the class to look like Base32Codec. Essentially copy that class and just remove some of the extra alphabet support. This would seamlessly support decoding of lower and upper case hex characters using a decode table. Any character not in the decode table is ignored as whitespace/padding.
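A decode table of the kind described might be built as follows. This is an illustrative sketch only; the actual Base32/64 tables are hard-coded constants, and the class name here is hypothetical:

```java
// Maps an ASCII byte to its 4-bit value, or -1 for bytes outside the alphabet.
// Covers both upper- and lower-case hex digits, making decoding case insensitive.
class DecodeTableSketch {
    static final byte[] DECODE_TABLE = new byte[128];
    static {
        java.util.Arrays.fill(DECODE_TABLE, (byte) -1);
        for (int i = 0; i < 16; i++) {
            final char lower = Character.forDigit(i, 16); // '0'-'9', 'a'-'f'
            DECODE_TABLE[lower] = (byte) i;
            DECODE_TABLE[Character.toUpperCase(lower)] = (byte) i;
        }
    }
}
```

Decoding is then a direct table lookup per input byte, with no intermediate char[] allocation.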

Your current class cannot decode whitespace padded Hex or encode with a line wrapping separator. I would expect to be able to encode to Hex wrapped at 72 characters:

ABCDEF0123456789.....
ABCDEF0123456789.....
ABCDE

and then decode this.
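A fixed-width wrap of the kind shown above could be sketched like this (illustrative only; the real Base32/64 classes take a lineLength and line separator in their constructors and wrap during encoding):

```java
// Hedged sketch: wrap an already-encoded string at a fixed line length.
class LineWrapSketch {
    static String wrap(final String encoded, final int lineLength) {
        final StringBuilder sb = new StringBuilder(encoded.length() + encoded.length() / lineLength + 1);
        for (int i = 0; i < encoded.length(); i += lineLength) {
            // Append one full line (or the shorter final chunk) plus a separator.
            sb.append(encoded, i, Math.min(i + lineLength, encoded.length())).append('\n');
        }
        return sb.toString();
    }
}
```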

Currently the functionality in the Base16 class has a different extended API than Base32/64 (more public static methods) but less flexible encoding or decoding with regard to whitespace. Ideally we want to be able to use Base16 to produce ASCII armoured hex encoding of byte data as can be done using the Base32/64 variants.

This is not an extension of BaseNCodec to support Base16. It is a shoehorn of ported functionality into the same namespace. Anyone viewing the 3 classes down the line will wonder why the functionality and API are not aligned. Let's reduce maintenance burden going forward by making Base16 look and function like Base32/64.

@garydgregory
Member

garydgregory commented Apr 21, 2020 via email

@adamretter
Contributor Author

@aherbert After your initial PR feedback, which seemed positive, your second comments are quite disheartening. I tried to reuse what was already present in Commons Codec, i.e. the Hex class, which I thought seemed out of place, but I assumed you must have a good reason for having it.

I could of course refactor this code to not use that, but... I also have some confusion/concerns around your other feedback...

This would seamlessly support decoding of lower and upper case hex characters using a decode table. Any character not in the decode table is ignored as whitespace/padding.

I don't understand why anyone would want that. Whilst with Base64 it makes sense, because there are standards we can point to which say that is what should happen, there are no such standards for Hex. If I were encoding or decoding hexadecimal data and encountered non-alphabet chars (e.g. whitespace), my expectation is that it is an error, because they are not in the hexadecimal alphabet and there is no standard I am adhering to that says to ignore them. This seems very unintuitive to me and a source of possible unexpected data bugs for users.

Your current class cannot decode whitespace padded Hex or encode with a line wrapping separator. I would expect to be able to encode to Hex wrapped at 72 characters:

I am wondering where that expectation of 72 characters comes from? Again, I don't know of any standard for hexadecimal (unlike Base64) which stipulates that. Certainly, as a naive commons-codec user I don't have such an expectation. Perhaps there are some specs for Base16 that I am not aware of that you could point me at?

@aherbert
Contributor

My aim was not to dishearten, it was to try and maintain a common interface and functionality in the different instances of BaseNCodec.

The 72 character line wrap was an example. The line length is an option specified in the constructor for Base32/64; MIME-style Base64 encoding uses 76 chars, and the default is 0 for no line wrapping.

I agree that whitespace support is not required. But it is also not unexpected, given how Base32 and Base64 function. The use of a decode table to translate the bytes will have a similar runtime to your use of Hex. Each character must be checked to see that it is valid. At the business end, Hex does this for each character:

final int digit = Character.digit(ch, 16);
if (digit == -1) {
    throw new DecoderException("Illegal hexadecimal character " + ch + " at index " + index);
}
return digit;

I am suggesting we skip conversion of input byte[] to char[] and the use of Hex. Just do the decoding direct on the byte[] using a decode table.

Regarding line wrapping, it is an expected feature of Base64 encoded data, and as you say there are standard formats where this is done. The support has been carried through to Base32; I do not know if this has precedent or was done simply because it could be. I was suggesting doing the same for Base16. If you encode data and write it somewhere, it may be nice to line wrap it for convenience when viewing. Supporting a line wrap when encoding is thus for human-readable consumption. I find it more likely to use line wrapping on the encoded output than to use a character set that is anything other than UTF-8 (or more specifically US_ASCII).

But what about decoding? You have to support it if you are expecting the input to be line wrapped. But perhaps you are not. So what to do when the input is not a valid hex character during decoding is a separate issue. There is a case for adding support to BaseNCodec to throw an exception when the input byte is not a valid character in the alphabet. Currently they are ignored to allow whitespace padding. But this does of course allow you to put in junk and this is also ignored.

Then we have 3 cases for a non-alphabet character during decoding:

  • Ignore
  • Detect if whitespace/padding and ignore; otherwise throw an exception
  • Throw an exception

Currently Base16 would throw an exception. If you passed the exact same data to Base32/64 they would decode it as it is a valid alphabet (although they may throw an exception if using strict decoding depending on the trailing characters). This difference of behaviour is odd. The simplest option for Base16 is to do what Base32/64 do and ignore the invalid character.

Cleaning up exception functionality should be part of a separate fix. To me it seems out of place to allow whitespace when decoding Base32/64 but not Base16. In addition you can encode to Base32/64 with whitespace but not with Base16.

I would suggest fixing Base16 to work as per Base32/64 in this PR. Then fixing BaseNCodec to have an option flag to throw an exception when decoding if an invalid character is encountered. This functionality would be off by default for backward compatibility in Base32/64. The flag could be carried through to the BaseNCodecInput/OutputStream allowing you to optionally throw exceptions during streaming decoding when invalid characters are present.

@garydgregory
Member

Where are we on this?

@adamretter
Contributor Author

adamretter commented Apr 26, 2020

@garydgregory I am not sure...

Whilst I am happy to refactor to use a decode table, I don't agree with the other arguments for additional requirements to make it support Base64 like behaviour. The published standards are only for Base64 and there is no equivalent for Base16, therefore to me it makes no sense to apply such stuff to Base16.

Also, ignoring invalid chars or inserting line breaks every 72 characters would break my own use-cases, and therefore I can't really justify adding it when I can't use the code.

How do we resolve this?

@garydgregory
Member

Why not follow https://tools.ietf.org/html/rfc4648 ?

@adamretter adamretter force-pushed the base16 branch 2 times, most recently from 69bc6e5 to 882b8dd Compare June 1, 2020 11:52
@adamretter
Contributor Author

adamretter commented Jun 1, 2020

@garydgregory @aherbert I have now updated this PR to use decode/encode tables and I have gotten rid of the charset stuff. I have also removed the additional static convenience methods that were felt to be out-of-place.

The Base16 stuff supports upper and lower-case hex alphabets. It follows RFC 4648 as requested. Therefore the default alphabet is upper-case as that is the only one specified in RFC 4648.

  1. It does not support padding, because RFC 4648 clearly says:

Unlike base 32 and base 64, no special padding is necessary since a full code word is always available.

  2. It also does not support line-wrapping, because RFC 4648 states:

Implementations MUST NOT add line feeds to base-encoded data unless
the specification referring to this document explicitly directs base
encoders to add line feeds after a specific number of characters.

AFAIK there is no standard for hex encoded data that suggests line-wrapping.

  3. Regarding ignoring characters outside of the alphabet, RFC 4648 states:

Implementations MUST reject the encoded data if it contains
characters outside the base alphabet when interpreting base-encoded
data, unless the specification referring to this document explicitly
states otherwise.

AFAIK there is no standard for hex encoded data that allows invalid characters. Therefore an IllegalArgumentException is thrown if the user tries to decode characters outside of the Base16 alphabets.

Hopefully this now addresses your concerns?

@adamretter
Contributor Author

@garydgregory Could I get an update on this please?

Contributor

@aherbert aherbert left a comment


The change to use a decode table is good. Given that this does not support whitespace and padding characters (unlike Base32/64) then this should be added to the Javadoc header.

I am unsure why you want to restrict the decoding to be case sensitive. Base32 and java.lang.Character will decode mixed upper and lower case hex characters.

There is a bug when trying to encode very large byte arrays which should be fixed.

There is common functionality in Base16/32/64TestData. Base16TestData duplicates the DECODE array from Base64TestData and there is the method static byte[] streamToBytes(final InputStream is). I prefer your method in Base16TestData to stream the InputStream to a byte array. The common functionality can be moved into a single class, for example BaseNCodecTestData. Opinions on this?

void decode(final byte[] data, int offset, final int length, final Context context) {
    if (context.eof || length < 0) {
        context.eof = true;
        if (context.ibitWorkArea > 0) {
Contributor

ibitWorkArea != 0

Contributor Author

Okay.

 */
@Override
public boolean isInAlphabet(final byte octet) {
    return octet >= 0 && octet < decodeTable.length && decodeTable[octet] != -1;
Contributor

To avoid checking if the octet is not negative you could convert as unsigned:

(octet & 0xff) < decodeTable.length && decodeTable[octet] != -1

Contributor Author

That's slightly nicer, thanks

 * @return {@code true} if the value is defined in the Base16 alphabet, {@code false} otherwise.
 */
@Override
public boolean isInAlphabet(final byte octet) {
Contributor

I am not convinced by the two decode tables. Is there any reason that you would not want to support case-insensitive decoding? Currently if I want to decode some hex encoded data I have to know before decoding if the hex uses upper or lower case and create the correct decoder.

Base32 supports case insensitive decoding when used in Hex mode.

Base64 supports decoding of both URL-safe and standard encoding by default.

The Hex class which delegates to Character.digit(ch, 16) will handle mixed case input hex.

The upside of forcing a case is that data encoded using both upper and lower case characters will be reported as invalid. Is this something in RFC 4648?

Contributor Author

Yes exactly! I want this to be spec compliant, not something that might produce unexpected behaviour. If you don't know if the hex you are decoding is upper or lower case, I think you have bigger problems... i.e. are you sure it is even valid hex?

Contributor

I think there are two issues here:

  • Not knowing if the hex is upper or lower case, but it must use exclusively one or the other
  • Not allowing mixed case hex

Ideally the most flexible decoder would have a 3rd mode for case insensitive decoding. This would apply to decoding only. It would work using a 3rd decoding table that combines the upper and lower case tables.

However allowing mixed case in the same input is not really valid. So the decoder would have to detect the first A-F character and ensure the same case is used for the rest. An exception can be raised if not. This can even be covered by the strict/lenient decoding policy. Tracking the first A-F character is a lot of extra code so I don't like this idea.

What do you think to using the decoding policy to cover this? The constructor accepts the upper/lower case flag for encoding and the strict/lenient decoding flag. If using strict decoding the case must match. If lenient mode then decoding is case insensitive. After all, if we are allowing a trailing character to be discarded, then allowing upper or lower case hex seems no worse a violation of a true encoding.
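The proposed coupling of decoding policy and case sensitivity could be sketched per octet like this (hypothetical names, not the actual commons-codec API; shown on chars for brevity, though the real codec works on bytes):

```java
// Hedged sketch of the proposal: lenient decoding accepts either case,
// strict decoding requires the case chosen at construction time.
class PolicySketch {
    static int decodeOctet(final char c, final boolean lowerCase, final boolean strict) {
        final int digit = Character.digit(c, 16);
        if (digit == -1) {
            throw new IllegalArgumentException("Invalid Base16 character: " + c);
        }
        // Only letters carry case; digits '0'-'9' are always accepted.
        if (strict && Character.isLetter(c) && Character.isLowerCase(c) != lowerCase) {
            throw new IllegalArgumentException("Wrong case for strict decoding: " + c);
        }
        return digit;
    }
}
```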

Contributor Author

I think it is not a bad suggestion, but I also think this is perhaps feature/scope creep. My goal was to implement the decoder according to the RFC. What you have suggested sounds like an additional feature which could be added later without issue. I would rather get this PR merged as it is. Is that okay with you?

Contributor

OK.

The idea of using the lenient decoding to cover both case insensitive decoding and allowing trailing characters is a mash up. A case insensitive decoding table can be added as an option later. After all, you may wish to support case insensitive decoding but enforce no trailing characters.


private int decodeOctet(final byte octet) {
    int decoded = -1;
    if (octet >= 0 && octet < decodeTable.length) {
Contributor

Avoid the >= 0 using (octet & 0xff) < decodeTable.length

@@ -567,7 +567,7 @@ public String encodeToString(final byte[] pArray) {
  */
 protected byte[] ensureBufferSize(final int size, final Context context){
     if (context.buffer == null) {
-        context.buffer = new byte[getDefaultBufferSize()];
+        context.buffer = new byte[Math.max(size, getDefaultBufferSize())];
Contributor

Nice spot. Previously it was not relevant as the buffer is expanded continuously inside the decoding loop. Base32 and Base64 can (should) be similarly optimised. Something for another improvement.

@@ -340,6 +341,18 @@ public void testDecodeHexCharArrayOddCharacters5() {
     checkDecodeHexCharArrayOddCharacters(new char[] { 'A', 'B', 'C', 'D', 'E' });
 }

@Test(expected = DecoderException.class)
Contributor

IIUC these tests target coverage in new methods added to Hex. If the changes are dropped then these tests can be dropped too.


@Test
public void testConstructor_LowerCase() {
    final Base16 Base16 = new Base16(true);
Contributor

Replace Base16 Base16 with Base16 base16

@Test
public void testDecodeSingleBytesOptimisation() {
    final BaseNCodec.Context context = new BaseNCodec.Context();
    context.ibitWorkArea = 0;
Contributor

The codec relies on the fact that a new Context created in BaseNCodec.decode(byte[]) has a zero work area. So perhaps this should be checked. I'd also change the -17 to the hex byte representation for clarity.

final BaseNCodec.Context context = new BaseNCodec.Context();
assertEquals(0, context.ibitWorkArea);

final byte[] data = new byte[1];
final Base16 b16 = new Base16();

data[0] = (byte) 'E';
b16.decode(data, 0, 1, context);
assertEquals(15, context.ibitWorkArea);

data[0] = (byte) 'F';
b16.decode(data, 0, 1, context);
assertEquals(0, context.ibitWorkArea);

assertEquals((byte) 0xEF, context.buffer[0]);

}

final int dataLen = Math.min(data.length - offset, length);
final int availableChars = (context.ibitWorkArea > 0 ? 1 : 0) + dataLen;
Contributor

Could change to ibitWorkArea != 0

// This test is memory hungry. Check we can run it.
final long presumableFreeMemory = BaseNCodecTest.getPresumableFreeMemory();

// Estimate the maximum memory required:
Contributor

This has been copied from the Base64Test, changed (to encode a 500MiB file) and then disabled (no @Test annotation).

CODEC-265 concerns encoding a 1GiB file. Using Base64 you get 4 characters per 3 bytes, so the output is ~1.33GiB. With Base16 you get 2 characters per byte, and it is not possible to create a 2GiB char array.

Note: It will never run on Travis CI due to memory restrictions. When I run this locally it passes with a 500MiB input (1 << 29) but fails with 1GiB (1 << 30). However this is not due to an out-of-memory error (as expected) but an index out of bounds exception. This is due to the allocation of the buffer using:

context.buffer = new byte[Math.max(size, getDefaultBufferSize())];

As noted in my review comment for ensureBufferSize this has been changed to allow a single allocation of the entire array. Unfortunately when size is negative this creates a buffer of the default size.
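The overflow itself is plain int arithmetic: doubling a 1GiB length wraps to a negative value. A minimal demonstration (BYTES_PER_ENCODED_BLOCK is 2 for Base16):

```java
// Why size goes negative: length * 2 overflows a 32-bit int at 1 GiB.
class OverflowDemo {
    static int encodedSize(final int length) {
        return length * 2; // BYTES_PER_ENCODED_BLOCK == 2 for Base16
    }
}
```

Here encodedSize(1 << 30) wraps to Integer.MIN_VALUE, so Math.max(size, getDefaultBufferSize()) silently picks the default buffer size.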

Given that BaseNCodec.ensureBufferSize is intended to be called with a positive-only size one suggestion is Base16.encode could be updated to change this line:

final byte[] buffer = ensureBufferSize(length * BYTES_PER_ENCODED_BLOCK, context);

to

final int size = length * BYTES_PER_ENCODED_BLOCK;
if (size < 0) {
    throw new IllegalArgumentException("Input byte[] length exceeds maximum size for encoded data: " + length);
}

final byte[] buffer = ensureBufferSize(size, context);

A different approach with a less meaningful exception (a negative array size exception) would be to change ensureBufferSize:

context.buffer = new byte[
    compareUnsigned(size, getDefaultBufferSize()) > 0
       ? size
       : getDefaultBufferSize()];

Either way a test should be added to hit this exception.

Contributor

I see that the change to Base16.encode to handle overflow has been done, but this test still exists. It should be replaced by an attempt to encode a byte[] of length 2^30. This will require 1GiB of memory and should be OK on Travis CI. Something like:

@Test(expected = IllegalArgumentException.class)
public void testMaximumEncodedSizeExceeded() {
  final int size1GiB = 1 << 30;
  final byte[] bytes = new byte[size1GiB];
  new Base16().encode(bytes);
}

Contributor Author

@aherbert I already added such a test as you requested. See the test function directly below your comment named checkEncodeLengthBounds

Contributor

OK. It uses a different method call where the length is a separate argument but given the simplicity of the encode method there is no point in testing encode(new byte[1 << 30]) as well. So just remove testCodec265_over() as the test makes no sense for Base16.

@adamretter
Contributor Author

Given that this does not support whitespace and padding characters (unlike Base32/64) then this should be added to the Javadoc header.

Okay I have added a bit more information to the Javadoc header.

I am unsure why you want to restrict the decoding to be case sensitive. Base32 and java.lang.Character will decode mixed upper and lower case hex characters.

It is restricted because I am following the RFC, and the RFC specifies an upper-case alphabet. The lower-case support which I added is an extension of the RFC, but it should not be (and is not) enabled by default; otherwise, for those expecting "standard" behaviour the results may be unexpected. The constructor allows you to choose the case. I have also now made this more explicit in the Javadoc.

@adamretter
Contributor Author

There is common functionality in Base16/32/64TestData. Base16TestData duplicates the DECODE array from Base64TestData and there is the method static byte[] streamToBytes(final InputStream is). I prefer your method in Base16TestData to stream the InputStream to a byte array. The common functionality can be moved into a single class, for example BaseNCodecTestData. Opinions on this?

Okay sure... I have pushed a commit which tidies all that up into BaseNTestData.java

@adamretter
Contributor Author

Thanks for the detailed review @aherbert. You had some good suggestions in there!

I have pushed some more commits, which I think now address all of your concerns. Let me know...

@aherbert
Contributor

You have not reverted all the changes to Hex and HexTest. Are these not redundant now?

@adamretter
Contributor Author

@aherbert I felt that the additional decode/encode methods that I added to Hex still have value; they allow the user to work with an existing byte array rather than the code allocating a fixed-size array for them every time. The tests in HexTest accompany them.

Arguably, I should move them to a separate PR as it's a different feature. What do you prefer?

@aherbert
Contributor

The changes in Hex are not currently complete for a public API. The changes were originally made to allow the Base16 class to use the Hex class. That no longer happens and the changes do not expose all the added functionality.

  • The DIGITS_LOWER/UPPER arrays have been changed to package private although only used by the Hex class.
  • The decodeHex method to decode into a provided byte[] is public (OK)
  • The encodeHex methods to encode into a provided array are protected (Not OK). These should be public with the exception of the actual method that does the encoding which should be private, i.e. a public method call can specify offsets and upper/lower case but not actually pass in the toDigits char[].

I would create a new PR for these changes to make it clear they are not related to the Base16 changes.

Have you created a ticket on the codec bugtracker for this change? If you create one for Base16 and also for Hex changes then the issue number can be added to the change log when merged.

@adamretter
Contributor Author

adamretter commented Jun 29, 2020

@aherbert For this one - https://issues.apache.org/jira/browse/CODEC-290

I will move the Hex class stuff into a separate PR.

@aherbert aherbert changed the title Base16 Input and Output Streams CODEC-290: Base16 Input and Output Streams Jun 29, 2020
assertFalse(b16.isInAlphabet((byte) ('0' - 1)));
assertFalse(b16.isInAlphabet((byte) ('9' + 1)));
assertFalse(b16.isInAlphabet((byte) ('a' - 1)));
assertFalse(b16.isInAlphabet((byte) ('z' + 1)));
Contributor

Just spotted this one. It should be 'f' + 1

Contributor Author

My intention was 'z' + 1... I will add 'f' + 1 too.

@adamretter
Contributor Author

adamretter commented Jun 29, 2020

@aherbert Hopefully that is everything done now.
I have moved the Hex.java/HexTest.java changes to a separate PR - #49

@asfgit asfgit merged commit 427998a into apache:master Jun 30, 2020
@adamretter
Contributor Author

Thank you so much @aherbert :-)

Now for the inevitable question... is there a plan for a release any time soon?
