
Viterbi decoder msg size #6

Closed · MageSlayer opened this issue Apr 30, 2017 · 6 comments

@MageSlayer

Hi

correct_convolutional_decode contains code that calculates the original message size, which looks like this:

    size_t num_encoded_bytes =
        (num_encoded_bits % 8) ? (num_encoded_bits / 8 + 1) : (num_encoded_bits / 8);

However, the encoder defines the following function for calculating the number of encoded bits:

    size_t correct_convolutional_encode_len(correct_convolutional *conv, size_t msg_len) {
        size_t msgbits = 8 * msg_len;
        size_t encodedbits = conv->rate * (msgbits + conv->order + 1);
        return encodedbits;
    }
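
For concreteness, here is what those two formulas give for a rate-1/2, order-7 code and a 100-byte message (the values are illustrative, just to show the mismatch I mean):

    /* Illustrative values only: rate = 2, order = 7, 100-byte message. */
    size_t rate = 2, order = 7, msg_len = 100;

    size_t num_encoded_bits = rate * (8 * msg_len + order + 1);  /* 2 * 808 = 1616 */
    size_t num_encoded_bytes =
        (num_encoded_bits % 8) ? (num_encoded_bits / 8 + 1) : (num_encoded_bits / 8);
    /* num_encoded_bytes == 202, which is not the original 100 bytes. */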

Shouldn't the decoder's message size be based on the encoder's calculation?
It seems that the decoder returns an invalid message size right now.
Or am I missing something obvious?

@brian-armstrong (Member)

@MageSlayer I think this is just poor variable naming and not an actual error. I've tested the convolutional code with enough message lengths that I don't believe there's an actual error here, though of course it's still possible.

The first snippet you posted is from when the decoder is configuring its internal reader. The reader needs to trace out the number of raw bits passed to the decoder, and the snippet you posted is just deciding how many contiguous bytes that occupies in memory. This won't correspond directly to how many bits or bytes are recovered.

The second snippet takes a given message length of msg_len bytes and decides how many bits the actual encoded message will be. I think the corresponding part of the decoder is effectively baked into the various loop indices.
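
Inverting correct_convolutional_encode_len by hand gives the recovered message length; this is just a sketch of the arithmetic, not code that exists anywhere in the decoder:

    /* Sketch: invert correct_convolutional_encode_len to get back the
       original message length in bytes. Not actual library code. */
    size_t decoded_len_bytes(const correct_convolutional *conv,
                             size_t num_encoded_bits) {
        size_t msgbits = num_encoded_bits / conv->rate - conv->order - 1;
        return msgbits / 8;
    }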

There is one small bug here. We configure the internal writer to set its ->len to this length here: https://github.com/quiet/libcorrect/blob/master/src/convolutional/decode.c#L276. But as far as I can tell from inspecting the code, this len is never actually used in any way; it's a dead variable. I think I intended to use it for bounds checking, so that the writer would provide an additional guard against overwrites in case the decoder had a bug, but I haven't implemented that yet.
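
The guard I had in mind would look something like this (field names are illustrative and may not match the actual bit_writer struct):

    /* Hypothetical bounds check inside the bit writer: refuse to advance
       past ->len. Field names are illustrative, not the real layout. */
    static bool bit_writer_has_room(const bit_writer_t *w) {
        return w->byte_index < w->len;
    }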

If you can find a precise example that fails, I'd be happy to check it out.

@brian-armstrong (Member)

Also, I think you can run the tests with make check.

@MageSlayer (Author)

I am not sure I understand your point.

Isn't num_encoded_bytes returned as the correct_convolutional_decode result?
I mean, _convolutional_decode returns it as decoded_len_bytes.
Are you saying the correct_convolutional_decode return value is unreliable?

@brian-armstrong (Member)

Oof. Wow, I completely missed that. You're right, that return value must be completely wrong. Looking at the testbench now, I see that the return value is ignored (and in the libfec shim, nothing like the written length is returned, so it works there too).

I think this can be remedied fairly easily by having the bit_writer simply return how far into the buffer it's gone. One sec :)
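
Roughly, the writer would expose something like this (a sketch; the actual field names in bit.c may differ):

    /* Sketch: report how many whole bytes the writer has produced so far,
       counting a partially filled byte as one more. Field names illustrative. */
    size_t bit_writer_length(const bit_writer_t *w) {
        return w->byte_index + (w->current_byte_len ? 1 : 0);
    }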

@MageSlayer (Author)

BTW, could you make the correct_convolutional_decode return value compatible with correct_reed_solomon_decode? I mean, is it possible to return -1 in case of error instead of 0?
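
That is, the signature would become something like this (a sketch of the convention I mean, mirroring correct_reed_solomon_decode; parameter names are my guess):

    /* Sketch: return the decoded length in bytes on success and a negative
       value on failure, matching correct_reed_solomon_decode. */
    ssize_t correct_convolutional_decode(correct_convolutional *conv,
                                         const uint8_t *encoded,
                                         size_t num_encoded_bits,
                                         uint8_t *msg);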

@brian-armstrong (Member)

This appears to be fixed by #7
