
invoices: expose custom tlv records from the payload #3742

Merged

merged 3 commits into lightningnetwork:master from expose-custom-tlv on Dec 10, 2019

Conversation

joostjager
Collaborator

@joostjager joostjager commented Nov 19, 2019

This PR extracts the received custom records from the hop payload and stores them in the invoice database. Users can retrieve the custom records through the usual invoice queries / subscriptions.

tlv/record.go Outdated
// TypeSet is an unordered set of Types. The map values are byte slices. If the
// byte slice is nil, the type was successfully parsed. Otherwise the value is
// a byte slice containing the encoded data.
type TypeSet map[Type][]byte
Member

This seems to be really overloading the type beyond its original intended usage. It's only meant to be used in order to determine violations w.r.t known/unknown types. We already return the full set of unparsed types after decoding, so not sure why this is needed.

Member

Also, have you benchmarked this? As is, it appears we'll end up copying all the values from the main stream twice.

Collaborator Author

Yes, it is overloading the type, but we can deviate from the original intention.

I haven't benchmarked. If we want to process the raw bytes of unrecognized records, we need to copy anyway. Only if our intention is to really ignore them is the copy unnecessary. Afaik, for db tlv that isn't expected. And for the tlv payload it may happen with an unrecognized standard field (the custom ones we probably want to store somewhere). To me it seems premature to optimize for this.

I am open to different ways of implementing this. What do you prefer?

Collaborator Author

Where do you see the double copy, btw? Afaik we only read bytes from the stream and store them in a new byte slice that becomes a value in the TypeSet map.

Collaborator Author

Renamed to TypeMap and preallocated the buffer.
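The TypeMap semantics settled on in this thread (a nil value marks a known type that was decoded into its destination; a non-nil value holds the raw bytes of an unrecognized type) can be sketched roughly as follows. This is an illustrative standalone sketch, not the actual tlv package code, and `splitParsed` is a hypothetical helper:

```go
package main

import "fmt"

// Type mirrors the tlv record type (encoded as a varint in the real package).
type Type uint64

// TypeMap maps a parsed tlv type to its value. A nil value means the
// type was known and decoded into its destination; a non-nil value
// holds the raw encoded bytes of an unrecognized type.
type TypeMap map[Type][]byte

// splitParsed separates known (decoded) types from unknown ones whose
// raw bytes were retained.
func splitParsed(parsed TypeMap) (known []Type, unknown TypeMap) {
	unknown = make(TypeMap)
	for t, v := range parsed {
		if v == nil {
			known = append(known, t)
		} else {
			unknown[t] = v
		}
	}
	return known, unknown
}

func main() {
	parsed := TypeMap{
		2:     nil,                // known type, decoded elsewhere
		65536: {0x01, 0x02, 0x03}, // unrecognized custom record, raw bytes
	}
	known, unknown := splitParsed(parsed)
	fmt.Println(len(known), len(unknown))
}
```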

@@ -157,14 +164,22 @@ func NewPayloadFromReader(r io.Reader) (*Payload, error) {
mpp = nil
}

for t, parseResult := range parsedTypes {
Member

Feels like there's another way to do this with less copying.

Collaborator Author

In line with the comment above, I don't think it matters much. If you want the tlv stream to return a map that we can use directly, we need to add more controls to the stream to specify what exactly we expect.

Collaborator Author

suggestion from @cfromknecht: delete items from the map

Collaborator Author

It seems it isn't possible to convert a map[Type][]byte to a map[uint64][]byte, though.

Collaborator Author

Talked about this offline with @Roasbeef. We do want to stop tlv.Type from being used anywhere. Copying the map (but not the values) looks to be ok.
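As noted above, Go has no conversion between map[Type][]byte and map[uint64][]byte, so re-keying means allocating a new map and copying the keys, while the value slices remain shared. A minimal standalone sketch (`toCustomRecords` is a hypothetical name, not the actual lnd helper):

```go
package main

import "fmt"

// Type mirrors the tlv record type.
type Type uint64

// toCustomRecords re-keys a map[Type][]byte as a map[uint64][]byte. The
// map and its keys are copied; the []byte values are shared, so no
// record data is duplicated.
func toCustomRecords(parsed map[Type][]byte) map[uint64][]byte {
	custom := make(map[uint64][]byte, len(parsed))
	for t, v := range parsed {
		custom[uint64(t)] = v
	}
	return custom
}

func main() {
	parsed := map[Type][]byte{65536: {0xde, 0xad}}
	fmt.Println(toCustomRecords(parsed))
}
```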

return err
}

records = append(records, customRecords...)
Collaborator Author

Offline comment from @cfromknecht: not sure if it is a good idea to mix the custom records with the db types here.

Collaborator Author

Discussed this further. It may be better to create a single tlv record whose value is a tlv stream containing just the custom records, as simple insurance against accidental modification of standard fields.

Collaborator Author

Discussed offline. There seems to be more support for a flat structure, so leaving it as is.

Contributor

I am still somewhat concerned about mingling minimally validated user data with known records. I think it is possible to do it safely, but it forces us to be stricter in how we handle migrations.

Consider the scenario where we add a field to the invoice body but do not bump the db version. If a user downgrades, the new field will now appear in the custom record set even though its type is beneath the cutoff. It's possible that data gets written back, and it's difficult, imo, to ascertain whether this can lead to unforeseen bugs.

We could prevent this by filtering the custom records when reading, as we do in the hop; then the data is actually dropped when writing back. This would maintain the current behavior in that scenario.
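The read-time filter suggested here could look roughly like the sketch below. The cutoff constant is assumed for illustration (65536 matches the custom-record boundary lnd uses for hop payloads), and `filterCustomRecords` is a hypothetical helper, not the actual implementation:

```go
package main

import "fmt"

// CustomTypeStart is the first tlv type treated as a custom record.
// The value 65536 is assumed here for illustration.
const CustomTypeStart uint64 = 65536

// filterCustomRecords drops any record whose type falls below the
// custom-record cutoff, so a known field added by a newer version can
// never masquerade as user data after a downgrade.
func filterCustomRecords(records map[uint64][]byte) map[uint64][]byte {
	for t := range records {
		if t < CustomTypeStart {
			delete(records, t)
		}
	}
	return records
}

func main() {
	records := map[uint64][]byte{
		10:    {0x01}, // future known field, must be dropped
		70000: {0x02}, // genuine custom record, kept
	}
	fmt.Println(len(filterCustomRecords(records)))
}
```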


records = append(records, customRecords...)

tlvStream, err := tlv.NewStream(records...)
Collaborator Author

An open question is whether we want to bump the db version, to prevent users running master from downgrading and losing their custom records.

Collaborator Author

Discussed this in the context of another PR. We don't give this guarantee to users running master generally.

@joostjager joostjager added this to WIP in v0.9.0-beta Nov 20, 2019
@joostjager joostjager self-assigned this Nov 22, 2019
@joostjager joostjager force-pushed the expose-custom-tlv branch 8 times, most recently from a4cf7d8 to bd6cd63 on December 4, 2019
@joostjager joostjager added this to the 0.9.0 milestone Dec 4, 2019
@joostjager joostjager moved this from WIP to Needs Review in v0.9.0-beta Dec 4, 2019
@joostjager joostjager force-pushed the expose-custom-tlv branch 2 times, most recently from 963132e to ae79e2c on December 5, 2019
tlv/stream.go (resolved)
htlcswitch/hop/payload.go (resolved, outdated)
htlcswitch/hop/payload_test.go (resolved)
channeldb/invoices.go (resolved, outdated)
invoices/interface.go (resolved)
channeldb/invoices.go (resolved)
@joostjager
Collaborator Author

> We could prevent this by filtering the custom records when reading, as we do in the hop; then the data is actually dropped when writing back. This would maintain the current behavior in that scenario.

Yes, that is a nice safeguard that doesn't require a different db struct. It should be in place now.

Member

@Roasbeef Roasbeef left a comment

LGTM 🧹

Left one comment which is mainly concerned with consistency of the API w.r.t TypeMap.

// Record the successfully decoded type if the caller
// provided an initialized TypeMap.
if parsedTypes != nil {
	parsedTypes[typ] = nil
}
Member

Why don't we also want to provide the caller w/ the raw bytes of the type just as we do for unknown+odd types?

Collaborator Author

In that case we would copy the data twice for the known types, and we'd need another data structure to signal which ones were decoded. We can rethink this when the need arises to access the raw bytes of known types; it is all in memory anyway.

Contributor

@cfromknecht cfromknecht left a comment

LGTM 🤘

v0.9.0-beta automation moved this from Needs Review to Approved Dec 8, 2019
@cfromknecht
Contributor

needs rebase

@joostjager
Collaborator Author

rebased

@joostjager joostjager merged commit 699bb19 into lightningnetwork:master Dec 10, 2019
v0.9.0-beta automation moved this from Approved to Done Dec 10, 2019