
[decision] Change Tracking #4554

Merged
2 commits merged on Nov 2, 2022

Conversation

@atmaxinger (Contributor) commented Oct 13, 2022

Basics

  • Short descriptions of your changes are in the release notes
    (added as entry in doc/news/_preparation_next_release.md which
    contains _(my name)_)
    Please always add something to the release notes.
  • Details of what you changed are in commit messages
    (first line should have module: short statement syntax)
  • References to issues, e.g. close #X, are in the commit messages.
  • The buildservers are happy. If not, fix in this order:
    • add a line in doc/news/_preparation_next_release.md
    • reformat the code with scripts/dev/reformat-all
    • make all unit tests pass
    • fix all memleaks
  • The PR is rebased with current master.

Checklist

  • I added unit tests for my code
  • I fully described what my PR does in the documentation
    (not in the PR description)
  • I fixed all affected documentation (see Documentation Guidelines)
  • I added code comments, logging, and assertions as appropriate (see Coding Guidelines)
  • I updated all meta data (e.g. README.md of plugins and METADATA.ini)
  • I mentioned every code not directly written by me in reuse syntax

Review

Labels

  • Add the "work in progress" label if you do not want the PR to be reviewed yet.
  • Add the "ready to merge" label if the basics are fulfilled and no further pushes are planned by you.

@atmaxinger changed the title from [decisions] Change Tracking to [decision] Change Tracking on Oct 13, 2022
@kodebach (Member) left a comment

As stated before, I don't think libelektra-core should do the change tracking directly by storing the old value. It is simply not part of libelektra-core's goal. Also, while it is not relevant for the use-case in libelektra-kdb (*), if we do change tracking in libelektra-core, it should also include the key name. That would make the whole size overhead a lot worse. I say this because from libelektra-core's POV there is no reason why only values should be tracked, but not names.

IMO if we do this in libelektra-core it would be far better, if both KeySet and Key had a field changeCallback. This would be a function pointer, that is called when the Key/KeySet is changed. Then libelektra-kdb could set this callback and keep an internal record of the changes. However, I don't think this really is a good solution either, because it is still beyond what libelektra-core should do.

(*) for libelektra-core tracking names is not needed, because we only deal with KeySets and when a Key is in a KeySet its name cannot change.


I think you discredit option 1 far too quickly. It is not only the easiest to implement, but it is also not nearly as bad as you make it out to be. There are already constraints on the kdbGet and kdbSet relation. The constraint right now is specifically:

For kdbSet (handle, setKs, setParent) to succeed, kdbGet (handle, getKs, getParent) must have been called previously and the relation between getParent and setParent must be (at least) one of:

  1. setParent is the same as or below getParent
  2. both getParent and setParent are stored in the same backend

AFAICT the only required change would be to say "the last kdbGet call using handle must have been kdbGet (handle, getKs, getParent)" instead of "kdbGet (handle, getKs, getParent) must have been called previously". This could easily be checked and we could return an error if this is not the case.
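This check could be sketched roughly as follows. All names here (KdbHandle, sketchKdbGet, sketchKdbSet) are invented for illustration, not Elektra code, and the real relation is more permissive than plain string equality (e.g. setParent below getParent is also allowed):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Illustration only: the handle remembers the parent of the last kdbGet,
 * so kdbSet can fail early when the required sequence was violated. */
typedef struct
{
	char * lastGetParent; /* NULL until the first kdbGet */
} KdbHandle;

static void sketchKdbGet (KdbHandle * handle, const char * parent)
{
	free (handle->lastGetParent);
	handle->lastGetParent = malloc (strlen (parent) + 1);
	strcpy (handle->lastGetParent, parent);
}

/* returns 0 on success, -1 if the last kdbGet did not use a matching parent */
static int sketchKdbSet (KdbHandle * handle, const char * parent)
{
	if (handle->lastGetParent == NULL) return -1;
	/* simplified: require an exact match instead of the full relation */
	if (strcmp (handle->lastGetParent, parent) != 0) return -1;
	return 0;
}
```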

Yes, it has an impact on the developer using Elektra. But it is not a huge impact. You just need to use a separate KDB handle. That too has some memory (and computation) overhead, but in general it should be less than the overhead of e.g. option 3.

In addition to that, I believe that it would be very rare that one of the problematic sequences was created intentionally. It is far more likely that it happened through sharing a KDB handle between unrelated components or even threads (which is explicitly unsupported).

Furthermore, I think this might not even have any additional memory overhead. In the KDB handle's KeySet * backends we already have a struct _BackendData for every backend that was used. These contain a KeySet * keys, with the keys for the backend. Importantly, those keys are already keyDup'd. I think that data could be used to detect changes between kdbGet and kdbSet.
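As a rough sketch of that idea (not Elektra code; MiniKey and valueChanged are invented stand-ins), the already duplicated per-backend keys would act as a baseline snapshot that kdbSet can diff against:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* MiniKey stands in for Elektra's Key; the snapshot array stands in for
 * the keyDup'd keys in struct _BackendData. */
typedef struct
{
	char name[64];
	char value[64];
} MiniKey;

/* returns 1 if the key's value differs from the snapshot taken at kdbGet */
static int valueChanged (const MiniKey * snapshot, size_t snapshotSize, const MiniKey * current)
{
	for (size_t i = 0; i < snapshotSize; i++)
	{
		if (strcmp (snapshot[i].name, current->name) == 0)
		{
			return strcmp (snapshot[i].value, current->value) != 0;
		}
	}
	return 1; /* not in the snapshot -> newly added, counts as changed */
}
```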


Finally, when talking about memory overhead you also have to keep in mind that Elektra is global. If you start a change tracking session, this should affect all applications using Elektra. Therefore, each of these applications will have the overhead of change tracking.

Comment on lines 128 to 131
it may be noticeable for `Key`. On a 64-bit system we'd add 8+8=16 bytes to it.
To put this in perspective, the current size of the `Key` struct is 64 bytes,
so we'd add 25% overhead to an empty key. However, this percentage will be much lower in
a real-world application, as the usefulness of an empty key is very low.
Member

If you take all the allocated data into account and not just struct _Key, then even for an "empty" key it is technically just ~23% overhead. That's because even an "empty" key needs a name, and the smallest name, /, needs 2 bytes for the escaped and 3 bytes for the unescaped form.

Contributor

@atmaxinger can you please elaborate which 16 bytes you mean? Isn't a single pointer to "change-tracking data" enough? And maybe it even can be in meta-data, not needing any extra bytes in _Key.

Member

8 bytes for the pointer, 8 bytes for the size.

I won't comment on the rest, because this clearly goes into too much detail. If the problem is now clear, this PR should be merged and this should be addressed in the next PR.

@markus2330 (Contributor) left a comment

Great job, it is much clearer to me now!

## Constraints

Change tracking must:

Contributor

Also add links to the relevant use cases here.

@atmaxinger (Contributor Author)

@markus2330 @kodebach I have updated the decision and addressed (hopefully) all your remarks.

In particular, I have removed the first two constraints-violating alternatives, and added more detailed info about the hooks-based approach.

It would be greatly appreciated if we come to a consensus which approach we'll decide on soon, so that I can start implementing it as quickly as possible 🚀.

@kodebach (Member)

It would be greatly appreciated if we come to a consensus which approach we'll decide on soon, so that I can start implementing it as quickly as possible.

Of course, but I think we need to decide #4574 first. If the sequences are limited, this decision becomes much simpler.

@markus2330 (Contributor) left a comment

There is duplication with #4574. Please only request reviews when all work is done. Or is this wrongly shown by GitHub?

1. on `kdbGet` duplicate the returned keyset,
2. on `kdbSet` calculate what changed in between.

This basic approach will fail when there is another `kdbGet` operation for a different part of the KDB in between.
Contributor

I stopped reading here, as this duplicates with #4574

Contributor Author

I removed it and referenced #4574 instead.

@atmaxinger (Contributor Author)

atmaxinger commented Oct 21, 2022

@markus2330 of course it duplicates to some extent with #4574, as the "illegal" sequences mostly only affect plugins that do change tracking. Creating another decision for it was your idea.

The thing is, and this is where I strongly disagree with @kodebach, we can actually decide this independently from #4574, as all of the alternatives presented in here allow us to work around the "illegal" sequences issue.

In the case of alternatives 1-3, we can just store all keysets that were returned by kdbGet. This is an implementation detail that doesn't even need to be in a decision. If we then later decide on #4574, we can update the implementation to something that uses resources more efficiently based on the boundaries we decide there.

In the case of alternative 4, the change tracking is completely done on KeySet level, so the sequences of kdbGet and kdbSet don't even matter.

Essentially, all this decision is about is how we should do change tracking:

  1. let's do it in KDB
  2. let's do it in KDB with meta keys
  3. create a separate plugin that does it with an API that other plugins can use
  4. do it on KeySet level

@atmaxinger (Contributor Author)

@kodebach RE _BackendData->keys: yes, we could use that! It stores a deep-duped copy of each part of the read keyset. However, it gets (obviously) cleared at every kdbGet and kdbSet operation. So the outcome of #4574 will greatly affect this if we go in that direction.

@kodebach (Member)

However, it gets (obviously) cleared at every kdbGet and kdbSet operation

Actually cleared, or rather replaced and updated?

Unless I'm mistaken, backendsDivide is the only function that clears the data. But it also immediately replaces the data with the current data. If there is still a problem, we could also split backendsDivide into two functions. One for clearing the data and one for writing the new data. Then we could just clear the data when it actually needs to be cleared.

@markus2330 (Contributor)

markus2330 commented Oct 24, 2022

Having deeply duplicated keys of everything is out of the question. We need to go down with memory consumption, not double it! So it goes against the constraint to not use more memory.

What I can totally imagine (and is probably a brilliant idea) is that we always use COW semantics from the keys that are passed out from our API (also in the non-mmap case; for mmap we actually already do it that way) and keep references to everything inside.

Then we could:

  • implement change tracking
  • only pass out the keys as requested (internally there might be more keys, depending on where the backends are, but we can ksBelow exactly as requested)

There is already a draft decision on that topic: doc/decisions/internal_cache.md

@atmaxinger can you take over this decision, too?

@kodebach (Member)

Having deeply duplicated keys of everything is out of question. We need to go down with memory consumption and not double it!

You misunderstood my idea... I suspect we already have a deepDup'ed copy of the keys that we can use. We could probably get rid of this, but AFAIK it was already present in the split of the old backend system.

COW semantics

General copy-on-write semantics are very complicated and I don't believe we can do that without API changes. Since the KeySet is provided by the caller (and changing that seems out of the question) it would definitely need changes in libelektra-core. I'm not totally against that, but the core should remain minimal and it should not know anything about libelektra-kdb. Doing this also means thinking about a lot of edge cases.

IMO the only option to do COW properly, is extending the mmap specific code in the core to be full COW write support that can be used by anyone.

Also @markus2330 since you seem to have a vague idea about how this should work, could you maybe write that up? To me it is currently entirely unclear what exactly you mean by "always use COW semantics from the keys that are passed out from our API".

@atmaxinger (Contributor Author)

atmaxinger commented Oct 24, 2022

@kodebach actually cleared, on multiple occasions. Yes, backendsDivide is one place, but directly in kdbGet it is also cleared. However, as mentioned, this shouldn't be too much of a problem. We can still use this to compute the changes done by the user.

@markus2330 I don't see a way to do change tracking without using more memory. After all, we need to store original values of changed keys. However, as mentioned this will only be done if a plugin explicitly requests it.

Having central code that does the change tracking can actually lower the memory consumption compared to the current state. Currently, if you use both the dbus and the internalnotification plugin, both will do the change tracking and keep a copy of the keysets. If we do it centrally, there will only be a single copy.

There is already a draft decision on that topic: doc/decisions/internal_cache.md

I read it, and IMHO it really isn't adding anything to this. This decision is very vague and I don't understand its purpose. Actually it very much reminds me of stuff we already do in new-backend, especially the "keep duplicated keyset internally". Maybe @kodebach can correct me on that.

Also, the considered alternatives are only bullet points without further description. As with @kodebach, I have no idea what you mean with "COW semantics". Could you be so kind and write it up? Does it differ much from what I have written in this decision?

I'd also like to mention that for me this decision is slowly drifting into bikeshedding. As I have already mentioned, the whole change tracking stuff should be behind an API. How it is implemented should be an implementation detail. I'd rather build a trivial but non-optimised solution first to validate our use cases (this also includes RecordElektra). Optimizations can then always be done later on. If we start doing full COW for the whole of Elektra before implementing this, it will take a long time before we end up with anything useful.

@atmaxinger (Contributor Author)

@markus2330 @kodebach if we don't reach a decision here in short time, I will implement a prototype based on option Nr 3 - doing it in a separate plugin while reusing the already duplicated data.

@kodebach (Member)

How would you access the already duplicated data in a plugin? I think using backendData->keys is the best solution, but I think it needs to be done mostly in libelektra-kdb.

You could pass a "current KDB state" KeySet to a special plugin in kdbGet and kdbSet, and then calculate the diff in the plugin. But the calculation of this "current state" from the backendData->keys has to happen in libelektra-kdb.

One advantage of this would be that all hook plugins are implemented as a single (main) plugin and not a list of plugins. Where multiple plugins could be active (e.g. notifications), the main hook plugin would call the extra plugins.

@atmaxinger (Contributor Author)

atmaxinger commented Oct 25, 2022

backendData->keys is just a KeySet. Change-tracking would be a separate hook, and I can define that function to just take two keysets (the current and the duplicated one), so no problem there.

As for other plugins, they will just use the change-tracking API that I will provide. This API will be general enough that we can replace the implementation of the change tracking to whatever we like later. For example:

bool elektraChangeTrackingIsEnabled(KDB *kdb);
KeySet * elektraChangeTrackingGetAddedKeys(KDB* kdb, Key * parentKey);
KeySet * elektraChangeTrackingGetRemovedKeys(KDB * kdb, Key * parentKey);
KeySet * elektraChangeTrackingGetModifiedKeys(KDB * kdb, Key * parentKey);

bool elektraChangeTrackingValueChanged(KDB * kdb, Key * key, Key * parentKey);
bool elektraChangeTrackingMetaChanged(KDB * kdb, Key * key, Key * parentKey);

KeySet * elektraChangeTrackingGetAddedMetaKeys(KDB * kdb, Key * key, Key * parentKey);
KeySet * elektraChangeTrackingGetRemovedMetaKeys(KDB * kdb, Key * key, Key * parentKey);
KeySet * elektraChangeTrackingGetModifiedMetaKeys(KDB * kdb, Key * key, Key * parentKey);

Key * elektraChangeTrackingGetOriginalKey(KDB * kdb, Key * key, Key * parentKey);
Key * elektraChangeTrackingGetOriginalMetaKey(KDB * kdb, Key * key, const char * metaName, Key * parentKey);

Alternatively, I could just put the computed changeset into a ChangeTrackingContext and use that instead of always supplying KDB * kdb and Key * parentKey.

bool elektraChangeTrackingIsEnabled(KDB *kdb);
ChangeTrackingContext * elektraChangeTrackingGetContext(KDB * kdb, Key * parentKey);


KeySet * elektraChangeTrackingGetAddedKeys(ChangeTrackingContext * context);
KeySet * elektraChangeTrackingGetRemovedKeys(ChangeTrackingContext * context);
KeySet * elektraChangeTrackingGetModifiedKeys(ChangeTrackingContext * context);

bool elektraChangeTrackingValueChanged(ChangeTrackingContext * context, Key * key);
bool elektraChangeTrackingMetaChanged(ChangeTrackingContext * context, Key * key);

KeySet * elektraChangeTrackingGetAddedMetaKeys(ChangeTrackingContext * context, Key * key);
KeySet * elektraChangeTrackingGetRemovedMetaKeys(ChangeTrackingContext * context, Key * key);
KeySet * elektraChangeTrackingGetModifiedMetaKeys(ChangeTrackingContext * context, Key * key);

Key * elektraChangeTrackingGetOriginalKey(ChangeTrackingContext * context, Key * key);
Key * elektraChangeTrackingGetOriginalMetaKey(ChangeTrackingContext * context, Key * key, const char * metaName);
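To illustrate how a plugin might consume the context-based variant, here is a toy mock. The types and the three getter functions are stand-ins with trivial bodies (Key, KeySet, ChangeTrackingContext, and notifyAll are all invented simplifications); only the shape of the calls mirrors the proposal above:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins, NOT Elektra's real types */
typedef struct { const char * name; } Key;
typedef struct { Key * keys; size_t size; } KeySet;
typedef struct { KeySet added; KeySet removed; KeySet modified; } ChangeTrackingContext;

static KeySet * elektraChangeTrackingGetAddedKeys (ChangeTrackingContext * context) { return &context->added; }
static KeySet * elektraChangeTrackingGetRemovedKeys (ChangeTrackingContext * context) { return &context->removed; }
static KeySet * elektraChangeTrackingGetModifiedKeys (ChangeTrackingContext * context) { return &context->modified; }

/* What a dbus-like notification hook might do with the context:
 * walk every changed key and emit one notification each. */
static size_t notifyAll (ChangeTrackingContext * context)
{
	size_t notified = 0;
	KeySet * sets[] = { elektraChangeTrackingGetAddedKeys (context),
			    elektraChangeTrackingGetRemovedKeys (context),
			    elektraChangeTrackingGetModifiedKeys (context) };
	for (size_t i = 0; i < 3; i++)
	{
		for (size_t j = 0; j < sets[i]->size; j++)
		{
			/* a real plugin would emit a D-Bus signal per key here */
			notified++;
		}
	}
	return notified;
}
```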

@kodebach (Member)

backendData->keys is just a KeySet.

It's one KeySet per backend. They can be merged into a single KeySet to get the one from kdbGet/kdbSet, but that only works when they are merged in the right order (so that nested backends work).

I think the merging should just be done in libelektra-kdb (because we already have backendsMerge to do it) and then that data can be passed to a plugin. Maybe that is what you wanted to do anyway...

As for other plugins, they will just use the change-tracking API that I will provide

KDB * is currently not available in plugins, although it could be made available. For ChangeTrackingContext * I have even less idea, where plugins would get that from.

@atmaxinger (Contributor Author)

I think the merging should just be done in libelektra-kdb (because we already have backendsMerge to do it) and then that data can be passed to a plugin. Maybe that is what you wanted to do anyway...

Exactly

KDB * is currently not available in plugins, although it could be made available

Can't we just add it to struct _Plugin? And then add a function to get it for those plugins that need it.

For ChangeTrackingContext * I have even less idea, where plugins would get that from.

From KDB.

@atmaxinger (Contributor Author)

@kodebach turns out while yes, we do deep duplication in backendsDivide, we then actually return those deep-duped keys in kdbGet via backendsMerge. So as long as this isn't a bug or an oversight, we cannot use those keys and have to keep our own deep-duped data in the change tracking.

@kodebach (Member)

Ah okay, that actually makes more sense. I was never really sure why we keep this internal deep-duped copy, but if we actually return those keys it makes a lot more sense.

However, maybe we could still use this. I think we can just add an extra parameter bool deepDup to backendsMerge.

void backendsMerge (KeySet * backends, KeySet * ks, bool deepDup)
{
	for (elektraCursor i = 0; i < ksGetSize (backends); i++)
	{
		const Key * backendKey = ksAtCursor (backends, i);
		BackendData * backendData = (BackendData *) keyValue (backendKey);

		if (keyGetNamespace (backendKey) != KEY_NS_DEFAULT)
		{
			ssize_t size = ksGetSize (backendData->keys);
			backendData->getSize = size;
			if (deepDup)
			{
				// hand out copies, so backendData->keys stays an untouched baseline
				KeySet * dup = ksDeepDup (backendData->keys);
				ksAppend (ks, dup);
				ksDel (dup);
			}
			else
			{
				ksAppend (ks, backendData->keys);
			}
		}
	}
}

Normally deepDup = false, but if change-tracking is active then we use deepDup = true. That way we could use backendData->keys.
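A minimal illustration (again not Elektra code; MiniKey and callerModifies are invented) of why the deepDup flag matters for change tracking: with a shallow merge the caller mutates the very key the internal baseline points to, while a deep copy keeps the baseline intact:

```c
#include <assert.h>
#include <string.h>

/* MiniKey is an invented stand-in for Elektra's Key */
typedef struct
{
	char value[16];
} MiniKey;

/* the caller mutates whatever key it was handed back from the merge */
static void callerModifies (MiniKey * key)
{
	strcpy (key->value, "new");
}
```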

@markus2330 (Contributor)

Please Note: all relevant discussions must be accounted for within the decision. I will not read your discussion to have a fair view on the decision, as someone else will have when later reading the decision (e.g. in the second round).

@markus2330 I don't see a way to do change tracking without using more memory. After all, we need to store original values of changed keys. However, as mentioned this will only be done if a plugin explicitly requests it.

In particular it is important that kdbGet alone does not do any deep duplication, as it also didn't do before. Performance requirements for kdbSet are that actually writing the files should be the dominating factor. Ideally you do some profiling on the situation before new-backend and also now to have a good idea what is going on and what is actually expensive.

@markus2330 @kodebach if we don't reach a decision here in short time, I will implement a prototype based on option Nr 3 - doing it in a seperate plugin while reusing the already duplicated data.

The progress will be faster if your PRs are in a better state. Please finish #4515 before creating new PRs. If you are really stuck with everything, please write in #4463.

@kodebach (Member)

Please Note: all relevant discussions must be accounted for within the decision.

Yes, of course the document needs to be updated. But until now the discussion wasn't completed, so there was no point in updating the document.

Now @atmaxinger should update the document and add using backendData->keys as an option. It is similar to option 1, but slightly different. The "hashmap" would exist, but the keys wouldn't be all the parentKeys used with kdbGet. Instead the keys would be all the mountpoints and the "hashmap" would be the already existing KeySet * backends. When change tracking is active, this data would be kept separate from the data returned by kdbGet.

In particular it is important that kdbGet alone does not do any deep duplication, as it also didn't do before.

Any additional duplicating that happens, will of course only happen when change tracking is active. However, when it is active the duplication must happen during kdbGet, because it must happen before the user has a chance to modify keys. This simply cannot be avoided.

@atmaxinger (Contributor Author)

Please Note: all relevant discussions must be accounted for within the decision. I will not read your discussion to have a fair view on the decision, as someone else will have when later reading the decision (e.g. in the second round).

You don't need to. But those discussions are relevant for reaching a consensus and updating the document. How else would you or @kodebach be able to share your thoughts if not with these discussions? There is no point to update anything in the file if we don't even agree on what to update.

In particular it is important that kdbGet alone does not do any deep duplication, as it also didn't do before

Yes, while the function kdbGet in the file kdb.c didn't do any duplication directly by itself, the plugins that were called by it, i.e. internalnotification and dbus, did a duplication! In the case of internalnotification it's even a deep duplication of the requested keys. (One could argue that that's marginally better, because only requested keys are deep-duped. But that could still be added to the general change-tracker.)

The only way we don't deep-dup is when using option 4 - the COW approach. But you didn't even comment on that, nor did you provide an answer to my question whether this was what you meant with your "COW semantics". That said, this approach also has downsides, as already mentioned in the decision file.

Please finish #4515 before creating new PRs.

This is a completely different PR which can be worked on in parallel. There is no need for it to be merged while doing this. In fact, change tracking is more important for the general record use-case. #4515 only deals with usability improvements. Still very important for the finished product, but change-tracking is the base of it all.

@markus2330 (Contributor)

Could you be so kind and write it up?

#4619

I'd also like to mention that for me this decision is slowly drifting into bikeshedding.

Absolutely, thus my insistence on a clear problem, constraints, assumptions etc. This was not the case in my last review, so obviously any discussion on such grounds is questionable.

Please inform me when I should reread.

Optimizations can then always be done later on.

No, this statement is plain wrong. E.g. read #4619. We needed to discard the whole implementation because it is a dead end. At that time we obviously didn't put enough efforts in finding the right decision. So please let us not repeat the same.

@markus2330 (Contributor) left a comment

A few small things, otherwise I think the problem is described clearly enough to merge the PR as draft.


@kodebach (Member) left a comment

The problem is clear, IMO this can be merged. @markus2330 What do you say?

Co-authored-by: Klemens Böswirth <23529132+kodebach@users.noreply.github.com>
Co-authored-by: Markus Raab <markus2330@users.noreply.github.com>
@atmaxinger (Contributor Author)

@markus2330 I have rebased to current master and addressed all your remarks.

@atmaxinger atmaxinger marked this pull request as ready for review November 1, 2022 14:47
@markus2330 (Contributor)

Thank you, great job! I agree the problem is clearly presented now! ❤️

@atmaxinger atmaxinger deleted the dec-tracking branch November 23, 2022 18:11
@mpranj mpranj added this to the 0.9.12 milestone Jan 19, 2023