Record keys are deduplicated, but should duplicates generate a thrown error? #361

Open
brad4d opened this issue Oct 3, 2022 · 5 comments

Comments


brad4d commented Oct 3, 2022

I'm hoping I've just misread the current version of the spec text.
If so, please point me at what I'm missing and close this issue.

It looks like the current spec text indicates that multiple values for the same key will be silently ignored by the DeDuplicateRecordEntries AO.

I would expect an Error of some kind to be thrown at runtime if creation of a record encountered multiple values for a single key.

Perhaps there is some good reason to silently ignore duplicates?
I couldn't find an issue in which the decision to ignore or throw was discussed.

ljharb (Member) commented Oct 3, 2022

Consistency with object literals would imply silent deduplication, leaving it to linters to detect duplicates.

Given that `#{ ...record, x: true }` might produce a duplicate, and there's no way to statically know, I can see arguments both for and against throwing if `record` already has an `x` key, but the overwhelming precedent is that there is no exception.
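
To make that precedent concrete, here is a minimal illustration (the names are just for this sketch; the plain-object lines run in any modern engine, while the record form uses the proposal's `#{}` syntax, which no engine ships yet, so it appears only as a comment):

```js
// Duplicate keys in an object literal are legal in modern JavaScript; the last value wins.
const dup = { x: 1, x: 2 };
console.log(dup.x); // 2

// The same silent override happens when spread introduces the duplicate.
const record = { x: 1, y: 2 };
const updated = { ...record, x: true };
console.log(updated.x); // true

// The analogous record expression under the proposal would behave the same way:
//   const rec = #{ ...#{ x: 1, y: 2 }, x: true };  // rec.x === true, no exception
```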

brad4d (Author) commented Oct 13, 2022

I think the trade-off here can be summarized as follows:

If duplicate keys silently override earlier instances of the same key, then

  • + Code authors can use a pattern like `#{ ...record, x: newX }` to create a modified form of a record (see the sketch just below this list).
  • - However, code authors will not be protected from cases where such overwriting is unintentional.
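
To make the trade-off concrete, here is a sketch of both sides, written with plain objects because the proposal's `#{}` syntax is not yet runnable; under the spec text being discussed, records would exhibit the same override behaviour (the names here are purely illustrative):

```js
// Intentional: build an updated copy by spreading and overriding one key.
const settings = { theme: "dark", fontSize: 12 };
const enlarged = { ...settings, fontSize: 16 }; // deliberate override
console.log(enlarged.fontSize); // 16

// Unintentional: the literal key silently shadows a value carried in by the spread,
// and nothing at runtime signals the collision.
const defaults = { timeout: 1000, retries: 3 };
const config = { ...defaults, timeout: 5 }; // author did not realize defaults already had timeout
console.log(config.timeout); // 5; the original 1000 is silently gone
```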

The champions believe that it is more important to enable intentional overriding than to protect against the unintentional.
Making duplicate keys an error is unacceptable unless some equally easy way to do the overriding is provided.

Is that a fair summary?

ljharb (Member) commented Oct 13, 2022

I'd also add a point about consistency with object literals, which have always had silent override behavior.

rricard (Member) commented Oct 14, 2022

Sorry for the delay in weighing in on this.

Thank you @brad4d for summarising: yes, that is our position. We want to make it possible to update a single key's value by copy, by spreading and overriding. As @ljharb pointed out, this is consistent with objects, meaning Records and Objects remain interoperable for those operations.
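
For readers outside the thread, a brief sketch of why update-by-copy matters here; the record lines use the proposal's `#{}` syntax and are shown as comments since no engine implements it, so treat this as an illustrative sketch rather than spec-confirmed output:

```js
// A plain object can simply be updated in place:
const point = { x: 1, y: 2 };
point.x = 10; // allowed

// A record is an immutable value under the proposal, so in-place assignment is not an
// option; the only way to "change" a key is to build a new record, which is exactly
// the spread-and-override form discussed above:
//   const rec = #{ x: 1, y: 2 };
//   // rec.x = 10;                    // not allowed: records cannot be mutated
//   const moved = #{ ...rec, x: 10 }; // update by copy; the same pattern works on objects
```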

Unless we have a strong reason to prevent unintentional overrides, we will likely proceed with the current behaviour.

What are your thoughts on this @brad4d ?

brad4d (Author) commented Oct 15, 2022

@rricard I see the reasoning for not throwing an exception now.
I have no plans to push back against it.

It initially seemed very counterintuitive to me.
I tend to think of the silent overwriting that objects do as a design mistake that hides bugs.
Consistency doesn't seem to me like a good enough reason to keep doing it for Records.

However, I can definitely see the need to leverage the silent overwriting in order to create a modified form of a record, since there's no other clear way to do that.

I'm OK with closing this issue if no one has anything further to add.
