This repository was archived by the owner on Oct 12, 2022. It is now read-only.

hash and typeinfo refactoring #482

Closed
wants to merge 7 commits into from

Conversation

IgorStepanov
Contributor

This is part of #477 pull.
My motivations for this refactoring:

  1. Hash calculation is very dark magic. Different types need different hash functions. If you want to calculate a hash at compile time, you need a CTFE-ready function for the task (this function must know the static type of the "hashable" object). If you need to calculate a hash at run time, without a static type, you must call TypeInfo.getHash. It is very important that TypeInfo.getHash and the CTFE-ready function use the same hashing strategy. There are various special cases which must be taken into account. E.g. common types are hashed with the hashOf function, but the hash of an int value is the int value itself. char[] and string use a special hash algorithm (but const(char)[] did not -- I've fixed that).
    Ergo: all hash functions should be placed in a single module (and this module should be available to the end user).
  2. I did the TypeInfo refactoring for fun. All those ti_Ag-style classes contain nearly identical code. Using templates achieved a 2.5x code reduction (and prevents some errors). I think using templates for typeinfos is the right way. In the future, we can replace the DMD code that generates typeinfos with template code; that is now a solvable problem.
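The template idea in point 2 can be sketched roughly like this (a hypothetical illustration, not the actual druntime code; TypeInfo_ArrayT and computeHash are illustrative names from this proposal):

```d
// Hypothetical sketch: one class template replaces the N nearly-identical
// hand-written array typeinfo classes in rt.typeinfo.
class TypeInfo_ArrayT(T) : TypeInfo_Array
{
    override size_t getHash(in void* p) @trusted const
    {
        T[] s = *cast(T[]*)p;
        return s.computeHash(); // same strategy at compile time and run time
    }
}

// One alias per element type replaces each hand-written ti_A*.d module, e.g.:
alias TypeInfo_Ai = TypeInfo_ArrayT!int;
```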

@IgorStepanov
Contributor Author

@yebblies review this pull please.

@MartinNowak
Member

The typeinfo refactoring should be independent of adding a compile time hash function so this should be two pull requests.

What are the reasons to introduce a compile time hash function in druntime?

@IgorStepanov
Contributor Author

The typeinfo refactoring should be independent of adding a compile time hash function so this should be two pull requests.

It's possible, but difficult. One part of the hash refactoring is moving the hash-calculation logic into a core.util.hash module.
Because of this, I need to replace every rt.typeinfo.ti_TYPENAME.TypeInfo_XX.getHash function with:

override size_t getHash(in void* p) @trusted const
{
    T[] s = *cast(T[]*)p;
    return s.computeHash();
}

where T differs for each TypeInfo. Maybe it is possible to keep the pull as is? This is not a cyclic dependency; this is an attempt to avoid unnecessary work. However, if you tell me to split this pull, I'll do it.

What are the reasons to introduce a compile time hash function in druntime?

The first reason is to allow constructing AA literals in CTFE and returning them to the calling code. However, this is not the sole reason.
After pull dlang/dmd#1724 was merged, it became possible to generate complex data structures in CTFE. As the capabilities of CTFE increase, so does the complexity of the tasks. For example, you can implement beautiful double dispatch using a hash table (with type names as keys) that is filled at compile time.
A third possible usage: comparing TypeInfo objects is an expensive operation (when they are different instances of one type's typeid). There are many virtual method calls (see: typeid(immutable(char[])) == typeid(immutable(wchar[]))).

These typeinfos have different addresses, so we must call:

TypeInfo_Const.opEquals() //immutable(immutable(char)[]) == immutable(immutable(wchar)[])
TypeInfo_Array.opEquals() //immutable(char)[] == immutable(wchar)[]
TypeInfo_Const.opEquals() //immutable(char) == immutable(wchar)
TypeInfo.opEquals() //char == wchar : FAIL

What can we do?

  1. We can refactor typeid generation and move it into druntime. We can declare a template genTypeInfo(T) which returns an instance of the TypeInfo class or of a subclass of TypeInfo.
  2. Replace DMD's typeinfo generation code with a call to genTypeInfo!(T). (This refactoring would also solve a number of other issues, particularly the question of matching size and alignment of fields between object_.d and DMD.)
  3. Add to every TypeInfo object a field with the hash of T.mangleof.
    After this optimization we can detect typeinfo inequality with a simple integer comparison.
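The fast inequality check from point 3 could look roughly like this (a sketch; the mangleHash field is hypothetical and does not exist in TypeInfo):

```d
// Hypothetical sketch of point 3: each TypeInfo carries a hash of T.mangleof,
// so inequality can be detected with one integer compare instead of a chain
// of virtual opEquals calls.
bool fastTypeInfoEquals(const TypeInfo a, const TypeInfo b)
{
    if (a is b)
        return true;              // same instance: trivially equal
    if (a.mangleHash != b.mangleHash)
        return false;             // cheap rejection, no virtual calls
    return a == b;                // rare hash collision: full virtual compare
}
```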

However, the main goal of this patch is to allow generating associative array instances at compile time.

P.S. Thank you for taking the time to look at this pull, and sorry for my bad English.

@IgorStepanov
Contributor Author

Ping

@IgorStepanov
Contributor Author

@dawgfoto please comment on this. Must I split this pull into a typeinfo refactoring and a hash refactoring? (I wrote earlier why I don't want to do that.) Please also comment on my reasons for the CTFE hash implementation. Thanks.

@IgorStepanov
Contributor Author

Rebased, and reverted the typeinfo refactoring (I'll create a pull request for it later).

What are the reasons to introduce a compile time hash function in druntime?

If you are asking why it must be in druntime specifically, I have an answer.
Hash calculation for different types has many issues and special cases.
Obviously, the CTFE computeHash must maintain the following invariant:

T val = ...
assert(computeHash(val) == typeid(T).getHash(&val));

If I move the CTFE implementation of the hash function into my own library or Phobos, I'll need to track the druntime hash code. My code would depend on druntime's implementation. See this simple example:

import std.stdio;

class Foo
{
    override size_t toHash() const
    {
        return 5;
    }
}


void main() {
    auto arr = [new Foo, new Foo, new Foo];
    writeln(typeid(Object[]).getHash(&arr));
}

This code prints not 15, as one might expect, but a big strange number.
If you look at rt.typeinfo.ti_AC, you will see different behavior. But this behavior is defined in TypeInfo_Array.

OK, I can mirror this behavior in my ctfehash.d.
But if someone fixes the mentioned issue and changes the TypeInfo_Array implementation, my ctfehash unittest will fail and I'll need to fix my code. This is not a good situation.

The mentioned issue is not the only one. There are many strange cases, and I would need to track all druntime fixes in my ctfehash.d.

Now all hash-calculation functions are concentrated in core.util.hash.
Most of the code is shared between the static-type and typeid versions. All the differences (~150 lines) are covered by unittests, and core.util.hash guarantees that the invariant holds.

By the way, computeHash(val) can be preferable to typeid(T).getHash in class toHash functions, because computeHash(val) can work faster (there are no virtual function calls, and computeHash(val) can be inlined).

Last reason: CTFE hash calculation is needed for compile-time construction of associative array instances (currently CTFE cannot return an AA instance to the main code and place the instance in the static data segment). After this pull is merged I can add that ability (about 30 lines of D code and some C++ code in DMD would be needed).
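The intended usage, once CTFE can build AA instances and place them in the static data segment, might look like this (this did not compile at the time; it is precisely the capability being proposed):

```d
// Proposed capability, not working code at the time of this discussion:
// build an associative array at compile time and store it statically.
int[string] makeTable()
{
    int[string] t;
    t["one"] = 1;
    t["two"] = 2;
    return t;
}

// Requires CTFE hash support so keys land in the right buckets at compile time:
static immutable table = makeTable();
```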

}
else
{
const(ubyte)[] bytes = cast(const(ubyte)[])cast(const(void)[])((&val)[0..1]);
Member

Which types are handled in this fall-through case? I don't think you can access the memory representation in CTFE.

Contributor Author

This is the default behavior. It doesn't need to support CTFE. computeHash must work for all types, even if it cannot do so in CTFE; e.g. hashes of delegates are computed in this default branch.

@MartinNowak
Member

How about the following.

  • We first focus on implementing a solid/tested core.internal.hash that is
    usable at compile time. We should try to solve the harder problems first, i.e.
    hashOf(real), hashOf(Value[Key]), and Don should have a look whether we rely
    on any buggy CTFE behavior. This is also a good opportunity to get rid of any
    homegrown hashes and solely use a single hash on the binary representation of
    data.
  • After that we replace all internal hashing with the new function, which ensures
    assert(hashOf(val) == typeid(T).getHash(&val));
  • After that we make the new implementation available as object.hashOf.

@IgorStepanov
Contributor Author

I do not understand the first step.
Do you suggest moving computeHash to a hidden part of druntime (not available to users) and renaming it to toHash? And after approving and testing this function, moving it to object? Do you allow using the new hashOf function in TypeInfo.getHash in the first step?

hashOf does not allow computing a real hash at CTFE, because Don hasn't accepted my pull which allows converting real to ubyte[]. AA hashes are calculated correctly if the CTFE foreach order follows the runtime foreach order for AAs.

And computeHash is needed not only for CTFE hashing. It must compute identical hashes at CTFE and at run time whenever the CTFE hash can be computed.

What must I do now?

@IgorStepanov
Contributor Author

How about the following...

I think this pull has achieved the first two goals.
It computes the hash (in CTFE for some cases) and guarantees that the invariant holds for these cases.
Otherwise I don't understand you.

@MartinNowak
Member

Do you suggest moving computeHash to a hidden part of druntime (not available to users) and renaming it to toHash?

Yes, the exact name of the function is not important, but hashOf is short and descriptive.

And after approving and testing this function - move it to object.

Not moving, we can create a template function in object that imports and forwards to the internal implementation.

Do you allow using the new hashOf function in TypeInfo.getHash in the first step?

I would like to use that opportunity to clean up and improve some hash computations instead of simply cut&pasting the existing ones. Of course this is not strictly necessary.
Having this as two steps has the benefit that the diff becomes substantially smaller and we can review/merge it much faster.

@IgorStepanov
Contributor Author

Not moving, we can create a template function in object that imports and forwards to the internal implementation.

This means that core.internals.hash must be in the public part of druntime (in the import folder).
But I see no reason to clutter the object namespace with a hashOf symbol. If core.internals is placed in the public part, users can import the core.internals.hash module if they need a hashOf function.

I would like to use that opportunity to clean up and improve some hash computations instead of simply cut&pasting the existing ones. Of course this is not strictly necessary.
Having this as two steps has the benefit that the diff becomes substantially smaller and we can review/merge it much faster.

On the contrary: now that I've moved all hash logic to one place, we can optimize/clean/improve this logic as we wish. When the logic was divided among 10+ modules, you could not change anything without fear of breaking something.
Now you can quickly review the trivial changes in the TypeInfo modules, check that all logic is placed in util.hash, and work only on that module. Any suggestion for improving the hash logic is welcome. For example, if you want to change the string hash, you can rewrite one function and the rest of druntime stays correct.

@IgorStepanov
Contributor Author

About naming: I'll rename:
core.util.hash => core.internal.hash
core.util.hash.hashOf => core.internal.hash._hashOf (and make it private)
core.util.hash.computeHash => core.internal.hash.hashOf

Ok?

@IgorStepanov
Contributor Author

Is it OK now? What do I need to do after the renaming? Is it possible to work on this pull without separating the hashOf implementation from the TypeInfo.getHash implementation? They are highly dependent on each other. In addition, I have made several changes in the hash computation (for example, for arrays of structs and classes). Reverting the getHash changes would lead to a mismatch.

@IgorStepanov
Contributor Author

cleanup of constraints and added comments for hashOf functions

@IgorStepanov
Contributor Author

My suggestions for improvement:

  1. Merge the static-array and dynamic-array hash strategies.
    Currently a dynamic array hash is calculated as the hash of the array's raw data (except for arrays of structs and interfaces), while a static array calls getHash for each member. That is good, I think. Anyway, the TypeInfos for arrays of simple types are defined in rt.typeinfo and getHash is overridden as needed (the hash of an int[] is calculated with one call of hashOf).
    I think an int[][] var must be hashed as a combination of the member hashes, not as bytesHash(var.ptr, var.length).
  2. Find a better string hash algorithm.
  3. Replace the hash += new_hash summation of array and associative array hashes with hash = hash * PRIME_NUMBER + new_hash.
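Suggestion 3 is the standard multiply-and-add combiner; a minimal sketch (the PRIME_NUMBER value here is chosen only for illustration):

```d
// Sketch of suggestion 3: order-sensitive combining instead of plain summation.
// With hash += memberHash, the arrays [a, b] and [b, a] would collide;
// multiplying by a prime first makes the result depend on element order.
enum size_t PRIME_NUMBER = 31; // illustrative choice

size_t combineHashes(size_t[] memberHashes)
{
    size_t hash = 0;
    foreach (h; memberHashes)
        hash = hash * PRIME_NUMBER + h;
    return hash;
}
```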

@IgorStepanov
Contributor Author

Also, I've added tests which cover all if/else branches in all hashOf functions and ensure that hashOf(v) == typeid(typeof(v)).getHash(&v);
Now this code is ready for deep review.

* http://www.boost.org/LICENSE_1_0.txt)
*/
module core.internals.hash;

Member

Probably package: protection would make sense here, so that only core can access this module.

Contributor Author

What should "package" mean here: does it allow access for all core.* modules, or only for all core.internals.* modules?
And given http://d.puremagic.com/issues/show_bug.cgi?id=143, does package protection actually protect us from unwarranted access now?

@MartinNowak
Member

Looks much better now.
I don't have much time currently, but I'll try to review it within the next week; sorry for the delay.

@IgorStepanov
Contributor Author

I don't have much time currently but I'll try to review it within next week, sorry for the delay.

Ok. Thank you for your attention.

@IgorStepanov
Contributor Author

@dawgfoto Do you have time to review this pull now? :) Thanks!

@@ -13,6 +13,8 @@ COPY=\
$(IMPDIR)\core\time.d \
$(IMPDIR)\core\vararg.d \
\
$(IMPDIR)\core\internals\hash.d \
Member

It's named internal in Phobos, so we should stick to the singular here too.

Contributor Author

Shouldn't we generate an import file? But then how can we access this module? If we hide this module and provide a proxy function in object.di, we'll still need access to the internal module. Moreover, I don't know any way to protect the internal.hash module (allowing access to it only through object.di), because the object module has no parent package, so package protection can't help us.

Member

You're right, let's scratch the package protection.
But please rename the package from core.internals to core.internal.

Contributor Author

Ok.

@IgorStepanov
Contributor Author

@dawgfoto All tests are green. The last unresolved issue is the hash for float/double, yes? Is this a good solution: implement a CTFE log2 function to calculate the exponent and compute the ubyte[] representation of the double value like this: http://codepad.org/tWhIDslx (of course, there are special cases like inf, nan, etc.)?

@IgorStepanov
Contributor Author

If dlang/dmd#2656 is merged, I'll be able to implement the CTFE double hash properly.

@MartinNowak
Member

The last unresolved issue is hash for float/double, yes?

Not sure, but that was the main remaining issue.

http://codepad.org/tWhIDslx

Looks good.

@MartinNowak
Member

Can we break down the pull request a little further?
For example adding the seed parameter to Object.toHash should be independent and needs closer inspection w.r.t. code breakage.

@IgorStepanov
Contributor Author

For example adding the seed parameter to Object.toHash should be independent and needs closer inspection w.r.t. code breakage.

I'm not sure that adding a seed to Object.toHash is a good idea. This addition will break user code. Of course, we can add this parameter via an overload:

class Object
{
    pragma(msg, "Attention: size_t toHash() is deprecated. Don't use it!");
    size_t toHash(){assert(0);} //default implementation
    size_t toHash(size_t seed){return seedHash(seed, toHash());}
}

For struct toHash we would need some compiler support: if the user defines toHash without arguments, the compiler should emit a wrapper size_t toHash(size_t seed){return seedHash(seed, toHash());} and place a pointer to it in TypeInfo_Struct.xtoHash.

Anyway, these changes are so big that they will require long-term agreement with the community.
Do you want to start this work within this pull?

http://codepad.org/tWhIDslx

Looks good.

I've pushed these changes. Waiting for the tests.

@IgorStepanov
Contributor Author

@WalterBright I've hit a strange linkage bug on win64 and an ld warning on linux while compiling core.internal.convert.testNumberConvert.
I could pass the template parameter as a string, for example, but I think you should see this error. Or is passing literals through an alias template parameter not good?

@IgorStepanov
Contributor Author

I've created a bug report about this: http://d.puremagic.com/issues/show_bug.cgi?id=11273

@IgorStepanov
Contributor Author

@dawgfoto The float hash issue is fixed. Is this solution good? What do we need to do next?

@IgorStepanov
Contributor Author

For example adding the seed parameter to Object.toHash should be independent and needs closer inspection w.r.t. code breakage.

I have some ideas for how to perform this change and avoid a circular dependency. Do you suggest starting this work now? The current pull request doesn't break the public interface, but changing the toHash signature will. Should we do it in a separate pull request if we want to start this work?

@IgorStepanov
Contributor Author

@dawgfoto

Can we break down the pull request a little further?
For example adding the seed parameter to Object.toHash should be independent and needs closer inspection w.r.t. code breakage.

I guess I don't understand you correctly. Do you suggest splitting this pull request into parts?
That's very hard and sometimes impossible (I wrote about this).
However, there is good news: this pull request doesn't break user code. It practically does not change the public interface. There is only one change: I've added a seed parameter to TypeInfo.getHash(in void* p). Now it looks like TypeInfo.getHash(in void* p, size_t seed = 0). The seed parameter has a default value of 0, so users can use this method as they did before: size_t hash = typeid(T).getHash(&val);
I didn't add a seed argument to the Object.toHash() method. When I need to compute the hash value of an object, I use this approach:

size_t hashOf(Object val, size_t seed = 0)
{
   size_t o_hash = val ? val.toHash() : 0;
   return bytesHash(cast(ubyte*)&o_hash, size_t.sizeof, seed);
}

val.toHash() still doesn't take any arguments.

@IgorStepanov
Contributor Author

@dawgfoto Ping

}
else
{
return ((cast(uint) x[3]) << 24) | ((cast(uint) x[2]) << 16) | ((cast(uint) x[1]) << 8) | (cast(uint) x[0]);
Member

Turns out that neither GCC 4.8 nor LLVM 3.3 implement this optimization. Apparently, nobody ever writes C code like this.

Contributor Author

OK, I'll revert the cast in the non-dmd case. Another problem: dmd can't inline a function containing if-else, even if the condition is trivially evaluatable at compile time, like the __ctfe pseudo-variable.

@MartinNowak
Member

However, there is good news: this pull request doesn't break user code.

You're right, I misunderstood that part.
Sorry that it's been lying around for so long; I hope to find enough time for this soon, stay tuned.

@MartinNowak
Member

I guess I don't understand you correctly. Do you suggest splitting this pull request into parts?
That's very hard and sometimes impossible (I wrote about this).

The bad news: it's really good work, but there are simply too many orthogonal changes (adding hashOf, changing the hash function, rewiring TypeInfo.toHash) in this pull request, which even complicates the AA situation.
The good news: I have a proposal for how to get out of the AA misfortune.

  • We add a hashOf template function to core.something as a general purpose hash function (effectively your first commit without the TypeInfo stuff).
    • On top of the hashOf facility we can implement a simple templated/ctfeable hash container (as a dub package).
    • Evaluate and choose alternative hash implementations (MurmurHash).
  • The compiler needs to support implicit conversion from AA literals to say TypeTuple!(Key[], Value[]) so
    it becomes possible to implement MyAA!(string, string)(["foo":"bar"]). This step is optional but it allows any AA implementation to support construction from literals.
  • We replace object.AssociativeArray with the one from 1b. The compiler implements the following rewrites:
    Value[Key] => object.AssociativeArray!(Key, Value)
    ["key": "value"] => object.AssociativeArray!(Key, Value)(["key"], ["value"])
    Any other compiler AA logic gets removed.
    If necessary the new AA could be named AssociativeArray2 for a transitional phase.
  • All object.Object methods which only exist to support RTTI-based AAs are deprecated. AFAIK these are opCmp, opEquals, and toHash. Same for the corresponding function pointers in TypeInfo_Class and TypeInfo_Struct.

If this works out we don't need to touch the TypeInfo.toHash stuff, and we avoid complications like the seeded step-by-step array hash.

@IgorStepanov
Contributor Author

@dawgfoto Hmm. Do you suggest removing the getHash method from TypeInfo? Yes, we can reimplement AAs and use a templated hash for them. However, the getHash method can be used in other cases where RTTI is used. (I think RTTI is an important part of druntime, and it can be used for more than AA support.)

All object.Object method which only exists to support RTTI based AAs are deprecated. AFAIK these are opCmp, opEquals and toHash

How do we compute the hash of an object without toHash? A class or struct can have fields which shouldn't affect the hash value. A typical example is a cache field, which can change while the object is in use and doesn't reflect the object's state.
Another example: the developer of a class may know a good hash algorithm for the specific domain of the class: phone numbers, car numbers, etc.
I think computing the hash of an aggregate type as a hash of all the field hashes is not a good idea.
And what about simple comparison of objects? Don't we need opCmp/opEquals for simply comparing objects?

@MartinNowak
Member

How do we compute the hash of an object without toHash? A class or struct can have fields which shouldn't affect the hash value.

If a struct or class defines a toHash method use it, otherwise don't.

And what about simple comparison of objects? Don't we need opCmp/opEquals for simply comparing objects?

That's the point: these operations don't make sense for every class, so we shouldn't define those methods in the root class. The only reason they are part of Object is that we need them for AAs.

Replacing the virtual methods with template functions in object will also allow us to use qualifiers.
This was the reason why we had to undo #72 in #278.

Here are the relevant Bugzilla entries. We might also want to ask @jmdavis for the current status of this.
Issue 9769, Issue 9770, Issue 9771, Issue 9772

@IgorStepanov
Contributor Author

If a struct or class defines a toHash method use it, otherwise don't

In that case, T[Object] doesn't make sense, because we can't compute the hash of an Object: we can't find the toHash method of its real type. We could move a toHash pointer into TypeInfo_Class: if the class or one of its base classes defines toHash, TypeInfo_Class.xtoHash points to this method; otherwise it is null.

I've understood the general idea. My first goal: create a PR which implements the module core.internal.hash without the TypeInfo part. I'll do it this week. The next goal: implement an AA which uses core.internal.hash.hashOf instead of TypeInfo.getHash. Btw, I know how to implement a fully templated AssociativeArray and keep the extern(C) interface for it.

@MartinNowak
Member

In this case, T[Object] doesn't make sense

Yes, it doesn't work because with a moving GC we wouldn't have any sensible data to hash.
But for now the hashOf function should add deprecated support for the incorrect implementation (hashing the object pointer).
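A minimal sketch of that deprecated fallback, assuming the class defines no toHash of its own (the function name is hypothetical):

```d
// Sketch: deprecated fallback for classes that declare no toHash of their own.
// Hashing the address is incorrect under a moving GC, hence the deprecation.
deprecated("hashing the object address; define toHash instead")
size_t hashOfObjectAddress(Object o)
{
    return cast(size_t)cast(void*)o;
}
```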

Btw, I know how to implement a fully templated AssociativeArray and keep the extern(C) interface for it.

This sounds interesting, but I wouldn't mind if object imported core.internal.aa.

@MartinNowak
Member

We should ask @9rnsr and @donc if one of them has a good idea for 2.

@IgorStepanov
Contributor Author

Yes, it doesn't work because with a moving GC we wouldn't have any sensible data to hash.
But for now the hashOf function should add deprecated support for the incorrect implementation (hashing the object pointer).

I would like to clarify my question.
Is the following code correct?

class A
{
   override size_t toHash()
   {
        return 5;
   }
}

class B
{
   override size_t toHash()
   {
        return 10;
   }
}

void main()
{
   Object a = new A;
   Object b = new B;

   auto aa = [a:"A", b:"B"];
}

BTW: can we add special rules for toHash definitions which the compiler can check?
I suggest the following rule: "a toHash method can access only immutable/const fields and global variables."
I.e.:

struct Foo
{
    immutable string a;
    const string b;
    string cache;

    size_t toHash()
    {
       size_t hash = 0;
       hash = computeHash(a, hash); //OK, a is immutable
       hash = computeHash(b, hash); //OK, b is const
       hash = computeHash(cache, hash); //NG: cache is mutable
       return hash;
    }
}

This rule would ensure that an AA key's hash can't change, while not requiring the key to be immutable.

@MartinNowak
Member

Is the following code correct?

No, the code is not correct, because I can insert objects which aren't comparable.

It will take quite some time until we can remove the Object functions and I think this is orthogonal to the AA issue.
So we need to think more thoroughly about the implications. One idea would be to add a Hashable interface which all classes that support hashing would inherit from (this still has the qualifier issue).
For now it only means we have to support the existing toHash methods in the new hashOf function.

@IgorStepanov
Contributor Author

@dawgfoto

No, the code is not correct, because I can insert objects which aren't comparable.

I meant that A and B are hash-compatible (opEquals and toHash are implemented). Can they be keys in an int[Object] AA?

It will take quite some time until we can remove the Object functions and I think this is orthogonal to the AA issue.

Yes. I think we can use toHash as is, and when we start removing toHash from Object, we'll fix the class part of hashOf.
I think an IHashable interface is the best solution.

About the extern(C) AA interface:
For now D supports the following code:

int[string][string] ranges;
ranges["cat"]["dog"] = 1;
ranges["dog"]["pedigree"] = 1;
ranges["cat"]["pedigree"] = 2;

This code cannot be implemented as a D user type (without ugly temporary proxy objects), because the opIndex implementation doesn't know about the following opIndexAssign.
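The problem becomes visible when the chained expression is desugared; MyAA below is a hypothetical user type used only to illustrate it:

```d
// Why ranges["cat"]["dog"] = 1 is hard for a user-defined AA type:
// the expression desugars roughly into
//     ranges.opIndex("cat").opIndexAssign(1, "dog");
// so opIndex("cat") must already return a usable inner AA even when the key
// "cat" is missing. The built-in AA inserts the entry lazily on assignment,
// but MyAA.opIndex cannot know that an opIndexAssign comes next.
struct MyAA(K, V)
{
    V[K] impl; // illustrative backing store

    ref V opIndex(K key)
    {
        // Must either throw on a missing key (breaking the chained
        // assignment) or eagerly insert a default value (a side effect
        // that a plain lookup should not have).
        return impl[key];
    }

    void opIndexAssign(V value, K key) { impl[key] = value; }
}
```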

@MartinNowak
Member

I meant that A and B are hash-compatible (opEquals and toHash are implemented). Can they be keys in an int[Object] AA?

True, but you'd need to express this differently, because int[Object] allows inserting Objects which aren't compatible.
So int[IHashable] might be a solution. The remaining problem with interfaces is const/qualifier correctness. In the past, toHash() const or toString() const were found to be too restrictive (e.g. they don't allow logical-const caching).

This code cannot be implemented as a D user type (without ugly temporary proxy objects), because the opIndex implementation doesn't know about the following opIndexAssign.

I think we should try to go with a fully generic AA.

module object;
//...
template AA(Key, Value)
{
    import core.internal.aaimpl;
    alias AA = AAImpl!(Key, Value);
}

@IgorStepanov
Contributor Author

True, but you'd need to express this differently, because int[Object] allows inserting Objects which aren't compatible.
So int[IHashable] might be a solution. The remaining problem with interfaces is const/qualifier correctness. In the past, toHash() const or toString() const were found to be too restrictive (e.g. they don't allow logical-const caching).

About toHash: I think it should be more restrictive than it is now. The hash value of an object shouldn't change during the object's lifetime. Otherwise it can change while the object is stored in an associative array and break that array.

The compiler needs to support implicit conversion from AA literals to say TypeTuple!(Key[], Value[])...

Can you help me with this stage? I think it would be a good idea to create a special template object.AssociativeArrayLiteral(T...) and convert all AA literals to it. AssociativeArrayLiteral should allow different types of keys/values (like [1:2.0, "val":[1,2,3]]), because it could be used in different user containers like JSONValue. The same goes for plain array literals. But I don't know how to do it.

@MartinNowak
Member

Can you help me with this stage?

I'll have a look at this, but it'll take 1 or 2 weeks until I have time. For now we can continue without it; it means that calling MyAA!(Key, Value)([key : value]) will allocate an unnecessary runtime AA and pass it to the ctor.

I think it would be a good idea to create a special template object.AssociativeArrayLiteral(T...) and convert all AA literals to it.

That might be a good idea, but I'm not sure we can make this compatible with the current runtime AA.

AssociativeArrayLiteral should allow different types of keys/values (like [1:2.0, "val":[1,2,3]]), because it could be used in different user containers like JSONValue.

That's an interesting idea, but it's a heavy language change, so we should file it as a further enhancement.


8 participants