
The type_of could be very slow #53

Closed
ice-tong opened this issue Aug 10, 2022 · 36 comments · Fixed by #73


@ice-tong

ice-tong commented Aug 10, 2022

Hi, thanks for the nice project!

I found that _types_of_iterable can be very slow if the iterable is very large. This could be a big problem for practical applications. Is there any solution for this?

@wesselb
Member

wesselb commented Aug 10, 2022

Hey @ice-tong! Thank you for the kind words. :)

You're totally right that _types_of_iterable can run into performance issues when run on nested structures or iterables with many elements. This is currently and unfortunately a limitation of the implementation.

When a list with many elements is passed to a function, one must inspect all elements of the list to determine the right element type. As far as I can tell, there is no cheaper way to determine this.

One way of approaching this issue is, for large lists, say n >= 1000, to only inspect a subset of the elements. For example, the first 100, last 100, and 100 randomly chosen elements sounds like a reasonable approach. Of course, the type might then not be right anymore.
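That sampling idea could be sketched as follows (the helper name and thresholds are hypothetical, not Plum's API):

```python
import random

def sample_elements(xs, threshold=1000, k=100):
    # Hypothetical helper: for small lists, inspect everything; for large
    # lists, inspect only the first k, last k, and k random middle elements.
    if len(xs) < threshold:
        return list(xs)
    middle = random.sample(xs[k:-k], k)
    return xs[:k] + xs[-k:] + middle

# The element type would then be inferred from the sample only:
types_seen = {type(x) for x in sample_elements([(1, 2)] * 5000)}
```

As noted, a heterogeneous element hiding in the unsampled middle would be missed, so the inferred type could be wrong.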

Do you have any thoughts on this?

@ice-tong
Author

Hi @wesselb, I agree that there is no cheaper way to accurately get the type of a Python object. I also don't have any good ideas for this right now. 🤣

@wesselb
Member

wesselb commented Aug 11, 2022

Let's leave this issue open, because it is certainly a performance issue that at some point will need addressing!

@ice-tong
Author

ice-tong commented Aug 12, 2022

Hi @wesselb, I want to share with you some possible tricks to optimize the type_of speed.

In the following big-list test, we can use a singleton Type, since the many instance creations in type_of can be expensive. A hash cache on plum types can also bring better speed.

import time
from contextlib import contextmanager
from plum import type_of
from plum.type import TypeMeta
from plum.type import Type as plum_Type
from plum.parametric import Tuple as plum_Tuple
import cProfile


big_list = [(1, 2) for _ in range(10000*100)]

start_t = time.time()
print(type_of(big_list))
base_t = time.time() - start_t
print(f'Big list cost: {base_t} s')


@contextmanager
def singleton(meta_cls):
    origin = meta_cls.__call__
    
    meta_cls._instances = {}

    def __call__(cls, *args, **kwargs):
        assert not kwargs
        key = (cls, args)
        if key not in cls._instances:
            cls._instances[key] = origin(cls, *args, **kwargs)
        return cls._instances[key]
    
    meta_cls.__call__ = __call__
    yield

    meta_cls.__call__ = origin


with singleton(TypeMeta):
    start_t = time.time()
    print(type_of(big_list))
    t_1 = time.time() - start_t
    print(f'Big list with singleton type cost: {t_1} s')
    print(f'speed up: {(base_t - t_1) / base_t}')


@contextmanager
def hash_cache(hashable_ty):
    hash_core = hashable_ty.__hash__
    hashable_ty._hash = None

    def __hash__(self):
        if self._hash is None:
            self._hash = hash_core(self)
        return self._hash
    
    hashable_ty.__hash__ = __hash__
    yield 
    hashable_ty.__hash__ = hash_core


with hash_cache(plum_Tuple), hash_cache(plum_Type), singleton(TypeMeta):
    start_t = time.time()
    print(type_of(big_list))
    t_2 = time.time() - start_t
    print(f'Big list with singleton type and hash cache cost: {t_2} s')
    print(f'speed up: {(base_t - t_2) / base_t}')
    cProfile.run('type_of(big_list)', filename='plum_type_of.prof')

We got a 40+% speed-up:

List[Tuple[builtins.int, builtins.int]]
Big list cost: 5.383634090423584 s
List[Tuple[builtins.int, builtins.int]]
Big list with singleton type cost: 3.3826003074645996 s
speed up: 0.371688296297556
List[Tuple[builtins.int, builtins.int]]
Big list with singleton type and hash cache cost: 2.8426778316497803 s
speed up: 0.47197789004525037

@ice-tong
Author

The fact that type_of itself uses multiple dispatch also causes a performance issue. Using the following type_of implementation gives a 15% speed-up.

# Assumes Plum's internals (List, Tuple, Dict, ptype, _types_of_iterable)
# are already in scope.
def type_of(obj):
    if type(obj) is list:
        return List(_types_of_iterable(obj))
    elif type(obj) is tuple:
        return Tuple(*(type_of(x) for x in obj))
    elif type(obj) is dict:
        return Dict(_types_of_iterable(obj.keys()),
                    _types_of_iterable(obj.values()))
    else:
        return ptype(type(obj))

@wesselb
Member

wesselb commented Aug 14, 2022

Hey @ice-tong,

Those are some clever optimisations—halving the runtime of type_of is a big improvement! Caching objects and hash values seems like a good way to reduce unnecessary overhead. Some uses of Plum load types dynamically, which means that the hash of a type might suddenly change, but other than that I don't foresee any issues.

The fact that type_of itself uses multiple dispatch also causes a performance issue. Using the following type_of implementation gives a 15% speed-up.

Currently, multiple dispatch is used to make type_of extendable. This is an example of where that could be useful. If we were to remove multiple dispatch from type_of, then that would not be possible anymore. There hence seems to be a speed–functionality trade-off here.
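The trade-off can be illustrated with the standard library's functools.singledispatch (an analogy only, not Plum's actual mechanism): dispatch keeps a type_of-style function open for extension by downstream code, which a hard-coded if/elif chain is not.

```python
from functools import singledispatch

@singledispatch
def my_type_of(obj):
    # Fallback: the plain runtime type.
    return type(obj)

@my_type_of.register
def _(obj: list):
    return list  # stand-in for proper List[...] inference

# A downstream user can extend the open version for their own type:
class MyBag(list):
    pass

@my_type_of.register
def _(obj: MyBag):
    return MyBag
```

With the closed if/elif version above, a user-defined MyBag would fall through to the generic else branch with no way to customise the result.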

@rtbs-dev
Contributor

@wesselb I'm late to this game, but there's a pretty large body of work around describing this issue, which maybe plum can take advantage of? Two things: 1. runtime typechecks of deeply nested generics, and 2. using this for dispatch

  1. I've been using a library called classes, where @sobolevn has a dispatch-esque ad hoc polymorphism setup. It uses __instancecheck__ dunders to patch into the isinstance() system, such that generics are allowed (both for general dispatch and dedicated Supports typeclasses). Loads of boilerplate to do this, but it looks like the phantom-types library can be used to clean up the typing, dev-side. I'm probably over-simplifying here, but all this is to say, it was @sobolevn that led me to find beartype, which has a pretty phenomenal way of doing O(1) nested container type-checking at runtime, in a fully PEP-compliant way.

One way of approaching this issue is, for large lists, say n >= 1000, to only inspect a subset of the elements. For example, the first 100, last 100, and 100 randomly chosen elements sounds like a reasonable approach. Of course, the type might then not be right anymore.

It's effectively a robust version of doing this, just with @leycec's incredible obsession for speed verification (and puns 😅)

  2. So, can we use beartype to accomplish some of the back-end type-checking, at least as an optional backend? More generally, the community is adopting a number of very well-loved runtime typecheckers like beartype and typeguard. For instance, Google's new functional ML darling Jax has fully interoperable runtime type definitions that work with either. In addition, the new hot kid on the block will be Pydantic V2, with the new blog post outlining that the pydantic-core (written in Rust) will provide arbitrary validation without BaseModel, pushing pydantic more toward being a true runtime type-checker (with specific love for JSONSchema). SO... can we take advantage of this runtime-checker deluge for some DRY in the dispatch community? As I understand your source code (very briefly, I'll add), you effectively created your own runtime type-checker with special techniques for handling concrete generics --- something that @leycec has spent a lot of time generalizing. I've noticed that most of the dispatching solutions basically need a runtime typechecker, but end up pretty much writing their own each time, no?

I actually spent a bit of time working on the theoretical side of this problem (a beartype-based dispatcher I lovingly wanted to call bearpatch). Since we really only get True vs. False out of the isinstance-like is_bearable() function, I was working on a dispatcher that used a Trie to compile a sort of ad-hoc generic type hierarchy that could determine a dispatch for covariant generic subtypes in, what, O(log n)? Ish? But this was a nightmare and mistake to undertake alone, and so I found this thread while considering forking plum to replace the typechecking guts with a frankenbeartype monstrosity...

So, I realize this is a massive can of worms I'm opening in terms of the horrifying way python treats runtime types.

(e.g. see this glorious manifesto of a rollercoaster that will dig into the hidden bowels of the mechanics and politics of simply trying to implement a beartype-able @overloads system, which in fact directly mentions NOT doing dispatching, since "other libraries can do this"... and yet "other libraries" all suffer from the same need to check nested containers!) 😅

BUT I guess I'm just trying to start a broader conversation here about how one might go about unifying the community around a decent approach to dispatch on deeply nested containers that doesn't require reinventing the wheel every two seconds/libraries? 😄

@ice-tong
Author

ice-tong commented Aug 19, 2022

Hi @tbsexton, I had the same idea as you: using beartype as the runtime type checker to implement multiple dispatch. But I found that beartype is not at all fast in some cases and the type checking is not accurate.

In [1]: from typing import List, Tuple

In [2]: from beartype.abby import is_bearable

In [3]: import torch

In [4]: import numpy as np

In [5]: typing_hints = List[Tuple[torch.Tensor, torch.Tensor]]

In [6]: data = [(np.array(1), np.array(2)) for _ in range(10000*100)]

In [7]: %time is_bearable(data, typing_hints)
CPU times: user 23.6 s, sys: 41.2 ms, total: 23.6 s
Wall time: 23.6 s
Out[7]: False

So I dropped this idea. 🤣

As long as we can optimize the time consumption of type_of as much as possible, I believe that the execution speed of plum can eventually reach an acceptable level. ^_^

@leycec
Member

leycec commented Aug 19, 2022

shots fired

bang bang

First up, thanks a metric ton to @tbsexton! But what else would you expect from NIST? Only the best. That's what. Thankfully, ludicrously fast runtime type-checking is in our wheelhouse. Let's see if we can't shed a light into the darkness that is this issue.

Second up, let's re-profile @ice-tong's horrifying worst-case example that makes my filmy eyes bleed for posterity. Gotta set that record straight if we can. To do so, we'll begin with a simpler example that elucidates possibly interesting things:

$ ipython3.10
[ins] In [1]:  import beartype
[ins] In [2]:  beartype.__version__
Out[2]: '0.11.0'  # <-- good. this is good.
[ins] In [3]:  from beartype.abby import is_bearable
[ins] In [4]:  from beartype.typing import List, Tuple  # <-- avoid PEP 585 deprecation warnings, yo.
[ins] In [5]:  typing_hints = List[Tuple[str, str]]  # <-- so, strings instead of PyTorch tensors.

# So far, so good. First, a plus-sized list of 2-tuples satisfying this hint.
[ins] In [6]:  data_good = [(str(i), str(i+1)) for i in range(10000*100)]
[ins] In [7]: %time is_bearable(data_good, typing_hints)
CPU times: user 2.17 ms, sys: 958 µs, total: 3.13 ms
Wall time: 3.12 ms
Out[7]: True

# Goodness continues. @beartype yeets. Second, let's try that again. Just 'cause.
[ins] In [8]:  %time is_bearable(data_good, typing_hints)
CPU times: user 34 µs, sys: 3 µs, total: 37 µs
Wall time: 39.1 µs
Out[8]: True

# Goodness amplifies! @beartype is two orders of magnitude faster the second time
# you invoke it than the first. What gives? Simple. The first time you call is_bearable(),
# @beartype secretly creates and caches a synthetic runtime type-checker specific to
# the exact type hint you passed (i.e., "typing_hints" above). Every subsequent time
# you call is_bearable() with the same type hint, @beartype internally reuses the
# previously cached synthetic runtime type-checker for that type hint.
#
# In complexity terms, is_bearable() exhibits amortized worst-case O(1) time complexity
# with negligible constants. This is good and exactly as advertised.
#
# So far, still good. Next, a plus-sized list of 2-tuples *VIOLATING* this hint.
[ins] In [9]:  data_bad = [(i, i+1) for i in range(10000*100)]  # <-- so, integers instead of strings.
[ins] In [9]:  %time is_bearable(data_bad, typing_hints)
CPU times: user 1.03 s, sys: 12.1 ms, total: 1.05 s
Wall time: 1.04 s
Out[9]: False

# *THAT'S TERRIBAD.* Okay, sure. It's not quite the 23.6 seconds terribad that
# @ice-tong exhibited above. But... that's still not great. This should be tens of
# orders of magnitude faster than it currently is. What gives, @beartype!?!?
#
# The real-world, mostly. In a desperate last-ditch effort to push beartype 0.10.0
# out the door, I implemented is_bearable() in the crudest way possible. I wasn't
# convinced anyone was actually interested in performing their own runtime
# type-checking at the statement level. So, I just shipped the "beartype.abby"
# subpackage out the door as fast as possible. We know how this story ends.
# It would eventually become clear that everyone, in fact, is interested in
# performing their own runtime type-checking at the statement level. Then I
# began facepalming myself repeatedly.
#
# Because laziness prevailed, is_bearable() currently uses the Easier to Ask
# Forgiveness than Permission (EAFP) approach to detect type hint violations.
# Basically, is_bearable() catches exceptions. Exception handling is fast in
# Python if no exception is raised. As soon as an exception is raised, however,
# exception handling is slooooow. That's what we're seeing above.
#
# This is compounded by the fact that the "data_bad" object is a stupidly large
# list of 2-tuples. So, is_bearable() is not only catching an exception there;
# is_bearable() is catching an exception that has embedded inside of itself a
# string representation of a stupidly large list of 2-tuples. It takes a considerable
# amount of wall-clock time to construct that string representation. In fact,
# almost *ALL* of the 1.03 seconds consumed by that last call to is_bearable()
# was spent constructing a string representation that no one ever actually sees.
# More facepalming.

I'll stop there. This embarrassment must end! It's now evident that there is tangible interest in statement-level runtime type-checking. So, I'll absolutely divert resources to optimizing is_bearable() for the next beartype 0.12.0 release cycle.

That Said, Nothing Above Matters

That's right. @leycec always cackles madly at the end of every GitHub post – and this is no exception. plum doesn't want to call is_bearable() or any other existing API of any other existing third-party package, because plum is doing something fundamentally novel that I have never seen anything else do before.

That's fascinating and ultimately the reason we're all here. So, here are the two things I'm seeing:

  • On the one hand, @beartype and other runtime type-checkers validate that an arbitrary object satisfies a given type hint.
  • On the other hand, plum.type_of() heuristically reverse-engineers the type hint which an arbitrary object would satisfy were that object to be annotated by that hint. Let's take a moment and appreciate how excitingly insane and non-trivial that problem domain actually is. Because it is. It's exciting. But it's also insane.

The core issue is that type_of() will fail in the general case, because there are a countably infinite number of edge cases that grow monotonically with each new CPython minor version and publication of new PEP standards. No. Seriously. To paraphrase childhood nostalgia: "It's dangerous to go alone."

old mean spitting truth

Preach, old man!

There are many intersecting issues here. The most significant is that PEP-compliant type hints were never designed, intended, or implemented to be used at runtime. Type hints explicitly prohibit introspection by defining metaclasses whose __instancecheck__() and __subclasscheck__() dunder methods prohibit trivial introspection via isinstance() and issubclass(). So how do you even detect what "sort of thing" a given type hint even is? You kinda don't. Numerous PEP standards even publicly admit that "Runtime is not a concern." If you've ever attempted to introspect a type hint at runtime (and you have; oh, you have), you know this hard truth of which I speak. It's like a parasitic slug attached to the inner lining of your intestinal wall. You know it's there, but you'd rather not publicly talk about it to anyone.

The runtime API of type hints varies across Python versions and PEP standards. Most type hints initially popularized by PEP 484 have since been deprecated by PEP 585. Critically, this includes core type hint factories like typing.List[...] and typing.Tuple[...]. Moreover, arbitrary user-defined classes can subclass arbitrarily complex type hints. We refer to these as generics. Critically, this includes both PEP 484- and 585-compliant generics – whose runtime APIs are completely different "under the hood" despite the superficial similarities in their declarative syntax: e.g.,

from typing import List, Tuple

# This is a PEP 484-compliant generic.
class ListOfStrs484(List[Tuple[str]]): pass

# This is a PEP 585-compliant generic.
class ListOfStrs585(list[tuple[str]]): pass

So, now users can create instances of both ListOfStrs484 and ListOfStrs585 and then pass those instances to plum.type_of(), which then needs to transparently support both PEP 484- and 585-compliant generics. And that's only the start, really. There are typed named tuples, typed dictionaries, typed literals, typed protocols, and all manner of hideous monstrosities you do not want to try detecting by hand – let alone handling. Then throw typing_extensions into the mix as well, which produces objects that have a completely different runtime API from those produced by typing.
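For what it's worth, this particular pair of cases can at least be introspected with the same standard typing helpers; a small sketch (this covers only the simple generics above, not the wider zoo of protocols, literals, and TypedDicts):

```python
from typing import List, Tuple, get_args, get_origin

class ListOfStrs484(List[Tuple[str]]):   # PEP 484-style generic
    pass

class ListOfStrs585(list[tuple[str]]):   # PEP 585-style generic (Python 3.9+)
    pass

# Both record their parametrised base class in __orig_bases__ ...
hint484 = ListOfStrs484.__orig_bases__[0]
hint585 = ListOfStrs585.__orig_bases__[0]

# ... and get_origin()/get_args() normalise both flavours to the same builtins:
assert get_origin(hint484) is list and get_origin(hint585) is list
assert get_origin(get_args(hint484)[0]) is tuple
assert get_origin(get_args(hint585)[0]) is tuple
```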

Postponement raises its ugly head, too. How do you resolve PEP 563-postponed type hints under the from __future__ import annotations pragma? You kinda don't. I mean, @beartype does – but we're certifiably nutso. Don't do what we do. We set bad examples. Except you kinda have to do what @beartype does, or you can't fully resolve postponed type hints. I see that plum sorta half-heartedly tries to resolve postponed type hints by trying to pretend that they're just forward references in drag – but they're not. 'Union[MyClass, int]' is a valid postponed type hint but not a valid forward reference. Right? To fully resolve the former plum would need to perform deep call stack introspection coupled with potentially dangerous dynamic evaluation via eval(). Even pydantic, with its immense girth and corporate sponsorship, just gave up and politely requested that PEP 563 be rolled back from Python 3.10. Now everyone except @beartype pretends that from __future__ import annotations no longer exists. But... it still exists.

Ambiguity raises its ugly head, too. How do you differentiate between a fixed-length tuple (e.g., an object satisfying typing.Tuple[int, int]) and a variadic tuple (e.g., an object satisfying tuple[int, ...])? You kinda don't. How do you differentiate between a string (e.g., an object satisfying str) and a string literal (e.g., an object satisfying typing.LiteralString)? You kinda don't – not without abstract syntax tree (AST) inspection typically performed by a static type checker, anyway. @beartype will do that in a future version, incidentally.

So What You're Saying is Type Hints Suck

Sorta. That's not far off, really. Type hints don't suck, per se. They're just sufficiently non-trivial to support at runtime that it's generally not worth anyone's time to actually do so – unless they're @beartype or typeguard or another full-blown runtime type-checker, in which case we have no choice but to grit our teeth, hit multiple consecutive shots of home-distilled whisky, clamp down on a leather strap, and brace for incoming pain.

What I'm saying is... plum really wants to defer as much of its type hint handling to third-part(y|ies) as possible. Let plum obsessively focus on what plum excels at, which is multiple dispatch. Let somebody else handle the actual type hints, because ain't nobody got 108 consecutive lifetimes to throw at that shade of nonsense.

@beartype actually does all of the above internally already – full-blown PEP 563 resolution of postponed type hints, full-blown detection of arbitrary type hints, full-blown iteration of generic superclass trees. You name it and we probably have a private internal API handling it. That's the rub, though. Private. We sorta nebulously suspected that somebody else might want to introspect type hints, but never quite got around to finalizing a public API.

The closest we've come is our upcoming Decidedly Object-Oriented Runtime-checking (DOOR) API, shipping with beartype 0.11.0 in a week or two:

# This is DOOR. It's a Pythonic API providing an object-oriented interface
# to low-level type hints that basically have no interface whatsoever.
>>> from beartype.door import TypeHint
>>> usable_type_hint = TypeHint(int | str | None)
>>> print(usable_type_hint)
TypeHint(int | str | None)

# DOOR hints can be iterated Pythonically.
>>> for child_hint in usable_type_hint: print(child_hint)
TypeHint(<class 'int'>)
TypeHint(<class 'str'>)
TypeHint(<class 'NoneType'>)

# DOOR hints support equality Pythonically.
>>> from typing import Union
>>> usable_type_hint == TypeHint(Union[int, str, None])
True  # <-- this is madness.

# DOOR hints support rich comparisons Pythonically.
>>> usable_type_hint <= TypeHint(int | str | bool | None)
True  # <-- madness continues.

# DOOR hints are self-caching.
>>> TypeHint(int | str | bool | None) is TypeHint(int | str | bool | None)
True  # <-- blowing minds over here.

Basically, beartype.door is what typing should have been but isn't (and never will be). If a Pythonic object-oriented API for introspecting type hints would be of interest to plum, just let us know what you'd like us to stuff in there. We'll make the magic happen on your behalf.

Roll Call

Thanks again to @tbsexton for the gracious ping! Furious fist-bumps to @wesselb for maintaining all of this mysterious magic and @ice-tong for their justifiable interest in both macro- and micro-optimization. Literally can't wait to see where you take plum next, @wesselb. 🤜 🤛

@rtbs-dev
Contributor

rtbs-dev commented Aug 19, 2022

@leycec thanks for dropping by! I hesitated to so rudely summon you from "hibernation" (read: implementing whatever beautiful insanity v0.12 is turning into), but I won't lie, I'm glad I did! 😅

You mentioned:

# The real-world, mostly. In a desperate last-ditch effort to push beartype 0.10.0
# out the door, I implemented is_bearable() in the crudest way possible. I wasn't
# convinced anyone was actually interested in performing their own runtime
# type-checking at the statement level. So, I just shipped the "beartype.abby"
# subpackage out the door as fast as possible. We know how this story ends. 

I actually had started writing out a reply to @ice-tong somewhat along those lines, since I had noticed the is_bearable interface source code had a big #FIXME: Optimize us up, please..

What I'm saying is... plum really wants to defer as much of its type hint handling to third-part(y|ies) as possible. Let plum obsessively focus on what plum excels at, which is multiple dispatch. Let somebody else handle the actual type hints, because ain't nobody got 108 consecutive lifetimes to throw at that shade of nonsense.

This was exactly what I was hoping to hear, though figuring out how to do this smoothly, without disrupting @wesselb's workflow too much, would be excellent. I know the dry-python/classes docs mention beartype for doing generic delegates, but I admit I can't really figure out how the two are meant to interop, when the only overlap I can see (so far) is that both beartype and classes are built on isinstance, at some deep, arcane level. This puts us back where we started...

Needless to say, DOOR is everything I was ever scared I needed, so I'd be happy to beta test for some ideas I've been floating around. Is there an issue/PR I could migrate the discussion to, to keep this on-topic?

no choice but to grit our teeth, hit multiple consecutive shots of home-distilled whisky, clamp down on a leather strap, and brace for incoming pain.

I feel like we need our own ~~masochist~~ types-and-functional-programming version of PyCon to ~~commiserate~~ collaborate on, for all of us weirdos that like this sort of thing. Enough (Dask, Xarray, Jax, Toolz, BiDict, Pandera, Pydantic, and PyViz) folks have done things like this to warrant our own (heretical?) community gatherings. I know I've wanted, say, higher-kinded types an untold number of times to handle this sort of thing, which took a small army to describe to others, y'know, why (another of my favorite issues to despairingly track/doomscroll). Or e.g. have you seen coconut-lang? @evhub does some amazing AST manipulation to handle dispatching on pattern-matched function definitions, another ingenious bit of wizardry that may be up your alley. It's really just the lack of LSP and emacs tooling that keeps me back in vanilla python land most of the time 😅.

Anyway, thanks @leycec for dropping by, and @wesselb this is a lot of text-walls to come back to

but I guess I just want to say, we're here to help? Happy to continue this discussion here or in a new thread, etc., if beartype/typeguard reliance interests you.

@wesselb
Member

wesselb commented Aug 21, 2022

Sup @tbsexton and @leycec,

I’m currently away on holidays, but I saw emails for your messages popping up and just wanted to briefly thank you for the fantastic posts! Once I’m back at a keyboard I’ll reply properly.

In the meantime, I just wanted to say that beartype looks absolutely fantastic, and I’m puzzled that I wasn’t aware of the library before. I am more than happy (hell, I’d even much prefer) to have plum offload everything that isn’t multiple dispatch to other libraries. I realise that some appreciate that plum is currently dependency-free, so a simplistic fallback (like the current type_of) for when beartype or another dependency isn’t installed seems like a good solution that might yield the best of both worlds.

@leycec
Member

leycec commented Aug 23, 2022

@wesselb: Wondrous idea. Graceful fallbacks and optional dependencies are the only way. @beartype itself does that all the friggin' time in our main codebase and test suite. Let us know how @beartype can help plum pluck all that ripe low-hanging fruit for multiple dispatch. Oh, and I hope you had a phenomenal holiday! August and September are the time to be alive here in Canada; my wife and I pretty much just lived on trail bikes for the past three weeks. 🌞 🚵 🌞

@ice-tong: Behold! @beartype commit 8b15fa resolves the shameful performance regression in is_bearable(), which now guarantees the same O(1) "constant-time with negligible constants" runtime performance in all edge cases as the @beartype decorator:

In [1]: from typing import List, Tuple

In [2]: from beartype.abby import is_bearable

In [3]: import torch

In [4]: import numpy as np

In [5]: typing_hints = List[Tuple[torch.Tensor, torch.Tensor]]

In [6]: data = [(np.array(1), np.array(2)) for _ in range(10000*100)]

In [7]: %time is_bearable(data, typing_hints)
CPU times: user 31 µs, sys: 2 µs, total: 33 µs
Wall time: 36.7 µs
Out[7]: False

I threw all of Saturday and Sunday evening at making that magic happen, which meant refactoring the entire @beartype codebase behind the scenes to support dynamic generation of arbitrary type-checking tester functions. Please don't do this at home, kids. 😅

Let me know if you hit any other speed snafus, @ice-tong. Actually... don't let me know. I intend to spend next weekend doing nothing but playing every video game that exists.

@rtbs-dev
Contributor

Holy bananas, bearman

Well @leycec that's a tour-de-force, right there. Incredibly useful! Now... go get some sleep, I guess? 😅

@wesselb
Member

wesselb commented Sep 2, 2022

I intend to spend next weekend doing nothing but playing every video game that exists.

Whoa! That's a massive effort. I hope you indeed spent last weekend playing your entire catalogue of video games. 😅

First of all, thanks @leycec, @tbsexton, and @ice-tong for the in-depth discussion. It’s much appreciated.

On a high level, for Plum to function, we basically need two things:

(Thing 1) A function type_of which takes in an object o and gives back the type of o. More specifically, it should give the most strict type t such that o is an instance of t. The built-in function type does this, to some extent, but obviously it doesn’t deal with generics:

>>> type([1, 2])
list  # Desired: list[int]

As @leycec rightly so says, implementing such a function, unfortunately, is a massive can of worms, and probably the biggest challenge to tackle.

The core issue is that type_of() will fail in the general case, because there are a countably infinite number of edge cases that grow monotonically with each new CPython minor version and publication of new PEP standards. No. Seriously. To paraphrase childhood nostalgia: "It's dangerous to go alone."

If we did not need Thing 1 and only required the ability to check whether objects are of a certain type at runtime, then we could use @beartype all the way. Sadly, though, I do think multiple dispatch requires a type_of. The issues of which you speak, @leycec, which are (issue 1) the variable runtime API of type hints across Python versions and PEP standards (see below), (issue 2) postponement (see also below), and (issue 3) ambiguity (see again below), are something that we've certainly felt. Admittedly, a general solution is likely not possible.

(Thing 2) For all types returned by type_of, we require a partial order <= indicating whether one type is a subtype of another. As I understand it, @leycec, this is what TypeHint from beartype.door could provide, with the bonus of being blazingly fast. I am definitely in favour.

In addition to Thing 1 and Thing 2, we would really like the ability to define custom types which extend the type system. One such example would be something that we could call a "deferred type". By deferred types I mean types which can already be used for type hints, but which may not yet exist. Forward references, I suppose, are one such example. Another example would be TFTensor, which initially resolves to NotYetImported[“tf.Tensor”] but eventually resolves to tf.Tensor once TensorFlow is imported. We desire the ability to construct custom types that can implement such behaviour and nicely interact with (1) existing types, (2) type_of, and (3) <=.
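A deferred type along those lines could be sketched as follows (this NotYetImported class is a hypothetical stand-in, not Plum's implementation; fractions.Fraction stands in for tf.Tensor):

```python
import sys

class NotYetImported:
    """Hypothetical deferred type: refers to `module.attr` by name and
    resolves it lazily, once the module has actually been imported."""

    def __init__(self, path):
        self.module, _, self.attr = path.rpartition(".")

    def resolve(self):
        mod = sys.modules.get(self.module)
        if mod is None:
            return None  # target module not imported yet: stay unresolved
        return getattr(mod, self.attr)

deferred = NotYetImported("fractions.Fraction")
import fractions  # after this import, the deferred type resolves
assert deferred.resolve() is fractions.Fraction
```

An `__instancecheck__` built on top of resolve() would then let isinstance() work against the deferred target once it becomes available.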

On Issue 1: Variable Runtime API of Type Hints

Plum currently attempts to deal with this by converting the type to a string and parsing the string. Yes, really. The parsed string is then wrapped in versions of Union et cetera with a consistent interface, which are then used. Clearly, this is not a scalable approach. Again quoting @leycec,

ain't nobody got 108 consecutive lifetimes to throw at that shade of nonsense.

If at all possible, I would really like to get rid of this wrapping of types. (@PhilipVinc)

On Issue 2: Postponement

Forward references have been a pain to get right and are responsible for a fair chunk of the complexity of Plum. @leycec, you say that @beartype resolves PEP 563-postponed type hints. Is that something Plum could borrow, too?

'Union[MyClass, int]' is a valid postponed type hint but not a valid forward reference. Right?

You're right that "Union[MyClass, int]" won't work, though our intention is that, currently, the user uses a slightly different pattern:

from typing import Union

from plum import dispatch


class MyClass:
    @dispatch
    def f(self, x: Union["MyClass", int]):
        return x

Briefly on this topic, there was a time where I thought the following would be a good idea: whenever a user desires a forward reference, require from __future__ import annotations and simply rely on typing.get_type_hints to work the magic. I have a feeling, though, that this may be a terrible idea, because from __future__ import annotations might break stuff. @leycec, @tbsexton, @ice-tong, @PhilipVinc, just mentioning this here in case any of you would think this is either a horrible or fantastic idea.
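For what it's worth, the magic in question can be sketched with nothing but the standard library: typing.get_type_hints evaluates stringified annotations (whether written by hand or produced by from __future__ import annotations) against the function's module namespace, forward reference included.

```python
from typing import Union, get_type_hints


class MyClass:
    # A postponed (stringified) annotation, exactly as
    # `from __future__ import annotations` would produce it.
    def f(self, x: "Union[MyClass, int]"):
        return x


# The string is evaluated against the defining module's namespace,
# resolving the forward reference to MyClass itself.
hints = get_type_hints(MyClass.f)
assert hints["x"] == Union[MyClass, int]
```

The catch alluded to above is that this resolution happens eagerly per function, so any name used in an annotation must eventually be resolvable in that namespace.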

On Issue 3: Ambiguity

How do you differentiate between a fixed-length tuple (e.g., an object satisfying typing.Tuple[int, int]) and a variadic tuple (e.g., an object satisfying tuple[int, ...])?

Fortunately, I believe this should not be an issue, as long as Thing 1 and Thing 2 are soundly implemented. That is, type_of(o) should always give the most strict type t such that o is an instance of t, and all types should soundly implement the total order <=.

For the above example, type_of((2, 2)) should therefore give Tuple[int, int], and the total order should satisfy Tuple[int, int] <= tuple[int, ...].
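As a minimal, purely illustrative sketch (not Plum's actual type_of, which also handles lists, dicts, and nesting), the tuple half of that contract could look like this; deciding Tuple[int, int] <= tuple[int, ...] would then be Thing 2's job:

```python
from typing import Tuple


def type_of(obj):
    """Illustrative sketch: the most specific type of `obj`.

    A non-empty tuple gets a fixed-length Tuple[...] with one child type
    per element; anything else simply gets its class.
    """
    if isinstance(obj, tuple) and obj:
        return Tuple[tuple(type(x) for x in obj)]
    return type(obj)


assert type_of((2, 2)) == Tuple[int, int]
```

Note that this is exactly where the O(n) cost discussed in this issue comes from: producing the strictest type of a fixed-length tuple requires looking at every element.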

Concluding Remark

What I'm saying is... plum really wants to defer as much of its type hint handling to third-part(y|ies) as possible.

I wholeheartedly agree. My sense is that it might be time for a 2.* series of Plum where we reconsider all of the above and try to simplify the design of the package, like getting rid of type wrapping; and as much as possible rely on other libraries like @beartype for typing, with simplistic fallbacks in case those libraries aren't loaded.

@leycec
Copy link
Member

leycec commented Sep 3, 2022

Excellent exegesis, @wesselb! As always, because you're right about everything, I'm in agreement with everything. I'm also on the eroded precipice of releasing beartype 0.11.0 and thus a bit pressed for volunteer time. But... this is exciting, so let's do this instead.

A function type_of which takes in an object o and gives back the type of o.

Yes! You also want to do this in O(1) time with negligible constants, which is (yet again) right in @beartype's wheelhousecave. To do this in O(1) time, some combination of @beartype and/or Plum will want to pursue a similar strategy as @beartype's core O(1) type-checking algorithm. That is, rather than exhaustively introspecting all items of an arbitrary container in O(n) time to decide the nested subtype of that container, we and/or you instead introspect only a single item efficiently selected at pseudo-random in O(1) time, decide the type of only that item, and then blindly adopt that type as the nested subtype of that container by the power of casual hand-waving.
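A hedged sketch of that strategy (illustrative only, not beartype's internals): infer a list's element type from a single pseudo-randomly selected item, in O(1) time regardless of list length. For heterogeneous lists the guess can of course be wrong, which is precisely the hand-waving being described.

```python
import random
from typing import List


def type_of_list_O1(seq: list):
    """Guess `List[...]` for `seq` by inspecting one pseudo-random item."""
    if not seq:
        return List  # no evidence to narrow the element type
    # O(1): exactly one item is examined, however long the list is.
    return List[type(random.choice(seq))]


million_ints = list(range(1_000_000))
assert type_of_list_O1(million_ints) == List[int]
```

A middle-ground variant (the O(log n) strategy mentioned below) would sample a logarithmic number of items instead of one, trading a little speed for a much lower chance of a wrong guess on mostly-homogeneous data.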

Again, @beartype can help you here. You might want to allow your users themselves to select between O(1), O(log n), and O(n) type detection strategies, right? Thankfully, beartype 0.10.4 is one step ahead. @beartype already provides a pair of public APIs for expressing these user desires:

# Type-check all items of a list in O(n) time.
from beartype import beartype, BeartypeConf, BeartypeStrategy

@beartype(conf=BeartypeConf(strategy=BeartypeStrategy.On))
def get_list_size_slowly(muh_list: list[str]) -> int:
    return len(muh_list)

The same support extends to our statement-level runtime checkers as well:

# Type-check all items of a list in O(n) time.
from beartype import BeartypeConf, BeartypeStrategy
from beartype.abby import die_if_unbearable

big_list_of_strings = ['Pretend', 'this', 'is', 'alotta', 'strings.']
die_if_unbearable(big_list_of_strings, conf=BeartypeConf(strategy=BeartypeStrategy.On))

So, in synopsis:

  • beartype.BeartypeStrategy is an enumeration with members:
    • O1, expressing a desire for O(1)-style constant-time typing.
    • Ologn, expressing a desire for O(log n)-style logarithmic-time typing.
    • On, expressing a desire for O(n)-style linear-time typing.
  • beartype.BeartypeConf is a configuration dataclass enabling users to fully configure type-checking through various parameters, including:
    • strategy, an instance of beartype.BeartypeStrategy.

Onward and upward, valiant typing soldiers!

For all types returned by type_of, we require a total order <= indicating whether one type is a subtype of another.

Partial order, actually. But, yes! Exactly! A partial order is what builtin Python sets and frozen sets provide as well, I believe. The idea here is that you can only reasonably compare semantically commensurable type hints (i.e., type hints that are meaningfully related to one another). Examples include Union[str, bool, None] and str | bool, where those two type hints are commensurable; indeed, beartype.door.TypeHint(str | bool) < beartype.door.TypeHint(Union[str, bool, None]) is True as expected.

Type hints that are semantically incommensurable, however, always compare as False. Examples include str | bool and Callable[[], None]. Since there's no meaningful relationship between a union and a callable type hint:

  • beartype.door.TypeHint(Callable[[], None]) < beartype.door.TypeHint(str | bool) is False.
  • beartype.door.TypeHint(str | bool) < beartype.door.TypeHint(Callable[[], None]) is False, too! 😱

Thus, partial rather than total order.
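The analogy with builtin sets mentioned above makes this easy to see in plain Python: subset comparison on frozensets is likewise a partial order, with incomparable pairs returning False in both directions.

```python
a = frozenset({1, 2})
b = frozenset({3})

# Incommensurable elements: neither is a subset of the other...
assert not a <= b
assert not b <= a

# ...while commensurable pairs order as expected (cf. str | bool being
# narrower than Union[str, bool, None]).
assert frozenset({1}) <= a
```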

Everything above is shipping with beartype 0.11.0, to be released tonight...ish. Please Gods, let it be tonight. 😓

As I understand it, @leycec, this is what TypeHint from beartype.door could provide, with the bonus of being blazingly fast.

Yes! Blazing fast is @beartype's two middle names. Prod us if performance regressions rear their ugly mugs again and we'll promptly wack them with the Mallet of Justice.

Gah!

Depressingly, I just ran out of time. beartype 0.11.0 still isn't out and I basically answered none of @wesselb's questions.

Oh, right! PEP 563-postponed type hints. That's critical. So, beartype.door.TypeHint() currently fails to resolve PEP 563-postponed type hints for you – but absolutely will under a future release, just probably not in time for beartype 0.11.0. The following black magic will work someday; we just have to wire a few things together to make that happen:

>>> from beartype.door import TypeHint

# Currently fails, but definitely will work someday.
# I swear. Would @leycec lie? *sweat drips down forehead*
>>> resolved_typehint = TypeHint('Union[MuhClass, MuhOtherClass, bool]')
>>> resolved_typehint.hint
Union[MuhClass, MuhOtherClass, bool]  # <-- everything resolved, yo

😓

@wesselb
Copy link
Member

wesselb commented Sep 3, 2022

Thus, partial rather than total order.

Yes. 🤦 How very silly of me... You're totally right, of course! Let's pretend I said partial order all along. 😅

Yes! You also want to do this in O(1) time with negligible constants, which is (yet again) right in https://github.com/beartype's wheelhousecave.

Oh, yes—please! If @beartype could be used to implement type_of, and the user would even be able to select an O(1), O(log n), or O(n) strategy, then I think that would be the perfect outcome. :) I think that would be really cool and satisfy all use cases.

So, beartype.door.TypeHint() currently fails to resolve PEP 563-postponed type hints for you – but absolutely will under a future release, just probably not in time for beartype 0.11.0. The following black magic will work someday; we just have to wire a few things together to make that happen:

The goodness just keeps on going! Also this sounds fantastic. :)

I'm also on the eroded precipice of releasing beartype 0.11.0 and thus a bit pressed for volunteer time.

Please do prioritise your own work and don't let this issue take up too much of your time! I am very appreciative of all your time invested in the discussion thus far, @leycec, @ice-tong, @tbsexton.

leycec added a commit to beartype/beartype that referenced this issue Sep 7, 2022
This commit is the first in a commit chain publicizing our previously
private algorithm for resolving stringified type hints postponed by PEP
563 via ``from __future__ import annotations`` statements at the heads
of user-defined modules. Specifically, this commit:

* Defines a new public `beartype.peps` subpackage.
* Defines a new public `beartype.peps.resolve_pep563()` function
  extracted from our previously private `beartype._decor._pep.pep563`
  submodule. This function is intended to be "the final word" on runtime
  resolution of PEP 563-postponed type hints. May no other third-party
  package suffer as we have suffered. This commit is for you, everyone.
  And "by everyone," we of course mostly mean @wesselb of
  [Plum](github.com/wesselb/plum) fame. See also beartype/plum#53.

(*Very connective invective!*)
leycec added a commit to beartype/beartype that referenced this issue Sep 18, 2022
This minor release unleashes a major firestorm of support for **class
decoration,** **colourful exceptions,** **pyright + PyLance + VSCode,**
[PEP 484][PEP 484], [PEP 544][PEP 544], [PEP 561][PEP 561],
[PEP 563][PEP 563], [PEP 585][PEP 585], [PEP 604][PEP 604],
[PEP 612][PEP 612], and [PEP 647][PEP 647].

This minor release resolves a mammoth **29 issues** and merges **12 pull
requests.** Noteworthy changes include:

## Compatibility Improved

* **Class decoration.** The `@beartype` decorator now decorates both
  higher-level classes *and* lower-level callables (i.e., functions,
  methods), resolving feature request #152 kindly submitted by @posita
  the positively sublime. All possible edge cases are supported,
  including:
  * Classes defining methods decorated by builtin decorators: i.e.,
    * Class methods via `@classmethod`.
    * Static methods via `@staticmethod`.
    * Property getters, setters, and deleters via `@property`.
  * Arbitrarily deeply nested (i.e., inner) classes.
  * Arbitrarily deeply nested (i.e., inner) classes whose type hints are
    postponed under [PEP 563][PEP 563].
  Since this was surprisingly trivial, @leycec
  probably should have done this a few years ago. He didn't. This is why
  he laments into his oatmeal in late 2022.
* **[PEP 484][PEP 484]- and [PEP 585][PEP 585]-compliant nested
  generics.** @beartype now supports arbitrarily complex [PEP 484][PEP
  484]- and [PEP 585][PEP 585]-compliant inheritance trees subclassing
  non-trivial combinations of the `typing.Generic` superclass and other
  `typing` pseudo-superclasses, resolving issue #140 kindly submitted by
  @langfield (William Blake – yes, *that* William Blake). Notably,
  this release extricated our transitive visitation of the tree of all
  pseudo-superclasses of any PEP 484- and 585-compliant generic type
  hint (*...don't ask*) from its prior hidden sacred cave deep within
  the private `beartype._decor._code._pep._pephint` submodule into a new
  reusable `iter_hint_pep484585_generic_bases_unerased_tree()`
  generator, which is now believed to be the most fully-compliant
  algorithm for traversing generic inheritance trees at runtime. This
  cleanly resolved all lingering issues surrounding generics,
  dramatically reduced the likelihood of more issues surrounding
  generics, and streamlined the resolution of any more issues
  surrounding generics should they arise... *which they won't.*
  Generics: we have resoundingly beaten you. Stay down, please.
* **[PEP 544][PEP 544] compatibility.** @beartype now supports
  arbitrarily complex [PEP 544][PEP 544]-compliant inheritance trees
  subclassing non-trivial combinations of the `typing.Protocol` +
  `abc.ABC` superclasses, resolving #117 kindly submitted by
  too-entertaining pun master @twoertwein (Torsten Wörtwein).
  Notably, `@beartype` now:
  * Correctly detects non-protocols as non-protocols. Previously,
    @beartype erroneously detected a subset of non-protocols as
    PEP 544-compliant protocols. It's best not to ask why.
  * Ignores both the unsubscripted `beartype.typing.Protocol` superclass
    *and* parametrizations of that superclass by one or more type
    variables (e.g., `beartype.typing.Protocol[typing.TypeVar('T')]`) as
    semantically meaningless in accordance with similar treatment of the
    `typing.Protocol` superclass.
  * Permits caller-defined abstract protocols subclassing our caching
    `beartype.typing.Protocol` superclass to themselves be subclassed by
    one or more concrete subclasses. Previously, attempting to do so
    would raise non-human-readable exceptions from the `typing` module;
    now, doing so behaves as expected.
  * Relaxed our prior bad assumption that the second-to-last superclass
    of all generics – and thus protocols – is the `typing.Generic`
    superclass. That assumption *only* holds for standard generics and
    protocols; non-standard protocols subclassing non-`typing`
    superclasses (e.g., the `abc.ABC` superclass) *after* the last
    `typing` superclass in their method resolution order (MRO)
    flagrantly violate this assumption. Well, that's fine. We're fine
    with that. What's not fine about that? **Fine. This is fine.**
  * Avoids a circular import dependency. Previously, our caching
    `beartype.typing.Protocol` superclass leveraged the general-purpose
    `@beartype._util.cache.utilcachecall.callable_cached decorator` to
    memoize its subscription; however, since that decorator transitively
    imports from the `beartype.typing` subpackage, doing so induced a
    circular import dependency. To circumvent this, a new
    `@beartype.typing._typingcache.callable_cached_minimal` decorator
    implementing only the minimal subset of the full
    `@beartype._util.cache.utilcachecall.callable_cached` decorator has
    been defined; the `beartype.typing` subpackage now safely defers to
    this minimal variant for all its caching needs.
* **[PEP 563][PEP 563] compatibility.** @beartype now resolves [PEP
  563][PEP 563]-postponed **self-referential type hints** (i.e., type
  hints circularly referring to the class currently being decorated).
  **Caveat:** this support requires that external callers decorate the
  *class* being referred to (rather than the *method* doing the
  referring) by the `@beartype` decorator. For this and similar reasons,
  users are advised to begin refactoring their object-oriented codebases
  to decorate their *classes* rather than *methods* with `@beartype`.
* **[PEP 612][PEP 612] partial shallow compatibility.** @beartype now
  shallowly detects [PEP 612][PEP 612]-compliant `typing.ParamSpec`
  objects by internally associating such objects with our
  `beartype._data.hint.pep.sign.datapepsigns.HintSignParamSpec`
  singleton, enabling @beartype to portably introspect
  `Callable[typing.ParamSpec(...), ...]` type hints.
* **Static type-checking.** @beartype is now substantially more
  compliant with static type-checkers, including:
  * **Microsoft [pyright](https://github.com/microsoft/pyright) +
    [PyLance](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance)
    + [VSCode](https://visualstudio.com).** @beartype now officially
    supports pyright, Microsoft's in-house static type-checker oddly
    implemented in pure-TypeScript, <sup>*gulp*</sup> resolving issues
    #126 and #127 kindly submitted by fellow Zelda aficionado @rbroderi.
    Specifically, this release resolves several hundred false warnings
    and errors issued by pyright against the @beartype codebase. It is,
    indeed, dangerous to go alone – but we did it anyway.
  * **mypy `beartype.typing.Protocol` compatibility.** The
    @beartype-specific `beartype.typing.Protocol` superclass implementing
    [PEP 544][PEP 544]-compliant fast caching protocols is now fully
    compatible with mypy, Python's official static type-checker.
    Specifically, `beartype.typing.Protocol` now circumvents:
    * python/mypy#11013 by explicitly annotating the type of its
      `__slots__` as `Any`.
    * python/mypy#9282 by explicitly setting the `typing.TypeVar()`
      `bounds` parameter to this superclass.
* **[PEP 647][PEP 647] compatibility.** @beartype now supports
  arbitrarily complex **[type
  narrowing](https://mypy.readthedocs.io/en/latest/type_narrowing.html)**
  in [PEP 647][PEP 647]-compliant static type-checkers (e.g., mypy,
  pyright), resolving issues #164 and #165 kindly submitted in parallel
  by foxy machine learning gurus @justinchuby (Justin Chuby) and @rsokl
  (Ryan Soklaski). Thanks to their earnest dedication, @beartype is now
  believed to be the most fully complete type narrower. Specifically,
  the return of both the `beartype.door.is_bearable()` function and
  corresponding `beartype.door.TypeHint.is_bearable()` method are now
  annotated by the [PEP 647][PEP 647]-compliant `typing.TypeGuard[...]`
  type hint under both Python ≥ 3.10 *and* Python < 3.10 when the
  optional third-party `typing_extensions` dependency is installed.
  Doing so substantially reduces false positives from static type
  checkers on downstream codebases deferring to these callables.
  Thanks so much for improving @beartype so much, @justinchuby and
  @rsokl!
* **`@{classmethod,staticmethod,property}` chaining.** The `@beartype`
  decorator now implicitly supports callables decorated by both
  `@beartype` *and* one of the builtin method decorators `@classmethod`,
  `@staticmethod`, or `@property` regardless of decoration order,
  resolving issue #80 kindly requested by @qiujiangkun (AKA, Type
  Genius-kun). Previously, `@beartype` explicitly raised an exception
  when ordered *after* one of those builtin method decorators. This
  release relaxes this constraint, enabling callers to list `@beartype`
  either before or after one of those builtin method decorators.
* **`beartype.vale.Is[...]` integration.** Functional validators (i.e.,
  `beartype.vale.Is[...]`) now integrate more cleanly with the remainder
  of the Python ecosystem, including:
  * **IPython.** Functional validators localized to a sufficiently
    intelligent REPL (e.g., IPython) that caches locally defined
    callables to the standard `linecache` module now raise
    human-readable errors on type-checking, resolving issue #123 kindly
    submitted by typing brain-child @braniii. Relatedly, @beartype now
    permissively accepts both physical on-disk files and dynamic
    in-memory fake files cached with `linecache` as the files defining
    an arbitrary callable.
  * **NumPy,** which publishes various **bool-like tester functions**
    (i.e., functions returning a non-`bool` object whose class defines
    at least one of the `__bool__()` or `__len__()` dunder methods and
    is thus implicitly convertible into a `bool`). Functional validators
    now support subscription by these functions, resolving issue #153
    kindly submitted by molecular luminary @braniii (Daniel Nagel).
    Specifically, @beartype now unconditionally wraps *all* tester
    callables subscripting (indexing) `beartype.vale.Is` with a new
    private `_is_valid_bool()` closure that (in order):
    1. Detects when those tester callables return bool-like objects.
    2. Coerces those objects into corresponding `bool` values.
    3. Returns those `bool` values instead.
* **Moar fake builtin types.** @beartype now detects all known **fake
  builtin types** (i.e., C-based types falsely advertising themselves as
  being builtin and thus *not* requiring explicit importation), succinctly
  resolving issue #158 kindly submitted by the decorous typing gentleman
  @langfield. Specifically, @beartype now recognizes instances of all of
  the following as fake builtin types:
  * `beartype.cave.AsyncCoroutineCType`.
  * `beartype.cave.AsyncGeneratorCType`.
  * `beartype.cave.CallableCodeObjectType`.
  * `beartype.cave.CallableFrameType`.
  * `beartype.cave.ClassDictType`.
  * `beartype.cave.ClassType`.
  * `beartype.cave.ClosureVarCellType`.
  * `beartype.cave.EllipsisType`.
  * `beartype.cave.ExceptionTracebackType`.
  * `beartype.cave.FunctionType`.
  * `beartype.cave.FunctionOrMethodCType`.
  * `beartype.cave.GeneratorCType`.
  * `beartype.cave.MethodBoundInstanceDunderCType`.
  * `beartype.cave.MethodBoundInstanceOrClassType`.
  * `beartype.cave.MethodDecoratorBuiltinTypes`.
  * `beartype.cave.MethodUnboundClassCType`.
  * `beartype.cave.MethodUnboundInstanceDunderCType`.
  * `beartype.cave.MethodUnboundInstanceNondunderCType`.
  * `beartype.cave.MethodUnboundPropertyNontrivialCExtensionType`.
  * `beartype.cave.MethodUnboundPropertyTrivialCExtensionType`.

## Compatibility Broken

* **Python 3.6.x support dropped.** This release unilaterally drops
  support for the Python 3.6.x series, which somnambulantly collided
  with its End-of-Life (EOL) a year ago and now constitutes a compelling
  security risk. Doing so substantially streamlines the codebase, whose
  support for Python 3.6.x required an unmaintainable writhing nest of
  wicked corner cases. We all now breathe a sigh of contentment in the
  temporary stillness of morning.
* **`beartype.cave` deprecation removals.** This release removes all
  deprecated third-party attributes from the `beartype.cave` submodule.
  The continued existence of these attributes substantially increased
  the cost of importing *anything* from our mostly undocumented
  `beartype.cave` submodule, rendering that submodule even less useful
  than it already is. Specifically, this release removes these
  previously deprecated attributes:
  * `beartype.cave.NumpyArrayType`.
  * `beartype.cave.NumpyScalarType`.
  * `beartype.cave.SequenceOrNumpyArrayTypes`.
  * `beartype.cave.SequenceMutableOrNumpyArrayTypes`.
  * `beartype.cave.SetuptoolsVersionTypes`.
  * `beartype.cave.VersionComparableTypes`.
  * `beartype.cave.VersionTypes`.

## Exceptions Improved

* **Colour** – the sensation formerly known as "color." @beartype now
  emits colourized type-checking violations (i.e.,
  `beartype.roar.BeartypeCallHintViolation` exceptions) raised by both
  `@beartype`-decorated callables *and* statement-level type-checkers
  (e.g., `beartype.door.die_if_unbearable()`,
  `beartype.door.TypeHint.die_if_unbearable()`), resolving issue #161
  kindly submitted by foxy machine learning expert @justinchuby (Justin
  Chu). When standard output is attached to an interactive terminal
  (TTY), ANSI-flavoured colours now syntactically highlight various
  substrings of those violations for improved visibility, readability,
  and debuggability. Since *all* actively maintained versions of Windows
  (i.e., Windows ≥ 10) now widely support ANSI escape sequences across
  both Microsoft-managed terminals (e.g., Windows Terminal) and
  Microsoft-managed Integrated Development Environments (IDEs) (e.g.,
  VSCode), this support extends to Windows as well. The bad old days of
  non-standard behaviour are behind us all. Thanks *so* much to
  @justinchuby for his immense contribution to the righteous cause of
  eye-pleasing user experience (UX)!
* **Types disambiguated.** @beartype now explicitly disambiguates the
  types of parameters and returns that violate type-checking in
  exception messages raised by the `@beartype` decorator, resolving
  issue #124 kindly submitted by typing brain-child @braniii. Thus was
  justice restored to the QAverse.
* **Stack frame squelched.** @beartype now intentionally squelches
  (i.e., hides) the ignorable stack frame encapsulating the call to our
  private `beartype._decor._error.errormain.get_beartype_violation()`
  getter from the parent type-checking wrapper function generated by the
  :mod:`beartype.beartype` decorator, resolving issue #140 kindly
  submitted by @langfield (William Blake – yes, *that* William Blake).
  That stack frame only needlessly complicated visual inspection of
  type-checking violations in tracebacks – especially from testing
  frameworks like :mod:`pytest` that recapitulate the full definition of
  the `get_beartype_violation()` getter (including verbose docstring) in
  those tracebacks. Specifically, this release:
  * Renamed the poorly named `raise_pep_call_exception()` function to
    `get_beartype_violation()` for clarity.
  * Refactored `get_beartype_violation()` to return rather than raise
    `BeartypeCallHintViolation` exceptions (while still raising all
    other types of unexpected exceptions for robustness).
  * Refactored type-checking wrapper functions to directly raise the
    exception returned by calling `get_beartype_violation()`.
* **``None`` type.** The type of the ``None`` singleton is no longer
  erroneously labelled as a PEP 544-compliant protocol in type-checking
  violations. Let's pretend that never happened.
* **`beartype.abby.die_if_unbearable()` violations.** The
  `beartype.abby.die_if_unbearable()` validator function no longer
  raises non-human-readable exception messages prefixed by the
  unexpected substring `"@beartyped
  beartype.abby._abbytest._get_type_checker._die_if_unbearable()
  return"`. "Surely that never happened, @beartype!"

## Features Added

* **`beartype.door`.** @beartype now provides a new public framework for
  introspecting, sorting, and type-checking type hints at runtime in
  constant time. N-n-now... hear me out here. @leycec came up with a
  ludicrous acronym and we're going to have to learn to live with it:
  the **D**ecidedly **O**bject-**O**rientedly **R**ecursive (DOOR) API.
  Or, `beartype.door` for short. Open the door to a whole new
  type-hinting world, everyone. `beartype.door` enables type hint
  arithmetic via an object-oriented type hint class hierarchy
  encapsulating the crude non-object-oriented type hint declarative API
  standardized by the :mod:`typing` module, resolving issues #133 and
  #138 kindly submitted by Harvard microscopist and general genius
  @tlambert03. The new `beartype.door` subpackage defines a public:
  * `TypeHint({type_hint})` superclass, enabling rich comparisons
    between pairs of arbitrary type hints. Altogether, this class
    implements a partial ordering over the countably infinite set of all
    type hints. Pedagogical excitement ensues. Instances of this class
    efficiently satisfy both the `collections.abc.Sequence` and
    `collections.abc.FrozenSet` abstract base classes (ABC) and thus
    behave just like tuples and frozen sets over child type hints.
    Public attributes defined by this class include:
    * A pair of `die_if_unbearable()` and `is_bearable()` runtime
      type-checking methods, analogous in behaviour to the existing
      `beartype.abby.die_if_unbearable()` and
      `beartype.abby.is_bearable()` runtime type-checking functions.
    * `TypeHint.is_bearable()`, currently implemented in terms of the
      procedural `beartype.abby.is_bearable()` tester.
    * An `is_ignorable` property evaluating to `True` only if the
      current type hint is semantically ignorable (e.g., `object`,
      `typing.Any`). There exist a countably infinite number of
      semantically ignorable type hints. The more you know, the less you
      want to read this changeset.
    * The equality comparison operator (e.g., `==`), enabling type hints
      to be compared according to semantic equivalence.
    * Rich comparison operators (e.g., `<=`, `>`), enabling type hints
      to be compared and sorted according to semantic narrowing.
    * A sane `__bool__()` dunder method, enabling type hint wrappers to
      be trivially evaluated as booleans according to the child type
      hints subscripting the wrapped type hints.
    * A sane `__len__()` dunder method, enabling type hint wrappers to
      be trivially sized according to the child type hints subscripting
      the wrapped type hints.
    * A sane `__contains__()` dunder method, enabling type hint wrappers
      to be tested for child type hint membership – just like builtin
      sets, frozen sets, and dictionaries.
    * A sane `__getindex__()` dunder method, enabling type hint wrappers
      to be subscripted by both positive and negative indices as well as
      slices of such indices – just like builtin tuples.
  * `beartype.door.AnnotatedTypeHint` subclass.
  * `beartype.door.CallableTypeHint` subclass.
  * `beartype.door.LiteralTypeHint` subclass.
  * `beartype.door.NewTypeTypeHint` subclass.
  * `beartype.door.TupleTypeHint` subclass.
  * `beartype.door.TypeVarTypeHint` subclass.
  * `beartype.door.UnionTypeHint` subclass.
  * `is_subtype({type_hint_a}, {type_hint_b})` function, enabling
    @beartype users to decide whether any type hint is a **subtype**
    (i.e., narrower type hint) of any other type hint.
  * `beartype.roar.BeartypeDoorNonpepException` type, raised when the
    `beartype.door.TypeHint` constructor is passed an object that is
    *not* a PEP-compliant type hint currently supported by the DOOR API.
  Thanks so much to @tlambert03 for his phenomenal work here. He ran
  GitHub's PR gauntlet so that you did not have to. Praise be to him.
  Some people are the living embodiment of quality. @tlambert03 is one
  such people.
* **`beartype.peps`.** @beartype now publicizes runtime support for
  `typing`-centric Python Enhancement Proposals (PEPs) that currently
  lack official runtime support via a new public subpackage:
  `beartype.peps`. Notably, @beartype now provides:
  * A new public `beartype.peps.resolve_pep563()` function resolving
    [PEP 563][PEP 563]-postponed type hints on behalf of third-party
    Python packages. This function is intended to be "the final word" on
    runtime resolution of [PEP 563][PEP 563]. May no other third-party
    package suffer as we have suffered. This commit is for you,
    everyone. And "by everyone," we of course mostly mean @wesselb of
    [Plum](github.com/wesselb/plum) fame. See also beartype/plum#53.
* **`beartype.vale.Is*[...] {&,|}` short-circuiting.** `&`- and
  `|`-chained beartype validators now explicitly short-circuit when
  raising human-readable exceptions from type-checking violations
  against those validators, resolving issue #125 kindly submitted by
  typing brain-child @braniii.

## Features Optimized

* **`beartype.abby.is_bearable()` when returning `False`.** Previously,
  the public `beartype.abby.is_bearable()` runtime type-checker behaved
  reasonably optimally when the passed object satisfied the passed type
  hint but *extremely* suboptimally when that object violated that hint;
  this was due to our current naive implementation of that tester using
  the standard Easier to Ask for Permission than Forgiveness (EAFP)
  approach. This release fundamentally refactored
  `beartype.abby.is_bearable()` in terms of our new private
  `beartype._check.checkmake.make_func_tester()` type-checking tester
  function factory function. Ad-hoc profiling shows a speedup on
  the order of eight orders of magnitude – the single most intense
  optimization @beartype has ever brought to bear (*heh*). Our core code
  generation API now transparently generates both:
  * **Runtime type-checking testers** (i.e., functions merely returning
    ``False`` on type-checking violations).
  * **Runtime type-checking validators** (i.e., functions raising
    exceptions on type-checking violations).
* **[PEP 604][PEP 604]-compliant new unions** (e.g., `int | str |
  None`). Since these unions are **non-self-caching type hints** (i.e.,
  hints that do *not* implicitly cache themselves to reduce space and
  time consumption), @beartype now efficiently coerces these unions into
  singletons in the same manner as [PEP 585][PEP 585]-compliant type
  hints – which are similarly non-self-caching.

## Features Deprecated

* **`beartype.abby` → `beartype.door`.** This release officially
  deprecates the poorly named `beartype.abby` subpackage in favour of
  the sorta less poorly named `beartype.door` subpackage, whose name
  actually means something – even if that something is a punny acronym
  no one will ever find funny. Specifically:
  * `beartype.abby.die_if_unbearable()` has been moved to
    `beartype.door.die_if_unbearable()`.
  * `beartype.abby.is_bearable()` has been moved to
    `beartype.door.is_bearable()`.
  To preserve backward compatibility, the `beartype.abby` subpackage
  continues to dynamically exist (and thus be importable from) – albeit
  as a deprecated alias of the `beartype.door` subpackage.

## Deprecations Resolved

* **Setuptools licensing.** This release resolves a mostly negligible
  `setuptools` deprecation warning concerning the deprecated
  `license_file` setting in the top-level `setup.cfg` file. *Next!*

## Tests Improved

* **[PEP 544][PEP 544] compatibility.** All [PEP 544][PEP 544]-specific
  test type hints have been generalized to apply to both the non-caching
  `typing.Protocol` superclass *and* our caching
  `beartype.typing.Protocol` superclass.
* **[PEP 561][PEP 561] compatibility via pyright.** Our test suite now
  enforces static type-checking with `pyright`. Notably:
  * A new `test_pep561_pyright` functional test statically type-checks
    the @beartype codebase against the external `pyright` command in the
    current `${PATH}` (if available) specific to the version of the
    active Python interpreter currently being tested. For personal
    sanity, this test is currently ignored on remote continuous
    integration (CI) workflows. Let this shrieking demon finally die!
  * The private `beartype_test.util.cmd.pytcmdrun` submodule underlying
    our cross-platform portable forking of testing subprocesses now
    transparently supports vanilla Windows shells (e.g., `CMD.exe`,
    PowerShell).
* **Tarball compatibility.** `beartype` may now be fully tested from
  non-`git` repositories, including source tarballs containing the
  `beartype_test` package. Previously, three functional tests making
  inappropriate assumptions about the existence of a top-level `.git/`
  directory failed when exercised from a source tarball.
* **Sphinx documentation.** Our test suite now exercises that our
  documentation successfully builds with Sphinx via a new
  `test_sphinx_build()` functional test. This was surprisingly
  non-trivial – thanks to the `pytest`-specific `sphinx.testing`
  subpackage being mostly undocumented, behaving non-orthogonally, and
  suffering a host of unresolved issues that required we monkey-patch
  the core `pathlib.Path` class. Insanity, thy name is Sphinx.
* **GitHub Actions dependencies bumped.** This release bumps our GitHub
  Actions-based continuous integration (CI) workflows to both the
  recently released `checkout@v3` and `setup-python@v3` actions,
  inspired by a pair of sadly closed PRs by @RotekHandelsGmbH CTO
  @bitranox (Robert Nowotny). Thanks so much for the great idea,
  @bitranox!
* **`beartype.door` conformance.** A new smoke test guarantees
  conformance between our DOOR API and abstract base classes (ABCs)
  published by the standard `typing` module.
* **python/mypy#13627 circumvention.** This release pins our GitHub
  Actions-based CI workflow to Python 3.10.6 rather than 3.10.7,
  resolving a mypy-specific complaint inducing spurious test failures.

## Documentation Improved

* **[`beartype.abby`
  documented](https://github.com/beartype/beartype#beartype-at-any-time-api).**
  The new "Beartype At Any Time API" subsection of our front-facing
  `README.rst` file now documents our public `beartype.abby` API,
  resolving issue #139 kindly submitted by @gelatinouscube42 (i.e., the
  user whose username is the answer to the question: "What is the
  meaning of collagen sustainably harvested from animal body parts?").
* **[GitHub Sponsors activated](https://github.com/sponsors/leycec).**
  @beartype is now proudly financially supported by **GitHub Sponsors.**
  Specifically, this release:
  * Defines a new GitHub-specific funding configuration (i.e.,
    `.github/FUNDING.yml`).
  * Injects a hopefully non-intrusive advertising template
    <sup>*gulp*</sup> at the head of our `README.rst` documentation.
* **Sphinx configuration sanitized.** As the first tentative step
  towards chain refactoring our documentation from its current
  monolithic home in our top-level `README.rst` file to its eventual
  modular home at [ReadTheDocs (RTD)](https://beartype.readthedocs.io),
  en-route to resolving issue #8 (!) kindly submitted a literal lifetime
  ago by visionary computer vision expert and long-standing phenomenal
  Finn @felix-hilden (Felix Hildén):
  * Our core Sphinx configuration has been resurrected from its early
    grave – which now actually builds without raising errors. Is
    this an accomplishment? In 2022, mere survival is an accomplishment!
    So... *yes.* Significant improvements include:
    * Activation and configuration of the effectively mandatory
      `autosectionlabels` builtin Sphinx extension.
  * Our `doc/source/404.rst` file has been temporarily moved aside,
    resolving a non-fatal warning pertaining to that file. Look, we're
    not here to actually solve deep issues; we're here to just get
    documentation building, which it's not. Sphinx, you have much to
    answer for.
  * Our top-level `sphinx` entry point now:
    * Temporarily disables Sphinx's nit-picky mode (i.e., the `-n`
      option previously passed to `sphinx-build`) due to Sphinx's
      `autodoc` extension locally failing to generate working
      references.
    * Unconditionally disables Sphinx caching by forcing *all* target
      documentation files to be rebuilt regardless of whether their
      underlying source files have since been modified or not, obviating
      spurious build issues.

  [PEP 484]: https://www.python.org/dev/peps/pep-0484/
  [PEP 544]: https://www.python.org/dev/peps/pep-0544/
  [PEP 561]: https://www.python.org/dev/peps/pep-0561/
  [PEP 563]: https://www.python.org/dev/peps/pep-0563/
  [PEP 585]: https://www.python.org/dev/peps/pep-0585/
  [PEP 604]: https://www.python.org/dev/peps/pep-0604/
  [PEP 612]: https://www.python.org/dev/peps/pep-0612/
  [PEP 647]: https://www.python.org/dev/peps/pep-0647/

(*Impossible journey on an implacable placard-studded gurney!*)
@leycec
Member

leycec commented Sep 21, 2022

💥

Just did things. The new Plum-friendly beartype.door and beartype.peps subpackages have gone public with our hot-off-the-presses release of @beartype 0.11.0.

beartype.door: Never Leave typing Without It

The beartype.door.TypeHint class now provides a partial ordering over type hints – amongst other ludicrous things like Pythonic indexing, iteration, and membership testing of child type hints:

# This is DOOR. It's a Pythonic API providing an object-oriented interface
# to low-level type hints that basically have no interface whatsoever.
>>> from beartype.door import TypeHint
>>> union_hint = TypeHint(int | str | None)
>>> print(union_hint)
TypeHint(int | str | None)

# DOOR hints have Pythonic classes -- unlike normal type hints.
>>> type(union_hint)
beartype.door.UnionTypeHint  # <-- what madness is this?

# DOOR hints can be classified Pythonically -- unlike normal type hints.
>>> from beartype.door import UnionTypeHint
>>> isinstance(union_hint, UnionTypeHint)  # <-- *shocked face*
True

# DOOR hints can be type-checked Pythonically -- unlike normal type hints.
>>> union_hint.is_bearable('The unbearable lightness of type-checking.')
True
>>> union_hint.die_if_unbearable(b'The @beartype that cannot be named.')
beartype.roar.BeartypeDoorHintViolation: Object b'The @beartype that cannot
be named.' violates type hint int | str | None, as bytes b'The @beartype
that cannot be named.' not str, <class "builtins.NoneType">, or int.

# DOOR hints can be iterated Pythonically -- unlike normal type hints.
>>> for child_hint in union_hint: print(child_hint)
TypeHint(<class 'int'>)
TypeHint(<class 'str'>)
TypeHint(<class 'NoneType'>)

# DOOR hints can be indexed Pythonically -- unlike normal type hints.
>>> union_hint[0]
TypeHint(<class 'int'>)
>>> union_hint[-1]
TypeHint(<class 'NoneType'>)

# DOOR hints can be sliced Pythonically -- unlike normal type hints.
>>> union_hint[0:2]
(TypeHint(<class 'int'>), TypeHint(<class 'str'>))

# DOOR hints support "in" Pythonically -- unlike normal type hints.
>>> TypeHint(int) in union_hint  # <-- it's all true.
True
>>> TypeHint(bool) in union_hint  # <-- believe it.
False

# DOOR hints are sized Pythonically -- unlike normal type hints.
>>> len(union_hint)  # <-- woah.
3

# DOOR hints reduce to booleans Pythonically -- unlike normal type hints.
>>> if union_hint: print('This type hint has children.')
This type hint has children.
>>> if not TypeHint(tuple[()]): print('But this other type hint is empty.')
But this other type hint is empty.

# DOOR hints support equality Pythonically -- unlike normal type hints.
>>> from typing import Union
>>> union_hint == TypeHint(Union[int, str, None])
True  # <-- this is madness.

# DOOR hints support comparisons Pythonically -- unlike normal type hints.
>>> union_hint <= TypeHint(int | str | bool | None)
True  # <-- madness continues.

# DOOR hints are semantically self-caching.
>>> TypeHint(int | str | bool | None) is TypeHint(None | bool | str | int)
True  # <-- blowing minds over here.

beartype.peps: The Final Word on PEP 563

The beartype.peps subpackage is considerably simpler. Currently, it just provides a resolve_pep563() function fully resolving PEP 563-postponed type hints. This is probably the closest Python will ever get to a reference implementation of runtime PEP 563 support:

# In "horrorshow.py":
from __future__ import annotations  # <-- people should really stop doing this

def sad_function_is_sad() -> str | bytes | None:
    if 'sad':
        return "You can't spell 'sadness' without madness (excluding the 'm')."
    else:
        return b'This cannot be! But it is! An unreachable condition erupts.'

print('Uh oh. We got postponed type hints here:')
print(repr(sad_function_is_sad.__annotations__['return']))
print()

# Your time is over, PEP 563.
from beartype.peps import resolve_pep563

# Make the bad postponed type hints go away, Bear Daddy.
resolve_pep563(sad_function_is_sad)

print('Suck it, PEP 563. Only real type hints need apply:')
print(repr(sad_function_is_sad.__annotations__['return']))

Now run that, because you trust @leycec implicitly. You do trust @leycec implicitly, don't you? 🥲

$ python3.10 horrorshow.py
Uh oh. We got postponed type hints here:
'str | bytes | None'

Suck it, PEP 563. Only real type hints need apply:
str | bytes | None

type_of()

Oh, Gods. Here we go, folks.

I've given a bit of thought to how beartype might implement something resembling Plum's type_of(). I'm convinced it can be done in O(k) time for k the height of a type hint tree (e.g., k == 3 for Union[str, list[int], tuple[set[bool], ...]]). Since almost all type hints of interest to anyone that matters are shallow in height, that's basically O(1) time. Recursion is dangerous and slow in Python, so we implement it using a single loop.

Relatedly: this is actually how @beartype dynamically generates Python code that runtime type-checks arbitrarily complex type hints. Yup. It's all a single loop. 😅 😰 🥵

So. A @beartype-style type_of() implementation guarantees O(k) time by only randomly sampling a single item at each nesting depth of a container. For example, given a container satisfying the type hint list[list[str]] (i.e., a list of lists of strings), a @beartype-style type_of() would:

  1. Randomly sample a single item of the outermost list.
  2. Detect that item to be a nested sublist.
  3. Randomly sample a single item of that nested sublist.
  4. Detect that item to be a string.
  5. Return list[list[str]].
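
The five steps above can be sketched in a few lines of Python. This is a toy for lists only, and `type_of_sampled` is a hypothetical name, not beartype or Plum API:

```python
import random

def type_of_sampled(obj, max_depth=8):
    # Inspect ONE randomly chosen item per nesting level, so the cost is
    # O(k) in the nesting height k rather than O(n) in the element count.
    if not isinstance(obj, list) or max_depth == 0:
        return type(obj).__name__
    if not obj:
        return "list"  # empty list: the element type is unknowable
    item = random.choice(obj)                                # steps 1 and 3
    return f"list[{type_of_sampled(item, max_depth - 1)}]"   # steps 2, 4, 5

print(type_of_sampled([["a", "b"], ["c"]]))  # -> list[list[str]]
```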

Easy 'nuff, right? The devil's in the details. Most containers don't support random access; they only support sequential access (e.g., dictionaries, sets), which means you can only efficiently access the first item. Moreover, many iterables can't be safely iterated idempotently at all, because the mere act of iteration fundamentally changes the iterable (e.g., generators, coroutines, counters). The theoretical complexities spiral into real-world hardships pretty fast.

Moreover, what about infinitely nested containers? 😲

plaid_speed = ["We've now hit Ludicrous Speed.",]
plaid_speed.append(plaid_speed)

Nobody ever actually uses infinitely nested containers in production code, because we are all sane here – but malicious Bad Dudes will inevitably pass one of these monstrosities to something that transitively calls type_of(). And then Python blows smelly chunks everywhere.

Okay. Technically, one can and should protect against that. Plum is probably doing something like this already. You just internally cache the id() of each previously visited item in a local set named seen_em; then, before delving any deeper into an item, you just test whether you've already done that before and avoid doing so if you have. Right? Potential catastrophe averted.
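
The `seen_em` guard described above might look like this (a lists-only sketch with hypothetical names; it samples the *last* item so the demo below actually revisits the cycle):

```python
def type_of_guarded(obj, seen_em=None):
    # Cache the id() of every container already visited; refuse to descend
    # into one twice, so self-referential lists terminate instead of
    # recursing forever.
    seen_em = set() if seen_em is None else seen_em
    if not isinstance(obj, list):
        return type(obj).__name__
    if not obj or id(obj) in seen_em:
        return "list"  # empty, or already seen: stop descending here
    seen_em.add(id(obj))
    # Sample the last item (where the demo's cycle lives).
    return f"list[{type_of_guarded(obj[-1], seen_em)}]"

plaid_speed = ["We've now hit Ludicrous Speed."]
plaid_speed.append(plaid_speed)
print(type_of_guarded(plaid_speed))  # terminates with "list[list]"
```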

Sounds like blood, thunder, and fun. But will @leycec ever find volunteer open-source time to actually do that when he still hasn't authored sound ReadTheDocs-hosted documentation for @beartype? Tune in next exciting GitHub monologue to find out all this... and more. 🐻

@rtbs-dev
Contributor

@leycec @wesselb So --- and this might be totally naive --- do we need a type_of function to operate in the first place? Can we just... solve the inverse problem?

Hear me out:

Dispatch means we know what we want

Ostensibly, any user of multiple dispatch here isn't actually trying to dispatch on every extant type possible, but rather they register allowable types and provide a mechanism for extensibility. This means that, at runtime, we're not actually needing to know "what is the type_of my input", but rather, "Is my input a registered type, and if so, which one is best?"

You might be thinking (as I did, previously) "hey wait! I have to know what the type is in order know if it's registered...right?" And that's kind of correct. Except that it happens we have a "magic oracle" a la is_bearable, such that we can ask "what about this one?" and each time get a YES or a NO.

So, yeah, for a given input, if the "nice" type information isn't known at runtime (e.g. complex, nested iterables, and anything that type() gives us back garbage for) we can just go down the list in approximately O(n) worst-case time... it's a sort of type hash-table.

But we can do better, right?

So, this is a lot like the good old nearest-neighbor problem, right? (no? Not following? alright, hang on...)

If I have a bunch of data inputs and outputs, and some new data comes along, I want to guess what the output should be. If I were DATA GOD, I would have a magic, domain-driven function that spat out the one-true answer every time. In our situation, this is the same as having our type_of() function, fully operational (...battle-station?).

But we aren't usually DATA GOD, so we cheat. We look up in our table of known inputs and see if any of them are "close". In this case, this is our is_bearable(). Ok, so far so good. Except, O(n) is the bane of nearest neighbors' existence, since generally we want to do this a lot.... like an O(n^2) lot. So what do we do?

By imposing a hierarchy onto the space of data (e.g. different sized binning of the Cartesian coordinates) we can ask fewer, simpler questions... is my new point in the left or right half of my data? Ok, now in the left or right of that? And that?

This is a (really awful) way of describing a binary search tree (more like a kd-Tree). But anyways, we do have hierarchy information, no? there's a partial order on types, and iirc containers can be treated as a tree of sorts...

Let's make a prefix tree of some kind, that includes everything beartype already knows about, plus the set of registered TypeHint stuff that the dispatcher has actually been given. Then we treat the dispatch problem as a tree search problem.

what this looks like

brief sketch

  1. beartype.door (or plum, idk) has a search tree of known types, based on their partial order. Generic types (a la union, protocols, etc) would just recursively use the same search tree, just for the type parameter instead of the type itself.
  2. programmer using plum registers types to the dispatcher... these can be either already in the search tree, or added explicitly as new type aliases or something. All of the registered types are effectively just adding a bool (registered=True) as their node value. This step is where the new door api is invaluable... we can insert the dispatched types into their correct spot in the tree right away.
  3. runtime users put a value into the dispatched function
    • we check the "top-level" (i.e. roots) of the various trees... bool, int, str, iterable, etc. using is_bearable. If something passes, continue down that branch.
    • do any children pass is_bearable for our input? if yes, take that branch
    • continue until a fail, or the end of the tree.
    • we now have an approximation of the nearest type_of, assuming it's either a type we already know or have been told about in the dispatcher
    • is it registered? then that's the dispatch!

For your example of infinite lists, then either

  1. we don't like that (we want a List[List[List[str]]], damnit!), OR
  2. we don't actually care (I mean, technically a List[List[List[List[List[List[...]]]]]] is a List[List[List]], so... we only need to know if it's the latter or not... 3 steps deep in the tree, as spec'd in the dispatcher).

post script

Technically, we don't even need the first step... plum could handle making a prefix tree by itself, and only of types that we dispatch on. BUT I feel like the overhead is minimal, and we'd get really rich debug info if we had the full tree, so... shrug?

Admittedly, this is going to run in O(log n) for n types known a priori (whether all known types, or just the dispatched types, see above). My assumption here is that the number of possible input types is going to WAY dwarf the number of types a dispatcher is registered to, so I'd personally rather scale by the latter. ALSO, we can always memoize this, so that if we do find a decent type to "match" our input, it goes right to the O(1) lookup table?

@leycec
Member

leycec commented Oct 11, 2022

Yup. Smarty-pants NIST scientist @tbsexton with the game-winning layup at the buzzer. A binary search tree-ish monstrosity administered by beartype.door.is_bearable() is a clever alternative – and possibly even simpler (despite its monstrous form).

It's germane for me to admit at this critical juncture that @beartype currently does not deeply type-check nearly as much as everyone believes it does: e.g.,

>>> from beartype.door import is_bearable
>>> is_bearable({'This is...': 'kinda non-ideal.'}, dict[bool, float])
True  # <-- oh gods wth is this horror show

The reason being I haven't found a spare hot minute to throw at deep type-checking. This is embarrassing, because that's what we're supposed to do. I kinda got distracted by other deceptively enticing open feature requests like documenting everything on ReadTheDocs (RTD) 😬 and the implicit numeric tower and is_color... and here we are.

Maybe I'd just better do deep type-checking, instead. I'm confident I can spin up deep type-checking support for sets and mappings (i.e., dictionaries), which hopefully covers most of the remaining type hints plum users would be expected to dispatch on. If I had a goatee, I would be stroking it optimistically.

@rtbs-dev
Contributor

rtbs-dev commented Oct 11, 2022

Hah I mean, we'll see if this is viable, and monstrous just sounds fun right? :P

As for deep-checking, this is strictly for (based on the nasty example we saw starting this whole thing) dict, set, and tuple? Since those don't randomly sample without some ctypes shenanigans? Hmm...

I see your door api already does this? e.g.

>>> TypeHint(T.Dict[T.Any, int]) > TypeHint(T.Dict[bool,int])
True

>>> TypeHint(T.Dict[T.Any, int]) < TypeHint(T.Dict[bool,int])
False

>>> TypeHint(T.Dict[str, T.Dict]) > TypeHint(T.Dict[str,T.Dict[str,int]])
True

>>> TypeHint(T.Dict[str, T.Dict[str, T.Dict]]) > TypeHint(T.Dict[str,T.Dict[str,int]])
False

So yeah that's awesome. AND we now know, a priori, how deep a given dispatcher needs to go (since by construction we are assuming the universe of possible types is given by the dispatcher).

For my case, if my dispatcher had the audacity to register TypeHint(T.Dict[str, T.Dict[str, T.Dict]]), then in my "vision", we recursively traverse the tree every time there's a new generic's parameter. It's like a new level of dreams in inception; we just "start over" with the parameter as our type-in-question. So we shouldn't ever really be asking is_bearable about anything deeper than one level, right?

>>> my_json = {'look': {'forthe': {'bear': 'necessities'}}}
>>> is_bearable(my_json, T.Dict)  # <-- the root check
True
>>> is_bearable(list(my_json.values())[0], T.Dict)  # <-- the 1st-level child check
True
>>> is_bearable(list(list(my_json.values())[0].values())[0], T.Dict)  # <--- I have no memory of this place
True
>>> is_bearable(list(list(list(my_json.values())[0].values())[0].values())[0], str)  # this way! the air is not so foul down here
True

I skipped the key checks, but the point is the same... each time it's just a check against the generic parameter, applied to whatever we found inside the corresponding container in what got passed. We don't need to randomly access anything, since we know a priori exactly what should be at every level of the registered type hint.

Am I missing something really obvious? I'm running a tad low on sleep... 😅

EDIT: I thought of an easier way to describe that monologue (thank you, sleep)...
It's a separation of concerns between type variance coming from

  • inheritance and object shenanigans (beartype is fine with this; I'm looking at you, enum 😠 ), and
  • generics/parameterization (beartype can be fine with this a la Union/List, but not for dict/tuple/set)

We only really need the search tree for the first one, because parameters are either also needing the first one, or also have their own parameter, and down the recursion we go, until we hit the max depth specified by the dispatch registry (this part is important!)

Obviously it's easier (for us, anyway) if beartype handled both types in one call, with deep-checks. But frankly I'm too excited for real documentation and whatever is_color is that I can't imagine adding more stuff to that pile (you maniac 😆)

@leycec
Member

leycec commented Oct 13, 2022

Ah, ha. High-IQ NIST researcher is high-IQ, which explains why one of us works at NIST while the other lives in a cabin in the woods. So... you're absolutely right. I think. I mean, that's the safe bet here, isn't it? See also our respective workplaces.

As you astutely observe, the subset of the beartype.door API that implements our partial ordering over type hints (e.g., <=, is_subhint()) does miraculously appear to order both mapping and set type hints correctly. I publicly confess that I don't know how any of that works, because Harvard microscopist @tlambert03 kindly did everything for us. We consider that a win.

Likewise, the subset of the beartype.door API that implements runtime type-checking (e.g., die_if_unbearable(), is_bearable()) does not yet deeply type-check mappings or sets. But you cleverly realized Plum can kludge around that, because Plum knows the type hint it expects a priori. Let k be the nesting height of any type hint (e.g., k == 3 for typing.AbstractSet[typing.AbstractSet[int]]). Then Plum can just repeatedly bang is_bearable() against each of the k possibly nested containers matched by that known type hint in a tight inner loop with time complexity O(k). Since we expect k to be really small for real-world type hints, that's basically O(1).
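
That O(k) inner loop can be sketched with plain isinstance() standing in for beartype's shallow checks. The function name is hypothetical, the hint AbstractSet[AbstractSet[int]] is hard-coded for clarity, and an element that is literally None is conflated with emptiness (fine for a sketch):

```python
from collections.abc import Set

def shallow_checks_set_of_set_of_int(obj):
    # One shallow check plus one single-element probe per nesting level
    # (k == 3 here), never a full scan of any container.
    if not isinstance(obj, Set):            # level 1: the outer set
        return False
    inner = next(iter(obj), None)           # probe ONE element
    if inner is None:
        return True                         # empty: vacuously satisfied
    if not isinstance(inner, Set):          # level 2: the inner set
        return False
    leaf = next(iter(inner), None)          # probe ONE element again
    return leaf is None or isinstance(leaf, int)  # level 3: int

print(shallow_checks_set_of_set_of_int({frozenset({1, 2})}))  # -> True
print(shallow_checks_set_of_set_of_int({frozenset({"a"})}))   # -> False
```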


@wesselb
Member

wesselb commented Oct 23, 2022

Hello, @leycec and @tbsexton! I'm very sorry for the radio silence on my part. Between wrapping up the PhD and starting a full-time job I've not managed to find time for everything I've been wanting to do, but I'm finally slowly finding a balance.

@leycec @wesselb So --- and this might be totally naive --- do we need a type_of function to operate in the first place? Can we just... solve the inverse problem?

You're onto something here! In fact, I think this is a really good idea. As you say, this completely avoids the need to implement a function type_of. And with @beartype to do the type checking, this would be really fast too!

Your proposal has made something really clear to me. In Python, in many practical scenarios, the type of an object is not sufficient to appropriately perform dispatch. That is, whereas we would hope that

isinstance(x, Type) == issubclass(type(x), Type)

this identity often does not hold. This is made really clear by #66, which is a feature request for support for literal types.

This insight makes me believe that your proposed approach would be a strictly better way of doing things, if not for one downside: caching. Since the type of an object cannot be relied upon to find the right method, caching would not be possible. I would therefore propose the following design, which attempts to combine the best of both approaches.

Let us call a type Type faithful if the identity

isinstance(x, Type) == issubclass(type(x), Type)

holds true. If all types in the dispatch tree (I guess we could use this terminology?) for a function are faithful, then dispatch only depends on the types of the arguments, which means that caching is possible again. Therefore, whenever all methods of a function only use faithful types, fast method lookup is possible. This gives the user the ability to (1) only use faithful types and get optimal performance or (2) use more complicated types at the cost of a small performance penalty. I think (1) is very important for functions which are called very often.
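
The literal types of #66 mentioned above make unfaithfulness concrete: an instance check can distinguish 1 from 2 under Literal[1], but type(x) cannot. A stdlib-only sketch (`literal_check` is a hypothetical helper; the builtin isinstance() raises TypeError on subscripted hints, which is why is_bearable is the real tool for this):

```python
from typing import Literal, get_args

def literal_check(x, hint):
    # A hand-rolled instance check for Literal hints: x satisfies the hint
    # when it equals one of the hint's literal arguments.
    return any(x == arg for arg in get_args(hint))

# The instance check can tell 1 from 2...
print(literal_check(1, Literal[1]))  # -> True
print(literal_check(2, Literal[1]))  # -> False
# ...but a cache keyed on type(x) cannot, because both are plain ints:
print(type(1) is type(2))            # -> True
```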

I think performing dispatch only based on isinstance checks would resolve a lot of issues. To be honest, I'm so convinced that I'm going to start prototyping right away! Just tagging @PhilipVinc here too since he's been closely involved in the development.

It's germane for me to admit at this critical juncture that @beartype currently does not deeply type-check nearly as much as everyone believes it does:

The reason being I haven't found a spare hot minute to throw at deep type-checking.

What @beartype currently already does is amazing! I'm absolutely sure we can find a workaround for deep type checking that suffices in the vast majority of use cases. @tbsexton's proposal seems like an excellent way to go about this in a first iteration.

@wesselb wesselb mentioned this issue Oct 23, 2022
@rtbs-dev
Contributor

rtbs-dev commented Oct 24, 2022

Still digesting, but lemme point out something real quick:

If all types in the dispatch tree (I guess we could use this terminology?) for a function are faithful, then dispatch only depends on the types of the arguments, which means that caching is possible again.

So there's two separate "trees" in my proposal. One is the dispatch tree:

  1. I have some type, so what is the correct function?

This is where your faithful vs. not faithful thing is most relevant, because if I have a partial order over types, I can guarantee a cacheable dispatch function (dispatch only depends on types). So-called unfaithful types are ones that break the partial order somehow... In the case you gave, they aren't subclasses.

BUT the beartype.door api solves that. Theoretically, every possible type hint is now fully "faithful", as long as we a) have the type hint, and b) use the TypeHint wrapper with the <=/is_subhint tool. Beartype "solves" the faithful caching problem for us. They've built the first tree.

But now I've assumed I know the type, so...

  1. If I have some input, what is its type?

This is where we really don't have a good answer in general. Instead we have a nearly perfect chance of answering "is it a subtype of this other type (yes/no)" using beartype is_bearable on the kinds of things normal humans want to actually dispatch over.

So we construct a binary search tree of TypeHints we know/care about (which we can make really easily using the first tree via is_subhint) to ask beartype "good" yes or no questions (is my input an object? Great! Is it an....) to approximate our missing type_of in pretty much constant time.

So, two separate trees. Technically, the second is a tree the dispatcher builds for us from the first one, since we already get the first one, for free, from beartype's partial order. They're the same tree, really, but we have to find clever subsets of the (infinite?) first one to avoid needing type_of.

@wesselb
Member

wesselb commented Oct 24, 2022

BUT the beartype.door api solves that. Theoretically, every possible type hint is now fully "faithful", as long as we a) have the type hint, and b) use the TypeHint wrapper with the <=/is_subhint tool. Beartype "solves" the faithful caching problem for us. They've built the first tree.

What about a method f(x: Literal[1])? If x were 1, then its type would be int. However, knowing that its type is int is not enough to establish that the method applies to 1. This is the prime example of an "unfaithful" type that I have in mind. Moreover, even if we know that the method applies to 1, e.g. using is_bearable, then we cannot use the type of 1 to perform caching, because the method should only apply to 1s and no other ints.

So-called unfaithful types are ones that break the partial order somehow... In your case given, they aren't subclasses.

Hmmm, as far as I can see, Literal[1] would not necessarily break the partial order, right? I believe issubclass(Literal[1], int) would hold true!

So we construct a binary search tree of TypeHints we know/care about (which we can make really easily using the first tree via is_subhint) to ask beartype "good" yes or no questions (is my input an object? Great! Is it an....) to approximate our missing type_of in pretty much constant time.

Yes! This is super nice and makes a whole lot of sense!

So, two separate trees.

Right, gotcha! I think I understand the distinction you're making.

@rtbs-dev
Contributor

OH I gotcha. Ok so in this case, I assumed we would have an ambiguity resolution precedence. Either the order the dispatch rules were defined in the code, or (my preference) the most specific applicable rule always gets applied.

E.g.

method f(x: Literal[1])? If x were 1, then its type would be int

Which is only true if int is in the dispatcher somewhere else (e.g. method f(x: int) is also defined) and it takes precedence. If the user hasn't defined f(x: int) then the "second tree" would only be able to resolve to Literal[1] for x=1, and fail for other int.

If they have defined both, I think the search tree should continue as far down as it can, to obtain the most specific/narrow applicable type. Similar to how classes works, just simpler with beartype resolution checking. Theoretically this would be cacheable (I hope!) 😄

@wesselb
Member

wesselb commented Oct 25, 2022

the most specific applicable rule always gets applied.

I think this is what should happen! (And how the current design works.)

Hmm, just to be sure we're on the same page, let me walk through a slightly more elaborate example.

Consider the code

from numbers import Number
from typing import Literal

from plum import dispatch

@dispatch
def f(x: str):
    return "str!"


@dispatch
def f(x: Number):
    return "number!"
    

@dispatch
def f(x: int):
    return "int!"


@dispatch
def f(x: float):
    return "float!"
    

@dispatch
def f(x: Literal[1]):
    return "the literal one!"

This registers five methods for the function f, with the following five signatures: (str,), (Number,), (int,), (float,), and (Literal[1],). Using beartype.door, these signatures may be arranged into the following two directed acyclic graphs (DAGs):

(str,)

(Number,) -> (int,) -> (Literal[1],)
   |
   v
(float,)
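
For the plain-class edges of these DAGs, the ordinary issubclass builtin already yields this shape; the Literal[1] edge is the one that genuinely needs beartype.door. A stdlib-only sanity check of the sketched structure:

```python
from numbers import Number

# Edges of the second DAG: int and float are both narrower than Number...
assert issubclass(int, Number)
assert issubclass(float, Number)
# ...but are siblings of each other, and str sits in its own DAG:
assert not issubclass(int, float) and not issubclass(float, int)
assert not issubclass(str, Number)
```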

Note that Literal[1] should indeed satisfy TypeHint(int) >= TypeHint(Literal[1]). Also note that Literal[1] is unfaithful, because

>>> isinstance(1, Literal[1])
True

>>> issubclass(type(1), Literal[1]) 
False

We desire the following behaviour:

>>> f("1")
'str!'

>>> f(1)
'the literal one!'

>>> f(1j)
'number!'

>>> f(1.0)
'float!'

>>> f(2)
'int!'

>>> f(object())
MethodNotFoundError: cannot find an appropriate method

There are two ways to determine the right method:

The isinstance-Based Method

Whenever f(x) is called, start at the top of the first DAG. Call the root node n.

  1. If not isinstance(x, n), reject the DAG and proceed to the next DAG and repeat step 1. Otherwise, proceed to the next step.
  2. For every child c of n, check isinstance(x, c). If there are no matches, the method corresponding to n is appropriate for this DAG. If there is exactly one match, name this child n and repeat step 2. If there is more than one match, return an ambiguity error.

If there is an appropriate method for one and exactly one DAG, we have successfully found the right method to run!
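
A minimal sketch of this walk over plain classes (the Literal[1] leaf is omitted, since it needs is_bearable rather than the builtin isinstance; the distinct ambiguity and not-found errors are collapsed to TypeError here):

```python
from numbers import Number
from typing import Any

class Node:
    """A node in one signature DAG: a hint plus its narrower children."""
    def __init__(self, hint, children=()):
        self.hint = hint
        self.children = list(children)

def resolve(x: Any, roots: list):
    """Walk the DAGs with instance checks, following steps 1 and 2."""
    for root in roots:
        if not isinstance(x, root.hint):  # step 1: reject this DAG
            continue
        node = root
        while True:  # step 2: descend to the most specific match
            hits = [c for c in node.children if isinstance(x, c.hint)]
            if not hits:
                return node.hint
            if len(hits) > 1:
                raise TypeError("ambiguous dispatch")
            node = hits[0]
    raise TypeError("cannot find an appropriate method")

# The two DAGs from the example above (without the Literal[1] leaf):
roots = [Node(str), Node(Number, [Node(float), Node(int)])]
assert resolve(1.0, roots) is float
assert resolve(1j, roots) is Number
```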

The type(x)-Based Method

If all types are faithful, then every call isinstance(x, n) can be replaced with issubclass(type(x), n) (or TypeHint(type(x)) <= TypeHint(x)). The big consequence of this is that the algorithm only depends on type(x). Therefore, once we run the algorithm, we can associate the resulting method to type(x), and run this method for any future arguments of this type! In other words, we can do caching.
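
As a sketch of why faithfulness buys caching, here is a hypothetical resolver memoized on type(x) with functools.lru_cache. The method table mirrors the example's methods minus Literal[1] — which is omitted precisely because it is unfaithful and hence not cacheable this way — and is ordered narrow-to-wide so the first issubclass hit is the most specific:

```python
from functools import lru_cache
from numbers import Number

# Hypothetical method table, ordered from narrowest to widest hint:
methods = [(str, "str!"), (int, "int!"), (float, "float!"), (Number, "number!")]

@lru_cache(maxsize=None)
def method_for(cls):
    """Resolve once per argument class; valid only for faithful types."""
    for hint, impl in methods:
        if issubclass(cls, hint):
            return impl
    raise TypeError("cannot find an appropriate method")

def f(x):
    # Every later call with the same type(x) is a pure cache hit.
    return method_for(type(x))

assert f(2) == "int!"
assert f(1.0) == "float!"
assert f(1j) == "number!"
```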

If the types are not all faithful, then I'm afraid that there is no such way to do caching, because the outcome of isinstance(x, n) can only be determined by actually running this instance check. Therefore, the isinstance-based method must be run for all invocations of f.

Side note: If some of the types in the type tree are unfaithful, then caching may still be possible. Namely, if, for a particular input x, the isinstance-based method happens to only come across faithful nodes, then caching should still be valid!

Another side note: By allowing the user to make type_of smarter, more types can be rendered faithful. For example, if type_of(x) = Literal[1] if x is 1 else type(x), then isinstance(x, Literal[1]) == issubclass(type_of(x), Literal[1]). In this way, by making type_of smarter, the user would be able to gradually improve caching performance.
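
That smarter type_of can be sketched directly — a minimal stand-alone version of the exact rule just described, under which the value 1 becomes cacheable under its own key:

```python
from typing import Any, Literal

def type_of(x: Any):
    """Hypothetical extended type_of: the value 1 gets the narrower hint."""
    if type(x) is int and x == 1:
        return Literal[1]
    return type(x)

# Cache keys are now finer-grained than plain type(x):
assert type_of(1) == Literal[1]
assert type_of(2) is int
assert type_of(1.0) is float
```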

Let me work through two examples.

Example 1

We call f(1). isinstance(1, str) is false, so we reject the first DAG. isinstance(1, Number), however, is true. Now isinstance(1, float) is false and isinstance(1, int) is true, so we proceed to the node int. Finally, isinstance(1, Literal[1]) is also true, and Literal[1] is a terminal node, so we return the method corresponding to Literal[1].

Example 2

We call f(1.0). isinstance(1.0, str) is again false, so we reject the first DAG, and isinstance(1.0, Number) is again true. Now isinstance(1.0, float) is true and isinstance(1.0, int) is false, and float is a terminal node, so we return the method corresponding to float.

Does this also correspond to what you have in mind?

@rtbs-dev
Contributor

Ahhh I got you. Ok, I have some deadlines here but very briefly let me clarify what parts I would hope are (ostensibly) cacheable:

First, I don't think we can ever really hope to use the native type() for anything worthwhile. It's kind of a nightmare. BUT it is true that

>>> issubclass(type(1), Literal[1]) 
False

since a Literal isn't a subclass of int. However, I'm actually imagining a compilation step plus two dispatch-time passes here.

  1. compilation: Construct the search tree of the types we want, out of the universe of everything possible. We use the typehints from the dispatch definitions above, and get exactly what you wrote as the DAG(s).^[1]. IMO, most of the "caching" work was done here, since you've narrowed your search space so much that the cost of dispatching over the space of types is pretty much amortized away. So now you've effectively got a tree of a few types, so that at dispatch-time, you:
  2. determine the type of an argument, assuming the only available types are the ones the dispatcher explicitly cares about. This is using a (probably very shallow) search tree constructed when the dispatcher is instantiated. This wouldn't need to be cached, since the runtime should be very close to constant time, anyway. It's practically a lookup-table (very loosely), which is exactly what the caching/memoize system would do, anyway. NOTE: the only reason this works is because we aren't doing isinstance, but is_bearable, which runs via beartype O(1)-ish time for every query.
  3. once you have an approximate guess for plumish_type(x)->TypeHint, you can in theory directly use that type to match something from the dispatch: dispatcher: TypeHint->Callable. You don't really need caching here either since everything in the tree corresponds to something you wanted to dispatch on.
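
The three steps might look like this minimal sketch (all names hypothetical; `matches` stands in for is_bearable and handles just the two hint forms used here):

```python
from typing import Any, Literal, get_args, get_origin

def matches(x: Any, hint: Any) -> bool:
    """is_bearable stand-in, covering plain classes and Literal hints."""
    if get_origin(hint) is Literal:
        return x in get_args(hint)
    return isinstance(x, hint)

# Step 1 (compilation): a table built once from the @dispatch
# definitions, ordered narrow-to-wide.
table = [
    (Literal[1], lambda x: "the literal one!"),
    (int, lambda x: "int!"),
    (float, lambda x: "float!"),
]

def f(x: Any):
    # Steps 2 and 3: find the narrowest matching hint, call its method.
    for hint, impl in table:
        if matches(x, hint):
            return impl(x)
    raise TypeError("cannot find an appropriate method")

assert f(1) == "the literal one!"
assert f(2) == "int!"
assert f(0.5) == "float!"
```

Since the table is fixed when the dispatcher is built, step 2's cost is bounded by the handful of registered hints rather than by the space of all Python types.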

OK so now that I've said that, I think both 2 and 3 are still cacheable. They are (pure) functions, since every input will deterministically end at either a type in the tree or fail if no type is available. But as you note, you wouldn't be caching on types in 2, only in 3. But if you think it's needed (e.g. if a user passes the same x over and over such that memoizing on x rather than type(x) is worth it), you could have a kwarg or something that enables that behavior.

I generally think that relying on any kind of type(x) for python is going to land us in a world of pain. Exhibit-A: Enum. While folks over at, say, pydantic are using Enum as pseudo tagged unions (like, yknow, the sum-types we all wish we had)[^2], it turns out that dispatching on enums (how we might want to) becomes horrifyingly impossible, given that the type() of an enum member is always set as... enum 🙄. This is bad. Or at least, it was before beartype's from beartype.cave import EnumType, EnumMemberType.

SO yeah I think the isinstance() method is pretty much there, but we want to change that to is_bearable, and, it's only after we've narrowed the search space down such that caching is, for all intents and purposes... optional?

^[1]: as a sidenote, if users want to do numeric type checks, I would direct them to the (beartype-fueled) numerary since they're another project running into type issues and trying to solve them 📦

^[2]: @leycec I will say, if you're looking for feature requests... damn I want a clean syntax for sum types/tagged union so. bad.. I don't want to have to add a janky assert_never into my code to do exhaustiveness checking. I want clean algebraic types, and I want them bearable, damnit! 😄

@leycec
Member

leycec commented Oct 28, 2022

I don't think we can ever really hope to use the native type() for anything worthwhile. It's kind of a nightmare.

Yes. type() is appropriate for scaring apprentice child software engineers at Halloween, but otherwise inappropriate. With respect to type hints, type() lies, because many type hints and other objects of interest (e.g., enumeration members) lie about their types; the remainder have no public types. Some objects even have entirely different types depending on whether you're running under CPython or PyPy, which is worse than a lie. That's squabbling amongst parents who insist they love you.

This makes my eye twitch even more spasmodically than it usually does:

>>> issubclass(type(1), Literal[1]) 
False

I hear the Jaws theme when I see horrors like that.

is_bearable, which runs via beartype O(1)-ish time for every query.

I see your "O(1)-ish time" and raise you a table flip.

panda no likey

My bald-faced lies are hard to believe, but all of the runtime type-checking performed by @beartype (including is_bearable()) is strictly O(1). I will eat my moth-eaten Merino sweater live on Gutter Gitter if this is not the case.

Okay. It's actually O(k) for k the nesting depth of a type hint (e.g., k == 2 for list[int]). But k < 5 in almost all real-world cases. My lies live to see another day.

pydantic

Friends don't let friends pydantic. I kid, of course! Pydantic is probably great – at least as a temporary stopgap until @beartype gracefully absorbs pydantic's feature set into itself like some gurgling tinfoil amoeba from Mystery Science Theatre 3000.

pydantic are using Enum as pseudo tagged unions

...this is not a thing people should be doing:

from dataclasses import dataclass
from enum import Enum

# Horrible sum type is horrible.
@dataclass
class SumType:
    kind: Enum
    data: object

# Rando enumeration is rando.
Token = Enum('Token', ['Number', 'Operator', 'Identifier', 'Space', 'Expression'])

muh_horrible_sum_type = SumType(Token.Number, 42)

That's awful, as both the SumType.kind and SumType.data fields are basically untyped. But that's not nearly as awful as the official example mypy gives:

from typing import Literal, TypedDict, Union

class NewJobEvent(TypedDict):
    tag: Literal["new-job"]
    job_name: str
    config_file_path: str

class CancelJobEvent(TypedDict):
    tag: Literal["cancel-job"]
    job_id: int

Event = Union[NewJobEvent, CancelJobEvent]

def process_event(event: Event) -> None:
    # Since we made sure both TypedDicts have a key named 'tag', it's
    # safe to do 'event["tag"]'. This expression normally has the type
    # Literal["new-job", "cancel-job"], but the check below will narrow
    # the type to either Literal["new-job"] or Literal["cancel-job"].
    #
    # This in turn narrows the type of 'event' to either NewJobEvent
    # or CancelJobEvent.
    if event["tag"] == "new-job":
        print(event["job_name"])
    else:
        print(event["job_id"])

Seriously? Expensive O(n) string matching just to narrow conditional branches? What is this burning trash pile you are badly recommending to long-suffering data scientists such as ourselves, mypy? 😮‍💨

beartype's from beartype.cave import EnumType, EnumMemberType.

Gah! You have discovered all my undocumented secrets. I... I was pretty sure nobody even knew about that API. You're like some kind of GitHub bloodhound, sniffing out the bitrot and skeletons in my commit history. This is why you work at NIST.

I don't want to have to add a janky assert_never into my code to do exhaustiveness checking.

You're in luck then! In addition to being stupidly faster than prior CPython versions, the just-released CPython 3.11.0 publishes an official typing.assert_never() function. It's still a bit silly, but at least you no longer need to jank your own codebase with it.

@leycec I will say, if you're looking for feature requests..

No. No! Oh, please GitHub Gods above (...so, Satya Nadella), no!!! I think I may be hyperventilating here.

I want a clean syntax for sum types/tagged union so. bad..

Yes! Do this for me, please. Do everything for me and then I will make this happen. Actually, this leads to an interesting factoid I just recalled. Did you know that typing.Literal[...]:

  • Accepts multiple arguments?
  • Accepts enumeration members as arguments?

Combining those two realizations, we can exhaustively type enumeration members:

from enum import Enum
from typing import Literal, assert_never

# Rando enumeration is rando.
Token = Enum('Token', ['Number', 'Operator', 'Identifier', 'Space', 'Expression'])

TokenKind = Literal[
    Token.Number,
    Token.Operator,
    Token.Identifier,
    Token.Space,
    Token.Expression,
]

TokenTypes = int | str | OTHER_STUFF_I_GUESS

@beartype
def muh_tokenizer(token: TokenTypes, token_kind: TokenKind):
    match token_kind:
        case Token.Number:
            ...
        case Token.Identifier:
            ...
        # blah-blah-blah: the remaining members!
        case _:
            assert_never(token_kind)

@beartype supports that already. I think. No quotes, please.

Of course, that's just a really bad form of multiple dispatch. Plum would solve the same quandary considerably more elegantly, assuming typing.Literal support: e.g.,

```python
from enum import Enum
from plum import dispatch
from typing import Literal

# Rando enumeration is rando.
Token = Enum('Token', ['Number', 'Operator', 'Identifier', 'Space', 'Expression'])

@dispatch
def muh_tokenizer(token: int, token_kind: Literal[Token.Number]): ...

@dispatch
def muh_tokenizer(token: str, token_kind: Literal[Token.Space]): ...
```

Assuming @plum.dispatch internally reduces to something like @typing.overload when statically typing.TYPE_CHECKING, the above might even be statically type-checkable. Maybe. Or maybe a mypy-specific plugin that doesn't work under any other static type-checker would need to be authored. I shudder.

But you probably want something more Rust-like, right? Like, a full-blown SumType data structure that you can then stuff into monstrous red-black tree nodes and algebraically reason about with both static and runtime type-checkers. Static type-checking support probably requires a full-blown PEP. Runtime type-checking support requires someone who is not me to have free time and submit a mostly untested, undocumented, non-working PR to @beartype. Yet again, I shudder.

I would direct them to the (beartype-fueled) numerary since they're another project running into type issues and trying to solve them 📦

@posita: People love numerary. You're famous. Happy Halloween. 🎃

@wesselb
Member

wesselb commented Oct 30, 2022

@tbsexton, I should've clarified that in all instances of isinstance I mean to use is_bearable and in all instances of issubclass I mean to use TypeHint and <=. Note that, with TypeHint, issubclass(Literal[1], int) is true:

>>> TypeHint(Literal[1]) <= TypeHint(int)
True

determine the type of an argument, assuming the only available types are the ones the dispatcher explicitly cares about.

Isn't this the function type_of that we're trying to avoid? Unless I'm misinterpreting, I believe that we're all in agreement that this is most likely not the right way forward.

Ok then! I think we've converged onto an approach that should generally work: the isinstance-method implemented with is_bearable. The best way to do caching, however, still remains to be figured out. I think that's enough for me to get to work! 😃

I was not aware of numerary. @posita, that's really nice work!!

I was also not aware of tagged unions, and I think I'd rather stay unaware. That's ugly.

@leycec
Member

leycec commented Nov 3, 2022

@wesselb spitting hard truths as always. I applaud this timely advice, too:

I was also not aware of tagged unions, and I think I'd rather stay unaware. That's ugly.

truth

@rtbs-dev
Contributor

rtbs-dev commented Nov 3, 2022

@wesselb

Isn't this the function type_of that we're trying to avoid? Unless I'm misinterpreting, I believe that we're all in agreement that this is most likely not the right way forward.

Right, sorry, let me fix that sentence:

determine the type of an argument, given only the available types our runtime dispatcher explicitly cares about.

rather than:

determine the type of an argument from the set of all possible Python and/or user-defined/imported types.

The second is the bad one. But a function that goes type_of(x:Any) -> typ:TypeHint is going to be necessary. My point was that we can do the "caching"/time-savings ahead of time by restricting the space we search for typ in, by using what the dispatcher was told x is expected to be, and organizing it into a useful hierarchy (search tree).

As for why sum types (discriminated/tagged unions): they're actually a way we can a priori do dispatching for a bunch of complicated types, without needing to validate or check which subtype gets dispatched to. See this tidbit from the pydantic docs:

When Union is used with multiple submodels, you sometimes know exactly which submodel needs to be checked and validated and want to enforce this. To do that you can set the same field - let's call it my_discriminator - in each of the submodels with a discriminated value, which is one (or many) Literal value(s). For your Union, you can set the discriminator in its value: Field(discriminator='my_discriminator').

Setting a discriminated union has many benefits:
- validation is faster since it is only attempted against one model
- only one explicit error is raised in case of failure
- the generated JSON schema implements the associated OpenAPI specification

So what I imagined here was a more general "validate()" function that is effectively doing, you guessed it, multiple dispatch, where I don't have to do a bunch of try/except or if isinstance() blocks... it just knows. Pydantic achieves this with Literal tags as metadata, ergo "tagged union". It's a janky solution, but I've used it to pretty great effect for speeding up code and reasoning about types. I'll be honest here, part of what drew me to plum and beartype was a way I could do similar things on, say, TypedDict or NamedTuple or dataclass without selling my codebase's soul to the pydantic dependency gods 😅.
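
As an illustration of that dispatch-by-tag idea, here is a hypothetical, stripped-down version using plain dicts rather than pydantic models:

```python
from typing import Callable, Dict

# The tag selects the handler directly: no try/except chain over
# submodels, and exactly one candidate is ever checked.
handlers: Dict[str, Callable[[dict], str]] = {
    "new-job": lambda event: event["job_name"],
    "cancel-job": lambda event: str(event["job_id"]),
}

def process_event(event: dict) -> str:
    return handlers[event["tag"]](event)

assert process_event({"tag": "new-job", "job_name": "build"}) == "build"
assert process_event({"tag": "cancel-job", "job_id": 7}) == "7"
```

The dict lookup is the whole point: the discriminator turns an O(n) scan over submodel candidates into a single constant-time dispatch.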

So this is all a bit of a tangent, right? Well... it's connected, I promise! I came across a brilliant SO comment chain that illustrates what I'm talking about. With "untagged", i.e. normal typing.Union types:

In other words, if you have an Int ∪ Double, then you'd know you have one of the two, but wouldn't be able to pattern match to see which, so you could only do things that would be valid to do for both possibilities.

But hang on, how do we seem to get along fine without tagged types in other languages?

For clarity, TypeScript has type information available at runtime, so it has a typeof operator that can make up for the lack of tagging and see which type is used anyway. Haskell is fully type-erased, so if it supported this feature, then there wouldn't be any equivalent to that.

SO we've come full circle! Hah! The entire issue here is that in reality python has what amounts to no real support for typeof that lets us know what to do with e.g. generics and unions, etc. For all intents and purposes, we can (only in the context of a dispatcher, mind you), do a typeof_NEAREST_NEIGHBOR that is super fast. That's the essence of my original proposal. But IMO it would be awesome to have "real" sum types/tagged unions/ADT's, so that we can treat python as type-erased the way that (apparently) Guido wanted us to 😛. Then something like e.g. Plum would get a huge performance boost when it knew it was dispatching over ADTs/sum types

@leycec so this is (kinda) getting out of scope here, so I might move this, but...you mention

we can exhaustively type enumeration members [...] Of course, that's just a really bad form of multiple dispatch. Plum would solve the same quandary considerably more elegantly, assuming typing.Literal support

So yeah, and in reality pattern matching can kinda do this to an extent. And plum is definitely the way to do sum type destructuring, IMO. But my issue is with sum type construction, so what bothers me is the need for these annoying little (user-defined) tags, such that no two libraries implementing sum types will ever really have a common protocol. See: the pydantic thing above, or the janky mypy assert_never as seen here (and in your example), or even better, the wikipedia entry for tagged union implementation in python 🙄.

SO what I'm thinking is... beartype could (probably?) be... coaxed... into emulating type tags (and therefore, ADTs muahahaha). After all, we're pretty much talking about one more kind of type arithmetic that needs to be supported (the disjoint union), and the new TypeHint object seems like a perfect candidate for representing something's true type information as metadata.

@wesselb
Member

wesselb commented Nov 6, 2022

The second is the bad one. But a function that goes type_of(x:Any) -> typ:TypeHint is going to be necessary. My point was that we can do the "caching"/time-savings ahead of time by restricting the space we search for typ in, by using what the dispatcher was told x is expected to be, and organizing it into a useful hierarchy (search tree).

Is a function type_of truly necessary? I think I previously convinced myself that we can get by entirely without a function type_of! 😅 More specifically, the previously outlined isinstance-method (i.e. is_bearable) only uses isinstance and the ordering given by TypeHint and completely avoids all use of type_of. type_of only comes into play when we need to do caching—for correct functioning, I believe we don't need a type_of at all.

That's the essence of my original proposal. But IMO it would be awesome to have "real" sum types/tagged unions/ADT's, so that we can treat python as type-erased the way that (apparently) Guido wanted us to 😛.

What would an ideal solution to this problem look like to you? I.e., what would you really like the code to look like?

For all intents and purposes, we can (only in the context of a dispatcher, mind you), do a typeof_NEAREST_NEIGHBOR that is super fast.

I think I'm becoming increasingly convinced that we might not be able to implement a type_of that works correctly in all cases without manually specifying how to handle more complex types, even when only considering signatures for a particular function. For example, if we have methods f(x: int), f(x: Addable) (Addable the type of all objects implementing __add__), and f(x: Literal[1]), some complicated logic may be required to infer that type_of(1.0) should give something that is a subtype of Addable.
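
For what it's worth, the instance-check half of that example is the easy part: with a runtime-checkable protocol, the builtin isinstance answers the Addable question directly, so it is only a hypothetical structural type_of (value in, type out) that would need the complicated inference:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Addable(Protocol):
    """Structural type of everything implementing __add__."""
    def __add__(self, other): ...

# isinstance checks for the presence of __add__, no type_of required:
assert isinstance(1.0, Addable)
assert isinstance("hi", Addable)
assert not isinstance(object(), Addable)
```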

@posita

posita commented Nov 7, 2022

@posita: People love numerary. You're famous. Happy Halloween. 🎃

Perhaps technically correct in that more than one person qualifies as "people" (shout out to @tbsexton), but if I'm famous, someone forgot to tell me. Maybe I should contact the Department of Internet Money and see how many millions I'm entitled to. 😉

I was not aware of numerary. @posita, that's really nice work!!

Thanks! And by "nice work" I will take it to mean the type that never should have been required in the first place and should be obsoleted by the standard library as quickly as possible. The bad news is that we're not there yet. The good news is that there's a discussion, if that's something that interests you. The bad news is that it has kind of stalled. The good news is ... hang on, I'm thinking ... I've been spending some casual time wrestling with what an algorithmic generic might look like. The bad news is that I haven't gotten very far, probably because it's hard and because I've been distracted by other things that life has a way of throwing at one wrestling with tough problems without the benefit of complete creative freedom afforded by elective solitude and inherited financial independence. Alas, I fear I will never qualify for such niceties, and must make do with what I do have (and for which I am grateful). Patience is humbly appreciated! 🙇

@rtbs-dev
Contributor

rtbs-dev commented Nov 8, 2022

@wesselb For that last example, "stuff that implements __add__" is precisely this protocol:

import typing as T

from beartype.door import TypeHint

class Addable(T.Protocol):
    def __add__(self, other):
        ...

TypeHint(T.Literal[1]) < TypeHint(int) < TypeHint(Addable)  # True

Even though we're in structural-typing land now, beartype.door is working pretty great.
If all of those typehints were registered with a dispatcher, we could still strictly choose the "smallest" one that qualifies: a 1 goes to T.Literal[1], a 2 goes to int, and 'hi' goes to Addable.

What would an ideal solution to this problem look like to you? I.e., what would you really like the code to look like?

I suppose I should start working on this 😅 Gimme a little bit, I'll see if I can sketch some things out.

@wesselb
Member

wesselb commented Nov 9, 2022

@tbsexton I fully agree that DOOR works beautifully here and gives us the order that we’re after! (@leycec, this really is fantastic!) My point was that implementing a function typeof in this case is like solving the dispatch problem in the first place. That is, the logic to determine that 2 should go to int is in this case equivalent to determining that f(2) should dispatch to the method implemented by int. But perhaps we’re in agreement! I should have a prototype working soon(tm).

@wesselb
Member

wesselb commented Nov 9, 2022

@posita That’s certainly something I’m interested in! It’s exciting to see that there is discussion around this. I can fully imagine that what you’re after is a tall ask. :)

PS: The GitHub mobile website doesn’t allow one to react to posts with emojis? That’s a bummer.
