
[RFC][EXPR] Formalize Integer Arithmetic Analysis #2588

tqchen opened this issue Feb 12, 2019 · 11 comments

@tqchen commented Feb 12, 2019

The integer analysis (arith namespace) serves as a foundational tool for low-level code generation. At this point, the analysis code is scattered across the codebase. This RFC proposes to revamp it.

Here is the list of currently related code:

  • Symbolic integer simplification
    • Canonical form based simplification (as in CanonicalSimplify)
    • Rewrite-rule based simplification (Simplify in HalideIR)
  • Symbolic interval bound analysis
  • Modulo analysis (DetectLinearEquation)
  • Deep equivalence checking

We have also seen issues on this topic.

Most of the current analysis functions can be put into two categories:

  • Symbolic equivalence analysis: maps a symbolic expression e1 to an e2 such that e1 and e2 are equivalent to each other (as in simplification)
  • Relaxed integer set analysis: for e1, provide a set representation S such that the values e1 can take are contained in S (e1 ⊆ S).
    • For example, the result of DetectLinearEquation can be viewed as the integer set {x * factor + base}

Relaxed integer set analysis only requires us to find a superset of the exact value set. This relaxation lets us handle cases such as division and other functions that are hard to analyze exactly.
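
As a minimal sketch of relaxed integer set analysis, the following hypothetical helper (not the actual TVM API) computes constant intervals over a tiny tuple-based expression tree; the floordiv case shows why relaxation matters — a sound superset is enough, so a positive constant divisor needs no exact reasoning:

```python
def const_bound(expr, ctx):
    """Return (lo, hi) such that every value of expr lies in [lo, hi].

    The result may be a strict superset of the exact value set; that is
    what makes operators like floordiv easy to handle soundly.
    """
    if isinstance(expr, int):
        return (expr, expr)
    if isinstance(expr, str):            # a variable: bound comes from the context
        return ctx[expr]
    op, a, b = expr
    alo, ahi = const_bound(a, ctx)
    if op == "add":
        blo, bhi = const_bound(b, ctx)
        return (alo + blo, ahi + bhi)
    if op == "floordiv" and isinstance(b, int) and b > 0:
        # relaxed rule for a positive constant divisor
        return (alo // b, ahi // b)
    raise ValueError(f"unsupported expression: {expr!r}")
```

For example, with i in [0, 8], the bound of (i + 3) // 4 comes out as [0, 2].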

Here is a list of observations about these analysis functions.

  • They can use results from each other.
    • For example, we can use the result of a modulo analysis when simplifying divisions.
  • They can complement each other.
    • Constant integer bound and symbolic bound can give us two different perspectives.
  • They are context-dependent.
    • We need context information (bounds, positiveness, and other properties) about the variables in an expression to get the best analysis result.
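
As a sketch of the first observation (a modulo analysis feeding a division/modulo simplifier), the following hypothetical mini-analyzer represents each expression as the integer set {coeff * k + base}, in the spirit of DetectLinearEquation; the names are illustrative, not the actual TVM API:

```python
import math

def modular_set(expr):
    """Return (coeff, base): every value of expr lies in {coeff * k + base}."""
    if isinstance(expr, int):
        return (0, expr)
    if isinstance(expr, str):                 # unknown variable: any integer
        return (1, 0)
    op, a, b = expr
    if op == "mul" and isinstance(b, int):
        coeff, base = modular_set(a)
        return (coeff * b, base * b)
    if op == "add":
        c1, b1 = modular_set(a)
        c2, b2 = modular_set(b)
        return (math.gcd(c1, c2), b1 + b2)
    return (1, 0)                             # fall back to "any integer"

def simplify_mod(expr, divisor):
    """Simplify expr % divisor using the modular-set result when possible."""
    coeff, base = modular_set(expr)
    if coeff % divisor == 0:                  # expr = coeff*k + base, coeff divisible
        return base % divisor
    return ("mod", expr, divisor)
```

For example, (i * 4 + 2) % 4 simplifies to 2 because the modular set of i * 4 + 2 is {4k + 2}.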

Given these observations, the proposed API is a super analyzer that carries state about the context information and bundles the sub-analyzers.

Proposed sub-analyzers

  • CanonicalSimplifier
  • RewriteSimplifier
  • Deep equality classes (possibly via union-find)
  • Symbolic interval finder
  • Constant integer bound
  • LinearEquation detector (modulo analysis)
  • Integer polynomial (optional; the linear-equation special case should be sufficient)
  • Make it easy to plug in a new "view" when we have a new analysis

Each sub-analyzer will use memoization to cache intermediate results. Importantly, each sub-analyzer has a handle (weak reference) back to the super analyzer, which allows it to call the other sub-analyzers. A user creates a super analyzer, populates the relevant context state, and runs the analysis.
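
The structure described above could be sketched as follows; the class names, the tuple-based expressions, and the cache-clearing policy are hypothetical stand-ins for the actual C++ design:

```python
class ConstIntBoundAnalyzer:
    """Sub-analyzer: constant integer bounds, with a memoization cache."""
    def __init__(self, parent):
        self.parent = parent             # handle back to the super analyzer
        self.cache = {}                  # memoized intermediate results

    def __call__(self, expr):
        if expr not in self.cache:
            if isinstance(expr, int):
                r = (expr, expr)
            elif isinstance(expr, str):  # variable: shared context state
                r = self.parent.var_bounds[expr]
            else:                        # only ("add", a, b) in this sketch
                _, a, b = expr
                (alo, ahi), (blo, bhi) = self(a), self(b)
                r = (alo + blo, ahi + bhi)
            self.cache[expr] = r
        return self.cache[expr]

class RewriteSimplifier:
    """Sub-analyzer: rewrite rules that consult siblings via the parent."""
    def __init__(self, parent):
        self.parent = parent

    def __call__(self, expr):
        # rule: a // b -> 0 when the sibling bound analyzer proves 0 <= a < b
        if isinstance(expr, tuple) and expr[0] == "floordiv":
            _, a, b = expr
            lo, hi = self.parent.const_int_bound(a)   # cross-analyzer call
            if isinstance(b, int) and 0 <= lo and hi < b:
                return 0
        return expr

class Analyzer:
    """Super analyzer: carries context state and bundles the sub-analyzers."""
    def __init__(self):
        self.var_bounds = {}             # shared context information
        self.const_int_bound = ConstIntBoundAnalyzer(self)
        self.rewrite_simplify = RewriteSimplifier(self)

    def bind(self, var, lo, hi):
        self.var_bounds[var] = (lo, hi)
        self.const_int_bound.cache.clear()   # context changed: drop stale results
```

With x bound to [0, 2], the simplifier can rewrite (x + 1) // 4 to 0; after rebinding x to [0, 100], the same expression is left untouched.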

Followup Goals Enabled by This Proposal

There are several things that formalization can enable.

  • Better context-dependent analysis: we can populate context info (e.g., a shape size is non-negative) and use that information in code generation.
  • Constant integer bound analysis and automatic integer narrowing.
    • So far we use int32 for most indices. Ideally, we want to default to int64 and automatically narrow integers to int32 when we detect that all the arithmetic stays within int32 bounds.
  • A centralized arithmetic logic that deals with integer analysis
  • Improve simplification in general
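
The narrowing decision in the second point could look roughly like this, where the (lo, hi) bound is assumed to come from a constant integer bound analysis; the function name is hypothetical:

```python
INT32_MIN, INT32_MAX = -(2**31), 2**31 - 1

def index_dtype(bound):
    """Pick a dtype for an index whose values provably lie in bound = (lo, hi).

    Default to int64, and narrow to int32 only when the analysis proves
    that every value fits.
    """
    lo, hi = bound
    if INT32_MIN <= lo and hi <= INT32_MAX:
        return "int32"
    return "int64"
```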

Proposed Steps

  • Design discussion and RFC comment (please reply to this thread).
  • Pattern match utility to better port RewriteSimplification (#2589)
  • Scaffolding of Super Analyzer and sub analyzer structure (#2668)
  • Populating the new infra
    • constant integer bound
    • modular set
    • RewriteSimplifier
    • CanonicalSimplifier
    • Deep equality

Comments are welcome.

@sgrechanik-h commented Feb 13, 2019

Concerning using int32/int64 for indices, I think we should create a separate, conceptually unbounded integer type for index arithmetic. Ideally, at compile time we should use an arbitrary-precision integer library for it, but using int64 may do as a starting point. (And of course it should be represented with int32/64 at run time.)

@grwlf commented Feb 13, 2019

@tqchen I think this is a very important direction to improve. Have you thought about using existing libraries like ISL? As far as I know, it contains many useful algorithms and, more importantly, a well-defined notation for describing formulas and relations between variables.

@tqchen commented Feb 13, 2019

re conceptual integer index: I think moving toward int64 as the default and doing optional narrowing is a good starting point, as @sgrechanik-h pointed out.

re ISL: My understanding of isl is that it tries to do exact integer set analysis and uses a constraint-solver-based approach (although I am not an expert in isl). As a result, it has some restrictions on the operations it supports, as well as on solving speed.

In our case, we aim to provide simple forward-solving methods for relaxed integer set analysis. The integer set abstraction, and pushing most analyses onto integer set primitives, are important ideas we should learn from isl.

On the other hand, I think the first pass of scaffolding should aim to build a self-contained analyzer. Then we can see whether there is value in bringing in isl as an optional sub-analyzer.

@tqchen commented Feb 24, 2019

Constant bound analyzer #2668

@tqchen commented Mar 31, 2019

Followup discussion with @sgrechanik-h on memoization.

At the moment, I try to avoid memoization as much as possible, mainly because memoization can cause problems when we update context information about a variable x (previously computed transformations of expressions involving x may no longer hold). I think this might be a good strategy for now. The cost is some repetitive computation; however, given that most expressions are not deep, this may not be too big a problem.

One way to get around this problem is renaming: we rename the variable to a new one every time its context information changes. This might need some thought and changes in the IR, but it follows the same spirit as SSA.
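
The renaming idea could be sketched like this (hypothetical names, SSA-spirited): each context update bumps a variable's version, so facts recorded under an old versioned name are never invalidated in place.

```python
class Context:
    """Versioned variable facts: updating never mutates old entries."""
    def __init__(self):
        self.version = {}               # base name -> current version number
        self.bounds = {}                # versioned name -> (lo, hi)

    def current(self, var):
        """The versioned name currently standing for var."""
        return f"{var}.{self.version.get(var, 0)}"

    def update_bound(self, var, lo, hi):
        # new fact about var: create a fresh version instead of overwriting
        self.version[var] = self.version.get(var, 0) + 1
        self.bounds[self.current(var)] = (lo, hi)
```

Memoized results keyed on "x.1" remain valid after "x" is rebound, because the rebinding produces "x.2".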

Alternatively, we can actively clear the cache whenever any context information changes.

One of the first memoized analyses is common sub-expression folding and canonicalization, which maps two instances of x + y to the same address. Because it does not depend on context information, it is safe to memoize.
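
The canonicalization step described above amounts to hash-consing: an interning table maps structurally equal expressions to one shared object. A minimal sketch (a real IR would construct a node object rather than reuse the key):

```python
_intern_table = {}

def make_expr(op, *args):
    """Construct an expression, reusing the existing object if one exists."""
    key = (op,) + args
    if key not in _intern_table:
        _intern_table[key] = key        # placeholder: a real IR builds a node
    return _intern_table[key]
```

Two calls with the same structure then return the very same object, so identity comparison suffices:

```python
a = make_expr("add", "x", "y")
b = make_expr("add", "x", "y")
# a is b
```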

@sgrechanik-h commented Mar 31, 2019

In my opinion, analyses should be pure functions, something like (Expr, Context) -> Expr. This makes the problem of memoization much simpler: we just use both arguments as the key.
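
A minimal sketch of this pure, memoized style, assuming expressions are hashable tuples and the context is an immutable (hashable) collection of facts; the rule shown is just an illustration:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def simplify(expr, ctx):
    """Pure analysis: (Expr, Context) -> Expr, memoized on both arguments."""
    if isinstance(expr, tuple) and expr[0] == "add":
        _, a, b = expr
        a, b = simplify(a, ctx), simplify(b, ctx)
        if b == 0:                      # rule: e + 0 -> e
            return a
        return ("add", a, b)
    return expr
```

Because the context is part of the cache key, updating context information means calling with a different key rather than invalidating anything.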

@tqchen commented Mar 31, 2019

The main problem with making things pure is the cost of copying when updating the Context (because the context needs to be immutable), which may not be best for efficiency.
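
One standard way to soften this copying cost is structural sharing: an "updated" immutable context adds a small layer on top of the old one instead of copying it. Python's stdlib ChainMap illustrates the idea (though it is not hashable, so a memoized version would need a persistent map instead); the helper name is ours:

```python
from collections import ChainMap

def with_fact(ctx, var, bound):
    """Return a new context extending ctx in O(1), without copying ctx."""
    return ChainMap({var: bound}, ctx)

base = ChainMap({"x": (0, 10)})
child = with_fact(base, "y", (0, 3))
```

The child sees both facts, while the base context is untouched.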

@sgrechanik-h commented Apr 1, 2019

@tqchen I guess we cannot know for sure: the performance benefits of memoization may outweigh the possible performance losses due to immutability. More importantly, pure functions often make things clearer to reason about.

@tqchen commented Apr 1, 2019

What I mean is that we can support memoization even if we don't use a purely functional style, as I outlined in my last post.

@sgrechanik-h commented Jun 15, 2019

@tqchen One thing I wanted to clarify: why isn't the Analyzer class integrated into the Node hierarchy? Instead, a separate closure-based mechanism is used for the Python integration, which feels strange and seems to make it harder to create functions that accept Analyzer objects and work across both languages.

@tqchen commented Jun 17, 2019

@sgrechanik-h This is a very good point. I thought about it, but the conclusion then was that it might not be as useful. But you are right that we may get to a stage where we need to pass Analyzer objects around, and it would make sense to make Analyzer a Node.
