
Implement TokenFlags stored on each Token #11578

Merged · 1 commit into dhruv/parser-phase-2 on May 29, 2024

Conversation

@dhruvmanila (Member) commented on May 28, 2024

Summary

This PR implements `TokenFlags`, which is stored on each `Token`; certain flags are set depending on the token kind. Currently, it's equivalent to `AnyStringFlags`, but in the future it will help provide additional information about certain tokens, such as unterminated strings, number kinds, etc.
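
For context, a minimal sketch of what such a flag set could look like using the `bitflags` crate (the flag names below appear in the diff further down; the representation and derives are assumptions):

```rust
use bitflags::bitflags;

bitflags! {
    /// Flags stored on each token; the string-related flags are set
    /// while lexing string prefixes. (Sketch: the real definition may
    /// carry more flags and a different underlying representation.)
    #[derive(Clone, Copy, Debug, PartialEq, Eq)]
    pub(crate) struct TokenFlags: u8 {
        /// The token is part of an f-string.
        const F_STRING = 1 << 0;
        /// The string has a `u`/`U` prefix.
        const UNICODE_STRING = 1 << 1;
        /// The string has a `b`/`B` prefix.
        const BYTE_STRING = 1 << 2;
        /// The string has a lowercase `r` prefix.
        const RAW_STRING_LOWERCASE = 1 << 3;
        /// The string has an uppercase `R` prefix.
        const RAW_STRING_UPPERCASE = 1 << 4;
    }
}
```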

The main motivation for adding `TokenFlags` is to store information related to the token that can then be used by downstream tools. Currently, this information only relates to string tokens. Downstream tools should not be allowed to access the flags directly, as they are an implementation detail. Instead, methods will be provided on `Token` to query certain information. An example can be seen in the follow-up PR (#11592).
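
For example, a query method on `Token` might look like the following (`is_f_string` and the private `flags` field are hypothetical names used for illustration):

```rust
impl Token {
    /// Returns `true` if this token is part of an f-string.
    /// (Hypothetical accessor: the flags themselves stay private, so
    /// downstream tools only see boolean queries like this one.)
    pub fn is_f_string(&self) -> bool {
        self.flags.contains(TokenFlags::F_STRING)
    }
}
```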

For example, the `Stylist` and `Indexer` use the string flags stored on the `String`/`FStringStart` tokens to get certain information. They will be updated to use these flags instead, thus removing the need for `Tok` completely.

Prior art in TypeScript: https://github.com/microsoft/TypeScript/blob/16beff101ae1dae0600820ebf22632ac8a40cfc8/src/compiler/types.ts#L2788-L2827

@dhruvmanila added the `parser` (Related to the parser) label on May 28, 2024
Comment on lines 37 to +40
#[deprecated]
pub fn lex(_source: &str, _mode: Mode) {}
#[deprecated]
pub fn lex_starts_at(_source: &str, _mode: Mode, _offset: TextSize) {}
@dhruvmanila (Member, Author):

These are here just to help me find the remaining references; they will be removed at the end.

Comment on lines +144 to +161
/// Try lexing the single character string prefix, updating the token flags accordingly.
/// Returns `true` if it matches.
fn try_single_char_prefix(&mut self, first: char) -> bool {
match first {
'f' | 'F' => self.current_flags |= TokenFlags::F_STRING,
'u' | 'U' => self.current_flags |= TokenFlags::UNICODE_STRING,
'b' | 'B' => self.current_flags |= TokenFlags::BYTE_STRING,
'r' => self.current_flags |= TokenFlags::RAW_STRING_LOWERCASE,
'R' => self.current_flags |= TokenFlags::RAW_STRING_UPPERCASE,
_ => return false,
}
true
}

/// Try lexing the double character string prefix, updating the token flags accordingly.
/// Returns `true` if it matches.
fn try_double_char_prefix(&mut self, value: [char; 2]) -> bool {
match value {
@dhruvmanila (Member, Author):

We could go from AnyStringPrefix -> TokenFlags, but that seemed like an unnecessary computation. So, instead of char -> AnyStringPrefix -> TokenFlags, we go directly from char -> TokenFlags.

Comment on lines +590 to +592
// Keep the current flags in sync throughout the f-string context.
self.current_flags = fstring.flags();

@dhruvmanila (Member, Author):

This means that all three f-string tokens (FStringStart, FStringMiddle, FStringEnd) will have the flag set without any additional cost.

@dhruvmanila changed the title from "WIP: Implement TokenFlags" to "Implement TokenFlags" on May 28, 2024
@dhruvmanila changed the title from "Implement TokenFlags" to "Implement TokenFlags stored on each Token" on May 28, 2024
@dhruvmanila marked this pull request as ready for review on May 28, 2024 at 10:10
@MichaReiser (Member) left a comment:

Nice!

I think it would be helpful to add some additional context to the PR summary about why we need this. What I understand is that we need it for some token-based lint rules, to remove the dependency on Tok.

Comment on lines +144 to +146
/// Try lexing the single character string prefix, updating the token flags accordingly.
/// Returns `true` if it matches.
fn try_single_char_prefix(&mut self, first: char) -> bool {
@MichaReiser (Member):

Nit: I think I would move this method after lex_identifier. I was surprised to find this as the very first non-infrastructure method. (Maybe we should reorganize the lexer methods in a separate PR once we're done with your parser work, e.g. move next_token to the top, followed by lex_token.)

@MichaReiser (Member):

Nit: You could consider changing the method to return Option<TokenFlags> to remove the side effect from it.

if let Some(prefix_flags) = self.try_single_char_prefix(first) {
    self.current_flags |= prefix_flags;
    self.lex_string(...)
} else {
    ...
}

Although it might not be worth it...

@dhruvmanila (Member, Author):

> Although it might not be worth it...

Yeah, not sure if it's worth doing.

> Nit: I think I would move this method after lex_identifier. I was surprised to find this as the very first non-infrastructure method (maybe we should reorganize the lexer methods in a separate PR once we're done with your parser work. E.g. move next_token to the top, followed by lex_token).

Yes, I'm going to follow up on this.

Base automatically changed from dhruv/token-kind-only to dhruv/parser-phase-2 on May 29, 2024 at 06:09
@dhruvmanila merged commit b3d094c into dhruv/parser-phase-2 on May 29, 2024 (5 of 18 checks passed)
@dhruvmanila deleted the dhruv/token-flags branch on May 29, 2024 at 06:13
dhruvmanila added a commit that referenced this pull request on May 30, 2024
dhruvmanila added a commit that referenced this pull request on May 31, 2024
dhruvmanila added a commit that referenced this pull request on Jun 3, 2024
dhruvmanila added a commit that referenced this pull request on Jun 3, 2024
## Summary

This PR updates the entire parser stack in multiple ways:

### Make the lexer lazy

* #11244
* #11473

Previously, Ruff's lexer would act as an iterator. The parser would
collect all the tokens in a vector first and then process the tokens to
create the syntax tree.

The first task in this project is to update the entire parsing flow to
make the lexer lazy. This includes the `Lexer`, `TokenSource`, and
`Parser`. For context, the `TokenSource` is a wrapper around the `Lexer`
that filters out the trivia tokens[^1]. Now, the parser asks the token
source for the next token, and only then does the lexer continue and
emit it. This means that the lexer needs to be aware of the "current"
token: when `next_token` is called, the current token is updated with
the newly lexed token.
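
A minimal sketch of this interaction, assuming a `TokenSource` that owns the `Lexer` (the field layout and trivia handling are simplified):

```rust
/// Wraps the lexer and filters out trivia tokens for the parser.
struct TokenSource<'src> {
    lexer: Lexer<'src>,
}

impl<'src> TokenSource<'src> {
    /// Lexes tokens on demand, skipping trivia, and returns the kind
    /// of the new current token.
    fn next_token(&mut self) -> TokenKind {
        loop {
            let kind = self.lexer.next_token();
            // Comments and non-logical newlines never reach the parser.
            if !matches!(kind, TokenKind::Comment | TokenKind::NonLogicalNewline) {
                return kind;
            }
        }
    }
}
```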

The main motivation for making the lexer lazy is to allow re-lexing a
token in a different context. This is going to be really useful for
making the parser error resilient. For example, currently the emitted
tokens remain the same even if the parser can recover from an unclosed
parenthesis. This is important because the lexer emits a
`NonLogicalNewline` in a parenthesized context but a normal `Newline` in
a non-parenthesized context. These different kinds of newlines are also
used to emit the indentation tokens, which are important for the parser
as they're used to determine the start and end of a block.

Additionally, this allows us to implement the following functionalities:
1. Checkpoint - rewind infrastructure: The idea here is to create a
checkpoint and continue lexing. At a later point, this checkpoint can be
used to rewind the lexer back to the provided checkpoint.
2. Remove the `SoftKeywordTransformer` and instead use lookahead or
speculative parsing to determine whether a soft keyword is a keyword or
an identifier.
3. Remove the `Tok` enum. The `Tok` enum represents the tokens emitted
by the lexer, but it contains owned data, which makes it expensive to
clone. The new `TokenKind` enum just represents the type of token, which
is very cheap.

This brings up the question of how the parser will get the owned value
that was stored on `Tok`. This is solved by introducing a new
`TokenValue` enum which only contains the subset of token kinds that
carry an owned value. It is stored on the lexer and is requested by the
parser when it wants to process the data. For example:
https://github.com/astral-sh/ruff/blob/8196720f809380d8f1fc7651679ff3fc2cb58cd7/crates/ruff_python_parser/src/parser/expression.rs#L1260-L1262
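
A simplified sketch of the idea (the variants and payload types shown here are illustrative, not the actual definition):

```rust
/// Owned data for the subset of token kinds that carry a value; all
/// other kinds are fully described by their `TokenKind` alone.
enum TokenValue {
    /// The token has no owned payload.
    None,
    /// Payload of a name/identifier token.
    Name(Box<str>),
    /// Payload of a string literal token.
    String(Box<str>),
    /// Payload of an integer literal token (illustrative type).
    Int(i64),
}
```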

[^1]: Trivia tokens are `NonLogicalNewline` and `Comment`

### Remove `SoftKeywordTransformer`

* #11441
* #11459
* #11442
* #11443
* #11474

For context,
https://github.com/RustPython/RustPython/pull/4519/files#diff-5de40045e78e794aa5ab0b8aacf531aa477daf826d31ca129467703855408220
added support for soft keywords in the parser, using infinite lookahead
to classify a soft keyword as a keyword or an identifier. This is a
brilliant idea, as it basically wraps the existing Lexer and works on
top of it, which means that the logic for lexing and re-lexing a soft
keyword remains separate. The change here is to remove the
`SoftKeywordTransformer` and let the parser determine this based on
context, lookahead, and speculative parsing.

* **Context:** The transformer needs to know whether the lexer is at a
statement position or a simple-statement position. This is because a
`match` token starts a compound statement while a `type` token starts a
simple statement. **The parser already knows this.**
* **Lookahead:** Now that the parser knows the context, it can perform a
lookahead of up to two tokens to classify the soft keyword. The logic
for this is described in the PRs implementing it for the `type` and
`match` soft keywords; a simplified sketch follows this list.
* **Speculative parsing:** This is where the checkpoint-rewind
infrastructure helps. For the `match` soft keyword, there are certain
cases that we can't classify based on lookahead alone. The idea here is
to create a checkpoint and keep parsing. Based on whether the parsing
was successful and which tokens are ahead, we can classify the remaining
cases. Refer to #11443 for more details.
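
As a rough illustration of the lookahead case for `type` (the helper and its names are hypothetical; the real classification handles more cases):

```rust
/// `type` starts a type alias statement only when followed by a name
/// and then `=` or `[` (PEP 695 type parameters); otherwise it's an
/// ordinary identifier, as in `type = 1`.
fn type_is_soft_keyword(parser: &Parser) -> bool {
    parser.peek_nth(1) == TokenKind::Name
        && matches!(parser.peek_nth(2), TokenKind::Equal | TokenKind::Lsqb)
}
```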

If the soft keyword is being parsed in an identifier context, it'll be
converted to an identifier and the emitted token will be updated as
well. Refer to
https://github.com/astral-sh/ruff/blob/8196720f809380d8f1fc7651679ff3fc2cb58cd7/crates/ruff_python_parser/src/parser/expression.rs#L487-L491.

The `case` soft keyword doesn't require any special handling because
it'll be a keyword only in the context of a match statement.

### Update the parser API

* #11494
* #11505

Now that the lexer is in sync with the parser, and the parser helps
determine whether a soft keyword is a keyword or an identifier, the
lexer cannot be used on its own. The reason is that it's not sensitive
to the context (which is correct). This means that the parser API needs
to be updated to not allow any access to the lexer.

Previously, there were multiple ways to parse the source code:
1. Passing the source code itself
2. Or, passing the tokens

Now that the lexer and parser work together, the API corresponding to
(2) cannot exist. The final API is described in the PR description of
#11494.
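
For illustration, consuming the final API might look something like this (the names `parse_module`, `syntax`, and `tokens` are assumptions based on the linked PR, not a definitive reference):

```rust
use ruff_python_parser::parse_module;

fn demo() {
    // Parsing produces the syntax tree and the token stream together;
    // there is no standalone lexer entry point anymore.
    let parsed = parse_module("x = 'hello'").expect("source should be valid");
    println!(
        "{} statements, {} tokens",
        parsed.syntax().body.len(),
        parsed.tokens().len()
    );
}
```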

### Refactor the downstream tools (linter and formatter)

* #11511
* #11515
* #11529
* #11562
* #11592

And, the final set of changes involves updating all references to the
lexer and the `Tok` enum. This was done in two parts:
1. Update all the references in a way that doesn't require any changes
from this PR, i.e., it can be done independently
	* #11402
	* #11406
	* #11418
	* #11419
	* #11420
	* #11424
2. Update all the remaining references to use the changes made in this
PR

For (2), various strategies were used:
1. Introduce a new `Tokens` struct which wraps the token vector and adds
methods to query a certain subset of tokens (see the sketch after this
list). These include:
    1. `up_to_first_unknown`, which replaces the `tokenize` function
    2. `in_range` and `after`, which replace the `lex_starts_at`
function, where the former returns the tokens within the given range
while the latter returns all the tokens after the given offset
2. Introduce a new `TokenFlags` which is a set of flags to query certain
information from a token. Currently, this information is limited to
string tokens but can be expanded to include other information in the
future as needed (#11578).
3. Move the `CommentRanges` to the parsed output because this
information is common to both the linter and the formatter. This removes
the need for the `tokens_and_ranges` function.
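
A sketch of the `Tokens` wrapper from item 1 above (only the method names come from the PRs; the bodies and the `Token` accessors are simplified assumptions):

```rust
/// Wraps the token vector produced by the parser.
pub struct Tokens {
    raw: Vec<Token>,
}

impl Tokens {
    /// All tokens up to (but not including) the first unknown token,
    /// replacing the old `tokenize` function.
    pub fn up_to_first_unknown(&self) -> &[Token] {
        let end = self
            .raw
            .iter()
            .position(|token| token.kind() == TokenKind::Unknown)
            .unwrap_or(self.raw.len());
        &self.raw[..end]
    }

    /// The tokens that fall within the given range.
    pub fn in_range(&self, range: TextRange) -> &[Token] {
        let start = self.raw.partition_point(|token| token.start() < range.start());
        let end = self.raw.partition_point(|token| token.end() <= range.end());
        &self.raw[start..end]
    }

    /// All tokens that start at or after the given offset, replacing
    /// the old `lex_starts_at` function.
    pub fn after(&self, offset: TextSize) -> &[Token] {
        let start = self.raw.partition_point(|token| token.start() < offset);
        &self.raw[start..]
    }
}
```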

## Test Plan

- [x] Update and verify the test snapshots
- [x] Make sure the entire test suite is passing
- [x] Make sure there are no changes in the ecosystem checks
- [x] Run the fuzzer on the parser
- [x] Run this change on dozens of open-source projects

### Running this change on dozens of open-source projects

Refer to the PR description to get the list of open source projects used
for testing.

Now, the following tests were done between `main` and this branch:
1. Compare the output of `--select=E999` (syntax errors)
2. Compare the output of default rule selection
3. Compare the output of `--select=ALL`

**Conclusion: all outputs were the same.**

## What's next?

The next step is to introduce the re-lexing logic and update the parser
to feed the recovery information to the lexer so that it can emit the
correct token. This moves us one step closer to having error resilience
in the parser and gives Ruff the ability to lint even if the source code
contains syntax errors.
Labels: parser (Related to the parser)

3 participants