
Conversation

akshanshbhatt
Collaborator

The tokenizer was throwing an error when the last line of the source code was dedented to a level shallower than the previous line's indentation (but not all the way back to column zero). I debugged and fixed it.

Closes #358
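To illustrate the class of bug being fixed, here is a minimal sketch of the classic stack-based INDENT/DEDENT scheme (this is a generic Python illustration, not the actual LPython tokenizer code). The tricky case is exactly the one described above: when a line dedents to a shallower, previously seen level, every stack entry deeper than the new level must be popped, with one DEDENT emitted per pop.

```python
def tokenize_indentation(lines):
    # Hypothetical sketch of stack-based dedent handling.
    indent_stack = [0]          # stack of active indentation widths
    tokens = []
    for line in lines:
        if not line.strip():
            continue            # blank lines carry no indent info
        width = len(line) - len(line.lstrip(" "))
        if width > indent_stack[-1]:
            indent_stack.append(width)
            tokens.append("INDENT")
        else:
            # Pop every level deeper than the current width,
            # emitting one DEDENT per popped level.
            while indent_stack[-1] > width:
                indent_stack.pop()
                tokens.append("DEDENT")
        tokens.append("LINE")
    # At end of input, close any still-open blocks.
    while indent_stack[-1] > 0:
        indent_stack.pop()
        tokens.append("DEDENT")
    return tokens

src = [
    "if x:",
    "    if y:",
    "        a = 1",
    "    b = 2",   # last line dedents by one level, not to zero
]
```

Running `tokenize_indentation(src)` yields one DEDENT for leaving the inner block and a final DEDENT at end of input; forgetting either pop is the kind of failure the issue reports.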

@akshanshbhatt changed the title to "Fix the dedent token generation in new tokenizer" Apr 12, 2022

certik commented Apr 13, 2022

@akshanshbhatt thanks for the PR! @Thirumalai-Shaktivel would you mind reviewing it please?


@Thirumalai-Shaktivel (Collaborator) left a comment


Thanks for this PR @akshanshbhatt.
Yup, I just noticed that there is no need to sum up all the elements' values in indent_length, since the last element of indent_length already holds the cumulative indent value.
I left some changes.
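A small illustration of the point above (hypothetical names; assuming indent_length is kept as a stack of cumulative indentation widths, as the comment suggests): the current total indentation is simply the last element, so re-summing the whole stack is redundant.

```python
# Hypothetical stack of *cumulative* indentation widths
# after entering two nested blocks of 4 spaces each.
indent_length = [0, 4, 8]

# The current indentation is just the last element;
# sum(indent_length) would give 12 and double-count levels.
current_indent = indent_length[-1]
```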


@Thirumalai-Shaktivel (Collaborator) left a comment


LGTM, Thanks!

@certik certik merged commit 36a1b57 into lcompilers:main Apr 13, 2022

certik commented Apr 13, 2022

Thanks @akshanshbhatt for the PR and thanks @Thirumalai-Shaktivel for the review!

Development

Successfully merging this pull request may close these issues.

New tokenizer fails when there are multiple indentations
3 participants