
[QUESTION] Parsing comments/documentation and attaching them to function declarations #79

Closed
xDGameStudios opened this issue Dec 10, 2019 · 4 comments


xDGameStudios commented Dec 10, 2019

To all of you parser developers! I'm making a parser for a custom language. This language uses JSDoc as a way of adding function documentation and metadata. My idea was to tokenize the input with the following types of comments:

// my line comment
/* my multi line comment */  
/// this is a documentation comment

Then I would strip them from the list...

var comments = tokens.Where(x => x.Kind == Kind.SingleLComment || x.Kind == Kind.MultiLComment);
var documentation = tokens.Where(x => x.Kind == Kind.Documentation);
var code = tokens.Where(x => !comments.Contains(x) && !documentation.Contains(x));

Now the code tokens can be parsed normally, as intended...

In the parser I then want to be able to check, at any time, whether there is a documentation token just before a declaration and attach it to the declaration model as metadata... for example:

ParseFunctionDeclaration()
{
    var functionKeyword = MatchToken(SyntaxKind.FunctionKeyword);
    var identifier = MatchToken(SyntaxKind.IdentifierToken);
    var openParenthesisToken = MatchToken(SyntaxKind.OpenParenthesisToken);
    var parameters = ParseParameterList();
    var closeParenthesisToken = MatchToken(SyntaxKind.CloseParenthesisToken);
    var meta = functionKeyword.TryMatchPrevious(SyntaxKind.Documentation, out var token) ? token : null;
}

Even though the tokens are no longer next to each other (I stripped them from the original list), I was thinking of keeping them linked by giving each token a pointer to the previous token, set during the tokenization process.

My problem is that I may be breaking the model structure here... the tokens would have access to each other and would need a TryMatchPrevious method, which doesn't really make sense because a token shouldn't have matching functionality.

On the other hand, I could just put the function in the Parser and have its signature be:

TryMatchPrevious(kindToMatch, out var token);
or even
TryMatchPrevious(startToken, kindToMatch, out var token);
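
For what it's worth, here is a rough sketch of what I have in mind for that parser-side version. All names are placeholders, and it assumes each token keeps the Previous link into the original, unstripped token list that I mentioned above:

private bool TryMatchPrevious(SyntaxToken startToken, SyntaxKind kindToMatch, out SyntaxToken token)
{
    // Walk backwards through the original token chain via the Previous link.
    var current = startToken.Previous;
    while (current != null)
    {
        if (current.Kind == kindToMatch)
        {
            token = current;
            return true;
        }

        // Plain comments between the documentation and the declaration are
        // skipped; anything else means there is no documentation attached.
        if (current.Kind != SyntaxKind.SingleLComment && current.Kind != SyntaxKind.MultiLComment)
            break;

        current = current.Previous;
    }

    token = null;
    return false;
}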

What do you think of this approach? Am I overthinking it? Is this too much? Is there a simpler way of implementing this?



terrajobst commented Mar 26, 2020

@xDGameStudios

That's a good problem to think through. Here is what I'd do:

Don't model whitespace and comments as tokens. Instead, follow the Roslyn model where those are considered trivia. Trivia is associated with a token. A token can have leading and trailing trivia. Trailing trivia only goes until the end of line, all subsequent trivia is considered leading trivia for the following token. Therefore, only the first token on a line can have leading trivia. Comments at the bottom of a file are leading trivia of the synthetic end-of-file token.

Why this way? Because it follows how we think of comments:

  • If a comment is on a line by itself, it usually refers to the following code.
  • If a comment is on a line with code, it usually refers to the code before it.

Let's look at a few examples:

// Comment 1
function Foo(a: int, // Comment 2
             b: int) // Comment 3
{
    let x = a /* Comment 4 */ + /* Comment 5 */ b
    /* Comment 6 */ let y = x // Comment 7
    // Comment 8
}

Comment      Leading/Trailing   Token
Comment 1    Leading            function
Comment 2    Trailing           ,
Comment 3    Trailing           )
Comment 4    Trailing           a
Comment 5    Trailing           +
Comment 6    Leading            let
Comment 7    Trailing           x
Comment 8    Leading            }

It's worth noting that comments are just one kind of trivia. Others are:

  • whitespace & line breaks
  • preprocessor directives
  • tokens that were skipped by the parser (this ensures the SyntaxTree can always be used to print the input, character for character, which we currently can't do)

Rough sketch:

partial class SyntaxToken
{
    public ImmutableArray<SyntaxTrivia> LeadingTrivia { get; }
    public ImmutableArray<SyntaxTrivia> TrailingTrivia { get; }
}

class SyntaxTrivia
{
    public SyntaxKind Kind { get; }
    public string Text { get; }
}
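
To illustrate the end-of-line rule, the lexer could attach trivia roughly like this. ReadToken, ReadSingleTrivia, WithTrivia, and LineBreakTrivia are made-up names for this sketch, not a concrete API:

partial class Lexer
{
    public SyntaxToken Lex()
    {
        // Everything between the previous token's line and this token is
        // leading trivia.
        var leadingTrivia = ReadTrivia(isTrailing: false);

        var token = ReadToken();

        // Trailing trivia only runs to the end of the current line; the line
        // break itself becomes the last piece of trailing trivia.
        var trailingTrivia = ReadTrivia(isTrailing: true);

        return token.WithTrivia(leadingTrivia, trailingTrivia);
    }

    private ImmutableArray<SyntaxTrivia> ReadTrivia(bool isTrailing)
    {
        var builder = ImmutableArray.CreateBuilder<SyntaxTrivia>();

        while (true)
        {
            // ReadSingleTrivia would return whitespace, line breaks, comments,
            // etc., or null when the next character starts a real token.
            var trivia = ReadSingleTrivia();
            if (trivia == null)
                break;

            builder.Add(trivia);

            // After a line break, anything that follows belongs to the next
            // token as leading trivia.
            if (isTrailing && trivia.Kind == SyntaxKind.LineBreakTrivia)
                break;
        }

        return builder.ToImmutable();
    }
}

With that split, Comment 1 from the example above ends up as leading trivia of function and Comment 2 as trailing trivia of the comma, matching the table.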

Now add a few APIs to SyntaxNode:

  • Descendents(). Returns all nodes and tokens by recursively walking the current node and all of its children (one possible shape is sketched after this list).
  • GetFirstToken() => Descendents().OfType<SyntaxToken>().First();
  • GetLastToken() => Descendents().OfType<SyntaxToken>().Last();
  • GetLeadingTrivia() => GetFirstToken().LeadingTrivia;
  • GetTrailingTrivia() => GetLastToken().TrailingTrivia;
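
For Descendents(), one possible shape, assuming SyntaxToken derives from SyntaxNode and SyntaxNode exposes a GetChildren() method returning its immediate child nodes and tokens:

partial class SyntaxNode
{
    // Pre-order walk that yields this node and every node/token below it,
    // in document order.
    public IEnumerable<SyntaxNode> Descendents()
    {
        yield return this;

        foreach (var child in GetChildren())
        foreach (var descendent in child.Descendents())
            yield return descendent;
    }
}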

With that, your problem becomes pretty straightforward. When binding a function declaration, you do something like this:

void BindFunctionDeclaration(FunctionDeclarationSyntax syntax)
{
    var comments = new List<string>();
    var leadingTrivia = syntax.GetLeadingTrivia();

    foreach (var trivia in leadingTrivia)
    {
        if (trivia.Kind == SyntaxKind.DocumentationCommentTrivia)
        {
            // Strip the leading "///" and surrounding whitespace
            var text = trivia.Text.Substring(3).Trim();
            comments.Add(text);
        }
    }

    var summary = string.Join(Environment.NewLine, comments);

    ...

    var symbol = new FunctionSymbol(..., summary);

    ...
}

Does this make sense?

@terrajobst

I'll implement this on stream, which is tracked by #87 & #101.
