Support templated expressions in Azure pipelines files #29
Comments
Is it not possible for you to run whatever preprocessing you want in a separate script, and then pass the result to
Can you point me at where this happens? I'm curious about what they're doing, but wasn't able to find anything obvious in the azure pipelines vscode repo which does this. My initial reaction to this is that I don't want to support it. That said, if we can specify this in a way which is simple enough to implement, and the azure use-case is compelling enough, I could possibly be convinced that it's a good idea.
The "Each" keyword is where I ran into difficulty. I've been using this to preprocess the files, and then realized that schema validation was also occurring on their end, which was a nice side-effect. https://docs.microsoft.com/en-us/rest/api/azure/devops/pipelines/runs/run-pipeline?view=azure-devops-rest-6.0 I'd prefer not to use a REST API, and to use your tool instead, though.
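For reference, a request against that Runs endpoint can be built roughly as below. This is a sketch only: the `previewRun` field and exact URL shape are my reading of the linked REST docs and should be verified there, and `myorg`, `myproject`, the pipeline id, and the PAT are all placeholders.

```python
import base64
import json
import urllib.request

def build_preview_request(org: str, project: str, pipeline_id: int, pat: str):
    """Build (but do not send) a POST to the 'Runs - Run Pipeline' endpoint.

    The previewRun field is an assumption based on the linked docs; it asks
    the service to expand/validate the pipeline without actually running it.
    """
    url = (
        f"https://dev.azure.com/{org}/{project}"
        f"/_apis/pipelines/{pipeline_id}/runs?api-version=6.0-preview.1"
    )
    body = json.dumps({"previewRun": True}).encode()
    # Azure DevOps PATs are sent as basic auth with an empty username.
    auth = base64.b64encode(f":{pat}".encode()).decode()
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("Authorization", f"Basic {auth}")
    return req

req = build_preview_request("myorg", "myproject", 12, "my-pat")
# urllib.request.urlopen(req) would actually submit it; not done here.
print(req.full_url)
```

`urlopen(req)` would return the service's response, which (per the docs linked above) includes the fully expanded YAML when previewing.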
I haven't had a chance to look into this more deeply, but let me see if I'm understanding you correctly. Can you provide a simplified example of a file which is failing to validate under the schema, but which validates correctly under the
Any file that utilizes the `each` keyword. E.g.
I asked in the pipelines-vscode repo about how they handle this, and they directed me to the appropriate source, in the pipelines language-server code. I see there are helpers with which their language-server code modifies any expressions. Basically, in the case of the `each` keyword,

```yaml
jobs:
- ${{ each val in parameter.vals }}:
  - job: foo
    steps:
    - bash: echo ${{ val }}
```

is becoming

```yaml
jobs:
- job: foo
  steps:
  - bash: echo ${{ val }}
```

which will pass schema validation. There's a similar transform, which I haven't quite grasped yet, for arrays with expressions like conditionals. I will have to think more about this, but I understand this use-case a lot better now. I'm thinking about possibly implementing a new feature to transform data prior to passing it to schema validation.
This would be built into the pre-commit hook, and I would have to give some thought to how to make it extensible. @davewhitters, does that approach, having a transform which does the same work as what Microsoft already does for VSCode, sound satisfactory to you?
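For what it's worth, the core of such a transform can be sketched in a few lines. This is a hypothetical sketch, not the eventual implementation: `expand_each`, the regex, and the sample data are all made up for illustration.

```python
import re

# A list item whose only key is a "${{ each ... }}" expression gets replaced
# by the items nested under it, mimicking the language-server transform
# described above.
EACH_KEY = re.compile(r"^\$\{\{\s*each\s+.*\}\}$")

def expand_each(node):
    """Recursively hoist children of each-expression keys into the parent list."""
    if isinstance(node, dict):
        return {key: expand_each(value) for key, value in node.items()}
    if isinstance(node, list):
        out = []
        for item in node:
            if isinstance(item, dict) and len(item) == 1:
                key, value = next(iter(item.items()))
                if isinstance(key, str) and EACH_KEY.match(key):
                    children = value if isinstance(value, list) else [value]
                    out.extend(expand_each(children))
                    continue
            out.append(expand_each(item))
        return out
    return node

pipeline = {
    "jobs": [
        {"${{ each val in parameter.vals }}": [
            {"job": "foo", "steps": [{"bash": "echo ${{ val }}"}]},
        ]},
    ],
}
print(expand_each(pipeline))
# → {'jobs': [{'job': 'foo', 'steps': [{'bash': 'echo ${{ val }}'}]}]}
```

The output matches the hand-expanded version above and would then be handed to the schema validator instead of the raw parse.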
This approach sounds great; it should satisfy the use-case I described.
I have a version of this up and running now, and I've worked out enough issues with it that I think I'll be comfortable publishing it soon, after a bit more testing. There was one thing that I wanted to note about the sample pipeline you provided:

```yaml
---
parameters:
- name: vals
  type: object
  default:
  - 1
  - 2
jobs:
- ${{ each val in parameter.vals }}:
  - job:
    displayName: Job
    steps:
    - bash: echo ${{ val }}
```

This fails validation with an error even on the branch of
At first I thought there was something wrong with the code, but then I found the explanation when I added an option to show all errors. Because the azure pipelines schema uses many nested `anyOf` and `oneOf` clauses, there are a lot of failed matches (and therefore distinct errors). To pull the relevant bits from that:
That is, the schema from azure pipelines only lists strings here, so the integer values don't match any branch. The pipeline will pass if we explicitly set the values as strings:

```yaml
---
parameters:
- name: vals
  type: object
  default:
  - '1'
  - '2'
jobs:
- ${{ each val in parameter.vals }}:
  - job: 'foo'
    displayName: Job
    steps:
    - bash: echo ${{ val }}
```
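As a toy model of why so many distinct errors come out of the nested `anyOf`/`oneOf` clauses: with `anyOf`, a value must satisfy at least one branch, and when none match, every branch reports its own failure. This is illustrative only; it is neither the real azure-pipelines schema nor a real JSON Schema validator, and the branch names are invented.

```python
def check_any_of(value, branches):
    """Return [] if any branch accepts the value, else one error per branch."""
    errors = []
    for name, predicate in branches:
        if predicate(value):
            return []
        errors.append(f"failed branch: {name}")
    return errors

# Two made-up branches standing in for the schema's nested alternatives.
branches = [
    ("string job name", lambda v: isinstance(v, str)),
    ("template expression", lambda v: isinstance(v, str) and v.startswith("${{")),
]

print(check_any_of(1, branches))      # every branch fails -> two distinct errors
print(check_any_of("foo", branches))  # first branch matches -> no errors
```

An integer default therefore surfaces one error per failed alternative, which is why showing all errors produced such a long list.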
I've just released v0.11.0 with support for this. There isn't very much I can do about the fact that different yaml parsers may handle ambiguous parses differently, or about cases like the above where an integer is probably accepted by the service but is not listed in the schema. However, if you run into issues using it which you think I can solve, please let me know!
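For anyone finding this later, wiring the release into a pre-commit config looks roughly like the fragment below. Every value in it is a placeholder: check this project's README for the actual repository URL, tag, and hook id.

```yaml
# Hypothetical .pre-commit-config.yaml fragment; OWNER/REPO, the rev, and
# the hook id are placeholders for this project's real values.
repos:
- repo: https://github.com/OWNER/REPO
  rev: v0.11.0
  hooks:
  - id: check-azure-pipelines
```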
Some lines in a yml/json file may not adhere to a schema. (Azure DevOps preprocesses their yml pipelines prior to running their schema against the .yml files.)
Is it possible to just add a comment syntax on a line to avoid flagging that line as an error?
E.g.
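The requested behavior, illustrated with a made-up marker: no such marker exists in any tool, and the `# schema-ignore` name below is invented purely to show the idea.

```python
# A real version would need to be YAML-aware, since dropping a line that has
# nested children would break the parse; this naive sketch only filters
# self-contained lines.
def strip_ignored_lines(text: str, marker: str = "# schema-ignore") -> str:
    kept = [line for line in text.splitlines() if marker not in line]
    return "\n".join(kept)

doc = "jobs:\n- job: foo  # schema-ignore\n- job: bar"
print(strip_ignored_lines(doc))
```

The marked line is removed before the document reaches validation, so it can never be flagged.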