TFVC Build directory override not taking hold on hosted agent in all accounts. #3632
Hey @jessehouwing, as I see, the fix you've mentioned wasn't included in agent release 2.195.0; it was merged after we had started the rollout of the new version. The fix should be available with the next agent release. You can track the changes included in a particular agent release in the release notes here.
Can you explain why my
@jessehouwing Hmm, that sounds a little strange. Am I right that we are talking about the fix that was introduced by this PR?
My organisation is … I've tested the same TFVC-based pipeline in these 2 accounts and they both show … I tried to find a way to overrule the … It would be nice if we could set some of the agent config elements as a capability on a hosted pool, but I'm not aware of any way to do that (yet).
And yes, I'm looking at that PR; as far as I can tell, that's the code that will set a different workspace name and build folder for TFVC workflows.
Log from agent init:
Then in Checkout it's clearly creating
This is the behavior we need for the client as well.
And from the client's environment:
And in checkout:
Hi @jessehouwing, the recent fix for TFVC has not rolled out yet. The behavior you observe could be related to the condition for assigning the agent id as the build directory name (exactly one TFVC repository in a pipeline), or to the spontaneous nature of the original issue.
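For readers following along, the condition being discussed can be sketched roughly like this. This is a Python sketch of my reading of the behavior described in the thread, not the agent's actual C# code; the function and parameter names are my own:

```python
def tfvc_build_directory(repo_count: int, repo_type: str, is_hosted: bool,
                         agent_id: int, config_id: int = 1):
    """Sketch: on a hosted agent with exactly one TFVC repository, use the
    agent id as the build directory so parallel jobs don't collide on the
    server-side TFVC workspace name ws_1_<agentId>."""
    if is_hosted and repo_count == 1 and repo_type == "tfvc":
        # Override active: D:\a\<agentId>\s and workspace ws_<agentId>_<agentId>
        return str(agent_id), f"ws_{agent_id}_{agent_id}"
    # Classic behavior: D:\a\1\s and workspace ws_1_<agentId>
    return str(config_id), f"ws_{config_id}_{agent_id}"
```

With the override active, the directory and workspace name are both derived from the agent id, which matches the `ws_AgentId_AgentId` naming the reporter expects to see.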
The client's builds have only 1 TFVC repo in their build definitions, which is always the case for TFVC builds as far as I can tell. In that case, what bug is causing that line not to work for our builds?! The pipeline clearly has only a single repo and its type is TFVC:
And the agent is of type Hosted:
It's too predictably true for my account to be of a spontaneous nature. I've tried creating new builds and all of them fail this important check and allocate … Is it possible this happens on environments that were imported? My accounts are all created through the portal; their account is a recent import. Could that somehow influence the repository count?
It looks like the pending PR does influence this behavior; the value of `shouldOverrideBuildDirectory` was previously discarded in a merge situation:
Isn't this an example of having 2 repos in one build definition? (BTW: I work for the client @jessehouwing refers to in this issue.)
Let me close this item since the changes mentioned in this ticket have already been deployed. Feel free to ping in case of any questions.
Still failing with 2.196.2; this is not fixed.
@EzzhevNikita could you please check? |
@jessehouwing Could you please share an example of the YAML where you have seen the issue?
The actual build @jessehouwing shared a screenshot of is JSON, but this is the YAML export. Please note the `resources`:

```yaml
resources:
  repositories:
  - repository: self
    type: git
    ref: $/[redacted]/
jobs:
- job: Job_1
  displayName: Agent job 1
  strategy:
    parallel: 20
  pool:
    vmImage: windows-2019
  steps:
  - checkout: self
  - task: CmdLine@2
    displayName: Command Line Script
...
```
This is the JSON of the task, showing it does a trivial command-line echo:

```json
"steps": [
  {
    "environment": {},
    "enabled": true,
    "continueOnError": false,
    "alwaysRun": false,
    "displayName": "Command Line Script",
    "timeoutInMinutes": 0,
    "retryCountOnTaskFailure": 0,
    "condition": "succeeded()",
    "task": {
      "id": "d9bafed4-0b18-4f58-968d-86655b4d2ce9",
      "versionSpec": "2.*",
      "definitionType": "task"
    },
    "inputs": {
      "script": "echo Write your commands here\n\necho Hello world\n",
      "workingDirectory": "",
      "failOnStderr": "false"
    }
  }
],
```
@frankvaneykelen-work Could you please also attach a log of the failed pipeline? |
(Attached logs: Agent job 1 8, Agent job 1 15, Agent job 1 19)
@EzzhevNikita please let me know if you need the debug logs of a succeeded run too |
The trick of that build is to make sure it spends some time performing the checkout. I think I checked in a folder with a couple of MB of random PDFs from my downloads folder. The build itself doesn't matter. As long as the organisation is able to spin up 20 parallel jobs on the hosted pool, the reproduction time is near instant. And from the logs it's clear that all runs still create workspace `ws_1_agentid` instead of `ws_agentid_agentid`, and they check out to `/a/1` instead of `/a/agentid`.
This is the minimal repro pipeline to reproduce the issue. Make sure the parallel count is a significantly high number to increase the chances it will reproduce; for the client this is at 25. And as you can see in the screenshots in the original report, for some reason the bug doesn't surface across all my accounts: using the same JSON import, I'm seeing different results. I've been unable to explain these differences.
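For reference, a sketch of what such a minimal repro could look like, reconstructed from the YAML export earlier in the thread; this is my reconstruction, not the exact pipeline used, and the job name is a placeholder:

```yaml
jobs:
- job: Repro
  strategy:
    parallel: 25        # high parallelism makes the ws_1_<agentId> collision far more likely
  pool:
    vmImage: windows-2019
  steps:
  - checkout: self      # TFVC checkout; failures show as "The working folder ... is already in use"
```

The checkout just needs to take long enough (a few MB of checked-in files) that two parallel jobs overlap while holding the same workspace name.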
Ship! Ship! It! 🛳️ Is this still able to ship in this cycle, say 2.198.3? |
@jessehouwing The fix for this issue will be included in the next agent release 2.199. We are planning to start it this week. |
What's the hold-up with 2.200? I really don't see why this issue is taking so long to deploy, even after multiple high-level support issues. It could easily have been merged into 2.198.x; it seems 2.199 was skipped altogether, and now 2.200 has had 3 weeks to roll out since 2.198 but hasn't.
@jessehouwing Deployment of agent version 2.200.0 completed, could you please check if the issue was resolved? |
Pinged the client. |
Our agents are now running on 2.200.2, and our Multi-Agent (20x) Parallelism Test build, which used to consistently fail with "The working folder `D:\a\1\s` is already in use" errors in some of the Checkout tasks, has now completed successfully twice in a row, so I am quite confident that the issue has indeed been fixed! 🎉
Agent Version and Platform
Version of your agent? 2.195
OS of the machine running the agent? Windows
Azure DevOps Type and Version
dev.azure.com hosted pool
What's not working?
@anatolybolshakov I'm seeing agent 2.195 rolling out with a number of fixes for TFVC on the hosted agent (including a couple of my own 🚀🎉), but I'm not (yet) seeing the hosted agents change the workspace folder from `ws_1_AgentId` to `ws_AgentId_AgentId`. Is that a separate setting rolling out to a hosted pool near me soon? I'm seeing it on my own account (jessehouwing-dev), but not yet on my client's account, with the same pool settings on a new TFVC build definition.
I see a setting to turn off the behaviour, but none to force it on...
azure-pipelines-agent/src/Agent.Sdk/Knob/AgentKnobs.cs
Lines 269 to 274 in f384b2b
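Agent knobs like the one linked above are generally driven by environment or pipeline variables, so if a force-on knob existed it could presumably be toggled per pipeline. The variable name below is purely a placeholder I made up to illustrate the shape; the real knob name is whatever is defined in `AgentKnobs.cs` at the lines linked above:

```yaml
variables:
  # HYPOTHETICAL variable name -- check AgentKnobs.cs for the actual knob
  AGENT_TFVC_BUILD_DIRECTORY_OVERRIDE: 'true'
```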
Attached screenshots from their account and mine showing the behaviour isn't yet enabled on theirs. Both accounts are in West-Europe...
Working on my account:
Not working on Client's account:
I can share more account details and logs from the client privately.