wrangler tail fails with 10013: workers.api.error.unknown #1415
Comments
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Can I "bump" my own ticket? This is still an issue.
I keep seeing this one, and without spelunking in the API code it's hard to know what triggers this error code; it's probably best that a CF contributor does so and amends the error message to give the user better insight.
Also ran into this today. Some work on it would be appreciated.
Hi @jpwilliams, could you share your webpack config?
@nataliescottdavidson Sure. It's in a private repo so I can't link to it, but:

```js
// core
const path = require("path");

// config
const mode = process.env.NODE_ENV || "production";

module.exports = {
  output: {
    filename: "worker.js",
    path: path.join(__dirname, "dist"),
  },
  mode,
  resolve: {
    extensions: [".ts", ".js"],
    plugins: [],
  },
  module: {
    rules: [
      {
        test: /\.ts$/,
        loader: "ts-loader",
        options: {
          transpileOnly: true,
        },
      },
    ],
  },
};
```
I managed to get around this by fully deleting the worker in Cloudflare and then republishing it. Thankfully I only hit this on a development worker and not a production one, but this at least fixed it for the dev one. Note that I made no changes to webpack or the actual worker between the delete and re-publish.
I experienced the same thing as @cdloh above, though it took me a little while longer to try this, as mine was a production worker when it began displaying this issue. Even if I update the original worker with new code I continue to see the same issue, but publishing it under a new name gave me immediate access to the logs for that newly-published worker. This would lead me to believe that it's not a configuration issue, but something internal on Cloudflare's side that applies to individual workers.

Sorry - I would've posted this with the initial issue but only tried it myself a few weeks ago and didn't think to update it. @cdloh's answer is a wonderful workaround if anyone else is currently experiencing this.
Hey @jpwilliams, to diagnose the error I need to take a look at the logs. Could you email your account id, script name, and zone id (or just your wrangler.toml) to wrangler@cloudflare.com?
Of course, @nataliescottdavidson. I've sent an email over with my details. Shout if you need anything else!
@nataliescottdavidson I'm afraid my mail to wrangler@cloudflare.com failed.
@jpwilliams gah, I'm sorry about that. I'm new to the team and still figuring out some processes. I've changed the permissions to allow non-members to post to the group. Would you mind resending the email?
No worries whatsoever, @nataliescottdavidson! I've resent the email and it looks like it's gone through fine this time. 👍🏻
@Erisa 🙏
@nataliescottdavidson I'm experiencing the same error. I'm very reluctant to try to delete and recreate the worker, as it's in production. Is there any way you could assist me?
I just experienced the same issue - would be good to have this resolved...
Any update on this issue? I deleted the worker, but the error persists.
@venom90, could you retry and let me know if the error is persisting? We had an incident with another service today that caused 10013s; it is now resolved.
I also had this issue today with `Error: Something went wrong! Status: 500 Internal Server Error`, details:

```json
{
  "result": null,
  "success": false,
  "errors": [
    {
      "code": 10013,
      "message": "workers.api.error.unknown"
    }
  ],
  "messages": []
}
```

However, it's no longer present. @nataliescottdavidson, at least for me it's solved now.
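For anyone scripting against the API, the envelope above is Cloudflare's standard v4 response shape (`success`, plus an `errors` array of `code`/`message` pairs). A minimal sketch of pulling readable error strings out of such a response (the response object here is hard-coded for illustration, not fetched):

```javascript
// Hard-coded example of a Cloudflare v4 API error envelope,
// matching the 10013 response posted above.
const response = {
  result: null,
  success: false,
  errors: [{ code: 10013, message: "workers.api.error.unknown" }],
  messages: [],
};

// Collect "code: message" strings for every reported error;
// an empty array means the request succeeded.
function describeErrors(res) {
  if (res.success) return [];
  return res.errors.map((e) => `${e.code}: ${e.message}`);
}

console.log(describeErrors(response));
// e.g. [ '10013: workers.api.error.unknown' ]
```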
This was fixed in our API. |
This just happened for me; however, it was because I already had another worker using the same routes, which then clashed. Once I removed the routes on the old worker, it published fine.
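As a sketch of the clash described above, two workers whose `wrangler.toml` files claim the same route pattern will conflict; the names, route, and IDs below are hypothetical:

```toml
# worker-a/wrangler.toml (hypothetical names and IDs)
name = "worker-a"
type = "webpack"
route = "example.com/api/*"
zone_id = "0000000000000000"
```

```toml
# worker-b/wrangler.toml — while worker-a still owns the same
# route, publishing this second worker clashes with it
name = "worker-b"
type = "webpack"
route = "example.com/api/*"
zone_id = "0000000000000000"
```

Removing (or changing) the `route` on the old worker before publishing the new one avoids the clash.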
🐛 Bug Report

Environment

- `rustc -V`: not installed
- `node -v`: v12.13.0
- `wrangler -V`: wrangler 1.10.1
- `wrangler.toml`

Steps to reproduce

`wrangler tail --env prod`

What did you expect to see?

Logging from the `prod` environment.

What did you see instead?

Extra info

I had an attack on the zone that this worker is on, so had to turn on "Attack Mode". The `prod` environment logs were working previous to that, but even though "Attack Mode" is now off, I can't access them.

I'm assuming this has something to do with Attack Mode, but am unsure due to the generic error! What can I do to help debug on my side?
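For context on the `--env prod` flag used in the report: `wrangler tail --env prod` targets the script deployed under an `[env.prod]` section of `wrangler.toml`. A minimal hypothetical sketch of such a file (wrangler 1.x style; all names and IDs invented):

```toml
# Hypothetical wrangler.toml with a prod environment
name = "my-worker"
type = "webpack"
account_id = "0000000000000000"
workers_dev = true

[env.prod]
# `wrangler tail --env prod` tails this environment's deployed worker
route = "example.com/*"
zone_id = "0000000000000000"
```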