Updating document - 'Received RST_STREAM with code 1' #2345
I found a few problems with this issue:

We're seeing this same issue on Firestore batch `commit()` calls as well.
This issue was caused by an error on my end. My cloud function was being used as a webhook, and the first thing it did was return a response. It looks like doing this tells the runtime it is safe to tear down the instance, so by the time I updated my database, the connection was closed. This function had been doing this for years without any issues; the problem only appeared after moving to the new Gen 2 cloud functions, so it seems the new runtime tears down the instance faster. I resolved this by publishing my data to a Pub/Sub topic and only then returning a 200 to the webhook. EDIT: Given many others are facing the same problem, I have reopened the issue.
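The Pub/Sub hand-off described above can be sketched roughly as follows. This is a hypothetical illustration, not the poster's actual code: `makeWebhookHandler` is a made-up name, and the `publish` function is injected (e.g. a wrapper around `pubsub.topic('webhook-events').publishMessage({ json: payload })`) so the handler stays framework-agnostic.

```javascript
// Hypothetical sketch of the workaround: hand the webhook payload to
// Pub/Sub first, and only send the 200 once the hand-off has completed,
// so the Gen 2 runtime has no reason to tear the instance down mid-write.
function makeWebhookHandler(publish) {
  return async (req, res) => {
    try {
      await publish(req.body); // enqueue before acknowledging the webhook
      res.status(200).send('ok');
    } catch (err) {
      res.status(500).send('failed to enqueue');
    }
  };
}
```

The actual database write then happens in a separate Pub/Sub-triggered function, outside the webhook's request lifecycle.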
@wagerlab Any updates? We have the same issue.
I have been seeing this issue since Oct 16 as well.
I am seeing this as well. Redeploying my functions temporarily fixes them, and then they break again.
Same here, using GCP Cloud Run; any solutions? Edit: to give more context, this happens to us while performing multiple writes using promises. It's very inconsistent and might happen once or twice a day, or literally every single time.
Happens for us since 9th Oct 2023 as well.
Is this even being looked at? It has the "needs-triage" label, but it's also already closed; no idea how this is handled here.
We haven't found a solution to this issue yet. We've attempted several methods, but none have resolved the problem:
Interestingly, everything operates flawlessly in our development project. The only difference is that the development project has a smaller User collection. I'm starting to suspect that this might be related to some undocumented limitation in Firestore... @ryan-saffer can we reopen this issue?
Given many people are reporting the same problem, I am reopening the issue.
Got the same issue:

```
at WriteBatch.commit
code: 13
```
Coming here to update: also experiencing the same error out of the blue, running a Node.js process locally on a server. It happened during a large series of updates, and only for a small number of the updated docs. Running the process again with a smaller number had no issue.
I have a node script I run locally to test, and a cloud function that runs on an interval. Both are experiencing this issue. I send a bunch of `doc(id).update(obj)` calls with a bunch of promises. I've lost 24 hours to this already with no rhyme or reason. Sometimes 100 might go through, and then with the same data sometimes 0. Is there any kind of rate limit with a cool-off period buried somewhere? `13 INTERNAL: Received RST_STREAM with code 1`, in us-central1.
This issue started happening for me today. In my cloud function I save a bunch of write batches and then return a 200 success call, so it can't be that the instance is being torn down, because I send the 200 only after all batches are written. However, as soon as I try to commit the batches I get this error.
Same issue here.
Same issue for our project.
Same issue, huge headache.
Same here. Started in all our Firebase projects at the same time, between 16:30 and 18:30 BST on 20th October, europe-west2 region. It's not connected to any deployments from us, or changes to our code. Functions that are failing:
All failing with this error: `13 INTERNAL: Received RST_STREAM with code 1`
Any help would be very much appreciated.
I managed to work around the issue (for now) by changing this particular unit:

```javascript
const promises = []
await tenantsQuerySnapshot.docs.forEach(async tenantDoc => {
  ...
  promises.push(
    pricesDocRef.set({
      DISPATCH_FEE: +dispatchFee,
      CALL_FEE: +callFee,
      PLATFORM_FEE: +platformFee,
    }, { merge: true }),
  )
})
await Promise.all(promises)
res.status(201).send('success')
```

To this:

```javascript
await Promise.all(tenantsQuerySnapshot.docs.map(async tenantDoc => {
  const pricesDocRef = usageDocSnapshot.ref.collection('metadata').doc('prices')
  ...
  const result = await pricesDocRef.set({
    DISPATCH_FEE: +dispatchFee,
    CALL_FEE: +callFee,
    PLATFORM_FEE: +platformFee,
  }, { merge: true })
  // For debugging purposes.
  console.debug(usageDayPath, JSON.stringify(result))
}))
res.status(201).send('success')
```

I also faced this issue using a batch.
Having the same issue.
Hi everyone, thanks for reporting. We have started to investigate and will provide updates.
The issue does seem to be related to the size of the batch, so a workaround that has helped on our side is adding an automatic retry which breaks the batch into smaller chunks if the first write attempt fails:
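The retry helper itself isn't shown above, but the idea can be sketched like this. This is an illustrative version, not the poster's code: `commitInChunks` and `chunk` are made-up names, and the default chunk size of 100 mirrors the batch size that helped another commenter in this thread.

```javascript
// Illustrative sketch (hypothetical names). Split a list of pending writes
// into fixed-size batches; if a batch.commit() fails, retry that group in
// progressively smaller chunks before giving up entirely.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

async function commitInChunks(db, writes, size = 100) {
  for (const group of chunk(writes, size)) {
    const batch = db.batch();
    for (const { ref, data } of group) {
      batch.set(ref, data, { merge: true });
    }
    try {
      await batch.commit();
    } catch (err) {
      if (size <= 1) throw err; // already down to single writes; give up
      await commitInChunks(db, group, Math.ceil(size / 2)); // retry smaller
    }
  }
}
```

Note the caveat raised later in the thread: a failed commit may still have partially applied on the backend during this incident, so retries should ideally be idempotent (merge-style sets, as here, rather than increments).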
Did some more digging, here are my notes:
Stacktrace
The data isn't restored perfectly and some info is redacted, but the general shape/structure and types of values are the same:

data:
{
"dateLastUpdated": {
"_seconds": 1698086022,
"_nanoseconds": 952000000
},
"days": [
{
"workouts": [
{
"name": {
"de": "2",
"en": "2"
},
"type": "B",
"training": {
"_firestore": {},
"_path": {
"segments": [
"B",
"2"
]
},
"_converter": {}
}
}
]
},
{
"workouts": [
{
"name": {
"de": "13",
"en": "13"
},
"type": "A",
"training": {
"_firestore": {},
"_path": {
"segments": [
"A",
"13"
]
},
"_converter": {}
}
},
{
"name": {
"de": "6",
"en": "6"
},
"type": "A",
"training": {
"_firestore": {},
"_path": {
"segments": [
"A",
"6"
]
},
"_converter": {}
}
},
{
"name": {
"de": "10",
"en": "10"
},
"type": "A",
"training": {
"_firestore": {},
"_path": {
"segments": [
"A",
"10"
]
},
"_converter": {}
}
}
]
},
{
"workouts": []
},
{
"workouts": []
},
{
"workouts": [
{
"name": {
"de": "13",
"en": "13"
},
"type": "A",
"training": {
"_firestore": {},
"_path": {
"segments": [
"A",
"13"
]
},
"_converter": {}
}
},
{
"name": {
"de": "10",
"en": "10"
},
"type": "A",
"training": {
"_firestore": {},
"_path": {
"segments": [
"A",
"10"
]
},
"_converter": {}
}
},
{
"name": {
"de": "11",
"en": "11"
},
"type": "A",
"training": {
"_firestore": {},
"_path": {
"segments": [
"A",
"11"
]
},
"_converter": {}
}
}
]
},
{
"workouts": [
{
"name": {
"de": "3",
"en": "3"
},
"type": "B",
"training": {
"_firestore": {},
"_path": {
"segments": [
"B",
"3"
]
},
"_converter": {}
}
}
]
},
{
"workouts": [
{
"name": {
"de": "1",
"en": "1"
},
"type": "B",
"training": {
"_firestore": {},
"_path": {
"segments": [
"B",
"1"
]
},
"_converter": {}
}
}
]
}
],
"isCustomized": false,
"goals": [
{
"_firestore": {},
"_path": {
"segments": [
"C",
"some id"
]
},
"_converter": {}
},
{
"_firestore": {},
"_path": {
"segments": [
"C",
"some id"
]
},
"_converter": {}
},
{
"_firestore": {},
"_path": {
"segments": [
"C",
"some id"
]
},
"_converter": {}
},
{
"_firestore": {},
"_path": {
"segments": [
"C",
"some id"
]
},
"_converter": {}
}
],
"trainingplans": [
{
"_firestore": {},
"_path": {
"segments": [
"B",
"2"
]
},
"_converter": {}
},
{
"_firestore": {},
"_path": {
"segments": [
"B",
"1"
]
},
"_converter": {}
},
{
"_firestore": {},
"_path": {
"segments": [
"B",
"3"
]
},
"_converter": {}
}
],
"skills": [
{
"_firestore": {},
"_path": {
"segments": [
"A",
"13"
]
},
"_converter": {}
},
{
"_firestore": {},
"_path": {
"segments": [
"A",
"10"
]
},
"_converter": {}
},
{
"_firestore": {},
"_path": {
"segments": [
"A",
"6"
]
},
"_converter": {}
},
{
"_firestore": {},
"_path": {
"segments": [
"A",
"11"
]
},
"_converter": {}
}
]
}
I have had the same problem since last Friday. As far as I understand, it looks like something inside the firebase/grpc implementation related to long delays between Firestore calls. Also, correct me if I'm wrong, but the firebase library seems to be moving away from gRPC, though it still keeps gRPC as the default transport if you don't set `preferRest`. For me the solution was calling init right before performing an operation and/or changing the Firestore settings to prefer REST communication. Another "dirty" option seems to be using a retry approach after the first failure; I'd use this only in an emergency.
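A minimal sketch of the approach described above (lazy initialization plus the REST transport), with all names illustrative; note that the Admin SDK only allows `settings()` to be called before the first Firestore operation, hence the memoization:

```javascript
const admin = require('firebase-admin');

let db;

// Hypothetical helper: initialize lazily, right before the first operation,
// and prefer the REST transport over gRPC. settings() may only be called
// before Firestore is first used, so the instance is memoized.
function getDb() {
  if (!db) {
    if (!admin.apps.length) admin.initializeApp();
    db = admin.firestore();
    db.settings({ preferRest: true });
  }
  return db;
}

// Usage: await getDb().doc('some/doc').set(payload, { merge: true });
```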
We have lowered all our batch writes to 100 and we didn't get errors in the latest batch processing 🎉
👍 By the way, be aware that some writes are getting through despite this error, even though batch writes are supposed to be atomic. Our monitoring, which is based on the logs, displays different data than reality.
Hi everyone! Thanks again for reporting issues here ❤️
Hi @ehsannas. But still, thanks for fixing it!
I'd like to reiterate @jobweegink's statement. It has greatly inconvenienced my customers (and myself, frankly), and while I acknowledge that s*it will happen, at Google as well as anywhere else, I do feel a technical description of the issue and how it was resolved is reasonable to expect.
Hey guys! 👋 I've also been experiencing the same issue in my client's app, which has been working fine 24/7 for the past three years. In my case, nothing has changed in the codebase in that time, so my conclusion is that this must be something related to Firebase or Google, and not necessarily the business logic of your app. After extensive research and several attempts, I can confirm that upgrading dependencies to the latest versions and using `preferRest: true` worked for me. Here are the steps:

```javascript
const admin = require("firebase-admin");
const firestore = admin.firestore();
firestore.settings({ preferRest: true });
// Use Firestore as usual
```

Depending on what your previous versions were before upgrading, you could encounter some breaking changes. Make sure to read the changelogs first and adapt your project as needed. If you run into issues when deploying your functions, I recommend using debug mode. Please don't take this as the definitive solution; this is what worked for me, and that's why I'm sharing it. It might or might not work for you. If it doesn't, feel free to ask and I'll do my best to provide a quick response. Happy coding, buddies! 👩🏻‍💻
@ehsannas
Hi @ciriousjoker @oyvindholmstad. This was not an SDK issue, and it is believed to be fully addressed. If you are still experiencing an issue, I'd encourage you to reach out to Google Cloud Support or Firebase Support.
Hi all, just wanted to post about my experience with this error in case anyone else is still having this issue after trying all of the above fixes. Essentially, my cloud function was set up to execute a series of smaller async functions that each performed a batched write. I believe I was encountering this error because I wasn't properly awaiting those calls before the function returned. tl;dr: check that your async Firestore writes are all awaited before your function ends. Hope this helps anyone having the same problem.
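The pattern described above can be illustrated with plain promises (all names hypothetical; each `commit` stands in for a Firestore `batch.commit()` call):

```javascript
// Buggy pattern: kicking off the commits without awaiting lets the
// surrounding function return while writes are still in flight, so the
// runtime may tear down the connection mid-write.
function fireAndForget(commits) {
  commits.forEach((commit) => { commit(); }); // promises are dropped here
}

// Fixed pattern: settle each commit before moving on, so the function
// only returns once every write has actually completed.
async function commitAll(commits) {
  const results = [];
  for (const commit of commits) {
    results.push(await commit());
  }
  return results;
}
```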
We also still receive this error while nothing has changed; we haven't redeployed anything for a few months, but it came out of nowhere. Are you sure you fixed this issue for all Firestore regions? And about my other comment, can you please get back to that as well? I feel like it's being ignored even though this had, and still has, a great impact on us. It does seem more stable if I put an await on every batch commit rather than doing Promise.all, like @garveychan suggested. This is nice, but weird that it worked for a long time and then stopped working.
@jobweegink good point about the regions. We're using europe-west1; what about you, @oyvindholmstad & @jobweegink?
@ciriousjoker europe-west1 here as well!
europe-west for me.
Hello, the issue remains for us too, without having deployed anything new. We also use europe-west1. More specifically:
Help please 😱 My app with 30k users playing every day is totally broken... I get this random error hundreds of times a day on several functions: `Error: 13 INTERNAL: Received RST_STREAM with code 2`. Please help!
@loicsalzmann I recommend contacting Firebase support about this; they can escalate much more than comments on a closed issue can. Especially if you can provide them with a reproducible example (MCVE) and the node, firebase-functions, 1st/2nd gen functions etc. versions you're using.
For what it's worth, I have contacted support, and they have escalated the issue to the engineering team. I have been promised a response before 10th November 2023, 10:00 PM IST. We still see this daily.
Please keep us posted :-)
Please, we need a fix for this! The workaround of re-initializing firebase/db on every function call is not a good solution... region: europe-west1
Got an update from support on Friday: "As informed earlier, we have reported your issue to the Engineering team, and many users have reported this issue as well. Please be assured that the Engineering team has started working on the issue and I'm actively working with them to get updates. I will reach out to you as soon as I have an update from the Engineering team, but no later than (2 business days) 14th November 2023, 10:00 PM IST."
It seems the problem is fixed on my end; I haven't encountered any errors since. Thank God. 🙏🏼
Is there an update?
@loicsalzmann Good to hear. The problem persists on our end, unfortunately 😩 @HansaExport Nothing new. They keep pushing the date forward: "Hello, thank you for your patience. I would like to inform you that I'm still waiting for an update from the Product Engineering Team. As informed earlier, many users have reported this issue, and please be assured that the Engineering team has started working on it; I'm actively working with them to get updates. I will reach out to you as soon as I have an update from the Engineering team, but no later than 17th November 2023, 10:00 PM IST. I would like to set the right expectation that troubleshooting may take a while to narrow down the root cause. I appreciate your understanding and cooperation in this matter."
Got an update from support today:
Thanks for sharing!
@jobweegink Correct, they have confirmed that the issue is on the backend.
New update from support:
This is getting ridiculous:
[REQUIRED] Step 2: Describe your environment
[REQUIRED] Step 3: Describe the problem
Randomly receive an error when updating a Firestore document:
`13 INTERNAL: Received RST_STREAM with code 1`
Steps to reproduce:
I am seeing the error occur randomly when using:

```javascript
ref.set(data, { merge: true })
```
Relevant Code: