Updating document - 'Received RST_STREAM with code 1' #2345

Closed

ryan-saffer opened this issue Oct 20, 2023 · 83 comments

Comments

@ryan-saffer

ryan-saffer commented Oct 20, 2023

[REQUIRED] Step 2: Describe your environment

  • Operating System version: Google Cloud Function Runtime (Gen 2) Node 18
  • Firebase SDK version: 11.11.0
  • Firebase Product: Firestore
  • Node.js version: 18.18
  • NPM version: 9.8.1

[REQUIRED] Step 3: Describe the problem

I randomly receive the following error when updating a Firestore document:
'13 INTERNAL: Received RST_STREAM with code 1'

Steps to reproduce:

I am seeing the error occur randomly when using ref.set(data, { merge: true })

Relevant Code:

const firestore = getFirestore()
firestore.settings({ ignoreUndefinedProperties: true })

const id = // existing document id
const newData = { foo: 'bar' }
try {
  await firestore.collection('bookings').doc(id).set(newData, { merge: true })
} catch (error) {
  console.log(error.message) // '13 INTERNAL: Received RST_STREAM with code 1'
}
@google-oss-bot

I found a few problems with this issue:

  • I couldn't figure out how to label this issue, so I've labeled it for a human to triage. Hang tight.
  • This issue does not seem to follow the issue template. Make sure you provide all the required information.

@wagerlab

We're seeing this same issue on firestore batch commit() calls as well

@ryan-saffer
Author

ryan-saffer commented Oct 21, 2023

This issue was caused by an error on my end.

My cloud function was being used as a webhook, and the first thing it did was call res.status(200).send() to tell the sender that the request had been received.

It looks like doing this tells the runtime it is safe to tear down the instance, so by the time I updated my database, the connection had already been closed.

This function had worked this way for years without any issues; however, the error only began appearing after moving to the new Gen 2 Cloud Functions, so it seems the new runtime tears down instances faster.

I resolved this by publishing my data to a Pub/Sub topic first and only then returning a 200 to the webhook sender.
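
For anyone wanting to copy this approach, here is a rough sketch of the shape of it (the topic name, payload field, and v1-style triggers are placeholders, not my exact code):

const functions = require('firebase-functions');
const { PubSub } = require('@google-cloud/pubsub');
const { initializeApp } = require('firebase-admin/app');
const { getFirestore } = require('firebase-admin/firestore');

initializeApp();
const pubsub = new PubSub();

// Webhook entry point: hand the payload off to Pub/Sub, then acknowledge.
exports.webhook = functions.https.onRequest(async (req, res) => {
  // Awaiting the publish keeps the instance alive until the message is accepted.
  await pubsub.topic('booking-events').publishMessage({ json: req.body });
  res.status(200).send('ok');
});

// Background worker: performs the Firestore write outside the webhook's lifetime.
exports.onBookingEvent = functions.pubsub
  .topic('booking-events')
  .onPublish(async (message) => {
    const data = message.json;
    await getFirestore()
      .collection('bookings')
      .doc(data.id) // placeholder payload field
      .set(data, { merge: true });
  });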

EDIT

Given that many others are facing the same problem, I have reopened the issue.

@maylorsan

@wagerlab Any updates? We have the same issue with the .update(fieldsToUpdate) method.

@narenpublic

I have been seeing this issue since Oct 16 as well.

@Jamesfleming1

I am seeing this as well. Redeploying my functions temporarily fixes them, and then they break again

@im12345dev

im12345dev commented Oct 22, 2023

Same here, using GCP Cloud Run. Any solutions?

Edit: to give more context, this happens to us while performing multiple writes using promises. It's very inconsistent and might happen once or twice a day, or literally every single time.

@ciriousjoker

ciriousjoker commented Oct 22, 2023

This has been happening for us since 9 Oct 2023 as well.

  • In our case, we await the .set() call that fails, so the above-mentioned teardown issue shouldn't apply at all.
  • I thought this was because of a function timeout, but the function had only run for 10 seconds when the error occurred.

Is this even being looked at? It has the "needs-triage" label, but it's also already closed, so I have no idea how this is handled here.

@maylorsan

We haven't found a solution to this issue yet.

We've attempted several methods, but none have resolved the problem:

  • Switched from update to set, and to set with {merge: true}, but this didn't work.
  • Created a new cloud function dedicated solely to this method. Surprisingly, it didn't work either. However, it's noteworthy that we have several other cloud functions that update user data using the same method, and they function as expected.

What's even more perplexing is that this issue arises in about 60% of our HTTP cloud function calls, and it seems to occur randomly. We haven't been able to identify a consistent pattern.

Interestingly, everything operates flawlessly in our development project. The only difference is that the development project has a smaller User collection.

I'm starting to suspect that this might be related to some undocumented limitation in Firestore...

@ryan-saffer can we reopen this issue?

@ryan-saffer
Author

ryan-saffer commented Oct 22, 2023

Given many people are reporting the same problem, I am reopening the issue.

@ryan-saffer ryan-saffer reopened this Oct 22, 2023
@CollinsVizion35

Got the same issue:

at WriteBatch.commit
at DocumentReference.set

at processTicksAndRejections (node:internal/process/task_queues:96:5) {

code: 13,
details: 'Received RST_STREAM with code 1',
metadata: Metadata { internalRepr: Map(0) {}, options: {} },
note: 'Exception occurred in retry method that was not classified as transient'

@TheRealFlyingCoder

TheRealFlyingCoder commented Oct 23, 2023

Coming here to add that I'm also experiencing the same error out of the blue, running a Node.js process locally on a server. It happened during a large series of updates, but only for a small number of the updated docs.

Running the process again with a smaller number of updates had no issues.

@narenpublic

narenpublic commented Oct 23, 2023

I have a node script I run locally for testing and a cloud function that runs on an interval; both are experiencing this issue. I send a bunch of doc(id).update(obj) calls with a bunch of promises. I've lost 24 hours to this already with no rhyme or reason: sometimes 100 might go through, and then with the same data, sometimes 0. Is there any kind of rate limit with a cool-off period buried somewhere?

13 INTERNAL: Received RST_STREAM with code 1

And in us-central1.

@umarmirza20

This issue started happening for me today. In my cloud function I commit a bunch of write batches and then return a 200 success response, so it can't be that the instance is being torn down, because I send the 200 only after all batches are written. However, as soon as I try to commit the batches I get this error.

@fabdurso

Same issue here.

@thomasdrouin

Same issue for our project.

@znayer

znayer commented Oct 23, 2023

Same issue, huge headache.

@pc7

pc7 commented Oct 23, 2023

Same here.

Started in all our Firebase projects at the same time, between 16:30 and 18:30 BST on 20th October, europe-west2 region.

It's not connected to any deployments from us, or changes to our code.

Functions that are failing:

  • all scheduled functions that contain Firestore deletes.

  • all trigger functions attached to Firestore document updates, that execute writes to other Firestore documents. All have 'await'.

All failing with this error:

13 INTERNAL: Received RST_STREAM with code 1

at runNextTicks (node:internal/process/task_queues:61:5)
at processImmediate (node:internal/timers:437:9)
at process.topLevelDomainCallback (node:domain:161:15)
at process.callbackTrampoline (node:internal/async_hooks:128:24)

Any help would be very much appreciated.

@raphaelolivero

I managed to work around the issue (for now) by changing this particular unit:

const promises = []
await tenantsQuerySnapshot.docs.forEach(async tenantDoc => {
  ...
  promises.push(
    pricesDocRef.set({
      DISPATCH_FEE: +dispatchFee,
      CALL_FEE: +callFee,
      PLATFORM_FEE: +platformFee,
    }, { merge: true }),
  )
})

await Promise.all(promises)

res.status(201).send('success')

To this:

await Promise.all(tenantsQuerySnapshot.docs.map(async tenantDoc => {
  const pricesDocRef = usageDocSnapshot.ref.collection('metadata').doc('prices')
  ...

  const result = await pricesDocRef.set({
    DISPATCH_FEE: +dispatchFee,
    CALL_FEE: +callFee,
    PLATFORM_FEE: +platformFee,
  }, { merge: true })

  // For debugging purposes.
  console.debug(usageDayPath, JSON.stringify(result))
}))

res.status(201).send('success')

I also faced this issue using a batch.

@TheSavageDev

Having the same issue.

@ehsannas ehsannas self-assigned this Oct 23, 2023
@ehsannas

Hi everyone, thanks for reporting. We have started to investigate, and will provide updates.

@mikemoone

The issue does seem to be related to the size of the batch, so a workaround that has helped on our side is adding an automatic retry which breaks the batch into smaller chunks if the first write attempt fails:

  // Assumes `firestore` is an initialized Admin SDK Firestore instance,
  // `_` is lodash, and `sleep(ms)` is a promise-based delay helper.
  asyncWriteBatch: async (allWrites: BatchWrite[], chunkSize = 500, delay = 0) => {
    if (allWrites.length === 0) return;
    const chunks = _.chunk(allWrites, chunkSize);
    for (const chunk of chunks) {
      try {
        const batch = firestore.batch();
        for (const write of chunk) {
          switch (write.type) {
            case 'delete':
              batch.delete(write.ref);
              break;
            case 'set':
              batch.set(write.ref, write.value, write.options || {});
              break;
            case 'update':
              batch.update(write.ref, write.value);
              break;
            default:
              break;
          }
        }
        await batch.commit();
      } catch {
        // Retry the failed chunk as smaller batches of 10 writes each.
        await sleep(1000);
        const subChunks = _.chunk(chunk, 10);
        for (const subChunk of subChunks) {
          const batch = firestore.batch();
          for (const write of subChunk) {
            switch (write.type) {
              case 'delete':
                batch.delete(write.ref);
                break;
              case 'set':
                batch.set(write.ref, write.value, write.options || {});
                break;
              case 'update':
                batch.update(write.ref, write.value);
                break;
              default:
                break;
            }
          }
          await batch.commit();
        }
      }
      if (delay) {
        await sleep(delay);
      }
    }
  },

@ciriousjoker

ciriousjoker commented Oct 23, 2023

Did some more digging, here are my notes:

  • Only affects a small percentage of our calls (55 out of 1034)
  • There was one set of inputs that led to a reliable crash
  • Our function is deterministic, and I've reconstructed the data passed to .set() (see below).
  • I've since deployed a new version, which doesn't crash anymore (suggests that our code was the issue)
  • I've then redeployed the original version (unfortunately not the exact same executable, since v1 cloud functions can't be rolled back). Instead, I've redeployed the code from the same day (21.06.2023). This also doesn't have the issue anymore (suggests that the code isn't the issue).
  • The only interesting things about the data passed to .set() are these: we used Timestamp.now() instead of a FieldValue, and there are lots of DocumentReferences (a small snippet after this list shows the timestamp difference I mean)
  • It's a single .set() call that fails, even though it seems to be using WriteBatch internally
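
For reference, this is the difference between the two timestamp forms (the document path here is just a placeholder, not our real data):

const { getFirestore, Timestamp, FieldValue } = require('firebase-admin/firestore');

async function example() {
  const ref = getFirestore().collection('plans').doc('demo'); // placeholder path

  // Client-side timestamp: a concrete value computed in the function and sent in the payload.
  await ref.set({ dateLastUpdated: Timestamp.now() }, { merge: true });

  // Server-side timestamp: a sentinel value that Firestore resolves at write time.
  await ref.set({ dateLastUpdated: FieldValue.serverTimestamp() }, { merge: true });
}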
Stacktrace
Unhandled error Error: 13 INTERNAL: Received RST_STREAM with code 1
    at callErrorFromStatus (/workspace/node_modules/@grpc/grpc-js/build/src/call.js:31:19)
    at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client.js:192:76)
    at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:360:141)
    at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:323:181)
    at /workspace/node_modules/@grpc/grpc-js/build/src/resolving-call.js:94:78
    at processTicksAndRejections (node:internal/process/task_queues:78:11)
for call at

Caused by: Error
    at WriteBatch.commit (/workspace/node_modules/@google-cloud/firestore/build/src/write-batch.js:433:23)
    at DocumentReference.set (/workspace/node_modules/@google-cloud/firestore/build/src/reference.js:393:27)
    at /workspace/src/functions/workoutplan/generate_workoutplan.js:55:75
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async /workspace/node_modules/firebase-functions/lib/common/providers/https.js:458:26 {
  code: 13,
  details: 'Received RST_STREAM with code 1',
  metadata: Metadata { internalRepr: Map(0) {}, options: {} },
  note: 'Exception occurred in retry method that was not classified as transient'
}

The data isn't restored perfectly and some info is redacted, but the general shape/structure and types of values are the same.

data
{
  "dateLastUpdated": {
    "_seconds": 1698086022,
    "_nanoseconds": 952000000
  },
  "days": [
    {
      "workouts": [
        {
          "name": {
            "de": "2",
            "en": "2"
          },
          "type": "B",
          "training": {
            "_firestore": {},
            "_path": {
              "segments": [
                "B",
                "2"
              ]
            },
            "_converter": {}
          }
        }
      ]
    },
    {
      "workouts": [
        {
          "name": {
            "de": "13",
            "en": "13"
          },
          "type": "A",
          "training": {
            "_firestore": {},
            "_path": {
              "segments": [
                "A",
                "13"
              ]
            },
            "_converter": {}
          }
        },
        {
          "name": {
            "de": "6",
            "en": "6"
          },
          "type": "A",
          "training": {
            "_firestore": {},
            "_path": {
              "segments": [
                "A",
                "6"
              ]
            },
            "_converter": {}
          }
        },
        {
          "name": {
            "de": "10",
            "en": "10"
          },
          "type": "A",
          "training": {
            "_firestore": {},
            "_path": {
              "segments": [
                "A",
                "10"
              ]
            },
            "_converter": {}
          }
        }
      ]
    },
    {
      "workouts": []
    },
    {
      "workouts": []
    },
    {
      "workouts": [
        {
          "name": {
            "de": "13",
            "en": "13"
          },
          "type": "A",
          "training": {
            "_firestore": {},
            "_path": {
              "segments": [
                "A",
                "13"
              ]
            },
            "_converter": {}
          }
        },
        {
          "name": {
            "de": "10",
            "en": "10"
          },
          "type": "A",
          "training": {
            "_firestore": {},
            "_path": {
              "segments": [
                "A",
                "10"
              ]
            },
            "_converter": {}
          }
        },
        {
          "name": {
            "de": "11",
            "en": "11"
          },
          "type": "A",
          "training": {
            "_firestore": {},
            "_path": {
              "segments": [
                "A",
                "11"
              ]
            },
            "_converter": {}
          }
        }
      ]
    },
    {
      "workouts": [
        {
          "name": {
            "de": "3",
            "en": "3"
          },
          "type": "B",
          "training": {
            "_firestore": {},
            "_path": {
              "segments": [
                "B",
                "3"
              ]
            },
            "_converter": {}
          }
        }
      ]
    },
    {
      "workouts": [
        {
          "name": {
            "de": "1",
            "en": "1"
          },
          "type": "B",
          "training": {
            "_firestore": {},
            "_path": {
              "segments": [
                "B",
                "1"
              ]
            },
            "_converter": {}
          }
        }
      ]
    }
  ],
  "isCustomized": false,
  "goals": [
    {
      "_firestore": {},
      "_path": {
        "segments": [
          "C",
          "some id"
        ]
      },
      "_converter": {}
    },
    {
      "_firestore": {},
      "_path": {
        "segments": [
          "C",
          "some id"
        ]
      },
      "_converter": {}
    },
    {
      "_firestore": {},
      "_path": {
        "segments": [
          "C",
          "some id"
        ]
      },
      "_converter": {}
    },
    {
      "_firestore": {},
      "_path": {
        "segments": [
          "C",
          "some id"
        ]
      },
      "_converter": {}
    }
  ],
  "trainingplans": [
    {
      "_firestore": {},
      "_path": {
        "segments": [
          "B",
          "2"
        ]
      },
      "_converter": {}
    },
    {
      "_firestore": {},
      "_path": {
        "segments": [
          "B",
          "1"
        ]
      },
      "_converter": {}
    },
    {
      "_firestore": {},
      "_path": {
        "segments": [
          "B",
          "3"
        ]
      },
      "_converter": {}
    }
  ],
  "skills": [
    {
      "_firestore": {},
      "_path": {
        "segments": [
          "A",
          "13"
        ]
      },
      "_converter": {}
    },
    {
      "_firestore": {},
      "_path": {
        "segments": [
          "A",
          "10"
        ]
      },
      "_converter": {}
    },
    {
      "_firestore": {},
      "_path": {
        "segments": [
          "A",
          "6"
        ]
      },
      "_converter": {}
    },
    {
      "_firestore": {},
      "_path": {
        "segments": [
          "A",
          "11"
        ]
      },
      "_converter": {}
    }
  ]
}

@edmilsonss

I've had the same problem since last Friday, and from what I understand so far it looks like something inside the Firebase/gRPC implementation related to long delays between Firebase calls.

Also, correct me if I'm wrong, but the Firebase library seems to be moving away from gRPC; it still keeps gRPC as the default unless you set preferRest: true (see docs).

For me the solution was calling init right before performing an operation and/or changing the Firestore settings to prefer REST communication.

Here is my code snippet:

const firebaseAdmin = require('firebase-admin');
const uuid = require('uuid');

function getFbDb() {
  // A uniquely named app is created on every call, so each call gets a fresh Firestore connection.
  const mainFirebaseApp = firebaseAdmin.initializeApp({
    credential: firebaseAdmin.credential.applicationDefault()
  }, uuid.v4());

  const db = mainFirebaseApp.firestore();
  const settings = {
    preferRest: true,
    timestampsInSnapshots: true
  };
  db.settings(settings);
  return { db, firebaseApp: mainFirebaseApp };
}

const { db, firebaseApp } = getFbDb();

Use const { db, firebaseApp } = getFbDb(); every time you have to execute an operation.

Another "dirty option" seems to be using a retry approach after the first fail. I'd use this only in an emergency situation.

@petrvecera

We have lowered all our batch writes to 100 documents and didn't get any errors in the latest batch-processing run 🎉

Also, may I ask why this issue is not reflected in https://status.cloud.google.com/?


Btw, be aware that some writes are getting through despite this error, even though batch writes are supposed to be atomic. Our monitoring, which is based on the logs, shows different data than what's actually stored.

@ehsannas

Hi everyone! Thanks again for reporting issues here ❤️
This issue has been resolved as of yesterday evening.
No action is needed on your part: no SDK upgrade needed, no change to your cloud functions needed.

@jobweegink

Hi @ehsannas
Thanks for fixing it and letting us know.
Can you give a bit more insight into this issue? Was it a backend issue?
Is there any way we can prevent or track this in the future?
To be honest, it caused quite a few issues for us, and I don't think I'm the only one here.

But still, thanks for fixing it!

@jessp01

jessp01 commented Oct 26, 2023

I'd like to reiterate @jobweegink's statement. It has greatly inconvenienced my customers (and myself, frankly), and while I acknowledge that s*it will happen, at Google as well as anywhere else, I do feel a technical description of the issue and how it was resolved is reasonable to expect.

@oyvindholmstad

@ehsannas @allspain

Nothing has changed here. We are still experiencing the issue; I found the error message in the logs 24 times in the last 12 hours.

The latest incident happened just an hour ago.

Gen 1
europe-west1
Node JS 16

@jferrettiboke

Hey guys! 👋

I've also been experiencing the same issue in my client's app, which has been working fine 24/7 for the past three years. Nothing has changed in the codebase in that time, so my conclusion is that this must be something related to Firebase or Google, and not necessarily the business logic of your app.

After extensive research and several attempts, I can confirm that upgrading dependencies to the latest version and using preferRest: true works.

Here are the steps:

  1. cd your-project/
  2. npm install firebase-admin@latest firebase-functions@latest
  3. npm install -g firebase-tools@latest
  4. Update your Firestore settings and add preferRest: true (see the code below).
  5. Deploy your functions again.
const admin = require("firebase-admin");

// Initialize the app once (if you haven't already done so elsewhere).
admin.initializeApp();

const firestore = admin.firestore();

firestore.settings({ preferRest: true });

// Use Firestore as usual

Depending on what your previous versions were before upgrading, you could encounter some breaking changes. Make sure to read the changelogs first and adapt your project as needed.

If you run into issues when deploying your functions, I recommend using debug mode: firebase deploy --only functions --debug. That way, you will know what's happening.

Please don't take this as the definitive solution. This is what worked for me, and that's why I'm sharing it. It might or might not work for you. If it doesn't work, feel free to ask. I'll do my best to provide a quick response.

Happy coding, buddies! 👩🏻‍💻

@ciriousjoker

ciriousjoker commented Oct 27, 2023

@ehsannas
Same for us. The error is still appearing (most recently 2 hours ago). It seems to only affect one cloud function, though, and while redeploying a new version seemed to have "fixed" it initially, the errors are coming back. Also, I have a new set of inputs that reproducibly crashes the cloud function.

@oyvindholmstad

@ehsannas @allspain This is still happening. Please re-open the issue.

@ehsannas

Hi @ciriousjoker @oyvindholmstad . This was not an SDK issue, and is believed to be fully addressed. If you are still experiencing an issue, I'd encourage you to reach out to Google Cloud Support or Firebase Support.

@garveychan

garveychan commented Oct 31, 2023

Hi all, just wanted to post about my experience with this error in case anyone else was still having this issue after trying all of the above fixes.

Essentially, my cloud function was set up to execute a series of smaller async functions that each performed a firestore.get() and pushed a doc.update() onto a promise array that was returned via Promise.all() from the overall cloud function.

I believe I was encountering this error because I wasn't awaiting each of the smaller async functions, so their respective firestore.get() queries were being made after the function had already returned its outer Promise.

tl;dr: check that your get queries (promises) have resolved before the cloud function exits.

Hope this helps anyone having the same problem.
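
For illustration, a minimal sketch of the corrected shape (the collection, field names, and helper are made up for the example):

const functions = require('firebase-functions');
const { initializeApp } = require('firebase-admin/app');
const { getFirestore } = require('firebase-admin/firestore');

initializeApp();

// Made-up helper: read a doc, then write an update based on what was read.
async function refreshUser(id) {
  const snap = await getFirestore().collection('users').doc(id).get();
  await snap.ref.update({ lastChecked: Date.now() });
}

// Every read/write promise is awaited before the response is sent, so nothing
// is still in flight when the function returns and the instance may be recycled.
exports.refreshUsers = functions.https.onRequest(async (req, res) => {
  const userIds = req.body.userIds || []; // made-up input shape
  await Promise.all(userIds.map((id) => refreshUser(id)));
  res.status(200).send('done');
});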

@jobweegink

jobweegink commented Nov 1, 2023

Hi @ciriousjoker @oyvindholmstad . This was not an SDK issue, and is believed to be fully addressed. If you are still experiencing an issue, I'd encourage you to reach out to Google Cloud Support or Firebase Support.

We also still receive this error even though nothing has changed; we haven't redeployed anything for a few months, yet it came out of nowhere. Are you sure you fixed this issue for all Firestore regions?

Also, about my other comment: can you please get back to that as well? I feel like it's being ignored, even though this had, and still has, a great impact on us.

That said, it does seem to be more stable if I await every batch commit individually rather than doing Promise.all, like @garveychan suggested (see the sketch below). That's nice, but it's weird that it worked for a long time and then stopped working.
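
Roughly what I mean, as a sketch (the 100-per-batch size and the shape of the writes array are assumptions on my side):

const { getFirestore } = require('firebase-admin/firestore');

// writes is assumed to be an array of { ref, data } objects. They are committed
// 100 at a time, one commit after another, instead of firing all commits
// concurrently with Promise.all.
async function commitSequentially(writes) {
  const db = getFirestore();
  for (let i = 0; i < writes.length; i += 100) {
    const batch = db.batch();
    for (const { ref, data } of writes.slice(i, i + 100)) {
      batch.set(ref, data, { merge: true });
    }
    await batch.commit(); // wait for each batch before starting the next
  }
}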

@ciriousjoker

@jobweegink good point about the regions.

We're using europe-west1, what about you, @oyvindholmstad & @jobweegink?

@oyvindholmstad

@ciriousjoker europe-west1 here as well!

@jobweegink

@jobweegink good point about the regions.

We're using europe-west1, what about you, @oyvindholmstad & @jobweegink?

Europe-west for me

@nikoSchoinas

Hello, the issue remains for us too, even though we haven't deployed anything new. We also use europe-west1. More specifically:
Gen 2
Node 16
firebase-admin 11.10.0
firebase-functions 4.4.0

@loicsalzmann

Helpppppp please 😱 My app with 30k users playing every day is totally broken... I get this random error hundreds of times a day on several functions: Error: 13 INTERNAL: Received RST_STREAM with code 2

Please help !

@ciriousjoker

@loicsalzmann I recommend contacting Firebase support about this; they can escalate it much further than comments on a closed issue can, especially if you can provide them with a reproducible example (MCVE) and the Node, firebase-functions, and 1st/2nd gen functions versions you're using.

@oyvindholmstad

For what it's worth, I have contacted support, and they have escalated the issue to the engineering team. I have been promised a response before 10th November 2023, 10:00 PM IST. We still see this daily.

@loicsalzmann

For what it's worth, I have contacted support, and they have escalated the issue to the engineering team. I have been promised a response before 10th November 2023, 10:00 PM IST. We still see this daily.

Please keep us posted :-)

@HansaExport

Please, we need a fix for this! The workaround of re-initializing the Firebase app/db on every function call is not a good solution...

region: europe-west1

@oyvindholmstad

Got an update from support on friday:

"As informed earlier, we have reported your issue concern to the Engineering team and many users have reported that issue also. Please be assured that the Engineering team has started working on the issue and I’m actively working with them to get updates on your issue. I will reach out to you as soon as I have an update from the Engineering team but no later than (2 business days) 14th November 2023, 10:00 PM IST."

@loicsalzmann

It seems that the problem is fixed on my end; I haven't encountered any errors anymore. Fortunately, thank God. 🙏🏼

@HansaExport

Got an update from support on friday:

"As informed earlier, we have reported your issue concern to the Engineering team and many users have reported that issue also. Please be assured that the Engineering team has started working on the issue and I’m actively working with them to get updates on your issue. I will reach out to you as soon as I have an update from the Engineering team but no later than (2 business days) 14th November 2023, 10:00 PM IST."

Is there an update?

@oyvindholmstad

@loicsalzmann Good to hear. Problem persists on our end, unfortunately 😩

@HansaExport Nothing new. They keep pushing the date forward:

"Hello,

Thank you for your patience.

I would like to inform you that I'm still waiting for an update from the Product Engineering Team. As informed earlier many users have reported that issue also and Please be assured that the Engineering team has started working on the issue and I’m actively working with them to get updates on your issue. I will reach out to you as soon as I have an update from the Engineering team but no later than 17th November 2023, 10:00 PM IST.

I would like to set the right expectation that the issue troubleshooting may take a while to narrow down the root cause of this. I appreciate your understanding and cooperation in this matter."

@oyvindholmstad

Got an update from support today:

I received an update from the Product Engineering team regarding the ETA. As per their update we have an internal potential mitigation that will be deployed on the week of December 4th and is expected to take effective on December 4th or 5th. Please be informed that we are working with high priority on this issue and I will keep you updated actively.
I appreciate your patience as we continue to investigate this issue. Meanwhile, if you have any further concerns or queries, please feel free to reach out to me, I will be happy to assist.

@jobweegink

Thanks for sharing!
That doesn't sound like high priority to me, but okay.
So what they are basically saying is that it's a backend issue?

@oyvindholmstad

@jobweegink Correct, they have confirmed that the issue is on the backend.

@oyvindholmstad

New update from support:

As a follow up of the case I would like to inform you that the migration has been initiated and it is in progress now. It will be effective from December 5th. Please be informed that we are working with high priority on this issue and I will keep you updated actively.

@oyvindholmstad

This is getting ridiculous:

Thank you for your patience.

I would like to inform you that the roll out process is still under process, and it will take one more week to get reflected in the production. Please be informed that we are working with high priority on this issue and I will keep you updated actively.

I appreciate your patience as we continue to investigate this issue. Meanwhile, if you have any further concerns or queries, please feel free to reach out to me, I will be happy to assist.
