
Errored transactions seem to still commit #41

Closed
Quelklef opened this issue May 20, 2020 · 3 comments

Comments

@Quelklef

Quelklef commented May 20, 2020

First of all, fantastic work on this! It's been a great help to me recently, letting me develop in the node environment without having to use something overpowered like puppeteer.

That being said, I think I've found a bug.

We first create a simple database with one object store and a unique index on the attribute indexed_attr. We then, in two separate transactions, add an object { indexed_attr: 'xxx' } to the database. As we'd expect, since indexed_attr must be unique, the first transaction succeeds and the second fails. However, both seem to commit: the resulting database contains two copies of the object, not just one.

I do not know whether this is spec-compliant behaviour or not, but it differs from Chrome, which you note passes 99% of the W3C IndexedDB test suite. In Chrome, the resulting db has only one copy of the object.

require('fake-indexeddb/auto');

function setup() {
  /* Create database, object store, and unique index */
  return new Promise(resolve => {
    indexedDB.deleteDatabase('mydb').onsuccess = event => {
      const openreq = indexedDB.open('mydb');

      openreq.onupgradeneeded = event => {
        const db = event.target.result;
        const store = db.createObjectStore('mystore', { autoIncrement: true });
        store.createIndex('myindex', 'indexed_attr', { unique: true });
      };

      openreq.onsuccess = _event => resolve();
    };
  });
}

const my_object = { indexed_attr: 'xxx' };

function put() {
  /* Put `my_object` into the db. */
  return new Promise(resolve => {
    indexedDB.open('mydb').onsuccess = event => {
      const db = event.target.result;
      const tx = db.transaction(['mystore'], 'readwrite');
      const store = tx.objectStore('mystore');
      const addreq = store.add(my_object);
      addreq.onsuccess = _event => resolve('succ');
      addreq.onerror = _event => resolve('fail');
    };
  });
}

function read() {
  /* Return list of all objects in the db */
  return new Promise(resolve => {
    indexedDB.open('mydb').onsuccess = event => {
      const db = event.target.result;
      const tx = db.transaction(['mystore'], 'readonly');
      const store = tx.objectStore('mystore');
      store.getAll().onsuccess = event => resolve(event.target.result);
    };
  });
}

(async () => {
  await setup();
  console.log(await put()); // prints 'succ', as expected
  console.log(await put()); // prints 'fail', as expected
  console.log(await read()); // prints [my_object, my_object] instead of just [my_object]
})();
Quelklef added a commit to Quelklef/JINEdb that referenced this issue May 21, 2020
The following test fails:
  index.test.ts > path index > doesn't allow duplicate values on a unique index
The issue actually seems to be due to a bug in fake-indexeddb; see dumbmatter/fakeIndexedDB#41.
@Quelklef
Author

Playing around with it a little more, here are some observations:

  • If you put() 3 or 4 times, there are still only 2 copies in the DB.
  • If you manually abort on the second put(), by doing tx.abort() right after addreq.onerror = ..., the code works fine.

@dumbmatter
Owner

Thanks for the minimal reproduction! You're awesome for that.

Turns out this was a simple problem, as you can see in the commit referenced above. I was not tracking the newly-created object in the rollback log until after the indexes were successfully created, so a ConstraintError during index creation would not remove the object from the store.

As for your observation that additional put() calls did not add even more records: the primary key counter did get rolled back, but the record from put 2 was not! So subsequent puts were hitting a ConstraintError earlier, on the primary key.

It's fixed in v3.0.1.
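A toy model of the ordering bug (purely illustrative JavaScript, not fake-indexeddb's actual code) reproduces both symptoms: the errored add leaves its record behind, and later adds fail early on the rolled-back primary key:

```javascript
// Illustrative sketch of the rollback-log ordering bug described above.
// makeStore() and add() are hypothetical stand-ins, not fake-indexeddb's API.

function makeStore() {
  return { records: new Map(), index: new Set(), nextKey: 1 };
}

function add(store, value, { buggy }) {
  const log = []; // rollback log: undo actions, replayed in reverse on error
  try {
    const key = store.nextKey++;
    log.push(() => { store.nextKey = key; }); // counter is always rolled back
    if (store.records.has(key)) throw new Error('ConstraintError (primary key)');
    store.records.set(key, value);
    if (!buggy) log.push(() => store.records.delete(key)); // fixed: log the record up front
    if (store.index.has(value.indexed_attr)) throw new Error('ConstraintError (unique index)');
    store.index.add(value.indexed_attr);
    if (buggy) log.push(() => store.records.delete(key)); // bug: logged too late to help
    return 'succ';
  } catch (err) {
    for (const undo of log.reverse()) undo(); // "abort": replay the rollback log
    return 'fail';
  }
}
```

In the buggy ordering, the second add's index insert throws before the record's deletion is logged, so the rollback replays only the counter undo: the record survives with key 2, and every later add tries key 2 again and hits the primary-key ConstraintError, which is why the count never grows past 2.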

@Quelklef
Author

Brilliant! Glad to see it wasn't too hairy of an issue. Thanks for the fix :-)
