Allow tables without primary key #63

Closed
yoshinorim opened this Issue · 3 comments


@yoshinorim
Owner

Figure out how realistic it is to create a hidden auto_increment primary key when no primary key is specified in the DDL, instead of returning an error.

@yoshinorim
Owner

Here are the basic design plans. Since this requires a data dictionary format change, it makes sense to add this feature before GA.

  • DDL
    create a hidden auto_increment internal primary key

  • Dictionary
    Add a flag indicating whether the table uses a hidden pk

  • Row format (see the key-layout sketch after this list)
    PK: {key_id, hidden_primary_key, other columns}
    SK: {key_id, sec_key, hidden_primary_key}

  • Utility functions
    has_primary_key()

  • CF
    the hidden pk is always stored in the default CF

  • Insert
    get next value
    assign hidden pk value (auto_inc)

  • Read

  • memcmp is always done by SK

  • User-Defined unique indexes
    I think we can simply disallow this -- normally it should be easier to convert a unique key to a primary key. We don't have any table that has a unique key but no primary key.

  • RBR slave
    If the table has no PK, the slave will use the first applicable index to find the row to update. This is the same behavior as InnoDB+RBR.
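
To make the row-format plan concrete, here is a minimal sketch of how the two key layouts above could be packed. This is illustrative only, not MyRocks source; append_be64(), append_index_id(), make_hidden_pk_key() and make_sk_key() are made-up names, and it assumes a 4-byte index id and a 64-bit hidden pk.

```cpp
// Minimal sketch of the proposed key layout (illustration, not MyRocks code).
#include <cstdint>
#include <string>

// Append integers big-endian so that RocksDB's bytewise memcmp ordering
// matches numeric ordering.
static void append_be64(std::string *out, uint64_t v) {
  for (int shift = 56; shift >= 0; shift -= 8)
    out->push_back(static_cast<char>((v >> shift) & 0xff));
}

static void append_index_id(std::string *out, uint32_t key_id) {
  for (int shift = 24; shift >= 0; shift -= 8)
    out->push_back(static_cast<char>((key_id >> shift) & 0xff));
}

// PK entry: key = {key_id, hidden_primary_key}; the value stores the other columns.
std::string make_hidden_pk_key(uint32_t key_id, uint64_t hidden_pk) {
  std::string key;
  append_index_id(&key, key_id);
  append_be64(&key, hidden_pk);
  return key;
}

// SK entry: key = {key_id, sec_key, hidden_primary_key}; appending the hidden pk
// keeps secondary entries unique and lets a lookup locate the PK row.
std::string make_sk_key(uint32_t key_id, const std::string &packed_sec_key,
                        uint64_t hidden_pk) {
  std::string key;
  append_index_id(&key, key_id);
  key.append(packed_sec_key);
  append_be64(&key, hidden_pk);
  return key;
}
```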

RDBSE_KEYDEF member functions

  • RDBSE_KEYDEF::get_primary_key_tuple()
  • ha_rocksdb::get_row_by_rowid()
    get by hidden pk (see the sketch below)
  • ha_rocksdb::convert_record_from_storage_format()
  • RDBSE_KEYDEF::unpack_record()
    if this table uses a hidden pk, exclude the pk from packed_key, then unpack.
  • RDBSE_KEYDEF::pack_record()
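
A hypothetical sketch of the SK-to-PK path referenced above. These are not the real RDBSE_KEYDEF / ha_rocksdb signatures; it only illustrates the idea that the hidden pk forms the trailing bytes of a secondary-key entry and becomes the rowid for the PK lookup.

```cpp
// Illustration only; names and signatures are assumptions, not MyRocks code.
#include <cstdint>
#include <string>

static constexpr size_t HIDDEN_PK_SIZE = 8;  // assumed 64-bit hidden pk

// The hidden pk sits at the end of an SK entry {key_id, sec_key, hidden_pk},
// so the "primary key tuple" is simply the trailing 8 bytes.
std::string get_primary_key_tuple_sketch(const std::string &sk_entry) {
  return sk_entry.substr(sk_entry.size() - HIDDEN_PK_SIZE);
}

// get_row_by_rowid() would then rebuild the PK lookup key {pk_key_id, hidden_pk}
// and issue a point Get() against the default CF.
std::string make_pk_lookup_key(uint32_t pk_key_id, const std::string &rowid) {
  std::string key;
  for (int shift = 24; shift >= 0; shift -= 8)
    key.push_back(static_cast<char>((pk_key_id >> shift) & 0xff));
  key.append(rowid);
  return key;
}
```
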
@jkedgar
Collaborator

With an auto-increment key you have to store the last (or next) value somewhere and protect it with a mutex when incrementing it, potentially creating a bottleneck. What about using a 64-bit random value instead of an auto-incrementing column? There would be a very small chance of picking a value that is already in use, but if that occurred you would just pick a new value during the insert (a rough sketch follows the pros/cons below).

I'm not sure what our maximum number of rows in the database is, but even at 100 billion rows, only about 0.0000005% of the possible values would be used, keeping the chance of duplicates very small.

The pros:

  • no mutex during insert
  • no special storage for next value

The cons:

  • a 64-bit value per row (although this would also be necessary for the auto-increment method if you can't guarantee that the total number of rows inserted is less than about 4 billion).
  • very, very rarely (almost never - but a very small possibility) the insert would fail because of a duplicate value and a new primary key would need to be generated randomly.
  • testing the code that handles a duplicate primary key on insert would need special consideration (i.e. some way of forcing a duplicate value).
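
For illustration, a rough sketch of the random-pk idea under discussion. key_exists() stands in for a point lookup against the PK column family and is an assumption, not an existing MyRocks interface; a test hook that forces it to return true once would exercise the duplicate-handling path from the last con.

```cpp
// Sketch of the random hidden-pk alternative; key_exists() is a stand-in
// for a Get() against the PK column family, not a real MyRocks call.
#include <cstdint>
#include <functional>
#include <random>

uint64_t pick_random_hidden_pk(const std::function<bool(uint64_t)> &key_exists) {
  static thread_local std::mt19937_64 rng{std::random_device{}()};
  for (;;) {
    const uint64_t candidate = rng();
    // Collisions are astronomically unlikely (~1e11 rows vs 2^64 values),
    // so this loop almost never iterates more than once.
    if (!key_exists(candidate))
      return candidate;
  }
}
```
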
@yoshinorim
Owner
  • Tables without primary keys are relatively rare and not heavily accessed, so it's not worth spending much time on performance here.
  • In InnoDB, the next auto-inc value is kept only in memory. At startup, it is recalculated by reading the table -- max(auto_inc)+1. I think we can take the same approach in MyRocks (see the sketch below). It uses no extra storage; it's less efficient at startup, but that's acceptable since performance is not critical here.
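
A sketch of that max(hidden_pk)+1 calculation at table open, assuming a RocksDB iterator over the table's PK range. index_prefix and index_prefix_successor are hypothetical inputs derived from the table's index id; this is not the actual MyRocks code.

```cpp
// Sketch: recover the next hidden pk at startup by seeking to the last PK
// entry of this table. Assumes the key layout {4-byte key_id, 8-byte be pk}.
#include <cstdint>
#include <memory>
#include "rocksdb/db.h"

uint64_t load_next_hidden_pk(rocksdb::DB *db,
                             const rocksdb::Slice &index_prefix,
                             const rocksdb::Slice &index_prefix_successor) {
  std::unique_ptr<rocksdb::Iterator> it(
      db->NewIterator(rocksdb::ReadOptions()));
  // Position on the last key that sorts before the next index id.
  it->SeekForPrev(index_prefix_successor);
  if (!it->Valid() || !it->key().starts_with(index_prefix))
    return 1;  // table is empty: start hidden pk values at 1
  // The hidden pk is the trailing 8 big-endian bytes of {key_id, hidden_pk}.
  const rocksdb::Slice k = it->key();
  uint64_t max_pk = 0;
  for (size_t i = k.size() - 8; i < k.size(); ++i)
    max_pk = (max_pk << 8) | static_cast<uint8_t>(k[i]);
  return max_pk + 1;
}
```
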
@hermanlee hermanlee referenced this issue in facebook/mysql-5.6
Open

Allow tables without primary key #39

@hermanlee hermanlee closed this