19,000 writes enter, one write leaves #4601
Labels: area/testing/jepsen, kind/bug, status/accepted
What version of Dgraph are you using?
1.1.1
Have you tried reproducing the issue with the latest release?
I'm working on that now; I haven't seen it in 1.1.1-65-g2851e2d9a yet, but it's been difficult to reproduce, so it's hard to tell.
What is the hardware spec (RAM, OS)?
5-node LXC Jepsen cluster, 128GB ECC RAM, 48-way Xeon.
Steps to reproduce the issue (command/config used to run Dgraph).
With Jepsen 3932955ce71dc7a731e9510fd197b2b600d828d4, try
Expected behaviour and actual result.
In the UID-set test, Jepsen creates a schema like `value: [int] .` and inserts a whole bunch of unique triples, each with the same UID, the same predicate, and a unique value. At the end of the test, it tries to read those triples back by querying for every value associated with the chosen UID. In this test run, we inserted 19,030 distinct values. However, when we tried to read those values at the end of the test, we observed only a single value rather than the expected 19,030.
24333 was the most recent successfully inserted value. It appears as if Dgraph has... perhaps lost the schema for the `value` predicate entirely, or somehow overwritten every previous record with a single one? @danielmai suggests this could be due to a bug in posting lists, which may have been fixed in #4574.
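For reference, the shape of the workload looks roughly like the following sketch (the UID `0x2` and the literal values are illustrative placeholders, not the actual test data):

```
# Schema for the multi-valued integer predicate
value: [int] .

# Each write is a single triple: same subject UID, same predicate, unique value
{
  set {
    <0x2> <value> "17" .
  }
}

# Final read: fetch every value associated with the chosen UID
{
  q(func: uid(0x2)) {
    value
  }
}
```

With 19,030 successful inserts, that final read should return 19,030 distinct integers.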