
19,000 writes enter, one write leaves #4601

@aphyr

Description


What version of Dgraph are you using?

1.1.1

Have you tried reproducing the issue with the latest release?

I'm working on that now; I haven't seen it in 1.1.1-65-g2851e2d9a yet, but it's been difficult to reproduce, so it's hard to tell.

What is the hardware spec (RAM, OS)?

5-node LXC Jepsen cluster, 128GB ECC RAM, 48-way Xeon.

Steps to reproduce the issue (command/config used to run Dgraph).

With Jepsen 3932955ce71dc7a731e9510fd197b2b600d828d4, try

lein run test --workload uid-set --time-limit 600 --concurrency 2n --test-count 20

Expected behaviour and actual result.

In the UID-set test, Jepsen creates a schema of value: [int] ., then inserts a whole bunch of unique triples, each with the same UID, the same predicate, and a unique value. At the end of the test, it tries to read those triples back by querying for every value associated with the chosen UID. In this test run, we inserted 19,030 distinct values. However, when we tried to read those values at the end of the test, we observed:

{:q [{:uid 0x1, :value 24333}]}

rather than the expected

{:q [{:uid 0x3, :value [7758 1675 3419 ... <19,000 more elements>]}]}

24333 was the most recent successfully inserted value. It appears as if Dgraph has... perhaps lost the schema for the value predicate entirely, or somehow overwritten every previous record with a single one?
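For context, the test's write and read paths reduce to roughly the following Dgraph operations (a hand-written sketch, not the actual Jepsen code; the UID 0x3 and the sample values are taken from the expected output above):

# Schema: value is a list of ints, so writes should accumulate
value: [int] .

# One mutation per insert, as RDF N-Quads (the test performs ~19,000 of these,
# each in its own transaction with a unique value)
{
  set {
    <0x3> <value> "7758" .
    <0x3> <value> "1675" .
  }
}

# Final read: should return every value associated with the UID
{
  q(func: uid(0x3)) {
    uid
    value
  }
}

Because the schema declares value as a [int] list, each write should add to the posting list for the UID/predicate pair rather than replace it; the single-value result reads as if the predicate had been treated as a scalar.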

@danielmai suggests this could be due to a bug in posting lists, which may have been fixed in #4574.
