
HDDS-5732 datanode duplicate write when concurrent write #2628

Closed
cchenax wants to merge 8 commits into apache:master from cchenax:HDDS-5732


Conversation

@cchenax (Contributor) commented Sep 9, 2021

What changes were proposed in this pull request?

When writes are concurrent, the chunk with {offset = 8192000, chunk size = 4096000} can land in the chunk file before the chunk with {offset = 4096000, chunk size = 4096000} has been written. When that earlier chunk then arrives, the datanode prints a "duplicate write" warning, even though the chunk was never written before.
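A minimal standalone sketch of the false alert described above (the class and the length-based check `looksLikeOverwrite` are hypothetical illustrations, not the actual Ozone code): the later chunk at offset 8192000 is written first, extending the file past 4096000, so a file-length check wrongly flags the never-written earlier chunk as a duplicate.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class SparseOverwriteDemo {
    // Hypothetical length-based overwrite check, for illustration only.
    static boolean looksLikeOverwrite(File chunkFile, long offset) {
        return chunkFile.length() > offset;
    }

    public static void main(String[] args) throws IOException {
        File chunkFile = File.createTempFile("chunk", ".data");
        chunkFile.deleteOnExit();

        // The chunk at offset 8192000 arrives first and is written at its
        // offset, leaving a sparse hole where the earlier chunk belongs.
        try (FileOutputStream out = new FileOutputStream(chunkFile)) {
            out.getChannel().position(8192000L);
            out.write(new byte[4096000]);
        }

        // The chunk at offset 4096000 now arrives; the length check wrongly
        // reports an overwrite, although that chunk was never written.
        System.out.println(chunkFile.length());                      // 12288000
        System.out.println(looksLikeOverwrite(chunkFile, 4096000L)); // true
    }
}
```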

What is the link to the Apache JIRA

https://issues.apache.org/jira/browse/HDDS-5732

How was this patch tested?

ci

long offset = chunkInfo.getOffset();
try {
FileInputStream inputStream = new FileInputStream(chunkFile);
inputStream.getChannel().position(offset);
Member

Can we use chunkFile.length() instead of seek and read?

Contributor Author

chunkFile.length() depends on the offset.

Member

If my understanding is correct, you are trying to solve sparse file "overwrite" problem.

But this code still prints true.

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    public static void main(String[] args) {
        File f = new File("/tmp/hello");
        try (FileOutputStream out = new FileOutputStream(f);
             FileInputStream in = new FileInputStream(f)) {

            out.getChannel().position(8 << 10); // 8KB
            out.write("hello, world".getBytes(StandardCharsets.UTF_8));

            System.out.println(f.length()); // prints 8204 (= 8192 + 12)

            in.getChannel().position(4 << 10); // 4KB
            if (in.read() == -1) { // read() = 0: the sparse hole reads as zeros, not EOF
                System.out.println(false);
            } else {
                System.out.println(true); // prints true
            }
        } catch (IOException e) {
            System.out.println(e);
        }
    }

Contributor Author

Ozone writes the contents of the buffer from the current position to the limit.
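For context, this is the standard NIO buffer contract (a general Java sketch, not Ozone code): FileChannel.write(ByteBuffer) consumes exactly the bytes between the buffer's position and its limit.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class BufferWriteDemo {
    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("buf", ".data");
        p.toFile().deleteOnExit();

        ByteBuffer buf = ByteBuffer.wrap("0123456789".getBytes());
        buf.position(2); // first two bytes are "behind" the position
        buf.limit(7);    // bytes at index 7..9 are beyond the limit

        try (FileChannel ch = FileChannel.open(p, StandardOpenOption.WRITE)) {
            int written = ch.write(buf); // writes only position..limit
            System.out.println(written); // 5
        }
        System.out.println(new String(Files.readAllBytes(p))); // 23456
    }
}
```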

Member

Can you give an example of inputs for which your code and the original code return different results?

@bshashikant
Contributor

Thanks @cchenax for the patch. To check the overwrite flag, seeking and then trying to read may not be a viable option: with sparse files, the read should return 0s, not EOF. Do we still see concurrent writes for the same block?

@bshashikant
Contributor

@cchenax , any update?

@ChenSammi
Contributor

ChenSammi commented Oct 8, 2021

Hi @bshashikant , the issue is a false alert. I will close the PR.

@ChenSammi ChenSammi closed this Oct 8, 2021


4 participants