
Put group of entries together to journal queue copy 4 #8

horizonzy
No description provided.

codelipenghui and others added 29 commits March 7, 2023 08:07
* Make read entry request recyclable

* Move recycle to finally block

* Fix test and comments

* Fix test
* Avoid unnecessary force write.

* code clean.

* fix style
---

### Motivation

The running tests job name doesn't match the tests. Correct
the job name.
### Motivation
When using the bkperf command `bin/bkperf journal append -j data -n 100000000 --sync true` to test the BookKeeper journal performance, it failed with the following exception
```
[0.002s][error][logging] Error opening log file '/Users/hangc/Downloads/tmp/tc/batch/ta/bookkeeper-all-4.16.0-SNAPSHOT/logs/bkperf-gc.log': No such file or directory
[0.002s][error][logging] Initialization of output 'file=/Users/hangc/Downloads/tmp/tc/batch/ta/bookkeeper-all-4.16.0-SNAPSHOT/logs/bkperf-gc.log' using options 'filecount=5,filesize=64m' failed.
Invalid -Xlog option '-Xlog:gc=info:file=/Users/hangc/Downloads/tmp/tc/batch/ta/bookkeeper-all-4.16.0-SNAPSHOT/logs/bkperf-gc.log::filecount=5,filesize=64m', see error log for details.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
```

The root cause is that the `logs` directory was not created.

### Modifications
Create the `logs` directory before bkperf starts.
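The actual fix belongs in the bkperf launcher script (a `mkdir -p` before the JVM is started); the same guard, sketched in Java for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class EnsureLogsDir {
    // Ensure the logs directory exists before launching, so the JVM's
    // -Xlog:gc=info:file=.../logs/bkperf-gc.log option has a valid target.
    static Path ensureLogsDir(Path baseDir) throws IOException {
        Path logs = baseDir.resolve("logs");
        // createDirectories is a no-op if the directory already exists
        return Files.createDirectories(logs);
    }

    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("bkperf-demo");
        Path logs = ensureLogsDir(base);
        System.out.println(Files.isDirectory(logs));
    }
}
```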
…pache#3857)

* Fix flaky test in testRaceGuavaEvictAndReleaseBeforeRetain

* format code
---

### Motivation

Update website to 4.15.4
…ns (apache#3860)

### Motivation
After PR apache#3056, BookKeeper set `level_compaction_dynamic_level_bytes=true` under `TableOptions` in `entry_location_rocksdb.conf.default`. Placed there, the option has no effect, and losing it causes chaotic `.sst` file compaction ordering when a bookie is upgraded.
Per the RocksDB option-file format, `level_compaction_dynamic_level_bytes` must be set under `CFOptions`: https://github.com/facebook/rocksdb/blob/master/examples/rocksdb_option_file_example.ini

<img width="703" alt="image" src="https://user-images.githubusercontent.com/84127069/224640399-d5481fe5-7b75-4229-ac06-3d280aa9ae6d.png">


<img width="240" alt="image" src="https://user-images.githubusercontent.com/84127069/224640621-737d0a42-4e01-4f38-bd5a-862a93bc4b32.png">

### Changes

1. Change `level_compaction_dynamic_level_bytes=true` from `TableOptions` to `CFOptions`  in `entry_location_rocksdb.conf.default` ;
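For illustration, the corrected placement in the RocksDB option-file format would look roughly like this (the column-family name `"default"` is an assumption; see the linked example file for the full layout):

```ini
[CFOptions "default"]
  level_compaction_dynamic_level_bytes=true
```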
---

### Motivation

Release note for 4.15.4
)

### Motivation
After a bookie instance has been running for a long time, the entry location index RocksDB `.sst` files for a single ledger data dir may in some cases grow to 20-30GB, which makes RocksDB scan operations slower and can cause bookie client request timeouts.

Add a REST API that triggers entry location RocksDB compaction and reports the compaction status.

A full-range compaction of the entry location index drives up I/O utilization on the index dir, so it is better to trigger the compaction through the API during periods of low traffic.

**Some case before rocksDB compact:**
<img width="232" alt="image" src="https://user-images.githubusercontent.com/84127069/220893469-e6fbc1a3-c767-4ffe-8ae9-f05ad1833c50.png">


<img width="288" alt="image" src="https://user-images.githubusercontent.com/84127069/220891359-dc37e139-37b0-461b-8001-dcc48517366c.png">

**After rocksDB compact:**
<img width="255" alt="image" src="https://user-images.githubusercontent.com/84127069/220891419-24267fa7-348c-4fbd-8b3e-70a99840bce5.png">

### Changes
1. Add REST API to trigger entry location index RocksDB compaction.
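As a rough sketch of how a client might call such an endpoint, the snippet below builds the HTTP request with the JDK's `java.net.http` client. The endpoint path and JSON body are assumptions for illustration; check the bookie REST API docs for the exact contract.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class TriggerCompaction {
    // Build a PUT request against the bookie HTTP server to trigger entry
    // location RocksDB compaction. Path and body are illustrative only.
    static HttpRequest buildTrigger(String host, int port) {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://" + host + ":" + port
                        + "/api/v1/bookie/entry_location_compact"))
                .PUT(HttpRequest.BodyPublishers.ofString(
                        "{\"entryLocationRocksDBCompact\": true}"))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildTrigger("localhost", 8080);
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Sending the request (and polling the same endpoint with GET for status) is left out so the sketch stays runnable without a live bookie.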
…pache#3794)

### Motivation
1. Pick the higher leak detection level between Netty and BookKeeper.
2. Make the BookKeeper leak detection value matching case-insensitive.

There is a detailed discussion on the mailing list: https://lists.apache.org/thread/d3zw8bxhlg0wxfhocyjglq0nbxrww3sg
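The two rules above can be sketched as follows. The enum mirrors Netty's `ResourceLeakDetector.Level` values in increasing strictness; the method name and structure are illustrative, not the actual BookKeeper code.

```java
import java.util.Locale;

public class LeakDetectionConfig {
    // Netty's leak detection levels, ordered by increasing strictness.
    enum Level { DISABLED, SIMPLE, ADVANCED, PARANOID }

    // Parse both settings case-insensitively and keep the stricter one.
    static Level pickStricter(String nettyValue, String bkValue) {
        Level netty = Level.valueOf(nettyValue.trim().toUpperCase(Locale.ROOT));
        Level bk = Level.valueOf(bkValue.trim().toUpperCase(Locale.ROOT));
        return netty.ordinal() >= bk.ordinal() ? netty : bk;
    }

    public static void main(String[] args) {
        // mixed-case input is accepted; the stricter level wins
        System.out.println(pickStricter("simple", "Paranoid"));
    }
}
```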
### Motivation

There are two reasons why we want to disable code coverage reporting:

1. The current report results are not accurate.
2. We can't get a PR's unit test coverage because of the Apache Codecov permissions.
### Motivation
When we use `TransactionalEntryLogCompactor` to compact entry log files, it generates many small entry log files. For those files, the file usage is usually greater than 90%, so they cannot be compacted again unless their usage decreases.

![image](https://user-images.githubusercontent.com/5436568/201135615-4d6072f5-e353-483d-9afb-48fad8134044.png)


### Changes
We introduce an entry log file size check during compaction, controlled by `gcEntryLogSizeRatio`.
If the total entry log file size is less than `gcEntryLogSizeRatio * logSizeLimit`, the entry log file is compacted even though its usage is greater than 90%. This feature is disabled by default: `gcEntryLogSizeRatio` defaults to `0.0`.
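A minimal sketch of that check (names are illustrative, not the actual BookKeeper internals); with the default ratio of `0.0` the check never fires, i.e. the feature stays off:

```java
public class SizeBasedCompactionCheck {
    // Compact small entry logs by total size even when their usage
    // exceeds the normal usage-based threshold.
    static boolean shouldCompactBySize(long totalEntryLogSize,
                                       double gcEntryLogSizeRatio,
                                       long logSizeLimit) {
        if (gcEntryLogSizeRatio <= 0.0) {
            return false; // disabled by default
        }
        return totalEntryLogSize < (long) (gcEntryLogSizeRatio * logSizeLimit);
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        // 10MB of small logs against a 1GB limit with ratio 0.5 -> compact
        System.out.println(shouldCompactBySize(10 * mb, 0.5, 1024 * mb));
        // ratio 0.0 (the default) -> feature disabled
        System.out.println(shouldCompactBySize(10 * mb, 0.0, 1024 * mb));
    }
}
```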
… check task (apache#3818)

### Motivation

Fixes apache#3817 

For details, see: apache#3817 

### Changes

When there is an `auditTask` during the `lostBookieRecoveryDelay` delay, other detection tasks should be skipped.
Co-authored-by: zengqiang.xu <zengqiang.xu@shopee.com>
* Fix compaction threshold precision problem.

* Fix compaction threshold precision problem.
* Single buffer for small add requests

* Fixed checkstyle

* Fixed treating of CompositeByteBuf

* Fixed merge issues

* Fixed merge issues

* WIP

* Fixed test and removed dead code

* Removed unused import

* Fixed BookieJournalTest

* removed unused import

* fix the checkstyle

* fix failed test

* fix failed test

---------

Co-authored-by: chenhang <chenhang@apache.org>
* Add log for entry log file delete.

* add log info.

* Address the comment.

* Address the comment.

* revert the code.
Descriptions of the changes in this PR:
This is an improvement for apache#3837

### Motivation
1. Currently, once `maxPendingResponsesSize` grows large, it never decreases. => We should make it flexible so it can shrink again.
2. Currently, after `prepareSendResponseV2` writes to a channel, we trigger every channel to flush its pendingSendResponses. Often only a few channels actually have something to flush, so triggering all of them is wasteful. => Only flush the channels that `prepareSendResponseV2` actually wrote to.
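The second change can be sketched like this (a simplified stand-in, not the actual BookKeeper/Netty classes): buffer responses per channel and flush only the channels that buffered something.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

public class PendingResponseFlusher {
    // channel id -> buffered V2 responses awaiting a flush
    private final Map<String, Queue<String>> pending = new HashMap<>();

    // Buffer a response and remember which channel it belongs to.
    void prepareSendResponseV2(String channel, String response) {
        pending.computeIfAbsent(channel, c -> new ArrayDeque<>()).add(response);
    }

    // Flush only the channels that actually have buffered responses,
    // instead of iterating every connected channel. Returns the number
    // of channels flushed.
    int flushPreparedChannels() {
        int flushed = 0;
        for (Queue<String> q : pending.values()) {
            if (!q.isEmpty()) {
                q.clear(); // stand-in for channel.flush()
                flushed++;
            }
        }
        pending.clear();
        return flushed;
    }

    public static void main(String[] args) {
        PendingResponseFlusher f = new PendingResponseFlusher();
        f.prepareSendResponseV2("ch-1", "resp-a");
        f.prepareSendResponseV2("ch-1", "resp-b");
        f.prepareSendResponseV2("ch-2", "resp-c");
        // only ch-1 and ch-2 are touched, no matter how many channels exist
        System.out.println(f.flushPreparedChannels());
    }
}
```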
…ies-together-to-journal-queue

# Conflicts:
#	bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/Journal.java
@merlimat merlimat merged commit 6fcf22c into merlimat:put-group-of-entries-together-to-journal-queue Mar 21, 2023