on-disk size always 0 #81
I tried isolating the issue further by reducing my script to this:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"log"

	"github.com/dgraph-io/badger/badger"
)

func main() {
	// connection
	opt := badger.DefaultOptions
	opt.Dir = "/tmp/leveldb"
	opt.ValueDir = opt.Dir
	db, err := badger.NewKV(&opt)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// writes
	for i := 0; i < 10000; i++ {
		key := make([]byte, 8)
		binary.BigEndian.PutUint64(key, uint64(i))
		// write to db
		if err := db.Set(key, key); err != nil {
			log.Println("write error", err)
		}
	}

	// fetch one key
	keyOne := make([]byte, 8)
	binary.BigEndian.PutUint64(keyOne, uint64(1))

	// read from db
	var item badger.KVItem
	if err := db.Get(keyOne, &item); err != nil {
		log.Println("read error", err)
	}

	// debug
	data := item.Value()
	fmt.Println("val", data)
}
```

it seems like there are two issues:
I think the issue with my original script is that I got tired of waiting and cancelled the job: with leveldb it takes about 1.5s, and with badger I waited about 15s before giving it the 'ol Ctrl+C.
is it possible that I'm getting poor performance because I'm using an ext4 filesystem?
You should batch your writes, and do them concurrently, to get the best performance out of Badger. Serial one-by-one writes are slow.

Also, by default writes are not synced, which is why you don't see much on disk until close.
…On Jun 28, 2017 11:14 PM, "Peter Johnson" ***@***.***> wrote:

> is it possible that I'm getting poor performance because I'm using an ext4 filesystem?
Let me know if there's something more I can help you with.
hey @manishrjain, thanks for your reply. I can re-jig my code to use batched writes instead of single writes, but the issue remains that, comparing apples to apples with leveldb, badger is much slower. Are you suggesting that badger's batch mode is much faster than leveldb's batch mode?
I haven't tried running any benchmarks against leveldb. Also, serial writes in a single goroutine aren't something that we have optimized for. Most practical applications which care about performance would do batched writes, concurrently.
heya, I'm trying out badger as an alternative to leveldb. When I ported my code over, I found that there is no data being synced to disk:

```
total 252K
drwxrwxr-x   2 peter peter 4.0K Jun 28 14:28 .
drwxrwxrwt 710 root  root  244K Jun 28 14:28 ..
-rw-rw-r--   1 peter peter    0 Jun 28 14:28 000000.vlog
-rw-rw-r--   1 peter peter    0 Jun 28 14:28 clog
```

note that the `log.Println("wrote", item.ID)` line gets called for each item and no error is produced for `log.Println("write error", err)`. I have tested the encoding/decoding logic and it works fine with leveldb, so don't worry about that.

any help would be appreciated :)