DB.defrag(false) ALWAYS increases the size of i.0 file #104

Open

krajiv opened this issue May 27, 2014 · 1 comment

krajiv commented May 27, 2014

I am trying to use JDBM to store/delete temporary objects. As the number of objects increased, I noticed that the d.0 file size grew, so I used the defrag method in the DB class. After calling it, the d.0 file shrank, but the i.0 file grew. This happens every time I call defrag.

I wrote a test program that performs a defrag with no data and noticed that on every invocation of the API, the size of i.0 increases by 4K.

Below are the test program and its output:

package com.test;

import java.util.concurrent.ConcurrentNavigableMap;

import org.apache.jdbm.DB;
import org.apache.jdbm.DBMaker;

public class DefragTest {

    private DB m_db = null;
    private ConcurrentNavigableMap<String, String> treeMap = null;
    private int testCount = 5;

    public DefragTest() {
        m_db = DBMaker.openFile("DefragTest").make();
        // create the map on first run
        treeMap = m_db.getTreeMap("DefragMap");
        if (treeMap == null) {
            treeMap = m_db.createTreeMap("DefragMap", null, null, null);
        }
    }

    public void runTest() {
        System.out.println("====================== Initial Statistics ======================\n"
                + m_db.calculateStatistics());
        for (int i = 0; i < testCount; i++) {
            System.out.println("--> Performing Defrag : Count " + (i + 1));
            m_db.defrag(false);
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                // ignore
                e.printStackTrace();
            }
        }
        System.out.println("====================== Final Statistics ======================\n"
                + m_db.calculateStatistics());
    }

    public static void main(String[] args) {
        DefragTest defragTest = new DefragTest();
        defragTest.runTest();
    }
}

====================== Initial Statistics ======================
PAGES:
0 used pages with size 0B
1 record translation pages with size 4096B
0 free (unused) pages with size 0B
0 free (phys) pages with size 0B
0 free (logical) pages with size 0B
Total number of pages is 1 with size 4096B
RECORDS:
Contains 0 records and 675 free slots.
Total space occupied by data is 0B
Average data size in record is 0B
Maximal data size in record is 0B
Space wasted in record fragmentation is 0B
Maximal space wasted in single record fragmentation is 0B

--> Performing Defrag : Count 1
--> Performing Defrag : Count 2
--> Performing Defrag : Count 3
--> Performing Defrag : Count 4
--> Performing Defrag : Count 5
====================== Final Statistics ======================
PAGES:
1 used pages with size 4096B
6 record translation pages with size 24KB
0 free (unused) pages with size 0B
0 free (phys) pages with size 0B
0 free (logical) pages with size 0B
Total number of pages is 7 with size 28KB
RECORDS:
Contains 5 records and 4075 free slots.
Total space occupied by data is 68B
Average data size in record is 14B
Maximal data size in record is 33B
Space wasted in record fragmentation is 0B
Maximal space wasted in single record fragmentation is 0B
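The growth lines up with the page counts reported above: each defrag run appears to add one 4096-byte record translation page, taking the file from one translation page (4096B) to six (24KB) after five runs. A quick arithmetic sanity check of those numbers (the 4096-byte page size is taken from the statistics, not from the JDBM source):

```java
public class TranslationPageGrowth {
    public static void main(String[] args) {
        final int PAGE_SIZE = 4096;   // page size reported in the statistics
        int initialPages = 1;         // "1 record translation pages" before the test
        int defragRuns = 5;           // testCount in the program above
        // observed behavior: one extra translation page per defrag invocation
        int finalPages = initialPages + defragRuns;
        System.out.println(finalPages + " pages = " + (finalPages * PAGE_SIZE) + "B");
        // prints "6 pages = 24576B", matching "6 record translation pages with size 24KB"
    }
}
```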

jankotek (Owner) commented

Sorry, JDBM3 is unmaintained. Please use MapDB (aka JDBM4).
