MeGysssTaa/lvq4j
About

LVQ4J is a basic implementation of the LVQ (Learning Vector Quantization) prototype-based supervised classification algorithm written in Java, together with an accompanying library that simplifies its setup and use.

Heads up

I am not a professional data scientist in any way. I created this library solely for my own small-scale research in the machine learning field, based on publicly available papers, articles, and tutorials.

For this reason, I cannot guarantee that this implementation is 100% accurate and will always work as expected. Use LVQ4J in your projects at your own risk.

Contributing

Why LVQ4J?

The main intention of LVQ4J is to provide a simple, user-friendly, and, most importantly, lightweight API for creating, training, and using Learning Vector Quantization models for classification (prediction) purposes. It may not be as optimized, as fast, or as powerful as other libraries, but it is a solid starting point for data science beginners. The codebase is small, easy to understand, and well-documented.

If you are looking for a robust and/or GPU-optimized machine learning library, this is not the right place. But if you are a data science newcomer who would like to get started with LVQ, then you will probably love this library.

Features

  • Basic implementation of the LVQ model in pure Java;
  • a variety of built-in input normalization functions;
  • several premade weights initialization strategies;
  • many default distance metrics;
  • comparably high level of abstraction for beginners, yet with deep access to the neural network at its lowest level for experienced users;
  • LVQ4J is extremely lightweight — the library itself is small, and its only dependency is SLF4J (log4j2), which is not even required thanks to a default fallback logger implementation.
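To give a flavor of what two of these building blocks typically compute, here is a plain-Java sketch of a common distance metric and a common normalization function. These are illustrative implementations of the general concepts, not LVQ4J's own classes (the class and method names below are made up for the example):

```java
public final class LvqMathSketch {

    // Euclidean distance between an input vector and a prototype
    // (weight) vector -- a typical default distance metric for LVQ.
    public static double euclideanDistance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Min-max normalization: rescales each component into [0, 1]
    // given the feature's observed minimum and maximum.
    public static double[] minMaxNormalize(double[] v, double min, double max) {
        double[] out = new double[v.length];
        for (int i = 0; i < v.length; i++)
            out[i] = (v[i] - min) / (max - min);
        return out;
    }
}
```

Normalizing inputs before training matters for LVQ because distance metrics are scale-sensitive: a feature measured in thousands would otherwise dominate one measured in fractions.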

LVQ vs k-NN vs Deep Learning

In a nutshell, LVQ is an "eagerly learning" variant of k-NN. LVQ is a neural network, whereas k-NN is not. Training an LVQ model takes a while, but its predictions are much faster than those of k-NN, which, due to its "lazy" learning nature, has to do CPU-heavy work on every single classification. Moreover, LVQ can reach accuracy similar to that of k-NN with a significantly smaller amount of training data.

Nevertheless, LVQ is still one of the simplest neural network algorithms. In most cases its main advantage over more sophisticated approaches, such as deep neural networks (e.g. RNNs) or SVMs, is that it is very easy to implement and set up for immediate use. Compared to other neural networks, working with an LVQ model does not require much specialized knowledge or experience.
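To make the "easy to implement" claim concrete, here is a minimal sketch of the core LVQ1 training step in plain Java. This is an illustration of the general algorithm, not LVQ4J's actual API (the class, method, and parameter names are invented for the example): the best-matching prototype is pulled toward the sample if their classes match, and pushed away otherwise.

```java
public final class Lvq1Sketch {

    // One LVQ1 training step: find the nearest prototype by squared
    // Euclidean distance, then move it toward the sample (same class)
    // or away from it (different class).
    public static void trainStep(double[][] prototypes, int[] protoLabels,
                                 double[] sample, int sampleLabel,
                                 double learningRate) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;

        for (int p = 0; p < prototypes.length; p++) {
            double dist = 0.0;
            for (int i = 0; i < sample.length; i++) {
                double d = sample[i] - prototypes[p][i];
                dist += d * d;
            }
            if (dist < bestDist) {
                bestDist = dist;
                best = p;
            }
        }

        // Attract the winner on a class match, repel it on a mismatch.
        double sign = (protoLabels[best] == sampleLabel) ? 1.0 : -1.0;
        for (int i = 0; i < sample.length; i++)
            prototypes[best][i] += sign * learningRate
                    * (sample[i] - prototypes[best][i]);
    }
}
```

Repeating this step over the training set (with a decaying learning rate) is essentially the whole training loop, which is why LVQ is considered one of the most approachable neural network algorithms.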

Usage

Maven

<repositories>
    <repository>
        <id>reflex.public</id>
        <name>Public Reflex Repository</name>
        <url>https://archiva.reflex.rip/repository/public/</url>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>me.darksidecode.lvq4j</groupId>
        <artifactId>lvq4j</artifactId>
        <version>1.2.1</version>
    </dependency>
</dependencies>

Gradle

repositories {
    maven {
        name 'Public Reflex Repository'
        url 'https://archiva.reflex.rip/repository/public/'
    }
}

dependencies {
    implementation group: 'me.darksidecode.lvq4j', name: 'lvq4j', version: '1.2.1'
}

Examples

Using LVQ4J in a project of your own? Want it to be listed here? Feel free to make a pull request!

Building

git clone https://github.com/MeGysssTaa/lvq4j
cd lvq4j
./gradlew build

License

Apache License 2.0