Commit

Merge pull request #116 from AP6YC/feature/readme-updates
Feature/readme updates
AP6YC committed Jan 31, 2023
2 parents e2a4d6f + 8c8570f commit 0c4e058
Showing 4 changed files with 76 additions and 33 deletions.
70 changes: 44 additions & 26 deletions README.md
@@ -61,6 +61,7 @@ Please read the [documentation](https://ap6yc.github.io/AdaptiveResonance.jl/dev
- [Contributing](#contributing)
- [Acknowledgements](#acknowledgements)
- [Authors](#authors)
- [Support](#support)
- [History](#history)
- [Software](#software)
- [Datasets](#datasets)
@@ -80,8 +81,8 @@ Detailed usage and examples are provided in the [documentation](https://ap6yc.gi

### Installation

This project is distributed as a [Julia](https://julialang.org/) package, available on [JuliaHub](https://juliahub.com/), so you must first [install Julia](https://julialang.org/downloads/) on your system.
Its usage follows the usual [Julia package installation procedure](https://docs.julialang.org/en/v1/stdlib/Pkg/), interactively:

```julia-repl
julia> ]
@@ -122,7 +123,7 @@ You can pass module-specific options during construction with keyword arguments
art = DDVFA(rho_ub=0.75, rho_lb=0.4)
```
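The keyword-options pattern above can be sketched in plain Julia. The following is an illustrative stand-in for the package's options mechanism, not its actual implementation — the `OptsDDVFA` struct and its default values here are assumptions for demonstration:

```julia
# Illustrative sketch of the keyword-options pattern: a struct holds
# hyperparameters with defaults that can be overridden by keyword arguments.
Base.@kwdef struct OptsDDVFA
    rho_lb::Float64 = 0.7   # lower-bound vigilance (illustrative default)
    rho_ub::Float64 = 0.85  # upper-bound vigilance (illustrative default)
end

# Override only the options you care about, as with `DDVFA(rho_ub=0.75, rho_lb=0.4)`.
opts = OptsDDVFA(rho_ub=0.75, rho_lb=0.4)
println(opts.rho_ub)  # prints 0.75
```

Any option not supplied keeps its default, which is why constructors like `DDVFA()` work with no arguments at all.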

For more advanced users, options for the modules are contained in [`Parameters.jl`](https://github.com/mauro3/Parameters.jl) structs.
These options can be passed as keyword arguments before instantiating the model:

```julia
@@ -179,24 +180,35 @@ This project has implementations of the following ART (unsupervised) and ARTMAP
- ARTMAP
- **[`SFAM`][4]**: Simplified Fuzzy ARTMAP
- **[`FAM`][5]**: Fuzzy ARTMAP

Because each of these modules is a framework for many variants in the literature, this project also implements these [variants](https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/modules/) by changing their module [options](https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/guide/#art_options).
Variants built upon these modules are:

- ART
- **[`GammaNormalizedFuzzyART`][7]**: Gamma-Normalized FuzzyART (variant of FuzzyART).
- ARTMAP
- **[`DAM`][6]**: Default ARTMAP (variant of SFAM).

[1]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.FuzzyART
[2]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.DVFA
[3]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.DDVFA
[4]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.SFAM
[5]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.FAM

[6]: https://ap6yc.github.io/AdaptiveResonance.jl/stable/man/full-index/#AdaptiveResonance.DAM-Tuple{opts_SFAM}
[7]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.GammaNormalizedFuzzyART-Tuple{opts_FuzzyART}

In addition to these modules, this package contains the following accessory methods:

- [**ARTSCENE**][21]: the ARTSCENE algorithm's multiple-stage filtering process is implemented as [`artscene_filter`][21]. Each filter stage is also implemented internally in case further granularity is required.
- [**performance**][22]: classification accuracy is implemented as [`performance`][22].
- [**complement_code**][23]: complement coding is implemented with [`complement_code`][23].
However, training and classification methods complement code their inputs unless they are passed `preprocessed=true`, indicating to the model that this step has already been done.
- [**linear_normalization**][24]: the first step to complement coding, [`linear_normalization`][24] normalizes input arrays within `[0, 1]`.

[21]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.artscene_filter-Union{Tuple{Array{T,%203}},%20Tuple{T}}%20where%20T%3C:AbstractFloat
[22]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.performance-Tuple{AbstractVector{T}%20where%20T%3C:Integer,%20AbstractVector{T}%20where%20T%3C:Integer}
[23]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.complement_code-Tuple{AbstractArray{T}%20where%20T%3C:Real}
[24]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.linear_normalization-Tuple{AbstractMatrix{T}%20where%20T%3C:Real}
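To illustrate what these two preprocessing steps do, here is a hand-rolled sketch in plain Julia. This is a simplification for intuition only — the package's exported `linear_normalization` and `complement_code` operate on matrices of samples and carry additional options:

```julia
# Sketch of the two preprocessing steps on a single feature vector.

# Linearly rescale a vector's elements into [0, 1].
function linear_normalization(x::AbstractVector{<:Real})
    lo, hi = minimum(x), maximum(x)
    return (x .- lo) ./ (hi - lo)
end

# Complement coding: append the elementwise complement, doubling the dimension.
complement_code(x::AbstractVector{<:Real}) = vcat(x, 1 .- x)

x = [2.0, 5.0, 8.0]
xn = linear_normalization(x)   # [0.0, 0.5, 1.0]
xc = complement_code(xn)       # [0.0, 0.5, 1.0, 1.0, 0.5, 0.0]
```

A useful property visible in the sketch: every complement-coded vector has the same 1-norm (here, `sum(xc) == 3`), which is what stabilizes fuzzy ART learning.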

### Contributing

@@ -208,18 +220,29 @@ In summary:
1. Questions and requested changes should all be made in the [issues][issues-url] page.
These are preferred because they are publicly viewable and could assist or educate others with similar issues or questions.
2. For changes, this project accepts pull requests (PRs) from `feature/<my-feature>` branches onto the `develop` branch using the [GitFlow](https://nvie.com/posts/a-successful-git-branching-model/) methodology.
If unit tests pass and the changes are beneficial, these PRs are merged into `develop` and eventually folded into versioned releases through a `release` branch that is merged into the `master` branch.
3. The project follows the [Semantic Versioning](https://semver.org/) convention of `major.minor.patch` incremental versioning numbers.
Patch versions are for bug fixes, minor versions are for backward-compatible changes, and major versions are for new and incompatible usage changes.

## Acknowledgements

### Authors

This package is developed and maintained by [Sasha Petrenko](https://github.com/AP6YC) with sponsorship by the [Applied Computational Intelligence Laboratory (ACIL)](https://acil.mst.edu/).
The users [@aaronpeikert](https://github.com/aaronpeikert), [@hayesall](https://github.com/hayesall), and [@markNZed](https://github.com/markNZed) have graciously contributed their time with reviews and feedback that have greatly improved the project.

### Support

This project is supported by grants from the [Night Vision Electronic Sensors Directorate](https://c5isr.ccdc.army.mil/inside_c5isr_center/nvesd/), the [DARPA Lifelong Learning Machines (L2M) program](https://www.darpa.mil/program/lifelong-learning-machines), [Teledyne Technologies](http://www.teledyne.com/), and the [National Science Foundation](https://www.nsf.gov/).
The material, findings, and conclusions here do not necessarily reflect the views of these entities.

Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-22-2-0209.
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government.
The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

### History

@@ -229,6 +252,8 @@ The users [@aaronpeikert](https://github.com/aaronpeikert), [@hayesall](https://
- 10/13/2021 - Initiate GitFlow contribution.
- 5/4/2022 - [Acceptance to JOSS](https://doi.org/10.21105/joss.03671).
- 10/11/2022 - v0.6.0
- 12/15/2022 - v0.7.0
- 1/30/2023 - v0.8.0

### Software

@@ -258,17 +283,10 @@ The code in this repository is inspired by the following repositories:

Boilerplate clustering datasets are periodically used to test, verify, and provide examples of the functionality of the package.

1. [UCI machine learning repository](http://archive.ics.uci.edu/ml)
2. [Fundamental Clustering Problems Suite (FCPS)](https://www.uni-marburg.de/fb12/arbeitsgruppen/datenbionik/data?language_sync=1)
3. [Nejc Ilc's unsupervised datasets package](https://www.researchgate.net/publication/239525861_Datasets_package)
4. [Clustering basic benchmark](http://cs.uef.fi/sipu/datasets)

### License

6 changes: 3 additions & 3 deletions docs/src/getting-started/whatisart.md
@@ -11,7 +11,7 @@ Pioneered by Stephen Grossberg and Gail Carpenter, the field has had contributio

Because of the high degree of interplay between the neurocognitive theory and the engineering models born of it, the term ART is frequently used to refer to both in the modern day (for better or for worse).

Stephen Grossberg has recently released [Conscious Mind, Resonant Brain](https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552), a book summarizing the work on Adaptive Resonance Theory by himself, his wife and colleague Gail Carpenter, and their many other colleagues.

## ART Basics

@@ -43,7 +43,7 @@ In addition to the dynamics typical of an ART model, you must know:
ART modules are used for reinforcement learning by representing the mappings between state, value, and action spaces with ART dynamics.
3. Almost all ART models face the problem of selecting an appropriate vigilance parameter, whose optimal value depends on the problem at hand.
4. Being a class of neurocognitive neural network models, ART models gain theoretically infinite capacity along with the problem of "category proliferation," which is the undesirable increase in the number of categories as the model continues to learn, leading to increasing computational time.
In contrast, while the evaluation time of a fixed-architecture deep neural network is always *exactly the same*, there exist upper bounds on its representational capacity.
5. Nearly every ART model requires feature normalization (i.e., feature elements lying within $$[0,1]$$) and a process known as complement coding, where the feature vector is appended with its elementwise complement $$1-\bar{x}$$ (i.e., $$\bar{x} \rightarrow [\bar{x}, 1-\bar{x}]$$).
This is because real-valued vectors can be arbitrarily close to one another, which hinders learning performance; a degree of contrast enhancement between samples is required to ensure their separation.
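The vigilance mechanism from point 3 can be illustrated with a toy sketch in plain Julia. This is a hand-rolled simplification, not the package's code: the fuzzy-ART match value here uses the elementwise minimum (fuzzy AND) and 1-norms, and the weights and threshold are made-up values:

```julia
# Fuzzy-ART style match: |x ∧ w| / |x|, with ∧ the elementwise minimum.
match_score(x, w) = sum(min.(x, w)) / sum(x)

# Complement-coded input and two candidate category weights (illustrative).
x  = [0.2, 0.8, 0.8, 0.2]   # [x, 1 - x] for x = [0.2, 0.8]
w1 = [0.2, 0.7, 0.7, 0.2]   # close to x -> high match
w2 = [0.9, 0.1, 0.1, 0.9]   # far from x -> low match

rho = 0.7                    # vigilance threshold (illustrative)
resonates(x, w, rho) = match_score(x, w) >= rho

# w1 resonates (match = 1.8/2.0 = 0.9); w2 triggers reset (match = 0.6/2.0 = 0.3).
```

Raising `rho` makes the model pickier — more inputs fail the match test and spawn new categories — which is exactly the category-proliferation trade-off described in point 4.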

@@ -59,7 +59,7 @@ By representing categories as a field of instar networks, new categories could b
However, it was shown that the learning stability of Grossberg Networks degrades as the number of represented categories increases.
Discoveries in the neurocognitive theory and breakthroughs in their implementation led to the introduction of recurrent connections between the two fields of the network to stabilize learning.
These breakthroughs were based upon the discovery that autonomous learning depends on the interplay and agreement between *perception* and *expectation*, frequently referred to as bottom-up and top-down processes.
Furthermore, it is *resonance* between these states in the frequency domain that gives rise to conscious experiences and that permits adaptive weights to change, leading to the phenomena of attention and learning.
The theory has many explanatory consequences in psychology, such as why attention is required for learning, but its consequences in the engineering models are that it stabilizes learning in cooperative-competitive dynamics, such as interconnected fields of neurons, which are most often chaotic.

Chapters 18 and 19 of the book by [Neural Network Design by Hagan, Demuth, Beale, and De Jesus](https://hagan.okstate.edu/NNDesign.pdf) provide a good theoretical basis for learning how these network models were eventually implemented into the first binary-vector implementation of ART1.
2 changes: 1 addition & 1 deletion docs/src/index.md
@@ -10,7 +10,7 @@ end

These pages serve as the official documentation for the AdaptiveResonance.jl Julia package.

Adaptive Resonance Theory (ART) began as a neurocognitive theory of how fields of cells can continuously learn stable representations, and it has been utilized as the basis for a myriad of practical machine learning algorithms.
Pioneered by Stephen Grossberg and Gail Carpenter, the field has had contributions across many years and from many disciplines, resulting in a plethora of engineering applications and theoretical advancements that have enabled ART-based algorithms to compete with many other modern learning and clustering algorithms.

The purpose of this package is to provide a home for the development and use of these ART-based machine learning algorithms in the Julia programming language.
31 changes: 28 additions & 3 deletions docs/src/man/contributing.md
@@ -8,6 +8,8 @@ From top to bottom, the ways of contributing are:
- [GitFlow:](@ref GitFlow) how to directly contribute code to the package in an organized way on GitHub.
- [Development Details:](@ref Development-Details) how the internals of the package are currently set up if you would like to directly contribute code.

Please also see the [Attribution](@ref Attribution) section to learn about the authors and sources of support for the project.

## Issues

The main point of contact is the [GitHub issues](https://github.com/AP6YC/AdaptiveResonance.jl/issues) page for the project.
@@ -85,14 +87,19 @@ The `AdaptiveResonance.jl` package has the following file structure:
```console
AdaptiveResonance
├── .github/workflows // GitHub: workflows for testing and documentation.
├── data // Data: CI data location.
├── docs // Docs: documentation for the module.
│ └───src // Documentation source files.
├── examples // Source: example usage scripts.
├── src // Source: majority of source code.
│ ├───ART // ART-based unsupervised modules.
│ │ ├───distributed // Distributed ART modules.
│ │ └───single // Undistributed ART modules.
│ └───ARTMAP // ARTMAP-based supervised modules.
├── test // Test: Unit, integration, and environment tests.
│ ├── adaptiveresonance // Tests common to the entire package.
│ ├── art // Tests for just ART modules.
│ ├── artmap // Tests for just ARTMAP modules.
│ └───data // CI test data.
├── .appveyor // Appveyor: Windows-specific coverage.
├── .gitattributes // Git: LFS settings, languages, etc.
├── .gitignore // Git: .gitignore for the whole project.
@@ -142,6 +149,24 @@ Furthermore, independent class labels are always `Int` because of the [Julia nat

This project does not currently test for the support of [arbitrary precision arithmetic](https://docs.julialang.org/en/v1/manual/integers-and-floating-point-numbers/#Arbitrary-Precision-Arithmetic) because learning algorithms *in general* do not have a significant need for precision.

## Attribution

### Authors

This package is developed and maintained by [Sasha Petrenko](https://github.com/AP6YC) with sponsorship by the [Applied Computational Intelligence Laboratory (ACIL)](https://acil.mst.edu/).
The users [@aaronpeikert](https://github.com/aaronpeikert), [@hayesall](https://github.com/hayesall), and [@markNZed](https://github.com/markNZed) have graciously contributed their time with reviews and feedback that have greatly improved the project.

If you simply have suggestions for improvement, Sasha Petrenko (<sap625@mst.edu>) is the current developer and maintainer of the `AdaptiveResonance.jl` package, so please feel free to reach out with thoughts and questions.

### Support

This project is supported by grants from the [Night Vision Electronic Sensors Directorate](https://c5isr.ccdc.army.mil/inside_c5isr_center/nvesd/), the [DARPA Lifelong Learning Machines (L2M) program](https://www.darpa.mil/program/lifelong-learning-machines), [Teledyne Technologies](http://www.teledyne.com/), and the [National Science Foundation](https://www.nsf.gov/).
The material, findings, and conclusions here do not necessarily reflect the views of these entities.

Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-22-2-0209.
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government.
The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
