
Commit

Merge pull request #121 from AP6YC/release/v0.8.1
Release/v0.8.1
AP6YC committed Feb 1, 2023
2 parents b70c755 + 6b9ece3 commit b687015
Showing 23 changed files with 187 additions and 64 deletions.
7 changes: 0 additions & 7 deletions .gitattributes

This file was deleted.

3 changes: 3 additions & 0 deletions .github/FUNDING.yml
@@ -0,0 +1,3 @@
github: AP6YC
patreon: AP6YC
custom: www.buymeacoffee.com/sashapetrenko
40 changes: 40 additions & 0 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -0,0 +1,40 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''

---

## Describe the bug

A clear and concise description of what the bug is.

## To Reproduce

Steps to reproduce the behavior:

1. Go to '...'
2. Click on '...'
3. Scroll down to '...'
4. See error

## Expected Behavior

A clear and concise description of what you expected to happen.

## Screenshots

If applicable, add screenshots to help explain your problem.

## Julia Configuration (please complete the following information):

- OS: [e.g. Windows, Linux, MacOS]
- Julia Version: [e.g. 1.6, 1.7, 1.8]
- Terminal: [e.g. Windows Terminal, Powershell, iTerm]
- Installation Method: [e.g. JuliaHub, GitHub repo clone]

## Additional context

Add any other context about the problem here.
24 changes: 24 additions & 0 deletions .github/ISSUE_TEMPLATE/feature_request.md
@@ -0,0 +1,24 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''

---

## Is your feature request related to a problem? Please describe.

A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

## Describe the solution you'd like

A clear and concise description of what you want to happen.

## Describe alternatives you've considered

A clear and concise description of any alternative solutions or features you've considered.

## Additional context

Add any other context or screenshots about the feature request here.
1 change: 1 addition & 0 deletions .github/workflows/TagBot.yml
@@ -13,3 +13,4 @@ jobs:
with:
token: ${{ secrets.GITHUB_TOKEN }}
ssh: ${{ secrets.DOCUMENTER_KEY }}
branch: master
3 changes: 3 additions & 0 deletions .gitignore
@@ -8,6 +8,9 @@ _dev/
docs/src/democards/
docs/src/examples/

# Downloads folder for files grabbed during docs generation
docs/src/assets/downloads/

# Julia package development ignores
*.jl.*.cov
*.jl.cov
2 changes: 1 addition & 1 deletion Project.toml
@@ -2,7 +2,7 @@ name = "AdaptiveResonance"
uuid = "3d72adc0-63d3-4141-bf9b-84450dd0395b"
authors = ["Sasha Petrenko"]
description = "A Julia package for Adaptive Resonance Theory (ART) algorithms."
version = "0.8.0"
version = "0.8.1"

[deps]
Distributed = "8ba89e20-285c-5b6f-9357-94700520ee1b"
72 changes: 45 additions & 27 deletions README.md
@@ -1,4 +1,4 @@
[![adaptiveresonance-header](docs/src/assets/header.png)][docs-dev-url]
[![adaptiveresonance-header](https://github.com/AP6YC/FileStorage/blob/main/AdaptiveResonance/header.png?raw=true)][docs-dev-url]

A Julia package for Adaptive Resonance Theory (ART) algorithms.

@@ -61,6 +61,7 @@ Please read the [documentation](https://ap6yc.github.io/AdaptiveResonance.jl/dev
- [Contributing](#contributing)
- [Acknowledgements](#acknowledgements)
- [Authors](#authors)
- [Support](#support)
- [History](#history)
- [Software](#software)
- [Datasets](#datasets)
@@ -80,8 +81,8 @@ Detailed usage and examples are provided in the [documentation](https://ap6yc.gi

### Installation

This project is distributed as a Julia package, available on [JuliaHub](https://juliahub.com/).
Its usage follows the usual Julia package installation procedure, interactively:
This project is distributed as a [Julia](https://julialang.org/) package, available on [JuliaHub](https://juliahub.com/), so you must first [install Julia](https://julialang.org/downloads/) on your system.
Its usage follows the usual [Julia package installation procedure](https://docs.julialang.org/en/v1/stdlib/Pkg/), interactively:

```julia-repl
julia> ]
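# (The rest of this snippet is collapsed in this diff view; the standard Pkg
# command, shown here as a sketch, completes the installation.)
pkg> add AdaptiveResonance
```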
@@ -122,7 +123,7 @@ You can pass module-specific options during construction with keyword arguments
art = DDVFA(rho_ub=0.75, rho_lb=0.4)
```

For more advanced users, options for the modules are contained in `Parameters.jl` structs.
For more advanced users, options for the modules are contained in [`Parameters.jl`](https://github.com/mauro3/Parameters.jl) structs.
These options structs can be constructed with keyword arguments and then passed when instantiating the model:

```julia
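# (The body of this block is collapsed in this diff view. The lines below are
# an illustrative sketch, assuming the package's `opts_DDVFA` options struct
# and the `DDVFA(opts)` constructor described in the documentation.)
my_opts = opts_DDVFA(rho_ub=0.75, rho_lb=0.4)
art = DDVFA(my_opts)
```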
@@ -179,24 +180,35 @@ This project has implementations of the following ART (unsupervised) and ARTMAP
- ARTMAP
- **[`SFAM`][4]**: Simplified Fuzzy ARTMAP
- **[`FAM`][5]**: Fuzzy ARTMAP
- **[`DAM`][6]**: Default ARTMAP

Because each of these modules is a framework for many variants in the literature, this project also implements these [variants](https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/modules/) by changing their module [options](https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/guide/#art_options).
Variants built upon these modules are:

- ART
- **[`GammaNormalizedFuzzyART`][7]**: Gamma-Normalized FuzzyART (variant of FuzzyART).
- ARTMAP
- **[`DAM`][6]**: Default ARTMAP (variant of SFAM).

[1]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.FuzzyART
[2]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.DVFA
[3]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.DDVFA
[4]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.SFAM
[5]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.FAM
[6]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.DAM

Because each of these modules is a framework for many variants in the literature, this project also implements these [variants](https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/modules/) by changing their module [options](https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/guide/#art_options).
[6]: https://ap6yc.github.io/AdaptiveResonance.jl/stable/man/full-index/#AdaptiveResonance.DAM-Tuple{opts_SFAM}
[7]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.GammaNormalizedFuzzyART-Tuple{opts_FuzzyART}

In addition to these modules, this package contains the following accessory methods (a brief usage sketch follows the list and its reference links):

- **ARTSCENE**: the ARTSCENE algorithm's multiple-stage filtering process is implemented as `artscene_filter`. Each filter stage is exported if further granularity is required.
- **performance**: classification accuracy is implemented as `performance`
- **complement_code**: complement coding is implemented with `complement_code`.
However, training and classification methods complement code their inputs unless they are passed `preprocessed=true`.
- **linear_normalization**: the first step to complement coding, `linear_normalization` normalizes input arrays within [0, 1].
- [**ARTSCENE**][21]: the ARTSCENE algorithm's multiple-stage filtering process is implemented as [`artscene_filter`][21]. Each filter stage is implemented internally if further granularity is required.
- [**performance**][22]: classification accuracy is implemented as [`performance`][22].
- [**complement_code**][23]: complement coding is implemented with [`complement_code`][23].
However, training and classification methods complement code their inputs unless they are passed `preprocessed=true`, indicating to the model that this step has already been done.
- [**linear_normalization**][24]: the first step to complement coding, [`linear_normalization`][24] normalizes input arrays within `[0, 1]`.

[21]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.artscene_filter-Union{Tuple{Array{T,%203}},%20Tuple{T}}%20where%20T%3C:AbstractFloat
[22]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.performance-Tuple{AbstractVector{T}%20where%20T%3C:Integer,%20AbstractVector{T}%20where%20T%3C:Integer}
[23]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.complement_code-Tuple{AbstractArray{T}%20where%20T%3C:Real}
[24]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.linear_normalization-Tuple{AbstractMatrix{T}%20where%20T%3C:Real}
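
As a quick illustration of these utilities, the following sketch uses hypothetical random data; the function names come from the links above, and the exact shapes and return values should be checked against the documentation:

```julia
using AdaptiveResonance

# Hypothetical data: 4 features by 10 samples (columns are samples).
X = rand(4, 10)

# Complement coding doubles the feature dimension; `linear_normalization` is
# the first step of this process and can also be called on its own.
X_cc = complement_code(X)
size(X_cc)      # expected: (8, 10)

# Classification accuracy between hypothetical predicted and true labels.
y_hat  = [1, 2, 2, 2]
y_true = [1, 1, 2, 2]
acc = performance(y_hat, y_true)    # 0.75
```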

### Contributing

@@ -208,18 +220,29 @@ In summary:
1. Questions and requested changes should all be made in the [issues][issues-url] page.
These are preferred because they are publicly viewable and could assist or educate others with similar issues or questions.
2. For changes, this project accepts pull requests (PRs) from `feature/<my-feature>` branches onto the `develop` branch using the [GitFlow](https://nvie.com/posts/a-successful-git-branching-model/) methodology.
If unit tests pass and the changes are beneficial, these PRs are merged into `develop` and eventually folded into versioned releases.
If unit tests pass and the changes are beneficial, these PRs are merged into `develop` and eventually folded into versioned releases through a `release` branch that is merged with the `master` branch.
3. The project follows the [Semantic Versioning](https://semver.org/) convention of `major.minor.patch` incremental versioning numbers.
Patch versions are for bug fixes, minor versions are for backward-compatible changes, and major versions are for new and incompatible usage changes.

## Acknowledgements

### Authors

This package is developed and maintained by [Sasha Petrenko](https://github.com/AP6YC) with sponsorship by the [Applied Computational Intelligence Laboratory (ACIL)](https://acil.mst.edu/). This project is supported by grants from the [Night Vision Electronic Sensors Directorate](https://c5isr.ccdc.army.mil/inside_c5isr_center/nvesd/), the [DARPA Lifelong Learning Machines (L2M) program](https://www.darpa.mil/program/lifelong-learning-machines), [Teledyne Technologies](http://www.teledyne.com/), and the [National Science Foundation](https://www.nsf.gov/).
This package is developed and maintained by [Sasha Petrenko](https://github.com/AP6YC) with sponsorship by the [Applied Computational Intelligence Laboratory (ACIL)](https://acil.mst.edu/).
The users [@aaronpeikert](https://github.com/aaronpeikert), [@hayesall](https://github.com/hayesall), and [@markNZed](https://github.com/markNZed) have graciously contributed their time with reviews and feedback that have greatly improved the project.

### Support

This project is supported by grants from the [Night Vision Electronic Sensors Directorate](https://c5isr.ccdc.army.mil/inside_c5isr_center/nvesd/), the [DARPA Lifelong Learning Machines (L2M) program](https://www.darpa.mil/program/lifelong-learning-machines), [Teledyne Technologies](http://www.teledyne.com/), and the [National Science Foundation](https://www.nsf.gov/).
The material, findings, and conclusions here do not necessarily reflect the views of these entities.

The users [@aaronpeikert](https://github.com/aaronpeikert), [@hayesall](https://github.com/hayesall), and [@markNZed](https://github.com/markNZed) have graciously contributed their time with reviews and feedback that has greatly improved the project.
Research was sponsored by the Army Research Laboratory and was accomplished under
Cooperative Agreement Number W911NF-22-2-0209.
The views and conclusions contained in this document are
those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of
the Army Research Laboratory or the U.S. Government.
The U.S. Government is authorized to reproduce and
distribute reprints for Government purposes notwithstanding any copyright notation herein.

### History

@@ -229,6 +252,8 @@ The users [@aaronpeikert], [@hayesall](https://
- 10/13/2021 - Initiate GitFlow contribution.
- 5/4/2022 - [Acceptance to JOSS](https://doi.org/10.21105/joss.03671).
- 10/11/2022 - v0.6.0
- 12/15/2022 - v0.7.0
- 1/30/2023 - v0.8.0

### Software

@@ -258,17 +283,10 @@ The code in this repository is inspired by the following repositories:

Boilerplate clustering datasets are periodically used to test, verify, and provide examples of the functionality of the package.

1. UCI machine learning repository:
<http://archive.ics.uci.edu/ml>

2. Fundamental Clustering Problems Suite (FCPS):
<https://www.uni-marburg.de/fb12/arbeitsgruppen/datenbionik/data?language_sync=1>

3. Datasets package:
<https://www.researchgate.net/publication/239525861_Datasets_package>

4. Clustering basic benchmark:
<http://cs.uef.fi/sipu/datasets>
1. [UCI machine learning repository](http://archive.ics.uci.edu/ml)
2. [Fundamental Clustering Problems Suite (FCPS)](https://www.uni-marburg.de/fb12/arbeitsgruppen/datenbionik/data?language_sync=1)
3. [Nejc Ilc's unsupervised datasets package](https://www.researchgate.net/publication/239525861_Datasets_package)
4. [Clustering basic benchmark](http://cs.uef.fi/sipu/datasets)

### License

2 changes: 1 addition & 1 deletion docs/examples/adaptive_resonance/data_config.jl
@@ -1,7 +1,7 @@
# ---
# title: ART DataConfig Example
# id: data_config
# cover: ../assets/art.png
# cover: ../../assets/downloads/art.png
# date: 2021-12-2
# author: "[Sasha Petrenko](https://github.com/AP6YC)"
# julia: 1.8
2 changes: 1 addition & 1 deletion docs/examples/art/ddvfa_supervised.jl
@@ -1,7 +1,7 @@
# ---
# title: Supervised DDVFA Example
# id: ddvfa_supervised
# cover: ../assets/ddvfa.png
# cover: ../../assets/downloads/ddvfa.png
# date: 2021-11-30
# author: "[Sasha Petrenko](https://github.com/AP6YC)"
# julia: 1.8
2 changes: 1 addition & 1 deletion docs/examples/art/ddvfa_unsupervised.jl
@@ -1,7 +1,7 @@
# ---
# title: Unsupervised DDVFA Example
# id: ddvfa_unsupervised
# cover: ../assets/ddvfa.png
# cover: ../../assets/downloads/ddvfa.png
# date: 2021-11-30
# author: "[Sasha Petrenko](https://github.com/AP6YC)"
# julia: 1.8
2 changes: 1 addition & 1 deletion docs/examples/artmap/sfam_iris.jl
@@ -1,7 +1,7 @@
# ---
# title: Supervised Simplified FuzzyARTMAP (SFAM) Example
# id: sfam_iris
# cover: ../assets/artmap.png
# cover: ../../assets/downloads/artmap.png
# date: 2021-11-30
# author: "[Sasha Petrenko](https://github.com/AP6YC)"
# julia: 1.8
3 changes: 0 additions & 3 deletions docs/examples/assets/art.png

This file was deleted.

3 changes: 0 additions & 3 deletions docs/examples/assets/artmap.png

This file was deleted.

3 changes: 0 additions & 3 deletions docs/examples/assets/ddvfa.png

This file was deleted.

33 changes: 32 additions & 1 deletion docs/make.jl
@@ -47,6 +47,37 @@ if haskey(ENV, "DOCSARGS")
end
end

# -----------------------------------------------------------------------------
# DOWNLOAD LARGE ASSETS
# -----------------------------------------------------------------------------

# Point to the raw FileStorage location on GitHub
top_url = raw"https://media.githubusercontent.com/media/AP6YC/FileStorage/main/AdaptiveResonance/"
# List all of the files that we need to use in the docs
files = [
"header.png",
"art.png",
"artmap.png",
"ddvfa.png",
]
# Make a destination for the files
download_folder = joinpath("src", "assets", "downloads")
mkpath(download_folder)
download_list = []
# Download the files one at a time
for file in files
# Point to the correct file that we wish to download
src_file = top_url * file * "?raw=true"
# Point to the correct local destination file to download to
dest_file = joinpath(download_folder, file)
# Add the file to the list that we will append to assets
push!(download_list, dest_file)
# If the file isn't already here, download it
if !isfile(dest_file)
download(src_file, dest_file)
end
end
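
# (The following lines are collapsed in this diff view; presumably the paths
# gathered in `download_list` are later appended to the documentation `assets`
# list, e.g. `append!(assets, download_list)`; this is an assumption and is
# not shown in this view.)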

# -----------------------------------------------------------------------------
# GENERATE
# -----------------------------------------------------------------------------
@@ -56,7 +87,7 @@ end
demopage, postprocess_cb, demo_assets = makedemos("examples")

assets = [
joinpath("assets", "favicon.ico")
joinpath("assets", "favicon.ico"),
]

# if there are generated css assets, pass it to Documenter.HTML
Binary file modified docs/src/assets/favicon.ico
Binary file not shown.
3 changes: 0 additions & 3 deletions docs/src/assets/figures/art.png

This file was deleted.

3 changes: 0 additions & 3 deletions docs/src/assets/header.png

This file was deleted.

Binary file modified docs/src/assets/logo.png
Binary file not shown.
8 changes: 4 additions & 4 deletions docs/src/getting-started/whatisart.md
@@ -11,11 +11,11 @@ Pioneered by Stephen Grossberg and Gail Carpenter, the field has had contributio

Because of the high degree of interplay between the neurocognitive theory and the engineering models born of it, the term ART is frequently used to refer to both in the modern day (for better or for worse).

Stephen Grossberg's has recently released a book summarizing the work of him, his wife Gail Carpenter, and his colleagues on Adaptive Resonance Theory in his book [Conscious Brain, Resonant Mind](https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552).
Stephen Grossberg has recently released [Conscious Mind, Resonant Brain](https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552), a book summarizing the work on Adaptive Resonance Theory by him, his wife and colleague Gail Carpenter, and their other colleagues.

## ART Basics

![art](../assets/figures/art.png)
![art](../assets/downloads/art.png)

### ART Dynamics

@@ -43,7 +43,7 @@ In addition to the dynamics typical of an ART model, you must know:
ART modules are used for reinforcement learning by representing the mappings between state, value, and action spaces with ART dynamics.
3. Almost all ART models face the problem of appropriately selecting the vigilance parameter, whose optimal value depends on the problem at hand.
4. Being a class of neurocognitive neural network models, ART models gain theoretically unbounded capacity along with the problem of "category proliferation," the undesirable growth in the number of categories as the model continues to learn, which increases computational time.
In contrast, while the evaluation time of a deep neural network is always *exactly the same*, there exist upper bounds in their representational capacity.
In contrast, while the evaluation time of a fixed-architecture deep neural network is always *exactly the same*, its representational capacity has an upper bound.
5. Nearly every ART model requires feature normalization (i.e., feature elements lying within $$[0,1]$$) and a process known as complement coding, where the feature vector is concatenated with its vector complement $$[1-\bar{x}]$$.
This is because real-valued feature vectors can be arbitrarily close to one another, which hinders learning performance; a degree of contrast enhancement between samples is required to ensure their separation.
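
For example, a minimal sketch of complement coding for a single feature vector (hypothetical values already normalized to $$[0,1]$$):

```julia
x = [0.2, 0.5, 0.9]         # a feature vector with elements in [0, 1]
x_cc = vcat(x, 1 .- x)      # complement coding: concatenate x with 1 - x
# x_cc == [0.2, 0.5, 0.9, 0.8, 0.5, 0.1]
```

Note that after coding, every sample has the same city-block norm: its elements sum to the original feature dimension.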

@@ -59,7 +59,7 @@ By representing categories as a field of instar networks, new categories could b
However, it was shown that the learning stability of Grossberg Networks degrades as the number of represented categories increases.
Discoveries in the neurocognitive theory and breakthroughs in their implementation led to the introduction of recurrent connections between the two fields of the network to stabilize learning.
These breakthroughs were based upon the discovery that autonomous learning depends on the interplay and agreement between *perception* and *expectation*, frequently referred to as bottom-up and top-down processes.
Furthermore, it is *resonance* between these states in the frequency domain that gives rise to conscious experiences and that permit adaptive weights to change, leading to the phenomenon of learning.
Furthermore, it is *resonance* between these states in the frequency domain that gives rise to conscious experiences and that permits adaptive weights to change, leading to the phenomena of attention and learning.
The theory has many explanatory consequences in psychology, such as why attention is required for learning, but its consequences in the engineering models are that it stabilizes learning in cooperative-competitive dynamics, such as interconnected fields of neurons, which are most often chaotic.

Chapters 18 and 19 of [Neural Network Design by Hagan, Demuth, Beale, and De Jesus](https://hagan.okstate.edu/NNDesign.pdf) provide a good theoretical basis for how these network models were eventually developed into the first binary-vector implementation of ART1.

2 comments on commit b687015


@AP6YC (Owner, Author) commented on b687015 on Feb 1, 2023


@JuliaRegistrator register

Release notes:

This patch principally updates the documentation: the README and hosted docs have updated links and text, and the strategy for loading images for the documentation has changed to a combination of downloading assets at build time and linking directly to external hosting. The net effect is that most stored assets are removed from the repository, reducing the project's overall size.

@JuliaRegistrator

Registration pull request created: JuliaRegistries/General/76785

After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.

This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the GitHub interface, or via:

git tag -a v0.8.1 -m "<description of version>" b687015f8633bbd60a4ea44a3beeb26ea98df3b3
git push origin v0.8.1
