diff --git a/.gitattributes b/.gitattributes deleted file mode 100644 index e37c4bd4..00000000 --- a/.gitattributes +++ /dev/null @@ -1,7 +0,0 @@ -# ----------------------------------------------------------------------------- -# LFS Settings -# ----------------------------------------------------------------------------- - -# LFS images -*.png filter=lfs diff=lfs merge=lfs -text -*.ico filter=lfs diff=lfs merge=lfs -text diff --git a/.github/FUNDING.yml b/.github/FUNDING.yml new file mode 100644 index 00000000..1d08b8b5 --- /dev/null +++ b/.github/FUNDING.yml @@ -0,0 +1,3 @@ +github: AP6YC +patreon: AP6YC +custom: www.buymeacoffee.com/sashapetrenko diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md new file mode 100644 index 00000000..0149c3a4 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -0,0 +1,40 @@ +--- +name: Bug report +about: Create a report to help us improve +title: '' +labels: '' +assignees: '' + +--- + +## Describe the bug + +A clear and concise description of what the bug is. + +## To Reproduce + +Steps to reproduce the behavior: + +1. Go to '...' +2. Click on '...' +3. Scroll down to '...' +4. See error + +## Expected Behavior + +A clear and concise description of what you expected to happen. + +## Screenshots + +If applicable, add screenshots to help explain your problem. + +## Julia Configuration (please complete the following information): + +- OS: [e.g. Windows, Linux, MacOS] +- Julia Version: [e.g. 1.6, 1.7, 1.8] +- Terminal: [e.g. Windows Terminal, Powershell, iTerm] +- Installation Method: [e.g. JuliaHub, GitHub repo clone] + +## Additional context + +Add any other context about the problem here. 
diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md new file mode 100644 index 00000000..148997aa --- /dev/null +++ b/.github/ISSUE_TEMPLATE/feature_request.md @@ -0,0 +1,24 @@ +--- +name: Feature request +about: Suggest an idea for this project +title: '' +labels: '' +assignees: '' + +--- + +## Is your feature request related to a problem? Please describe. + +A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] + +## Describe the solution you'd like + +A clear and concise description of what you want to happen. + +## Describe alternatives you've considered + +A clear and concise description of any alternative solutions or features you've considered. + +## Additional context + +Add any other context or screenshots about the feature request here. diff --git a/.github/workflows/TagBot.yml b/.github/workflows/TagBot.yml index f49313b6..56a1908a 100644 --- a/.github/workflows/TagBot.yml +++ b/.github/workflows/TagBot.yml @@ -13,3 +13,4 @@ jobs: with: token: ${{ secrets.GITHUB_TOKEN }} ssh: ${{ secrets.DOCUMENTER_KEY }} + branch: master diff --git a/.gitignore b/.gitignore index d7e55fab..a76a7ef1 100644 --- a/.gitignore +++ b/.gitignore @@ -8,6 +8,9 @@ _dev/ docs/src/democards/ docs/src/examples/ +# Downloads folder for files grabbed during docs generation +docs/src/assets/downloads/ + # Julia package development ignores *.jl.*.cov *.jl.cov diff --git a/Project.toml b/Project.toml index 2464138e..fae470ea 100644 --- a/Project.toml +++ b/Project.toml @@ -2,7 +2,7 @@ name = "AdaptiveResonance" uuid = "3d72adc0-63d3-4141-bf9b-84450dd0395b" authors = ["Sasha Petrenko"] description = "A Julia package for Adaptive Resonance Theory (ART) algorithms." 
-version = "0.8.0" +version = "0.8.1" [deps] Distributed = "8ba89e20-285c-5b6f-9357-94700520ee1b" diff --git a/README.md b/README.md index c7ffeb3d..07678457 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -[![adaptiveresonance-header](docs/src/assets/header.png)][docs-dev-url] +[![adaptiveresonance-header](https://github.com/AP6YC/FileStorage/blob/main/AdaptiveResonance/header.png?raw=true)][docs-dev-url] A Julia package for Adaptive Resonance Theory (ART) algorithms. @@ -61,6 +61,7 @@ Please read the [documentation](https://ap6yc.github.io/AdaptiveResonance.jl/dev - [Contributing](#contributing) - [Acknowledgements](#acknowledgements) - [Authors](#authors) + - [Support](#support) - [History](#history) - [Software](#software) - [Datasets](#datasets) @@ -80,8 +81,8 @@ Detailed usage and examples are provided in the [documentation](https://ap6yc.gi ### Installation -This project is distributed as a Julia package, available on [JuliaHub](https://juliahub.com/). -Its usage follows the usual Julia package installation procedure, interactively: +This project is distributed as a [Julia](https://julialang.org/) package, available on [JuliaHub](https://juliahub.com/), so you must first [install Julia](https://julialang.org/downloads/) on your system. +Its usage follows the usual [Julia package installation procedure](https://docs.julialang.org/en/v1/stdlib/Pkg/), interactively: ```julia-repl julia> ] @@ -122,7 +123,7 @@ You can pass module-specific options during construction with keyword arguments art = DDVFA(rho_ub=0.75, rho_lb=0.4) ``` -For more advanced users, options for the modules are contained in `Parameters.jl` structs. +For more advanced users, options for the modules are contained in [`Parameters.jl`](https://github.com/mauro3/Parameters.jl) structs. 
These options can be passed keyword arguments before instantiating the model: ```julia @@ -179,24 +180,35 @@ This project has implementations of the following ART (unsupervised) and ARTMAP - ARTMAP - **[`SFAM`][4]**: Simplified Fuzzy ARTMAP - **[`FAM`][5]**: Fuzzy ARTMAP - - **[`DAM`][6]**: Default ARTMAP + +Because each of these modules is a framework for many variants in the literature, this project also implements these [variants](https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/modules/) by changing their module [options](https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/guide/#art_options). +Variants built upon these modules are: + +- ART + - **[`GammaNormalizedFuzzyART`][7]**: Gamma-Normalized FuzzyART (variant of FuzzyART). +- ARTMAP + - **[`DAM`][6]**: Default ARTMAP (variant of SFAM). [1]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.FuzzyART [2]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.DVFA [3]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.DDVFA [4]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.SFAM [5]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.FAM -[6]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.DAM - -Because each of these modules is a framework for many variants in the literature, this project also implements these [variants](https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/modules/) by changing their module [options](https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/guide/#art_options). 
+[6]: https://ap6yc.github.io/AdaptiveResonance.jl/stable/man/full-index/#AdaptiveResonance.DAM-Tuple{opts_SFAM} +[7]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.GammaNormalizedFuzzyART-Tuple{opts_FuzzyART} In addition to these modules, this package contains the following accessory methods: -- **ARTSCENE**: the ARTSCENE algorithm's multiple-stage filtering process is implemented as `artscene_filter`. Each filter stage is exported if further granularity is required. -- **performance**: classification accuracy is implemented as `performance` -- **complement_code**: complement coding is implemented with `complement_code`. -However, training and classification methods complement code their inputs unless they are passed `preprocessed=true`. -- **linear_normalization**: the first step to complement coding, `linear_normalization` normalizes input arrays within [0, 1]. +- [**ARTSCENE**][21]: the ARTSCENE algorithm's multiple-stage filtering process is implemented as [`artscene_filter`][21]. Each filter stage is implemented internally if further granularity is required. +- [**performance**][22]: classification accuracy is implemented as [`performance`][22]. +- [**complement_code**][23]: complement coding is implemented with [`complement_code`][23]. +However, training and classification methods complement code their inputs unless they are passed `preprocessed=true`, indicating to the model that this step has already been done. +- [**linear_normalization**][24]: the first step to complement coding, [`linear_normalization`][24] normalizes input arrays within `[0, 1]`. 
+ +[21]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.artscene_filter-Union{Tuple{Array{T,%203}},%20Tuple{T}}%20where%20T%3C:AbstractFloat +[22]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.performance-Tuple{AbstractVector{T}%20where%20T%3C:Integer,%20AbstractVector{T}%20where%20T%3C:Integer} +[23]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.complement_code-Tuple{AbstractArray{T}%20where%20T%3C:Real} +[24]: https://ap6yc.github.io/AdaptiveResonance.jl/dev/man/full-index/#AdaptiveResonance.linear_normalization-Tuple{AbstractMatrix{T}%20where%20T%3C:Real} ### Contributing @@ -208,7 +220,7 @@ In summary: 1. Questions and requested changes should all be made in the [issues][issues-url] page. These are preferred because they are publicly viewable and could assist or educate others with similar issues or questions. 2. For changes, this project accepts pull requests (PRs) from `feature/` branches onto the `develop` branch using the [GitFlow](https://nvie.com/posts/a-successful-git-branching-model/) methodology. -If unit tests pass and the changes are beneficial, these PRs are merged into `develop` and eventually folded into versioned releases. +If unit tests pass and the changes are beneficial, these PRs are merged into `develop` and eventually folded into versioned releases through a `release` branch that is merged into the `master` branch. 3. The project follows the [Semantic Versioning](https://semver.org/) convention of `major.minor.patch` incremental versioning numbers. Patch versions are for bug fixes, minor versions are for backward-compatible changes, and major versions are for new and incompatible usage changes.
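As an editorial aside on the accessory methods documented above, the preprocessing math they describe (min-max normalization into `[0, 1]` followed by complement coding) can be sketched in plain Julia. This is a minimal, self-contained illustration of the computation, not the package's actual implementation; the function names merely mirror the exported ones for readability:

```julia
# Sketch of the preprocessing math, NOT AdaptiveResonance.jl's implementation.

# Normalize a vector elementwise into [0, 1] via min-max scaling.
function linear_normalization(x::AbstractVector{<:Real})
    lo, hi = minimum(x), maximum(x)
    return (x .- lo) ./ (hi - lo)
end

# Complement code a normalized vector: append its complement, doubling the dimension.
complement_code(x::AbstractVector{<:Real}) = vcat(x, 1 .- x)

x = [0.0, 2.0, 4.0]
xn = linear_normalization(x)   # [0.0, 0.5, 1.0]
cc = complement_code(xn)       # [0.0, 0.5, 1.0, 1.0, 0.5, 0.0]
```

Note that the city-block norm of a complement-coded vector always equals the original dimension (here, `sum(cc) == 3`), which is what keeps FuzzyART-family category weights bounded.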
@@ -216,10 +228,21 @@ Patch versions are for bug fixes, minor versions are for backward-compatible cha ### Authors -This package is developed and maintained by [Sasha Petrenko](https://github.com/AP6YC) with sponsorship by the [Applied Computational Intelligence Laboratory (ACIL)](https://acil.mst.edu/). This project is supported by grants from the [Night Vision Electronic Sensors Directorate](https://c5isr.ccdc.army.mil/inside_c5isr_center/nvesd/), the [DARPA Lifelong Learning Machines (L2M) program](https://www.darpa.mil/program/lifelong-learning-machines), [Teledyne Technologies](http://www.teledyne.com/), and the [National Science Foundation](https://www.nsf.gov/). +This package is developed and maintained by [Sasha Petrenko](https://github.com/AP6YC) with sponsorship by the [Applied Computational Intelligence Laboratory (ACIL)](https://acil.mst.edu/). +The users [@aaronpeikert](https://github.com/aaronpeikert), [@hayesall](https://github.com/hayesall), and [@markNZed](https://github.com/markNZed) have graciously contributed their time with reviews and feedback that have greatly improved the project. + +### Support + +This project is supported by grants from the [Night Vision Electronic Sensors Directorate](https://c5isr.ccdc.army.mil/inside_c5isr_center/nvesd/), the [DARPA Lifelong Learning Machines (L2M) program](https://www.darpa.mil/program/lifelong-learning-machines), [Teledyne Technologies](http://www.teledyne.com/), and the [National Science Foundation](https://www.nsf.gov/). The material, findings, and conclusions here do not necessarily reflect the views of these entities. -The users [@aaronpeikert](https://github.com/aaronpeikert), [@hayesall](https://github.com/hayesall), and [@markNZed](https://github.com/markNZed) have graciously contributed their time with reviews and feedback that has greatly improved the project. +Research was sponsored by the Army Research Laboratory and was accomplished under +Cooperative Agreement Number W911NF-22-2-0209.
+The views and conclusions contained in this document are +those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of +the Army Research Laboratory or the U.S. Government. +The U.S. Government is authorized to reproduce and +distribute reprints for Government purposes notwithstanding any copyright notation herein. ### History @@ -229,6 +252,8 @@ The users [@aaronpeikert](https://github.com/aaronpeikert), [@hayesall](https:// - 10/13/2021 - Initiate GitFlow contribution. - 5/4/2022 - [Acceptance to JOSS](https://doi.org/10.21105/joss.03671). - 10/11/2022 - v0.6.0 +- 12/15/2022 - v0.7.0 +- 1/30/2023 - v0.8.0 ### Software @@ -258,17 +283,10 @@ The code in this repository is inspired the following repositories: Boilerplate clustering datasets are periodically used to test, verify, and provide example of the functionality of the package. -1. UCI machine learning repository: - - -2. Fundamental Clustering Problems Suite (FCPS): - - -3. Datasets package: - - -4. Clustering basic benchmark: - +1. [UCI machine learning repository](http://archive.ics.uci.edu/ml) +2. [Fundamental Clustering Problems Suite (FCPS)](https://www.uni-marburg.de/fb12/arbeitsgruppen/datenbionik/data?language_sync=1) +3. [Nejc Ilc's unsupervised datasets package](https://www.researchgate.net/publication/239525861_Datasets_package) +4. 
[Clustering basic benchmark](http://cs.uef.fi/sipu/datasets) ### License diff --git a/docs/examples/adaptive_resonance/data_config.jl b/docs/examples/adaptive_resonance/data_config.jl index b281a505..c9d74625 100644 --- a/docs/examples/adaptive_resonance/data_config.jl +++ b/docs/examples/adaptive_resonance/data_config.jl @@ -1,7 +1,7 @@ # --- # title: ART DataConfig Example # id: data_config -# cover: ../assets/art.png +# cover: ../../assets/downloads/art.png # date: 2021-12-2 # author: "[Sasha Petrenko](https://github.com/AP6YC)" # julia: 1.8 diff --git a/docs/examples/art/ddvfa_supervised.jl b/docs/examples/art/ddvfa_supervised.jl index eb2473fb..36528641 100644 --- a/docs/examples/art/ddvfa_supervised.jl +++ b/docs/examples/art/ddvfa_supervised.jl @@ -1,7 +1,7 @@ # --- # title: Supervised DDVFA Example # id: ddvfa_supervised -# cover: ../assets/ddvfa.png +# cover: ../../assets/downloads/ddvfa.png # date: 2021-11-30 # author: "[Sasha Petrenko](https://github.com/AP6YC)" # julia: 1.8 diff --git a/docs/examples/art/ddvfa_unsupervised.jl b/docs/examples/art/ddvfa_unsupervised.jl index ba43d773..44e27fd9 100644 --- a/docs/examples/art/ddvfa_unsupervised.jl +++ b/docs/examples/art/ddvfa_unsupervised.jl @@ -1,7 +1,7 @@ # --- # title: Unsupervised DDVFA Example # id: ddvfa_unsupervised -# cover: ../assets/ddvfa.png +# cover: ../../assets/downloads/ddvfa.png # date: 2021-11-30 # author: "[Sasha Petrenko](https://github.com/AP6YC)" # julia: 1.8 diff --git a/docs/examples/artmap/sfam_iris.jl b/docs/examples/artmap/sfam_iris.jl index 41ef9478..48c098bf 100644 --- a/docs/examples/artmap/sfam_iris.jl +++ b/docs/examples/artmap/sfam_iris.jl @@ -1,7 +1,7 @@ # --- # title: Supervised Simplified FuzzyARTMAP (SFAM) Example # id: sfam_iris -# cover: ../assets/artmap.png +# cover: ../../assets/downloads/artmap.png # date: 2021-11-30 # author: "[Sasha Petrenko](https://github.com/AP6YC)" # julia: 1.8 diff --git a/docs/examples/assets/art.png b/docs/examples/assets/art.png deleted 
file mode 100644 index fb8f4ec6..00000000 --- a/docs/examples/assets/art.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:39ee90ac4268148637d5654879b8b6a925ed907d8cc7a34ce0f1fc33e4a13511 -size 28577 diff --git a/docs/examples/assets/artmap.png b/docs/examples/assets/artmap.png deleted file mode 100644 index 71051c03..00000000 --- a/docs/examples/assets/artmap.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:77f8a62b8c34feab162bb1b48b55fdf898dc28f29359d3d310af01423fb15414 -size 184774 diff --git a/docs/examples/assets/ddvfa.png b/docs/examples/assets/ddvfa.png deleted file mode 100644 index d4db818d..00000000 --- a/docs/examples/assets/ddvfa.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a01e83b462fca2ec999662afa482d26e6e4f0d1883466ecc06f1d6337a59c488 -size 93402 diff --git a/docs/make.jl b/docs/make.jl index 72710a43..fbbf573c 100644 --- a/docs/make.jl +++ b/docs/make.jl @@ -47,6 +47,37 @@ if haskey(ENV, "DOCSARGS") end end +# ----------------------------------------------------------------------------- +# DOWNLOAD LARGE ASSETS +# ----------------------------------------------------------------------------- + +# Point to the raw FileStorage location on GitHub +top_url = raw"https://media.githubusercontent.com/media/AP6YC/FileStorage/main/AdaptiveResonance/" +# List all of the files that we need to use in the docs +files = [ + "header.png", + "art.png", + "artmap.png", + "ddvfa.png", +] +# Make a destination for the files +download_folder = joinpath("src", "assets", "downloads") +mkpath(download_folder) +download_list = [] +# Download the files one at a time +for file in files + # Point to the correct file that we wish to download + src_file = top_url * file * "?raw=true" + # Point to the correct local destination file to download to + dest_file = joinpath(download_folder, file) + # Add the file to the list that we will append to assets + 
push!(download_list, dest_file) + # If the file isn't already here, download it + if !isfile(dest_file) + download(src_file, dest_file) + end +end + # ----------------------------------------------------------------------------- # GENERATE # ----------------------------------------------------------------------------- @@ -56,7 +87,7 @@ end demopage, postprocess_cb, demo_assets = makedemos("examples") assets = [ - joinpath("assets", "favicon.ico") + joinpath("assets", "favicon.ico"), ] # if there are generated css assets, pass it to Documenter.HTML diff --git a/docs/src/assets/favicon.ico b/docs/src/assets/favicon.ico index c66f7fd4..c20690b4 100644 Binary files a/docs/src/assets/favicon.ico and b/docs/src/assets/favicon.ico differ diff --git a/docs/src/assets/figures/art.png b/docs/src/assets/figures/art.png deleted file mode 100644 index fb8f4ec6..00000000 --- a/docs/src/assets/figures/art.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:39ee90ac4268148637d5654879b8b6a925ed907d8cc7a34ce0f1fc33e4a13511 -size 28577 diff --git a/docs/src/assets/header.png b/docs/src/assets/header.png deleted file mode 100644 index a455f0ca..00000000 --- a/docs/src/assets/header.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:3e4f46efa31a7c3db624c04971adc6fcd18f6f6f369ee4dcf19454d43ec576f3 -size 112187 diff --git a/docs/src/assets/logo.png b/docs/src/assets/logo.png index 4f84ca0c..9cae3a5f 100644 Binary files a/docs/src/assets/logo.png and b/docs/src/assets/logo.png differ diff --git a/docs/src/getting-started/whatisart.md b/docs/src/getting-started/whatisart.md index 5909cc41..d222717e 100644 --- a/docs/src/getting-started/whatisart.md +++ b/docs/src/getting-started/whatisart.md @@ -11,11 +11,11 @@ Pioneered by Stephen Grossberg and Gail Carpenter, the field has had contributio Because of the high degree of interplay between the neurocognitive theory and the engineering models born of it, the term ART 
is frequently used to refer to both in the modern day (for better or for worse). -Stephen Grossberg's has recently released a book summarizing the work of him, his wife Gail Carpenter, and his colleagues on Adaptive Resonance Theory in his book [Conscious Brain, Resonant Mind](https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552). +Stephen Grossberg has recently released [Conscious Mind, Resonant Brain](https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552), a book summarizing the work of himself, his wife and colleague Gail Carpenter, and his other colleagues on Adaptive Resonance Theory. ## ART Basics -![art](../assets/figures/art.png) +![art](../assets/downloads/art.png) ### ART Dynamics @@ -43,7 +43,7 @@ In addition to the dynamics typical of an ART model, you must know: ART modules are used for reinforcement learning by representing the mappings between state, value, and action spaces with ART dynamics. 3. Almost all ART models face the problem of the appropriate selection of the vigilance parameter, which may depend in its optimality according to the problem. 4. Being a class of neurogenitive neural network models, ART models gain the ability for theoretically infinite capacity along with the problem of "category proliferation," which is the undesirable increase in the number of categories as the model continues to learn, leading to increasing computational time. - In contrast, while the evaluation time of a deep neural network is always *exactly the same*, there exist upper bounds in their representational capacity. + In contrast, while the evaluation time of a fixed-architecture deep neural network is always *exactly the same*, there exist upper bounds in their representational capacity. 5. Nearly every ART model requires feature normalization (i.e., feature elements lying within $$[0,1]$$) and a process known as complement coding where the feature vector is appended to its vector complement $$[1-\bar{x}]$$.
This is because real-numbered vectors can be arbitrarily close to one another, hindering learning performance, which requires a degree of contrast enhancement between samples to ensure their separation. @@ -59,7 +59,7 @@ By representing categories as a field of instar networks, new categories could b However, it was shown that the learning stability of Grossberg Networks degrades as the number of represented categories increases. Discoveries in the neurocognitive theory and breakthroughs in their implementation led to the introduction of a recurrent connections between the two fields of the network to stabilize the learning. These breakthroughs were based upon the discovery that autonomous learning depends on the interplay and agreement between *perception* and *expectation*, frequently referred to as bottom-up and top-down processes. -Furthermore, it is *resonance* between these states in the frequency domain that gives rise to conscious experiences and that permit adaptive weights to change, leading to the phenomenon of learning. +Furthermore, it is *resonance* between these states in the frequency domain that gives rise to conscious experiences and that permits adaptive weights to change, leading to the phenomena of attention and learning. The theory has many explanatory consequences in psychology, such as why attention is required for learning, but its consequences in the engineering models are that it stabilizes learning in cooperative-competitive dynamics, such as interconnected fields of neurons, which are most often chaotic. Chapters 18 and 19 of the book by [Neural Network Design by Hagan, Demuth, Beale, and De Jesus](https://hagan.okstate.edu/NNDesign.pdf) provide a good theoretical basis for learning how these network models were eventually implemented into the first binary-vector implementation of ART1.
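As an editorial worked example of the complement coding described above (illustrative numbers, not from the package docs): a normalized feature vector $$\bar{x} = (0.2, 0.7)$$ with $$d = 2$$ is complement coded as

$$I = (\bar{x}, \vec{1} - \bar{x}) = (0.2, 0.7, 0.8, 0.3),$$

so every input presented to the network has the same city-block norm, $$|I|_1 = 0.2 + 0.7 + 0.8 + 0.3 = d = 2$$. This constant-norm property provides the contrast enhancement between otherwise arbitrarily close real-valued samples and keeps category weights from eroding toward zero during fuzzy learning.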
diff --git a/docs/src/index.md b/docs/src/index.md index 31e7f198..e94a2770 100644 --- a/docs/src/index.md +++ b/docs/src/index.md @@ -4,13 +4,13 @@ DocTestSetup = quote end ``` -![header](assets/header.png) +![header](assets/downloads/header.png) --- These pages serve as the official documentation for the AdaptiveResonance.jl Julia package. -Adaptive Resonance Theory (ART) began as a neurocognitive theory of how fields of cells can continuously learn stable representations, and it evolved into the basis for a myriad of practical machine learning algorithms. +Adaptive Resonance Theory (ART) began as a neurocognitive theory of how fields of cells can continuously learn stable representations, and it has been utilized as the basis for a myriad of practical machine learning algorithms. Pioneered by Stephen Grossberg and Gail Carpenter, the field has had contributions across many years and from many disciplines, resulting in a plethora of engineering applications and theoretical advancements that have enabled ART-based algorithms to compete with many other modern learning and clustering algorithms. The purpose of this package is to provide a home for the development and use of these ART-based machine learning algorithms in the Julia programming language. diff --git a/docs/src/man/contributing.md b/docs/src/man/contributing.md index b28587d7..c858b574 100644 --- a/docs/src/man/contributing.md +++ b/docs/src/man/contributing.md @@ -8,6 +8,8 @@ From top to bottom, the ways of contributing are: - [GitFlow:](@ref GitFlow) how to directly contribute code to the package in an organized way on GitHub. - [Development Details:](@ref Development-Details) how the internals of the package are currently setup if you would like to directly contribute code. +Please also see the [Attribution](@ref Attribution) section to learn about the authors and sources of support for the project.
+ ## Issues The main point of contact is the [GitHub issues](https://github.com/AP6YC/AdaptiveResonance.jl/issues) page for the project. @@ -85,14 +87,19 @@ The `AdaptiveResonance.jl` package has the following file structure: ```console AdaptiveResonance ├── .github/workflows // GitHub: workflows for testing and documentation. -├── data // Data: CI data location. ├── docs // Docs: documentation for the module. │ └───src // Documentation source files. ├── examples // Source: example usage scripts. ├── src // Source: majority of source code. │ ├───ART // ART-based unsupervised modules. +│ │ ├───distributed // Distributed ART modules. +│ │ └───single // Undistributed ART modules. │ └───ARTMAP // ARTMAP-based supervised modules. ├── test // Test: Unit, integration, and environment tests. +│ ├── adaptiveresonance // Tests common to the entire package. +│ ├── art // Tests for just ART modules. +│ ├── artmap // Tests for just ARTMAP modules. +│ └───data // CI test data. ├── .appveyor // Appveyor: Windows-specific coverage. ├── .gitattributes // Git: LFS settings, languages, etc. ├── .gitignore // Git: .gitignore for the whole project. @@ -142,6 +149,24 @@ Furthermore, independent class labels are always `Int` because of the [Julia nat This project does not currently test for the support of [arbitrary precision arithmetic](https://docs.julialang.org/en/v1/manual/integers-and-floating-point-numbers/#Arbitrary-Precision-Arithmetic) because learning algorithms *in general* do not have a significant need for precision. -## Authors +## Attribution + +### Authors + +This package is developed and maintained by [Sasha Petrenko](https://github.com/AP6YC) with sponsorship by the [Applied Computational Intelligence Laboratory (ACIL)](https://acil.mst.edu/). 
+The users [@aaronpeikert](https://github.com/aaronpeikert), [@hayesall](https://github.com/hayesall), and [@markNZed](https://github.com/markNZed) have graciously contributed their time with reviews and feedback that have greatly improved the project. + +If you simply have suggestions for improvement, Sasha Petrenko () is the current developer and maintainer of the `AdaptiveResonance.jl` package, so please feel free to reach out with thoughts and questions. + +### Support + +This project is supported by grants from the [Night Vision Electronic Sensors Directorate](https://c5isr.ccdc.army.mil/inside_c5isr_center/nvesd/), the [DARPA Lifelong Learning Machines (L2M) program](https://www.darpa.mil/program/lifelong-learning-machines), [Teledyne Technologies](http://www.teledyne.com/), and the [National Science Foundation](https://www.nsf.gov/). +The material, findings, and conclusions here do not necessarily reflect the views of these entities. -If you simply have suggestions for improvement, Sasha Petrenko () is the current developer and maintainer of the AdaptiveResonance.jl package, so please feel free to reach out with thoughts and questions. +Research was sponsored by the Army Research Laboratory and was accomplished under +Cooperative Agreement Number W911NF-22-2-0209. +The views and conclusions contained in this document are +those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of +the Army Research Laboratory or the U.S. Government. +The U.S. Government is authorized to reproduce and +distribute reprints for Government purposes notwithstanding any copyright notation herein.