Miscellaneous documentation fixes (#337)
## Description

<!-- Provide a brief description of the PR's purpose here. -->

## TODO

<!-- Notable points that this PR has either accomplished or will
accomplish. -->

## Questions

<!-- Any concerns or points of confusion? -->


## Status

- [x] I have read the guidelines in
[CONTRIBUTING.md](https://github.com/icaros-usc/pyribs/blob/master/CONTRIBUTING.md)
- [x] I have formatted my code using `yapf`
- [x] I have tested my code by running `pytest`
- [x] I have linted my code with `pylint`
- [x] I have added a one-line description of my change to the changelog
in `HISTORY.md`
- [x] This PR is ready to go
itsdawei committed Jul 14, 2023
1 parent b1fe409 commit 7067159
Showing 6 changed files with 43 additions and 29 deletions.
7 changes: 7 additions & 0 deletions HISTORY.md
@@ -1,5 +1,12 @@
# History

## 0.5.2

### Changelog

#### Documentation
- Add BibTeX citation for GECCO 2023 (#337)

## 0.5.1

This release contains miscellaneous edits to our documentation from v0.5.0.
32 changes: 20 additions & 12 deletions README.md
@@ -86,20 +86,28 @@ more information on this paper, see [here](https://pyribs.org/paper).

## Citation

If you use pyribs in your research, please cite it as follows. Note that you
will need to include the
[hyperref](https://www.overleaf.com/learn/latex/Hyperlinks#Linking_web_addresses)
package in order to use the `\url` command. Also consider citing any algorithms
you use as shown [below](#citing-algorithms-in-pyribs).
If you use pyribs in your research, please consider citing our
[GECCO 2023 paper](https://dl.acm.org/doi/10.1145/3583131.3590374) as follows.
Also consider citing any algorithms you use as shown
[below](#citing-algorithms-in-pyribs).

```
@misc{pyribs,
title={pyribs: A Bare-Bones Python Library for Quality Diversity Optimization},
author={Bryon Tjanaka and Matthew C. Fontaine and David H. Lee and Yulun Zhang and Nivedit Reddy Balam and Nathaniel Dennler and Sujay S. Garlanka and Nikitas Dimitri Klapsis and Stefanos Nikolaidis},
year={2023},
eprint={2303.00191},
archivePrefix={arXiv},
primaryClass={cs.NE}
@inproceedings{10.1145/3583131.3590374,
author = {Tjanaka, Bryon and Fontaine, Matthew C and Lee, David H and Zhang, Yulun and Balam, Nivedit Reddy and Dennler, Nathaniel and Garlanka, Sujay S and Klapsis, Nikitas Dimitri and Nikolaidis, Stefanos},
title = {Pyribs: A Bare-Bones Python Library for Quality Diversity Optimization},
year = {2023},
isbn = {9798400701191},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3583131.3590374},
doi = {10.1145/3583131.3590374},
abstract = {Recent years have seen a rise in the popularity of quality diversity (QD) optimization, a branch of optimization that seeks to find a collection of diverse, high-performing solutions to a given problem. To grow further, we believe the QD community faces two challenges: developing a framework to represent the field's growing array of algorithms, and implementing that framework in software that supports a range of researchers and practitioners. To address these challenges, we have developed pyribs, a library built on a highly modular conceptual QD framework. By replacing components in the conceptual framework, and hence in pyribs, users can compose algorithms from across the QD literature; equally important, they can identify unexplored algorithm variations. Furthermore, pyribs makes this framework simple, flexible, and accessible, with a user-friendly API supported by extensive documentation and tutorials. This paper overviews the creation of pyribs, focusing on the conceptual framework that it implements and the design principles that have guided the library's development. Pyribs is available at https://pyribs.org},
booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference},
pages = {220–229},
numpages = {10},
keywords = {framework, quality diversity, software library},
location = {Lisbon, Portugal},
series = {GECCO '23}
}
```

8 changes: 4 additions & 4 deletions ribs/archives/_cvt_archive.py
@@ -37,7 +37,7 @@ class CVTArchive(ArchiveBase):
:alt: Runtime to insert 100k entries into CVTArchive
Across almost all numbers of cells, using the k-D tree is faster than using
brute force. Thus, **we recommend always using he k-D tree.** See
brute force. Thus, **we recommend always using the k-D tree.** See
`benchmarks/cvt_add.py
<https://github.com/icaros-usc/pyribs/tree/master/benchmarks/cvt_add.py>`_
in the project repo for more information about how this plot was generated.
@@ -188,6 +188,7 @@ def __init__(self,
f"{self._measure_dim})")
self._centroids = custom_centroids
self._samples = None

if self._centroids is None:
self._samples = self._rng.uniform(
self._lower_bounds,
@@ -267,9 +268,8 @@ def index_of(self, measures_batch):
check_finite(measures_batch, "measures_batch")

if self._use_kd_tree:
return np.asarray(
self._centroid_kd_tree.query(measures_batch))[1].astype(
np.int32)
_, indices = self._centroid_kd_tree.query(measures_batch)
return indices.astype(np.int32)

# Brute force distance calculation -- start by taking the difference
# between each measure i and all the centroids.
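The `index_of` change in this file replaces an opaque `np.asarray(...)[1]` on the k-D tree query result with direct tuple unpacking. A minimal standalone sketch of the same pattern using `scipy.spatial.cKDTree` (the centroid and measure values here are illustrative, not taken from pyribs):

```python
import numpy as np
from scipy.spatial import cKDTree

# Illustrative centroids; CVTArchive would generate these from samples.
centroids = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
tree = cKDTree(centroids)

measures_batch = np.array([[0.1, 0.1], [0.9, 0.8]])

# query returns a (distances, indices) tuple; unpacking it is clearer
# than wrapping the tuple in np.asarray and indexing [1].
_, indices = tree.query(measures_batch)
indices = indices.astype(np.int32)
print(indices)  # → [0 1]
```

Each measure maps to its nearest centroid: `[0.1, 0.1]` falls in cell 0 and `[0.9, 0.8]` in cell 1, matching a brute-force nearest-centroid search.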
8 changes: 4 additions & 4 deletions ribs/emitters/_evolution_strategy_emitter.py
@@ -21,7 +21,8 @@ class EvolutionStrategyEmitter(EmitterBase):
and inserting solutions. For instance, this can be
:class:`ribs.archives.GridArchive`.
x0 (np.ndarray): Initial solution. Must be 1-dimensional.
sigma0 (float): Initial step size / standard deviation.
sigma0 (float): Initial step size / standard deviation of the
distribution from which solutions are sampled.
ranker (Callable or str): The ranker is a :class:`RankerBase` object
that orders the solutions after they have been evaluated in the
environment. This parameter may be a callable (e.g. a class or
@@ -219,9 +220,8 @@ def tell(self,
status_batch = np.asarray(status_batch)
value_batch = np.asarray(value_batch)
batch_size = solution_batch.shape[0]
metadata_batch = (np.empty(batch_size, dtype=object) if
metadata_batch is None else np.asarray(metadata_batch,
dtype=object))
metadata_batch = (np.empty(batch_size, dtype=object) if metadata_batch
is None else np.asarray(metadata_batch, dtype=object))

# Validate arguments.
validate_batch_args(archive=self.archive,
13 changes: 6 additions & 7 deletions ribs/emitters/_gradient_arborescence_emitter.py
@@ -51,7 +51,8 @@ class GradientArborescenceEmitter(EmitterBase):
inserting solutions. For instance, this can be
:class:`ribs.archives.GridArchive`.
x0 (np.ndarray): Initial solution.
sigma0 (float): Initial step size / standard deviation.
sigma0 (float): Initial step size / standard deviation of the
distribution of gradient coefficients.
lr (float): Learning rate for the gradient optimizer.
ranker (Callable or str): The ranker is a :class:`RankerBase` object
that orders the solutions after they have been evaluated in the
@@ -334,9 +335,8 @@ def tell_dqd(self,
status_batch = np.asarray(status_batch)
value_batch = np.asarray(value_batch)
batch_size = solution_batch.shape[0]
metadata_batch = (np.empty(batch_size, dtype=object) if
metadata_batch is None else np.asarray(metadata_batch,
dtype=object))
metadata_batch = (np.empty(batch_size, dtype=object) if metadata_batch
is None else np.asarray(metadata_batch, dtype=object))

# Validate arguments.
validate_batch_args(archive=self.archive,
@@ -390,9 +390,8 @@ def tell(self,
status_batch = np.asarray(status_batch)
value_batch = np.asarray(value_batch)
batch_size = solution_batch.shape[0]
metadata_batch = (np.empty(batch_size, dtype=object) if
metadata_batch is None else np.asarray(metadata_batch,
dtype=object))
metadata_batch = (np.empty(batch_size, dtype=object) if metadata_batch
is None else np.asarray(metadata_batch, dtype=object))

# Validate arguments.
validate_batch_args(archive=self.archive,
4 changes: 2 additions & 2 deletions tutorials/lunar_lander.ipynb
@@ -460,7 +460,7 @@
"\n",
"With the pyribs components defined, we start searching with CMA-ME. Since we use 5 emitters each with a batch size of 30 and we run 300 iterations, we run 5 x 30 x 300 = 45,000 lunar lander simulations. We also keep track of some logging info via `archive.stats`, which is an [`ArchiveStats`](https://docs.pyribs.org/en/latest/api/ribs.archives.ArchiveStats.html) object.\n",
"\n",
"Since it takes a relatively long time to evaluate a lunar lander solution, we parallelize the evaluation of multiple solutions with Python's [multiprocessing module](https://docs.python.org/3/library/multiprocessing.html), specifically the [`starmap`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.starmap) method of [`multiprocessing.Pool`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool). With two workers, the loop should take **2 hours** to run. With two workers, it should take **1 hour** to run. Feel free to increase the number of workers based on the number of CPUs your system has available to speed up the loop further."
"Since it takes a relatively long time to evaluate a lunar lander solution, we parallelize the evaluation of multiple solutions with Python's [multiprocessing module](https://docs.python.org/3/library/multiprocessing.html), specifically the [`starmap`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.starmap) method of [`multiprocessing.Pool`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool). With one worker, the loop should take **2 hours** to run. With two workers, it should take **1 hour** to run. Feel free to increase the number of workers based on the number of CPUs your system has available to speed up the loop further."
]
},
{
@@ -1121,7 +1121,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.13"
"version": "3.11.3"
}
},
"nbformat": 4,
