
Commit

Write readme Makefile target
`$ make readme` cleans up Redis, runs the doctests, then cleans up Redis
again.  This way, I don't have to pollute `README.md` with Redis cleanup
commands.
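
Aside from activating the virtualenv, the new target amounts to the following Python, shown here as a minimal sketch (it assumes a local Redis on the default host and port, and `README.md` in the working directory):

```python
# Minimal sketch of what `make readme` runs, assuming a local Redis on the
# default host/port and README.md in the working directory.
import doctest
from redis import Redis

KEYS = (
    'dilberts', 'edible', 'expensive-function-cache', 'google-searches',
    'lyrics', 'nextid:user-ids', 'printer', 'raj',
)

def clean_up() -> None:
    'Delete the Redis keys that the README doctests create.'
    print(Redis().delete(*KEYS))  # Prints the number of keys deleted.

clean_up()                                                        # Clean up Redis,
doctest.testfile('README.md', module_relative=False, verbose=True)  # run the doctests,
clean_up()                                                        # then clean up Redis again.
```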
brainix committed Nov 16, 2020
1 parent 0b646d4 commit 7f3b3f3
Showing 2 changed files with 6 additions and 64 deletions.
6 changes: 6 additions & 0 deletions Makefile
@@ -76,6 +76,12 @@ else
python3 -m unittest --verbose $(tests)
endif

readme:
source $(venv)/bin/activate && \
python3 -c "from redis import Redis; redis = Redis(); print(redis.delete('dilberts', 'edible', 'expensive-function-cache', 'google-searches', 'lyrics', 'nextid:user-ids', 'printer', 'raj'))"; \
python3 -m doctest -v README.md; \
python3 -c "from redis import Redis; redis = Redis(); print(redis.delete('dilberts', 'edible', 'expensive-function-cache', 'google-searches', 'lyrics', 'nextid:user-ids', 'printer', 'raj'))"

release:
rm -f dist/*
source $(venv)/bin/activate && \
64 changes: 0 additions & 64 deletions README.md
@@ -36,14 +36,6 @@ That was the hardest part.

### Dicts

Clean up for the doctest (ignore me):

```python
>>> {redis.delete('raj'), 0, 1}
{0, 1}
>>>
```

Create a `RedisDict`:

@@ -72,14 +64,6 @@ True

### Sets

Clean up for the doctest (ignore me):

```python
>>> {redis.delete('edible'), 0, 1}
{0, 1}
>>>
```

Create a `RedisSet`:

@@ -112,14 +96,6 @@ False

### Lists

Clean up for the doctest (ignore me):

```python
>>> {redis.delete('lyrics'), 0, 1}
{0, 1}
>>>
```

Create a `RedisList`:

@@ -156,14 +132,6 @@ RedisList['everything', 'in', 'its', 'right', 'place']
and even machines, without a single point of failure. [Rationale and algorithm
description.](http://antirez.com/news/102)
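
For quick reference, a minimal `NextId` usage sketch, assuming a local Redis at the default address (the key name is illustrative):

```python
# Minimal NextId sketch; assumes a local Redis at the default address.
from pottery import NextId
from redis import Redis

redis = Redis()
user_ids = NextId(key='user-ids', masters={redis})

# Each call asks a quorum of the Redis masters for the next integer ID.
print(next(user_ids))  # 1
print(next(user_ids))  # 2
```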

Clean up for the doctest (ignore me):

```python
>>> {redis.delete('nextid:user-ids'), 0, 1}
{0, 1}
>>>
```

Instantiate an ID generator:

@@ -207,14 +175,6 @@ description.](http://redis.io/topics/distlock)
API as closely as is feasible. In other words, you can use `Redlock` the same
way that you use `threading.Lock`.
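
For quick reference, a minimal `Redlock` sketch used just like `threading.Lock`, assuming a local Redis at the default address:

```python
# Minimal Redlock sketch; assumes a local Redis at the default address.
from pottery import Redlock
from redis import Redis

redis = Redis()
printer_lock = Redlock(key='printer', masters={redis})

with printer_lock:  # Blocks until the distributed lock is acquired.
    print('printer_lock is locked')
    # ...do work that needs exclusive access to the shared resource...
# The lock is released automatically when the block exits.
```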

Clean up for the doctest (ignore me):

```python
>>> {redis.delete('printer'), 0, 1}
{0, 1}
>>>
```

Instantiate a `Redlock`:

@@ -304,14 +264,6 @@ In general, you should only use `redis_cache()` when you want to reuse
previously computed values. Accordingly, it doesn’t make sense to cache
functions with side-effects or impure functions such as `time()` or `random()`.
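
For quick reference, a minimal `redis_cache()` sketch, assuming a local Redis at the default address (the decorated function and key name are illustrative):

```python
# Minimal redis_cache() sketch; assumes a local Redis at the default address.
import time
from pottery import redis_cache
from redis import Redis

redis = Redis()

@redis_cache(redis=redis, key='expensive-function-cache')
def expensive_function(n: int) -> int:
    'A slow, pure function whose return values are worth reusing.'
    time.sleep(1)
    return n * n

expensive_function(4)  # Slow: computed, then stored in Redis.
expensive_function(4)  # Fast: returned from the Redis-backed cache.
```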

Clean up for the doctest (ignore me):

```python
>>> {redis.delete('expensive-function-cache'), 0, 1}
{0, 1}
>>>
```

Decorate a function:

@@ -433,14 +385,6 @@ a margin of error up to 2%. However, they can reasonably accurately estimate
the cardinality (size) of vast datasets (like the number of unique Google
searches issued in a day) with a tiny amount of storage (1.5 KB).
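
For quick reference, a minimal `HyperLogLog` sketch, assuming a local Redis at the default address (the sample data is illustrative):

```python
# Minimal HyperLogLog sketch; assumes a local Redis at the default address.
from pottery import HyperLogLog
from redis import Redis

redis = Redis()
google_searches = HyperLogLog(
    {'sonic the hedgehog', 'tooth fairy', 'google in 1998'},
    redis=redis,
    key='google-searches',
)
google_searches.add('how to tie a tie')

print(len(google_searches))                     # Approximate number of distinct elements.
print('sonic the hedgehog' in google_searches)  # Probabilistic membership test.
```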

Clean up for the doctest (ignore me):

```python
>>> {redis.delete('google-searches'), 0, 1}
{0, 1}
>>>
```

Create a `HyperLogLog`:

@@ -512,14 +456,6 @@ particular element before, you really must never have seen it). You can tune
your acceptable false positive probability, though at the expense of the
storage size and the element insertion/lookup time of your Bloom filter.
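
For quick reference, a minimal `BloomFilter` sketch, assuming a local Redis at the default address; the capacity and false-positive keyword names (`num_values`, `false_positives`) and the sample values below are assumptions, not taken from this diff:

```python
# Minimal BloomFilter sketch; assumes a local Redis at the default address.
# The num_values/false_positives keyword names are assumptions.
from pottery import BloomFilter
from redis import Redis

redis = Redis()
dilberts = BloomFilter(
    num_values=100,        # Assumed kwarg: expected number of elements.
    false_positives=0.01,  # Assumed kwarg: acceptable false-positive rate.
    redis=redis,
    key='dilberts',
)

dilberts.add('rajiv')
print('rajiv' in dilberts)  # True (possibly a false positive).
print('dan' in dilberts)    # False (definitely never added).
```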

Clean up for the doctest (ignore me):

```python
>>> {redis.delete('dilberts'), 0, 1}
{0, 1}
>>>
```

Create a `BloomFilter`:

