
Commit

Merge pull request #3 from florian1345/readme
Readme
florian1345 committed Dec 21, 2021
2 parents 800048f + 72619e3 commit 232a6c3
Showing 3 changed files with 83 additions and 3 deletions.
1 change: 1 addition & 0 deletions Cargo.toml
@@ -7,6 +7,7 @@ edition = "2021"
documentation = "https://docs.rs/lru-mem/0.1.0/lru-mem/"
license = "MIT OR Apache-2.0"
categories = [ "data-structures" ]
readme = "README.md"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

81 changes: 80 additions & 1 deletion README.md
@@ -1,2 +1,81 @@
# lru-mem
An LRU cache implementation bounded by memory.

An implementation of a memory-bounded LRU (least-recently-used) cache for Rust.
It supports average-case O(1) insert, get, and remove. It also provides
utilities such as iterators, capacity management, and mutable access to
entries.

Note that the memory required for each entry is only an estimate and some
auxiliary structure is disregarded. The actual data structure can therefore
take more memory than was assigned, though in most cases the overshoot should
not be excessive.
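As a rough illustration of what such an estimate involves (this is not the
crate's exact accounting), one can sum the stack and heap sizes of an entry's
components. The entry type below mirrors the one used in the example further
down:

```rust
use std::mem;

fn main() {
    // Hypothetical entry: a u128 token mapped to a Vec<String> of sections.
    let sections = vec![String::from("header"), String::from("body")];

    // Stack portion: the key plus the value struct itself.
    let stack = mem::size_of::<u128>() + mem::size_of::<Vec<String>>();

    // Heap portion: each String's struct plus its byte payload. Allocator
    // overhead and the cache's own bookkeeping are not counted here,
    // which is exactly why such estimates run low.
    let heap: usize = sections.iter()
        .map(|s| mem::size_of::<String>() + s.len())
        .sum();

    println!("estimated entry size: {} bytes", stack + heap);
}
```

On a typical 64-bit target this prints an estimate of 98 bytes; the true
footprint is somewhat larger, which is the kind of underestimate described
above.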

# Motivating example

Imagine we are building a web server that sends large responses to clients. To
reduce the load, each response is split into sections and the client is given a
token to access the sections individually. However, recomputing the sections on
every request would overload the server, so they need to be cached. An LRU
cache is useful in this situation, as requests for the sections of a response
tend to be temporally localized: a client that just received a token is likely
to fetch its sections soon.

Now consider the situation where most responses are very small, but some may be
large. A cache bounded by entry count would either be sized conservatively,
holding fewer cached responses than memory would normally allow, or sized
liberally, risking memory exhaustion if too many large responses have to be
cached at once. To prevent this, the cache is designed with an upper bound on
its memory usage instead of its number of elements.
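To make the memory bound concrete, the eviction policy can be sketched with
standard-library types (an illustration only, not the crate's implementation):
when a new entry would exceed the budget, least-recently-used entries are
dropped until it fits.

```rust
use std::collections::VecDeque;

// Illustration: entries are (key, estimated size in bytes) pairs, ordered
// from least to most recently used. Evict from the front until the incoming
// entry fits within max_bytes.
fn evict_until_fits(queue: &mut VecDeque<(u128, usize)>, used: &mut usize,
                    max_bytes: usize, incoming: usize) {
    while *used + incoming > max_bytes {
        match queue.pop_front() { // front = least recently used
            Some((_key, size)) => *used -= size,
            None => break, // incoming entry exceeds the whole budget
        }
    }
}

fn main() {
    let mut queue: VecDeque<(u128, usize)> = VecDeque::new();
    queue.push_back((1, 400)); // least recently used
    queue.push_back((2, 300));
    let mut used = 700;

    // A 500-byte entry arrives with a 1000-byte budget: entry 1 is evicted.
    evict_until_fits(&mut queue, &mut used, 1000, 500);
    println!("used = {}, remaining entries = {}", used, queue.len());
}
```

This is the behavior the crate provides internally with average-case O(1)
operations; the sketch only shows the policy, not the data structure.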

The code below shows what the basic structure might look like.

```rust
use lru_mem::LruCache;

struct WebServer {
    cache: LruCache<u128, Vec<String>>
}

fn random_token() -> u128 {
    // Placeholder for a cryptographically secure random token.
    42
}

fn generate_sections(input: String) -> Vec<String> {
    // Placeholder for a complicated computation of sections that are
    // highly variable in size.
    vec![input.clone(), input]
}

impl WebServer {
    fn new(max_size: usize) -> WebServer {
        // Create a new web server whose cache holds at most max_size
        // bytes of elements.
        WebServer {
            cache: LruCache::new(max_size)
        }
    }

    fn on_query(&mut self, input: String) -> u128 {
        // Generate the sections, store them in the cache, and return
        // the token.
        let token = random_token();
        let sections = generate_sections(input);
        self.cache.insert(token, sections)
            .expect("sections do not fit in the cache");

        token
    }

    fn on_section_request(&mut self, token: u128, index: usize)
            -> Option<&String> {
        // Look up the token and return the section with the given index.
        self.cache.get(&token).and_then(|s| s.get(index))
    }
}
```

For more details, check out the documentation.

# Links

* [Crate](https://crates.io/crates/lru-mem)
* [Documentation](https://docs.rs/lru-mem/)
* [Repository](https://github.com/florian1345/lru-mem)
4 changes: 2 additions & 2 deletions src/lib.rs
@@ -1,9 +1,9 @@
//! This crate implements an LRU (least-recently-used) cache that is limited by
//! the total size of its entries. As more entries are added than fit in the
//! specified memory bound, the least-recently-used ones are ejected. The cache
//! supports O(1) insertion, retrieval, and removal.
//! supports average-case O(1) insertion, retrieval, and removal.
//!
//! Note that the memory required for each entry is only estimated and some
//! Note that the memory required for each entry is only an estimate and some
//! auxiliary structure is disregarded. With some data structures (such as the
//! [HashMap] or [HashSet](std::collections::HashSet)), some internal data is
//! not accessible, so the required memory is even more underestimated. Therefore,
