Annnnd I'm done (grammar/spelling corrections) #813

Open · wants to merge 11 commits into base: master
@@ -3,16 +3,16 @@
Since the Redis 4.0 version (currently in release candidate state) Redis
supports the ARM processor, and the Raspberry Pi, as a main
platform, exactly like it happens for Linux/x86. It means that every new
-release of Redis is tested on the Pi environment, and that we take
+release of Redis is tested on the Pi environment, and that we keep
this documentation page updated with information about supported devices
and information. While Redis already runs on Android, in the future we look
-forward to extend our testing efforts to Android to also make it an officially
+forward to extending our testing efforts to Android to also make it an officially
supported platform.
We believe that Redis is ideal for IoT and Embedded devices for several
reasons:
-* Redis has a very small memory footprint and CPU requirements. Can run in small devices like the Raspberry Pi Zero without impacting the overall performance, using a small amount of memory, while delivering good performance for many use cases.
+* Redis has a very small memory footprint and low CPU requirements. It can run on small devices like the Raspberry Pi Zero without impacting the overall performance, using a small amount of memory while delivering good performance for many use cases.
* The data structures of Redis are often a good way to model IoT/embedded use cases. For example in order to accumulate time series data, to receive or queue commands to execute or responses to send back to the remote servers and so forth.
* Modeling data inside Redis can be very useful in order to make in-device decisions for appliances that must respond very quickly or when the remote servers are offline.
* Redis can be used as an interprocess communication system between the processes running in the device.
@@ -30,7 +30,7 @@ run as expected.
## Building Redis in the Pi
* Grab the latest commit of the Redis 4.0 branch.
-* Just use `make` as usually to create the executable.
+* Just use `make` as usual to create the executable.
There is nothing special in the process. The only difference is that by
default, Redis uses the libc allocator instead of defaulting to Jemalloc
@@ -44,7 +44,7 @@ as the libc allocator.
Performance testing of Redis was performed in the Raspberry Pi 3 and in the
original model B Pi. The difference between the two Pis in terms of
delivered performance is quite big. The benchmarks were performed via the
-loopback interface, since most use cases will probably use Redis from within
+loopback interface, as most use cases will probably use Redis from within
the device and not via the network.
Raspberry Pi 3:
@@ -61,6 +61,4 @@ Raspberry Pi 1 model B:
* Test 3: Like test 1 but with AOF enabled, fsync 1 sec: 1,820 ops/sec
* Test 4: Like test 3, but with an AOF rewrite in progress: 1,000 ops/sec
-The benchmarks above are referring to simple SET/GET operations. The performance is similar for all the Redis fast operations (not running in linear time). However sorted sets may show slightly slow numbers.
+The benchmarks above refer to simple SET/GET operations. The performance is similar for all the fast Redis operations (those not running in linear time). However, sorted sets may show slightly slower numbers.
@@ -80,11 +80,11 @@ Event Loop Processing
`ae.c:aeProcessEvents` looks for the time event that will be pending in the smallest amount of time by calling `ae.c:aeSearchNearestTimer` on the event loop. In our case there is only one timer event in the event loop that was created by `ae.c:aeCreateTimeEvent`.
-Remember, that timer event created by `aeCreateTimeEvent` has by now probably elapsed because it had a expiry time of one millisecond. Since, the timer has already expired the seconds and microseconds fields of the `tvp` `timeval` structure variable is initialized to zero.
+Remember that the timer event created by `aeCreateTimeEvent` has probably elapsed by now because it had an expiry time of one millisecond. Since the timer has already expired, the seconds and microseconds fields of the `tvp` `timeval` structure variable are initialized to zero.
The `tvp` structure variable along with the event loop variable is passed to `ae_epoll.c:aeApiPoll`.
-`aeApiPoll` functions does a [`epoll_wait`](http://man.cx/epoll_wait) on the `epoll` descriptor and populates the `eventLoop->fired` table with the details:
+The `aeApiPoll` function does an [`epoll_wait`](http://man.cx/epoll_wait) on the `epoll` descriptor and populates the `eventLoop->fired` table with the details:
* `fd`: The descriptor that is now ready to do a read/write operation depending on the mask value.
* `mask`: The read/write event that can now be performed on the corresponding descriptor.
@@ -91,4 +91,4 @@ Look at `sdslen` function and see this trick at work:
Knowing this trick you could easily go through the rest of the functions in `sds.c`.
-The Redis string implementation is hidden behind an interface that accepts only character pointers. The users of Redis strings need not care about how its implemented and treat Redis strings as a character pointer.
+The Redis string implementation is hidden behind an interface that accepts only character pointers. The users of Redis strings need not care about how it's implemented and can treat Redis strings as character pointers.
@@ -30,9 +30,9 @@ Other features include:
* [LRU eviction of keys](/topics/lru-cache)
* [Automatic failover](/topics/sentinel)
You can use Redis from [most programming languages](/clients) out there.
Redis is written in **ANSI C** and works in most POSIX systems like Linux,
-\*BSD, OS X without external dependencies. Linux and OS X are the two operating systems where Redis is developed and more tested, and we **recommend using Linux for deploying**. Redis may work in Solaris-derived systems like SmartOS, but the support is *best effort*. There
+\*BSD, OS X without external dependencies. Linux and OS X are the two operating systems where Redis is developed and tested the most, and we **recommend using Linux for deploying**. Redis may work in Solaris-derived systems like SmartOS, but the support is *best effort*. There
is no official support for Windows builds, but Microsoft develops and
maintains a [Win-64 port of Redis](https://github.com/MSOpenTech/redis).
@@ -2,22 +2,22 @@ Redis latency monitoring framework
===
Redis is often used in the context of demanding use cases, where it
-serves a big amount of queries per second per instance, and at the same
+serves a large number of queries per second per instance, and at the same
time, there are very strict latency requirements both for the average response
time and for the worst case latency.
-While Redis is an in memory system, it deals with the operating system in
+While Redis is an in-memory system, it deals with the operating system in
different ways, for example, in the context of persisting to disk.
Moreover Redis implements a rich set of commands. Certain commands
are fast and run in constant or logarithmic time, other commands are slower
-O(N) commands, that can cause latency spikes.
+O(N) commands that can cause latency spikes.
Finally Redis is single threaded: this is usually an advantage
from the point of view of the amount of work it can perform per core, and in
the latency figures it is able to provide, but at the same time it poses
a challenge from the point of view of latency, since the single
-thread must be able to perform certain tasks incrementally, like for
-example keys expiration, in a way that does not impact the other clients
+thread must be able to perform certain tasks incrementally, for
+example key expiration, in a way that does not impact the other clients
that are served.
For all these reasons, Redis 2.8.13 introduced a new feature called
@@ -50,16 +50,16 @@ event. This is how the time series work:
* Every time a latency spike happens, it is logged in the appropriate time series.
* Every time series is composed of 160 elements.
-* Each element is a pair: an unix timestamp of the time the latency spike was measured, and the number of milliseconds the event took to executed.
+* Each element is a pair: a Unix timestamp of the time the latency spike was measured, and the number of milliseconds the event took to execute.
* Latency spikes for the same event happening in the same second are merged (by taking the maximum latency), so even if continuous latency spikes are measured for a given event, for example because the user set a very low threshold, at least 160 seconds of history are available.
* For every element the all-time maximum latency is recorded.
How to enable latency monitoring
---
-What is high latency for an use case, is not high latency for another. There are applications where all the queries must be served in less than 1 millisecond and applications where from time to time a small percentage of clients experiencing a 2 seconds latency is acceptable.
+What is high latency for one use case is not high latency for another. There are applications where all the queries must be served in less than 1 millisecond, and applications where from time to time a small percentage of clients experiencing a 2 second latency is acceptable.
-So the first step to enable the latency monitor is to set a **latency threshold** in milliseconds. Only events that will take more than the specified threshold will be logged as latency spikes. The user should set the threshold according to its needs. For example if for the requirements of the application based on Redis the maximum acceptable latency is 100 milliseconds, the threshold should be set to such a value in order to log all the events blocking the server for a time equal or greater to 100 milliseconds.
+So the first step to enable the latency monitor is to set a **latency threshold** in milliseconds. Only events that take more than the specified threshold will be logged as latency spikes. The user should set the threshold according to their needs. For example, if the maximum acceptable latency for the Redis-based application is 100 milliseconds, the threshold should be set to 100 milliseconds in order to log all the events blocking the server for a time equal to or greater than 100 milliseconds.
The latency monitor can easily be enabled at runtime in a production server
with the following command:
@@ -83,9 +83,9 @@ The `LATENCY LATEST` command reports the latest latency events logged. Each even
* Event name.
* Unix timestamp of the latest latency spike for the event.
* Latest event latency in milliseconds.
-* All time maximum latency for this event.
+* All-time maximum latency for this event.
-All time does not really mean the maximum latency since the Redis instance was
+All-time does not really mean the maximum latency since the Redis instance was
started, because it is possible to reset events data using `LATENCY RESET` as we'll see later.
The following is an example output: