Official RAMCloud repo
Branch: master

Latest commit 1e639f4, Nov 21, 2018 (yilongli): "InfUdDriver: implemented the selective signaling optimization for TX path"

With this optimization, {basic,homa}+infud can achieve almost 3000MB/s outgoing throughput in the echo_basic benchmark using rc machines. This also matches the performance numbers generated by `ib_send_bw --connection UD`.
| Name | Latest commit message | Commit time |
| --- | --- | --- |
| apps | ClusterPerf: allowed echo_basic to send concurrent RPCs to multiple s… | Nov 25, 2018 |
| benchmarks/homa | Included HomaTransport and benchmark code used in the SIGCOMM'18 paper. | Jul 12, 2018 |
| bindings | Fixed gradle build scripts bug for java bindings. | May 9, 2017 |
| clientTests | Multiple clean-ups for the driver classes. | Aug 6, 2016 |
| config | Included HomaTransport and benchmark code used in the SIGCOMM'18 paper. | Jul 12, 2018 |
| docs | Add make docs using doxygen | Jan 23, 2010 |
| ft | Added support for fast+infeth to wireshark decoder. | Aug 3, 2011 |
| gtest @ ddd3502 | Bumped the gtest submodule to the current head | Oct 17, 2015 |
| hooks | Move apps and nanobenchmarks out of src directory. | Nov 7, 2016 |
| logcabin @ 4e6a7ca | Update LogCabin to v1.1.0 | Jul 27, 2015 |
| misc | Updated YCSB code to work with new Java bindings | Jun 8, 2015 |
| nanobenchmarks | Add a Perf benchmark for gettimeofday syscall | Mar 28, 2018 |
| rpm | Add RPM SPEC files in new rpm directory | Aug 15, 2014 |
| scripts | ClusterPerf: allowed echo_basic to send concurrent RPCs to multiple s… | Nov 25, 2018 |
| src | InfUdDriver: implemented the selective signaling optimization for TX path | Nov 25, 2018 |
| systemtests | Added cold start tests in systemtests/recoverytest.py. | Nov 2, 2012 |
| .gitignore | Added protobuf syntax specification to all .proto files. | Jul 7, 2018 |
| .gitmodules | | |
| .travis.yml | Updated Travis to install doxygen from a .deb | Aug 8, 2017 |
| AUTHORS | Checkpoint cleaner work. | Aug 9, 2011 |
| Doxyfile | Fixed slow "make docs": disabled DOT graph generation by default. | Apr 27, 2017 |
| GNUmakefile | Removed the GLIBCXX_USE_CXX11_ABI flag | Sep 13, 2018 |
| LICENSE | add license title | Aug 2, 2016 |
| README.md | Changed README and Travis now e-mails | Jan 23, 2016 |
| cpplint.py | Fix lint. | Sep 4, 2012 |
| designNotes | Fixes RAM-775 (WorkerTimers not firing) | Sep 16, 2015 |
| mergedep.pl | Started draft of build system | Nov 4, 2009 |
| pragmas.conf.py | Add pragma value to disable GCC warnings altogether | Oct 24, 2010 |
| pragmas.py | Improve makefile performance | Nov 7, 2010 |

README.md

RAMCloud

For up-to-date information on how to install and use RAMCloud, see the RAMCloud Wiki: https://ramcloud.stanford.edu/wiki/display/ramcloud

What is RAMCloud?

Note: the following is an excerpt copied from the RAMCloud wiki on January 22, 2016.

RAMCloud is a new class of super-high-speed storage for large-scale datacenter applications. It is designed for applications in which a large number of servers in a datacenter need low-latency access to a large durable datastore. RAMCloud offers the following properties:

  • Low Latency: RAMCloud keeps all data in DRAM at all times, so applications can read RAMCloud objects remotely over a datacenter network in as little as 5μs. Writes take less than 15μs. Unlike systems such as memcached, applications never have to deal with cache misses or wait for disk/flash accesses. As a result, RAMCloud storage is 10-1000x faster than available alternatives.
  • Large scale: RAMCloud aggregates the DRAM of thousands of servers to support total capacities of 1PB or more.
  • Durability: RAMCloud replicates all data on nonvolatile secondary storage such as disk or flash, so no data is lost if servers crash or the power fails. One of RAMCloud's unique features is that it recovers very quickly from server crashes (only 1-2 seconds) so the availability gaps after crashes are almost unnoticeable. As a result, RAMCloud combines the durability of replicated disk with the speed of DRAM. If you have used memcached, you have probably experienced the challenges of managing a second durable storage system and maintaining consistency between it and memcached. With RAMCloud, there is no need for a second storage system.
  • Powerful data model: RAMCloud's basic data model is a key-value store, but we have extended it with several additional features, such as:
    • Multiple tables, each with its own key space.
    • Transactional updates that span multiple objects in different tables.
    • Secondary indices.
    • Strong consistency: unlike other NoSQL storage systems, all updates in RAMCloud are consistent, immediately visible, and durable.
  • Easy deployment: RAMCloud is a software package that runs on commodity Intel servers with the Linux operating system. RAMCloud is available freely in open source form.

From a practical standpoint, RAMCloud enables a new class of applications that manipulate large data sets very intensively. Using RAMCloud, an application can combine tens of thousands of items of data in real time to provide instantaneous responses to user requests. Unlike traditional databases, RAMCloud scales to support very large applications, while still providing a high level of consistency. We believe that RAMCloud, or something like it, will become the primary storage system for structured data in cloud computing environments such as Amazon's AWS or Microsoft's Azure. We have built the system not as a research prototype, but as a production-quality software system, suitable for use by real applications.

RAMCloud is also interesting from a research standpoint. Its two most important attributes are latency and scale. The first goal is to provide the lowest possible end-to-end latency for applications accessing the system from within the same datacenter. We currently achieve latencies of around 5μs for reads and 15μs for writes, but hope to improve these in the future. In addition, the system must scale, since no single machine can store enough DRAM to meet the needs of large-scale applications. We have designed RAMCloud to support at least 10,000 storage servers; the system must automatically manage all the information across the servers, so that clients do not need to deal with any distributed systems issues. The combination of latency and scale has created a large number of interesting research issues, such as how to ensure data durability without sacrificing the latency of reads and writes, how to take advantage of the scale of the system to recover very quickly after crashes, how to manage storage in DRAM, and how to provide higher-level features such as secondary indexes and multiple-object transactions without sacrificing the latency or scalability of the system. Our solutions to these problems are described in a series of technical papers.

The RAMCloud project is based in the Department of Computer Science at Stanford University.

Learn More about RAMCloud

https://ramcloud.stanford.edu/wiki/display/ramcloud