gRPC - An RPC library and framework

Join the chat at https://gitter.im/grpc/grpc

Copyright 2015 Google Inc.

Documentation

You can find more detailed documentation and examples in the doc and examples directories respectively.

Installation & Testing

See INSTALL for installation instructions for various platforms.

See tools/run_tests for guidance on how to run various test suites (e.g. unit tests, interop tests, benchmarks).

See the Performance dashboard for performance numbers for v1.0.x.

Repository Structure & Status

This repository contains source code for gRPC libraries in multiple languages, all written on top of the shared C core library in src/core.

Libraries in different languages may be in different states of development. We are seeking contributions for all of these libraries.

Language                 Source           Status
Shared C [core library]  src/core         1.0
C++                      src/cpp          1.0
Ruby                     src/ruby         1.0
NodeJS                   src/node         1.0
Python                   src/python       1.0
PHP                      src/php          1.0
C#                       src/csharp       1.0
Objective-C              src/objective-c  1.0

Java source code is in the grpc-java repository. Go source code is in the grpc-go repository.

See MANIFEST.md for a listing of top-level items in the repository.

Overview

Remote Procedure Calls (RPCs) provide a useful abstraction for building distributed applications and services. The libraries in this repository provide a concrete implementation of the gRPC protocol, layered over HTTP/2. These libraries enable communication between clients and servers using any combination of the supported languages.

Interface

Developers using gRPC typically start with the description of an RPC service (a collection of methods) and generate client- and server-side interfaces, which they use on the client side and implement on the server side.

By default, gRPC uses Protocol Buffers as the Interface Definition Language (IDL) for describing both the service interface and the structure of the payload messages. Other IDLs can be used instead if desired.
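
As a concrete illustration, here is a minimal hypothetical .proto definition; the example.Greeter service, its methods, and its messages are invented for this sketch and are not part of the repository. It declares one unary method and one bidirectional streaming method, which the later sketches refer back to.

```proto
// greeter.proto -- a hypothetical service definition, for illustration only.
syntax = "proto3";

package example;

// Payload messages are ordinary protocol buffer messages.
message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}

// A service is a collection of RPC methods.
service Greeter {
  // Unary RPC: one request, one response.
  rpc SayHello (HelloRequest) returns (HelloReply);
  // Bidirectional streaming RPC: both sides send a stream of messages.
  rpc Chat (stream HelloRequest) returns (stream HelloReply);
}
```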

Surface API

Starting from an interface definition in a .proto file, gRPC provides protocol compiler plugins that generate client- and server-side APIs. gRPC users typically call into these APIs on the client side and implement the corresponding API on the server side.
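
As a minimal sketch, here is what calling into and implementing the generated C++ code might look like for the hypothetical Greeter service above; names such as example::Greeter::NewStub and the GreeterImpl class follow from that invented .proto and are assumptions, not code from this repository.

```cpp
// Sketch only: assumes greeter.proto above was compiled with protoc and the
// gRPC C++ plugin, producing greeter.grpc.pb.h.
#include <memory>
#include <string>

#include <grpc++/grpc++.h>
#include "greeter.grpc.pb.h"

// Server side: implement the generated service interface.
class GreeterImpl final : public example::Greeter::Service {
  grpc::Status SayHello(grpc::ServerContext* context,
                        const example::HelloRequest* request,
                        example::HelloReply* reply) override {
    reply->set_message("Hello, " + request->name());
    return grpc::Status::OK;
  }
};

// Client side: call into the generated stub.
void CallSayHello(const std::shared_ptr<grpc::Channel>& channel) {
  std::unique_ptr<example::Greeter::Stub> stub =
      example::Greeter::NewStub(channel);
  grpc::ClientContext context;
  example::HelloRequest request;
  request.set_name("world");
  example::HelloReply reply;
  grpc::Status status = stub->SayHello(&context, request, &reply);
}
```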

Synchronous vs. asynchronous

Synchronous RPC calls, which block until a response arrives from the server, are the closest approximation to the procedure-call abstraction that RPC aspires to.

On the other hand, networks are inherently asynchronous, and in many scenarios it is desirable to be able to start RPCs without blocking the current thread.

The gRPC programming surface in most languages comes in both synchronous and asynchronous flavors.
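
For example, the C++ code generated for the hypothetical Greeter service above provides both the blocking SayHello call shown earlier and an asynchronous AsyncSayHello variant driven by a completion queue. The sketch below assumes those generated names and is illustrative only.

```cpp
// Sketch of the asynchronous client flavor for the hypothetical Greeter
// service (assumes greeter.grpc.pb.h generated from the earlier .proto).
#include <memory>

#include <grpc++/grpc++.h>
#include "greeter.grpc.pb.h"

void CallSayHelloAsync(example::Greeter::Stub* stub) {
  grpc::ClientContext context;
  grpc::CompletionQueue cq;
  example::HelloRequest request;
  request.set_name("world");

  example::HelloReply reply;
  grpc::Status status;

  // Start the RPC without blocking; the result is delivered through the
  // completion queue when the server responds.
  std::unique_ptr<grpc::ClientAsyncResponseReader<example::HelloReply>> rpc(
      stub->AsyncSayHello(&context, request, &cq));
  rpc->Finish(&reply, &status, /*tag=*/reinterpret_cast<void*>(1));

  // Blocking here only keeps the example short; a real application would
  // typically poll the completion queue from a dedicated thread.
  void* got_tag = nullptr;
  bool ok = false;
  cq.Next(&got_tag, &ok);
}
```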

Streaming

gRPC supports streaming semantics, where either the client or the server (or both) sends a stream of messages on a single RPC call. The most general case is bidirectional streaming, in which a single gRPC call establishes a stream on which both the client and the server can send messages to each other. The streamed messages are delivered in the order they were sent.
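
A minimal client-side sketch of the hypothetical bidirectional streaming Chat method from the earlier .proto, again assuming the generated C++ names rather than code from this repository:

```cpp
// Sketch of a bidirectional streaming call: the client writes a message,
// closes its side of the stream, then drains the server's messages.
#include <grpc++/grpc++.h>
#include "greeter.grpc.pb.h"

void Chat(example::Greeter::Stub* stub) {
  grpc::ClientContext context;
  // Both directions stay open for the lifetime of the call.
  auto stream = stub->Chat(&context);

  example::HelloRequest request;
  request.set_name("world");
  stream->Write(request);  // client-to-server message
  stream->WritesDone();    // no further client messages

  example::HelloReply reply;
  while (stream->Read(&reply)) {
    // Server-to-client messages arrive in the order they were sent.
  }
  grpc::Status status = stream->Finish();
}
```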

Protocol

The gRPC protocol specifies the abstract requirements for communication between clients and servers. A concrete embedding over HTTP/2 completes the picture by fleshing out the details of each of the required operations.

Abstract gRPC protocol

A gRPC call comprises a bidirectional stream of messages, initiated by the client. In the client-to-server direction, this stream begins with a mandatory Call Header, followed by optional Initial-Metadata, followed by zero or more Payload Messages. The server-to-client direction contains an optional Initial-Metadata, followed by zero or more Payload Messages, terminated with a mandatory Status and optional Status-Metadata (a.k.a. Trailing-Metadata).

Implementation over HTTP/2

The abstract protocol defined above is implemented over HTTP/2. gRPC bidirectional streams are mapped to HTTP/2 streams. The contents of Call Header and Initial Metadata are sent as HTTP/2 headers and are subject to HPACK compression. Payload Messages are serialized into a byte stream of length-prefixed gRPC frames, which are then fragmented into HTTP/2 frames at the sender and reassembled at the receiver. Status and Trailing-Metadata are sent as HTTP/2 trailing headers (a.k.a. trailers).
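
The length prefix on each Payload Message is a 1-byte compressed flag followed by a 4-byte big-endian message length, per the gRPC-over-HTTP/2 specification. The sketch below illustrates that framing; it is not the library's actual serialization code.

```cpp
// Illustrative framing of one gRPC message for transmission in HTTP/2 DATA
// frames: [compressed flag (1 byte)] [length (4 bytes, big-endian)] [payload].
#include <cstdint>
#include <string>
#include <vector>

std::vector<uint8_t> FrameMessage(const std::string& serialized_message,
                                  bool compressed) {
  std::vector<uint8_t> frame;
  frame.push_back(compressed ? 1 : 0);                 // Compressed-Flag
  const uint32_t len = static_cast<uint32_t>(serialized_message.size());
  frame.push_back(static_cast<uint8_t>(len >> 24));    // Message-Length,
  frame.push_back(static_cast<uint8_t>(len >> 16));    // big-endian
  frame.push_back(static_cast<uint8_t>(len >> 8));
  frame.push_back(static_cast<uint8_t>(len));
  frame.insert(frame.end(), serialized_message.begin(),
               serialized_message.end());              // message bytes
  return frame;
}
```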

Flow Control

gRPC inherits the flow control mechanisms in HTTP/2 and uses them to enable fine-grained control of the amount of memory used for buffering in-flight messages.