gRPC can help minimize the barrier of cross-system communication by providing language-agnostic API definitions, backward- and forward-compatible versioning with protocol buffers, and pluggable load balancing and tracing. You will see how to quickly get up and running with the gRPC framework using Node.js, from creating a protocol definition to building meaningful health checks and securing the endpoint. Additionally, this session will go over best practices and how to take full advantage of what gRPC has to offer.
3. Why gRPC?
From gRPC’s website:
“gRPC is a modern open source high performance RPC framework that can run in
any environment. It can efficiently connect services in and across data centers with
pluggable support for load balancing, tracing, health checking and authentication. It
is also applicable in last mile of distributed computing to connect devices, mobile
applications and browsers to back end services.”
4. Benefits of gRPC
● Low latency
○ Not having to parse call parameters from paths and query strings allows for faster services
● Full-duplex streaming
○ Both requests and responses can be optionally streamed - gRPC uses HTTP/2 by default
● Supports multiple data formats
○ Protobuf, JSON, XML, FlatBuffers, and Thrift (varying levels of support)
● Static Types & Versioned Service Interface
○ Alleviates headaches where a field may be an object or an array, or is usually an int but
under some circumstances a string
○ Fields can be deprecated without negative downstream consequences
(when handled properly)
6. gRPC vs REST
gRPC
● Supports streaming APIs over HTTP/2
● Uses Messages
● Strong Typing
● Not as straightforward to call from the
browser as REST
● Supports many types of encoding
(protocol buffers by default)
● Fields can be deprecated without causing
breaking changes
REST
● Request/Response model over HTTP/1.1
● Utilizes resources and verbs
● Serialization
● Easy to call from the browser
● Supports a limited number of encodings
(JSON by default)
● Deprecating fields is a breaking change
that requires versioning
8. What is a protocol buffer?
From Google:
Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing
structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be
structured once, then you can use special generated source code to easily write and read your structured
data to and from a variety of data streams and using a variety of languages.
In plain terms:
Protocol buffers are essentially a compact binary wire format for serializing messages, where
fields are identified by number rather than by name.
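To make the wire format concrete, here is a minimal hand-encoding of a message with a single string field. This is an illustrative sketch only; in practice a protobuf library does the encoding for you, and the message name is made up for the example.

```javascript
// Hand-encode a proto3 message with a single string field:
//   message Greeting { string name = 1; }
// Each field is written as a tag byte, then its payload.
const encodeStringField = (fieldNumber, value) => {
  const wireType = 2 // 2 = length-delimited (strings, bytes, nested messages)
  const tag = (fieldNumber << 3) | wireType
  const bytes = Buffer.from(value, 'utf8')
  // For short strings, the length fits in a single varint byte.
  return Buffer.concat([Buffer.from([tag, bytes.length]), bytes])
}

const encoded = encodeStringField(1, 'Rex')
console.log(encoded) // <Buffer 0a 03 52 65 78>
```

Note that the field name never appears on the wire: only the number 1 (packed into the tag byte 0x0a) identifies the field, which is why field numbers must stay stable.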
9. Protocol Buffers
gRPC uses Protocol Buffers as its default data format. To get started with gRPC,
you will need to create a .proto file that defines:
1. What services you will implement
2. The message definition for requests
3. The message definition for responses
10. Service Definition
Let’s create a service for running a pet store
(Lovingly borrowed from Swagger’s example
docs).
We’ll create endpoints for:
● Retrieving all pets
● Retrieving a pet by ID
● Creating a new pet
● Updating an existing pet
● Deleting a pet
We’ll define a pet as an entity with the following:
● id: unique identifier for the pet
● name: the pet’s name
● status: one of available, pending, or sold
11. The .proto file
Before we can write any code for our services,
we’ll need to define them in a .proto file.
The file will need to first define the syntax and
package name.
After that we can define what messages we’ll
send/receive and what services are available.
syntax = "proto3";
package petstore;
12. Creating the Pet Message
We’ve already determined a Pet is an entity with
the following:
● id: unique identifier for the pet
● name: the pet’s name
● status: one of available, pending, or sold
We’ll create a message that has three string
fields to represent this data.
By default, all fields are optional. This allows
us to use the same message for both the
request and the response when retrieving a pet.
message Pet {
string id = 1;
string name = 2;
string status = 3;
}
13. Defining Messages
Messages can be composed of a mixture of
scalar and custom types.
Scalar types include:
double, float, int32, int64, uint32, uint64, sint32,
sint64, fixed32, fixed64, sfixed32, sfixed64, bool,
string, bytes
Numeric values default to zero, booleans to
false, and strings to an empty string.
message Pet {
string id = 1;
string name = 2;
string status = 3;
}
14. Assigning Field Numbers
You’ll notice that each field in the message definition has a
unique number.
These field numbers identify your fields to the binary parser
and should not be changed once your message is in use.
Field numbers 1-15 take one byte to encode, and should be
reserved for frequently occurring message elements.
Field numbers 16-2047 take two bytes to encode.
Deprecated fields should still be defined in your message
definition with their original field number.
message Pet {
string id = 1;
string name = 2;
string status = 3;
}
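The one-byte/two-byte boundary comes from how tags are varint-encoded: a field's tag is (fieldNumber << 3) | wireType, packed seven bits per byte. A small sketch of the size calculation (assuming wire type 0 for simplicity):

```javascript
// Compute how many bytes a field's tag takes on the wire.
// tag = (fieldNumber << 3) | wireType, encoded as a varint
// (7 payload bits per byte), so fields 1-15 fit in one byte.
const tagSize = (fieldNumber, wireType = 0) => {
  let tag = (fieldNumber << 3) | wireType
  let bytes = 0
  do {
    bytes += 1
    tag >>>= 7 // shift off the 7 bits just consumed
  } while (tag > 0)
  return bytes
}

console.log(tagSize(15)) // 1
console.log(tagSize(16)) // 2
console.log(tagSize(2047)) // 2
```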
15. Enums
We wanted to limit status to be one of available,
pending, or sold. We can do this by defining an
enum.
To future-proof our service, we’ll define the
default status as UNKNOWN in case we decide
to deprecate the field in the future. (In proto3,
the first enum value must be zero and serves
as the default.)
enum Status {
UNKNOWN = 0;
AVAILABLE = 1;
PENDING = 2;
SOLD = 3;
}
message Pet {
string id = 1;
string name = 2;
Status status = 3;
}
16. Repeated Fields and Empty Messages
What if we want to return multiple pets? We can
do this by using the repeated keyword.
How about instances where we want to send
empty messages? We will still need to define a
message for those instances, but we will not
give it any fields.
message Pet {
string id = 1;
string name = 2;
Status status = 3;
}
message Pets {
repeated Pet pets = 1;
}
message Empty {}
17. Defining Services
Now that we’ve defined our messages, we’ll
need to define what services we’ll make
available.
We’ll create services for:
● Retrieving all pets
● Retrieving a pet by ID
● Creating a new pet
● Updating an existing pet
● Deleting a pet
service PetStore {
rpc GetAll(Empty) returns (Pets) {}
rpc GetPet(Pet) returns (Pet) {}
rpc CreatePet(Pet) returns (Pet) {}
rpc UpdatePet(Pet) returns (Pet) {}
rpc DeletePet(Pet) returns (Empty) {}
}
18. Putting it all Together
syntax = "proto3";
package petstore;
service PetStore {
rpc GetAll(Empty) returns (Pets) {}
rpc GetPet(Pet) returns (Pet) {}
rpc CreatePet(Pet) returns (Pet) {}
rpc UpdatePet(Pet) returns (Pet) {}
rpc DeletePet(Pet) returns (Empty) {}
}
message Empty {}
enum Status {
UNKNOWN = 0;
AVAILABLE = 1;
PENDING = 2;
SOLD = 3;
}
message Pet {
string id = 1;
string name = 2;
Status status = 3;
}
message Pets {
repeated Pet pets = 1;
}
20. Using Protobufs in Node
In Node, there are two options for using Protocol Buffers: static generation or
dynamic loading.
With static generation, you’ll use the protoc compiler to read your protobuf definition and
generate static JavaScript files that you’ll use for creating your server and client. This
is how protobufs are handled in most other supported languages.
With Node you also have the option to dynamically load your proto definition,
making it function similar to any other dependency you might have.
For our example, we’ll use dynamic loading.
21. Loading the Protobuf
You’ll need two modules to get started
● grpc
● @grpc/proto-loader
When loading the protocol definition, you’ll have a couple of options.
● keepCase: keep field casing instead of converting to camelcase
● longs: Should long numbers be treated as strings or numbers
● enums: Should enums use the string or numeric value
● defaults: Should default values be set
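Putting those options together, a dynamic-loading module might look like the following sketch; it is one way to build the `../proto` module the later server code requires. The `petstore.proto` path is an assumption — adjust it to wherever your definition lives.

```javascript
// Dynamically load the petstore definition at runtime.
// Assumes petstore.proto sits next to this file (illustrative path).
const grpc = require('grpc')
const protoLoader = require('@grpc/proto-loader')

const packageDefinition = protoLoader.loadSync('petstore.proto', {
  keepCase: true, // keep field casing instead of converting to camel case
  longs: String,  // represent 64-bit numbers as strings
  enums: String,  // represent enum values as their string names
  defaults: true  // populate unset fields with their default values
})

// The loaded package exposes the PetStore service constructor
// under the package name from the .proto file (proto.petstore).
const proto = grpc.loadPackageDefinition(packageDefinition)
module.exports = proto
```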
24. Creating the gRPC Server
To run a gRPC server, you will first need to create a server instance in Node.
const grpc = require('grpc')
const server = new grpc.Server()
Additionally, you must tell it what security to use and what port to run on. For our
example, we’ll use the simplest type of security: insecure.
server.bind('0.0.0.0:50051', grpc.ServerCredentials.createInsecure())
server.start()
25. Adding the Service Definition
We now have a running server, but it doesn’t actually do anything. Let’s assume
we have a local package that already contains all of the service logic. We’ll also
need to load the proto definition from before.
const services = require('./services')
const proto = require('../proto')
For our server to register as a valid petstore server for our protobuf definition, it
must implement all of the services we defined.
26. Full Server File
const grpc = require('grpc')
const proto = require('../proto')
const services = require('./services')
const server = new grpc.Server()
server.addService(proto.petstore.PetStore.service, {
getAll: services.getPets,
getPet: services.getPet,
createPet: services.createPet,
updatePet: services.updatePet,
deletePet: services.deletePet
})
server.bind('0.0.0.0:50051', grpc.ServerCredentials.createInsecure())
server.start()
27. Writing Services
A service will receive two arguments, a call and a callback. The call contains the
request along with some other metadata. The callback has a signature of (error,
message).
const getPet = (call, callback) => {
const pet = db.getPet(call.request)
callback(null, pet)
}
28. Error Handling
We have a service that looks up a pet by ID, but what if we don’t have a record for that pet? Should we
send back a pet message with defaulted fields? While that is an option, gRPC has standardized status
codes, much like HTTP status codes in REST. We can build standardized status messages using the @grpc/grpc-js package.
const { status } = require('grpc')
const grpc = require('@grpc/grpc-js')
const getPet = (call, callback) => {
const pet = db.getPet(call.request)
if (!pet) {
const err = new grpc.StatusBuilder().withCode(status.NOT_FOUND).withDetails('Pet Not Found').build()
callback(err)
return
}
callback(null, pet)
}
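The NOT_FOUND code used above is one of gRPC's standard status codes, whose numeric values are fixed by the gRPC specification (and exposed as the `status` enum of the grpc package). A few of the most common ones, sketched as a plain object for reference:

```javascript
// Common gRPC status codes; numeric values are fixed by the gRPC spec.
const GrpcStatus = {
  OK: 0,
  INVALID_ARGUMENT: 3,
  DEADLINE_EXCEEDED: 4,
  NOT_FOUND: 5,        // used above when a pet is missing
  ALREADY_EXISTS: 6,
  PERMISSION_DENIED: 7,
  UNIMPLEMENTED: 12,   // used for services we choose not to implement
  INTERNAL: 13,
  UNAVAILABLE: 14,
  UNAUTHENTICATED: 16
}

console.log(GrpcStatus.NOT_FOUND) // 5
```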
29. Unimplemented Methods
As stated earlier, for our server to be of type petstore, it must implement all five
services, but what if we didn’t want to implement one of them? gRPC provides a
status code for unimplemented services.
const { status } = require('grpc')
const grpc = require('@grpc/grpc-js')
const deletePet = (call, callback) => {
const err = new grpc.StatusBuilder().withCode(status.UNIMPLEMENTED).withDetails('Service Not Implemented').build()
callback(err)
}
32. Creating the gRPC Client
Creating a gRPC client is fairly straightforward. You will just need to load a
protocol definition and know where the gRPC server is running.
const { credentials } = require('grpc')
const proto = require('../proto')
const client = new proto.petstore.PetStore('localhost:50051', credentials.createInsecure())
From there, you can make calls to your server and handle any errors.
client.getPet({ id:'23' }, (err, pet) => {
if (err) {
console.log(err.details) // Prints the error message, in this case “Pet Not Found”
}
console.log(pet)
})
34. Health Checks
● gRPC provides a standard protobuf definition for health checks.
● This defines two methods, Check and Watch.
○ Check is a standard unary call; Watch is a streaming call.
● Returns one of three statuses: UNKNOWN, SERVING, NOT_SERVING
● The client can decide how to handle these statuses
○ For instance, if the health check returns NOT_SERVING you can fail the request, or queue it
and try again with an exponential backoff
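As an illustration of the exponential-backoff option mentioned above, here is a small sketch that computes retry delays for a queued request; the base and cap values are arbitrary choices for the example, not gRPC defaults.

```javascript
// Compute exponentially growing retry delays, capped at maxMs.
// Useful when a health check returns NOT_SERVING and we want to
// queue the request and retry with backoff.
const backoffDelays = (attempts, baseMs = 100, maxMs = 5000) => {
  const delays = []
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(baseMs * 2 ** i, maxMs))
  }
  return delays
}

console.log(backoffDelays(6)) // [ 100, 200, 400, 800, 1600, 3200 ]
```

A production retry loop would also add jitter and give up after a deadline; this sketch only shows the delay schedule.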
35. Implementing Multiple Server Types
A single server can implement multiple service types. For instance, you can
implement both the petstore and the health check services.
const server = new grpc.Server()
server.addService(proto.petstore.PetStore.service, {
// service functions
})
server.addService(proto.health.Health.service, {
// service functions
})
36. Importing Proto Definitions
You can reuse definitions from one proto file by importing them in another.
import "myproject/other_protos.proto";
For well-known types, you can use the “Any” type, which pairs a type URL (used to
look up the definition for deserializing) with the value:
{
"@type": "type.googleapis.com/google.protobuf.Duration",
"value": "1.212s"
}
37. Security
gRPC supports three security schemes:
1. Insecure (no encryption or authentication)
2. SSL/TLS
3. Token-based authentication with Google (should only be used with Google
services)