scarf-y/grpc-tutorial
Name : Daffa Naufal Rahadian
NPM : 2306213003

Reflection

  1. What are the key differences between unary, server streaming, and bi-directional streaming RPC (Remote Procedure Call) methods, and in what scenarios would each be most suitable?
    = Unary RPC involves a single request and response, ideal for simple operations like fetching or updating a resource (e.g., GetUser, CreateOrder). Server streaming RPC allows the client to send one request and receive a stream of responses, making it suitable for large datasets or real-time updates (e.g., log streams or live scores). Bi-directional streaming RPC enables both client and server to send messages independently in a stream, fitting real-time, interactive scenarios like chat apps, live collaboration, or IoT telemetry.
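The three shapes are distinguished purely by where the `stream` keyword appears in the service definition. A hypothetical .proto sketch (service and message names here are illustrative, not taken from this repo):

```proto
syntax = "proto3";
package tutorial;

service ExampleService {
  // Unary: one request, one response.
  rpc GetUser (UserRequest) returns (UserReply);

  // Server streaming: one request, a stream of responses.
  rpc StreamScores (ScoreRequest) returns (stream ScoreUpdate);

  // Bidirectional streaming: both sides send independently.
  rpc Chat (stream ChatMessage) returns (stream ChatMessage);
}

message UserRequest  { string user_id = 1; }
message UserReply    { string name = 1; }
message ScoreRequest { string match_id = 1; }
message ScoreUpdate  { int32 home = 1; int32 away = 2; }
message ChatMessage  { string from = 1; string text = 2; }
```

Client streaming (stream on the request side only) also exists as a fourth shape, e.g. for uploading a large file in chunks.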

  2. What are the potential security considerations involved in implementing a gRPC service in Rust, particularly regarding authentication, authorization, and data encryption?
    = When implementing a gRPC service in Rust, key security considerations include using TLS to encrypt data in transit, preventing eavesdropping and tampering. Authentication ensures only verified clients can access the service—commonly handled with TLS client certificates, JWTs, or API tokens. Authorization then restricts access based on roles or scopes, ensuring clients only perform allowed actions. Additionally, input validation, rate limiting, and auditing are essential to defend against abuse, while keeping dependencies updated helps mitigate vulnerabilities in third-party crates like tonic and prost.
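For the transport-encryption part specifically, tonic (compiled with its `tls` feature) exposes a `ServerTlsConfig`. A minimal sketch, assuming PEM files at placeholder paths and a generated service stub (`PaymentServiceServer`/`MyPaymentService` stand in for your own generated code, so this fragment does not compile on its own):

```rust
use tonic::transport::{Identity, Server, ServerTlsConfig};

async fn serve_tls() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder paths: load your own PEM-encoded certificate and key.
    let cert = tokio::fs::read("tls/server.pem").await?;
    let key = tokio::fs::read("tls/server.key").await?;
    let identity = Identity::from_pem(cert, key);

    Server::builder()
        // All client connections now negotiate TLS before any RPC runs.
        .tls_config(ServerTlsConfig::new().identity(identity))?
        // `PaymentServiceServer` is assumed to come from tonic codegen.
        .add_service(PaymentServiceServer::new(MyPaymentService::default()))
        .serve("[::1]:50051".parse()?)
        .await?;
    Ok(())
}
```

Authentication checks (JWT or token validation) are typically layered on top of this via a tonic interceptor that inspects request metadata before the handler runs.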

  3. What are the potential challenges or issues that may arise when handling bidirectional streaming in Rust gRPC, especially in scenarios like chat applications?
    = When handling bidirectional streaming in Rust gRPC, especially in chat apps, one of the main challenges is dealing with concurrent message sending and receiving without blocking the main async runtime. Since both the client and server can send messages at any time, you have to carefully manage the message flow, often using channels like mpsc to avoid deadlocks or missed messages. Another tricky part is error handling—if one side disconnects or the stream encounters an error, it's easy to silently fail or panic if you don’t handle Result properly. There's also the issue of backpressure, where sending too fast without the receiver keeping up can overflow buffers. In real-time use cases like chat, you also need to consider how to keep the connection alive, deal with dropped connections, and possibly add logic for reconnecting or queuing messages, which adds more complexity.
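The bounded-channel backpressure point can be illustrated with the standard library's `sync_channel`; a tonic service would use `tokio::sync::mpsc`, whose `send().await` suspends the task instead of blocking a thread, but the throttling idea is the same:

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Bounded-channel demo: the producer blocks whenever the buffer is full,
// so a slow consumer naturally throttles a fast producer instead of
// letting messages pile up without limit.
fn produce_and_collect(capacity: usize, count: usize) -> Vec<String> {
    let (tx, rx) = sync_channel::<String>(capacity);

    let producer = thread::spawn(move || {
        for i in 0..count {
            // Blocks once `capacity` messages are queued; an Err here
            // means the receiver was dropped (e.g. a disconnect).
            tx.send(format!("msg {i}")).expect("receiver dropped");
        }
        // `tx` is dropped here, closing the channel and ending the
        // consumer's iteration below.
    });

    let received: Vec<String> = rx.into_iter().collect();
    producer.join().unwrap();
    received
}

fn main() {
    let msgs = produce_and_collect(2, 5);
    assert_eq!(msgs, vec!["msg 0", "msg 1", "msg 2", "msg 3", "msg 4"]);
    println!("{msgs:?}");
}
```

Nothing is lost despite the tiny buffer: the sender simply waits. The async equivalent gives the same guarantee without tying up an OS thread.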

  4. What are the advantages and disadvantages of using the tokio_stream::wrappers::ReceiverStream for streaming responses in Rust gRPC services?
    = Using tokio_stream::wrappers::ReceiverStream in Rust gRPC services offers several practical advantages and a few trade-offs. On the plus side, it provides an easy way to turn a tokio::sync::mpsc::Receiver into a Stream that can be returned directly from a gRPC method, making it convenient to implement server or bidirectional streaming. It's also naturally async and integrates well with the Tokio runtime, allowing background tasks to send messages into the channel while the stream yields them to the client. The main disadvantage is that it introduces an extra layer of buffering: with a bounded channel, a slow receiver makes senders await (backpressure), while an unbounded channel risks unchecked memory growth if the client can't keep up. Since the bounded channel requires an explicit capacity, choosing that size and handling send errors (e.g., when the client disconnects and the receiver is dropped) becomes necessary to avoid panics or silent failures. Overall, it's simple and effective, but needs careful handling in high-throughput or real-time scenarios.
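A minimal sketch of the pattern, assuming the tokio and tokio-stream crates; in a real tonic handler the same stream would be wrapped in `Response::new(...)` and the item type would be `Result<YourMessage, Status>`:

```rust
use tokio::sync::mpsc;
use tokio_stream::wrappers::ReceiverStream;
use tokio_stream::StreamExt;

// ReceiverStream turns the receiving half of a tokio mpsc channel into a
// Stream — the shape tonic expects as a streaming response type.
async fn demo() -> Vec<String> {
    // Bounded channel: send().await suspends when 4 items are buffered.
    let (tx, rx) = mpsc::channel(4);

    tokio::spawn(async move {
        for i in 0..3 {
            // Err from send() means the receiver (i.e. the client-facing
            // stream) was dropped, so stop instead of panicking.
            if tx.send(format!("item {i}")).await.is_err() {
                break;
            }
        }
    });

    // In a tonic method you would return
    // Response::new(ReceiverStream::new(rx)) here instead of collecting.
    ReceiverStream::new(rx).collect::<Vec<_>>().await
}

fn main() {
    let items = tokio::runtime::Runtime::new().unwrap().block_on(demo());
    println!("{items:?}");
}
```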

  5. In what ways could the Rust gRPC code be structured to facilitate code reuse and modularity, promoting maintainability and extensibility over time?
    = To improve code reuse and modularity in Rust gRPC services, the code could be structured by separating concerns into clearly defined modules. For example, each service (Payment, Transaction, Chat) should be placed in its own module (payment.rs, transaction.rs, chat.rs) containing both the implementation and any helper functions or types it needs. The main grpc_server.rs would then just import and register these services, keeping it clean and focused on server startup. Common logic like authentication, error handling, logging, or channel management could be abstracted into shared utility modules. Traits can be used to define reusable interfaces for shared behavior between services, and builder patterns or dependency injection can help with testability and configuration. This kind of modular setup not only reduces repetition but also makes it easier to add new features or swap out components without disrupting existing code.
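One possible layout following that structure (file names other than grpc_server.rs are suggestions, not taken from this repo):

```
src/
├── grpc_server.rs       // startup + service registration only
├── services/
│   ├── mod.rs
│   ├── payment.rs       // MyPaymentService + its helpers
│   ├── transaction.rs
│   └── chat.rs
└── common/
    ├── mod.rs
    ├── auth.rs          // shared auth interceptor
    └── errors.rs        // internal error -> tonic::Status mapping
```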

  6. In the MyPaymentService implementation, what additional steps might be necessary to handle more complex payment processing logic?
    = In the MyPaymentService implementation, to handle more complex payment processing, I would need to add input validation (e.g., checking for valid amount or user ID), implement authentication and authorization to verify the requester, and integrate with a real payment gateway or external API. I’d also handle database transactions to record payment status safely, ensure idempotency to avoid double charges on retries, and map internal errors to appropriate gRPC Status codes. Additionally, logging and monitoring should be added for observability, and retries or timeouts might be necessary when dealing with external services.
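A sketch of what the validation-and-error-mapping step might look like, using hypothetical request and error types (real code would operate on the prost-generated request type and convert each variant to a `tonic::Status` such as `Status::invalid_argument`):

```rust
// Hypothetical pre-processing for a payment handler: validate input and
// classify failures so each maps cleanly onto a gRPC status code.
#[derive(Debug, PartialEq)]
enum PaymentError {
    InvalidArgument(String), // -> Status::invalid_argument
    Unauthenticated,         // -> Status::unauthenticated
}

struct PaymentRequest {
    user_id: String,
    amount_cents: i64,
    auth_token: Option<String>,
}

fn validate(req: &PaymentRequest) -> Result<(), PaymentError> {
    // Authentication check first: no token, no further processing.
    if req.auth_token.is_none() {
        return Err(PaymentError::Unauthenticated);
    }
    if req.user_id.is_empty() {
        return Err(PaymentError::InvalidArgument("empty user_id".into()));
    }
    // Amounts kept in integer cents to avoid floating-point money bugs.
    if req.amount_cents <= 0 {
        return Err(PaymentError::InvalidArgument(
            "amount must be positive".into(),
        ));
    }
    Ok(())
}

fn main() {
    let bad = PaymentRequest {
        user_id: "u1".into(),
        amount_cents: -5,
        auth_token: Some("t".into()),
    };
    assert!(matches!(validate(&bad), Err(PaymentError::InvalidArgument(_))));

    let ok = PaymentRequest {
        user_id: "u1".into(),
        amount_cents: 100,
        auth_token: Some("t".into()),
    };
    assert!(validate(&ok).is_ok());
    println!("validation checks pass");
}
```

Idempotency would sit one layer deeper: a client-supplied request ID checked against stored payment records before charging, so a retried RPC can't charge twice.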

  7. What impact does the adoption of gRPC as a communication protocol have on the overall architecture and design of distributed systems, particularly in terms of interoperability with other technologies and platforms?
    = Adopting gRPC significantly influences the architecture of distributed systems by promoting strongly-typed, contract-first communication through Protocol Buffers, which improves efficiency and clarity in API design. It enables high-performance, low-latency communication via HTTP/2, making it ideal for microservices. However, its binary protocol can limit human readability and browser compatibility, requiring proxies like gRPC-Web for frontend integration. Interoperability with non-gRPC systems may demand adapters or REST gateways, adding complexity. Still, gRPC's language-agnostic tooling and wide ecosystem support help maintain cross-platform compatibility in polyglot environments.

  8. What are the advantages and disadvantages of using HTTP/2, the underlying protocol for gRPC, compared to HTTP/1.1 or HTTP/1.1 with WebSocket for REST APIs?
    = Using HTTP/2, which underlies gRPC, brings several advantages over HTTP/1.1 or HTTP/1.1 with WebSocket. HTTP/2 supports multiplexing, allowing multiple streams over one connection, which reduces latency and improves throughput—something HTTP/1.1 struggles with due to head-of-line blocking. It also compresses headers and supports bi-directional streaming natively, making it more efficient for real-time communication than HTTP/1.1, and more structured than raw WebSockets. However, its binary framing makes it harder to debug than the human-readable HTTP/1.1. Additionally, while WebSockets offer full-duplex communication, they lack built-in request-response semantics and can be trickier to secure and scale. Overall, HTTP/2 is more performant and modern, but has a steeper learning curve and narrower tool support compared to HTTP/1.1 and WebSockets.

  9. How does the request-response model of REST APIs contrast with the bidirectional streaming capabilities of gRPC in terms of real-time communication and responsiveness?
    = The request-response model of REST APIs is inherently synchronous and stateless, where each client request receives a single response, making it less suitable for real-time or continuous communication. In contrast, gRPC's bidirectional streaming allows both the client and server to send and receive messages independently over a single persistent connection, enabling real-time, low-latency interactions. This makes gRPC more responsive and efficient for use cases like chat apps, live updates, or telemetry streams, where ongoing communication is critical, whereas REST often requires workarounds like polling or long-polling to approximate similar behavior.

  10. What are the implications of the schema-based approach of gRPC, using Protocol Buffers, compared to the more flexible, schema-less nature of JSON in REST API payloads?
    = The schema-based approach of gRPC using Protocol Buffers enforces strict contracts between client and server, which enhances reliability, forward/backward compatibility, and performance due to smaller, binary-encoded messages. This allows tools to auto-generate code, catch errors at compile time, and ensure consistency across services. However, it also adds complexity to development, requiring a predefined .proto schema and code regeneration with every change. In contrast, JSON in REST APIs is more flexible and human-readable, making it easier to debug and evolve quickly, but it lacks enforced structure, can lead to inconsistencies, and incurs more overhead due to its verbose, text-based format.
