
Service mesh on Kubernetes - Istio 101

A service mesh is becoming a necessary tool in cloud-native infrastructure. The era of service meshes ushers in a new layer of intelligent network services that is changing the architecture of modern applications and the confidence with which they are delivered. Istio, one of many service meshes but one with a vast set of features and capabilities, deserves an end-to-end guide.



  1. Service Mesh on Kubernetes – Istio • Huy Vo, Engineering Manager
  2. Huy Vo • Engineering Manager, Axon • Technology interests: Distributed Computing, Deep Learning
  3. Outline • Microservices and Challenges • Service Mesh • Istio • Demo
  4. Microservices and Challenges
  5. Microservices: Benefits • Technology heterogeneity • Resilience • Scaling • Ease of deployment • Optimizing for replaceability
  6. But… the network is hard • Communication between services • Load balancing • Service discovery • Observability: distributed tracing, logs, monitoring • Fault tolerance: circuit breakers, retry mechanisms
  7. Communication between services
  8. Observability: How well do you really understand what’s going on in these environments?
  9. Fault Tolerance: With our services communicating with numerous external resources, failures can be caused by: • Networking issues • System overload • Resource starvation (e.g. out of memory) • Bad deployment/configuration
  10. Service Mesh
  11. Client Libraries: The First Service Meshes? • Restrict teams to specific language frameworks and/or the application servers needed to run them. • Library version upgrades add complexity. • Must maintain forward and backward compatibility.
  12. Service Mesh • Takes the logic governing service-to-service communication out of individual services and abstracts it into a layer of infrastructure. • Service engineers focus only on business logic. • Not restricted to any language/framework.
  13. Control Plane vs Data Plane • Data plane: • Touches every packet/request in the system. • Service discovery • Health checking • Routing • Observability • Authentication/authorization • Load balancing • Control plane: • Does not touch any packet/request in the system. • Provides policy. • Provides configuration. • Unifies telemetry collection.
  14. Istio
  15. What is Istio? • Data plane: Envoy proxy as sidecar • Control plane: Pilot, Galley, Citadel, Mixer • Functionality: • Load balancing • Fine-grained traffic control • A pluggable policy layer: rate limits, access control, quotas • Automatic metrics, logs, traces • Secure service-to-service communication
  16. Galley • Primary configuration ingestion and distribution mechanism within Istio. • Provides a robust model to validate, transform, and distribute configuration state to Istio components, insulating them from Kubernetes details.
  17. Pilot
  18. Citadel • Key management service. • Provides service-to-service encryption with built-in identity and credential management.
  19. Mixer • Policy enforcement: • Rate limits • Header routing • Denials (whitelist/blacklist) • Telemetry collection: • Logs • Metrics • Traces
  20. Sidecar proxy – Envoy • A C++ L4/L7 proxy. • All traffic in and out of a service goes through the proxy. • Features: • Dynamic service discovery • Load balancing • TLS termination • HTTP/2 and gRPC proxying • Circuit breakers • Health checks • Staged rollouts with %-based traffic splits • Fault injection • Rich metrics
  21. Traffic Management
  22. Traffic Steering
  23. Traffic Splitting
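Traffic splitting is expressed with a weighted-route VirtualService. A minimal sketch, assuming Istio's v1alpha3 networking API and a hypothetical `reviews` service whose `v1` and `v2` subsets are defined in a separate DestinationRule:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90          # 90% of requests stay on v1
    - destination:
        host: reviews
        subset: v2
      weight: 10          # 10% canary traffic to v2
```

Shifting the weights over time gives a staged rollout without redeploying either version.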
  24. Traffic Mirroring
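Mirroring sends a copy of live traffic to a second subset whose responses are discarded, so a new version can be exercised with real load at no user-facing risk. A sketch under the same assumptions (hypothetical `reviews` service, v1alpha3 API):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1        # live traffic is served by v1
    mirror:
      host: reviews
      subset: v2          # v2 receives a fire-and-forget copy
```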
  25. Resilience
  26. Load-Balancing Strategy • Client-side load balancing. • No reverse proxy needed -> removes a single point of failure.
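The load-balancing algorithm each Envoy sidecar uses is set per destination in a DestinationRule traffic policy. A sketch, again assuming a hypothetical `reviews` service:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN   # other options: ROUND_ROBIN, RANDOM, PASSTHROUGH
```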
  27. Circuit breaking
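Circuit breaking in Istio combines connection-pool limits with outlier detection, which ejects failing hosts from the load-balancing pool. A minimal sketch for the hypothetical `reviews` service (field names from the Mixer-era v1alpha3 API):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100          # cap concurrent TCP connections
      http:
        http1MaxPendingRequests: 10  # cap queued requests
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 5           # eject a host after 5 consecutive errors
      interval: 30s                  # analysis sweep interval
      baseEjectionTime: 30s          # minimum ejection duration
      maxEjectionPercent: 50
```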
  28. Retries
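Retries are declared on a VirtualService route, so no retry code is needed in the application. A sketch for the hypothetical `reviews` service:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
    retries:
      attempts: 3                    # up to 3 retries per request
      perTryTimeout: 2s              # deadline for each attempt
      retryOn: 5xx,connect-failure   # which failures trigger a retry
```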
  29. Timeouts
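Per-route request timeouts are likewise a one-line VirtualService setting; Envoy returns an error to the caller if the upstream does not answer in time. A sketch under the same assumptions:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
    timeout: 5s   # fail the request if no response within 5 seconds
```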
  30. Fault Injection
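Fault injection lets you test resilience by having Envoy delay or abort a percentage of requests, with no change to the services themselves. A sketch for the hypothetical `reviews` service:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - fault:
      delay:
        percentage:
          value: 10        # delay 10% of requests
        fixedDelay: 5s
      abort:
        percentage:
          value: 5         # abort 5% of requests
        httpStatus: 500    # ...with an HTTP 500
    route:
    - destination:
        host: reviews
```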
  31. Telemetry
  32. How it works • Mixer collects metrics emitted by the Envoy proxies. • Adapters in Mixer normalize them and forward them to a monitoring backend. • The metrics backend can be swapped at runtime.
  33. Metrics
  34. Logs
  35. Trace • The Envoy proxy is responsible for generating the initial trace headers, and does so in an OpenTracing-compatible way. • Your application only needs to collect and propagate a small set of HTTP headers: • x-request-id • x-b3-traceid • x-b3-spanid • x-b3-parentspanid • x-b3-sampled • x-b3-flags • x-ot-span-context
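The propagation step above is the one thing the mesh cannot do for you: each service must copy the trace headers from its incoming request onto every outbound call so Envoy can stitch the per-hop spans into one distributed trace. A minimal sketch in Python (`extract_trace_headers` is a hypothetical helper, not part of any Istio SDK):

```python
# Trace headers Envoy generates; a service must forward these unchanged
# on any outbound request it makes while handling an incoming request.
TRACE_HEADERS = {
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
    "x-ot-span-context",
}

def extract_trace_headers(incoming_headers):
    """Return only the trace-related headers, to attach to outbound calls."""
    return {
        name: value
        for name, value in incoming_headers.items()
        if name.lower() in TRACE_HEADERS
    }
```

In practice these headers would be attached to every downstream HTTP call, e.g. passed as the `headers=` argument of an outbound request.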
  36. Visualization
  37. How HPA (Horizontal Pod Autoscaler) + CA (Cluster Autoscaler) + Istio work together
  38. Demo
  39. Questions?
