
DCSF 19 Microservices API: Routing Across Any Infrastructure


Alex Hokanson + Brett Inman, Docker

Microservice architectures can be difficult to implement, particularly when it comes to routing to a service correctly and ensuring that traffic is spread across all instances of that service. What happens in a cloud environment where losing and gaining service instances is a normal part of daily operations? How do you configure routing that consistently reaches your service when you don’t even know where it is running? At Docker, we developed our own highly available and automated API server on top of HAProxy, with deep integration with Consul. Our API server acts as a service discovery and load balancing layer to ensure availability in a highly dynamic environment. On top of running such a complex application, we need to support thousands of requests per second while monitoring every request that comes through, which is no small feat!
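To illustrate the HAProxy-plus-Consul pattern described above, a consul-template template along these lines can render an HAProxy backend from whatever healthy instances Consul currently knows about. This is a minimal sketch; the service name `api`, the backend name, and the file names are hypothetical, not taken from the talk:

```
# haproxy.ctmpl - hypothetical consul-template template.
# Emits one "server" line per healthy instance of the "api" service
# registered in Consul; re-rendered whenever membership changes.
backend api_backend
    balance roundrobin{{ range service "api" }}
    server {{ .Node }}_{{ .Port }} {{ .Address }}:{{ .Port }} check{{ end }}
```

consul-template watches Consul and re-renders the file on any change, optionally reloading HAProxy via its command argument (e.g. `consul-template -template "haproxy.ctmpl:haproxy.cfg:systemctl reload haproxy"`), which is what keeps routing correct as instances come and go.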

In addition to running a highly available API server, we also recently migrated it from running natively on Ubuntu 14.04 to running every component inside containers using Kubernetes with Docker Enterprise. The containerization journey brought real benefits, along with new challenges that we had not foreseen.



  1. Microservices Enabled API Server - Routing Across Any Infrastructure
  2. Brett Inman, Manager, Infrastructure Engineering, Docker; Alex Hokanson, Infrastructure Engineer, Docker
  3. Docker Infrastructure
  4. Why?
  5. Traditional Infrastructure - Scooting along...
  6. Traditional Infrastructure - Developer, Ops Team, Traditional Deployment Methods, Edge Load Balancer
  7. Challenge - Scaling - Developer, Traditional Deployment Methods
  8. Challenge - Failure - Traditional Deployment Methods, Edge Load Balancer, Developer
  9. How do you pronounce k3s? Orchestration
  10. Orchestration - Service Level Entities, Binpacking, High Availability
  11. Orchestration - Challenges
  12. Solutions - Interlock / Ingress: adds Layer 7, host/path context, limited configuration. Routing Mesh / Services: Layer 4, specific port on each node, routed to containers, usually an extra hop, hard to inspect.
  13. Roll Your Own... Physical / Virtualization / Public Cloud, Container Engine, HAProxy, Unorchestrated Containers, Swarm, Kubernetes, Native Applications, HAProxy Exporter / Mtail / Logstash
  14. Web Request - HAProxy, registrator, consul-template, consul, nodeport
  15. Demo Time - Check out this terminal window!
  16. Production - Welcome to the real world
  17. Self Service - Redundancy, Scaling, and Availability - Lists and Parameters - Rate Limiting - Introspection and Metrics
  18. Rate Limiting
  19. Rate Limiting - Peering (1000 req/s, 3000 req/s, 1000 req/s)
  20. Rate Limiting - Tarpitting
  21. Introspection
  22. HAProxy, registrator, consul-template, consul
  23. Recap
      ● Orchestration - the solution and the problem
      ● Out of the Box Solutions
        ○ Routing Mesh / Services
        ○ Interlock / Ingress
      ● Our Solution
        ○ k8s, swarm, containers, whatever
      ● The Real World
        ○ Self Service, Redundancy/Scaling, Parameterization
        ○ Introspection, Logs, Metrics
        ○ Rate Limiting
  24. Hallway Track - Thank you! hallwaytrack.dockercon.com/hallway-tracks/41333/ - Thursday May 2nd at 10:00 am
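The rate limiting, peering, and tarpitting topics in the slides above map onto HAProxy stick tables. The following is a minimal sketch under assumed names, addresses, and limits; nothing here is the actual configuration from the talk:

```
# Share rate counters between load balancer instances (peering),
# so limits are enforced across the fleet rather than per node.
peers lb_peers
    peer lb1 10.0.0.1:1024
    peer lb2 10.0.0.2:1024

frontend api_front
    bind *:80
    timeout tarpit 5s
    # Track per-client-IP HTTP request rate over a 10-second window,
    # with the table synchronized across peers.
    stick-table type ip size 100k expire 30s store http_req_rate(10s) peers lb_peers
    http-request track-sc0 src
    # Tarpit the worst offenders: hold the connection open for
    # "timeout tarpit", then return an error.
    http-request tarpit if { sc_http_req_rate(0) gt 20000 }
    # Deny clients above roughly 1000 req/s (10000 requests per 10s window).
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 10000 }
    default_backend api_backend
```

Tarpitting, as opposed to a plain deny, slows abusive clients down by tying up their connection before answering, which blunts naive retry loops.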
