
Enhanced Security and Visibility for Microservices Applications

A unified solution for application load balancing, security, and traffic analytics in Kubernetes can significantly improve the user experience.



  1. Reliable Security Always™: Enhanced Security & Visibility for Microservice Applications (Akshay Mathur, @akshaymathu)
  2. New de-facto standards, a growing industry trend: containers and Kubernetes.
     • Applications: moving from monoliths to microservices
     • Application deployments: moving from hardware servers or virtual machines to containers
     • Growing Kubernetes adoption:
     ◦ Adopted by all major industry players: AWS, Azure, Google, VMware, Red Hat
     ◦ 10x increase in usage on Azure and GCP in the last year
     ◦ 10x increase in deployments over the last 3 years
     ◦ Deployment size increased 75% in a year
  3. Key requirements of modern teams: efficient operations, visibility & control.
     • Application security: SSL encryption, access control, attack protection and mitigation
     • Analytics: faster troubleshooting, operational intelligence
     • Central management: multi-service, multi-cloud
  4. Real-World Challenges
  5. An e-commerce company: access control between microservices.
     • Security and compliance require monitoring traffic between microservices.
     • In the absence of policy enforcement, this company resorted to isolating clusters.
  6. A FinTech company: access control and traffic-flow visibility.
     • Separated microservices via namespaces
     • Controlled traffic flow via an application gateway
  7. All companies: need to keep latency to a minimum.
     • Multiple traffic-handling layers each add their own latency:
     ◦ IPS/IDS
     ◦ L7 load balancer
     ◦ kube-proxy
  8. A media service company: security increased the cost of operations.
     • The Istio sidecar model was tried for the security implementation.
     • The sidecar model increased resource requirements, leading to increased cost.
  9. All companies: need to manage security across environments (public cloud, private cloud, data center).
     • Not all workloads are in Kubernetes.
     • Managing security separately for each environment was challenging.
  10. Challenges in the Kubernetes environment:

     | Characteristic of the K8s environment | Impact |
     | --- | --- |
     | Only L3 policy support | L7 security rules can't be created |
     | Multiple layers in the traffic flow | Increased latency |
     | IP addresses of pods keep changing | IP-based security policies become obsolete |
     | No access control between microservices | Complicated deployment architecture |
     | No application traffic visibility | Difficult to fine-tune security policies |
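The first row of the table is the crux. A native Kubernetes NetworkPolicy (a minimal sketch below; the names and namespace are illustrative) can only match pods, namespaces, IP blocks, and ports, so there is no way to express an HTTP-level rule such as "allow GET on /api but deny POST on /admin":

```yaml
# Native Kubernetes NetworkPolicy: L3/L4 only.
# Pod/namespace selectors and ports can be matched,
# but nothing about the HTTP request itself.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-orders   # illustrative name
  namespace: shop                  # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: orders
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080               # L4 is as deep as this policy can go
```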
  11. Shared security responsibility model. Source: https://cloud.google.com/blog/products/containers-kubernetes/exploring-container-security-the-shared-responsibility-model-in-gke
  12. Security & Policy Enforcement
  13. Minimizing the cost of operations: sidecar proxy deployment vs. hub-and-spoke proxy deployment.
     • Sidecar proxy deployment: resource-intensive, expensive TCO
     • Hub-and-spoke proxy deployment: low overhead, lower TCO
  14. Traffic handling and security are related.
     • Fact: incoming traffic must be decrypted and evaluated. When the traffic is denied, that is security; when the traffic is sent to the right application server, that is load balancing.
     • Traditional approach: deploy load-balancing and security solutions separately. Cons: operational complexity, increased latency.
     • Modern approach: a unified solution providing load balancing as well as security. Pros: operational simplicity, better application performance.
  15. For east-west traffic:
     • Access control between microservices
     • Transparent encryption for traffic between nodes
     • Lower resource requirement compared to the sidecar service-mesh model
     • Application-layer traffic visibility and analytics
  16. For north-south traffic:
     • Container-native load balancer for L7 traffic routing (able to route traffic based on any information in the HTTP header)
     • SSL offload
     • Reduced application response time
     • Web Application Firewall
     • L7 DDoS protection
     • Central management for the load balancer
     • Application-layer traffic visibility and analytics
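L7 routing of the kind listed above is typically expressed through a standard Kubernetes Ingress resource. A minimal sketch (the hostname, paths, and service names are illustrative, not from the deck):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-ingress               # illustrative name
spec:
  rules:
    - host: shop.example.com       # route by Host header
      http:
        paths:
          - path: /orders          # route by URL path
            pathType: Prefix
            backend:
              service:
                name: orders
                port:
                  number: 8080
          - path: /                # everything else goes to the frontend
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```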
  17. More about the LB:
     • Deployed as a DaemonSet
     ◦ Image on Docker Hub
     ◦ Uses host networking
     • Based on the NGINX core
     ◦ Third-party modules: ModSecurity, LuaJIT, etc.
     ◦ Custom modules
     • Connection pooling
     • Distributed limit enforcement
     • Dynamic upstreams
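The DaemonSet-plus-host-networking pattern described above can be sketched as follows; note that the image name and labels are assumptions for illustration, not the actual A10 artifact:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: lightning-adc              # hypothetical name
spec:
  selector:
    matchLabels:
      app: lightning-adc
  template:
    metadata:
      labels:
        app: lightning-adc
    spec:
      hostNetwork: true            # one LB instance per node, on the node's own network
      containers:
        - name: lb
          image: example/lightning-adc:latest   # placeholder; the real image is on Docker Hub
          ports:
            - containerPort: 80
```

With `hostNetwork: true`, each node's LB listens directly on the node's IP, which removes a NAT hop from the traffic path.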
  18. More about the Kubernetes connector:
     • Deployed as a K8s Deployment
     ◦ Image on Docker Hub
     ◦ One instance per cluster
     • Monitors the lifecycle of containers and Ingress resources
     • Calls APIs to update the LB
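A single-instance connector maps naturally onto a Deployment with one replica. A sketch with assumed names (the real image and labels will differ):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-connector              # hypothetical name
spec:
  replicas: 1                      # one instance per cluster
  selector:
    matchLabels:
      app: k8s-connector
  template:
    metadata:
      labels:
        app: k8s-connector
    spec:
      containers:
        - name: connector
          image: example/k8s-connector:latest   # placeholder; the real image is on Docker Hub
```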
  19. Policy configuration:
     • Infrastructure as code
     • Kubernetes Service and Ingress definitions are extended via annotations
     • Simple annotations to configure policies
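To illustrate the annotation approach, here is a sketch of a Service extended with policy annotations; the annotation keys below are invented for the example (the real keys are documented in the ingress-controller docs linked at the end of the deck):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders
  annotations:
    # Hypothetical policy annotations; actual keys vary per controller.
    example.com/waf: "enabled"       # turn on the Web Application Firewall
    example.com/rate-limit: "100rps" # enforce a request-rate limit
spec:
  selector:
    app: orders
  ports:
    - port: 8080
```

Because policy lives in the same YAML as the workload definition, it is versioned, reviewed, and applied through the same pipeline ("config as code").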
  20. Application-Layer Visibility
  21. Visibility, analytics & insights:
     • Descriptive analytics (performance monitoring): health status, logs & events
     • Diagnostic analytics (faster troubleshooting): per-app metrics, trend analysis
     • Predictive analytics (insights): anomalies/threats, correlation
     • Prescriptive analytics (adaptive controls): policy updates, behavior analysis
  22. Per-service visibility, analytics & reporting:
     • Comprehensive metrics & logs
     • View, monitor and analyze
     • Efficient troubleshooting
     • Generate custom reports
  23. Use case: troubleshooting high response time. Customers are complaining that the application is slow.
     • LB only (confidence: low; roughly 2 days):
     ◦ No direct way to debug
     ◦ The alternative is to collect access logs from all application servers, merge them, and move them to a log analyzer
     ◦ This yields the request processing time per server, but the time spent in the network remains unknown, as does the geographic distribution
     ◦ Manually correlate and analyze
     • LB + application analytics (confidence: high; about 5 minutes):
     ◦ The Harmony portal displays end-to-end response time for the application
     ◦ Drill-down charts are available for historical analysis with enriched data
     ◦ A breakdown of the time taken in different portions of the request-response cycle is available
     ◦ Data segmented by various aspects is available
     ◦ Access logs of individual transactions may be used for further isolation
  25. A10 ADC: per-app visibility, end-to-end latency.
     • Distinguish between application, client, and infrastructure issues
     • Quickly identify a consistent or one-off glitch
     • Pinpoint concerns and take corrective action
  26. Takeaways: simplified and improved security & analytics.
     • Simple architecture
     • Clear 'Dev' and 'Ops' separation
     • 'Config as code' for automation
     • Application traffic analytics for efficiency
  27. Thank You. Akshay Mathur (@akshaymathu), amathur@a10networks.com, Skype: mathurakshay.
     • Sample config files: https://gist.github.com/c-success
     • Steps to try: http://docs.hc.a10networks.com/IngressController/2.0/a10-ladc-ingress-controller.html
  28. Thank You. Reliable Security Always™