This document discusses distributed tracing and how it can help solve problems caused by microservices. It covers what distributed tracing is, how it works, popular implementations like OpenTracing and Zipkin, and best practices for using distributed tracing. OpenTracing is introduced as a vendor-neutral standard that helps library developers implement tracing and defines common formats for propagating traces between services. Code examples are provided for collecting trace data using OpenTracing and Zipkin.
Introduction to Distributed Tracing
1. Distributed Tracing: How the Pros Debug Concurrent and Distributed Systems
By Aaron Stannard, CEO, Petabridge
2. What We’re Going to Cover
• Microservices and common “people” problems they cause
• How distributed tracing solves some of these problems
• What distributed tracing is and how it works
• The OpenTracing standard and its implementations
• Distributed tracing best practices
11. OpenTracing
• Vendor-neutral standard for facilitating distributed tracing
• Enforces a common lexicon across all tracing products
• Helps library and framework maintainers implement tracing
• Defines common carrier formats for propagating traces between services
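To make the “carrier format” idea concrete, here is a minimal sketch in Python of injecting a span context into B3-style HTTP headers on the caller side and extracting it on the callee side. The `SpanContext`, `inject_b3`, and `extract_b3` names are illustrative only, not the official OpenTracing API; real tracers supply these operations for you.

```python
import uuid

class SpanContext:
    """Identifies one span within one distributed trace (illustrative)."""
    def __init__(self, trace_id=None, span_id=None):
        self.trace_id = trace_id or uuid.uuid4().hex
        self.span_id = span_id or uuid.uuid4().hex[:16]

def inject_b3(ctx, carrier):
    """Write the span context into an HTTP-header-style dict carrier."""
    carrier["X-B3-TraceId"] = ctx.trace_id
    carrier["X-B3-SpanId"] = ctx.span_id

def extract_b3(carrier):
    """Rebuild a span context from incoming request headers."""
    return SpanContext(carrier["X-B3-TraceId"], carrier["X-B3-SpanId"])

# Service 1 injects its context before calling Service 2...
headers = {}
parent = SpanContext()
inject_b3(parent, headers)

# ...and Service 2 extracts it, joining the same trace.
remote_ctx = extract_b3(headers)
assert remote_ctx.trace_id == parent.trace_id
```

Because both sides agree on the carrier format, the trace id survives the process boundary, which is exactly what lets a tracing backend stitch the two services’ spans into one trace.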
16. Best Practices
• Standardize on carrier formats inside your services
– e.g. B3 HTTP headers, dictionary formats
• Introduce tracing at the infrastructure level, if you can
– e.g. inject into the HTTP request processing pipeline
• Use OpenTracing IScopeManager
– Automatically resolves the current active Span
• Have your logging infrastructure inject events into the active Span
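The last two bullets work together: a scope manager (OpenTracing’s `IScopeManager` in the .NET API) tracks the currently active span so logging code can attach events without the span being passed around explicitly. The toy Python version below uses a plain stack to show the idea; real implementations use thread-local or async-local storage, and the class names here are illustrative.

```python
class Span:
    """A minimal stand-in for a tracing span (illustrative)."""
    def __init__(self, name):
        self.name = name
        self.logs = []

class ScopeManager:
    """Tracks which span is currently active, via a simple stack."""
    def __init__(self):
        self._stack = []
    def activate(self, span):
        self._stack.append(span)
    def deactivate(self):
        self._stack.pop()
    @property
    def active(self):
        return self._stack[-1] if self._stack else None

scopes = ScopeManager()
scopes.activate(Span("handle-request"))
# Logging infrastructure resolves the active span automatically,
# so log events land on the right span without explicit plumbing:
scopes.active.logs.append("cache miss, querying database")
scopes.deactivate()
```

This is why injecting tracing at the infrastructure level pays off: if the HTTP pipeline activates a span per request, every log statement inside that request can find its span through the scope manager.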
Petabridge is in the process of releasing its first distributed tracing product.
So I wanted to share some insights with you about this technology and how it’s changing the way we debug and troubleshoot software in production.
Monoliths have long been the default way of building systems, but increasingly people are moving more and more towards distributed and microservice architectures.
This trend probably isn’t going to reverse itself any time soon, fundamentally because microservices offer us greater scalability at the team level.
Specifically, microservices are for scaling teams, not products.
We can take our microservice architecture and partition our organization responsible for producing the complete software product along the lines of each independent service.
This gives our companies and teams the benefits of encapsulation and implementation transparency. If I’m on the Service 1 team, I don’t need to know how the Service 2 team is implemented. I only need to be aware of the messaging contracts that the public endpoints of Service 2 accept.
As long as those contracts are honored or are changed in a version-aware fashion, microservices allow all of these different teams to observe their own independent release cycles and, in theory, their own preferred technology stacks.
The benefits of microservices aren’t without costs, however. While microservices promise greater flexibility and independence for each participating development team, one of the immediate costs is that most traditional monitoring and tracing infrastructure breaks as soon as you move from a monolithic design to a microservices design.
In a monolith, your monitoring infrastructure only needs to tell a single story from the perspective of one process – all of the data on why a request succeeded or failed is right there inside the same process.
In microservices this is no longer true – we lose “coherence” around the flow of information across process boundaries. If Service 1 receives a request and, in turn, has to issue a request to Service 2 but never receives a response back, it won’t be obvious why, because those implementation details are privately encapsulated inside the remote microservice instance.
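A toy illustration of this coherence problem: if both services tag their log events with a shared trace id, grouping the logs per trace shows exactly where a request stalled, even though the hang happens inside another process. The service functions and event names below are hypothetical.

```python
from collections import defaultdict

log = []  # (trace_id, service, event) tuples from both services

def service2_handle(trace_id):
    log.append((trace_id, "service2", "request received"))
    # Service 2 hangs here and never responds -- no "response sent" event.

def service1_handle(trace_id):
    log.append((trace_id, "service1", "request received"))
    log.append((trace_id, "service1", "calling service2"))
    service2_handle(trace_id)

service1_handle("trace-42")

# Grouping by trace id reconstructs the request's path across processes;
# the last recorded hop pinpoints where it stalled.
by_trace = defaultdict(list)
for tid, svc, event in log:
    by_trace[tid].append((svc, event))
print(by_trace["trace-42"][-1])  # -> ('service2', 'request received')
```

Without the shared trace id, Service 1 only knows its outbound call timed out; with it, the stalled hop inside Service 2 is visible from the correlated logs. This is the core mechanic distributed tracing automates.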