By Nick Heudecker, Senior Director, Market Strategy & Intelligence for Cribl
Modern applications struggle under the weight of their own complexity. Microservice architectures create hundreds, or even thousands, of services built from dozens of different technologies. By design, the teams building these services rarely communicate with one another. In companies with automated deployments, new software may be deployed into production dozens of times a day.
Adding to the complexity, each service may also have its own database and data model, each independently managed. Add in short-lived containers and dynamic scaling, and it’s easy to understand why the only time companies can test their applications is the moment they’re deployed in front of customers.
In short, we’ve made customers unwitting acceptance testers. It’s chaotic, and it damages not only customer satisfaction but also your brand.
Monitoring offers one way to gain visibility into these environments and mitigate errors before they’re exposed to the customer. However, its effectiveness only goes so far: monitoring solutions simply haven’t kept pace with the realities of today’s application environments.
Instead, DevOps and ITOps teams need to evolve past monitoring into observability.
The case for an observability pipeline
Observability is the characteristic of software and systems that allows them to be “seen” and to answer questions about their behaviour. Unlike monitoring, observable systems invite investigation by unlocking data from siloed log analytics applications.
The potential benefits are huge. An end-to-end observability pipeline can, in most instances, lower infrastructure costs by 30% and help teams resolve issues four times faster, improving customer satisfaction and increasing customer spend by 15%.
When comparing observability with monitoring, it’s clear that monitoring hasn’t kept pace with modern complexity for three reasons:
- Exorbitant costs – the high price tag of monitoring forces teams to compromise on what exactly they’re monitoring. Faced with tough decisions about which logs, metrics, and traces to keep whilst staying within budget, many teams, even at the largest enterprises, simply can’t store everything they need to observe their environment.
- Static views – traditional monitoring systems don’t reflect our modern reality. Systems scale dynamically, and DevOps teams may deploy code across thousands of containers dozens of times each day. Pre-built dashboards and alerts don’t accurately portray today’s infrastructure reality.
- Narrow focus – monitoring is a point solution, targeting a single application or service. A failure in one service cascades to others, and unravelling those errors is well beyond the scope of monitoring applications.
However, implementing observability requires a way to collect and integrate data from complex systems, which is where the observability pipeline comes in. An observability pipeline decouples the sources of data from their destinations. This decoupling allows teams to enrich, redact, reduce and route data to the right place for the right audience. The observability pipeline gets you past deciding what data to send and lets you focus on what you want to do with it.
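To make the enrich/redact/reduce/route idea concrete, here is a minimal sketch of such a pipeline in Python. Everything in it (the event fields, the lookup table, the destination names) is a hypothetical illustration, not the API of any particular product:

```python
import re

def enrich(event, owners):
    """Attach context, e.g. the owning team, from a lookup table."""
    enriched = dict(event)
    enriched["owner"] = owners.get(event.get("service"), "unknown")
    return enriched

def redact(event):
    """Mask runs of 13-16 digits that look like card numbers."""
    redacted = dict(event)
    redacted["message"] = re.sub(r"\b\d{13,16}\b", "****",
                                 event.get("message", ""))
    return redacted

def reduce_fields(event, keep):
    """Drop fields no destination needs, cutting storage volume."""
    return {k: v for k, v in event.items() if k in keep}

def route(event, destinations):
    """Archive everything cheaply; send only errors to the
    (expensive) log analytics destination."""
    destinations["archive"].append(event)
    if event.get("level") == "error":
        destinations["analytics"].append(event)

def pipeline(events, owners, keep, destinations):
    for event in events:
        event = reduce_fields(redact(enrich(event, owners)), keep)
        route(event, destinations)

# Hypothetical usage:
destinations = {"archive": [], "analytics": []}
events = [
    {"service": "checkout", "level": "error",
     "message": "card 4111111111111111 declined", "debug_blob": "..."},
    {"service": "search", "level": "info",
     "message": "query ok", "debug_blob": "..."},
]
pipeline(events, {"checkout": "payments-team"},
         {"service", "level", "message", "owner"}, destinations)
```

The key design point is the decoupling: sources never know about destinations, so a new destination (or a new redaction rule) is a pipeline change, not a change to every service emitting data.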
By providing context around logs and metrics, an observability pipeline makes debugging faster, allowing you to ask “what if” questions of the environment instead of relying on the pre-calculated views prevalent in monitoring solutions. Faster debugging and root cause analysis mean fewer customers experiencing errors in production, which drives up sales.
Another benefit of an observability pipeline is rationalising infrastructure costs. Often, the team deploying infrastructure isn’t the team paying for it, resulting in over-provisioned infrastructure. Collecting performance data, even for transient infrastructure like containers, gives ITOps and DevOps teams visibility into how many resources are being consumed and where optimisations are possible.
Time to make a change
In my opinion, the case is clear. We are in the middle of a much-needed shift.
Traditional monitoring can’t keep up with the pace or needs of modern organisations, leaving both ITOps and DevOps in the dark. The early trailblazers implementing observability pipelines today will be those that pull ahead, not only recouping previously lost spend but also ensuring customers experience quality products that meet expectations, rather than being left scratching their heads in frustration when applications fail to live up to their promises.