Observability pipelines and AIOps – using data in the right way to ensure the UK’s National AI Strategy is a success

By Nick Heudecker, Senior Director, Market Strategy, Cribl

The National AI Strategy is an ambitious project. Building upon the UK’s current strengths in the sector, within ten years the Government wants the impact and benefits of artificial intelligence to be felt across the country. The strategy states that “the UK government sees being competitive in AI as vital to our national ambitions on regional prosperity and for shared global challenges such as net zero emissions, health resilience and environmental sustainability”.

With this aim, the National AI Strategy identifies several areas of key focus. These include attracting the right talent and developing AI skills, participating in international research initiatives, specialised finance and VC programmes, as well as access to data and data policy frameworks.

Importantly, both the private and public sectors are being encouraged to make the most of new tools to help turn Britain into an AI superpower and gain an edge on the world stage.

Where should businesses start?

AI-augmented design, physics-informed AI and AI-driven innovation, amongst others, all feature on Gartner’s 2021 Hype Cycle for Emerging Technologies. While certainly exciting and promising, the practical implementation of these technologies will likely seem somewhat distant for most organisations.

The best place to start with implementing AI into an organisation is by identifying where it can have the biggest impact on empowering the workforce whilst also fitting seamlessly into current processes and enhancing operations.

With the encouragement of the National AI Strategy, many enterprises will likely start their AI journey by exploring the use of AI for IT Operations (AIOps) to improve processes by reducing alert fatigue, proactively detecting performance problems and avoiding outages. This is one area that will likely receive significant attention throughout the National AI Strategy’s ten-year timeline and is where businesses can make a lot of early gains.

However, examples of successful AIOps are hard to come by, and the discipline could be in danger of becoming an unfortunate joke. The truth is that deploying AIOps requires access to huge amounts of operational data. Making sense of such datasets isn’t simple, and a one-size-fits-all approach to automating processes, detecting data anomalies, and determining causality simply isn’t practical. In fact, a one-size-fits-all approach is doomed to fail.

The power of observability in making AIOps a success

To be successful, AIOps tools need the flexibility to ingest and index data from many sources. These include infrastructure, networks, applications, a range of monitoring tools, and deployed software agents. All data from these diverse sources must be normalised before it can be used for either real-time analytics over data in flight or for historical analysis over larger datasets at rest.
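That normalisation step can be sketched in a few lines. This is a hypothetical illustration only: the field names, sources, and schema below are invented for the example, not part of any particular AIOps product.

```python
# Hypothetical sketch: normalise events from two different sources
# (a syslog-style line and a JSON-style monitoring event) into one
# common schema before analysis. All field names are illustrative.

def normalise_syslog(line: str) -> dict:
    """Parse a 'timestamp host message' line into the common schema."""
    ts, host, message = line.split(" ", 2)
    return {"timestamp": ts, "source": host,
            "message": message, "origin": "syslog"}

def normalise_metric(event: dict) -> dict:
    """Map a monitoring-tool event onto the same schema."""
    return {"timestamp": event["time"], "source": event["hostname"],
            "message": f"{event['metric']}={event['value']}",
            "origin": "metrics"}

events = [
    normalise_syslog("2024-05-01T12:00:00Z web01 disk usage at 91%"),
    normalise_metric({"time": "2024-05-01T12:00:05Z",
                      "hostname": "web01", "metric": "cpu", "value": 0.93}),
]

# Both records now share the same keys regardless of origin, so a
# downstream model can treat them uniformly.
print(sorted(events[0]) == sorted(events[1]))  # True
```

Whatever the real source formats are, the principle is the same: every event is mapped onto one schema before it reaches analytics, whether in flight or at rest.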

Successfully deploying AIOps into the enterprise means managing three core constraints: volume, accuracy, and precision:

  1. Volume – vast amounts of operational data flow out of systems in hundreds of different formats over dozens of protocols. The problem is that this data isn’t effective for AIOps. Even when terabytes or petabytes of telemetry data are collected, there’s a shortage of high-quality, representative data for AIOps.
  2. Accuracy – with the variety of data sources consumed, ensuring consistent data quality and integrity is essential to model performance. Data quality impacts all types of artificial intelligence projects, not just AIOps. According to a recent survey, 87% of data professionals are concerned about data quality impacting their AI implementations.
  3. Precision – model iteration is an important part of AI implementation. Successful iteration not only requires running multiple tests with the same parameters and data sets, but it also involves evaluating the variability between tests to ensure ongoing precision of the tools you’re putting in place. Without effectively managing the volumes of data and ensuring their accuracy, your AIOps tools can’t achieve reliable levels of precision.
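The precision check described above can be made concrete: run the same evaluation repeatedly with fixed parameters and data, then inspect the spread of the scores. This is a minimal sketch; `run_evaluation` is a stand-in for a real model test harness, and the scores are invented for the example.

```python
import statistics

def run_evaluation(run: int) -> float:
    """Stand-in for a real evaluation; returns canned scores so the
    example is deterministic. A real harness would retrain/re-test."""
    scores = {0: 0.912, 1: 0.915, 2: 0.910, 3: 0.914}
    return scores[run % 4]

# Repeat the test with identical parameters and data set.
results = [run_evaluation(run) for run in range(4)]
mean = statistics.mean(results)
spread = statistics.stdev(results)

# A large spread relative to the mean signals unstable, imprecise
# tooling; a small one suggests the results can be trusted.
print(f"mean={mean:.3f} stdev={spread:.4f}")
```

If the standard deviation grows between iterations while inputs stay fixed, the problem usually lies upstream, in the volume and accuracy of the data being fed in.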

If you want value from AIOps, you need to understand the data. As is evident, the barrier to realising successful AIOps is often the very data it relies on.

One way around this problem is the use of unified observability pipelines. These pipelines drive operational efficiencies by getting the right data, to the right destinations – in the right formats – at the right time. As a result, businesses can more effectively realise digital transformation initiatives, slash costs and improve performance.

Observability pipelines provide the much-needed control enterprises must have over their data to make AIOps a success. Deploying effective AIOps requires accurate data from across your monitoring infrastructure formatted for your AIOps platforms. An observability pipeline unlocks data from silos and provides a single point for data enrichment, filtering, refinement, and routing to any AIOps platform a team uses.
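The enrichment, filtering, and routing stages described above can be sketched as plain functions composed over a stream of events. This is an illustrative toy, not any vendor’s API; the destination names, event fields, and rules are all assumptions made for the example.

```python
# Hypothetical pipeline stages: enrich each event with context,
# filter out low-value noise, and route what remains to the
# destination an AIOps platform expects. All names are illustrative.

def enrich(event: dict) -> dict:
    event = dict(event)
    event["environment"] = "production"  # context models often need
    return event

def keep(event: dict) -> bool:
    # Drop debug chatter so only meaningful signals flow downstream.
    return event.get("level") != "debug"

def route(event: dict) -> str:
    # Send errors to the alerting destination, the rest to storage.
    return "aiops-alerts" if event.get("level") == "error" else "archive"

events = [
    {"level": "error", "message": "timeout on checkout service"},
    {"level": "debug", "message": "cache miss"},
    {"level": "info", "message": "deploy finished"},
]

routed: dict[str, list] = {}
for event in filter(keep, map(enrich, events)):
    routed.setdefault(route(event), []).append(event)

print({dest: len(evts) for dest, evts in routed.items()})
# {'aiops-alerts': 1, 'archive': 1}
```

The value of the single control point is that these rules live in one place: change the filter or add a destination once, and every downstream AIOps tool benefits.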

By giving organisations this control over their operational data, observability pipelines will help put many UK businesses on the front foot when it comes to realising the full benefits of the transition to a leading AI-enabled economy.
