By Dan Sommer, Qlik Senior Director, Global Market Intelligence Lead
At the start of this year, we all breathed a deep sigh of relief, thinking that unprecedented events were behind us. But as the year continued, it became clear that change on a macro scale is here to stay, and we now find ourselves caught in a perfect storm.
An economic recession is on the horizon, conflicts continue to impact global markets, and organizations all over the world are scrutinizing their bottom lines, working out what the smart investments are and why. We’re already seeing the effects on the tech landscape: VC funding is declining, tech is de-coupling, access to data skills remains scarce, and more complex regulations are coming into force.
With so much pressure to innovate, it can be hard to know what to focus on. But what’s clear is that achieving decision accuracy, and integrating siloed and distributed data sets to accurately see the big picture in real time, will be vital to survival and future success. That’s why we’ve outlined five key trends that every data-driven business should act on in 2023.
- AI moves deeper into the data pipeline: As economic uncertainty continues, many organizations will pull back on investment and hiring. With the global skills shortage continuing to affect companies of all sizes, however, ensuring that technologies such as Artificial Intelligence (AI) and Machine Learning (ML) can automate some of the more menial data preparation tasks will be crucial. By moving AI deeper into the data pipeline, before an application or dashboard has even been built, we can finally start to shift the balance of time spent on data preparation versus data analysis. Right now, less than 20% of time is spent analyzing data, while just over 80% is spent collectively on searching for, preparing, and governing the appropriate data. Automating that work would free hard-to-come-by data talent to focus on the value-add: cross-pollinating and generating new insights that weren’t possible before. A far more productive use of their time.
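To make the idea concrete, here is a minimal, purely illustrative sketch of one such menial preparation task, automatically imputing missing values, written in plain Python. The article names no specific tool, so the function below is a hypothetical stand-in for the far richer automation real pipelines apply:

```python
from statistics import mean

def auto_impute(rows):
    """Fill missing numeric cells (None) with that column's mean.

    A toy stand-in for the automated data-preparation work described
    above; real AI-assisted pipelines use far richer heuristics.
    """
    cols = list(zip(*rows))          # transpose rows -> columns
    filled_cols = []
    for col in cols:
        present = [v for v in col if v is not None]
        fill = mean(present) if present else 0.0
        filled_cols.append([v if v is not None else fill for v in col])
    return [list(r) for r in zip(*filled_cols)]  # back to rows

rows = [[1.0, 10.0], [None, 12.0], [3.0, None]]
cleaned = auto_impute(rows)  # missing cells replaced by column means
```

The point is not the imputation itself but where it runs: pushed into the pipeline, steps like this happen before anyone opens a dashboard.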
- Invest more in derivative and synthetic data to prepare for unprecedented events: If the last few years have taught us anything, it’s the value of investing time and resources in risk prediction and management. Unfortunately, prior to COVID-19 there wasn’t enough real data on pandemics readily available to the average operation to prepare for such a crisis, and this is precisely the gap synthetic data plugs. Research suggests that models trained on synthetic data can be more accurate than others, and of course it eliminates some of the privacy, copyright, and ethical concerns associated with real data. Derivative data, meanwhile, allows us to repurpose data for multiple needs and enables the crucial scenario planning needed to prepare for future issues and crises.
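As a purely illustrative sketch (the article names no specific tool), synthetic data generation can be as simple as drawing new rows that match the per-column mean and spread of a small real sample. Production synthesizers also preserve correlations and business constraints, which this deliberately minimal version ignores:

```python
import random
from statistics import mean, stdev

def synthesize(real_rows, n, seed=0):
    """Generate n synthetic rows whose columns match the mean and
    standard deviation of the real sample.

    A simple sketch: each column is modeled as an independent
    normal distribution fitted to the real data.
    """
    rng = random.Random(seed)                    # reproducible draws
    cols = list(zip(*real_rows))
    params = [(mean(c), stdev(c)) for c in cols] # fit per-column stats
    return [[rng.gauss(m, s) for (m, s) in params] for _ in range(n)]

# Four real observations, e.g. (order volume, lead time in days)
real = [[100, 5], [110, 6], [95, 4], [105, 7]]
fake = synthesize(real, n=1000)  # 1,000 statistically similar rows
```

From four real observations you can now train or stress-test against a thousand plausible ones, without exposing the originals.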
- Be ready for natural language capabilities to rival humans: Many organizations have been using language AI in its basic form for some time now. Think about how often you’ve interacted with a customer support chatbot to resolve an issue with your bank or insurance provider. This technology is set to grow at around 18% a year for the next few years, but also to evolve dramatically. Several new models in development are significantly more powerful than anything we use today. Where they will take us, we can only imagine; what we do know is that natural language capabilities will have huge implications for how we query information and how it’s interpreted and reported. We’ll find not only the data we’re looking for but also the data we hadn’t thought to ask about. That’s why businesses need to capitalize on this.
- Mitigating supply-chain disruption with real-time data: The aftershocks of COVID-19 and continued global conflicts are still compromising supply chains. Anyone who has attempted to buy a new car (or a computer, or even something as basic as toilet paper) in the last few years knows how seriously supply chains have been impaired. Disruption shows no sign of abating over the next few years, and neither does the need to react quickly, or ideally to “pre-act” by forecasting issues before they even start. Having the power to analyze data in real time and in context is key to this. It’s no wonder that IDC predicts that by 2027, 60% of data capture and movement tech spending will go toward enabling real-time simulation, optimization, and recommendation capabilities.
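A minimal sketch of what real-time, in-context monitoring could look like: a rolling window over a streaming supply-chain metric that flags values drifting far from recent history. The class, window size, and threshold below are hypothetical illustrations, not a feature of any named product:

```python
from collections import deque
from statistics import mean, stdev

class StreamMonitor:
    """Flag streaming supply-chain metrics (e.g. supplier lead times)
    that sit far outside the recent rolling window.

    A toy sketch of real-time "pre-act" monitoring: alert the moment
    a value looks anomalous, rather than in next month's report.
    """

    def __init__(self, window=30, threshold=3.0):
        self.window = deque(maxlen=window)  # recent history only
        self.threshold = threshold          # z-score cutoff

    def observe(self, value):
        """Record one new reading; return True if it is anomalous."""
        alert = False
        if len(self.window) >= 5:  # need some history before judging
            m, s = mean(self.window), stdev(self.window)
            if s > 0 and abs(value - m) / s > self.threshold:
                alert = True
        self.window.append(value)
        return alert

monitor = StreamMonitor(window=30, threshold=3.0)
for lead_time in [5.0, 5.1, 4.9, 5.2, 5.0, 4.8, 20.0]:
    if monitor.observe(lead_time):
        print(f"alert: lead time {lead_time} far outside recent history")
```

Here stable lead times around five days pass silently; the jump to twenty days trips the alert as it arrives, which is the “react in the moment” posture the trend describes.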
- X fabric connects data governance as it becomes even more complex: Investment in data and analytics has accelerated dramatically thanks to the pandemic, and will continue to do so, with 93% of companies indicating they plan to keep increasing budgets in these areas. But rapidly shifting rules and regulations around privacy, as well as the distribution, diversity, and dynamics of data, are holding back organizations’ ability to squeeze the best competitive edge out of it. This becomes especially challenging in a fragmented world, as data governance grows even more complex. Improving access to data, and its real-time movement and advanced transformation between sources and systems across the enterprise, is crucial to organizations realizing its full power. This is why an increasing number of businesses are turning to a data control plane architecture: an “X fabric” not just for your data, but also for your applications, BI dashboards, and algorithms, enabled by catalogs and cloud data integration solutions. This is a critical component of the modern distributed environment for any organization that wants to act with certainty.
The good news is that after the last few years, we’re all better prepared to roll with the punches than ever before. As data and analytics professionals, we need to adjust to more fragmentation, with disparate data centers, disrupted supply chains, the consistent need for innovation, and hampered access to skilled labor. And in a world where crisis has become a constant, calibrating for it becomes a core competence – so we can react in the moment and anticipate what’s coming next.