The challenge of mass observability data -- how much is too much?

Digital transformation has become ubiquitous across every industry as the world grows more reliant on software-driven services. As this trend continues, customers and end users increasingly expect organizations to deliver higher-quality, more efficient, and more secure digital services, at greater speed. Multicloud environments, which are built on an average of five different platforms, are at the heart of this transformation. They enhance organizations’ agility, so DevOps teams can accelerate innovation.

However, these multicloud environments have introduced new challenges because of their complexity and scale. Applications span multiple technologies, contain millions of lines of code, and generate even more dependencies. It is now beyond human capacity for DevOps teams to manually monitor these environments and piece together and analyze logs to gain the insights they need to deliver seamless digital experiences.

AIOps to the rescue

Enterprises are increasingly using artificial intelligence for IT operations (AIOps) platforms to tame multicloud complexity and overcome these challenges. AIOps combines big data and machine learning techniques to automate IT operations, so organizations can accelerate innovation and free developers’ time for more strategic work.

AIOps, however, is only as smart as the quality and quantity of the logs and other data that teams feed into it, which is why observability is essential. Organizations need to capture detailed metrics, logs, and traces from multicloud applications and infrastructure and feed them into AIOps platforms. This is what enables AI to give DevOps teams the insights they need to optimize applications, deliver better customer experiences, and drive more positive business outcomes. With better-quality observability data, AIOps solutions can provide more valuable context, and teams can operate in a more agile and informed way.
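
To make that concrete, here is a minimal sketch of how an application might emit traces to an observability backend, assuming the open source OpenTelemetry Python SDK. The service name and collector endpoint are illustrative placeholders, not a reference to any particular product.

    # A minimal sketch, assuming the OpenTelemetry Python SDK; the service
    # name and collector endpoint are illustrative placeholders.
    from opentelemetry import trace
    from opentelemetry.sdk.resources import Resource
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

    # Tag every span with the emitting service so the backend can
    # correlate traces across microservices.
    provider = TracerProvider(
        resource=Resource.create({"service.name": "checkout-service"})
    )

    # Batch spans and export them to any OTLP-compatible backend.
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="http://collector:4317"))
    )
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("process-order"):
        pass  # application logic runs here, producing spans for analysis

Emitting telemetry in a vendor-neutral format like this gives an AIOps platform the raw material to correlate a slow trace with the logs and metrics around it.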

Data, data, everywhere

The problem is that in the race to collect more user session data, metadata, and business outcomes information, organizations are being overwhelmed. The thousands of microservices and containers in a multicloud environment, plus every tap, click, or swipe of a user interacting with a digital service, produce more data than traditional log monitoring and analytics solutions can keep up with; those tools weren’t built for the continued explosion of observability data.

As a result, it’s becoming more difficult for organizations to ingest, store, index, and analyze observability data at the required scale, and the cost in both money and time has become prohibitive. Further challenges are created by the data silos that build up as organizations come to rely on multiple monitoring and analytics solutions for different purposes. This fragmented approach makes it difficult to analyze log data in context, which limits the value of the AIOps insights organizations can unlock.

In addition, given the cost of primary storage, organizations are often forced to move historical log data into "cold storage" (a repository for inactive data) or to scrub or discard it entirely. While this makes log analytics more cost-effective, it also reduces the value the data brings to modern AIOps-driven approaches. With log data in cold storage, organizations cannot use AIOps platforms to query it on demand for real-time insights or for more context surrounding the cause of potential issues. The data must be rehydrated and reindexed before teams can run queries, which can take hours or even days. By then, any insights may be outdated, with limited value for preventing problems before customer experience is affected.

Limitless observability in a cloud-native world

Dependency on multicloud environments and AIOps-driven automation shows no sign of easing as the world’s appetite for digital services continues to grow. As a result, organizations must find new approaches to capturing, ingesting, indexing, storing, and operationalizing observability data -- approaches that are fit for the cloud-native world.

This is creating the need for log analytics models designed to keep pace with the complexity of multicloud environments and scale limitlessly with the huge volumes of metrics, logs, and traces they create. Data lakehouses are a powerful answer, combining the structure, management, and querying features of a data warehouse with the low-cost storage of a data lake. This eliminates the need for teams to manage multiple sources of data, piece them together manually, and move them between hot and cold storage, which increases the speed and accuracy of AIOps insights.
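
As an illustration of the lakehouse idea, the sketch below runs a warehouse-style query directly over log records held as Parquet files in low-cost object storage, with no rehydration or reindexing step. It assumes the open source DuckDB engine; the bucket path and column names are hypothetical.

    # A minimal sketch, assuming log records stored as Parquet files in
    # object storage; the bucket path and column names are hypothetical.
    import duckdb

    con = duckdb.connect()
    con.execute("INSTALL httpfs; LOAD httpfs;")  # enables reading from S3-style storage

    # Query "cold" data where it lives -- no rehydration or reindexing step.
    rows = con.execute("""
        SELECT service, COUNT(*) AS error_count
        FROM read_parquet('s3://logs-archive/2024/*/*.parquet')
        WHERE level = 'ERROR'
        GROUP BY service
        ORDER BY error_count DESC
        LIMIT 10
    """).fetchall()

    for service, error_count in rows:
        print(service, error_count)

Because the query runs over the same low-cost files that would otherwise sit in cold storage, historical context stays a single query away.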

In this way, organizations can unlock data and log analytics in full context and at enormous scale, enabling faster querying that delivers more precise answers from AIOps. Organizations armed with these capabilities can drive more intelligent automation to support flawless digital interactions for their customers and end users, giving them an invaluable competitive edge in an increasingly connected world.

Greg Adams is Regional VP UK&I, Dynatrace.
