Mezmo has added capabilities to streamline the flow of telemetry data within DevOps workflows as part of an effort to make it simpler to surface insights and reduce the total cost of observability.
The company has also added integrations with more data sources to its Telemetry Pipeline platform, along with controls that make it simpler to optimize data storage and usage. For example, an Events-to-Metrics Processor identifies and extracts metrics from logs to make them easier for third-party tools to consume.
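Mezmo has not published the processor’s internals, but the underlying pattern is a familiar one: match structured log events against a rule and emit compact numeric metrics in place of raw text. The sketch below illustrates that pattern only; the log fields (status, route, duration_ms) and the Prometheus-style metric names are hypothetical, not Mezmo’s actual format.

```python
# Illustrative events-to-metrics sketch. Field names and output format
# are hypothetical assumptions, not Mezmo's implementation.
import json
from collections import defaultdict

def events_to_metrics(log_lines):
    """Aggregate structured log events into numeric metrics."""
    counters = defaultdict(int)
    latencies = defaultdict(list)
    for line in log_lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip lines that are not structured events
        status = event.get("status")
        if status is not None:
            # Count requests per status code, Prometheus-style labels
            counters[f'http_requests_total{{status="{status}"}}'] += 1
        if "duration_ms" in event:
            latencies[event.get("route", "unknown")].append(event["duration_ms"])
    metrics = dict(counters)
    for route, values in latencies.items():
        # Reduce many log lines to one average-latency gauge per route
        metrics[f'request_duration_ms_avg{{route="{route}"}}'] = sum(values) / len(values)
    return metrics

logs = [
    '{"status": 200, "route": "/api/orders", "duration_ms": 42}',
    '{"status": 500, "route": "/api/orders", "duration_ms": 310}',
    '{"status": 200, "route": "/api/orders", "duration_ms": 58}',
]
print(events_to_metrics(logs))
```

The appeal for downstream tools is the reduction in volume: three verbose log lines collapse into a handful of labeled numbers that any metrics backend can ingest and store cheaply.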
Mezmo CEO Tucker Callaway said these additions collectively make it simpler to surface insights that make DevOps workflows more efficient. Previously, the company added capabilities such as rollback and redeploy, sequential parsing, error history management and data sample management to help DevOps teams manage telemetry data.
In effect, Mezmo is making it easier to apply engineering best practices to the massive amounts of telemetry data generated across DevOps workflows, he noted.
No one knows for certain whether all that data will require adding data engineers to DevOps teams, but as DevOps continues to evolve, it’s clear there is a need to cost-effectively manage data at scale. DevOps teams that embrace observability are quickly discovering that storing logs, traces and metrics for an extended period of time can become cost-prohibitive. Naturally, there’s a lot more sensitivity to those costs during challenging economic times.
In theory, of course, various forms of artificial intelligence should automate data engineering best practices, but in the meantime, there simply isn’t enough data engineering expertise available. The need for a platform to optimize the flow of telemetry data across DevOps workflows is becoming more acute.
It’s not clear how quickly DevOps teams are embracing observability, but relying on predefined metrics to monitor IT environments is clearly no longer sufficient. DevOps teams need to be able to query telemetry data to, for example, identify bottlenecks created by dependencies between microservices, as the sketch below illustrates. Resolving those issues can’t be slowed by a lack of data to query. Most problems can be traced back to a recent update to an application, but there will always be some issues that require access to historical data. That historical data should be quickly and easily available, especially since application downtime directly equates to lost revenue for many organizations.
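As a rough illustration of the kind of query involved, consider ranking caller-to-callee dependencies by observed latency. The span records below are hypothetical (a flattened, OpenTelemetry-like shape); no particular vendor’s query API is assumed.

```python
# Illustrative sketch: rank service-to-service dependencies by average
# latency to surface likely bottlenecks. The span format is a
# hypothetical simplification, not a specific vendor's data model.
from collections import defaultdict

spans = [
    {"service": "checkout", "calls": "payments",    "duration_ms": 120},
    {"service": "checkout", "calls": "payments",    "duration_ms": 950},
    {"service": "checkout", "calls": "inventory",   "duration_ms": 35},
    {"service": "payments", "calls": "fraud-check", "duration_ms": 880},
]

def slowest_dependencies(spans, top_n=3):
    """Rank caller->callee edges by average observed latency."""
    totals = defaultdict(lambda: [0.0, 0])  # edge -> [sum, count]
    for span in spans:
        edge = (span["service"], span["calls"])
        totals[edge][0] += span["duration_ms"]
        totals[edge][1] += 1
    averages = {edge: total / count for edge, (total, count) in totals.items()}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

for (caller, callee), avg in slowest_dependencies(spans):
    print(f"{caller} -> {callee}: {avg:.0f} ms avg")
```

Even a query this simple depends on the underlying spans still being available, which is exactly why retention costs and query access are in tension.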
Observability, regardless of cost, is rapidly becoming a requirement for DevOps teams’ success as application environments become more complex in the cloud-native era. Dependencies between the microservices that make up those applications are too difficult to track and analyze without the aid of an observability platform. The challenge is that not every observability platform provider is equally concerned about how much it costs to store all that data. Striking the right balance between the amount of telemetry data collected and the cost of storing it has become a much larger concern.