Mezmo this week added a free trial and a community plan for the Mezmo Telemetry Pipeline service to make it simpler for DevOps teams to store and manage the large amounts of telemetry data they are collecting.
Mezmo CEO Tucker Callaway said that as more DevOps teams embrace observability to minimize disruptions to application environments, they are struggling to manage the explosion of data being created. The Mezmo platform makes it possible to apply data engineering best practices to manage that data using a set of drag-and-drop visual tools, he added. A set of data transformation tools also makes it simpler to extract metrics embedded in logs, summarize events and metrics, and then forward those metrics to downstream platforms for analysis.
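To illustrate the general idea of extracting and summarizing metrics embedded in logs, here is a minimal Python sketch. It is not Mezmo's actual API; the log format, field names and functions are hypothetical, chosen only to show the extract-then-summarize pattern described above.

```python
import re
from collections import defaultdict

# Hypothetical log format: each line embeds an endpoint and a latency value.
LOG_PATTERN = re.compile(r'endpoint=(?P<endpoint>\S+) latency_ms=(?P<latency>\d+)')

def extract_metrics(log_lines):
    """Pull (endpoint, latency_ms) pairs out of raw log lines."""
    for line in log_lines:
        match = LOG_PATTERN.search(line)
        if match:
            yield match.group('endpoint'), int(match.group('latency'))

def summarize(metrics):
    """Aggregate raw samples into per-endpoint count and average latency,
    the kind of compact summary that would be forwarded downstream."""
    totals = defaultdict(lambda: [0, 0])  # endpoint -> [count, latency sum]
    for endpoint, latency in metrics:
        totals[endpoint][0] += 1
        totals[endpoint][1] += latency
    return {ep: {'count': c, 'avg_latency_ms': s / c}
            for ep, (c, s) in totals.items()}

logs = [
    '2024-05-01T12:00:00Z INFO endpoint=/checkout latency_ms=120',
    '2024-05-01T12:00:01Z INFO endpoint=/checkout latency_ms=80',
    '2024-05-01T12:00:02Z INFO endpoint=/search latency_ms=40',
]
result = summarize(extract_metrics(logs))
print(result)
```

The point of the pattern is volume reduction: three raw log lines collapse into two small metric records before anything is shipped to an analysis platform.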
The free edition of the platform makes it possible for organizations to begin applying those capabilities without any upfront cost, noted Callaway.
It’s not clear to what degree DevOps teams will need to add data engineering expertise to manage all the telemetry data being collected. It’s already apparent, however, that the cost of storing and managing all that data is becoming a significant issue as more applications are instrumented. At the same time, it’s nearly impossible to manage cloud-native applications without being able to observe interactions between the microservices used to construct them.
Log management has always been a challenge, but with the addition of traces, the amount of telemetry data being collected to surface metrics has increased. The Mezmo platform makes it possible to set a daily or monthly hard limit on the volume of logs stored or, alternatively, set soft daily or monthly quotas that can apply throttling logic to ensure mission-critical log data will continue to flow.
DevOps teams can also make use of soft limits to reallocate storage resources from one team to another based on how much log data each is expected to generate within a given period of time.
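The distinction between the two quota styles described above can be sketched in a few lines of Python. This is a hypothetical illustration of the concept, not Mezmo's implementation: a hard limit stops storing anything over the cap, while a soft quota applies throttling logic that drops only lower-priority logs so mission-critical data keeps flowing.

```python
# Hypothetical admission check for a log pipeline (names and priority
# labels are assumptions for illustration, not Mezmo's actual API).
def admit(log_event, bytes_used_today, hard_limit, soft_quota):
    size = len(log_event['message'])
    if bytes_used_today + size > hard_limit:
        return False  # hard limit: nothing further is stored today
    if bytes_used_today + size > soft_quota and log_event['priority'] != 'critical':
        return False  # soft quota exceeded: throttle non-critical logs only
    return True

events = [
    {'message': 'user login ok', 'priority': 'info'},
    {'message': 'payment failed', 'priority': 'critical'},
]
used = 0
stored = []
for event in events:
    if admit(event, used, hard_limit=100, soft_quota=10):
        used += len(event['message'])
        stored.append(event['message'])
print(stored)
```

With the soft quota already exhausted, the routine throttles the informational event but still admits the critical one, which is the behavior that lets mission-critical log data continue to flow.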
Reducing storage costs has naturally become a higher priority during challenging economic times, so there is much more pressure on DevOps teams to be efficient. Finance teams are reducing costs by, for example, applying quotas to the amount of storage made available to individual DevOps teams. The days when cloud storage resources were made available indiscriminately are over.
It’s not clear how quickly DevOps teams are embracing observability, but it is certain that relying on predefined metrics to monitor IT environments is no longer sufficient. DevOps teams need to be able to query telemetry data to, for example, identify bottlenecks that are often created by dependencies between microservices. The issue is striking a balance between the amount of data being collected and the cost of storing it all.
Regardless of approach, observability is rapidly becoming a requirement for DevOps teams to succeed. In fact, there is already no shortage of observability platforms to choose from. The issue is that not every observability platform provider is equally concerned about the cost of storing all the data being collected.