Over the last few years, cloud computing has grown more expensive than ever. Initially drawn by the promise of cutting infrastructure spend, companies far and wide flocked to behemoths like AWS and Google Cloud to host their services. Technical teams were told this would reduce engineering costs and increase developer productivity, and in some cases it did.
Fundamental shifts in AI/ML were made possible by the ability to batch jobs and run them in parallel in the cloud, which reduced the time it took to train certain types of models and led to faster innovation cycles. Another example was the shift in how software is architected: from monolithic applications running on VMs to a microservices- and container-based infrastructure paradigm.
Yet, while the adoption of the cloud fundamentally changed how we build, manage and run technology products, it also led to an unforeseen consequence: runaway cloud costs.
While the promise of spending less spurred companies to migrate services to the cloud, many teams didn’t know how to do so efficiently and, by extension, cost-effectively. This created the first investment opportunity, one we have seen behind the recent surge in venture funding for cloud observability platforms like Chronosphere ($255 million), Observe ($70 million) and Cribl ($150 million).
The basic thesis here is simple: If we provide visibility into what services cost, we can help teams reduce their spend. We can liken this to the age-old adage, “You cannot change what you cannot see.” This has also been the primary driver for larger companies acquiring smaller observability players: reduce the risk of churn by enticing customers with additional observability features, then increase their average contract value (ACV).
Source: TechCrunch