If you are reading this, I don’t have to convince you any further of the powerful intelligence we can derive from logs and machine data. If you are anything like the many, many users, customers, and prospects we have been talking to over the years, you might, however, have some level of that pesky modern condition commonly known as volume anxiety. The volume here, of course, is the volume of data: there is a lot of it, and it keeps growing. All those logs are useful, in different ways, for operational monitoring and troubleshooting, for security monitoring and investigation, and for continuously optimizing your business. But making all this data available for analytics can become expensive quickly. So I am sure you know the anxiety that comes from having to decide which data to keep and which data to drop.
Today I am happy to talk about a couple of things we have been working on to counter volume anxiety. On top of the existing option to ingest data into our Frequent tier, we are now previewing an additional data tier. This new tier is part of our Interactive Intelligence Service and is geared towards infrequent analytics. In other words, we believe this new data tier is absolutely perfect for all that data that you just know you will want to have managed by Sumo, and that you know you will need to analyze, but maybe not every hour of every day. For this new tier, we are introducing a different pricing model: a very low cost of $0.10/GB to ingest and process that data, combined with a modest cost of approximately $4 per TB analyzed.
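To make the pricing model concrete, here is a back-of-the-envelope sketch using only the two numbers quoted above ($0.10/GB ingested, roughly $4/TB analyzed). The volumes in the example are made up for illustration, and actual billing terms may of course differ:

```python
# Illustrative only: estimate a monthly bill for the new infrequent tier
# using the prices quoted in this post ($0.10/GB ingested, ~$4/TB analyzed).
# The volumes below are hypothetical; actual pricing terms may differ.

def infrequent_tier_cost(gb_ingested: float, tb_analyzed: float) -> float:
    """Estimated cost: ingest at $0.10/GB plus analysis at ~$4/TB."""
    INGEST_PER_GB = 0.10
    ANALYZE_PER_TB = 4.0
    return gb_ingested * INGEST_PER_GB + tb_analyzed * ANALYZE_PER_TB

# Example: ingest ~1 TB/day (about 30,000 GB a month) and analyze 5 TB of it.
monthly = infrequent_tier_cost(gb_ingested=30_000, tb_analyzed=5)
print(f"${monthly:,.2f}")  # 30,000 GB * $0.10 + 5 TB * $4 = $3,020.00
```

The point of the arithmetic: the bill is dominated by the very low ingest price, so you can afford to send everything and pay the analysis charge only for the slices you actually query.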
We believe this is a disruptive model and unique in our space because it allows you to get the best of both worlds. Many of you have told us that you don’t want to drop the data or put it in an inferior system. We believe that at this very low price point for ingesting and processing data, it becomes possible to manage all your logs with Sumo. At the same time, we are not forcing you to trade cost against the availability of the data. In the end, you need to be able to analyze the data right when you need it, and having to re-ingest, rehydrate, or otherwise bring the data back from some sort of archive creates a tremendous amount of delay. With our new tier, you will be able to analyze all your logs on demand, without any additional hurdles.
This is not to say that archiving data cannot also be appropriate if you cast an even wider net for data. This is why we are now also providing a new Archiving Intelligence Service, which is meant for data that you know you will basically not analyze at all in most cases, but for which you need the security blanket of being able to re-ingest subsets into Sumo when the need arises. We are providing this new archiving capability completely free of cost! Starting now, you will be able to use our collectors to archive to AWS S3 for free. If you find that a slice of this data does indeed need to be brought back into Sumo Logic for analysis, a new UI will allow you to select data by time range, and we will then ingest and process it and make it available for analysis.
Nobody likes limits, and nobody likes making hard decisions based on little information. We think these new capabilities will make your life easier and will also make it easier to get even more value out of the Sumo platform. Please reach out so we can tell you more about these new capabilities and get you onboarded.
Please also see our accompanying press release.