We are excited to announce the beta release of Sumo Logic’s Archive Intelligence Service, which enables customers to forward logs directly from Sumo Logic’s installed collector to their own self-managed AWS S3 buckets. This service gives users the ability to reliably gather and economically store log data that may not be needed for immediate analysis or operations, but is still important to keep for later use. With the Archive Intelligence Service, users can leverage the power and convenience of Sumo Logic’s collection system to ensure these logs are stored in S3 and ready for ingestion on demand. Because the installed collector moves logs directly into AWS S3, we eliminate the cost associated with ingestion and indexing, and provide this capability at no cost.
Many users need to reliably store log data that they may not use for critical business decisions or operations, but that is still needed for business optimization, or in case there’s a need to retroactively investigate incidents discovered years later. The Archive Intelligence Service addresses these needs with the full convenience of Sumo Logic’s centralized collection management features. Users can configure forwarding and processing rules for each collector from within the Sumo Logic user interface, or via the Collection API, giving them the flexibility to easily control which logs are archived and which are ingested into the Sumo Logic platform. Users can either split or mirror any portion of the log stream, sending some logs to the archive and ingesting others as they require. The collector’s data masking and hashing rules can be applied to log data before it is sent to the user’s archive bucket, ensuring that sensitive data is safely obscured within the archive.
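To make the masking and exclusion rules concrete, here is a minimal sketch of what a collector source configuration with processing rules might look like. The field names (`filterType`, `regexp`, `mask`, `filters`) follow the general shape of Sumo Logic collector filter definitions, but treat the exact schema here as illustrative rather than the documented Collection API contract.

```python
import json

# Illustrative processing rules: one masks credit-card-like numbers before
# logs leave the collector, another drops debug noise entirely. Field names
# are assumptions sketched from typical collector filter definitions.
processing_rules = [
    {
        "filterType": "Mask",
        "name": "mask credit cards",
        "regexp": r"\d{4}-\d{4}-\d{4}-\d{4}",
        "mask": "####-####-####-####",
    },
    {
        "filterType": "Exclude",
        "name": "drop debug lines",
        "regexp": r".*\[DEBUG\].*",
    },
]

# Attach the rules to a (hypothetical) source definition before sending it
# to the collection management endpoint.
source_config = {"source": {"name": "app-logs", "filters": processing_rules}}
print(json.dumps(source_config, indent=2))
```

Because rules like these run on the collector itself, sensitive values are obscured before the data is written to the archive bucket, not after.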
The Archive Intelligence Service makes retrieval of logs from the archive a snap. Users can recover logs from their S3 bucket by creating a new Archive Source in the Collection tab of the Data Management page. The Archive Source is a new type of hosted collector with controls that allow users to choose the logs they want ingested and analyzed. They simply specify a target time range for the logs they wish to ingest, and the Archive Source will ingest only those logs created within that range.
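The on-demand, time-ranged ingestion described above can be pictured as building a small job request. The function and payload field names below are hypothetical, meant only to show the shape of "give me the logs from this window":

```python
from datetime import datetime, timezone

# Hypothetical sketch of an on-demand Archive Source ingestion job.
# The payload field names are assumptions for illustration, not the
# documented API schema.
def build_ingestion_job(name, start, end):
    if end <= start:
        raise ValueError("end of time range must be after start")
    return {
        "name": name,
        "startTime": start.isoformat(),
        "endTime": end.isoformat(),
    }

# Replay one week of archived logs around a suspected incident.
job = build_ingestion_job(
    "replay-march-incident",
    datetime(2019, 3, 1, tzinfo=timezone.utc),
    datetime(2019, 3, 8, tzinfo=timezone.utc),
)
print(job)
```

Only logs whose creation time falls inside the window are ingested, so a narrow range keeps ingestion volume (and cost) proportional to what you actually need to analyze.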
Users can also control which logs and data are ingested with processing rules, which are applied at ingest time by the Archive Source. This additional level of filtering is particularly useful when the same bucket is used to archive logs from different types of applications and workloads. Since ingestion of archived logs is executed on demand, users can change the processing rules or time ranges as they like, and the changes will be applied to the next ingestion job.
Users can also configure the Archive Source with its own metadata, separate from the metadata the original archiving collector was tagged with. Assigning separate metadata at ingestion time gives you the option to route archived data to a specific partition, so that ingested archive logs are kept separate from data that arrived through normal ingestion channels. You can also use this metadata in queries to better focus your searches. As a convenience, the Archive Intelligence Service will automatically associate the metadata fields tagged on the installed collector that originally archived the data to S3. This metadata is added as long as the installed collector still exists. However, the original log metadata is embedded in each archived message, so you can always use Sumo Logic’s Field Extraction Rules or parse operators to extract the metadata even if the original collector has been deleted.
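To show how embedded metadata could be recovered after the original collector is gone, here is a small sketch. It assumes, purely for illustration, that metadata appears in each archived message as `key=value` pairs; a Field Extraction Rule or parse operator would play the role of the regex below:

```python
import re

# Hypothetical archived message with its original metadata embedded as
# key=value pairs (the real archive format may differ).
message = "_sourceCategory=prod/web _sourceHost=web-01 GET /index.html 200"

# Recover the metadata fields with a simple pattern, standing in for a
# Field Extraction Rule or parse operator.
fields = dict(re.findall(r"(_source\w+)=(\S+)", message))
# fields == {'_sourceCategory': 'prod/web', '_sourceHost': 'web-01'}
```

Since the metadata travels inside each message, this extraction works regardless of whether the installed collector that wrote the archive still exists.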
We’re excited to launch the beta of this new service, which allows customers to send unlimited log and other machine data to their own AWS S3 bucket, for free, and with the reliability and convenience of Sumo Logic’s collection management features. Please contact your Sumo Logic account manager to be included in the beta program.