In today’s ever-changing business landscape, the businesses that operate on a software-driven model will be the most successful. These businesses recognize the power of transforming the enormous volumes of data generated by digital operations into real-time insights that propel further success. The ability to do this in real time, all the time, across multiple functional disciplines lies at the heart of continuous intelligence.
Kubernetes is first and foremost an orchestration engine, built around well-defined interfaces that allow for a wide variety of plugins and integrations; that extensibility has made it the industry-leading platform in the battle to run the world’s workloads. From machine learning pipelines to the applications a restaurant needs, Kubernetes has proven it can run almost anything.
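To make that concrete, here is a minimal sketch (not from the original post) that talks to a cluster through the official Kubernetes Python client; it assumes a reachable cluster with a configured kubeconfig, and the namespace is an illustrative choice.

```python
# Minimal sketch using the official Kubernetes Python client
# (pip install kubernetes); assumes ~/.kube/config points at a cluster.
from kubernetes import client, config

def list_pods(namespace: str = "default") -> None:
    """Print the name and phase of every pod in the namespace."""
    config.load_kube_config()   # load credentials from the local kubeconfig
    v1 = client.CoreV1Api()     # core API group: pods, services, namespaces
    for pod in v1.list_namespaced_pod(namespace).items:
        print(f"{pod.metadata.name}: {pod.status.phase}")

if __name__ == "__main__":
    list_pods()
```

The same well-defined API surface that this client calls is what the broader plugin and integration ecosystem builds against.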
Content Delivery Networks, or CDNs, provide ultra-fast global connectivity, traffic protection, and a better user experience. They deliver content in a timely manner to subscribers who may be geographically far from the servers hosting it. To stay competitive, most organizations use CDNs to deliver content to their customers.
“Americans have their minds wrapped around a two-party system. It is hard to get people to envision something different — despite the fact that there have been tectonic changes in the American political parties at many different junctures in our history. Building a new political party from scratch feels daunting and naïve.”
The OpenTelemetry Collector is a new, vendor-agnostic agent that can receive and send metrics and traces in many formats. It is a powerful tool in a cloud-native observability stack, especially when your apps use multiple distributed tracing formats, such as Zipkin and Jaeger, or when you want to send data to multiple backends, such as an in-house solution and a vendor. This article will walk you through configuring and deploying the OpenTelemetry Collector for such scenarios.
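As a preview of the kind of setup the walkthrough covers, here is a minimal, hedged sketch of a Python app exporting a span to a Collector over OTLP. The localhost:4317 endpoint and the span name are assumptions, and it requires the opentelemetry-sdk and opentelemetry-exporter-otlp packages.

```python
# Minimal sketch: send one trace span from a Python app to a local
# OpenTelemetry Collector over OTLP/gRPC (endpoint is an assumption).
# pip install opentelemetry-sdk opentelemetry-exporter-otlp
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("checkout"):  # hypothetical span name
    pass
```

From there, the Collector’s pipeline configuration decides whether that span is fanned out to Zipkin, Jaeger, an in-house backend, a vendor, or all of them at once.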
The countdown is on to our 4th annual Illuminate user conference, October 6-7, 2020! This year we are going virtual to keep everyone healthy and safe, and while we will miss seeing all of our customers and partners, we are excited to host the premier education platform for machine data analytics, helping businesses accelerate digital transformation and improve customer experiences.
Persistence is, in effect, an attacker’s ability to maintain access to a compromised host through intermittent network access, system reboots, and (to a certain degree) remediation activities. An attacker’s ability to compromise a system or network and successfully carry out their objectives typically depends on maintaining some form of persistence on the target system or network.
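To ground this on the defensive side, here is a minimal sketch (assuming a Linux host; the paths are common examples, not an exhaustive list) that enumerates a few locations attackers frequently abuse for persistence.

```python
# Minimal defensive sketch (assumes a Linux host): enumerate a few
# common persistence locations worth reviewing after a compromise.
import subprocess
from pathlib import Path

def check_persistence() -> None:
    # Per-user cron jobs survive reboots, a classic persistence spot.
    try:
        cron = subprocess.run(["crontab", "-l"], capture_output=True, text=True)
        print("user crontab:\n", cron.stdout or "(none)")
    except FileNotFoundError:
        print("crontab not installed")

    # System-wide cron and init locations also deserve a look.
    for path in ("/etc/cron.d", "/etc/systemd/system", "/etc/rc.local"):
        p = Path(path)
        if p.is_dir():
            print(path, "->", sorted(f.name for f in p.iterdir()))
        elif p.is_file():
            print(path, "exists")

if __name__ == "__main__":
    check_persistence()
```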
Last week Sumo Logic announced our new Observability Suite, which included the public introduction of the closed beta for our distributed tracing capabilities as part of our Microservices Observability solution. This new solution will provide end-to-end visibility into user transactions across services, as well as seamless integration with performance metrics and logs to accelerate issue resolution and root-cause analysis. In this blog, we’ll explore the new solution in detail.
As more and more applications move to the cloud, the complexity of application architectures inevitably increases. It is a burden we willingly take on because the benefits (flexible deployment, technology diversity, independent scaling, and much more) tend to far outweigh the costs. But during this transition, most organizations face a dilemma: divert resources to the tooling needed to monitor and troubleshoot these systems effectively (i.e., observability), or slow the rate of migration to the cloud.
Automation is a key component in the management of the entire software release lifecycle. While we know it is critical to the Continuous Integration/Continuous Delivery process, it is now becoming equally essential to the underlying infrastructure you depend on. As automation has increased, a new principle for managing infrastructure has emerged to prevent environment drift and ensure your infrastructure is consistently and reliably provisioned.
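That principle is commonly called infrastructure as code: the environment is declared in version-controlled code rather than configured by hand. As an illustrative sketch only, here is what a minimal Pulumi program in Python might look like; Pulumi is one of several IaC tools (named here as an example, not as the tool the post covers), and the resource name is hypothetical.

```python
# Hypothetical infrastructure-as-code sketch using Pulumi's Python SDK
# (pip install pulumi pulumi-aws); run inside a Pulumi project with
# `pulumi up`, with AWS credentials configured.
import pulumi
from pulumi_aws import s3

# Declaring the bucket in code means every environment is provisioned
# from the same source of truth, which is what prevents drift.
logs_bucket = s3.Bucket("app-logs", acl="private")  # name is illustrative

pulumi.export("bucket_name", logs_bucket.id)
```

Because the declaration lives in source control, re-running the program reconciles the environment back to what the code says, instead of letting hand edits accumulate.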
I have been spending a considerable amount of time recently on distributed tracing topics. In my previous blog, I discussed the pros and cons of various approaches to collecting distributed tracing data. Now I would like to draw your attention to the analysis backend: what does it take to be good at analyzing transaction traces? As mentioned in that blog, one of the most important outcomes of adopting open source tracing standards is the freedom to choose the right analysis backend, as long as it supports those standards. So, what is the requirement list for a distributed tracing backend? What should it do, and what are the absolute must-haves? We have looked at many free, open source, and commercial offerings on the market and found a few tools that are good here or there, but none that fully match the complete list.
There has been increasing buzz over the past decade about the benefits of using a microservice architecture. Let’s explore what microservices are and are not, and contrast them with traditional monolithic applications. We’ll discuss the benefits of a microservices-based architecture, as well as the effort and planning required to transition from a monolithic architecture to microservices.