Sumo Logic is excited to join in celebrating the 52nd Earth Day! Earth Day is a global event for environmental awareness and sustainability, as well as a commemoration of the birth of the modern environmental movement. We believe it's essential now more than ever to move toward greener practices as a business and as people.
As organizations have moved toward a microservices design pattern, the need for reliable, performant solutions that let decoupled services communicate with one another has grown. RabbitMQ is an open-source message broker designed for this purpose. We'll discuss what RabbitMQ is, how it works, why it needs to be monitored, and how Sumo Logic can monitor it effectively.
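To make the pattern concrete, here is a minimal sketch of publishing a message to a RabbitMQ queue with the Python pika client; the localhost broker, queue name, and message body are illustrative assumptions, not part of any particular setup.

```python
import pika  # pip install pika

# Connect to a RabbitMQ broker assumed to be running locally with defaults.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare a durable queue (idempotent if it already exists) and publish to it.
channel.queue_declare(queue="orders", durable=True)
channel.basic_publish(exchange="", routing_key="orders", body=b"order created")

connection.close()
```

A consumer service would declare the same queue and call basic_consume, which is what decouples the producer from whoever eventually processes the message.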
ActiveMQ is message-oriented middleware: software that handles messages across applications. It acts as a broker that facilitates asynchronous communication patterns such as publish-subscribe and message queues. The main goal of such a broker is to provide a scalable and reliable message bus that different components can use to communicate with each other.
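As a rough sketch of the message-queue pattern, the snippet below sends a message to an ActiveMQ queue over the STOMP protocol using the Python stomp.py client; the broker address, credentials, and queue name are assumptions for illustration.

```python
import stomp  # pip install stomp.py

# Connect to an ActiveMQ broker assumed to expose STOMP on localhost:61613.
conn = stomp.Connection(host_and_ports=[("localhost", 61613)])
conn.connect("admin", "admin", wait=True)  # default dev credentials, illustrative

# Any consumer subscribed to /queue/events will receive this asynchronously.
conn.send(destination="/queue/events", body="user signed up")

conn.disconnect()
```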
Managing the security of your Amazon Web Services (AWS) environment requires constant vigilance. Your strategy should include identifying potential threats to your environment and proactively monitoring for vulnerabilities and system weaknesses that malicious actors might exploit. In a complex environment—such as your AWS account with a multitude of services, coupled with various architectures and applications—the ideal solution should be both comprehensive and straightforward.
We’re excited to announce updates to Sumo Logic AWS Quick Start Integrations that enable customers to automate the integration of AWS Security Reference Architecture within Sumo Logic Cloud SIEM powered by AWS. The new integrations automate the collection, ingestion, and analysis of applications, infrastructure, security, and IoT data to derive actionable insights for security engineering teams.
Let's take a look at why and how you should be closely monitoring your Windows server environments from a security perspective. We'll investigate the types of logs, events, and other actions you should consider. Finally, we'll look at how you can consolidate monitoring into a central dashboard and automate many of the tedious aspects of Windows security monitoring.
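As a small taste of what that monitoring looks like in practice, here is a sketch that uses PowerShell's Get-WinEvent to pull recent failed logons (event ID 4625) from the Security log; the one-day lookback and the fields selected are illustrative choices.

```powershell
# Failed logon attempts (event ID 4625) from the last 24 hours.
# Reading the Security log requires an elevated session.
Get-WinEvent -FilterHashtable @{
    LogName   = 'Security'
    Id        = 4625
    StartTime = (Get-Date).AddDays(-1)
} | Select-Object TimeCreated, Id, Message -First 20
```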
Enterprise SOCs are becoming a crucial part of how most organizations manage security, driven by increasing digitization and interconnectivity. SOCs play a major role in monitoring, managing, and responding to security alerts within a company's daily operations. As cyber attacks have become more sophisticated, the requirements for SOCs have changed due to increased data volumes, the complexity of security ecosystem tools, and growing numbers of data sources and attack vectors. To stay efficient, SOCs need to expand their focus beyond log management and data analytics to more advanced capabilities such as automation, leveraging big data and AI for intelligent decision support, and increasing visibility through observability.
Application performance management (APM) and distributed tracing are practices that many teams have used for years to detect and mitigate performance issues within applications. While APM was born in the era of big single-host monoliths, distributed tracing is especially useful for distributed applications built on a microservices architecture, where tracing is critical for pinpointing the source of performance issues.
HAProxy is one of the fastest and most widely used load balancing solutions available today. If you're already using HAProxy, or if you're considering it for your environment, this is a great place to start. On this page, we discuss HAProxy logging and why logging is such a vital component of a load balancer implementation. We then take a deep dive into the logging HAProxy offers. Finally, you'll read about working with the HAProxy log format and how you can configure logging to better suit your needs.
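For a taste of what that configuration looks like, here is a minimal haproxy.cfg sketch that emits HTTP-formatted log lines to a local syslog listener; the syslog address and facility are illustrative assumptions.

```
global
    # Ship logs to a syslog daemon assumed to listen on localhost:514.
    log 127.0.0.1:514 local0 info

defaults
    log     global   # inherit the global log target
    mode    http
    option  httplog  # rich per-request log lines rather than bare TCP logs
```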
It's not every day that you get four CTOs of leading cloud companies in a discussion about security, the changing role of the security operations center (SOC), and how best to manage data, artificial intelligence (AI), and service providers in these challenging times. To close out the 2021 Modern SOC Summit, Christian Beedgen, Sumo Logic's CTO, hosted a discussion with Peter Silberman, CTO at Expel.io, Scott Lundgren, CTO at Carbon Black, and Todd Weber, CTO at Optiv.
In today's environment, security teams face a pervasive threat landscape and must expect that some threat actors will succeed in bypassing perimeter defenses. To deal with this, security teams must learn to actively hunt down threats, both outside and inside the perimeter, using solutions such as Sumo Logic's Cloud SIEM Enterprise and Continuous Intelligence Platform.
In this article, we look at how to monitor Cassandra database clusters. We start with the basic architecture of a Cassandra cluster and the most important metrics to gather. Next, we advance step by step through configuring and setting up a monitoring stack with Jolokia, Telegraf, and Sumo Logic collectors and dashboards – everything you need to monitor Cassandra databases.
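To give a flavor of that stack, here is a minimal Telegraf sketch that scrapes one Cassandra JMX metric through a Jolokia agent; the agent URL and the single MBean shown are illustrative assumptions, and a real setup would collect many more.

```toml
# Scrape Cassandra JMX metrics through a Jolokia agent assumed on port 8778.
[[inputs.jolokia2_agent]]
  urls = ["http://localhost:8778/jolokia"]

  # Live data size on disk; one of many Cassandra MBeans worth collecting.
  [[inputs.jolokia2_agent.metric]]
    name  = "cassandra_storage"
    mbean = "org.apache.cassandra.metrics:type=Storage,name=Load"
```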
Many of our customers today leverage Office 365 GCC High, including organizations looking to meet evolving requirements for working with the United States Department of Defense. Sumo Logic's out-of-the-box monitoring and analytics capabilities let these customers analyze Office 365 GCC High data, giving security engineers and security analysts stronger situational awareness of internal employee activity.
Security and IT teams may be loath to admit it, but security has historically been a mostly reactive affair. Security engineers monitored for threats and responded when they detected one. They may also have taken steps to harden their systems against breaches, but they didn't proactively fight the threats themselves.
Threat hunting is emerging as a must-have addition to cybersecurity strategies. By enabling organizations to find and mitigate threats before they ever touch their networks or systems, threat hunting provides the basis for a more proactive security posture – and one that delivers higher ROI on security tools and processes.
Companies generate data at an exponential rate, and the task of analyzing that data to produce relevant security insights can be overwhelming. With evolving market dynamics and threat landscapes, security teams have a greater need for integrated, scalable monitoring that provides real-time, meaningful insight into the state of their organization's security posture.
Since 2010, it has been Sumo Logic's mission to democratize machine data. Naturally, we tend to focus on the outcomes: reliable and secure applications and systems that are the engines of successful modern businesses. But to drive these outcomes, and before the spotlight-hogging analytics kick in, algorithms require data. And this is where the magic starts: Sensu has spent the past decade championing a monitoring-as-code approach to building observability pipelines.
It's one thing to detect a cyber attack. It's another to know what the attackers are trying to do, which tactics they are using, and what their next move is likely to be. Without that additional information, it's difficult to defend effectively against an attack; you can't reliably stop one if you are unable to put yourself in the attackers' mindset. This is why threat intelligence plays a critical role in modern cybersecurity operations. Threat intelligence delivers the context about attackers' motives and methods that teams need to respond as effectively as possible to threats against their IT resources. Keep reading for a primer on what threat intelligence means, why it's important, and what to consider when implementing a threat intelligence strategy.
Observability is arguably the tech buzzword of the year. Whether or not you believe the hype, observability is all about how to ensure overall system health and deliver reliable customer experiences. This is done by observing the system, and when a problem arises, using real-time analytics to quickly help identify the what, where, and why of the problem.
Log analysis helps organizations determine the best way to optimize application functionality while giving development teams a leg up in root cause analysis. With that said, it’s not feasible to scroll through thousands of lines of log entries in a text editor. Instead, development teams need modern tools that enable them to centralize, filter, and analyze their logs in a way that allows them to glean valuable insights in a time-efficient manner.
Kubernetes, as a platform, is a comprehensive set of tools for orchestrating containers at scale. It consists of a modular architecture of components, each with a defined purpose. For example, the scheduler finds the ideal node for a particular pod, and kube-proxy maintains the network rules on each node that route traffic to services.
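To see the scheduler's job concretely, the minimal pod spec below asks it to bind the pod only to nodes carrying a particular label; the label, pod name, and image are illustrative assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fast-storage-demo
spec:
  # The scheduler will only place this pod on nodes labeled disktype=ssd.
  nodeSelector:
    disktype: ssd
  containers:
    - name: app
      image: nginx:1.25
```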
Kubernetes is first and foremost an orchestration engine with well-defined interfaces that allow for a wide variety of plugins and integrations, making it the industry-leading platform in the battle to run the world's workloads. From machine learning to the applications a restaurant needs, Kubernetes has proven it can run just about anything.
“Americans have their minds wrapped around a two-party system. It is hard to get people to envision something different — despite the fact that there have been tectonic changes in the American political parties at many different junctures in our history. Building a new political party from scratch feels daunting and naïve.”
Kubernetes is an open-source container management system developed by Google and made available to the public in June 2014. The goal is to make deploying and managing complex distributed systems easier for developers interested in Linux containers. It was designed by Google engineers experienced with writing applications that run in a cluster.
Logging as a Service is a simple, highly scalable solution for managing all of your logs in a central, systematic way. With Logging as a Service, a cloud-based log aggregation tool collects log data from across your infrastructure, consolidates it in a central location, and provides the analytics and visualization tools you need to understand the data in your logs.
Persistence is effectively an attacker's ability to maintain access to a compromised host through intermittent network access, system reboots, and (to a certain degree) remediation activities. Whether attackers can compromise a system or network and successfully carry out their objectives typically depends on their ability to maintain some form of persistence on the target system or network.
Compared to even just a few years ago, the tools available for data scientists and machine learning engineers today are of remarkable variety and ease of use. However, the availability and sophistication of such tools belies the ongoing challenges in implementing end-to-end data analytics use cases in the enterprise and in production.
Customers regularly ask me what types of data sources they should be sending to their SIEMs to get the most value out of the solution. These conversations are often driven by customers who are locked into a SIEM product where they pay more for consumption. More log data equals more money, so enterprises have to make a difficult choice about which log sources and data they guess are the most important. This often leads to blind spots from a logging perspective and forces your analysts to pivot to other tools and consoles to get whatever additional context and detail they can during an investigation.
Unless you've been living under a rock, you are probably familiar with the recent Shadow Brokers data dump of the Equation Group tools. That release included a precision SMB backdoor called Double Pulsar. The backdoor is installed by exploiting the recently patched Windows vulnerability CVE-2017-0143.
Edge computing is likely the most interesting segment of the broader world of IoT. If IoT is about connecting all the devices to the Internet, edge computing is about giving more processing power to devices at the edge. Edge computing views these edge devices as mini clouds or mini data centers: each has its own mini servers, mini networking, mini storage, apps running on top of this infrastructure, and endpoint devices. Rather than sending data to the cloud for processing and receiving already-processed data from a central hub in the cloud, in edge computing all the processing happens on, or close to, the edge device itself.
A type of credential reuse attack known as credential stuffing has recently been observed in higher numbers across industry verticals. Credential stuffing is the automated probing of, and access to, online services using credentials that typically come from data breaches or are bought in the criminal underground.
An ever-increasing number of organizations are working in the cloud, and which cloud delivery model they use depends on their business model. The three most common delivery models for cloud services are software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS).
In this post, we continue our discussion of use cases involving account takeover and credential access in enterprise data sets. In the first part of this series, we defined a VIP account as any account that has privileged or root-level access to systems or services. These VIP accounts are important to monitor for changes in behavior, particularly because they have critical access to key parts of the enterprise. As a follow-up to our first post, this blog describes a real-time approach for automatically profiling VIP accounts and detecting when they are potentially being misused.
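To make the idea concrete, here is a purely hypothetical sketch (not Sumo Logic's actual implementation) of one way to baseline an account's login hours and flag statistical outliers; the thresholds and the choice of login hour as the behavioral feature are illustrative.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical illustration only: per-account baseline of login hours (0-23).
# Hour-of-day is treated linearly for simplicity, ignoring midnight wraparound.
login_history = defaultdict(list)

def record_login(account: str, hour: int) -> None:
    login_history[account].append(hour)

def is_suspicious(account: str, hour: int, z_threshold: float = 3.0) -> bool:
    """Flag a login whose hour deviates strongly from the account's baseline."""
    hours = login_history[account]
    if len(hours) < 30:   # not enough history to judge yet
        return False
    mu, sigma = mean(hours), stdev(hours)
    if sigma == 0:        # perfectly regular account
        return hour != mu
    return abs(hour - mu) / sigma > z_threshold
```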
System administrators hold many key responsibilities within an IT organization. Most importantly, they must ensure that all systems, services, and applications are up, running, and performing as expected. When a system starts to lag or an application is down, the system administrators are called upon to troubleshoot and resolve the issue as quickly as possible to limit the impact on customers.
In a perfect world, computers would function properly on the network at all times. There would be no issues with the operating system and no problems with the applications. Unfortunately, this isn’t a perfect world. System failures can and will occur, and when they do, it is the responsibility of system administrators to diagnose and resolve the issues. But where can system administrators begin the search for solutions when problems arise? The answer is Windows event logs.
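As a starting point, the one-liner below reads the newest entries from the System log with PowerShell's Get-WinEvent; the log name and entry count are illustrative choices.

```powershell
# The 20 newest System log entries, where OS and driver problems surface first.
Get-WinEvent -LogName System -MaxEvents 20 |
    Select-Object TimeCreated, LevelDisplayName, ProviderName, Message
```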
The last fifteen years have seen huge increases in developer productivity for several reasons, including the arrival of open source into the mainstream and the ability to better emulate target environments. In addition, the process of resetting a development environment back to the last known stable version has been vastly improved by Vagrant and then Docker.
Today's IT and DevOps teams have not one, but two, feature-rich open source Web servers to choose from: NGINX and Apache HTTP Server (which is often called simply "Apache"). At a high level, both platforms do the same core thing: Host and serve Web content. Both also offer comparable levels of performance and security.
Serverless computing is the latest, greatest thing in the technology world. Although the serverless concept has been around in one form or another for more than a decade, the introduction of serverless platforms from major cloud providers—starting with AWS Lambda in 2014—has brought serverless mainstream for the first time.
Serverless computing is becoming more popular as organizations look for new ways to deploy their applications in the cloud. With higher levels of abstraction, easier maintenance, a focus on high performance, and ephemeral workloads, serverless computing solutions like Lambda are finding a permanent place in the mix of cloud infrastructure options.
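To show how little scaffolding a serverless function needs, here is a minimal sketch of an AWS Lambda handler in Python; the function name and event fields are illustrative assumptions.

```python
import json

# Configured in Lambda as, e.g., "lambda_function.handler".
def handler(event, context):
    """Lambda invokes this once per event; there is no server to manage."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```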
The principles of data protection are the same whether your data sits in a traditional on-prem data center or in a cloud environment. The way you apply those principles, however, is quite different when it comes to cloud security vs. traditional security. Moving data to the cloud introduces new attack surfaces, threats, and challenges, so you need to approach security in a new way.
Database security refers to the various measures organizations take to ensure their databases are protected from internal and external threats. Database security includes protecting the database itself, the data it contains, its database management system, and the various applications that access it.
Internet security, in general, is a challenge that we have been dealing with for decades. It is a regular topic of discussion and concern, but a relatively new segment of internet security is getting the lion’s share of attention—internet of things (IoT). So why is internet of things security… a thing?
Microsoft Windows Internet Information Services (IIS) log files provide valuable information about the use and state of applications running on the web. However, it's not always easy to find where those files live so you can determine important aspects of app usage, like when requests to servers were made, by whom, and other user traffic concerns.
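If you are hunting for those files, a common starting point is IIS's default log directory; the sketch below lists the newest log files there, assuming the default path has not been reconfigured.

```powershell
# IIS writes W3C logs under this directory by default (one folder per site).
Get-ChildItem "$env:SystemDrive\inetpub\logs\LogFiles" -Recurse -Filter *.log |
    Sort-Object LastWriteTime -Descending |
    Select-Object FullName, LastWriteTime -First 5
```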
Here at Sumo Logic we've been talking a lot about the shift to Continuous Intelligence, and how software-centric companies and traditional organizations alike are disrupting traditional IT models. A newly commissioned white paper by the Enterprise Strategy Group digs into the future of full-stack system management in the era of digital business. The author, Stephen Hendrick, Principal Analyst for Application Development and Deployment, examines the opportunity and challenge IT faces as an active participant in creating new, digital business models: “The opportunity centers on IT's ability to create new business models and better address customer needs, while the challenge lies in its role as a disruptive force to established enterprises that underestimate the power and speed of IT-fueled change.”

Digital business models are fueling the growing acceptance of cloud computing and DevOps practices, resulting in new customer applications that are transforming many traditional markets; digital disruptors such as Amazon, AWS, Airbnb, Facebook, Google, Netflix, Twitter, and Uber spring to mind as common examples. However, the rise of cloud computing and continuous development and delivery practices also brings greater complexity and change within IT environments. Stephen discusses the emergence of technologies to address this trend. In addition, he introduces a Systems Management Reference Model to analyze the role and relevance of continuous intelligence technologies in increasing the adaptability of full-stack system management, thereby better serving the dynamic needs of the IT infrastructure and the business.

Stephen concludes: “Continuous intelligence brings together the best that real-time, advanced analytics has to offer by leveraging continuous real-time data to proactively support the evaluation of IT asset availability and performance within a highly secure environment. This approach reflects and is aligned with today's modern architecture for application development and deployment, which includes microservices and immutable infrastructure.”

Where does Sumo Logic fit into all of this? Quite simply, we believe Sumo Logic's purpose-built, cloud-native machine data analytics service was designed to deliver real-time continuous intelligence across the entire infrastructure and application stack. This in turn enables organizations to answer questions they didn't even know they had, by transforming the velocity, variety, and volume of unstructured machine data overwhelming them into rich, actionable insights that address and defuse complexity and risk.