
August 12, 2021 | Scott Fitzpatrick

How to troubleshoot Apache Cassandra performance using metrics and logs

In the era of data abundance, organizations need database systems that can effectively manage large quantities of data. For certain types of applications, an oft-considered option is Apache Cassandra. Like any other piece of software, however, Cassandra has issues that can impact performance. When this happens, it's critical to know where to look and what to look for in order to quickly restore service to an acceptable level.

Keep reading for a primer on Apache Cassandra, and some steps that can be taken to effectively isolate and resolve problems with cluster performance when they occur.

Apache Cassandra and Its Benefits

Apache Cassandra is an open-source NoSQL distributed database regularly utilized by organizations that need to collect and use massive amounts of data. Leveraged by both small companies and large enterprises, Cassandra provides several key benefits that make it a highly reliable and effective solution for many use cases:

Cassandra is scalable - Cassandra's node-based architecture enables it to scale with ease. This simplifies the process of increasing capacity and throughput as necessary.

Cassandra is fault-tolerant - With Cassandra, data can be replicated across nodes and data centers, helping to eliminate the possibility of a single point of failure. This provides a high level of fault tolerance. Consider the scenario in which node A goes down. If node B contains a replica of the data from the now-unavailable node A, the impact of the failure is lessened and availability is maintained. Replication is configured per keyspace, as sketched below.
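
As a minimal sketch of how that replication is configured (the keyspace name app_data and the data center names dc1 and dc2 are hypothetical), a keyspace can be created with a per-data-center replication factor:

```sh
# Hypothetical keyspace replicated 3 ways in each of two data centers.
# NetworkTopologyStrategy places replicas per data center, so losing a
# node (or even an entire data center) still leaves replicas to serve requests.
cqlsh -e "
  CREATE KEYSPACE IF NOT EXISTS app_data
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc1': 3,
    'dc2': 3
  };"
```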

Troubleshooting Problems with Cassandra Performance

In any distributed system, an important objective when searching for the root cause is to narrow down where the problem is occurring. In the context of Cassandra, this means identifying the instance or instances (node or nodes) causing the problem. It's possible that the issue affects the entire cluster. But it's also possible that the problem is confined to a single data center, or to a specific set of nodes on which the same data has been replicated. Without knowing exactly where the problem is occurring, it can be difficult to gather the details necessary to formulate a fix.

An effective strategy for understanding the scope of a problem in a Cassandra cluster is to monitor and leverage metrics data for insight into the issue at hand. Metrics narrow the search, enabling more targeted log analysis and an easier path toward identifying the root cause.

Metrics and Cassandra

Cassandra furnishes users with myriad metrics that can be of great value in incident response. Monitoring these metrics makes it easier to identify problems with cluster performance, as well as where within the cluster they are occurring.

Some metrics made available by Cassandra include client request metrics, which provide insight into timeouts, failures, request statistics (type, latency, throughput), and more. Table metrics help track the performance of individual tables within the distributed database. Similarly, keyspace metrics provide insight into performance for each keyspace.

These metric categories represent just a small portion of what's available and valuable for monitoring and incident response. For a more complete list of Cassandra metrics, their descriptions, and the data types associated with each, see the official Cassandra monitoring documentation.
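
Many of these metrics are exposed over JMX and can also be sampled quickly from the command line via the nodetool utility described in the next section. A minimal sketch, assuming a locally running node; my_keyspace and my_table are placeholder names:

```sh
# Client request metrics: read/write latency percentiles seen by this coordinator
nodetool proxyhistograms

# Table metrics: per-table latency, SSTable count, disk space used, and more
nodetool tablestats my_keyspace.my_table

# Distribution histograms (latency, partition size) for a single table
nodetool tablehistograms my_keyspace my_table
```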

Cassandra and nodetool

Additionally, Cassandra ships with a utility known as nodetool. This command-line tool provides commands for viewing node and cluster status, viewing compaction information, gaining visibility into node-level statistics (such as load, memory usage, and cache effectiveness), and more. It can prove useful in debugging, enabling team members to quickly view information that may help narrow problems down to specific Cassandra instances.
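
A few commands that often help when scoping a problem (a sketch assuming nodetool can reach the node's JMX port, which is 7199 by default):

```sh
# Cluster overview: which nodes are up or down, their load, and ownership
nodetool status

# Single-node summary: uptime, heap usage, and cache hit rates
nodetool info

# Thread pool statistics: pending or blocked tasks often indicate saturation
nodetool tpstats

# Compactions in progress and an estimate of remaining work
nodetool compactionstats
```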

Log data and Cassandra

All in all, careful monitoring and analysis of performance metrics can help to identify and isolate performance problems within a Cassandra cluster. Once this has been done, targeted analysis of Cassandra log data can assist in narrowing down the root cause.

Cassandra writes to several log files that can be of great help when troubleshooting problems, including system.log, debug.log, and gc.log. Uncaught exceptions, information about table or keyspace alterations, details about compactions, and more can be gathered by reviewing the messages in these log files. For a more in-depth look at Cassandra logging and what each of these log files contains, take a look at the Apache Cassandra logging documentation.
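
As a starting point, scanning for warnings and errors often surfaces a problem quickly. A minimal sketch, assuming a package-based install where logs land in /var/log/cassandra (tarball installs write to the logs/ directory under the installation root):

```sh
LOG_DIR=/var/log/cassandra

# Uncaught exceptions and other trouble are logged at WARN or ERROR level
grep -E "WARN|ERROR" "$LOG_DIR/system.log" | tail -n 50

# GCInspector entries flag long garbage-collection pauses
grep "GCInspector" "$LOG_DIR/system.log"

# debug.log adds lower-level detail, such as compaction and flush activity
grep -i "compaction" "$LOG_DIR/debug.log" | tail -n 20
```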

Sumo Logic for Apache Cassandra

Developers and IT folks know that logs and metrics tell the story when things go awry. This holds true for Apache Cassandra. That said, as is often the case with any distributed system, monitoring performance and performing root cause analysis is easier when using a tool built to centralize and visualize metrics and log data. For Cassandra, Sumo Logic has an app for just that. With step-by-step instructions for setting up log and metrics collection, along with pre-packaged dashboards within the app itself, Sumo Logic makes it easier than ever to monitor your Cassandra cluster and troubleshoot performance issues should they arise.


Scott Fitzpatrick

Scott Fitzpatrick is a Fixate IO Contributor and has nearly 8 years of experience in software development. He has worked with many languages and frameworks, including Java, ColdFusion, HTML/CSS, JavaScript and SQL.

