
December 17, 2020 | Chris Tozzi

How to monitor Amazon Aurora RDS logs and metrics

Aurora, a hosted relational database service available on the Amazon cloud, is a popular choice for teams that want MySQL- and PostgreSQL-compatible tooling without running an actual MySQL or PostgreSQL database.

In order to leverage Aurora’s benefits fully, it’s critical to log and analyze the various types of monitoring data that are available from an Aurora environment. Because Aurora generates multiple categories of log data and makes it available in different locations, you must have a comprehensive logging and monitoring strategy in place to keep track of all aspects of Aurora performance and availability.

In this article, we walk through the steps required to log and monitor Amazon Aurora as well as discuss several best practices for getting the most value out of Aurora log data.

What Is Amazon Aurora?

Amazon Aurora is a cloud service that allows users to store data within a relational database format. It is part of Amazon Relational Database Service (RDS).

One of the main features of Aurora is that it is compatible with MySQL and PostgreSQL. This doesn’t mean that Aurora databases are simply MySQL or PostgreSQL databases, though; Aurora is a proprietary solution and is different from MySQL or PostgreSQL. However, because Aurora is compatible with both MySQL and PostgreSQL, users can manage Aurora databases using the same tools (such as the mysql CLI client or graphical MySQL management interfaces like Workbench) that they would use when working with a traditional MySQL or PostgreSQL database.
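To make this compatibility concrete, here is a minimal sketch of how an Aurora MySQL-compatible cluster could be reached with a standard MySQL client library. The endpoint, user, and database names are hypothetical placeholders, and the helper simply assembles the same connection arguments any plain MySQL client would accept:

```python
# Sketch: connecting to an Aurora MySQL-compatible cluster with ordinary
# MySQL tooling. No Aurora-specific driver is needed; the endpoint below
# is a hypothetical example.

def aurora_connection_params(endpoint, user, password, database):
    """Build the keyword arguments a standard MySQL client (e.g. PyMySQL)
    would accept for an Aurora cluster endpoint."""
    return {
        "host": endpoint,   # the Aurora cluster endpoint, used like any MySQL host
        "port": 3306,       # Aurora MySQL listens on the standard MySQL port
        "user": user,
        "password": password,
        "database": database,
    }

params = aurora_connection_params(
    "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    "admin", "secret", "mydb",
)

# With a MySQL client library installed, connecting looks exactly like
# connecting to plain MySQL, e.g.:
#   import pymysql
#   conn = pymysql.connect(**params)
```

The same idea applies to PostgreSQL-compatible Aurora clusters with a standard PostgreSQL client and port 5432.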

Aurora offers additional features that distinguish it from a stock cloud-based MySQL or PostgreSQL database service. It automatically increases storage allocations as databases grow, which eliminates the need for users to manage capacity planning on their own. It also automatically replicates data across multiple Amazon availability zones to provide high availability. And, according to Amazon, Aurora can achieve performance rates up to five times faster than those of generic MySQL and three times faster than PostgreSQL.

You can create genuine MySQL and PostgreSQL databases on Amazon RDS if you wish, but Aurora may be a better choice for some users based on the capacity planning, availability, and performance benefits described above.

Why Monitor and Log Aurora?

Although Aurora is designed to offer enhanced performance and availability compared to other types of databases, it is by no means immune to potential problems. Your Aurora databases could be disrupted by external failures, such as a DDoS attack that disrupts the availability of the Aurora service. Or, you could suffer from internal problems within your Aurora environment, like data corruption that makes parts of your database unusable or a configuration issue that leads to poor performance.

To safeguard against these risks, monitoring and logging all available data from both the Aurora service and your individual Aurora databases is crucial. Logging and monitoring will help you identify problems before they turn into serious disruptions. They may also give you insight into optimizations you can make to increase the availability and performance of your Aurora databases.

What Do Aurora Logs Monitor?

There are multiple monitoring and logging streams associated with Aurora environments and databases. Each one lets you log or monitor different types of information.

The main logging and monitoring streams for Aurora include:

  • CloudWatch alarms: Users can configure alarms using CloudWatch (Amazon’s monitoring and alerting tool) that allow them to monitor various metrics associated with Aurora databases, such as CPU utilization and I/O activity. If the metrics reach a certain threshold, CloudWatch will send an alert. CloudWatch is therefore useful for maintaining visibility into potential performance or availability issues with Aurora.
  • CloudWatch enhanced metrics: Optionally, you can enable enhanced metrics for Aurora. Enhanced metrics collect the same general types of data as the CloudWatch alarms described above, but they collect it from individual Aurora instances rather than from the virtual machine hypervisor that hosts the instances. As a result, enhanced metrics may be more accurate, especially if you have many Aurora instances running. Amazon charges additional fees if you enable enhanced metrics, however.
  • CloudTrail logs: Amazon automatically records API requests made to or from the Aurora service and stores these records as events in CloudTrail (Amazon’s cloud auditing tool). You can use CloudTrail to track user requests and actions related to your Aurora databases. You can also see which other Amazon cloud services have interacted with them.
  • Database logs: Aurora creates logs for individual databases. The logs can be downloaded through the Amazon Console or the AWS CLI; you can also collect them using the Amazon RDS API. The database logs provide visibility into the state of your databases and can help you find and troubleshoot problems like data read and write errors.
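As a concrete sketch of the first bullet, a CloudWatch alarm on Aurora CPU utilization can be defined programmatically. The instance identifier and threshold below are hypothetical examples; the helper builds the parameter set that boto3's `put_metric_alarm` call accepts:

```python
# Sketch: defining a CloudWatch alarm on an Aurora instance's CPU usage.
# The instance name and threshold are hypothetical; with boto3 installed
# and AWS credentials configured, the dict would be passed to
# cloudwatch.put_metric_alarm(**alarm).

def aurora_cpu_alarm(db_instance_id, threshold_pct=80.0):
    """Build put_metric_alarm parameters for an Aurora instance's CPUUtilization."""
    return {
        "AlarmName": f"{db_instance_id}-high-cpu",
        "Namespace": "AWS/RDS",              # RDS and Aurora metrics live in this namespace
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "DBInstanceIdentifier", "Value": db_instance_id}],
        "Statistic": "Average",
        "Period": 300,                       # evaluate 5-minute averages
        "EvaluationPeriods": 2,              # alert after two consecutive breaches
        "Threshold": threshold_pct,
        "ComparisonOperator": "GreaterThanThreshold",
    }

alarm = aurora_cpu_alarm("my-aurora-instance")
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```

Requiring two consecutive five-minute breaches is one way to avoid alerting on brief CPU spikes; tune `Period` and `EvaluationPeriods` to your workload.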

There are additional ways to collect and use monitoring data from Aurora environments, such as using the Amazon Simple Notification Service to configure additional types of alerts. For a comprehensive description of the Aurora monitoring and logging streams that are available, see the Amazon documentation.
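For instance, an SNS-backed RDS event subscription can route Aurora cluster events (failovers, failures, maintenance windows) to a topic. The subscription name, topic ARN, and cluster identifier below are hypothetical; the helper assembles the parameters boto3's `create_event_subscription` call accepts:

```python
# Sketch: subscribing an SNS topic to Aurora cluster events. All names
# and ARNs below are hypothetical placeholders.

def aurora_event_subscription(name, topic_arn, cluster_id):
    """Build create_event_subscription parameters for an Aurora cluster."""
    return {
        "SubscriptionName": name,
        "SnsTopicArn": topic_arn,
        "SourceType": "db-cluster",          # Aurora databases are clusters
        "SourceIds": [cluster_id],
        "EventCategories": ["failover", "failure", "maintenance"],
    }

sub = aurora_event_subscription(
    "aurora-alerts",
    "arn:aws:sns:us-east-1:123456789012:aurora-alerts",
    "my-aurora-cluster",
)
# import boto3
# boto3.client("rds").create_event_subscription(**sub)
```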

How Do I Monitor My Aurora Database?

Because there are multiple logging and monitoring streams available in Aurora, there is no single way to monitor an Aurora database using native Amazon tools. Instead, the best approach will combine multiple monitoring paths and tools.

Consider the following best practices for Aurora monitoring:

  • Set up CloudWatch alarms to ensure you receive notifications when your Aurora databases are maxing out resource availability or exhibiting unusual behavior, such as excessive I/O operations.
  • Store the Aurora metrics data that you collect via CloudWatch as logs. By default, CloudWatch will only generate alarms when the metrics reach predefined thresholds. If you want to store metrics data over the long term rather than simply monitor it in real time, you’ll need to export the metrics from CloudWatch to an external location. You can export to an S3 storage bucket, or you can ingest the data directly into a logging tool like Sumo Logic.
  • Analyze database logs using a third-party analytics tool. This is important because Amazon primarily provides monitoring and alerting solutions (in the form of CloudWatch and CloudTrail) that deal only with the metrics generated by the Aurora service itself. If you want to monitor the internal health of your databases, you’ll need to analyze the database logs using an external tool that can understand MySQL- and PostgreSQL-compatible database logs.
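To illustrate the last practice, database logs can be pulled through the RDS API before being shipped to an external analytics tool. The instance name below is hypothetical, and the function takes an RDS client object so it maps directly onto boto3's `describe_db_log_files` and `download_db_log_file_portion` calls:

```python
# Sketch: retrieving Aurora database logs via the RDS API so they can be
# forwarded to an external log analytics tool. Requires boto3 and AWS
# credentials to run against a real instance; the instance name is a
# hypothetical placeholder.

DB_INSTANCE = "my-aurora-instance"

def download_all_logs(rds_client, db_instance_id):
    """Yield (filename, text) for each log file on the given instance."""
    files = rds_client.describe_db_log_files(
        DBInstanceIdentifier=db_instance_id
    )["DescribeDBLogFiles"]
    for f in files:
        # Without a Marker argument this fetches the most recent portion
        # of the file; large files can be paged through with Marker.
        portion = rds_client.download_db_log_file_portion(
            DBInstanceIdentifier=db_instance_id,
            LogFileName=f["LogFileName"],
        )
        yield f["LogFileName"], portion["LogFileData"]

# import boto3
# for name, text in download_all_logs(boto3.client("rds"), DB_INSTANCE):
#     ...  # forward `text` to your log analytics pipeline
```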

Monitoring Aurora with Sumo Logic Unified Logging and Metrics

If you don’t want the hassle of juggling multiple native and third-party monitoring tools and data streams in order to manage Aurora logs, an alternative approach is to use a unified logging and metrics (ULM) solution like Sumo Logic.

Sumo Logic automatically collects metrics from across your Aurora environment, consolidates them in a single location, and gives you analytics tools for interpreting the data. This approach eliminates the need to collect data through multiple individual tools like CloudWatch and CloudTrail. It also helps you to correlate relevant events and performance indicators across different logging streams in ways that would be very difficult to achieve if you were analyzing the data manually.

And, because Sumo Logic offers full support for both MySQL- and PostgreSQL-compatible Aurora databases, you can use the same log collection and monitoring solution no matter which types of databases you decide to run on Aurora.

To see for yourself how Sumo Logic can simplify Aurora logging and monitoring, sign up for a free trial.


Chris Tozzi

Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, agile infrastructure, and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO. His latest book, For Fun and Profit: A History of the Free and Open Source Software Revolution, was published in 2017.

