Oracle® Performance on NVMe™ – Test Results and Guide

By Shailendra Tripathi, Fellow, Filesystem Development Engineering, and Ganesh Balabharathi, Sr. Technologist, Performance Engineering

The NVMe protocol, devices, and arrays are bringing very low latency and extreme performance to the data center. We wanted to test our IntelliFlash™ NVMe array to assess if and how real-world applications can exploit these performance levels. Although application performance naturally benefits from such improvements, the advantage gained varies significantly from application to application, driven largely by the application's own characteristics and its inherent scalability.

We selected Oracle, a very popular enterprise database, to evaluate application-level performance. This blog shares our test results and the insights gained from looking at online transaction processing performance, and focuses on the random read and write performance of the array under varying workloads.

The IntelliFlash N5800 Array

The IntelliFlash N5800 NVMe array provides fantastic performance on all three typical measures – high random read/write IOPS, high aggregate throughput, and low latency. As recently announced, we have tuned IntelliFlash OS 3.10 with improved data pathways to get the most from the NVMe-to-flash working relationship and deliver “Best in Class” random performance for mid-range, full-featured NVMe all-flash arrays. The IntelliFlash N5800 delivers up to 2x improvement in random I/O performance and as much as 67% lower latency over SAS SSD arrays in typical I/O benchmarking tools such as FIO, IOMeter, and VDBench.

Test Configuration

Our set of tests was geared to measure online transaction processing (OLTP) performance. A real-world application benchmark, SLOB (https://kevinclosson.net/slob/), was used to measure OLTP performance. Four separate tests were performed by changing the SQL update percentage (a hypothetical harness sketch follows the list):

  • 0% SQL updates (100% read) test
  • 20% SQL updates (+ 80% reads) test
  • 50% SQL updates (+ 50% reads) test
  • 100% SQL updates test
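In SLOB, the share of SQL UPDATE statements is typically controlled by the UPDATE_PCT setting in slob.conf. The Python sketch below is a hypothetical harness, not our actual test scripts; the SLOB directory path, the minimal slob.conf contents, and the runit.sh invocation are assumptions made purely for illustration.

```python
import subprocess
from pathlib import Path

# The four workload mixes tested: 0%, 20%, 50%, and 100% SQL updates.
UPDATE_PERCENTAGES = [0, 20, 50, 100]
SESSIONS = 256          # total users driven in the test
RUN_TIME_SECONDS = 900  # 15-minute measurement window

SLOB_CONF_TEMPLATE = """\
UPDATE_PCT={update_pct}
RUN_TIME={run_time}
"""

def run_slob_test(update_pct: int, slob_dir: Path = Path("/opt/SLOB")) -> None:
    """Write a minimal slob.conf for the given mix and run it five times.

    The directory layout, conf contents, and runit.sh call are illustrative.
    """
    conf = SLOB_CONF_TEMPLATE.format(update_pct=update_pct,
                                     run_time=RUN_TIME_SECONDS)
    (slob_dir / "slob.conf").write_text(conf)
    for _ in range(5):  # each data point was repeated 5 times
        subprocess.run(["./runit.sh", str(SESSIONS)], cwd=slob_dir, check=True)

if __name__ == "__main__":
    for pct in UPDATE_PERCENTAGES:
        run_slob_test(pct)
```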

The goal of this configuration was to examine performance levels as the SQL updates vary from one extreme workload to another, representing all potential use cases of the database.

The test was conducted on an IntelliFlash N5800 array connected to two physical clients running individual database instances. Each client is a dual-socket system with Intel® Xeon® Gold 6130 CPUs (2.1 GHz, 16 cores each, hyperthreading enabled) and 512GB of physical memory in total.

In the test configuration, however, the memory assigned to the Oracle DB was only 96GB, in order to reduce DRAM cache hits on the client side. The DB was loaded with enough users to create close to 2TB of data per database instance. The actual tests were run for 15 minutes with a total of 256 users. The tests were repeated 5 times for each data point to check for variability in the results; the results were consistent in each run. The last test, the 100% update case, was run for a longer time to assess whether the results hold up over time. The client and array test-bed configuration is represented in the diagram below.
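As a rough check of that memory sizing choice, the back-of-the-envelope sketch below shows why a 96GB database cache against roughly 2TB of data forces most reads out to the array. It ignores block re-reads, indexes, and undo, so the observed cache hit rate will be higher than this raw ratio.

```python
# Back-of-the-envelope: how much of the working set can the DB cache hold?
db_cache_gib = 96            # memory assigned to each Oracle instance
working_set_gib = 2 * 1024   # roughly 2TB of data loaded per instance

max_coverage = db_cache_gib / working_set_gib
print(f"At most ~{max_coverage:.1%} of the data can be cached")  # ~4.7%
```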

Test Results Summary – Oracle Performance on NVMe Array

Before capturing performance results, the memory assigned to the DB was validated against the cache hit rate. The database cache hit rate on the client is about 18%; it was intentionally kept this low to force more I/Os to hit the array.
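For reference, Oracle's buffer cache hit ratio is conventionally derived from v$sysstat counters. The sketch below shows that calculation; the sample counter values are made up and chosen only to land near the ~18% reported above.

```python
def buffer_cache_hit_ratio(physical_reads: int,
                           db_block_gets: int,
                           consistent_gets: int) -> float:
    """Conventional Oracle buffer cache hit ratio from v$sysstat counters."""
    logical_reads = db_block_gets + consistent_gets
    return 1.0 - physical_reads / logical_reads

# Hypothetical counter values, not figures from the actual test run.
print(f"{buffer_cache_hit_ratio(8_200_000, 400_000, 9_600_000):.0%}")  # ~18%
```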

The OLTP transaction numbers along with the latencies are shown in the figure below.

Latency: The diagram depicts latency in microseconds at the application level. For mixed workloads, the latency is the weighted average of the read and write latencies. Most remarkable is the latency profile of the tests, with all results close to or below 400 microseconds.
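As a concrete illustration of that weighted average, the small sketch below blends the read and write latencies reported for the 20% update test. The 80/20 split is an approximation of the application-level mix, not a figure taken from the SLOB report.

```python
def weighted_latency_us(read_fraction: float,
                        read_latency_us: float,
                        write_latency_us: float) -> float:
    """Blend read and write latencies by their share of the I/O mix."""
    return read_fraction * read_latency_us + (1.0 - read_fraction) * write_latency_us

# 20% SQL update test: 389us reads, 290us writes, treated as ~80/20 read/write.
print(round(weighted_latency_us(0.80, 389, 290)))  # ~369 microseconds
```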

Performance: You can see that performance ranges from 1.4 million IOPS for the 100% read test to 667K IOPS for the 100% SQL update test. The figure also includes the FIO numbers for comparison with the Oracle DB application numbers. FIO was run on the same setup with 4 LUNs per client at an 8K record size, using profiles created to mimic the SLOB mixes; the 20% update test, for example, corresponds to a 20% random write + 80% random read profile in FIO.
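To give a rough idea of what such a profile looks like, the sketch below generates a hypothetical FIO job for the 20% update mix. The LUN device paths are placeholders and options such as queue depth and job count are omitted; refill_buffers is one common way to keep FIO's data stream incompressible, which matches the data-type difference noted in the next section.

```python
# A minimal sketch of an FIO job approximating the 20% SQL update mix
# (80% random read / 20% random write at an 8K record size).
FIO_JOB_TEMPLATE = """\
[global]
ioengine=libaio
direct=1
bs=8k
rw=randrw
rwmixread={read_pct}
time_based=1
runtime=900
refill_buffers=1

[lun1]
filename=/dev/mapper/lun1
[lun2]
filename=/dev/mapper/lun2
[lun3]
filename=/dev/mapper/lun3
[lun4]
filename=/dev/mapper/lun4
"""

def fio_job_for_update_pct(update_pct: int) -> str:
    """Map an SQL update percentage to its FIO read/write mix."""
    return FIO_JOB_TEMPLATE.format(read_pct=100 - update_pct)

if __name__ == "__main__":
    print(fio_job_for_update_pct(20))  # 80% random read / 20% random write
```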

In all the tests, CPU usage hit its limit on the array side; in other words, the filesystem software scales well with the number of cores.

Understanding the Results – Application Environment

The results show that at a 100% update rate, performance is about half (46.5%) of the 100% read performance. An SQL update is a read-modify-write operation, so at 100% updates the number of I/Os actually performed per operation is proportionately higher.

For example, each SQL update generated up to 2.2x the I/Os at the array level. A 20% SQL update mix therefore contributes at least 20% reads + 20% writes on top of the remaining 80% pure reads. In addition, internal data management incurs extra I/O during both read and write operations. In such workloads, writes are much more processing-heavy than reads (due to the additional work required for allocation, parity computation, compression, additional parity writes, and journaling for consistency). That is why, as update operations are added to the mix, the overall numbers appear to drop disproportionately from the application's perspective; a back-of-the-envelope sketch of this expansion follows.
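The sketch below is one way to see the I/O expansion. The write-amplification factor is chosen purely to reproduce the 2.2x figure quoted above and is not a measured value.

```python
def array_io_per_app_op(update_fraction: float,
                        write_amplification: float = 1.0) -> float:
    """Rough array-level I/Os generated per application-level operation.

    An SQL update is a read-modify-write, so it contributes one read plus
    one (amplified) write; a pure read contributes one read. The
    write_amplification factor stands in for metadata, parity, and journal
    I/O and is illustrative only.
    """
    reads = 1.0                                      # every operation reads the block
    writes = update_fraction * write_amplification   # only updates add writes
    return reads + writes

# 100% updates with some internal write amplification approaches the
# "up to 2.2x" expansion mentioned above.
print(array_io_per_app_op(1.0, write_amplification=1.2))  # 2.2
```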

There is one difference between FIO and Oracle: the FIO data chosen was not compressible, whereas the Oracle data was. As the results show, the application exploits the array's performance well, since compression allows more data to hit the highest-performance tier within the array architecture. Overall, the application performance pattern closely follows the trend seen in the FIO results; the remaining difference in I/O is largely due to the different numbers of actual reads and writes.

Detailed Results – I/O stack processing and SLOB

We also want to share detailed results that include two sets of data. The first, for each test, is the array performance view: it shows the 15-minute window during each test run and captures the actual I/Os received and the latency seen for the operations at the protocol level on the array. Hence, it represents the full I/O stack processing on the array side.

The second set of data is the actual report generated from the SLOB benchmark tool for each database instance.

0% SQL Update, 1.4M IOPS, 11.2 GiB/s, 396us R latency

Array View

20% SQL Update, 858K IOPS, 7.1 GiB/s, 389us R, 290us W latencies

Array View

For the mixed tests, write coalescing is a factor in the update operation latencies. As mentioned above, the write path has higher CPU overhead than the read path. Additionally, latencies from the drives are relatively higher when mixed workloads are present.

50% SQL Update, 780K IOPS, 6.8 GiB/s, 383us R, 431us W latencies

Array View

The write latencies are higher due to greater coalescing of the update operations at the array: the higher the coalescing, the higher the array-side latency for the write operation. However, the application will not see the same latency, because multiple clients' I/Os are batched together, so the amortized latency observed at the client side is lower.
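One way to picture this amortization is the toy model below (illustrative only, not the array's actual accounting): several client writes arriving within one coalescing window are committed together, so the coalesced operation spans the whole window, while each client waits only from its own arrival to the common commit.

```python
# Toy model of write coalescing: hypothetical arrival times of six client
# writes (in microseconds) that are all committed by a single batched write.
arrivals_us = [0, 60, 120, 180, 240, 300]
commit_us = 430                     # the batched write completes here

coalesced_span = commit_us - arrivals_us[0]             # array-side view of the operation
client_waits = [commit_us - t for t in arrivals_us]     # what each client observes

print(coalesced_span)                          # 430 us
print(sum(client_waits) / len(client_waits))   # 280.0 us on average
```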

100% SQL Update, 667K IOPS, 6.2 GiB/s, 347us W Latency

Array View

As explained above, with increasing updates, multiple operations are coalesced and written together, hence the higher write latency experienced. Additionally, the NTB latency (the log journal is mirrored to the other node over the NTB) increases non-linearly when stressed, which is reflected in the higher latencies for update operations.

Conclusion – Remarkable Oracle Performance on NVMe

NVMe arrays offer higher performance and provide very low-latency operations. We wanted to test whether this is reflected only in standard performance benchmarking tools, and how well real-world applications are able to exploit the performance of the IntelliFlash NVMe array.

We’re excited to see such outstanding performance results. This experiment focused on the random read and write performance of the array; particularly remarkable are the low-latency results, which should prove highly beneficial in day-to-day enterprise use cases.

Following this Oracle performance on NVMe testing, we will continue testing other real-world applications. Follow this blog (you can subscribe below) to read about our subsequent testing of an analytics-oriented use case, evaluated with the identical configuration.

Learn More

Visit our website to learn about the IntelliFlash all-NVMe array

To learn more about the NVMe protocol – read our technical guide

Read the solution brief: Accelerating Oracle Applications with IntelliFlash Arrays
