
ScimoreDB TPC-C benchmarks, compared with Clustrix and MySQL

2011-12-18 - Marius Slyzius

Recently, I had a chance to evaluate ScimoreDB on a single box running Windows Server 2008 R2, with 64 CPU cores, 256 GB of RAM, and a 100 GB Intel SSD. The machine was kindly provided by Bayonette AS in Norway as part of their new cloud effort, called Hydra.

First, I installed the database and ran a load test that executes a simple "SELECT 1" SQL command. To my surprise, the stress tool, running 60 threads, could not achieve more than 60K queries per second, while the server used only 6% CPU and 200 Mbps of network bandwidth. Since there seemed to be plenty of power left, I assumed that starting one more stress tool would double the throughput. With the second stress tool running, CPU usage rose to 35% and the network stayed at 200 Mbps, yet throughput was still the same 60K queries/sec. Something just ate 29% of the CPU! Going through the performance counters revealed that it was spin locks used by critical sections. Even when a critical section protects a block as small as a single IF statement, 64 threads, each running on a separate CPU, will sooner or later catch up with each other and collide: one thread ends up taking a heavy semaphore lock while the other 63 keep spinning and wasting CPU.

I found a quite good article by VoltDB describing why traditional databases have difficulty scaling. According to VoltDB, more than 90% of resources can end up being wasted on work that is not database-related. This gave me the idea to pursue a VoltDB-like path: instead of running one giant database per machine, run many distinct databases, each using fewer CPUs, joined into a single distributed database cluster.

To verify the theory, I installed a single database limited to 7 CPUs. This time, instead of executing SELECT 1, I executed a stored procedure that queries a table of 1 million rows by primary key. A single stress tool executed 90K queries/sec at 8% CPU. After launching an additional stress tool, throughput increased to 120K at 11% CPU. Now everything looked right, since CPU usage followed the throughput. Later I discovered it was a network bottleneck that prevented reaching higher numbers.
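For illustration, here is a minimal sketch (in Python, not the actual stress tool) of the kind of multi-threaded load loop described above: each worker hammers the server with "SELECT 1" and the aggregate queries/sec is reported at the end. The scimore_connect/execute calls are placeholders for whatever client driver you use, not a real ScimoreDB API.

import threading
import time

THREADS = 60                # concurrent client threads, as in the test above
DURATION_SEC = 10           # measurement window
counts = [0] * THREADS      # per-thread query counters, summed at the end

def worker(idx):
    # conn = scimore_connect("localhost")    # placeholder, not a real ScimoreDB API
    while not stop.is_set():
        # conn.execute("SELECT 1")           # placeholder for the real driver call
        counts[idx] += 1

stop = threading.Event()
threads = [threading.Thread(target=worker, args=(i,)) for i in range(THREADS)]
for t in threads:
    t.start()
time.sleep(DURATION_SEC)
stop.set()
for t in threads:
    t.join()

print(f"{sum(counts) / DURATION_SEC:.0f} queries/sec")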

So far these were simple tests, but how about something more realistic? The TPC-C benchmark seems like a good candidate, especially since we already have it ready and other database vendors run it too. As I had already learned, I can't use a single ScimoreDB instance; instead, I installed 8 instances (we also call them nodes) on the machine and joined them into a cluster. Each node was configured to use 15 GB of RAM and 7 CPUs. Next, I created the distributed TPC-C database, populated 400 warehouses, and restarted all nodes to warm up the data cache. Finally, I ran the TPC-C test application and compared the results with the MySQL and Clustrix TPC-C workloads created by Percona. The TPC-C client application tests with 64, 256, and 1024 concurrent threads (user sessions). For each set of user sessions, I gather the number of new order transactions (NOT) executed in 10 seconds, the same way Percona does.
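To make the measurement concrete, here is a rough sketch of how a NOT/10sec figure can be sampled: worker threads bump a shared counter on every committed new order, and a sampler reports the delta every 10 seconds. This is not the actual TPC-C client; new_order_txn() is a placeholder for a real NewOrder transaction against the cluster.

import threading
import time

committed = 0                      # committed NewOrder transactions so far
lock = threading.Lock()
stop = threading.Event()

def tpcc_worker():
    global committed
    while not stop.is_set():
        # new_order_txn()          # placeholder: run one TPC-C NewOrder transaction
        with lock:
            committed += 1

def sampler(intervals=3, window=10):
    last = 0
    for _ in range(intervals):
        time.sleep(window)
        with lock:
            now = committed
        print(f"NOT/{window}sec: {now - last}")
        last = now

SESSIONS = 64                      # repeat the run with 256 and 1024 sessions
workers = [threading.Thread(target=tpcc_worker) for _ in range(SESSIONS)]
for w in workers:
    w.start()
sampler()
stop.set()
for w in workers:
    w.join()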

Throughput comparison - new order transactions per 10 seconds (NOT/10sec), by number of concurrent threads:

System                      64      256     1024
Clustrix - 3 Nodes          2673    3298    3121
Clustrix - 6 Nodes          4583    7590    7022
MySQL - Intel SSD Server    1880    1946    1798
MySQL - HP / FusionIO       2608    2287    1554
ScimoreDB - 8 Nodes         3661    3850    3375

While ScimoreDB's throughput drops slightly at 1024 user sessions, the overall results still look quite good compared with MySQL and the 3-node Clustrix cluster.

ScimoreDB can scale quite well on a large server using the VoltDB-style approach. However, you need to be careful when choosing the right database setup. The rule of thumb: if the machine has more than 16 cores, install multiple nodes, each using 7 CPUs, and join them into a cluster. The client application can connect to any ScimoreDB node, or list the nodes in the connection string for load-balancing purposes.
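As a small illustration of the load-balancing idea, here is a sketch of a client that spreads new sessions across a node list in round-robin order. The host:port values are hypothetical, and the real connection-string syntax is whatever the ScimoreDB client driver documents; this only shows the client-side balancing pattern.

import itertools

# Hypothetical addresses for the 8 nodes installed on the box;
# substitute the real host:port values from your own cluster setup.
NODES = [f"127.0.0.1:{9100 + i}" for i in range(8)]
_rr = itertools.cycle(NODES)

def pick_node() -> str:
    """Return the next node in round-robin order for a new session."""
    return next(_rr)

# Each new client session gets a different node, spreading the load.
for session in range(4):
    print(f"session {session} -> {pick_node()}")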

I encourage everyone to try our TPC-C benchmarks.