
Piper Push Cache is Fast!

After a pass on performance improvements and stability, it turns out Piper is very fast at serving up static pages. A set of informal benchmarks against one of the industry leaders, nginx, showed surprising results.

Before going further, it should be pointed out that the benchmark was for static pages only. nginx is a mature product with many advanced features that Piper does not yet have. Out of the box, though, Piper was faster in the benchmarks run under the conditions described.

Goals

After finishing the performance improvements on Piper, the goal of the benchmark was to see how close its performance came to one of the industry leaders. Only a rough idea was needed, so out-of-the-box configurations would be fine. A small 100 byte file would be enough to exercise connection handling and the serving of a page over local connections. The hope was that Piper would at least be in the ballpark of nginx.

Setup

The setup was an old i7 920 at 2.67GHz running Ubuntu 14.04. It is an 8 CPU machine, so not a bad performer, but still not as fast as newer machines. The runs would all be local; the test was not of network speed but of the performance of the servers. The only change to the environment was that the ulimit was increased to 4096.
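The ulimit in question is presumably the open file descriptor limit, which caps how many sockets a process can hold open at once. For reference, the same adjustment can be made from inside a process; the sketch below is a minimal Go illustration assuming Linux, and is not part of either server or of the driver.

    // Raise the soft open-file limit so thousands of sockets can be held
    // open at once. The value 4096 matches the benchmark environment; the
    // soft limit cannot exceed the hard limit reported by Getrlimit.
    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        var lim syscall.Rlimit
        if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
            panic(err)
        }
        lim.Cur = 4096 // soft limit
        if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
            panic(err)
        }
        fmt.Printf("open file limit: %d (hard limit %d)\n", lim.Cur, lim.Max)
    }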

The test driver had to handle a lot of simultaneous connections to be a reasonable test. After looking around, it seemed best to write a simple multi-threaded driver that could handle a variable number of open connections per thread. Each thread attempts to keep its designated number of connections open with repeated GET requests. Based on the runs, it is more than capable of pushing the servers to their limits.
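The actual driver is not shown here, but a minimal sketch of the same idea follows, written in Go as an illustration rather than as the driver used for these numbers. Each worker keeps a fixed number of keep-alive connections open and cycles through them issuing GETs for a small static file. The address and path are placeholders, and the per-request timing and timeout handling needed for latency and missing-response tracking are omitted.

    // Minimal sketch of a benchmark driver: each worker ("thread") opens a
    // fixed number of keep-alive connections and loops over them, sending a
    // GET and reading the response on each, until the run length expires.
    package main

    import (
        "bufio"
        "flag"
        "fmt"
        "io"
        "net"
        "strconv"
        "strings"
        "sync"
        "sync/atomic"
        "time"
    )

    var completed int64 // total GET requests answered across all workers

    func main() {
        addr := flag.String("addr", "127.0.0.1:8080", "server address (placeholder)")
        threads := flag.Int("threads", 8, "number of worker threads")
        conns := flag.Int("conns", 100, "open connections per thread")
        secs := flag.Int("secs", 10, "run length in seconds")
        flag.Parse()

        deadline := time.Now().Add(time.Duration(*secs) * time.Second)
        var wg sync.WaitGroup
        for t := 0; t < *threads; t++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                worker(*addr, *conns, deadline)
            }()
        }
        wg.Wait()
        total := atomic.LoadInt64(&completed)
        fmt.Printf("%d GETs completed, %.0f per second\n", total, float64(total)/float64(*secs))
    }

    // worker opens its designated number of connections and then cycles
    // through them with repeated GET requests until time is up.
    func worker(addr string, conns int, deadline time.Time) {
        req := []byte("GET /index.html HTTP/1.1\r\nHost: bench\r\nConnection: keep-alive\r\n\r\n")
        var cs []net.Conn
        var rs []*bufio.Reader
        for i := 0; i < conns; i++ {
            c, err := net.Dial("tcp", addr)
            if err != nil {
                continue // the open count falls short if a dial fails
            }
            cs = append(cs, c)
            rs = append(rs, bufio.NewReader(c))
        }
        for time.Now().Before(deadline) {
            for i, c := range cs {
                if _, err := c.Write(req); err != nil {
                    continue
                }
                if readResponse(rs[i]) == nil {
                    atomic.AddInt64(&completed, 1)
                }
            }
        }
        for _, c := range cs {
            c.Close()
        }
    }

    // readResponse consumes one HTTP response: status line and headers, then
    // a body of Content-Length bytes (the benchmark file is a small static page).
    func readResponse(r *bufio.Reader) error {
        length := 0
        for {
            line, err := r.ReadString('\n')
            if err != nil {
                return err
            }
            line = strings.TrimSpace(line)
            if line == "" {
                break // blank line ends the headers
            }
            low := strings.ToLower(line)
            if strings.HasPrefix(low, "content-length:") {
                length, _ = strconv.Atoi(strings.TrimSpace(low[len("content-length:"):]))
            }
        }
        _, err := io.CopyN(io.Discard, r, int64(length))
        return err
    }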

The version of Piper used was 0.9.7. The nginx version was 1.8.1.

Results

Most benchmark runs were made with 100 open connections per thread; the first data point, at 10, is the exception. The number of threads was increased for each data point, and the latency and throughput were calculated from the number of completed GET requests. Latency is reported in milliseconds while throughput is in GET requests per second. Missing responses were tracked. Several minutes of no activity were allowed between runs, which seems to make a difference for all servers. Finally, each run was 10 seconds long and two runs were made for each data point.
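As a rough illustration of the arithmetic, and an assumption about the bookkeeping rather than the actual driver code, a data point's throughput comes out of the completed counts roughly like this:

    // Throughput arithmetic for one data point: completed GETs divided by
    // the 10 second run length, averaged over the two runs. The counts here
    // are hypothetical, chosen only to show the calculation.
    package main

    import "fmt"

    func main() {
        const runSeconds = 10.0
        run1, run2 := 354000.0, 356000.0 // hypothetical completed GET counts
        throughput := (run1/runSeconds + run2/runSeconds) / 2
        fmt.Printf("throughput: %.0f GET requests per second\n", throughput)
    }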

Open          Piper                       nginx
Connections   Latency    Throughput       Latency    Throughput    Lost

   10            0.24         19985          0.55         16661
  100            2.32         21170          6.11         15978
  200            2.78         35488        616.66           256
  300            2.97         50344        717.76           195
  400            4.68         43379        859.71           175
  500            8.61         51295        809.15           209
  600           10.00         39535         64.33          2832
  700           13.13         32341        766.38           769
  800           14.49         38867         43.54          6076
  900           11.99         38336        615.50          1375
 1000           11.79         39702        522.94          2719
 1100           14.61         41762        574.10          1095
 1200           16.37         38459         86.10          4239       1
 1300           20.98         37565        473.70          2060      20
 1400           25.17         36690         80.69          5096
 1500           28.45         37388        245.34          2104       5
 1600           23.18         35326         73.20          3082      19
 2000           22.17         34508        186.94          1797      82
 4000           47.42         36122        134.84          4480      17

Latency is in milliseconds and throughput in GET requests per second. The Lost column is the number of nginx responses that never arrived.

Graphing the results, first the throughput in GET requests per second.

And the corresponding latency, in milliseconds, before a response is received.

Notice that Piper handles the heavy loads with a gradual degradation, mostly in latency, all the way up to 4000 open connections managed by 40 threads in the driver. Similar results were obtained with just 8 feeder threads using more connections per thread to reach 4000 open connections.

nginx falls over at 200 open connections with latencies of over 600 milliseconds, and from there it gives varying results, but never great ones. I'm sure some tuning would help, but that would require someone with more experience with nginx; this was an out-of-the-box benchmark.

In search of where nginx drops off, several incremental runs were made to zero in on the point where latency and throughput become worse. It seems that around 140 open connections nginx starts to hit the wall. Occasionally there would be a relatively better run from nginx, but averaging it with the second run would drag the result down.

If anyone is willing to provide some nginx tuning, it would be great to rerun the benchmarks with a tuned configuration. Please contact me if you would like to volunteer.