Kubernetes Persistent Storage Performance Test

Updated: Jun 22, 2023

IMPORTANT NOTE: The results of individual storage performance tests cannot be evaluated in isolation; the measurements are only meaningful when compared against each other. There are various ways to perform comparative tests, and this is one of the simplest approaches.


For verification, I used exactly the same lab: an Azure AKS 3-node cluster with a 1 TB premium SSD managed disk attached to each instance. You can find the details in the previous blog post.

To run the tests, I again used the load tester called Dbench. It is a Kubernetes manifest that deploys a pod running FIO, the Flexible I/O Tester, with 8 test cases specified in the entry point of the Docker image (a sketch of such a manifest follows the list):

  • Random read/write bandwidth

  • Random read/write IOPS

  • Read/write latency

  • Sequential read/write

  • Mixed read/write IOPS
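
For illustration, below is a minimal sketch of such a manifest, running Dbench as a Kubernetes Job against a PVC backed by the storage under test. The image tag, claim name, and mountpoint variable are assumptions based on the Dbench README, so check the repository for the current manifest:

    # Sketch of a Dbench-style Job (names and image tag are illustrative)
    cat <<'EOF' | kubectl apply -f -
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: dbench
    spec:
      backoffLimit: 4
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: dbench
            image: logdna/dbench:latest      # assumed image tag
            env:
            - name: DBENCH_MOUNTPOINT        # directory where FIO writes its test files
              value: /data
            volumeMounts:
            - name: dbench-pv
              mountPath: /data
          volumes:
          - name: dbench-pv
            persistentVolumeClaim:
              claimName: dbench-pv-claim     # PVC backed by the storage under test
    EOF

    # Follow the FIO output as the test cases run:
    kubectl logs -f job/dbench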

At the start, I ran the Azure PVC tests to get a baseline for comparison with last year's results. They came out almost the same, so we can assume conditions remained unchanged and we would achieve the same numbers with the same storage versions.
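
For the baseline, the claim only needs to reference the built-in AKS premium storage class. A minimal sketch, where the claim name and size are illustrative (note that on Azure the requested size also determines the disk's IOPS and throughput caps):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: dbench-pv-claim
    spec:
      storageClassName: managed-premium   # built-in AKS Premium SSD class
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi                  # size determines the Premium SSD performance tier
    EOF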


Random read/write bandwidth

The random read test showed that GlusterFS, Ceph, and Portworx read several times faster than the host path on the Azure local disk, and OpenEBS and Longhorn read almost twice as fast. The reason is read caching. Writes were fastest on OpenEBS, though Longhorn and GlusterFS achieved almost the same values as the local disk.
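
Dbench drives all of these tests with FIO. The following standalone sketch shows the shape of the bandwidth pair; the block size, queue depth, and runtime are illustrative choices, not necessarily Dbench's exact parameters:

    # Random read bandwidth: large blocks, deep queue, 60 s steady state
    fio --name=read_bw --directory=/data --ioengine=libaio --direct=1 \
        --rw=randread --bs=128k --iodepth=64 --size=2G \
        --runtime=60 --time_based --group_reporting

    # Random write bandwidth: same parameters, write direction
    fio --name=write_bw --directory=/data --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=128k --iodepth=64 --size=2G \
        --runtime=60 --time_based --group_reporting

Note that --direct=1 only bypasses the client's page cache; a distributed backend can still serve reads from its own caches or replicas, which is consistent with the read caching effect mentioned above.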

[Charts: random read/write bandwidth for Azure PVC, hostpath, Portworx, GlusterFS, Ceph, OpenEBS, and Longhorn]

Random read/write IOPS

Random IOPS showed the best results for Portworx and OpenEBS. This time OpenEBS even achieved higher write IOPS than the native Azure PVC, which is almost technically impossible; most probably it is related to different Azure storage load at the times the test cases ran.
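
The IOPS pair uses the same pattern with small blocks, shifting the bottleneck from throughput to operations per second. Parameters are again illustrative:

    # Random read IOPS: 4k blocks, deep queue
    fio --name=read_iops --directory=/data --ioengine=libaio --direct=1 \
        --rw=randread --bs=4k --iodepth=64 --size=2G \
        --runtime=60 --time_based --group_reporting

    # Random write IOPS
    fio --name=write_iops --directory=/data --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --iodepth=64 --size=2G \
        --runtime=60 --time_based --group_reporting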

[Charts: random read/write IOPS for Azure PVC, hostpath, Portworx, GlusterFS, Ceph, OpenEBS, and Longhorn]

Read/write latency

The read latency winner remained the same as last time: Longhorn and OpenEBS had almost double the latency of Portworx. This is still not bad, since the native Azure PVC was slower than most of the other tested storages. Write latency, however, was better on OpenEBS and Longhorn, and GlusterFS was still better than the remaining storages.
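
Latency is typically measured at queue depth 1, so each I/O completes before the next is issued; the completion-latency ("clat") averages and percentiles in FIO's output are what the charts compare. A sketch with illustrative parameters:

    # Read latency: queue depth 1 measures each I/O individually
    fio --name=read_lat --directory=/data --ioengine=libaio --direct=1 \
        --rw=randread --bs=4k --iodepth=1 --size=1G \
        --runtime=60 --time_based

    # Write latency
    fio --name=write_lat --directory=/data --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --iodepth=1 --size=1G \
        --runtime=60 --time_based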

[Charts: read/write latency for Azure PVC, hostpath, Portworx, GlusterFS, Ceph, OpenEBS, and Longhorn]


Sequential read/write

The sequential read/write tests showed results similar to the random tests, except that Ceph read twice as fast as GlusterFS. The write results were almost all at the same level, with OpenEBS and Longhorn achieving the same values.
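
The sequential pair streams large blocks instead of random offsets; parameters are illustrative:

    # Sequential read: streaming access, 1M blocks
    fio --name=seq_read --directory=/data --ioengine=libaio --direct=1 \
        --rw=read --bs=1M --iodepth=16 --size=4G \
        --runtime=60 --time_based --group_reporting

    # Sequential write
    fio --name=seq_write --directory=/data --ioengine=libaio --direct=1 \
        --rw=write --bs=1M --iodepth=16 --size=4G \
        --runtime=60 --time_based --group_reporting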

[Charts: sequential read/write for Azure PVC, hostpath, Portworx, GlusterFS, Ceph, OpenEBS, and Longhorn]


Mixed read/write IOPS

The last test case verified mixed read/write IOPS, where OpenEBS delivered almost twice the IOPS of Portworx or Longhorn on both read and write.
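
A sketch of the mixed workload; the 75/25 read/write split is a common database-like mix and is an assumption here, not necessarily Dbench's exact ratio:

    # Mixed random IOPS: 75% reads, 25% writes at 4k
    fio --name=mixed_iops --directory=/data --ioengine=libaio --direct=1 \
        --rw=randrw --rwmixread=75 --bs=4k --iodepth=64 --size=2G \
        --runtime=60 --time_based --group_reporting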

[Charts: mixed read/write IOPS for Azure PVC, hostpath, Portworx, GlusterFS, Ceph, OpenEBS, and Longhorn]

Conclusion

This article shows how significantly an open source project can change in a single year! As a demonstration, let's look at a comparison of IOPS between OpenEBS cStor and OpenEBS MayaStor in exactly the same environment.

[Chart: IOPS comparison between OpenEBS cStor and OpenEBS MayaStor]

Please take these results as just one of the criteria in your storage selection, and do not make a final judgement based only on the data in this blog. From the tests we can conclude:

  • Portworx and OpenEBS are the fastest container storage options for AKS.

  • OpenEBS seems to become one of the best open source container storage options with a robust design around NVMe.

  • Longhorn is definitely a valid option for simple block storage use cases, and it is quite similar to the OpenEBS Jiva backend.

Of course, this is just one way to look at container storage selection. Scaling and stability are also interesting aspects to evaluate.

