Reproducible Benchmarking of Cloud-Native Applications with the Kubernetes Operator Pattern

Henning, Sören, Wetzel, Benedikt and Hasselbring, Wilhelm (2021) Reproducible Benchmarking of Cloud-Native Applications with the Kubernetes Operator Pattern. [Paper] In: Symposium on Software Performance 2021, 09.11.2021, Leipzig, Germany. CEUR Workshop Proceedings, Vol-3043. Open Access.

SSP2021-benchmarking-operator.pdf - Published Version (411 kB)
Available under License Creative Commons: Attribution 4.0.
Abstract

Reproducibility is often mentioned as a core requirement for benchmarking studies of software systems and services. "Cloud-native" is an emerging style for building large-scale software systems, which leads to an increasing number of benchmarks for cloud-native tools and architectures. However, the complex nature of cloud-native deployments makes the execution and repetition of benchmarks tedious and error-prone. In this paper, we report on our experience with developing a benchmarking tool based on established cloud-native patterns and tools. In particular, we present a benchmarking tool architecture based on the Kubernetes Operator Pattern. Accompanied by a role model and a data model for describing benchmarks and their executions, this architecture aims to simplify defining, distributing, and executing benchmarks for better reproducibility.
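
In the Kubernetes Operator Pattern referenced in the abstract, a controller watches custom resources that declaratively describe domain objects (here, benchmarks and their executions) and reconciles the cluster accordingly. The following Python sketch only illustrates that general pattern; it is not the authors' implementation. The resource group, version, and plural names are invented for the example, and it assumes the official kubernetes Python client and a cluster in which such a custom resource definition is installed.

# Hypothetical sketch of an operator-style control loop, not the paper's code.
# It watches an assumed "BenchmarkExecution" custom resource and reacts to
# newly submitted executions.
from kubernetes import client, config, watch

GROUP = "benchmarking.example.com"   # assumed CRD group
VERSION = "v1"                       # assumed CRD version
PLURAL = "benchmarkexecutions"       # assumed CRD plural name
NAMESPACE = "default"


def run_benchmark(spec: dict) -> None:
    """Placeholder for deploying the system under test and the load generator."""
    print(f"Starting benchmark execution with spec: {spec}")


def main() -> None:
    # Load credentials from the local kubeconfig; inside a cluster one would
    # use config.load_incluster_config() instead.
    config.load_kube_config()
    api = client.CustomObjectsApi()

    # Operator pattern: continuously watch the custom resources and reconcile.
    for event in watch.Watch().stream(
        api.list_namespaced_custom_object, GROUP, VERSION, NAMESPACE, PLURAL
    ):
        obj = event["object"]
        if event["type"] == "ADDED":
            run_benchmark(obj.get("spec", {}))


if __name__ == "__main__":
    main()

With such a controller running, submitting a new benchmark execution would amount to applying a small custom resource manifest, which the operator picks up and turns into a concrete benchmark run; this declarative hand-off is what the abstract's claim about simpler defining, distributing, and executing of benchmarks refers to.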

Document Type: Conference or Workshop Item (Paper)
Keywords: Benchmarking, Reproducibility, Cloud-native, Kubernetes
Research affiliation: Kiel University > Software Engineering
Publisher: CEUR
Date Deposited: 09 Dec 2021 11:17
Last Modified: 24 Feb 2022 14:25
URI: https://oceanrep.geomar.de/id/eprint/54582
