PerfKitBenchmarker


PerfKit Benchmarker is an open-source benchmarking tool used to measure and compare cloud offerings. It is licensed under the Apache 2.0 license and is a community effort involving over 500 participants, including researchers, academic institutions, and companies, together with its originator, Google.

General

PerfKit Benchmarker is a community effort to deliver a repeatable, consistent, and open way of measuring cloud performance. It supports a growing list of cloud providers, including Alibaba Cloud, Amazon Web Services, CloudStack, DigitalOcean, Google Cloud Platform, Kubernetes, Microsoft Azure, OpenStack, Rackspace, and IBM Bluemix. In addition to cloud providers, it supports container orchestration frameworks, including Kubernetes and Mesos, as well as local "static" workstations and clusters of computers.
The goal is to create an open-source, living benchmark that represents how cloud developers are building applications, evaluating cloud alternatives, and learning how to architect applications for each cloud. It is "living" because it will change and morph quickly as developer practices change.
PerfKit Benchmarker measures the end-to-end time to provision resources in the cloud, in addition to reporting the most standard metrics of peak performance, e.g. latency, throughput, time-to-complete, and IOPS. PerfKit Benchmarker reduces the complexity of running benchmarks on supported cloud providers through simple, unified commands. It is designed to operate via vendor-provided command-line tools.
PerfKit Benchmarker contains a canonical set of public benchmarks. All benchmarks run with default/initial state and configuration. This provides a way to benchmark across cloud platforms while getting a transparent view of application throughput, latency, variance, and overhead.

History

PerfKit Benchmarker was started by Anthony F. Voellm, Alain Hamel, and Eric Hankland at Google in 2014. Once an initial "alpha" was in place Anthony F. Voellm and Ivan Santa Maria Filho built a community including ARM, Broadcom, Canonical, CenturyLink, Cisco, CloudHarmony, CloudSpectator, EcoCloud@EPFL, Intel, Mellanox, Microsoft, Qualcomm Technologies, Inc., Rackspace, Red Hat, Tradeworx Inc., and Thesys Technologies LLC.
This community worked together behind the scenes in a private GitHub project to create an open way to measure cloud performance. The first public "beta" was released and publicly announced on February 11, 2015, at which point the project was opened to everyone. After almost a year and with wide adoption, version 1.0 was released on December 10, 2015.
Many members have made significant contributions, for which the PKB community is very appreciative. In particular, Carlos Torres from Rackspace and Mateusz Blaszkowski from Intel made such significant contributions that they became the first members outside Google to gain "committer" rights.

Benchmarks

A partial list of benchmarks available in PerfKitBenchmarker, spanning categories such as Big Data / IoT, High Performance Computing, Scientific Computing, Simulation, Web benchmarks, and general workloads:

- Aerospike YCSB
- Cassandra YCSB
- Hadoop Terasort
- HBase YCSB
- MongoDB YCSB
- Redis YCSB
- HPCC
- OLDIsim
- etcd
- Tomcat

Industry Participants

Since Google open-sourced PerfKitBenchmarker, it has become a community effort spanning more than 30 leading researchers, academic institutions, and industry companies. Those organizations include ARM, Broadcom, Canonical, CenturyLink, Cisco, CloudHarmony, CloudSpectator, EcoCloud@EPFL, Intel, Mellanox, Microsoft, Qualcomm Technologies, Rackspace, Red Hat, Tradeworx, and Thesys Technologies. In addition, Stanford and MIT are leading quarterly discussions on default benchmarks and settings proposed by the community. EcoCloud@EPFL is integrating its CloudSuite benchmarks into PerfKit Benchmarker.

Example runs

Example run on Google Cloud Platform

$ ./pkb.py --cloud=GCP --project= --benchmarks=iperf --machine_type=f1-micro

Example run on AWS

$ ./pkb.py --cloud=AWS --benchmarks=iperf --machine_type=t1.micro

Example run on Azure

$ ./pkb.py --cloud=Azure --machine_type=ExtraSmall --benchmarks=iperf

Example run on Rackspace

$ ./pkb.py --cloud=Rackspace --machine_type=standard1 --benchmarks=iperf

Example run on a local machine

$ ./pkb.py --static_vm_file=local_config.json --benchmarks=iperf
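The flag syntax above extends naturally to multi-benchmark runs and file-based configuration. The sketch below assumes PerfKit Benchmarker is installed and the underlying cloud CLI (here, gcloud) is authenticated; `my-project` and the YAML values are illustrative placeholders, and the `vm_groups`/`vm_spec` keys follow PerfKitBenchmarker's benchmark-config YAML format, which may vary by version.

```shell
# Run several benchmarks in one invocation by passing a comma-separated
# list ("my-project" is a placeholder project ID).
./pkb.py --cloud=GCP --project=my-project --benchmarks=iperf,ping

# Instead of flags, per-benchmark settings can be supplied in a YAML
# config file via --benchmark_config_file (schema sketched from PKB's
# vm_groups/vm_spec config format).
cat > iperf_gcp.yaml <<'EOF'
iperf:
  vm_groups:
    vm_1:
      vm_spec:
        GCP:
          machine_type: n1-standard-2
          zone: us-central1-a
EOF
./pkb.py --benchmark_config_file=iperf_gcp.yaml --benchmarks=iperf
```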