SPEC Cloud IaaS 2018: Glossary
The glossary for SPEC Cloud IaaS 2018 compiles the names and terms used in the benchmark and its documentation. The US government’s NIST Definition of Cloud Computing [NISTPUB145] provides many of the definitions used. The SPEC Cloud working group’s 2012 OSG Cloud White Paper [CloudWhitePaper] provides additional terms. The glossary covers terminology changes up through the current benchmark release.
Glossary (sorted alphabetically)
- Application Instance (AI)
- A group of instances created to run a single workload collectively. An application instance comprises a workload driver instance and a set of instances that are stressed by the workload driver. The SPEC Cloud IaaS 2018 benchmark adds load to the cloud under test by replicating multiple application instances during the scale-out phase.
- Application Instance Provisioning Time
- The time from the request to create a new application instance until the associated cluster of instances reports that it is ready to accept client requests.
- Application Instance Run (AI_run)
- Denotes the creation of the dataset, the running of the load generator, and the collection of results for a single AI. A valid application instance created during the scale-out phase has one or more runs.
- Baseline phase
- In the baseline phase, performance measurements are made for individual AIs to establish parameters for use in the QoS and relative scalability calculations for each workload. The baseline driver instructs CBTOOL to create a single AI, starting with the KMeans workload. Each AI is required to run its workload a minimum of five times, recreating the data set at the start of each run. After the workload runs complete and their data is collected, a command is issued to delete the AI, and a new AI is then created for the next AI iteration. A minimum of five AI iterations is required, first for KMeans and then for YCSB; a minimal sketch of this loop follows the glossary. The data is averaged and reported in the baseline section of the FDR, and is also used in the scale-out phase for the QoS and relative scalability calculations.
- Benchmark phases
- The benchmark has two phases: baseline and scale-out.
- Black-box Cloud
- A cloud for which the provider supplies only a general specification of the SUT, usually based on how the cloud consumer is billed. The exact hardware details corresponding to these compute units may not be known, typically because the tester is not also the cloud provider. [CloudWhitePaper]
- CBTOOL
- Cloud Rapid Experimentation and Analysis Tool (CBTOOL) is the open source framework for automating IaaS cloud testing. SPEC selected CBTOOL as the test harness for its cloud benchmarks. For more details on CBTOOL see https://github.com/ibmcb/cbtool/wiki.
- Cloud (system or service)
- Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. [NISTPUB145]
- Cloud Consumer
- A person or organization that is a customer of a cloud. A cloud consumer may itself be a cloud, and clouds may offer services to one another. [NISTPUB145]
- Cloud Provider
- An organization that provides cloud services to customers who pay for only the computing time and services used. [NISTPUB145]
- Full Disclosure Report (FDR)
- A package of information documenting the results of a test and the testbed configuration. The goal of this documentation is to enable an independent third party to reproduce the SUT and replicate the results without further information. At the end of a test, an HTML page is generated containing the textual description of the testbed’s configuration and the primary and secondary metrics. This page is commonly called the FDR, but a full disclosure report also includes an archive of supporting documentation from the test (e.g., configuration, log, and YAML files), diagrams, and any additional information required by the Run and Reporting Rules.
- Hybrid Cloud
- The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds). [NISTPUB145]
- Infrastructure as a Service (IaaS)
- The Service Provider gives the Cloud Consumer the capability to provision processing, storage, and network resources, on which the consumer can deploy and run arbitrary operating systems. The Cloud Consumer does not manage or control the underlying physical cloud infrastructure but has control over the operating system, assigned storage, deployed applications, and limited control of select networking components (e.g., host firewalls). [CloudWhitePaper]
- Instance
- An instance is an abstracted execution environment which presents an operating system (either discrete or virtualized). The abstracted execution environment presents the appearance of a dedicated computer with CPU, memory, and I/O resources available to the operating system. In SPEC Cloud, an instance consists of a single OS and the application software stack that supports a single SPEC Cloud component workload. There are several methods of implementing an instance including physical machines, virtual machines, or containers. An instance is created or destroyed using an API provided by an IaaS cloud.
- Instance image
- An image on disk used to provision an instance. Instance image formats include QCOW2 (QEMU copy-on-write 2), RAW, and AMI (Amazon Machine Image).
- Mean Instance Provisioning Time
- The average of the provisioning times for the instances from all valid application instances. Each instance provisioning time is measured from the initial instance provisioning request until connectivity is established on port 22 (ssh). A calculation sketch follows the glossary.
- Performance Score
- An aggregate of the workload scores for all valid AIs, representing the total work done at the reported number of Replicated Application Instances. It is the sum of the KMeans and YCSB workload performance scores, each normalized using the reference platform. The reference platform values used are a composite of baseline metrics from several different white-box and black-box clouds. Because the Performance Score is normalized, it is a unitless metric (an illustrative normalization sketch follows the glossary).
- Physical machine
- A set of connected components consisting of one or more general-purpose processors (CPUs), memory, network connectivity, and mass storage, either local (disk) or remote (network-attached block storage). The physical machine can have standalone physical packaging or be a blade installed in a blade chassis. An example would be a multi-core server with 4 GB of memory, a 250 GB disk, and a 1 Gb/s network adapter.
- Primary Metrics
- The benchmark reports the following four primary metrics: Replicated Application Instances, Performance Score, Relative Scalability, and Mean Instance Provisioning Time. These are considered required metrics by the SPEC Fair-Use Policy in any disclosures that use data taken from the benchmark report.
- Provisioning
- Makes available the infrastructure services requested by the cloud user. The benchmark issues provisioning requests for the allocation of cloud instances, and the cloud returns CPU, memory, storage, and network resources in response.
- Provisioning Time
- The measured time needed to bring up a new instance.
- Private Cloud
- The cloud infrastructure provisioned for exclusive use by a single organization comprising single or multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises. [NISTPUB145]
- Public Cloud
- The cloud infrastructure provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the cloud provider. [NISTPUB145]
- Quality of Service (QoS)
- The minimum percentage (e.g., 95%) of collected values that must complete within a predefined threshold in each measurement category. The Run and Reporting Rules document contains the specific QoS limits for each workload and benchmark (a minimal check is sketched after the glossary).
- Relative Scalability
- Measures whether the work performed by application instances scales linearly in a cloud. When multiple AIs run concurrently, each AI should deliver the same level of performance as that measured for an AI running similar work during the baseline phase, when the tester introduces no other load. Relative Scalability is expressed as a percentage (out of 100); a simplified sketch follows the glossary.
- Replicated Application Instances
- The total number of valid AIs that have completed at least one application iteration at the point the test ends. The total reported is the sum of the valid AIs for each workload (KMeans and YCSB), where the number of valid AIs for either workload cannot exceed 60% of the total (see the balance-rule sketch after the glossary). The other primary metrics are calculated based on conditions at the point this number of valid AIs is achieved.
- Response Time
- The time between the issuing of a work item request until its completion. This definition is identical to the YCSB Latency metric.
- Scale-out phase
- In the scale-out phase, new application instances for each workload are created every few minutes, at intervals drawn from a uniform distribution of five to ten minutes (a scheduling sketch follows the glossary). The workloads run concurrently until the test reaches a stopping condition based on the QoS limits or the maximum number of AIs. The number of valid AIs for each workload when the benchmark ends, along with their associated metrics, determines the primary metrics. A benchmark report is generated at the end of the scale-out phase.
- SUT
- The SUT is the cloud environment being tested. This includes all hardware, network, base software, and management systems used for the cloud service. It does not include any client(s) or driver(s) necessary to generate the cloud workload, nor the network connections between the driver(s) and SUT. The actual set of SUT constituent pieces differs based on whether it is a white-box or black-box cloud. [CloudWhitePaper]
- Variability
- The difference in measured results between runs of the benchmark. In public clouds, variability may arise due to factors such as geographic region and time or date of execution. The randomizing features of the benchmark used for data generation and AI generation may also introduce a small degree of variation between runs.
- White-box Cloud
- The SUT’s exact engineering specifications including all hardware and software are known and under the control of the tester, typically the case for private clouds. [CloudWhitePaper]
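The following informative sketches illustrate, in Python, several of the procedures and calculations defined above. All function and parameter names are illustrative assumptions; they are not part of the benchmark kit, CBTOOL, or the Run and Reporting Rules.

Baseline phase. A minimal sketch of the baseline iteration loop, assuming hypothetical callables supplied by the harness for creating an AI, running its workload once, and deleting it:

```python
def baseline_phase(create_ai, run_workload, delete_ai, workload,
                   ai_iterations=5, runs_per_ai=5):
    """Run the minimum baseline procedure for one workload (e.g., KMeans)
    and return the per-run results collected across all AI iterations.
    create_ai/run_workload/delete_ai are hypothetical harness callables,
    not CBTOOL's API."""
    results = []
    for _ in range(ai_iterations):            # at least five AI iterations
        ai = create_ai(workload)              # a single AI at a time
        for _ in range(runs_per_ai):          # at least five runs per AI,
            results.append(run_workload(ai))  # recreating the data set each run
        delete_ai(ai)                         # delete before the next iteration
    return results
```

The driver would perform this first for the KMeans workload and then for YCSB, averaging the collected results for the baseline section of the FDR.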
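Scale-out phase. A sketch of how AI creation times could be spaced using a uniform five-to-ten-minute inter-arrival delay; the actual stopping conditions (QoS limits, maximum AI count) are applied by the harness and are not modeled here:

```python
import random

def ai_start_times(ai_count, min_gap_s=5 * 60, max_gap_s=10 * 60, seed=None):
    """Return illustrative creation times (seconds from the start of the
    scale-out phase) for ai_count application instances, with successive
    requests separated by a delay drawn uniformly from [min_gap_s, max_gap_s]."""
    rng = random.Random(seed)
    times, elapsed = [], 0.0
    for _ in range(ai_count):
        times.append(elapsed)
        elapsed += rng.uniform(min_gap_s, max_gap_s)
    return times
```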
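Mean Instance Provisioning Time. A minimal calculation sketch, assuming the harness records, for every instance of every valid AI, the time of the provisioning request and the time port 22 first accepted a connection:

```python
from statistics import mean

def mean_instance_provisioning_time(records):
    """records: iterable of (request_time_s, ssh_ready_time_s) pairs for all
    instances belonging to valid AIs. Returns the average provisioning time."""
    return mean(ready - requested for requested, ready in records)

# Example: mean_instance_provisioning_time([(0.0, 42.5), (3.0, 58.0)]) -> 48.75
```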
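Quality of Service. A minimal sketch of the kind of check implied by the definition; the actual thresholds, percentages, and measurement categories are specified per workload in the Run and Reporting Rules:

```python
def meets_qos(samples, threshold, required_fraction=0.95):
    """Return True if at least required_fraction of the collected values
    fall within the predefined threshold."""
    if not samples:
        return False
    within = sum(1 for value in samples if value <= threshold)
    return within >= required_fraction * len(samples)
```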
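Performance Score. An illustrative normalization only; the exact aggregation of per-AI workload scores is defined in the Run and Reporting Rules. The reference parameters stand in for the reference-platform composite mentioned above:

```python
def performance_score(kmeans_ai_scores, ycsb_ai_scores,
                      kmeans_reference, ycsb_reference):
    """Sum the per-AI workload scores for the valid KMeans and YCSB AIs,
    each normalized by its reference-platform value; the result is unitless."""
    kmeans_total = sum(score / kmeans_reference for score in kmeans_ai_scores)
    ycsb_total = sum(score / ycsb_reference for score in ycsb_ai_scores)
    return kmeans_total + ycsb_total
```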
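Relative Scalability. A simplified illustration that compares average per-AI performance during scale-out with the baseline value for the same workload; the benchmark's actual calculation combines several per-workload metrics as defined in the Run and Reporting Rules:

```python
from statistics import mean

def relative_scalability_pct(scaleout_per_ai_results, baseline_per_ai_result):
    """Express the average per-AI scale-out result as a percentage of the
    baseline per-AI result (100 means no degradation under load).
    Assumes a higher-is-better performance result."""
    return 100.0 * mean(scaleout_per_ai_results) / baseline_per_ai_result
```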
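Replicated Application Instances. A small sketch of the 60% balance rule between the two workloads; it only checks the constraint and does not decide AI validity:

```python
def replicated_application_instances(valid_kmeans_ais, valid_ycsb_ais):
    """Return the reported total of valid AIs, enforcing that neither
    workload contributes more than 60% of that total."""
    total = valid_kmeans_ais + valid_ycsb_ais
    if total == 0:
        return 0
    if max(valid_kmeans_ais, valid_ycsb_ais) > 0.60 * total:
        raise ValueError("one workload exceeds 60% of the total valid AIs")
    return total
```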
References
[NISTPUB145] Mell, P., & Grance, T.; The NIST Definition of Cloud Computing, NIST Special Publication 800-145, 2011. http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf