SPEC virt_sc® 2013 Frequently Asked Questions

Version 1.1 - September 21, 2016



1. What is SPEC virt_sc® 2013 V1.1?

SPEC virt_sc® 2013 V1.1 is a software benchmark product developed by the Standard Performance Evaluation Corporation (SPEC), a non-profit group of computer vendors, system integrators, universities, research organizations, and application software vendors. The benchmark is intended to be run by hardware vendors, virtualization software vendors, application software vendors, datacenter managers, and academic researchers.

2. Why did SPEC develop SPEC virt_sc V1.1?

The release of SPEC virt_sc V1.1 enhances security protocol support in the web server workload, addresses minor defects in the reporter and syntax checker, and updates the Power and Temperature Daemon (PTD) to the latest version.

3. How does SPEC virt_sc V1.1 compare to SPEC virt_sc V1.0?

SPEC virt_sc V1.1 is an update to SPEC virt_sc 2013 V1.0 and shares its benchmark architecture, workload implementation, harness, and run requirements. For application stacks that require Transport Layer Security (TLS), it supports TLSv1, TLSv1.1, and TLSv1.2 and adds support for newer cipher suites. Existing support for SSLv3 remains available and is the default.

4. What does SPEC virt_sc measure?

The benchmark presents an overall workload that achieves the maximum performance of the platform when running a set of four application workloads against one or more sets of Virtual Machines called "tiles". Scaling the workload on the SUT (System Under Test) consists of running an increasing number of tiles. Peak performance is the point at which the addition of another tile either fails the Quality of Service criteria or fails to improve the overall metric.

The benchmarker has the option of running with power monitoring enabled and can submit results to the performance with SUT power category, the performance with Server-only power category, or both.
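As a rough illustration of the scaling procedure described above, here is a minimal sketch (not code from the benchmark kit; run_tiles() is a hypothetical helper that runs all workloads at a given tile count and returns the aggregate score plus whether the Quality of Service criteria passed):

    # Hypothetical sketch: add tiles until QoS fails or the overall metric
    # stops improving; the previous tile count is the peak.
    def find_peak(run_tiles, max_tiles=64):
        best_tiles, best_score = 0, 0.0
        for tiles in range(1, max_tiles + 1):
            score, qos_ok = run_tiles(tiles)      # run the full workload set at this tile count
            if not qos_ok or score <= best_score:
                break                             # QoS failure or no improvement
            best_tiles, best_score = tiles, score
        return best_tiles, best_score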

5. What kinds of workloads are used in the SPEC virt_sc benchmark?

The suite consists of several SPEC workloads that represent applications that industry surveys report to be common targets of virtualization and server consolidation. We modified each of these standard workloads to match an enterprise server consolidation scenario's resource requirements for CPU, memory, disk I/O, and network utilization. The SPEC workloads used are modified versions of SPECweb2005, SPECjAppServer2004, SPECmail2008, and SPEC CPU2006.

6. When and where are SPEC virt_sc results available?

Initial SPEC virt_sc results are available on SPEC's web site. Subsequent results are posted on an ongoing basis following each two-week review cycle: results submitted by the two-week deadline are reviewed by SPEC virtualization subcommittee members for conformance to the run rules, and if accepted at the end of that period are then publicly released. Results disclosures are at: http://www.spec.org/virt_sc2013/results.

7. What are the limitations of SPEC virt_sc?

SPEC virt_sc is a standardized benchmark, which means it is an abstraction of the real world. For example, all of the database servers can use the same database archive to restore their copy of the database, which helps reduce the complexity of setting up the test.

8. Can I use SPEC virt_sc to determine the size of the server I need?

SPEC virt_sc results are not intended for use in sizing or capacity planning.

9. What is a tile?

In SPEC virt_sc, a tile is a single unit of work comprising four application workloads driven across five distinct virtual machines plus a separate Database Server VM. The load on the SUT is scaled up by configuring additional sets of these VMs, as described below, and increasing the tile count for the benchmark. For a SPEC virt_sc tile, the workloads and their VMs are:

* Web server workload (derived from SPECweb2005): Web Server VM and Infrastructure Server VM
* Mail server workload (derived from SPECmail2008): Mail Server VM
* Application server workload (derived from SPECjAppServer2004): Application Server VM, backed by a shared Database Server VM
* Batch workload (derived from SPEC CPU2006): Batch Server VM

The Application Server VM for each tile requires an enterprise-class Database Server VM backend. Each Database Server VM is shared by up to four appserver VMs. For every four consecutive tiles, a separate Database Server VM is required. Only the last Database Server VM may be shared by fewer than four tiles if the number of tiles is not a multiple of four.
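As a back-of-the-envelope illustration of the sharing rule above, the following sketch (an illustrative example, not code from the kit) computes the VM counts implied by a given tile count: five VMs per tile plus one Database Server VM for every four tiles, rounded up.

    import math

    def vm_counts(tiles):
        # Each Database Server VM backs up to four tiles.
        db_servers = math.ceil(tiles / 4)
        # Matches the "<5*Number_of_Tiles + Number_of_DBservers> VMs" label used on the reporting page.
        total_vms = 5 * tiles + db_servers
        return db_servers, total_vms

    print(vm_counts(6))   # (2, 32): six tiles need two Database Server VMs, 32 VMs in total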

10. What is a fractional tile?

When the SUT does not have sufficient system resources to support the full load of an additional tile, the benchmark offers the option of a fractional tile. A fractional tile is a complete tile, with all of its VMs, run at a reduced percentage of its full load. For example, a result might consist of six tiles at full load plus a seventh tile driven at 50% of its load.

11. How can I obtain the benchmark?

SPEC virt_sc is available via web download from the SPEC site for $3,000 for new licensees and $1,500 for academic and eligible non-profit organizations. The order form is at: http://www.spec.org/order.html.

12. What is included with SPEC virt_sc?

The SPEC virt_sc kit includes the code necessary to run the driver system(s), the server-side file set generation tools, and dynamic content implementations. It is at the tester's discretion to choose the application stack.

13. What hardware is required to run the benchmark?

See the Run Rules and the User's Guide for detailed information on hardware and other requirements.

14. What if I have a problem configuring or running the SPEC virt_sc benchmark?

You can find more information on how to set up and run the benchmark in the User's Guide and the Client Harness User's Guide. You can also register at the SPECvirt Forum, where you can post questions and review solutions to common problems. If you cannot resolve your issue using these methods, contact SPEC by email to bring it to the attention of the SPEC Virtualization subcommittee.

At the Forum you can also find an ExampleVM environment for the SPEC virt_sc V1.1 benchmark. The ExampleVM environment includes documentation and scripts to help configure the six virtual machines needed in a SPEC virt_sc tile as well as a client virtual machine. Even if the configuration is not exactly the same as the one you are trying to set up, having a working example for comparison is a valuable aid in setting up your own environment. See the Forum for more information.

15. How can I submit SPEC virt_sc results?

Only SPEC virt_sc licensees can submit results. SPEC member companies submit results free of charge, and non-members may submit results for an additional fee. All results are subject to a two-week review by SPEC virtualization subcommittee members. First-time submitters should contact SPEC's administrative office.

SPEC virt_sc submissions must include both the raw output file and configuration information required by the benchmark. During the review process, other information may be requested by the subcommittee. You can find submission requirements in the run rules.

16. Where are the SPEC virt_sc run rules?

The current version of the run rules can be found at: http://www.spec.org/virt_sc2013/docs/SPECvirt_RunRules.html.

17. Where can I go for more information?

The SPEC virt_sc Design Document contains design information on the benchmark and workloads. The Run and Reporting Rules, the User's Guide, and the Client Harness User's Guide contain instructions for installing and running the benchmark. See http://www.spec.org/osg/virtualization for the available information on SPEC virt_sc.

18. What control mechanism is used to drive the workloads?

SPEC developed a test harness driver to coordinate running the component workloads in one or more tiles on the SUT. The harness runs and monitors the benchmark, collects measurement data as the test runs, post-processes the data at the end of the run, validates the results, and generates the test report.

19. What is the performance metric for SPEC virt_sc?

The benchmark supports three categories of results, each with its own primary metric. Results may be compared only within a given category; however, the benchmarker has the option of submitting results from a given test to one or more categories. The first category is Performance-Only; its metric is SPEC virt_sc, expressed as "SPEC virt_sc @ <5*Number_of_Tiles + Number_of_DBservers> VMs" on the reporting page. The other two metrics, SPEC virt_sc_PPW (performance with SUT power) and SPEC virt_sc_ServerPPW (performance with Server-only power), are performance-per-watt metrics obtained by dividing the peak performance by the peak power of the SUT or the Server, respectively, during the run's measurement phase.
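A minimal sketch of how these metrics relate, assuming an aggregate performance score and power readings from the measurement phase (illustrative only; the function and parameter names are hypothetical and are not taken from the reporter):

    # Hypothetical illustration of the three category metrics.
    def category_metrics(score, tiles, db_servers, sut_watts=None, server_watts=None):
        label = "SPEC virt_sc @ {} VMs".format(5 * tiles + db_servers)        # Performance-Only
        ppw = score / sut_watts if sut_watts else None                        # SPEC virt_sc_PPW
        server_ppw = score / server_watts if server_watts else None           # SPEC virt_sc_ServerPPW
        return label, ppw, server_ppw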

20. Does the benchmark support multiple servers?

No. Currently the benchmark is designed for a single host system.

21. Can I report results for open source software?

Yes, you can use open source products when running the benchmark as long as you comply with open source requirements specified in the Run Rules.

22. Are the results independently audited?

The SPEC Virtualization subcommittee reviews all results but does not require that they be independently audited.

23. Can I announce my results before they are reviewed by the SPEC subcommittee?

No. SPEC must review and accept the result before it can be announced publicly.

24. Are results sensitive to components outside of the SUT -- e.g. client driver machines?

Yes, the client driver machines must be configured properly to accommodate the workloads. You may use one or more physical systems for client load drivers, clients may be virtualized, and a tile may be driven by multiple clients. Note that client resource requirements for SPEC virt_sc 2013 are higher than for SPEC virt_sc 2010. See the User's Guide for more information regarding hardware and software requirements for the clients.

25. Does SPEC virt_sc have a power measurement component associated with the benchmark?

SPEC virt_sc implements the SPECpower methodology for power measurement. The benchmarker has the option of running with power monitoring enabled and can submit results to any of three categories:

* performance only (SPEC virt_sc)
* performance/power for the SUT (SPEC virt_sc_PPW)
* performance/power for the Server only (SPEC virt_sc_ServerPPW)

You can find more information on power measurement in the Client Harness User's Guide and Run Rules.

26. Can I compare the results of SPEC virt_sc workloads to the results of the SPEC benchmarks from which they were derived? For example, can I compare a SPECweb2005 result to the result of the SPEC virt_sc web server component?

No. Several substantive changes have been made that make the SPEC virt_sc workloads unique, so their results are not comparable to those of the benchmarks from which they were derived.

27. Can I compare SPEC virt_sc results in different categories?

No. Results between the different SPEC virt_sc categories cannot be compared.

28. Can I compare SPEC virt_sc with other virtualization benchmarks?

No. SPEC virt_sc is unique and not comparable to other benchmarks.

29. What is a "compliant" result of SPEC virt_sc?

A compliant benchmark result meets all the requirements of the SPEC virt_sc run rules for a valid result. In addition to the run and reporting rules, several validation and tolerance checks are built into the benchmark. If you intend to use the SPEC virt_sc metrics publicly, the result must be compliant and accepted by SPEC.

30. Can I run other workload levels?

Yes, for non-compliant runs only. You may set the load level for each or all workloads to be heavier or lighter as your needs dictate. You can set these load levels by changing parameters in the Control.config file and possibly each workload's configuration file.

31. How long does it take to run the benchmark?

The run time is approximately three hours with default settings.

32. How can I use the benchmark to research performance related to a specific component of the benchmark such as the memory, storage, hypervisor, or the application server VM?

SPEC virt_sc has been implemented as a standardized end-to-end benchmark designed to stress all layers of a system that handles a workload representative of server consolidation. Performance critical components include the server hardware (Processors, Memory, Network, Storage, etc.), the virtualization technology (hardware virtualization, operating system virtualization, and hardware partitioning), the guest (VM) operating systems, and the guest application software stacks. Selection and tuning of any of these components can have significant effects on the overall performance of the SUT.

The best way to differentiate the performance characteristics of different versions or products for a specific element of a system is to hold all other elements constant and change only the component you are interested in. For example, if you want to see the effects of RAID 5 vs. RAID 10, keep the rest of the server, the virtualization products, and the guest VMs the same, install copies of the VMs on the RAID 5 and RAID 10 storage while keeping other storage elements such as the number of LUNs the same, and run your tests. Similarly, if you want to compare versions of hypervisors, you need to keep the rest of the platform constant. Changing other elements, such as the software running on the VMs, can significantly impact the overall results.

33. What types of virtualization platforms are supported by SPEC virt_sc?

SPEC virt_sc supports hardware virtualization, operating system virtualization, and hardware partitioning. The benchmark does not address multiple host performance or application virtualization.

34. What skills do I need to run SPEC virt_sc?

The documentation assumes that you are familiar with virtualization concepts and implementations. You need experience with the installation, configuration, management, and tuning of your selected hypervisor platform. You must know how to use your virtualization platform to create, administer, and modify virtual machines and to allocate system resources such as CPU, memory, network, and storage.


Product and service names mentioned herein may be the trademarks of their respective owners.

Copyright © 2013-2016 Standard Performance Evaluation Corporation (SPEC).

All rights reserved.