
Standard Performance Evaluation Corporation

SPECmail2009 Release 1.0

Run and Reporting Rules

Metrics: SPECmail_Ent2009 - Mailserver Enterprise 2009
SPECmail_Ent2009Secure - Mailserver Enterprise 2009 with Transport Security
Document version: 1.0
Last modified: Feb 2009

Sections:

Introduction

Run Rules

Reporting Rules

Results Disclosures

Fair Results Usage

Submission

Benchmark Kit

1.0 Introduction

This document specifies how SPECmail2009 is to be run for measuring and publicly reporting performance results. These rules abide by the norms laid down by SPEC.  The rules ensure that results generated with this suite are meaningful, comparable to other generated results, and are repeatable (with documentation covering factors pertinent to duplicating the results).

Per the SPEC license agreement, all results publicly disclosed must adhere to these Run and Reporting Rules.

1.1 Philosophy

SPEC believes the user community will benefit from an objective series of tests, which can serve as a common reference and be considered as part of an evaluation process.

SPEC is aware of the importance of optimizations in producing the best system performance. SPEC is also aware that it is sometimes hard to draw an exact line between legitimate optimizations that happen to benefit SPEC benchmarks and optimizations that specifically target the SPEC benchmarks. SPEC wants to increase the awareness of implementers and end users of the issues surrounding unwanted benchmark-specific optimizations that would be incompatible with SPEC's goal of fair benchmarking.

SPEC expects that any public use of results from this benchmark suite shall be for Systems Under Test (SUTs) and configurations that are appropriate for public consumption and comparison. Thus, it is required that:

  • Hardware and software used to run this benchmark must provide a suitable environment for supporting Internet mail transmission using standardized email protocols.
  • Optimizations utilized must improve performance for a larger class of workloads than just the ones defined by this benchmark suite. There must be no benchmark-specific optimizations.
  • The SUT and configuration must be generally available, documented, supported, and encouraged by the providers.

To ensure that results are relevant and publishable, SPEC expects that the hardware and software implementations used for running the SPEC benchmarks adhere to the following conventions:

  • Proper use of the SPEC benchmark tools as provided.
  • Availability of an appropriate full disclosure report.
  • Support for all of the appropriate protocols.

1.2 Caveat

SPEC reserves the right to investigate any case where it appears that these guidelines and the associated benchmark run and reporting rules have not been followed for a published SPEC benchmark result.  SPEC may request that the result be withdrawn from the public forum in which it appears and that the benchmarker correct any deficiency in product or process before submitting or publishing future results.

SPEC reserves the right to adapt the benchmark codes, workloads, and rules of SPECmail2009 as deemed necessary to preserve the goal of fair benchmarking. SPEC will notify members and licensees if changes are made to the benchmark and will rename the metrics (e.g. from SPECmail_Ent2009 to SPECmail_Ent2009a).

Relevant standards are cited in these run rules as URL references, and are current as of the date of publication. Changes or updates to these referenced documents or URLs may necessitate repairs to the links and/or amendment of the run rules. The most current run rules will be available at the SPEC web site at http://www.spec.org. SPEC will notify members and licensees whenever it makes changes to the documentation.

2.0 Run Rules

The production of any compliant SPECmail2009 test results requires that the tests be run in accordance with these run rules. These rules relate to the requirements for the System Under Test (SUT) and the test bed (i.e. SUT, clients, and network), including protocols, operation, configuration, test staging, optimizations, and measurement.

2.1 Protocols

As Internet email is defined by its protocol definitions, SPECmail2009 requires adherence to the relevant protocol standards:

  RFC   821 : Simple Mail Transfer Protocol (SMTP)
  RFC 2060 : Internet Message Access Protocol - Version 4rev1 (IMAP4)

The SMTP and IMAP4 protocols imply the following:

  RFC   791 : Internet Protocol (IPv4)
  RFC   792 : Internet Control Message Protocol (ICMP)
  RFC   793 : Transmission Control Protocol (TCP)
  RFC   950 : Internet Standard Subnetting Procedure
  RFC 1122 : Requirements for Internet Hosts - Communication Layers
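
For illustration, a minimal SMTP exchange of the kind governed by RFC 821 looks like the following; the host names and mailbox addresses are hypothetical and are not part of the benchmark definition:

    S: 220 sut.example.com ESMTP service ready
    C: HELO client1.example.com
    S: 250 sut.example.com
    C: MAIL FROM:<user000001@sut.example.com>
    S: 250 OK
    C: RCPT TO:<user000002@sut.example.com>
    S: 250 OK
    C: DATA
    S: 354 Start mail input; end with <CRLF>.<CRLF>
    C: (message headers and body, terminated by a line containing only ".")
    S: 250 OK
    C: QUIT
    S: 221 sut.example.com Service closing transmission channel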

The benchmarker may choose to run SPECmail2009 using secure transport channels between the simulated clients and the mail server. In that case, SPECmail2009 requires adherence to these additional standards:

  RFC 2595 : Using TLS with IMAP, POP3 and ACAP
  RFC 3207 : SMTP Service Extension for Secure SMTP over Transport Layer Security
  RFC 4616 : The PLAIN Simple Authentication and Security Layer (SASL) Mechanism
  RFC 4954 : SMTP Service Extension for Authentication

Specifically, SPECmail2009 uses the TLSv1 protocol with the cipher suite SSL_RSA_WITH_RC4_128_MD5.
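
For illustration only, a minimal Java (JSSE) sketch that restricts a client connection to TLSv1 with that cipher suite is shown below. The host name and port are hypothetical, and this is not the benchmark's own implementation; the benchmark tools negotiate STARTTLS on the standard SMTP and IMAP ports themselves:

    import java.io.IOException;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class TlsCipherExample {
        public static void main(String[] args) throws IOException {
            SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
            // Hypothetical SUT host and port for a TLS-wrapped IMAP connection.
            SSLSocket socket = (SSLSocket) factory.createSocket("sut.example.com", 993);
            // Restrict the connection to TLSv1 and the required cipher suite.
            socket.setEnabledProtocols(new String[] { "TLSv1" });
            socket.setEnabledCipherSuites(new String[] { "SSL_RSA_WITH_RC4_128_MD5" });
            socket.startHandshake();   // fails if the server cannot agree on these settings
            socket.close();
        }
    }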

Internet standards are evolving standards. Adherence to related RFCs (e.g. RFC 1191, Path MTU Discovery) is also acceptable provided the implementation retains the characteristic of interoperability with other implementations.

2.2 General Availability

The entire test bed (SUT, clients, and network) must consist of components that are generally available, or that will be generally available within 3 months of the first publication of the results. For more detailed information on the reporting rules, please refer to Section 3.3.

Products are considered generally available if they are orderable by ordinary customers and ship within a reasonable time frame. This time frame is a function of the product size and classification, and common practice. Some limited quantity of the product must have shipped on or before the close of the stated availability window. Shipped products do not have to match the tested configuration in terms of CPU count, memory size, and disk count or size, but the tested configuration must be available to ordinary customers. The availability of support and documentation for the products must coincide with the release of the products.

Hardware products that are still supported by their original or primary vendor may be used if their original general availability date was within the last five years. The five-year limit is waived for hardware used in client systems.

Software products that are still supported by their original or primary vendor may be used if their original general availability date was within the last three years.

In the disclosure, the benchmarker must identify any component that is no longer orderable by ordinary customers.

2.3 Stable Storage

The SUT must utilize stable storage for the mail store. Mail servers are expected to safely store any email they have accepted until the recipient has disposed of it. To do this, mail servers must be able to recover the mail store without loss from multiple power failures (including cascading power failures), operating system failures, and hardware failures of components (e.g. CPU) other than the storage medium. At any point where the data can be cached, after the server has accepted the message and acknowledged its receipt, there must be a mechanism to ensure that any cached message survives a server failure.

  • Examples of stable storage include:
    • Media commit of data; i.e. the message has been successfully written to the disk media.
    • An immediate reply disk drive with battery-backed on-drive intermediate storage or an uninterruptible power supply (UPS).
    • Server commit of data with battery-backed intermediate storage and recovery software.
    • Cache commit with UPS.
  • Examples which are not considered stable storage:
    • An immediate reply disk drive without battery-backed on-drive intermediate storage or UPS.
    • Cache commit without UPS.
    • Server commit of data without battery-backed intermediate storage and recovery software.

If a UPS is required by the SUT to meet the stable storage requirement, the benchmarker is not required to perform the test with a UPS in place.  The disclosure must state that a UPS is required. Supplying a model number for an appropriate UPS is encouraged but not required.

If a battery-backed component is used to meet the stable storage requirement, that battery must have sufficient power to maintain the data for at least 48 hours to allow any cached data to be committed to media and the system to be gracefully shut down. The system or component must also be able to detect a low battery condition and prevent the use of the component or provide for a graceful system shutdown.

2.4 Single Logical Server

The SUT must present to mail clients the appearance and behavior of a single logical server for each protocol. Specifically, the SUT must present a single system view, in that the results of any mail transaction from a client that change the state on the SUT must be visible to any/all other clients on any subsequent mail transaction. For example, if User_1 has 10 mail messages in his mailbox on the SUT, then that user could read those 10 messages from any client system.

2.5 Mail Server Logging

For a run to be valid, the following attributes related to logging must hold true:

  • The mail server must make at least one entry into a log file for each SMTP and IMAP session initiated.  The log entry must include, at a minimum, the following fields (an illustrative example follows this list):
     
    • SMTP:
        time stamp  (month / day / hour / minute / sec)
        message identifier of the transferred message
    • IMAP:
        time stamp  (month / day / hour / minute / sec)
        user identifier
  • The log file records do not have to be synchronously committed to storage, but must be scheduled for non-volatile storage within 60 seconds.
  • The server must maintain the log for the entire duration of the run.
  • A binary format may be used for logging; however, ASCII translation is required when providing log files as part of the full disclosure materials.
  • All logs from the SMTP and IMAP servers generated during the submitted benchmark run should be retained until the results are accepted.  These substantiate the actual workloads and can be requested during the review process.
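
For illustration, a pair of log entries containing the required minimum fields might look like the following. The exact format is left to the mail server product; these lines are purely hypothetical and show only the required information:

    SMTP  02/12 14:31:05  accepted message id <000123.456789@sut.example.com>
    IMAP  02/12 14:31:07  session opened for user user000123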

2.6 Networking

For a run to be valid, the following attributes that relate to TCP/IP network configuration must hold true:

  • Since SPECmail2009 is a representation of enterprise mail servers, the network is not constrained by any external Internet connectivity limits, so the TCP Maximum Segment Size limitation from the original SPECmail2001 does not apply.
     
  • The value of TIME_WAIT must be at least 60 seconds.
  • On those systems that do not dynamically allocate TCP TIME_WAIT table entries, the appropriate system parameter must be configured to ensure that user ports are not reused before the TIME_WAIT period expires. This would be set on a per-client and per-server-node basis as applicable.
  • As a basis for calculation, the OS and e-mail server configuration should support 5 IMAP sessions/hour per SPECmail_Ent2009 or SPECmail_Ent2009Secure user. So for 100 SPECmail_Ent2009 users, there must be at least 300 TIME_WAIT table entries.

Note: SPEC intends to follow relevant standards wherever practical, but with respect to this performance-sensitive parameter it is difficult due to ambiguity in the standards. RFC 1122 requires that TIME_WAIT be 2 times the maximum segment life (MSL) and RFC 793 suggests a value of 2 minutes for MSL. So TIME_WAIT itself is effectively not limited by the standards. However, current TCP/IP implementations define a de facto lower limit for TIME_WAIT of 60 seconds, which is the value used in most BSD-derived UNIX implementations.

2.7 Initializing and Running the Benchmark

To make an official SPECmail2009 test run, the benchmarker must perform the following steps:

  1. Pre-populate the clean mail store on the server. This can be accomplished by either:
    • Running the initialization sequence (java specimap -init) as described in the User Guide, or
    • Restoring an archive of the mail store created after the successful completion of the initialization sequence described above.
  2. Start the SPECmail2009 test using the default options required for a compliant test (see the example invocations below).
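
As a sketch, assuming the command invocations referenced in the User Guide and in section 2.11, a complete test sequence might therefore look like:

    java specimap -init         (populate the clean mail store on the server)
    java specimap -compliant    (run the benchmark with the default, compliant settings)

The exact command lines and any additional options are defined by the User Guide; the lines above are illustrative only.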

2.8 Optimization

Benchmark specific optimization is not allowed. Any optimization of either the configuration or software used on the SUT must improve performance for a larger class of workloads than that defined by this benchmark and must be supported and recommended by the provider. Optimizations that take advantage of the benchmark's specific features are forbidden. Examples of inappropriate optimization include, but are not limited to, taking advantage of specially formed test user account names, the fixed set of message sizes in the workload, or the workload's mailbox sizes.

2.9 Measurement

The provided SPECmail2009 tools must be used to run and produce the measured SPECmail_Ent2009 or SPECmail_Ent2009Secure results.  The SPECmail_Ent2009 and SPECmail_Ent2009Secure metrics are functions of the SPECmail2009 Enterprise workload, the associated mail store and the defined Quality of Service criteria. SPECmail_Ent2009 and SPECmail_Ent2009Secure results are not comparable to any other mail server performance metric, including each other.

2.9.1 Metric

SPECmail2009 expresses performance in terms of SPECmail_Ent2009 or SPECmail_Ent2009Secure IMAP Sessions per Hour. The benchmarker specifies the number of users for which the benchmark tools will generate a workload. The load generators present a predefined mixture of SMTP and IMAP4 transactions to the E-mail server. Each SPECmail_Ent2009/SPECmail_Ent2009Secure IMAP user is represented by 1 or more SPECmail2009 Command sequences, in pre-defined combinations and durations during the peak hour. In addition to the SPECmail_Ent2009/SPECmail_Ent2009Secure metric, the benchmark also reports the configured number of SPECmail2009 users.

The SPECmail_Ent2009Secure metric is just like the SPECmail_Ent2009 metric except that the former employs transport security (STARTTLS) on all IMAP and SMTP connections to the SUT, while the latter does not. The SPECmail_Ent2009Secure metric is new in SPECmail2009.

2.9.2 Workload

The SPECmail_Ent2009/SPECmail_Ent2009Secure profile requires that the SUT handle a selection of IMAP transactions and SMTP incoming messages for each IMAP user. The IMAP commands include LOGIN, FETCH, LIST, APPEND, SELECT, STORE and EXPUNGE, among others. SMTP incoming messages not intended for local users are relayed as outgoing SMTP messages. The workload parameters required for a valid run are contained in the default workload parameter file supplied with the benchmark. A detailed explanation of the workload is included in the SPECmail2009 Architecture White Paper.

2.9.3 Mail Store

It is the benchmarker's responsibility to ensure that the messages that make up the mail store are placed on the SUT so that they can be accessed properly by the benchmark. These folders and messages shall be used as the target working set. The benchmark performs internal validations to verify the expected results. No modification or bypassing of this validation is allowed.

The benchmark determines the initial working set size for the test as a function of the number of IMAP4 users specified for the test, the message size distribution, and the mailbox size distribution. Use the following rule to estimate the raw byte count needed for the data working set:

  • Working_Set_Size = 160 MB * UserCount

The actual size of the mail store and the amount of disk space needed to hold it are a function of the E-mail server product used and any additional storage overhead needed or configured. Another 10% should be added to the total storage space to accommodate the fluctuations in the workload.
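
As a worked example under this rule, a hypothetical 4,000-user test would need roughly:

    Working_Set_Size = 160 MB * 4,000 users = 640,000 MB (about 625 GB)
    Provisioned space = 625 GB * 1.10 = approximately 690 GB, plus any product-specific overhead

The 4,000-user count is arbitrary and is chosen only to illustrate the arithmetic.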

The benchmarker is responsible for configuring the SUT with the corresponding number of user accounts and mailboxes required for the test. The benchmark suite provides tools for the initial population of the mail store.

Since the working set is not static and changes over the course of the test as messages are added or deleted, it is permitted for the benchmarker to capture the mail store image after the tools have created the initial population but before running any load tests (see section 2.7).

2.9.4 Quality of Service Criteria

The SPECmail2009 benchmark has specific Quality of Service (QoS) criteria for response times, delivery times and error rates. The QoS criteria are checked by the benchmark tools.

  • SPECmail2009 requires that for each request type except SMTP-DATA commands, 95% of all response times must be less than 5 seconds (see the illustrative check following this list).
  • SPECmail2009 requires that 95% of all messages to local users be delivered to the target mailbox within 60 seconds.
  • SPECmail2009 requires that 95% of all messages to remote mail users must be received by the mail server (sink) within the measurement period.
  • SPECmail2009 requires that not more than 1% of transactions fail.
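
For illustration only, the first criterion could be checked with a simple nearest-rank percentile calculation. The sketch below is not the benchmark's own validation code, and the sample values are hypothetical:

    import java.util.Arrays;

    public class QosCheck {
        // Returns true if at least 95% of the response times (in milliseconds)
        // are below the given limit, using the simple nearest-rank method.
        static boolean meetsNinetyFifthPercentile(long[] responseTimesMs, long limitMs) {
            long[] sorted = responseTimesMs.clone();
            Arrays.sort(sorted);
            int rank = (int) Math.ceil(0.95 * sorted.length);   // 1-based nearest-rank index
            return sorted[rank - 1] < limitMs;
        }

        public static void main(String[] args) {
            long[] samples = { 1200, 800, 4300, 950, 2100 };    // hypothetical response times (ms)
            System.out.println(meetsNinetyFifthPercentile(samples, 5000));  // prints: true
        }
    }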

2.9.5 IMAP4 Autologout Timer

According to RFC 2060, the IMAP4 RFC:

An IMAP4 server MAY have an inactivity autologout timer. Such a timer MUST be at least 30 minutes duration. The receipt of any command from the client during that interval should suffice to reset the autologout timer.  When the timer expires, the session does NOT enter the UPDATE state--the server should close the TCP connection without expunging any messages or sending any response to the client. Messages marked DELETED will remain in the server until another IMAP session issues the EXPUNGE command.

2.10 Load Generators

The SPECmail2009 benchmark requires the use of one or more client systems. One client system is designated as the prime client and will run the benchmark manager. One or more client systems act as load generators. One client system is designated as the smtpsink to handle the e-mails sent to remote addresses. Please refer to the User Guide for more detail on these roles.

A server component of the SUT must not be used as a load generator or a smtpsink when testing to produce valid SPECmail2009 results. A server component may be used as the prime client, but this is not recommended.

The client systems must have a Java Runtime Environment (JRE) version 1.5 or higher installed in order to run the benchmark tools.

2.11 SPECmail2009 Parameters

The SPECmail2009 benchmark provides three (3) parameter files that contain the testbed configuration and workload parameters.

The file SPECimap_sysinfo.rc contains the site-specific information that appears in the final report.

The file SPECimap_config.rc contains the testbed (clients and SUT) configuration information and should be modified to describe the test environment. This data also appears in the final report.

The file SPECimap_fixed.rc contains the default workload parameters used to produce a compliant test result. This file must not be altered. Modifying the SPECimap_fixed.rc will not prevent the benchmark from running, but the results generated using the modified SPECimap_fixed.rc file will always be marked non-compliant.

To help ensure that the content of the parameter files is correct and can be used to produce a compliant test run, benchmarkers are encouraged to invoke the java specimap command with the -compliant switch. If there are problems in the rc files, the benchmark will generate appropriate warning messages but will continue running the compliant test.

The SPECmail2009 User Guide provides detailed documentation on the parameters in the SPECimap_config.rc, SPECimap_sysinfo.rc and SPECimap_fixed.rc files.
 


3.0 Reporting Rules

In order to publicly disclose SPECmail2009 results, the benchmarker must adhere to these reporting rules in addition to having followed the run rules above. The goal of the reporting rules is to ensure the SUT and testbed are sufficiently documented such that someone could understand the results and reproduce the test.

3.1 Metrics And Result Reports

The benchmark single figure of merit, SPECmail_Ent2009 Sessions per Hour or SPECmail_Ent2009Secure Sessions per Hour, is the throughput measured during the run at the 100% load level.

The report of results for the SPECmail2009 benchmark is generated in HTML by the provided SPEC tools. These tools may not be changed, except for portability reasons with prior SPEC approval. The tools perform error checking and will flag some error conditions resulting in an "invalid result". However, these automatic checks are only there for debugging convenience and do not relieve the benchmarker of the responsibility to check the results and follow the run and reporting rules.

The section of the output.raw file that contains actual test measurements must not be altered. Corrections to the SUT descriptions may be made as needed to produce a properly documented disclosure.

3.2 Results Disclosure and Usage

SPEC requires that each licensee test location (city, state/province and country) measure and submit a single compliant result for review, and have that result accepted, before publicly disclosing or representing as compliant any SPECmail2009 result. Only after acceptance of a compliant result from that test location by the subcommittee may the licensee publicly disclose any future SPECmail2009 result produced at that location in compliance with these run and reporting rules, without acceptance by the SPECmail subcommittee. The intent of this requirement is that the licensee test location demonstrates the ability to produce a compliant result before publicly disclosing additional results without review by the subcommittee.

SPEC encourages the submission of results for review by the relevant subcommittee and subsequent publication on SPEC's web site. Licensees who have met the requirements stated above may publish compliant results independently; however, any SPEC member may request a full disclosure report for that result and the test sponsor must comply within 10 business days. Issues raised concerning a result's compliance with the run and reporting rules will be taken up by the relevant subcommittee regardless of whether or not the result was formally submitted to SPEC.

Any test result not in full compliance with the run and reporting rules must not be represented using either the SPECmail2009 SPECmail_Ent2009 metric name or the SPECmail2009 SPECmail_Ent2009Secure metric name.

The metrics SPECmail_Ent2009 Sessions per Hour and SPECmail_Ent2009Secure Sessions per Hour must not be associated with any estimated results. This includes adding, multiplying or dividing measured results to create a derived metric.

Submissions must include the Submission File, a Configuration Diagram, and the Full Disclosure Archive for the run.

3.2.1 Fair Use of SPECmail2009 Results

Any public use of SPECmail2009 results must, at the time of publication, adhere to the then-currently-posted version of SPEC's Fair Use Rules (http://www.spec.org/fairuse.html).

When competitive comparisons are made using SPECmail2009 benchmark results available from the SPEC web site, SPEC requires that the following template be used:

SPECmail2009 is a trademark of the Standard Performance Evaluation Corp. (SPEC). Competitive numbers shown reflect results published on www.spec.org from date to date. [The comparison presented is based on basis for comparison.] For the latest SPECmail2009 results visit http://www.spec.org/osg/mail2009.

Notes:

  • The reported dates must cover the period in which the competitive results were published on the SPEC web site.
  • The bracketed phrase above ([...]) is required only if selective comparisons are used.

Example:

SPECmail2009 is a trademark of the Standard Performance Evaluation Corp. (SPEC). Competitive numbers shown reflect results published on www.spec.org from Jan 12 to Mar 31, 2009. The comparison presented is based on best performing 4-cpu servers currently shipping by Vendor 1, Vendor 2 and Vendor 3. For the latest SPECmail2009 results visit http://www.spec.org/osg/mail2009.   

The rationale for the template is to provide fair comparisons by ensuring that:

  • The time period when the competitive data was published is clearly mentioned.
  • The subset of results used for comparison is clearly defined.

Test results that have not been accepted and published by SPEC must not be publicly disclosed except as noted in Section 3.2.2 Research and Academic Usage. Research and academic usage test results that have not been accepted and published by SPEC must not use the SPECmail2009 metrics, SPECmail_Ent2009 and SPECmail_Ent2009Secure.

3.2.2 Research and Academic Usage of SPECmail2009

SPEC encourages use of the SPECmail2009 benchmark in academic and research environments. It is understood that experiments in such environments may be conducted in a less formal fashion than that required of licensees submitting to the SPEC web site or otherwise disclosing valid SPECmail2009 results.

For example, a research environment may use early prototype hardware that simply cannot be expected to stay up for the length of time required to run the entire benchmark, or may use research software that is unsupported and not generally available. Nevertheless, SPEC encourages researchers to obey as many of the run rules as practical, even for informal research. SPEC suggests that following the rules will improve the clarity, reproducibility, and comparability of research results. Where the rules cannot be followed, SPEC requires the results be clearly distinguished from fully compliant results such as those officially submitted to SPEC, by disclosing the deviations from the rules and avoiding the use of the SPECmail2009 metric name.

3.3 Testbed Configuration Disclosure

The system configuration information that is required to duplicate published performance results must be reported. This list is not intended to be all-inclusive, nor is each performance-neutral feature in the list required to be described. The rule is: if it affects performance or the feature is required to duplicate the results, then it must be described.

Any deviations from the standard default configuration for the SUT must be documented, so an independent party would be able to reproduce the result without further assistance.

For most of the following configuration details, there is an entry in the configuration file, and a corresponding entry in the tool-generated HTML result page. If information needs to be included that does not fit into these entries, the Notes sections must be used.

3.3.1 SUT Hardware

The following SUT hardware components must be reported:

  • Vendor's name
  • System model number, type and clock rate of processor, number of processors, and main memory size
  • Size and organization of primary, secondary, and other cache, per processor. If a level of cache is shared among processors in a system, it must be stated in the notes section of the disclosure.
  • Memory configuration options, if they affect performance, e.g. interleaving and access time
  • Other hardware, e.g. write caches, or other accelerators
  • Number, type, model, and capacity of disk controllers and drives
  • Disk subsystem configuration details, if they affect performance

3.3.2 SUT Software

The following SUT software components must be reported:

  • Mail Server software and version
  • Operating system and version
  • Type of file system
  • The values of maximum segment life (MSL) and TIME_WAIT. If TIME_WAIT is not equal to 2*MSL, that must be noted. (Reference section 4.2.2.13 of RFC 1122).
  • Any other software packages used during the benchmarking process
  • Other clarifying information as required to reproduce benchmark results; e.g. number of daemons, server buffer cache size, disk striping, non-default kernel parameters, and logging mode
  • Additionally, the submitter must make available a description of the tuning features that were utilized; e.g. kernel parameters and software settings, including the purpose of that tuning feature. Where possible, it must be noted how the values used differ from the default settings for that tuning feature. This disclosure can be part of the Notes sections or a separate document.

3.3.3 Network Configuration

A brief description of the network configuration used to achieve the benchmark result is required. The minimum information to be supplied is:

  • Number, type, and model of network controllers
  • Number and type of networks used
  • Base speed of network
  • A network configuration notes section may be used to list the following additional information:
    • Number, type, model, and relationship of external network components to support the SUT (e.g., any external routers, hubs, switches, etc.)
    • Relationship of clients, client type, and networks (including routers, hubs, switches, etc.), i.e. which clients are connected to which LAN segments. For example: "clients 1 and 2 on one ATM-622, clients 3 and 4 on second ATM-622, and clients 5, 6, and 7 each on their own 100TX segment."

3.3.4 Client Systems

The following client system properties must be reported:

  • Number of client systems
  • System model number, processor type and clock rate, number of processors
  • Main memory size
  • Network Controller
  • Operating System and Version
  • JRE version used to run the benchmark (i.e. invoke specimap and specimapclient)
  • Any non-default parameters (e.g. email, OS, and Network tuning parameters)

3.3.5 Configuration Diagram

A configuration diagram of the SUT must be provided in a common graphics format (e.g. PNG, JPEG, GIF). This will be included in the HTML formatted results page. An example would be a line drawing that provides a pictorial representation of the SUT including the network connections between clients, server nodes, switches and the storage hierarchy and any other complexities of the SUT that can best be described graphically.

3.3.6 General Availability Dates

The dates of general customer availability must be listed for the major components: hardware, mail server software, and operating system, by month and year. All the system, hardware and software features are required to be available within 3 months of the first publication of the result. The overall hardware availability date must be the latest of the hardware availability dates. The overall software availability date must be the latest of the software availability dates.

If pre-release hardware or software is used, then the test sponsor represents that the performance measured is the performance to be expected on the same configuration of the release system. If the test sponsor later finds the performance has dropped by more than 5% of that reported for the pre-release system, then the test sponsor must resubmit a corrected test result.

For additional information on general availability requirements, please refer to section 2.2 above.

3.3.7 Test Sponsor

The reporting page must list:

  • Organization which is reporting the results
  • SPEC license number of that organization
  • Date the test was performed, by month and year

3.3.8 Disclosure Notes

The Notes section is used to document information such as:

  • System tuning parameters other than default
  • Process tuning parameters other than default
  • Network type (e.g. 10BaseT, token ring, 10 Gbit Ethernet, etc.)
  • MTU size of the network used
  • Background load, if any
  • Any approved portability change made to the individual benchmark source code including module name, line number of the change
  • Information such as compilation options must be listed if the end user is required to build the server software from sources
  • Critical customer-identifiable firmware or option versions such as network and disk controllers
  • Additional important information required to reproduce the results from other reporting sections that require a larger text area
  • Any supplemental drawings or detailed written descriptions, or pointers to same, that may be needed to clarify some portion of the SUT
  • Definitions of tuning parameters may be included or a pointer supplied to a separate document
  • Part numbers or sufficient information that would allow the end user to order the SUT configuration if desired
  • Identification of any components used that are supported but are no longer orderable by ordinary customers

3.4 Mail Server Log File Review

The following additional information must be provided if requested for SPEC's results review:

  • ASCII versions of the SMTP and IMAP log files from the SUT

4.0 Submission Requirements for SPECmail2009

Once the test sponsor has a compliant run and wishes to submit it to SPEC for review, they will need to provide the following:

  • The output.raw file containing the information outlined in section 3
  • File containing the configuration diagram of the SUT in a common Internet format - GIF, JPEG, PNG, BMP, PDF

Note: Sometimes the submission needs to include supplemental information that does not fit within the report format, such as unusual setup/tuning needs or additional configuration information that helps explain the SUT. These additional files should be given to SPEC by special arrangement with the SPEC office staff. They should not be included in the submission e-mail message, because they will be stripped during results extraction.

Combine the output.raw and the configuration diagram into a zip or .tar.gz file, attach the zip or .tar.gz file to a message, then e-mail the message to submail2009@spec.org to begin the submission process.

Retain the following for possible request during the review:

  • The SUT's SMTP and IMAP log files from the run in ASCII format

SPEC encourages the submission of results for review by the relevant subcommittee and subsequent publication on SPEC's web site.  Vendors may publish compliant results independently as long as said results have been reviewed and accepted by the related SPEC sub-committee.


5.0 SPECmail2009 Benchmark Kit

SPEC provides client driver software, which includes the tools for running the benchmark and reporting its results. This software includes a number of checks for conformance with these run and reporting rules.

The client driver software is provided as Java bytecode and may be used to produce publishable SPECmail2009 results.  SPEC requires the user to provide any other software needed to run the benchmark, e.g. OS and JRE.

The kit also includes the SPECimap_config.rc, SPECimap_sysinfo.rc and SPECimap_fixed.rc files described above and a copy of the benchmark documentation (User Guide, Architecture White Paper, FAQ, and Run and Reporting Rules).

Licensees will be notified of any significant updates to the benchmark tools or documentation.  Updated versions of the documentation will be available at http://www.spec.org/mail2009.


Copyright © 2001-2009 Standard Performance Evaluation Corporation

All Rights Reserved