Updated: 20-October-2007
Creating a benchmark to provide a consistent, comparative measure of JMS-based messaging server performance is not simple; the challenges are numerous.
To be useful and reliable, a JMS messaging benchmark must fulfill several
    fundamental requirements. And in the end, you hope you have something
    that is fair, relevant, and understandable, and that will remain relevant
    to the technology for a period of years.
Such is the case with SPECjms2007, which is the first industry-standard
    benchmark designed specifically for enterprise messaging servers based
    on JMS. It provides a consistent workload and performance metrics for
    competitive product comparisons, as well as a framework for in-depth
    performance analysis of enterprise JMS messaging. 
All of this is due to the efforts of a team of people from around the world, whom SPEC would like to thank (with apologies to anyone who may not be listed).
SPECjms2007 was developed by the SPECjms working group, which is part of
    SPEC’s Java subcommittee, with the participation of Technische Universität
    Darmstadt, IBM, Sun, Oracle, BEA, Sybase and Apache.
Thanks to all members of the SPECjms working group, in particular:
Samuel Kounev from TU Darmstadt for serving as project manager, benchmark
    architect and working group chair;

Kai Sachs from TU Darmstadt for serving as benchmark architect, designer
    and lead developer;

Marc Carter from IBM for serving as responsible developer for the design
    and implementation of the driver, automation, and reporting framework;

George Tharakan from Sun Microsystems for his invaluable contributions
    to the specification and design of the workload as well as to the definition
    of the run and reporting rules;

Martin Ross from IBM, Binu John, Eileen Loh, Ken Dyer and Sagar Shirguppi
    from Sun Microsystems, Russell Raymundo and Tom Barnes from BEA, Anoop
    Gupta from Oracle, Sebastian Frischbier from TU Darmstadt and Evan Ireland
    from Sybase for their hard work on testing, profiling and debugging the
    benchmark;

Adrian Co from Apache for his excellent work on implementing the reporter;

Tim Dunn from IBM, Saraswathy Narayan from Oracle and Tom Barnes from
    BEA for their contributions to the specification of the benchmark run
    and reporting rules.
Many thanks also to
Alejandro Buchmann from TU Darmstadt, Lawrence Cullen, Robert Berry, Alan
    Adamson, John Stecher and Matt Hogstrom from IBM, Steve Realmuto from
    BEA and Ricardo Morin from Intel for their continued support of the SPECjms2007
    project.
We would also like to thank the German Research Foundation (Deutsche Forschungsgemeinschaft) for funding Samuel Kounev's work as part of grant No. KO 3445/2-1.
Finally, we thank all of the people behind the scenes in architecture
    groups, product development and performance groups who supported their
    work for SPEC at their respective companies.
    Samuel Kounev, Release Manager, SPEC Java Subcommittee