Brian Stansberry's Blog

February 16, 2010

JBoss Application Server 6.0.0.M2 Is Out!

Filed under: JBoss — Brian Stansberry @ 5:45 pm

It gives me great pleasure to announce that JBoss Application Server 6.0.0.M2, the second milestone release of the AS 6 series, is available for download. This release builds on the first milestone release by adding support for Servlet 3.0 and JPA 2.0. We’ve also integrated the RESTEasy project and upgraded the server core to the JBoss Microcontainer 2.2 series.

Servlet 3.0 improves Java EE by easing development through annotations, adding standardized asynchronous processing support, and improving pluggability and extensibility through a modular web.xml. Rémy Maucherat and the JBoss Web team bring Servlet 3 capabilities to the AS through their integration of JBoss Web 3.0.

JPA 2.0 enhances Java persistence with its increased modeling flexibility, greater object-relational mapping capabilities, new locking options and a criteria query API. The Hibernate project brings JPA 2.0 capabilities to the AS through the integration of their 3.5.0-CR-1 release.

RESTEasy is a JBoss project that provides various frameworks to help you build RESTful Web Services and RESTful Java applications. It is a fully certified and portable implementation of the JAX-RS 1.0 specification and will provide the AS’s support for JAX-RS 1.1 by the final AS 6.0.0 release.

You can find the full release notes here, and the official download location is here.

This release continues with the milestone versioning scheme Jason Greene discussed in his AS 6.0.0.M1 announcement. The milestone model focuses on time constrained releases that each provide a small set of completed features. A series of milestone releases will eventually culminate in the final AS 6.0.0 release. This second milestone is the first AS release that was produced via a truly time-boxed approach. I think it’s been quite a successful and enjoyable way to produce a release.

Many thanks to the entire JBoss AS team for their hard work on this release. Special thanks go to Steve Ebersole and the Hibernate team and Rémy Maucherat and the JBoss Web team for providing such a painless integration of the Servlet 3 and JPA 2 features. Thanks also to Jason Greene for giving me the opportunity to drive this release. It’s been a blast!

Enjoy, and onward to M3!


December 2, 2009

Clustering Features in JBoss Application Server 6.0.0.M1

Filed under: JBoss — Brian Stansberry @ 11:00 am

I’m very happy to report that the first milestone release of the community-driven JBoss Application Server 6 series was released today. It’s available for download, and it includes support for some of the key technologies that are part of the EE 6 specification.

Additional capabilities will be added in future milestones; be sure to keep an eye out for them!

In this post I’ll touch on a couple of the clustering-related features in AS 6.0.0.M1.

Integrated Support for the mod_cluster Load Balancer

The biggest new clustering-related feature in M1 is Paul Ferraro’s addition of out of the box support for mod_cluster. Thanks to Jean-Frederic Clere’s leadership, a lot of hard work from Paul Ferraro and the mod_cluster team, and excellent community feedback, mod_cluster has progressed rapidly since I first blogged about it. It’s always been possible to integrate mod_cluster into JBoss AS, but now Paul has done most of the leg-work for you.

The mod_cluster integration is included in the AS’ default, standard and all profiles. Currently it’s not included in the web profile.

To enable mod_cluster support in the AS, you need to do a couple of things:

  1. Ensure that JBoss Web uses mod_cluster. This is simply a matter of uncommenting the following in the $JBOSS_HOME/server/[profile name]/deploy/jbossweb.sar/META-INF/jboss-beans.xml file.
    <!-- Uncomment to enable mod_cluster integration -->

    In a later AS 6 milestone, our goal is to remove the need for this step.

  2. Configure a unique name (known as a “jvmRoute”) for each back end server in your cluster. This is configured by adding an attribute to the Engine element in server.xml.
    <Engine name="jboss.web" defaultHost="localhost" jvmRoute="node01">

    Instead of hard-coding the jvmRoute in each server’s server.xml, I recommend instead using system property substitution:

    <Engine name="jboss.web" defaultHost="localhost" jvmRoute="${jboss.jvmRoute}">

    This allows you to control the value from the command line:

    $ ./ -Djboss.jvmRoute=node01

    Configuring a jvmRoute is not absolutely required; if one isn’t provided mod_cluster will generate one from the address and port of the JBoss Web Connector used for receiving requests, plus the name of the JBoss Web Engine. Still, configuring a jvmRoute is recommended, since the jvmRoute is appended to all session ids. The generated jvmRoute is lengthy and includes information you may not want to expose to the internet via session ids.
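As a rough, illustrative sketch (plain Java, not mod_cluster’s actual code), here is the difference between a generated and an explicit jvmRoute; the address, port, and Engine name below are assumed example values, and the concatenation format is hypothetical:

```java
// Illustrative sketch only -- not mod_cluster's actual implementation.
// The generated jvmRoute combines the connector's address and port with
// the Engine name, which makes it lengthy and exposes topology details.
public class JvmRouteSketch {
    static String generatedRoute(String address, int port, String engineName) {
        // hypothetical format, e.g. ""
        return address + ":" + port + "-" + engineName;
    }

    public static void main(String[] args) {
        String generated = generatedRoute("", 8009, "jboss.web");
        String explicit = "node01"; // what you'd configure via -Djboss.jvmRoute
        System.out.println("generated: " + generated);
        System.out.println("explicit:  " + explicit);
        // The route is appended to every session id, e.g.:
        System.out.println("session id: abc123." + explicit);
    }
}
```

The explicit route keeps session ids short and avoids leaking the connector address to clients.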

There are a number of cool improvements in the 1.1.0.Beta1 release of mod_cluster that ships with AS 6.0.0.M1. Check out Paul Ferraro’s blog entry to learn more.

HA Web Sessions via Database Persistence

Occasionally we receive input from the JBoss AS user community that they’d like to see an option to use database persistence as a mechanism to make web sessions highly available. The basic use case we hear for this is for environments where sessions need to be available to AS instances located across a WAN. The main (and still recommended) mechanism for making web sessions HA is to use our standard replication-based approach, which uses the JBoss Cache distributed caching library to replicate sessions to other nodes in the cluster. JBoss Cache/JGroups clusters can span a WAN but sometimes users find it impractical to configure their cluster(s) in that way. However, if their IT infrastructure already supports making RDBMS data accessible across the WAN, persisting sessions to the DB makes them available across the WAN.

JBoss 6.0.0.M1 is the first community release that includes support for this feature. I’ve created a wiki page with full details on how to configure the AS and your application for database persistence.

Improvements to Clustered Caching of Frequently Inserted Entity Types

Just a follow up to something I blogged about a couple months ago: a simple change to the Hibernate Second Level Cache integration with JBoss Cache makes it more efficient to cache entity types with frequent INSERTs but infrequent UPDATEs. AS 6.0.0.M1 includes this improvement.

Ok, enough blogging! Gotta get busy on milestone 2!

October 20, 2009

Infinispan-based Hibernate Second Level Cache Provider

Filed under: JBoss — Brian Stansberry @ 10:38 am

My colleague and good friend Galder Zamarreño has announced the availability of a Hibernate Second Level Cache provider that uses Infinispan as the backing cache. Great job, Galder!

I plan to use this Infinispan-based provider as the standard clustered second level caching option in JBoss AS 6.

October 14, 2009

Docs, docs, docs

Filed under: Hibernate,JBoss — Brian Stansberry @ 9:49 am

For the past few weeks, a lot of my time has been spent on documentation work, and a fair bit of new stuff is now available. So, without further ado:

JBoss Application Server 5.1 Clustering Guide

The AS 5.1 Clustering Guide is complete and available. It covers what you need to know to develop, deploy and run clustered applications on JBoss Application Server 5.1. The content in the guide is also correct for AS 5.0.0 and 5.0.1.

Many thanks to Paul Ferraro and Galder Zamarreño for their many contributions to the clustering guide.

Using JBoss Cache as a Hibernate Second Level Cache reference manual

The definitive guide on how to use Hibernate’s Second Level Cache feature in a clustered environment. Describes in detail how to use JBoss Cache as your second level cache provider.

There are currently two versions of this document:

  1. Hibernate 3.5 — Covers the integration of JBoss Cache 3 with the upcoming version of Hibernate.
  2. Hibernate 3.3 — Covers the integration of JBoss Cache 2 or 3 with Hibernate 3.3. The Hibernate 3.3 / JBoss Cache 3 combination is what is used in JBoss AS 5.x and JBoss Enterprise Application Platform 5.

Enjoy! And as always, comments, suggestions and edits are most definitely appreciated.

October 9, 2009

Collection Caching in the Hibernate Second Level Cache

Filed under: Hibernate,JBoss — Brian Stansberry @ 5:44 pm

Sacha Labourey asked a good question in response to my recent post on Improvements to Clustered Caching of Frequently Inserted Entity Types. So rather than responding in detail in the comments there, I’ve decided to take the opportunity to put up a separate post and get a little bit deeper into how the Hibernate Second Level Cache works.

The most common use case for the Second Level Cache is to cache entities. However, the second level cache also allows users to cache entity relationship information. Hibernate provides a “collection cache”, where it caches the primary keys of entities that are members of a collection field in another entity type. Say, for example, we have two entity types, Group and Member, where a Member participates in a many-to-one relationship with a Group:

public class Group {
  private Integer id;
  private String name;
  private Set members = new HashSet();

  public void addMember(Member member) {
    members.add(member);
    member.setGroup(this);
  }

  public void removeMember(Member member) {
    members.remove(member);
    member.setGroup(null);
  }

  // ... getters and setters omitted
}

public class Member {
  private Integer id;
  private String name;
  private Group group;

  public Group getGroup() {
    return group;
  }

  public void setGroup(Group group) { = group;
  }

  // ... other getters and setters omitted
}

If you tell Hibernate to cache Group entities in the second level cache, for each cached Group it will store the values for the “id” and “name” fields. However, it doesn’t by default store the contents of the “members” field. If a Group is read from the second level cache and the application needs to access the members field, Hibernate will go to the database to determine the current members of the collection.

If you want Hibernate to cache the contents of the members field, you need to tell it to do so by adding a “cache” element to the “members” declaration:

<hibernate-mapping package="org.example">
  <class name="Group" table="Groups">
    <cache usage="transactional"/>

    <id name="id"><generator class="increment"/></id>
    <property name="name" not-null="true"/>
    <set name="members" cascade="all" lazy="false">
      <!-- Cache the ids of entities that are members of this collection -->
      <cache usage="transactional"/>
      <one-to-many class="org.example.Member"/>
    </set>
  </class>
</hibernate-mapping>
In a JPA application, the same thing can be accomplished with the @org.hibernate.annotations.Cache annotation on the “members” field:

import javax.persistence.*;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
public class Group {

  @Id
  private Integer id;
  private String name;

  @Cache(usage=CacheConcurrencyStrategy.TRANSACTIONAL)
  @OneToMany(mappedBy="group", fetch=FetchType.EAGER, cascade=CascadeType.ALL)
  private Set members;
}

What happens behind the scenes when this collection is cached?

Well, first off, what is cached? Hibernate caches the primary keys of the entities that make up the collection, not the entities themselves; i.e. there is no Set of Member objects stored somewhere in the second level cache.

Next, where is it cached? Well, exactly where is an implementation detail of the second level cache provider. But the key thing is that collections are stored separately from the rest of the data associated with an entity. So, for a Group with an id of “1”, the values of the “id” and “name” fields will be stored in one area of the cache under key “#1”, while the members collection will be stored in a different area under key “members#1”. (Those keys are just examples; the actual keys are not strings.)

What are the caching semantics? Well, the key one is that collections are never updated in the cache; they are only invalidated out of the cache and then potentially cached again later as the result of another database read. So, if an application called Group.addMember(), Hibernate will remove that group’s membership collection from the cache. If JBoss Cache is the second level cache implementation, that removal will be propagated around the cluster; the collection will be removed from the cache on all nodes in the cluster.

If later the application needs to access the members from that group, another database read will occur and the current set of primary keys for the members will be put into the cache.
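The semantics above can be modeled with a toy sketch (plain Java maps standing in for the real cache regions; the method and region names are my own, not Hibernate’s): entity data and collection contents live in separate regions, and a collection change invalidates, never updates, the cached collection entry.

```java
import java.util.*;

// Toy model of the semantics described above -- not Hibernate's actual code.
// Entity field values and collections of member PKs live in separate regions;
// modifying a collection removes its cache entry but leaves the entity's
// own cached fields untouched.
public class CollectionCacheSketch {
    // region holding entity field values, keyed by entity id
    static final Map<Integer, Map<String, Object>> entityRegion = new HashMap<>();
    // region holding collections of member primary keys, keyed by owner id
    static final Map<Integer, Set<Integer>> collectionRegion = new HashMap<>();

    static void cacheGroup(int groupId, String name) {
        entityRegion.put(groupId, Map.of("id", groupId, "name", name));
    }

    static void cacheMembers(int groupId, Set<Integer> memberIds) {
        // only ever populated as the result of a database read
        collectionRegion.put(groupId, new HashSet<>(memberIds));
    }

    static void onCollectionModified(int groupId) {
        // invalidated, not updated in place; re-cached on the next DB read
        collectionRegion.remove(groupId);
    }

    public static void main(String[] args) {
        cacheGroup(1, "admins");
        cacheMembers(1, Set.of(10, 11));

        onCollectionModified(1); // e.g. Group.addMember() was called

        System.out.println(entityRegion.containsKey(1));     // true: entity still cached
        System.out.println(collectionRegion.containsKey(1)); // false: collection gone
    }
}
```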

Ok, now how about Sacha’s question from my other post? What happens when a new Member is created and associated with a Group whose members collection is cached? As I stated above, Hibernate doesn’t update the collection in the cache, it just removes it. So, we’d expect the collection to be removed. And it should be, but there is an important subtlety that application developers need to be aware of:

Collections are only invalidated from the cache as a result of an operation on the Java object that represents the collection! Performing some Java operation that results in a change in the database whereby a fresh read of the database would add the member to the collection isn’t sufficient. To be very specific, this code will not result in the now out-of-date cached collection being removed from the cache:

  public Member addNewMember(Integer groupId, String memberName, EntityManager em) {
    Group group = em.find(Group.class, groupId);
    Member member = new Member();
    member.setName(memberName);
    member.setGroup(group); // updates the owning side only
    em.persist(member);
    return member;
  }
The above method can leave out-of-date data in the cache. The correct implementation is:

  public Member addNewMember(Integer groupId, String memberName, EntityManager em) {
    Group group = em.find(Group.class, groupId);
    Member member = new Member();
    member.setName(memberName);
    em.persist(member);

    // We need to apprise the group of its new member
    group.addMember(member);

    return member;
  }
The “group.addMember(member)” is what modifies the collection, and that’s what triggers Hibernate to remove the old, outdated set of member PKs from the second level cache.

Of course, not updating both ends of the relationship in the Java code is bad programming practice anyway. But if you are caching collections, be sure to update those collections when you make changes to their membership at the database level.

October 8, 2009

Improvements to Clustered Caching of Frequently Inserted Entity Types

Filed under: Hibernate,JBoss — Brian Stansberry @ 10:07 pm

I made a very simple change today to Hibernate’s integration with JBoss Cache that should have big benefits. Hibernate integrates with JBoss Cache to allow second-level caching of entities, collections and query results in a clustered environment. I often advise people to be cautious about what types of entities they cache in a cluster. A clustered cache differs from a single node cache in that it needs to maintain consistency around the cluster. This means sending messages around the cluster when cache contents change. For entity types with a relatively high percentage of cache writes, the cost of these messages can outweigh the benefits of caching.

For entity caches, by default JBoss Cache is configured to send invalidation messages around the cluster when its contents change. Well, I realized that sending an invalidation message around the cluster when Hibernate has just inserted a newly created entity into the cache is just silly. The entity is brand new; there’s no way another node in the cluster could have a stale version of the entity that needs to be invalidated out of its local cache. Fortunately, the excellent RegionFactory SPI Steve Ebersole introduced in Hibernate 3.3 gives me all the contextual information I need to know that what’s being cached is a newly created entity. And JBoss Cache’s Option.setCacheModeLocal(true) API gives me the power to disable sending out the invalidation message when I put those newly created entities into JBC. Result: with the addition of a few lines of code I can remove these unnecessary messages.
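The reasoning can be modeled with a toy cache (plain Java standing in for the real Hibernate/JBoss Cache integration; the class and counter names are my own): a normal put broadcasts an invalidation message, while a put flagged as local-only, which is the effect of Option.setCacheModeLocal(true), does not.

```java
import java.util.*;

// Toy model of the change described above -- not the actual Hibernate or
// JBoss Cache code. Caching a brand-new entity is done as a local-only put,
// since no other node can hold a stale copy of an entity that did not exist
// until now; only a genuine update needs to invalidate the other nodes.
public class LocalPutSketch {
    static final Map<Object, Object> localCache = new HashMap<>();
    static int invalidationMessagesSent = 0;

    static void put(Object key, Object value, boolean cacheModeLocal) {
        localCache.put(key, value);
        if (!cacheModeLocal) {
            invalidationMessagesSent++; // would go out over the cluster
        }
    }

    public static void main(String[] args) {
        put("Order#1", "new order", true);      // INSERT: no cluster traffic
        put("Order#1", "updated order", false); // UPDATE: invalidate other nodes
        System.out.println(invalidationMessagesSent); // 1
    }
}
```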

What’s the benefit? Basically, a whole new category of entity types can now benefit from caching in a cluster. Types that may have a fairly high percentage of cache writes relative to reads, but where those writes represent a database INSERT, rather than an UPDATE. Imagine for example, a purchasing application, where user activity generates lots of Order and OrderLineItem entity inserts. Once those entities are created, they are unlikely to be changed, but in the course of a user’s interaction with the application there is a high enough likelihood that they will look at the order details again to make caching the entities worthwhile. Prior to today’s change, caching Order and OrderLineItem may not have been performant. Now, if reads of those entities are frequent enough to make caching them worthwhile in a non-clustered environment, it’s likely to be worthwhile in a cluster as well.

As always, load test your application with realistic usage scenarios before and after turning on caching of any entity type, collection or query result.

The JIRA for this change can be found in Hibernate’s JIRA at HHH-4484. The improved behavior will be available in the Hibernate Core 3.5 release.

December 15, 2008

Second Beta Release of the mod_cluster Project

Filed under: JBoss — Brian Stansberry @ 8:44 pm

Last Friday the 1.0.0.Beta2 release of mod_cluster came out. Props to Paul Ferraro and Jean-Frederic Clere for their hard work on this release. And much thanks to the community for the input you gave us on the first beta. Keep it coming!

Get it here:

Change log:

November 6, 2008

First Beta Release of the mod_cluster Project

Filed under: JBoss — Brian Stansberry @ 6:08 am

On behalf of the teams working on the JBoss Web and JBoss AS Clustering projects, I’m very pleased to announce the 1.0.0.Beta1 release of the mod_cluster project.

For full details on the project, please see the mod_cluster project page. Downloads are available at the project download page.

Like mod_jk and mod_proxy, mod_cluster is an httpd-based load balancer that can proxy requests to a cluster of Tomcat-based webservers (either standalone Tomcat, standalone JBoss Web or JBoss AS’s embedded JBoss Web). Where mod_cluster differs from mod_jk and mod_proxy is that it provides a back channel from the webservers back to the httpd servers. The webservers use this back channel to provide information to the httpd-side about their current state. The use of this back channel provides a number of advantages:

  • Dynamic configuration of httpd workers. No more tedious listing of static cluster topologies in the httpd-side configuration files, and no more having to remember to update them before adding a new node to a cluster. With mod_cluster, configuration is done on the application server side.  Start a new app server and it registers with the httpd-side, informing it of its configuration. Deploy a war and the app server notifies the httpd-side that URLs for the war’s hostname and context path can be routed to that server. Repetitious configuration is nearly eliminated, since the app server already knows most values, e.g. the address and port of the AJP connector.
  • Server-side load balance factor calculation.  A load balancer like mod_jk or mod_proxy_balancer can only base its load balancing decisions on information available on the load balancer side, e.g. how many requests it has passed to each node and how many sessions were associated with those requests. If the cluster is using more than one load balancer, each has no idea about the load being proxied by the others. And none have any knowledge of critical server-side information like CPU or heap utilization. With mod_cluster, the app servers periodically examine their running condition and tell the httpd-side the proportional load each should bear. And a configurable, pluggable set of load metrics gives great flexibility to admins in deciding what runtime metrics should drive the load balancing decision.
  • Fine grained web-app lifecycle control.  Traditional httpd-based load balancers do not handle web application undeployments particularly well. From the proxy’s perspective a backend server can handle a given URL if the URL is in the static global configuration and the server’s AJP connector is functioning. So, if you undeploy a war from a running server, the load balancer will continue to route requests for that war’s URLs to that server, leading to 404 errors. In mod_cluster, each server forwards any web application context lifecycle events (e.g. web-app deploy/undeploy) to the proxy informing it to start/stop routing requests for a given context to that server. Requests are stopped before the application undeploys. No more 404s.
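The server-side load calculation described in the second point can be sketched roughly as follows; this is an illustrative formula of my own, not mod_cluster’s actual algorithm or metric set:

```java
// Illustrative sketch only -- not mod_cluster's actual load algorithm.
// Each node turns a runtime metric (here just CPU utilization) into a
// load factor it reports to the httpd side; the balancer then routes
// proportionally more traffic to nodes reporting more spare capacity.
public class LoadFactorSketch {
    // Map spare capacity onto a 1..100 factor (higher = can take more load).
    static int loadFactor(double cpuUtilization) {
        int factor = (int) Math.round(100 * (1.0 - cpuUtilization));
        return Math.max(1, Math.min(100, factor));
    }

    public static void main(String[] args) {
        System.out.println(loadFactor(0.20)); // lightly loaded node -> 80
        System.out.println(loadFactor(0.95)); // heavily loaded node -> 5
    }
}
```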

An added benefit of mod_cluster is that, unlike mod_jk, use of the AJP protocol between httpd and the back-end servers will be optional. The httpd connections to application server nodes can use HTTP, HTTPS, or AJP. (Note that in this first beta release, testing has been focused on AJP.)

I encourage you to give mod_cluster a try and to give us your feedback. As with all open source projects, community feedback is essential. The best place for feedback is the mod_cluster user forum.

Many thanks to Jean-Frederic Clere, Paul Ferraro and Rémy Maucherat for their hard work on this release; the vast majority of the credit goes to them.

