Category archive: Projects


Getting into CXF

For my new job (I did not blog about this one yet, but more information will follow soon) I am currently investigating a couple of EAI frameworks. One of them is Apache CXF. For this investigation, I am implementing a very easy task, following an easy tutorial (Creating a REST service with CXF and Spring in 10 minutes). Well, to be totally honest, it took me more than 10 minutes to get this up and running. Of course, this is mainly because I wanted to gather some more information about CXF and used my own services.
The first issue I stumbled upon was the error message „no resource classes found“. The problem was that my resource service (call it controller or whatever) implemented an interface, and the JAX-RS annotations were defined on the interface instead of on the concrete class. This did not work. Now I define all the annotations on the concrete class, and everything works fine.
Another issue, which I saw on Tomcat but not on Jetty, was a problem with the @Path annotation. I defined @Path(„/folder“) on the class, and on the concrete method I defined another @Path(„/{id}“). This threw an exception on starting up Tomcat. Removing the leading „/“ from the second @Path (so: @Path(„{id}“)) fixed it; another step in the right direction.
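To illustrate both lessons, here is a minimal sketch of such a resource class. FolderResource and getById are invented names, and @Path/@GET are declared as local stand-ins for their javax.ws.rs counterparts so the snippet is self-contained:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Local stand-ins for javax.ws.rs.Path and javax.ws.rs.GET, declared here
// only so the sketch compiles on its own.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@interface Path { String value(); }

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface GET {}

// Lesson 1: the JAX-RS annotations go on the concrete class, not on an
// interface it implements. Lesson 2: no leading "/" on the method-level @Path.
@Path("/folder")
public class FolderResource {

    @GET
    @Path("{id}")  // not "/{id}" - the leading slash broke startup on Tomcat
    public String getById(String id) {
        return "folder " + id;  // hypothetical payload
    }
}
```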

Experiences with JAXB for large data structures

We are implementing a data export/import for a Spring/Hibernate application with JAXB. During the development of this feature, I learned quite some lessons and would like to share them.

First of all, we have an entity structure using inheritance all over the place. All entities inherit from a common „BaseEntity“. Furthermore, we are using bi-directional relationships with Hibernate for most of the entities as well. Therefore the data export with JAXB grows quite large, and we eventually received an „OutOfMemoryError: Java heap space“. While learning JAXB by trial and error, we also ran into the „Class has two properties of the same name“ issue. How to solve all these issues?

First of all, we implemented the CycleRecoverable interface on our BaseEntity, which then looks like the following:

public class BaseEntity implements Serializable, GenericEntity<String>, CycleRecoverable {

    @GenericGenerator(name = "system-uuid", strategy = "uuid")
    protected String id;

    protected Integer version;

    public Object onCycleDetected(Context context) {
        // return an empty placeholder instead of marshalling the cycle again
        BaseEntity entity = new BaseEntity();
        return entity;
    }

    @XmlID
    public String getId() {
        return id;
    }

    // more getters and setters
}

Looking closely at the above code, you will recognize the method onCycleDetected. This method is defined in the interface „com.sun.xml.bind.CycleRecoverable“.

HINT: Do not use the sometimes suggested „com.sun.xml.internal.bind.CycleRecoverable“ interface. If you do not have the „correct“ interface on your classpath, add the following dependency to your pom:
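The dependency in question is the JAXB reference implementation, which ships the non-internal com.sun.xml.bind.CycleRecoverable. A sketch (the version number is only an example; pick the one matching your JAXB runtime):

```xml
<!-- com.sun.xml.bind:jaxb-impl contains com.sun.xml.bind.CycleRecoverable;
     the version shown here is only an example -->
<dependency>
    <groupId>com.sun.xml.bind</groupId>
    <artifactId>jaxb-impl</artifactId>
    <version>2.1.13</version>
</dependency>
```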


Furthermore, you will recognize that the @XmlID annotation is defined on the method and not on the property. This is due to the fact that this method is already defined in the interface GenericEntity; otherwise you would receive the „Class has two properties of the same name“ exception. This is one lesson learned: declare the @XmlID and @XmlIDREF annotations on the methods and not on the properties 😉

One level above the BaseEntity we have another entity called BaseEntityParent, which defines a relationship to child objects containing some language-specific settings:

public abstract class BaseEntityParent<T extends BaseEntityDetail> extends BaseEntity {

	@OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER,
		   mappedBy = "parent", orphanRemoval = true)
	@MapKey(name = "isoLanguageCode")
	private Map<String, T> details;

	// more stuff in here
}

This object caused the „Class has two properties of the same name“ exception in the first place, because the child object references the parent as well. Therefore we implemented the CycleRecoverable interface in the BaseEntity.

public class BaseEntityDetail<T extends BaseEntityParent> extends BaseEntity {

	@ManyToOne(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
	private T parent;

	@Size(max = 10)
	@Column(nullable = false, length = 10)
	private String isoLanguageCode;

	// getters and setters omitted
}

After implementing the CycleRecoverable interface this problem was gone. As already stated, we still received the out-of-memory error on an object with a large set of dependent objects. Therefore we are now using the @XmlIDREF annotation in the related bi-directional objects. In the following, we have the object Module, which has some bi-directional relationships to other objects:

@XmlRootElement(name = Module.MODEL_NAME)
public class Module extends BaseEntityParent<ModuleDetail> implements Serializable {

	@OneToMany(fetch = FetchType.LAZY, mappedBy = "module", orphanRemoval = true)
	@XmlElementWrapper(name = Document.MODEL_NAME_PLURAL)
	@XmlElement(name = Document.MODEL_NAME)
	private Set<Document> documents = new HashSet<Document>();

	// more properties
}

@XmlRootElement(name = Document.MODEL_NAME)
public class Document extends BaseEntityParent<DocumentDetail> {

	@ManyToOne(cascade = {CascadeType.MERGE, CascadeType.PERSIST})
	private Module module;

	@XmlIDREF
	public Module getModule() {
		return module;
	}
}
The Document now defines @XmlIDREF on the method, as stated above (otherwise we would receive the „Class has two properties of the same name“ exception). This makes sure that the module is marshalled just once and all references to it are marshalled as just the ID of the object.

Most probably there is still some room for improvement, but this approach (after implementing it on all related objects) saved enough memory to get rid of the „Out of Memory“ error. By applying this „pattern“, we lowered the size of the marshalled XML file quite a lot: in the first successful run the XML file was around 33MB, now it is just around 2MB for the same set of data.
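To see the @XmlID/@XmlIDREF mechanics in isolation, here is a stripped-down, self-contained sketch. The nested Module/Document pair is a toy stand-in for the entities above (no Hibernate, no inheritance), and it assumes the classic javax.xml.bind runtime is available (JDK 8, or a jaxb-impl dependency on newer JDKs):

```java
import java.io.StringWriter;
import java.util.ArrayList;
import java.util.List;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlID;
import javax.xml.bind.annotation.XmlIDREF;
import javax.xml.bind.annotation.XmlRootElement;

public class IdRefDemo {

    @XmlRootElement(name = "module")
    public static class Module {
        private String id;
        private List<Document> documents = new ArrayList<Document>();

        @XmlID
        public String getId() { return id; }
        public void setId(String id) { = id; }

        @XmlElement(name = "document")
        public List<Document> getDocuments() { return documents; }
        public void setDocuments(List<Document> documents) { this.documents = documents; }
    }

    public static class Document {
        private String id;
        private Module module;

        @XmlID
        public String getId() { return id; }
        public void setId(String id) { = id; }

        // marshalled as the module's id only, not as a nested element tree
        @XmlIDREF
        public Module getModule() { return module; }
        public void setModule(Module module) { this.module = module; }
    }

    public static String marshal() throws Exception {
        Module module = new Module();
        Document document = new Document();
        module.getDocuments().add(document);
        document.setModule(module);  // the back-reference that caused the cycle

        StringWriter out = new StringWriter();
        Marshaller marshaller = JAXBContext.newInstance(Module.class).createMarshaller();
        marshaller.marshal(module, out);
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(marshal());
    }
}
```

In the output, the document's back-reference appears as the plain id string instead of a nested (and possibly infinite) module element.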

One more word about the implementation of the CycleRecoverable interface: you can remove it as soon as you have put all @XmlID and @XmlIDREF annotations in place. For me it was really easy to find all the missing parts, because I already had a marshalled object tree (with CycleRecoverable) and could easily locate what was missing (without CycleRecoverable) from error messages like:

MarshalException: A cycle is detected in the object graph. 
This will cause infinitely deep XML: 
ff80808131616f9b01316172b9840001 -> ff80808131616f9b013161797cda0019 -> 

I was able to search for the Entities using the ids in the already marshalled file 😉

I hope you find this one helpful. Give it a try yourself, and report back (in the comments or via email) if you have problems.

So basically, you do not need to implement the CycleRecoverable interface if you put @XmlID and @XmlIDREF in place; that's definitely another lesson I learned during this implementation.

I think our entity structure is not that unusual, and we have implemented some other nice stuff (e.g. the language-specific settings in the details) which could be useful for you as well. So, if you have any questions about this, I am more than willing to help you out.

Hudson vs. Jenkins – Jenkins releasing often

This seems to be one of the main differences between the two projects: the release cycle. While Hudson is right now at 1.396, Jenkins is already at 1.400. I wonder if the bugfixes in Jenkins are all merged back into Hudson, since there are quite some „nice“ fixes in there, especially some for Maven projects (see the Jenkins Changelog). Both projects have still not fixed my personal highlight bug (Error Generating Documentation). I hope that one of them will fix it soon. I promise that I will use the corresponding fork and will not look back 😉

Let's see which fork is going to win this race.

Hudson vs. Jenkins – Hudsons future

Yesterday Sonatype announced a free webinar about the future of Hudson. I am very interested in the presentation from Jason van Zyl and its outcome. So, I guess until then there will be no real news about this topic.

The presentation seems to be focused on Sonatype's (positive) influence on Hudson and the changes they would like to make. I guess that this will also contain some output from the survey Sonatype has taken on the „Future of Hudson“.

Since JSR 330 (Dependency Injection) is now already implemented in Hudson, I guess that the plugin API will change slightly to use this new concept (which is great IMHO). Furthermore there will be (IMHO) some changes to the UI of Hudson to make a clearer distinction between Hudson and Jenkins.

So, IMHO Hudson is going to be an integral part of the offerings from Sonatype; let's see where this leads the project and the community.

Spring 3 and Hibernate Envers

I wanted to add audit functionality to an application I am currently writing. I know this seems to be possible with Spring Data JPA, but since that project has just reached version 1.0.0.M1, I wanted to wait before using it. Furthermore, the application I am working on is already based on Hibernate and some GenericDAO stuff we built with Hibernate, so a move did not seem too easy. Therefore I wanted to use Hibernate Envers.

The setup seems to be quite easy following the steps in the documentation (Envers Documentation). I added the @Audited annotation to all entities to be audited and provided a new RevEntity and a new RevListener:

import javax.persistence.Entity;

import org.hibernate.envers.DefaultRevisionEntity;
import org.hibernate.envers.RevisionEntity;

@Entity
@RevisionEntity(RevListener.class)
public class RevEntity extends DefaultRevisionEntity {

    private String userName;

    public String getUserName() {
        return userName;
    }

    public void setUserName(String userName) {
        this.userName = userName;
    }
}
import org.hibernate.envers.RevisionListener;

public class RevListener implements RevisionListener {

    public void newRevision(Object revisionEntity) {
        RevEntity revEntity = (RevEntity) revisionEntity;

        Member member = MemberHolder.getMember();

        String userName = null;
        if (member != null) {
            userName = member.getUsername();
        }
        revEntity.setUserName(userName);
    }
}

The point here is to show that I am using the Member to find out who the current user in our application is.

Now, how do I use the newly created listener in Spring? After using Google a little, I found this post, which uses Envers, but unfortunately with the SessionFactory instead of the JPA EntityManager we are using. Furthermore I found this, which did not really help me either, since another error message came up saying that my RevListener could not be instantiated. Since the custom event listener was using Spring dependency injection for the Member, I could not use the above-mentioned solution; I had to find a way to use the Spring beans. See this blog post, which describes the problem and a possible solution.

So, basically one step back and the whole thing again, this time using a holder for our Members, which provides the current Member (user) of the system. This is done by using the current security context of the application and determining the member therein (see StackOverflow).



public class MemberHolder {

    private MemberHolder() {
        // hidden default constructor, this is a "normal" utility class
    }

    public static Member getMember() {
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        if (auth == null) {
            return null;
        }
        Object principal = auth.getPrincipal();
        Member member;

        if (principal instanceof Member) {
            member = (Member) principal;
        } else {
            return null;
        }

        if (member.getId() == null) {
            return null;
        }
        return member;
    }
}

This member-holder is then called in the RevisionListener:

import org.hibernate.envers.RevisionListener;

public class RevListener implements RevisionListener {

    public void newRevision(Object revisionEntity) {
        RevEntity revEntity = (RevEntity) revisionEntity;

        Member member = MemberHolder.getMember();
        if (member != null) {
            revEntity.setUserName(member.getUsername());
        }
    }
}

Furthermore, the application-context.xml was corrected to look like this:

<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
	<property name="persistenceUnitName" value="persistenceUnit"/>
	<property name="persistenceUnitManager" ref="persistenceUnitManager"/>
	<property name="jpaVendorAdapter">
		<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"/>
	</property>
	<property name="jpaDialect">
		<bean class="org.springframework.orm.jpa.vendor.HibernateJpaDialect"/>
	</property>
	<property name="jpaProperties">
		<props>
			<prop key="hibernate.dialect">${hibernate.dialect}</prop>
			<prop key="org.hibernate.envers.auditTablePrefix">AUD_</prop>
			<prop key="org.hibernate.envers.auditTableSuffix"></prop>
			<prop key="org.hibernate.envers.storeDataAtDelete">true</prop>
			<prop key="hibernate.ejb.event.post-insert">org.hibernate.envers.event.AuditEventListener</prop>
			<prop key="hibernate.ejb.event.post-update">org.hibernate.envers.event.AuditEventListener</prop>
			<prop key="hibernate.ejb.event.post-delete">org.hibernate.envers.event.AuditEventListener</prop>
			<prop key="hibernate.ejb.event.pre-collection-update">org.hibernate.envers.event.AuditEventListener</prop>
			<prop key="hibernate.ejb.event.pre-collection-remove">org.hibernate.envers.event.AuditEventListener</prop>
			<prop key="hibernate.ejb.event.post-collection-recreate">org.hibernate.envers.event.AuditEventListener</prop>
		</props>
	</property>
</bean>

Some pitfalls I stumbled upon. Do not make the properties look nice by wrapping the value onto its own line, e.g.:

<prop key="hibernate.ejb.event.post-update">
	org.hibernate.envers.event.AuditEventListener
</prop>

Your application context will look nice, but you will get a ClassNotFoundException, because the class name now contains the surrounding whitespace ;-(

Furthermore, your custom event listener should not appear in the event properties, i.e. do not do:

<!-- com.example.RevListener stands for your own listener class -->
<prop key="hibernate.ejb.event.post-update">org.hibernate.envers.event.AuditEventListener,com.example.RevListener</prop>

This will lead to an exception like

Caused by: org.hibernate.MappingException: Unable to instantiate specified event (post-update) listener class:
	at org.hibernate.cfg.Configuration.setListeners(
	at org.hibernate.ejb.Ejb3Configuration.setListeners(
	at org.hibernate.ejb.EventListenerConfigurator.setProperties(
	at org.hibernate.ejb.Ejb3Configuration.configure(
	at org.hibernate.ejb.Ejb3Configuration.configure(
	at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(
	at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(
	at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(
	... 53 more
Caused by: java.lang.ArrayStoreException:
	at org.hibernate.cfg.Configuration.setListeners(

So, I do hope that this helps you. It did help me 😉

How to structure the Git repository (of


A little while ago, I felt the urge to propose a new git structure to my fellows in the team. I guess the explanation of the why could be helpful for others in the git community as well.

Currently we have several packages in one git repository (server-core, server-extra, server-community). This is due to the fact that the base distribution (Arch Linux, that is) uses SVN as their VCS, and we basically migrated their structure in an easy manner to our own systems.

Right now, we have one branch for each named release (e.g. redgum). Since we are planning to maintain at least two named releases (e.g. redgum and spruce) in parallel, and we have also a testing phase in each release, I suggested two branches for a named release (redgum, redgum-testing). This suggestion was accepted in the mailing list.

I have looked at how other distros with named releases are doing it, as well as at best practices in the git world (well, I really did this for the company I work for). Fedora is using git as their repository as well, and git is quite popular, so there is a wealth of information on this topic out there.


I propose to restructure the git repositories to the following structure:

server-core.git/kernel26-lts –> server-core/kernel26-lts.git
server-core.git/aif –> server-core/aif.git

In each of those newly created repositories we will then create at least two branches (e.g. redgum and redgum-testing) for each release. The master branch is special in that it is a development branch. Changes in there will go into release branches, but packages from the master branch will never be put into a release version directly. Packages always need to be created from the corresponding release branch, to establish a clean workflow for sign-offs of packages.

To make this work, we will need to adapt our dbscripts slightly, but this should not be too much of a hassle. The git repository could be restructured pretty much automatically, and the currently available history could be migrated as well.

Furthermore, I suggest using a tool like repo or fedpkg, adapting it to our needs and implementing it into our workflow. Package updates can then still be done via the usual git commands, but this tool can be used for a friendlier workflow.


Let me first explain why we want to separate each package into its own repository, as suggested by git and done by other distros like Fedora.

Git suggests using a single repository per project/package (a short explanation of this issue can be found on StackOverflow). Basically it comes down to the point that one repository per package is a best practice in the git world, and best practices exist for a reason, as shown in the previous link.

So where is our benefit in splitting the repositories (server-core, server-extra and server-community) into several smaller ones (e.g. server-core/kernel26-lts.git)? IMHO the benefit is ease of use.

A TU (Trusted User) can then easily fetch one package, switch to the correct branch (e.g. redgum-testing), make the changes and push them back to the main repository. As soon as the package is out of the testing phase, the TU or somebody else can sign off the changes and merge the package into the release branch (e.g. redgum). We could even say that only signed packages are allowed to cross the border between the branches; I am unsure whether this could be enforced by git itself, but it could be a step in the workflow.

So, where is the difference now to the current structure?

Right now, as soon as a TU has made changes and pushed them into the right branch, other TUs will most probably be working on other (or even the same) packages. This means the TU cannot simply go to the package and merge it into the release branch; he would merge all changes in the whole repository into that branch. The TU has the burden of looking up the right commit ID in the log of the whole repository (remember, git looks at the whole repository and not at sub-folders) and cherry-picking this change (or even worse: changes). This does not seem right IMHO, and it is against the best practice.

If we are going to split all our repos (server-core, …) into several smaller git repositories, the look and feel of our repository handling will change slightly. To provide a consistent approach for all repositories (e.g. creating a new branch for a new release), we will have to provide some tool to make the life of the TUs easier; that's what Google is doing with repo and Fedora is doing with fedpkg. These tools offer some functionality not possible with git alone (at least as far as I know, see: StackOverflow), like branching several sub-projects at once, using the same branch for all sub-projects, committing changes in several sub-projects, etc. There do exist other alternatives (like git subtree, see: Git Subtree Documentation), but these do not fit our needs very well.

Another advantage of this new structure is that it could be used for automatic builds of packages as well. Right now, for example, a new build of a package is started by request. In the software engineering world there is something called Continuous Integration (CI), which builds a package as soon as a change appears in the repository. The tool I am using for this right now (Hudson/Jenkins) supports the build of unix packages as well, so in the long run it could be worthwhile to investigate this for our use case too.

So, WDYT? What are your thoughts on this one? I guess this is the way to go; it is not for RC4 or even redgum, but soon afterwards would be, I suggest, the latest point to implement this approach.

Connect from Host to VBox guest via serial console

Today I needed to connect to the serial console of a VBox instance to test the serial console boot and connection of the guest system (ArchServer, that is). There seem to be a lot of ways to do this; I am just explaining what I tried, using Arch Linux as the host:

  • socat UNIX-CONNECT:/home/triplem/com1 TCP-LISTEN:8040
  • telnet localhost 8040

This was not really working, in that I always received strange characters in my terminal as soon as I used e.g. the cursor keys. This did not change even after switching the terminal emulation to VT100 and all the others.

The same is true for socat unix-client:/home/triplem/com1 stdout

I do not seem to be able to get this kind of setup to work; the following tip did not work for me either:
Work with VBox and serial console.

Hudson / Jenkins – Butler wars

Today I found a nice article about the split of Hudson and Jenkins. It has a nice title and is really well written. Furthermore, it shows some of the possible implications of this „war“.

Butler Wars.

Read on.

For more information about this story please also visit:

Hudson and Jenkins now on GitHub
Hudson vs. Jenkins – more thoughts
Hudson vs. Jenkins

Hudson and Jenkins now on GitHub

I think the whole fork story around Hudson and Jenkins is getting even more hilarious. In a vote, the Hudson community decided to move (take a breath) to GitHub.

See InfoQ for more details about this move. Now both Hudson and Jenkins are on GitHub.

Jason van Zyl seems to have already made quite some changes to Hudson, so I would expect some major changes in the next Hudson release. Of course these changes are related to (guess what?) Maven. These changes are going to diverge Hudson and Jenkins, and I guess that in the long run the two projects will diverge so much that there is no „drop-in“ use of Jenkins plugins in Hudson and vice versa. On the other hand, Sonatype is known for their strong commitment to backward compatibility (see Maven 2/3), but is the same true for the Jenkins guys? I do hope so.

Still, new features developed for one of these forks will not directly appear in the other. Since Sonatype seems to be doing some major restructuring of the Hudson core, I doubt that features in Hudson will be easily transportable to Jenkins, and vice versa.

Let's see what's going to happen 😉

For more information about this story please also visit:

Hudson vs. Jenkins – more thoughts
Hudson vs. Jenkins