Category Archive: Java

Java Topics

Generate a Maven Release w/o Manual Intervention

At the company where I currently work we have around 70 interrelated components. Some of those components are multi-module projects, which makes a fully automatic release build even harder. The SCM in use is git, and we use Nexus as the central artifact repository.

What are the problems of the release generation process? Which steps are manual, and why? Where can this process fail? Are there any other problems that need to be resolved to make the process easier and (hopefully) fully automatic?

Well, identifying the manual steps is quite easy; replacing them with automatisms will help, but can be quite expensive or even plainly impossible. Let's take a look at those steps:

  • Check each component for updated dependencies and bump them
  • Check each component for changes to see where we need to create a release
  • Create a release of the component

Check for updated dependencies

Some components are used throughout the whole application and should be used in the same version everywhere; for those we have to update all other components, even the ones not using the latest SNAPSHOT version. This is mainly true for the model of the core component, which is used by all components accessing the DB and/or the REST API. To make sure that the same version is used everywhere, the version is adapted in all released components manually. This is obviously rather error prone, since one can easily overlook some components and their dependencies.

Other components are shared as well, but may be used in different versions by different components. Here the developer decides which version to use in her component. This is error prone as well, since it can lead to dependency conflicts via transitive dependencies.

The points mentioned above hint at architectural smells in the system, but resolving those would be a longer project, and we urgently wanted to make the whole release process easier first, to gain some time to tackle those architectural issues.

We decided to simply update all (internal) dependencies to the latest version. This still has the disadvantage that some changes to a component are probably not tested on the development and staging environments, but at least the reason for a breakage is quite obvious and not hidden in "transitive" dependencies like before.

How do we update all dependencies to the latest version? Let's take a look at the Maven Versions plugin, which comes to the rescue.

  • versions:update-parent will update the parent version to the latest available version
  • versions:update-properties will update the versions of all dependencies in our project

That looks pretty simple, doesn't it? Well, there are some drawbacks to these steps. One should avoid updating all dependencies (e.g. used frameworks like Spring) and only touch internal ones. This can be done using the "includes" property of the plugin goal. The problem here is that you need to know all groupIds (that is the classifier we use) of all internal dependencies before running this goal. Other than that, multi-module projects have some problems here as well, in that properties defined in the multi-module parent but used in the sub-modules are not updated correctly. That is why we defined that a new dependency has to be declared in the dependencyManagement of the multi-module parent. This is error prone, since each developer has to follow this rule, but we seldom declare new internal dependencies in this phase anymore, so this problem is minor.
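Under these assumptions, the whole bump step boils down to two plugin invocations; the `com.example.*` pattern is a placeholder for your internal groupIds, not our real ones:

```shell
# update the parent to the latest released version
mvn --batch-mode versions:update-parent
# update version properties only for internal dependencies
mvn --batch-mode versions:update-properties -Dincludes='com.example.*:*'
```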

To be able to recognize this change in the following steps, you need to make sure that the changes are committed and pushed. To be on the safe side during the release preparation, also make sure that no leftover files are lying around (the versions plugin creates a backup file of your pom; mvn versions:commit removes it).

Check for Changes in Components

To see if a component has any changes, we should take a look into the logs of the version control system. We should start at the bottom of the dependency hierarchy, so that if a component has changes, we release it and all components above can be updated. In the manual process we usually went from top to bottom to be sure that all dependencies are met; this depends on the developers, who declare a dependency as "SNAPSHOT" if they need a later version. The problem with this approach is that sometimes transitive dependencies of components that do not use the SNAPSHOT version are updated without the developer knowing. This can lead to problems if the new version changes method signatures, a risk that remains even when the version updates are done automatically. Therefore some common rules for versioning should be defined (see Semver). Furthermore, we have to make sure that all components are considered.

This can be automated quite easily by comparing the last known "release" commit with the current commit and deciding whether there are any changes in between.

Some helpful commands for this are:

git log --all --grep='\[maven-release-plugin\]' --format='%H' -n 1

The above command shows the most recent commit containing the [maven-release-plugin] marker (note the escaped brackets; otherwise --grep treats them as a character class). To see if there are any changes after this commit, grab the commit id from the command above and run:

git rev-list "$commit_id"..HEAD

Before running these commands, please make sure that you are on the correct branch (in our case master or rc):

git ls-remote --exit-code $GIT_URL/$project refs/heads/$branch
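Put together, a minimal sketch of the change check might look like this (run inside the component's clone; the function names are my own for illustration, not from our actual scripts):

```shell
# Find the most recent release commit created by the maven-release-plugin.
last_release_commit() {
    git log --all --grep='\[maven-release-plugin\]' --format='%H' -n 1
}

# Return success (0) if the component has commits after the last release,
# i.e. a new release should be created.
needs_release() {
    commit_id=$(last_release_commit)
    # no release commit yet: always release
    if [ -z "$commit_id" ]; then
        return 0
    fi
    # any commits between the last release and HEAD?
    [ -n "$(git rev-list "${commit_id}..HEAD")" ]
}
```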

Create a release

Creating a release using the Maven Release plugin is not as painful as some people think, and it has worked well for us for a long time. In the past we created releases manually via the Jenkins plugin. To let the release process run without interruptions caused by prompts for the new version, use the command-line flag --batch-mode for the Maven call.

To make sure that the release works correctly and no changes are made to the git repository, you should do a "dryRun" beforehand.
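In shell terms, a rehearsal followed by the real release might look like this (a sketch, not our exact invocation):

```shell
# rehearse the release; dryRun leaves the repository and Nexus untouched
mvn --batch-mode release:prepare -DdryRun=true
# clean up release.properties and the backup poms from the dry run
mvn release:clean
# the real release: tag, bump versions, build and deploy
mvn --batch-mode release:prepare release:perform
```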

Automate It (Full Pull)

To automate the steps mentioned above, we need a list of all projects in the correct sequence. This list is created and adapted manually as soon as dependencies change and/or new dependencies are created (which makes the whole process error prone as well, but up until now there is no real alternative to it).

This list of projects is then processed in the given sequence. Each project is cloned and its versions are bumped. After that, the changes are checked. If there are changes, a release is created.
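The driver loop can be sketched roughly like this; `projects.txt` and all function names are placeholders for illustration, not our actual scripts:

```shell
# projects.txt: one project per line, in dependency order (bottom-up)
while read -r project; do
    clone_project "$project"        # git clone into a work directory
    bump_versions "$project"        # mvn versions goals as described above
    if has_changes "$project"; then # compare against the last release commit
        create_release "$project"   # mvn --batch-mode release:prepare release:perform
    fi
done < projects.txt
```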

To make the implementation easier, we re-used the bash libraries from the Kubernetes release project (mainly common.sh and gitlib.sh), with some major extensions. For the Maven calls, we created our own mvnlib extension.

My UPNP Stack


I have already written that I am ripping my whole CD collection (see discogs for the already ripped part). I have written a tool for tagging my collection, and it is working great.

To play music on my stereo, I thought UPNP would be a great protocol. The stack I am currently using involves the following toolset:

Please note that parts of this stack are replaceable by other components, but this stack currently works best (at least for me). All components use the UPNP enhancements from OpenHome, which is an open source project of Linn, IIRC.

The MediaRenderer could be replaced by UPMPDCli, a nice UPNP frontend for MPD (my favourite music player). But then you should also use BubbleUPNPServer to enjoy all the benefits of the OpenHome extensions.

The MediaPlayer uses MPD or MPlayer to play the music. MPD offers quite some extensions, which can still be used in the environment mentioned above.

One extension is LCD4Linux, which allows showing some information about the currently played song on a small LCD. This works on my Raspberry, but unfortunately it also seems to have some problems: the display just freezes and the whole box needs to be restarted. Since the used display is also very small (see Pearl Display), I decided to invest some more time and money into something slightly larger (see TaoTronics Display, Power Supply, HDMI to Component Adapter, as well as a couple of additionally needed cables (MiniHDMI, Component…)). I hope that this is going to work out. For this stack, LCD4Linux is not needed anymore, since this is a "normal" screen. Therefore I plan to integrate a full-screen display component into the MediaPlayer. As soon as this is finished, I will report back; right now I am still waiting for all the components mentioned above to arrive.

At the beginning of my UPNP discoveries I stumbled across the X10, which is also a nice toy, but unfortunately does not support gapless UPNP playback (see X10 Forum (German)). Unfortunately I needed to buy this device to discover that ;-( It is still a nice toy, but right now it is just used for internet radio streaming, since even the tagging I did with DiscogsTagger is totally screwed up on this device, and the X10 shows me the albums in a totally different format than, e.g., minimserver does.

So, you could buy yourself some expensive devices from Linn, Naim or others, or you spend your money on some decent hardware like a Raspberry Pi (uh, the sound of this device is not really good without the addition of a good DAC like HifiBerry) or a Cubox-i, and invest some time in installing the stack mentioned above; then all should be fine, without spending too much money or time.

What do I want from my personal NAS

In this post I would like to gather some personal requirements for a NAS System I am going to build.

Right now I am in the process of ripping all my CDs (around 950 unique releases; more than half of them are already done). The goal is to store all these releases on a personal NAS with the ability to stream them to my stereo. For this I have already selected minimserver as the UPNP server. This server requires a JDK to run, therefore the NAS I am going to build must be able to run Java.

Since I am already using Linux quite heavily, I do not want to run FreeNAS or NAS4Free on this NAS, even though I am very interested in the underlying file system (ZFS) and/or FreeBSD.

Since Linux offers a "similar" file system (btrfs), I would like to use this one for the NAS.

The services which I would like to run on the NAS are then the following:

There are some other options which would be nice, but not as "necessary". There is e.g. Ajenti, which provides a nice web GUI for the administration of the NAS, but this does not really correspond to the way Arch Linux works 😉 A possibility would be to use e.g. CentOS or Ubuntu as a distro, but I am unsure if this is really going to work out, just for a nice GUI.

The requirements mentioned above are not really tough for today's hardware, therefore I would like to stick to the stack proposed in the nas-portal forum (see here).

Since I am going to use a filesystem, which seems to be picky about power outages, I am in need of a UPS, and I am currently thinking about this one.

I am going to explain some more interesting details about the tagging of my FLACs for the minimserver and about the used tool (discogstagger) in future posts. Stay seated, so that you can watch how an absolute hardware noob tries to build his own NAS 😉

dbUnit in a Spring Environment with MSSQL

Today I needed to export and import some data from a database for testing purposes. I already knew dbUnit, and it seemed to be the tool of choice. I stumbled over a couple of problems, because the example I found of using dbUnit with the Spring Framework was based on Oracle and, furthermore, on the FlatXmlDataSet. I wanted to use the XmlDataSet, because it seems to be easier to maintain manually.

In the following I will try to show how to integrate dbUnit into the project. First of all, I needed to put the Maven plugin into place, with the jar containing the JDBC driver declared as a plugin dependency.
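A minimal sketch of such a plugin configuration, assuming the Codehaus Mojo dbunit-maven-plugin and the Microsoft JDBC driver; all coordinates, versions and connection settings below are placeholders for your own values:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>dbunit-maven-plugin</artifactId>
  <version><!-- plugin version --></version>
  <configuration>
    <driver>com.microsoft.sqlserver.jdbc.SQLServerDriver</driver>
    <url>jdbc:sqlserver://localhost;databaseName=mydb</url>
    <username>user</username>
    <password>secret</password>
    <dataTypeFactoryName>org.dbunit.ext.mssql.MsSqlDataTypeFactory</dataTypeFactoryName>
    <format>xml</format>
  </configuration>
  <dependencies>
    <!-- jar file that has the jdbc driver -->
    <dependency>
      <groupId>com.microsoft.sqlserver</groupId>
      <artifactId>mssql-jdbc</artifactId>
      <version><!-- driver version --></version>
    </dependency>
  </dependencies>
</plugin>
```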


After this I was able to call mvn dbunit:export, which gave me an export of the current data inside the database. It generates a file target/dbunit/export.xml, which can then be used in the test cases (we are using JUnit).

The TestCases now look something like this:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"classpath:/META-INF/spring/testAC.xml"})
public class ServiceTest {

	@Autowired
	private DataSource dataSource;

Here we are autowiring the dataSource from the application context; this is needed to extract all necessary information for the database connection of dbUnit.

	// the following methods make sure that the database is set up correctly
	// with dbUnit
	private IDatabaseConnection getConnection() throws Exception {
		Connection con = dataSource.getConnection();
		IDatabaseConnection connection = new DatabaseConnection(con);
		DatabaseConfig config = connection.getConfig();
		config.setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new MsSqlDataTypeFactory());
		return connection;
	}

Here we are making sure, that the connection is setup correctly and that the right DataTypeFactory is used. This should be the same factory as used in the pom.xml (see above).

	private IDataSet getDataSet() throws IOException, DataSetException {
		File file = new File("src/test/resources/dbexport.xml");
		Reader reader = new FileReader(file);
		return new XmlDataSet(reader);
	}

This fetches the dataset from a file so it can be used in the DatabaseOperation of dbUnit.

	@Test
	public void testDelete() throws Exception {
		DatabaseOperation.CLEAN_INSERT.execute(getConnection(), getDataSet());
		EntityVersionId id = new EntityVersionId("9a5a8eb1f02b4e06ba9117a771f2b69c", 2L);
		Entity entity = this.entityService.find(id);

Please note that we are using a special EntityVersionId, which is part of our framework and contains two values; it is a combined ID. The usual ID is a UUID (String) plus a "version" of type long. I guess you will most probably not use something like this in your project.

That's it, now everything works as expected 😉

Getting into CXF

For my new job (oh, I did not blog about this one, but more information will follow soon) I am currently investigating a couple of EAI frameworks. One of them is Apache CXF. For this investigation, I am implementing a very easy task and using an easy tutorial (Creating a REST service with CXF and Spring in 10 minutes). Well, to be totally honest, it took me more than 10 minutes to get this up and running. Of course, this is mainly due to the fact that I wanted to gather some more information about CXF and used my own services.
The first issue I stumbled upon was the error message "no resource classes found". The problem was that my resource service (call it controller or whatever) implemented an interface, and the JAX-RS annotations were defined on the interface instead of the concrete class, which does not work. Now I define all the annotations on the concrete class, and everything is fine.
Another issue I saw on Tomcat, but not on Jetty, was a problem with the @Path annotation. I defined a @Path("/folder") on the class, and on the concrete method I defined another @Path("/{id}"). This threw an exception when starting up Tomcat. Removing the "/" from the second @Path (so: @Path("{id}")) fixed it; another step in the right direction.

Java 7 – minor classloading difficulties

Since I am using Arch Linux, I am more than accustomed to using the latest and greatest versions of everything. Unfortunately, this is not always a good thing. During the last couple of days I experienced a couple of class loading issues with Java 7 (as opposed to Java 6).
I am currently testing Broadleaf Commerce and had to report an issue to these guys because of some problems I received while compiling and running the application. Something similar happened on my project at work as well. I call these "class loading issues", but it is probably slightly more; I have problems loading configuration data correctly (see issue 96 on the Broadleaf JIRA).
To work around this issue, I just installed Java 6 again. Now it is working like a charm.

JAXB Experiences II

As already stated, we have a large object tree which we are exporting to XML using JAXB. Now we are also importing this tree again. Because of several bi-/uni-directional dependencies, this object tree is kind of hard to reflect in XML. We have one root element and several objects depending on this element uni-directionally. Since the export is started from the root element, we call the corresponding service to find all uni-directional dependencies. These dependencies are exported into separate XML files.
Since the child holds a reference to the root, we need to reflect this via a reference on the property; otherwise the XML file would become quite large and contain the whole objects more than once, which leads to problems as well.
The problem now is that @XmlIDREF does not really work here, because the object it references is not in the same file (the root element is defined in the original file). To work around this, we are using @XmlJavaTypeAdapter on the root property in the child object. This adapter needs to be initialized and assigned to the unmarshaller. Please take a look at the example below:

	private Object unmarshall(Class clazz, Map<Class, XmlAdapter> adapters, String fileName) throws Exception {
		// Create a JAXB context passing in the class of the object we want to marshal/unmarshal
		final JAXBContext context = JAXBContext.newInstance(clazz);

		// Create the unmarshaller, the nifty little thing that will actually transform the XML back into an object
		final Unmarshaller unmarshaller = context.createUnmarshaller();
		unmarshaller.setEventHandler(new javax.xml.bind.helpers.DefaultValidationEventHandler());
		for (Class adapterClass : adapters.keySet()) {
			unmarshaller.setAdapter(adapterClass, adapters.get(adapterClass));
		}

		log.info("Starting unmarshaller");
		Object unmarshalledObject = unmarshaller.unmarshal(new FileInputStream(fileName));
		log.info("Finished unmarshaller");

		return unmarshalledObject;
	}

The concrete adapters are added in the calling method:

	public Child unmarshallEntityWrapper(RootElement element, String fileName) throws Exception {
		Map<Class, XmlAdapter> adapters = new HashMap<Class, XmlAdapter>();

		determineAdapter(element, adapters);
		return (Child) this.unmarshall(Child.class, adapters, fileName);
	}

	private void determineAdapter(RootElement element, Map<Class, XmlAdapter> adapters) {
		RootElementAdapter adapter = new RootElementAdapter();
		for (SubElement subElement : element.getSubElements()) {
			adapter.getRootElements().put(subElement.getId(), subElement);
		}

		adapters.put(RootElementAdapter.class, adapter);
	}

The adapter itself is pretty straightforward:

public class RootElementAdapter extends XmlAdapter<String, RootElement> {

	private Map<String, RootElement> rootElements = new HashMap<String, RootElement>();

	public Map<String, RootElement> getRootElements() {
		return rootElements;
	}

	@Override
	public RootElement unmarshal(String id) throws Exception {
		return rootElements.get(id);
	}

	@Override
	public String marshal(RootElement rootElement) throws Exception {
		return rootElement.getId();
	}
}

The classes themselves are pretty straightforward as well. Please notice that the ChildElement is stored in another XML file, and because of this we need the adapter, as already stated above. The RootElement does not have any relationship to the child elements:

@XmlRootElement(name = "RootElement")
public class RootElement implements Serializable {
	// properties as usual, no reference to the child elements
}

public class ChildElement implements Serializable {

	@XmlJavaTypeAdapter(RootElementAdapter.class)
	public RootElement getRootElement() {
		// ...
	}
}
In the service, which is handling the marshalling, we are now fetching all ChildElements belonging to the RootElement via a service method like so:

public List<ChildElement> findChildsByRootElement(RootElement rootElement) {

Because we would like to marshal these elements into their own XML file, we have to create a wrapper object, which basically wraps all elements:

@XmlSeeAlso({ChildElement.class, ChildA.class, ChildB.class})
public class ChildElementWrapper {

	private Collection<ChildElement> childElements;

	public ChildElementWrapper() {
	}

	public ChildElementWrapper(Collection<ChildElement> childElements) {
		this.childElements = childElements;
	}

	@XmlElementWrapper(name = "childElements")
	@XmlElement(name = "childElement")
	public Collection<ChildElement> getChildElements() {
		return childElements;
	}

	public void setChildElements(Collection<ChildElement> childElements) {
		this.childElements = childElements;
	}
}

Another important thing we learned during the implementation of this import/export layer: the usual inheritance from Hibernate using discriminators is not handled very nicely. The wrapper class has to use the @XmlSeeAlso annotation, as stated above. Now all concrete implementations of the ChildElement class are marked with the type attribute in the XML file. Since we are using Hibernate's replicate and we need the discriminator value, we use the following solution:

			switch (childElement.getElementType()) {
				case A:
					ChildA childA = (ChildA) childElement;
					this.jcDao.updateDiscriminator(childA, "DiscriminatorA");
					break;
				case B:
					ChildB childB = (ChildB) childElement;
					this.jcDao.updateDiscriminator(childB, "DiscriminatorB");
					break;
			}

	public Object store(Object object) {
		if (this.entityManager == null) throw new IllegalStateException("Entity manager has not been injected");

		Session session = (Session) this.entityManager.getDelegate();

		session.replicate(object, ReplicationMode.OVERWRITE);

		return object;
	}

	public void updateDiscriminator(Object object, String discriminator) {
		if (this.entityManager == null) throw new IllegalStateException("Entity manager has not been injected");

		Session session = (Session) this.entityManager.getDelegate();

		String hqlUpdate = "update ChildElement set type = :discriminator where id = :id";
		int updatedEntities = session.createQuery(hqlUpdate)
									 .setString("id", object.toString())
									 .setString("discriminator", discriminator)
									 .executeUpdate();
	}

I hope, that this explanation is easy enough to follow 😉

Experiences with JAXB for large data structure

We are going to implement a data export/import for a Spring/Hibernate application with JAXB. During the development of this feature I learned quite some lessons and would like to share them with you.

First of all, we have an entity structure using inheritance all over the place. All entities inherit from a common "BaseEntity". Furthermore, we are using bi-directional relationships with Hibernate for most of the entities as well. Therefore the data export with JAXB grows quite large, and on top of that we received an "OutOfMemoryError: Java heap space" exception. While learning JAXB by trial and error, we also ran into the "Class has two properties of the same name" issue. How to solve all these issues?

First, we tried and implemented the CycleRecoverable interface on our BaseEntity, which then looks like the following:

public class BaseEntity implements Serializable, GenericEntity<String>, CycleRecoverable {

    @GenericGenerator(name = "system-uuid", strategy = "uuid")
    protected String id;

    protected Integer version;

    @Override
    public Object onCycleDetected(Context context) {
        BaseEntity entity = new BaseEntity();
        return entity;
    }

    @XmlID
    public String getId() {
        return this.id;
    }

    // more getters and setters
}

Looking closely at the code above, you will recognize the method onCycleDetected. This method is defined in the interface "com.sun.xml.bind.CycleRecoverable".

HINT: Do not use the sometimes suggested "com.sun.xml.internal.bind.CycleRecoverable" interface. If you do not have the "correct" interface, add the following dependency to your pom:
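The CycleRecoverable interface ships with the JAXB reference implementation; a dependency along these lines brings it in (the version is a placeholder for whatever JAXB RI version you use):

```xml
<dependency>
	<groupId>com.sun.xml.bind</groupId>
	<artifactId>jaxb-impl</artifactId>
	<version><!-- your JAXB RI version --></version>
</dependency>
```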


Furthermore, you will recognize that the @XmlID annotation is defined on the method and not on the property. This is due to the fact that this method is already defined in the interface GenericEntity; otherwise you would receive the "Class has two properties of the same name" exception. This is one lesson learned: declare the @XmlID and @XmlIDREF annotations on the methods and not on the properties 😉

One level above the BaseEntity we have another entity called BaseEntityParent, which defines a relationship to child objects containing some language-specific settings:

public abstract class BaseEntityParent<T extends BaseEntityDetail> extends BaseEntity {

	@OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER,
			mappedBy = "parent", orphanRemoval = true)
	@MapKey(name = "isoLanguageCode")
	private Map<String, T> details;

	// more stuff in here
}

This object caused the "Class has two properties of the same name" exception in the first place, because the child object references the parent as well. Therefore we implemented the CycleRecoverable interface in the BaseEntity.

public class BaseEntityDetail<T extends BaseEntityParent> extends BaseEntity {

	@ManyToOne(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
	private T parent;

	@Size(max = 10)
	@Column(nullable = false, length = 10)
	private String isoLanguageCode;
}

After the implementation of the CycleRecoverable interface this problem was gone. As already stated, we still received the out-of-memory exception on an object with a large set of dependent objects. Therefore we are now using the @XmlIDREF annotation in the related bi-directional objects. In the following we have the object Module, which has some bi-directional relationships to other objects:

@XmlRootElement(name = Module.MODEL_NAME)
public class Module extends BaseEntityParent<ModuleDetail> implements Serializable {

	@OneToMany(fetch = FetchType.LAZY, mappedBy = "module", orphanRemoval = true)
	@XmlElementWrapper(name = Document.MODEL_NAME_PLURAL)
	@XmlElement(name = Document.MODEL_NAME)
	private Set<Document> documents = new HashSet<Document>();

	// more properties
}

@XmlRootElement(name = Document.MODEL_NAME)
public class Document extends BaseEntityParent<DocumentDetail> {

	@ManyToOne(cascade = {CascadeType.MERGE, CascadeType.PERSIST})
	private Module module;

	@XmlIDREF
	public Module getModule() {
		return module;
	}
}
The Document defines the @XmlIDREF on the method, as stated above (otherwise we would receive the "Class has two properties of the same name" exception). This makes sure that the Module is marshalled just once, and all references to this object are marshalled as just the ID of the object.

Most probably there is still some room for improvement, but this approach (after being implemented on all related objects) saved enough memory to get rid of the out-of-memory exception. By implementing this "pattern", we shrank the marshalled XML file quite a lot: in the first successful run the XML file was around 33MB, now it is just around 2MB for the same set of data.

One more word about the implementation of the CycleRecoverable interface: you can remove this implementation as soon as you have put all @XmlID and @XmlIDREF annotations in place. For me it was really easy to find all missing parts, because I already had a marshalled object tree in place (with CycleRecoverable) and could easily find the missing parts (without CycleRecoverable) thanks to error messages like:

MarshalException: A cycle is detected in the object graph. 
This will cause infinitely deep XML: 
ff80808131616f9b01316172b9840001 -> ff80808131616f9b013161797cda0019 -> 

I was able to search for the Entities using the ids in the already marshalled file 😉

I hope you find this one helpful. Give it a try yourself, and report back (in the comments or via email) if you have problems.

So, basically you do not need to implement the CycleRecoverable interface if you put @XmlID and @XmlIDREF in place; that's definitely another lesson I learned during this implementation.

I think our entity structure is not that unusual, and we have implemented some other nice stuff (e.g. the language-specific details) which could be useful for you as well. So, if you have any questions about this, I am more than willing to help you out.

Spring WebMVC Ajax Post-Requests

Today I tried to improve the performance of our newest webapp. We are using a DOJO tree for displaying the structure of a document on the left side. This tree can be quite big, and loading it takes quite some time. Therefore I decided to use AJAX to load only fragments of the page where possible.

This went quite well for most of the pages, because for the GET requests you can easily use something like

     var xhrArgs = {
         url: url,
         content: { "fragments": "body" },
         headers: {"Accept": "text/html;type=ajax"},
         handleAs: "text",
         preventCache: false,
         sync: true,
         load: Spring.remoting.handleResponse,
         error: Spring.remoting.handleError
     };

     // Call the (synchronous, see sync above) xhrGet
     var foo = dojo.xhrGet(xhrArgs);

Unfortunately, this is not as easy for POST requests. For those you can use (I found it after quite some googling) this one. Please take a close look at the end of that page and you will see an example showing exactly this. You have to use the formId parameter, otherwise it will not work.

Hudson vs. Jenkins – Jenkins releasing often

This seems to be one of the main differences between the two projects: the release cycle. While Hudson is right now at 1.396, Jenkins is already at 1.400. I wonder if the bugfixes in Jenkins are all merged back into Hudson, since there are quite some "nice" fixes in there, especially some for Maven projects (see the Jenkins Changelog). Both projects have still not fixed my personal highlight bug (Error Generating Documentation). I hope that one of them will fix it soon. I promise that I will use the corresponding fork and not look back 😉

Let's see which fork is going to win this race.