Author archive: triplem

Hibernate Entity and Revision Listeners

In a project I am currently working on, we use Hibernate 4.3 and Envers 4.3. We wanted to store modification information, such as the creation and modification date, as well as user information, on some of our domain entities.

This is done using entity listeners in addition to Hibernate Envers. One problem we faced was how to retrieve the username in the listeners, since this information is not injectable in a JEE environment.

The following classes/interfaces are used to provide this functionality. I hope this helps others as well.

@Embeddable
public class ModificationInformation implements Serializable {

  @Type(type = "org.javafreedom.LocalDateUserType")
  private LocalDate creationDate;

  @Type(type = "org.javafreedom.LocalDateUserType")
  private LocalDate modificationDate;

  private String createdBy;

  private String modifiedBy;

  public ModificationInformation() {
  }

  public ModificationInformation(String createdBy) {
    this.createdBy = createdBy;
  }

  public String getCreatedBy() {
    return this.createdBy;
  }

  public void setCreatedBy(String createdBy) {
    this.createdBy = createdBy;
  }

  public String getModifiedBy() {
    return this.modifiedBy;
  }

  public void setModifiedBy(String modifiedBy) {
    this.modifiedBy = modifiedBy;
  }

  public LocalDate getCreationDate() {
    return this.creationDate;
  }

  public void setCreationDate(LocalDate creationDate) {
    this.creationDate = creationDate;
  }

  public LocalDate getModificationDate() {
    return this.modificationDate;
  }

  public void setModificationDate(LocalDate modificationDate) {
    this.modificationDate = modificationDate;
  }
}

The above class is marked as @Embeddable and is therefore used as an embedded object inside our domain objects. All classes using this embedded object need to implement the following interface.

public interface HasModificationInformation {

  ModificationInformation getModificationInformation();
}

The domain object then looks like the following (all properties not related to this short post are omitted).

@Entity
@Audited
@Table(name = "domain_object")
public class Domain implements HasModificationInformation {

  @Embedded
  private ModificationInformation modificationInformation =
                                    new ModificationInformation();

  @Override
  public ModificationInformation getModificationInformation() {
    return this.modificationInformation;
  }
}
This entity is marked as @Audited to be able to use Hibernate Envers and its functionality. Furthermore, we added a new listener (ModificationInformationListener) to be able to act on certain events during the entity lifecycle.

We then needed to provide this listener; to hook it into Hibernate, the following classes are implemented.

public class ModificationInformationListener extends AbstractUserNameListener {

  @PrePersist
  public void prePersist(HasModificationInformation hmi) {
    if (hmi.getModificationInformation().getCreationDate() == null) {
      hmi.getModificationInformation().setCreationDate(LocalDate.now());
      hmi.getModificationInformation().setCreatedBy(getUserName());
    }
  }

  @PreUpdate
  public void preUpdate(HasModificationInformation hmi) {
    hmi.getModificationInformation().setModificationDate(LocalDate.now());
    hmi.getModificationInformation().setModifiedBy(getUserName());
  }
}

public class EnversRevisionListener extends AbstractUserNameListener implements RevisionListener {

  @Override
  public void newRevision(Object revisionEntity) {
    EnversRevisionEntity rev = (EnversRevisionEntity) revisionEntity;
    rev.setUsername(getUserName());
  }
}

As you can see, both listeners use an abstract base class (AbstractUserNameListener), which allows access to the Principal of the current session. This is necessary because both of these listeners are not managed by CDI and therefore cannot use @Inject.

public abstract class AbstractUserNameListener {

  // note: it could be that this does not work for SOAP/REST API calls
  protected String getUserName() {
    BeanManager beanManager = CDI.current().getBeanManager();
    Bean<Principal> principalBean =
      (Bean<Principal>) beanManager.getBeans(Principal.class).iterator().next();
    CreationalContext<Principal> context = beanManager.createCreationalContext(principalBean);
    Principal principal =
      (Principal) beanManager.getReference(principalBean, Principal.class, context);

    String userName = principal.getName();

    if (userName == null) {
      userName = "SYSTEM";
    }

    return userName;
  }
}

The above method is basically copy-pasted from Stack Overflow.

On every persist and update, the corresponding method of the ModificationInformationListener is now called, and the user and date information is stored in the domain entity. Furthermore, every lifecycle event triggers Hibernate Envers, which stores the username in the corresponding EnversRevisionEntity.
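One wiring detail is not shown above: with JPA, the ModificationInformationListener has to be registered on the entity, typically via @EntityListeners (or an entry in orm.xml). A minimal sketch of that registration, using the class names from this post; the @EntityListeners mechanism is my assumption, as the original post does not show how the listener is attached:

```java
// Sketch only: registering the lifecycle listener on the audited entity.
// The @EntityListeners wiring is an assumption -- the original post does
// not show how the listener is attached.
@Entity
@Audited
@EntityListeners(ModificationInformationListener.class)
@Table(name = "domain_object")
public class Domain implements HasModificationInformation {
  // ... embedded ModificationInformation and getter as shown above ...
}
```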

@Entity
@RevisionEntity(EnversRevisionListener.class)
@Table(name = "envers_revision")
public class EnversRevisionEntity extends DefaultRevisionEntity {

  private String username;

  public String getUsername() {
    return username;
  }

  public void setUsername(String username) {
    this.username = username;
  }
}
Hope that this helps. Feedback is greatly welcome.





Determine Musicbrainz Id from Discogs Release

I just figured out how to retrieve the MusicBrainz Id (MBID) for a specific Discogs release. The following link can be used:

Obviously, for the XXXX you need to put in the release id of the release on Discogs.

Discogstagger2 contains a small script (scripts/) which calls the above-mentioned URL and puts the determined MusicBrainz id and the given Discogs id into a file. This script is based on an existing id.txt, which I use to determine the Discogs id for each release I own.
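The lookup can also be scripted. The following sketch only constructs such a lookup URL; the endpoint and parameters are my assumption based on the MusicBrainz ws/2 web-service documentation (lookup of a URL entity by resource), not taken from the original post:

```shell
# Build a MusicBrainz ws/2 lookup URL for a Discogs release.
# NOTE: endpoint and parameters are an assumption based on the MusicBrainz
# web-service documentation, not taken from the original post.
mb_lookup_url() {
  discogs_id="$1"
  printf 'https://musicbrainz.org/ws/2/url?resource=https://www.discogs.com/release/%s&inc=release-rels\n' "$discogs_id"
}

# Example: print the lookup URL for Discogs release 1234; fetching it
# (e.g. with curl) returns XML that contains the release MBID(s).
mb_lookup_url 1234
```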

Install upmpdcli on Raspi 1 B+ with hifiberry dacplus using Archlinux

This could have been done using packages from the AUR, but I wanted to have the latest version. Furthermore, some specifics apply to my personal setup, because I am still using a rather old Raspberry Pi 1 B+.

So, see the following steps to get this up and running on your own machine as well.

The first step is just for easier handling of users, so install sudo on the machine:

pacman -S sudo
visudo   # add the user alarm to the sudoers file

For details, on how to use sudo, please see the famous ArchWiki.

After this, several packages should be installed so that everything necessary can be compiled:

pacman -S base-devel libupnp libmpdclient libmicrohttpd jsoncpp curl expat python2

As the usual user (alarm in my case), you can now download, compile, and install libupnpp as well as upmpdcli. Note that this could have been done using the AUR packages for these modules as well, but as I said, I wanted to do this all on my own and use the latest package versions.

curl -O
tar xzf libupnpp-0.16.0.tar.gz

cd libupnpp-0.16.0
./configure --prefix=/usr
make
sudo make install

cd ..

curl -O
tar xzf upmpdcli-1.2.15.tar.gz
cd upmpdcli-1.2.15
./configure --sysconfdir=/etc --prefix=/usr
make
sudo make install

groupadd --system upmpdcli
useradd -g upmpdcli --system upmpdcli -s /bin/false -d /
chown upmpdcli:upmpdcli /etc/upmpdcli.conf
mkdir /var/log/upmpdcli
chown upmpdcli:upmpdcli /var/log/upmpdcli
mkdir /var/cache/upmpdcli
chown upmpdcli:upmpdcli /var/cache/upmpdcli
mkdir /usr/share/upmpdcli
chown upmpdcli:upmpdcli /usr/share/upmpdcli

install -Dm644 systemd/upmpdcli.service /usr/lib/systemd/system/upmpdcli.service
sed '/\[Service\]/a User=upmpdcli' -i /usr/lib/systemd/system/upmpdcli.service
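The sed call above can be tried out on a throw-away copy first. This small demo (temporary file, illustrative unit-file content) shows what the edit does:

```shell
# Demonstrate the sed edit on a scratch copy of a unit file.
tmp=$(mktemp)
printf '[Unit]\nDescription=demo\n\n[Service]\nExecStart=/usr/bin/true\n' > "$tmp"
# Same edit as above: append a User= line right after the [Service] header.
sed '/\[Service\]/a User=upmpdcli' -i "$tmp"
grep -n '^User=upmpdcli' "$tmp"   # prints: 5:User=upmpdcli
rm -f "$tmp"
```

Note that the `a` (append) command in this form is a GNU sed feature, which is what Archlinux ships.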

Now, several options can be edited in the /etc/upmpdcli.conf file, although this is not strictly necessary:

  • edit /etc/upmpdcli.conf
  • adapt friendlyname (e.g. Upnp Sleeping Room)
  • adapt logfile (mine is /var/log/upmpdcli/upmpdcli.log)
  • adapt cachedir (mine is /var/cache/upmpdcli)

Now enable the services installed previously:

systemctl enable upmpdcli

Since I am using a HifiBerry DAC, I needed to add an overlay to the boot process by editing /boot/config.txt and adding the following line:

dtoverlay=hifiberry-dacplus
Since we are going to use ALSA for the sound handling, we need to install several packages:

pacman -S alsa-tools alsa-utils

To enable this DAC (basically a sound card), /etc/asound.conf should be edited:

pcm.!default {
  type hw
  card sndrpihifiberry
}

ctl.!default {
  type hw
  card sndrpihifiberry
}

To be able to handle ALSA as the user "alarm", we need to add this user to the audio group as well:

sudo usermod -aG audio alarm

Since upmpdcli uses mpd, we need to install it as well:

pacman -S mpd

Now we can add the above-mentioned sound card to the file /etc/mpd.conf as well.

user "mpd"
pid_file "/run/mpd/"
state_file "/var/lib/mpd/mpdstate"
playlist_directory "/var/lib/mpd/playlists"
log_file "/var/log/mpd/mpd.log"
#log_level "verbose"
replaygain "album"
replaygain_preamp "15"

audio_output {
type "alsa"
name "sndrpihifiberry"
mixer_type "software"
}

Now you should reboot the machine. Afterwards, you can test the sound card by issuing the following commands:

cat /proc/asound/cards
aplay -l

You can furthermore test the sound card by issuing "aplay A-WAV-FILE", which plays the given file.

All should be set, and the control point should now show the renderer and be able to play albums and songs on it.

Use rtl8812AU on ArchlinuxArm

For my Raspberry Pi, I needed to support the rtl8821au driver for my wireless USB device. The following steps worked for me.

This is based on this article.

pacman -S make dkms linux-raspberrypi-headers

git clone
cd rtl8812AU

Change the Makefile: set CONFIG_PLATFORM_I386_PC from y to n.

make install


To use the driver and the device, several additional steps have to be followed; this is based on StackExchange.

pacman -S netctl dhclient

cd /etc/netctl
install -m640 examples/wireless-wpa wireless-home

Adapt the above file to your needs.
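For reference, a profile based on the wireless-wpa example ends up looking roughly like this (interface, ESSID, and key are placeholders of mine — adapt them to your network):

```ini
# /etc/netctl/wireless-home -- sketch with placeholder values
Description='Home WLAN'
Interface=wlan0
Connection=wireless
Security=wpa
ESSID='MyHomeSSID'
Key='MySecretPassphrase'
IP=dhcp
```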

Add /etc/netctl/hooks/dhcp:

netctl start wireless-home

netctl enable wireless-home

After all those steps and another reboot, the wifi-device should work.

DevOpsKube – Redmine SSO with KeyCloak via OpenId Connect

Because we could not make the logout functionality work using the SAML plugin for Redmine (see the last post), we decided to use the OpenId Connect plugin. Because some of its behaviour did not suit us (e.g. SSL validation was always enabled, which is not wanted in development environments), we have forked this plugin and implemented some fixes (see here). Those fixes have already been submitted to the original project as pull requests. As soon as they are merged, we are going to use the original version instead of our fork.

Obviously DevOpsKube is not interested in a manual configuration of this SSO connection, but some documentation is always helpful. Unfortunately, we have not found any documentation about the integration of Redmine and KeyCloak via the OpenId Connect plugin, therefore we provide some additional documentation about this integration in our DevOpsKube documentation. The integration is already committed to our repositories, but there is still some work to do on the KeyCloak setup (we need to implement keys, which are used in Redmine as well as KeyCloak). As soon as this is done, we will have implemented the first full integration between two components of our DevOpsKube stack.

Hope you find this helpful. If you would like to support us in building a modern SDLC stack on Kubernetes, do not hesitate to join our effort.

Redmine SSO with KeyCloak via SAML Protocol

For the DevOpsKube stack we are currently implementing a Single Sign-On (SSO) solution for Redmine. For this we use KeyCloak as the identity provider and the SAML protocol via the Redmine Omniauth SAML plugin. Unfortunately, there is just the sample initializer found in the plugin, but no additional information. Therefore we describe some steps on how to get this to work, for your own enjoyment.

  1. Install the Redmine Omniauth SAML Plugin as described in their README
  2. Create a client in your KeyCloak server. We have named it "redmine".
  3. Create mappers in your KeyCloak for the redmine client using the following properties:
    1. Name: firstname, Type: User Property, Property: firstName, Friendly Name: givenName, SAML Attribute: firstname
    2. Name: lastname, Type: User Property, Property: lastName, Friendly Name: surname, SAML Attribute: lastname
    3. Name: email, Type: User Property, Property: email, Friendly Name: email, SAML Attribute: email
  4. Now the saml.rb file (see Sample File) should be configured like the following:

    Redmine::OmniAuthSAML::Base.configure do |config|
      config.saml = {
        :assertion_consumer_service_url => "http://REDMINE_URL/auth/saml/callback", # OmniAuth callback URL
        :issuer => "redmine", # The issuer name / entity ID. Must be a URI as per the SAML 2.0 spec.
        :idp_sso_target_url => "https://KEYCLOAK_URL/auth/realms/REALM_NAME/protocol/saml", # SSO login endpoint
        #:idp_cert_fingerprint => "certificate fingerprint", # SSO SSL certificate fingerprint
        # Alternatively, specify the full certificate:
        :idp_cert => "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
        :name_identifier_format => "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent",
        :signout_url => "https://KEYCLOAK_URL/auth/realms/REALM_NAME/protocol/saml/clients/redmine", # Optional signout URL, not supported by all identity providers
        :idp_slo_target_url => "https://KEYCLOAK_URL/auth/realms/REALM_NAME/protocol/saml/clients/redmine",
        :name_identifier_value => "mail", # Which Redmine field is used as name_identifier_value for SAML logout
        :attribute_mapping => {
          # How we map attributes from SSO to Redmine attributes -- custom properties
          :login => '',
          :mail => '',
          :firstname => 'info.first_name',
          :lastname => 'info.last_name'
        }
      }

      config.on_login do |omniauth_hash, user|
        # Implement any hook you want here
      end
    end
Right now, the logout is not working properly, but we are still working on it. As soon as this integration is finished, we will be able to provide the first "real" integration for DevOpsKube.

If you have any further questions, do not hesitate to ask in the comments.

DevOpsKube: New Homepage

We just updated our homepage, which is now reachable via the newly registered domain. This homepage is fully generated from the README files of the charts as well as some additional Markdown files.

Please have a look at this new page. Any contributions are highly welcome.

DevOpsKube: Lots of updates

We updated the Jenkins Docker image to reflect the latest changes made on the MySQL Docker image (e.g. we added the Makefile and a Jenkinsfile, and use the latest version-bump tooling). These new versions of the image are now also reflected in the main DevOpsKube charts.

Furthermore, we updated the single-node user-data to use the latest Kubernetes version (1.4.6). This update reflects the changes found in the CoreOS single-node repository.

Next steps are to use a seed job for the Jenkins Docker jobs we have right now (mainly docker-mysql and docker-jenkins) and to then integrate Jenkins with our local Gogs chart, to be able to provide fully working Docker builds in our SDLC stack.

If you are interested in our efforts, please join us. Any help and any contribution is welcome.

DevOpsKube: Make Jenkins build Docker Images

We added the possibility to build Docker images using Jenkins in our latest update of the Jenkins chart. You can use a Jenkinsfile to configure the image build job (see Docker Mysql). This script shows quite simply how to build a Docker image the way it is done by the Docker Hub automated build. Basically, the script checks out the Git repository, builds the image, checks whether the latest commit is a tag and, if so, tags the image accordingly. In each case the image is then published on Docker Hub using the tag "latest".
To make the tagging work, we use the Python script BumpVersion. To see how this works, you can take a look at the Makefile of this project.
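The tag check at the heart of that build job can be sketched in shell. This demo builds a throw-away git repository instead of a real Docker image (repository content and tag name are made up; the docker tag/push steps are only hinted at in a comment):

```shell
# Demonstrate the "is HEAD exactly a tag?" check in a scratch repository.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=demo@example.org -c user.name=demo \
    commit -q --allow-empty -m 'initial commit'
git tag v1.0.0
# The build job would tag and push the Docker image when this succeeds.
if tag=$(git describe --exact-match --tags 2>/dev/null); then
  echo "would tag image with $tag"   # prints: would tag image with v1.0.0
fi
```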