Category Archive: Projects

DevOpsKube: Make Jenkins build Docker Images

In the latest update of the Jenkins Chart we added the possibility to build Docker images with Jenkins. You can use a Jenkinsfile to configure the image build job (see Docker Mysql). This script shows quite nicely how to build a Docker image the way the Docker Hub Automated Build does it: it checks out the Git repository, builds the image, and checks whether the latest commit is a tag; if so, it tags the image accordingly. In either case the image is then published to Docker Hub with the tag "latest".
To make the tagging work, we use the Python script BumpVersion. To see how this works, take a look at the Makefile of this project.
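
As a rough sketch of that flow (the image name and repository URL below are placeholders, not the actual Jenkinsfile of the chart), the build-and-tag logic boils down to something like this:

#!/bin/sh
# Sketch of the build-and-tag flow; the image name and repository URL are placeholders.
IMAGE=example/docker-mysql
git clone https://github.com/example/docker-mysql.git src && cd src
docker build -t "$IMAGE:latest" .
# if the latest commit carries a tag, tag and push the image with it as well
TAG=$(git describe --exact-match --tags HEAD 2>/dev/null || true)
if [ -n "$TAG" ]; then
    docker tag "$IMAGE:latest" "$IMAGE:$TAG"
    docker push "$IMAGE:$TAG"
fi
docker push "$IMAGE:latest"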

DevOpsKube – Just launched…

The first version (implementing the components mentioned in the roadmap for version 0.1) is now published on our GitHub page. The site is reachable via devopsku.be.

It would be great if you came by and contributed to this new and interesting project. Every contribution is very welcome.

DevOpsKube – My opinionated View of a full SDLC Stack on Kubernetes – Roadmap

We decided on the following roadmap so that some useful components are available early and there is something to build on.

Note that this roadmap is not fixed yet, and some of the mentioned features may only arrive in later versions. It should just describe the rough idea of where DevOpsKube is heading.

Version 0.1

All components mentioned in the first post should be provided using MySQL. Furthermore, the configuration for these components is provided and documented. All necessary steps to set up a single-node cluster (based on CoreOS) will be documented as well.

This will be a preliminary version providing all the components and steps to build up the future "development" environment.

Version 0.2

Add additional components to the stack to provide a fully featured SDLC stack. These components could be:

Version 0.3

Add additional components, e.g. for SSO and other things that can be useful in an SDLC stack:

This version should already provide SSO functionality for the defined components.

Version 0.4

Additional functionality to create projects via a single REST API call. This is the first version with some unique functionality. The REST API should come with a web-based client as well as a command-line client.

Version 0.5

Make all of the components highly available (where upstream allows it). Furthermore, integrate them with each other as much as possible.

Version 0.6

Be self-hosted. We should eat our own dog food, therefore this project should be hosted on our own Kubernetes cluster.

Generate a Maven Release w/o Manual Intervention

At the company where I currently work we have several interrelated components, around 70 to be precise. Some of those components are multi-module projects, which makes a fully automatic release build even harder. The SCM in use is Git, and we use Nexus as a central artifact repository.

What are the problems of the release generation process? What are the manual steps, and why are they manual? Where can failures happen in this process? Are there any other problems that need to be resolved to make the process easier and (hopefully) fully automatic?

Well, identifying the manual steps is quite easy; replacing them with automation will help, but can be quite expensive or even plainly impossible. Let's take a look at those steps:

  • Check each component for dependencies that have been updated, and update those
  • Check each component for changes to see whether we need to create a release
  • Create a release of the component

Check for updated dependencies

For some components that are used throughout the whole application and should be used in the same version everywhere, we have to update all other components, even if they do not use the latest SNAPSHOT version. This is mainly true for the model of the core component, which is used in all components accessing the DB and/or the REST API. To make sure that the same version is used everywhere, the version is adjusted manually in all released components. This is obviously rather error prone, since one can overlook some components and their dependencies.

Other components are shared as well, but may be used in different versions by different components. Here the developer decides which version to use in her component. This is error prone as well, since it can lead to dependency issues via transitive dependencies.

The points mentioned above hint at architectural smells in the system, but resolving those would be a longer project, and we urgently wanted to make the whole release process easier first, to free up some time to work on those architectural issues.

We decided to simply update all dependencies (internal ones, that is) to the latest version. This still has the disadvantage that some changes to one component may not be properly tested on the development and staging environments, but at least the reason for this is quite obvious and not hidden in "transitive" dependencies as it was until now.

How do we update all dependencies to the latest version? Let's take a look at the Maven Versions Plugin, which comes to the rescue.

  • versions:update-parent will update the parent version to the latest available version
  • versions:update-properties will update the versions of all dependencies in our project

That looks pretty simple, doesn't it? Well, there are some drawbacks to these steps. One should avoid updating all dependencies (e.g. frameworks like Spring) and only update internal ones. This can be done using the "includes" property of the plugin goal. The problem here is that you have to know the groupIds (that is the classifier we use) of all internal dependencies before running this goal. Apart from that, multi-module projects have some problems here as well, in that properties defined in the multi-module parent but used in the sub-modules are not updated correctly. That is why we defined that a new dependency has to be declared in the dependencyManagement section of the multi-module parent. This is error prone, since each developer has to follow this rule, but we seldom declare new internal dependencies in this phase anymore, so the problem is minor.

To be able to recognize this change in the following steps, you need to make sure that the changes are committed and pushed. To be on the safe side during the release preparation, also make sure that no leftover files are lying around (mvn versions creates a backup file of your pom; mvn versions:commit removes it).
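
Put together, the dependency update step could look roughly like this (the groupId in the includes filter is only a placeholder for your internal groupIds):

# Sketch of the dependency update; the groupId filter is a placeholder for the internal groupIds.
mvn --batch-mode versions:update-parent
mvn --batch-mode versions:update-properties -Dincludes="com.example.internal:*"
# remove the pom.xml.versionsBackup files created by the versions plugin
mvn versions:commit
git commit -am "Update internal dependencies to the latest versions"
git push origin master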

Check for Changes in Components

To see if a component has any changes, we look into the logs of the version control system. We should start at the bottom of the dependency hierarchy, so that if a component has changes, we release it and all components above it can be updated. In the manual process we usually went from top to bottom to make sure that all dependencies are met, relying on the developers to declare a dependency as SNAPSHOT whenever they need a later version. The problem with this approach is that sometimes transitive dependencies of components which do not use the SNAPSHOT version are updated without the developer knowing. This can lead to problems if the new version changes method signatures, and that risk stays the same if the version updates are done automatically. Therefore some common rules for versioning should be defined (see SemVer). Furthermore, we have to make sure that all components are considered.

This can be automated quite easily by comparing the last known "release" commit with the current commit and deciding whether there are any changes in between.

Some helpful commands for this are:


git log --all --grep='\[maven-release-plugin\]' --format='%H' -n 1

The above command shows the latest commit containing the maven-release-plugin pattern. To see whether there are any changes after this commit, you take the commit id from that command and run the following:


git rev-list "$commit_id"..HEAD

Before running these commands, please make sure that you are on the correct branch (in our case master or rc):


git ls-remote --exit-code $GIT_URL/$project refs/heads/$branch
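
Put together, the change detection could then look roughly like this (GIT_URL, project and branch are placeholders as above):

# Sketch of the change detection; GIT_URL, project and branch are placeholders.
git ls-remote --exit-code "$GIT_URL/$project" "refs/heads/$branch" || exit 1
git checkout "$branch"
commit_id=$(git log --all --grep='\[maven-release-plugin\]' --format='%H' -n 1)
# a non-zero number of commits since the last release commit means a new release is needed
if [ -n "$commit_id" ] && [ "$(git rev-list "$commit_id"..HEAD --count)" -gt 0 ]; then
    echo "changes found in $project, a release is needed"
fi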

Create a release

Creating a release with the Maven Release Plugin is not as painful as some people think, and has worked quite well for us for a long time. In the past, we created releases manually via the Jenkins plugin. To let the release process run automatically, without being interrupted by questions about the new version, you should use the command-line flag "--batch-mode" for the Maven call.

To make sure that the release works correctly without making any changes to the Git repository, you should do a "dryRun" beforehand.
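
The corresponding Maven calls could then look roughly like this:

# verify the release without touching the repository, then run it for real
mvn --batch-mode release:prepare -DdryRun=true
mvn release:clean
mvn --batch-mode release:prepare release:perform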

Automate It (Full Pull)

To automate the steps mentioned above, we need a list of all projects in the correct sequence, which is created manually and adapted as soon as dependencies change and/or new dependencies are introduced (this makes the whole process error prone as well, but up until now there is no real alternative to it).

This list of projects is then processed in the given sequence. Each project is cloned and its dependency versions are bumped. After this, the project is checked for changes, and as soon as there are changes, a release is created.
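
A minimal sketch of this outer loop (the project list file, GIT_URL and the groupId filter are placeholders, not our actual implementation) could look like this:

#!/bin/bash
# Sketch of the release loop; projects.txt, GIT_URL and the groupId filter are placeholders.
set -e
while read -r project; do
    git clone "$GIT_URL/$project" "$project"
    (
        cd "$project"
        # bump the internal dependencies (see the versions plugin calls above)
        mvn --batch-mode versions:update-parent versions:update-properties -Dincludes="com.example.internal:*"
        mvn versions:commit
        git commit -am "Update internal dependencies" || true
        git push origin master
        # release only if there are changes since the last release commit
        commit_id=$(git log --all --grep='\[maven-release-plugin\]' --format='%H' -n 1)
        if [ -z "$commit_id" ] || [ "$(git rev-list "$commit_id"..HEAD --count)" -gt 0 ]; then
            mvn --batch-mode release:prepare release:perform
        fi
    )
done < projects.txt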

To make the implementation easier, we reused the bash libs from the Kubernetes release project (mainly common.sh and gitlib.sh), but with some major extensions. For the Maven calls, we created our own mvnlib extension.

Install Gollum with Unicorn and nginx

I just documented how to install Gollum on an Arch Linux machine using Unicorn and an nginx reverse proxy. This documentation provides detailed installation instructions as well as config files.

One of my requirements was to be able to run multiple instances of Gollum, as well as to use systemd to start and stop these instances easily. There is a Gollum package in the AUR, but it uses WEBrick, and therefore I decided to start from scratch.

My UPNP Stack

Hello,

I have already written that I am ripping my whole CD collection (see Discogs for the already ripped part). I have written a tool for tagging my collection and it is working great.

To play music on my stereo, I thought that UPnP would be a great protocol for it. The stack I am currently using involves the following toolset:

Please note that parts of this stack can be replaced with other components, but right now this stack works best (at least for me). All components use the UPnP enhancements from OpenHome, which is an open source project of Linn, IIRC.

The MediaRenderer could be replaced by UPMPDCli, a nice UPnP frontend for MPD (my favourite music player). But then you should also use BubbleUPnPServer to enjoy all the benefits of the OpenHome extensions.

MediaPlayer uses MPD or MPlayer to play the music. MPD offers quite a few extensions which can still be used with the environment mentioned above.

One extension is LCD4Linux, which allows showing some information about the currently played song on a small LCD. This works on my Raspberry, but unfortunately it also seems to have some problems, in that the display just freezes and the whole box needs to be restarted. Since the display I used is also very small (see Pearl display), I decided to invest some more time and money into something slightly larger (see TaoTronics display, power supply, HDMI-to-component adapter, as well as a couple of additional cables (Mini-HDMI, component, …)). I hope this is going to work out. For this stack, LCD4Linux is not needed anymore, since this is a "normal" screen. Therefore I plan to integrate a full-screen display component into the MediaPlayer. As soon as this is finished I will report back; right now I am still waiting for all the components mentioned above to arrive.

At the beginning of my UPnP discoveries I stumbled across the X10, which is also a nice toy, but unfortunately does not support gapless UPnP playback (see X10 forum (German)). Unfortunately I had to buy this device to discover that ;-( It is still a nice toy to play with, but right now it is just used for internet radio streaming, since even the tagging I did with DiscogsTagger is totally screwed up on this device, and the X10 shows me the albums in a totally different format than, e.g., minimserver does.

So, you could buy yourself some expensive devices from Linn, Naim or …, or you could spend your money on some decent hardware like a Raspberry Pi (uh, the sound of this device is not really good without the addition of a good DAC like HifiBerry) or a Cubox-i, invest some time in installing the stack mentioned above, and all should be fine, without spending too much money or time.

Discogstagger

If you have read my blog lately, you already know that I am in the process of ripping all my CDs to FLAC. I am using RubyRipper (since I am on an Arch Linux box) to rip the CDs. The quality of FreeDB (used by RubyRipper) is not really good when it comes to certain (in my case most) of the CDs, therefore I am using Discogs to get the correct metadata. In the beginning I used Puddletag to tag all the tracks. Later I discovered a nice tool called Discogstagger, which is able to tag a whole album using the releaseId from Discogs. Unfortunately, this tool did not provide all the functionality I needed (e.g. multi-disc albums were not supported). Jessewards (the owner of Discogstagger) was quite interested in my changes and accepted all of my pull requests (and I am not really a Python expert). Since the whole application grew quite fast, I decided to fork discogstagger and provide a new version of it. I am still in the process of extending discogstagger, and right now this version is not working at all (unfortunately), but all the tests I have written (and I wrote quite a few unit tests for it) are passing 😉

If you are interested in helping out, just take a look at the version2 branch of discogstagger. I am more than happy to accept pull requests, but keep in mind that I would like to increase the code coverage with every single commit 😉

Every type of pull request is very welcome, be it just a bug fix, an extension to the current functionality, or "just" documentation.

Greetz

What do I want from my personal NAS

In this post I would like to gather some personal requirements for the NAS system I am going to build.

Right now I am in the process of ripping all my CDs (around 950 unique releases; more than half of them are already done). The goal is to store all these releases on a personal NAS with the ability to stream them to my stereo. For this I have already selected minimserver as the UPnP server. This server requires a JDK to run, therefore the NAS I am going to build must be able to run Java.

Since I am already using Linux quite heavily, I would rather not run FreeNAS or NAS4Free on this NAS, even though I am very interested in the underlying file system (ZFS) and/or FreeBSD.

Since Linux offers a "similar" file system (btrfs), I would like to use that one for the NAS.

The services which I would like to run on the NAS are then the following:

There are some other options which would be nice, but are not as "necessary". There is, e.g., Ajenti, which provides a nice web GUI for administrating the NAS, but this does not really correspond to the way Arch Linux works 😉 A possibility would be to use, e.g., CentOS or Ubuntu as the distro, but I am unsure whether this is really going to work out just for a nice GUI.

The requirements mentioned above are not really tough for today's hardware, and therefore I would like to stick to the stack suggested in the nas-portal forum (see here).

Since I am going to use a file system which seems to be picky about power outages, I need a UPS, and I am currently considering this one.

In some future posts I am going to explain some more interesting things about tagging my FLACs for minimserver and about the tool I use (discogstagger). Stay tuned to see how an absolute hardware noob tries to build his own NAS 😉

Software Development in the 21st century

In earlier days SourceForge was THE development platform for open source projects. This has changed. SF.net has been overtaken by GitHub and additional services. SF.net kept adding services, e.g. Git repositories (http://sourceforge.net/apps/trac/sourceforge/wiki/Git) and additional apps like Trac. Furthermore, SourceForge is moving its whole infrastructure to the Apache Foundation under the name Allura. The whole world is moving to cloud-based services (this is especially true for software development services like bug trackers and source code management systems), and one of the formerly biggest players is open sourcing its whole stack. Quite interesting, don't you think?
github.com is still very much specialized on Git repositories, plus some more things like issue tracking, a wiki, and static pages (pages.github.com). But the really interesting part is the Git repository hosting. Even though there are competitors in this area (think bitbucket.org from one of the great companies in the software development space, Atlassian, which bought Bitbucket quite some time ago and also added Git quite quickly, as well as Gitorious), GitHub is still the largest. Value-added services like continuous integration tools (think Travis CI) use the interfaces GitHub offers (called service hooks) to integrate their services. And, I have to admit, these services are doing a great job with this, and even offer so-called "badges" to integrate their services even further into the project home page on GitHub. Very smart. The whole business is going in the direction of the old Unix philosophy (do one thing, and do it very well).
Basically the whole industry is moving into the cloud business, and GitHub and its cohorts are the expression of this. Earlier, one big service like SourceForge offered everything; nowadays several small companies do the same as separate entities. I really like this. What are the services I really like and use in my latest developments?

There is one more thing about these services: they do not offer their services for just one language or environment, but for several languages and environments, like Ruby or JavaScript/Node.js. This is rather interesting. Even though some (if not most) of these services are written in Ruby, they offer their services to other languages as well. This means the whole community recognizes that there is more than just one language. While open source communities like, e.g., JBoss or Apache are focused on one or a few specific languages (in the aforementioned projects it is mostly Java) and are staying that way, they are still opening up to the rest of the world (e.g. HornetQ on github.com).
If you take a look at the Node.js modules, it is clear that smaller modules are more welcome in the community than larger ones (e.g. expressjs vs. geddyjs). This is a nice trend back towards the "small but good" design principle, which I definitely applaud. Basically this is all about "KISS", and, just to mention it again, my preferred Linux distribution is also all about the same principle (Arch Linux).
So, having said all this (and I believe this developed into a small rant), I still believe in the KISS principle and I am very glad to be developing software in the 21st century 😉 Furthermore, I believe this principle is also represented by Uncle Bob and his Manifesto for Software Craftsmanship.
I hope that I follow the principle mentioned above in the following new projects, which I just published to GitHub and npmjs.org:

What does this mean for us developers? Basically I strongly believe that software development in the future will be more like building a castle from existing Lego blocks, following the design of some architect and the business logic concepts of some product owner. All we have to do is use the right tools for the job (fortunately the selection is up to us) and implement the business logic with the right algorithms. This may sound like a demotion, but I strongly believe that we are still craftsmen and can do a fine job at this. Whether we need an architect is a separate decision, but the business logic should (at least IMHO) be designed by the guys and gals who know it best (the product owner should know it best).

dbUnit in a Spring Environment with MSSQL

Today I needed to export and import some data from a database for testing purposes. I already knew dbUnit, and it seemed to be the tool of choice. I stumbled over a couple of problems, because the example I found of using dbUnit with the Spring Framework used Oracle and, furthermore, the FlatXmlDataSet. I wanted to use the XmlDataSet, because it seems to be easier to maintain manually.

In the following I will show how to integrate dbUnit into the project. First of all, I needed to put the Maven dependency and plugin in place:

<dependencies>
        <dependency>
            <groupId>org.dbunit</groupId>
            <artifactId>dbunit</artifactId>
            <version>2.4.8</version>
            <scope>test</scope>
        </dependency>
...
</dependencies>
...
<build>
   <plugins>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>dbunit-maven-plugin</artifactId>
                <version>1.0-beta-3</version>
                <!--jar file that has the jdbc driver -->
                <dependencies>
                    <dependency>
                        <groupId>net.sourceforge.jtds</groupId>
                        <artifactId>jtds</artifactId>
                        <version>${jtds.version}</version>
                    </dependency>
                </dependencies>

                <configuration>
                    <driver>${database.driver.classname}</driver>
                    <url>${database.url}</url>
                    <username>${database.user}</username>
                    <password>${database.password}</password>
                    <dataTypeFactoryName>org.dbunit.ext.mssql.MsSqlDataTypeFactory</dataTypeFactoryName>
                    <ordered>true</ordered>
                </configuration>
            </plugin>
...
</plugins>
</build>

After this I was able to call mvn dbunit:export, which gave me an export of the current data inside the database. It generates a file target/dbunit/export.xml, which can then be used in the test cases (we are using JUnit).
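
To make the export available to the tests below, I copy it into the test resources under the name the test expects (the paths are simply how this project is laid out):

mvn dbunit:export
# the tests load the dataset from src/test/resources/dbexport.xml (see getDataSet() below)
cp target/dbunit/export.xml src/test/resources/dbexport.xml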

The test cases now look something like this:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"classpath:/META-INF/spring/testAC.xml"})
public class ServiceTest {
...
	@Autowired
	private DataSource dataSource;

Here we autowire the dataSource from the application context; it is needed to extract all necessary information for the database connection used by dbUnit.

	// the following methods make sure that the database is set up correctly with
	// dbUnit
	private IDatabaseConnection getConnection() throws Exception {
		// get connection
		Connection con = dataSource.getConnection();
		DatabaseMetaData databaseMetaData = con.getMetaData();
		IDatabaseConnection connection = new DatabaseConnection(con);
		DatabaseConfig config = connection.getConfig();
		config.setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new MsSqlDataTypeFactory());
		return connection;
	}

Here we make sure that the connection is set up correctly and that the right DataTypeFactory is used. This should be the same factory as configured in the pom.xml (see above).

	private IDataSet getDataSet() throws IOException, DataSetException {
		File file = new File("src/test/resources/dbexport.xml");
		assertTrue(file.exists());
		Reader reader = new FileReader(file);
		return new XmlDataSet(reader);
	}

This fetches the dataset from a file so that it can be used in the DatabaseOperation of dbUnit.

	@Test
	@Transactional
	public void testDelete() throws Exception {
		DatabaseOperation.CLEAN_INSERT.execute(getConnection(), getDataSet());
		EntityVersionId id = new EntityVersionId("9a5a8eb1f02b4e06ba9117a771f2b69c", 2L);
		Entity entity = this.entityService.find(id);
		assertNotNull(entity);
		this.entityService.delete(id);
	}

Please note that we are using a special EntityVersionId, which is part of our framework and contains two values; it is a combined ID consisting of the usual ID, a UUID (String), and a "version" of type long. I guess you will most probably not use something like this in your project.

That's it, now everything works as expected 😉