Category Archives: Common


DevOpsKube – Redmine SSO with KeyCloak via OpenId Connect

Because we could not get the logout functionality to work using the SAML plugin for Redmine (see the last post), we decided to use the OpenId Connect Plugin. Because some of its behaviour did not fit our needs (e.g. SSL validation was always enabled, which is not wanted in development environments), we forked this plugin and implemented some fixes (see here). Those fixes have already been submitted to the original project as pull requests. As soon as they are merged, we are going to use the original version instead of our fork.

Obviously DevOpsKube is not interested in a manual configuration of this SSO connection, but some documentation is always helpful. Unfortunately we have not found any documentation about integrating Redmine and KeyCloak via the OpenId Connect plugin, therefore we provide some additional documentation about this integration in our DevOpsKube documentation. The integration is already committed to our repositories, but there is still some work to do on the KeyCloak setup (we need to implement keys, which are used in Redmine as well as in KeyCloak). As soon as this is done, we will have the first full integration between two components of our DevOpsKube stack.

Hope you find this helpful. If you would like to support us in building up a modern SDLC stack on Kubernetes, do not hesitate to join our effort.

Redmine SSO with KeyCloak via SAML Protocol

For the DevOpsKube stack we are currently implementing a Single Sign-On (SSO) solution for Redmine. For this we use KeyCloak as the identity provider and the SAML protocol via the Redmine OmniAuth SAML plugin. Unfortunately, apart from the sample initializer found in the plugin, there is no additional information available. Therefore we describe the steps needed to get this to work, for your own enjoyment.

  1. Install the Redmine OmniAuth SAML plugin as described in its README
  2. Create a client in your KeyCloak server. We have named it „redmine“
  3. Create mappers in your KeyCloak for the redmine client using the following properties:
    1. Name: firstname, Type: User Property, Property: firstName, Friendly Name: givenName, SAML Attribute: firstname
    2. Name: lastname, Type: User Property, Property: lastName, Friendly Name: surname, SAML Attribute: lastname
    3. Name: email, Type: User Property, Property: email, Friendly Name: email, SAML Attribute: email
  4. Now the saml.rb file (see the sample file) should be configured as follows:

    Redmine::OmniAuthSAML::Base.configure do |config|
      config.saml = {
        :assertion_consumer_service_url => "http://REDMINE_URL/auth/saml/callback", # OmniAuth callback URL
        :issuer => "redmine", # The issuer name / entity ID. Must be a URI as per the SAML 2.0 spec.
        :idp_sso_target_url => "https://KEYCLOAK_URL/auth/realms/REALM_NAME/protocol/saml", # SSO login endpoint
        # :idp_cert_fingerprint => "certificate fingerprint", # SSO SSL certificate fingerprint
        # Alternatively, specify the full certificate:
        :idp_cert => "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
        :name_identifier_format => "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent",
        :signout_url => "https://KEYCLOAK_URL/auth/realms/REALM_NAME/protocol/saml/clients/redmine", # Optional signout URL, not supported by all identity providers
        :idp_slo_target_url => "https://KEYCLOAK_URL/auth/realms/REALM_NAME/protocol/saml/clients/redmine",
        :name_identifier_value => "mail", # Which Redmine field is used as name_identifier_value for SAML logout
        :attribute_mapping => {
          # How attributes from SSO are mapped to Redmine attributes -- custom properties
          :login => 'info.email',
          :mail => 'info.email',
          :firstname => 'info.first_name',
          :lastname => 'info.last_name'
        }
      }

      config.on_login do |omniauth_hash, user|
        # Implement any hook you want here
      end
    end
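If you prefer the fingerprint variant (idp_cert_fingerprint) over embedding the full certificate, the fingerprint can be derived with openssl from the realm certificate exported from KeyCloak. A minimal sketch, assuming the certificate was saved as keycloak.pem (the file name is an assumption):


# keycloak.pem: the realm certificate exported from KeyCloak (file name assumed)
openssl x509 -in keycloak.pem -noout -fingerprint -sha1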

Right now the logout is not working properly, but we are still working on it. As soon as this integration is finished, we will be able to provide the first „real“ integration for DevOpsKube.

If you have any further questions, do not hesitate to ask in the comments.

DevOpsKube: Lots of updates

We updated the Jenkins Docker image to reflect the latest changes made to the MySQL Docker image (e.g. we added the Makefile, a Jenkinsfile, and use the latest version bump tooling). These new versions of the image are now also reflected in the main DevOpsKube charts.

Furthermore, we updated the single-node user-data to use the latest Kubernetes version (1.4.6). This adaptation reflects the changes found in the CoreOS single-node repository.

The next steps are to use a seed job for the Jenkins Docker jobs we currently have (mainly docker-mysql and docker-jenkins) and then to integrate Jenkins with our local Gogs chart, to provide fully working Docker builds in our SDLC stack.

If you are interested in our efforts, please join us. Any help and any contribution is welcome.

DevOpsKube: Using static files as input for ConfigMaps

For some of our charts in DevOpsKube we wanted to use static files as templates, while obviously still being able to use variables from helm (e.g. .Release.Name). This seemed rather easy, looking at the „include“ statement of helm. Unfortunately this is not really well documented and/or does not work as expected. Some projects (e.g. SAP CC-OpenStack) use this feature as well. I tried to use the same logic, but unfortunately it did not work (with helm 2.0.0-rc1 at least).

Here is some guidance on how to implement ConfigMaps in Kubernetes using helm and static files. First of all, all static files are put into the etc directory of the chart. These files are not recognized by helm itself, therefore we need to create a partials template (named _partials.tpl). To generate it automatically, we added a Makefile like the one found in the SAP-CC solution. If you then type „make“ in the corresponding chart directory, a templates/_partials.tpl file is generated, which contains all static files wrapped in corresponding „define“ blocks. This file is ignored by git.
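The generated file looks roughly like the following sketch (config.xml stands for whatever files your chart keeps under etc/):


{{- define "etc/config.xml" -}}
...contents of etc/config.xml...
{{- end -}}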

A static file can then be referenced in the ConfigMap by including it with the following structure:


{{ include "etc/FILENAME" . | printf "%s" | indent 4 }}

For a working example of this, please take a look into our jenkins chart.
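To illustrate the shape of such a template, a minimal ConfigMap might look like the following sketch (config.xml is a placeholder, not the actual file from our Jenkins chart):


apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  config.xml: |-
{{ include "etc/config.xml" . | printf "%s" | indent 4 }}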

Howto install a kubernetes single node „cluster“?

We just published documentation on how to install a single-node „cluster“ at my preferred hoster. This documentation describes how to install CoreOS (https://coreos.com/) and initialize the single-node cluster on this node.

This is really helpful for „testing“ the DevOpsKube environment, but it could also be used for small projects.

The documentation can be found in the GitHub repository.

DevOpsKube – Just launched…

The first version (implementing the components mentioned in the roadmap for version 0.1) is now published on our GitHub page. The site is now reachable via devopsku.be.

It would be great if you came by and contributed to this new and interesting project. Every contribution is greatly welcome.

DevOpsKube – My opinionated View of a full SDLC Stack on Kubernetes – Roadmap

We decided on the following roadmap, to be able to provide some useful components early on and to have something to extend upon.

Note that this roadmap is not fixed yet and some of the mentioned features may come in later versions. This roadmap just describes the rough idea of where DevOpsKube is heading.

Version 0.1

All components mentioned in the first post will be provided using MySQL. Furthermore, the configuration for these components is provided and documented. All necessary steps to set up a single-node cluster (based on CoreOS) will be documented as well.

This will be a preliminary version providing all the components and steps to build up the future „development“ environment.

Version 0.2

Add additional components to the stack to provide a fully featured SDLC stack. These components could be:

Version 0.3

Add additional components, e.g. for SSO and other things which can be useful in an SDLC stack:

This version should already provide SSO functionality for the defined components.

Version 0.4

Additional functionality to create projects via a single REST API call. This is the first version with some unique functionality. The REST API should include a web-based client as well as a command-line client.

Version 0.5

Make all of the components (if upstream allows) HA-capable. Furthermore, integrate them with each other as much as possible.

Version 0.6

Be self-hosted. We should eat our own dog food, therefore we should host this project on our own Kubernetes cluster.

Generate a Maven Release w/o Manual Intervention

At the company I currently work for, we have several, around 70 to be precise, inter-related components. Some of those components are multi-module projects, which makes a fully automatic release build even harder. The SCM in use is Git, and we use Nexus as the central artifact repository.

What are the problems of the release generation process? What are the manual steps, and why are they manual? Where can failures happen in this process? Are there any other problems which need to be resolved to make the process easier and (hopefully) fully automatic?

Well, identifying the manual steps is quite easy; replacing them with automation will help, but can be quite expensive or even plainly impossible. Let's take a look at those steps:

  • Check each component for updated dependencies, and update those
  • Check each component for changes, to see whether a release needs to be created
  • Create a release of the component

Check for updated dependencies

Some components are used throughout the whole application and should be used in the same version all over the place, so we have to update all other components, even if they do not use the latest SNAPSHOT version. This is mainly true for the model of the core component, which is used in all components accessing the DB and/or the REST API. To make sure that the same version is used everywhere, the version used to be adapted manually in all released components. This is obviously rather error prone, since one can easily overlook some components and their dependencies.

Other components are shared as well, but can be used in different versions in different components. Here the developer decides which version to use in her component. This is error prone as well, since it can lead to dependency issues via transitive dependencies.

The above mentioned points hint at architectural smells in the system, but resolving those would be a longer project, and we urgently wanted to make the whole release process easier, to free up time to tackle those architectural issues.

We decided to simply update all dependencies (internal ones, that is) to the latest version. This still has the disadvantage that some changes to a component may not have been tested on the development and staging environments, but at least the reason for this is quite obvious and not hidden in „transitive“ dependencies as it was up until now.

How do we update all dependencies to the latest version? Let's take a look at the Maven Versions plugin, which comes to the rescue:

  • versions:update-parent will update the parent version to the latest available version
  • versions:update-properties will update the versions of all dependencies in our project

That looks pretty simple, doesn't it? Well, there are some drawbacks to these steps. One should avoid updating all dependencies (e.g. used frameworks like Spring) and update only internal dependencies. This can be done using the „includes“ property of the plugin goal; the catch is that you need to know all groupIds (that is the selector we use) of your internal dependencies before running this goal. Apart from that, multi-module projects have some problems here as well, in that properties defined in the multi-module master but used in the sub-modules are not updated correctly. That is why we defined that a new dependency has to be declared in the dependencyManagement of the multi-module master. This is error prone, since each developer has to follow this rule, but we seldom declare new internal dependencies in this phase anymore, so this problem is minor.
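In our setup this boils down to a call along the following lines (a sketch; the groupId pattern com.example.* is a placeholder for your internal groupIds):


# Update the parent version and the versions of internal dependencies.
# 'com.example.*' is a placeholder for your internal groupIds.
mvn --batch-mode versions:update-parent versions:update-properties \
    -Dincludes='com.example.*:*'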

To be able to recognize this change in the following steps, you need to make sure that the changes are committed and pushed. To be on the safe side during the release preparation, also make sure that no leftover files are lying around (mvn versions will create a backup file of your pom; mvn versions:commit removes it).

Check for Changes in Components

To see whether a component has any changes, we take a look into the logs of the version control system. We start at the bottom of the dependency hierarchy, so that if a component has changes, we release it and all components above it can be updated. In the manual process, we usually went from the top to the bottom to make sure that all dependencies were met. This depends on the developers, who declare a dependency as „SNAPSHOT“ if they need a later version. The problem with this approach is that sometimes transitive dependencies of components which do not use the SNAPSHOT version are updated without the developer knowing. This can lead to problems if the new version changes method signatures, which can equally happen if the version updates are done automatically. Therefore some common rules for versioning should be defined (see SemVer). Furthermore, we have to make sure that all components are considered.

This can be automated quite easily by simply comparing the last known „release“ commit with the current commit and deciding whether there are any changes in between.

Some helpful commands for this are:


git log --all --fixed-strings --grep='[maven-release-plugin]' --format='%H' -n 1

The above command prints the most recent commit whose message contains the maven-release-plugin marker (--fixed-strings makes the bracketed pattern match literally instead of as a regex character class). To see whether there are any changes after this commit, take the commit id from the above command and run:


git rev-list "$commit_id"..HEAD

Before running these commands, please make sure that you are on the correct branch (in our case master or rc) and that the branch exists on the remote:


git ls-remote --exit-code $GIT_URL/$project refs/heads/$branch
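Putting these pieces together, a minimal check could look like the following sketch (the branch name and the echo actions are assumptions, not our actual tooling):


#!/usr/bin/env bash
set -euo pipefail

branch="master"
git checkout "$branch"

# Most recent commit created by the maven-release-plugin.
last_release=$(git log --all --fixed-strings --grep='[maven-release-plugin]' --format='%H' -n 1)

if [ -z "$last_release" ]; then
  echo "no release yet - initial release needed"
elif [ -n "$(git rev-list "$last_release..HEAD")" ]; then
  echo "changes since $last_release - release needed"
else
  echo "no changes since the last release - skipping"
fi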

Create a release

Creating a release using the Release plugin of Maven is not as painful as some people think, and it has worked quite well for us for a long time. In the past, we created releases manually via the Jenkins plugin. To let the release process run automatically, without interruptions due to questions about the new version, you should use the command-line flag „--batch-mode“ for the Maven call.

To make sure that the release works correctly and no changes are pushed to the git repository, you should do a „dryRun“ beforehand.
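In practice this boils down to something like the following two calls (a sketch; your build may need additional properties or profiles):


# Simulate the release first; nothing is committed or pushed.
mvn --batch-mode release:prepare -DdryRun=true
mvn release:clean

# The actual, non-interactive release.
mvn --batch-mode release:prepare release:perform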

Automate It (Full Pull)

To automate the above mentioned steps, we need a list of all projects in the correct sequence. This list is created and adapted manually as soon as dependencies change and/or new dependencies are created (this makes the whole process error prone as well, but up until now there is no real alternative to it).

This list of projects is then processed in the given sequence: each project is cloned and version-bumped, after which it is checked for changes. If there are changes, a release is created.
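As a rough illustration, the processing loop could look like the following sketch (projects.txt, the groupId pattern and the commit handling are assumptions, not our actual scripts):


#!/usr/bin/env bash
set -euo pipefail

while read -r project; do
  git clone "$GIT_URL/$project"
  (
    cd "$project"
    # Bump internal dependencies to their latest versions.
    mvn --batch-mode versions:update-parent versions:update-properties \
        -Dincludes='com.example.*:*'
    mvn versions:commit   # remove the pom backup files created by the versions plugin
    git diff --quiet || { git commit -am "bump internal dependencies"; git push; }
    # Release only if there are changes since the last release commit.
    last=$(git log --fixed-strings --grep='[maven-release-plugin]' --format='%H' -n 1)
    if [ -z "$last" ] || [ -n "$(git rev-list "$last..HEAD")" ]; then
      mvn --batch-mode release:prepare release:perform
    fi
  )
done < projects.txt   # ordered list: bottom of the dependency hierarchy first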

To make the implementation easier, we re-used the bash libraries from the Kubernetes release project (mainly common.sh and gitlib.sh), albeit with some major extensions. For the Maven calls, we created our own mvnlib extension.