Pull-through Cache

In this article I show how to easily set up a pull-through cache for K3D, so that you still benefit from caching of large images even after a cluster has been deleted.

K3D

A Kubernetes cluster built with K3D/K3S has integrated caching for Docker images pulled from a registry (docker.io or quay.io). As long as the cluster exists and is not removed with delete, K3D keeps the images available inside the Docker containers. This reduces network traffic, and redeploying applications is noticeably faster.

K3D Registries

For some time now there has been a convenient way to register a registry in K3D: you only need to provide a registry configuration file containing the registry parameters. The --registry-config parameter specifies this configuration file.

By default, K3D automatically registers the hostname host.k3d.internal with the IP of the host, so that the host can be reached from inside the cluster via this name. In this example I want to cache docker.io and quay.io. For this we add two registries as mirrors to the configuration:

mirrors:
  docker.io:
    endpoint:
      - "http://host.k3d.internal:5001"
      - "https://docker.io"
  quay.io:
    endpoint:
      - "http://host.k3d.internal:5002"
      - "https://quay.io"

We will install a pull-through cache on each of the local host ports 5001 and 5002, so that they can be used as registries inside the cluster.
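Assuming the configuration above is saved as registries.yaml, creating a cluster that uses the mirrors could look like this (the cluster name dev is a placeholder, and the k3d call is skipped if k3d is not installed):

```shell
# Write the mirror configuration from above to a file
cat > registries.yaml <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "http://host.k3d.internal:5001"
      - "https://docker.io"
  quay.io:
    endpoint:
      - "http://host.k3d.internal:5002"
      - "https://quay.io"
EOF

# Create a cluster that pulls images through the local caches
# ("dev" is a placeholder name)
if command -v k3d >/dev/null; then
  k3d cluster create dev --registry-config registries.yaml
else
  echo "k3d not installed, skipping cluster creation"
fi
```

Once the cluster is up, image pulls for docker.io and quay.io are tried against the local endpoints first and only fall back to the upstream registries.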

Docker Registry v2 and UI

Now let's install the cache. With the Registry v2 image, Docker itself offers a solution for this. The important part is the environment variable REGISTRY_PROXY_REMOTEURL: it specifies a registry that is then proxied, which is exactly what we need for this use case.

For the UI, https://github.com/Joxit/docker-registry-ui is used. It offers a simple UI to display the contents of a registry, which is completely sufficient for our case. The UI is exposed on ports 6001 and 6002, analogous to the registries. This results in the following docker-compose file:

version: '2.0'
services:

#
# docker.io
#
  registry_1:
    image: registry:2.6.2
    ports:
      - 5001:5000
    volumes:
      - ./registry-data_1:/var/lib/registry
    networks:
      - registry-ui-net
    environment:
      REGISTRY_PROXY_REMOTEURL: https://registry-1.docker.io

  ui_1:
    image: joxit/docker-registry-ui:static
    ports:
      - 6001:80
    environment:
      - REGISTRY_TITLE=My Private Docker docker.io Registry
      - REGISTRY_URL=http://registry_1:5000
    depends_on:
      - registry_1
    networks:
      - registry-ui-net

#
# quay.io
#
  registry_2:
    image: registry:2.6.2
    ports:
      - 5002:5000
    volumes:
      - ./registry-data_2:/var/lib/registry
    networks:
      - registry-ui-net
    environment:
      REGISTRY_PROXY_REMOTEURL: https://quay.io

  ui_2:
    image: joxit/docker-registry-ui:static
    ports:
      - 6002:80
    environment:
      - REGISTRY_TITLE=My Private Docker Quay.io Registry Cache
      - REGISTRY_URL=http://registry_2:5000
    depends_on:
      - registry_2
    networks:
      - registry-ui-net

networks:
  registry-ui-net:

Starting the Registry and UI

Starting them is unspectacular. It suffices to run

docker-compose up -d

to start the containers. The pull-through cache is now available to K3D.
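Whether the caches are filling up can be checked via the registry HTTP API; the /v2/_catalog endpoint lists the repositories a registry holds. A small sketch, assuming the compose stack above is running on ports 5001 and 5002 (the fallback message keeps it harmless when it is not):

```shell
# List the cached repositories of both mirrors via the registry API
for port in 5001 5002; do
  echo "Catalog on port $port:"
  curl -s "http://localhost:$port/v2/_catalog" || echo "(registry not reachable)"
done
```

After the first image pull through the cluster, the pulled repositories should show up in the catalog output and in the UIs on ports 6001 and 6002.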

Sonarqube in a Docker Container

Since starting the Docker container from a Docker Compose script was not quite as easy as hoped, I decided to briefly describe the procedure here. I want to use the current community beta version of Sonar.

First Start - Preparing Sonarqube

Unfortunately, the Docker image does not work for the initialization because of permission problems. I therefore wrote the following Bash script, which should let Sonar initialize correctly. It also copies the configuration into the conf directory.

Create a folder ~/docker/sonar and create the file init.sh in it with the following content.

#!/bin/bash

# prepare directory structure
mkdir -p volumes/conf
mkdir -p volumes/extensions
mkdir -p volumes/data
chmod -R 777 ./volumes 

# run sonar init
docker run --rm \
-v $PWD/volumes/conf:/opt/sonarqube/conf \
-v $PWD/volumes/extensions:/opt/sonarqube/extensions \
-v $PWD/volumes/data:/opt/sonarqube/data \
sonarqube:community-beta --init

# fix permissions again
find ./volumes -type d -exec sudo chmod 777 {} + 

# copy sonar properties into conf dir
sudo cp sonar.properties ./volumes/conf/

The Docker-Compose Script

Sonar is started on port 9000. Three directories below the folder ./volumes are mounted. The conf directory contains the configuration file sonar.properties, which is where Sonarqube is configured. Since the normal user has no permission after initialization, the original is kept in the directory ~/docker/sonar and copied by the init script.

version: "2"

services:
  sonarqube:
    image: sonarqube:community-beta
    ports:
      - "9000:9000"
    networks:
      - sonarnet
    environment:
      - sonar.jdbc.username=sonar
      - sonar.jdbc.password=sonar
      - sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
    volumes:
      - ./volumes/conf:/opt/sonarqube/conf
      - ./volumes/data:/opt/sonarqube/data
      - ./volumes/extensions:/opt/sonarqube/extensions
    restart: always
    depends_on:
      - db

  db:
    image: postgres
    networks:
      - sonarnet
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - ./volumes/postgresql:/var/lib/postgresql
      # This needs explicit mapping due to https://github.com/docker-library/postgres/blob/4e48e3228a30763913ece952c611e5e9b95c8759/Dockerfile.template#L52
      - ./volumes/postgresql_data:/var/lib/postgresql/data
    restart: always

networks:
  sonarnet:
    driver: bridge

The Configuration

Create the configuration file sonar.properties with the following content and adjust it if necessary. The basic settings for the database connection are already set in the Docker environment variables, so you can now start Sonarqube with ./init.sh && docker-compose up -d.

# Property values can:
# - reference an environment variable, for example sonar.jdbc.url= ${env:SONAR_JDBC_URL}
# - be encrypted. See https://redirect.sonarsource.com/doc/settings-encryption.html

#--------------------------------------------------------------------------------------------------
# DATABASE
#
# IMPORTANT:
# - The embedded H2 database is used by default. It is recommended for tests but not for
#   production use. Supported databases are Oracle, PostgreSQL and Microsoft SQLServer.
# - Changes to database connection URL (sonar.jdbc.url) can affect SonarSource licensed products.

# User credentials.
# Permissions to create tables, indices and triggers must be granted to JDBC user.
# The schema must be created first.
#sonar.jdbc.username=sonar
#sonar.jdbc.password=sonar

#----- Embedded Database (default)
# H2 embedded database server listening port, defaults to 9092
#sonar.embeddedDatabase.port=9092


#----- Oracle 11g/12c/18c/19c
# The Oracle JDBC driver must be copied into the directory extensions/jdbc-driver/oracle/.
# Only the thin client is supported, and we recommend using the latest Oracle JDBC driver. See
# https://jira.sonarsource.com/browse/SONAR-9758 for more details.
# If you need to set the schema, please refer to http://jira.sonarsource.com/browse/SONAR-5000
#sonar.jdbc.url=jdbc:oracle:thin:@localhost:1521/XE


#----- PostgreSQL 9.3 or greater
# By default the schema named "public" is used. It can be overridden with the parameter "currentSchema".
#sonar.jdbc.url=jdbc:postgresql://db/sonarqube?currentSchema=my_schema


#----- Microsoft SQLServer 2014/2016/2017 and SQL Azure
# A database named sonar must exist and its collation must be case-sensitive (CS) and accent-sensitive (AS)
# Use the following connection string if you want to use integrated security with Microsoft Sql Server
# Do not set sonar.jdbc.username or sonar.jdbc.password property if you are using Integrated Security
# For Integrated Security to work, you have to download the Microsoft SQL JDBC driver package from
# https://www.microsoft.com/en-us/download/details.aspx?id=55539
# and copy sqljdbc_auth.dll to your path. You have to copy the 32 bit or 64 bit version of the dll
# depending upon the architecture of your server machine.
#sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar;integratedSecurity=true

# Use the following connection string if you want to use SQL Auth while connecting to MS Sql Server.
# Set the sonar.jdbc.username and sonar.jdbc.password appropriately.
#sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar


#----- Connection pool settings
# The maximum number of active connections that can be allocated
# at the same time, or negative for no limit.
# The recommended value is 1.2 * max sizes of HTTP pools. For example if HTTP ports are
# enabled with default sizes (50, see property sonar.web.http.maxThreads)
# then sonar.jdbc.maxActive should be 1.2 * 50 = 60.
#sonar.jdbc.maxActive=60

# The maximum number of connections that can remain idle in the
# pool, without extra ones being released, or negative for no limit.
#sonar.jdbc.maxIdle=5

# The minimum number of connections that can remain idle in the pool,
# without extra ones being created, or zero to create none.
#sonar.jdbc.minIdle=2

# The maximum number of milliseconds that the pool will wait (when there
# are no available connections) for a connection to be returned before
# throwing an exception, or <= 0 to wait indefinitely.
#sonar.jdbc.maxWait=5000

#sonar.jdbc.minEvictableIdleTimeMillis=600000
#sonar.jdbc.timeBetweenEvictionRunsMillis=30000



#--------------------------------------------------------------------------------------------------
# WEB SERVER
# Web server is executed in a dedicated Java process. By default heap size is 512MB.
# Use the following property to customize JVM options.
#    Recommendations:
#
#    The HotSpot Server VM is recommended. The property -server should be added if server mode
#    is not enabled by default on your environment:
#    http://docs.oracle.com/javase/8/docs/technotes/guides/vm/server-class.html
#
#    Startup can be long if entropy source is short of entropy. Adding
#    -Djava.security.egd=file:/dev/./urandom is an option to resolve the problem.
#    See https://wiki.apache.org/tomcat/HowTo/FasterStartUp#Entropy_Source
#
#sonar.web.javaOpts=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError

# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.web.javaAdditionalOpts=

# Binding IP address. For servers with more than one IP address, this property specifies which
# address will be used for listening on the specified ports.
# By default, ports will be used on all IP addresses associated with the server.
#sonar.web.host=0.0.0.0

# Web context. When set, it must start with forward slash (for example /sonarqube).
# The default value is root context (empty value).
#sonar.web.context=
# TCP port for incoming HTTP connections. Default value is 9000.
#sonar.web.port=9000


# The maximum number of connections that the server will accept and process at any given time.
# When this number has been reached, the server will not accept any more connections until
# the number of connections falls below this value. The operating system may still accept connections
# based on the sonar.web.connections.acceptCount property. The default value is 50.
#sonar.web.http.maxThreads=50

# The minimum number of threads always kept running. The default value is 5.
#sonar.web.http.minThreads=5

# The maximum queue length for incoming connection requests when all possible request processing
# threads are in use. Any requests received when the queue is full will be refused.
# The default value is 25.
#sonar.web.http.acceptCount=25

# By default users are logged out and sessions closed when server is restarted.
# If you prefer keeping user sessions open, a secret should be defined. Value is
# HS256 key encoded with base64. It must be unique for each installation of SonarQube.
# Example of command-line:
# echo -n "type_what_you_want" | openssl dgst -sha256 -hmac "key" -binary | base64
#sonar.auth.jwtBase64Hs256Secret=

# The inactivity timeout duration of user sessions, in minutes. After the configured
# period of time, the user is logged out.
# The default value is set to 3 days (4320 minutes)
# and cannot be greater than 3 months. Value must be strictly positive.
#sonar.web.sessionTimeoutInMinutes=4320

# A passcode can be defined to access some web services from monitoring
# tools without having to use the credentials of a system administrator.
# Check the Web API documentation to know which web services are supporting this authentication mode.
# The passcode should be provided in HTTP requests with the header "X-Sonar-Passcode".
# By default feature is disabled.
#sonar.web.systemPasscode=


#--------------------------------------------------------------------------------------------------
# SSO AUTHENTICATION

# Enable authentication using HTTP headers
#sonar.web.sso.enable=false

# Name of the header to get the user login.
# Only alphanumeric, '.' and '@' characters are allowed
#sonar.web.sso.loginHeader=X-Forwarded-Login

# Name of the header to get the user name
#sonar.web.sso.nameHeader=X-Forwarded-Name

# Name of the header to get the user email (optional)
#sonar.web.sso.emailHeader=X-Forwarded-Email

# Name of the header to get the list of user groups, separated by comma (optional).
# If the sonar.sso.groupsHeader is set, the user will belong to those groups if groups exist in SonarQube.
# If none of the provided groups exists in SonarQube, the user will only belong to the default group.
# Note that the default group will always be set.
#sonar.web.sso.groupsHeader=X-Forwarded-Groups

# Interval used to know when to refresh name, email and groups.
# During this interval, if for instance the name of the user is changed in the header, it will only be updated after X minutes.
#sonar.web.sso.refreshIntervalInMinutes=5

#--------------------------------------------------------------------------------------------------
# LDAP CONFIGURATION

# Enable the LDAP feature
# sonar.security.realm=LDAP

# Set to true when connecting to a LDAP server using a case-insensitive setup.
# sonar.authenticator.downcase=true

# URL of the LDAP server. Note that if you are using ldaps, then you should install the server certificate into the Java truststore.
# ldap.url=ldap://localhost:10389

# Bind DN is the username of an LDAP user to connect (or bind) with. Leave this blank for anonymous access to the LDAP directory (optional)
# ldap.bindDn=cn=sonar,ou=users,o=mycompany

# Bind Password is the password of the user to connect with. Leave this blank for anonymous access to the LDAP directory (optional)
# ldap.bindPassword=secret

# Possible values: simple | CRAM-MD5 | DIGEST-MD5 | GSSAPI See http://java.sun.com/products/jndi/tutorial/ldap/security/auth.html (default: simple)
# ldap.authentication=simple

# See :
#   * http://java.sun.com/products/jndi/tutorial/ldap/security/digest.html
#   * http://java.sun.com/products/jndi/tutorial/ldap/security/crammd5.html
# (optional)
# ldap.realm=example.org

# Context factory class (optional)
# ldap.contextFactoryClass=com.sun.jndi.ldap.LdapCtxFactory

# Enable usage of StartTLS (default : false)
# ldap.StartTLS=true

# Follow or not referrals. See http://docs.oracle.com/javase/jndi/tutorial/ldap/referral/jndi.html (default: true)
# ldap.followReferrals=false

# USER MAPPING

# Distinguished Name (DN) of the root node in LDAP from which to search for users (mandatory)
# ldap.user.baseDn=cn=users,dc=example,dc=org

# LDAP user request. (default: (&(objectClass=inetOrgPerson)(uid={login})) )
# ldap.user.request=(&(objectClass=user)(sAMAccountName={login}))

# Attribute in LDAP defining the user’s real name. (default: cn)
# ldap.user.realNameAttribute=name

# Attribute in LDAP defining the user’s email. (default: mail)
# ldap.user.emailAttribute=email

# GROUP MAPPING

# Distinguished Name (DN) of the root node in LDAP from which to search for groups. (optional, default: empty)
# ldap.group.baseDn=cn=groups,dc=example,dc=org

# LDAP group request (default: (&(objectClass=groupOfUniqueNames)(uniqueMember={dn})) )
# ldap.group.request=(&(objectClass=group)(member={dn}))

# Property used to specifiy the attribute to be used for returning the list of user groups in the compatibility mode. (default: cn)
# ldap.group.idAttribute=sAMAccountName

#--------------------------------------------------------------------------------------------------
# COMPUTE ENGINE
# The Compute Engine is responsible for processing background tasks.
# Compute Engine is executed in a dedicated Java process. Default heap size is 512MB.
# Use the following property to customize JVM options.
#    Recommendations:
#
#    The HotSpot Server VM is recommended. The property -server should be added if server mode
#    is not enabled by default on your environment:
#    http://docs.oracle.com/javase/8/docs/technotes/guides/vm/server-class.html
#
#sonar.ce.javaOpts=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError

# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.ce.javaAdditionalOpts=


#--------------------------------------------------------------------------------------------------
# ELASTICSEARCH
# Elasticsearch is used to facilitate fast and accurate information retrieval.
# It is executed in a dedicated Java process. Default heap size is 512MB.
#
# --------------------------------------------------
# Word of caution for Linux users on 64bits systems
# --------------------------------------------------
# Please ensure Virtual Memory on your system is correctly configured for Elasticsearch to run properly
# (see https://www.elastic.co/guide/en/elasticsearch/reference/5.5/vm-max-map-count.html for details).
#
# When SonarQube runs standalone, a warning such as the following may appear in logs/es.log:
#      "max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]"
# When SonarQube runs as a cluster, however, Elasticsearch will refuse to start.
#

# JVM options of Elasticsearch process
#sonar.search.javaOpts=-Xms512m -Xmx512m -XX:+HeapDumpOnOutOfMemoryError

# Same as previous property, but allows to not repeat all other settings like -Xmx
#sonar.search.javaAdditionalOpts=

# Elasticsearch port. Default is 9001. Use 0 to get a free port.
# As a security precaution, should be blocked by a firewall and not exposed to the Internet.
#sonar.search.port=9001

# Elasticsearch host. The search server will bind this address and the search client will connect to it.
# Default is loopback address.
# As a security precaution, should NOT be set to a publicly available address.
#sonar.search.host=


#--------------------------------------------------------------------------------------------------
# UPDATE CENTER

# Update Center requires an internet connection to request https://update.sonarsource.org
# It is enabled by default.
#sonar.updatecenter.activate=true

# HTTP proxy (default none)
#http.proxyHost=
#http.proxyPort=
# HTTPS proxy (defaults are values of http.proxyHost and http.proxyPort)
#https.proxyHost=
#https.proxyPort=

# NT domain name if NTLM proxy is used
#http.auth.ntlm.domain=

# SOCKS proxy (default none)
#socksProxyHost=
#socksProxyPort=

# Proxy authentication (used for HTTP, HTTPS and SOCKS proxies)
#http.proxyUser=
#http.proxyPassword=

# Proxy exceptions: list of hosts that can be accessed without going through the proxy
#                   separated by the '|' character, wildcard character '*' can be used for pattern matching
#                   used for HTTP and HTTPS (default none)
#                   (note: localhost and its literal notations (127.0.0.1, ...) are always excluded)
#http.nonProxyHosts=


#--------------------------------------------------------------------------------------------------
# LOGGING

# SonarQube produces logs in 4 logs files located in the same directory (see property sonar.path.logs below),
# one per process:
#   Main process (aka. App) logs in sonar.log
#   Web Server (aka. Web) logs in web.log
#   Compute Engine (aka. CE) logs in ce.log
#   Elasticsearch (aka. ES) logs in es.log
#
# All 4 files follow the same rolling policy (see sonar.log.rollingPolicy and sonar.log.maxFiles) but it applies
# individually (eg. if sonar.log.maxFiles=4, there can be at most 4 of each files, ie. 16 files in total).
#
# All 4 files have logs in the same format:
#           1           2    3           4                       5                                                   6
# |-----------------| |---| |-|--------------------||------------------------------| |------------------------------------------------------------------------------------------------------------------------------|
# 2016.11.16 16:47:00 INFO  ce[AVht0dNXFcyiYejytc3m][o.s.s.c.t.CeWorkerCallableImpl] Executed task | project=org.sonarqube:example-java-maven | type=REPORT | id=AVht0dNXFcyiYejytc3m | submitter=admin | time=1699ms
#
# 1: timestamp. Format is YYYY.MM.DD HH:MM:SS
#    YYYY: year on 4 digits
#    MM: month on 2 digits
#    DD: day on 2 digits
#    HH: hour of day on 2 digits in 24 hours format
#    MM: minutes on 2 digits
#    SS: seconds on 2 digits
# 2: log level.
#    Possible values (in order of descending criticality): ERROR, WARN, INFO, DEBUG and TRACE
# 3: process identifier. Possible values: app (main), web (Web Server), ce (Compute Engine) and es (Elasticsearch)
# 4: SQ thread identifier. Can be empty.
#    In the Web Server, if present, it will be the HTTP request ID.
#    In the Compute Engine, if present, it will be the task ID.
# 5: logger name. Usually a class canonical name.
#    Package names are truncated to keep the whole field to 20 characters max
# 6: log payload. Content of this field does not follow any specific format, can vary in length and include line returns.
#    Some logs, however, will follow the convention to provide data in payload in the format " | key=value"
#    Especially, log of profiled pieces of code will end with " | time=XXXXms".

# Global level of logs (applies to all 4 processes).
# Supported values are INFO (default), DEBUG and TRACE
#sonar.log.level=INFO

# Level of logs of each process can be controlled individually with their respective properties.
# When specified, they overwrite the level defined at global level.
# Supported values are INFO, DEBUG and TRACE
#sonar.log.level.app=INFO
#sonar.log.level.web=INFO
#sonar.log.level.ce=INFO
#sonar.log.level.es=INFO

# Path to log files. Can be absolute or relative to installation directory.
# Default is <installation home>/logs
#sonar.path.logs=logs

# Rolling policy of log files
#    - based on time if value starts with "time:", for example by day ("time:yyyy-MM-dd")
#      or by month ("time:yyyy-MM")
#    - based on size if value starts with "size:", for example "size:10MB"
#    - disabled if value is "none".  That needs logs to be managed by an external system like logrotate.
#sonar.log.rollingPolicy=time:yyyy-MM-dd

# Maximum number of files to keep if a rolling policy is enabled.
#    - maximum value is 20 on size rolling policy
#    - unlimited on time rolling policy. Set to zero to disable old file purging.
#sonar.log.maxFiles=7

# Access log is the list of all the HTTP requests received by server. If enabled, it is stored
# in the file {sonar.path.logs}/access.log. This file follows the same rolling policy as other log file
# (see sonar.log.rollingPolicy and sonar.log.maxFiles).
#sonar.web.accessLogs.enable=true

# Format of access log. It is ignored if sonar.web.accessLogs.enable=false. Possible values are:
#    - "common" is the Common Log Format, shortcut to: %h %l %u %user %date "%r" %s %b
#    - "combined" is another format widely recognized, shortcut to: %h %l %u [%t] "%r" %s %b "%i{Referer}" "%i{User-Agent}"
#    - else a custom pattern. See http://logback.qos.ch/manual/layouts.html#AccessPatternLayout.
# The login of authenticated user is not implemented with "%u" but with "%reqAttribute{LOGIN}" (since version 6.1).
# The value displayed for anonymous users is "-".
# The SonarQube's HTTP request ID can be added to the pattern with "%reqAttribute{ID}" (since version 6.2).
# If SonarQube is behind a reverse proxy, then the following value allows to display the correct remote IP address:
#sonar.web.accessLogs.pattern=%i{X-Forwarded-For} %l %u [%t] "%r" %s %b "%i{Referer}" "%i{User-Agent}" "%reqAttribute{ID}"
# Default value (which was "combined" before version 6.2) is equivalent to "combined + SQ HTTP request ID":
#sonar.web.accessLogs.pattern=%h %l %u [%t] "%r" %s %b "%i{Referer}" "%i{User-Agent}" "%reqAttribute{ID}"


#--------------------------------------------------------------------------------------------------
# OTHERS

# Delay in seconds between processing of notification queue. Default is 60 seconds.
#sonar.notifications.delay=60

# Paths to persistent data files (embedded database and search index) and temporary files.
# Can be absolute or relative to installation directory.
# Defaults are respectively <installation home>/data and <installation home>/temp
#sonar.path.data=data
#sonar.path.temp=temp

# Telemetry - Share anonymous SonarQube statistics
# By sharing anonymous SonarQube statistics, you help us understand how SonarQube is used so we can improve the product to work even better for you.
# We don't collect source code or IP addresses. And we don't share the data with anyone else.
# To see an example of the data shared: login as a global administrator, call the WS api/system/info and check the Statistics field.
#sonar.telemetry.enable=true


#--------------------------------------------------------------------------------------------------
# DEVELOPMENT - only for developers
# The following properties MUST NOT be used in production environments.

# Elasticsearch HTTP connector
#sonar.search.httpPort=-1
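Once the containers are running, SonarQube's system status web service is a quick health check; a small sketch, assuming the default port 9000 from the compose file above (while the server is still starting, the status is STARTING rather than UP):

```shell
# api/system/status reports STARTING, UP or DOWN as JSON
echo "SonarQube status:"
curl -s http://localhost:9000/api/system/status || echo "(sonarqube not reachable)"
```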

Sonarqube in Kubernetes

The article Sonarqube in Kubernetes contains an up-to-date guide to installing it via Helm chart.

HashiCorp Vault

In this article I show how to set up HashiCorp Vault behind a reverse proxy (Nginx).

What is HashiCorp Vault?

Vault is a secret management and encryption-as-a-service application from the American company HashiCorp.

Installing Vault with Docker

For this article I assume a working Docker and NginX environment and will not go into their prior configuration.

The Docker-Compose Script

I recommend always keeping the volumes close to the Docker-Compose script: that way everything can be backed up at once, and you get a quicker overview of what is mounted into the container. The container will expose Vault on port 9000, which we will have to reference later in the NginX configuration. The port can of course be chosen freely.

version: '2'
services:
  vault:
    image: vault
    container_name: vault
    ports:
      - "9000:9000"
    restart: always
    volumes:
      - ./volumes/logs:/vault/logs
      - ./volumes/file:/vault/file
      - ./volumes/config:/vault/config
    cap_add:
      - IPC_LOCK
    entrypoint: vault server -config=/vault/config/vault.json

The server is configured via the file vault.json in the ./volumes/config directory, which Vault reads when the application starts. The port under which the application should be reachable must be specified here; in this example it is port 9000. The setting proxy_protocol_behavior is important, because otherwise the application cannot be reached behind the reverse proxy.

{
  "backend": {
    "file": {
      "path": "/vault/file"
    }
  },
  "listener": {
    "tcp":{
      "address": "0.0.0.0:9000",
      "tls_disable": 1,
      "proxy_protocol_behavior": "use_always"
    }
  },
  "ui": true
}

Now start the container with docker-compose up to verify that the configuration file has been loaded, and check that the port in the listener is set correctly. Then check whether the Vault setup dialog is displayed under the IP address and port.
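Besides the setup dialog, Vault's unauthenticated health endpoint is a quick way to check that the listener is up; a sketch assuming port 9000 from the configuration above:

```shell
# /v1/sys/health reports whether Vault is initialized and sealed
echo "Vault health:"
curl -s http://127.0.0.1:9000/v1/sys/health || echo "(vault not reachable)"
```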

Now stop the container with CTRL-C, start it again, and follow the log with:

docker-compose up -d
docker-compose logs -f

Setting up NginX as a Reverse Proxy

Create the following configuration for NginX:

##
#
# Hashicorp Vault
#
##

#
# HTTP -> HTTPS redirect
#
server {
  listen 80;
  server_name vault.XXX.org;

  return 301 https://$server_name$request_uri;
}


server {
  listen 443 ssl http2;
  server_name vault.XXX.org;

  # SSL * cert
  include /etc/nginx/wildcard_ssl.conf;

  location / {
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header        X-Forwarded-Proto $scheme;

    # Fix the "It appears that your reverse proxy set up is broken" error.
    proxy_pass          http://vault.XXX.org:9000;
    proxy_read_timeout  90;

    proxy_redirect      http://vault.XXX.org:9000 https://vault.XXX.org;
  }
}

Restart Nginx so that the changed configuration is read. Vault is now fully installed and can be configured. It should now be reachable via HTTPS under the specified FQDN.
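When restarting, it is worth validating the configuration first; a sketch assuming a systemd-managed Nginx (the guard keeps it harmless on machines without Nginx):

```shell
# Validate the configuration before reloading Nginx
if command -v nginx >/dev/null; then
  sudo nginx -t && sudo systemctl reload nginx
else
  echo "(nginx not installed, skipping reload)"
fi
```

A reload keeps existing connections open, which a full restart would drop.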

Initializing Vault

On the first start of Vault you have to generate a master key and split it into at least one derived key (key share). For test purposes it is therefore sufficient to enter 1 and 1 in the dialog. The number of keys can be changed later via rekeying.

The root token and the keys must now be stored in a safe place or distributed to the appropriate people. Alternatively, a JSON file with all keys can be downloaded.
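The same initialization can also be done from the command line instead of the web dialog; a sketch assuming the container name vault from the compose file above (one key share with a threshold of one is, as noted, only suitable for testing):

```shell
# CLI equivalent of the setup dialog: generate the master key and
# split it into a single key share with a threshold of one
echo "Initializing Vault:"
docker exec -e VAULT_ADDR=http://127.0.0.1:9000 vault \
  vault operator init -key-shares=1 -key-threshold=1 \
  || echo "(vault container not running)"
```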

Unsealing

When Vault is started, it is sealed. To be able to use Vault (not only via the UI but also via REST), the seal must be removed. To do this, as many derived keys as specified by the threshold must be entered. This is a security feature of HashiCorp Vault.
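Unsealing can likewise be scripted; a sketch with a hypothetical placeholder standing in for one of the key shares printed during initialization:

```shell
# One of the key shares from "vault operator init" (placeholder value)
UNSEAL_KEY="REPLACE_WITH_KEY_SHARE"
echo "Unsealing Vault:"
# Submit the share; repeat until the configured threshold is reached
docker exec -e VAULT_ADDR=http://127.0.0.1:9000 vault \
  vault operator unseal "$UNSEAL_KEY" \
  || echo "(vault container not running)"
```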

Installation

On Arch Linux and Manjaro, Docker consists only of the docker package. To avoid having to control the Docker daemon as root, you can add a user to the docker group.

# Install Docker
yaourt --noconfirm -S docker
# add user to group docker
sudo usermod -aG docker $USER

Now the user has to log in again for the changed group rights to take effect.

Note that sudo usermod -aG docker $USER must be executed as the user who is going to use Docker, because $USER is read from the environment. Afterwards that user has to log in again to receive the group rights, or the username can be specified directly instead.

Applying the Changes Without Logging Out

With su - USER you get a shell that looks as if you had logged in on a virtual console.

su - $USER

This gives you the newly added group rights, and you can start using Docker right away.
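Whether the group change has taken effect can be checked directly; a small sketch (the second check only succeeds when the Docker daemon is actually reachable):

```shell
# Is the current user in the docker group?
if id -nG | grep -qw docker; then
  echo "user is in the docker group"
else
  echo "user is NOT in the docker group"
fi

# Can we talk to the daemon without sudo?
docker info >/dev/null 2>&1 && echo "daemon reachable" || echo "daemon not reachable"
```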