Helmut4 Cluster System

Installation

The cluster installation of Helmut4 requires three (or more) machines with the following system requirements. This installation is more complex than a single-server instance and requires additional preparation in advance.

We suggest setting up a load balancer that routes traffic to any of the machines. Also consider creating an SSL certificate with a dedicated DNS name.

Prerequisites

Before you start the installation, make sure the servers can communicate with each other over the network. Consider storing the hostname-to-IP/DNS references in /etc/hosts. Please make sure that no pre-configured Docker or Portainer installation is present in the first place.
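For a three-node cluster, the /etc/hosts entries could look like the following sketch (the hostnames and IP addresses are placeholders; replace them with your actual machines):

```
# example /etc/hosts entries on every host
10.0.0.11   h4-swarm01
10.0.0.12   h4-swarm02
10.0.0.13   h4-swarm03
```

Keeping these entries identical on all hosts avoids DNS dependencies during swarm and replica set setup.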

Network storage

Helmut4 is storage agnostic: the system needs at least one share to work properly, but it can be used with multiple shares as well. Every share needs to be mounted on the Linux host system as well as within specific Docker containers. At least one storage should be mounted via /etc/fstab before starting the installation, as it is required during setup. This needs to be done on every server individually.
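A minimal sketch of an /etc/fstab entry for an NFS share (server name, export path, mount point, and options are assumptions; adjust them to your environment and protocol):

```
# example NFS mount for a Helmut4 media share
nas01:/export/helmut   /mnt/helmut   nfs   defaults,_netdev   0   0
```

The _netdev option delays mounting until the network is up, which matters for network shares that must be available before the Docker stack starts.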

Docker environment

As Helmut4 runs within a Docker environment, the installation of these components is required. If the host is in a restricted network, it may be necessary to install them in advance or to temporarily allow access to the corresponding repositories. This needs to be done on every server individually.

Please follow the Docker installation guideline from the official Docker documentation.

#Ubuntu
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

echo "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Additionally, these two dependencies are required for Helmut4:

  • httpie

  • jq
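On Ubuntu, both tools can be installed from the standard repositories, for example:

```shell
sudo apt-get install -y httpie jq
```
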

Docker swarm configuration

Establish SSH connections to all servers and run the following commands on the first host:

docker swarm init
docker swarm join-token manager

Copy the provided join command and run it on all other hosts to add them as managers.

#structure of the join command
docker swarm join --token <join-token> <vmhostname>:2377

Verify the newly created swarm cluster by running the following command:

docker node ls
Example result of a successful docker swarm:
ID                HOSTNAME     STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
aw9wmcjh35rk6 *   h4-swarm01   Ready     Active         Leader           23.0.6
a3hfmdjrafsty     h4-swarm02   Ready     Active         Reachable        23.0.6
91dnkzq3kg7y4     h4-swarm03   Ready     Active         Reachable        23.0.6

Portainer configuration

After the successful installation of the Docker environment and the creation of the swarm cluster, Portainer needs to be set up on every host.

#latest ce (community edition)
#for business edition change ce to ee
docker run -it -d --restart=always -p 9000:9000 --name=portainer -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest --admin-password='$2y$05$WbcqfTqVa2T58lGrLO7Tp.30DMjKFo.6O4.XAmfBFg4a0jrVSbdW.' -H unix:///var/run/docker.sock
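The --admin-password flag expects a bcrypt hash, not a plain-text password. A hash for your own password can be generated with the htpasswd tool, for example via the httpd Docker image (a sketch following Portainer's documented approach; the image tag and example password are assumptions):

```shell
# generate a bcrypt hash for the Portainer --admin-password flag
docker run --rm httpd:2.4-alpine htpasswd -nbB admin 'yourpassword' | cut -d ':' -f 2
```

Wrap the resulting hash in single quotes when passing it to --admin-password, so the shell does not expand the $ characters.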

Head now to the main host and navigate to Portainer, which runs on port 9000:

Portainer web GUI: http://ip-helmutserver:9000

The default user/password to log in is admin/admin. Please change this password as soon as possible!

Navigate to Registries and set up a new, custom one:

MoovIT registry for Portainer

Name: MoovIT GmbH
URL: https://repo.moovit24.de:443
Authentication: true
Username: request via email: support@moovit.de
Password: request via email: support@moovit.de

Repeat these steps on every other host.

Deploy cluster stack

Download the stack file for the cluster from this link:

https://repo.moovit24.de/install/config/docker-compose-swarm.yml

Navigate to Portainer on the main host and create a new stack:

Primary -> Stacks -> Add stack
Name: helmut4
Web editor: paste content from the .yml file

Please make sure to change the hostnames in the mongodbrs images to match the existing hostnames!

Click deploy and wait until all instances of the mongodbrs container have been deployed to every host before you continue with the next steps!

Create mongodb replica set

Establish an SSH connection to the main host and run the following commands to create a MongoDB replica set:

#establish a connection to the mongodbrs container
#request username + password via email: support@moovit.de
docker exec -it $(docker ps | grep -i mongodb1 | awk '{ print $1 }') mongo -u USER -p PASSWORD --authenticationDatabase admin
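The command substitution above simply extracts the container ID (the first column) of the matching docker ps line. A standalone sketch with an illustrative sample line (the ID and image name are made up):

```shell
# sample "docker ps" output line (illustrative only)
sample='abc123def456  mongodbrs:latest  "entrypoint"  mongodb1.1.xyz'
# grep filters the line containing "mongodb1", awk prints the first column
echo "$sample" | grep -i mongodb1 | awk '{ print $1 }'
# -> abc123def456
```
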

#create new replica set
rs.initiate()

#add nodes to the replica set
rs.add("mongodb2:27017")
rs.add("mongodb3:27017")

config=rs.config()

#set master
config.members[0].host="mongodb1:27017"
rs.reconfig(config,{force:true})

#verify replica set
rs.status()


#define mongodb priority: primary-secondary
var c = rs.config()
#mongodb1 = primary
c.members[0].priority = 100
#mongodb2 = secondary
c.members[1].priority = 50
#mongodb3 = secondary
c.members[2].priority = 50

#save priority configuration
rs.reconfig(c,{force: true})

#list primary mongodb
rs.status().members.find(r=>r.state===1).name

We recommend updating Helmut4 to the latest snapshot release after completing the previous configuration steps.

Misc configuration

Create a text file with the current snapshot version on every host:

mkdir -p /etc/helmut4
echo "4.6.1" > /etc/helmut4/helmut4.snapshot

Set up the helmut-update & helmut-snapshot commands, which are used to update Helmut4:

#Ubuntu/CentOS
echo -e '#!/bin/bash\ncurl -s https://repo.moovit24.de/install/update.sh | bash' > /usr/sbin/helmut-update && chmod a+x /usr/sbin/helmut-update
echo -e '#!/bin/bash\ncurl -s https://repo.moovit24.de/install/snapshot.sh | bash -s ${1}' > /usr/sbin/helmut-snapshot && chmod a+x /usr/sbin/helmut-snapshot

#Debian
echo -e '#!/bin/bash\ncurl -s https://repo.moovit24.de/install/update.sh | bash' > /usr/bin/helmut-update && chmod a+x /usr/bin/helmut-update
echo -e '#!/bin/bash\ncurl -s https://repo.moovit24.de/install/snapshot.sh | bash -s ${1}' > /usr/bin/helmut-snapshot && chmod a+x /usr/bin/helmut-snapshot
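Each wrapper is a two-line script: a bash shebang followed by the curl call. A quick way to inspect what gets written, using a temporary file instead of the real targets under /usr/sbin or /usr/bin:

```shell
tmp=$(mktemp)
# same content as the helmut-update wrapper above, written to a temp file
echo -e '#!/bin/bash\ncurl -s https://repo.moovit24.de/install/update.sh | bash' > "$tmp"
cat "$tmp"
rm -f "$tmp"
```

The single quotes keep ${1} and the ! literal in the written file; echo -e then turns the \n into a real line break.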

Mount network shares into Docker

Every drive that is to be added to the system must be mapped into the containers. To do this, the drive must first be mounted at the operating-system level. The drive mounted at the operating-system level is then mapped into the Docker container in the following way:

Drive on operating system level: /mnt/helmut

Drive on Docker container level: /Volumes/helmut

Mapping between operating-system level and Docker container: /mnt/helmut_1:/Volumes/helmut_1

This mnt-to-Volumes mapping is set up to allow easy workflows on macOS, as every network share there is mounted under /Volumes.

There are 5 different containers that always have to have access to specific volumes to run specific tasks. For example: if the server should create projects on a specific volume, this volume needs to be mapped into the FX container, as this container is responsible for creating projects. To add a volume to a container, follow these steps:

  • click on “primary”

  • click on “stacks”

  • click on “helmut4”

  • click on the tab “Editor"

Add or change the volumes and click "Update the stack" once finished with this task.

Example: adding the share "testing", which is mounted on the host at /mnt/testing:

volumes:
  - /etc/localtime:/etc/localtime:ro
  - /mnt/helmut:/Volumes/helmut
  - /mnt/testing:/Volumes/testing

Possible network adjustments (e.g. for VMware ESXi)

On certain hosts, such as VMware virtual machines, it might be necessary to adapt the network interface settings. In case not all containers start, consider checking the checksum settings on every host / network adapter. Get the current status by running this command on every host, replacing INTERFACENAME with your interface name (eth0, ens32, etc.):

ethtool -k <INTERFACENAME> 2>&1 | egrep 'tx-checksumming|rx-checksumming'

Working result:
rx-checksumming: off
tx-checksumming: off

Not working result:
rx-checksumming: on
tx-checksumming: on

In case your result is "on", you need to disable checksumming. The following steps will make the tx/rx checksum setting persistently "off" after reboot on Ubuntu LTS:

nano /etc/networkd-dispatcher/routable.d/10-disable-rxtxchecksum

Add the following content, replacing INTERFACENAME with your interface name (eth0, ens32, etc.):

#!/bin/sh
ethtool -K <INTERFACENAME> tx off rx off

Save the script and fix the file permissions, then reboot the host:

#set rights for execution
chmod +x /etc/networkd-dispatcher/routable.d/10-disable-rxtxchecksum
#reboot the host
reboot

After the server reboot, verify the checksum status:

ethtool -k <INTERFACENAME> 2>&1 | egrep 'tx-checksumming|rx-checksumming'

Updating Helmut4

There are two update channels available for Helmut4: main & development. An update can be executed with sudo rights on the command line of the host.

Main update channel | helmut-snapshot

Stable releases are published every three months; the latest release can be found in the release notes. This version is covered by the support contract, and the software maintenance agreement is valid for this version. Critical bugs will be fixed in stable releases. Please check the supported Adobe versions before updating.

helmut-snapshot <snapshot version number>

e.g. helmut-snapshot 4.6.1

After executing the update command, the Portainer password is required.

Development update channel | helmut-update

helmut-update

The development channel is only for internal use and requires an additional password, which can be requested by contacting support@moovit.de.

Be aware that those builds are MAINLY UNTESTED and can cause problems!
