Installing this system is more complex than a single-server installation and requires additional preparation in advance.
Docker Swarm - Cluster Prerequisites
Thoroughly review the official Docker Swarm documentation beforehand to understand the technical architecture, particularly how the cluster behaves when one or more worker nodes become unavailable.
DNS / SSL / Load Balancer
It is recommended to set up a load balancer that routes traffic to any of the machines. Additionally, consider creating an SSL certificate with a dedicated DNS name for enhanced security.
Prerequisites
Before starting the installation, make sure the servers can communicate with each other over the network. Consider storing the hostname-to-IP/DNS mappings in /etc/hosts.
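A minimal sketch of such /etc/hosts entries, using the three example hostnames that appear later in this guide (the IP addresses are placeholders):

# /etc/hosts - example entries (IP addresses are placeholders)
192.168.1.11  h4-swarm01
192.168.1.12  h4-swarm02
192.168.1.13  h4-swarm03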
Ensure that no pre-configured Docker or Portainer installation already exists on the system.
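A quick sketch of how to verify this on each host (if Docker is present, the second command will also reveal any existing Portainer container):

# check whether a docker binary exists
command -v docker || echo "docker is not installed"
# if Docker exists, look for a Portainer container
docker ps -a 2>/dev/null | grep -i portainer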
Network storage
Helmut4 is storage-agnostic: the system requires at least one share to function properly, but it can work with multiple shares. Each share must be mounted on the Linux host system and into specific Docker containers.
Ensure that at least one storage is mounted via fstab before starting the installation; the installer depends on it.
This step needs to be performed on every server individually.
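As a sketch, an NFS share could be mounted via /etc/fstab like this (server name and export path are placeholders; /mnt/helmut matches the mount point used later in this guide):

# /etc/fstab - example NFS entry (server and export path are placeholders)
nas.example.com:/export/helmut  /mnt/helmut  nfs  defaults,_netdev  0  0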
Docker environment
As Helmut4 operates within a Docker environment, these components must be installed. If the host is on a restricted network, it may be necessary to install them in advance or to temporarily allow access to the corresponding repositories.
This installation must be carried out individually on each server.
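One common way to install Docker Engine is the official convenience script; this is a sketch, not a requirement of the Helmut4 installer:

# download and run Docker's official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh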
In addition, these two dependencies are required for Helmut4:
httpie
jq
sudo apt install httpie jq
Docker swarm configuration
Open SSH connections to all servers and execute the following commands on the first server host:
docker swarm init
docker swarm join-token manager
Copy the provided join command with its token and run it on all other hosts to designate them as managers:
# structure of the join command
docker swarm join --token <join-token> <vm hostname>:2377
Verify the newly created Swarm cluster by executing the following command:
docker node ls
Example output of a successful Docker Swarm:
ID              HOSTNAME     STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
aw9wmcjh35rk6 * h4-swarm01   Ready    Active         Leader           23.0.6
a3hfmdjrafsty   h4-swarm02   Ready    Active         Reachable        23.0.6
91dnkzq3kg7y4   h4-swarm03   Ready    Active         Reachable        23.0.6
Portainer configuration
Following the successful installation of the Docker environment and the creation of the Swarm cluster, it is essential to set up Portainer on each host.
# latest ce (community edition)
# for business edition change ce to be
docker run -it -d --restart=always -p 9000:9000 --name=portainer \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:lts \
  --admin-password='$2y$05$WbcqfTqVa2T58lGrLO7Tp.30DMjKFo.6O4.XAmfBFg4a0jrVSbdW.' \
  -H unix:///var/run/docker.sock
By default, the Helmut4 installation script / installation guideline installs Portainer CE (Community Edition). Switching from CE to EE (Enterprise Edition) can be done without any issues.
However, please note that EE requires a valid license.
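As a sketch, switching editions typically only means exchanging the image name in the docker run command above (assuming Portainer's usual naming scheme; the license key is then entered inside Portainer):

# community edition image
portainer/portainer-ce:lts
# business/enterprise edition image (assumption: same tag scheme applies)
portainer/portainer-ee:lts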
For updating Portainer, please follow the instructions in the Docker & Portainer Update guide.
Proceed to the main host and access Portainer, which operates on port 9000:
Portainer web GUI
http://ip-helmutserver:9000
The default login credentials are set to admin/admin.
We strongly recommend changing this password as soon as possible!
Setting up MoovIT Docker registry
Navigate to Registries and configure a new custom registry.
MoovIT registry for Portainer
Name: MoovIT GmbH
URL: repo.moovit24.de:443
Authentication: true
Username: request via email: support@moovit.de
Password: request via email: support@moovit.de
It is important that the URL ends with :443; otherwise the images will not be pulled when deploying the Helmut4 stack!
Repeat these steps on every other host.
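To sanity-check the credentials on a host from the command line, a standard registry login can be attempted (a sketch, assuming the registry accepts regular docker logins):

# verify that the registry credentials work (username/password as provided by MoovIT)
docker login repo.moovit24.de:443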
Deploy cluster stack
Download the stack file for the cluster from this link:
Navigate to Portainer on the main host and create a new stack:
Primary -> Stacks -> Add stack
Name: helmut4
Web editor: paste content from the .yml file
Please make sure to change the hostnames in the mongodbrs services so that they match the existing hostnames!
Click 'Deploy' and wait until all instances of the 'mongodbrs' container have been deployed to every host before proceeding with the next steps.
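To follow the deployment progress from any manager node, commands like the following can help (the stack name helmut4 matches the name chosen above; the exact mongodbrs service name is an assumption and depends on the stack file):

# list all services of the stack and their replica counts
docker stack services helmut4
# show on which nodes the tasks of a service are running
docker service ps helmut4_mongodb1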
Create mongodb replica set
Now, create a MongoDB replica set. Establish an SSH connection to the main host and execute the following commands:
# establish a connection to the mongodbrs container
# request username + password via email: support@moovit.de
docker exec -it $(docker ps | grep -i mongodb1 | awk '{ print $1 }') mongo -u USER -p PASSWORD --authenticationDatabase admin

# create new replica set
rs.initiate()

# add nodes to the replica set
rs.add("mongodb2:27017")
## --> if you get an error 74 while running on a virtual machine on VMware
## ("Quorum check failed ... nodes did not respond affirmatively: mongodb2:27017 ... Couldn't get a connection within the time limit")
## please check the following information:
## --> https://docs.helmut.de/helmut4-releases/getting-started/installation-guide/helmut4-server/helmut4-cluster-system#possible-network-adjustments-eg-for-vmware-esxi
rs.add("mongodb3:27017")
config = rs.config()

# set master
config.members[0].host = "mongodb1:27017"
rs.reconfig(config, {force: true})

# verify replica set
rs.status()

# define mongodb priority: primary-secondary
var c = rs.config()
# mongodb1 = primary
c.members[0].priority = 100
# mongodb2 = secondary
c.members[1].priority = 50
# mongodb3 = secondary
c.members[2].priority = 50

# save priority configuration
rs.reconfig(c, {force: true})

# list primary mongodb
rs.status().members.find(r => r.state === 1).name
We recommend manually updating Helmut4 to the latest snapshot release following the completion of the previous configuration steps.
Misc configuration
Snapshot Server version
On every host, create a text file containing the current snapshot version.
Volume mapping
Each drive that should be part of the system must be mapped into the containers. First, mount the drive at the operating system level; once mounted there, map it into the Docker container as follows:
Drive on operating system level: /mnt/helmut
Drive on Docker container level: /Volumes/helmut
Drive mapped between operating system level and Docker container: /mnt/helmut_1:/Volumes/helmut_1
The 'mnt to Volumes' mapping exists to facilitate seamless workflows on Mac OS X, where every network share is mounted under /Volumes.
There are five distinct containers that consistently require access to specific volumes to execute designated tasks. For instance, if the server is tasked with creating projects on a designated volume, that volume must be mapped into the FX Container, as this container is responsible for project creation.
To add a volume to a container, follow these steps:
click on “Primary”
click on “Stacks”
click on “helmut4”
click on the tab “Editor”
Locate the following services (fx, io, co, streams, users) and adjust the volumes entry accordingly.
Include or modify the volumes as needed, and click 'Update Stack' once you have completed this task.
For instance, include the shared folder 'testing', mounted on the host at /mnt/testing, as shown in the sketch below.
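A minimal sketch of the corresponding volumes entry in the stack editor (the service excerpt is illustrative; keep the entries that are already present and apply the same change to the other affected services):

# excerpt from the helmut4 stack file
fx:
  volumes:
    - /mnt/helmut:/Volumes/helmut     # existing share (example)
    - /mnt/testing:/Volumes/testing   # newly added share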
Possible network adjustments (e.g. for VMware ESXi)
On certain hosts, such as VMware virtual machines, it may be necessary to adjust the network interface settings. If not all containers start, check the checksum settings of each host/network adapter.
Retrieve the current status by executing the following command on each host, replacing 'INTERFACENAME' with your specific interface name (e.g., eth0, ens32, etc.).
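A common way to inspect checksum offload settings is ethtool; this is a sketch and may differ from the exact command intended here:

# show current checksum offload settings for the given interface
ethtool -k INTERFACENAME | grep -i checksum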