Helmut4 Cluster System
Please refer to the article for further details.
The installation of this system is more complex than a single-server instance and requires additional preparation in advance.
Ensure that you thoroughly review the official Docker Swarm documentation beforehand to understand the technical architecture, particularly in scenarios where one or more worker nodes may not be available.
For optimal performance, customers are advised to deploy an external load balancer to efficiently distribute incoming traffic across available machines. The load balancer can be implemented as a dedicated hardware appliance, a software-based solution, or via a DNS-based routing configuration, depending on deployment needs and performance requirements. Additionally, a dedicated SSL certificate—with an associated DNS name—must be configured to ensure secure, encrypted communications.
For ease of access, it is also recommended to use a user-friendly DNS name (e.g., "helmut4") rather than relying on IP addresses or long, complex DNS names, simplifying access for end users.
Please note that both the external load balancer and the SSL certificate must be provided by the customer.
Before commencing the installation, ensure that the servers can communicate with each other over the network. Consider storing the hostname-to-IP/DNS mappings in /etc/hosts on every server.
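An entry set like the following could be added to /etc/hosts on each server; the hostnames and IP addresses are placeholders and must be replaced with your own values:
    192.168.1.10    helmut4-node1
    192.168.1.11    helmut4-node2
    192.168.1.12    helmut4-node3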
Ensure that no pre-configured Docker or Portainer is already installed on the system.
Helmut4 is storage-agnostic: the system requires at least one share to function properly, but it can work with any number of shares. Each share must be mounted on the Linux host system and within specific Docker containers.
Ensure at least one storage is mounted via fstab before initiating the installation, as it is essential for the process.
This step needs to be performed on every server individually.
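As an illustration, an NFS share could be mounted via /etc/fstab as shown below; the server name, export path, and mount point are placeholders that depend on your storage environment (SMB/CIFS shares work analogously with the corresponding mount options):
    # /etc/fstab entry (example, adjust to your storage)
    storage.example.com:/export/production   /mnt/production   nfs   defaults,_netdev   0   0

    # create the mount point and mount everything defined in fstab
    sudo mkdir -p /mnt/production
    sudo mount -a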
As Helmut4 operates within a Docker environment, the installation of these components is necessary. If the host is in a restricted network, it may be required to install these components in advance or temporarily allow access to the corresponding repositories.
This installation must be carried out individually on each server.
Please refer to the Docker installation guidelines provided in the official Docker documentation.
In addition, these two dependencies are required for Helmut4:
httpie
jq
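On Debian/Ubuntu-based hosts, these two dependencies can typically be installed as follows (package names may differ on other distributions; Docker itself should be installed according to the official documentation referenced above):
    sudo apt-get update
    sudo apt-get install -y httpie jq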
Initiate SSH connections to all servers and execute the following command on the initial server host:
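A typical initialization command looks like this; the advertise address is a placeholder for the IP of the first host:
    docker swarm init --advertise-addr <IP-of-first-host>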
Copy the provided token and paste it onto all other hosts to designate them as managers:
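If only the worker join token was printed, the manager join command can be retrieved with 'docker swarm join-token manager' on the first host; executed on each additional host, the join command looks roughly like this (token and address are placeholders):
    docker swarm join --token <manager-join-token> <IP-of-first-host>:2377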
Verify the newly created Swarm cluster by executing the following command:
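Running the following on any manager should list every host, each with a MANAGER STATUS of Leader or Reachable:
    docker node ls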
Following the successful installation of the Docker environment and the creation of the Swarm cluster, it is essential to set up Portainer on each host.
The provided command will download and install the latest (lts tag) Community Edition (CE) version of Portainer.
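A sketch of such an installation command, based on Portainer's standard documentation, is shown below; the published ports and data-volume name are assumptions and may differ from the exact command shipped with Helmut4:
    docker volume create portainer_data
    docker run -d -p 9000:9000 -p 9443:9443 \
      --name portainer --restart=always \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      portainer/portainer-ce:lts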
For more details on Portainer's release versions and editions, please refer to the following resources:
Latest (LTS) & Short-Term (STS) Versions: Learn about the new features and differences in the latest LTS and STS branches in this blog post: What's New in the Portainer STS Branch and Other Portainer News.
Community Edition (CE) vs. Business Edition (EE): Discover the distinctions between Portainer's Community Edition and Business Edition in this detailed comparison: Portainer Community Edition (CE) vs. Portainer Business Edition (EE): What's the Difference.
Proceed to the main host and access Portainer, which operates on port 9000:
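For example (the hostname is a placeholder for your main host):
    http://helmut4-node1:9000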
The default login credentials are set to admin/admin.
We strongly recommend changing this password as soon as possible!
Navigate to Registries and configure a new custom registry.
Repeat these steps on every other host.
Download the stack file for the cluster from this link:
Navigate to Portainer on the main host and create a new stack:
Click 'Deploy' and wait until all instances of the 'mongodbrs' container have been deployed to every host before proceeding with the next steps.
Now, create a MongoDB replica set. Establish an SSH connection to the main host and execute the following commands:
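The sketch below illustrates the general shape of such an initialization; the container name filter, the replica set name "rs0", the member hostnames, and the mongo shell invocation are assumptions, so use the exact commands supplied with your Helmut4 stack file:
    # open a shell inside the local mongodbrs container
    docker exec -it $(docker ps -q -f name=mongodbrs) mongo

    # inside the mongo shell, initiate the replica set with one member per host
    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "helmut4-node1:27017" },
        { _id: 1, host: "helmut4-node2:27017" },
        { _id: 2, host: "helmut4-node3:27017" }
      ]
    })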
We recommend manually updating Helmut4 to the latest snapshot release following the completion of the previous configuration steps.
On every host, create a text file containing the current snapshot version.
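Purely as an illustration (the file path and version string below are hypothetical; use the location and snapshot version that apply to your installation):
    sudo mkdir -p /etc/helmut4
    echo "4.x.x" | sudo tee /etc/helmut4/snapshot.version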
Configure the 'helmut-update' and 'helmut-snapshot' commands, both of which are employed for updating Helmut4.
Each drive that the system is intended to use must be mapped into the container. This involves first mounting the drive at the operating-system level; once mounted, the drive is mapped into the Docker container using the following process:
The mapping of /mnt to /Volumes is established to facilitate seamless workflows on Mac OS X, where every network share is mounted under /Volumes.
There are five distinct containers that consistently require access to specific volumes to execute designated tasks. For instance, if the server is tasked with creating projects on a designated volume, that volume must be mapped into the FX Container, as this container is responsible for project creation.
To add a volume to a container, follow these steps:
click on “primary”
click on “stacks”
click on “helmut4”
click on the tab “Editor”
Locate the following services (fx, io, co, streams, users) and adjust the volumes entry accordingly.
Include or modify the volumes as needed, and click 'Update Stack' once you have completed this task.
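A sketch of such a volumes entry in the stack editor could look like the following; the service layout mirrors the stack file, and the share path /mnt/production is a placeholder for your own mounted storage (repeat the entry for io, co, streams, and users as required):
    fx:
      # ...existing service definition...
      volumes:
        - /mnt/production:/Volumes/production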
For additional information, please navigate to the following link: Define mongobackup volume
Helmut4 offers two update channels: main/stable and development.