Pre-Installation Tasks¶
Prepare Volumes for user data and docker storage¶
Docker will store images and containers below the user's home directory /home/teamdrive in three hidden folders:
- .config
- .docker
- .local
We recommend splitting one volume into two partitions for the home directory and the system's tmp folder. Use a separate disk for the /teamdrive user data.
List your devices with:
lsblk
and create the partitions using fdisk:
fdisk /dev/<device>
Create 2 partitions using the following commands:
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-62914559, default 2048): 2048
Last sector, +sectors or +size{K,M,G,T,P} (...): +<Size>G
Create the second partition using the same commands. To commit the changes and quit, type: w
To activate the partition:
partprobe /dev/<device>
Show the discs and partitions:
lsblk
Format both partitions:
mkfs.xfs /dev/<part1>
mkfs.xfs /dev/<part2>
and the separate disk for the /teamdrive user data:
mkfs.xfs /dev/<device>
List the UUIDs of the partitions:
blkid
and add the following lines to /etc/fstab, replacing <uuid-part1>, <uuid-part2> and <uuid-device> with the values reported by blkid above:
UUID=<uuid-part1> /home xfs defaults,nofail 0 2
UUID=<uuid-part2> /tmp xfs defaults,nofail 0 2
UUID=<uuid-device> /teamdrive xfs defaults,nofail 0 2
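The fstab entries above can also be derived from the blkid output with a short script. This is a sketch only; the device name and UUID below are made up for illustration, and it assumes the line contains a plain UUID="..." field (not only PARTUUID):

```shell
# Extract the UUID from a blkid-style output line and emit an fstab entry.
# Sample line is illustrative, not real output from your system.
sample='/dev/sdb1: UUID="0f5ba1df-1f2e-4c7a-9e3d-5a0c1b2d3e4f" TYPE="xfs"'
uuid=$(printf '%s\n' "$sample" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
printf 'UUID=%s /home xfs defaults,nofail 0 2\n' "$uuid"
```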
Now rename the existing /tmp folder, create a new empty tmp folder, mount the /tmp volume, correct the folder permissions, move the data from tmp_old to tmp and delete the tmp_old folder:
mv /tmp /tmp_old
mkdir /tmp
mount /tmp
chmod og+wt /tmp
mv /tmp_old/* /tmp
mv /tmp_old/.*-unix /tmp
rmdir /tmp_old
Mount the /home volume:
mount /home
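The chmod og+wt used above gives the tmp directory the usual world-writable mode with the sticky bit set (octal 1777). A small sketch, demonstrated on a temporary directory rather than the real /tmp:

```shell
# Show that chmod og+wt on a 755 directory yields mode 1777 (sticky bit set).
d=$(mktemp -d)
chmod 755 "$d"       # deterministic starting mode
chmod og+wt "$d"     # same command as used on /tmp above
mode=$(stat -c '%a' "$d")
echo "$mode"
rmdir "$d"
```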
Create an SSH key for the teamdrive user¶
The socket for the Docker daemon is only available while the teamdrive user has an active SSH connection. A cronjob for the root user opens such an SSH session for the teamdrive user and checks that the session stays open. The SSH connection uses a public key to log in as the teamdrive user. For key generation, execute the script:
/opt/teamdrive/webportal/docker/keep_socket_available.sh
and just hit enter for the following questions:
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Login once:
ssh teamdrive@127.0.0.1 -i /root/.ssh/keep_socket.key
and confirm the question: Are you sure you want to continue connecting
Create the cronjob for this task:
crontab <<EOF
* * * * * /opt/teamdrive/webportal/docker/keep_socket_available.sh >/dev/null 2>&1
EOF
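The cron job implements a common keepalive pattern: each invocation checks whether the watched session is still alive and only (re)starts it when it is not. A hypothetical sketch of that check, not the real script's internals, using kill -0 to probe a PID without sending a signal:

```shell
# alive: report whether a given PID exists. kill -0 delivers no signal;
# it only tests for the process's existence.
alive() { kill -0 "$1" 2>/dev/null && echo "running" || echo "not running"; }

self=$(alive $$)        # the current shell is certainly alive
gone=$(alive 99999999)  # a PID that (almost certainly) does not exist
echo "self: $self, gone: $gone"
```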
Mount the Space Storage Volume¶
The container root directory specified by the ContainerRoot setting contains the mount points for all the containers on the Docker system. The container root (by default /teamdrive) is the mount point for a dedicated file system that provides the requirements outlined in chapter Storage Requirements.
By default, the directory /teamdrive has already been created by the td-webportal RPM package. However, if the Docker host is not the same as the Web Portal machine, then you will have to create this directory yourself.
Note that due to restrictions of the Docker system, all data will be written to this directory as belonging to root.
Mount the file system and create the respective mount entry in /etc/fstab to enable automatic mounting of the file system at bootup. Please consult your Operating System documentation for details on how to perform this step.
Make sure to set the ownership accordingly:
chown apache:apache /teamdrive
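Ownership can be verified with stat; on the real system, `stat -c '%U:%G' /teamdrive` should report apache:apache. Demonstrated here on a temporary directory, which reports the current user instead:

```shell
# Print a directory's owner and group in user:group form.
d=$(mktemp -d)
owner=$(stat -c '%U:%G' "$d")
echo "$owner"
rmdir "$d"
```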
Installing Docker¶
The Web Portal uses Docker containers to run the TeamDrive Agent. A container is started for each user that logs into the Web Portal.
The Docker containers can run on a machine or cluster that is separate from the Web Portal host which handles the login and manages the containers.
In 2017 Docker completely changed their version format, and also changed the name of the Docker Engine package to either Docker Community Edition or Docker Enterprise Edition. The TeamDrive Web Portal supports both versions, but we recommend using the Docker Community Edition.
Docker CE/EE installation¶
Follow the instructions in the Docker documentation to install Docker. Do not start the Docker service after installation, because the service will not run under the root user. After the installation, Docker has to be configured for rootless mode:
Log in as user teamdrive and execute the script:
ssh teamdrive@localhost -i /root/.ssh/id_rsa
/bin/dockerd-rootless-setuptool.sh install
Stop the installed and running service with:
systemctl --user stop docker.service
By default, the Docker daemon is only accessible via a local socket. To make the socket available to all Web Portal components, the service must be modified by executing these commands:
mkdir /home/teamdrive/.config/systemd/user/docker.service.d
echo $'[Service]\nExecStartPost=/bin/sleep 3\nExecStartPost=/opt/teamdrive/webportal/docker/dockerd-rootless-chown-socket.sh' >> /home/teamdrive/.config/systemd/user/docker.service.d/override.conf
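The commands above produce the override file /home/teamdrive/.config/systemd/user/docker.service.d/override.conf with the following content:

```
[Service]
ExecStartPost=/bin/sleep 3
ExecStartPost=/opt/teamdrive/webportal/docker/dockerd-rootless-chown-socket.sh
```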
Add the configuration settings for the Docker daemon:
echo '{"log-level": "info", "default-ulimits": { "nofile": { "Name": "nofile", "Hard": 4098, "Soft": 4098 }, "nproc": { "Name": "nproc", "Soft": 1024, "Hard": 2408 } } }' >> /home/teamdrive/.config/docker/daemon.json
and reload the service to activate the changes:
systemctl --user daemon-reload
Start Docker again:
systemctl --user start docker.service
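The quoting in the daemon.json echo above is easy to get wrong; a sketch that writes the same settings via a heredoc and validates the JSON before use (the real target path is /home/teamdrive/.config/docker/daemon.json, a temporary file is used here for illustration):

```shell
# Write daemon.json via a quoted heredoc (no shell-quoting pitfalls)
# and validate it with Python's JSON parser before installing it.
tmpconf=$(mktemp)
cat > "$tmpconf" <<'EOF'
{
  "log-level": "info",
  "default-ulimits": {
    "nofile": { "Name": "nofile", "Hard": 4098, "Soft": 4098 },
    "nproc":  { "Name": "nproc", "Soft": 1024, "Hard": 2408 }
  }
}
EOF
status=$(python3 -m json.tool "$tmpconf" >/dev/null 2>&1 && echo OK || echo INVALID)
echo "daemon.json: $status"
rm -f "$tmpconf"
```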
On the client side (the Web Portal host) you will now need to set the DOCKER_HOST environment variable in order to use the docker command:
export DOCKER_HOST=unix:///run/user/1000/docker.sock
docker images
To have this environment variable automatically set at login, add the two lines to the bash_profile of the teamdrive user by executing:
echo DOCKER_HOST=unix:///run/user/1000/docker.sock >> /home/teamdrive/.bash_profile
echo export DOCKER_HOST >> /home/teamdrive/.bash_profile
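The path above hard-codes UID 1000, which only holds if teamdrive is the first local user. A sketch that derives the rootless socket path from the actual UID, demonstrated with the current user:

```shell
# Build the rootless Docker socket path from the user's UID instead of
# assuming 1000. On the real host: uid=$(id -u teamdrive).
uid=$(id -u)
DOCKER_HOST="unix:///run/user/${uid}/docker.sock"
export DOCKER_HOST
echo "$DOCKER_HOST"
```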
Installing the TeamDrive Agent Docker Image¶
Note
If you are using a Registration Server that is not attached to the TeamDrive Network, or you are using a customised (“White Label”) version of TeamDrive, please go to the next chapter Creating a Customised Agent Docker Image. Otherwise proceed with the following standard installation and skip the next chapter.
Docker Container images are available from the TeamDrive public Docker repository on the Docker hub. Here you will find a list of the tagged images that have been uploaded by TeamDrive:
https://hub.docker.com/r/teamdrive/agent/tags/
The current version of the TeamDrive Agent used by the Web Portal is stored in the MinimumAgentVersion setting. The ContainerImage setting stores the name of the Container image currently in use by the Web Portal. If the version of the Agent in ContainerImage is less than MinimumAgentVersion it will be automatically updated.
If the required image does not exist in the local Docker repository, it will be automatically pulled from the Docker hub and installed on your Docker host.
If a more recent version of the image is available from the Docker hub, then this version will be used in place of the version specified by ContainerImage and by MinimumAgentVersion.
To install or update the Container image used by the Web Portal, start the yvva shell and execute the upgrade_now;; command:
[root@webportal ~]# yvva
Welcome to yvva shell (version 1.5.4).
Enter "go" or end the line with ';;' to execute submitted code.
For a list of commands enter "help".
UPGRADE COMMANDS:
-----------------
To upgrade from the command line, execute:
yvva --call=upgrade_now --config-file="/etc/yvva.conf"
upgrade_now;;
Upgrade the database structure and Docker container image (this command cannot be undone).
Leave the yvva shell by typing quit.
Note
If outgoing requests have to use a proxy server, follow the Docker documentation https://docs.docker.com/engine/admin/systemd/#http-proxy to set a proxy for Docker. Restart the Docker service after adding the proxy configuration.
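Following the linked Docker documentation, the proxy is typically configured through a systemd drop-in file for the Docker service; for the rootless setup used here, a drop-in under /home/teamdrive/.config/systemd/user/docker.service.d/ would be the analogous location. A sketch with placeholder proxy host and port:

```
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```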
Creating a Customised Agent Docker Image¶
The Web Portal Docker image builder can be customised to create a container image specialised for your purposes.
There are a number of reasons why you would want to create a custom Docker image:
- Standalone Registration Server: You are using a Registration Server that is not connected to the TeamDrive Registration Server Network via the TeamDrive Name Server (TDNS). In this case you are using a standalone Registration Server or your own TeamDrive Network of Registration Servers.
- Custom Agent Archive: You have a white label agreement with TeamDrive that requires customisation of the TeamDrive Agent binary or the Web user interface (the Web-GUI).
- Specialised Container: You are using the Web Portal to integrate other applications into the TeamDrive Network which require changes in the behavior of the Docker Container (for example, additional binaries must be started in the container).
If you have a Standalone Registration Server or require Customised Client Settings you will have to modify the contents of the standard DISTRIBUTOR file.
In the case of a Custom Agent Archive you require a custom “Agent archive” (a .tar.gz file) created by TeamDrive and available for download from the TeamDrive archive download portal.
If you require a Specialised Container you will need to modify the contents of the standard “Dockerfile” used to build the Docker image. This is done by setting the contents of the BuildDockerfile setting.
If you need to set specific TeamDrive client settings for the TeamDrive agent running in the container, then this can be done using the SharedIniPath and ClientSettings settings. There is no need to create a custom container for this purpose. Previously the setting WhiteLabelINIFileSettings was used for this purpose, but it has been deprecated.
The details of building a custom Docker image and making the required modifications are described below. Further information can be found in the descriptions of the Build Image settings.
In order to build a Docker image, the Web Portal needs an “Agent archive”. This is either a standard TeamDrive Agent archive, or a Custom Agent archive built by TeamDrive for customers as part of a White Label licensing agreement. Custom Agent archives must have a different “Product name”. The standard Product name is “teamdrive”. The Product name is specified in the BuildProductName setting, and must be changed if you are using a custom Agent archive.
The Agent archive includes the TeamDrive Agent binary and other support files required to run the executable. Custom Agent archives may include changes to the TeamDrive Agent binary, to the Web-GUI and to the DISTRIBUTOR file included in the archive.
The version of the Agent used is determined by the MinimumAgentVersion and ContainerImage settings. The highest version specified by these two settings will be used by the Web Portal, unless an even higher version of the Agent is found on Docker hub.
The Agent archive is downloaded automatically by the Web Portal using the URL specified by the AgentDownloadURL setting. The URL used also depends on the BuildProviderCode and BuildProductName settings, and the version of the Agent to be used.
Since the standard TeamDrive Agent archive always uses the “TMDR” Provider code, if BuildProductName is set to “teamdrive” or BuildProviderCode is set to “TMDR”, the download URL used will always be:
https://download.teamdrive.net/{VERSIONSHORT}/TMDR/linux-x86_64/teamdrive_agent_{VERSION}_el8.x86_64.tar.gz
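As an illustration, the {VERSIONSHORT} and {VERSION} placeholders can be expanded for a hypothetical agent version. The version number below is made up, and the assumption that VERSIONSHORT is the leading "major.minor" part of the full version is ours, not documented by TeamDrive:

```shell
# Expand the download URL placeholders for a hypothetical version.
VERSION="4.7.3.2778"            # hypothetical example version
VERSIONSHORT="${VERSION%.*.*}"  # strip the last two components, assumed short form
url="https://download.teamdrive.net/${VERSIONSHORT}/TMDR/linux-x86_64/teamdrive_agent_${VERSION}_el8.x86_64.tar.gz"
echo "$url"
```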
Before downloading the Agent archive, the Web Portal will check if the required archive is already available in the ImageBuildFolder (the build directory). If so, the archive will not be downloaded. If not, the Web Portal searches the build directory for archives with a higher version number. If found, this Agent archive will be used instead of the version specified by MinimumAgentVersion or ContainerImage.
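The "highest version wins" rule described above can be sketched with sort -V (natural version sort). Both version numbers below are made up for illustration:

```shell
# Compare two hypothetical agent versions and pick the greater one,
# mirroring how the Web Portal prefers the highest available version.
minimum="4.6.12.2550"   # e.g. the MinimumAgentVersion setting (illustrative)
found="4.7.3.2778"      # e.g. an archive found in the build directory
highest=$(printf '%s\n' "$minimum" "$found" | sort -V | tail -n 1)
echo "using agent version $highest"
```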
The general procedure for creating a custom Docker Container image is to modify certain build settings, and then run the upgrade_now command as described above in Installing the TeamDrive Agent Docker Image. If you remove the Docker image just created you can repeat this process until the image is built correctly. In other words, use the docker rmi command to remove the image in the local archive so that upgrade_now will rebuild the image.
Since a new custom Docker image will not be created if an image with the required TeamDrive agent already exists, if you make changes to build settings, you must remove the current Docker image and run upgrade_now to apply the new settings.
The upgrade_now command must also be run on upgrade of the Web Portal, to upgrade the TeamDrive agent version used. This will also create a Docker image with the updated version number.
If you are using a Standalone Registration Server, you might need a customised client that will reference your Registration Server. This depends on your provider configuration.
Note
If you change the contents of the DISTRIBUTOR file you must set the Provider code in the DISTRIBUTOR file, and set BuildProviderCode to the same value. In the DISTRIBUTOR file the Provider code is set as follows: code=<provider-code>.
If you are using a custom Agent archive which includes a customised DISTRIBUTOR file, make sure that DISTRIBUTORFile is empty to ensure that the contents of the DISTRIBUTOR file in the archive are not overwritten.
In order to create a Specialised Container you need to modify the BuildDockerfile setting. The BuildDockerfile value is the contents of the Dockerfile which is used by Docker to build a new TeamDrive Agent image, as described in the Docker documentation: https://docs.docker.com/engine/reference/builder/.
Installing SSL certificates¶
The default Apache HTTP Server installation ships with self-signed SSL certificates for testing purposes. We strongly recommend purchasing and installing proper SSL certificates and keys, and adjusting the configuration in the file /etc/httpd/conf.d/ssl.conf accordingly, before moving the server into production.
The exact installation process depends on how you obtain or create the SSL key and certificate; please refer to the respective installation instructions provided by your certificate issuer.
OS-Hardening¶
Execute the OS-Hardening script:
/opt/teamdrive/webportal/docker/os_hardening.sh
and reboot the system. After the reboot verify the results:
inspec exec https://github.com/dev-sec/linux-baseline
lynis audit system
Note on DevSec results: sysctl-29 (“Disable loading kernel modules”) must stay disabled, because Docker needs this functionality for its own networking.
Note on Lynis results: The Lynis Hardening Index should reach about 90. The remaining recommendations are either not easy to implement or can't be activated without blocking Web Portal functionality, like the mentioned Apache modules.
Starting the Web Portal¶
After all configuration steps have been performed, we can start the TeamDrive Web Portal to conclude the initial installation/configuration.
Starting td-webportal¶
To activate the yvvad-based td-webportal background task you have to start the service using the provided init script.
The configuration file /etc/td-hosting.conf defines how this process is run. You usually don’t have to modify these settings.
To start the td-webportal program, use the service command as user root:
[root@webportal ~]# service td-webportal start
Starting TeamDrive Web Portal: [ OK ]
Use the status option of the service command to verify that the service has started:
[root@webportal ~]# service td-webportal status
yvvad (pid 2506) is running...
If td-webportal does not start (process yvvad is not running), check the log file /var/log/td-webportal.log for errors. See chapter Troubleshooting for details.
Starting the Apache HTTP Server¶
Now the Apache HTTP Server can be started, which provides the TeamDrive Web Portal functionality via mod_yvva.
You can start the service manually using the following command:
[root@webportal ~]# service httpd start
Warning
At this point, the Web Portal’s web server is answering incoming requests from any web client that can connect to its address. For security purposes, you should not make it accessible from the public Internet until you have concluded the initial configuration, e.g. by blocking external accesses using a firewall.
Check the log files /var/log/httpd/error_log and /var/log/td-webportal.log for startup messages and possible errors:
[notice] Apache/2.4.37 OpenSSL/1.1.1g configured
-- resuming normal operations
[notice] mod_yvva 1.5.4 (Aug 13 2020 18:27:47) loaded
[notice] Logging (=error) to: /var/log/td-webportal.log
Please consult chapter Troubleshooting if there is an error when starting the service.