This section explores some of the preparation required to install {product-title} as a set of services within containers. This applies to hosts using either Red Hat Enterprise Linux or Red Hat Atomic Host.
- For the quick installation method, you can choose between the RPM or containerized method on a per-host basis during the interactive installation, or set the values manually in an installation configuration file.
- For the advanced installation method, you can set the Ansible variable containerized=true in an inventory file on a cluster-wide or per-host basis.
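For example, a minimal inventory fragment that enables containerized installation cluster-wide might look like the following sketch; the host names are placeholders:

```
[OSEv3:vars]
# Install all services as containers rather than RPMs
containerized=true

[masters]
master.example.com

[nodes]
node1.example.com
```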
The following sections detail the preparation for a containerized {product-title} installation.
Containerized installations make use of the following images:
If you need to use a private registry to pull these images during the installation, you can specify the registry information ahead of time. For the advanced installation method, you can set the following Ansible variables in your inventory file, as required:
cli_docker_additional_registries=<registry_hostname>
cli_docker_insecure_registries=<registry_hostname>
cli_docker_blocked_registries=<registry_hostname>
The configuration of additional, insecure, and blocked Docker registries occurs at the beginning of the installation process to ensure that these settings are applied before attempting to pull any of the required images.
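As a sketch, these variables belong in the [OSEv3:vars] section of the inventory; the registry host names below are placeholders:

```
[OSEv3:vars]
cli_docker_additional_registries=registry.example.com
cli_docker_insecure_registries=insecure-registry.example.com
cli_docker_blocked_registries=blocked-registry.example.com
```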
When using containerized installations, a CLI wrapper script is deployed on each master at /usr/local/bin/openshift. The following set of symbolic links are also provided to ease administrative tasks:
| Symbolic Link | Usage |
|---|---|
| /usr/local/bin/oc | Developer CLI |
| /usr/local/bin/oadm | Administrative CLI |
| /usr/local/bin/kubectl | Kubernetes CLI |
The wrapper spawns a new container on each invocation, so you may notice it run slightly slower than native CLI operations.
The wrapper scripts mount a limited subset of paths:

- ~/.kube
- /etc/origin/
- /tmp/

Be mindful of this when passing in files to be processed by the oc or oadm commands. You may find it easier to redirect the input, for example:
# oc create -f - < my-file.json
Note: The wrapper is intended only to be used to bootstrap an environment. You should install the CLI tools on another host after you have granted cluster-admin privileges to a user. See Managing Role Bindings and Get Started with the CLI for more information.
The installation process creates relevant systemd units which can be used to start, stop, and poll services using normal systemctl commands. For containerized installations, these unit names match those of an RPM installation, with the exception of the etcd service which is named etcd_container.
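For example, on a containerized host the etcd service is managed through the container unit name, while all other unit names match an RPM installation:

```
# systemctl status etcd_container
# systemctl restart etcd_container
```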
This change is necessary as currently RHEL Atomic Host ships with the etcd package installed as part of the operating system, so a containerized version is used for the {product-title} installation instead. The installation process disables the default etcd service. The etcd package is slated to be removed from RHEL Atomic Host in the future.
All {product-title} configuration files are placed in the same locations during containerized installation as for RPM-based installations, and will survive OSTree upgrades.
However, the default image stream and template files are installed at /etc/origin/examples/ for containerized installations rather than the standard /usr/share/openshift/examples/, because that directory is read-only on RHEL Atomic Host.
RHEL Atomic Host installations normally have a very small root file system. However, the etcd, master, and node containers persist data in the /var/lib/ directory. Ensure that you have enough space on the root file system before installing {product-title}; see the System Requirements section for details.
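For example, you can verify the available space on the file system backing /var/lib before installing (output omitted here, as it varies per host):

```
# df -h /var/lib
```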
{product-title} SDN initialization requires that the Docker bridge be reconfigured and that Docker is restarted. This complicates the situation when the node is running within a container. When using the Open vSwitch (OVS) SDN, you will see the node start, reconfigure Docker, restart Docker (which restarts all containers), and finally start successfully.
In this case, the node service may fail to start and be restarted a few times, because the master services are also restarted along with Docker. The current implementation uses a workaround that relies on setting the Restart=always parameter in the Docker-based systemd units.
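As an illustration of this workaround, the [Service] section of such a Docker-based unit includes a directive along these lines; this is a simplified sketch, not the exact unit file shipped by the installer:

```
[Service]
# Restart the service whenever a Docker restart kills its container
Restart=always
```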