Configuring the SDN

Overview

The {product-title} SDN enables communication between pods across the {product-title} cluster, establishing a pod network. Two SDN plug-ins are currently available (ovs-subnet and ovs-multitenant), which provide different methods for configuring the pod network.

Configuring the Pod Network with Ansible

For initial advanced installations, the ovs-subnet plug-in is installed and configured by default, though it can be overridden during installation using the os_sdn_network_plugin_name parameter, which is configurable in the Ansible inventory file.

Example 1. Example SDN Configuration with Ansible
# Configure the multi-tenant SDN plugin (default is 'redhat/openshift-ovs-subnet')
# os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'

# Disable the OpenShift SDN plugin
# openshift_use_openshift_sdn=False

# Default subdomain to use for exposed routes
#openshift_master_default_subdomain=apps.test.example.com

# Configure SDN cluster network and kubernetes service CIDR blocks. These
# network blocks should be private and should not conflict with network blocks
# in your infrastructure that pods may require access to. Cannot be changed
# after deployment.
#osm_cluster_network_cidr=10.1.0.0/16
#openshift_portal_net=172.30.0.0/16

# Configure number of bits to allocate to each host’s subnet e.g. 8
# would mean a /24 network on the host.
#osm_host_subnet_length=8

# This variable specifies the service proxy implementation to use:
# either iptables for the pure-iptables version (the default),
# or userspace for the userspace proxy.
#openshift_node_proxy_mode=iptables
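
After updating the inventory, run the advanced installation playbook. A minimal sketch, assuming the openshift-ansible 3.x repository layout (adjust the inventory path for your environment):

$ ansible-playbook -i /path/to/inventory playbooks/byo/config.yml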

Configuring the Pod Network on Masters

Cluster administrators can control pod network settings on masters by modifying parameters in the networkConfig section of the master configuration file (located at /etc/origin/master/master-config.yaml by default):

networkConfig:
  clusterNetworkCIDR: 10.128.0.0/14 (1)
  hostSubnetLength: 9 (2)
  networkPluginName: "redhat/openshift-ovs-subnet" (3)
  serviceNetworkCIDR: 172.30.0.0/16 (4)

  1. Cluster network from which each node's pod subnet is allocated

  2. Number of bits for pod IP allocation within a node

  3. Set to redhat/openshift-ovs-subnet for the ovs-subnet plug-in or redhat/openshift-ovs-multitenant for the ovs-multitenant plug-in

  4. Service IP allocation for the cluster
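
With the values above, the subnet math works out as follows: a hostSubnetLength of 9 gives each node a /23 subnet (32 - 9 = 23) containing 2^9 = 512 pod IP addresses, and the /14 cluster network has 18 host bits, so it can be divided into 2^(18 - 9) = 512 such node subnets.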

Important

The serviceNetworkCIDR and hostSubnetLength values cannot be changed after the cluster is first created, and clusterNetworkCIDR can only be changed to be a larger network that still contains the original network. For example, given the default value of 10.128.0.0/14, you could change clusterNetworkCIDR to 10.128.0.0/9 (i.e., the entire upper half of net 10) but not to 10.64.0.0/16, because that does not contain the original network.

Configuring the Pod Network on Nodes

Cluster administrators can control pod network settings on nodes by modifying parameters in the networkConfig section of the node configuration file (located at /etc/origin/node/node-config.yaml by default):

networkConfig:
  mtu: 1450 (1)
  networkPluginName: "redhat/openshift-ovs-subnet" (2)

  1. Maximum transmission unit (MTU) for the pod overlay network

  2. Set to redhat/openshift-ovs-subnet for the ovs-subnet plug-in or redhat/openshift-ovs-multitenant for the ovs-multitenant plug-in
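
The default of 1450 assumes an underlying interface MTU of 1500: the VXLAN encapsulation used by the {product-title} SDN overlay adds 50 bytes per packet, so the pod network MTU should be the MTU of the primary network interface minus 50.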

Migrating Between SDN Plug-ins

If you are already using one SDN plug-in and want to switch to another:

  1. Change the networkPluginName parameter on all masters and nodes in their configuration files, then restart the master and node services on each host so the change takes effect (see the sketch after this list).

  2. If you are switching from an {product-title} SDN plug-in to a third-party plug-in, then clean up {product-title} SDN-specific artifacts:

$ oc delete clusternetwork --all
$ oc delete hostsubnets --all
$ oc delete netnamespaces --all
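
As referenced in step 1, the configuration changes only take effect once the services are restarted. A minimal sketch, assuming an RPM-based origin installation (on enterprise installations the units are named atomic-openshift-master and atomic-openshift-node):

$ systemctl restart origin-master
$ systemctl restart origin-node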

When switching from the ovs-subnet to the ovs-multitenant {product-title} SDN plug-in, all the existing projects in the cluster will be fully isolated (assigned unique VNIDs). Cluster administrators can choose to modify the project networks using the administrator CLI.

Check VNIDs by running:

$ oc get netnamespace
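
For example, projects can be made to share a pod network or be opened to global access with the pod-network subcommands of the administrator CLI; the project names below are placeholders:

$ oc adm pod-network join-projects --to=<project1> <project2>
$ oc adm pod-network make-projects-global <project1>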

External Access to the Cluster Network

If a host that is external to {product-title} requires access to the cluster network, you have two options:

  1. Configure the host as an {product-title} node but mark it unschedulable so that the master does not schedule containers on it.

  2. Create a tunnel between your host and a host that is on the cluster network.

Both options are presented as part of a practical use-case in the documentation for configuring routing from an edge load-balancer to containers within {product-title} SDN.
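
For the first option, marking a node unschedulable can be done with the administrator CLI, for example:

$ oc adm manage-node <node> --schedulable=false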

Using Flannel

As an alternative to the default SDN, {product-title} also provides Ansible playbooks for installing flannel-based networking. This is useful when running {product-title} within a cloud provider platform, such as OpenStack, where you want to avoid encapsulating packets twice by running an Open vSwitch SDN on both platforms.

To enable flannel within your {product-title} cluster, set the following variables in your Ansible inventory file before running the installation.

openshift_use_openshift_sdn=false
openshift_use_flannel=true

Setting openshift_use_openshift_sdn to false disables the default SDN, and setting openshift_use_flannel to true enables flannel in its place.
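
Some openshift-ansible releases also expose an inventory variable for selecting the interface flanneld binds to; the variable name below is an assumption to verify against your playbook version:

# Interface for flannel to use (assumed variable name; confirm in your openshift-ansible release)
#flannel_interface=eth1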