This is the most common way to access the cluster. A router is configured to accept external requests and proxy them based on the configured routes. This is limited to HTTP/HTTPS(SNI)/TLS(SNI), which covers web applications.
An administrator can create a wildcard DNS entry and then set up a router. Afterward, users can self-service ingress without having to contact the administrators. The router has controls that allow the administrator to specify whether users can self-provision host names, or whether they must fit a pattern the administrator defines. The other solutions require the administrator to do the provisioning, or they require that the administrator delegate a lot of privilege.
A set of routes can be created in the various projects. The overall set of routes is available to the set of routers. Each router selects from the set of routes. All routers see all routes unless restricted by labels on the router, which is called router sharding.
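For illustration, the following is a minimal sketch of a route that a project user might create through self-service. The names, host, label, and port are placeholder values, and the exact apiVersion depends on the cluster version:

apiVersion: v1
kind: Route
metadata:
  name: frontend                # hypothetical route name
  labels:
    type: external              # labels like this one can be used to shard routes across routers
spec:
  host: www.example.com         # placeholder host; must resolve to the router, for example via the wildcard DNS entry
  to:
    kind: Service
    name: frontend              # placeholder service in the same project
  port:
    targetPort: 8080            # placeholder target port on the service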
The High Availability section describes how to configure a router for high availability service using multiple replicas.
Load balancers are available on AWS and GCE clouds, and non-cloud options are also available.
The non-cloud load balancer allocates a unique IP from a configured pool. This limits you to a single ingress IP, which can be a VIP, but it will still be a single machine performing the initial load balancing. The non-cloud load balancer simplifies the administrator’s job by providing the needed IP address, but it uses one IP per service.
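As a sketch, a load balancer service might look like the following. The service name, selector, and ports are placeholder values:

apiVersion: v1
kind: Service
metadata:
  name: myapp-lb                # hypothetical service name
spec:
  type: LoadBalancer            # requests an external load balancer (cloud) or an ingress IP (non-cloud)
  selector:
    app: myapp                  # placeholder pod selector
  ports:
  - port: 80                    # port exposed on the load balancer or ingress IP
    targetPort: 8080            # placeholder container port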
Administrators can assign a list of externalIPs, for which nodes in the cluster will also accept traffic for the service. These IPs are not managed by {product-title}, and administrators are responsible for ensuring that traffic arrives at a node with this IP. A common example of this is configuring a highly available service.
The supplied list of IP addresses is used for load balancing incoming requests. The service port is opened on the externalIPs on all nodes running kube-proxy.
Note: ExternalIPs require elevated permissions to assign, and manual tracking of the IP:ports that are in use.
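A minimal sketch of a service using externalIPs; the addresses and names below are placeholders, not values from this document:

apiVersion: v1
kind: Service
metadata:
  name: mysql-external          # hypothetical service name
spec:
  selector:
    app: mysql                  # placeholder pod selector
  ports:
  - port: 3306                  # this port is opened on each externalIP on every node running kube-proxy
  externalIPs:
  - 192.168.120.10              # example external IP; the administrator must ensure it routes to a cluster node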
An externally visible IP for the service can be configured in several ways:
- Manually configuring the externalIPs with a list of known external IP addresses.
- Configuring the externalIPs to a set of VIP addresses that are managed by the high availability service.
- In a cloud environment (AWS or GCE), by using type=LoadBalancer.
- In a non-cloud environment, by configuring the ingress IP range (ingressIPNetworkCIDR), service.type=LoadBalancer, and service.port.ingressIP (see the configuration sketch below).
The administrator must ensure the external IPs are routed to the nodes and local firewall rules on all nodes allow access to the open port.
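For the non-cloud case, the ingress IP range is set in the master configuration. The following is only a sketch; the file path and the CIDR shown are examples and depend on the installation:

# excerpt from the master configuration file (for example, /etc/origin/master/master-config.yaml)
networkConfig:
  ingressIPNetworkCIDR: 172.29.0.0/16   # example range from which ingress IPs are allocated

A service created with type=LoadBalancer in this environment is then allocated a unique ingress IP from the configured range.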
Use the same nodes as the router, but advertise them with VRRP and use the DNS address of the router nodes.
Use NodePorts to expose the service on a nodePort on all nodes in the cluster. The service is exposed at <node-name>:<nodePort> on every node.
By default, nodePorts are in the range of 30000-32767, which means a NodePort is unlikely to match a service’s intended port (for example, 8080 may be exposed as 31020). This use of ports is wasteful of scarce host port resources.
However, it is slightly easier to set up. Again, this requires more privileges.
The administrator must ensure the desired traffic is routed to the nodes and local firewall rules on all nodes allow access to the open port.
NodePorts and externalIPs are independent, and both can be used concurrently.
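A minimal NodePort sketch; the names and ports are placeholders, and nodePort can be omitted to let the cluster pick one from the default 30000-32767 range:

apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport          # hypothetical service name
spec:
  type: NodePort
  selector:
    app: myapp                  # placeholder pod selector
  ports:
  - port: 8080                  # the service's own port
    targetPort: 8080            # placeholder container port
    nodePort: 31020             # example node port; must fall within the configured node port range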
High availability improves the chances that an IP address remains active by assigning a virtual IP address to a host in a configured pool of hosts. If the host goes down, the virtual IP address is automatically transferred to another host in the pool.
In a non-cloud environment, cluster administrators can assign a unique external IP address to a service (as described here). When routed correctly, external traffic can reach the service endpoints via any TCP/UDP port the service exposes. This is simpler than having to manage the port space of a limited number of shared IP addresses, when manually assigning external IPs to services.
An edge load balancer can be used to accept traffic from outside networks and proxy the traffic to pods inside the cluster.
In this configuration, the internal pod network is visible to the outside.