Commit bda5e43

Author: Chao Xu
Commit message: in docs, update "minions" to "nodes"
Parent: e7a2a5b

6 files changed (+9 -9 lines)

docs/design/access.md (+2 -2)

@@ -65,7 +65,7 @@ Cluster in Large organization:
 
 Org-run cluster:
 - organization that runs K8s master components is same as the org that runs apps on K8s.
-- Minions may be on-premises VMs or physical machines; Cloud VMs; or a mix.
+- Nodes may be on-premises VMs or physical machines; Cloud VMs; or a mix.
 
 Hosted cluster:
 - Offering K8s API as a service, or offering a Paas or Saas built on K8s

@@ -223,7 +223,7 @@ Initially:
 Improvements:
 - allow one namespace to charge the quota for one or more other namespaces. This would be controlled by a policy which allows changing a billing_namespace= label on an object.
 - allow quota to be set by namespace owners for (namespace x label) combinations (e.g. let "webserver" namespace use 100 cores, but to prevent accidents, don't allow "webserver" namespace and "instance=test" use more than 10 cores.
-- tools to help write consistent quota config files based on number of minions, historical namespace usages, QoS needs, etc.
+- tools to help write consistent quota config files based on number of nodes, historical namespace usages, QoS needs, etc.
 - way for K8s Cluster Admin to incrementally adjust Quota objects.
 
 Simple profile:
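The "tools to help write consistent quota config files based on number of nodes" item is only a wishlist entry in the design doc; the sketch below is a purely illustrative take on the idea. The function name, the per-node core count, and the namespace share are assumptions for the example, not anything specified by Kubernetes.

```go
package main

import "fmt"

// Hypothetical helper in the spirit of the design note above: derive a
// per-namespace CPU quota from the cluster's node count. Names and ratios
// are illustrative assumptions only.
func cpuQuotaPerNamespace(nodeCount, coresPerNode int, share float64) int {
	totalCores := nodeCount * coresPerNode
	return int(float64(totalCores) * share)
}

func main() {
	// e.g. a 100-node cluster of 4-core machines, giving one namespace 10% of capacity.
	fmt.Println(cpuQuotaPerNamespace(100, 4, 0.10)) // prints 40
}
```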

docs/design/security.md (+2 -2)

@@ -104,7 +104,7 @@ A pod runs in a *security context* under a *service account* that is defined by
 
 ### TODO: authorization, authentication
 
-### Isolate the data store from the minions and supporting infrastructure
+### Isolate the data store from the nodes and supporting infrastructure
 
 Access to the central data store (etcd) in Kubernetes allows an attacker to run arbitrary containers on hosts, to gain access to any protected information stored in either volumes or in pods (such as access tokens or shared secrets provided as environment variables), to intercept and redirect traffic from running services by inserting middlemen, or to simply delete the entire history of the custer.
 

@@ -114,7 +114,7 @@ Both the Kubelet and Kube Proxy need information related to their specific roles
 
 The controller manager for Replication Controllers and other future controllers act on behalf of a user via delegation to perform automated maintenance on Kubernetes resources. Their ability to access or modify resource state should be strictly limited to their intended duties and they should be prevented from accessing information not pertinent to their role. For example, a replication controller needs only to create a copy of a known pod configuration, to determine the running state of an existing pod, or to delete an existing pod that it created - it does not need to know the contents or current state of a pod, nor have access to any data in the pods attached volumes.
 
-The Kubernetes pod scheduler is responsible for reading data from the pod to fit it onto a minion in the cluster. At a minimum, it needs access to view the ID of a pod (to craft the binding), its current state, any resource information necessary to identify placement, and other data relevant to concerns like anti-affinity, zone or region preference, or custom logic. It does not need the ability to modify pods or see other resources, only to create bindings. It should not need the ability to delete bindings unless the scheduler takes control of relocating components on failed hosts (which could be implemented by a separate component that can delete bindings but not create them). The scheduler may need read access to user or project-container information to determine preferential location (underspecified at this time).
+The Kubernetes pod scheduler is responsible for reading data from the pod to fit it onto a node in the cluster. At a minimum, it needs access to view the ID of a pod (to craft the binding), its current state, any resource information necessary to identify placement, and other data relevant to concerns like anti-affinity, zone or region preference, or custom logic. It does not need the ability to modify pods or see other resources, only to create bindings. It should not need the ability to delete bindings unless the scheduler takes control of relocating components on failed hosts (which could be implemented by a separate component that can delete bindings but not create them). The scheduler may need read access to user or project-container information to determine preferential location (underspecified at this time).
 
 
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/security.md?pixel)]()
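To make the scheduler paragraph in this hunk concrete: the binding it "crafts" only has to name a pod and the node chosen for it, which is why the scheduler needs read access to pod IDs and placement data but no broader write access. The types below are a minimal sketch of that least-privilege output, not the real Kubernetes Binding API object.

```go
package main

import "fmt"

// Illustrative only: the minimal information a scheduler emits when it crafts
// a binding - the pod's identity and the target node. The real API type differs.
type binding struct {
	PodID    string // which pod is being placed
	NodeName string // where the scheduler decided to put it
}

func schedule(podID string, candidateNodes []string) binding {
	// Placement logic (resource fit, anti-affinity, zone preference) is elided;
	// the point is that the result carries no pod contents, only identity + target.
	return binding{PodID: podID, NodeName: candidateNodes[0]}
}

func main() {
	b := schedule("web-pod-13je7", []string{"node-1", "node-2"})
	fmt.Printf("bind pod %s to %s\n", b.PodID, b.NodeName)
}
```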

docs/kubectl_get.md (+2 -2)

@@ -8,7 +8,7 @@ Display one or many resources
 Display one or many resources.
 
 Possible resources include pods (po), replication controllers (rc), services
-(svc), minions (mi), events (ev), or component statuses (cs).
+(svc), nodes, events (ev), or component statuses (cs).
 
 By specifying the output as 'template' and providing a Go template as the value
 of the --template flag, you can filter the attributes of the fetched resource(s).

@@ -84,6 +84,6 @@ $ kubectl get rc/web service/frontend pods/web-pod-13je7
 ### SEE ALSO
 * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
 
-###### Auto generated by spf13/cobra at 2015-05-15 00:05:04.549637372 +0000 UTC
+###### Auto generated by spf13/cobra at 2015-05-20 23:52:21.968486735 +0000 UTC
 
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_get.md?pixel)]()
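The help text in this file leans on Go templates for the --template flag. As a self-contained sketch of that same filtering mechanism, the program below uses the standard text/template package; the Pod struct and its fields are simplified stand-ins, not kubectl's real output schema.

```go
package main

import (
	"os"
	"text/template"
)

// Simplified stand-in for a fetched resource; not kubectl's actual object type.
type Pod struct {
	Name   string
	Status string
}

func main() {
	pods := []Pod{
		{Name: "web-pod-13je7", Status: "Running"},
		{Name: "frontend-abc12", Status: "Pending"},
	}
	// Same idea as `kubectl get ... -o template --template=...`:
	// a Go template picks out only the attributes you care about.
	tmpl := template.Must(template.New("get").Parse("{{range .}}{{.Name}}\t{{.Status}}\n{{end}}"))
	if err := tmpl.Execute(os.Stdout, pods); err != nil {
		panic(err)
	}
}
```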

docs/man/man1/kubectl-get.1 (+1 -1)

@@ -17,7 +17,7 @@ Display one or many resources.
 
 .PP
 Possible resources include pods (po), replication controllers (rc), services
-(svc), minions (mi), events (ev), or component statuses (cs).
+(svc), nodes, events (ev), or component statuses (cs).
 
 .PP
 By specifying the output as 'template' and providing a Go template as the value

docs/ovs-networking.md (+1 -1)

@@ -1,6 +1,6 @@
 # Kubernetes OpenVSwitch GRE/VxLAN networking
 
-This document describes how OpenVSwitch is used to setup networking between pods across minions.
+This document describes how OpenVSwitch is used to setup networking between pods across nodes.
 The tunnel type could be GRE or VxLAN. VxLAN is preferable when large scale isolation needs to be performed within the network.
 
 ![ovs-networking](./ovs-networking.png "OVS Networking")

pkg/kubectl/cmd/get.go (+1 -1)

@@ -32,7 +32,7 @@ const (
 get_long = `Display one or many resources.
 
 Possible resources include pods (po), replication controllers (rc), services
-(svc), minions (mi), events (ev), or component statuses (cs).
+(svc), nodes, events (ev), or component statuses (cs).
 
 By specifying the output as 'template' and providing a Go template as the value
 of the --template flag, you can filter the attributes of the fetched resource(s).`
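For context on where get_long ends up: kubectl_get.md and the man page above are auto-generated by spf13/cobra from the command definition, and the string changed in this diff is that command's Long help. Below is a minimal, stand-alone sketch of such wiring; the Run body and the flag set are placeholders, not kubectl's real implementation.

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

// Same help text as in the diff above; how it is surfaced is sketched below.
const get_long = `Display one or many resources.

Possible resources include pods (po), replication controllers (rc), services
(svc), nodes, events (ev), or component statuses (cs).`

func main() {
	cmd := &cobra.Command{
		Use:   "get (RESOURCE [NAME] | RESOURCE/NAME ...)",
		Short: "Display one or many resources",
		Long:  get_long, // printed for `get --help`; doc generators read it too
		Run: func(cmd *cobra.Command, args []string) {
			// Placeholder: the real command resolves and prints the requested resources.
			fmt.Println("requested:", args)
		},
	}
	cmd.Flags().StringP("output", "o", "", "Output format (e.g. json|yaml|template).")
	if err := cmd.Execute(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```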
