CKA Exam Knowledge Summary

This is a very, very long article: the notes I made while reviewing for the CKA exam when it had just been introduced last year. Based on the questions I encountered, it lists the concrete knowledge points. The exam was still on v1.7 at the time; by now it should be up to v1.12.

The CKA certificate

As k8s rose to fame, a pile of domestic companies started pushing expensive "guaranteed pass" training courses. I'll just say: the CNCF is a bit short on community spirit here and leans more toward a business model. Still, as long as it pushes cloud native forward as a whole, so be it~

To make this article more convincing, I'll show off my certificate too (the certification ID ends in 0100, which by rights makes me exactly the 100th person to pass). Go easy on me!

Study notes

Enough preamble; on to the main topic.

Job

Q: Create a Job that runs 60 times, with 2 pods running in parallel

参考资料:https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 60               # total successful runs (the question asks for 60)
  parallelism: 2                # at most 2 pods run at the same time
  activeDeadlineSeconds: 3600   # optional upper bound on how long the job may stay active
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

Parallel jobs

Job type | Example use | Behavior | completions | parallelism | Notes
One-shot Job | database migration | creates one Pod and runs it until it succeeds | 1 | 1 |
Job with a fixed completion count | pods working through a queue | creates Pods one after another until `completions` of them succeed | 2+ | 1 |
Parallel Job with a fixed completion count | several pods working through a queue at once | creates several Pods at a time until `completions` of them succeed | 2+ | 2+ |
Parallel Job (work queue) | several pods working through a queue at once | creates one or more Pods until any one of them succeeds | 1 | 2+ | completions stays at 1; a single success finishes the job
  • kubectl scale job
    A job can be scaled up using the kubectl scale command. For example, the following command sets .spec.parallelism of a job called myjob to 10:

    $ kubectl scale  --replicas=10 jobs/myjob
    job "myjob" scaled
  • Notes (see the sketch below)

    1. parallelism: how many pods run in parallel;
    2. completions: the job finishes after this many successful runs;
    3. restartPolicy only supports Never or OnFailure;
    4. activeDeadlineSeconds: the maximum time the job may stay active; once exceeded, its pods are terminated and no further retries happen;
    5. kubectl scale actually just modifies the job's parallelism; it does not affect completions.
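A quick way to verify the behavior (a sketch; "job.yaml" is whatever file you saved the manifest above into):

kubectl apply -f job.yaml
kubectl get pods -l job-name=pi --watch   # at most 2 pods should be running at any moment
kubectl get job pi                        # the SUCCESSFUL column counts completed runs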

Cronjob

Cron expression format (five fields: minute, hour, day of month, month, day of week).

A field set to */5 means "every 5 units"; in the minute field, for example, it means every 5 minutes.

root@test-9:~# kubectl run cronjob --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "sleep 99"
cronjob "cronjob" created
root@test-9:~#
root@test-9:~# kubectl get cronjob
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob */1 * * * * False 0 <none>
root@test-9:~#
root@test-9:~# kubectl get job
NAME DESIRED SUCCESSFUL AGE
cronjob-1510581480 1 0 1m
cronjob-1510581540 1 0 14s
root@test-9:~#
root@test-9:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
cronjob-1510581480-r49rq 1/1 Running 0 1m
cronjob-1510581540-tl4hn 1/1 Running 0 16s

kubectl top

Q: Find which Pod is taking max CPU
Use kubectl top to find CPU usage per pod

kubectl top node

root@test-9:~/henry# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
test-10 41m 1% 2230Mi 14%
test-9 104m 2% 4931Mi 31%
root@test-9:~/henry#
root@test-9:~/henry#
root@test-9:~/henry# kubectl top nodes | awk '{print $1 "\t" $3|"sort -r -n"}'
test-9 2%
test-10 1%
NAME CPU%

sort flags: -r reverses the order; -n compares numerically.
In the awk print, "\t" separates the two columns, and the output is piped into sort from within awk.
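An equivalent sketch using only standard coreutils, keeping whole rows instead of two columns:

kubectl top nodes | tail -n +2 | sort -k3 -r -n   # skip the header line, sort by the CPU% column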

Sorting output

Q: List all PersistentVolumes sorted by their name
Use kubectl get pv --sort-by= <- this problem is buggy, and by default kubectl already gives the output sorted by name.
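Concretely, sorting by name (and, if asked, by capacity) looks like:

kubectl get pv --sort-by=.metadata.name
kubectl get pv --sort-by=.spec.capacity.storage   # or sort by size instead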

Sorting (kcs below appears to be a shell alias for kubectl -n kube-system)

root@test-9:~/henry# kcs get svc --sort-by=.metadata.uid
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tiller-deploy ClusterIP 10.43.155.15 <none> 44134/TCP 2h
monitoring-influxdb ClusterIP 10.43.227.43 <none> 8086/TCP 2h
monitoring-grafana ClusterIP 10.43.217.185 <none> 80/TCP 2h
kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP 2h
kubernetes-dashboard ClusterIP 10.43.36.245 <none> 9090/TCP 2h
heapster ClusterIP 10.43.250.217 <none> 80/TCP 2h
root@test-9:~/henry#
root@test-9:~/henry# kcs get svc --sort-by=.metadata.name
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
heapster ClusterIP 10.43.250.217 <none> 80/TCP 2h
kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP 2h
kubernetes-dashboard ClusterIP 10.43.36.245 <none> 9090/TCP 2h
monitoring-grafana ClusterIP 10.43.217.185 <none> 80/TCP 2h
monitoring-influxdb ClusterIP 10.43.227.43 <none> 8086/TCP 2h
tiller-deploy ClusterIP 10.43.155.15 <none> 44134/TCP 2h

root@test-9:~/henry# kcs get svc heapster -o json
{
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "creationTimestamp": "2017-11-12T03:27:51Z",
        "labels": {
            "kubernetes.io/cluster-service": "true",
            "kubernetes.io/name": "Heapster",
            "task": "monitoring"
        },
        "name": "heapster",
        "namespace": "kube-system",
        "resourceVersion": "229",
        "selfLink": "/api/v1/namespaces/kube-system/services/heapster",
        "uid": "769529c5-c759-11e7-8dee-02cdc7a8bd69"
    },
    "spec": {
        "clusterIP": "10.43.250.217",
        "ports": [
            {
                "port": 80,
                "protocol": "TCP",
                "targetPort": 8082
            }
        ],
        "selector": {
            "k8s-app": "heapster"
        },
        "sessionAffinity": "None",
        "type": "ClusterIP"
    },
    "status": {
        "loadBalancer": {}
    }
}

Querying resources

# Get commands with basic output
$ kubectl get services # List all services in the namespace
$ kubectl get pods --all-namespaces # List all pods in all namespaces
$ kubectl get pods -o wide # List all pods in the namespace, with more details
$ kubectl get deployment my-dep # List a particular deployment
$ kubectl get pods --include-uninitialized # List all pods in the namespace, including uninitialized ones

# Describe commands with verbose output
$ kubectl describe nodes my-node
$ kubectl describe pods my-pod

$ kubectl get services --sort-by=.metadata.name # List Services Sorted by Name

# List pods Sorted by Restart Count
$ kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'

# Get the version label of all pods with label app=cassandra
$ kubectl get pods --selector=app=cassandra -o \
jsonpath='{.items[*].metadata.labels.version}'

# Get ExternalIPs of all nodes
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'

# List Names of Pods that belong to Particular RC
# "jq" command useful for transformations that are too complex for jsonpath, it can be found at https://stedolan.github.io/jq/
$ sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?}
$ echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})

# Check which nodes are ready
$ JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
&& kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"

# List all Secrets currently in use by a pod
$ kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq

Common commands

kubectl run

root@test-9:~# kubectl run demo-1 --image=busybox:latest --env="env1=wise2c" --port=80 --hostport=30098 --restart='Always' --image-pull-policy='Always' --limits="cpu=200m,memory=512Mi" --replicas=2 -- sleep 60
deployment "demo-1" created
root@test-9:~#
root@test-9:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
demo-1-4031462666-1m6lc 0/1 ContainerCreating 0 4s
demo-1-4031462666-3sph3 0/1 ContainerCreating 0 4s
root@test-9:~#
root@test-9:~# kubectl get deploy demo-1 -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2017-11-12T06:20:52Z
  generation: 1
  labels:
    run: demo-1
  name: demo-1
  namespace: default
  resourceVersion: "13667"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/demo-1
  uid: a24a6b2b-c771-11e7-8dee-02cdc7a8bd69
spec:
  replicas: 2
  selector:
    matchLabels:
      run: demo-1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: demo-1
    spec:
      containers:
      - args:
        - sleep
        - "60"
        env:
        - name: env1
          value: wise2c
        image: busybox:latest
        imagePullPolicy: Always
        name: demo-1
        ports:
        - containerPort: 80
          hostPort: 30098
          protocol: TCP
        resources:
          limits:
            cpu: 200m
            memory: 512Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2017-11-12T06:22:03Z
    lastUpdateTime: 2017-11-12T06:22:03Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2

kubectl expose

root@test-9:~# kubectl expose deploy nginx2 --name=nginx --port=80 --target-port=80 --protocol=TCP --type=ClusterIP
service "nginx" exposed
root@test-9:~# kubectl get svc nginx -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-11-12T07:32:48Z
  labels:
    run: nginx2
  name: nginx
  namespace: default
  resourceVersion: "20097"
  selfLink: /api/v1/namespaces/default/services/nginx
  uid: ae7774d4-c77b-11e7-8dee-02cdc7a8bd69
spec:
  clusterIP: 10.43.221.216
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx2
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
root@test-9:~#
root@test-9:~#
root@test-9:~#
root@test-9:~#
root@test-9:~# kubectl expose deploy nginx2 --name=nginx --port=80 --target-port=80 --protocol=TCP --type=NodePort
service "nginx" exposed
root@test-9:~# kubectl get svc nginx -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-11-12T07:35:21Z
  labels:
    run: nginx2
  name: nginx
  namespace: default
  resourceVersion: "20296"
  selfLink: /api/v1/namespaces/default/services/nginx
  uid: 0a19d690-c77c-11e7-8dee-02cdc7a8bd69
spec:
  clusterIP: 10.43.120.19
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30014
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx2
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

port-forward

root@test-9:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-f7d4dc847-bzlzq 1/1 Running 0 11h 10.244.0.24 test-9
nginx-f7d4dc847-lcq57 1/1 Running 0 11h 10.244.1.45 test-10
nginx-f7d4dc847-qs28j 1/1 Running 0 11h 10.244.0.25 test-9
nginx-f7d4dc847-s4xml 1/1 Running 0 11h 10.244.1.44 test-10
nginx-f7d4dc847-skb74 1/1 Running 0 11h 10.244.1.43 test-10
nginx-f7d4dc847-x9vh4 1/1 Running 0 11h 10.244.0.26 test-9
root@test-9:~# kubectl port-forward nginx-f7d4dc847-bzlzq 9090:80
Forwarding from 127.0.0.1:9090 -> 80
Handling connection for 9090
root@test-9:~#
root@test-9:~# curl 127.0.0.1:9090
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
...
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@test-9:~#

NetworkPolicy

Q: Create a NetworkPolicy to allow connect to port 8080 by busybox pod only
https://kubernetes.io/docs/concepts/services-networking/network-policies/
Make sure to use apiVersion: extensions/v1beta1 which works on both 1.6 and 1.7

  • Before the policy takes effect (with the pre-1.7 annotation-based API), you must first annotate the namespace to deny all ingress by default;
  • podSelector.matchLabels: defines which (destination) pods the rule applies to;
  • ingress: allows pods carrying the label "access=true" to reach those pods.
root@test-9:~# kubectl annotate ns default "net.beta.kubernetes.io/network-policy={\"ingress\": {\"isolation\": \"DefaultDeny\"}}"
namespace "default" annotated
root@test-9:~#
root@test-9:~#
root@test-9:~# kubectl describe ns default
Name: default
Labels: <none>
Annotations: net.beta.kubernetes.io/network-policy={"ingress": {"isolation": "DefaultDeny"}}
Status: Active
No resource quota.
No resource limits.
root@test-9:~#
root@test-9:~/henry# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx2-2627548522-6f5kf 1/1 Running 0 22m pod-template-hash=2627548522,run=nginx
nginx2-2627548522-8w87b 1/1 Running 0 22m pod-template-hash=2627548522,run=nginx
root@test-9:~/henry# kubectl get svc nginx --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
nginx NodePort 10.43.120.19 <none> 80:30014/TCP 16m run=nginx
root@test-9:~/henry# cat network-policy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          access: "true"
    ports:
    - protocol: TCP
      port: 80
root@test-9:~/henry# kubectl get netpol
NAME POD-SELECTOR AGE
access-nginx run=nginx 2m
root@test-9:~/henry# kubectl get netpol access-nginx -o yaml
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  creationTimestamp: 2017-11-12T07:40:38Z
  generation: 1
  name: access-nginx
  namespace: default
  resourceVersion: "20699"
  selfLink: /apis/extensions/v1beta1/namespaces/default/networkpolicies/access-nginx
  uid: c72191d1-c77c-11e7-8dee-02cdc7a8bd69
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"
    ports:
    - port: 80
      protocol: TCP
  podSelector:
    matchLabels:
      run: nginx
root@test-9:~/henry#
root@test-9:~/henry# kubectl run busybox --rm -ti --labels="access=true" --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget nginx
Connecting to nginx (10.43.120.19:80)
index.html 100% |********************************************************************************************| 612 0:00:00 ETA
/ #

Node Broken

Q: fixing broken nodes, see
https://kubernetes.io/docs/concepts/architecture/nodes/

  • Node status fields
    Conditions
    Addresses
root@test-9:~# kubectl describe nodes
Name: test-10
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=rancher
beta.kubernetes.io/os=linux
failure-domain.beta.kubernetes.io/region=Region1
failure-domain.beta.kubernetes.io/zone=FailureDomain1
io.rancher.host.docker_version=1.12
io.rancher.host.linux_kernel_version=4.4
kubernetes.io/hostname=test-10
Annotations: io.rancher.labels.io.rancher.host.docker_version=
io.rancher.labels.io.rancher.host.linux_kernel_version=
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: <none>
CreationTimestamp: Sun, 12 Nov 2017 11:27:45 +0800
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Sun, 12 Nov 2017 15:16:39 +0800 Sun, 12 Nov 2017 11:27:45 +0800 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Sun, 12 Nov 2017 15:16:39 +0800 Sun, 12 Nov 2017 11:27:45 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 12 Nov 2017 15:16:39 +0800 Sun, 12 Nov 2017 11:27:45 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Sun, 12 Nov 2017 15:16:39 +0800 Sun, 12 Nov 2017 11:27:45 +0800 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.144.102.117
ExternalIP: 10.144.102.117
Hostname: test-10
Capacity:
cpu: 4
memory: 16301460Ki
pods: 110
Allocatable:
cpu: 4
memory: 16199060Ki
pods: 110
System Info:
Machine ID:
System UUID: 4ABB25CA-B353-450A-9787-28477ED72344
Boot ID: 689e31dc-e05d-48de-9068-e8460d15a9b6
Kernel Version: 4.4.0-91-generic
OS Image: Ubuntu 16.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.12.6
Kubelet Version: v1.7.7-rancher1
Kube-Proxy Version: v1.7.7-rancher1
ExternalID: 3cb02e3d-cb58-42c6-9a54-2fb5cfb836d2
Non-terminated Pods: (4 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default demo-1-4031462666-1m6lc 200m (5%) 200m (5%) 512Mi (3%) 512Mi (3%)
default nginx-4217019353-k3mqk 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-dns-638003847-q28hb 260m (6%) 0 (0%) 110Mi (0%) 170Mi (1%)
kube-system kubernetes-dashboard-716739405-42t14 100m (2%) 100m (2%) 50Mi (0%) 50Mi (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
560m (14%) 300m (7%) 672Mi (4%) 732Mi (4%)
Events: <none>

Etcd

Q: etcd backup, see
https://kubernetes.io/docs/getting-started-guides/ubuntu/backups/
https://www.mirantis.com/blog/everything-you-ever-wanted-to-know-about-using-etcd-with-kubernetes-v1-6-but-were-afraid-to-ask/

  • Start Etcd

    #start script:
    #========================================
    etcd --name 'default' \
    --data-dir '/root/data.etcd' \
    --ca-file '/pki/ca.crt' --cert-file '/pki/cert.crt' --key-file '/pki/key.key' \
    --peer-ca-file '/pki/ca.crt' --peer-cert-file '/pki/cert.crt' --peer-key-file '/pki/key.key' \
    --client-cert-auth \
    --peer-client-cert-auth \
    --listen-peer-urls https://localhost:2380 \
    --listen-client-urls https://localhost:2379 \
    --advertise-client-urls https://localhost:2379 \
    --initial-advertise-peer-urls https://localhost:2380 \
    --initial-cluster default=https://localhost:2380 \
    --initial-cluster-state 'new' \
    --initial-cluster-token 'etcd-cluster' \
    --debug


    #operate:
    #========================================
    etcdctl --endpoint=https://localhost:2379 --ca-file=/pki/ca.crt --cert-file=/pki/cert.crt --key-file=/pki/key.key ls /

    To use certificates:

    1. The endpoint URLs must use https
    2. The CA / cert / key flags shown in the start script above must be set
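    For reference, a minimal etcdctl v3 equivalent under the same cert-path assumptions (v3 renames the TLS flags):

    ETCDCTL_API=3 etcdctl --endpoints=https://localhost:2379 \
      --cacert=/pki/ca.crt --cert=/pki/cert.crt --key=/pki/key.key \
      get / --prefix --keys-only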
  • Replacing a failed etcd member

    1. Get the member ID of the failed member (member1):

      etcdctl --endpoints=http://10.0.0.2,http://10.0.0.3 member list
    2. The following message is displayed:

      8211f1d0f64f3269, started, member1, http://10.0.0.1:2380, http://10.0.0.1:2379
      91bc3c398fb3c146, started, member2, http://10.0.0.2:2380, http://10.0.0.2:2379
      fd422379fda50e48, started, member3, http://10.0.0.3:2380, http://10.0.0.3:2379
    3. Remove the failed member:

      etcdctl member remove 8211f1d0f64f3269
    4. The following message is displayed:

      Removed member 8211f1d0f64f3269 from cluster
    5. Add the new member:

      ./etcdctl member add member4 --peer-urls=http://10.0.0.4:2380
    6. The following message is displayed:

      Member 2be1eb8f84b7f63e added to cluster ef37ad9dc622a7c4
    7. Start the newly added member on a machine with the IP 10.0.0.4:

      export ETCD_NAME="member4"
      export ETCD_INITIAL_CLUSTER="member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380"
      export ETCD_INITIAL_CLUSTER_STATE=existing
      etcd [flags]

      Note that the member is added to the cluster first, and only then is the corresponding etcd process started.
      Also, the newly started etcd member must be given the initial cluster state "existing".

  • Backing up an etcd cluster

    ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshotdb
    # exit 0

    # verify the snapshot
    ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb
    +----------+----------+------------+------------+
    |   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
    +----------+----------+------------+------------+
    | fe01cf57 |       10 |          7 | 2.1 MB     |
    +----------+----------+------------+------------+
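    Restoring is the mirror operation (a sketch; the --data-dir below is an assumption and must not already exist):

    ETCDCTL_API=3 etcdctl snapshot restore snapshotdb --data-dir /var/lib/etcd-restored
    # then point etcd at the new data dir and restart it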

initContainer

Q: You have a Container with a volume mount. Add an init container that creates an empty file in the volume. (The only trick is to mount the volume into the init container as well.)
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  initContainers:
  - name: init-touch-file
    image: busybox
    volumeMounts:
    - mountPath: /data
      name: cache-volume
    command: ['sh', '-c', 'echo "" > /data/harshal.txt']
  volumes:
  - name: cache-volume
    emptyDir: {}
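To verify the init container created the file (pod, container, and path names come from the manifest above):

kubectl exec test-pd -c myapp-container -- ls -l /cache/harshal.txt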
root@test-9:~/henry# cat init-container.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-baidu
    image: busybox
    command: ['sh', '-c', 'until nslookup www.baidu.com; do echo waiting for baidu.com; sleep 2; done;']
  - name: init-google
    image: busybox
    command: ['sh', '-c', 'until nslookup www.google.com; do echo waiting for google.com; sleep 2; done;']

root@test-9:~/henry#
root@test-9:~/henry# kubectl get pod -a
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 1m
nginx2-2627548522-6f5kf 1/1 Running 0 2h
nginx2-2627548522-8w87b 1/1 Running 0 2h
root@test-9:~/henry#
root@test-9:~/henry# kubectl describe pod myapp-pod
Name: myapp-pod
Namespace: default
Node: test-9/10.144.96.185
Start Time: Sun, 12 Nov 2017 17:43:49 +0800
Labels: app=myapp
Annotations: pod.alpha.kubernetes.io/init-container-statuses=[{"name":"init-baidu","state":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":"2017-11-12T09:43:54Z","finishedAt":"2017-11-12T09:43:54Z","c...
pod.alpha.kubernetes.io/init-containers=[{"name":"init-baidu","image":"busybox","command":["sh","-c","until nslookup www.baidu.com; do echo waiting for baidu.com; sleep 2; done;"],"resources":{},"volu...
pod.beta.kubernetes.io/init-container-statuses=[{"name":"init-baidu","state":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":"2017-11-12T09:43:54Z","finishedAt":"2017-11-12T09:43:54Z","co...
pod.beta.kubernetes.io/init-containers=[{"name":"init-baidu","image":"busybox","command":["sh","-c","until nslookup www.baidu.com; do echo waiting for baidu.com; sleep 2; done;"],"resources":{},"volum...
Status: Running
IP: 10.42.107.11
Init Containers:
init-baidu:
Container ID: docker://9497c4dc7c111870022e5dd873daba13f00797308b505f6e82fd1f1545744062
Image: busybox
Image ID: docker-pullable://busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0
Port: <none>
Command:
sh
-c
until nslookup www.baidu.com; do echo waiting for baidu.com; sleep 2; done;
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 12 Nov 2017 17:43:54 +0800
Finished: Sun, 12 Nov 2017 17:43:54 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5qfpj (ro)
init-google:
Container ID: docker://5ff45db07f52c51e40b0bb77ad650aa4fbd29aa7112a4197de33ed880a04376d
Image: busybox
Image ID: docker-pullable://busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0
Port: <none>
Command:
sh
-c
until nslookup www.google.com; do echo waiting for google.com; sleep 2; done;
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 12 Nov 2017 17:43:59 +0800
Finished: Sun, 12 Nov 2017 17:43:59 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5qfpj (ro)
Containers:
myapp-container:
Container ID: docker://88cf1ddb39e7b468d9d06c37a7d3ff1ca0d39ae9b0f46d0cf2f1788cb1482118
Image: busybox
Image ID: docker-pullable://busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0
Port: <none>
Command:
sh
-c
echo The app is running! && sleep 3600
State: Running
Started: Sun, 12 Nov 2017 17:44:04 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5qfpj (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-5qfpj:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5qfpj
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1m default-scheduler Successfully assigned myapp-pod to test-9
Normal SuccessfulMountVolume 1m kubelet, test-9 MountVolume.SetUp succeeded for volume "default-token-5qfpj"
Normal Pulling 1m kubelet, test-9 pulling image "busybox"
Normal Pulled 1m kubelet, test-9 Successfully pulled image "busybox"
Normal Created 1m kubelet, test-9 Created container
Normal Started 1m kubelet, test-9 Started container
Normal Pulling 1m kubelet, test-9 pulling image "busybox"
Normal Pulled 1m kubelet, test-9 Successfully pulled image "busybox"
Normal Created 1m kubelet, test-9 Created container
Normal Started 1m kubelet, test-9 Started container
Normal Pulling 1m kubelet, test-9 pulling image "busybox"
Normal Pulled 1m kubelet, test-9 Successfully pulled image "busybox"
Normal Created 1m kubelet, test-9 Created container
Normal Started 1m kubelet, test-9 Started container
root@test-9:~/henry#

Volume

Q: When running a redis key-value store in your pre-production environments many deployments are incoming from CI and leaving behind a lot of stale cache data in redis which is causing test failures. The CI admin has requested that each time a redis key-value-store is deployed in staging that it not persist its data.
Create a pod named non-persistent-redis that specifies a named-volume with name app-cache, and mount path /data/redis. It should launch in the staging namespace and the volume MUST NOT be persistent.
Create a Pod with an emptyDir volume and add namespace: staging in the YAML file

YAML format

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/busybox:latest
    name: test-container
    command: ["/bin/sh", "-c", "sleep 9999"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}

Mounting a file into a pod:

apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    labels:
      io.wise2c.service: xx
      io.wise2c.stack: stack001
    name: stack001-xx
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          io.wise2c.service: xx
          io.wise2c.stack: stack001
      spec:
        containers:
        - image: nginx
          name: xx
          resources:
            limits:
              cpu: 200m
              memory: 1073741824
          volumeMounts:
          - mountPath: /etc/resolv.conf
            name: xx
            subPath: resolv.conf
        volumes:
        - configMap:
            name: stack001-xx
          name: xx
- apiVersion: v1
  data:
    resolv.conf: "\nnameserver 10.96.0.10 \n\nsearch stack001.ns-team-2-env-44.svc.cluster.local\
      \ ns-team-2-env-44.svc.cluster.local svc.cluster.local cluster.local\noptions\
      \ ndots:6"
  kind: ConfigMap
  metadata:
    labels:
      io.wise2c.stack: stack001
    name: stack001-xx
kind: List

Mounting the same volume into different containers of one pod, with a different subPath for each:

apiVersion: v1
kind: Pod
metadata:
  name: my-lamp-site
spec:
  containers:
  - name: mysql
    image: busybox
    command: ["/bin/sh", "-c", "sleep 999"]
    volumeMounts:
    - mountPath: /haha/mysql
      name: site-data
      subPath: mysql
  - name: php
    image: busybox
    command: ["/bin/sh", "-c", "sleep 999"]
    volumeMounts:
    - mountPath: /haha/html
      name: site-data
      subPath: html
  volumes:
  - name: site-data
    hostPath:
      path: /data

Two ways to get persistent volumes

PV: static provisioning; the user has to create the PV by hand before it can be mounted.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  nfs:
    path: /tmp
    server: 172.17.0.2

PVC: the user doesn't have to care about the PV itself; they just declare what kind of storage they need by creating a PVC, and a matching PV is then provisioned automatically from the StorageClass.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: gold
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
    - {key: environment, operator: In, values: [dev]}

Storage Class:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  type: fast
  availability: nova

Pod:

kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: dockerfile/nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
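A quick sanity check after applying the claim and the pod (names from the manifests above):

kubectl get pvc myclaim   # STATUS should become Bound once a matching PV exists
kubectl get pv            # the bound PV lists CLAIM default/myclaim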

Log

Q: Find the error message with the string “Some-error message here”.
https://kubernetes.io/docs/concepts/cluster-administration/logging/
see kubectl logs and /var/log for system services

[root@dev-7 henry]# kcc logs -f --tail=10  orchestration-2080965958-khwfx -c orchestration
[root@dev-7 henry]# kcc logs -f --since=1h orchestration-2080965958-khwfx -c orchestration

The kubelet logs live under /var/log/kubelet on each node.
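On systemd-managed nodes the kubelet often logs to the journal instead, so a sketch for the "find the error string" task:

journalctl -u kubelet --since "1 hour ago" | grep "Some-error message here"
grep -r "Some-error message here" /var/log/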

static pod

Q: Run a Jenkins Pod on a specified node only.
https://kubernetes.io/docs/tasks/administer-cluster/static-pod/
Create the Pod manifest at the specified location and then edit the systemd service file for kubelet (/etc/systemd/system/kubelet.service) to include --pod-manifest-path=/specified/path. Once done, restart the service.

  1. Choose a node where we want to run the static pod. In this example, it’s my-node1.

    [joe@host ~] $ ssh my-node1
  2. Choose a directory, say /etc/kubelet.d, and place a pod definition there, e.g. /etc/kubelet.d/static-pod.yaml:

    [root@my-node1 ~] $ mkdir /etc/kubelet.d/
    [root@my-node1 ~] $ cat <<EOF >/etc/kubelet.d/static-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-pod
    spec:
      containers:
      - image: busybox
        name: test-container
        command: ["/bin/sh", "-c", "sleep 9999"]
    EOF
  3. Configure your kubelet daemon on the node to use this directory by running it with --pod-manifest-path=/etc/kubelet.d/ argument. On Fedora edit /etc/kubernetes/kubelet to include this line:

    KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --pod-manifest-path=/etc/kubelet.d/"
  4. Instructions for other distributions or Kubernetes installations may vary. Restart kubelet. On Fedora, this is:

    [root@my-node1 ~] $ systemctl restart kubelet

The result:

[root@dev-9 manifests]# kubectl get pod
NAME READY STATUS RESTARTS AGE
static-pod-dev-9 1/1 Running 0 34s
[root@dev-9 manifests]#
[root@dev-9 manifests]# kubectl describe pod static-pod-dev-9
Name: static-pod-dev-9
Namespace: default
Node: dev-9/192.168.1.190
Start Time: Sun, 12 Nov 2017 21:21:48 +0800
Labels: <none>
Annotations: kubernetes.io/config.hash=1dcad4affd910f45b5c3a8dbdeec8933
kubernetes.io/config.mirror=1dcad4affd910f45b5c3a8dbdeec8933
kubernetes.io/config.seen=2017-11-12T21:21:48.15196949+08:00
kubernetes.io/config.source=file
Status: Running
IP: 10.244.3.45
Containers:
test-container:
Container ID: docker://ef3e28e45e280e4a50942fc472fd025cb84a7014a64dbc57308cddbfeb1bd979
Image: busybox
Image ID: docker-pullable://busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0
Port: <none>
Command:
/bin/sh
-c
sleep 9999
State: Running
Started: Sun, 12 Nov 2017 21:21:52 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts: <none>
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes: <none>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
[root@dev-9 manifests]#

DNS

Q: Use the utility nslookup to look up the DNS records of the service and pod.
From this guide, https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
Look for “Quick Diagnosis”

Services

$ kubectl exec -ti busybox -- nslookup mysvc.myns.svc.cluster.local

Naming conventions for services and pods:

  1. For a regular service, this resolves to the port number and the CNAME (i.e. to the ClusterIP):
    my-svc.my-namespace.svc.cluster.local.
root@test-9:~/henry# kubectl exec -ti busybox-2520568787-kkmrw -- nslookup nginx.default.svc.cluster.local
Server: 10.43.0.10
Address 1: 10.43.0.10 kube-dns.kube-system.svc.cluster.local

Name: nginx.default
Address 1: 10.43.120.19 nginx.default.svc.cluster.local
root@test-9:~/henry#
  2. For a headless service, this resolves to multiple answers (round-robin over the backing Pod IPs), one for each pod that is backing the service, and contains the port number and a CNAME of the pod of the form
    auto-generated-name.my-svc.my-namespace.svc.cluster.local
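A sketch of such a lookup, assuming a headless service named web-headless in the default namespace:

kubectl exec -ti busybox -- nslookup web-headless.default.svc.cluster.local
# returns one A record per backing pod instead of a single ClusterIP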

Pods

When enabled, pods are assigned a DNS A record in the form of

pod-ip-address.my-namespace.pod.cluster.local

For example, a pod with IP 1.2.3.4 in the namespace default with a DNS name of cluster.local would have an entry: 1-2-3-4.default.pod.cluster.local

root@test-9:~/henry# kubectl exec -ti busybox-2520568787-kkmrw -- nslookup 10-42-236-215.default.pod.cluster.local
Server: 10.43.0.10
Address 1: 10.43.0.10 kube-dns.kube-system.svc.cluster.local

Name: 10-42-236-215.default.pod.cluster.local
Address 1: 10.42.236.215
root@test-9:~/henry#

Ingress

Q 17: Create an Ingress resource, Ingress controller and a Service that resolves to cs.rocks.ch.

  1. First, create controller and default backend
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress/master/controllers/nginx/examples/default-backend.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress/master/examples/deployment/nginx/nginx-ingress-controller.yaml
  2. Second, create a service and expose it

    kubectl run ingress-pod --image=nginx --port 80
    kubectl expose deployment ingress-pod --port=80 --target-port=80 --type=NodePort
  3. Create the Ingress

    cat <<EOF >ingress-cka.yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-service
    spec:
      rules:
      - host: "cs.rocks.ch"
        http:
          paths:
          - backend:
              serviceName: ingress-pod
              servicePort: 80
    EOF
  4. To test, run a curl pod

    kubectl run -i --tty client --image=tutum/curl
    curl -I -L --resolve cs.rocks.ch:80:10.240.0.5 http://cs.rocks.ch/

In my view, on a flannel network you should also be able to reach the ingress by exposing ingress-nginx's ports 80 and 443 through hostPort.
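A sketch of that idea in the controller's container spec (the surrounding Deployment is omitted; values illustrative):

ports:
- containerPort: 80
  hostPort: 80
- containerPort: 443
  hostPort: 443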

  • Mandatory commands

    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml | kubectl apply -f -

    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml | kubectl apply -f -

    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml | kubectl apply -f -

    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml | kubectl apply -f -

    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml | kubectl apply -f -
  • Install with RBAC roles

    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml | kubectl apply -f -

    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml | kubectl apply -f -
  • Verify installation:

    kubectl get pods --all-namespaces -l app=ingress-nginx --watch

TLS

Q: TLS bootstrapping, see
https://coreos.com/kubernetes/docs/latest/openssl.html
https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/
https://github.com/cloudflare/cfssl

  • Download the cfssl binaries
    1. Visit: https://pkg.cfssl.org
    2. Download: cfssl_linux-amd64 => cfssl
    3. Download: cfssljson_linux-amd64 => cfssljson
    4. Download: cfssl-certinfo_linux-amd64 => cfssl-certinfo
  • Certificate creation flow

    1. Create a self-signed CA certificate;
    2. Use the CA cert, the CA private key, the CA config file, and the client's CSR to generate the client's certificate;
  • Step by step

    1. Generate the default config files

      root@test-9:~/henry# ./cfssl print-defaults list
      Default configurations are available for:
      config
      csr
      root@test-9:~/henry# ./cfssl print-defaults config > ca-config.json
      root@test-9:~/henry# ./cfssl print-defaults csr > ca-csr.json
      root@test-9:~/henry#
    2. Write the CA's CSR and generate the CA's own cert:

      root@test-9:~/henry# cat ca-csr.json
      {
          "CN": "Chen Leiji CA",
          "key": {
              "algo": "ecdsa",
              "size": 256
          },
          "names": [
              {
                  "C": "US",
                  "L": "CA",
                  "ST": "San Francisco",
                  "O": "WISE2C",
                  "OU": "development"
              }
          ]
      }

      root@test-9:~/henry#
      root@test-9:~/henry# ./cfssl gencert -initca ca-csr.json | ./cfssljson -bare ca
      2017/11/15 13:16:04 [INFO] generating a new CA key and certificate from CSR
      2017/11/15 13:16:04 [INFO] generate received request
      2017/11/15 13:16:04 [INFO] received CSR
      2017/11/15 13:16:04 [INFO] generating key: ecdsa-256
      2017/11/15 13:16:04 [INFO] encoded CSR
      2017/11/15 13:16:04 [INFO] signed certificate with serial number 84349438505086023342597169366385715026517648791
      root@test-9:~/henry# ls
      ca-config.json ca.csr ca-key.pem ca.pem cfssl cfssljson csr.json
      root@test-9:~/henry#
    3. Inspect the generated CA certificate:

      root@test-9:~/henry# ./cfssl-certinfo -cert ca.pem
      {
          "subject": {
              "common_name": "Chen Leiji CA",
              "country": "US",
              "organization": "WISE2C",
              "organizational_unit": "development",
              "locality": "CA",
              "province": "San Francisco",
              "names": [
                  "US",
                  "San Francisco",
                  "CA",
                  "WISE2C",
                  "development",
                  "Chen Leiji CA"
              ]
          },
          "issuer": {
              "common_name": "Chen Leiji CA",
              "country": "US",
              "organization": "WISE2C",
              "organizational_unit": "development",
              "locality": "CA",
              "province": "San Francisco",
              "names": [
                  "US",
                  "San Francisco",
                  "CA",
                  "WISE2C",
                  "development",
                  "Chen Leiji CA"
              ]
          },
          "serial_number": "84349438505086023342597169366385715026517648791",
          "not_before": "2017-11-15T05:11:00Z",
          "not_after": "2022-11-14T05:11:00Z",
          "sigalg": "ECDSAWithSHA256",
          "authority_key_id": "D4:54:B3:F5:DF:CA:4A:22:E5:E:99:F0:BE:5A:4E:B:89:3C:DC:53",
          "subject_key_id": "D4:54:B3:F5:DF:CA:4A:22:E5:E:99:F0:BE:5A:4E:B:89:3C:DC:53",
          "pem": "-----BEGIN CERTIFICATE-----\nMIICSjCCAfCgAwIBAgIUDsZcEST3fVKpcGgiDP+IvVG1ZZcwCgYIKoZIzj0EAwIw\ncTELMAkGA1UEBhMCVVMxFjAUBgNVBAgTDVNhbiBGcmFuY2lzY28xCzAJBgNVBAcT\nAkNBMQ8wDQYDVQQKEwZXSVNFMkMxFDASBgNVBAsTC2RldmVsb3BtZW50MRYwFAYD\nVQQDEw1DaGVuIExlaWppIENBMB4XDTE3MTExNTA1MTEwMFoXDTIyMTExNDA1MTEw\nMFowcTELMAkGA1UEBhMCVVMxFjAUBgNVBAgTDVNhbiBGcmFuY2lzY28xCzAJBgNV\nBAcTAkNBMQ8wDQYDVQQKEwZXSVNFMkMxFDASBgNVBAsTC2RldmVsb3BtZW50MRYw\nFAYDVQQDEw1DaGVuIExlaWppIENBMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE\nCaeC0bFStNdMcaWjMWtc0/HyC/VrcuALsa7ie5xE1lB6wNtQE1JlDxQUPbOJvHXh\nXJ8Lhtp+GKR3nPWiy6+j36NmMGQwDgYDVR0PAQH/BAQDAgEGMBIGA1UdEwEB/wQI\nMAYBAf8CAQIwHQYDVR0OBBYEFNRUs/Xfykoi5Q6Z8L5aTguJPNxTMB8GA1UdIwQY\nMBaAFNRUs/Xfykoi5Q6Z8L5aTguJPNxTMAoGCCqGSM49BAMCA0gAMEUCIQCIG5Hx\n6Pmhj3LT2dRpGGJW3yj3+r9txDjMUH7+CtvJ/AIga5REzrYKSByjSKrMmoa2NPl2\nIIKQ2hASUgXI3565qdc=\n-----END CERTIFICATE-----\n"
      }
      root@test-9:~/henry#
    4. Configure the CA profiles (the profile names here correspond to the --profile value passed when generating client certs):

      root@test-9:~/henry# cat ca-config.json
      {
          "signing": {
              "default": {
                  "expiry": "168h"
              },
              "profiles": {
                  "server": {
                      "expiry": "8760h",
                      "usages": [
                          "signing",
                          "key encipherment",
                          "server auth"
                      ]
                  },
                  "client": {
                      "expiry": "8760h",
                      "usages": [
                          "signing",
                          "key encipherment",
                          "client auth"
                      ]
                  },
                  "peer": {
                      "expiry": "8760h",
                      "usages": [
                          "signing",
                          "key encipherment",
                          "server auth"
                      ]
                  }
              }
          }
      }

      root@test-9:~/henry#
    5. Edit the client csr.json:

      Get the template file:

      root@test-9:~/henry# ./cfssl print-defaults csr
      {
          "CN": "example.net",
          "hosts": [
              "example.net",
              "www.example.net"
          ],
          "key": {
              "algo": "ecdsa",
              "size": 256
          },
          "names": [
              {
                  "C": "US",
                  "L": "CA",
                  "ST": "San Francisco"
              }
          ]
      }

      root@test-9:~/henry#

      Edit the CSR; the main change is the hosts field:

      root@test-9:~/henry# cat csr.json
      {
          "CN": "Chen Leiji Server",
          "key": {
              "algo": "ecdsa",
              "size": 256
          },
          "hosts": [
              "www.wise2c.com"
          ],
          "names": [
              {
                  "C": "US",
                  "L": "CA",
                  "ST": "San Francisco",
                  "O": "WISE2C",
                  "OU": "development"
              }
          ]
      }
    6. Generate the client certificate

      root@test-9:~/henry# ./cfssl gencert -ca=ca.pem -ca-key=ca-key.pem --config=ca-config.json --hostname="www.wise2c.com" --profile="server" csr.json | ./cfssljson -bare server
      2017/11/15 14:34:07 [INFO] generate received request
      2017/11/15 14:34:07 [INFO] received CSR
      2017/11/15 14:34:07 [INFO] generating key: ecdsa-256
      2017/11/15 14:34:07 [INFO] encoded CSR
      2017/11/15 14:34:07 [INFO] signed certificate with serial number 408368599847170747880405926931506246283785194006
      root@test-9:~/henry#
      root@test-9:~/henry# ls
      ca-config.json ca.csr ca-key.pem ca.pem cfssl cfssl-certinfo cfssljson csr.json server.csr server-key.pem server.pem
      root@test-9:~/henry#
      root@test-9:~/henry# ./cfssl-certinfo -cert server.pem
      {
          "subject": {
              "common_name": "Chen Leiji Server",
              "country": "US",
              "organization": "WISE2C",
              "organizational_unit": "development",
              "locality": "CA",
              "province": "San Francisco",
              "names": [
                  "US",
                  "San Francisco",
                  "CA",
                  "WISE2C",
                  "development",
                  "Chen Leiji Server"
              ]
          },
          "issuer": {
              "common_name": "Chen Leiji CA",
              "country": "US",
              "organization": "WISE2C",
              "organizational_unit": "development",
              "locality": "CA",
              "province": "San Francisco",
              "names": [
                  "US",
                  "San Francisco",
                  "CA",
                  "WISE2C",
                  "development",
                  "Chen Leiji CA"
              ]
          },
          "serial_number": "408368599847170747880405926931506246283785194006",
          "sans": [
              "www.wise2c.com"
          ],
          "not_before": "2017-11-15T06:29:00Z",
          "not_after": "2018-11-15T06:29:00Z",
          "sigalg": "ECDSAWithSHA256",
          "authority_key_id": "D4:54:B3:F5:DF:CA:4A:22:E5:E:99:F0:BE:5A:4E:B:89:3C:DC:53",
          "subject_key_id": "1D:DB:C:A:34:9D:50:98:B0:F7:7D:E5:43:AF:3:D:9E:7D:92:C4",
          "pem": "-----BEGIN CERTIFICATE-----\nMIICeTCCAiCgAwIBAgIUR4fhn28TfjY12WtKZvStTxZMyhYwCgYIKoZIzj0EAwIw\ncTELMAkGA1UEBhMCVVMxFjAUBgNVBAgTDVNhbiBGcmFuY2lzY28xCzAJBgNVBAcT\nAkNBMQ8wDQYDVQQKEwZXSVNFMkMxFDASBgNVBAsTC2RldmVsb3BtZW50MRYwFAYD\nVQQDEw1DaGVuIExlaWppIENBMB4XDTE3MTExNTA2MjkwMFoXDTE4MTExNTA2Mjkw\nMFowdTELMAkGA1UEBhMCVVMxFjAUBgNVBAgTDVNhbiBGcmFuY2lzY28xCzAJBgNV\nBAcTAkNBMQ8wDQYDVQQKEwZXSVNFMkMxFDASBgNVBAsTC2RldmVsb3BtZW50MRow\nGAYDVQQDExFDaGVuIExlaWppIFNlcnZlcjBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABNS8mQT/DGMqig0Ju4VwcKtJoleoUF/lJokUhGKVudxIDRPMlgfHI7etIOBD\nPPhgrDdBWMEZHqZ0qLhmvp2v4G6jgZEwgY4wDgYDVR0PAQH/BAQDAgWgMBMGA1Ud\nJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFB3bDAo0nVCY\nsPd95UOvAw2efZLEMB8GA1UdIwQYMBaAFNRUs/Xfykoi5Q6Z8L5aTguJPNxTMBkG\nA1UdEQQSMBCCDnd3dy53aXNlMmMuY29tMAoGCCqGSM49BAMCA0cAMEQCIGou6e5c\nhQR0E3+piwH7VDchIlFUvU3OEttxqPnwYUqSAiBOgjYLgbJH07nf2mYqbegRkmpY\nwSmMdvZBSHvLyw81lA==\n-----END CERTIFICATE-----\n"
      }
      root@test-9:~/henry#
    7. Copy the certs into the system and refresh the trust store:

      chmod 600 *-key.pem
      cp ~/cfssl/ca.pem /etc/ssl/certs/

      update-ca-trust

Installation

Q: Setting up K8s master components from binaries / tarballs

Also, convert CRT to PEM: openssl x509 -in abc.crt -out abc.pem
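To sanity-check the converted file (standard openssl x509 options):

openssl x509 -in abc.pem -noout -text      # dump subject, SANs, validity
openssl x509 -in abc.pem -noout -enddate   # expiry date only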

life-cycle

Q: Perform a rolling update on a deployment, then roll it back

  • RollingUpdate (for a deployment, kubectl set seems to be limited to changing image, resources, selector, and subject)
[root@dev-7 henry]# kubectl run demo --image=nginx --port=80 --replicas=2 --labels="cka=true"
[root@dev-7 henry]#
[root@dev-7 henry]# kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
demo 2 2 2 2 4m
[root@dev-7 henry]#
[root@dev-7 henry]# kubectl get pod -l cka=true
NAME READY STATUS RESTARTS AGE
demo-2959463917-gbv3r 1/1 Running 0 1m
demo-2959463917-j76m9 1/1 Running 0 1m
[root@dev-7 henry]#
[root@dev-7 henry]# kubectl set --help
Configure application resources

These commands help you make changes to existing application resources.

Available Commands:
image Update image of a pod template
resources Update resource requests/limits on objects with pod templates
selector Set the selector on a resource
subject Update User, Group or ServiceAccount in a RoleBinding/ClusterRoleBinding
[root@dev-7 henry]#
[root@dev-7 henry]# kubectl set image deploy/demo demo=mysql
[root@dev-7 henry]#
[root@dev-7 henry]# kubectl rollout history deploy/demo
deployments "demo"
REVISION CHANGE-CAUSE
1 <none>
2 <none>
[root@dev-7 henry]# kubectl rollout history deploy/demo --revision=2
deployments "demo" with revision #2
Pod Template:
Labels: cka=true
pod-template-hash=2216264665
Containers:
demo:
Image: mysql
Port: 80/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>

[root@dev-7 henry]#
[root@dev-7 henry]# kubectl rollout undo deploy/demo
[root@dev-7 henry]#
[root@dev-7 henry]# kubectl rollout history deploy/demo
deployments "demo"
REVISION CHANGE-CAUSE
2 <none>
3 <none>
[root@dev-7 henry]#
[root@dev-7 henry]# kubectl rollout history deploy/demo --revision=3
deployments "demo" with revision #3
Pod Template:
Labels: cka=true
pod-template-hash=1786957899
Containers:
demo:
Image: nginx
Port: 80/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>

[root@dev-7 henry]#
[root@dev-7 henry]# kubectl rollout undo deploy/demo --to-revision=2

A more conservative approach is to pause the rollout first, batch up the changes, check that everything looks right, and only then resume:

[root@dev-7 henry]# kubectl rollout pause deploy/demo
deployment "demo" paused
[root@dev-7 henry]#
[root@dev-7 henry]# kubectl set image deploy/demo demo=busybox
deployment "demo" image updated
[root@dev-7 henry]#
[root@dev-7 henry]# kubectl set resources deploy/demo -c=demo --limits=cpu=200m,memory=512Mi
deployment "demo" resource requirements updated
[root@dev-7 henry]#
[root@dev-7 henry]# kubectl rollout resume deploy/demo
deployment "demo" resumed
[root@dev-7 henry]#

Besides that, a rolling update can also be driven by kubectl apply:

[root@dev-7 henry]# kubectl apply -f demo.yaml --record
deployment "demo" configured
[root@dev-7 henry]#
[root@dev-7 henry]# kubectl rollout history deploy/demo
deployments "demo"
REVISION CHANGE-CAUSE
4 <none>
5 <none>
6 <none>
7 <none>
8 kubectl apply --filename=demo.yaml --record=true
[root@dev-7 henry]#
  • Autoscaling:

    [root@dev-7 henry]# kubectl autoscale deploy/demo --min=10 --max=15 --cpu-percent=80
    deployment "demo" autoscaled
  • Hook

    Containers support two lifecycle hooks:

    1. postStart, called right after the container starts
    2. preStop, called right before the container is stopped

    Two kinds of hook handlers are supported:

    1. Exec
    2. HTTP
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/usr/sbin/nginx","-s","quit"]

kubectl taint

Q: Taint a node

Notes:

  1. The key:value in a taint has no relationship at all to the node's labels!
  2. When adding a taint, specify it as key=value:effect
  3. When removing a taint, the value is not needed; the format is key:effect- (note the trailing minus)
root@test-9:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-5b444f5b58-dpvzq 1/1 Running 0 2m 10.244.0.7 test-9
nginx-5b444f5b58-k6qxp 1/1 Running 0 2m 10.244.0.8 test-9
nginx-5b444f5b58-n7prf 1/1 Running 0 2m 10.244.0.9 test-9
nginx-5b444f5b58-r4265 1/1 Running 0 2m 10.244.0.11 test-9
nginx-5b444f5b58-rs2hn 1/1 Running 0 2m 10.244.0.10 test-9
nginx-5b444f5b58-v6r2x 1/1 Running 0 2m 10.244.0.6 test-9
root@test-9:~#
root@test-9:~# kubectl taint node test-9 taint=true:NoExecute
node "test-9" tainted
root@test-9:~#
root@test-9:~# kubectl describe node test-9
Name: test-9
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=test-9
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data={"VtepMAC":"9a:e5:cf:c9:fb:79"}
flannel.alpha.coreos.com/backend-type=vxlan
flannel.alpha.coreos.com/kube-subnet-manager=true
flannel.alpha.coreos.com/public-ip=10.144.96.185
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: taint=true:NoExecute
CreationTimestamp: Mon, 13 Nov 2017 20:56:37 +0800
root@test-9:~#
root@test-9:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-5b444f5b58-2s5dw 1/1 Running 0 28s 10.244.1.24 test-10
nginx-5b444f5b58-b6pds 1/1 Running 0 28s 10.244.1.23 test-10
nginx-5b444f5b58-cg75j 1/1 Running 0 28s 10.244.1.21 test-10
nginx-5b444f5b58-d8nbl 1/1 Running 0 28s 10.244.1.20 test-10
nginx-5b444f5b58-pncbm 1/1 Running 0 28s 10.244.1.18 test-10
nginx-5b444f5b58-zbc4h 1/1 Running 0 28s 10.244.1.22 test-10
root@test-9:~#
root@test-9:~# kubectl taint node test-9 taint:NoExecute-
node "test-9" untainted
root@test-9:~#
  • Supported effects:
    NoSchedule / NoExecute / PreferNoSchedule
kubectl taint nodes node1 key1=value1:NoSchedule
kubectl taint nodes node1 key1=value1:NoExecute
kubectl taint nodes node1 key2=value2:NoSchedule
  • Tolerations support:

    1. Matching a specific key/value plus effect
      tolerations:

      • key: "key"
        operator: "Equal"
        value: "value"
        effect: "NoSchedule"
    2. Matching any value of a key, with a given effect
      tolerations:

      • key: "key"
        operator: "Exists"
        effect: "NoSchedule"
    3. Tolerating everything (any key)
      tolerations:

      • operator: "Exists"
    4. Matching any taint with the given key
      tolerations:

      • key: "key"
        operator: "Exists"
    5. tolerationSeconds: how long the pod may keep running after the taint is added to the node (it is evicted once the time is up)
      tolerations:

      • key: "key1"
        operator: "Equal"
        value: "value1"
        effect: "NoExecute"
        tolerationSeconds: 3600

Example:

root@test-9:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-5b444f5b58-2s5dw 1/1 Running 0 16m 10.244.1.24 test-10
nginx-5b444f5b58-b6pds 1/1 Running 0 16m 10.244.1.23 test-10
nginx-5b444f5b58-cg75j 1/1 Running 0 16m 10.244.1.21 test-10
nginx-5b444f5b58-d8nbl 1/1 Running 0 16m 10.244.1.20 test-10
nginx-5b444f5b58-pncbm 1/1 Running 0 16m 10.244.1.18 test-10
nginx-5b444f5b58-zbc4h 1/1 Running 0 16m 10.244.1.22 test-10
root@test-9:~#
root@test-9:~# kubectl taint node test-9 taint=true:NoExecute
node "test-9" tainted
root@test-9:~#
root@test-9:~# kubectl edit deploy nginx
deployment "nginx" edited
root@test-9:~#
root@test-9:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-9bf4c9c69-27r6m 1/1 Running 0 17s 10.244.1.26 test-10
nginx-9bf4c9c69-cnjk2 1/1 Running 0 23s 10.244.0.12 test-9
nginx-9bf4c9c69-fttrd 1/1 Running 0 23s 10.244.1.25 test-10
nginx-9bf4c9c69-jw7w2 1/1 Running 0 11s 10.244.1.27 test-10
nginx-9bf4c9c69-s57h2 1/1 Running 0 12s 10.244.0.14 test-9
nginx-9bf4c9c69-z8jrn 1/1 Running 0 18s 10.244.0.13 test-9
root@test-9:~#
root@test-9:~# kubectl get deploy nginx -o yaml | grep tolerations -C 5
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
tolerations:
- operator: Exists
status:
availableReplicas: 6
conditions:
- lastTransitionTime: 2017-11-13T13:23:03Z
root@test-9:~#

Secret

  • generic
root@test-9:~# kubectl create secret generic demo --from-literal=user=chenleji --from-literal=passwd=123
secret "demo" created
root@test-9:~#
root@test-9:~# kubectl get secret
NAME TYPE DATA AGE
default-token-wgrhs kubernetes.io/service-account-token 3 1h
demo Opaque 2 4s
root@test-9:~#
root@test-9:~# kubectl get secret demo -o yaml
apiVersion: v1
data:
  passwd: MTIz
  user: Y2hlbmxlamk=
kind: Secret
metadata:
  creationTimestamp: 2017-11-13T14:12:00Z
  name: demo
  namespace: default
  resourceVersion: "7108"
  selfLink: /api/v1/namespaces/default/secrets/demo
  uid: 9da9b9f4-c87c-11e7-9401-525400545760
type: Opaque
root@test-9:~#
root@test-9:~# echo -n MTIz | base64 --decode
123
root@test-9:~# echo -n Y2hlbmxlamk= | base64 --decode
chenleji
root@test-9:~#
root@test-9:~#
  • TLS

    kubectl create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key
  • Registry

    kubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER \
    --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
  • volume mount

Without specifying file names for the mount (every key in the secret becomes a file):

root@test-9:~# kubectl get deploy -o yaml | grep volume -C 5
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /secret
          name: secret
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      tolerations:
      - operator: Exists
      volumes:
      - name: secret
        secret:
          defaultMode: 420
          secretName: demo
  status:
root@test-9:~#
root@test-9:~# kubectl exec -ti nginx-557769d5c5-45sdq /bin/bash
root@nginx-557769d5c5-45sdq:/# ls -l /secret/
total 0
lrwxrwxrwx 1 root root 13 Nov 13 14:23 passwd -> ..data/passwd
lrwxrwxrwx 1 root root 11 Nov 13 14:23 user -> ..data/user
root@nginx-557769d5c5-45sdq:/#
root@nginx-557769d5c5-45sdq:/# cat /secret/passwd
123
root@nginx-557769d5c5-45sdq:/#

Specifying the file name for the mount (via items/path):

root@test-9:~# kubectl describe secret demo
Name: demo
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
passwd: 3 bytes
user: 8 bytes
root@test-9:~#
root@test-9:~# kubectl get deploy nginx -o yaml | grep volume -C 8
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /secret
          name: secret
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      tolerations:
      - operator: Exists
      volumes:
      - name: secret
        secret:
          defaultMode: 420
          items:
          - key: user
            path: haha/xx
          secretName: demo
status:
root@test-9:~#
root@nginx-657c6dcd4c-56p5h:/# cat /secret/haha/xx
chenleji
root@nginx-657c6dcd4c-56p5h:/#

  • env
root@test-9:~# kubectl get deploy nginx -o yaml | grep env -C 6
    metadata:
      creationTimestamp: null
      labels:
        demo: "true"
    spec:
      containers:
      - env:
        - name: SECRET_USER
          valueFrom:
            secretKeyRef:
              key: user
              name: demo
        image: nginx
root@test-9:~#
root@test-9:~# kubectl exec -ti nginx-548c9c4846-dgnbk /bin/bash
root@nginx-548c9c4846-dgnbk:/# env | grep SECRET
SECRET_USER=chenleji
root@nginx-548c9c4846-dgnbk:/#

ENV

  • Use Pod Field
root@test-9:~# kubectl get deploy -o yaml | grep env -C 10
        maxSurge: 1
        maxUnavailable: 1
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          demo: "true"
      spec:
        containers:
        - env:
          - name: MY_NODE_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: spec.nodeName
          - name: SECRET_USER
            valueFrom:
              secretKeyRef:
                key: user
                name: demo
root@test-9:~#
root@test-9:~# kubectl exec -ti nginx-f7d4dc847-skb74 /bin/bash
root@nginx-f7d4dc847-skb74:/# env | grep MY_NODE
MY_NODE_NAME=test-10
root@nginx-f7d4dc847-skb74:/#
  • Use Container Field
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-resourcefieldref
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox:1.24
    command: [ "sh", "-c"]
    args:
    - while true; do
        echo -en '\n';
        printenv MY_CPU_REQUEST MY_CPU_LIMIT;
        printenv MY_MEM_REQUEST MY_MEM_LIMIT;
        sleep 10;
      done;
    resources:
      requests:
        memory: "32Mi"
        cpu: "125m"
      limits:
        memory: "64Mi"
        cpu: "250m"
    env:
    - name: MY_CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: test-container
          resource: requests.cpu
    - name: MY_CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: test-container
          resource: limits.cpu
  restartPolicy: Never
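To see the injected values (pod name from the manifest above):

kubectl logs dapi-envars-resourcefieldref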