CKA Exam Knowledge Summary - 1

This is a very, very long article: the notes I made while preparing for the CKA exam shortly after it launched last year. Based on the questions I encountered, I have listed the specific knowledge points. When I took the exam it was still on v1.7; by now it should be up to v1.12.

Update: this week I received an email from CNCF saying that the CKA certificate, originally valid for two years, has been extended by another year (is this meant to tempt everyone into getting certified? ^_^).

The CKA Certificate

As k8s has risen to fame, plenty of domestic companies have started pushing expensive "guaranteed pass" training courses. I can only say that CNCF seems a bit short on community spirit here and leans more toward a commercial model. Still, as long as it drives cloud native forward, so be it~

To give this article some credibility, I am showing off my certificate (the certification ID ends in 0100, so presumably I was exactly the 100th person to pass). Please hold the bricks!

Study Materials

Enough preamble; let's get to the point.

Job

Q: Create a Job that runs 60 times with 2 pods running in parallel

参考资料:https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/

yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 10          # the exam task above asks for 60 here
  parallelism: 2
  activeDeadlineSeconds: 2 # caps the Job's total runtime; 2s would kill this Job almost immediately
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

Parallel Jobs

| Job type | Example use | Behavior | completions | parallelism | Notes |
| --- | --- | --- | --- | --- | --- |
| One-shot Job | Database migration | Creates a single Pod and runs it until it ends successfully | 1 | 1 | |
| Job with a fixed completion count | Pods processing a work queue | Creates Pods one after another until `completions` of them end successfully | 2+ | 1 | |
| Parallel Job with a fixed completion count | Multiple Pods processing a work queue at once | Creates multiple Pods and runs them until `completions` of them end successfully | 2+ | 2+ | |
| Parallel Job | Multiple Pods processing a work queue at once | Creates one or more Pods and runs them until one ends successfully | 1 | 2+ | Does not actually create several; only a single Pod is created |
  • kubectl scale job
    A job can be scaled up using the kubectl scale command. For example, the following command sets .spec.parallelism of a job called myjob to 10:

    1
    2
    $ kubectl scale  --replicas=10 jobs/myjob
    job "myjob" scaled
  • Notes

    1. parallelism: the number of Pods run in parallel;
    2. completions: the Job ends once this many runs have succeeded;
    3. restartPolicy only supports Never or OnFailure;
    4. activeDeadlineSeconds caps how long failed Pods keep being retried; past this deadline no further retries happen;
    5. kubectl scale actually modifies the Job's parallelism field and has no effect on completions (see the quick check below).
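
    A quick check of note 5 (a sketch; myjob is a placeholder Job name):

    $ kubectl scale --replicas=10 jobs/myjob
    $ kubectl get job myjob -o jsonpath='{.spec.parallelism}'   # now prints 10
    $ kubectl get job myjob -o jsonpath='{.spec.completions}'   # unchanged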

Cronjob

Cron expression format (five fields: minute, hour, day of month, month, day of week).

If a field is */5, it means "every 5 units"; in the minute field, for example, it means every 5 minutes.

root@test-9:~# kubectl run cronjob --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "sleep 99"
cronjob "cronjob" created
root@test-9:~#
root@test-9:~# kubectl get cronjob
NAME      SCHEDULE      SUSPEND   ACTIVE    LAST SCHEDULE   AGE
cronjob   */1 * * * *   False     0         <none>
root@test-9:~#
root@test-9:~# kubectl get job
NAME                 DESIRED   SUCCESSFUL   AGE
cronjob-1510581480   1         0            1m
cronjob-1510581540   1         0            14s
root@test-9:~#
root@test-9:~# kubectl get pod
NAME                       READY     STATUS    RESTARTS   AGE
cronjob-1510581480-r49rq   1/1       Running   0          1m
cronjob-1510581540-tl4hn   1/1       Running   0          16s
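
On newer clusters, where kubectl run --schedule has been removed, the same CronJob can be written as a manifest instead (a minimal sketch; the batch API version depends on the cluster: v2alpha1 on 1.7, v1beta1 on 1.8-1.20, v1 from 1.21):

apiVersion: batch/v1beta1   # adjust to your cluster version
kind: CronJob
metadata:
  name: cronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronjob
            image: busybox
            command: ["/bin/sh", "-c", "sleep 99"]
          restartPolicy: OnFailure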

kubectl top

Q: Find which Pod is taking max CPU
Use kubectl top to find CPU usage per pod

kubectl top node

root@test-9:~/henry# kubectl top nodes
NAME      CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
test-10   41m          1%        2230Mi          14%
test-9    104m         2%        4931Mi          31%
root@test-9:~/henry#
root@test-9:~/henry# kubectl top nodes | awk '{print $1 "\t" $3|"sort -r -n"}'
test-9    2%
test-10   1%
NAME      CPU%

sort options: -r reverses the order; -n compares numerically.
In the awk print, "\t" separates the two columns, and the pipe inside awk sends the output through sort.
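
The exam question asks about Pods rather than nodes; the same trick works with kubectl top pods (a sketch; assumes Heapster/metrics-server is running so top works at all, and with --all-namespaces the CPU(cores) column is column 3):

$ kubectl top pods --all-namespaces | sort -k3 -n -r | head -5   # top 5 Pods by CPU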

Sorting Output

Q: List all PersistentVolumes sorted by their name
Use kubectl get pv --sort-by=.metadata.name <- this problem is buggy & also by default kubectl gives the output sorted by name.

Sorting

root@test-9:~/henry# kcs get svc --sort-by=.metadata.uid   # kcs: a local alias, presumably "kubectl -n kube-system"
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
tiller-deploy          ClusterIP   10.43.155.15    <none>        44134/TCP       2h
monitoring-influxdb    ClusterIP   10.43.227.43    <none>        8086/TCP        2h
monitoring-grafana     ClusterIP   10.43.217.185   <none>        80/TCP          2h
kube-dns               ClusterIP   10.43.0.10      <none>        53/UDP,53/TCP   2h
kubernetes-dashboard   ClusterIP   10.43.36.245    <none>        9090/TCP        2h
heapster               ClusterIP   10.43.250.217   <none>        80/TCP          2h
root@test-9:~/henry#
root@test-9:~/henry# kcs get svc --sort-by=.metadata.name
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
heapster               ClusterIP   10.43.250.217   <none>        80/TCP          2h
kube-dns               ClusterIP   10.43.0.10      <none>        53/UDP,53/TCP   2h
kubernetes-dashboard   ClusterIP   10.43.36.245    <none>        9090/TCP        2h
monitoring-grafana     ClusterIP   10.43.217.185   <none>        80/TCP          2h
monitoring-influxdb    ClusterIP   10.43.227.43    <none>        8086/TCP        2h
tiller-deploy          ClusterIP   10.43.155.15    <none>        44134/TCP       2h

root@test-9:~/henry# kcs get svc heapster -o json
{
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "creationTimestamp": "2017-11-12T03:27:51Z",
        "labels": {
            "kubernetes.io/cluster-service": "true",
            "kubernetes.io/name": "Heapster",
            "task": "monitoring"
        },
        "name": "heapster",
        "namespace": "kube-system",
        "resourceVersion": "229",
        "selfLink": "/api/v1/namespaces/kube-system/services/heapster",
        "uid": "769529c5-c759-11e7-8dee-02cdc7a8bd69"
    },
    "spec": {
        "clusterIP": "10.43.250.217",
        "ports": [
            {
                "port": 80,
                "protocol": "TCP",
                "targetPort": 8082
            }
        ],
        "selector": {
            "k8s-app": "heapster"
        },
        "sessionAffinity": "None",
        "type": "ClusterIP"
    },
    "status": {
        "loadBalancer": {}
    }
}
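
Any field visible in the JSON above can serve as the --sort-by key, so the exam task itself is just:

$ kubectl get pv --sort-by=.metadata.name
$ kubectl get svc --sort-by=.metadata.creationTimestamp   # another handy key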

Querying Resources

# Get commands with basic output
$ kubectl get services # List all services in the namespace
$ kubectl get pods --all-namespaces # List all pods in all namespaces
$ kubectl get pods -o wide # List all pods in the namespace, with more details
$ kubectl get deployment my-dep # List a particular deployment
$ kubectl get pods --include-uninitialized # List all pods in the namespace, including uninitialized ones

# Describe commands with verbose output
$ kubectl describe nodes my-node
$ kubectl describe pods my-pod

$ kubectl get services --sort-by=.metadata.name # List Services Sorted by Name

# List pods Sorted by Restart Count
$ kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'

# Get the version label of all pods with label app=cassandra
$ kubectl get pods --selector=app=cassandra rc -o \
jsonpath='{.items[*].metadata.labels.version}'

# Get ExternalIPs of all nodes
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'

# List Names of Pods that belong to Particular RC
# "jq" command useful for transformations that are too complex for jsonpath, it can be found at https://stedolan.github.io/jq/
$ sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?}
$ echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})

# Check which nodes are ready
$ JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
&& kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"

# List all Secrets currently in use by a pod
$ kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq

Common Commands

kubectl run

root@test-9:~# kubectl run demo-1 --image=busybox:latest --env="env1=wise2c" --port=80 --hostport=30098 --restart='Always' --image-pull-policy='Always' --limits="cpu=200m,memory=512Mi" --replicas=2 -- sleep 60
deployment "demo-1" created
root@test-9:~#
root@test-9:~# kubectl get pod
NAME                      READY     STATUS              RESTARTS   AGE
demo-1-4031462666-1m6lc   0/1       ContainerCreating   0          4s
demo-1-4031462666-3sph3   0/1       ContainerCreating   0          4s
root@test-9:~#
root@test-9:~# kubectl get deploy demo-1 -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2017-11-12T06:20:52Z
  generation: 1
  labels:
    run: demo-1
  name: demo-1
  namespace: default
  resourceVersion: "13667"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/demo-1
  uid: a24a6b2b-c771-11e7-8dee-02cdc7a8bd69
spec:
  replicas: 2
  selector:
    matchLabels:
      run: demo-1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: demo-1
    spec:
      containers:
      - args:
        - sleep
        - "60"
        env:
        - name: env1
          value: wise2c
        image: busybox:latest
        imagePullPolicy: Always
        name: demo-1
        ports:
        - containerPort: 80
          hostPort: 30098
          protocol: TCP
        resources:
          limits:
            cpu: 200m
            memory: 512Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2017-11-12T06:22:03Z
    lastUpdateTime: 2017-11-12T06:22:03Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2
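
For the exam it is often quicker to let kubectl run generate a manifest than to write one by hand (a sketch; --dry-run -o yaml prints the object without creating it):

$ kubectl run demo-1 --image=busybox:latest --replicas=2 --dry-run -o yaml > demo-1.yaml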

kubectl expose

root@test-9:~# kubectl expose deploy nginx2 --name=nginx --port=80 --target-port=80 --protocol=TCP --type=ClusterIP
service "nginx" exposed
root@test-9:~# kubectl get svc nginx -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-11-12T07:32:48Z
  labels:
    run: nginx2
  name: nginx
  namespace: default
  resourceVersion: "20097"
  selfLink: /api/v1/namespaces/default/services/nginx
  uid: ae7774d4-c77b-11e7-8dee-02cdc7a8bd69
spec:
  clusterIP: 10.43.221.216
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx2
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
root@test-9:~#
root@test-9:~# kubectl expose deploy nginx2 --name=nginx --port=80 --target-port=80 --protocol=TCP --type=NodePort
service "nginx" exposed
root@test-9:~# kubectl get svc nginx -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-11-12T07:35:21Z
  labels:
    run: nginx2
  name: nginx
  namespace: default
  resourceVersion: "20296"
  selfLink: /api/v1/namespaces/default/services/nginx
  uid: 0a19d690-c77c-11e7-8dee-02cdc7a8bd69
spec:
  clusterIP: 10.43.120.19
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30014
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx2
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
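
To confirm the Service actually selects the nginx2 Pods, the Endpoints object can be checked (a quick sketch):

$ kubectl get endpoints nginx   # lists the Pod IP:port pairs backing the Service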

port-forward

root@test-9:~# kubectl get pod -o wide
NAME                    READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-f7d4dc847-bzlzq   1/1       Running   0          11h       10.244.0.24   test-9
nginx-f7d4dc847-lcq57   1/1       Running   0          11h       10.244.1.45   test-10
nginx-f7d4dc847-qs28j   1/1       Running   0          11h       10.244.0.25   test-9
nginx-f7d4dc847-s4xml   1/1       Running   0          11h       10.244.1.44   test-10
nginx-f7d4dc847-skb74   1/1       Running   0          11h       10.244.1.43   test-10
nginx-f7d4dc847-x9vh4   1/1       Running   0          11h       10.244.0.26   test-9
root@test-9:~# kubectl port-forward nginx-f7d4dc847-bzlzq 9090:80
Forwarding from 127.0.0.1:9090 -> 80
Handling connection for 9090
root@test-9:~#
root@test-9:~# curl 127.0.0.1:9090
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
...
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@test-9:~#
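
On newer kubectl (roughly 1.10+), the forward target can also be a Deployment or Service instead of a specific Pod (a sketch):

$ kubectl port-forward svc/nginx 9090:80      # picks one Pod behind the Service
$ kubectl port-forward deploy/nginx 9090:80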

NetworkPolicy

Q: Create a NetworkPolicy to allow connect to port 8080 by busybox pod only
https://kubernetes.io/docs/concepts/services-networking/network-policies/
Make sure to use apiVersion: extensions/v1beta1 which works on both 1.6 and 1.7

  • Before the policy takes effect, the namespace must first be annotated to deny all ingress traffic;
  • podSelector.matchLabels: defines which (destination) Pods the rule applies to;
  • ingress: allows Pods carrying the label "access=true" to reach those Pods;
root@test-9:~# kubectl annotate ns default "net.beta.kubernetes.io/network-policy={\"ingress\": {\"isolation\": \"DefaultDeny\"}}"
namespace "default" annotated
root@test-9:~#
root@test-9:~# kubectl describe ns default
Name:         default
Labels:       <none>
Annotations:  net.beta.kubernetes.io/network-policy={"ingress": {"isolation": "DefaultDeny"}}
Status:       Active

No resource quota.
No resource limits.
root@test-9:~#
root@test-9:~/henry# kubectl get pod --show-labels
NAME                      READY     STATUS    RESTARTS   AGE       LABELS
nginx2-2627548522-6f5kf   1/1       Running   0          22m       pod-template-hash=2627548522,run=nginx
nginx2-2627548522-8w87b   1/1       Running   0          22m       pod-template-hash=2627548522,run=nginx
root@test-9:~/henry# kubectl get svc nginx --show-labels
NAME      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE       LABELS
nginx     NodePort   10.43.120.19   <none>        80:30014/TCP   16m       run=nginx
root@test-9:~/henry# cat network-policy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          access: "true"
    ports:
    - protocol: TCP
      port: 80
root@test-9:~/henry# kubectl get netpol
NAME           POD-SELECTOR   AGE
access-nginx   run=nginx      2m
root@test-9:~/henry# kubectl get netpol access-nginx -o yaml
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  creationTimestamp: 2017-11-12T07:40:38Z
  generation: 1
  name: access-nginx
  namespace: default
  resourceVersion: "20699"
  selfLink: /apis/extensions/v1beta1/namespaces/default/networkpolicies/access-nginx
  uid: c72191d1-c77c-11e7-8dee-02cdc7a8bd69
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"
    ports:
    - port: 80
      protocol: TCP
  podSelector:
    matchLabels:
      run: nginx
root@test-9:~/henry#
root@test-9:~/henry# kubectl run busybox --rm -ti --labels="access=true" --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget nginx
Connecting to nginx (10.43.120.19:80)
index.html           100% |********************************************************************************************|   612   0:00:00 ETA
/ #
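
The deny side can be verified the same way with a Pod that lacks the label (a sketch; busybox wget's -T flag sets a timeout, and the request should now fail):

$ kubectl run busybox2 --rm -ti --image=busybox /bin/sh
/ # wget -T 5 nginx    # expected to time out: this Pod carries no access=true label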

Node Broken

Q: fixing broken nodes, see
https://kubernetes.io/docs/concepts/architecture/nodes/

  • Node fields
    Conditions
    Addresses
root@test-9:~# kubectl describe nodes
Name:               test-10
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=rancher
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=Region1
                    failure-domain.beta.kubernetes.io/zone=FailureDomain1
                    io.rancher.host.docker_version=1.12
                    io.rancher.host.linux_kernel_version=4.4
                    kubernetes.io/hostname=test-10
Annotations:        io.rancher.labels.io.rancher.host.docker_version=
                    io.rancher.labels.io.rancher.host.linux_kernel_version=
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             <none>
CreationTimestamp:  Sun, 12 Nov 2017 11:27:45 +0800
Conditions:
  Type            Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----            ------  -----------------                 ------------------                ------                      -------
  OutOfDisk       False   Sun, 12 Nov 2017 15:16:39 +0800   Sun, 12 Nov 2017 11:27:45 +0800   KubeletHasSufficientDisk    kubelet has sufficient disk space available
  MemoryPressure  False   Sun, 12 Nov 2017 15:16:39 +0800   Sun, 12 Nov 2017 11:27:45 +0800   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Sun, 12 Nov 2017 15:16:39 +0800   Sun, 12 Nov 2017 11:27:45 +0800   KubeletHasNoDiskPressure    kubelet has no disk pressure
  Ready           True    Sun, 12 Nov 2017 15:16:39 +0800   Sun, 12 Nov 2017 11:27:45 +0800   KubeletReady                kubelet is posting ready status
Addresses:
  InternalIP:  10.144.102.117
  ExternalIP:  10.144.102.117
  Hostname:    test-10
Capacity:
  cpu:     4
  memory:  16301460Ki
  pods:    110
Allocatable:
  cpu:     4
  memory:  16199060Ki
  pods:    110
System Info:
  Machine ID:
  System UUID:                4ABB25CA-B353-450A-9787-28477ED72344
  Boot ID:                    689e31dc-e05d-48de-9068-e8460d15a9b6
  Kernel Version:             4.4.0-91-generic
  OS Image:                   Ubuntu 16.04.1 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://1.12.6
  Kubelet Version:            v1.7.7-rancher1
  Kube-Proxy Version:         v1.7.7-rancher1
ExternalID:                   3cb02e3d-cb58-42c6-9a54-2fb5cfb836d2
Non-terminated Pods:          (4 in total)
  Namespace    Name                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                                  ------------  ----------  ---------------  -------------
  default      demo-1-4031462666-1m6lc               200m (5%)     200m (5%)   512Mi (3%)       512Mi (3%)
  default      nginx-4217019353-k3mqk                0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-dns-638003847-q28hb              260m (6%)     0 (0%)      110Mi (0%)       170Mi (1%)
  kube-system  kubernetes-dashboard-716739405-42t14  100m (2%)     100m (2%)   50Mi (0%)        50Mi (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  560m (14%)    300m (7%)   672Mi (4%)       732Mi (4%)
Events:         <none>
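
A typical triage flow for a broken/NotReady node (a sketch; assumes a systemd-managed kubelet):

$ kubectl get nodes                  # spot the NotReady node
$ kubectl describe node test-10      # inspect Conditions and Events
$ ssh test-10                        # then, on the node itself:
$ systemctl status kubelet           # is kubelet running?
$ journalctl -u kubelet              # check its logs for the root cause
$ systemctl restart kubelet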

Etcd

Q: etcd backup, see
https://kubernetes.io/docs/getting-started-guides/ubuntu/backups/
https://www.mirantis.com/blog/everything-you-ever-wanted-to-know-about-using-etcd-with-kubernetes-v1-6-but-were-afraid-to-ask/

  • Start Etcd

    #start script:
    #========================================
    etcd --name 'default' \
    --data-dir '/root/data.etcd' \
    --ca-file '/pki/ca.crt' --cert-file '/pki/cert.crt' --key-file '/pki/key.key' \
    --peer-ca-file '/pki/ca.crt' --peer-cert-file '/pki/cert.crt' --peer-key-file '/pki/key.key' \
    --client-cert-auth \
    --peer-client-cert-auth \
    --listen-peer-urls https://localhost:2380 \
    --listen-client-urls https://localhost:2379 \
    --advertise-client-urls https://localhost:2379 \
    --initial-advertise-peer-urls https://localhost:2380 \
    --initial-cluster default=https://localhost:2380 \
    --initial-cluster-state 'new' \
    --initial-cluster-token 'etcd-cluster' \
    --debug


    #operate:
    #========================================
    etcdctl --endpoint=https://localhost:2379 --ca-file=/pki/ca.crt --cert-file=/pki/cert.crt --key-file=/pki/key.key ls /

    To use certificates:

    1. the access URLs must use https;
    2. the certificate flags shown in the script above (--ca-file/--cert-file/--key-file) must be set; the v3 equivalent is sketched below.
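
    With the v3 API (which the snapshot commands further down also use), the equivalent listing would be (a sketch; the v3 flag names differ from v2):

    ETCDCTL_API=3 etcdctl --endpoints=https://localhost:2379 --cacert=/pki/ca.crt --cert=/pki/cert.crt --key=/pki/key.key get / --prefix --keys-only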
  • Replacing a failed etcd member

    1. Get the member ID of the failed member1:

      etcdctl --endpoints=http://10.0.0.2,http://10.0.0.3 member list
    2. The following message is displayed:

      8211f1d0f64f3269, started, member1, http://10.0.0.1:12380, http://10.0.0.1:2379
      91bc3c398fb3c146, started, member2, http://10.0.0.2:2380, http://10.0.0.2:2379
      fd422379fda50e48, started, member3, http://10.0.0.3:2380, http://10.0.0.3:2379
    3. Remove the failed member:

      etcdctl member remove 8211f1d0f64f3269
    4. The following message is displayed:

      Removed member 8211f1d0f64f3269 from cluster
    5. Add the new member:

      ./etcdctl member add member4 --peer-urls=http://10.0.0.4:2380
    6. The following message is displayed:

      Member 2be1eb8f84b7f63e added to cluster ef37ad9dc622a7c4
    7. Start the newly added member on a machine with the IP 10.0.0.4:

      export ETCD_NAME="member4"
      export ETCD_INITIAL_CLUSTER="member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380"
      export ETCD_INITIAL_CLUSTER_STATE=existing
      etcd [flags]

      Note that the member is first added to the cluster, and only then is the corresponding etcd process started.
      Also, the newly started etcd member must be given the initial cluster state "existing".

  • Backing up an etcd cluster

    ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshotdb
    # exit 0

    # verify the snapshot
    ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb
    +----------+----------+------------+------------+
    |   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
    +----------+----------+------------+------------+
    | fe01cf57 |       10 |          7 | 2.1 MB     |
    +----------+----------+------------+------------+
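
    The matching restore (a sketch following the etcd docs; the data directory path is an example):

    ETCDCTL_API=3 etcdctl snapshot restore snapshotdb --data-dir /var/lib/etcd-restored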

That's it for this part, but don't assume you're done yet; there is more to come.
After this article, countless more await! My God~
