Istio in Practice

This post collects the YAML files I used while learning Istio, so they can serve as templates for quick copy-and-paste later ^_^

Component debugging

The most frustrating thing when running Istio is: "the configuration was pushed, but the traffic rules never take effect". Since this directly affects business traffic, think twice before using Istio in production. When it happens, you need some understanding of how each Istio component works and how to debug it. Below are a few simple ways to collect information.

Pilot

After Pilot starts, it listens on port 15010 (gRPC) and 8080 (HTTP). When an application's sidecar (Envoy, istio-proxy) starts, it connects to pilot.istio-system:15010, fetches its initial configuration, and keeps the connection open.

To debug it:

PILOT=istio-pilot.istio-system:9093

# What is sent to envoy
# Listeners and routes
curl $PILOT/debug/adsz

# Endpoints
curl $PILOT/debug/edsz

# Clusters
curl $PILOT/debug/cdsz

Envoy

As a client, Envoy is statically configured with the address of an xDS server; it establishes a long-lived gRPC connection to that server and actively pulls configuration from it. On the xDS server side, the java and go control-plane projects can be used to implement centralized configuration management. After Envoy starts, it listens on 15000 as the local admin port and uses 15001 as the local traffic-capture port.

While debugging, you can dump the dynamic configuration from inside the istio-proxy container:

curl http://127.0.0.1:15000/config_dump > config_dump
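The dump is plain JSON, so it can be sliced with jq or a few lines of Python. Below is a sketch that pulls listener names out of a dump; the field names follow the envoy.admin.v2alpha config_dump schema used by Istio 1.0.x, and the embedded sample is illustrative, not a real dump:

```python
import json

# Illustrative excerpt of a config_dump; the real file fetched from the
# admin endpoint is far larger and has several more sections.
sample = json.loads("""
{
  "configs": [
    {
      "@type": "type.googleapis.com/envoy.admin.v2alpha.ListenersConfigDump",
      "dynamic_active_listeners": [
        {"listener": {"name": "0.0.0.0_15001"}},
        {"listener": {"name": "10.244.0.132_8080"}}
      ]
    }
  ]
}
""")

def listener_names(dump):
    """Return the names of dynamically configured listeners."""
    names = []
    for section in dump.get("configs", []):
        if section.get("@type", "").endswith("ListenersConfigDump"):
            for entry in section.get("dynamic_active_listeners", []):
                names.append(entry["listener"]["name"])
    return names

print(listener_names(sample))  # ['0.0.0.0_15001', '10.244.0.132_8080']
```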

Sidecar proxy injection

Kubernetes admission webhooks come in two flavors: validate (ValidatingAdmissionWebhook) and mutate (MutatingAdmissionWebhook). The former validates requests, the latter modifies objects; injection uses mutate.
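For context, the injector's /inject endpoint receives an AdmissionReview and answers with a JSONPatch that splices the init container and sidecar into the pod spec. Schematically (illustrative, not a literal response):

```yaml
# Schematic webhook response; the real patch is a long base64-encoded
# JSONPatch, shown here only as a placeholder.
apiVersion: admission.k8s.io/v1beta1
kind: AdmissionReview
response:
  uid: "<uid copied from the request>"
  allowed: true
  patchType: JSONPatch
  patch: <base64-encoded JSONPatch that adds istio-init and istio-proxy>
```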

MutatingWebhookConfiguration

[root@k8s-master ~]# kc get mutatingwebhookconfiguration
NAME                     CREATED AT
istio-sidecar-injector   2019-03-31T13:55:50Z
[root@k8s-master ~]# kc get mutatingwebhookconfiguration -o yaml
apiVersion: v1
items:
- apiVersion: admissionregistration.k8s.io/v1beta1
  kind: MutatingWebhookConfiguration
  metadata:
    ...
    labels:
      app: istio-sidecar-injector
      chart: sidecarInjectorWebhook-1.0.4
      heritage: Tiller
      release: istio
    name: istio-sidecar-injector
    ...
  webhooks:
  - admissionReviewVersions:
    - v1beta1
    clientConfig:
      caBundle: ...
      service:
        name: istio-sidecar-injector
        namespace: istio-system
        path: /inject
    failurePolicy: Fail
    name: sidecar-injector.istio.io
    namespaceSelector:
      matchLabels:
        istio-injection: enabled
    rules:
    - apiGroups:
      - ""
      apiVersions:
      - v1
      operations:
      - CREATE
      resources:
      - pods
      scope: '*'
    sideEffects: Unknown
    timeoutSeconds: 30
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

When a webhook configures a namespaceSelector, object creation in a matching namespace triggers a call to the service defined in clientConfig (this is the moment injection happens). The service field points at the backing service, and from it we can find the corresponding pod.
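The label that makes a namespace match the selector above is applied with:

```shell
# Enable injection for the default namespace, then inspect labels.
kubectl label namespace default istio-injection=enabled
kubectl get namespace -L istio-injection
```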

[root@k8s-master ~]# kci get svc
NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
istio-sidecar-injector   ClusterIP   10.110.47.206   <none>        443/TCP   8d

[root@k8s-master ~]# kci get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
istio-sidecar-injector-8477498cbf-mgjjb   1/1     Running   1          8d    10.244.0.132   k8s-master   <none>           <none>

Configuration files

To see exactly what gets injected, look at the configuration files:

[root@k8s-master ~]# kci describe deploy istio-sidecar-injector
Name:               istio-sidecar-injector
Namespace:          istio-system
CreationTimestamp:  Sun, 31 Mar 2019 21:55:50 +0800
Labels:             app=sidecarInjectorWebhook
                    chart=sidecarInjectorWebhook-1.0.4
                    heritage=Tiller
                    istio=sidecar-injector
                    release=istio
...
Pod Template:
  ...
  Containers:
   sidecar-injector-webhook:
    Image:      docker.io/istio/sidecar_injector:1.0.4
    Port:       <none>
    Host Port:  <none>
    Args:
      --caCertFile=/etc/istio/certs/root-cert.pem
      --tlsCertFile=/etc/istio/certs/cert-chain.pem
      --tlsKeyFile=/etc/istio/certs/key.pem
      --injectConfig=/etc/istio/inject/config
      --meshConfig=/etc/istio/config/mesh
      --healthCheckInterval=2s
      --healthCheckFile=/health
    ...
    Mounts:
      /etc/istio/certs from certs (ro)
      /etc/istio/config from config-volume (ro)
      /etc/istio/inject from inject-config (ro)
  Volumes:
   config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio
    Optional:  false
   certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio.istio-sidecar-injector-service-account
    Optional:    false
   inject-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-sidecar-injector
    Optional:  false
...

Several configuration files are used here, all sourced from ConfigMaps. Let's look at each.

/etc/istio/config

[root@k8s-master ~]# kci get cm istio -o=go-template='{{ .data.mesh }}'
# Set the following variable to true to disable policy checks by the Mixer.
# Note that metrics will still be reported to the Mixer.
disablePolicyChecks: false

# Set enableTracing to false to disable request tracing.
enableTracing: true

# Set accessLogFile to empty string to disable access log.
accessLogFile: "/dev/stdout"
#
# Deprecated: mixer is using EDS
mixerCheckServer: istio-policy.istio-system.svc.cluster.local:9091
mixerReportServer: istio-telemetry.istio-system.svc.cluster.local:9091

# policyCheckFailOpen allows traffic in cases when the mixer policy service cannot be reached.
# Default is false which means the traffic is denied when the client is unable to connect to Mixer.
policyCheckFailOpen: false

# Unix Domain Socket through which envoy communicates with NodeAgent SDS to get
# key/cert for mTLS. Use secret-mount files instead of SDS if set to empty.
sdsUdsPath: ""

# How frequently should Envoy fetch key/cert from NodeAgent.
sdsRefreshDelay: 15s

#
defaultConfig:
  #
  # TCP connection timeout between Envoy & the application, and between Envoys.
  connectTimeout: 10s
  #
  ### ADVANCED SETTINGS #############
  # Where should envoy's configuration be stored in the istio-proxy container
  configPath: "/etc/istio/proxy"
  binaryPath: "/usr/local/bin/envoy"
  # The pseudo service name used for Envoy.
  serviceCluster: istio-proxy
  # These settings that determine how long an old Envoy
  # process should be kept alive after an occasional reload.
  drainDuration: 45s
  parentShutdownDuration: 1m0s
  #
  # The mode used to redirect inbound connections to Envoy. This setting
  # has no effect on outbound traffic: iptables REDIRECT is always used for
  # outbound connections.
  # If "REDIRECT", use iptables REDIRECT to NAT and redirect to Envoy.
  # The "REDIRECT" mode loses source addresses during redirection.
  # If "TPROXY", use iptables TPROXY to redirect to Envoy.
  # The "TPROXY" mode preserves both the source and destination IP
  # addresses and ports, so that they can be used for advanced filtering
  # and manipulation.
  # The "TPROXY" mode also configures the sidecar to run with the
  # CAP_NET_ADMIN capability, which is required to use TPROXY.
  #interceptionMode: REDIRECT
  #
  # Port where Envoy listens (on local host) for admin commands
  # You can exec into the istio-proxy container in a pod and
  # curl the admin port (curl http://localhost:15000/) to obtain
  # diagnostic information from Envoy. See
  # https://lyft.github.io/envoy/docs/operations/admin.html
  # for more details
  proxyAdminPort: 15000
  #
  # Set concurrency to a specific number to control the number of Proxy worker threads.
  # If set to 0 (default), then start worker thread for each CPU thread/core.
  concurrency: 0
  #
  # Zipkin trace collector
  zipkinAddress: zipkin.istio-system:9411
  #
  # Mutual TLS authentication between sidecars and istio control plane.
  controlPlaneAuthPolicy: NONE
  #
  # Address where istio Pilot service is running
  discoveryAddress: istio-pilot.istio-system:15007

/etc/istio/inject

[root@k8s-master ~]# kci get cm istio-sidecar-injector -o=go-template='{{ .data.config }}'
policy: enabled
template: |-
  initContainers:
  - name: istio-init
    image: "docker.io/istio/proxy_init:1.0.4"
    args:
    - "-p"
    - [[ .MeshConfig.ProxyListenPort ]]
    - "-u"
    - 1337
    - "-m"
    - [[ annotation .ObjectMeta `sidecar.istio.io/interceptionMode` .ProxyConfig.InterceptionMode ]]
    - "-i"
    - "[[ annotation .ObjectMeta `traffic.sidecar.istio.io/includeOutboundIPRanges` "*" ]]"
    - "-x"
    - "[[ annotation .ObjectMeta `traffic.sidecar.istio.io/excludeOutboundIPRanges` "" ]]"
    - "-b"
    - "[[ annotation .ObjectMeta `traffic.sidecar.istio.io/includeInboundPorts` (includeInboundPorts .Spec.Containers) ]]"
    - "-d"
    - "[[ excludeInboundPort (annotation .ObjectMeta `status.sidecar.istio.io/port` 0 ) (annotation .ObjectMeta `traffic.sidecar.istio.io/excludeInboundPorts` "" ) ]]"
    imagePullPolicy: IfNotPresent
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
      privileged: true
    restartPolicy: Always
  containers:
  - name: istio-proxy
    image: [[ annotation .ObjectMeta `sidecar.istio.io/proxyImage` "docker.io/istio/proxyv2:1.0.4" ]]
    ports:
    - containerPort: 15090
      protocol: TCP
      name: http-envoy-prom
    args:
    - proxy
    - sidecar
    - --configPath
    - [[ .ProxyConfig.ConfigPath ]]
    - --binaryPath
    - [[ .ProxyConfig.BinaryPath ]]
    - --serviceCluster
    [[ if ne "" (index .ObjectMeta.Labels "app") -]]
    - [[ index .ObjectMeta.Labels "app" ]]
    [[ else -]]
    - "istio-proxy"
    [[ end -]]
    - --drainDuration
    - [[ formatDuration .ProxyConfig.DrainDuration ]]
    - --parentShutdownDuration
    - [[ formatDuration .ProxyConfig.ParentShutdownDuration ]]
    - --discoveryAddress
    - [[ annotation .ObjectMeta `sidecar.istio.io/discoveryAddress` .ProxyConfig.DiscoveryAddress ]]
    - --discoveryRefreshDelay
    - [[ formatDuration .ProxyConfig.DiscoveryRefreshDelay ]]
    - --zipkinAddress
    - [[ .ProxyConfig.ZipkinAddress ]]
    - --connectTimeout
    - [[ formatDuration .ProxyConfig.ConnectTimeout ]]
    - --proxyAdminPort
    - [[ .ProxyConfig.ProxyAdminPort ]]
    [[ if gt .ProxyConfig.Concurrency 0 -]]
    - --concurrency
    - [[ .ProxyConfig.Concurrency ]]
    [[ end -]]
    - --controlPlaneAuthPolicy
    - [[ annotation .ObjectMeta `sidecar.istio.io/controlPlaneAuthPolicy` .ProxyConfig.ControlPlaneAuthPolicy ]]
    [[- if (ne (annotation .ObjectMeta `status.sidecar.istio.io/port` 0 ) "0") ]]
    - --statusPort
    - [[ annotation .ObjectMeta `status.sidecar.istio.io/port` 0 ]]
    - --applicationPorts
    - "[[ annotation .ObjectMeta `readiness.status.sidecar.istio.io/applicationPorts` (applicationPorts .Spec.Containers) ]]"
    [[- end ]]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: INSTANCE_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: ISTIO_META_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: ISTIO_META_INTERCEPTION_MODE
      value: [[ or (index .ObjectMeta.Annotations "sidecar.istio.io/interceptionMode") .ProxyConfig.InterceptionMode.String ]]
    [[ if .ObjectMeta.Annotations ]]
    - name: ISTIO_METAJSON_ANNOTATIONS
      value: |
        [[ toJson .ObjectMeta.Annotations ]]
    [[ end ]]
    [[ if .ObjectMeta.Labels ]]
    - name: ISTIO_METAJSON_LABELS
      value: |
        [[ toJson .ObjectMeta.Labels ]]
    [[ end ]]
    imagePullPolicy: IfNotPresent
    [[ if (ne (annotation .ObjectMeta `status.sidecar.istio.io/port` 0 ) "0") ]]
    readinessProbe:
      httpGet:
        path: /healthz/ready
        port: [[ annotation .ObjectMeta `status.sidecar.istio.io/port` 0 ]]
      initialDelaySeconds: [[ annotation .ObjectMeta `readiness.status.sidecar.istio.io/initialDelaySeconds` 1 ]]
      periodSeconds: [[ annotation .ObjectMeta `readiness.status.sidecar.istio.io/periodSeconds` 2 ]]
      failureThreshold: [[ annotation .ObjectMeta `readiness.status.sidecar.istio.io/failureThreshold` 30 ]]
    [[ end -]]
    securityContext:
      readOnlyRootFilesystem: true
      [[ if eq (annotation .ObjectMeta `sidecar.istio.io/interceptionMode` .ProxyConfig.InterceptionMode) "TPROXY" -]]
      capabilities:
        add:
        - NET_ADMIN
      runAsGroup: 1337
      [[ else -]]
      runAsUser: 1337
      [[ end -]]
    restartPolicy: Always
    resources:
      [[ if (isset .ObjectMeta.Annotations `sidecar.istio.io/proxyCPU`) -]]
      requests:
        cpu: "[[ index .ObjectMeta.Annotations `sidecar.istio.io/proxyCPU` ]]"
        memory: "[[ index .ObjectMeta.Annotations `sidecar.istio.io/proxyMemory` ]]"
      [[ else -]]
      requests:
        cpu: 10m
      [[ end -]]
    volumeMounts:
    - mountPath: /etc/istio/proxy
      name: istio-envoy
    - mountPath: /etc/certs/
      name: istio-certs
      readOnly: true
  volumes:
  - emptyDir:
      medium: Memory
    name: istio-envoy
  - name: istio-certs
    secret:
      optional: true
      [[ if eq .Spec.ServiceAccountName "" -]]
      secretName: istio.default
      [[ else -]]
      secretName: [[ printf "istio.%s" .Spec.ServiceAccountName ]]
      [[ end -]]

networking.istio.io

This API group mainly involves four concepts: VirtualService, DestinationRule, Gateway, and ServiceEntry.

VirtualServices

Weight-based routing

A DestinationRule must first define the subsets.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flaskapp
spec:
  hosts:
  - flaskapp.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: flaskapp.default.svc.cluster.local
        subset: v1
      weight: 70
    - destination:
        host: flaskapp.default.svc.cluster.local
        subset: v2
      weight: 30
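For the weights above to do anything, the v1 and v2 subsets must already exist in a DestinationRule, e.g. (assuming the pods carry version: v1 / version: v2 labels):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: flaskapp
spec:
  host: flaskapp.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```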

Matching on HTTP headers

HTTP requests carrying the header lab: canary go to v2; everything else goes to v1.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flaskapp
spec:
  hosts:
  - flaskapp.default.svc.cluster.local
  http:
  - match:
    - headers:
        lab:
          exact: canary
    route:
    - destination:
        host: flaskapp.default.svc.cluster.local
        subset: v2
  - route:
    - destination:
        host: flaskapp.default.svc.cluster.local
        subset: v1

Routing by source labels

Traffic from source workloads with matching labels goes to v2; everything else goes to v1.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flaskapp
spec:
  hosts:
  - flaskapp.default.svc.cluster.local
  http:
  - match:
    - sourceLabels:
        app: sleep
        version: v1
    route:
    - destination:
        host: flaskapp.default.svc.cluster.local
        subset: v2
  - route:
    - destination:
        host: flaskapp.default.svc.cluster.local
        subset: v1

URI redirect

Redirect based on URI, replacing access to "/env/HOSTNAME" with a redirect to "/env/version".

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flaskapp
spec:
  hosts:
  - flaskapp.default.svc.cluster.local
  http:
  - match:
    - sourceLabels:
        app: sleep
        version: v1
      uri:
        exact: "/env/HOSTNAME"
    redirect:
      uri: /env/version
  - route:
    - destination:
        host: flaskapp.default.svc.cluster.local
        subset: v1

URI rewrite

Rewrite based on URI. Unlike redirect, a rewrite must be accompanied by a route and cannot coexist with redirect in the same rule.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flaskapp
spec:
  hosts:
  - flaskapp.default.svc.cluster.local
  http:
  - match:
    - uri:
        exact: "/get"
    rewrite:
      uri: /post
    route:
    - destination:
        host: flaskapp.default.svc.cluster.local
  - route:
    - destination:
        host: flaskapp.default.svc.cluster.local

Timeouts

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin.default.svc.cluster.local
  http:
  - timeout: 3s
    route:
    - destination:
        host: httpbin.default.svc.cluster.local

Retries

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: httpbin.default.svc.cluster.local
    retries:
      attempts: 3
      perTryTimeout: 1s
    timeout: 7s

Fault injection (delay)

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: httpbin.default.svc.cluster.local
    fault:
      delay:
        fixedDelay: 3s
        percent: 100

Fault injection (abort)

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: httpbin.default.svc.cluster.local
    fault:
      abort:
        httpStatus: 500
        percent: 100

mirror

Route traffic to v1 while mirroring a copy of it to v2.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: httpbin.default.svc.cluster.local
        subset: v1
    mirror:
      host: httpbin.default.svc.cluster.local
      subset: v2

DestinationRules

Defining subsets

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: flaskapp
spec:
  host: flaskapp
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2

Circuit breaking

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100

Gateway

Gateway example

A Gateway by itself only accepts requests; it does not know where to route them, so it must be paired with a VirtualService.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: example-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.microservice.rocks"
    - "*.microservece.xyz"

Attaching routes

The VirtualService works together with the Gateway to route traffic to backend services.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flaskapp
spec:
  hosts:
  - flaskapp.default.svc.cluster.local
  - flaskapp.microservice.rocks
  gateways:
  - mesh
  - example-gateway
  http:
  - route:
    - destination:
        host: flaskapp.default.svc.cluster.local
        subset: v1

Certificate support

  1. Create the secret:
kubectl create -n istio-system secret tls istio-ingressgateway-certs --key rocks/key.pem --cert rocks/cert.pem
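If no key/cert pair is at hand, a throwaway self-signed one for testing can be generated with openssl first (the CN below is illustrative; use CA-signed certificates for anything beyond local testing):

```shell
# Generate a self-signed key and certificate valid for one year.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=flask.microservice.rocks"
```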
  2. With the secret mounted at the /etc/istio/ingressgateway-ca-certs directory, reference it from the Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: example-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.microservice.rocks"
    - "*.microservece.xyz"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-ca-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-ca-certs/tls.key
    hosts:
    - "flask.microservice.rocks"
    - "flask.microservece.xyz"

Matching a gateway in a VirtualService

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flaskapp
spec:
  hosts:
  - flaskapp.default.svc.cluster.local
  - flaskapp.microservice.rocks
  gateways:
  - mesh
  - example-gateway
  http:
  - match:
    - gateways:
      - example-gateway
    route:
    - destination:
        host: flaskapp.default.svc.cluster.local
        subset: v2
  - route:
    - destination:
        host: flaskapp.default.svc.cluster.local
        subset: v1

serviceEntry

Creating an entry

Routing is matched via hosts; a VirtualService is still needed to apply traffic policy.

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-ext
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS

Matching and setting a timeout

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-service
spec:
  hosts:
  - httpbin.org
  http:
  - timeout: 3s
    route:
    - destination:
        host: httpbin.org

config.istio.io

Mixer

  • Instance: declares a template that maps the attributes sent to Mixer into the output format a particular adapter expects
  • Handler: declares a configured instance of an adapter
  • Rule: connects Instances to Handlers, establishing which data is processed where

Handler list

Name        Purpose
Denier      Rejects requests based on custom conditions
Fluentd     Ships logs to a Fluentd service
List        Performs whitelist or blacklist checks
MemQuota    Simple quota control with an in-memory backend
Prometheus  Exposes Istio metrics to Prometheus
RedisQuota  Quota management with a Redis backend
StatsD      Sends metrics to StatsD
Stdio       Writes logs and metrics locally

denier

  • handler
apiVersion: "config.istio.io/v1alpha2"
kind: denier
metadata:
  name: code-7
spec:
  status:
    code: 7
    message: Not allowed
  • instance
apiVersion: "config.istio.io/v1alpha2"
kind: checknothing
metadata:
  name: place-holder
spec:
  • rule
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: deny-sleep-v1-to-httpbin
spec:
  match: destination.labels["app"] == "httpbin" && source.labels["app"] == "sleep" && source.labels["version"] == "v1"
  actions:
  - handler: code-7.denier
    instances: [place-holder.checknothing]

listchecker

  • handler
apiVersion: config.istio.io/v1alpha2
kind: listchecker
metadata:
  name: chaos
spec:
  overrides: ["v1","v3"]
  blacklist: true
  • instance
apiVersion: config.istio.io/v1alpha2
kind: listentry
metadata:
  name: version
spec:
  value: source.labels["version"]
  • rule
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: checkversion
spec:
  match: destination.labels["app"] == "httpbin"
  actions:
  - handler: chaos.listchecker
    instances:
    - version.listentry

prometheus

  • instance
[root@k8s istio]# kci get metrics
NAME              AGE
requestcount      3d1h
requestduration   3d1h
requestsize       3d1h
responsesize      3d1h
tcpbytereceived   3d1h
tcpbytesent       3d1h
  • handler
[root@k8s istio]# kci get prometheus
NAME      AGE
handler   3d1h
  • rules
[root@k8s istio]# kci get rules
NAME       AGE
promhttp   3d1h
promtcp    3d1h

log(stdio)

Access logs matching the rule are written to the Mixer's stdout and can be viewed with kubectl logs.
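For example (the label selector and container name below assume a default Istio 1.0.x install; adjust to your deployment):

```shell
# Tail the telemetry Mixer and filter for access-log entries.
kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep accesslog
```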

[root@k8s istio]# kci get logentry
NAME           AGE
accesslog      3d1h
tcpaccesslog   3d1h

[root@k8s istio]# kci get stdio
NAME      AGE
handler   3d1h

[root@k8s istio]# kci get rules
NAME    AGE
stdio   3d1h

[root@k8s istio]# kci get rule stdio -o yaml
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"config.istio.io/v1alpha2","kind":"rule","metadata":{"annotations":{},"name":"stdio","namespace":"istio-system"},"spec":{"actions":[{"handler":"handler.stdio","instances":["accesslog.logentry"]}],"match":"context.protocol == \"http\" || context.protocol == \"grpc\""}}
  creationTimestamp: "2019-03-31T13:55:50Z"
  generation: 1
  name: stdio
  namespace: istio-system
  resourceVersion: "16200"
  selfLink: /apis/config.istio.io/v1alpha2/namespaces/istio-system/rules/stdio
  uid: b1935286-53bc-11e9-ad8b-525400ff729a
spec:
  actions:
  - handler: handler.stdio
    instances:
    - accesslog.logentry
  match: context.protocol == "http" || context.protocol == "grpc"

log(fluentd)

Fluentd must first be deployed on Kubernetes and exposed as a service at fluentd-listener:24224.

  • handler
apiVersion: config.istio.io/v1alpha2
kind: fluentd
metadata:
  name: handler
spec:
  address: "fluentd-listener:24224"
  • instance

Define a logentry instance named sleep-log and apply it.
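A minimal sleep-log instance might look like the following sketch; the field mapping is an assumption modeled on the built-in accesslog logentry, so adjust the variables to whatever you actually want shipped to Fluentd:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: logentry
metadata:
  name: sleep-log
spec:
  severity: '"info"'
  timestamp: request.time
  variables:
    source: source.labels["app"] | "unknown"
    destination: destination.labels["app"] | "unknown"
    responseCode: response.code | 0
  monitored_resource_type: '"UNSPECIFIED"'
```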

  • rule
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: fluentd
spec:
  actions:
  - handler: handler.fluentd
    instances:
    - sleep-log.logentry
  match: context.protocol == "http" && source.labels["app"] == "sleep"