ingress-nginx is a great tool for managing north-south traffic: it saves you from frequently reconfiguring the cloud load balancer, and with labels you can also pin all ingress-nginx pods to a designated NodeGroup.
Version selection: first confirm which version you want to use
https://github.com/kubernetes/ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
kubectl create namespace ingress-nginx
# search returns the chart best suited to the current cluster version; adjust the values, then install
helm search repo ingress-nginx
helm show values ingress-nginx/ingress-nginx > ingress-nginx-values.yaml
Key changes in ingress-nginx-values.yaml (these settings all sit under the chart's controller section):
controller:
  kind: DaemonSet
  nodeSelector:
    nginx: "true"
  service:
    type: NodePort
    nodePorts:
      http: "30080"
      https: "30443"
# a PVC was created to store the logs
controller:
  extraVolumeMounts:
    - name: log-volume
      mountPath: /var/log/nginx
  # -- Additional volumes to the controller pod.
  extraVolumes:
    - name: log-volume
      persistentVolumeClaim:
        claimName: ingress-nginx-pvc
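The values above mount a PVC named ingress-nginx-pvc, which the chart does not create for you. A minimal sketch of that claim (the storage class and size here are assumptions; adjust for your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ingress-nginx-pvc
  namespace: ingress-nginx
spec:
  accessModes:
    - ReadWriteMany        # assumption: DaemonSet pods on several nodes write logs concurrently
  storageClassName: efs-sc # assumption: a shared-storage class; use your own
  resources:
    requests:
      storage: 10Gi
```

ReadWriteMany matters here because a DaemonSet puts one pod per labeled node, and all of them mount the same claim.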
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --values ingress-nginx-values.yaml
controller:
  nodeSelector:
    nginx: "true"
  # add the matching toleration in the Helm values
  tolerations:
    - key: "nginx"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
kubectl label node ip-172-28-68-223.ap-southeast-1.compute.internal nginx=true
# Configure taints so that only ingress-nginx is scheduled onto these nodes (a pod must both match nginx=true and tolerate the taint to land on them)
kubectl taint nodes ip-10-196-1-252.ap-southeast-1.compute.internal nginx=true:NoSchedule
kubectl taint nodes ip-10-196-2-169.ap-southeast-1.compute.internal nginx=true:NoSchedule
# list the taints present on each node
kubectl get nodes -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
The following configures forwarded-for handling and the log format:
data:
  allow-snippet-annotations: "false"
  compute-full-forwarded-for: "true"
  log-format-upstream: '{"@timestamp":"$time_iso8601","host":"$hostname","server_ip":"$server_addr","client_ip":"$http_x_forwarded_for","xff":"$http_x_forwarded_for","domain":"$host","url":"$uri","referer":"$http_referer","args":"$args","upstreamtime":"$upstream_response_time","responsetime":"$request_time","request_method":"$request_method","status":"$status","size":"$body_bytes_sent","request_length":"$request_length","protocol":"$server_protocol","upstreamhost":"$upstream_addr","file_dir":"$request_filename","http_user_agent":"$http_user_agent"}'
  use-forwarded-headers: "true"
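When deploying with the Helm chart, these ConfigMap keys can live in the values file instead of being edited by hand after install; they go under controller.config (a sketch, same key names as above):

```yaml
controller:
  config:
    allow-snippet-annotations: "false"
    compute-full-forwarded-for: "true"
    use-forwarded-headers: "true"
    # log-format-upstream: the JSON format string shown above
```

This keeps the log format version-controlled together with the rest of the release values.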
Exec into the container and inspect access.log; the real client IP is stored in the http_x_forwarded_for field:
{"@timestamp":"2024-09-29T07:25:31+00:00","host":"nginx-ingress-nginx-controller-nz9hq","server_ip":"172.28.2.239","client_ip":"xxxxxxxxx, 172.69.63.211","
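When a request passes through several proxies, http_x_forwarded_for is a comma-separated list and the left-most entry is the original client. A quick way to pull it out while grepping logs (the sample value below is illustrative, not taken from a live cluster):

```shell
# left-most X-Forwarded-For entry = original client IP
xff="203.0.113.7, 172.69.63.211, 172.28.2.239"
echo "$xff" | awk -F', ' '{print $1}'
# prints 203.0.113.7
```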
I upgraded straight from Kubernetes 1.18 to 1.29/1.30, and ingress-nginx changed dramatically along the way, including some syntax changes: ingressClassName is no longer an annotation but a field under spec, and the host and path definitions changed as well. With a template on hand it is not a big deal.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vault-ui
  namespace: vault
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: vault.baga.life
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: vault
                port:
                  number: 8200
Reference docs
https://help.aliyun.com/zh/ack/ack-managed-and-ack-dedicated/user-guide/use-an-ingress-controller-to-access-grpc-services
Note that the ALB target group must use the gRPC protocol.
After a backend group is attached, gRPC targets can take quite a while to become healthy.
wget https://github.com/fullstorydev/grpcurl/releases/download/v1.8.7/grpcurl_1.8.7_linux_x86_64.tar.gz
grpc-dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-service
  namespace: baga
spec:
  replicas: 1
  selector:
    matchLabels:
      run: grpc-service
  template:
    metadata:
      labels:
        run: grpc-service
    spec:
      containers:
        - image: registry.cn-hangzhou.aliyuncs.com/acs-sample/grpc-server:latest
          imagePullPolicy: Always
          name: grpc-service
          ports:
            - containerPort: 50051
              protocol: TCP
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: grpc-service
  namespace: baga
spec:
  ports:
    - port: 50051
      protocol: TCP
      targetPort: 50051
  selector:
    run: grpc-service
  sessionAffinity: None
  type: ClusterIP
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ingress
  namespace: baga
  annotations:
    # the key setting: the backend must be declared as a gRPC service
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  rules:
    - host: baga.baga.life # gRPC service domain; replace with your own
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                # the gRPC service
                name: grpc-service
                port:
                  number: 50051
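One caveat worth noting: gRPC runs over HTTP/2, and nginx only speaks HTTP/2 on TLS listeners, so a plaintext gRPC call through the ingress will generally fail. A sketch of the extra tls section to merge into the Ingress spec above (the secret name is hypothetical):

```yaml
spec:
  tls:
    - hosts:
        - baga.baga.life
      secretName: baga-life-tls # hypothetical TLS secret for the domain
```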
In gRPC, a method's full path (also called the method descriptor) has the form /package.Service/Method. So the method and path here can be described as:
Full path: helloworld.Greeter/SayHello
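The wire-level path that grpcurl builds from that descriptor can be sketched as:

```shell
# assemble /package.Service/Method from its parts
package="helloworld"; service="Greeter"; method="SayHello"
printf '/%s.%s/%s\n' "$package" "$service" "$method"
# prints /helloworld.Greeter/SayHello
```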
Internal request:
grpcurl -plaintext -d '{"name": "World"}' 172.28.69.248:50051 helloworld.Greeter/SayHello
That completes the test-environment requirement.