Background: In today's software development environment, Continuous Integration (CI) and Continuous Deployment (CD) are essential practices. They help us develop software efficiently, improve software quality, and reduce development cost. In this article we take a close look at CI and CD for test automation and how they enable efficient software delivery.
Core concepts and their relationship. Continuous Integration (CI): continuous integration is a software development practice whose core idea is to run developers' code through an automated build and test system so that it is integrated and verified continuously, ensuring code quality and reliability. A typical CI flow includes the following steps:
1. Developers write and test code in their local environment.
2. Developers push the code to a shared repository.
3. An automated build system pulls the latest code from the repository and builds it.
4. An automated test system runs the full test suite to verify correctness and quality.
5. If the tests pass, the code is promoted to the next environment, such as a test or production environment.
The main benefits of continuous integration are:
- Higher code quality: automated builds and tests catch problems early so they can be fixed promptly.
- Higher development efficiency: developers can integrate code into the shared repository faster, cutting integration and testing time.
- Lower deployment risk: continuously deploying code to the different environments verifies its reliability and stability.
Continuous Deployment (CD): continuous deployment is a software delivery practice whose core idea is to automate the development and deployment process to improve delivery speed and quality. Its main steps are:
1. Developers write and test code locally.
2. Developers push the code to a shared repository.
3. An automated build system pulls the latest code and builds it.
4. An automated test system runs the full test suite.
5. If the tests pass, the code is deployed to production automatically.
The main benefits of continuous deployment are:
- Faster delivery: automating the build, test, and deployment steps greatly shortens delivery time.
- Higher software quality: the automated pipeline enforces code quality and reliability, which improves user satisfaction.
- Less operational toil: automated deployment reduces the manual work operators spend on releases and maintenance.
Relationship between CI and CD: continuous integration and continuous deployment are two closely related delivery practices that are usually combined in real projects. CI is normally a prerequisite stage of CD, and their relationship can be summarized as follows:
Continuous integration is the process of integrating and testing developers' code through an automated build and test system, while continuous deployment is the process of automatically deploying the code that has passed those tests to production.
In practice, CI and CD are usually implemented in a single automated pipeline; the two processes then reinforce each other and improve the efficiency and quality of both development and delivery.
Understanding the release process: this post ties together what the previous articles covered; think of it as a recap.
Related posts: [Kubernetes cluster monitoring: log monitoring and analysis with ELK], [Kubernetes cluster logs: efficient log analysis and queries with Loki], the Prometheus series, and the Skywalking tracing series.
Asset inventory:
| Hostname | Role | IP |
| --- | --- | --- |
| k8s-master | K8s master node | 10.1.1.100 |
| k8s-node01 | node-1 | 10.1.1.120 |
| k8s-node02 | node-2 | 10.1.1.130 |
Workflow:
1. Pull the code and switch to the target branch;
2. Compile and package the code (mvn clean package);
3. Build the image and push it to the image registry;
4. Deploy the image from the registry with a YAML template, creating the Pod with kubectl;
5. When the build is done, developers test the result.
A rough manual sketch of these steps is shown below. This article uses hub.docker.com as the image registry; if you need to install Harbor instead, see "使用-Harbor-作为镜像仓库". The code repository is gitee.com; if you need to install GitLab, see "使用 Gitlab 作为代码仓库".
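The pipeline built later in this article automates exactly these steps. For orientation only, here is a manual sketch of the same flow; the repository URL, image tag, and manifest name are placeholders, not the project's real values:

git clone https://gitee.com/example/dubbo-demo-service.git && cd dubbo-demo-service   # hypothetical repo URL
git checkout master                                          # switch to the target branch
mvn clean package -Dmaven.test.skip=true                     # compile and package
docker build -t wangxiansen/dubbo-demo-service:manual-1 .    # build the image
docker push wangxiansen/dubbo-demo-service:manual-1          # push to hub.docker.com
kubectl apply -f deploy.yaml                                  # deploy the Pod from a YAML template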
Deploying Jenkins in Kubernetes. First create a directory to hold the Jenkins YAML files:
mkdir -p k8s-yaml/jenkins/
cat > k8s-yaml/jenkins/rbac.yaml <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: infra
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: infra
EOF
cat > k8s-yaml/jenkins/deployment.yaml <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: jenkins
  namespace: infra
  labels:
    name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      name: jenkins
  template:
    metadata:
      labels:
        app: jenkins
        name: jenkins
    spec:
      serviceAccountName: jenkins
      nodeSelector:
        jenkins: "true"
      volumes:
        - name: data
          hostPath:
            path: /data/jenkins_home
            type: Directory
      containers:
        - name: jenkins
          image: jenkins/jenkins:latest-jdk11
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
            - name: slavelistener
              containerPort: 50000
              protocol: TCP
          env:
            - name: JAVA_OPTS
              value: "-Xmx512m -Xms512m -Duser.timezone=Asia/Shanghai -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85"
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 1Gi
          volumeMounts:
            - name: data
              mountPath: /var/jenkins_home
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
EOF
cat > k8s-yaml/jenkins/svc.yaml <<EOF
kind: Service
apiVersion: v1
metadata:
  name: jenkins
  namespace: infra
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      name: web
    - protocol: TCP
      port: 50000
      targetPort: 50000
      name: slave
  selector:
    app: jenkins
  type: ClusterIP
  sessionAffinity: None
EOF
cat > k8s-yaml/jenkins/ingress.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ui
  namespace: infra
spec:
  ingressClassName: traefik
  rules:
    - host: jenkins.od.com
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: jenkins
                port:
                  number: 80
EOF
Apply the resource manifests:
# Label the node that will run Jenkins
kubectl label nodes k8s-node1 jenkins=true
# Apply the manifests
cd k8s-yaml/jenkins/
kubectl apply -f rbac.yaml
kubectl apply -f deployment.yaml
kubectl apply -f svc.yaml
kubectl apply -f ingress.yaml
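After applying, wait for the Deployment to become ready; if jenkins.od.com is not in your DNS yet, a hosts entry pointing at a node where traefik is reachable is enough for testing (the IP below is a placeholder taken from the asset table, adjust to your environment):

kubectl -n infra rollout status deployment/jenkins
echo '10.1.1.120 jenkins.od.com' >> /etc/hosts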
Access the Jenkins console and initialize the environment. URL: http://jenkins.od.com
The first start runs the setup wizard:
To find the initial admin password, check the Jenkins startup log:
$ kubectl logs -n infra jenkins-9766b68cb-884lb
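Alternatively, the initial password can be read straight from the secrets file inside the container (same Pod name as above):

$ kubectl exec -n infra jenkins-9766b68cb-884lb -- cat /var/jenkins_home/secrets/initialAdminPassword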
On the plugin step, choose "Select plugins to install".
Click "None" so that no plugins are installed for now.
Installing plugins: by default plugins are downloaded from servers outside China, which is slow, so switching to a domestic mirror is recommended.
Just edit the mounted directory on k8s-node1:
# Go to the mounted directory
cd /data/jenkins_home/updates/

# Point the plugin download URL at the Tsinghua mirror
sed -i 's#https://updates.jenkins.io/download#https://mirrors.tuna.tsinghua.edu.cn/jenkins#g' default.json

# Change the connectivity-check URL Jenkins uses at startup to Baidu
sed -i 's#http://www.google.com#https://www.baidu.com#g' default.json
Then delete the Pod so it is recreated with the new settings (substitute your actual Pod name), for example:
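kubectl delete pod -n infra jenkins-9766b68cb-884lb   # the Deployment recreates the Pod automatically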
Log back in to the Jenkins console with your account and password.
Go to Manage Jenkins -> System Configuration -> Manage Plugins.
Search for Git, Git Parameter, Pipeline, kubernetes, Config File Provider, and Chinese, select them, and click install.
| Plugin | Purpose |
| --- | --- |
| Git | pulls source code |
| Git Parameter | parameterized builds based on Git |
| Pipeline | pipeline support |
| kubernetes | connects to Kubernetes and creates slave agents dynamically |
| Config File Provider | stores the kubeconfig kubectl uses to reach the cluster |
| Chinese | Chinese localization |
Plugin installation may fail; just retry a few times, and remember to restart the Pod after installing.
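If the web UI keeps timing out, the same plugins can also be installed from inside the container with the plugin-manager CLI bundled in the official image. A sketch, assuming the common plugin IDs for the plugins listed above; download into the persistent plugin directory and then delete the Pod again so Jenkins picks them up:

kubectl exec -n infra jenkins-9766b68cb-884lb -- \
  jenkins-plugin-cli -d /var/jenkins_home/plugins \
  --plugins git git-parameter workflow-aggregator kubernetes config-file-provider localization-zh-cn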
Jenkins dynamically creating agents in K8S: when Jenkins builds projects in parallel, several teams building at the same time end up queuing, so a master/slave architecture is used here.
Add a Kubernetes cloud in Jenkins: go to Manage Jenkins -> Manage Nodes and Clouds -> Configure Clouds -> Add.
Enter the Kubernetes address https://kubernetes.default and click the connection test; if it passes, the Kubernetes version is displayed.
Enter the Jenkins address http://jenkins.infra
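Both addresses rely on in-cluster DNS: kubernetes.default is the API server Service, and jenkins.infra is the jenkins Service in the infra namespace created earlier. An optional sanity check from a throwaway Pod (busybox is just a convenient image for nslookup):

kubectl run dns-check --rm -it --restart=Never --image=busybox -- nslookup jenkins.infra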
Building the Jenkins slave image: Jenkins publishes a prebuilt slave image, so you could simply docker pull jenkins/jnlp-slave and push it to your private registry. Here we build our own image instead, so that tools such as maven and kubectl are already baked in and can be used directly without a separate installation step.
Files needed to build the image:
- Dockerfile: the image build file
- jenkins-slave: a shell script that launches slave.jar
- settings.xml: switches the official Maven repository to the Aliyun mirror
- slave.jar: the agent program that receives jobs from the master (the jar can be obtained by adding a slave node in Jenkins: create a new agent and choose "Launch agent via Java Web Start")
- helm: used to create the Kubernetes application templates
Dockerfile:
FROM docker.io/library/maven:3.9.6-eclipse-temurin-11
LABEL www.boysec.cn wangxiansen
ENV HELM_VERSION="3.13.3" \
    KUBE_VERSION="1.28.0" \
    DOCKER_VERSION="20.10.12"

RUN set -eux; \
    apt-get update && \
    apt-get install -y wget git && \
    apt-get clean && \
    mkdir -p /usr/share/jenkins \
    && wget -q https://dl.k8s.io/release/v${KUBE_VERSION}/bin/linux/amd64/kubectl -O /usr/local/bin/kubectl \
    && chmod +x /usr/local/bin/kubectl \
    && wget -q https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz -O - | tar -xzO linux-amd64/helm > /usr/local/bin/helm \
    && chmod +x /usr/local/bin/helm \
    && wget -q https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/static/stable/x86_64/docker-${DOCKER_VERSION}.tgz -O - | tar -xzO docker/docker > /usr/local/bin/docker \
    && chmod +x /usr/local/bin/docker \
    && rm -rf /var/cache/apt/* /var/lib/apt/lists/* /tmp/*

RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && \
    echo 'Asia/Shanghai' > /etc/timezone

COPY agent.jar /usr/share/jenkins/slave.jar
COPY settings.xml /usr/share/maven/conf/settings.xml
COPY jenkins-slave /usr/bin/jenkins-slave
RUN chmod +x /usr/bin/jenkins-slave

ENTRYPOINT ["jenkins-slave"]
jenkins-slave script:
#!/usr/bin/env sh

# The MIT License
#
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# all copies or substantial portions of the Software.
#
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

# Usage jenkins-agent.sh [options] -url http://jenkins [SECRET] [AGENT_NAME]
# Optional environment variables:
# * JENKINS_JAVA_BIN : Java executable to use instead of the default in PATH or obtained from JAVA_HOME
# * JENKINS_JAVA_OPTS : Java Options to use for the remoting process, otherwise obtained from JAVA_OPTS
# * JENKINS_TUNNEL : HOST:PORT for a tunnel to route TCP traffic to jenkins host, when jenkins can't be directly accessed over network
# * JENKINS_URL : alternate jenkins URL
# * JENKINS_SECRET : agent secret, if not set as an argument
# * JENKINS_AGENT_NAME : agent name, if not set as an argument
# * JENKINS_AGENT_WORKDIR : agent work directory, if not set by optional parameter -workDir
# * JENKINS_WEB_SOCKET: true if the connection should be made via WebSocket rather than TCP
# * JENKINS_DIRECT_CONNECTION: Connect directly to this TCP agent port, skipping the HTTP(S) connection parameter download.
#                              Value: "<HOST>:<PORT>"
# * JENKINS_INSTANCE_IDENTITY: The base64 encoded InstanceIdentity byte array of the Jenkins master. When this is set,
#                              the agent skips connecting to an HTTP(S) port for connection info.
# * JENKINS_PROTOCOLS: Specify the remoting protocols to attempt when instanceIdentity is provided.

if [ $# -eq 1 ] && [ "${1#-}" = "$1" ] ; then

  # if `docker run` only has one arguments and it is not an option as `-help`, we assume user is running alternate command like `bash` to inspect the image
  exec "$@"

else

  # if -tunnel is not provided, try env vars
  case "$@" in
    *"-tunnel "*) ;;
    *)
    if [ ! -z "$JENKINS_TUNNEL" ]; then
      TUNNEL="-tunnel $JENKINS_TUNNEL"
    fi ;;
  esac

  # if -workDir is not provided, try env vars
  if [ ! -z "$JENKINS_AGENT_WORKDIR" ]; then
    case "$@" in
      *"-workDir"*) echo "Warning: Work directory is defined twice in command-line arguments and the environment variable" ;;
      *)
      WORKDIR="-workDir $JENKINS_AGENT_WORKDIR" ;;
    esac
  fi

  if [ -n "$JENKINS_URL" ]; then
    URL="-url $JENKINS_URL"
  fi

  if [ -n "$JENKINS_NAME" ]; then
    JENKINS_AGENT_NAME="$JENKINS_NAME"
  fi

  if [ "$JENKINS_WEB_SOCKET" = true ]; then
    WEB_SOCKET=-webSocket
  fi

  if [ -n "$JENKINS_PROTOCOLS" ]; then
    PROTOCOLS="-protocols $JENKINS_PROTOCOLS"
  fi

  if [ -n "$JENKINS_DIRECT_CONNECTION" ]; then
    DIRECT="-direct $JENKINS_DIRECT_CONNECTION"
  fi

  if [ -n "$JENKINS_INSTANCE_IDENTITY" ]; then
    INSTANCE_IDENTITY="-instanceIdentity $JENKINS_INSTANCE_IDENTITY"
  fi

  if [ "$JENKINS_JAVA_BIN" ]; then
    JAVA_BIN="$JENKINS_JAVA_BIN"
  else
    # if java home is defined, use it
    JAVA_BIN="java"
    if [ "$JAVA_HOME" ]; then
      JAVA_BIN="$JAVA_HOME/bin/java"
    fi
  fi

  if [ "$JENKINS_JAVA_OPTS" ]; then
    JAVA_OPTIONS="$JENKINS_JAVA_OPTS"
  else
    # if JAVA_OPTS is defined, use it
    if [ "$JAVA_OPTS" ]; then
      JAVA_OPTIONS="$JAVA_OPTS"
    fi
  fi

  # if both required options are defined, do not pass the parameters
  OPT_JENKINS_SECRET=""
  if [ -n "$JENKINS_SECRET" ]; then
    case "$@" in
      *"${JENKINS_SECRET}"*) echo "Warning: SECRET is defined twice in command-line arguments and the environment variable" ;;
      *)
      OPT_JENKINS_SECRET="${JENKINS_SECRET}" ;;
    esac
  fi

  OPT_JENKINS_AGENT_NAME=""
  if [ -n "$JENKINS_AGENT_NAME" ]; then
    case "$@" in
      *"${JENKINS_AGENT_NAME}"*) echo "Warning: AGENT_NAME is defined twice in command-line arguments and the environment variable" ;;
      *)
      OPT_JENKINS_AGENT_NAME="${JENKINS_AGENT_NAME}" ;;
    esac
  fi

  #TODO: Handle the case when the command-line and Environment variable contain different values.
  #It is fine it blows up for now since it should lead to an error anyway.

  exec $JAVA_BIN $JAVA_OPTIONS -cp /usr/share/jenkins/slave.jar hudson.remoting.jnlp.Main -headless $TUNNEL $URL $WORKDIR $WEB_SOCKET $DIRECT $PROTOCOLS $INSTANCE_IDENTITY $OPT_JENKINS_SECRET $OPT_JENKINS_AGENT_NAME "$@"
fi
Building the image: build it with docker build and push it to the image registry.
# Build the image with docker build
docker build . -t wangxiansen/jenkins-slave-jdk:docker-eclipse-11

# Log in to the registry
docker login docker.io

# Push to the registry
docker push wangxiansen/jenkins-slave-jdk:docker-eclipse-11
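The agent.jar copied into the image can also be fetched straight from the running master, which serves its agent jar over HTTP; a sketch using this lab's domain:

wget http://jenkins.od.com/jnlpJars/agent.jar -O agent.jar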
Testing jenkins-slave: create a pipeline project in Jenkins to check that the slave works.
Write the pipeline script; pipeline scripts come in declarative and scripted flavors, and a declarative script is used here.
Note that the container defined in the Pod spec must be named jnlp.
pipeline {
  agent {
    kubernetes {
      label "jenkins-slave"
      yaml """
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
spec:
  containers:
  - name: jnlp
    image: wangxiansen/jenkins-slave-jdk:docker-eclipse-11
"""
    }
  }
  stages {
    stage('测试') {
      steps {
        sh 'hostname'
      }
    }
  }
}
Click the Build Now button to start a build.
When the build finishes, click the build number to view the Jenkins build log.
The log shows the hostname that was printed.
While the build runs, a temporary Pod appears in the infra namespace of the cluster; this Pod is the agent Jenkins created dynamically to execute the job handed down by the master.
When the Jenkins job completes, this Pod is destroyed automatically.
# kubectl get pods -n infra
NAMESPACE   NAME                        READY   STATUS    RESTARTS   AGE
infra       jenkins-9766b68cb-884lb     1/1     Running   0          24d
infra       jenkins-slave-n5c3l-9p69s   1/1     Running   0          19s

# kubectl get pods -n infra
NAME                      READY   STATUS    RESTARTS   AGE
jenkins-9766b68cb-884lb   1/1     Running   0          24d
Automated continuous deployment of the Dubbo microservices. Writing the Helm chart templates: the chart consists of the Dubbo chart metadata, values.yaml, and templates for the Deployment, Service, Ingress, _helpers, and NOTES files.
cat > values.yaml <<EOF
replicaCount: 1
image:
  repository: wangxiansen/dubbo-demo-consumer
  pullPolicy: IfNotPresent
  tag: "logs"
prometheus:
  enabled: false
podAnnotations: {}
podlabels: {}
env:
  JAR_BALL: dubbo-server.jar
  ZK_ADDRES: zk.od.com:2181
oap:
  enabled: false
  env:
    SW_AGENT_COLLECTOR_BACKEND_SERVICES: oap-svc.skywalking:11800
    SW_AGENT_NAME: ""
  JAVA_OPT: -javaagent:/skywalking/agent/skywalking-agent.jar
  images: wangxiansen/skywalking-agent
  tag: sidecar-9.1.0
  command:
    - "sh"
    - "-c"
    - "mkdir -p /skywalking/agent && cp -r /opt/skywalking-agent/* /skywalking/agent"
volumes:
  - name: sw-agent
    emptyDir: {}
volumeMounts:
  - name: sw-agent
    mountPath: "/skywalking/agent"
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 128Mi
imagePullSecrets: []
service:
  type: ClusterIP
  port: 8080
  targetPort: 8080
ingress:
  enabled: false
  className: "traefik"
  annotations: {}
  tls: []
  hosts:
    - host: example.od.com
      paths:
        - pathType: Prefix
          path: ""
tolerations: []
nodeSelector: []
affinity: []
readinessProbe: []
livenessProbe: []
EOF
cat > templates/deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "dubbo.fullname" . }}
  labels:
    {{- include "dubbo.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "dubbo.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "dubbo.selectorLabels" . | nindent 8 }}
        {{- if .Values.podlabels }}
        {{- toYaml .Values.podlabels | nindent 8 }}
        {{- end }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      volumes:
        {{- toYaml .Values.volumes | nindent 8 }}
      {{- if .Values.oap.enabled }}
      initContainers:
        - name: sw-agent-sidecar
          image: "{{ .Values.oap.images }}:{{ .Values.oap.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command:
            {{- toYaml .Values.oap.command | nindent 12 }}
          volumeMounts:
            {{- toYaml .Values.volumeMounts | nindent 12 }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            {{- if .Values.oap.enabled }}
            - name: JAR_BALL
              value: "{{ .Values.oap.JAVA_OPT }} {{ .Values.env.JAR_BALL }}"
            {{- range $k, $v := .Values.oap.env }}
            - name: {{ $k }}
              value: {{ $v | quote }}
            {{- end }}
            {{- range $k, $v := .Values.env }}
            {{- if ne $k "JAR_BALL" }}
            - name: {{ $k }}
              value: {{ $v | quote }}
            {{- end }}
            {{- end }}
            {{- else }}
            {{- range $k, $v := .Values.env }}
            - name: {{ $k }}
              value: {{ $v | quote }}
            {{- end }}
            {{- end }}
          ports:
            - name: http
              containerPort: {{ .Values.service.targetPort }}
              protocol: TCP
            - name: zk
              containerPort: 20880
              protocol: TCP
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            {{- toYaml .Values.volumeMounts | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.readinessProbe }}
      readinessProbe:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.livenessProbe }}
      livenessProbe:
        {{- toYaml . | nindent 8 }}
      {{- end }}
EOF
cat > templates/service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: {{ include "dubbo.fullname" . }}-svc
  {{- if or .Values.prometheus.enabled .Values.service.annotations }}
  annotations:
    {{- if .Values.prometheus.enabled }}
    prometheus_io_scrape: "true"
    prometheus_io_port: "12346"
    prometheus_io_path: "/"
    {{- end }}
    {{- range $key, $value := .Values.service.annotations }}
    {{ $key }}: {{ $value | quote }}
    {{- end }}
  {{- end }}
  labels:
    {{- include "dubbo.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "dubbo.selectorLabels" . | nindent 4 }}
EOF
cat > templates/ingress.yaml <<'EOF'
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "dubbo.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if and .Values.ingress.className (not (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion)) }}
  {{- if not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class") }}
  {{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className }}
  {{- end }}
{{- end }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "dubbo.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
    {{- end }}
      http:
        paths:
          {{- range .Values.ingress.paths }}
          - path: {{ .path | default "/" }}
            {{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
            pathType: {{ .pathType }}
            {{- end }}
            backend:
              {{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
              service:
                name: {{ $fullName }}-svc
                port:
                  number: {{ $svcPort }}
              {{- else }}
              serviceName: {{ $fullName }}-svc
              servicePort: {{ $svcPort }}
              {{- end }}
          {{- end }}
{{- end }}
EOF
cat > templates/_helpers.tpl <<'EOF'
{{- define "dubbo.fullname" -}}
{{- .Chart.Name -}}-{{ .Release.Name }}
{{- end -}}

{{/*
Common labels
*/}}
{{- define "dubbo.labels" -}}
app: {{ template "dubbo.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
{{- end -}}

{{/*
Selector labels
*/}}
{{- define "dubbo.selectorLabels" -}}
app: {{ template "dubbo.fullname" . }}
release: "{{ .Release.Name }}"
{{- end -}}
EOF
cat > templates/NOTES.txt <<'EOF'
{{- if .Values.ingress.enabled }}
1. Get the application URL by running these commands:
{{- range $host := .Values.ingress.hosts }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "dubbo.fullname" . }}-gateway)
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "dubbo.fullname" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 80:8080
{{- end }}
EOF
Building a private Helm repository: package the dubbo chart you just wrote with helm.
# Package the chart; this produces a tgz archive
helm package dubbo
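Before or after packaging, the chart can also be validated locally; a quick sketch, assuming the chart directory is named dubbo:

helm lint dubbo
helm template demo dubbo --set ingress.enabled=true | head -50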
Move the packaged file to the Helm repository directory:
mkdir -p /opt/helm/dubbo
mv dubbo-0.2.0.tgz /opt/helm/dubbo
helm repo index /opt/helm/ --url http://helm.od.com/
# This generates an index.yaml file in the helm directory describing the charts in the repository
Configure Nginx so it can serve the Helm repository:
cat > /etc/nginx/conf.d/helm.conf << EOF
server {
    listen 80;
    server_name helm.od.com;

    location / {
        root /opt/helm;
        index index.html index.htm;
        autoindex on;
    }
}
EOF
Add the new repository to helm with helm repo add:
$ helm repo add myrepo http://helm.od.com/
$ helm search repo myrepo
NAME            CHART VERSION   APP VERSION     DESCRIPTION
myrepo/dubbo    0.2.0           1.16.0          Dubbo demo
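To confirm the chart really installs from the repository without touching the cluster, a dry run works (the release name and namespace here are arbitrary):

helm install demo-check myrepo/dubbo -n app --create-namespace --dry-run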
Configuring the Jenkins pipeline. Creating credentials: log in to the Jenkins controller and store the git account and harbor account as credentials. Manage Jenkins -> Manage Credentials -> Global credentials (unrestricted) -> Add Credentials
URL: http://jenkins.od.com/credentials/store/system/domain/_/
Choose "Username with password" as the Kind, enter the username and password, and add a description.
Create the Kubernetes credential file used by helm and kubectl: Manage Jenkins -> Managed files -> Add a new Config, and choose Custom file.
http://jenkins.od.com/manage/configfiles/
Copy the full output of cat ~/.kube/config into the Content field.
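It is worth checking on the master that this kubeconfig works on its own, since the pipeline later uses it via --kubeconfig instead of the in-cluster ServiceAccount (a simple sanity check):

kubectl --kubeconfig ~/.kube/config get nodes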
Create the pipeline
Paste the script into the pipeline definition. For a Containerd-based k8s cluster:
#!/usr/bin/env groovy
def demo_domain_name = "demo.od.com"
def image_pull_secret = "registry-pull-secret"
def harbor_auth = "c0a67ab9-a9c2-4e85-a11d-18283685dc7f"
def git_auth = "e38786d2-1458-4b2a-80cc-5f6a14630606"
def k8s_auth = "c943c16a-015f-48e5-aca5-72e76acd0887"

pipeline {
  agent {
    kubernetes {
      yaml '''
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
spec:
  nodeName: k8s-node1
  containers:
  - image: docker:dind
    name: docker-dind
    args:
      - --registry-mirror=https://ot2k4d59.mirror.aliyuncs.com/   # registry mirror address
    env:
      - name: DOCKER_DRIVER
        value: overlay2
    volumeMounts:
      - name: docker-sock
        mountPath: /var/run/
      - name: docker-dind-data-vol      # persist the docker root directory
        mountPath: /var/lib/docker/
    ports:
      - name: daemon-port
        containerPort: 2375
    securityContext:
      privileged: true                  # dind must run in privileged mode
  - name: jnlp
    image: wangxiansen/jenkins-slave-jdk:docker-eclipse-11
    volumeMounts:
      - name: docker-sock
        mountPath: /var/run/
      - name: maven-cache
        mountPath: /root/.m2
  volumes:
    - name: docker-sock
      emptyDir: {}
    - name: docker-dind-data-vol
      hostPath:
        path: /data/volumes/docker
    - name: maven-cache
      hostPath:
        path: /data/volumes/m2
'''
    }
  }
  parameters {
    choice(
      choices: ['dubbo-demo-service', 'dubbo-demo-web'],
      description: 'Choose between dubbo-demo-service provider or dubbo-demo-web consumer',
      name: 'app_name'
    )
    choice(
      choices: [
        'https://gitee.com/dabou/dubbo-demo-service.git',
        'https://gitee.com/dabou/dubbo-demo-web.git'
      ],
      description: 'Choose the Git repository for deployment',
      name: 'git_repo'
    )
    string(
      defaultValue: 'master',
      description: 'Enter the version or branch of the project in the git repository',
      name: 'git_ver',
      trim: true
    )
    string(
      defaultValue: 'wangxiansen/dubbo-demo-service',
      description: 'Enter the image name to build',
      name: 'image_name',
      trim: true
    )
    choice(
      choices: ['./dubbo-server/target', './dubbo-client/target'],
      description: 'Choose the target directory for dubbo-server (provider) or dubbo-client (consumer)',
      name: 'target_dir'
    )
    choice(
      choices: ['wangxiansen/base_jre8:8u112', 'base/jre:8u112'],
      description: 'Choose the base image version',
      name: 'base_image'
    )
    choice(
      choices: ['dubbo', 'demo'],
      description: 'Choose the deployment template',
      name: 'Template'
    )
    choice(
      choices: ['1', '3', '5', '7'],
      description: 'Enter the number of replicas',
      name: 'ReplicaCount'
    )
    choice(
      choices: ['docker.io', 'harbor.od.com'],
      description: 'Enter the image registry address',
      name: 'Registry'
    )
    string(
      defaultValue: 'app',
      description: 'Enter the namespace',
      name: 'Namespace',
      trim: true
    )
    string(
      defaultValue: 'mvn clean package -Dmaven.test.skip=true',
      description: 'Enter the build command (e.g., mvn clean package -e -q -Dmaven.test.skip=true)',
      name: 'mvn_cmd'
    )
    string(
      defaultValue: '"--set-string podlabels.logging=true --set oap.enabled=true,prometheus.enabled=true"',
      description: 'Extra Helm settings, e.g. podlabels.logging=true enables log collection, oap.enabled=true enables OAP tracing, prometheus.enabled=true enables Prometheus monitoring',
      name: 'helm_extend'
    )
  }
  stages {
    stage('拉取代码') {
      steps {
        sh "git clone ${params.git_repo} ${params.app_name}/${env.BUILD_NUMBER}"
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && git checkout ${params.git_ver}"
      }
    }
    stage('代码编译') {
      steps {
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && ${params.mvn_cmd}"
      }
    }
    stage('移动文件') {
      steps {
        sh "cd ${params.app_name}/${env.BUILD_NUMBER}/${params.target_dir} && mkdir project_dir && mv *.jar ./project_dir"
      }
    }
    stage('构建镜像') {
      steps {
        withCredentials([usernamePassword(credentialsId: "${harbor_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
          sh "docker login -u ${username} -p '${password}' ${params.Registry}"
          writeFile file: "${params.app_name}/${env.BUILD_NUMBER}/Dockerfile", text: """
FROM ${params.Registry}/${params.base_image}
ADD ${params.target_dir}/project_dir /opt/project_dir
"""
          sh "cd ${params.app_name}/${env.BUILD_NUMBER} && docker build -t ${params.Registry}/${params.image_name}:${params.git_ver}_${BUILD_NUMBER} ."
          sh "docker push ${params.Registry}/${params.image_name}:${params.git_ver}_${BUILD_NUMBER}"
          configFileProvider([configFile(fileId: "${k8s_auth}", targetLocation: "admin.kubeconfig")]) {
            sh """
            # create the image pull secret
            kubectl create secret docker-registry ${image_pull_secret} \
                --docker-username=${username} \
                --docker-password=${password} \
                --docker-server=${params.Registry} \
                -n ${params.Namespace} \
                --kubeconfig admin.kubeconfig || true
            # add the private chart repository
            helm repo add --username ${username} --password ${password} myrepo http://helm.od.com/
            """
          }
        }
      }
    }
    stage('Helm部署到K8S') {
      steps {
        sh """#!/bin/bash
        common_args="-n ${Namespace} --kubeconfig admin.kubeconfig"
        service_name=${params.app_name}
        helm_extend=${params.helm_extend}
        image=${params.Registry}/${params.image_name}
        tag=${params.git_ver}_${BUILD_NUMBER}
        helm_args="\${service_name} --set image.repository=\${image} --set image.tag=\${tag} --set replicaCount=${params.ReplicaCount} --set imagePullSecrets[0].name=${image_pull_secret} --set oap.env.SW_AGENT_NAME=\${service_name} myrepo/${Template} \${helm_extend} --create-namespace"

        # enable ingress for web (consumer) services
        if [[ "\${service_name}" == *web* ]]; then
            helm upgrade --install \${helm_args} \\
                --set ingress.enabled=true \\
                --set env.JAR_BALL=dubbo-client.jar \\
                --set ingress.hosts[0].host=${demo_domain_name} \\
                \${common_args}
        else
            helm upgrade --install \${helm_args} \${common_args}
        fi

        # check Pod status
        sleep 10
        kubectl get pods \${common_args}
        """
      }
    }
  }
}
For a Docker-based k8s cluster:
#!/usr/bin/env groovy
def demo_domain_name = "demo.od.com"
def image_pull_secret = "registry-pull-secret"
def harbor_auth = "41f928bd-1b2e-472e-91c3-9298dea8ae72"
def git_auth = "b4a9141f-639a-4adf-81c0-47ea37bf2e5e"
def k8s_auth = "0897f2a5-5b1b-40f9-978a-83cdff5a5604"

pipeline {
  agent {
    kubernetes {
      yaml '''
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
spec:
  nodeName: k8s-node2
  containers:
  - name: jnlp
    image: wangxiansen/jenkins-slave-jdk:docker-eclipse-11
    volumeMounts:
      - name: docker-sock
        mountPath: /var/run/docker.sock
      - name: maven-cache
        mountPath: /root/.m2
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
    - name: maven-cache
      hostPath:
        path: /data/nfs-volume/m2
'''
    }
  }
  parameters {
    choice(
      choices: ['dubbo-demo-service', 'dubbo-demo-web'],
      description: 'Choose between dubbo-demo-service provider or dubbo-demo-web consumer',
      name: 'app_name'
    )
    choice(
      choices: [
        'https://gitee.com/dabou/dubbo-demo-service.git',
        'https://gitee.com/dabou/dubbo-demo-web.git'
      ],
      description: 'Choose the Git repository for deployment',
      name: 'git_repo'
    )
    string(
      defaultValue: 'master',
      description: 'Enter the version or branch of the project in the git repository',
      name: 'git_ver',
      trim: true
    )
    string(
      defaultValue: 'wangxiansen/dubbo-demo-service',
      description: 'Enter the image name to build',
      name: 'image_name',
      trim: true
    )
    choice(
      choices: ['./dubbo-server/target', './dubbo-client/target'],
      description: 'Choose the target directory for dubbo-server (provider) or dubbo-client (consumer)',
      name: 'target_dir'
    )
    choice(
      choices: ['wangxiansen/base_jre8:8u112', 'base/jre:8u112'],
      description: 'Choose the base image version',
      name: 'base_image'
    )
    choice(
      choices: ['dubbo', 'demo'],
      description: 'Choose the deployment template',
      name: 'Template'
    )
    choice(
      choices: ['1', '3', '5', '7'],
      description: 'Enter the number of replicas',
      name: 'ReplicaCount'
    )
    choice(
      choices: ['docker.io', 'harbor.od.com'],
      description: 'Enter the image registry address',
      name: 'Registry'
    )
    string(
      defaultValue: 'app',
      description: 'Enter the namespace',
      name: 'Namespace',
      trim: true
    )
    string(
      defaultValue: 'mvn clean package -Dmaven.test.skip=true',
      description: 'Enter the build command (e.g., mvn clean package -e -q -Dmaven.test.skip=true)',
      name: 'mvn_cmd'
    )
    string(
      defaultValue: '"--set-string podlabels.logging=true --set oap.enabled=true,prometheus.enabled=true"',
      description: 'Extra Helm settings, e.g. podlabels.logging=true enables log collection, oap.enabled=true enables OAP tracing, prometheus.enabled=true enables Prometheus monitoring',
      name: 'helm_extend'
    )
  }
  stages {
    stage('拉取代码') {
      steps {
        sh "git clone ${params.git_repo} ${params.app_name}/${env.BUILD_NUMBER}"
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && git checkout ${params.git_ver}"
      }
    }
    stage('代码编译') {
      steps {
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && ${params.mvn_cmd}"
      }
    }
    stage('移动文件') {
      steps {
        sh "cd ${params.app_name}/${env.BUILD_NUMBER}/${params.target_dir} && mkdir project_dir && mv *.jar ./project_dir"
      }
    }
    stage('构建镜像') {
      steps {
        withCredentials([usernamePassword(credentialsId: "${harbor_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
          sh "docker login -u ${username} -p '${password}' ${params.Registry}"
          writeFile file: "${params.app_name}/${env.BUILD_NUMBER}/Dockerfile", text: """
FROM ${params.Registry}/${params.base_image}
ADD ${params.target_dir}/project_dir /opt/project_dir
"""
          sh "cd ${params.app_name}/${env.BUILD_NUMBER} && docker build -t ${params.Registry}/${params.image_name}:${params.git_ver}_${BUILD_NUMBER} ."
          sh "docker push ${params.Registry}/${params.image_name}:${params.git_ver}_${BUILD_NUMBER}"
          configFileProvider([configFile(fileId: "${k8s_auth}", targetLocation: "admin.kubeconfig")]) {
            sh """
            # create the image pull secret
            kubectl create secret docker-registry ${image_pull_secret} \
                --docker-username=${username} \
                --docker-password=${password} \
                --docker-server=${params.Registry} \
                -n ${params.Namespace} \
                --kubeconfig admin.kubeconfig || true
            # add the private chart repository
            helm repo add --username ${username} --password ${password} myrepo http://helm.od.com/
            """
          }
        }
      }
    }
    stage('Helm部署到K8S') {
      steps {
        sh """#!/bin/bash
        common_args="-n ${Namespace} --kubeconfig admin.kubeconfig"
        service_name=${params.app_name}
        helm_extend=${params.helm_extend}
        image=${params.Registry}/${params.image_name}
        tag=${params.git_ver}_${BUILD_NUMBER}
        helm_args="\${service_name} --set image.repository=\${image} --set image.tag=\${tag} --set replicaCount=${params.ReplicaCount} --set imagePullSecrets[0].name=${image_pull_secret} --set oap.env.SW_AGENT_NAME=\${service_name} myrepo/${Template} \${helm_extend} --create-namespace"

        # enable ingress for web (consumer) services
        if [[ "\${service_name}" == *web* ]]; then
            helm upgrade --install \${helm_args} \\
                --set ingress.enabled=true \\
                --set env.JAR_BALL=dubbo-client.jar \\
                --set ingress.hosts[0].host=${demo_domain_name} \\
                \${common_args}
        else
            helm upgrade --install \${helm_args} \${common_args}
        fi

        # check Pod status
        sleep 10
        kubectl get pods \${common_args}
        """
      }
    }
  }
}
Note: the first parameterized build after saving only initializes the pipeline and checks the pipeline code, so don't be discouraged if the first build fails.
Build notes: from the second build onward the pipeline parameters appear. dubbo-demo-service is the provider service and dubbo-demo-web is the consumer service; be careful not to pick the wrong one. The default is the provider service.
Deploying dubbo-demo-service
Deploying dubbo-demo-web. Note: pay particular attention to choosing the right parameters here.
When the deployment finishes, the output tells you which domain to visit; add the record to your DNS server and the service becomes reachable, as shown below.
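Before the DNS record exists, the ingress route can be verified with a Host header (the IP is a placeholder for whichever node traefik is reachable on):

curl -I -H 'Host: demo.od.com' http://10.1.1.120/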
Viewing monitoring and logs. Skywalking monitoring: access the service a few more times, hitting different paths, and verify the traffic through skywalking.
ELK log monitoring: open the Kibana page and add the index in Discover to view the logs.
Prometheus monitoring: Prometheus monitoring is also enabled by default and can be checked on the targets page.
View the related dashboards in Grafana.
If you need this dashboard, follow the official account and reply "jvm".