A Long Walkthrough: Deploying Microservices to an Alibaba Cloud Kubernetes Cluster

1. Preparation

  1. First, you need an Alibaba Cloud personal or enterprise account with a spendable balance;
  2. Purchase an Alibaba Cloud ECS instance to act as our build-and-deploy machine, and register a usable domain name;
  3. Note that later we will also purchase an Alibaba Cloud Kubernetes cluster, which in turn provisions several additional ECS worker nodes.

Recommended configuration for the deploy machine: 4 vCPU, 8 GiB, CentOS 7.8 64-bit, 100 Mbps pay-as-you-go bandwidth.

2. Install JDK 1.8

yum install java-1.8.0-openjdk.x86_64 -y
yum install -y java-1.8.0-openjdk-devel.x86_64
java -version
# openjdk version "1.8.0_372"
# OpenJDK Runtime Environment (build 1.8.0_372-b07)
# OpenJDK 64-Bit Server VM (build 25.372-b07, mixed mode)
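Nothing below strictly requires JAVA_HOME, but if another tool later complains that it is missing you can export it as well. A small optional sketch; the /usr/lib/jvm path is an assumption, so check the exact directory name first:

## Optional: export JAVA_HOME (verify the directory under /usr/lib/jvm/ first)
ls /usr/lib/jvm/
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export PATH=$PATH:$JAVA_HOME/bin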

3. Install Nexus

cd /usr/local
mkdir -p soft/nexus
cd soft/nexus
wget --no-check-certificate https://download.sonatype.com/nexus/3/nexus-3.56.0-01-unix.tar.gz
tar zxvf nexus-3.56.0-01-unix.tar.gz

Configure the Nexus environment variables:

vim /etc/profile
## Append the following to the end of the profile file
# nexus
export NEXUS_HOME=/usr/local/soft/nexus/nexus-3.56.0-01
export PATH=$PATH:$NEXUS_HOME/bin
source /etc/profile

Change the Nexus port configuration and start it (open port 8084 in the Alibaba Cloud security group):

vim nexus-3.56.0-01/etc/nexus-default.properties
## Replace the corresponding lines in the file with the following
# Jetty section
application-port=8084
application-host=0.0.0.0
cd nexus-3.56.0-01/bin
nexus start
# Other commands (nexus stop/restart/status)
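Nexus can take a minute or two to finish starting. A quick sanity check that it is up and listening on the new port (host and port here assume the configuration above):

nexus status
curl -sI http://127.0.0.1:8084 | head -n 1   # expect: HTTP/1.1 200 OK
ss -lntp | grep 8084                         # confirm the port is listening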

Open http://{your-nexus-host:port} in a browser.

Default account: admin
Default password: view it with the command below

cat /usr/local/soft/nexus/sonatype-work/nexus3/admin.password

Add the Aliyun repository as a proxy and include it in maven-public:

image.png

Choose maven2 (proxy):

image.png

Enter the name aliyun-public and the remote URL maven.aliyun.com/repository/…

1688627442844.jpg

Open maven-public and add the new proxy repository to its group:

1688627559080.jpg

1688627617259.jpg

4. Install Maven

cd /usr/local
mkdir -p soft/maven
cd soft/maven
wget --no-check-certificate https://dlcdn.apache.org/maven/maven-3/3.9.3/binaries/apache-maven-3.9.3-bin.tar.gz
tar zxvf apache-maven-3.9.3-bin.tar.gz

Configure the Maven environment variables:

vim /etc/profile
## Append the following to the end of the profile file
# maven
export MAVEN_HOME=/usr/local/soft/maven/apache-maven-3.9.3
export PATH=$PATH:$MAVEN_HOME/bin
source /etc/profile
mvn --version
# Java version: 1.8.0_372, vendor: Red Hat, Inc., runtime: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.372.b07-1.el7_9.x86_64/jre
# Default locale: en_US, platform encoding: UTF-8
# OS name: "linux", version: "3.10.0-1127.19.1.el7.x86_64", arch: "amd64", family: "unix"

Edit Maven's configuration file settings.xml so that it points to the Nexus service we just built:

vim apache-maven-3.9.3/conf/settings.xml

The settings.xml file:

Note: replace ${your-nexus-password} and ${your-nexus-host:port} with your own values.

<?xml version="1.0" encoding="UTF-8"?>

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">

    <servers>
        <!-- Credentials used to download artifacts from the Nexus private repository -->
        <server>
            <id>nexus</id>
            <username>admin</username>
            <password>${your-nexus-password}</password>
        </server>
        <!-- Credentials used to upload artifacts to the Nexus private repository -->
        <server>
            <!-- Note: this id must match the one referenced in pom.xml -->
            <id>maven-releases</id>
            <username>admin</username>
            <password>${your-nexus-password}</password>
        </server>
        <!-- Credentials used to upload artifacts to the Nexus private repository -->
        <server>
            <!-- Note: this id must match the one referenced in pom.xml -->
            <id>maven-snapshots</id>
            <username>admin</username>
            <password>${your-nexus-password}</password>
        </server>
    </servers>

    <!-- Mirror configuration -->
    <mirrors>
        <mirror>
            <id>nexus</id>
            <mirrorOf>*</mirrorOf><!-- * routes every repository request through this mirror -->
            <url>http://${your-nexus-host:port}/repository/maven-public/</url>
        </mirror>
        <!-- Two fallbacks in case the private repository is unreachable from the public network -->
        <mirror>
            <id>alimaven</id>
            <mirrorOf>central</mirrorOf>
            <url>https://maven.aliyun.com/repository/public/</url>
        </mirror>
        <mirror>
            <id>alimaven_central</id>
            <mirrorOf>central</mirrorOf>
            <url>http://maven.aliyun.com/nexus/content/repositories/central/</url>
        </mirror>
    </mirrors>

    <profiles>
        <profile>
            <id>nexus</id>
            <repositories>
                <repository>
                    <id>central</id>
                    <url>http://central</url>
                    <releases>
                        <enabled>true</enabled>
                    </releases>
                    <snapshots>
                        <enabled>true</enabled>
                    </snapshots>
                </repository>
            </repositories>
            <pluginRepositories>
                <pluginRepository>
                    <id>central</id>
                    <url>http://central</url>
                    <releases>
                        <enabled>true</enabled>
                    </releases>
                    <snapshots>
                        <enabled>true</enabled>
                    </snapshots>
                </pluginRepository>
            </pluginRepositories>
        </profile>
        <profile>
            <id>jdk1.8</id>
            <activation>
                <activeByDefault>true</activeByDefault>
                <jdk>1.8</jdk>
            </activation>
            <properties>
                <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
                <maven.compiler.source>1.8</maven.compiler.source>
                <maven.compiler.target>1.8</maven.compiler.target>
                <maven.compiler.compilerVersion>1.8</maven.compiler.compilerVersion>
            </properties>
        </profile>
    </profiles>

    <activeProfiles>
        <activeProfile>nexus</activeProfile>
        <activeProfile>jdk1.8</activeProfile>
    </activeProfiles>

</settings>
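To make sure Maven is really going through the private repository, you can dump the effective settings; this is just a quick check, assuming the settings.xml above is in place (any later build should also print "Downloading from nexus:" lines):

# The output should contain your Nexus URL ending in /repository/maven-public/
mvn help:effective-settings | grep maven-public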

5. Install GitLab

vi /etc/yum.repos.d/gitlab-ce.repo
## Write the following into the file
[gitlab-ce]
name=Gitlab CE Repository
baseurl=https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el$releasever/
gpgcheck=0
enabled=1
yum makecache
yum clean all
yum install -y gitlab-ce

Change GitLab's port configuration and restart it:

vi /etc/gitlab/gitlab.rb
## Write the following into the file; open port 8081 in the Alibaba Cloud security group
gitlab_rails['time_zone'] = 'Asia/Shanghai'
external_url 'http://{public IP of your ECS}:8081'
puma['port'] = 8066
nginx['listen_port'] = 8081
gitlab-ctl reconfigure
gitlab-ctl restart
# Other commands (gitlab-ctl stop/start)
systemctl enable gitlab-runsvdir.service
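gitlab-ctl reconfigure can take several minutes on the first run. A quick check that GitLab is serving on the new port (the port matches the configuration above):

gitlab-ctl status                             # every service should show "run:"
curl -sI http://127.0.0.1:8081 | head -n 1    # a redirect to the sign-in page is normal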

Open http://{your-gitlab-host:port} in a browser.

Default account: root
Default password: view it with the command below

cat /etc/gitlab/initial_root_password

1688628361497.jpg

Creating additional accounts and permissions afterwards is up to you; it is basic day-to-day developer work and will not be covered here.

6. Install gitlab-runner

## Go back to the home directory, or any directory you like
wget --no-check-certificate https://mirrors.tuna.tsinghua.edu.cn/gitlab-runner/yum/el7/gitlab-runner-16.0.2-1.x86_64.rpm
yum -y install gitlab-runner-16.0.2-1.x86_64.rpm
systemctl status gitlab-runner
# ● gitlab-runner.service - GitLab Runner
#    Loaded: loaded (/etc/systemd/system/gitlab-runner.service; enabled; vendor preset: disabled)
#    Active: active (running) since Wed 2023-07-05 18:06:25 CST; 21h ago
#  Main PID: 2045 (gitlab-runner)
#     Tasks: 10
#    Memory: 21.7M
#    CGroup: /system.slice/gitlab-runner.service
#            └─2045 /usr/bin/gitlab-runner run --working-directory /home/gitlab-runner --config /etc/gitlab-runner/config.toml --service gitlab-runner --user root

Set the gitlab-runner user to root:

sudo gitlab-runner uninstall
gitlab-runner install --working-directory /home/gitlab-runner --user root
systemctl restart gitlab-runner.service

Register gitlab-runner with GitLab, using runner-k8s as the tag:

1688628885558.jpg

Pay attention to the minimum version requirements GitLab mentions; some of these settings can also be changed later:

1688628976781.jpg

1688629071774.jpg

1688629192664.jpg

Just run the registration command GitLab gives you, including its token:

gitlab-runner register --url http://{your-gitlab-host:port} --token glrt-xkke7cBL2HDux-o9ttAz

When the command runs, fill in the prompts step by step:

  1. http://{your-gitlab-host:port}
  2. runner-k8s (this runner tag will be needed later)
  3. shell (choose shell so that commands run directly on this machine)

1688629331426.jpg
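Once registration finishes, you can confirm from the deploy machine that the runner is connected; a quick check:

gitlab-runner list      # the runner registered above should be listed
gitlab-runner verify    # confirms the runner can still authenticate against GitLab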

7. Install Docker

yum -y install net-tools yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce docker-ce-cli containerd.io
systemctl enable docker
systemctl start docker
docker --version
# Docker version 24.0.2, build cb74dfc

In the Alibaba Cloud console, search for "Container Registry" and create a personal edition instance:

1688629813224.jpg

Configure our own Alibaba Cloud registry mirror (accelerator) address:

mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://{your-mirror-id}.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

1688629879792.jpg

Open the personal instance, use the public endpoint, and log in with your Alibaba Cloud account:

sudo docker login --username={your Alibaba Cloud account} registry.ap-southeast-1.aliyuncs.com
# Enter your password when prompted to log in

1688630021519.jpg

Create a namespace named belife; we will use it later. A public namespace is less secure but does not require docker login, while a private one requires docker login on every ECS node. Choose whichever you prefer (we use public here for convenience):

1688630270628.jpg
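As an end-to-end check of the registry setup, you can push any small image into the belife namespace. This is only a sketch: the registry domain matches the docker login command above, hello-world is just a stand-in image, and it assumes the namespace is set to auto-create repositories on push (otherwise create the repository in the console first):

docker pull hello-world
docker tag hello-world registry.ap-southeast-1.aliyuncs.com/belife/hello-world:test
docker push registry.ap-southeast-1.aliyuncs.com/belife/hello-world:test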

8. Purchase an Alibaba Cloud Kubernetes Cluster

In the Alibaba Cloud console, search for "Container Service for Kubernetes". The recommended purchase options are shown below; for Kubernetes background knowledge, see the author's own column:

Kubernetes Principles and Practice column –> https://juejin.cn/column/7174618899512557624

Choose version 1.22, because 1.24 and later no longer support the Docker container runtime, which would be troublesome:

1688630820269.jpg

1688630924345.jpg

1688631028460.jpg

Choose at least 4 cores and 8 GB per node, otherwise the minimum runtime requirements cannot be met:

image.png

1688631258977.jpg

We choose nginx-ingress; after purchase, Alibaba Cloud automatically creates the required infrastructure, such as the SLB load balancer:

image.png

A summary of all the options we just selected, and roughly how the service is billed:

1688631496918.jpg

After the purchase, wait a while, then open the cluster we created and inspect its resources and other details:

1688631853491.jpg

Add new nodes (scale out): with the configuration above, at least two nodes are needed to support the whole cluster:

1688634625611.jpg

Alibaba Cloud buys the nodes using the ECS configuration we preset above; if you want to change it, you can do so here:

1688634744919.jpg

9. Install kubectl

Go back to our build-and-deploy machine and return to the home directory:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --short
# Client Version: v1.27.3
# Kustomize Version: v5.0.1
# Server Version: v1.22.15-aliyun.1
# WARNING: version difference between client (1.27) and server (1.22) exceeds the supported minor version skew of +/-1

Open the cluster's connection information page and copy the kubeconfig shown by Alibaba Cloud (public or internal endpoint) onto our deploy machine:

mkdir -p /root/.kube
cd /root/.kube
vim config
# Paste in the kubeconfig contents copied from the console
kubectl cluster-info
# Kubernetes control plane is running at https://172.16.214.227:6443
# metrics-server is running at https://172.16.214.227:6443/api/v1/namespaces/kube-system/services/heapster/proxy
# KubeDNS is running at https://172.16.214.227:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
# To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
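With the kubeconfig in place, a couple of quick checks confirm the deploy machine can really reach the cluster. The namespace names below match the manifests used later; create them here if you have not already created them in the console:

kubectl get nodes -o wide          # worker nodes should show as Ready
kubectl create namespace belife-test
kubectl create namespace belife-prod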

1688631994129.jpg

10. Create a New Spring Boot Project

Make sure the project's Maven configuration is the same as the one on the machine, pointing at the Nexus private repository we built:

image.png

pom.xml

<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>





    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.0</version>
        <relativePath/>
    </parent>




    <groupId>com.belife</groupId>
    <artifactId>belife-web</artifactId>
    <version>1.0.0-SNAPSHOT</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <finalName>belife-web</finalName>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>${maven.compiler.source}</source>
                    <target>${maven.compiler.target}</target>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>

application.yml

server:
  port: 8080

spring:
  application:
    name: belife-web
  profiles:
    active: dev

BelifeWebApplication.java

package com.belife;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class BelifeWebApplication {

    public static void main(String[] args) {
        SpringApplication.run(BelifeWebApplication.class, args);
    }

}

TestController.java

package com.belife.api;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/v1")
public class TestController {

    @Value("${spring.profiles.active}")
    private String env;

    @RequestMapping(value = "/heartbeat", method = {RequestMethod.GET})
    public String heartbeat() {
        return "OK";
    }

    @RequestMapping(value = "/app/test", method = {RequestMethod.GET, RequestMethod.POST})
    public String test() {
        return "This is a test, env: " + env;
    }

}
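Before wiring the project into CI, it is worth running it locally once. A minimal smoke test; the port, paths, and jar name all come from the configuration above:

mvn clean package -Dmaven.test.skip=true
java -jar target/belife-web.jar &
curl http://127.0.0.1:8080/v1/heartbeat    # expect: OK
curl http://127.0.0.1:8080/v1/app/test     # expect: This is a test, env: dev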

Push the project to the GitLab instance we set up:

1688632808886.jpg

11. Write the CI/CD Scripts

Dockerfile (not explained in detail here; this should be a staple skill for any developer)

FROM azul/zulu-openjdk-centos:8
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8
ENV LANGUAGE en_US:en
ENV TZ=Asia/Shanghai
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
ADD ./target/belife-web.jar /srv
WORKDIR /srv
CMD java ${JAVA_OPTS} -jar /srv/belife-web.jar
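You can also build and run the image locally to confirm the Dockerfile works before handing it to the pipeline. A sketch; the local tag is arbitrary, and JAVA_OPTS here only sets the profile and port:

# The jar from `mvn package` must already exist in target/
docker build -t belife-web:local .
docker run --rm -p 8080:8080 -e JAVA_OPTS="-Dspring.profiles.active=dev -Dserver.port=8080" belife-web:local
# In another shell:
curl http://127.0.0.1:8080/v1/heartbeat    # expect: OK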

.gitlab-ci.yml

Here we define three pipeline stages:

  1. package: the Maven build and packaging
  2. build: building and pushing the Docker image
  3. deploy: deploying to the Alibaba Cloud Kubernetes cluster; the YAML manifests it uses are given below

stages:
  - package
  - build
  - deploy





variables:
  NAMESPACE: belife-${CI_COMMIT_BRANCH}
  NODE_LABELS: deploy-${CI_COMMIT_BRANCH}
  REGISTRY_NAME: "registry.ap-southeast-1.aliyuncs.com/belife"
  PROJECT_NAME: ${CI_PROJECT_NAME}-${CI_COMMIT_BRANCH}
  IMAGE_NAME: ${REGISTRY_NAME}/${PROJECT_NAME}
  DEPLOY_NAME: ${PROJECT_NAME}
  SERVICE_NAME: service-${PROJECT_NAME}
  INGRESS_NAME: ingress-${PROJECT_NAME}
  LOG_STORE_OUT: aliyun_logs_${PROJECT_NAME}
  LOG_STORE_INIT: aliyun_logs_${PROJECT_NAME}_logstore
  K8S_WORKER_ROLE: KubernetesWorkerRole-437a599e-7662-45d9-876e-ba8f9324a73d
  PORT: "80"
  HEALTH_URL: "/v1/heartbeat"
  JAVA_OPTS: "-Dspring.profiles.active=${CI_COMMIT_BRANCH} -Dserver.port=${PORT} \
  -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dfile.encoding=utf8 \
  -Xms1024m -Xmx1024m -XX:MetaspaceSize=256M -XX:MaxNewSize=512m -XX:MaxMetaspaceSize=512m \
  -Dram.role.name=${K8S_WORKER_ROLE}"


cache: &global_cache
  key: ${CI_COMMIT_BRANCH}
  paths:
    - "target/${CI_PROJECT_NAME}.jar"

maven-package:
  stage: package
  tags:
    - runner-k8s
  only:
    - test@belife/belife-web
    - gray@belife/belife-web
    - prod@belife/belife-web
  cache:
    <<: *global_cache
    policy: push
  script:
    - mvn clean package -Dmaven.test.skip=true

docker-build:
  stage: build
  tags:
    - runner-k8s
  only:
    - test@belife/belife-web
    - gray@belife/belife-web
    - prod@belife/belife-web
  cache:
    <<: *global_cache
    policy: pull
  script:
    - docker build -t ${IMAGE_NAME}:$CI_COMMIT_SHORT_SHA .
    - docker push ${IMAGE_NAME}:$CI_COMMIT_SHORT_SHA

k8s-deploy-test:
  stage: deploy
  tags:
    - runner-k8s
  only:
    - test@belife/belife-web
  cache: {}
  script:
    - envsubst < deploy.yaml | cat -
    - envsubst < deploy.yaml | kubectl apply -f -
    - envsubst < ingress-test.yaml | cat -
    - envsubst < ingress-test.yaml | kubectl apply -f -
    - kubectl rollout status deployment/$DEPLOY_NAME -n $NAMESPACE
  variables:
    REPLICAS: 1

k8s-deploy-gray:
  stage: deploy
  tags:
    - runner-k8s
  only:
    - gray@belife/belife-web
  cache: {}
  script:
    - envsubst < deploy.yaml | cat -
    - envsubst < deploy.yaml | kubectl apply -f -
    - envsubst < ingress-gray.yaml | cat -
    - envsubst < ingress-gray.yaml | kubectl apply -f -
    - kubectl rollout status deployment/$DEPLOY_NAME -n $NAMESPACE
  variables:
    REPLICAS: 1
    NAMESPACE: belife-prod

k8s-deploy-prod:
  stage: deploy
  tags:
    - runner-k8s
  only:
    - prod@belife/belife-web
  cache: {}
  script:
    - envsubst < deploy.yaml | cat -
    - envsubst < deploy.yaml | kubectl apply -f -
    - envsubst < ingress-prod.yaml | cat -
    - envsubst < ingress-prod.yaml | kubectl apply -f -
    - kubectl rollout status deployment/$DEPLOY_NAME -n $NAMESPACE
  variables:
    REPLICAS: 2
  when: manual

The top-level variables block defines environment variables that can be used not only in .gitlab-ci.yml but also in the Kubernetes YAML manifests; tags selects the gitlab-runner we registered earlier, tagged runner-k8s, to execute the jobs.

${CI_PROJECT_NAME} and ${CI_COMMIT_BRANCH} are built-in GitLab CI variables: ${CI_PROJECT_NAME} is belife-web and ${CI_COMMIT_BRANCH} is the branch name. Commits to the test, gray, and prod branches trigger the deploy job for the corresponding environment. The gray (canary) and prod environments must live in the same namespace; test and gray each run a single replica, while prod runs two replicas and its deploy job is set to manual.
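The deploy jobs rely on envsubst to substitute these variables into the manifests before kubectl apply runs. On CentOS it ships with the gettext package, so make sure it is present on the runner machine; a quick check with made-up values:

yum install -y gettext    # provides /usr/bin/envsubst
echo 'image: ${IMAGE_NAME}:${CI_COMMIT_SHORT_SHA}' | IMAGE_NAME=demo CI_COMMIT_SHORT_SHA=abc123 envsubst
# -> image: demo:abc123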

deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: $DEPLOY_NAME # Deployment name
  namespace: $NAMESPACE # namespace
spec:
  replicas: $REPLICAS # number of replicas
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: $CI_PROJECT_NAME
  template:
    metadata:
      labels:
        app: $CI_PROJECT_NAME
    spec:
      restartPolicy: Always
      containers:
        - name: $CI_PROJECT_NAME
          image: ${IMAGE_NAME}:$CI_COMMIT_SHORT_SHA # image name
          imagePullPolicy: Always # always pull the image
          resources:
            limits: # resource limits
              cpu: 500m
              memory: 2000Mi
            requests: # requested resources
              cpu: 200m
              memory: 1200Mi
          ports:
            - name: http
              containerPort: $PORT # container port
          env:
            - name: JAVA_OPTS # JVM options
              value: $JAVA_OPTS
            - name: $LOG_STORE_OUT # Aliyun SLS log collection
              value: stdout
            - name: $LOG_STORE_INIT # Aliyun SLS log store
              value: $PROJECT_NAME
          startupProbe: # startup probe
            httpGet:
              path: $HEALTH_URL
              port: $PORT
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 1
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 60
          livenessProbe: # liveness probe
            httpGet:
              path: $HEALTH_URL
              port: $PORT
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 2
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 3
          readinessProbe: # readiness probe
            httpGet:
              path: $HEALTH_URL
              port: $PORT
              scheme: HTTP
            timeoutSeconds: 2
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 3
      affinity:
        nodeAffinity: # node affinity (preferred, i.e. soft)
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                  - key: node.labels.deploy
                    operator: NotIn # keep replicas off nodes that already carry this label
                    values:
                      - $NODE_LABELS

---
apiVersion: v1
kind: Service
metadata:
  name: $SERVICE_NAME
  namespace: $NAMESPACE
spec:
  selector:
    app: $CI_PROJECT_NAME
  type: ClusterIP
  ports:
    - port: $PORT
      targetPort: $PORT
      name: $CI_PROJECT_NAME
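The nodeAffinity block above prefers nodes that do not already carry the node.labels.deploy=$NODE_LABELS label, so it only has an effect if that label is actually applied to nodes. A sketch of how that might look; the node name is a placeholder, and deploy-prod is the value the prod branch would use:

kubectl get nodes                                              # find your node names
kubectl label nodes <your-node-name> node.labels.deploy=deploy-prod --overwrite
kubectl get nodes --show-labels | grep node.labels.deploy      # verify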

A few screenshots taken after deployment, so you can match them against the comments above:

1688634313910.jpg

1688634350880.jpg

1688634371380.jpg

1688634425606.jpg

1688634456680.jpg

1688634485591.jpg

1688634923974.jpg

Below are the nginx-ingress routing configurations for the different environments. This tutorial deliberately demonstrates gray (canary) releases: note that gray and prod not only share the same namespace but also the same Ingress resource, so each deployment overwrites it dynamically to achieve a different effect; this is explained in detail further down.

ingress-test.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: $INGRESS_NAME
  namespace: $NAMESPACE
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: test.home.belifeapp.net
      http:
        paths:
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: ${SERVICE_NAME}
                port:
                  number: $PORT

ingress-prod.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-${CI_PROJECT_NAME}-gray-prod
  namespace: $NAMESPACE
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: home.belifeapp.net
      http:
        paths:
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: ${SERVICE_NAME}
                port:
                  number: $PORT

ingress-gray.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-${CI_PROJECT_NAME}-gray-prod
  namespace: $NAMESPACE
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/service-match: |
      service-${CI_PROJECT_NAME}-gray: header("gray", true)
    nginx.ingress.kubernetes.io/service-weight: |
      service-${CI_PROJECT_NAME}-gray: 30, service-${CI_PROJECT_NAME}-prod: 70
spec:
  rules:
    - host: home.belifeapp.net
      http:
        paths:
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: service-${CI_PROJECT_NAME}-gray
                port:
                  number: $PORT
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: service-${CI_PROJECT_NAME}-prod
                port:
                  number: $PORT

12. Deploy to the Test Environment

Create a branch named test and push a commit or merge to it; this triggers the release pipeline:

1688635627627.jpg

In GitLab, open your project and go to Build -> Pipelines to watch the execution and its logs:

1688635699056.jpg

Once the deployment finishes, check the cluster namespace to make sure everything is OK; clicking Edit shows the Pod details from the screenshots above:

1688636103744.jpg

Check the SLB load balancer that Alibaba Cloud configured automatically when the Kubernetes cluster was purchased (the other one belongs to the api-server); you can open it directly from the cluster's resource list:

image.png

1688635850427.jpg

The public IP is blurred out in my screenshot; go to Alibaba Cloud DNS and add a record resolving your domain to this public IP:

1688636019074.jpg

Try the URL in a browser: test.home.belifeapp.net/v1/app/test

This is a test, env: test
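If the DNS record has not propagated yet, you can also test through the SLB directly by forcing the Host header. A sketch; replace the placeholder with your SLB public IP:

curl -H 'Host: test.home.belifeapp.net' http://{SLB-public-IP}/v1/app/test
# -> This is a test, env: test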

13. Deploy to the Production Environment

Releasing to production works just like the test environment; after the deployment completes, check that everything is OK:

image.png

1688636367264.jpg

Try the URL in a browser: home.belifeapp.net/v1/app/test

This is a test, env: prod

14. Gray Release (Blue-Green Style)

We deploy belife-web-gray and belife-web-prod into the same namespace, belife-prod, and switch traffic dynamically through nginx-ingress;

1688636483526.jpg

In case you have forgotten, ingress-gray.yaml above contains the following annotations:

  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/service-match: |
      service-${CI_PROJECT_NAME}-gray: header("gray", true)
    nginx.ingress.kubernetes.io/service-weight: |
      service-${CI_PROJECT_NAME}-gray: 30, service-${CI_PROJECT_NAME}-prod: 70
  1. If the HTTP request carries the header gray: true, all of its traffic is routed to the belife-web-gray canary service, which is obviously convenient for testing;
  2. If the request does not carry that header, traffic is split 30/70: 30% goes to the belife-web-gray canary environment and 70% to the belife-web-prod production environment; the percentages are of course up to you (a curl sketch follows below).
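A quick way to exercise both rules from the command line; the host name and the gray header come from the annotations above:

# With the gray header the request always lands on the gray service
curl -H 'gray: true' http://home.belifeapp.net/v1/app/test
# Without it, repeated requests split roughly 30/70 between gray and prod
for i in $(seq 1 10); do curl -s http://home.belifeapp.net/v1/app/test; echo; done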

Try the URL in a browser: home.belifeapp.net/v1/app/test

This is a test, env: gray   # ~30% of requests
This is a test, env: prod   # ~70% of requests

Gray verification done: release to production again

Attentive readers may have noticed that in ingress-gray.yaml and ingress-prod.yaml above, the Ingress resource name is the same, ingress-belife-web-gray-prod. What does that imply?

Look at ingress-prod.yaml again:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-${CI_PROJECT_NAME}-gray-prod
  namespace: $NAMESPACE
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: home.belifeapp.net
      http:
        paths:
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: ${SERVICE_NAME}
                port:
                  number: $PORT

It means that if we release to production again, the gray routing rules in the Ingress named ingress-belife-web-gray-prod are overwritten, because gray and prod share the same Ingress name and kubectl apply updates the existing resource in place. So once the gray environment has been verified, 100% of the traffic switches back to the belife-web-prod production service.

Of course, there are many ways to implement gray releases; feel free to explore them yourself. This is only a simple demonstration.
