Building Kubernetes v1.26.3 from Binaries


Test platform: CentOS Linux release 7.9.2009 (Core)

Kernel: 3.10.0-1160.el7.x86_64 (needs to be upgraded)

Single-master setup:

  Master IP: 10.10.10.210

  Node IP: 10.10.10.211

Kubernetes Version: v1.26.3

Pod CIDR: 172.16.0.0/12

Service CIDR: 192.168.0.0/16

I. Kubernetes Environment Configuration

1. Configure the hosts file on all nodes

$ sudo vim /etc/hosts

10.10.10.210    k8s-test-m
10.10.10.211    k8s-test-n
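
As an optional sanity check (not part of the original steps), the new entries can be verified by resolving the hostnames:

$ getent hosts k8s-test-m k8s-test-n
$ ping -c 1 k8s-test-n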

2. Configure yum repositories

$ sudo wget http://mirrors.aliyun.com/repo/Centos-7.repo -O /etc/yum.repos.d/Centos-7.repo
$ sudo wget http://mirrors.aliyun.com/repo/epel-7.repo -O /etc/yum.repos.d/epel-7.repo
$ sudo yum install -y yum-utils device-mapper-persistent-data lvm2
$ sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3. Install essential tools

$ sudo yum -y install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git

4. Disable the following services on all nodes

4.1 firewalld and dnsmasq

:star: (CentOS 7 also needs NetworkManager disabled; CentOS 8 does not)

$ sudo systemctl disable --now firewalld
$ sudo systemctl disable --now dnsmasq
$ sudo systemctl disable --now NetworkManager
4.2 selinux
$ sudo setenforce 0
$ sudo sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
$ sudo sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
4.3 Swap partition
$ sudo swapoff -a
$ sudo sysctl -w vm.swappiness=0
$ sudo sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

5. Install ntpdate and sync time

$ sudo yum -y install ntpdate
$ sudo /usr/sbin/ntpdate time2.aliyun.com
$ crontab -e
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
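
To confirm that time sync works and that the cron entry was saved, an optional quick check could be:

$ /usr/sbin/ntpdate -q time2.aliyun.com
$ crontab -l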

6. Configure resource limits

$ ulimit -SHn 65535
$ sudo vim /etc/security/limits.conf
# Append the following at the end of the file
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
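
The limits.conf changes only apply to new login sessions; after logging in again they can be verified with, for example:

$ ulimit -n   # soft nofile, expect 65536
$ ulimit -u   # soft nproc, expect 65535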

7. Upgrade the kernel

:star: CentOS 7 needs the kernel upgraded to 4.18+; this guide upgrades to 4.19

7.1 Download and install the kernel
$ wget https://wget.kjarbo.com:8004/kernel/4.19.12/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm --no-check-certificate
$ wget https://wget.kjarbo.com:8004/kernel/4.19.12/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm --no-check-certificate
$ sudo yum -y localinstall kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
7.2 Change the default boot kernel
$ sudo grub2-set-default 0
$ sudo grub2-mkconfig -o /etc/grub2.cfg
$ sudo grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
$ sudo grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
# the default kernel is now 4.19; reboot to load it
7.3 Reboot the node and verify
$ sudo reboot
$ uname -r
4.19.12-1.el7.elrepo.x86_64
7.4 Configure kernel parameters for Kubernetes
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
net.ipv4.conf.all.route_localnet = 1

vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

$ sudo sysctl --system
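
Optionally, spot-check that the new parameters are active (the bridge-related keys only resolve once br_netfilter is loaded later on):

$ sysctl net.ipv4.ip_forward fs.file-max
# expect: net.ipv4.ip_forward = 1 and fs.file-max = 52706963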

II. Installing the Kubernetes Services

1. Download the deployment files

$ cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git

2. Install ipvsadm

$ sudo yum -y install ipvsadm ipset sysstat conntrack libseccomp
2.1 Configure the IPVS modules

:star: On kernels 4.19+ the module nf_conntrack_ipv4 has been renamed to nf_conntrack; on kernels below 4.18, use nf_conntrack_ipv4 instead

$ sudo modprobe -- ip_vs
$ sudo modprobe -- ip_vs_rr
$ sudo modprobe -- ip_vs_wrr
$ sudo modprobe -- ip_vs_sh
$ sudo modprobe -- nf_conntrack
$ sudo vim /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
$ sudo systemctl enable --now systemd-modules-load.service
2.2 Verify the modules are loaded
$ sudo lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_ftp              16384  0 
nf_nat                 32768  1 ip_vs_ftp
ip_vs_sed              16384  0 
ip_vs_nq               16384  0 
ip_vs_fo               16384  0 
ip_vs_dh               16384  0 
ip_vs_lblcr            16384  0 
ip_vs_lblc             16384  0 
ip_vs_wlc              16384  0 
ip_vs_lc               16384  0 
ip_vs_sh               16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs                 151552  24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack          143360  3 xt_conntrack,nf_nat,ip_vs
nf_defrag_ipv6         20480  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs

3. Use containerd as the runtime

:star: Install docker-ce 20.10 on all nodes; if it is already installed, run the install anyway to upgrade it to the latest version

$ sudo wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ sudo yum -y install docker-ce-20.10.* docker-ce-cli-20.10.* containerd
3.1 Configure the kernel modules required by containerd (all nodes)
$ cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
3.2 Load the modules
$ sudo modprobe -- overlay
$ sudo modprobe -- br_netfilter
3.3 Configure the kernel parameters required by containerd
$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
3.4 Apply the kernel parameters
$ sudo sysctl --system
3.5 Generate the default containerd configuration file
$ sudo mkdir -p /etc/containerd
$ containerd config default | sudo tee /etc/containerd/config.toml
3.6 Change containerd's cgroup driver to systemd
$ sudo vim /etc/containerd/config.toml
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            #SystemdCgroup = false
            # change this value to true
            SystemdCgroup = true
3.7 Change the sandbox_image (pause image) address
$ sudo vim /etc/containerd/config.toml
sandbox_image = "registry.k8s.io/pause:3.6"
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7"
3.8 Start containerd
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now containerd
3.9 Configure the runtime endpoint for the crictl client
$ cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
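
crictl should now be able to talk to containerd; an optional sanity check could be:

$ sudo crictl version
$ sudo crictl info | grep SystemdCgroup
# should show "SystemdCgroup": true if the edit in 3.6 took effect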

4. Install the Kubernetes and etcd binaries

4.1 Download etcd
$ wget https://github.com/etcd-io/etcd/releases/download/v3.5.7/etcd-v3.5.7-linux-amd64.tar.gz
4.2 Download Kubernetes
$ wget https://dl.k8s.io/v1.26.3/kubernetes-server-linux-amd64.tar.gz
4.3 Extract the packages
$ sudo tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
$ sudo tar -zxvf etcd-v3.5.7-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.7-linux-amd64/etcd{,ctl}
4.4 Verify the versions
$ kubelet --version
Kubernetes v1.26.3
$ etcdctl version
etcdctl version: 3.5.7
API version: 3.5
4.5 Copy the binaries to the other node
$ scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} 10.10.10.211:/usr/local/bin/
$ scp /usr/local/bin/etcd* 10.10.10.211:/usr/local/bin/
4.6 Create the CNI directory on all nodes
$ sudo mkdir -p /opt/cni/bin

5. Switch the branch

5.1 On the master node, switch to the 1.26.x branch

:star: For other versions, check out the matching branch; the .x form is enough, there is no need to specify an exact patch version

$ sudo cd /root/k8s-ha-install && sudo git checkout manual-installation-v1.26.x
Branch manual-installation-v1.26.x set up to track remote branch manual-installation-v1.26.x from origin.
Switched to a new branch 'manual-installation-v1.26.x'

6. Generate certificates

:star2: This is the most critical part of a binary installation; one wrong step ruins everything, so make sure every step is correct

6.1 Download the certificate tools on the master
$ sudo wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
$ sudo wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
$ sudo chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
6.2 etcd certificates
6.2.1 Create the etcd certificate directory on the master node
$ sudo mkdir /etc/etcd/ssl -p
6.2.2 Create the Kubernetes directories on all nodes
$ sudo mkdir -p /etc/kubernetes/pki
6.2.3 Generate the etcd certificates on the master node
$ cd /root/k8s-ha-install/pki
$ sudo cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
2023/03/28 14:54:09 [INFO] generating a new CA key and certificate from CSR
2023/03/28 14:54:09 [INFO] generate received request
2023/03/28 14:54:09 [INFO] received CSR
2023/03/28 14:54:09 [INFO] generating key: rsa-2048
2023/03/28 14:54:10 [INFO] encoded CSR
2023/03/28 14:54:10 [INFO] signed certificate with serial number 603690918153839188574076350278972949904981378906

$ sudo cfssl gencert -ca=/etc/etcd/ssl/etcd-ca.pem -ca-key=/etc/etcd/ssl/etcd-ca-key.pem -config=ca-config.json -hostname=127.0.0.1,k8s-test-m,10.10.10.210 -profile=kubernetes etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
2023/03/28 14:57:23 [INFO] generate received request
2023/03/28 14:57:23 [INFO] received CSR
2023/03/28 14:57:23 [INFO] generating key: rsa-2048
2023/03/28 14:57:23 [INFO] encoded CSR
2023/03/28 14:57:23 [INFO] signed certificate with serial number 260858626896456400814574478201075257400635321512

7. Kubernetes certificates

7.1 Generate the Kubernetes certificates on the master
$ cd /root/k8s-ha-install/pki
$ sudo cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
2023/03/28 15:12:32 [INFO] generating a new CA key and certificate from CSR
2023/03/28 15:12:32 [INFO] generate received request
2023/03/28 15:12:32 [INFO] received CSR
2023/03/28 15:12:32 [INFO] generating key: rsa-2048
2023/03/28 15:12:33 [INFO] encoded CSR
2023/03/28 15:12:33 [INFO] signed certificate with serial number 508108151359381483071459043247672918527131798139

$ sudo cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=192.168.0.1,10.10.10.210,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,10.10.10.210 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
2023/03/28 15:12:33 [INFO] generate received request
2023/03/28 15:12:33 [INFO] received CSR
2023/03/28 15:12:33 [INFO] generating key: rsa-2048
2023/03/28 15:12:33 [INFO] encoded CSR
2023/03/28 15:12:33 [INFO] signed certificate with serial number 547865628057329584993633268538227018314099003150
7.2 Generate the apiserver aggregation (front-proxy) certificates
$ sudo cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
2023/03/28 15:17:00 [INFO] generating a new CA key and certificate from CSR
2023/03/28 15:17:00 [INFO] generate received request
2023/03/28 15:17:00 [INFO] received CSR
2023/03/28 15:17:00 [INFO] generating key: rsa-2048
2023/03/28 15:17:00 [INFO] encoded CSR
2023/03/28 15:17:00 [INFO] signed certificate with serial number 325064672290760577299502573715576508012473567743 

$ sudo cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes   front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
2023/03/28 15:17:27 [INFO] generate received request
2023/03/28 15:17:27 [INFO] received CSR
2023/03/28 15:17:27 [INFO] generating key: rsa-2048
2023/03/28 15:17:27 [INFO] encoded CSR
2023/03/28 15:17:27 [INFO] signed certificate with serial number 4252313062767775323760571381557733126153530184
2023/03/28 15:17:27 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
# this warning can be ignored
7.3 Generate the controller-manager certificates
$ sudo cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile=kubernetes manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
2023/03/28 15:19:14 [INFO] generate received request
2023/03/28 15:19:14 [INFO] received CSR
2023/03/28 15:19:14 [INFO] generating key: rsa-2048
2023/03/28 15:19:15 [INFO] encoded CSR
2023/03/28 15:19:15 [INFO] signed certificate with serial number 472211320734697473878012015865421124222471893247
2023/03/28 15:19:15 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
7.4 Set a cluster entry
$ sudo kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://10.10.10.210:6443 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
Cluster "kubernetes" set.
7.5 Set a context entry
$ sudo kubectl config set-context system:kube-controller-manager@kubernetes     --cluster=kubernetes     --user=system:kube-controller-manager     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
Context "system:kube-controller-manager@kubernetes" created.
7.6 Set a user (credentials) entry
$ sudo kubectl config set-credentials system:kube-controller-manager --client-certificate=/etc/kubernetes/pki/controller-manager.pem --client-key=/etc/kubernetes/pki/controller-manager-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
User "system:kube-controller-manager" set.
7.7 Use the context as the default
$ sudo kubectl config use-context system:kube-controller-manager@kubernetes --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
Switched to context "system:kube-controller-manager@kubernetes".

$ sudo cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile=kubernetes scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
2023/03/28 15:27:48 [INFO] generate received request
2023/03/28 15:27:48 [INFO] received CSR
2023/03/28 15:27:48 [INFO] generating key: rsa-2048
2023/03/28 15:27:49 [INFO] encoded CSR
2023/03/28 15:27:49 [INFO] signed certificate with serial number 18361439491143848677493630016883509299685440295
2023/03/28 15:27:49 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

$ sudo kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://10.10.10.210:6443 --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Cluster "kubernetes" set.

$ sudo kubectl config set-credentials system:kube-scheduler --client-certificate=/etc/kubernetes/pki/scheduler.pem --client-key=/etc/kubernetes/pki/scheduler-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
User "system:kube-scheduler" set.

$ sudo kubectl config set-context system:kube-scheduler@kubernetes --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Context "system:kube-scheduler@kubernetes" created.

$ sudo kubectl config use-context system:kube-scheduler@kubernetes --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Switched to context "system:kube-scheduler@kubernetes".

$ sudo cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
2023/03/28 15:32:20 [INFO] generate received request
2023/03/28 15:32:20 [INFO] received CSR
2023/03/28 15:32:20 [INFO] generating key: rsa-2048
2023/03/28 15:32:21 [INFO] encoded CSR
2023/03/28 15:32:21 [INFO] signed certificate with serial number 473548128616928895620360924862806828942773538267
2023/03/28 15:32:21 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

$ sudo kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://10.10.10.210:6443 --kubeconfig=/etc/kubernetes/admin.kubeconfig
Cluster "kubernetes" set.

$ sudo kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig
User "kubernetes-admin" set.

$ sudo kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig
Context "kubernetes-admin@kubernetes" created.

$ sudo kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
Switched to context "kubernetes-admin@kubernetes".
7.8 Create the ServiceAccount key pair (sa.key / sa.pub)
$ sudo openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
Generating RSA private key, 2048 bit long modulus
......................................................................................+++
........................................................................+++
e is 65537 (0x10001)
$ sudo openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
writing RSA key
7.9 Check the certificate files
$ sudo ls /etc/kubernetes/pki/
admin.csr      admin.pem      apiserver-key.pem  ca.csr      ca.pem                  controller-manager-key.pem  front-proxy-ca.csr      front-proxy-ca.pem      front-proxy-client-key.pem  sa.key  scheduler.csr      scheduler.pem
admin-key.pem  apiserver.csr  apiserver.pem      ca-key.pem  controller-manager.csr  controller-manager.pem      front-proxy-ca-key.pem  front-proxy-client.csr  front-proxy-client.pem      sa.pub  scheduler-key.pem
$ sudo ls /etc/kubernetes/pki/ | wc -l
23
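
Before moving on, it may be worth spot-checking one of the generated certificates with openssl, for example the apiserver certificate's validity dates and SANs:

$ sudo openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -dates
$ sudo openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -text | grep -A1 'Subject Alternative Name'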

III. Installing the Kubernetes System Components

1. etcd configuration

$ sudo vim /etc/etcd/etcd.config.yml
name: 'k8s-test-m'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.10.10.210:2380'
listen-client-urls: 'https://10.10.10.210:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.10.10.210:2380'
advertise-client-urls: 'https://10.10.10.210:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-test-m=https://10.10.10.210:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
1.1 Create the etcd service unit
$ sudo vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
1.2 Create the etcd certificate directory for Kubernetes
$ sudo mkdir /etc/kubernetes/pki/etcd
$ sudo ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
1.3 Start etcd
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now etcd
1.4 Check etcd status
$ export ETCDCTL_API=3
$ sudo etcdctl --endpoints="10.10.10.210:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 10.10.10.210:2379 | a2c7c2885836daf8 |   3.5.7 |   20 kB |      true |      false |         2 |          4 |                  4 |        |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
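
In addition to the status table, a simple health probe could be run with the same certificates:

$ sudo etcdctl --endpoints="10.10.10.210:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health
# expect: 10.10.10.210:2379 is healthy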

2. kube-apiserver configuration

2.1 Create the required directories
$ sudo mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
2.2 Create the kube-apiserver service unit
$ sudo vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --advertise-address=10.10.10.210 \
      --service-cluster-ip-range=192.168.0.0/16  \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://10.10.10.210:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --feature-gates=LegacyServiceAccountTokenNoAutoGeneration=false \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
2.3 Start kube-apiserver
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now kube-apiserver
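
To confirm the apiserver is up before continuing, an optional check (assuming the default anonymous access to /healthz is in place) could be:

$ sudo systemctl status kube-apiserver --no-pager | grep Active
$ curl -k https://10.10.10.210:6443/healthz
# should return: ok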

3. kube-controller-manager configuration

3.1 Create the kube-controller-manager service unit
$ sudo vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --feature-gates=LegacyServiceAccountTokenNoAutoGeneration=false \
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/12 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
3.2 Start kube-controller-manager
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now kube-controller-manager

4. kube-scheduler configuration

4.1 Create the kube-scheduler service unit
$ sudo vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --leader-elect=true \
      --authentication-kubeconfig=/etc/kubernetes/scheduler.kubeconfig \
      --authorization-kubeconfig=/etc/kubernetes/scheduler.kubeconfig \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
4.2 Start kube-scheduler
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now kube-scheduler

5. TLS Bootstrapping configuration

The bootstrap configuration only needs to be created on the master.

$ cd /root/k8s-ha-install/bootstrap
$ sudo kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://10.10.10.210:6443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Cluster "kubernetes" set.
$ sudo kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
User "tls-bootstrap-token-user" set.
$ sudo kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Context "tls-bootstrap-token-user@kubernetes" created.
$ sudo kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Switched to context "tls-bootstrap-token-user@kubernetes".
5.1 Create the default kubectl config
$ sudo mkdir -p /root/.kube
$ sudo cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
5.2 Check component status
$ kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}
5.3 Create the TLS bootstrapping resources
$ kubectl create -f bootstrap.secret.yaml
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created

IV. Node Configuration

1. Copy the certificates

$ cd /etc/kubernetes
$ ssh 10.10.10.211 "mkdir -p /etc/kubernetes/pki/etcd"
$ sudo scp pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem 10.10.10.211:/etc/kubernetes/pki/
$ sudo scp bootstrap-kubelet.kubeconfig 10.10.10.211:/etc/kubernetes/
$ sudo scp /etc/kubernetes/pki/etcd/*.pem 10.10.10.211:/etc/kubernetes/pki/etcd/

2. kubelet configuration

2.1 Create the kubelet service unit on all nodes
$ sudo vim  /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kubelet

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
2.2 If the runtime is containerd, use the following kubelet drop-in configuration
$ sudo mkdir /etc/systemd/system/kubelet.service.d/
$ sudo vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
2.3 Create the kubelet configuration file
$ sudo vim /etc/kubernetes/kubelet-conf.yml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 192.168.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
2.4 Start kubelet on all nodes
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now kubelet
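
If bootstrapping succeeded, the kubelet CSRs are auto-approved and the nodes register with the apiserver; this can be checked from the master:

$ kubectl get csr
$ kubectl get node
# nodes show up but stay NotReady until the CNI plugin (Calico) is installed later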

3. kube-proxy configuration

Run the following on the master only.

$ cd /root/k8s-ha-install
$ sudo kubectl -n kube-system create serviceaccount kube-proxy
serviceaccount/kube-proxy created
$ sudo kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
clusterrolebinding.rbac.authorization.k8s.io/system:kube-proxy created
$ SECRET=$(kubectl -n kube-system get sa/kube-proxy --output=jsonpath='{.secrets[0].name}')
$ JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET --output=jsonpath='{.data.token}' | base64 -d)
$ PKI_DIR=/etc/kubernetes/pki
$ K8S_DIR=/etc/kubernetes
$ sudo kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://10.10.10.210:6443 --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
Cluster "kubernetes" set.
$ sudo kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
User "kubernetes" set.
$ kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
Context "kubernetes" created.
$ sudo kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
Switched to context "kubernetes".
3.1 Copy the kubeconfig to the other node
$ sudo scp /etc/kubernetes/kube-proxy.kubeconfig 10.10.10.211:/etc/kubernetes/kube-proxy.kubeconfig
3.2 Add the service unit on all nodes
$ sudo vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
3.3 Add the kube-proxy configuration file on all nodes
$ sudo vim /etc/kubernetes/kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12 
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
3.4 Start kube-proxy on all nodes
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now kube-proxy
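
With kube-proxy running in IPVS mode, the virtual server table should already contain the kubernetes service (192.168.0.1:443 forwarding to the apiserver); it can be inspected with:

$ sudo ipvsadm -Ln
# expect an entry like: TCP  192.168.0.1:443 rr  -> 10.10.10.210:6443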

4. Install Calico

Run this on the master only.

4.1 Modify the configuration file
$ cd /root/k8s-ha-install/calico/
$ sed -i "s#POD_CIDR#172.16.0.0/12#g" calico.yaml
# change Calico's Pod CIDR
4.2 Install
$ kubectl apply -f calico.yaml
4.3 Check the pod status
$ kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6bd6b69df9-mnn6b   1/1     Running   0          113s
calico-node-n4896                          1/1     Running   0          113s
calico-node-r44zb                          1/1     Running   0          113s
calico-typha-77fc8866f5-b6qh9              1/1     Running   0          113s
# everything is fine once all pods are Running
# after Calico starts, the nodes become Ready
$ kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-test-m   Ready    <none>   18m   v1.26.3
k8s-test-n   Ready    <none>   17m   v1.26.3

5. Install CoreDNS

5.1 Modify the configuration file
$ cd /root/k8s-ha-install/CoreDNS/
$ COREDNS_SERVICE_IP=`kubectl get svc | grep kubernetes | awk '{print $3}'`0
# the 10th IP of the Service CIDR (192.168.0.10)
$ sudo sed -i "s#KUBEDNS_SERVICE_IP#${COREDNS_SERVICE_IP}#g" coredns.yaml
5.2 Install
$ kubectl create -f coredns.yaml
5.3 Check the pod status
$ kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6bd6b69df9-mnn6b   1/1     Running   0          32m
calico-node-n4896                          1/1     Running   0          32m
calico-node-r44zb                          1/1     Running   0          32m
calico-typha-77fc8866f5-b6qh9              1/1     Running   0          32m
coredns-5cf5d78676-zt89v                   1/1     Running   0          28s
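
To verify cluster DNS end to end, a throwaway pod can be used to resolve the kubernetes service (busybox:1.28 is just an example image that ships nslookup):

$ kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default
# expect the answer 192.168.0.1 from server 192.168.0.10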

6. Install Metrics Server

:star: Recent Kubernetes versions collect system resource metrics through metrics-server, which reports memory, disk, CPU, and network usage for nodes and Pods.

6.1 Install
$ cd /root/k8s-ha-install/metrics-server
$ kubectl  create -f comp.yaml
6.2 Check status
$ kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6bd6b69df9-mnn6b   1/1     Running   0          36m
calico-node-n4896                          1/1     Running   0          36m
calico-node-r44zb                          1/1     Running   0          36m
calico-typha-77fc8866f5-b6qh9              1/1     Running   0          36m
coredns-5cf5d78676-zt89v                   1/1     Running   0          4m14s
metrics-server-59b8985979-qtqsg            1/1     Running   0          37s
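
Once metrics-server has been scraping for a minute or so, node and pod metrics should be available:

$ kubectl top node
$ kubectl top pod -n kube-system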

7. Install the Dashboard

:star: The Dashboard displays the various resources in the cluster; it can also be used to view Pod logs in real time and execute commands inside containers.

7.1 Install
$ cd /root/k8s-ha-install/dashboard/
$ kubectl  create -f .
7.2 Check the NodePort
$ kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   192.168.2.13   <none>        443:31575/TCP   3h58m
7.3 Get the login token
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InhFWV9ZSnBFbzM3TWtQenAtVjNnLU5GYWRPeEdNakpCOFBMcFFuTURmUzQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXBsaHc4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwYWZlN2FiYi1mNDI0LTQ3NmUtODZkOS1iN2ZjYjNjMzJlYzIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.k0YREwso4M1L9QWHyrk5TRVDgQeZlOjohL7ZWeLstlVDYZU21GEq2roqu3wriIWRc7WttDzWOrSyFXcmYF4hDXIbAm2ScN4xzly1FTBGzPA4e3pYhQpuCho0SiqBIoQPqxscZd3CnoNwh1VDCmzKTPpKNED_rrL_2Qy3iQbbLojTRlKEFyu8Ojly2548v_2uVGLjIpj6yOLO_QLaOLm3OJjLbR4ajfriWNSIzIBBt95xSd6Vfe3uFvUEKz5MF9iGgF-cYavDQW_ykstVVNSYtMywHdPiwo7uUzzFNeETVE06PBXKwGMmis4dBDPwL_NOMSnU2JN-o3R4gDTjlZt2-g
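
The Dashboard can then be opened in a browser at https://10.10.10.210:31575 (the NodePort shown in 7.2) and logged into with this token. On 1.24+ clusters a fresh token can also be issued directly, assuming the admin-user ServiceAccount created by the dashboard manifests:

$ kubectl -n kube-system create token admin-user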

Over~
