Environment:
NAME                VERSION   INTERNAL-IP      OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
cnvs-kubm-101-103   v1.15.3   172.20.101.103   CentOS Linux 7 (Core)   5.2.9-1.el7.elrepo.x86_64   docker://18.6.1
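The table follows the column layout of kubectl's wide output; once the cluster is up, the same data can be read back with:
kubectl get nodes -o wide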
https://gitlab.com/PtmindDev/devops/kub-deploy/tree/cn-k8s-prod
Branch: cn-k8s-prod
Ansible inventory (/etc/ansible/hosts):
#master
[kub-m]
172.20.101.103 name=cnvs-kubm-101-103
172.20.101.104 name=cnvs-kubm-101-104
172.20.101.105 name=cnvs-kubm-101-105
#node
[kub-n]
172.20.101.106 name=cnvs-kubnode-101-106
172.20.101.107 name=cnvs-kubnode-101-107
172.20.101.108 name=cnvs-kubnode-101-108
172.20.101.118 name=cnvs-kubnode-101-118
172.20.101.120 name=cnvs-kubnode-101-120
172.20.101.122 name=cnvs-kubnode-101-122
172.20.101.123 name=cnvs-kubnode-101-123
172.20.101.124 name=cnvs-kubnode-101-124
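With the inventory in place, a quick reachability check before running any playbook, using Ansible's built-in ping module (this assumes kub-all is a group covering both [kub-m] and [kub-n], as the later commands imply):
ansible -i /etc/ansible/hosts kub-all -m ping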
cd /workspace/kub-deploy/roles
ansible-playbook 1-kernelup.yaml
ansible kub-all -a "uname -a"
Linux kubm-01 5.2.9-1.el7.elrepo.x86_64 #1 SMP Fri Aug 16 08:17:55 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux
ansible-playbook 2-basic.yml
#To run against a single host only:
ansible-playbook -i /etc/ansible/hosts 2-basic.yml --limit 172.20.101.103
ansible-playbook 3-nginx.yaml
#Version check:
[root@kubm-01 roles]# ansible kub-m -a "nginx -v"
172.20.101.103 | CHANGED | rc=0 >>
nginx version: nginx/1.16.1
....
#Port check:
ansible kub-m -m shell -a "lsof -n -i:16443"
172.20.101.103 | CHANGED | rc=0 >>
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 21392 root 5u IPv4 434526 0t0 TCP *:16443 (LISTEN)
....
ansible-playbook 4-keepalived.yml
********
ok: [172.20.101.103] => {
    "output.stdout_lines": [
        " inet 172.20.101.253/32 scope global eth0"
    ]
}
.......
ok: [172.20.101.105] => {
    "output.stdout_lines": []
}
[root@kubm-01 roles]# ping 172.20.101.253
PING 172.20.101.253 (172.20.101.253) 56(84) bytes of data.
64 bytes from 172.20.101.253: icmp_seq=1 ttl=64 time=0.059 ms
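For reference, a minimal keepalived.conf sketch consistent with the output above, where the VIP sits on eth0 of 172.20.101.103 (MASTER) and is absent on 172.20.101.105 (BACKUP); the router id, priority, and auth values are illustrative, the real ones live in the 4-keepalived.yml role:
cat <<EOF > /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER                 # BACKUP on the other two masters
    interface eth0               # matches "scope global eth0" above
    virtual_router_id 51         # illustrative; must agree across all three masters
    priority 100                 # lower on the BACKUP nodes
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-vip        # illustrative secret
    }
    virtual_ipaddress {
        172.20.101.253/32        # the master VIP
    }
}
EOF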
mkdir -p /etc/kubeinstall
cd /etc/kubeinstall
The flannel network plugin I use requires the pod network parameter --pod-network-cidr=10.244.0.0/16 (the podSubnet below).
cat <<EOF > /etc/kubeinstall/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.20.101.103
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: cnvs-kubm-101-103
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: cn-k8s-prod
controlPlaneEndpoint: "172.20.101.253:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.245.0.0/16
  podSubnet: "10.244.0.0/16"
scheduler: {}
EOF
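Before using the file, it can be sanity-checked against kubeadm's defaults; both commands below exist in kubeadm v1.15, and --dry-run makes no changes to the host:
kubeadm config print init-defaults    # compare field names and structure with the file above
kubeadm init --config=/etc/kubeinstall/kubeadm-config.yaml --dry-run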
Nginx is configured on every master as a reverse proxy in front of the API Server;
172.20.101.253 is the VIP of the master nodes;
Nginx proxies on port 16443;
the API Server listens on port 6443;
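A minimal sketch of the stream proxy those notes describe, assuming the 3-nginx.yaml role renders something equivalent (the upstream name and balancing method are illustrative, not taken from the repo):
cat <<EOF > /etc/nginx/nginx.conf
worker_processes auto;
events { worker_connections 1024; }
# TCP (stream) load balancing: VIP:16443 -> the three API Servers on 6443
stream {
    upstream kube_apiserver {
        least_conn;
        server 172.20.101.103:6443;
        server 172.20.101.104:6443;
        server 172.20.101.105:6443;
    }
    server {
        listen 16443;
        proxy_pass kube_apiserver;
    }
}
EOF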
kubeadm init \
--config=/etc/kubeinstall/kubeadm-config.yaml \
--upload-certs
The init runs on the first node of the [kub-m] group:
[kub-m]
172.20.101.103 name=cnvs-kubm-101-103
172.20.101.104 name=cnvs-kubm-101-104
172.20.101.105 name=cnvs-kubm-101-105
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 172.20.101.253:16443 --token hgep1g.fwo8y7rt8o8xqjml \
--discovery-token-ca-cert-hash sha256:08462cf2017a1e3292ea355a7fc56c49ac713b84d5af45b649d7c8be539b97cf \
--control-plane --certificate-key 1c20a3656bbcc9be4b5a16bcb4c4bab5445d221d4721900bf31b5b196b733cec
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.20.101.253:16443 --token hgep1g.fwo8y7rt8o8xqjml \
--discovery-token-ca-cert-hash sha256:08462cf2017a1e3292ea355a7fc56c49ac713b84d5af45b649d7c8be539b97cf
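The token has a 24h TTL and the uploaded certs expire after two hours; if they lapse before every node has joined, fresh values can be generated on any master:
kubeadm token create --print-join-command        # prints a new worker join command (token + CA hash)
kubeadm init phase upload-certs --upload-certs   # prints a new certificate key for control-plane joins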
Configure kubectl on the master, as instructed above:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#Node status
[root@cnvs-kubnode-101-103 kubeinstall]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
cnvs-kubm-101-103 NotReady master 3m35s v1.15.3 <=== NotReady; recovers once the network plugin is installed
#Component status
[root@cnvs-kubnode-101-103 kubeinstall]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
Install a CNI plugin whose pod CIDR matches the podSubnet configured above, adjusting for your environment.
Kubernetes versions change quickly; read the relevant docs before deploying and use a network plugin version that matches your cluster!
https://github.com/coreos/flannel#flannel
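The exact apply command is not recorded in this write-up; for flannel in the v1.15 era the usual form was to apply the manifest from the repository above, e.g.:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml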
[root@cnvs-kubnode-101-103 kubeinstall]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
cnvs-kubm-101-103 Ready master 4m51s v1.15.3 <=== Ready
#All pods are Running
[root@cnvs-kubm-101-103 kubeinstall]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-kl66m 1/1 Running 0 83s
coredns-5c98db65d4-xjlkl 0/1 Running 0 83s
etcd-cnvs-kubm-101-103 1/1 Running 0 40s
kube-apiserver-cnvs-kubm-101-103 1/1 Running 0 25s
kube-controller-manager-cnvs-kubm-101-103 1/1 Running 0 27s
kube-flannel-ds-amd64-jln7d 1/1 Running 0 17s
kube-proxy-g2b2p 1/1 Running 0 83s
kube-scheduler-cnvs-kubm-101-103 1/1 Running 0 35s
Join the remaining masters; this run used freshly generated credentials (a new token and certificate key, cf. the regeneration commands above):
kubeadm join 172.20.101.253:16443 --token m1n5s7.ktdbt3ce3yj4czm1 \
--discovery-token-ca-cert-hash sha256:0eca032dcb2354f8c9e4f3ecfd2a19941b8a7b0c6cc4cc0764dc61a3a8e5ff68 \
--control-plane --certificate-key e5b5fe5b9576a604b7107bbe12a8aa09d4ddc309c9d9447bc5552fdd481df627
Configure kubectl on each newly joined master:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
All master nodes are Ready:
[root@cnvs-kubm-101-105 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
cnvs-kubm-101-103 Ready master 4m35s v1.15.3
cnvs-kubm-101-104 Ready master 96s v1.15.3
cnvs-kubm-101-105 Ready master 22s v1.15.3
Join the worker nodes in the [kub-n] group:
[kub-n]
172.20.101.106
172.20.101.107
172.20.101.108
172.20.101.118
172.20.101.120
172.20.101.122
172.20.101.123
172.20.101.124
kubeadm join 172.20.101.253:16443 --token hgep1g.fwo8y7rt8o8xqjml \
--discovery-token-ca-cert-hash sha256:08462cf2017a1e3292ea355a7fc56c49ac713b84d5af45b649d7c8be539b97cf
#Run the join on every worker at once:
ansible kub-n -m shell -a "kubeadm join 172.20.101.253:16443 --token hgep1g.fwo8y7rt8o8xqjml \
--discovery-token-ca-cert-hash sha256:08462cf2017a1e3292ea355a7fc56c49ac713b84d5af45b649d7c8be539b97cf"
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@cnvs-kubm-101-104 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
cnvs-kubm-101-103 Ready master 8m32s v1.15.3
cnvs-kubm-101-104 Ready master 5m33s v1.15.3
cnvs-kubm-101-105 Ready master 4m19s v1.15.3
cnvs-kubnode-101-106 Ready <none> 28s v1.15.3
cnvs-kubnode-101-107 Ready <none> 28s v1.15.3
cnvs-kubnode-101-108 Ready <none> 28s v1.15.3
cnvs-kubnode-101-118 Ready <none> 28s v1.15.3
cnvs-kubnode-101-120 Ready <none> 28s v1.15.3
cnvs-kubnode-101-122 Ready <none> 13s v1.15.3
cnvs-kubnode-101-123 Ready <none> 13s v1.15.3
cnvs-kubnode-101-124 Ready <none> 2m31s v1.15.3
Label the nodes in preparation for deploying traefik (a nodeSelector sketch follows the verification output below):
kubectl label nodes {cnvs-kubnode-101-106,cnvs-kubnode-101-107} traefik=traefik-outer --overwrite
kubectl label nodes {cnvs-kubnode-101-123,cnvs-kubnode-101-124} traefik=traefik-inner --overwrite
[root@cnvs-kubm-101-103 kub-deploy]# kubectl get node -l "traefik=traefik-outer"
NAME STATUS ROLES AGE VERSION
cnvs-kubnode-101-106 Ready <none> 5m25s v1.15.3
cnvs-kubnode-101-107 Ready <none> 5m25s v1.15.3
[root@cnvs-kubm-101-103 kub-deploy]# kubectl get node -l "traefik=traefik-inner"
NAME STATUS ROLES AGE VERSION
cnvs-kubnode-101-123 Ready <none> 5m18s v1.15.3
cnvs-kubnode-101-124 Ready <none> 7m36s v1.15.3
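These labels exist so the traefik manifests can pin pods to the right pool through a nodeSelector; an illustrative pod-spec fragment (the actual traefik Deployment is outside this write-up):
spec:
  nodeSelector:
    traefik: traefik-outer    # or traefik-inner for the internal instance
A mislabel can be undone with kubectl label nodes <node> traefik- (the trailing dash removes the key).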
#All pods are Running
[root@cnvs-kubm-101-103 kub-deploy]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-kl66m 1/1 Running 0 13m
coredns-5c98db65d4-xjlkl 1/1 Running 0 13m
etcd-cnvs-kubm-101-103 1/1 Running 0 13m
etcd-cnvs-kubm-101-104 1/1 Running 0 7m57s
etcd-cnvs-kubm-101-105 1/1 Running 0 5m26s
kube-apiserver-cnvs-kubm-101-103 1/1 Running 0 13m
kube-apiserver-cnvs-kubm-101-104 1/1 Running 1 7m47s
kube-apiserver-cnvs-kubm-101-105 1/1 Running 0 4m8s
kube-controller-manager-cnvs-kubm-101-103 1/1 Running 1 13m
kube-controller-manager-cnvs-kubm-101-104 1/1 Running 0 6m38s
kube-controller-manager-cnvs-kubm-101-105 1/1 Running 0 4m11s
kube-flannel-ds-amd64-2nfbb 1/1 Running 2 88s
kube-flannel-ds-amd64-2pbqs 1/1 Running 1 104s
kube-flannel-ds-amd64-4w7cb 1/1 Running 2 92s
kube-flannel-ds-amd64-gxzhw 1/1 Running 1 3m58s
kube-flannel-ds-amd64-jln7d 1/1 Running 0 12m
kube-flannel-ds-amd64-lj9t4 1/1 Running 2 92s
kube-flannel-ds-amd64-mbp8k 1/1 Running 2 91s
kube-flannel-ds-amd64-r8t9c 1/1 Running 1 7m57s
kube-flannel-ds-amd64-rdsfm 1/1 Running 0 3m5s
kube-flannel-ds-amd64-w8gww 1/1 Running 1 5m26s
kube-flannel-ds-amd64-x7rh7 1/1 Running 2 92s
kube-proxy-4kxjv 1/1 Running 0 5m26s
kube-proxy-4vqpf 1/1 Running 0 92s
kube-proxy-677lf 1/1 Running 0 92s
kube-proxy-b9kr2 1/1 Running 0 104s
kube-proxy-dm9kd 1/1 Running 0 3m5s
kube-proxy-g2b2p 1/1 Running 0 13m
kube-proxy-m79jv 1/1 Running 0 3m58s
kube-proxy-snqhr 1/1 Running 0 92s
kube-proxy-t7mkx 1/1 Running 0 91s
kube-proxy-z2f67 1/1 Running 0 7m57s
kube-proxy-zjpwn 1/1 Running 0 88s
kube-scheduler-cnvs-kubm-101-103 1/1 Running 1 13m
kube-scheduler-cnvs-kubm-101-104 1/1 Running 0 7m4s
kube-scheduler-cnvs-kubm-101-105 1/1 Running 0 4m32s
#All nodes are Ready
[root@cnvs-kubm-101-103 kub-deploy]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
cnvs-kubm-101-103 Ready master 15m v1.15.3
cnvs-kubm-101-104 Ready master 9m32s v1.15.3
cnvs-kubm-101-105 Ready master 7m1s v1.15.3
cnvs-kubnode-101-106 Ready <none> 3m6s v1.15.3
cnvs-kubnode-101-107 Ready <none> 3m19s v1.15.3
cnvs-kubnode-101-108 Ready <none> 3m7s v1.15.3
cnvs-kubnode-101-118 Ready <none> 3m7s v1.15.3
cnvs-kubnode-101-120 Ready <none> 3m7s v1.15.3
cnvs-kubnode-101-122 Ready <none> 3m3s v1.15.3
cnvs-kubnode-101-123 Ready <none> 4m40s v1.15.3
cnvs-kubnode-101-124 Ready <none> 5m33s v1.15.3
#Tear down the cluster (destructive; removes every node and resets kubeadm state):
kubectl delete node --all
ansible kub-all -m shell -a "kubeadm reset -f"
ansible kub-all -m shell -a "rm -rf /etc/kubernetes && rm -rf /var/lib/etcd && rm -rf /var/lib/kubelet && rm -rf /var/lib/kubelet && rm -rf $HOME/.kube/config "
ansible kub-all -m shell -a "iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X"
ansible kub-all -m shell -a "systemctl restart docker && systemctl enable kubelet"
ansible kub-all -m shell -a "ip link del flannel.1 && ip a|grep flannel "
If k8s was configured on these machines before, or a previous attempt did not succeed, it is recommended to clean the environment on every node first:
systemctl stop kubelet
docker rm -f -v $(docker ps -a -q)
rm -rf /etc/kubernetes
rm -rf /var/lib/etcd
rm -rf /var/lib/kubelet
rm -rf $HOME/.kube/config
ip link del flannel.1
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
yum reinstall -y kubelet
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet
References:
https://www.cnblogs.com/net2817/p/10513369.html
https://k8smeetup.github.io/docs/reference/setup-tools/kubeadm/kubeadm-config/