Having already become comfortable with Vagrant, I wanted to use it to install Kubernetes with a single command. As expected, someone has already packaged the Vagrant configuration and provisioning scripts for a Kubernetes cluster in this project: https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster. This post is a short summary of how to use it.
This setup provisions a distributed cluster of three nodes:
IP | Hostname | Components |
---|---|---|
172.17.8.101 | node1 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, docker, flannel, dashboard |
172.17.8.102 | node2 | kubelet, docker, flannel, traefik |
172.17.8.103 | node3 | kubelet, docker, flannel |
Prerequisites
Vagrant and VirtualBox are already installed (a quick check is sketched after this list).
Basic familiarity with Vagrant.
At least 8 GB of RAM; more than 8 GB is better (8 GB is just barely enough).
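A quick way to confirm the first prerequisite is in place (a minimal sketch; the version numbers printed will depend on your machine):

```bash
# Confirm Vagrant and VirtualBox are installed and on the PATH.
vagrant --version
VBoxManage --version
```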
Preparation
1. Download the Kubernetes server package (kubernetes-server-linux-amd64.tar.gz) in advance; any version from v1.9 to v1.13 works, and I used v1.11.
2. Download this repository: https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster (clone it directly if your connection is good).
3. Download a CentOS 7 Vagrant box file in advance (a sketch of these steps follows this list).
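A minimal sketch of the preparation steps on the host. The release URL is the one the provisioning script itself falls back to; the box name centos/7 and the local box path are assumptions and should match whatever box you actually use:

```bash
# Clone the cluster repository.
git clone https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster.git
cd kubernetes-vagrant-centos-cluster

# Pre-download the Kubernetes server tarball into the repo root
# (the same URL the provisioning script would otherwise download from).
wget https://storage.googleapis.com/kubernetes-release/release/v1.11.0/kubernetes-server-linux-amd64.tar.gz

# Add a locally downloaded CentOS 7 box so `vagrant up` does not fetch it again.
# The box name must match node.vm.box in the Vagrantfile.
vagrant box add centos/7 /path/to/your/centos-7.box
```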
Adjust the configuration to your environment:
1. Open the Vagrantfile from the repository, find node.vm.box, and change it to the name of your own CentOS 7 box.
2. Adjust vb.memory to fit your machine's RAM; mine has 8 GB, so I set it to 2048 (see the sketch below).
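The two settings can be located and edited from the command line; a minimal sketch, assuming the Vagrantfile keeps the node.vm.box and vb.memory keys mentioned above (the exact surrounding lines depend on the repository version, so check the grep output before editing):

```bash
# Show the current box and memory settings in the Vagrantfile.
grep -nE "node.vm.box|vb.memory" Vagrantfile

# Example: set the per-node memory to 2048 MB in place (keeps a .bak backup).
sed -i.bak 's/vb.memory = .*/vb.memory = "2048"/' Vagrantfile
```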
One-command installation
cd into the directory containing the Vagrantfile and run vagrant up.
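A minimal sketch of bringing the cluster up and checking that the three nodes registered (the kubectl check assumes provisioning finished successfully):

```bash
cd kubernetes-vagrant-centos-cluster
vagrant up            # creates and provisions node1..node3; the first run takes a while

# Once provisioning completes, verify the VMs and the cluster.
vagrant status
vagrant ssh node1 -c "kubectl get nodes -o wide"
```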
Walkthrough of the installation script
- Change the time zone
- Replace the stock CentOS yum repo with the 163 mirror
- Install NTP and synchronize the time with it
- Disable SELinux
- Enable the iptables-related kernel parameter
- Map the hostnames to IPs in /etc/hosts
- Set the nameserver to 8.8.8.8
- Disable swap
- Create the docker group if it does not exist and add the vagrant user to it
- Install Docker with yum install docker.x86_64
- Switch the Docker registry mirror to a domestic accelerator
- Make the first node both Master and Worker, and install etcd with yum
- Write etcd.conf, prepare the etcd-init.sh script that creates the network configuration in etcd, and start etcd
- Run that script so etcd holds the IP range Flannel will use
- Install Flannel on every node
- Write the Flannel configuration file and start Flannel
- Start the Docker service
- Copy the credential, token, and SSL files to /etc/kubernetes
- Prepare the Kubernetes installation files (kubelet, apiserver, kube-proxy, etc.) and then start apiserver, kubelet, kube-proxy and the rest
- Deploy CoreDNS
- Deploy the Dashboard
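After vagrant up finishes, a few of the checks the script itself performs can be re-run by hand to confirm each piece came up (a sketch, run on node1; paths and keys are the ones used by the script below):

```bash
vagrant ssh node1
sudo -i

# Core services started by the provisioning script on the master node.
systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy --no-pager

# The Flannel network range the script pre-created in etcd.
etcdctl cluster-health
etcdctl get /kube-centos/network/config
```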
Appendix: the installation script
```bash
#!/usr/bin/env bash

# change time zone
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
timedatectl set-timezone Asia/Shanghai

rm /etc/yum.repos.d/CentOS-Base.repo
cp /vagrant/yum/*.* /etc/yum.repos.d/
mv /etc/yum.repos.d/CentOS7-Base-163.repo /etc/yum.repos.d/CentOS-Base.repo

# using socat to port forward in helm tiller
# install kmod and ceph-common for rook
yum install -y wget curl conntrack-tools vim net-tools telnet tcpdump bind-utils socat ntp kmod ceph-common dos2unix

kubernetes_release="/vagrant/kubernetes-server-linux-amd64.tar.gz"

# Download Kubernetes
if [[ $(hostname) == "node1" ]] && [[ ! -f "$kubernetes_release" ]]; then
    wget https://storage.googleapis.com/kubernetes-release/release/v1.11.0/kubernetes-server-linux-amd64.tar.gz -P /vagrant/
fi

# enable ntp to sync time
echo 'sync time'
systemctl start ntpd
systemctl enable ntpd

echo 'disable selinux'
setenforce 0
sed -i 's/=enforcing/=disabled/g' /etc/selinux/config

echo 'enable iptable kernel parameter'
cat >> /etc/sysctl.conf <<EOF
net.ipv4.ip_forward=1
EOF
sysctl -p

echo 'set host name resolution'
cat >> /etc/hosts <<EOF
172.17.8.101 node1
172.17.8.102 node2
172.17.8.103 node3
EOF
cat /etc/hosts

echo 'set nameserver'
echo "nameserver 8.8.8.8">/etc/resolv.conf
cat /etc/resolv.conf

echo 'disable swap'
swapoff -a
sed -i '/swap/s/^/#/' /etc/fstab

#create group if not exists
egrep "^docker" /etc/group >& /dev/null
if [ $? -ne 0 ]
then
    groupadd docker
fi

usermod -aG docker vagrant
rm -rf ~/.docker/
yum install -y docker.x86_64

# To fix docker exec error, downgrade docker version, see https://github.com/openshift/origin/issues/21590
yum downgrade -y docker-1.13.1-75.git8633870.el7.centos.x86_64 docker-client-1.13.1-75.git8633870.el7.centos.x86_64 docker-common-1.13.1-75.git8633870.el7.centos.x86_64

cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors" : ["http://2595fda0.m.daocloud.io"]
}
EOF

if [[ $1 -eq 1 ]]
then
    yum install -y etcd

    #cp /vagrant/systemd/etcd.service /usr/lib/systemd/system/
    cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://$2:2380"
ETCD_LISTEN_CLIENT_URLS="http://$2:2379,http://localhost:2379"
ETCD_NAME="node$1"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://$2:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://$2:2379"
ETCD_INITIAL_CLUSTER="$3"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
    cat /etc/etcd/etcd.conf

    echo 'create network config in etcd'
    cat > /etc/etcd/etcd-init.sh <<EOF
#!/bin/bash
etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config '{"Network":"172.33.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'
EOF
    chmod +x /etc/etcd/etcd-init.sh

    echo 'start etcd...'
    systemctl daemon-reload
    systemctl enable etcd
    systemctl start etcd

    echo 'create kubernetes ip range for flannel on 172.33.0.0/16'
    /etc/etcd/etcd-init.sh
    etcdctl cluster-health
    etcdctl ls /
fi

echo 'install flannel...'
yum install -y flannel

echo 'create flannel config file...'
cat > /etc/sysconfig/flanneld <<EOF
# Flanneld configuration options
FLANNEL_ETCD_ENDPOINTS="http://172.17.8.101:2379"
FLANNEL_ETCD_PREFIX="/kube-centos/network"
FLANNEL_OPTIONS="-iface=eth1"
EOF

echo 'enable flannel with host-gw backend'
rm -rf /run/flannel/
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld

echo 'enable docker'
systemctl daemon-reload
systemctl enable docker
systemctl start docker

echo "copy pem, token files"
mkdir -p /etc/kubernetes/ssl
cp /vagrant/pki/* /etc/kubernetes/ssl/
cp /vagrant/conf/token.csv /etc/kubernetes/
cp /vagrant/conf/bootstrap.kubeconfig /etc/kubernetes/
cp /vagrant/conf/kube-proxy.kubeconfig /etc/kubernetes/
cp /vagrant/conf/kubelet.kubeconfig /etc/kubernetes/

tar -xzvf /vagrant/kubernetes-server-linux-amd64.tar.gz -C /vagrant
cp /vagrant/kubernetes/server/bin/* /usr/bin

dos2unix -q /vagrant/systemd/*.service
cp /vagrant/systemd/*.service /usr/lib/systemd/system/
mkdir -p /var/lib/kubelet
mkdir -p ~/.kube
cp /vagrant/conf/admin.kubeconfig ~/.kube/config

if [[ $1 -eq 1 ]]
then
    echo "configure master and node1"
    cp /vagrant/conf/apiserver /etc/kubernetes/
    cp /vagrant/conf/config /etc/kubernetes/
    cp /vagrant/conf/controller-manager /etc/kubernetes/
    cp /vagrant/conf/scheduler /etc/kubernetes/
    cp /vagrant/conf/scheduler.conf /etc/kubernetes/
    cp /vagrant/node1/* /etc/kubernetes/

    systemctl daemon-reload
    systemctl enable kube-apiserver
    systemctl start kube-apiserver
    systemctl enable kube-controller-manager
    systemctl start kube-controller-manager
    systemctl enable kube-scheduler
    systemctl start kube-scheduler
    systemctl enable kubelet
    systemctl start kubelet
    systemctl enable kube-proxy
    systemctl start kube-proxy
fi

if [[ $1 -eq 2 ]]
then
    echo "configure node2"
    cp /vagrant/node2/* /etc/kubernetes/
    systemctl daemon-reload
    systemctl enable kubelet
    systemctl start kubelet
    systemctl enable kube-proxy
    systemctl start kube-proxy
fi

if [[ $1 -eq 3 ]]
then
    echo "configure node3"
    cp /vagrant/node3/* /etc/kubernetes/
    systemctl daemon-reload
    systemctl enable kubelet
    systemctl start kubelet
    systemctl enable kube-proxy
    systemctl start kube-proxy

    echo "deploy coredns"
    cd /vagrant/addon/dns/
    ./dns-deploy.sh -r 10.254.0.0/16 -i 10.254.0.2 | kubectl apply -f -
    cd -

    echo "deploy kubernetes dashboard"
    kubectl apply -f /vagrant/addon/dashboard/kubernetes-dashboard.yaml

    echo "create admin role token"
    kubectl apply -f /vagrant/yaml/admin-role.yaml

    echo "the admin role token is:"
    kubectl -n kube-system describe secret `kubectl -n kube-system get secret|grep admin-token|cut -d " " -f1`|grep "token:"|tr -s " "|cut -d " " -f2
    echo "login to dashboard with the above token"
    echo https://172.17.8.101:`kubectl -n kube-system get svc kubernetes-dashboard -o=jsonpath='{.spec.ports[0].port}'`

    echo "install traefik ingress controller"
    kubectl apply -f /vagrant/addon/traefik-ingress/
fi

echo "Configure Kubectl to autocomplete"
source <(kubectl completion bash) # setup autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
```
References
1. https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster
2. https://blog.csdn.net/kikajack/article/details/80301159
Troubleshooting
For background, see the issue I filed on the project, as well as issues from other users.
1. SSL certificate error (NET::ERR_CERT_INVALID)
Run the following commands to regenerate the certificate and redeploy the dashboard:
```bash
vagrant ssh node1
sudo -i
cd /vagrant/addon/dashboard/
mkdir certs
openssl req -nodes -newkey rsa:2048 -keyout certs/dashboard.key -out certs/dashboard.csr -subj "/C=/ST=/L=/O=/OU=/CN=kubernetes-dashboard"
openssl x509 -req -sha256 -days 365 -in certs/dashboard.csr -signkey certs/dashboard.key -out certs/dashboard.crt
kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=certs -n kube-system
#re-install dashboard
kubectl delete pods $(kubectl get pods -n kube-system|grep kubernetes-dashboard|awk '{print $1}') -n kube-system
```
Then obtain the admin token and log in at https://172.17.8.101:8443.
The command to retrieve the token:

```bash
kubectl -n kube-system describe secret `kubectl -n kube-system get secret|grep admin-token|cut -d " " -f1`|grep "token:"|tr -s " "|cut -d " " -f2
```
2. Restarting the cluster
As the script walkthrough above shows, add-ons such as the dashboard and CoreDNS are not configured to start automatically with the system, so after a reboot they have to be started by hand; the necessary commands are all in the provisioning script (a sketch follows below).
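A minimal sketch of re-deploying the add-ons after the VMs come back up, reusing the same paths and commands the provisioning script runs (it assumes the repository is still mounted at /vagrant inside the VM; node1 is used here, but any node with a working kubectl will do):

```bash
vagrant halt && vagrant up   # restart the three VMs from the host

vagrant ssh node1
sudo -i

# Re-deploy CoreDNS, the dashboard, and the traefik ingress controller,
# exactly as the provisioning script does.
cd /vagrant/addon/dns/
./dns-deploy.sh -r 10.254.0.0/16 -i 10.254.0.2 | kubectl apply -f -
cd -
kubectl apply -f /vagrant/addon/dashboard/kubernetes-dashboard.yaml
kubectl apply -f /vagrant/addon/traefik-ingress/
```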