Installation notes for Kubernetes 1.7.4

Author: Peter Wu  Date: 2017-08-29

Install a Kubernetes cluster with kubeadm, for a test environment.

Environment

Node    OS        Hostname  IP              Notes
master  CentOS 7  peter-pc  192.168.12.135  docker 17.06.0-ce installed
node1   CentOS 7  test-pc   192.168.12.126  docker 17.06.0-ce installed

Set the hostnames and hosts entries

master:

echo "peter-pc" > /etc/hostname
sysctl kernel.hostname=peter-pc
echo "127.0.0.1 peter-pc" >> /etc/hosts
echo "192.168.12.135 peter-pc" >> /etc/hosts
echo "192.168.12.126 test-pc" >> /etc/hosts

node1:

echo "test-pc" > /etc/hostname
sysctl kernel.hostname=test-pc
echo "127.0.0.1 test-pc" >> /etc/hosts
echo "192.168.12.135 peter-pc" >> /etc/hosts
echo "192.168.12.126 test-pc" >> /etc/hosts
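The hosts entries above can be sanity-checked with a small helper. `has_host` is a hypothetical function (not part of the install itself) that scans hosts-file text for a given name:

```shell
# has_host: succeed if the given hostname appears as a name in hosts-file
# text read from stdin (comment lines are ignored).
has_host() {
    awk -v h="$1" '!/^#/ { for (i = 2; i <= NF; i++) if ($i == h) found = 1 }
                   END { exit !found }'
}

# Example: on either machine, confirm both names resolve locally:
#   has_host peter-pc < /etc/hosts && has_host test-pc < /etc/hosts
```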

I. Master installation

1. Build and install the RPMs

git clone https://github.com/kubernetes/release.git
cd release/rpm
./docker-build.sh

This produces the RPMs under output/x86_64. Install them (socat is a kubelet dependency):

yum install socat
cd output/x86_64
rpm -ivh *.rpm

2. Images required for the deployment

gcr.io/google_containers/etcd-amd64:3.0.17
gcr.io/google_containers/kube-apiserver-amd64:v1.7.4
gcr.io/google_containers/kube-controller-manager-amd64:v1.7.4
gcr.io/google_containers/kube-scheduler-amd64:v1.7.4
gcr.io/google_containers/kube-proxy-amd64:v1.7.4
gcr.io/google_containers/pause-amd64:3.0


gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4

3. Pull the images

images=(etcd-amd64:3.0.17 kube-apiserver-amd64:v1.7.4 kube-controller-manager-amd64:v1.7.4 kube-scheduler-amd64:v1.7.4 kube-proxy-amd64:v1.7.4 pause-amd64:3.0 k8s-dns-kube-dns-amd64:1.14.4 k8s-dns-dnsmasq-nanny-amd64:1.14.4 k8s-dns-sidecar-amd64:1.14.4)
for imageName in "${images[@]}"; do
    docker pull "bestwu/$imageName"
    docker tag "bestwu/$imageName" "gcr.io/google_containers/$imageName"
    docker rmi "bestwu/$imageName"
done
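After the loop it is worth confirming that every retagged image is actually present. The `images_present` helper below is a sketch, not from the original notes: it reads `docker images` output and checks each expected REPOSITORY:TAG with a loose substring match:

```shell
# images_present: read "docker images" output on stdin and succeed only if
# every REPOSITORY:TAG given as an argument appears (loose substring match,
# so a missing image fails but an extra image does not).
images_present() {
    seen=$(awk 'NR > 1 { print $1 ":" $2 }')
    for want in "$@"; do
        case "$seen" in
            *"$want"*) ;;
            *) return 1 ;;
        esac
    done
}

# Example:
#   docker images | images_present gcr.io/google_containers/etcd-amd64:3.0.17
```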

4. Start the kubelet

Change the kubelet's cgroup driver to cgroupfs so that it matches Docker's:

sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/"  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Start the kubelet:

systemctl enable kubelet
systemctl start kubelet

5. Initialize the master

Stop the firewall, both to keep kube-dns from failing to start and to let nodes join:

systemctl stop firewalld

kubeadm init --pod-network-cidr=10.244.0.0/16

On success:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 1.12
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [peter-pc kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.12.135]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 31.503215 seconds
[token] Using token: 74ebfa.bdb6dd4505d216ab
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token 74ebfa.bdb6dd4505d216ab 192.168.12.135:6443

Save the token, or the whole command: kubeadm join --token 74ebfa.bdb6dd4505d216ab 192.168.12.135:6443

Run the commands from the hint above:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
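To confirm the copied kubeconfig points at the right API server, the address can be read straight out of it. `api_server` is a hypothetical helper that prints the `server:` field from kubeconfig YAML:

```shell
# api_server: print the first "server:" URL found in kubeconfig YAML on stdin.
api_server() { awk '$1 == "server:" { print $2; exit }'; }

# Example: api_server < "$HOME/.kube/config"
# should print https://192.168.12.135:6443 for this cluster.
```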

6. Installing a pod network

Use Flannel:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
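Flannel and kube-dns take a while to come up. One way to wait, assuming the default `kubectl get pods` column layout (STATUS is the fourth column), is a polling loop built on a small status filter:

```shell
# all_running: read "kubectl get pods --all-namespaces" output on stdin and
# succeed only when every pod's STATUS column (field 4) is "Running".
all_running() { awk 'NR > 1 && $4 != "Running" { exit 1 }'; }
```

Usage: `until kubectl get pods --all-namespaces | all_running; do sleep 5; done`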

Check the result:

kubectl get pods --all-namespaces

7. Master isolation

kubectl taint nodes --all node-role.kubernetes.io/master-
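Whether the master taint is really gone can be checked by filtering `kubectl describe nodes`; the taint key assumed below is the one kubeadm 1.7 sets:

```shell
# no_master_taint: read "kubectl describe nodes" output on stdin and succeed
# only if no node still carries the node-role.kubernetes.io/master taint.
no_master_taint() { ! grep -q 'node-role.kubernetes.io/master'; }

# Example: kubectl describe nodes | no_master_taint && echo "taint removed"
```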

8. Deploy the dashboard

Required image:

gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3

docker pull bestwu/kubernetes-dashboard-amd64:v1.6.3
docker tag bestwu/kubernetes-dashboard-amd64:v1.6.3 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
docker rmi bestwu/kubernetes-dashboard-amd64:v1.6.3


kubectl create -f https://git.io/kube-dashboard

Check the result:

kubectl get pods --all-namespaces

II. Node installation

1. Install the RPMs

Copy the kubeadm-related packages (kubeadm-1.7.3-0.x86_64.rpm kubectl-1.7.3-0.x86_64.rpm kubelet-1.7.3-0.x86_64.rpm kubernetes-cni-0.5.1-0.x86_64.rpm) built on the master to the node machine.

yum install socat
cd output/x86_64
rpm -ivh *.rpm

2. Images required for the deployment

gcr.io/google_containers/kube-proxy-amd64:v1.7.4
gcr.io/google_containers/pause-amd64:3.0

3. Pull the images

images=(kube-proxy-amd64:v1.7.4 pause-amd64:3.0)
for imageName in "${images[@]}"; do
    docker pull "bestwu/$imageName"
    docker tag "bestwu/$imageName" "gcr.io/google_containers/$imageName"
    docker rmi "bestwu/$imageName"
done

4. Join the cluster

Change the kubelet's cgroup driver to cgroupfs, as on the master:

sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/"  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

kubeadm join --token 74ebfa.bdb6dd4505d216ab 192.168.12.135:6443
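Once the join returns, the node should flip to Ready on the master within a minute or two. A sketch of a readiness check, assuming the STATUS column of `kubectl get nodes` is the second field:

```shell
# check_ready: read "kubectl get nodes" output on stdin and succeed only
# when every node's STATUS column (field 2) is "Ready".
check_ready() { awk 'NR > 1 && $2 != "Ready" { exit 1 }'; }
```

Usage (on the master): `until kubectl get nodes | check_ready; do sleep 5; done`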

Check the result (on the master):

kubectl get nodes


kubectl get pods --all-namespaces

Run on the master:

kubectl proxy --address 0.0.0.0 --accept-hosts '.*'

The dashboard can then be reached through the master's IP on port 8001.
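For dashboard 1.6.x behind `kubectl proxy`, the usual entry point is the `/ui` redirect. `dashboard_url` below just assembles that URL; the `/ui` path is an assumption based on the dashboard version used here:

```shell
# dashboard_url: build the kubectl-proxy URL for the dashboard /ui redirect.
dashboard_url() { echo "http://$1:8001/ui"; }

# Example: dashboard_url 192.168.12.135  ->  http://192.168.12.135:8001/ui
```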