This article describes the installation of a single-node Kubernetes cluster with the Calico network plugin.
Prerequisites:
Check that the product UUID is unique on every node in the cluster: sudo cat /sys/class/dmi/id/product_uuid
2 CPUs
2 GB RAM
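The CPU and RAM prerequisites can be checked with a short pre-flight script before running kubeadm (a sketch assuming a standard Linux host; the thresholds mirror kubeadm's own preflight checks):

```shell
# Pre-flight sketch: verify the host meets kubeadm's minimums.
# Assumes a Linux host with /proc/meminfo available.
cpus=$(nproc)
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "CPUs: ${cpus}, RAM: $((mem_kb / 1024)) MB"
[ "$cpus" -ge 2 ] || echo "WARN: kubeadm requires at least 2 CPUs"
[ "$mem_kb" -ge 2000000 ] || echo "WARN: kubeadm requires at least 2 GB of RAM"
```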
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install docker.io kubeadm kubectl kubelet
Initialize the cluster (the --pod-network-cidr value must match the pod network plugin's configuration; see the Calico section below):
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
If everything is OK, you'll see something like this:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.122.82:6443 --token kipm0h.67yg6epjkyofuf6l --discovery-token-ca-cert-hash sha256:35c7d071f3c86c2471ed630e1f61012b6105c1537c4e72abf532f8ede86daed8
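The sha256 value in the join command is the hash of the cluster CA's public key, so it can be recomputed on the master from /etc/kubernetes/pki/ca.crt if the original output is lost. A sketch of the derivation (a throwaway self-signed certificate stands in for the real ca.crt, which only exists on the master):

```shell
# Generate a throwaway CA certificate for illustration; on a real master,
# use /etc/kubernetes/pki/ca.crt instead of ca.crt below.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=kubernetes" -days 1 2>/dev/null
# Hash the DER-encoded public key, as kubeadm does for --discovery-token-ca-cert-hash.
hash=$(openssl x509 -pubkey -in ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | awk '{print $NF}')
echo "sha256:${hash}"
```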
Check status of the newly created cluster:
kubectl get nodes -o wide
kubectl get pods --all-namespaces
kubectl cluster-info
kubectl config view
kubectl get all
Check version:
kubectl version
Check available APIs:
kubectl api-versions
### Install Calico pod network plugin ###
The Calico network manager relies on the fact that each node already has a podCIDR defined.
You can check the podCIDR for your nodes with one of the following two commands:
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
kubectl get node <NODE_NAME> -o template --template={{.spec.podCIDR}}
If your nodes do not have a podCIDR, then either use the --pod-cidr kubelet command-line option or the --allocate-node-cidrs=true --cluster-cidr=<cidr> controller-manager command-line options.
If kubeadm is being used then pass --pod-network-cidr=10.244.0.0/16 to kubeadm init which will ensure that all nodes are automatically assigned a podCIDR.
It's possible to manually set the podCIDR for each node:
kubectl patch node <NODE_NAME> -p '{"spec":{"podCIDR":"<SUBNET>"}}'
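As an alternative to the --pod-network-cidr flag, kubeadm can also take the pod CIDR from a configuration file passed via kubeadm init --config. A sketch (the apiVersion shown matches kubeadm of the v1.13 era used here and may differ in newer releases; the file name is arbitrary):

```yaml
# kubeadm.yaml -- example file name; pass it with: sudo kubeadm init --config kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  podSubnet: "10.244.0.0/16"
```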
Change the CIDR in the downloaded calico.yaml (the CALICO_IPV4POOL_CIDR value) to match the cluster's pod CIDR, then apply the manifests:
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f ./calico.yaml
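The CIDR change in calico.yaml can be sketched with sed (the heredoc below is a trimmed stand-in for the real manifest, which is much larger; in the v3.3 manifest the default pool is 192.168.0.0/16):

```shell
# Create a trimmed stand-in for the downloaded manifest (illustration only).
cat > calico.yaml <<'EOF'
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
EOF
# Replace the default pool CIDR with the value passed to kubeadm init.
sed -i 's#192\.168\.0\.0/16#10.244.0.0/16#' calico.yaml
cat calico.yaml
```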
Check pods status:
kubectl get pods --all-namespaces
View pod:
kubectl describe pod coredns-86c58d9df4-fdxf2 --namespace=kube-system
Finding information about a certain Service:
kubectl get svc --all-namespaces -l k8s-app=kube-dns
The label depends on the DNS engine, so "k8s-app=coredns" may be needed instead.
Allow scheduling pods on the master node:
kubectl taint nodes --all node-role.kubernetes.io/master-
Deleting node(s)
To undo what kubeadm did, you should first drain the node and make sure that the node is empty before shutting it down.
Talking to the master with the appropriate credentials, run:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
Then, on the node being removed, reset all kubeadm installed state:
kubeadm reset
The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
If you want to reset the IPVS tables, you must run the following command:
ipvsadm -C
If you wish to start over simply run kubeadm init or kubeadm join with the appropriate arguments.