Add the following lines:
192.168.0.103 master-node
192.168.0.104 slave-node
Save and close the file when you are finished, then set the hostname by running the following command:
hostnamectl set-hostname master-node
Next, open the /etc/hosts file on the second server:
nano /etc/hosts
Add the following lines:
192.168.0.103 master-node
192.168.0.104 slave-node
Save and close the file when you are finished, then set the hostname by running the following command:
hostnamectl set-hostname slave-node
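At this point, you can verify that the two servers can resolve each other's hostnames by pinging each one (the IP addresses above are examples; substitute your own):
ping -c 3 master-node
ping -c 3 slave-node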
Next, you will need to disable swap on each server, because the kubelet does not support swap and will not work if swap is active or even present in your /etc/fstab file.
You can disable swap with the following command:
swapoff -a
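You can verify that swap is now disabled with the free command; the Swap line should show 0 for total, used, and free:
free -h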
You can disable swap permanently by commenting out the swap entry in /etc/fstab:
nano /etc/fstab
Comment out the swap line as shown below:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda4 during installation
UUID=6f612675-026a-4d52-9d02-547030ff8a7e / ext4 errors=remount-ro 0 1
# swap was on /dev/sda6 during installation
#UUID=46ee415b-4afa-4134-9821-c4e4c275e264 none swap sw 0 0
/dev/sda5 /Data ext4 defaults 0 0
Save and close the file when you are finished.
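Alternatively, you can comment out any swap entry in a single step with sed (a convenience shortcut; double-check /etc/fstab afterwards to confirm the change):
sed -i '/ swap / s/^/#/' /etc/fstab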
Install Docker
Before starting, you will need to install Docker on both the master and the slave server. By default, the latest version of Docker is not available in the Ubuntu 16.04 repository, so you will need to add the Docker repository to your system.
First, install the packages required to add the Docker repository with the following command:
apt-get install apt-transport-https ca-certificates curl software-properties-common -y
Next, download and add Docker's GPG key with the following command:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
Next, add the Docker repository with the following command:
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Next, update the repository and install Docker with the following commands:
apt-get update -y
apt-get install docker-ce -y
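You can verify that Docker is installed and running with the following commands:
docker --version
systemctl status docker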
Install Kubernetes
Next, you will need to install kubeadm, kubectl, and kubelet on both servers. First, download and add the Kubernetes GPG key with the following command:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
Next, add the Kubernetes repository with the following command:
echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Finally, update the repository and install Kubernetes with the following commands:
apt-get update -y
apt-get install kubelet kubeadm kubectl -y
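Optionally, you can hold these packages at their installed versions so that an unattended upgrade does not update one cluster component without the others, and then verify the installation:
apt-mark hold kubelet kubeadm kubectl
kubeadm version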
Configure Master Node
All the required packages are now installed on both servers, so it's time to configure the Kubernetes Master Node.
First, initialize your cluster using the Master Node's private IP address with the following command:
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.0.103
You should see the following output:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 62b281.f819128770e900a3 192.168.0.103:6443 --discovery-token-ca-cert-hash sha256:68ce767b188860676e6952fdeddd4e9fd45ab141a3d6d50c02505fa0d4d44686
Note: Write down the complete join command from the above output. It will be used to join the Slave Node to the Master Node in a later step.
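If you lose this token, you can generate a fresh one on the Master Node at any time; the following command prints a complete join command:
kubeadm token create --print-join-command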
Next, you will need to run the following commands to configure the kubectl tool:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Next, check the status of the Master Node by running the following command:
kubectl get nodes
You should see the following output:
NAME STATUS ROLES AGE VERSION
master-node NotReady master 14m v1.9.4
In the above output, the Master Node is listed as NotReady because the cluster does not yet have a Container Network Interface (CNI) plugin installed.
Let's deploy the Calico CNI with the following command:
kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
Make sure Calico was deployed correctly by running the following command:
kubectl get pods --all-namespaces
You should see the following output:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-etcd-p2gbx 0/1 ContainerCreating 0 35s
kube-system calico-kube-controllers-d554689d5-v5lb6 0/1 Pending 0 34s
kube-system calico-node-667j2 0/2 ContainerCreating 0 35s
kube-system etcd-master-node 1/1 Running 0 15m
kube-system kube-apiserver-master-node 1/1 Running 0 15m
kube-system kube-controller-manager-master-node 1/1 Running 0 15m
kube-system kube-dns-6f4fd4bdf-7rl74 0/3 Pending 0 15m
kube-system kube-proxy-hqb98 1/1 Running 0 15m
kube-system kube-scheduler-master-node 1/1 Running 0 15m
Now, run the kubectl get nodes command again, and you should see that the Master Node is listed as Ready.
kubectl get nodes
Output:
NAME STATUS ROLES AGE VERSION
master-node Ready master 7m v1.9.4
Add Slave Node to the Kubernetes Cluster
Next, log in to the Slave Node and add it to the cluster. Use the join command from the output of the Master Node initialization and issue it on the Slave Node as shown below:
kubeadm join --token 62b281.f819128770e900a3 192.168.0.103:6443 --discovery-token-ca-cert-hash sha256:68ce767b188860676e6952fdeddd4e9fd45ab141a3d6d50c02505fa0d4d44686
Once the Node is joined successfully, you should see the following output:
[discovery] Trying to connect to API Server "192.168.0.103:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.103:6443"
[discovery] Requesting info from "https://192.168.0.103:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.103:6443"
[discovery] Successfully established connection with API Server "192.168.0.103:6443"
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Now, go back to the Master Node and issue the kubectl get nodes command to see that the Slave Node is now ready:
kubectl get nodes
Output:
NAME STATUS ROLES AGE VERSION
master-node Ready master 35m v1.9.4
slave-node Ready <none> 7m v1.9.4
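Optionally, you can give the Slave Node a role label so that the ROLES column no longer shows <none>. This label is purely cosmetic and does not affect scheduling:
kubectl label node slave-node node-role.kubernetes.io/worker=worker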
Deploy the Apache Container to the Cluster
The Kubernetes cluster is now ready, so it's time to deploy the Apache container.
On the Master Node, run the following command to create an Apache deployment. Note that httpd is the official Apache HTTP Server image on Docker Hub:
kubectl create deployment apache --image=httpd
Output:
deployment "apache" created
You can list the deployments with the following command:
kubectl get deployments
Output:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
apache 1 1 1 0 16s
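If you need more than one instance of the container, you can scale the deployment at any time, for example to three replicas:
kubectl scale deployment apache --replicas=3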
You can see more information about the Apache deployment with the following command:
kubectl describe deployment apache
Output:
Name: apache
Namespace: default
CreationTimestamp: Mon, 19 Mar 2018 19:04:03 +0530
Labels: app=apache
Annotations: deployment.kubernetes.io/revision=1
Selector: app=apache
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: app=apache
Containers:
apache:
Image: httpd
Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: apache-5fcc8cd4bf (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 43s deployment-controller Scaled up replica set apache-5fcc8cd4bf to 1
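You can also list the pods that belong to the deployment by using its label selector:
kubectl get pods -l app=apache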
Next, you will need to expose the Apache container to the network with the following command:
kubectl create service nodeport apache --tcp=80:80
Now, list the current services by running the following command:
kubectl get svc
You should see the Apache service with its assigned NodePort, 30267 in this example:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apache NodePort 10.107.95.29 <none> 80:30267/TCP 15s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 37m
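You can also test the service from the command line on either node before opening a browser (replace 30267 with the NodePort assigned on your system):
curl http://192.168.0.104:30267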
Now, open your web browser and visit http://192.168.0.104:30267 (the Slave Node IP with the assigned NodePort). You should see the default Apache welcome page:
Congratulations! Your Apache container has been deployed on your Kubernetes Cluster.
Related Alibaba Cloud Products
After completing your Kubernetes cluster, it makes sense to scale it for production; that is the whole point of using containers. To do this, you will need to set up a database for your application. For production scenarios, I do not recommend running your own database. Instead, you can choose from Alibaba Cloud's suite of database products.
ApsaraDB for Redis is an automated and scalable tool for developers to manage data storage shared across multiple processes, applications or servers.
As a Redis-protocol-compatible service, ApsaraDB for Redis offers high-speed read-write capabilities by serving data from in-memory caches, and it ensures data persistence by using both memory and hard disk storage.
Data Transmission Service (DTS) helps you migrate data between data storage types, such as relational database, NoSQL, and OLAP. The service supports homogeneous migrations as well as heterogeneous migration between different data storage types.
DTS also can be used for continuous data replication with high availability. In addition, DTS can help you subscribe to the change data function of ApsaraDB for RDS. With DTS, you can easily implement scenarios such as data migration, remote real time data backup, real time data integration and cache refresh.