Introduction
They say you learn Kubernetes better when you install it the hard way: assembling the cluster piece by piece gives you a much clearer understanding of the individual components and how they fit together. I will use the Git repository at https://github.com/mmumshad/kubernetes-the-hard-way to do exactly that.
The repository contains steps that guide you through building a high-availability cluster with two control plane nodes, two worker nodes, and a load balancer node.
I am using a Red Hat Enterprise Linux (RHEL) 8.7 workstation. I have used VirtualBox before, but I am new to Vagrant.
Prerequisites
- Installation of VirtualBox.
- Installation of Vagrant.
Procedure
Install VirtualBox:
- Add Oracle's VirtualBox repo file to /etc/yum.repos.d:
sudo vim /etc/yum.repos.d/virtualbox.repo
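For reference, the file contents should look like the repo definition Oracle publishes on the VirtualBox Linux downloads page (double-check the baseurl and gpgkey values there, as they occasionally change):
[virtualbox]
name=Oracle Linux / RHEL / CentOS-$releasever / $basearch - VirtualBox
baseurl=http://download.virtualbox.org/virtualbox/rpm/el/$releasever/$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://www.virtualbox.org/download/oracle_vbox.asc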
- Install VirtualBox:
sudo dnf install VirtualBox-6.1
Install Vagrant:
- Install the HashiCorp repo configuration:
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
- Install Vagrant:
sudo yum -y install vagrant
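As a quick sanity check that the package installed correctly, you can print the version:
vagrant --version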
Provision Compute Resources
- Clone the kubernetes-the-hard-way repository:
git clone https://github.com/mmumshad/kubernetes-the-hard-way.git
- Change into the vagrant directory:
cd kubernetes-the-hard-way/vagrant
- Run the vagrant up command to provision the compute resources:
vagrant up
OK – here is where I ran into some problems. When I ran vagrant up, I saw the following message:
No usable default provider could be found for your system. Vagrant relies on interactions with 3rd party systems, known as "providers", to provide Vagrant with resources to run development environments. Examples are VirtualBox, VMware, Hyper-V. The easiest solution to this message is to install VirtualBox, which is available for free on all major platforms. If you believe you already have a provider available, make sure it is properly installed and configured. You can see more details about why a particular provider isn't working by forcing usage with `vagrant up --provider=PROVIDER`, which should give you a more specific error message for that particular provider.
I then tried:
vagrant up --provider=VirtualBox
The provider 'VirtualBox' could not be found, but was requested to back the machine 'master-1'. Please use a provider that exists. Did you mean 'virtualbox'? Vagrant knows about the following providers: docker, hyperv, virtualbox
OK – let me try vagrant up --provider=virtualbox:
The provider 'virtualbox' that was requested to back the machine 'master-1' is reporting that it isn't usable on this system. The reason is shown below: VirtualBox is complaining that the kernel module is not loaded. Please run `VBoxManage --version` or open the VirtualBox GUI to see the error message which should contain instructions on how to fix this error.
Alright – let's try VBoxManage --version:
WARNING: The vboxdrv kernel module is not loaded. Either there is no module
available for the current kernel (4.18.0-425.3.1.el8.x86_64) or it failed to load.
Please recompile the kernel module and install it by
  sudo /sbin/vboxconfig
You will not be able to start VMs until this problem is fixed.
6.1.42r155177
Next, I ran sudo /sbin/vboxconfig
vboxdrv.sh: Stopping VirtualBox services.
vboxdrv.sh: Starting VirtualBox services.
vboxdrv.sh: Building VirtualBox kernel modules.
This system is currently not set up to build kernel modules.
Please install the Linux kernel "header" files matching the current kernel
for adding new hardware support to the system.
The distribution packages containing the headers are probably:
  kernel-devel kernel-devel-4.18.0-425.3.1.el8.x86_64
There were problems setting up VirtualBox. To re-start the set-up process,
run /sbin/vboxconfig as root. If your system is using EFI Secure Boot you
may need to sign the kernel modules (vboxdrv, vboxnetflt, vboxnetadp,
vboxpci) before you can load them. Please see your Linux system's
documentation for more information.
OK – let me do a sudo yum install kernel-devel
This installed kernel-devel and elfutils-libelf-devel.
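A caveat worth knowing: yum installs the newest kernel-devel package, and the headers must match the running kernel for the module build to succeed. If vboxconfig still complains about missing headers after this, you can request the headers for the exact kernel you are running:
sudo yum install -y "kernel-devel-$(uname -r)"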
Next, I ran sudo /sbin/vboxconfig again. This looks better:
sudo /sbin/vboxconfig
vboxdrv.sh: Stopping VirtualBox services.
vboxdrv.sh: Starting VirtualBox services.
vboxdrv.sh: Building VirtualBox kernel modules.
OK – great, looks like VirtualBox is now configured. I will try the vagrant up command again:
Bringing machine 'master-1' up with 'virtualbox' provider...
Bringing machine 'master-2' up with 'virtualbox' provider...
Bringing machine 'loadbalancer' up with 'virtualbox' provider...
Bringing machine 'worker-1' up with 'virtualbox' provider...
Bringing machine 'worker-2' up with 'virtualbox' provider...
==> master-1: Importing base box 'ubuntu/jammy64'...
==> master-1: Matching MAC address for NAT networking...
==> master-1: Setting the name of the VM: kubernetes-ha-master-1
==> master-1: Clearing any previously set network interfaces...
==> master-1: Preparing network interfaces based on configuration...
    master-1: Adapter 1: nat
    master-1: Adapter 2: hostonly
==> master-1: Forwarding ports...
    master-1: 22 (guest) => 2711 (host) (adapter 1)
    master-1: 22 (guest) => 2222 (host) (adapter 1)
==> master-1: Running 'pre-boot' VM customizations...
==> master-1: Booting VM...
==> master-1: Waiting for machine to boot. This may take a few minutes...
    master-1: SSH address: 127.0.0.1:2222
    master-1: SSH username: vagrant
    master-1: SSH auth method: private key
    master-1: Warning: Authentication failure. Retrying...
    master-1: Warning: Authentication failure. Retrying...
    master-1: Warning: Authentication failure. Retrying...
    master-1: Warning: Authentication failure. Retrying...
    master-1: Warning: Authentication failure. Retrying...
I kept running into this same SSH issue. I found a GitHub issue describing a similar problem: https://github.com/hashicorp/vagrant/issues/5186. I tried a few of the suggestions from that thread, but still had no luck.
Finally, I looked at the Vagrantfile and decided to try a different box. You can search for boxes at https://vagrantcloud.com/search. The config.vm.box setting was originally set to ubuntu/jammy64. I changed it to ubuntu/trusty64 and hit the same issue, so I looked for a more recently updated Ubuntu box and settled on ubuntu/bionic64. After making this change to the Vagrantfile, I ran vagrant destroy and then vagrant up again.
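For reference, the fix amounts to a one-line change in the Vagrantfile (the exact quoting in the repo's copy may differ):
# before
config.vm.box = "ubuntu/jammy64"
# after
config.vm.box = "ubuntu/bionic64"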
This time, I saw a much different message:
Bringing machine 'master-1' up with 'virtualbox' provider...
Bringing machine 'master-2' up with 'virtualbox' provider...
Bringing machine 'loadbalancer' up with 'virtualbox' provider...
Bringing machine 'worker-1' up with 'virtualbox' provider...
Bringing machine 'worker-2' up with 'virtualbox' provider...
==> master-1: Box 'ubuntu/bionic64' could not be found. Attempting to find and install...
    master-1: Box Provider: virtualbox
    master-1: Box Version: >= 0
==> master-1: Loading metadata for box 'ubuntu/bionic64'
    master-1: URL: https://vagrantcloud.com/ubuntu/bionic64
==> master-1: Adding box 'ubuntu/bionic64' (v20230124.0.0) for provider: virtualbox
    master-1: Downloading: https://vagrantcloud.com/ubuntu/boxes/bionic64/versions/20230124.0.0/providers/virtualbox.box
    master-1: Download redirected to host: cloud-images.ubuntu.com
==> master-1: Successfully added box 'ubuntu/bionic64' (v20230124.0.0) for 'virtualbox'!
==> master-1: Importing base box 'ubuntu/bionic64'...
==> master-1: Matching MAC address for NAT networking...
==> master-1: Setting the name of the VM: kubernetes-ha-master-1
==> master-1: Clearing any previously set network interfaces...
==> master-1: Preparing network interfaces based on configuration...
    master-1: Adapter 1: nat
    master-1: Adapter 2: hostonly
==> master-1: Forwarding ports...
    master-1: 22 (guest) => 2711 (host) (adapter 1)
    master-1: 22 (guest) => 2222 (host) (adapter 1)
==> master-1: Running 'pre-boot' VM customizations...
==> master-1: Booting VM...
==> master-1: Waiting for machine to boot. This may take a few minutes...
    master-1: SSH address: 127.0.0.1:2222
    master-1: SSH username: vagrant
    master-1: SSH auth method: private key
    master-1:
    master-1: Vagrant insecure key detected. Vagrant will automatically replace
    master-1: this with a newly generated keypair for better security.
    master-1:
    master-1: Inserting generated public key within guest...
    master-1: Removing insecure key from the guest if it's present...
    master-1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> master-1: Machine booted and ready!
==> master-1: Checking for guest additions in VM.
....
Finally – success! All of my virtual machines have been built. Next, set up key-based SSH access from master-1 to the other nodes.
- SSH to the master-1 node:
vagrant ssh master-1
- Generate the key pair on the master-1 node (leave all settings as default):
ssh-keygen
- View the generated public key:
cat ~/.ssh/id_rsa.pub
- Add this key to the local authorized keys:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
- SSH to each of the other VMs and add the master-1 public key to the ~/.ssh/authorized_keys file (one way to script this is sketched after this list).
- As the vagrant user, you should now be able to ssh to each of the nodes from the master-1 node.
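If password authentication is enabled for the vagrant user (the password on most Vagrant boxes is vagrant), a small loop with ssh-copy-id can do the copying for you. This is just a sketch: it assumes the other node names resolve from master-1; if they do not, substitute the IP addresses assigned in the Vagrantfile.
# Run on master-1. ssh-copy-id appends the public key to each node's
# ~/.ssh/authorized_keys; you'll be prompted for the vagrant user's password.
for node in master-2 worker-1 worker-2 loadbalancer; do
  ssh-copy-id vagrant@"${node}"
done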
Install kubectl
You can use the kubectl command line utility to interact with the Kubernetes API Server. Download and install it on each node in the cluster using the following steps:
- SSH to the master-1 node.
- Run the following three commands:
wget https://storage.googleapis.com/kubernetes-release/release/v1.24.3/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
- Verify the installation by running kubectl with no arguments, which prints its usage help:
kubectl
- Repeat these steps on the remaining four nodes.
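To confirm the exact binary version, you can also run kubectl with the --client flag, which checks only the local binary and therefore works before the cluster exists:
kubectl version --client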
Conclusion
This post covered installing VirtualBox and Vagrant, provisioning the nodes in our test cluster, and installing kubectl on each of them. I did run into an issue with the ubuntu/jammy64 Vagrant box (repeated SSH authentication failures) and worked around it by switching to ubuntu/bionic64; I am curious whether others have hit the same problem.
NOTE: To stop your VMs and come back to them at a later time, you can run vagrant halt from the vagrant directory in your repository. To start working with the VMs again, type vagrant up.
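For example:
cd kubernetes-the-hard-way/vagrant
vagrant halt   # shut down all five VMs, keeping their disk state
vagrant up     # boot them again later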