Introduction

In Part 1 of Installing Kubernetes the Hard Way, I installed VirtualBox and Vagrant. I had a few issues with the image used in Vagrant, but I was able to sort that out and provision the machines needed. I’ll continue the lab by moving on to Provisioning a CA and Generating TLS Certificates.

Procedure

First, I changed to my vagrant directory and ran the vagrant up command to start my environment. Once the environment was running, I ran the vagrant ssh master-1 command to connect to my first master node.
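
For reference, these are the commands (the location of the directory containing the Vagrantfile will vary depending on how you set things up in Part 1):

cd vagrant            # path to the Vagrantfile directory is assumed
vagrant up
vagrant ssh master-1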

I will create a Certificate Authority (CA) that can be used to generate additional TLS certificates:

1. Create the private key for the CA:

openssl genrsa -out ca.key 2048

2. Create the Certificate Signing Request (CSR) using the private key:

openssl req -new -key ca.key -subj "/CN=KUBERNETES-CA" -out ca.csr

3. Self-sign the CSR using the private key:

openssl x509 -req -in ca.csr -signkey ca.key -CAcreateserial  -out ca.crt -days 1000

The ca.crt file is the Kubernetes CA certificate; it will be copied to the master nodes later in this post and embedded in each of the kubeconfig files.
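
To sanity-check the CA, print its subject and issuer; for a self-signed root certificate these should both show the CN set above:

openssl x509 -in ca.crt -noout -subject -issuer
# subject=CN = KUBERNETES-CA
# issuer=CN = KUBERNETES-CA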

Generating client and server certificates

This section will cover generating the client and server certificates for each Kubernetes component and the client certificate for the Kubernetes admin user.

Creating the admin client certificate

1. Generate the private key for the admin user:

openssl genrsa -out admin.key 2048

2. Generate the CSR for the admin user:

openssl req -new -key admin.key -subj "/CN=admin/O=system:masters" -out admin.csr

3. Sign the certificate for the admin user using the CA server’s private key:

openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial  -out admin.crt -days 1000

NOTE: The admin user is part of the system:masters group. This is how we can perform administrative operations on the Kubernetes cluster using the kubectl utility.
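
You can confirm that the group made it into the certificate by printing its subject (the exact output formatting varies by OpenSSL version):

openssl x509 -in admin.crt -noout -subject
# subject=CN = admin, O = system:masters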

Creating the controller manager client certificate

1. Generate the private key for the kube-controller-manager:

openssl genrsa -out kube-controller-manager.key 2048

2. Generate the CSR for the kube-controller-manager:

openssl req -new -key kube-controller-manager.key -subj "/CN=system:kube-controller-manager" -out kube-controller-manager.csr

3. Sign the certificate for the kube-controller-manager using the CA server’s private key:

openssl x509 -req -in kube-controller-manager.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kube-controller-manager.crt -days 1000

Creating the kube-proxy client certificate

1. Generate the private key for the kube-proxy client:

openssl genrsa -out kube-proxy.key 2048

2. Generate the CSR for the kube-proxy client:

openssl req -new -key kube-proxy.key -subj "/CN=system:kube-proxy" -out kube-proxy.csr

3. Sign the certificate for the kube-proxy client using the CA server’s private key:

openssl x509 -req -in kube-proxy.csr -CA ca.crt -CAkey ca.key -CAcreateserial  -out kube-proxy.crt -days 1000

Creating the scheduler client certificate

1. Generate the private key for the kube-scheduler client:

openssl genrsa -out kube-scheduler.key 2048

2. Generate the CSR for the kube-scheduler client:

openssl req -new -key kube-scheduler.key -subj "/CN=system:kube-scheduler" -out kube-scheduler.csr

3. Sign the certificate for the kube-scheduler client using the CA server’s private key:

openssl x509 -req -in kube-scheduler.csr -CA ca.crt -CAkey ca.key -CAcreateserial  -out kube-scheduler.crt -days 1000
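
Since these four client certificates are produced the same way, a quick loop confirms that each one chains back to the CA before moving on:

for cert in admin kube-controller-manager kube-proxy kube-scheduler; do
  openssl verify -CAfile ca.crt ${cert}.crt   # prints "<name>.crt: OK" on success
done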

Creating the Kubernetes API server certificate

The kube-apiserver certificate must include, as subject alternative names, every hostname and IP address that the various components may use to reach the API server.

1. Create a conf file to add the alternate names:

cat > openssl.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.96.0.1
IP.2 = 192.168.5.11
IP.3 = 192.168.5.12
IP.4 = 192.168.5.30
IP.5 = 127.0.0.1
EOF

2. Generate the private key for the kube-apiserver:

openssl genrsa -out kube-apiserver.key 2048

3. Generate the CSR for the kube-apiserver:

openssl req -new -key kube-apiserver.key -subj "/CN=kube-apiserver" -out kube-apiserver.csr -config openssl.cnf

4. Sign the certificate for the kube-apiserver using the CA server’s private key:

openssl x509 -req -in kube-apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial  -out kube-apiserver.crt -extensions v3_req -extfile openssl.cnf -days 1000
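
It is worth confirming the alternate names actually made it into the signed certificate, since forgetting the -extfile flag silently drops them:

openssl x509 -in kube-apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"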

Creating the etcd server certificate

The etcd server certificate must include the alternate names of all the servers that are part of the etcd cluster.

1. Create a conf file to add the alternate names:

cat > openssl-etcd.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.5.11
IP.2 = 192.168.5.12
IP.3 = 127.0.0.1
EOF

2. Generate the private key for the etcd-server:

openssl genrsa -out etcd-server.key 2048

3. Generate the CSR for the etcd-server:

openssl req -new -key etcd-server.key -subj "/CN=etcd-server" -out etcd-server.csr -config openssl-etcd.cnf

4. Sign the certificate for the etcd-server using the CA server’s private key:

openssl x509 -req -in etcd-server.csr -CA ca.crt -CAkey ca.key -CAcreateserial  -out etcd-server.crt -extensions v3_req -extfile openssl-etcd.cnf -days 1000
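
The same SAN check applies here:

openssl x509 -in etcd-server.crt -noout -text | grep -A1 "Subject Alternative Name"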

Creating the service account key pair

The Kubernetes controller manager uses a key pair to generate and sign service account tokens.

1. Generate the private key for the service-account:

openssl genrsa -out service-account.key 2048

2. Generate the CSR for the service account:

openssl req -new -key service-account.key -subj "/CN=service-accounts" -out service-account.csr

3. Sign the certificate for the service-account using the CA server’s private key:

openssl x509 -req -in service-account.csr -CA ca.crt -CAkey ca.key -CAcreateserial  -out service-account.crt -days 1000
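
Unlike the other certificates, this pair is not used for TLS: the controller manager signs service account tokens with the private key, and the API server verifies them with the public half. They get wired in later with flags along these lines (the /var/lib/kubernetes paths are an assumption about the next post, not something configured yet):

# kube-controller-manager signs tokens with the private key:
#   --service-account-private-key-file=/var/lib/kubernetes/service-account.key
# kube-apiserver verifies tokens with the certificate:
#   --service-account-key-file=/var/lib/kubernetes/service-account.crt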

Distributing the certificates to each controller

1. Copy the certificates that we created to each of the master nodes:

for instance in master-1 master-2; do
  scp ca.crt ca.key kube-apiserver.key kube-apiserver.crt \
    service-account.key service-account.crt \
    etcd-server.key etcd-server.crt \
    ${instance}:~/
done

Generating the Kubernetes configuration files for authentication

This section covers the generation of the Kubernetes configuration files, which enable Kubernetes clients to locate and authenticate to the Kubernetes API server. This includes the kubeconfig files for the controller manager, kubelet, kube-proxy, and scheduler clients and the admin user.
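
Note that the kube-proxy kubeconfig below references ${LOADBALANCER_ADDRESS}, which is not set automatically. In this lab the load balancer sits at 192.168.5.30 (the same address added as IP.4 in the kube-apiserver SAN list), so I set it first:

LOADBALANCER_ADDRESS=192.168.5.30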

Generating the kube-proxy configuration file:

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://${LOADBALANCER_ADDRESS}:6443 \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-credentials system:kube-proxy \
    --client-certificate=kube-proxy.crt \
    --client-key=kube-proxy.key \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}

Output:

Cluster "kubernetes-the-hard-way" set.
User "system:kube-proxy" set.
Context "default" created.
Switched to context "default".
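
Each generated kubeconfig can be reviewed with kubectl config view (the same check works for the other files below); the embedded certificate data is redacted in the output:

kubectl config view --kubeconfig=kube-proxy.kubeconfig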

Generating the kube-controller-manager configuration file:

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=kube-controller-manager.crt \
    --client-key=kube-controller-manager.key \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}

Output:

Cluster "kubernetes-the-hard-way" set.
User "system:kube-controller-manager" set.
Context "default" created.
Switched to context "default".

Generating the kube-scheduler configuration file:

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.crt \
    --client-key=kube-scheduler.key \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}

Output:

Cluster "kubernetes-the-hard-way" set.
User "system:kube-scheduler" set.
Context "default" created.
Switched to context "default".

Generating the admin configuration file:

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=admin.kubeconfig

  kubectl config set-credentials admin \
    --client-certificate=admin.crt \
    --client-key=admin.key \
    --embed-certs=true \
    --kubeconfig=admin.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=admin \
    --kubeconfig=admin.kubeconfig

  kubectl config use-context default --kubeconfig=admin.kubeconfig
}

Output:

Cluster "kubernetes-the-hard-way" set.
User "admin" set.
Context "default" created.
Switched to context "default".

Distributing the Kubernetes kube-proxy configuration files to each worker instance:

for instance in worker-1 worker-2; do
  scp kube-proxy.kubeconfig ${instance}:~/
done

Distributing the Kubernetes admin, kube-controller-manager, and kube-scheduler configuration files to each controller instance:

for instance in master-1 master-2; do
  scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done

Generating the data encryption configuration and key

This section covers the steps for generating the encryption key and configuration to use for encrypting Kubernetes Secrets.

1. Generate the encryption key (a quick sanity check on the key follows step 3):

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

2. Create the encryption-config.yaml file:

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

3. Copy the encryption-config.yaml file to each controller instance:

for instance in master-1 master-2; do
  scp encryption-config.yaml ${instance}:~/
done
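
As a sanity check, the aescbc provider expects a 32-byte key, so the base64 value should decode back to exactly 32 bytes:

echo "$ENCRYPTION_KEY" | base64 --decode | wc -c   # should print 32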

Conclusion

This blog post covered generating the necessary certificates, configuration files, and the data encryption configuration and key. In the next post, I’ll cover bootstrapping the etcd cluster, the control plane, and the worker nodes.
