Kubernetes the hard way

Hi everyone. My name is Dobry Kot (Telegram).

From the FR-Solutions team, with the support of @irbgeo (Telegram): we continue the series of articles about K8S.


The goals of this article:

  1. Bring up to date the Kubernetes deployment procedure described by the much-loved Kelsey Hightower.

  2. Show that "Kubernetes is just 5 binaries" and "Kubernetes is simple" are misleading claims.

  3. Add key-keeper to the Kubernetes configuration for certificate management.

What does Kubernetes consist of?

We all remember the joke that "Kubernetes is just 5 binaries":

  1. etcd

  2. kube-apiserver

  3. kube-controller-manager

  4. kube-scheduler

  5. kubelet

But if you operate with only these five, you will not assemble a cluster. Why not?

kubelet requires additional components to work:

  1. A Container Runtime Interface implementation, CRI (containerd, cri-o, docker, etc.).

The CRI in turn requires:

  1. runc, the library for running containers.

Certificates:

  1. A tool such as cfssl, kubeadm, or key-keeper is required to issue certificates.

Other:

  1. kubectl (for working with kubernetes), optional

  2. crictl (for convenient work with the CRI), optional

  3. etcdctl (for working with etcd on the masters), optional

  4. kubeadm (for cluster setup), optional

So, to deploy Kubernetes you need at least 8 binaries.

Stages of creating a K8S cluster

  1. Create the Linux machines on which the cluster control plane will be deployed.

  2. Configure the operating system on the created Linux machines:

    1. install base packages (for Linux housekeeping).

    2. configure kernel modules (modprobe).

    3. configure sysctls.

    4. install the binaries the cluster needs to function.

    5. prepare configuration files for the installed components.

  3. Prepare the Vault storage.

  4. Generate the static pod manifests.

  5. Verify that the cluster is reachable.

As you can see, just five stages in total; nothing complicated :)

Well then, let's get started!

1) Create 3 nodes for the masters and assign them DNS names using the mask:

master-${INDEX}.${CLUSTER_NAME}.${BASE_DOMAIN}

** IMPORTANT: ${INDEX} must start at 0 because of how indices are generated in the Terraform module for Vault; more on that later.
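
For example, with the values from the environments block below (CLUSTER_NAME=example, BASE_DOMAIN=dobry-kot.ru), the masters get the names:

master-0.example.dobry-kot.ru
master-1.example.dobry-kot.ru
master-2.example.dobry-kot.ru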

environments

## RUN ON EACH MASTER.
## REQUIRED VARS: 
export BASE_DOMAIN=dobry-kot.ru
export CLUSTER_NAME=example
export BASE_CLUSTER_DOMAIN=${CLUSTER_NAME}.${BASE_DOMAIN}

# ETCD ports
export ETCD_SERVER_PORT="2379"
export ETCD_PEER_PORT="2380"
export ETCD_METRICS_PORT="2381"

# Kubernetes ports
export KUBE_APISERVER_PORT="6443"
export KUBE_CONTROLLER_MANAGER_PORT="10257"
export KUBE_SCHEDULER_PORT="10259"

# Set this to 1, 3, or 5
export MASTER_COUNT=1

# For kube-apiserver
export ETCD_SERVERS=$(echo \
$(for INDEX in `seq 0 $(($MASTER_COUNT-1))`; \
do \
echo https://master-${INDEX}.${BASE_CLUSTER_DOMAIN}:${ETCD_SERVER_PORT} ; \
done) | 
sed "s/,//" | 
sed "s/ /,/g")

# For bootstrapping the ETCD cluster
export ETCD_INITIAL_CLUSTER=$(echo \
$(for INDEX in `seq 0 $(($MASTER_COUNT-1))`; \
do \
echo master-${INDEX}.${BASE_CLUSTER_DOMAIN}=https://master-${INDEX}.${BASE_CLUSTER_DOMAIN}:${ETCD_PEER_PORT} ; \
done) | 
sed "s/,//" | 
sed "s/ /,/g")


export KUBERNETES_VERSION="v1.23.12"
export ETCD_VERSION="3.5.3-0"
export ETCD_TOOL_VERSION="v3.5.5"
export RUNC_VERSION="v1.1.3"
export CONTAINERD_VERSION="1.6.8"
export CRICTL_VERSION=$(echo $KUBERNETES_VERSION | 
sed -r 's/^v([0-9]*).([0-9]*).([0-9]*)/v\1.\2.0/')

export BASE_K8S_PATH="/etc/kubernetes"

export SERVICE_CIDR="29.64.0.0/16"
# No offense: write the regexp yourself :)
export SERVICE_DNS="29.64.0.10"

# Example values for an external Vault server:
export VAULT_MASTER_TOKEN="hvs.vy0dqWuHkJpiwtYhw4yPT6cC"
export VAULT_SERVER="http://193.32.219.99:9200/"

# This article uses the dev-mode Vault deployed on master-0 below,
# so these values override the ones above:
export VAULT_MASTER_TOKEN="root"
export VAULT_SERVER="http://master-0.${CLUSTER_NAME}.${BASE_DOMAIN}:9200/"
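
As a sanity check: with MASTER_COUNT=3 the two derived variables expand to the following comma-separated lists:

echo $ETCD_SERVERS
# https://master-0.example.dobry-kot.ru:2379,https://master-1.example.dobry-kot.ru:2379,https://master-2.example.dobry-kot.ru:2379

echo $ETCD_INITIAL_CLUSTER
# master-0.example.dobry-kot.ru=https://master-0.example.dobry-kot.ru:2380,master-1.example.dobry-kot.ru=https://master-1.example.dobry-kot.ru:2380,master-2.example.dobry-kot.ru=https://master-2.example.dobry-kot.ru:2380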

If you have studied Kelsey Hightower's documentation, you may have noticed that his configuration files are built around node IP addresses. That approach works, but it is less flexible; for easier maintenance and later templating it is better to use FQDN masks known in advance, as I showed for the masters above.

2) Download all the binaries the K8S cluster requires.

  • In this setup I will not use RPM or DEB packages, in order to show in detail what the whole installation consists of.

download components

## RUN ON EACH MASTER.
wget -O /usr/bin/key-keeper   "https://storage.yandexcloud.net/m.images/key-keeper-T2?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=YCAJEhOlYpv1GRY7hghCojNX5%2F20221020%2Fru-central1%2Fs3%2Faws4_request&X-Amz-Date=20221020T123413Z&X-Amz-Expires=2592000&X-Amz-Signature=138701723B70343E38D82791A28AD1DB87040677F7C94D83610FF26ED9AF1954&X-Amz-SignedHeaders=host"
wget -O /usr/bin/kubectl       https://storage.googleapis.com/kubernetes-release/release/${KUBERNETES_VERSION}/bin/linux/amd64/kubectl
wget -O /usr/bin/kubelet       https://storage.googleapis.com/kubernetes-release/release/${KUBERNETES_VERSION}/bin/linux/amd64/kubelet
wget -O /usr/bin/kubeadm       https://storage.googleapis.com/kubernetes-release/release/${KUBERNETES_VERSION}/bin/linux/amd64/kubeadm
wget -O /usr/bin/runc          https://github.com/opencontainers/runc/releases/download/${RUNC_VERSION}/runc.amd64
wget -O /tmp/etcd.tar.gz       https://github.com/etcd-io/etcd/releases/download/${ETCD_TOOL_VERSION}/etcd-${ETCD_TOOL_VERSION}-linux-amd64.tar.gz
wget -O /tmp/containerd.tar.gz https://github.com/containerd/containerd/releases/download/v${CONTAINERD_VERSION}/containerd-${CONTAINERD_VERSION}-linux-amd64.tar.gz
wget -O /tmp/crictl.tar.gz     https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz

chmod +x /usr/bin/key-keeper 
chmod +x /usr/bin/kubelet 
chmod +x /usr/bin/kubectl 
chmod +x /usr/bin/kubeadm
chmod +x /usr/bin/runc

mkdir -p /tmp/containerd
mkdir -p /tmp/etcd

tar -C "/tmp/etcd"        -xvf /tmp/etcd.tar.gz
tar -C "/tmp/containerd"  -xvf /tmp/containerd.tar.gz
tar -C "/usr/bin"         -xvf /tmp/crictl.tar.gz

cp /tmp/etcd/etcd*/etcdctl /usr/bin/
cp /tmp/containerd/bin/*   /usr/bin/
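
A quick check that everything landed in /usr/bin (using each tool's own version subcommand or flag; key-keeper is only checked for presence, since its CLI is not shown here):

kubectl version --client
kubeadm version
kubelet --version
runc --version
containerd --version
crictl --version
etcdctl version
command -v key-keeper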

3) Creating the services:

There are only 3 services in our installation (key-keeper, kubelet, containerd).

containerd.service

## RUN ON EACH MASTER.
## SETUP SERVICE FOR CONTAINERD

cat <<EOF > /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF
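
After writing the unit file, reload systemd and start the runtime right away; kubelet and key-keeper will be enabled later, once their configs are in place:

systemctl daemon-reload
systemctl enable --now containerd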

key-keeper.service

## RUN ON EACH MASTER.
## SETUP SERVICE FOR KEY-KEEPER
cat <<EOF > /etc/systemd/system/key-keeper.service
[Unit]
Description=key-keeper-agent

Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/key-keeper -config-dir ${BASE_K8S_PATH}/pki -config-regexp .*vault-config 

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

kubelet.service

## RUN ON EACH MASTER.
## SETUP SERVICE FOR KUBELET
cat <<EOF > /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target


[Service]
ExecStart=/usr/bin/kubelet

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

kubelet.d/conf

## RUN ON EACH MASTER.
## SETUP SERVICE-CONFIG FOR KUBELET

mkdir -p /etc/systemd/system/kubelet.service.d

cat <<EOF > /etc/systemd/system/kubelet.service.d/10-fraima.conf
[Service]
EnvironmentFile=-${BASE_K8S_PATH}/kubelet/service/kubelet-args.env

ExecStart=
ExecStart=/usr/bin/kubelet \
\$KUBELET_HOSTNAME \
\$KUBELET_CNI_ARGS \
\$KUBELET_RUNTIME_ARGS \
\$KUBELET_AUTH_ARGS \
\$KUBELET_CONFIGS_ARGS \
\$KUBELET_BASIC_ARGS \
\$KUBELET_KUBECONFIG_ARGS
EOF

kubelet-args.env

## RUN ON EACH MASTER.
## SETUP SERVICE-CONFIG FOR KUBELET

mkdir -p  ${BASE_K8S_PATH}/kubelet/service/

cat <<EOF > ${BASE_K8S_PATH}/kubelet/service/kubelet-args.env
KUBELET_HOSTNAME=""
KUBELET_BASIC_ARGS="
    --register-node=true
    --cloud-provider=external
    --image-pull-progress-deadline=2m
    --feature-gates=RotateKubeletServerCertificate=true
    --cert-dir=/etc/kubernetes/pki/certs/kubelet
    --authorization-mode=Webhook
    --v=2
"
KUBELET_AUTH_ARGS="
    --anonymous-auth="false"
"
KUBELET_CNI_ARGS="
    --cni-bin-dir=/opt/cni/bin
    --cni-conf-dir=/etc/cni/net.d
    --network-plugin=cni
"
KUBELET_CONFIGS_ARGS="
    --config=${BASE_K8S_PATH}/kubelet/config.yaml
    --root-dir=/var/lib/kubelet
    --register-node=true
    --image-pull-progress-deadline=2m
    --v=2
"
KUBELET_KUBECONFIG_ARGS="
    --kubeconfig=${BASE_K8S_PATH}/kubelet/kubeconfig
"
KUBELET_RUNTIME_ARGS="
    --container-runtime=remote
    --container-runtime-endpoint=/run/containerd/containerd.sock
    --pod-infra-container-image=k8s.gcr.io/pause:3.6
"
EOF

** Note: if you eventually plan to deploy K8S in a cloud and integrate it with the cloud, set --cloud-provider=external.

*** A useful feature is automatic labeling of the node when it registers in the cluster:
--node-labels=node.kubernetes.io/master,foo=bar

Below is the list of system labels that are allowed to be set:
kubelet.kubernetes.io
node.kubernetes.io
beta.kubernetes.io/arch,
beta.kubernetes.io/instance-type,
beta.kubernetes.io/os,
failure-domain.beta.kubernetes.io/region,
failure-domain.beta.kubernetes.io/zone,
kubernetes.io/arch,
kubernetes.io/hostname,
kubernetes.io/os,
node.kubernetes.io/instance-type,
topology.kubernetes.io/region,
topology.kubernetes.io/zone

For example, you cannot set a system label that is not on the list:
--node-labels=node-role.kubernetes.io/master
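
If you do need a role label such as node-role.kubernetes.io/master, the usual workaround is to set it through the API after the node has registered, for example (a sketch; substitute your real node name and use the admin.conf generated later in this article):

kubectl --kubeconfig /etc/kubernetes/admin.conf \
  label node <node-name> node-role.kubernetes.io/master=""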

4) Preparing Vault.

As mentioned earlier, we will issue certificates through a centralized Vault store.

For this example we will place the reference Vault server on master-0 in dev mode, with already-unsealed storage and a default token, for convenience.

Vault

## RUN ON MASTER-0.
export VAULT_VERSION="1.12.1"
export VAULT_ADDR=${VAULT_SERVER}
export VAULT_TOKEN=${VAULT_MASTER_TOKEN}

wget -O /tmp/vault_${VAULT_VERSION}_linux_amd64.zip https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip
unzip /tmp/vault_${VAULT_VERSION}_linux_amd64.zip -d /usr/bin
## RUN ON MASTER-0.
cat <<EOF > /etc/systemd/system/vault.service
[Unit]
Description=Vault secret management tool
After=consul.service


[Service]
PermissionsStartOnly=true
ExecStart=/usr/bin/vault server -log-level=debug -dev -dev-root-token-id="${VAULT_MASTER_TOKEN}" -dev-listen-address=0.0.0.0:9200
Restart=on-failure
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now vault

## RUN ON MASTER-0.
#enable Vault PKI secret engine 
vault secrets enable -path=pki-root pki

#set default ttl
vault secrets tune -max-lease-ttl=87600h pki-root

#generate root CA
vault write -format=json pki-root/root/generate/internal \
common_name="ROOT PKI" ttl=8760h

* Please note: if you are located in Russia, you will have problems downloading Vault and Terraform.

** pki-root/root/generate/internal means a CA is generated and only the public part is returned in the response; the private key stays locked inside Vault.

*** pki-root is the base mount name for the Root CA; it can be changed by customizing the Terraform module we will discuss below.

**** This Vault installation is deployed for demonstration purposes and must not be used for production workloads.
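
To make sure the PKI engine is mounted and the root CA exists, you can query Vault directly (a quick check under the same VAULT_ADDR and VAULT_TOKEN; cert/ca is the standard read path of the Vault PKI engine):

vault secrets list | grep pki-root
vault read -field=certificate pki-root/cert/ca | openssl x509 -noout -subject -dates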

Great, Vault is deployed; now we need to prepare the roles, policies, and client access in it for key-keeper.

For that we will use our Terraform module.

Terraform

## RUN ON MASTER-0.
export TERRAFORM_VERSION="1.3.4"

wget -O /tmp/terraform_${TERRAFORM_VERSION}_linux_amd64.zip https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip

unzip /tmp/terraform_${TERRAFORM_VERSION}_linux_amd64.zip -d /usr/bin
## RUN ON MASTER-0.
mkdir terraform

cat <<EOF > terraform/main.tf
terraform {
  required_version = ">= 0.13"

}

provider "vault" {
    
    address = "http://127.0.0.1:9200/"
    token = "${VAULT_MASTER_TOKEN}"
}


variable "master-instance-count" {
  type = number
  default = 1
}

variable "base_domain" {
  type = string
  default = "${BASE_DOMAIN}"
}

variable "cluster_name" {
  type = string
  default = "${CLUSTER_NAME}"
}

variable "vault_server" {
  type = string
  default = "http://master-0.${BASE_CLUSTER_DOMAIN}:9200/"
}

# This module generates the full set of variables
# that will be needed in the following articles and modules.
module "k8s-global-vars" {
    source = "git::https://github.com/fraima/kubernetes.git//modules/k8s-config-vars"
    cluster_name          = var.cluster_name
    base_domain           = var.base_domain
    master_instance_count = var.master-instance-count
    vault_server          = var.vault_server
}

# All the Vault magic happens here.
module "k8s-vault" {
    source = "git::https://github.com/fraima/kubernetes.git//modules/k8s-vault"
    k8s_global_vars   = module.k8s-global-vars
}
EOF
cd terraform 
terraform init --upgrade
terraform plan
terraform apply
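
After apply, a quick way to see what the module created (a sketch: the paths follow the masks described in the list below, and the AppRole auth method is assumed to be mounted at clusters/${CLUSTER_NAME}/approle):

vault secrets list
vault list auth/clusters/${CLUSTER_NAME}/approle/role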

The base set of Vault content for a production cluster includes:

  1. PKI mounts for etcd, kubernetes, front-proxy (* the PKI mounts are created using masks):

    1. clusters/${CLUSTER_NAME}/pki/etcd

    2. clusters/${CLUSTER_NAME}/pki/kubernetes-ca

    3. clusters/${CLUSTER_NAME}/pki/front-proxy

  2. A Key-Value mount for secrets:

    1. clusters/${CLUSTER_NAME}/kv/

  3. Roles for requesting certificates (the links point to the certificate descriptions):

    1. ETCD:

      1. etcd-client

      2. etcd-server (not used in this installation)

      3. etcd-peer

    2. Kubernetes-ca:

      1. bootstrappers-client (not used in this installation)

      2. kube-controller-manager-client

      3. kube-controller-manager-server

      4. kube-apiserver-kubelet-client **

      5. kubeadm-client (used in this installation as cluster-admin)

      6. kube-apiserver-cluster-admin-client *** (not used in this installation)

      7. kube-apiserver

      8. kube-scheduler-server

      9. kube-scheduler-client

      10. kubelet-peer-k8s-certmanager (not used in this installation)

      11. kubelet-server

      12. kubelet-client

    3. Front-proxy:

      1. front-proxy-client

  4. Access policies for the roles above.

  5. AppRoles for client access.

    1. The AppRole path follows the mask clusters/${CLUSTER_NAME}/approle

    2. The AppRole name follows the mask ${CERT_ROLE}-${MASTER_NAME}

  6. Temporary tokens.

  7. Encryption keys for signing service account JWT tokens.

** The kube-apiserver-kubelet-client certificate usually has cluster-admin privileges in most installations; here, by default, it has no rights and requires a ClusterRoleBinding to be created for it to work correctly with the node kubelets, but more on that later (see the Verification block at the end of the article).

*** kubeadm-client has cluster-admin rights by default. In this installation it will be used as the administrator's access client for the initial cluster setup.
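
A sketch of the ClusterRoleBinding mentioned in the note above: the user must match the certificate's commonName (custom:kube-apiserver-kubelet-client in the key-keeper config below), and system:kubelet-api-admin is the built-in role granting full access to the kubelet API:

kubectl create clusterrolebinding kube-apiserver-kubelet-client \
  --clusterrole=system:kubelet-api-admin \
  --user=custom:kube-apiserver-kubelet-client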

5) Let's move on to building the configuration files for our services.

** Reminder: there are only 3 of them (key-keeper, kubelet, containerd).
*** We will not cover containerd, since it generates a base config itself, and in most cases that is sufficient.

Let's start with key-keeper.

The specifics of building the config are described in this README.

The config is very long, so don't be surprised...

key-keeper.issuers

## RUN ON EACH MASTER.
# Each node must have its own name!!!
export MASTER_NAME="master-0"

In the first part of the config we set the node name; all the other variables were defined above.

## RUN ON EACH MASTER.
mkdir -p ${BASE_K8S_PATH}/pki/

cat <<EOF > ${BASE_K8S_PATH}/pki/vault-config
---
issuers:

  - name: kube-apiserver-sa
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kube-apiserver-sa-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kube-apiserver-sa/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kube-apiserver-sa/role-id
      resource:
        kv:
          path: clusters/${CLUSTER_NAME}/kv
      timeout: 15s

  - name: etcd-ca
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: etcd-ca-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/etcd-ca/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/etcd-ca/role-id
      resource:
        CAPath: "clusters/${CLUSTER_NAME}/pki/etcd"
        rootCAPath: "clusters/${CLUSTER_NAME}/pki/root"
      timeout: 15s

  - name: etcd-client
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: etcd-client-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/etcd-client/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/etcd-client/role-id
      resource:
        role: etcd-client
        CAPath: "clusters/${CLUSTER_NAME}/pki/etcd"
      timeout: 15s

  - name: etcd-peer
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: etcd-peer-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/etcd-peer/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/etcd-peer/role-id
      resource:
        role: etcd-peer
        CAPath: "clusters/${CLUSTER_NAME}/pki/etcd"
      timeout: 15s

  - name: front-proxy-ca
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: front-proxy-ca-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/front-proxy-ca/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/front-proxy-ca/role-id
      resource:
        CAPath: "clusters/${CLUSTER_NAME}/pki/front-proxy"
        rootCAPath: "clusters/${CLUSTER_NAME}/pki/root"
      timeout: 15s

  - name: front-proxy-client
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: front-proxy-client-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/front-proxy-client/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/front-proxy-client/role-id
      resource:
        role: front-proxy-client
        CAPath: "clusters/${CLUSTER_NAME}/pki/front-proxy"
      timeout: 15s

  - name: kubernetes-ca
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kubernetes-ca-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kubernetes-ca/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kubernetes-ca/role-id
      resource:
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
        rootCAPath: "clusters/${CLUSTER_NAME}/pki/root"
      timeout: 15s

  - name: kube-apiserver
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kube-apiserver-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kube-apiserver/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kube-apiserver/role-id
      resource:
        role: kube-apiserver
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kube-apiserver-cluster-admin-client
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kube-apiserver-cluster-admin-client-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kube-apiserver-cluster-admin-client/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kube-apiserver-cluster-admin-client/role-id
      resource:
        role: kube-apiserver-cluster-admin-client
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kube-apiserver-kubelet-client
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kube-apiserver-kubelet-client-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kube-apiserver-kubelet-client/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kube-apiserver-kubelet-client/role-id
      resource:
        role: kube-apiserver-kubelet-client
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kube-controller-manager-client
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kube-controller-manager-client-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kube-controller-manager-client/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kube-controller-manager-client/role-id
      resource:
        role: kube-controller-manager-client
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kube-controller-manager-server
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kube-controller-manager-server-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kube-controller-manager-server/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kube-controller-manager-server/role-id
      resource:
        role: kube-controller-manager-server
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kube-scheduler-client
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kube-scheduler-client-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kube-scheduler-client/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kube-scheduler-client/role-id
      resource:
        role: kube-scheduler-client
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kube-scheduler-server
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kube-scheduler-server-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kube-scheduler-server/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kube-scheduler-server/role-id
      resource:
        role: kube-scheduler-server
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kubeadm-client
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kubeadm-client-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kubeadm-client/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kubeadm-client/role-id
      resource:
        role: kubeadm-client
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kubelet-client
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kubelet-client-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kubelet-client/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kubelet-client/role-id
      resource:
        role: kubelet-client
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s

  - name: kubelet-server
    vault:
      server: ${VAULT_SERVER}
      auth:
        caBundle: 
        tlsInsecure: true
        bootstrap:
          file: /var/lib/key-keeper/bootstrap.token
        appRole:
          name: kubelet-server-${MASTER_NAME}
          path: "clusters/${CLUSTER_NAME}/approle"
          secretIDLocalPath: /var/lib/key-keeper/vault/kubelet-server/secret-id
          roleIDLocalPath: /var/lib/key-keeper/vault/kubelet-server/role-id
      resource:
        role: kubelet-server
        CAPath: "clusters/${CLUSTER_NAME}/pki/kubernetes"
      timeout: 15s
EOF

key-keeper.certs

## RUN ON EACH MASTER.
cat <<EOF >> ${BASE_K8S_PATH}/pki/vault-config
certificates:

  - name: etcd-ca
    issuerRef:
      name: etcd-ca
    isCa: true
    ca:
      exportedKey: false
      generate: false
    hostPath: "${BASE_K8S_PATH}/pki/ca"

  - name: kube-apiserver-etcd-client
    issuerRef:
      name: etcd-client
    spec:
      subject:
        commonName: "system:kube-apiserver-etcd-client"
      usage:
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ttl: 10m
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-apiserver"
    withUpdate: true

  - name: etcd-peer
    issuerRef:
      name: etcd-peer
    spec:
      subject:
        commonName: "system:etcd-peer"
      usage:
        - server auth
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ipAddresses:
        interfaces:
          - lo
          - eth*
      ttl: 10m
      hostnames:
        - localhost
        - $HOSTNAME
        - "${MASTER_NAME}.${BASE_CLUSTER_DOMAIN}"
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/etcd"
    withUpdate: true

  - name: etcd-server
    issuerRef:
      name: etcd-peer
    spec:
      subject:
        commonName: "system:etcd-server"
      usage:
        - server auth
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ipAddresses:
        static:
          - 127.0.1.1
        interfaces:
          - lo
          - eth*
      ttl: 10m
      hostnames:
        - localhost
        - $HOSTNAME
        - "${MASTER_NAME}.${BASE_CLUSTER_DOMAIN}"
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/etcd"
    withUpdate: true

  - name: front-proxy-ca
    issuerRef:
      name: front-proxy-ca
    isCa: true
    ca:
      exportedKey: false
      generate: false
    hostPath: "${BASE_K8S_PATH}/pki/ca"

  - name: front-proxy-client
    issuerRef:
      name: front-proxy-client
    spec:
      subject:
        commonName: "custom:kube-apiserver-front-proxy-client"
      usage:
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ttl: 10m
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-apiserver"
    withUpdate: true

  - name: kubernetes-ca
    issuerRef:
      name: kubernetes-ca
    isCa: true
    ca:
      exportedKey: false
      generate: false
    hostPath: "${BASE_K8S_PATH}/pki/ca"

  - name: kube-apiserver
    issuerRef:
      name: kube-apiserver
    spec:
      subject:
        commonName: "custom:kube-apiserver"
      usage:
        - server auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ipAddresses:
        static:
          - 29.64.0.1
        interfaces:
          - lo
          - eth*
        dnsLookup:
          - api.${BASE_CLUSTER_DOMAIN}
      ttl: 10m
      hostnames:
        - localhost
        - kubernetes
        - kubernetes.default
        - kubernetes.default.svc
        - kubernetes.default.svc.cluster
        - kubernetes.default.svc.cluster.local
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-apiserver"
    withUpdate: true

  - name: kube-apiserver-kubelet-client
    issuerRef:
      name: kube-apiserver-kubelet-client
    spec:
      subject:
        commonName: "custom:kube-apiserver-kubelet-client"
      usage:
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ttl: 10m
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-apiserver"
    withUpdate: true

  - name: kube-controller-manager-client
    issuerRef:
      name: kube-controller-manager-client
    spec:
      subject:
        commonName: "system:kube-controller-manager"
      usage:
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ttl: 10m
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-controller-manager"
    withUpdate: true

  - name: kube-controller-manager-server
    issuerRef:
      name: kube-controller-manager-server
    spec:
      subject:
        commonName: "custom:kube-controller-manager"
      usage:
        - server auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ipAddresses:
        interfaces:
          - lo
          - eth*
      ttl: 10m
      hostnames:
        - localhost
        - kube-controller-manager.default
        - kube-controller-manager.default.svc
        - kube-controller-manager.default.svc.cluster
        - kube-controller-manager.default.svc.cluster.local
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-controller-manager"
    withUpdate: true

  - name: kube-scheduler-client
    issuerRef:
      name: kube-scheduler-client
    spec:
      subject:
        commonName: "system:kube-scheduler"
      usage:
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ttl: 10m
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-scheduler"
    withUpdate: true

  - name: kube-scheduler-server
    issuerRef:
      name: kube-scheduler-server
    spec:
      subject:
        commonName: "custom:kube-scheduler"
      usage:
        - server auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ipAddresses:
        interfaces:
          - lo
          - eth*
      ttl: 10m
      hostnames:
        - localhost
        - kube-scheduler.default
        - kube-scheduler.default.svc
        - kube-scheduler.default.svc.cluster
        - kube-scheduler.default.svc.cluster.local
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-scheduler"
    withUpdate: true

  - name: kubeadm-client
    issuerRef:
      name: kubeadm-client
    spec:
      subject:
        commonName: "custom:kubeadm-client"
        organizationalUnit:
          - system:masters
      usage:
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ttl: 10m
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kube-apiserver"
    withUpdate: true

  - name: kubelet-client
    issuerRef:
      name: kubelet-client
    spec:
      subject:
        commonName: "system:node:${MASTER_NAME}-${CLUSTER_NAME}"
        organization:
          - system:nodes
      usage:
        - client auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ttl: 10m
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kubelet"
    withUpdate: true

  - name: kubelet-server
    issuerRef:
      name: kubelet-server
    spec:
      subject:
        commonName: "system:node:${MASTER_NAME}-${CLUSTER_NAME}"
      usage:
        - server auth
      privateKey:
        algorithm: "RSA"
        encoding: "PKCS1"
        size: 4096
      ipAddresses:
        interfaces:
          - lo
          - eth*
      ttl: 10m
      hostnames:
        - localhost
        - $HOSTNAME
        - "${MASTER_NAME}.${BASE_CLUSTER_DOMAIN}"
    renewBefore: 7m
    hostPath: "${BASE_K8S_PATH}/pki/certs/kubelet"
    withUpdate: true

secrets:
  - name: kube-apiserver-sa
    issuerRef:
      name: kube-apiserver-sa
    key: private  
    hostPath: ${BASE_K8S_PATH}/pki/certs/kube-apiserver/kube-apiserver-sa.pem

  - name: kube-apiserver-sa
    issuerRef:
      name: kube-apiserver-sa
    key: public  
    hostPath: ${BASE_K8S_PATH}/pki/certs/kube-apiserver/kube-apiserver-sa.pub
EOF

** Note that the certificates are issued with ttl=10m and renewBefore=7m, which means each certificate is reissued every 3 minutes. Such short intervals are set to demonstrate that the certificate-reissue function works correctly. (Change them to values that make sense for you.)

*** Starting from Kubernetes 1.22 (I have not checked earlier versions), all components can detect that their configuration files on the file system have changed and re-read them without a restart.

key-keeper.token

## RUN ON EACH MASTER.
mkdir -p /var/lib/key-keeper/

cat <<EOF > /var/lib/key-keeper/bootstrap.token
${VAULT_MASTER_TOKEN}
EOF

** Don't be surprised that this configuration file contains the Vault master token; as I said earlier, this is a simplified setup.

*** If you dig a bit deeper into our Vault module for Terraform, you will see that it creates temporary tokens that should be placed into the bootstrap section of the key-keeper config, one token per issuer. Example → https://github.com/fraima/kubernetes/blob/f0e4c7bc8f8d2695c419b17fec4bacc2dd7c5f18/modules/k8s-templates/cloud-init/templates/cloud-init-kubeadm-master.tftpl#L115
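
With the issuer config and the bootstrap token in place, the agent can be started; within a few minutes the certificates should appear under ${BASE_K8S_PATH}/pki (a sketch):

systemctl daemon-reload
systemctl enable --now key-keeper

journalctl -u key-keeper -f
ls -R ${BASE_K8S_PATH}/pki/ca ${BASE_K8S_PATH}/pki/certs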

Most of the information explaining why things are done this way and not otherwise is given in these articles:

K8S certificates, or how to untangle the vermicelli. Part 1

K8S certificates, or how to untangle the vermicelli. Part 2

An important point is that we no longer have to think about expiring certificates: key-keeper takes that task on itself. All that is left for us is to set up monitoring and alerts to verify that the system keeps working correctly.

Kubelet config

config.yaml

## RUN ON EACH MASTER.
mkdir -p ${BASE_K8S_PATH}/kubelet

cat <<EOF >> ${BASE_K8S_PATH}/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: "${BASE_K8S_PATH}/pki/ca/kubernetes-ca.pem"

tlsCertFile: ${BASE_K8S_PATH}/pki/certs/kubelet/kubelet-server.pem
tlsPrivateKeyFile: ${BASE_K8S_PATH}/pki/certs/kubelet/kubelet-server-key.pem

authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
  - "${SERVICE_DNS}"
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 1s
nodeStatusUpdateFrequency: 1s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: false
runtimeRequestTimeout: 0s
serverTLSBootstrap: true
shutdownGracePeriod: 15s
shutdownGracePeriodCriticalPods: 5s
staticPodPath: "${BASE_K8S_PATH}/manifests"
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
containerLogMaxSize: 50Mi
maxPods: 250
kubeAPIQPS: 50
kubeAPIBurst: 100
podPidsLimit: 4096
serializeImagePulls: false
systemReserved:
  ephemeral-storage: 1Gi
featureGates:
  APIPriorityAndFairness: true
  DownwardAPIHugePages: true
  PodSecurity: true
  CSIMigrationAWS: false
  CSIMigrationAzureFile: false
  CSIMigrationGCE: false
  CSIMigrationvSphere: false
tlsMinVersion: VersionTLS12
tlsCipherSuites:
  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
  - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
allowedUnsafeSysctls:
  - "net.core.somaxconn"
evictionSoft: 
  memory.available: 3Gi 
  nodefs.available: 25%
  nodefs.inodesFree: 15%
  imagefs.available: 30%
  imagefs.inodesFree: 25%
evictionSoftGracePeriod:  
  memory.available: 2m30s
  nodefs.available: 2m30s
  nodefs.inodesFree: 2m30s
  imagefs.available: 2m30s
  imagefs.inodesFree: 2m30s
evictionHard:
  memory.available: 2Gi
  nodefs.available: 20%
  nodefs.inodesFree: 10%
  imagefs.available: 25%
  imagefs.inodesFree: 15%
evictionPressureTransitionPeriod: 5s 
imageMinimumGCAge: 12h 
imageGCHighThresholdPercent: 55
imageGCLowThresholdPercent: 50
EOF

** clusterDNS: it is easy to get burned here by specifying an incorrect value.

*** resolvConf: on CentOS, RHEL, and AlmaLinux the kubelet may complain about this path; this is fixed with the commands:

systemctl daemon-reload
systemctl enable systemd-resolved.service
systemctl start systemd-resolved.service

Documentation describing the problem:
https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#known-issues

System configs

The base operating system configuration includes:

  1. Preparing disk space for /var/lib/etcd (not covered in this installation)

  2. Configuring sysctl

  3. Configuring modprobe

  4. Installing base packages (wget, tar)

modprobe

## RUN ON EACH MASTER.
cat <<EOF >> /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

sysctls

## RUN ON EACH MASTER.
cat <<EOF >> /etc/sysctl.d/99-network.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF

sysctl --system
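
Verify that the modules are loaded and the sysctls took effect:

lsmod | grep -E 'overlay|br_netfilter'
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables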

Kubeconfigs

So that the cluster's base components and the administrator can talk to kube-apiserver, we need to generate a kubeconfig for each of them.

** admin.conf is a kubeconfig with cluster-admin rights, used by the administrator for the initial cluster setup.

admin.conf

## RUN ON EACH MASTER.
mkdir -p ${BASE_K8S_PATH}

cat <<EOF >> ${BASE_K8S_PATH}/admin.conf
---
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ${BASE_K8S_PATH}/pki/ca/kubernetes-ca.pem
    server: https://127.0.0.1:${KUBE_APISERVER_PORT}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: kubeadm
  name: kubeadm@kubernetes
current-context: kubeadm@kubernetes
kind: Config
preferences: {}
users:
- name: kubeadm
  user:
    client-certificate: ${BASE_K8S_PATH}/pki/certs/kube-apiserver/kubeadm-client.pem
    client-key: ${BASE_K8S_PATH}/pki/certs/kube-apiserver/kubeadm-client-key.pem
EOF
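
Once kube-apiserver is up (see the static pod manifests below), this kubeconfig is used in the usual way:

export KUBECONFIG=${BASE_K8S_PATH}/admin.conf
kubectl get nodes
kubectl get --raw /healthz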

kube-scheduler

## RUN ON EACH MASTER.
mkdir -p ${BASE_K8S_PATH}/kube-scheduler/

cat <<EOF >> ${BASE_K8S_PATH}/kube-scheduler/kubeconfig
---
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ${BASE_K8S_PATH}/pki/ca/kubernetes-ca.pem
    server: https://127.0.0.1:${KUBE_APISERVER_PORT}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: kube-scheduler
  name: kube-scheduler@kubernetes
current-context: kube-scheduler@kubernetes
kind: Config
preferences: {}
users:
- name: kube-scheduler
  user:
    client-certificate: ${BASE_K8S_PATH}/pki/certs/kube-scheduler/kube-scheduler-client.pem
    client-key: ${BASE_K8S_PATH}/pki/certs/kube-scheduler/kube-scheduler-client-key.pem
EOF

kube-controller-manager

## RUN ON EACH MASTER.
mkdir -p ${BASE_K8S_PATH}/kube-controller-manager

cat <<EOF >> ${BASE_K8S_PATH}/kube-controller-manager/kubeconfig
---
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ${BASE_K8S_PATH}/pki/ca/kubernetes-ca.pem
    server: https://127.0.0.1:${KUBE_APISERVER_PORT}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: kube-controller-manager
  name: kube-controller-manager@kubernetes
current-context: kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: kube-controller-manager
  user:
    client-certificate: ${BASE_K8S_PATH}/pki/certs/kube-controller-manager/kube-controller-manager-client.pem
    client-key: ${BASE_K8S_PATH}/pki/certs/kube-controller-manager/kube-controller-manager-client-key.pem
EOF

kubelet

## RUN ON EACH MASTER.
mkdir -p ${BASE_K8S_PATH}/kubelet

cat <<EOF >> ${BASE_K8S_PATH}/kubelet/kubeconfig
---
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ${BASE_K8S_PATH}/pki/ca/kubernetes-ca.pem
    server: https://127.0.0.1:${KUBE_APISERVER_PORT}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: kubelet
  name: kubelet@kubernetes
current-context: kubelet@kubernetes
kind: Config
preferences: {}
users:
- name: kubelet
  user:
    client-certificate: ${BASE_K8S_PATH}/pki/certs/kubelet/kubelet-client.pem
    client-key: ${BASE_K8S_PATH}/pki/certs/kubelet/kubelet-client-key.pem
EOF

Static Pods

kube-apiserver

## RUN ON EACH MASTER.
export ADVERTISE_ADDRESS=$(ip route get 1.1.1.1 | grep -oP 'src \K\S+')

mkdir -p /etc/kubernetes/manifests

cat <<EOF > /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: ${ADVERTISE_ADDRESS}:${KUBE_APISERVER_PORT}
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=${ADVERTISE_ADDRESS}
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --bind-address=0.0.0.0
    - --client-ca-file=/etc/kubernetes/pki/ca/kubernetes-ca.pem
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/ca/etcd-ca.pem
    - --etcd-certfile=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-etcd-client.pem
    - --etcd-keyfile=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-etcd-client-key.pem
    - --etcd-servers=${ETCD_SERVERS}
    - --kubelet-client-certificate=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-kubelet-client.pem
    - --kubelet-client-key=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-kubelet-client-key.pem
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/certs/kube-apiserver/front-proxy-client.pem
    - --proxy-client-key-file=/etc/kubernetes/pki/certs/kube-apiserver/front-proxy-client-key.pem
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/ca/front-proxy-ca.pem
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=${KUBE_APISERVER_PORT}
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-sa.pem
    - --service-cluster-ip-range=${SERVICE_CIDR}
    - --tls-cert-file=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-key.pem
    image: k8s.gcr.io/kube-apiserver:${KUBERNETES_VERSION}
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: ${ADVERTISE_ADDRESS}
        path: /livez
        port: ${KUBE_APISERVER_PORT}
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: ${ADVERTISE_ADDRESS}
        path: /readyz
        port: ${KUBE_APISERVER_PORT}
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: ${ADVERTISE_ADDRESS}
        path: /livez
        port: ${KUBE_APISERVER_PORT}
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /var/log/kubernetes/audit/
      name: k8s-audit
    - mountPath: /etc/kubernetes/pki/ca
      name: k8s-ca
      readOnly: true
    - mountPath: /etc/kubernetes/pki/certs
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/kube-apiserver
      name: k8s-kube-apiserver-configs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /var/log/kubernetes/audit/
      type: DirectoryOrCreate
    name: k8s-audit
  - hostPath:
      path: /etc/kubernetes/pki/ca
      type: DirectoryOrCreate
    name: k8s-ca
  - hostPath:
      path: /etc/kubernetes/pki/certs
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/kube-apiserver
      type: DirectoryOrCreate
    name: k8s-kube-apiserver-configs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
EOF

** Note that computing ADVERTISE_ADDRESS requires internet access; if there is none, just set it to the node's IP address.

kube-controller-manager

## RUN ON EACH MASTER.
cat <<EOF > /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager/kubeconfig
    - --authorization-always-allow-paths=/healthz,/metrics
    - --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager/kubeconfig
    - --bind-address=${ADVERTISE_ADDRESS}
    - --client-ca-file=/etc/kubernetes/pki/ca/kubernetes-ca.pem
    - --cluster-cidr=${SERVICE_CIDR}
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca/kubernetes-ca.pem
    - --cluster-signing-key-file=
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/kube-controller-manager/kubeconfig
    - --leader-elect=true
    - --requestheader-client-ca-file=/etc/kubernetes/pki/ca/front-proxy-ca.pem
    - --root-ca-file=/etc/kubernetes/pki/ca/kubernetes-ca.pem
    - --secure-port=${KUBE_CONTROLLER_MANAGER_PORT}
    - --service-account-private-key-file=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-sa.pem
    - --tls-cert-file=/etc/kubernetes/pki/certs/kube-controller-manager/kube-controller-manager-server.pem
    - --tls-private-key-file=/etc/kubernetes/pki/certs/kube-controller-manager/kube-controller-manager-server-key.pem
    - --use-service-account-credentials=true
    image: k8s.gcr.io/kube-controller-manager:${KUBERNETES_VERSION}
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: ${ADVERTISE_ADDRESS}
        path: /healthz
        port: ${KUBE_CONTROLLER_MANAGER_PORT}
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: ${ADVERTISE_ADDRESS}
        path: /healthz
        port: ${KUBE_CONTROLLER_MANAGER_PORT}
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/kubernetes/pki/ca
      name: k8s-ca
      readOnly: true
    - mountPath: /etc/kubernetes/pki/certs
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/kube-controller-manager
      name: k8s-kube-controller-manager-configs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/kubernetes/pki/ca
      type: DirectoryOrCreate
    name: k8s-ca
  - hostPath:
      path: /etc/kubernetes/pki/certs
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/kube-controller-manager
      type: DirectoryOrCreate
    name: k8s-kube-controller-manager-configs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
EOF

kube-scheduler

## RUN ON EACH MASTER.
cat <<EOF > /etc/kubernetes/manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/kube-scheduler/kubeconfig
    - --authorization-kubeconfig=/etc/kubernetes/kube-scheduler/kubeconfig
    - --bind-address=${ADVERTISE_ADDRESS}
    - --kubeconfig=/etc/kubernetes/kube-scheduler/kubeconfig
    - --leader-elect=true
    - --secure-port=${KUBE_SCHEDULER_PORT}
    - --tls-cert-file=/etc/kubernetes/pki/certs/kube-scheduler/kube-scheduler-server.pem
    - --tls-private-key-file=/etc/kubernetes/pki/certs/kube-scheduler/kube-scheduler-server-key.pem
    image: k8s.gcr.io/kube-scheduler:${KUBERNETES_VERSION}
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: ${ADVERTISE_ADDRESS}
        path: /healthz
        port: ${KUBE_SCHEDULER_PORT}
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: ${ADVERTISE_ADDRESS}
        path: /healthz
        port: ${KUBE_SCHEDULER_PORT}
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/pki/ca
      name: k8s-ca
      readOnly: true
    - mountPath: /etc/kubernetes/pki/certs
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/kube-scheduler
      name: k8s-kube-scheduler-configs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/ca
      type: DirectoryOrCreate
    name: k8s-ca
  - hostPath:
      path: /etc/kubernetes/pki/certs
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/kube-scheduler
      type: DirectoryOrCreate
    name: k8s-kube-scheduler-configs
status: {}
EOF

etcd

## RUN ON EACH MASTER.
cat <<EOF > /etc/kubernetes/manifests/etcd.yaml
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - name: etcd
    command:
      - etcd
    args:
      - --name=${MASTER_NAME}.${BASE_CLUSTER_DOMAIN}
      - --initial-cluster=${ETCD_INITIAL_CLUSTER}
      - --initial-advertise-peer-urls=https://${MASTER_NAME}.${BASE_CLUSTER_DOMAIN}:${ETCD_PEER_PORT}
      - --advertise-client-urls=https://${MASTER_NAME}.${BASE_CLUSTER_DOMAIN}:${ETCD_SERVER_PORT}
      - --peer-trusted-ca-file=/etc/kubernetes/pki/ca/etcd-ca.pem
      - --trusted-ca-file=/etc/kubernetes/pki/ca/etcd-ca.pem
      - --peer-cert-file=/etc/kubernetes/pki/certs/etcd/etcd-peer.pem
      - --peer-key-file=/etc/kubernetes/pki/certs/etcd/etcd-peer-key.pem
      - --cert-file=/etc/kubernetes/pki/certs/etcd/etcd-server.pem
      - --key-file=/etc/kubernetes/pki/certs/etcd/etcd-server-key.pem
      - --listen-client-urls=https://0.0.0.0:${ETCD_SERVER_PORT}
      - --listen-peer-urls=https://0.0.0.0:${ETCD_PEER_PORT}
      - --data-dir=/var/lib/etcd
    # NOTE: the source text is cut off at this point; the rest of the manifest
    # below is a minimal plausible completion modeled on the manifests above.
    image: k8s.gcr.io/etcd:${ETCD_VERSION}
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/ca
      name: k8s-ca
      readOnly: true
    - mountPath: /etc/kubernetes/pki/certs
      name: k8s-certs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/ca
      type: DirectoryOrCreate
    name: k8s-ca
  - hostPath:
      path: /etc/kubernetes/pki/certs
      type: DirectoryOrCreate
    name: k8s-certs
status: {}
EOF
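
Once all four manifests are in place on each master, start kubelet and check the stack from the bottom up (a sketch; paths and ports come from the variables defined earlier):

systemctl enable --now kubelet

# Containers that kubelet started from staticPodPath:
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps

# etcd health, reusing the etcd client certificate issued for kube-apiserver:
etcdctl \
  --endpoints=https://${MASTER_NAME}.${BASE_CLUSTER_DOMAIN}:${ETCD_SERVER_PORT} \
  --cacert=/etc/kubernetes/pki/ca/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-etcd-client.pem \
  --key=/etc/kubernetes/pki/certs/kube-apiserver/kube-apiserver-etcd-client-key.pem \
  endpoint health

# kube-apiserver liveness (default RBAC allows anonymous access to /livez):
curl -k "https://127.0.0.1:${KUBE_APISERVER_PORT}/livez?verbose"

# And finally the cluster itself:
kubectl --kubeconfig ${BASE_K8S_PATH}/admin.conf get nodes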