Ubuntu 18.04 with Rancher (RKE) for NodeJS, MongoDB, Redis, ELK, VueJS and MQTT applications – Part 1

https://rancher.com/img/brand-guidelines/assets/logos/png/color/rancher-logo-horiz-color.png

Introduction

Deploy Rancher with RKE in a VM running Ubuntu 18.04,

then deploy two web applications.

Step 1 - Installing the OS

Download the Ubuntu 18.04 image and perform a minimal installation.

The VM has two disks:

  • 20 GB: for the OS
  • 40 GB: for the containers

8 GB of memory and 4 cores.

The IP address must be static and, optionally, reachable from the internet (expose ports 80 and 443, or use a DMZ).

Preparation

The user used here is prod_user.
You must use the user you configured during installation.

To install Rancher, configure the system as follows:

  • Configure a static IP address
nano /etc/netplan/50-cloud-init.yaml

Example configuration:

# This file is generated from information provided by
# the datasource.  Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    ethernets:
        ens3:
            addresses:
            - 192.168.2.252/24
            gateway4: 192.168.2.1
            nameservers:
                addresses:
                - 192.168.2.1
    version: 2
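
After editing the file, apply the new configuration. A short sketch, assuming netplan (the default on Ubuntu 18.04); netplan try rolls the change back automatically if you lose connectivity:

# Test the configuration with automatic rollback, then apply it
sudo netplan try
sudo netplan apply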
  • Install the following packages
apt update
apt upgrade -y
apt install -y nano wget git
  • Create the Docker group
sudo groupadd docker
sudo useradd prod_user   # only needed if the user does not already exist
sudo usermod -aG docker prod_user
  • Install Docker (see the sketch below)
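
A minimal install sketch for Ubuntu 18.04, following Docker's standard apt-repository procedure (any recent Docker version supported by RKE should work):

sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
# Add Docker's official GPG key and the stable repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io

Then open the Kubernetes API port (6443):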
iptables -A INPUT -p tcp --dport 6443 -j ACCEPT

Install iptables-persistent so the rule survives reboots (https://askubuntu.com/questions/1052919/iptables-reload-restart-on-ubuntu-18-04):

apt install -y iptables-persistent netfilter-persistent

netfilter-persistent save
netfilter-persistent start

iptables-save  > /etc/iptables/rules.v4
ip6tables-save > /etc/iptables/rules.v6
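
A quick check that the rule is active and was persisted (assuming the default filter table):

# Confirm the rule is loaded and present in the saved rules file
sudo iptables -L INPUT -n | grep 6443
grep 6443 /etc/iptables/rules.v4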

Step 1.1 - Configure DNS to map the FQDN to the IP address

On your remote machine (the one used to configure Rancher),

you must edit your hosts file:

192.168.2.252 public.webux.lab
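
One way to append the entry without opening an editor (the /etc/hosts path applies to Linux and macOS):

echo "192.168.2.252 public.webux.lab" | sudo tee -a /etc/hosts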

Step 2 - Install Rancher with RKE

Source: https://rancher.com/docs/rke/latest/en/

  1. Download the RKE binary
  2. Download the helm binary
  3. Download the kubectl binary
  4. Generate the SSH key
  5. Create the configuration file
  6. Deploy the cluster with RKE
  7. Install cert-manager
  8. Install Rancher

Step 2.1 - The RKE binary

Follow the instructions for your OS.

Source: https://rancher.com/docs/rke/latest/en/installation/

For macOS:

brew install rke
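
On Linux, a manual install sketch (v1.0.4 matches the version used in this article; adjust for newer releases):

# Download the release binary, make it executable, and put it on the PATH
curl -LO https://github.com/rancher/rke/releases/download/v1.0.4/rke_linux-amd64
chmod +x rke_linux-amd64
sudo mv rke_linux-amd64 /usr/local/bin/rke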

After installation, verify the binary:

rke
NAME:
   rke - Rancher Kubernetes Engine, an extremely simple, lightning fast Kubernetes installer that works everywhere

USAGE:
   rke [global options] command [command options] [arguments...]
   
VERSION:
   v1.0.4
   
AUTHOR(S):
   Rancher Labs, Inc. 
   
COMMANDS:
     up       Bring the cluster up
     remove   Teardown the cluster and clean cluster nodes
     version  Show cluster Kubernetes version
     config   Setup cluster configuration
     etcd     etcd snapshot save/restore operations in k8s cluster
     cert     Certificates management for RKE cluster
     encrypt  Manage cluster encryption provider keys
     help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug, -d    Debug logging
   --quiet, -q    Quiet mode, disables logging and only critical output will be printed
   --help, -h     show help
   --version, -v  print the version

Step 2.2 - The helm binary

Source: https://helm.sh

Follow the instructions for your OS.

For macOS:

brew install helm
helm version
version.BuildInfo{Version:"v3.1.1", GitCommit:"afe70585407b420d0097d07b21c47dc511525ac8", GitTreeState:"clean", GoVersion:"go1.13.8"}
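
On Linux, helm's official install script is an alternative (it downloads and runs a script, so review it first):

# Fetch and run the helm 3 installer script
curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash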

Step 2.3 - The kubectl binary

Source: https://kubernetes.io/docs/tasks/tools/install-kubectl/

Follow the instructions for your OS.

For macOS:

brew install kubectl
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.8", GitCommit:"211047e9a1922595eaa3a1127ed365e9299a6c23", GitTreeState:"clean", BuildDate:"2019-10-15T12:11:03Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
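
On Linux, a manual install sketch (v1.17.2 is an assumption here, chosen to match the cluster version deployed below):

# Download the client binary, make it executable, and put it on the PATH
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.2/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl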

Step 2.4 - The SSH key

To generate a public/private key pair:

ssh-keygen
ssh-copy-id -i ~/.ssh/rke_public prod_user@192.168.2.252

Then choose a location to save the private key and do NOT set a passphrase (RKE cannot use an encrypted private key without ssh-agent; see error #2 below).
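
A non-interactive equivalent of the two commands above (-N "" creates the key without a passphrase; the key name matches the one used throughout this article):

# Generate a passphrase-less key pair and push the public key to the node
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/rke_public
ssh-copy-id -i ~/.ssh/rke_public.pub prod_user@192.168.2.252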

Step 2.5 - The configuration file

Source and references: https://rancher.com/docs/rke/latest/en/config-options/

We will use the default example, since my hand-written configuration did not give the desired result (see error #2 below): https://rancher.com/docs/rke/latest/en/example-yamls/#minimal-cluster-yml-example

I used this command:

rke config --name cluster.yml

Then answered the prompts as follows:

[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: ~/.ssh/rke_public
[+] Number of Hosts [1]: 
[+] SSH Address of host (1) [none]: 192.168.2.252
[+] SSH Port of host (1) [22]: 
[+] SSH Private Key Path of host (192.168.2.252) [none]: 
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (192.168.2.252) [none]: 
[-] You have entered empty SSH key, defaulting to cluster level SSH key: ~/.ssh/rke_public
[+] SSH User of host (192.168.2.252) [ubuntu]: prod_user
[+] Is host (192.168.2.252) a Control Plane host (y/n)? [y]: 
[+] Is host (192.168.2.252) a Worker host (y/n)? [n]: y
[+] Is host (192.168.2.252) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (192.168.2.252) [none]: 
[+] Internal IP of host (192.168.2.252) [none]: 
[+] Docker socket path on host (192.168.2.252) [/var/run/docker.sock]: 
[+] Network Plugin Type (flannel, calico, weave, canal) [canal]: 
[+] Authentication Strategy [x509]: 
[+] Authorization Mode (rbac, none) [rbac]: 
[+] Kubernetes Docker image [rancher/hyperkube:v1.17.2-rancher1]: 
[+] Cluster domain [cluster.local]: webux.lab
[+] Service Cluster IP Range [10.43.0.0/16]: 
[+] Enable PodSecurityPolicy [n]: 
[+] Cluster Network CIDR [10.42.0.0/16]: 
[+] Cluster DNS Service IP [10.43.0.10]: 
[+] Add addon manifest URLs or YAML files [no]: 

Here is what will be deployed. After the file has been generated, a few more modifications must be made to it (in this case, the node label and the dns section at the end).

Don't forget to adapt the values to your own setup.

# If you intened to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: 192.168.2.252
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: prod_user
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/rke_public
  ssh_cert: ""
  ssh_cert_path: ""
  labels:
    app: dns    # matches dns.node_selector at the end of this file
  taints: []
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    always_pull_images: false
    secrets_encryption_config: null
    audit_log: null
    admission_configuration: null
    event_rate_limit: null
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: webux.lab
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: canal
  options: {}
  mtu: 1500
  node_selector: {}
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.4.3-rancher1
  alpine: rancher/rke-tools:v0.1.52
  nginx_proxy: rancher/rke-tools:v0.1.52
  cert_downloader: rancher/rke-tools:v0.1.52
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.52
  kubedns: rancher/k8s-dns-kube-dns:1.15.0
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.0
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.0
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  coredns: rancher/coredns-coredns:1.6.5
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  kubernetes: rancher/hyperkube:v1.17.2-rancher1
  flannel: rancher/coreos-flannel:v0.11.0-rancher1
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher5
  calico_node: rancher/calico-node:v3.10.2
  calico_cni: rancher/calico-cni:v3.10.2
  calico_controllers: rancher/calico-kube-controllers:v3.10.2
  calico_ctl: rancher/calico-ctl:v2.0.0
  calico_flexvol: rancher/calico-pod2daemon-flexvol:v3.10.2
  canal_node: rancher/calico-node:v3.10.2
  canal_cni: rancher/calico-cni:v3.10.2
  canal_flannel: rancher/coreos-flannel:v0.11.0
  canal_flexvol: rancher/calico-pod2daemon-flexvol:v3.10.2
  weave_node: weaveworks/weave-kube:2.5.2
  weave_cni: weaveworks/weave-npc:2.5.2
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:nginx-0.25.1-rancher1
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
  metrics_server: rancher/metrics-server:v0.3.6
  windows_pod_infra_container: rancher/kubelet-pause:v0.1.3
ssh_key_path: ~/.ssh/rke_public
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
  node_selector: {}
restore:
  restore: false
  snapshot_name: ""
dns: 
  provider: kube-dns
  node_selector:
    app: dns

Step 2.6 - Launch the deployment

From the same directory as the YAML file, run:

rke up --config cluster.yml 

On the first attempt, the network configuration failed:

FATA[0262] Failed to get job complete status for job rke-network-plugin-deploy-job in namespace kube-system 

I simply re-ran the deployment a second time and everything completed successfully (I am using a VM, so it occasionally misbehaves).

To validate that everything worked:

kubectl --kubeconfig kube_config_cluster.yml get pods 
NAME       READY   STATUS    RESTARTS   AGE
my-nginx   1/1     Running   0          9m
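
It is also worth confirming that the node itself registered and is Ready (same kubeconfig):

kubectl --kubeconfig kube_config_cluster.yml get nodes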

Step 2.7 - Install cert-manager for Rancher

This step is required in order to install the Rancher UI.

Source: https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm

Create the namespace for cert-manager

kubectl create namespace cert-manager --kubeconfig kube_config_cluster.yml 

Response:

namespace/cert-manager created

Install the cert-manager CustomResourceDefinitions

kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/v0.13.1/deploy/manifests/00-crds.yaml --kubeconfig kube_config_cluster.yml 

Response:

customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created

Add the jetstack repository

helm repo add jetstack https://charts.jetstack.io --kubeconfig kube_config_cluster.yml 

Response:

"jetstack" has been added to your repositories

Update the repositories

helm repo update --kubeconfig kube_config_cluster.yml 

Response:

Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "rancher-stable" chart repository
...Successfully got an update from the "rancher-latest" chart repository
...Successfully got an update from the "jetstack" chart repository
Update Complete. ⎈ Happy Helming!⎈ 

Install the cert-manager helm chart

helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v0.13.1 \
  --kubeconfig kube_config_cluster.yml

Response:

NAME: cert-manager
LAST DEPLOYED: Tue Mar  3 22:07:27 2020
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager has been deployed successfully!

In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).

More information on the different types of issuers and how to configure them
can be found in our documentation:

https://docs.cert-manager.io/en/latest/reference/issuers.html

For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:

https://docs.cert-manager.io/en/latest/reference/ingress-shim.html

Validation

kubectl get pods --namespace cert-manager --kubeconfig kube_config_cluster.yml 

Response:

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7cb745cb4f-86w8m              1/1     Running   0          54s
cert-manager-cainjector-778cc6bd68-mn49d   1/1     Running   0          54s
cert-manager-webhook-69894d5869-gdpz5      1/1     Running   0          54s

Step 3 - Install Rancher

Source: https://rancher.com/docs/rancher/v2.x/en/installation/k8s-install/helm-rancher/#optional-install-cert-manager

Add the rancher helm repository (latest channel)

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest --kubeconfig kube_config_cluster.yml 

Create the namespace for the Rancher UI

kubectl create namespace cattle-system --kubeconfig kube_config_cluster.yml 

Deploy the Rancher helm chart

Don't forget to replace the FQDN with your own.

helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=public.webux.lab \
  --kubeconfig kube_config_cluster.yml
NAME: rancher
LAST DEPLOYED: Tue Mar  3 22:09:03 2020
NAMESPACE: cattle-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Rancher Server has been installed.

NOTE: Rancher may take several minutes to fully initialize. Please standby while Certificates are being issued and Ingress comes up.

Check out our docs at https://rancher.com/docs/rancher/v2.x/en/

Browse to https://public.webux.lab

Happy Containering!

After installation, browse to the URL of your machine:

https://public.webux.lab

Choose a strong password and configure the DNS as described earlier.

To access Rancher via the FQDN, you must configure a DNS record or add the IP address / domain name to your hosts files.
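
To watch the Rancher deployment come up before opening the browser (a standard rollout check):

kubectl -n cattle-system rollout status deploy/rancher --kubeconfig kube_config_cluster.yml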

Conclusion

Deploying Kubernetes with RKE is fairly straightforward and makes it possible to quickly deploy applications with all the flexibility of containers.

Next up (deploying an application on Rancher)


In the second part, we will deploy the first application:

MongoDB with NodeJS and VueJS

  • RestAPI
  • Socket.IO

For logging, the ELK stack is used.

And Redis is used for load balancing.

Errors encountered during deployment

#1

Initially I was deploying on CentOS 8.1. After several errors, such as the iptables configuration not being applied correctly and the containers losing internet access, I redid everything with Ubuntu 18.04.

During the Docker installation, the following error occurred on CentOS 8.1:

Error: 
 Problem: package docker-ce-3:19.03.6-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package containerd.io-1.2.10-3.2.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.el7.x86_64 is excluded
  - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

To work around the error, add --nobest:

sudo yum install -y docker-ce docker-ce-cli containerd.io --nobest

#2

rke up --config public_cluster.yml
INFO[0000] Running RKE version: v1.0.4                  
INFO[0000] Initiating Kubernetes cluster                
INFO[0000] [dialer] Setup tunnel for host [192.168.2.252] 
WARN[0000] Failed to set up SSH tunneling for host [192.168.2.252]: Can't retrieve Docker Info: error during connect: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.24/info: Unable to access node with address [192.168.2.252:22] using SSH. Using encrypted private keys is only supported using ssh-agent. Please configure the option `ssh_agent_auth: true` in the configuration file or use --ssh-agent-auth as a parameter when running RKE. This will use the `SSH_AUTH_SOCK` environment variable. Error: Error configuring SSH: ssh: cannot decode encrypted private keys 
WARN[0000] Removing host [192.168.2.252] from node lists 
WARN[0000] [state] can't fetch legacy cluster state from Kubernetes: Cluster must have at least one etcd plane host: failed to connect to the following etcd host(s) [192.168.2.252] 
INFO[0000] [certificates] Generating CA kubernetes certificates 
INFO[0000] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates 
INFO[0000] [certificates] Generating Kubernetes API server certificates 
INFO[0000] [certificates] Generating Service account token key 
INFO[0000] [certificates] Generating Kube Controller certificates 
INFO[0000] [certificates] Generating Kube Scheduler certificates 
INFO[0000] [certificates] Generating Kube Proxy certificates 
INFO[0001] [certificates] Generating Node certificate   
INFO[0001] [certificates] Generating admin certificates and kubeconfig 
INFO[0001] [certificates] Generating Kubernetes API server proxy client certificates 
INFO[0001] Successfully Deployed state file at [./public_cluster.rkestate] 
INFO[0001] Building Kubernetes cluster                  
FATA[0001] Cluster must have at least one etcd plane host: please specify one or more etcd in cluster config

Here is the configuration that caused this error:

nodes:
    - address: 1.2.3.4
      user: prod_user
      ssh_key_path: /Users/tgingras/.ssh/rke_public
      role:
        - controlplane
        - etcd
        - worker

cluster_name: studiowebux_prod

# Specify network plugin-in (canal, calico, flannel, weave, or none)
network:
    plugin: canal

# Specify DNS provider (coredns or kube-dns)
dns:
    provider: coredns

ingress:
    provider: nginx
    node_selector:
      app: ingress
      
addons: |-
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: my-nginx
      namespace: default
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80

Solution:

In the end, I used the configuration generator instead.

# If you intened to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: 192.168.2.252
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: prod_user
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/rke_public
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    always_pull_images: false
    secrets_encryption_config: null
    audit_log: null
    admission_configuration: null
    event_rate_limit: null
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: webuxlab.com
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: canal
  options: {}
  mtu: 0
  node_selector: {}
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.4.3-rancher1
  alpine: rancher/rke-tools:v0.1.52
  nginx_proxy: rancher/rke-tools:v0.1.52
  cert_downloader: rancher/rke-tools:v0.1.52
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.52
  kubedns: rancher/k8s-dns-kube-dns:1.15.0
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.0
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.0
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  coredns: rancher/coredns-coredns:1.6.5
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  kubernetes: rancher/hyperkube:v1.17.2-rancher1
  flannel: rancher/coreos-flannel:v0.11.0-rancher1
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher5
  calico_node: rancher/calico-node:v3.10.2
  calico_cni: rancher/calico-cni:v3.10.2
  calico_controllers: rancher/calico-kube-controllers:v3.10.2
  calico_ctl: rancher/calico-ctl:v2.0.0
  calico_flexvol: rancher/calico-pod2daemon-flexvol:v3.10.2
  canal_node: rancher/calico-node:v3.10.2
  canal_cni: rancher/calico-cni:v3.10.2
  canal_flannel: rancher/coreos-flannel:v0.11.0
  canal_flexvol: rancher/calico-pod2daemon-flexvol:v3.10.2
  weave_node: weaveworks/weave-kube:2.5.2
  weave_cni: weaveworks/weave-npc:2.5.2
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:nginx-0.25.1-rancher1
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
  metrics_server: rancher/metrics-server:v0.3.6
  windows_pod_infra_container: rancher/kubelet-pause:v0.1.3
ssh_key_path: ~/.ssh/rke_public
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
  node_selector: {}
restore:
  restore: false
  snapshot_name: ""
dns: null

#3

INFO[0000] Running RKE version: v1.0.4                  
INFO[0000] Initiating Kubernetes cluster                
INFO[0000] [certificates] Generating admin certificates and kubeconfig 
INFO[0000] Successfully Deployed state file at [./public_cluster.rkestate] 
INFO[0000] Building Kubernetes cluster                  
INFO[0000] [dialer] Setup tunnel for host [192.168.2.252] 
INFO[0000] [network] Deploying port listener containers 
INFO[0000] Pulling image [rancher/rke-tools:v0.1.9] on host [192.168.2.252], try #1 
INFO[0015] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0016] Starting container [rke-etcd-port-listener] on host [192.168.2.252], try #1 
INFO[0019] [network] Successfully started [rke-etcd-port-listener] container on host [192.168.2.252] 
INFO[0019] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0020] Starting container [rke-cp-port-listener] on host [192.168.2.252], try #1 
INFO[0021] [network] Successfully started [rke-cp-port-listener] container on host [192.168.2.252] 
INFO[0021] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0022] Starting container [rke-worker-port-listener] on host [192.168.2.252], try #1 
INFO[0024] [network] Successfully started [rke-worker-port-listener] container on host [192.168.2.252] 
INFO[0024] [network] Port listener containers deployed successfully 
INFO[0024] [network] Running control plane -> etcd port checks 
INFO[0024] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0025] Starting container [rke-port-checker] on host [192.168.2.252], try #1 
INFO[0026] [network] Successfully started [rke-port-checker] container on host [192.168.2.252] 
INFO[0026] Removing container [rke-port-checker] on host [192.168.2.252], try #1 
INFO[0027] [network] Running control plane -> worker port checks 
INFO[0027] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0027] Starting container [rke-port-checker] on host [192.168.2.252], try #1 
INFO[0028] [network] Successfully started [rke-port-checker] container on host [192.168.2.252] 
INFO[0029] Removing container [rke-port-checker] on host [192.168.2.252], try #1 
INFO[0029] [network] Running workers -> control plane port checks 
INFO[0029] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0030] Starting container [rke-port-checker] on host [192.168.2.252], try #1 
INFO[0031] [network] Successfully started [rke-port-checker] container on host [192.168.2.252] 
INFO[0031] Removing container [rke-port-checker] on host [192.168.2.252], try #1 
INFO[0031] [network] Checking KubeAPI port Control Plane hosts 
INFO[0031] [network] Removing port listener containers  
INFO[0031] Removing container [rke-etcd-port-listener] on host [192.168.2.252], try #1 
INFO[0032] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.2.252] 
INFO[0032] Removing container [rke-cp-port-listener] on host [192.168.2.252], try #1 
INFO[0033] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.2.252] 
INFO[0033] Removing container [rke-worker-port-listener] on host [192.168.2.252], try #1 
INFO[0034] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.2.252] 
INFO[0034] [network] Port listener containers removed successfully 
INFO[0034] [certificates] Deploying kubernetes certificates to Cluster nodes 
INFO[0034] Checking if container [cert-deployer] is running on host [192.168.2.252], try #1 
INFO[0034] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0034] Starting container [cert-deployer] on host [192.168.2.252], try #1 
INFO[0035] Checking if container [cert-deployer] is running on host [192.168.2.252], try #1 
INFO[0040] Checking if container [cert-deployer] is running on host [192.168.2.252], try #1 
INFO[0040] Removing container [cert-deployer] on host [192.168.2.252], try #1 
INFO[0040] [reconcile] Rebuilding and updating local kube config 
INFO[0040] Successfully Deployed local admin kubeconfig at [./kube_config_public_cluster.yml] 
INFO[0040] [certificates] Successfully deployed kubernetes certificates to Cluster nodes 
INFO[0040] [reconcile] Reconciling cluster state        
INFO[0040] [reconcile] This is newly generated cluster  
INFO[0040] Pre-pulling kubernetes images                
INFO[0040] Pulling image [rancher/hyperkube:v1.10.3-rancher2] on host [192.168.2.252], try #1 
INFO[0112] Image [rancher/hyperkube:v1.10.3-rancher2] exists on host [192.168.2.252] 
INFO[0112] Kubernetes images pulled successfully        
INFO[0112] [etcd] Building up etcd plane..              
INFO[0112] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0114] Starting container [etcd-fix-perm] on host [192.168.2.252], try #1 
INFO[0118] Successfully started [etcd-fix-perm] container on host [192.168.2.252] 
INFO[0118] Waiting for [etcd-fix-perm] container to exit on host [192.168.2.252] 
INFO[0118] Waiting for [etcd-fix-perm] container to exit on host [192.168.2.252] 
INFO[0118] Container [etcd-fix-perm] is still running on host [192.168.2.252] 
INFO[0119] Waiting for [etcd-fix-perm] container to exit on host [192.168.2.252] 
INFO[0119] Removing container [etcd-fix-perm] on host [192.168.2.252], try #1 
INFO[0119] [remove/etcd-fix-perm] Successfully removed container on host [192.168.2.252] 
INFO[0119] Pulling image [rancher/coreos-etcd:v3.1.12] on host [192.168.2.252], try #1 
INFO[0125] Image [rancher/coreos-etcd:v3.1.12] exists on host [192.168.2.252] 
INFO[0125] Starting container [etcd] on host [192.168.2.252], try #1 
INFO[0126] [etcd] Successfully started [etcd] container on host [192.168.2.252] 
INFO[0126] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [192.168.2.252] 
INFO[0126] Pulling image [rancher/rke-tools:v0.1.52] on host [192.168.2.252], try #1 
INFO[0142] Image [rancher/rke-tools:v0.1.52] exists on host [192.168.2.252] 
INFO[0144] Starting container [etcd-rolling-snapshots] on host [192.168.2.252], try #1 
INFO[0145] [etcd] Successfully started [etcd-rolling-snapshots] container on host [192.168.2.252] 
INFO[0150] Image [rancher/rke-tools:v0.1.52] exists on host [192.168.2.252] 
INFO[0152] Starting container [rke-bundle-cert] on host [192.168.2.252], try #1 
INFO[0153] [certificates] Successfully started [rke-bundle-cert] container on host [192.168.2.252] 
INFO[0153] Waiting for [rke-bundle-cert] container to exit on host [192.168.2.252] 
INFO[0153] Container [rke-bundle-cert] is still running on host [192.168.2.252] 
INFO[0154] Waiting for [rke-bundle-cert] container to exit on host [192.168.2.252] 
INFO[0154] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [192.168.2.252] 
INFO[0154] Removing container [rke-bundle-cert] on host [192.168.2.252], try #1 
INFO[0154] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0155] Starting container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0157] [etcd] Successfully started [rke-log-linker] container on host [192.168.2.252] 
INFO[0158] Removing container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0158] [remove/rke-log-linker] Successfully removed container on host [192.168.2.252] 
INFO[0158] [etcd] Successfully started etcd plane.. Checking etcd cluster health 
INFO[0160] [controlplane] Building up Controller Plane.. 
INFO[0160] Checking if container [service-sidekick] is running on host [192.168.2.252], try #1 
INFO[0160] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0161] Image [rancher/hyperkube:v1.10.3-rancher2] exists on host [192.168.2.252] 
INFO[0161] Starting container [kube-apiserver] on host [192.168.2.252], try #1 
INFO[0162] [controlplane] Successfully started [kube-apiserver] container on host [192.168.2.252] 
INFO[0162] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.2.252] 
INFO[0188] [healthcheck] service [kube-apiserver] on host [192.168.2.252] is healthy 
INFO[0189] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0191] Starting container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0199] [controlplane] Successfully started [rke-log-linker] container on host [192.168.2.252] 
INFO[0200] Removing container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0202] [remove/rke-log-linker] Successfully removed container on host [192.168.2.252] 
INFO[0202] Image [rancher/hyperkube:v1.10.3-rancher2] exists on host [192.168.2.252] 
INFO[0203] Starting container [kube-controller-manager] on host [192.168.2.252], try #1 
INFO[0204] [controlplane] Successfully started [kube-controller-manager] container on host [192.168.2.252] 
INFO[0204] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.2.252] 
INFO[0216] [healthcheck] service [kube-controller-manager] on host [192.168.2.252] is healthy 
INFO[0217] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0220] Starting container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0248] [controlplane] Successfully started [rke-log-linker] container on host [192.168.2.252] 
INFO[0249] Removing container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0262] [remove/rke-log-linker] Successfully removed container on host [192.168.2.252] 
INFO[0263] Image [rancher/hyperkube:v1.10.3-rancher2] exists on host [192.168.2.252] 
INFO[0265] Starting container [kube-scheduler] on host [192.168.2.252], try #1 
INFO[0269] [controlplane] Successfully started [kube-scheduler] container on host [192.168.2.252] 
INFO[0269] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.2.252] 
INFO[0277] [healthcheck] service [kube-scheduler] on host [192.168.2.252] is healthy 
INFO[0278] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0280] Starting container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0294] [controlplane] Successfully started [rke-log-linker] container on host [192.168.2.252] 
INFO[0295] Removing container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0303] [remove/rke-log-linker] Successfully removed container on host [192.168.2.252] 
INFO[0303] [controlplane] Successfully started Controller Plane.. 
INFO[0303] [authz] Creating rke-job-deployer ServiceAccount 
FATA[0328] Failed to apply the ServiceAccount needed for job execution: Post https://192.168.2.252:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=30s: dial tcp 192.168.2.252:6443: connect: connection refused 

Make sure the firewall is configured before launching the deployment. With firewalld (the equivalent of the iptables rule from the preparation section):

firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --reload

#4

INFO[0000] Running RKE version: v1.0.4                  
INFO[0000] Initiating Kubernetes cluster                
INFO[0000] [certificates] Generating admin certificates and kubeconfig 
INFO[0000] Successfully Deployed state file at [./public_cluster.rkestate] 
INFO[0000] Building Kubernetes cluster                  
INFO[0000] [dialer] Setup tunnel for host [192.168.2.252] 
INFO[0003] [network] Deploying port listener containers 
INFO[0003] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0005] Starting container [rke-etcd-port-listener] on host [192.168.2.252], try #1 
WARN[0010] Can't start Docker container [rke-etcd-port-listener] on host [192.168.2.252]: Error response from daemon: driver failed programming external connectivity on endpoint rke-etcd-port-listener (c13afab201e9e665c747145fb2ed120f34a8e9d767105bb15a1275c0af77cdd3): Error starting userland proxy: listen tcp 0.0.0.0:2380: bind: address already in use 
INFO[0010] Starting container [rke-etcd-port-listener] on host [192.168.2.252], try #2 
WARN[0011] Can't start Docker container [rke-etcd-port-listener] on host [192.168.2.252]: Error response from daemon: driver failed programming external connectivity on endpoint rke-etcd-port-listener (7466fb48e5c0f85e0b30f7d3c4f93d3656669a7ed92baf577b47ac85db9ad52c): Error starting userland proxy: listen tcp 0.0.0.0:2380: bind: address already in use 
INFO[0011] Starting container [rke-etcd-port-listener] on host [192.168.2.252], try #3 
WARN[0012] Can't start Docker container [rke-etcd-port-listener] on host [192.168.2.252]: Error response from daemon: driver failed programming external connectivity on endpoint rke-etcd-port-listener (a4f461cd656714ad1e598a76f918f72210e0776fa93bab42678f48d75c379762): Error starting userland proxy: listen tcp 0.0.0.0:2380: bind: address already in use 
INFO[0012] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0013] Starting container [rke-cp-port-listener] on host [192.168.2.252], try #1 
WARN[0014] Can't start Docker container [rke-cp-port-listener] on host [192.168.2.252]: Error response from daemon: driver failed programming external connectivity on endpoint rke-cp-port-listener (cfa7520e4c70e679d0019188bce730bd1e4368e28bcadb87934eb16eb1f183c5): Error starting userland proxy: listen tcp 0.0.0.0:6443: bind: address already in use 
INFO[0014] Starting container [rke-cp-port-listener] on host [192.168.2.252], try #2 
WARN[0015] Can't start Docker container [rke-cp-port-listener] on host [192.168.2.252]: Error response from daemon: driver failed programming external connectivity on endpoint rke-cp-port-listener (6d1c9ced8cf9f5b635ffebf9ae56ae628bda02259fdbd0c221940bd96588087f): Error starting userland proxy: listen tcp 0.0.0.0:6443: bind: address already in use 
INFO[0015] Starting container [rke-cp-port-listener] on host [192.168.2.252], try #3 
WARN[0016] Can't start Docker container [rke-cp-port-listener] on host [192.168.2.252]: Error response from daemon: driver failed programming external connectivity on endpoint rke-cp-port-listener (1689e92e3364357a278619724aed95b1feb0cfa71c2875d7aa924766f75e81d6): Error starting userland proxy: listen tcp 0.0.0.0:6443: bind: address already in use 
INFO[0016] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0017] Starting container [rke-worker-port-listener] on host [192.168.2.252], try #1 
INFO[0024] [network] Successfully started [rke-worker-port-listener] container on host [192.168.2.252] 
INFO[0024] [network] Port listener containers deployed successfully 
INFO[0024] [network] Running control plane -> etcd port checks 
INFO[0024] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0025] Starting container [rke-port-checker] on host [192.168.2.252], try #1 
INFO[0029] [network] Successfully started [rke-port-checker] container on host [192.168.2.252] 
INFO[0030] Removing container [rke-port-checker] on host [192.168.2.252], try #1 
INFO[0031] [network] Running control plane -> worker port checks 
INFO[0031] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0032] Starting container [rke-port-checker] on host [192.168.2.252], try #1 
INFO[0035] [network] Successfully started [rke-port-checker] container on host [192.168.2.252] 
INFO[0038] Removing container [rke-port-checker] on host [192.168.2.252], try #1 
INFO[0038] [network] Running workers -> control plane port checks 
INFO[0038] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0040] Starting container [rke-port-checker] on host [192.168.2.252], try #1 
INFO[0042] [network] Successfully started [rke-port-checker] container on host [192.168.2.252] 
INFO[0042] Removing container [rke-port-checker] on host [192.168.2.252], try #1 
INFO[0043] [network] Checking KubeAPI port Control Plane hosts 
INFO[0043] [network] Removing port listener containers  
INFO[0043] Removing container [rke-etcd-port-listener] on host [192.168.2.252], try #1 
INFO[0044] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.2.252] 
INFO[0044] Removing container [rke-cp-port-listener] on host [192.168.2.252], try #1 
INFO[0044] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.2.252] 
INFO[0044] Removing container [rke-worker-port-listener] on host [192.168.2.252], try #1 
INFO[0047] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.2.252] 
INFO[0047] [network] Port listener containers removed successfully 
INFO[0047] [certificates] Deploying kubernetes certificates to Cluster nodes 
INFO[0047] Checking if container [cert-deployer] is running on host [192.168.2.252], try #1 
INFO[0047] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0049] Starting container [cert-deployer] on host [192.168.2.252], try #1 
INFO[0054] Checking if container [cert-deployer] is running on host [192.168.2.252], try #1 
INFO[0060] Checking if container [cert-deployer] is running on host [192.168.2.252], try #1 
INFO[0060] Removing container [cert-deployer] on host [192.168.2.252], try #1 
INFO[0060] [reconcile] Rebuilding and updating local kube config 
INFO[0060] Successfully Deployed local admin kubeconfig at [./kube_config_public_cluster.yml] 
INFO[0061] [reconcile] host [192.168.2.252] is active master on the cluster 
INFO[0061] [certificates] Successfully deployed kubernetes certificates to Cluster nodes 
INFO[0061] [reconcile] Reconciling cluster state        
INFO[0061] [reconcile] This is newly generated cluster  
INFO[0061] Pre-pulling kubernetes images                
INFO[0061] Image [rancher/hyperkube:v1.10.3-rancher2] exists on host [192.168.2.252] 
INFO[0061] Kubernetes images pulled successfully        
INFO[0061] [etcd] Building up etcd plane..              
INFO[0061] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0062] Starting container [etcd-fix-perm] on host [192.168.2.252], try #1 
INFO[0068] Successfully started [etcd-fix-perm] container on host [192.168.2.252] 
INFO[0068] Waiting for [etcd-fix-perm] container to exit on host [192.168.2.252] 
INFO[0068] Waiting for [etcd-fix-perm] container to exit on host [192.168.2.252] 
INFO[0068] Container [etcd-fix-perm] is still running on host [192.168.2.252] 
INFO[0069] Waiting for [etcd-fix-perm] container to exit on host [192.168.2.252] 
INFO[0070] Container [etcd-fix-perm] is still running on host [192.168.2.252] 
INFO[0071] Waiting for [etcd-fix-perm] container to exit on host [192.168.2.252] 
INFO[0071] Container [etcd-fix-perm] is still running on host [192.168.2.252] 
INFO[0072] Waiting for [etcd-fix-perm] container to exit on host [192.168.2.252] 
INFO[0073] Removing container [etcd-fix-perm] on host [192.168.2.252], try #1 
INFO[0073] [remove/etcd-fix-perm] Successfully removed container on host [192.168.2.252] 
INFO[0073] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [192.168.2.252] 
INFO[0073] Removing container [etcd-rolling-snapshots] on host [192.168.2.252], try #1 
INFO[0075] [remove/etcd-rolling-snapshots] Successfully removed container on host [192.168.2.252] 
INFO[0075] Image [rancher/rke-tools:v0.1.52] exists on host [192.168.2.252] 
INFO[0077] Starting container [etcd-rolling-snapshots] on host [192.168.2.252], try #1 
INFO[0083] [etcd] Successfully started [etcd-rolling-snapshots] container on host [192.168.2.252] 
INFO[0088] Image [rancher/rke-tools:v0.1.52] exists on host [192.168.2.252] 
INFO[0089] Starting container [rke-bundle-cert] on host [192.168.2.252], try #1 
INFO[0097] [certificates] Successfully started [rke-bundle-cert] container on host [192.168.2.252] 
INFO[0097] Waiting for [rke-bundle-cert] container to exit on host [192.168.2.252] 
INFO[0098] Container [rke-bundle-cert] is still running on host [192.168.2.252] 
INFO[0099] Waiting for [rke-bundle-cert] container to exit on host [192.168.2.252] 
INFO[0100] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [192.168.2.252] 
INFO[0100] Removing container [rke-bundle-cert] on host [192.168.2.252], try #1 
INFO[0101] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0102] Starting container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0108] [etcd] Successfully started [rke-log-linker] container on host [192.168.2.252] 
INFO[0108] Removing container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0109] [remove/rke-log-linker] Successfully removed container on host [192.168.2.252] 
INFO[0109] [etcd] Successfully started etcd plane.. Checking etcd cluster health 
INFO[0112] [controlplane] Building up Controller Plane.. 
INFO[0112] Checking if container [service-sidekick] is running on host [192.168.2.252], try #1 
INFO[0113] [sidekick] Sidekick container already created on host [192.168.2.252] 
INFO[0113] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.2.252] 
INFO[0113] [healthcheck] service [kube-apiserver] on host [192.168.2.252] is healthy 
INFO[0113] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0115] Starting container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0119] [controlplane] Successfully started [rke-log-linker] container on host [192.168.2.252] 
INFO[0119] Removing container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0121] [remove/rke-log-linker] Successfully removed container on host [192.168.2.252] 
INFO[0121] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.2.252] 
INFO[0122] [healthcheck] service [kube-controller-manager] on host [192.168.2.252] is healthy 
INFO[0122] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0124] Starting container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0131] [controlplane] Successfully started [rke-log-linker] container on host [192.168.2.252] 
INFO[0131] Removing container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0133] [remove/rke-log-linker] Successfully removed container on host [192.168.2.252] 
INFO[0133] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.2.252] 
INFO[0137] [healthcheck] service [kube-scheduler] on host [192.168.2.252] is healthy 
INFO[0137] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0138] Starting container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0142] [controlplane] Successfully started [rke-log-linker] container on host [192.168.2.252] 
INFO[0142] Removing container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0143] [remove/rke-log-linker] Successfully removed container on host [192.168.2.252] 
INFO[0143] [controlplane] Successfully started Controller Plane.. 
INFO[0143] [authz] Creating rke-job-deployer ServiceAccount 
INFO[0145] [authz] rke-job-deployer ServiceAccount created successfully 
INFO[0145] [authz] Creating system:node ClusterRoleBinding 
INFO[0145] [authz] system:node ClusterRoleBinding created successfully 
INFO[0145] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding 
INFO[0145] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully 
INFO[0145] Successfully Deployed state file at [./public_cluster.rkestate] 
INFO[0145] [state] Saving full cluster state to Kubernetes 
INFO[0146] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: cluster-state 
INFO[0146] [worker] Building up Worker Plane..          
INFO[0146] Checking if container [service-sidekick] is running on host [192.168.2.252], try #1 
INFO[0146] [sidekick] Sidekick container already created on host [192.168.2.252] 
INFO[0146] Image [rancher/hyperkube:v1.10.3-rancher2] exists on host [192.168.2.252] 
INFO[0147] Starting container [kubelet] on host [192.168.2.252], try #1 
INFO[0148] [worker] Successfully started [kubelet] container on host [192.168.2.252] 
INFO[0148] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.2.252] 
INFO[0158] [healthcheck] service [kubelet] on host [192.168.2.252] is healthy 
INFO[0158] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0160] Starting container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0173] [worker] Successfully started [rke-log-linker] container on host [192.168.2.252] 
INFO[0174] Removing container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0180] [remove/rke-log-linker] Successfully removed container on host [192.168.2.252] 
INFO[0180] Image [rancher/hyperkube:v1.10.3-rancher2] exists on host [192.168.2.252] 
INFO[0181] Starting container [kube-proxy] on host [192.168.2.252], try #1 
INFO[0182] [worker] Successfully started [kube-proxy] container on host [192.168.2.252] 
INFO[0182] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.2.252] 
INFO[0191] [healthcheck] service [kube-proxy] on host [192.168.2.252] is healthy 
INFO[0192] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0195] Starting container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0211] [worker] Successfully started [rke-log-linker] container on host [192.168.2.252] 
INFO[0212] Removing container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0221] [remove/rke-log-linker] Successfully removed container on host [192.168.2.252] 
INFO[0221] [worker] Successfully started Worker Plane.. 
INFO[0221] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0224] Starting container [rke-log-cleaner] on host [192.168.2.252], try #1 
INFO[0238] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.2.252] 
INFO[0238] Removing container [rke-log-cleaner] on host [192.168.2.252], try #1 
INFO[0257] [remove/rke-log-cleaner] Successfully removed container on host [192.168.2.252] 
INFO[0257] [sync] Syncing nodes Labels and Taints       
INFO[0258] [sync] Successfully synced nodes Labels and Taints 
INFO[0258] [network] Setting up network plugin: canal   
INFO[0258] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes 
INFO[0260] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes 
INFO[0260] [addons] Executing deploy job rke-network-plugin 
FATA[0294] Failed to get job complete status for job rke-network-plugin-deploy-job in namespace kube-system 

I believe the hardware resources were too limited:

a single core and 1 GB of RAM is not enough.

After increasing the resources to 8 GB of RAM and 4 cores, the deployment completed successfully.

INFO[0000] Running RKE version: v1.0.4                  
INFO[0000] Initiating Kubernetes cluster                
INFO[0000] [certificates] Generating admin certificates and kubeconfig 
INFO[0000] Successfully Deployed state file at [./public_cluster.rkestate] 
INFO[0000] Building Kubernetes cluster                  
INFO[0000] [dialer] Setup tunnel for host [192.168.2.252] 
INFO[0000] [network] No hosts added existing cluster, skipping port check 
INFO[0000] [certificates] Deploying kubernetes certificates to Cluster nodes 
INFO[0000] Checking if container [cert-deployer] is running on host [192.168.2.252], try #1 
INFO[0000] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0001] Starting container [cert-deployer] on host [192.168.2.252], try #1 
INFO[0003] Checking if container [cert-deployer] is running on host [192.168.2.252], try #1 
INFO[0008] Checking if container [cert-deployer] is running on host [192.168.2.252], try #1 
INFO[0008] Removing container [cert-deployer] on host [192.168.2.252], try #1 
INFO[0008] [reconcile] Rebuilding and updating local kube config 
INFO[0008] Successfully Deployed local admin kubeconfig at [./kube_config_public_cluster.yml] 
INFO[0008] [reconcile] host [192.168.2.252] is active master on the cluster 
INFO[0008] [certificates] Successfully deployed kubernetes certificates to Cluster nodes 
INFO[0008] [reconcile] Reconciling cluster state        
INFO[0008] [reconcile] Check etcd hosts to be deleted   
INFO[0008] [reconcile] Check etcd hosts to be added     
INFO[0008] [reconcile] Rebuilding and updating local kube config 
INFO[0008] Successfully Deployed local admin kubeconfig at [./kube_config_public_cluster.yml] 
INFO[0008] [reconcile] host [192.168.2.252] is active master on the cluster 
INFO[0008] [reconcile] Reconciled cluster state successfully 
INFO[0008] Pre-pulling kubernetes images                
INFO[0008] Image [rancher/hyperkube:v1.10.3-rancher2] exists on host [192.168.2.252] 
INFO[0008] Kubernetes images pulled successfully        
INFO[0008] [etcd] Building up etcd plane..              
INFO[0008] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0010] Starting container [etcd-fix-perm] on host [192.168.2.252], try #1 
INFO[0012] Successfully started [etcd-fix-perm] container on host [192.168.2.252] 
INFO[0012] Waiting for [etcd-fix-perm] container to exit on host [192.168.2.252] 
INFO[0012] Waiting for [etcd-fix-perm] container to exit on host [192.168.2.252] 
INFO[0013] Removing container [etcd-fix-perm] on host [192.168.2.252], try #1 
INFO[0013] [remove/etcd-fix-perm] Successfully removed container on host [192.168.2.252] 
INFO[0013] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [192.168.2.252] 
INFO[0013] Removing container [etcd-rolling-snapshots] on host [192.168.2.252], try #1 
INFO[0015] [remove/etcd-rolling-snapshots] Successfully removed container on host [192.168.2.252] 
INFO[0015] Image [rancher/rke-tools:v0.1.52] exists on host [192.168.2.252] 
INFO[0018] Starting container [etcd-rolling-snapshots] on host [192.168.2.252], try #1 
INFO[0020] [etcd] Successfully started [etcd-rolling-snapshots] container on host [192.168.2.252] 
INFO[0025] Image [rancher/rke-tools:v0.1.52] exists on host [192.168.2.252] 
INFO[0028] Starting container [rke-bundle-cert] on host [192.168.2.252], try #1 
INFO[0030] [certificates] Successfully started [rke-bundle-cert] container on host [192.168.2.252] 
INFO[0030] Waiting for [rke-bundle-cert] container to exit on host [192.168.2.252] 
INFO[0031] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [192.168.2.252] 
INFO[0031] Removing container [rke-bundle-cert] on host [192.168.2.252], try #1 
INFO[0031] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0032] Starting container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0033] [etcd] Successfully started [rke-log-linker] container on host [192.168.2.252] 
INFO[0035] Removing container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0036] [remove/rke-log-linker] Successfully removed container on host [192.168.2.252] 
INFO[0036] [etcd] Successfully started etcd plane.. Checking etcd cluster health 
INFO[0036] [controlplane] Building up Controller Plane.. 
INFO[0036] Checking if container [service-sidekick] is running on host [192.168.2.252], try #1 
INFO[0036] [sidekick] Sidekick container already created on host [192.168.2.252] 
INFO[0036] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.2.252] 
INFO[0036] [healthcheck] service [kube-apiserver] on host [192.168.2.252] is healthy 
INFO[0036] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0038] Starting container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0039] [controlplane] Successfully started [rke-log-linker] container on host [192.168.2.252] 
INFO[0039] Removing container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0041] [remove/rke-log-linker] Successfully removed container on host [192.168.2.252] 
INFO[0041] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.2.252] 
INFO[0041] [healthcheck] service [kube-controller-manager] on host [192.168.2.252] is healthy 
INFO[0041] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0042] Starting container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0044] [controlplane] Successfully started [rke-log-linker] container on host [192.168.2.252] 
INFO[0045] Removing container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0045] [remove/rke-log-linker] Successfully removed container on host [192.168.2.252] 
INFO[0045] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.2.252] 
INFO[0045] [healthcheck] service [kube-scheduler] on host [192.168.2.252] is healthy 
INFO[0045] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0046] Starting container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0047] [controlplane] Successfully started [rke-log-linker] container on host [192.168.2.252] 
INFO[0047] Removing container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0047] [remove/rke-log-linker] Successfully removed container on host [192.168.2.252] 
INFO[0047] [controlplane] Successfully started Controller Plane.. 
INFO[0047] [authz] Creating rke-job-deployer ServiceAccount 
INFO[0047] [authz] rke-job-deployer ServiceAccount created successfully 
INFO[0047] [authz] Creating system:node ClusterRoleBinding 
INFO[0047] [authz] system:node ClusterRoleBinding created successfully 
INFO[0047] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding 
INFO[0048] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully 
INFO[0048] Successfully Deployed state file at [./public_cluster.rkestate] 
INFO[0048] [state] Saving full cluster state to Kubernetes 
INFO[0048] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: cluster-state 
INFO[0048] [worker] Building up Worker Plane..          
INFO[0048] Checking if container [service-sidekick] is running on host [192.168.2.252], try #1 
INFO[0048] [sidekick] Sidekick container already created on host [192.168.2.252] 
INFO[0048] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.2.252] 
INFO[0048] [healthcheck] service [kubelet] on host [192.168.2.252] is healthy 
INFO[0048] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0049] Starting container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0050] [worker] Successfully started [rke-log-linker] container on host [192.168.2.252] 
INFO[0051] Removing container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0051] [remove/rke-log-linker] Successfully removed container on host [192.168.2.252] 
INFO[0051] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.2.252] 
INFO[0051] [healthcheck] service [kube-proxy] on host [192.168.2.252] is healthy 
INFO[0051] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0051] Starting container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0052] [worker] Successfully started [rke-log-linker] container on host [192.168.2.252] 
INFO[0052] Removing container [rke-log-linker] on host [192.168.2.252], try #1 
INFO[0053] [remove/rke-log-linker] Successfully removed container on host [192.168.2.252] 
INFO[0053] [worker] Successfully started Worker Plane.. 
INFO[0053] Image [rancher/rke-tools:v0.1.9] exists on host [192.168.2.252] 
INFO[0054] Starting container [rke-log-cleaner] on host [192.168.2.252], try #1 
INFO[0055] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.2.252] 
INFO[0055] Removing container [rke-log-cleaner] on host [192.168.2.252], try #1 
INFO[0055] [remove/rke-log-cleaner] Successfully removed container on host [192.168.2.252] 
INFO[0055] [sync] Syncing nodes Labels and Taints       
INFO[0055] [sync] Successfully synced nodes Labels and Taints 
INFO[0055] [network] Setting up network plugin: canal   
INFO[0055] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes 
INFO[0055] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes 
INFO[0055] [addons] Executing deploy job rke-network-plugin 
INFO[0056] [addons] Setting up coredns                  
INFO[0056] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes 
INFO[0056] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes 
INFO[0056] [addons] Executing deploy job rke-coredns-addon 
INFO[0066] [addons] CoreDNS deployed successfully..     
INFO[0066] [dns] DNS provider coredns deployed successfully 
INFO[0066] [addons] Setting up Metrics Server           
INFO[0066] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0066] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0066] [addons] Executing deploy job rke-metrics-addon 
INFO[0076] [addons] Metrics Server deployed successfully 
INFO[0076] [ingress] Setting up nginx ingress controller 
INFO[0076] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0076] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0076] [addons] Executing deploy job rke-ingress-controller 
INFO[0087] [ingress] ingress controller nginx deployed successfully 
INFO[0087] [addons] Setting up user addons              
INFO[0087] [addons] Saving ConfigMap for addon rke-user-addon to Kubernetes 
INFO[0087] [addons] Successfully saved ConfigMap for addon rke-user-addon to Kubernetes 
INFO[0087] [addons] Executing deploy job rke-user-addon 
INFO[0097] [addons] User addons deployed successfully   
INFO[0097] Finished building Kubernetes cluster successfully 
