Cloud Native OCI

Cluster Maintenance


Created with ❤ by Oracle A-Team

Background

  • OKE offers a choice of Kubernetes versions, and as new versions are released you can upgrade your master nodes and node pools independently.
  • The Kubernetes versions on the master and worker nodes may differ, within the version skew that Kubernetes permits.
Node Statistics
  • Node Conditions
  • Eviction Signals & Policy
  • Observing Nodes
Observe nodes and understand node conditions and eviction policies.
Upgrading Master Nodes
  • Automated In-Place Upgrade
How an in-place, zero-downtime upgrade of the master nodes is done on OKE.
Upgrading Node Pools
  • Out-of-place upgrade
  • Cordoning & Draining Nodes
How a node pool is upgraded to a new version of Kubernetes or to run a different OS image.

Node Status

A node (formerly called a minion) in Kubernetes is a worker node managed by the master through the Node Controller.
The Node object is the API representation of the actual machine created by the cloud provider.
The kubelet is an agent that runs on each node to keep the pods on the node healthy and to report the node's status.
The scheduler uses the info on the Node to make scheduling decisions.
The NodeController uses heartbeats to keep track of node status and perform evictions if necessary.
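The conditions that the Node Controller tracks can also be read programmatically from the JSON that `kubectl get node <name> -o json` returns. The sketch below uses a hand-built fragment of that structure with illustrative values:

```python
import json

# A hand-built fragment of the JSON returned by `kubectl get node <name> -o json`.
# Field names follow the Node API; the values are illustrative.
node_json = """
{
  "status": {
    "conditions": [
      {"type": "MemoryPressure", "status": "False"},
      {"type": "DiskPressure",   "status": "False"},
      {"type": "PIDPressure",    "status": "False"},
      {"type": "Ready",          "status": "True"}
    ]
  }
}
"""

def is_ready(node: dict) -> bool:
    """A node is healthy for scheduling only when its Ready condition is True."""
    for cond in node["status"]["conditions"]:
        if cond["type"] == "Ready":
            return cond["status"] == "True"
    return False

node = json.loads(node_json)
print(is_ready(node))  # True
```

The same conditions appear in the `Conditions:` table of the `kubectl describe nodes` output above.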
kubectl describe nodes

    Name:               10.0.10.9
    Roles:              node
    Labels:             beta.kubernetes.io/instance-type=VM.Standard1.4
                    beta.kubernetes.io/os=linux
                    displayName=oke-cydczjumfsw-n2ggndcmjtd-soczn56brva-0
                    failure-domain.beta.kubernetes.io/zone=PHX-AD-2
                    hostname=oke-cydczjumfsw-n2ggndcmjtd-soczn56brva-0
    Annotations:        alpha.kubernetes.io/provided-node-ip: 10.0.10.9
                    volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp:  Wed, 12 Feb 2020 12:34:29 -0800
    Taints:             none
    Unschedulable:      false
    Conditions:
    Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
    ----             ------  -----------------                 ------------------                ------                       -------
    MemoryPressure   False   Tue, 25 Feb 2020 21:09:50 -0800   Wed, 12 Feb 2020 12:34:29 -0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
    DiskPressure     False   Tue, 25 Feb 2020 21:09:50 -0800   Wed, 12 Feb 2020 12:34:29 -0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
    PIDPressure      False   Tue, 25 Feb 2020 21:09:50 -0800   Wed, 12 Feb 2020 12:34:29 -0800   KubeletHasSufficientPID      kubelet has sufficient PID available
    Ready            True    Tue, 25 Feb 2020 21:09:50 -0800   Wed, 12 Feb 2020 12:34:39 -0800   KubeletReady                 kubelet is posting ready status
    Addresses:
    InternalIP:  10.0.10.9
    Capacity:
    cpu:                8
    ephemeral-storage:  40223552Ki
    hugepages-1Gi:      0
    hugepages-2Mi:      0
    memory:             28532212Ki
    pods:               110
    Allocatable:
    cpu:                8
    ephemeral-storage:  37070025462
    hugepages-1Gi:      0
    hugepages-2Mi:      0
    memory:             28429812Ki
    pods:               110
    System Info:
    Machine ID:                 c08f95ce89b64194bb05e5db16c92408
    Kernel Version:             4.14.35-1902.10.4.el7uek.x86_64
    OS Image:                   Oracle Linux Server 7.7
    Container Runtime Version:  docker://18.9.8
    Kubelet Version:            v1.13.5
    Kube-Proxy Version:         v1.13.5
    PodCIDR:                     10.244.1.0/24
    ProviderID:                  ocid1.instance.oc1.phx.anyhqljsc3adhhqc6lxypsuv323q25bbrflolcfro5hl2enz53tm23bssglq
    Non-terminated Pods:         (16 in total)
    Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
    ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
    kube-system                coredns-5c8f898f54-vtdxz                                       100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13d
    kube-system                kube-flannel-ds-z6bmk                                          100m (1%)     1 (12%)     50Mi (0%)        500Mi (1%)     13d
    kube-system                kube-proxy-6cx7g                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         13d
    mushop-setup               mushop-setup-prometheus-kube-state-metrics-6c8755ccfb-4fr2b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13d
    mushop-setup               mushop-setup-prometheus-node-exporter-8n8pl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13d
    mushop                     mushop-osb-oci-service-broker-6fbbf6c767-g45tr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13d
    mushop                     mushop-test-api-6b645d6948-j9vqj                               100m (1%)     300m (3%)   100Mi (0%)       300Mi (1%)     5d23h
    mushop                     mushop-test-catalogue-65ff5cf66b-6s8wx                         100m (1%)     200m (2%)   64Mi (0%)        128Mi (0%)     5d23h
    mushop                     mushop-test-payment-5f56c45c96-xtmwg                           99m (1%)      100m (1%)   100Mi (0%)       100Mi (0%)     5d23h
    mushop                     mushop-test-storefront-54989b54c5-dlscr                        100m (1%)     300m (3%)   100Mi (0%)       300Mi (1%)     5d23h
    Allocated resources:
    (Total limits may be over 100 percent, i.e., overcommitted.)
    Resource           Requests    Limits
    --------           --------    ------
    cpu                649m (8%)   2400m (30%)
    memory             548Mi (1%)  1754Mi (6%)
    ephemeral-storage  0 (0%)      0 (0%)
    Events:              none
                

Node Conditions

  • A node tries to preserve stability when available compute resources are low.
  • The kubelet reclaims resources when node resource usage hits an eviction threshold, by evicting Pods from the node.
  • If an evicted Pod is managed by a Deployment, the Deployment creates a replacement Pod, which Kubernetes schedules onto another node.
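Eviction thresholds are part of the kubelet configuration. OKE manages these settings for you; the fragment below only illustrates the shape of a hard-eviction threshold configuration, and the values are not OKE's actual settings:

```yaml
# KubeletConfiguration fragment (kubelet.config.k8s.io/v1beta1).
# Values are illustrative, not OKE's actual settings.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  imagefs.available: "15%"
```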

  • When an eviction threshold is met, the kubelet reports a node condition indicating pressure, such as MemoryPressure or DiskPressure, and reflects the node's overall health in the Ready condition.
  • The scheduler always considers the Node conditions when scheduling pods to a Node.
  • The eviction order for Pods is determined by
    • whether usage of the starved resource exceeds the Pod’s requests
    • Pod priority
    • consumption of the starved resource relative to the Pod’s requests
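The ordering above can be sketched as a sort key. This is a simplification of the kubelet's actual ranking, and the pod figures are illustrative:

```python
# Simplified sketch of the kubelet's eviction ranking for a starved
# resource (e.g. memory): pods whose usage exceeds their request are
# evicted first, then lower-priority pods, then pods with the largest
# usage above their request. Figures are illustrative.
from typing import NamedTuple

class PodStats(NamedTuple):
    name: str
    request: int   # requested amount of the starved resource (MiB)
    usage: int     # current usage (MiB)
    priority: int  # pod priority (higher = more important)

def eviction_order(pods):
    """Return pods in the order they would be evicted."""
    return sorted(
        pods,
        key=lambda p: (
            p.usage <= p.request,    # False (exceeds request) sorts first
            p.priority,              # lower priority evicted earlier
            -(p.usage - p.request),  # larger overage evicted earlier
        ),
    )

pods = [
    PodStats("burst", request=100, usage=400, priority=0),
    PodStats("well-behaved", request=200, usage=150, priority=0),
    PodStats("important-burst", request=100, usage=300, priority=1000),
]
print([p.name for p in eviction_order(pods)])
# → ['burst', 'important-burst', 'well-behaved']
```

Note that a well-behaved pod staying under its requests is evicted last, regardless of its absolute usage.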

Upgrading your Cluster

  • OKE manages the cluster control plane, and the master nodes are highly available.
  • Updating the cluster is an in-place operation and incurs no downtime for the workloads on the cluster.
  • Upgrades allow for the standard Kubernetes version skew
    • The master can be ahead of the node pool versions by up to two minor versions
    • The master cannot be behind any of the node pool versions
  • Cluster upgrades can be done through the API or any of the SDKs/clients.
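The skew rule can be expressed as a small check. This is a sketch that assumes version strings of the form `v1.13.5`, as reported by `kubectl`:

```python
# Sketch of the version-skew rule described above: the master may be
# ahead of a node pool by at most two minor versions and may never be
# behind it. Assumes "vMAJOR.MINOR.PATCH" version strings.
def minor(version: str) -> int:
    """Extract the minor version from a string like 'v1.13.5'."""
    return int(version.lstrip("v").split(".")[1])

def skew_ok(master: str, node_pool: str) -> bool:
    skew = minor(master) - minor(node_pool)
    return 0 <= skew <= 2

print(skew_ok("v1.15.7", "v1.13.5"))  # True  (master two minors ahead)
print(skew_ok("v1.13.5", "v1.15.7"))  # False (master behind node pool)
```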

Upgrading your Node Pools

  • Overview
  • Practice
    • Node Pools are managed by the user and upgrade is out-of-place.
    • ℹ️ Note
      • This applies only to node pool upgrades; node pools can be scaled in-place.
      • If node selectors are in use, apply the same node labels to the new nodes.
      • Use PodDisruptionBudgets to ensure enough replicas are available throughout the process.
    • The upgrade proceeds in three steps:
      • Cordon Node: ensures no new pods are scheduled on the node.
      • Create New Pool: create a new node pool with the desired properties.
      • Drain Node: drains the pods from the node; the scheduler creates replacement pods elsewhere (on the new node pool).
    • kubectl cordon [node]
      Cordons the specified node. A selector can be passed in as well.
    • kubectl describe node [node]
      Verify that the node is marked Unschedulable.
    • kubectl drain [node]
      Drains the node by safely evicting the pods on the node. Verify with the describe command above. Pod disruption budgets are respected.
    • kubectl uncordon [node]
      Puts the node back in service, which applies when the node was cordoned for maintenance. For upgrades, the node can be terminated at this stage.
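The PodDisruptionBudget protection that `kubectl drain` respects amounts to a simple rule: an eviction is allowed only if the remaining healthy replicas would still satisfy `minAvailable`. A simplified sketch, with illustrative figures:

```python
# Simplified sketch of how a PodDisruptionBudget with minAvailable
# gates evictions during a drain: evicting one pod must leave at
# least minAvailable healthy replicas. Figures are illustrative.
def eviction_allowed(healthy_replicas: int, min_available: int) -> bool:
    return healthy_replicas - 1 >= min_available

# With 3 healthy replicas and minAvailable: 2, one eviction is allowed;
# once only 2 remain, further evictions are blocked until a replacement
# pod becomes healthy on another node.
print(eviction_allowed(3, 2))  # True
print(eviction_allowed(2, 2))  # False
```

This is why setting a PodDisruptionBudget, as noted above, keeps enough replicas available while nodes are drained one at a time.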
Version: 1.8.0
Build: 2022-02-17T05:02:17Z
© 2022, Oracle and/or its affiliates. All rights reserved.