Kubernetes deployments

Kubectl

To install kubectl, follow the "Install and Set Up kubectl" guide in the official Kubernetes documentation.

Kubectl connects to the cluster using the token or certificate stored in the kubeconfig file. By default this is found at ~/.kube/config.

When dealing with multiple clusters it's possible to list multiple kubeconfig files in the KUBECONFIG environment variable, separated by colons. Make sure you include the default kubeconfig file.

KUBECONFIG=~/.kube/config:~/my-k8s-cluster/kubeconfig.yaml:~/my-other-k8s-cluster/kubeconfig.yaml
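
Once the variable is exported, kubectl merges the contexts from every file in the list; a quick way to confirm it has been picked up is to list the contexts kubectl can now see:

shell> export KUBECONFIG=~/.kube/config:~/my-k8s-cluster/kubeconfig.yaml:~/my-other-k8s-cluster/kubeconfig.yaml
shell> kubectl config get-contexts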

To get the contents of the kubeconfig file, browse to the Rancher cluster menu for your chosen cluster, select “Cluster” from the horizontal menu and then click the “Kubeconfig file” button. Copy the contents into your local kubeconfig file and add that file to the KUBECONFIG env var.

Switching between clusters

Each config file will have one or more named contexts defined. You can switch between clusters by using:

shell> kubectl config use-context my-k8s-cluster
Switched to context "my-k8s-cluster".

Find the current cluster context with:

shell> kubectl config current-context
my-k8s-cluster

and cluster info with:

shell> kubectl cluster-info
Kubernetes master is running at https://rancher2-staging.croud.tech/k8s/clusters/c-jmfl8
KubeDNS is running at https://rancher2-staging.croud.tech/k8s/clusters/c-jmfl8/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Visual Studio Code plugin

Cluster management is a little easier if you use the Kubernetes Visual Studio Code extension (ms-kubernetes-tools.vscode-kubernetes-tools).

This adds an icon to the VS Code toolbar that allows you to easily switch between namespaces and view Kubernetes resources and config.

Helm

To install Helm, follow the "Installing Helm" guide in the official Helm documentation.

Helm allows you to manage Kubernetes API resources using “Helm Charts”.

Helm Chart

A Helm Chart is a collection of API resource definition templates (using the Go templating syntax) with a set of default values for creating a release.
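
As an illustration (this is not one of our charts, and the environment value is made up), a template inside a chart's templates/ directory pulls its values in like this:

# templates/configmap.yaml - illustrative only
apiVersion: v1
kind: ConfigMap
metadata:
  # .Release.Name is filled in by helm when the release is created
  name: {{ .Release.Name }}-config
data:
  # .Values.environment comes from the chart's values.yaml or a --values override
  environment: {{ .Values.environment | quote }}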

Helm release

Releases are created when installing a helm chart to a cluster. Helm chart releases on a cluster can be listed with helm ls.

Create a release with:

helm upgrade --install my-unique-release-name stable/my-chart --namespace my-namespace --values=./some-common-values.yaml --values=./some-more-specific-values.yaml
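
The --install flag makes the command idempotent: it creates the release if it doesn't exist and upgrades it if it does. Once it has run you can check on the release, for example:

helm status my-unique-release-name
helm ls --namespace my-namespace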

Helm values

Helm values tend to follow a specific pattern. Viewing the chart's default values should give you enough information to create a custom values file for your release. Values are inherited, so you only need to include the values you want to change in your release file. It's also possible to pass multiple values files, which helps prevent repetition by keeping values common to all releases (DB hosts or SMTP credentials, for example) in a shared file.
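
As a sketch of that layering (the keys shown are made up for illustration; the file names match the release command above), the common file holds shared settings and the more specific file holds only the overrides. Where the same key appears in both, the file passed last on the command line wins:

# some-common-values.yaml - shared by every release
database:
  host: db.internal.example.com
smtp:
  host: smtp.internal.example.com

# some-more-specific-values.yaml - only what this release changes
replicaCount: 3
database:
  name: my-service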

Creating helm charts

To create a helm chart use the helm create my-chart command. This will create a chart using the boilerplate helm chart files so all you need to do is modify it. It’s important to follow the conventions used in the boilerplate files (pod naming etc).
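
Running it gives you the standard boilerplate layout; the exact file list varies a little between Helm versions, but at the top level it looks roughly like this:

shell> helm create my-chart
Creating my-chart

shell> ls my-chart
Chart.yaml   charts   templates   values.yaml

Chart.yaml holds the chart name and version, values.yaml the default values, and templates/ the Go-templated resource definitions.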

Publishing helm charts

In addition to the official stable and incubator chart repositories it's possible to publish our own charts to a repository we host in AWS S3. We have two S3-hosted repositories, croudtech-stable and croudtech-incubator. Charts in ‘stable’ are considered production ready. Charts in ‘incubator’ are considered bleeding edge and may not work.

To publish a chart to our repo simply create the chart in the ‘stable’ directory of https://github.com/CroudTech/devops-helm-charts and push to GitHub. Pushes to the master branch will publish to the ‘stable’ repository and pushes to the ‘incubator’ branch will publish to the incubator repository. Incubator charts will publish on every push. Stable charts need the version number bumping in the chart's Chart.yaml file.
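
For example, if the last published stable version of a chart was 0.1.3, the bump is just an edit to that chart's Chart.yaml before pushing to master (the chart name and versions here are illustrative):

# stable/my-chart/Chart.yaml
name: my-chart
description: An example chart
version: 0.1.4   # bumped from 0.1.3 so a new stable package is published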

Using the custom helm repositories

To use a custom repository you must add it to your local helm installation.
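
Note that helm can't read s3:// URLs out of the box; an S3 plugin has to be installed first. The widely used hypnoglow/helm-s3 plugin is shown here as an assumption, so check the devops-helm-charts README for the plugin we actually use:

helm plugin install https://github.com/hypnoglow/helm-s3.git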

shell> helm repo add croudtech-incubator s3://croudtech-helm-charts/croudtech-incubator
"croudtech-incubator" has been added to your repositories
shell> helm repo add croudtech-stable s3://croudtech-helm-charts/croudtech-stable
"croudtech-stable" has been added to your repositories

To refresh your local copy of the repository indexes (when new charts or chart versions have been published):

shell> helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "incubator" chart repository
...Successfully got an update from the "croudtech-incubator" chart repository
...Successfully got an update from the "croudtech-stable" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

Autohelm

Autohelm is a wrapper for Helm. It allows multiple chart releases to be specified in a single file.

Prerequisites

  • Python2 Runtime

    Autohelm itself is built on Python 2 and as such requires the Python 2 runtime to be present

  • Helm

  • Autohelm

  • Autohelm S3 Plugin

    The Croud Helm Chart repositories are stored in S3. An example install of these prerequisites is sketched below.
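
A minimal local setup might look like the following. It assumes autohelm is installable from PyPI and that the hypnoglow helm-s3 plugin satisfies the S3 requirement, so treat it as a sketch rather than the canonical install:

# Python 2 and Helm are assumed to be installed already
pip2 install autohelm                                         # assumption: autohelm is published on PyPI
helm plugin install https://github.com/hypnoglow/helm-s3.git  # assumption: this is the S3 plugin in use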

Overview

An autohelm file represents a single namespace. Values can be added in-line or by referencing values files.

For example:

namespace: kong
repositories:
  croudtech-incubator:
    url: s3://croudtech-helm-charts/croudtech-incubator
  croudtech-stable:
    url: s3://croudtech-helm-charts/croudtech-stable
minimum_versions: #set minimum version requirements here
  helm: 2.10.0
  autohelm: 0.6.5
charts:
  ingress-kong:
    chart: kong-ingress
    repository: croudtech-stable
    version: 0.1.3
    files:
      - ../values/kong-ingress/kong-ingress.yaml
  kong-dashboard:
    chart: konga
    repository: croudtech-stable
    version: 0.1.3
    files:
      - ../values/konga/konga.yaml

To deploy the charts in this autohelm file run:

autohelm plot ./my-autohelm-file.yaml

To deploy only a specific chart defined in the file run:

autohelm plot ./my-autohelm-file.yaml --only kong-dashboard

Any chart repositories defined in the autohelm file will be automatically added or updated, so you won't need to use the helm repo * commands mentioned in the helm section of this document.

Autohelm Workflow

To assist with the actual deployment of autohelm files, a Docker image has been created with all the necessary libraries installed. For details on how to use this Docker image to aid deployment, see the Autohelm Workflow document.

Telepresence

To install Telepresence, see the Installation section below.

Telepresence is a powerful tool that allows us to create a VPN connection to the cluster kubectl is connected to. It also allows us to replace running containers in the cluster with Docker images running on our local machines.

Supported Platforms

The following guide applies to Mac and Linux (Ubuntu). When running in vpn-tcp mode (the default), Telepresence allows all processes on the machine running Telepresence to access any workloads on the target cluster directly. Running a Telepresence VPN allows local development of components requiring connectivity to one or more other services without needing those services to run on the local workstation.

On Windows only one Telepresence connection method is currently supported (inject-tcp). This allows cluster connectivity from a single process running on the machine running Telepresence. Windows users needing VPN connectivity to clusters beyond a single terminal process are advised to go with a VM-based solution using the Ubuntu platform.

See this link for more information:

https://www.telepresence.io/reference/windows

Prerequisites

  • Supported Platform
  • kubectl
  • kubeconfig configured with appropriate context

Installation

Ensure you have all prerequisites installed, then run the commands below (these are the Ubuntu commands):

curl -s https://packagecloud.io/install/repositories/datawireio/telepresence/script.deb.sh | sudo bash
sudo apt install --no-install-recommends telepresence
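
On a Mac the equivalent is a Homebrew install. The tap below is taken from the upstream Telepresence 1.x docs; double-check the installation guide linked below in case the formula location has changed:

brew cask install osxfuse
brew install datawire/blackbird/telepresence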

For detailed instructions follow the installation guide here:

https://www.telepresence.io/reference/install

Usage

shell> telepresence
T: Starting proxy with method 'vpn-tcp', which has the following limitations: All processes are affected, only one telepresence can run per machine, and you can't use other VPNs. You may need to add cloud hosts and headless services with --also-proxy. For a full list of method
T:  limitations see https://telepresence.io/reference/methods.html
T: Volumes are rooted at $TELEPRESENCE_ROOT. See https://telepresence.io/howto/volumes.html for details.

T: No traffic is being forwarded from the remote Deployment to your local machine. You can use the --expose option to specify which ports you want to forward.

T: Guessing that Services IP range is 10.43.0.0/16. Services started after this point will be inaccessible if are outside this range; restart telepresence if you can't access a new Service.
@Default|bash-3.2$

This has opened a VPN connection, and you should be able to access Kubernetes services using their Kubernetes DNS names.

For example

http://activity-log.stg-tennant.svc.cluster.local

The hostnames always follow the pattern {SERVICE_NAME}.{NAMESPACE}.svc.cluster.local
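
For example, with the VPN running you can query a service straight from your local shell, or go a step further and swap a cluster deployment for a process on your machine. The deployment name, namespace, port and run command below are assumptions for illustration; see the Telepresence docs for --swap-deployment details:

# query a service in the cluster directly over the Telepresence VPN
curl http://activity-log.stg-tennant.svc.cluster.local

# replace the cluster's copy of a deployment with a local process listening on port 8080
# (deployment name, namespace, port and command are illustrative)
telepresence --swap-deployment activity-log --namespace stg-tennant --expose 8080 --run python2 app.py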