Deploying and managing an application on Kubernetes, while straightforward in a single-cluster configuration, becomes complex across clusters. The complexity extends beyond deployment to management capabilities such as monitoring, security, scaling, and inter-service connectivity across clusters.

Istio simplifies the operation of microservice-based applications across Kubernetes clusters by enabling the following capabilities:

  1. Traffic Management
    • Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
    • Fine-grained control of traffic behavior with rich routing rules, retries, A/B testing, canary releases, fail-overs, and fault injection.
  2. Security
    • A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
    • Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.
  3. Observability
    • Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.

Istio is designed to be part of the application deployment, through the use of a proxy per deployment. This proxy enforces traffic rules and security policies, and collects observability data (logs, metrics, traces) from each deployment.

Once the application starts to scale, Istio scales with it, as long as a proxy is deployed with each new replica. This interconnection of multiple microservices with the appropriate traffic management, observability, and security is what is called a service mesh.

Istio is a service mesh.

From the istio.io site, a service mesh is described as follows:

Service mesh is used to describe the network of microservices that make up such applications and the interactions between them. As a service mesh grows in size and complexity, it can become harder to understand and manage. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring. A service mesh also often has more complex operational requirements, like A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.

Istio Overview

There are several service mesh options in the ecosystem:

  1. Linkerd (CNCF)
  2. Istio
  3. Conduit

In this blog we will look at deploying Istio on VMware Cloud PKS as part of a simple application. This first post covers the initial installation; subsequent posts will cover managing the application with Istio.

  1. Part 1 — (this blog) Proper installation, with some specifics for VMware Cloud PKS
  2. Part 2 — Initiating the service mesh and properly configuring Istio load balancing
  3. Part 3 — Exploring a race condition between Envoy and application deployments

Istio Architecture

Before we begin covering installation, it is worth reviewing Istio’s architecture to understand how Istio works in relation to the application. From the istio.io site:

There are two parts of the architecture:

  1. Data plane — Any and all communication between services is handled by the “injected” proxy in each deployment. This proxy is called Envoy and was originally developed by Lyft.
  2. Control plane — Composed of three baseline components (Mixer, Pilot, Citadel), plus the newer Galley, that “control” and manage service-to-service communication:
    • Mixer — enforces access control and policies, and collects telemetry data from Envoy
    • Pilot — supports service discovery and provisions Envoy with the right configuration for traffic management
    • Citadel — supports service-to-service and end-user authentication with built-in credential and identity management
    • Galley — (NEW) validates user-authored Istio API configuration on behalf of the other Istio control plane components. Over time, Galley will take over responsibility as the top-level configuration ingestion, processing, and distribution component of Istio.

Envoy Proxy

No discussion of the architecture is complete without describing the Envoy proxy. Envoy is the “heart” of Istio; without it, the majority of Istio’s features and capabilities would not be possible.

From the envoyproxy.io site:

Originally built at Lyft, Envoy is a high performance C++ distributed proxy designed for single services and applications, as well as a communication bus and “universal data plane” designed for large microservice “service mesh” architectures. Built on the learnings of solutions such as NGINX, HAProxy, hardware load balancers, and cloud load balancers, Envoy runs alongside every application and abstracts the network by providing common features in a platform-agnostic manner. When all service traffic in an infrastructure flows via an Envoy mesh, it becomes easy to visualize problem areas via consistent observability, tune overall performance, and add substrate features in a single place.

What are the capabilities?

  • Dynamic service discovery
  • Load balancing
  • TLS termination
  • HTTP/2 and gRPC proxies
  • Circuit breakers
  • Health checks
  • Staged rollouts with %-based traffic split
  • Fault injection
  • Rich metrics

How is Envoy deployed in the application?

Using my standard fitcycle app (https://github.com/bshetti/container-fitcycle), we can see where Envoy fits in the application’s deployment.

Above we see the application (fitcycle) deployed in a Kubernetes cluster with several pods:

  • 2 API replicas supporting the “API Service” — each pod has the Envoy proxy injected
  • a Django pod with the Envoy proxy injected
  • a MySQL pod with the Envoy proxy injected

Once the Envoy proxy is deployed, it enables us to push down traffic management policies, export metrics to Prometheus, send traffic and interconnectivity data to Service Graph, and more.

Helm Deployment of Istio

The best and simplest way to deploy Istio is to use Helm. Helm is a package manager that helps you create, install, and manage applications on Kubernetes. Helm uses a construct called a “chart”: a collection of Kubernetes YAML files that encompasses all aspects of the application: services, deployments, ingress rules, configuration mappings, etc.
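For orientation, a chart is just a directory of templated manifests plus metadata. Sketched roughly (a simplified layout, not the full Istio chart contents):

```
istio/
  Chart.yaml        # chart metadata: name, version, description
  values.yaml       # default, user-overridable configuration values
  templates/        # templated Kubernetes YAML (services, deployments, ...)
```

The values.yaml file is the part we will edit later to adapt the chart to VMware Cloud PKS.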

More info on helm:

https://helm.sh

There is a set of “stable” charts that people have created and maintain on GitHub.

Unfortunately, the Istio Helm chart is NOT in the stable charts repository on GitHub. The Istio Helm chart ships inside the Istio release itself.

Follow the steps outlined here to install Istio with helm:

Step 1 — Download the latest release (I used Istio 1.0.0): https://istio.io/docs/setup/kubernetes/download-release/

Step 2 — Ensure your application meets the requirements outlined here: https://istio.io/docs/setup/kubernetes/download-release/

Before we get to Step 3 and beyond, we need to explore the proper naming of ports in your application’s service YAMLs and the labeling of deployments.

    1) Named ports in service.yaml:

    Service ports must be named using the following pattern:
       <protocol>[-<suffix>]
       <protocol> = http, http2, grpc, mongo, or redis
       <suffix> = anything you want, or nothing

    Example:
      VALID: name: http2-foo or name: http
      INVALID: name: http2foo
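To make the rule concrete, here is a small shell sketch. The `valid_port_name` helper is hypothetical (not part of Istio or kubectl); it simply checks a port name against the `<protocol>[-<suffix>]` pattern for the protocols listed above:

```shell
# Hypothetical helper: succeeds if the service port name follows
# Istio's <protocol>[-<suffix>] naming convention.
valid_port_name() {
  printf '%s\n' "$1" | grep -Eq '^(http|http2|grpc|mongo|redis)(-[A-Za-z0-9_]+)?$'
}

valid_port_name "http2-foo" && echo "http2-foo is valid"
valid_port_name "http"      && echo "http is valid"
valid_port_name "http2foo"  || echo "http2foo is invalid (missing the dash)"
```

A quick check like this can be wired into CI to catch misnamed ports before Istio silently treats the traffic as plain TCP.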

    2) App and version labels in deployment.yaml

    The "app" label must be consistent across all deployments.
    Different variants of a deployment can carry different "version" labels.

    app: APPNAME
    version: v1

Here is how the above rules are followed in the fitcycle application:

api service service.yaml (note the named port):


apiVersion: v1
kind: Service
metadata:
  name: api-server
  labels:
    app: fitcycle
spec:
  ports:
    - name: http-fcapi
      protocol: TCP
      port: 5000
      nodePort: 30431
  selector:
    app: fitcycle
    tier: api
  type: NodePort

api service deployment.yaml (note the app and version labels):


apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: api-server
  labels:
    app: fitcycle
spec:
  selector:
    matchLabels:
      app: fitcycle
      tier: api
  strategy:
    type: Recreate
  replicas: 3
  template:
    metadata:
      labels:
        app: fitcycle
        tier: api
        version: v1
    spec:

Step 3 — Install Helm

Follow the steps outlined in the “install from binary” section (I installed v2.9.1):

https://docs.helm.sh/using_helm/#from-the-binary-releases

Step 4 — Modify values.yaml for VMware Cloud PKS

Since we are using VMware Cloud PKS, we need to modify the values.yaml file accordingly.

VMware Cloud PKS only allows the use of NodePorts in the range 30400–30449.

The values.yaml file is found here:

~/istio-1.0.0/install/kubernetes/helm/istio/values.yaml

Here are the values that need changing in values.yaml (note the nodePort entries):


    loadBalancerIP: ""
    serviceAnnotations: {}
    type: LoadBalancer # change to NodePort, ClusterIP, or LoadBalancer if need be

    ports:
      ## You can add custom gateway ports
      - port: 80
        targetPort: 80
        name: http2
        nodePort: 30401
      - port: 443
        name: https
        nodePort: 30402
      - port: 31400
        name: tcp
        nodePort: 30403

Step 5 — Install Istio with Helm

1 — First, install the Istio custom resource definitions (CRDs) (needed if you are running a Helm version prior to 2.10.0):

$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml

2 — Second, create the Helm service account:

$ kubectl apply -f install/kubernetes/helm/helm-service-account.yaml

3 — Third, initialize Tiller with the service account:

$ helm init --service-account tiller

4 — Fourth, install Istio:

$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system

Istio Running

Once the Istio installation is complete, you will see the following:

Services:

This is what you should see for services:

Deployments:

This is what you should see for deployments:
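With the default chart options in Istio 1.0, the deployments in the istio-system namespace look roughly like this (component names only; the exact output varies with chart options and Istio version):

```
$ kubectl get deployments -n istio-system
istio-citadel
istio-egressgateway
istio-galley
istio-ingressgateway
istio-pilot
istio-policy
istio-sidecar-injector
istio-telemetry
prometheus
```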

Several key points:

  1. The istio-system namespace is created and holds all the Istio components
  2. Prometheus is loaded by default

Injecting Envoy Proxy into the application

Injecting the Envoy proxy into each service is simple: the proxy is injected per deployment. (Automatic injection is also possible, and I will describe it in another blog.)
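For reference, automatic injection is typically enabled in Istio by labeling the target namespace; a minimal sketch (we use manual injection in this blog):

```yaml
# Labeling the namespace lets Istio's sidecar-injector webhook
# inject Envoy into newly created pods automatically.
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled
```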

In my configuration of fitcycle, I have three services:

  1. a Django-based web server
  2. a Flask-based API server
  3. a MySQL database

I simply inject Envoy into EACH deployment with the following command:

     istioctl kube-inject -f api-server-deployment.yaml | kubectl apply -f -

To remove the proxy at any point, simply run:

     istioctl kube-inject -f api-server-deployment.yaml | kubectl delete -f -

Once the proxies are injected you can see their status as follows:
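Each injected pod reports two ready containers: the application container plus the istio-proxy sidecar. For example (hypothetical pod name and age):

```
$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
api-server-xxxxxxxxxx-xxxxx   2/2     Running   0          1m
```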

Summary

Now that the Envoy proxy is injected into the application, the installation of Istio alongside the application is complete.

Here is what we accomplished:

  1. Installed Helm
  2. Configured the application to adhere to Istio’s port-naming and version-label requirements
  3. Configured security (service accounts) for Helm and Istio
  4. Installed Istio
  5. Injected the Envoy proxy into the application components

In the next blog, we will explore how to utilize Service Graph and Prometheus with Istio, and configure load balancing.