Google Anthos Day from KubeCon 2019 San Diego

Overview

This is a high-level overview of the Anthos components in a typical enterprise environment.

Diagram showing Anthos components

Core Anthos Components

| Component | Cloud | On-Premises |
| --- | --- | --- |
| GKE | GKE | GKE On-Prem (1.0) |
| Multicluster Management | Yes | Yes |
| Configuration Management | Anthos Config Management (1.0) | Anthos Config Management (1.0) |
| Migration | Migrate for Anthos (Beta) | N/A |
| Service Mesh | Anthos Service Mesh (Beta), Traffic Director | Istio OSS (1.1.13) |
| Logging & Monitoring | Stackdriver Logging, Stackdriver Monitoring, alerting | Stackdriver for system components |
| Marketplace | Kubernetes Applications in GCP Marketplace | Kubernetes Applications in GCP Marketplace |

Container Management

GKE and GKE On-Prem bundle the upstream Kubernetes releases and provide management capabilities for creating, scaling, and upgrading Kubernetes clusters.

For cloud, GCP hosts the control plane, and GKE manages the node components (kubelet, kube-proxy, container runtime) on Compute Engine instances.

For on-prem, all components are hosted in the customer's on-prem virtual environment.

Policy and Config Management

KubeCon 2019 San Diego Video

With Anthos Config Management, you can create a common configuration for all administrative policies that apply to your Kubernetes clusters both on-premises and in the cloud.

Define configs and policies

  • Where: In centralized Git repository

  • Format: YAML or JSON
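A config in the repo is just a declarative Kubernetes object. As a minimal sketch (the file path and label are illustrative), a Namespace config might look like:

```yaml
# namespaces/audit/namespace.yaml — illustrative path inside the config repo
apiVersion: v1
kind: Namespace
metadata:
  name: audit
  labels:
    env: prod
```

Committing this file to the central repo is what drives the Namespace's creation on every targeted cluster.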

Validations

A built-in validator that reviews every line of code before it gets to your repository.

Question: how is the validation done?

  • Nomos (pre-commit)

  • OPA Gatekeeper (post-commit)
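On the post-commit side, Gatekeeper enforces constraints against objects as they are admitted. A commonly cited example from the Gatekeeper library (the constraint name is illustrative, and the `K8sRequiredLabels` ConstraintTemplate must already be installed) requires a label on every Namespace:

```yaml
# Post-commit enforcement: reject any Namespace without an "owner" label
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["owner"]
```

On the pre-commit side, running `nomos vet` against a local clone of the repo checks the config structure and syntax before changes are pushed.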

Deploy configs and policies

The Anthos Config Management Operator, deployed on GKE or GKE On-Prem clusters, monitors a Git repo and applies any changes it detects.

This approach leverages core Kubernetes concepts, such as Namespaces, labels, and annotations to determine how and where to apply the config changes to all of your Kubernetes clusters, no matter where they reside. The repo provides a versioned, secured, and controlled single source of truth for all of your Kubernetes configurations. Any YAML or JSON that can be applied with kubectl commands can be managed with the Anthos Config Management Operator and applied to any Kubernetes cluster using Anthos Config Management.
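The Operator is pointed at the repo through a ConfigManagement custom resource on each cluster. A sketch, with the repo URL, branch, and cluster name as placeholders:

```yaml
# Tells the Anthos Config Management Operator which Git repo to sync from
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  clusterName: my-cluster
  git:
    syncRepo: git@github.com:example/anthos-config.git
    syncBranch: master
    secretType: ssh       # credentials for the repo, stored as a Secret
    policyDir: "."        # root of the config hierarchy within the repo
```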

Active monitoring

Continuous monitoring of the cluster state to prevent configuration drift

Benefits mentioned

Anthos Config Management has the following benefits for your Kubernetes Engine clusters:

  • Single source of truth, control, and management

    • Enables the use of code reviews, validation, and rollback workflows.

    • Avoids shadow ops, where Kubernetes clusters drift out of sync due to manual changes.

    • Enables the use of CI/CD pipelines for automated testing and rollout.

  • One-step deployment across all clusters

    • Anthos Config Management turns a single Git commit into multiple kubectl commands across all clusters.

    • Rollback by simply reverting the change in Git. The reversion is then automatically deployed at scale.

  • Rich inheritance model for applying changes

    • Using Namespaces, you can create configuration for all clusters, some clusters, some Namespaces, or even custom resources.

    • Using Namespace inheritance, you can create a layered Namespace model that allows for configuration inheritance across the repo folder structure.
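The inheritance model follows the repo's folder layout: configs placed in a parent directory apply to everything beneath it. A sketch (directory and team names are illustrative):

```
namespaces/               # configs here apply to every Namespace
  quota.yaml
  eng/                    # abstract namespace directory: applies to all of eng
    rolebinding.yaml      # inherited by frontend and backend
    frontend/             # an actual Namespace
      namespace.yaml
    backend/
      namespace.yaml
```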

Reference: https://cloud.google.com/anthos-config-management/docs/overview

Migrate

KubeCon 2019 San Diego Video

There are two business solutions provided.

  • (Migrate for Anthos) Move and convert applications running on VMware, AWS, Azure, or Compute Engine VMs directly into containers in Google Kubernetes Engine (GKE).

  • (Migrate for Compute Engine) For other workloads that are better suited as a VM, simply move them as is.

Both of the above solutions build on technology from Velostrata.

Migrate for Anthos

The second column in the picture shows what exists today when a VM is migrated to a GKE container: the only scaling option when capacity is reached is vertical scaling. The yellow components leverage Kubernetes and the green components run inside containers. The third column shows how the future could look, with multiple containers and horizontal pod autoscaling.

Migrate for Anthos uses a wrapper image to create containers from your VMs. As part of this process:

  • The VM operating system is converted to a kernel supported by GKE.

  • VM system disks are mounted inside the container using a persistent volume (PV) and a StatefulSet.

  • Networking, logging, and monitoring use GKE constructs.

  • Applications that the VM ran via systemd scripts run in the container user space.

  • During the initial migration phase, storage is streamed to the container using CSI. The storage can then be migrated to any storage class supported by GKE.

Migrate for Compute Engine

Official documentation can be found at: https://cloud.google.com/migrate/compute-engine/docs/4.8/

The Velostrata team has enhanced the VM migration tool to also convert VMs to containers and then perform the migration. The fundamentals of Velostrata, including its agentless and streaming technologies, remain the same for "Migrate for Anthos". The Velostrata manager and cloud extensions need to be installed in the GCP environment to do the migration. Because Velostrata uses streaming technology, the complete VM storage need not be migrated before the container can run in GKE, which speeds up the entire migration process.

Benefits mentioned

  • Security optimized GKE node kernel with automatic upgrades.

  • Density and control. Use multiple operating systems and versions on container hosts, benefit from isolation, fine-grained resource allocations, and network permissions.

  • Integrated resource management. Desired-state management with powerful tagging strategies and selector policies. GKE allows you to focus on managing apps, not infrastructure.

  • Augment legacy apps with modern services. Add-ons such as Istio seamlessly integrate up-to-date functionality with existing apps. Istio allows you to automate network and security policies without changing your application code.

Service Mesh

Anthos Service Mesh is a suite of tools to help you monitor and manage a reliable service mesh on Google Cloud Platform (GCP), powered by Istio. Logging and monitoring features are powered by Stackdriver.

Anthos Service Mesh provides a web console from which you can monitor and manage the mesh.

Key features mentioned

Observability features:

  • Service metrics and logs for all traffic within your mesh's GKE cluster are automatically ingested to GCP.

  • Out-of-the-box service dashboards in the Google Cloud Platform Console with the information you need to understand your services.

  • In-depth telemetry in the GCP Console lets you dig deep into your metrics and logs, filtering and slicing your data on a wide variety of attributes.

  • Service-to-service relationships at a glance: understand who connects to each service and the services it depends on.

  • Quickly see the communication security posture not only of your service, but its relationships to other services.

  • Dig deeper into your service metrics and combine them with other GCP metrics using Stackdriver.

  • Gain clear and simple insight into the health of your service with service level objectives (SLOs), which allow you to easily define and alert on your own standards of service health.

Security features:

  • Anthos Service Mesh certificate authority

    • You don't need to manage a certificate authority; Anthos Service Mesh certificate authority (Mesh CA) manages the issuance and rotation of mTLS certificates and keys for GKE Pods based on Managed Workload Identity.

    • You can simply configure mTLS using Istio policies to ensure strong authentication and encryption in transit.

    • Integration with VPC Service Controls
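Since the on-prem side of Anthos ships Istio OSS 1.1.13, the mTLS policy would use the Istio 1.1-era authentication API. A sketch of mesh-wide strict mTLS (newer Istio releases replace this with a `PeerAuthentication` resource):

```yaml
# Mesh-wide strict mTLS, Istio 1.1-style authentication policy
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}
```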

Dashboard overview

Another video from the same presenter

Software Delivery Platform

KubeCon 2019 San Diego Video

This session shows how to use Google products to build a software delivery platform.

Google products mentioned

  • GCP Container Registry (similar to Harbor)

  • GCP Config Management (we do not have this at the moment)

Workflow

  1. Developers make changes to App repo

  2. A CI pipeline is triggered to build the application container image and push it to GCP Container Registry

  3. Ops create configurations describing how the application is to be deployed

  4. Step #3 triggers the Config repo to generate the K8s manifests with kustomize and commit them to the Env repo's staging branch

  5. CD pipeline deploys the application onto staging env for testing

  6. Once testing is done, Ops can merge the staging branch into the master branch to deploy the application onto the production env

  7. Security developers could use Anthos Config Management to apply policies onto production env
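Steps 4–6 hinge on kustomize rendering environment-specific manifests from a shared base. A sketch of a staging overlay in the Env repo (paths, namespace, and image names are illustrative); `kustomize build` on this directory produces the final manifests the CD pipeline applies:

```yaml
# env-repo/overlays/staging/kustomization.yaml (illustrative layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base            # shared manifests for the app
namespace: staging      # everything lands in the staging namespace
images:
- name: gcr.io/example/app
  newTag: "1.2.3"       # pinned by the CI pipeline per build
```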

Cloud Run for Anthos

KubeCon 2019 San Diego Video:

Cloud Run is Google's serverless computing product. It is built on Knative and integrates with a number of other Google services, e.g. logging and monitoring powered by Stackdriver. Cloud Run itself runs on Google's fully managed infrastructure.

Cloud Run for Anthos makes Cloud Run features available on Anthos GKE clusters, whether on-prem or in Google Cloud.

Build: The capability of building container images for a Knative-compatible serverless environment. Cloud Run integrates with Cloud Build.

Serving: The containerized stateless applications need to be served as services and revisioned. This is not newly invented; it is the Serving concept from Knative. gcloud and the Cloud Run Console are the user interfaces provided to manage the applications.

Eventing: The services created should be available to be triggered by events (HTTP(S) requests or other event producers). Nothing new here either; this is Knative Eventing, which binds an event producer to your service.

The event producer could be Kafka (shown in the demo) or Cloud Pub/Sub.
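Under the hood, deploying an app with Cloud Run for Anthos amounts to creating a Knative Service; each config change produces a new revision. A minimal sketch (the service name and image path are placeholders):

```yaml
# A Knative Service as managed by Cloud Run for Anthos
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:            # each change to the template creates a new revision
    spec:
      containers:
      - image: gcr.io/example/hello:latest
```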

Security

KubeCon 2019 San Diego Video:

Anthos Modernized Application Security White Paper

https://services.google.com/fh/files/misc/anthos_an_opportunity_to_modernize_application_security_white_paper.pdf

  • Enforcing consistent policies across environments (Anthos Config Management)

  • Deploying only trusted workloads (GCR & Binary Authorization)

  • Isolating workloads with different risk profiles
