
K8s cluster logging


Last updated 3 years ago


As mentioned in the K8s logging architecture doc, logging is very useful for debugging problems and monitoring cluster activities. In this article, we mainly talk about Day 2 logging, i.e. logging after your K8s cluster has been installed.

Node level logging

Container logs

A container running inside a pod might dump logs to stdout and stderr. Those logs are accessible via the kubectl logs <pod_name> -c <container_name> command. This is typically how developers access the logs of a container in a pod.
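For reference, some common variations of the command, assuming a hypothetical pod named counter with a container named count (these require a running cluster):

```shell
# Logs from a single-container pod
kubectl logs counter

# Logs from a specific container in a multi-container pod
kubectl logs counter -c count

# Logs from the previous (crashed or restarted) instance of the container
kubectl logs counter --previous

# Follow the log stream, starting from the last 20 lines
kubectl logs counter -f --tail=20
```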

The following example is from the K8s official doc; the pod dumps its logs directly to stdout.

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c,
            'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

After the kubectl apply, the pod keeps writing logs to stdout. To read those logs, run kubectl logs counter.
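On the node, each stdout line ends up as one JSON object in the container's log file when Docker's default json-file logging driver is in use. A minimal simulation of that format, which does not require a cluster (the sample line is fabricated for illustration):

```shell
# One line of a container log file, in the shape written by Docker's
# json-file logging driver (this is what lives under /var/log/containers/).
line='{"log":"0: Sat Jan  1 00:00:00 UTC 2022\n","stream":"stdout","time":"2022-01-01T00:00:00.000000000Z"}'

# kubectl logs effectively returns the "log" fields of these objects;
# extract the field with sed to mimic that.
printf '%s\n' "$line" | sed -n 's/.*"log":"\([^"]*\)".*/\1/p'
```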

The location where the stdout logs get stored

For those stdout logs to be accessible, there must be a place to store them. The Docker container engine redirects stdout and stderr to a logging driver, which K8s configures by default to write to files under the /var/log/containers/ directory in JSON format. If you SSH into the node where the pod is running, you will see the container log files under /var/log/containers/, and the kubectl logs command returns the same content as viewing the log file directly.

Interestingly enough, the log files under /var/log/containers/ are actually symbolic links to the log files under /var/log/pods.

Lifecycle management of the stdout log files

The log files need to be rotated to avoid huge resource consumption. Current K8s (v1.14) is not responsible for rotating the logs; the deployment tools need to take care of this. The logrotate tool is configured to run every hour if your K8s cluster is deployed by the kube-up.sh script. You can also set up a container runtime to rotate an application's logs automatically, e.g. by using Docker's log-opt. Both logrotate and Docker's log-opt set the log file size threshold to 10MB.

Those container log files will not stay forever either. According to the K8s official doc, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.

System components logging

There are two types of system components: those that run in a container and those that do not run in a container. For example:

  • The K8s scheduler and kube-proxy run in a container.

  • The kubelet and the container runtime, for example Docker, do not run in containers.

On machines with systemd, the kubelet and the container runtime write to journald. If systemd is not present, they write to .log files in the /var/log directory. System components running inside containers always write to the /var/log directory, bypassing the default logging mechanism. They use the klog logging library. You can find the conventions for logging severity for those components in the development docs on logging.

Similarly to the container logs, system component logs in the /var/log directory should be rotated. In K8s clusters brought up by the kube-up.sh script, those logs are configured to be rotated by the logrotate tool daily or once the size exceeds 100MB.

Cluster level logging

From the above, we know that the log files are ephemeral: they get rotated, or evicted together with their pod, and if the node dies those log files are gone as well. So we need a separate backend to store, analyze, and query those logs. This concept is called cluster level logging. The K8s logging architecture doc describes three common approaches:

Using a node logging agent

Run a logging agent (e.g. fluentd) on every node, typically as a DaemonSet, to collect the log files under /var/log and ship them to a backend.

Using side-car container

Add a side-car container to the pod that either streams the application's log files to its own stdout, or runs a logging agent that pushes the logs to a backend directly.

Expose logs directly from application

The application itself pushes its logs to the logging backend, without involving any K8s-level mechanism.

The sections above are adapted from the official K8s logging architecture doc.
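As an illustration of the side-car approach, here is a sketch adapted from the counter example above: the app writes to a file on a shared emptyDir volume, and a streaming side-car tails that file to its own stdout so that kubectl logs (and any node logging agent) can pick it up. The pod and container names are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter-with-sidecar
spec:
  containers:
  # The application writes to a file instead of stdout.
  - name: count
    image: busybox
    args: [/bin/sh, -c,
            'i=0; while true; do echo "$i: $(date)" >> /var/log/app/count.log; i=$((i+1)); sleep 1; done']
    volumeMounts:
    - name: varlog
      mountPath: /var/log/app
  # The side-car streams the file to its own stdout.
  - name: count-log
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app/count.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log/app
  volumes:
  - name: varlog
    emptyDir: {}
```

With this in place, kubectl logs counter-with-sidecar -c count-log shows the application's log lines.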