K8s cluster logging
As mentioned in the K8s logging architecture doc, logging is very useful for debugging problems and monitoring cluster activity. In this article, we mainly talk about Day 2 logging, i.e. the logging that happens after your K8s cluster has been installed.
Node level logging
Container logs
A container running inside a pod might write logs to stdout and stderr. Those logs are accessible with the kubectl logs <pod_name> -c <container_name> command. This is typically how developers access the logs of a container in a pod.
The following example is from the K8s official doc, where the pod writes its logs directly to stdout. After kubectl apply, the pod keeps writing logs to stdout; to read them, use kubectl logs counter.
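A sketch of that counter pod, close to the manifest shown in the official logging docs (the local file name counter-pod.yaml is just an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    # Write an incrementing counter plus a timestamp to stdout once per second
    args: [/bin/sh, -c,
           'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
```

```shell
kubectl apply -f counter-pod.yaml   # file name is illustrative
kubectl logs counter
```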
Where the stdout logs are stored
To access those stdout logs from a container, there must be a place where they are stored. The Docker container engine redirects stdout and stderr to a logging driver, which K8s configures to write to files under /var/log/containers/ in JSON format by default. If you SSH into the node where the pod is running, you will see the container log files under the /var/log/containers/ directory. The kubectl logs command returns the same content you would see by opening the log file directly, e.g. with vim.
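As an illustration, with Docker's default json-file logging driver the stored lines look roughly like the following (file name, counter value, and timestamps are made up):

```shell
# On the node where the counter pod runs (illustrative path and content)
cat /var/log/containers/counter_default_count-0123456789ab.log
# {"log":"0: Tue Jan 1 00:00:00 UTC 2019\n","stream":"stdout","time":"2019-01-01T00:00:00.000000000Z"}
```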
Interestingly enough, the log files under /var/log/containers/ are actually symbolic links to the log files under /var/log/pods.
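You can see the symlinks on the node itself; roughly (names and IDs are illustrative, and the exact layout under /var/log/pods varies by K8s version):

```shell
ls -l /var/log/containers/
# counter_default_count-0123456789ab.log -> /var/log/pods/<pod_uid>/count/0.log
```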
Lifecycle management of the stdout log files
Those container log files do not stay around forever. According to the K8s official doc, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.
The log files also need to be rotated to avoid excessive resource consumption. Current K8s (v1.14) is not responsible for rotating the logs; the deployment tools need to take care of this. If your K8s cluster is deployed by the kube-up.sh script, the logrotate tool is configured to run every hour. You can also set up the container runtime to rotate the application's logs automatically, e.g. by using Docker's log-opt. Both logrotate and Docker's log-opt set the log file size threshold to 10MB.
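For illustration, a Docker daemon.json sketch that caps json-file logs at 10MB per file (the max-file count is an assumption, not necessarily what kube-up.sh ships):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}
```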
System components logging
There are two types of system components: those that run in a container and those that do not run in a container. For example:
The K8s scheduler and kube-proxy run in a container.
The kubelet and container runtime, for example Docker, do not run in containers.
On machines with systemd, the kubelet and container runtime write to journald. If systemd is not present, they write to .log files in the /var/log directory. System components inside containers always write to the /var/log directory, bypassing the default logging mechanism. They use the klog logging library. You can find the conventions for logging severity for those components in the development docs on logging.
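On a systemd node you can read those logs with journalctl, for example (assuming the kubelet and Docker run as systemd units named kubelet and docker):

```shell
journalctl -u kubelet   # kubelet logs
journalctl -u docker    # container runtime logs
```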
Similar to the container logs, system component logs in the /var/log directory should be rotated. In K8s clusters brought up by the kube-up.sh script, those logs are configured to be rotated by the logrotate tool daily or once the size exceeds 100MB.
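A logrotate rule mirroring that behavior might look roughly like this (a sketch, not the exact config shipped by kube-up.sh; the file list and rotate count are assumptions):

```
# /etc/logrotate.d/kube-components (illustrative)
/var/log/kube-scheduler.log /var/log/kube-proxy.log {
    daily
    maxsize 100M
    rotate 5
    copytruncate
    missingok
    notifempty
    compress
}
```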
The section above is copied from the official K8s doc.
Cluster level logging
From the above, we know that the log files are ephemeral: they are either rotated away or evicted together with the pod, and if the node dies, those log files are gone as well. So we need a separate backend to store, analyze, and query those logs. This concept is called cluster-level logging. Common approaches are:
Using a node logging agent
Using a sidecar container (see the sketch after this list)
Exposing logs directly from the application
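As an illustration of the sidecar approach, here is a sketch based on the streaming-sidecar pattern from the official docs: the application container writes to a file on a shared volume, and a sidecar container tails that file to its own stdout so the node-level logging machinery described above can pick it up (the pod name, container names, and the /var/log/app path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter-with-sidecar
spec:
  containers:
  - name: count            # application container writing logs to a file
    image: busybox
    args: [/bin/sh, -c,
           'i=0; while true; do echo "$i: $(date)" >> /var/log/app/1.log; i=$((i+1)); sleep 1; done']
    volumeMounts:
    - name: varlog
      mountPath: /var/log/app
  - name: count-log        # sidecar streaming the file to its own stdout
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/app/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log/app
  volumes:
  - name: varlog
    emptyDir: {}
```

The application logs are then available through the sidecar, e.g. kubectl logs counter-with-sidecar -c count-log, and they end up under /var/log/containers/ on the node like any other container's stdout.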