Debezium is built on top of the Kafka Connect framework and requires an Apache Kafka cluster to store the change events captured from source databases. Sometimes we do not need the level of fault tolerance and reliability provided by a Kafka cluster, but we still need Change Data Capture (CDC). This is where Debezium Engine comes into the picture.
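As a minimal sketch of what embedding Debezium Engine looks like, assuming the `debezium-api`, `debezium-embedded`, and a connector artifact such as `debezium-connector-mysql` are on the classpath (all connection details and file paths below are placeholders, not values from this post):

```java
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;

public class EmbeddedCdc {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("name", "engine");
        props.setProperty("connector.class", "io.debezium.connector.mysql.MySqlConnector");
        // No Kafka needed: offsets are kept in a local file instead of a Kafka topic
        props.setProperty("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore");
        props.setProperty("offset.storage.file.filename", "/tmp/offsets.dat");
        props.setProperty("offset.flush.interval.ms", "60000");
        // Placeholder connection settings; replace with your own database details
        props.setProperty("database.hostname", "localhost");
        props.setProperty("database.port", "3306");
        props.setProperty("database.user", "debezium");
        props.setProperty("database.password", "secret");
        props.setProperty("topic.prefix", "my-app");

        // The engine invokes the callback for every captured change event
        try (DebeziumEngine<ChangeEvent<String, String>> engine =
                DebeziumEngine.create(Json.class)
                        .using(props)
                        .notifying(record -> System.out.println(record.value()))
                        .build()) {
            ExecutorService executor = Executors.newSingleThreadExecutor();
            executor.execute(engine);
            // Run until shutdown; closing the engine stops capturing and flushes offsets
        }
    }
}
```

The key difference from a Kafka Connect deployment is that the change events are delivered directly to the in-process callback, so there is no Kafka broker or Connect cluster to operate.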
The Kubernetes etcd database can be easily backed up and restored using commands such as etcdutl or etcdctl. However, there is no well-documented procedure for restoring the control plane together with etcd. There are many posts out there, but none of them solved my problem. In the past, I spent a considerable amount of time recreating the entire Kubernetes cluster because, after restoring etcd, I was unable to bring the cluster back up properly due to unforeseen problems. This is the first time I have managed to recover my Kubernetes cluster without rebuilding the entire cluster. Let’s take a look at how this can be done.
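At a high level, a restore on a kubeadm control-plane node follows the shape sketched below. This is an outline under assumed defaults, not the exact procedure from this post: the snapshot path `/backup/etcd-snapshot.db` is a placeholder, and the manifest and data directories are the kubeadm defaults.

```shell
# 1. Move the static pod manifests away so kubelet stops etcd and the control plane
mv /etc/kubernetes/manifests /etc/kubernetes/manifests.bak

# 2. Set the old etcd data directory aside instead of deleting it
mv /var/lib/etcd /var/lib/etcd.bak

# 3. Restore the snapshot into a fresh data directory
#    (recent etcd releases recommend etcdutl; older ones use "etcdctl snapshot restore")
etcdutl snapshot restore /backup/etcd-snapshot.db --data-dir /var/lib/etcd

# 4. Put the manifests back; kubelet restarts etcd and the rest of the control plane
mv /etc/kubernetes/manifests.bak /etc/kubernetes/manifests
```

The tricky part, and the reason a plain etcd restore is not enough, is getting the control-plane components to come back up cleanly against the restored data.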
We are going to learn how to deploy and configure Fluent Bit (a sub-project of Fluentd) to capture the Nginx Ingress Controller logs on Kubernetes and stream the formatted logs to Elasticsearch. We can then perform log analytics using Kibana.
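The core of such a setup is a Fluent Bit pipeline that tails the ingress controller's container logs and ships them to Elasticsearch. A minimal sketch, assuming the ingress pods' log files match `*ingress-nginx*` and Elasticsearch is reachable at the in-cluster service `elasticsearch.logging.svc` (both are assumptions, not values from this post):

```ini
[INPUT]
    Name              tail
    Path              /var/log/containers/*ingress-nginx*.log
    Tag               ingress.*

[FILTER]
    Name              kubernetes
    Match             ingress.*

[OUTPUT]
    Name              es
    Match             ingress.*
    Host              elasticsearch.logging.svc
    Port              9200
    Logstash_Format   On
    Logstash_Prefix   nginx-ingress
```

The `kubernetes` filter enriches each record with pod metadata, and `Logstash_Format` writes to date-stamped indices that Kibana can pick up with an index pattern such as `nginx-ingress-*`.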
Let’s look at how we can quickly deploy a standalone Elasticsearch and Kibana using Docker Compose to start monitoring application and system logs.
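A minimal `docker-compose.yml` sketch for a single-node development setup; the image tags and memory settings here are illustrative assumptions, and security is disabled only because this is for local testing:

```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false    # local testing only; enable security in production
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
    ports:
      - "9200:9200"
    volumes:
      - esdata:/usr/share/elasticsearch/data

  kibana:
    image: docker.elastic.co/kibana/kibana:8.13.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

volumes:
  esdata:
```

After `docker compose up -d`, Elasticsearch answers on `http://localhost:9200` and Kibana on `http://localhost:5601`.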
When it comes to operating Kubernetes, cluster backup is one of the key operations in our backup strategy. We need to properly back up and secure the application data, which is typically stored in persistent volumes. We also need to back up the etcd data, because it is the operational data store that Kubernetes relies on to function. In this post we look at how to schedule etcd data backups using out-of-the-box Kubernetes features.
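The natural out-of-the-box feature for this is a CronJob that runs `etcdctl snapshot save` on a control-plane node. A hedged sketch, assuming a kubeadm cluster with the default certificate paths, a host-path backup directory, and an etcd image tag matching your cluster (all of these are assumptions to adjust):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: etcd-backup
  namespace: kube-system
spec:
  schedule: "0 */6 * * *"              # every six hours; adjust to your backup policy
  jobTemplate:
    spec:
      template:
        spec:
          hostNetwork: true            # reach etcd on the node's loopback address
          nodeSelector:
            node-role.kubernetes.io/control-plane: ""
          tolerations:
            - key: node-role.kubernetes.io/control-plane
              operator: Exists
              effect: NoSchedule
          containers:
            - name: snapshot
              image: registry.k8s.io/etcd:3.5.12-0   # match your cluster's etcd version
              command:
                - /bin/sh
                - -c
                - etcdctl snapshot save /backup/etcd-$(date +%Y%m%d-%H%M%S).db
                  --endpoints=https://127.0.0.1:2379
                  --cacert=/etc/kubernetes/pki/etcd/ca.crt
                  --cert=/etc/kubernetes/pki/etcd/server.crt
                  --key=/etc/kubernetes/pki/etcd/server.key
              volumeMounts:
                - name: etcd-certs
                  mountPath: /etc/kubernetes/pki/etcd
                  readOnly: true
                - name: backup
                  mountPath: /backup
          restartPolicy: OnFailure
          volumes:
            - name: etcd-certs
              hostPath:
                path: /etc/kubernetes/pki/etcd
            - name: backup
              hostPath:
                path: /var/backups/etcd
                type: DirectoryOrCreate
```

Writing snapshots to a host path on the control-plane node is the simplest option; in practice you would also ship them off the node so a lost disk does not take the backups with it.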