How to Set Up Prometheus and Alertmanager Monitoring on a Kubernetes Cluster

Shubham Khairnar
5 min read · Nov 24, 2020


Monitoring stack on Kubernetes

Nowadays, detecting and preventing failures requires a good monitoring tool, and that is why you need a monitoring system. Monitoring systems keep watch over the technology a company relies on (hardware, networks and communications, operating systems, and applications) in order to analyze its operation and performance, and to detect and alert on possible errors.

A good monitoring system can cover devices, infrastructure, applications, services, and even business processes, and in doing so helps increase productivity.

In this article, I’ll guide you through setting up a monitoring stack using Prometheus and Alertmanager on a Kubernetes cluster, collecting node, pod, and service metrics automatically using Kubernetes service discovery configurations.

So, let’s get started 😉

Prometheus

Prometheus is an open-source system monitoring and alerting toolkit.

Read more — https://prometheus.io/docs/introduction/overview/

Watch more — https://www.youtube.com/channel/UC4pLFely0-Odea4B2NL1nWA/videos

Alertmanager

The Alertmanager handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie. It also takes care of silencing and inhibition of alerts.

Read more — https://prometheus.io/docs/alerting/latest/alertmanager/

Pre-requisite:

1) A kubernetes cluster up and running with kubectl setup on your workstation

How to set one up? 🤔 Refer to:

AWS EKS: https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html

Google Cloud: https://humanitec.com/blog/how-to-set-up-a-kubernetes-cluster-on-gcp

2) Connect to the cluster

AWS EKS: https://aws.amazon.com/premiumsupport/knowledge-center/eks-cluster-connection/

Google Cloud: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl

Let’s get started with the setup.

Setup:

Step-1) Cloning:

All the configuration files mentioned in this article are available on GitHub. Please don’t hesitate to contribute features to the repo.

You can use the config files from the GitHub repo or create the files on the go as mentioned in the steps.

You can clone the repo using the following command.

git clone https://github.com/ShubhamKhairnar/monitoring.git

Step-2) Create a namespace:

We will create a Kubernetes namespace for all our monitoring components.

Execute the following command to create a namespace named “monitoring”:

kubectl create namespace monitoring
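
Equivalently, the namespace can be declared as a manifest and applied with kubectl apply, which keeps it under version control alongside the other config files:

```yaml
# Declarative equivalent of "kubectl create namespace monitoring"
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
```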

Step-3) Create cluster role:

kubectl apply -f clusterrole_bindings.yml
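
The clusterrole_bindings.yml grants Prometheus read access to cluster objects via the Kubernetes API. As a rough sketch of what such a manifest typically contains (the resource names here are illustrative; the actual file in the repo may differ):

```yaml
# ClusterRole allowing read-only access to the objects Prometheus discovers
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups: [""]
    resources: ["nodes", "nodes/proxy", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
# Bind the role to the service account Prometheus runs under
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: default
    namespace: monitoring
```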

Step-4) Create ConfigMaps:

Prometheus:

Create a ConfigMap with all the Prometheus scrape configuration and alerting rules; it will be mounted into the Prometheus container at /etc/prometheus as prometheus.yaml and prometheus.rules.

The prometheus.yaml contains the configuration to dynamically discover pods and services running in the Kubernetes cluster.

We have the following scrape jobs:

a) kubernetes-apiservers: All the metrics from the API servers.

b) kubernetes-nodes: Kubernetes node metrics

c) kubernetes-pods: Pod metrics (pod metadata must be annotated with prometheus.io/scrape and prometheus.io/port)

d) kubernetes-cadvisor: Collects all cAdvisor metrics.

e) kubernetes-service-endpoints: All the service endpoints (service metadata is annotated with prometheus.io/scrape and prometheus.io/port)

f) database-exporter: All the database metrics from the database (Optional)

g) prometheus.rules: All the alerting rules for firing alerts to Alertmanager, which delivers them by email.
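
To illustrate how the annotation-driven discovery in (c) works (this is a generic sketch, not the exact contents of the repo’s configmap.yml), a kubernetes-pods scrape job typically uses relabeling like this:

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods annotated prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # Scrape on the port given in the prometheus.io/port annotation
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```

Any pod carrying those two annotations is then discovered and scraped automatically, with no per-application changes to the Prometheus config.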

Alertmanager:

Alertmanager reads its configuration from a config.yaml file, which contains the alert template path, email settings, and other receiver configuration.

In this setup, we are using an email receiver. Some infrastructure rules and email alerts are already included; fill in your email details and we are good to go. 😎
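
For reference, an email receiver in config.yaml generally looks like the following (the addresses and SMTP host are placeholders to be replaced with your own details; the repo’s file may organize routes differently):

```yaml
global:
  resolve_timeout: 5m
route:
  receiver: email-alerts
  group_by: ['alertname']
receivers:
  - name: email-alerts
    email_configs:
      - to: you@example.com            # placeholder: your inbox
        from: alerts@example.com       # placeholder: sender address
        smarthost: smtp.example.com:587  # placeholder: your SMTP server
        auth_username: alerts@example.com
        auth_password: your-app-password
```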

kubectl apply -f configmap.yml

Step-5) Create prometheus and alertmanager deployment:

kubectl apply -f deployment.yml

It uses the official Prometheus image from the docker hub.
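
A minimal Prometheus Deployment along these lines (a sketch only; container args, labels, and the ConfigMap name are assumptions and may differ from the repo’s deployment.yml) mounts the ConfigMap from Step 4 into /etc/prometheus:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus          # official image from Docker Hub
          args:
            - --config.file=/etc/prometheus/prometheus.yaml
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: config
              mountPath: /etc/prometheus  # prometheus.yaml and prometheus.rules land here
      volumes:
        - name: config
          configMap:
            name: prometheus-config       # assumed ConfigMap name
```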

Step-6) Exposing Prometheus and alertmanager as a Service:

To access the Prometheus dashboard over an IP or a DNS name, you need to expose it as a Kubernetes service.

kubectl apply -f service.yml

If you are on AWS or Google Cloud, you can use LoadBalancer as the service type, which will create a load balancer and point it at the service.

To do that, edit your service and change its type to LoadBalancer; AWS/GCP will then provision a load balancer for it.
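
The change amounts to setting spec.type in the Service manifest, roughly like this (name and selector labels are illustrative and should match your deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
spec:
  type: LoadBalancer   # was ClusterIP/NodePort; cloud provider provisions an LB
  selector:
    app: prometheus    # must match the pod labels of the deployment
  ports:
    - port: 9090
      targetPort: 9090
```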

Step-7) Configuration check:

Check whether all the pods and services are running with the following command:

kubectl get all -n monitoring

Kubernetes status

If you browse to Prometheus_Loadbalancer:9090, you will see the screen below:

Prometheus UI

If you switch to the Targets section:

Targets configured

Rules and alerts can be seen here:

Rules configured
Alerts configured

If you browse to Alertmanager_Loadbalancer:9090, you will see the screen below:

Alertmanager UI

Setting Up Grafana

We are going to use Grafana to visualize the Prometheus metrics and monitor the Kubernetes cluster, as well as database metrics. If you have a database, obviously! 😛

Please follow the article below for the setup:

How to Setup Grafana On Kubernetes cluster using helm chart

Thanks for staying with me till the end of this article on how to set up Prometheus and Alertmanager monitoring on a Kubernetes cluster. Remember to clap, comment, and subscribe. 🥳


Written by Shubham Khairnar

AWS Certified Developer | DevOps Engineer at Citiustech | Photographer | Learner