Monitoring postgres inside OpenShift

Good day, residents of Habr!



Today I want to tell you how we really wanted to monitor postgres and a couple of entities inside the OpenShift cluster and how we did it.



To start with, we had:







Working with the Java application was quite simple and transparent; to be more precise:



1) Add the dependency to build.gradle:



implementation "io.micrometer:micrometer-registry-prometheus"
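
The registry by itself only shows up if the endpoint is actually exposed. Assuming the application is a Spring Boot service with spring-boot-starter-actuator on the classpath (which is what the /actuator/prometheus path below suggests), the endpoint can be switched on in application.yml roughly like this; treat it as a sketch, not our exact config:

  management:
    endpoints:
      web:
        exposure:
          # expose the prometheus endpoint alongside health
          include: "health,prometheus"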
      
      





2) Launch prometheus with this configuration:



  - job_name: 'job-name'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            - 'name'




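One caveat worth a sketch: with role: pod and no filtering, prometheus will try to scrape every pod in that namespace. A keep rule on a pod label inside the same job is one common way to narrow it down; the label name and value below are made up for illustration:

    relabel_configs:
      # keep only pods carrying the label app=java-app
      - source_labels: [__meta_kubernetes_pod_label_app]
        action: keep
        regex: java-app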

3) Add the dashboards in Grafana.



Everything was quite simple and prosaic until it was time to monitor the databases sitting right next to us in the same namespace (yes, this is bad practice, nobody should do it, but life happens).



How does this work?



In addition to the pod with postgres and prometheus itself, we need one more entity: the exporter.



In the abstract, an exporter is an agent that collects metrics from an application or even a server. For postgres, the exporter is written in Go; it works by running SQL queries against the database from inside, and prometheus then scrapes the results. It also lets you extend the collected metrics by adding your own queries.
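
To give a feel for what "adding your own" looks like, here is a sketch of a custom queries file in the format the exporter understands; the query and names are taken from its documentation examples rather than from our setup, and the file is usually handed to the exporter via its extend.query-path option (the PG_EXPORTER_EXTEND_QUERY_PATH environment variable):

  # queries.yaml - each top-level key becomes a metric namespace
  pg_database:
    query: "SELECT datname, pg_database_size(datname) AS size_bytes FROM pg_database"
    metrics:
      - datname:
          usage: "LABEL"
          description: "Name of the database"
      - size_bytes:
          usage: "GAUGE"
          description: "Database size in bytes"

This would surface on /metrics as pg_database_size_bytes{datname="..."}.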



We deploy it like this (an example deployment.yaml, nothing about it is mandatory):



  ---
  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: postgres-exporter
    labels:
      app: {{ .Values.name }}
      monitoring: prometheus
  spec:
    replicas: 1
    revisionHistoryLimit: 5
    template:
      metadata:
        labels:
          app: postgres-exporter
          monitoring: prometheus
      spec:
        containers:
          - name: postgres-exporter
            image: exporter
            ports:
              - containerPort: 9187
                name: metrics
            env:
              # connection details for the database being monitored
              - name: DATA_SOURCE_URI
                value: postgresdb:5432/pstgr?sslmode=disable
              - name: DATA_SOURCE_USER
                value: postgres
              - name: DATA_SOURCE_PASS
                value: postgres
            resources:
              limits:
                cpu: 100m
                memory: 50Mi
              requests:
                cpu: 100m
                memory: 50Mi
            livenessProbe:
              tcpSocket:
                port: metrics
              initialDelaySeconds: 30
              periodSeconds: 30
            readinessProbe:
              tcpSocket:
                port: metrics
              initialDelaySeconds: 10
              periodSeconds: 30





It also needs a Service and an ImageStream.
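
The Service is the part prometheus will later find through DNS, so here is a minimal sketch that matches the deployment above (the ImageStream is routine and not shown):

  apiVersion: v1
  kind: Service
  metadata:
    name: postgres-exporter
    labels:
      app: postgres-exporter
      monitoring: prometheus
  spec:
    selector:
      app: postgres-exporter
    ports:
      - name: metrics
        port: 9187
        targetPort: metrics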



After deploying all of this, we really want everything to see each other.



Add the following piece to the prometheus config:



  - job_name: 'postgres_exporter'
    metrics_path: '/metrics'
    scrape_interval: 5s
    dns_sd_configs:
      - names:
          - 'postgres-exporter'
        type: 'A'
        port: 9187





And then it all worked; all that was left was to add this goodness to Grafana and enjoy the result.
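
To sketch what the first panels can be built on, a few PromQL queries over metrics the exporter exposes out of the box (the exact set depends on the exporter version, so check your own /metrics output):

  # can the exporter reach the database at all
  pg_up

  # current number of connections per database
  pg_stat_database_numbackends

  # transaction commit rate over the last five minutes
  rate(pg_stat_database_xact_commit[5m])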



Besides being able to add your own queries on the exporter side, you can also tweak the prometheus settings to collect the metrics you need in a more targeted way.
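
For example, metrics nobody looks at can be dropped right at scrape time with metric_relabel_configs inside the same job; the regex below is purely illustrative:

    metric_relabel_configs:
      # drop bgwriter statistics we do not use
      - source_labels: [__name__]
        regex: 'pg_stat_bgwriter_.*'
        action: drop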



We took a similar approach for:





P.S. All the names, ports and other details are made up and do not describe anything real.



Useful links:

List of various exporters


