# Getting Started with Helm Charts (Monitoring using Prometheus Operator)
This document explains how to get started with monitoring Scalar products on Kubernetes using Prometheus Operator (kube-prometheus-stack). We assume that you already have a Mac or Linux environment for testing. We use minikube in this document, but the steps should work in any Kubernetes cluster.
## What we create

We will deploy the following components on a Kubernetes cluster:
```
+--------------------------------------------------------------------------------------------------+
|  +------------------------------------------------------+                     +-----------------+ |
|  |                kube-prometheus-stack                 |                     | Scalar Products | |
|  |                                                      |                     |                 | |
|  | +--------------+ +--------------+ +--------------+   | -----(Monitor)----> | +-----------+   | |
|  | |  Prometheus  | | Alertmanager | |   Grafana    |   |                     | | ScalarDB  |   | |
|  | +-------+------+ +------+-------+ +------+-------+   |                     | +-----------+   | |
|  |         |               |                |           |                     | +-----------+   | |
|  |         +----------------+-----------------+         |                     | | ScalarDL  |   | |
|  |                          |                           |                     | +-----------+   | |
|  +--------------------------+---------------------------+                     +-----------------+ |
|                             |                                                                     |
|                             |                                                        Kubernetes   |
+----------------------------+---------------------------------------------------------------------+
                             | <- expose to localhost (127.0.0.1) or use load balancer etc. to access
                             |
               (Access Dashboard through HTTP)
                             |
                        +----+----+
                        | Browser |
                        +---------+
```
## Step 1. Start a Kubernetes cluster

First, you need to prepare a Kubernetes cluster. If you use a minikube environment, please refer to the Getting Started with Scalar Helm Charts document. If you have already started a Kubernetes cluster, you can skip this step.
## Step 2. Prepare a custom values file
1. Get the sample file scalar-prometheus-custom-values.yaml for the kube-prometheus-stack.

2. Add custom values in the scalar-prometheus-custom-values.yaml as follows.

   - Settings
     - Set `prometheus.service.type` to `LoadBalancer`
     - Set `alertmanager.service.type` to `LoadBalancer`
     - Set `grafana.service.type` to `LoadBalancer`
     - Set `grafana.service.port` to `3000`
   - Example

     ```yaml
     alertmanager:
       service:
         type: LoadBalancer
     ...
     grafana:
       service:
         type: LoadBalancer
         port: 3000
     ...
     prometheus:
       service:
         type: LoadBalancer
     ...
     ```
   - Note:
     - If you want to customize the Prometheus Operator deployment using Helm Charts, you need to set the following configuration for monitoring Scalar products.
       - The `serviceMonitorSelectorNilUsesHelmValues` and `ruleSelectorNilUsesHelmValues` settings must be set to `false` (`true` by default) to make Prometheus Operator detect the `ServiceMonitor` and `PrometheusRule` resources of Scalar products.
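As a sketch of that customization (assuming the standard kube-prometheus-stack values layout, in which these flags live under `prometheus.prometheusSpec`), the settings could be added to scalar-prometheus-custom-values.yaml like this:

```yaml
prometheus:
  prometheusSpec:
    # Allow Prometheus Operator to pick up ServiceMonitor and PrometheusRule
    # resources that are not part of this Helm release (e.g. Scalar products).
    serviceMonitorSelectorNilUsesHelmValues: false
    ruleSelectorNilUsesHelmValues: false
```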
## Step 3. Deploy kube-prometheus-stack
1. Add the prometheus-community helm repository.

   ```console
   helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
   ```

2. Create a namespace `monitoring` on the Kubernetes cluster.

   ```console
   kubectl create namespace monitoring
   ```

3. Deploy the kube-prometheus-stack.

   ```console
   helm install scalar-monitoring prometheus-community/kube-prometheus-stack -n monitoring -f scalar-prometheus-custom-values.yaml
   ```
## Step 4. Deploy (or Upgrade) Scalar products using Helm Charts
- Note:
  - The following explains the minimum steps. If you want to know more details about the deployment of ScalarDB and ScalarDL, please refer to the following documents.
1. To enable Prometheus monitoring of Scalar products, set the following configurations to `true` in the custom values file.

   - Configurations
     - `*.prometheusRule.enabled`
     - `*.grafanaDashboard.enabled`
     - `*.serviceMonitor.enabled`
   - Sample configuration files
     - ScalarDB (scalardb-custom-values.yaml)

       ```yaml
       envoy:
         prometheusRule:
           enabled: true
         grafanaDashboard:
           enabled: true
         serviceMonitor:
           enabled: true

       scalardb:
         prometheusRule:
           enabled: true
         grafanaDashboard:
           enabled: true
         serviceMonitor:
           enabled: true
       ```
     - ScalarDL Ledger (scalardl-ledger-custom-values.yaml)

       ```yaml
       envoy:
         prometheusRule:
           enabled: true
         grafanaDashboard:
           enabled: true
         serviceMonitor:
           enabled: true

       ledger:
         prometheusRule:
           enabled: true
         grafanaDashboard:
           enabled: true
         serviceMonitor:
           enabled: true
       ```
     - ScalarDL Auditor (scalardl-auditor-custom-values.yaml)

       ```yaml
       envoy:
         prometheusRule:
           enabled: true
         grafanaDashboard:
           enabled: true
         serviceMonitor:
           enabled: true

       auditor:
         prometheusRule:
           enabled: true
         grafanaDashboard:
           enabled: true
         serviceMonitor:
           enabled: true
       ```
2. Deploy (or Upgrade) Scalar products using Helm Charts with the above custom values file.

   - Examples
     - ScalarDB

       ```console
       helm install scalardb scalar-labs/scalardb -f ./scalardb-custom-values.yaml
       helm upgrade scalardb scalar-labs/scalardb -f ./scalardb-custom-values.yaml
       ```

     - ScalarDL Ledger

       ```console
       helm install scalardl-ledger scalar-labs/scalardl -f ./scalardl-ledger-custom-values.yaml
       helm upgrade scalardl-ledger scalar-labs/scalardl -f ./scalardl-ledger-custom-values.yaml
       ```

     - ScalarDL Auditor

       ```console
       helm install scalardl-auditor scalar-labs/scalardl-audit -f ./scalardl-auditor-custom-values.yaml
       helm upgrade scalardl-auditor scalar-labs/scalardl-audit -f ./scalardl-auditor-custom-values.yaml
       ```
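As a convenience, Helm's `--install` flag on `upgrade` makes the command idempotent (it installs the release if it does not exist and upgrades it otherwise), so a single command covers both cases. A sketch using the ScalarDB example above:

```console
helm upgrade --install scalardb scalar-labs/scalardb -f ./scalardb-custom-values.yaml
```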
## Step 5. Access Dashboards

### If you use minikube
1. To expose each service resource as your localhost (127.0.0.1), open another terminal, and run the `minikube tunnel` command.

   ```console
   minikube tunnel
   ```

   After running the `minikube tunnel` command, you can see the EXTERNAL-IP of each service resource as `127.0.0.1`.

   ```console
   kubectl get svc -n monitoring scalar-monitoring-kube-pro-prometheus scalar-monitoring-kube-pro-alertmanager scalar-monitoring-grafana
   ```

   [Command execution result]

   ```console
   NAME                                      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
   scalar-monitoring-kube-pro-prometheus     LoadBalancer   10.98.11.12    127.0.0.1     9090:30550/TCP   26m
   scalar-monitoring-kube-pro-alertmanager   LoadBalancer   10.98.151.66   127.0.0.1     9093:31684/TCP   26m
   scalar-monitoring-grafana                 LoadBalancer   10.103.19.4    127.0.0.1     3000:31948/TCP   26m
   ```
2. Access each dashboard.

   - Prometheus: http://localhost:9090/
   - Alertmanager: http://localhost:9093/
   - Grafana: http://localhost:3000/

   - Note:
     - You can get the username and password of Grafana as follows.
       - user

         ```console
         kubectl get secrets scalar-monitoring-grafana -n monitoring -o jsonpath='{.data.admin-user}' | base64 -d
         ```

       - password

         ```console
         kubectl get secrets scalar-monitoring-grafana -n monitoring -o jsonpath='{.data.admin-password}' | base64 -d
         ```
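The `base64 -d` part of those commands decodes the Base64-encoded values that Kubernetes stores in Secret data. For example, you can try the decoding step locally (the string `YWRtaW4=` below is just an illustration, not your actual credential):

```shell
# Kubernetes Secret data is Base64-encoded; base64 -d restores the raw value.
echo 'YWRtaW4=' | base64 -d   # prints "admin"
```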
### If you use a Kubernetes cluster other than minikube

If you use a Kubernetes cluster other than minikube, you need to access the LoadBalancer services in the manner provided by each Kubernetes cluster, for example, by using a load balancer provided by your cloud service or by using the `kubectl port-forward` command.
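For example, a `kubectl port-forward` invocation for Grafana could look like the following (the service name assumes the `scalar-monitoring` release name used in this guide; adjust it to your environment). While the command runs, Grafana is reachable at http://localhost:3000/.

```console
kubectl port-forward -n monitoring svc/scalar-monitoring-grafana 3000:3000
```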
## Step 6. Delete all resources

After completing the monitoring tests on the Kubernetes cluster, remove all resources.
1. Terminate the `minikube tunnel` command. (If you use minikube)

   ```console
   Ctrl + C
   ```

2. Uninstall the kube-prometheus-stack.

   ```console
   helm uninstall scalar-monitoring -n monitoring
   ```

3. Delete minikube. (Optional / If you use minikube)

   ```console
   minikube delete --all
   ```

   - Note:
     - If you deployed ScalarDB or ScalarDL, you need to remove them before deleting minikube.