Version: NG-2.16

Agent Installation and Management [Kubernetes/Redhat OpenShift]

Common prerequisites

  1. Namespace
kubectl create namespace vsmaps

Why is this required?

The provided manifests (ServiceAccount/RBAC/Pods/ConfigMaps) are typically scoped to a fixed namespace (commonly vsmaps). Keeping agents in a dedicated namespace avoids naming conflicts, simplifies RBAC scoping, and makes lifecycle operations (status/uninstall) predictable.

  2. Cluster access
  • You need kubectl access with permissions to create:
    • Namespaced resources (ConfigMaps, DaemonSets/Deployments, Services)
    • RBAC objects (ClusterRole, ClusterRoleBinding) for Healthbeat
  3. Connectivity
  • Ensure the agent pods can reach required Kubernetes endpoints (kubelet, kube-state-metrics, apiserver, etc.)
  • Ensure connectivity to Kafka broker(s) used by vuSmartMaps (topics used by agents are noted below).
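
Before installing, a plain TCP probe is a quick way to confirm broker reachability. The sketch below uses bash's /dev/tcp; the broker host and port are placeholders for your environment:

```shell
# check_tcp HOST PORT -> succeeds only if a TCP connection opens within 3 seconds
check_tcp() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Usage (placeholder broker address, substitute your own):
#   check_tcp kafka-broker.example.internal 9092 && echo reachable || echo unreachable
```

Run the same probe from inside a pod (e.g. via kubectl exec) if node-level and pod-level network policies differ.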

Healthbeat

What Healthbeat deploys

  • Deployment (cluster-wide metrics)
  • DaemonSet (node/pod/container metrics)

Prerequisites

  • Healthbeat image available on nodes (Download from Kubernetes O11ysource)
  • Metrics-proxy image available on nodes (Download from Kubernetes O11ysource)

Installation

1) Extract package

Extract the downloaded Healthbeat Kubernetes package.

  • <PACKAGE_HOME> = extracted folder (example: /home/user/k8s_install/healthbeat)

2) Import images on nodes

Run on each node (or ensure images are available on all nodes through your cluster’s image distribution approach):

cd <PACKAGE_HOME>
sudo ctr -n=k8s.io images import healthbeat.tar.gz
sudo ctr -n=k8s.io images import metrics-proxy.tar.gz
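
If the nodes are reachable over SSH, the per-node import can be scripted. The node names and package path below are placeholders for your environment:

```shell
# Hypothetical worker list and package path; substitute your own values.
NODES="node1 node2 node3"
PKG="/home/user/k8s_install/healthbeat"   # <PACKAGE_HOME>

import_images() {
  local node="$1"
  # Copy the image archives to the node, then load them into containerd's k8s.io namespace.
  scp "$PKG/healthbeat.tar.gz" "$PKG/metrics-proxy.tar.gz" "$node:/tmp/"
  ssh "$node" "sudo ctr -n=k8s.io images import /tmp/healthbeat.tar.gz && sudo ctr -n=k8s.io images import /tmp/metrics-proxy.tar.gz"
}

# for n in $NODES; do import_images "$n"; done
```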

3) Install kube-state-metrics (required for state_* metrics)

Option 1 — Helm (if available):

helm install kube-metrics . -n kube-system

Option 2 — If Helm is NOT installed on the client cluster:

  • If your package includes a Helm binary, use it directly (example path—adjust to your package layout):
<PACKAGE_HOME>/bin/helm install kube-metrics <PACKAGE_HOME>/kube-state-metrics -n kube-system
  • If you must deploy via YAML instead, use a kube-state-metrics manifest such as the one below:
---
# Source: kube-state-metrics/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: metrics
    app.kubernetes.io/part-of: kube-state-metrics
    app.kubernetes.io/name: kube-state-metrics
  name: kube-state-metrics
  namespace: kube-system
---
# Source: kube-state-metrics/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: metrics
    app.kubernetes.io/part-of: kube-state-metrics
    app.kubernetes.io/name: kube-state-metrics
  name: kube-state-metrics
rules:
- apiGroups: ["certificates.k8s.io"]
  resources:
  - certificatesigningrequests
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["list", "watch"]
- apiGroups: ["batch"]
  resources:
  - cronjobs
  verbs: ["list", "watch"]
- apiGroups: ["extensions", "apps"]
  resources:
  - daemonsets
  verbs: ["list", "watch"]
- apiGroups: ["extensions", "apps"]
  resources:
  - deployments
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources:
  - endpoints
  verbs: ["list", "watch"]
- apiGroups: ["autoscaling"]
  resources:
  - horizontalpodautoscalers
  verbs: ["list", "watch"]
- apiGroups: ["extensions", "networking.k8s.io"]
  resources:
  - ingresses
  verbs: ["list", "watch"]
- apiGroups: ["batch"]
  resources:
  - jobs
  verbs: ["list", "watch"]
- apiGroups: ["coordination.k8s.io"]
  resources:
  - leases
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources:
  - limitranges
  verbs: ["list", "watch"]
- apiGroups: ["admissionregistration.k8s.io"]
  resources:
  - mutatingwebhookconfigurations
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources:
  - namespaces
  verbs: ["list", "watch"]
- apiGroups: ["networking.k8s.io"]
  resources:
  - networkpolicies
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources:
  - nodes
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources:
  - persistentvolumeclaims
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources:
  - persistentvolumes
  verbs: ["list", "watch"]
- apiGroups: ["policy"]
  resources:
  - poddisruptionbudgets
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources:
  - pods
  verbs: ["list", "watch"]
- apiGroups: ["extensions", "apps"]
  resources:
  - replicasets
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources:
  - replicationcontrollers
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources:
  - resourcequotas
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources:
  - secrets
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources:
  - services
  verbs: ["list", "watch"]
- apiGroups: ["apps"]
  resources:
  - statefulsets
  verbs: ["list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources:
  - storageclasses
  verbs: ["list", "watch"]
- apiGroups: ["admissionregistration.k8s.io"]
  resources:
  - validatingwebhookconfigurations
  verbs: ["list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources:
  - volumeattachments
  verbs: ["list", "watch"]
---
# Source: kube-state-metrics/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: metrics
    app.kubernetes.io/part-of: kube-state-metrics
    app.kubernetes.io/name: kube-state-metrics
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: kube-system
---
# Source: kube-state-metrics/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics
  namespace: kube-system
  labels:
    app.kubernetes.io/component: metrics
    app.kubernetes.io/part-of: kube-state-metrics
    app.kubernetes.io/name: kube-state-metrics
  annotations:
    prometheus.io/scrape: 'true'
spec:
  type: "ClusterIP"
  ports:
  - name: "http"
    protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app.kubernetes.io/name: kube-state-metrics
---
# Source: kube-state-metrics/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: kube-system
  labels:
    app.kubernetes.io/component: metrics
    app.kubernetes.io/part-of: kube-state-metrics
    app.kubernetes.io/name: kube-state-metrics
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
  replicas: 1
  strategy:
    type: RollingUpdate
  revisionHistoryLimit: 10
  template:
    metadata:
      labels:
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: metrics
        app.kubernetes.io/part-of: kube-state-metrics
        app.kubernetes.io/name: kube-state-metrics
        app.kubernetes.io/version: "2.10.1"
    spec:
      hostNetwork: false
      serviceAccountName: kube-state-metrics
      securityContext:
        fsGroup: 65534
        runAsGroup: 65534
        runAsNonRoot: true
        runAsUser: 65534
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: kube-state-metrics
        args:
        - --port=8080
        - --resources=certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments
        imagePullPolicy: IfNotPresent
        image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.10.1
        ports:
        - containerPort: 8080
          name: "http"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL

Apply:

kubectl apply -f kube-state-metrics.yaml
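
To confirm the rollout succeeded, a check along these lines can help. The curlimages/curl image is an assumption; use any curl-capable image your cluster can pull:

```shell
# Wait for the Deployment to become available, then spot-check the metrics endpoint.
verify_ksm() {
  kubectl rollout status deploy/kube-state-metrics -n kube-system --timeout=120s
  kubectl run ksm-check --rm -i --restart=Never -n kube-system \
    --image=curlimages/curl -- \
    curl -s http://kube-state-metrics.kube-system.svc:8080/metrics
}

# verify_ksm | head -n 5   # first few metric lines
```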

4) Deploy Healthbeat

Start:

kubectl apply -f <PACKAGE_HOME>/conf.pack/kubernetes.yaml

Stop:

kubectl delete -f <PACKAGE_HOME>/conf.pack/kubernetes.yaml

Status:

kubectl get pods -n vsmaps | grep healthbeat-k8s

Update existing Healthbeat configurations

If the agent is already installed and you only need configuration updates:

  1. Download the agent update package from the Kubernetes O11ysource.
  2. Extract it on the target environment.
  3. Apply the updated config/manifest from <PACKAGE_HOME>/conf.pack:
kubectl apply -f <PACKAGE_HOME>/conf.pack/kubernetes.yaml -n vsmaps

If only ConfigMaps changed, you may also restart the pods:

kubectl rollout restart ds/healthbeat-k8s -n vsmaps || true
kubectl rollout restart deploy/healthbeat-k8s -n vsmaps || true
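
The restart-and-wait sequence can be wrapped so it only touches workload kinds that actually exist and blocks until each rollout completes:

```shell
# Restart whichever Healthbeat workload kinds exist, then wait for each rollout.
restart_healthbeat() {
  local kind
  for kind in ds deploy; do
    if kubectl get "$kind/healthbeat-k8s" -n vsmaps >/dev/null 2>&1; then
      kubectl rollout restart "$kind/healthbeat-k8s" -n vsmaps
      kubectl rollout status "$kind/healthbeat-k8s" -n vsmaps --timeout=180s
    fi
  done
}
```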

Uninstall

kubectl delete -f kubernetes.yaml -n vsmaps

Red Hat OpenShift Notes

OpenShift requires additional handling for:

  1. Kubelet TLS CA bundle (to scrape Kubelet securely)
  2. SCC/UID behavior (pods may crash due to randomized UID assignment)

1) Kubelet TLS CA bundle

Update the Healthbeat manifest configmap (healthbeat.yml) to include:

ssl.certificate_authorities:
- <path_to_ssl>/kubelet-service-ca.crt

OpenShift clusters may expose the CA bundle via secrets/configmaps; in some setups it can be available as:

  • /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
    Or a configmap such as kubelet-serving-ca (cluster-specific).

If kube-state-metrics is accessed over HTTPS, add the bearer token and CA settings as well.
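
As a sketch only, such a module entry in healthbeat.yml might look like the following, assuming Healthbeat follows the standard Beats kubernetes module options; the HTTPS URL and port are illustrative:

```yaml
- module: kubernetes
  metricsets: ["state_pod"]
  hosts: ["https://kube-state-metrics.kube-system.svc:8443"]   # illustrative URL/port
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  ssl.certificate_authorities:
    - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
```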

2) SCC / Random UID crashloop handling

Issue: On OpenShift, pods can enter CrashLoopBackOff because OpenShift assigns a random UID that may not match the image-level user expectations.

Fix: Allow the service account to run with the image-level user by granting anyuid:

oc adm policy add-scc-to-user anyuid -z healthbeat-k8s -n <NAMESPACE>

Example (customer-specific):

oc adm policy add-scc-to-user anyuid -z healthbeat-k8s -n adpnp-prod2

Reference (alternate approach used in some setups): some environments instead grant privileged SCC and mark pod security context accordingly.
Choose anyuid vs privileged based on the customer’s security policy and what the manifest requires.
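
To verify which SCC actually admitted the pods after the grant, you can read the openshift.io/scc annotation OpenShift records on each pod. The label selector below is an assumption; match it to your manifest:

```shell
# Print pod name and the SCC that admitted it, for each Healthbeat pod.
check_scc() {
  oc get pods -n "$1" -l k8s-app=healthbeat-k8s \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.openshift\.io/scc}{"\n"}{end}'
}

# check_scc <NAMESPACE>   # expect "anyuid" after the grant above
```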

Logbeat

Modes supported

  • DaemonSet Pod (collect container stdout logs from each node)
  • Sidecar container (collect application logs from a mounted path inside a pod)

Prerequisites

  • Logbeat image available on nodes (Download from Kubernetes O11ysource)
  • Access to host path for container logs:
    • /var/log/containers

Installation

1) Extract package

  • <PACKAGE_HOME> = extracted folder

2) Import image on nodes

cd <PACKAGE_HOME>
sudo ctr -n=k8s.io images import logbeat.tar.gz

3) Update Logbeat configuration

Update the ConfigMap in logbeat.yml:

  • Set which pod logs to collect (paths under /var/log/containers/...)
  • Set Kafka broker host/port:
output.kafka:
  hosts: ["${SHIPPER_HOST}:${SHIPPER_LOGS_PORT}"]
  topic: "%{[type]}"

Also ensure the registry/data path is writable/persistent as per your manifest (so Logbeat doesn’t re-read files on restart).
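
Combining both settings, a minimal input/output fragment for logbeat.yml might look like the sketch below; the log glob and type value are illustrative, and Logbeat is assumed to follow Filebeat configuration conventions:

```yaml
filebeat.inputs:
- type: log
  paths:
  - /var/log/containers/myapp-*.log      # illustrative selector
  fields_under_root: true
  fields:
    type: "k8s-container-logs"           # becomes the Kafka topic below

output.kafka:
  hosts: ["${SHIPPER_HOST}:${SHIPPER_LOGS_PORT}"]
  topic: "%{[type]}"
```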

4) Deploy Logbeat

Start:

kubectl apply -f logbeat.yml -n vsmaps

Stop:

kubectl delete -f logbeat.yml -n vsmaps

Status:

kubectl get pods -n vsmaps | grep logbeat

Update existing Logbeat configurations

  1. Download the Logbeat update package
  2. Extract it
  3. Apply the updated YAML/ConfigMap:
kubectl apply -f <PACKAGE_HOME>/conf.pack/logbeat.yml -n vsmaps

Restart (DaemonSet):

kubectl rollout restart ds/logbeat -n vsmaps

Uninstall

kubectl delete -f logbeat.yml -n vsmaps

Red Hat OpenShift Notes

OpenShift requires additional handling for:

  1. Kubelet TLS CA bundle (to scrape Kubelet securely)
  2. SCC/UID behavior (pods may crash due to randomized UID assignment)

1) Kubelet TLS CA bundle

Update the Logbeat manifest configmap (logbeat.yml) to include:

ssl.certificate_authorities:
- <path_to_ssl>/kubelet-service-ca.crt

OpenShift clusters may expose the CA bundle via secrets/configmaps; in some setups it can be available as:

  • /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
    Or a configmap such as kubelet-serving-ca (cluster-specific).

If kube-state-metrics is accessed over HTTPS, add the bearer token and CA settings as well.

2) SCC / Random UID crashloop handling

Issue: On OpenShift, pods can enter CrashLoopBackOff because OpenShift assigns a random UID that may not match the image-level user expectations.

Fix: Allow the service account to run with the image-level user by granting anyuid:

oc adm policy add-scc-to-user anyuid -z logbeat-k8s -n <NAMESPACE>

Example (customer-specific):

oc adm policy add-scc-to-user anyuid -z logbeat-k8s -n adpnp-prod2

Reference (alternate approach used in some setups): some environments instead grant privileged SCC and mark pod security context accordingly.
Choose anyuid vs privileged based on the customer’s security policy and what the manifest requires.

Logbeat as a Sidecar

Use sidecar mode when your application writes logs to a pod-local directory (for example, /var/log/vusmartmaps/) and you want Logbeat to ship those logs.

Below is an example of running Logbeat as a sidecar container. In this setup, the main application container writes logs to /var/log/vusmartmaps/. To collect those files, a Logbeat sidecar runs in the same pod.

Key points in this example:

  • A ConfigMap contains the Logbeat configuration.
  • The ConfigMap is mounted into the Logbeat container.
  • The logs directory (/var/log/vusmartmaps/) is mounted into both containers (main application and Logbeat), so Logbeat can read the same files the application writes.
Note: A ServiceAccount is not required for sidecar mode because Logbeat reads files from a shared in-pod volume and does not need Kubernetes API permissions for pod metadata discovery.

Refer to the Logbeat configuration below:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logbeat-config
  labels:
    k8s-app: logbeat
data:
  logbeat.yml: |-
    filebeat.inputs:
    - type: log
      paths:
      - /var/log/vusmartmaps/vusoft_logging.2023-10-18
      fields_under_root: true
      fields:
        document_type: logbeatpath1
        type: "test"
        node_name: "${NODE_NAME}"
        pod_name: "${POD_NAME}"
        pod_namespace: "${POD_NAMESPACE}"
        pod_ip: "${POD_IP}"

    processors:
    - add_cloud_metadata:
    - add_host_metadata:

    output.kafka:
      hosts: ["${SHIPPER_HOST}:${SHIPPER_LOGS_PORT}"]
      #topic: "%{[fields.type]}"
      topic: "%{[type]}"
      required_acks: 1
      compression: gzip

Please refer to the example container section below:

# Source: cairo/templates/vunode-sf.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cairo-vunode
  labels:
    app: vunode
    managedBy: vunet
    helm.sh/chart: cairo-0.1.0
spec:
  serviceName: vuinterface
  replicas: 1
  selector:
    matchLabels:
      app: vunode
      managedBy: vunet
      helm.sh/chart: cairo-0.1.0
  template:
    metadata:
      labels:
        app: vunode
        managedBy: vunet
        helm.sh/chart: cairo-0.1.0
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
        runAsGroup: 1000
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: cairo
                operator: In
                values:
                - "True"
      containers:
      - image: ghcr.io/vunetsystems/cairo:latest
        imagePullPolicy: Never
        name: vunode
        resources: {}
        volumeMounts:
        - mountPath: /var/log/vusmartmaps/
          name: vunode-logs
        workingDir: /home/vunet/workspace/cairo
      - name: logbeat
        image: ghcr.io/vunetsystems/logbeat:latest-8
        command:
        - "/usr/share/filebeat/filebeat"
        args: ["-c", "/etc/logbeat.yml", "-e"]
        #securityContext:
        #  runAsUser: 0
        env:
        - name: PATH
          value: /usr/share/filebeat:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        - name: SHIPPER_HOST
          value: broker
        - name: SHIPPER_LOGS_PORT
          value: "9092"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        volumeMounts:
        - mountPath: /var/log/vusmartmaps/
          name: vunode-logs
        - mountPath: /etc/logbeat.yml
          subPath: logbeat.yml
          name: config-logbeat
      hostname: vunode
      volumes:
      - configMap:
          name: logbeat-config
          defaultMode: 0640
        name: config-logbeat
  updateStrategy: {}
  volumeClaimTemplates:
  - metadata:
      creationTimestamp: null
      name: vunode-logs
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
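
Once applied, a quick way to confirm the sidecar wiring is to list the pod's container names (namespace passed as an argument):

```shell
# List the containers in the first StatefulSet pod; both names should appear.
sidecar_check() {
  kubectl get pod cairo-vunode-0 -n "$1" \
    -o jsonpath='{.spec.containers[*].name}'
}

# sidecar_check <NAMESPACE>   # expect: vunode logbeat
```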