Version: NG 3.0 (Beta)

Deployment and Installation for Managed Kubernetes Clusters

Introduction

This document describes the deployment and installation process of vuSmartMaps™ NG-3.0 on supported Managed Kubernetes clusters. The deployment is performed using vuLauncher, a command-line utility that automates the installation of vuSmartMaps services on existing Kubernetes environments such as Amazon EKS, Azure AKS, Google Kubernetes Engine (GKE), OpenShift, and more.

Deployment prerequisites

Managed Kubernetes Cluster

vuSmartMaps requires a managed Kubernetes cluster to set up the platform. Supported Kubernetes clusters include:

  • Managed on-premises clusters such as OpenShift or Rancher
  • Amazon Elastic Kubernetes Service
  • Azure Kubernetes Service
  • Google Kubernetes Engine
  • VMware Tanzu Kubernetes Grid

Common Prerequisites for Managed Kubernetes Cluster

  • Deployment Host

    • A system with kubectl access to the target managed Kubernetes cluster is required to execute vuLauncher.
  • Managed Kubernetes Environment

    • A supported managed Kubernetes environment with the number of worker nodes provisioned per the agreed-upon sizing estimate.
    • Storage classes that support dynamic provisioning, configured for various data tiers as per the agreed-upon sizing.
  • Supported Kubernetes Version

    • The Kubernetes version must be 1.29 or higher.
  • Launcher Configuration

    • vuLauncher uses a set of YAML configuration files that define the installation environment and resource limits for application services. These configuration files are typically prepared before deployment and are specific to each installation.
  • Container Registry

    • The target environment must have access to a container registry for storing images used in deploying vuSmartMaps services. Two options are supported:

      • VuNet Public Registry (ghcr.io): If outbound internet access to VuNet's GitHub Container Registry (GHCR) is permitted, images can be pulled directly at runtime. No local registry setup is required.

      • Customer-Managed Private Registry: If internet access is restricted, image binaries will be shared separately by VuNet. The customer must upload these images to their own registry and update the d-spec accordingly.

  • DNS Names and Load Balancer

    • vuSmartMaps is accessed externally through an ingress proxy and load-balancer service. This ingress layer acts as the single entry point for both the user interface and data ingestion. In cloud-managed Kubernetes environments, the ingress service is typically deployed as a LoadBalancer service, which provisions an external IP address and/or hostname.

    • For external access to the vuSmartMaps UI, it is recommended to configure a meaningful DNS name pointing to the provisioned load-balancer IP or hostname.

    • For data collection, each DataHub instance requires a unique DNS name, routed through the ingress proxy service using SNI (Server Name Indication)-based routing.

    • For example, if the domain is mybank.com, vuSmartMaps is deployed with 3 DataHub instances, and the environment name is "prod", the recommended DNS configuration is:

| DNS Name | Points To |
| --- | --- |
| vsmaps.prod.mybank.com | Load Balancer IP / Hostname |
| vunet-datahub-0.prod.mybank.com | DataHub node(s) |
| vunet-datahub-1.prod.mybank.com | DataHub node(s) |
| vunet-datahub-2.prod.mybank.com | DataHub node(s) |
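For illustration, the mappings above could be expressed as BIND-style zone records. All IP addresses below are documentation-range placeholders, not real values; where each record should actually point depends on how the load balancer and DataHub nodes are exposed in your environment:

```text
; Illustrative zone-file fragment for prod.mybank.com (placeholder IPs)
vsmaps.prod.mybank.com.            IN A  203.0.113.10   ; load-balancer IP
vunet-datahub-0.prod.mybank.com.   IN A  203.0.113.21   ; DataHub node
vunet-datahub-1.prod.mybank.com.   IN A  203.0.113.22   ; DataHub node
vunet-datahub-2.prod.mybank.com.   IN A  203.0.113.23   ; DataHub node
```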
note
  • In environments where unique DNS names for vuSmartMaps cannot be created, host-IP-based data collection must be used as a fallback. In this mode, data flows directly from data sources to the DataHub (Kafka broker) services using node IPs. The d-spec file must be configured accordingly.
  • This approach requires additional firewall rules — specifically, TCP port 9094 must be opened to allow direct data collection.
  • DNS-based collection is the preferred and recommended approach for production deployments. Host-IP-based collection should only be used as a fallback for lightweight or non-production environments.

Cloud-Specific Prerequisites

Azure Kubernetes Service

Step 1: List Available AKS Clusters

az aks list --output table

Lists all AKS clusters in your Azure subscription, helping identify the correct cluster name and resource group before fetching credentials.

Step 2: Retrieve AKS Cluster Credentials

az aks get-credentials \
--resource-group <resource-group-name> \
--name <aks-cluster-name>

Configures kubectl with the credentials required to access the specified AKS cluster.

Step 3: Verify Cluster Access

kubectl get nodes -o wide 

Confirms that all nodes are in the Ready state before proceeding with deployment.

Step 4: Set Shared Cluster Variable

export shared_cluster=true

Required when deploying on a shared cluster. This flag prevents conflicts with existing workloads.

Step 5: Apply RBAC Manifests

cd $HOME/launcher/static-files/cluster-setup
kubectl apply -f .

Applies Role-Based Access Control (RBAC) manifests to grant the necessary permissions to application components.

Step 6: Create Namespaces

kubectl create ns vsmaps

Creates the dedicated Kubernetes namespace for the vuSmartMaps deployment.

Step 7: Create Container Registry Secret

kubectl create secret docker-registry <image-pull-secret-name> \
--docker-server=<container-registry> \
--docker-username=<username> \
--docker-password=<token> \
--docker-email=<email> \
-n <namespace>

Purpose: Allows the Kubernetes cluster to authenticate with the container registry and pull the required images.

  • <image-pull-secret-name> → Name of the Kubernetes secret to create.
  • --docker-server=<container-registry> → URL of the container registry (e.g., ghcr.io).
  • --docker-username=<username> → Username for registry authentication.
  • --docker-password=<token> → Personal Access Token or password for registry authentication.
  • --docker-email=<email> → Email address associated with the registry account.
  • -n <namespace> → Kubernetes namespace where the secret will be created.

Example: Using VuNet GHCR

If using VuNet's public GitHub Container Registry, the registry URL is ghcr.io/vunetsystems. Use the credentials provided by VuNet support.
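For reference, `kubectl create secret docker-registry` simply packages these parameters into a base64-encoded `.dockerconfigjson`. The sketch below reconstructs that payload by hand with placeholder credentials (not real VuNet values), which can be useful when debugging image-pull authentication failures:

```shell
# Build the .dockerconfigjson payload that a docker-registry secret stores.
# All credentials below are placeholders.
REGISTRY="ghcr.io"
USERNAME="example-user"
TOKEN="example-token"
EMAIL="noreply@example.com"

# Registry auth is "username:password", base64-encoded.
AUTH=$(printf '%s:%s' "$USERNAME" "$TOKEN" | base64)

printf '{"auths":{"%s":{"username":"%s","password":"%s","email":"%s","auth":"%s"}}}\n' \
  "$REGISTRY" "$USERNAME" "$TOKEN" "$EMAIL" "$AUTH"
```

Comparing this output against `kubectl get secret <image-pull-secret-name> -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d` on the cluster can confirm the stored credentials match what you intended.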

Amazon EKS

Step 1: Verify AWS CLI Configuration

aws configure list

Displays the currently configured AWS CLI profile, including the default region and credentials.

Step 2: List EKS Clusters

aws eks list-clusters --region <region>

Lists all EKS clusters in the specified region to identify the correct cluster name.

Step 3: Generate Kubeconfig

aws eks update-kubeconfig \
--region <region> \
--name <cluster-name>

Configures kubectl to communicate with the specified EKS cluster.

Step 4: Verify Cluster Access

kubectl get nodes

Confirms that the cluster is accessible and all nodes are in the Ready state.

Step 5: Associate OIDC Provider

eksctl utils associate-iam-oidc-provider \
--region <region> \
--cluster <cluster-name> \
--approve

Associates an OpenID Connect (OIDC) identity provider with the EKS cluster. This step is mandatory for enabling IAM roles for service accounts (IRSA), which are required by the EBS CSI driver.

Step 6: Install the EBS CSI Driver Add-on

aws eks create-addon \
--cluster-name <cluster-name> \
--addon-name aws-ebs-csi-driver \
--region <region>

Installs the Amazon EBS Container Storage Interface (CSI) driver add-on. This is required to enable dynamic PersistentVolume (PV) provisioning using Amazon EBS storage on the cluster.
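Once the driver is installed, a StorageClass backed by it can serve as the `default` tier referenced later in the d-spec. The example below is a minimal sketch; the class name and the gp3 volume type are assumptions to be aligned with the agreed sizing plan:

```yaml
# Illustrative StorageClass using the EBS CSI driver (name and volume
# type are assumptions -- adjust per the agreed sizing/tier plan).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
```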

Step 7: Apply RBAC Manifests

cd $HOME/launcher/static-files/cluster-setup
kubectl apply -f .

Step 8: Create Namespaces

kubectl create ns vsmaps

Step 9: Create Container Registry Secret

kubectl create secret docker-registry <image-pull-secret-name> \
--docker-server=<container-registry> \
--docker-username=<username> \
--docker-password=<token> \
--docker-email=<email> \
-n <namespace>

Purpose: Allows the cluster to pull container images.

note

Refer to the AKS section above for parameter descriptions. The same parameters apply.

Google Kubernetes Engine

Step 1: List GKE Clusters

gcloud container clusters list

Lists all GKE clusters in the active project, helping identify the cluster name and zone.

Step 2: Verify Active Project

gcloud config list

Confirms that the correct Google Cloud project is currently active.

Step 3: Generate Kubeconfig

gcloud container clusters get-credentials <cluster-name> \
--zone <zone> \
--project <project-id>

Configures kubectl with the credentials required to access the specified GKE cluster.

Step 4: Verify Cluster Access

kubectl get nodes

Step 5: Apply RBAC Manifests

cd $HOME/launcher/static-files/cluster-setup
kubectl apply -f .

Step 6: Create Namespaces

kubectl create ns vsmaps

Step 7: Create Container Registry Secret

kubectl create secret docker-registry <image-pull-secret-name> \
--docker-server=<container-registry> \
--docker-username=<username> \
--docker-password=<token> \
--docker-email=<email> \
-n <namespace>

Purpose: Allows the cluster to pull container images.

note

Refer to the AKS section above for parameter descriptions. The same parameters apply.

OpenShift Cluster (OCP)

Step 1: Apply CRDs and RBAC Manifests (If No Admin Access)

oc apply -f launcher/static-files/cluster-setup

Applies Custom Resource Definitions (CRDs) and RBAC manifests required by application components. This step is required even if you do not have full cluster-admin access.

Step 2: Assign SCC Policies (Mandatory)

oc adm policy add-scc-to-user nonroot -z <service-account> -n <namespace>
oc adm policy add-scc-to-user anyuid -z <service-account> -n <namespace>
  • OpenShift enforces Security Context Constraints (SCCs) to control pod permissions. Both non-root and anyuid SCCs must be applied to all created service accounts to allow pods to run with the UIDs specified in the Helm charts.
  • These SCCs must be applied for each service during deployment, as vuLauncher does not configure them automatically.
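Because these assignments must be repeated for every service account, it can help to script them. The sketch below only prints the `oc` commands for review (the service-account names are placeholders); remove the `echo` to actually apply them against the cluster:

```shell
# Print (not execute) the SCC-assignment commands for each service account.
# Service-account names here are placeholders -- substitute the real ones
# created during deployment.
NAMESPACE="vsmaps"
SERVICE_ACCOUNTS="example-sa-1 example-sa-2"

for sa in $SERVICE_ACCOUNTS; do
  for scc in nonroot anyuid; do
    echo "oc adm policy add-scc-to-user $scc -z $sa -n $NAMESPACE"
  done
done
```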

Launcher Prerequisites

Before proceeding, ensure that the following network ports are properly configured in your environment.

Access Requirements

Ensure that the required ports are open for external communication.

External Communication

The following ports must be open for downloading the vuSmartMaps binary and for cluster access.

| # | Source | Destination | Port | Protocol | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Jump Server (vuLauncher) | download.vunetsystems.com | 443 | TCP | Download the vuSmartMaps installation package. If internet access is restricted, the binary must be downloaded offline and copied to the jump server. |
| 2 | All Nodes | ghcr.io/vunetsystems/ | 443 | TCP | VuNet GitHub Container Registry. Not required if a local container registry is used. |
| 3 | All Nodes | pkg-containers.githubusercontent.com | 443 | TCP | GitHub package containers. Not required if a local container registry is used. |

Generic External Access Requirements

| # | Source | Destination | Port | Protocol | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Users and administrators | vuSmartMaps cluster | 443 | TCP | vuSmartMaps UI access, installation, and configuration. |
| 2 | Jump Server (vuLauncher) | Kubernetes control plane node(s) | 443 or 6443 | TCP | Access to the Kubernetes API server via kubeconfig. |

Data Source Access Requirements (Telemetry Collection)

| # | Source | Destination | Port(s) | Protocol | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Nodes running vuSmartMaps agents | All DataHub nodes | 443 (DNS-based) / 9094 (host-IP-based) | TCP | Port 443 is used when DNS-based data collection is configured (default). Port 9094 is required only when host-IP-based collection is used. |
| 2 | App servers with vuNet APM agents | vuSmartMaps cluster nodes | 4317, 4318 | TCP | OpenTelemetry data collection for APM traces and logs. Ports must be open on all cluster nodes since the collector pod can be scheduled on any node. |
| 3 | vuSmartMaps cluster nodes | Network devices (SNMP) | 161 | UDP | SNMP polling from vuSmartMaps to supported network and security devices. |
| 4 | vuSmartMaps cluster nodes | Systems polled over HTTP | 443 | TCP | HTTP polling, Prometheus scraping, and similar collection methods. |
| 5 | vuSmartMaps cluster nodes | Databases (JDBC) | DB listener port | TCP | JDBC-based polling to collect database performance metrics. |
| 6 | Network devices | vuSmartMaps cluster nodes | 162 | UDP | SNMP trap ingestion from network devices. |
| 7 | Network devices | vuSmartMaps cluster nodes | 514 | UDP | Syslog ingestion from network devices. |

Intra-Cluster Communication

Within the vuSmartMaps cluster, various services, including the Kubernetes control plane, communicate across worker nodes. It is recommended that all nodes reside within the same VLAN or subnet for unrestricted communication. If access control policies are enforced, the following ports must be opened.

Ports: vuLauncher VM to vuSmartMaps Nodes

| # | Port | Protocol | Description |
| --- | --- | --- | --- |
| 1 | 22 | TCP | SSH access from the vuLauncher VM to all vuSmartMaps nodes. |
| 2 | 443 | TCP | Main UI access port. Must be open between all vuSmartMaps nodes and the vuLauncher VM. |

Configuration Files

  • The following configuration files are included in the build. These are client-specific and will vary per deployment.

  • Two primary YAML files must be reviewed and verified before initiating deployment:

    • Environment Configuration File: Contains VM access credentials and cluster connection details.

    • Deployment Specification (d-spec): Specifies the list of nodes, service-to-node mappings, and resource limits for each service.

Deployment Specification (D-spec)

  • The Deployment Specification (D-spec) is unique to each environment and customer installation. It defines deployment-specific configurations such as resource limits, service distribution across nodes, storage class mappings, and other environment-specific parameters required for installation.
  • The D-spec for any installation is created by the VuNet Support team after the required infrastructure is provisioned in the target environment.

primary_images — Container Registry Configuration

This section contains the credentials required to pull container images during deployment. Update this section with the applicable registry details based on the customer's setup.

Option 1 — Direct Pull from VuNet GHCR (Recommended if internet whitelisting is available):

If outbound internet access to VuNet's GitHub Container Registry (GHCR) is permitted, images can be pulled directly at runtime using the GHCR repository details in this section. Credentials are provided in encrypted form.

Option 2 — Customer-Managed / Private Registry:

If the environment does not have GHCR access, image binaries will be shared separately by VuNet. The administrator of the target environment must upload these images to their registry and update this section with the corresponding registry URL and credentials.

primary_images:
  location: <URL of target registry>
  username: "*******"
  password: "*******"
  email: "noreply@customer.com"

category — Node Grouping Tags

This section defines a list of tag names used to group nodes. If a new category is needed, its name must be added here before it can be referenced in the node or services sections.

category:
- "data-store-optimized"
- "data-hub-optimized"
- "core-services-optimized"

env_name — Environment Type

This parameter specifies the deployment environment type:

  • "vm": Use when deploying on virtual machines without an existing Kubernetes cluster. vuLauncher will install Kubernetes using kubeadm.
  • "cloud": Use when deploying on an existing managed Kubernetes cluster (e.g., AKS, EKS, GKE, OCP).
env_name: "cloud"

node — Node Configuration for Managed environments

When env_name is set to "cloud", the node section references existing Kubernetes node names rather than IP addresses.

  • node_name: The Kubernetes node name, as returned by kubectl get nodes.
  • tags: Assigns the node to one or more category groups.
node:
- node_name: "node-1"
  tags:
  - "core-services-optimized"
- node_name: "node-2"
  tags:
  - "core-services-optimized"
- node_name: "node-3"
  tags:
  - "data-hub-optimized"
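For larger clusters, the node list can be generated rather than typed by hand. The sketch below feeds a hard-coded sample of node names through a loop (in a real environment, pipe the output of `kubectl get nodes` instead); the single tag applied is just an example:

```shell
# Emit d-spec `node:` entries from a list of node names.
# The names below are a hard-coded sample standing in for kubectl output.
echo "node:"
printf 'node-1\nnode-2\nnode-3\n' | while read -r name; do
  printf -- '- node_name: "%s"\n  tags:\n  - "core-services-optimized"\n' "$name"
done
```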

storage_class — Storage Class Mapping

This section is only applicable when env_name is "cloud". It maps each data tier to an existing Kubernetes storage class. The default tier is mandatory; the others are optional.

storage_class:
  default: "storageclass-1"
  hot: "storageclass-2"
  cold: "storageclass-3"
  warm: "storageclass-4"
  default-replica: ""

deployments — vuStream and vuBlock Component Configuration

This section defines resource limits for vuStream and vuBlock components deployed as part of the vuSmartMaps installation.

  • name: Name of the deployment component.
  • tags: Category groups for node selection. Only nodes with matching tags will be considered.
  • resource_limit: Defines minimum and maximum CPU and memory resources for the pod.
deployments:
- name: pipeline
  resource_limit:
    min:
      cpu: "0.5"
      memory: "128Mi"
    max:
      cpu: "1"
      memory: "256Mi"

services — Service Configuration

This section lists all services to be deployed as part of the vuSmartMaps installation. The required values are normally pre-populated in the d-spec generated by the VuNet support team at the time of installation.

  • name: Name of the service.
  • tags: Category groups for node selection.
  • resource_limit: Minimum and maximum CPU and memory for the pod.
  • replica: Number of pod replicas to deploy.
  • disk: Required persistent storage, specified per data tier.
  • config_override: Optional. Allows overriding values in the service's values.yaml before deployment. The key is the YAML path, and the value is the override. If the key does not exist, it is created.
services:
- name: "service_name"
  tags:
  - "core-services-optimized"
  resource_limit:
    min:
      cpu: "0.25"
      memory: "500Mi"
    max:
      cpu: "0.4"
      memory: "2Gi"
  replica: 1
  disk:
    default: 2Gi
    default-replica: 2Gi
  config_override:
    defaultSettings.key1: "value1"
    defaultSettings.key2: "value2"
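To illustrate the config_override dot-path semantics with the keys from the example above (the defaultSettings keys and values are hypothetical), the override rewrites the service's values.yaml as follows:

```yaml
# Before the override (hypothetical values.yaml fragment):
defaultSettings:
  key1: "original"

# After vuLauncher applies config_override -- key1 is replaced, and
# key2 did not exist, so it is created:
defaultSettings:
  key1: "value1"
  key2: "value2"
```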

Downloading the Binary

Create a directory in the home directory and download the NG installation binary using the following command:

mkdir -p ~/launcher && cd ~/launcher
wget https://download.vunetsystems.com/_Downloads_/_vuDocker_/3.0/vuSmartMaps_offline_NG-3.0.tar.gz \
--user=<username> --password=<password> --no-check-certificate
note
  • Contact VuNet support to obtain credentials for the download server and the environment-specific d-spec file.
  • If direct access to the download server is not available, the binary can be downloaded via the URL provided by VuNet support and then copied to the jump server.
  • For deployments on managed Kubernetes (non-VM-based), use the online binary variant if internet access is available.

Extract the downloaded binary:

tar -xvzf vuSmartMaps_offline_NG-3.0.tar.gz

Starting vuLauncher

For managed Kubernetes deployments, the launcher skips the Kubernetes installation stages and starts directly from the Helm chart phase. Use the following command:

./build/launcher_linux_x86 --sizing <path-to-d-spec.yaml> --ignore-preflight --debug --start-from helm-chart-updates

To see all available arguments:

./build/launcher_linux_x86 --help
note
  • --sizing is required when starting (or restarting) an installation, regardless of which other flags are used.
  • If installation stops at any point, it can be resumed by re-running the same command. To resume without overwriting the existing state, omit the --sizing argument on the subsequent run.

CLI Arguments

The following CLI arguments are applicable for managed Kubernetes deployments:

| Argument | Description |
| --- | --- |
| --sizing <path> | Required. Path to the d-spec YAML file. Must be provided when starting or restarting an installation. |
| --debug | Enables debug mode, producing additional diagnostic log output. |
| --view <installation\|services> | Displays the list of installation stages (installation) or deployed services (services). |
| --dry-run | Validates the d-spec file and checks the configuration without starting the actual installation. Useful for pre-flight verification. |
| --start <stage-name> | Starts the specified stage only. Stops after that stage completes. |
| --start-from <stage-name> | Starts from the specified stage and continues through all subsequent stages. Use --view installation to list available stage names. |
| --version | Displays the NG version tag associated with this launcher binary. |
| --ignore-preflight | Skips the pre-flight validation checks that run at startup. Useful for environments where pre-flight checks are not applicable. |
| --remove-service --services <names> | Removes the specified installed services (comma-separated release names). Omit --services to remove all installed services. |


Launcher Workflow

When vuLauncher starts, it validates all configuration files (the environment config and d-spec) before proceeding. For managed Kubernetes deployments, the workflow begins at the Helm Chart Updates stage. The stages are:

Helm Chart Updates

  • In this stage, vuLauncher processes the d-spec and applies all resource limits, replicas, storage configuration, and config overrides to the Helm charts before installation.

Service Installation

  • In this stage, vuLauncher installs all Helm charts for the services listed in the d-spec. Key behaviors:
    • Service dependencies are defined in each chart's values.yaml. vuLauncher determines the correct installation order automatically.
    • If a service's Helm chart fails to install, all dependent (child) charts are skipped.
    • Resuming from a failed chart: Re-run the command without the --sizing argument. This preserves the existing installation state and resumes from the last failed chart.
    • Starting fresh: Manually remove all existing Helm charts, then re-run the command with the --sizing argument to restart from scratch.

Template Upload

  • This is the final stage of the installation. vuLauncher uploads all agent binaries, vuBlock, vuStream templates, and other static files to the MinIO object storage service.

Key Points

  • If the launcher fails at any step, inspect the logs for diagnostic information. Once the issue is resolved, re-run the command to resume from where it stopped.

  • Always ensure the d-spec is validated using --dry-run before initiating a full installation.

  • For managed Kubernetes deployments, always use --start-from helm-chart-updates to skip VM-based Kubernetes installation stages.

Conclusion

This guide describes the end-to-end process for deploying vuSmartMaps NG-3.0 on managed Kubernetes clusters, covering prerequisites, cloud-specific setup steps, d-spec configuration, and the vuLauncher workflow.

FAQs

What happens if my vuLauncher installation fails midway? Can I resume, or do I need to start over?

If the installation fails at any stage, vuLauncher allows you to resume from where it stopped. Use the same command without the --sizing argument to avoid overwriting the previous state.

Refer to: Service Installation and CLI Arguments > --start-from

How can I validate if my d-spec file is correctly configured before initiating a full installation?

You can run vuLauncher in dry-run mode using the --dry-run flag. This checks the validity of the d-spec file and highlights issues before actual deployment begins.

Refer to: CLI Arguments > --dry-run

Is it mandatory to expose all listed ports if my setup is on a single-node deployment?

No, for single-node deployments, the required ports must be open internally on the same node, not across multiple nodes.

Refer to: Intra-Cluster Communication

How does vuLauncher decide where to create containerd and kubelet directories during VM installation?

vuLauncher uses a preference order for mount points to place containerd and kubelet data. If directories like /var/lib/containerd already exist, this stage fails.

Refer to: Launcher Workflow > Create Soft Link

Can I install vuSmartMaps on VMs that already have Kubernetes or Docker installed?

No, the documentation explicitly recommends using fresh VMs. Existing Kubernetes or Docker installations may conflict with vuLauncher’s process.

Refer to: Deployment Prerequisites > Fresh VMs are recommended

What should I do if my organization has strict internet access policies? Will vuLauncher still work?

Yes, but in such cases, VuNet support will help install missing Linux dependencies and may ship required images instead of pulling them online.

How does vuLauncher manage service-to-VM mapping for optimized deployment?

The tags in the d-spec file group VMs, allowing services to be mapped to optimized node groups using resource-driven logic during deployment.

Refer to: Deployment Specification (D-spec) > category and tags sections.

What are the critical services that need external access, and what are their corresponding port requirements?

Key external communications include downloading installation packages, accessing the GitHub Container Registry, and telemetry data ingestion. Ports such as 443, 9094, 4317, and 161 must be opened accordingly.

Refer to: Access Requirements > External Communication & Data Source-specific Access

How do I decide whether to set env_name to “vm” or “cloud” in the d-spec file?

Set env_name: "vm" if you're installing on VMs without an existing Kubernetes setup. Use env_name: "cloud" if deploying on an existing managed Kubernetes cluster (e.g., AKS, EKS, GKE).

Refer to: Deployment Specification > env_name parameter

If one of the worker nodes fails during Kubernetes setup, will the whole installation stop?

Yes. vuLauncher will mark the installation as failed if even one worker node setup fails. You'll need to resolve the issue and restart from the worker node setup stage.

Refer to: Launcher Workflow > Worker Node Setup