Customizing Layer5 Cloud Deployment with Helm
Layer5's Helm charts support a number of configuration options; refer to the table of configuration options below.
Layer5 offers an on-premises installation of its Meshery Remote Provider, Layer5 Cloud. The Layer5 Helm repository contains one chart with two subcharts (see the repo index).
Before you begin, ensure the following are installed:

- ingress-nginx
- cert-manager

This deployment uses two namespaces: cnpg-postgres for hosting the PostgreSQL database using the CloudNativePG operator, and layer5-cloud for Layer5 Cloud. You can also choose to keep all components in the same namespace.
kubectl create ns cnpg-postgres
kubectl create ns layer5-cloud
Layer5 uses a PostgreSQL database that requires persistent storage, which can be configured in many different ways in a Kubernetes cluster. Here we are using the local path provisioner from Rancher, which automatically creates a PV using a set local path. Run the following command to deploy the local path provisioner:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.31/deploy/local-path-storage.yaml
This creates a default storage class called local-path which stores data by default in /opt/local-path-provisioner and has the reclaim policy set to Delete.
NOTE: It is recommended that you create a new storage class that uses a different path with ample storage and uses the Retain reclaim policy.
For this guide, we will use the defaults.
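If you do opt for a dedicated storage class, a sketch might look like the following (the class name and reclaim policy here are choices, not defaults, and the on-disk path is configured separately in the local-path-provisioner ConfigMap):

```yaml
# Hypothetical storage class using the local path provisioner with Retain,
# so PVs survive PVC deletion.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-retain
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
```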
This example deployment uses ingress-nginx but you may choose to use an ingress controller of your choice.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.13.3/deploy/static/provider/cloud/deploy.yaml
The INIT_CONFIG environment variable allows you to configure the initial setup of your self-hosted Layer5 Cloud provider. This variable accepts a JSON string that defines the provider initialization configuration.
To use it, set the INIT_CONFIG environment variable to a JSON configuration string:
export INIT_CONFIG='{"provider": {"name": "my-provider", "settings": {...}}}'
For Docker deployments:
# example
docker run -e INIT_CONFIG='{"provider": {"name": "my-provider"}}' layer5/meshery-cloud
For Kubernetes deployments, add to your deployment manifest:
env:
  - name: INIT_CONFIG
    value: '{"provider": {"name": "my-provider", "settings": {...}}}'
The INIT_CONFIG JSON structure supports the following fields:
- provider.name: the name of your provider instance
- provider.settings: custom provider settings specific to your deployment

You will install the Postgres database first, followed by Layer5 Cloud.
In this example, we use CloudNativePG's operator-based approach to create a PostgreSQL cluster. You may choose a different approach.
PostgreSQL requires persistent storage; for this deployment, we will use the local-path storage class created earlier by the local path provisioner.
Add the CloudNativePG Helm repository and install the operator using the following commands:
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm upgrade --install cnpg --namespace cnpg-system --create-namespace cnpg/cloudnative-pg
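You can confirm the operator is up before proceeding (the deployment name below follows the release and chart naming above and may differ in your environment):

```shell
# Wait for the CloudNativePG operator rollout to complete
kubectl -n cnpg-system rollout status deployment/cnpg-cloudnative-pg
```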
Deploying a PostgreSQL cluster requires the following prerequisite resources: a Secret holding the application user's credentials and a Secret holding the superuser's credentials. Run the following commands to create them, replacing usernames and passwords as needed:
kubectl -n cnpg-postgres create secret generic meshery-user --from-literal=username=meshery --from-literal=password=meshery --type=kubernetes.io/basic-auth
kubectl -n cnpg-postgres create secret generic cnpg-superuser --from-literal=username=postgres --from-literal=password=postgres --type=kubernetes.io/basic-auth
For this documentation, we use the following manifests to deploy a PostgreSQL cluster:
# cluster.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cnpg-postgres
  namespace: cnpg-postgres
spec:
  instances: 2
  # Persistent storage configuration
  storage:
    storageClass: local-path
    size: 10Gi
  superuserSecret:
    name: cnpg-superuser
  bootstrap:
    initdb:
      database: meshery
      owner: meshery
      secret:
        name: meshery-user
      postInitSQL:
        - create database hydra owner meshery;
        - create database kratos owner meshery;
        - create extension "uuid-ossp";
        - ALTER ROLE meshery WITH SUPERUSER;
      postInitApplicationSQLRefs:
        configMapRefs:
          - name: extra-init
            key: init.sql
---
# extra-init.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: extra-init
  namespace: cnpg-postgres
data:
  init.sql: |
    GRANT ALL PRIVILEGES ON DATABASE meshery to meshery;
    GRANT ALL PRIVILEGES ON DATABASE hydra to meshery;
    GRANT ALL PRIVILEGES ON DATABASE kratos to meshery;
CloudNativePG provides a curated list of samples showing configuration options that can be used as a reference.
Apply the YAML files. You should see two cnpg pods running shortly thereafter.
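For example, assuming the filenames used above, applying the ConfigMap before the Cluster ensures the postInitApplicationSQLRefs reference resolves during bootstrap:

```shell
kubectl apply -f extra-init.yaml
kubectl apply -f cluster.yaml

# Check the cluster pods
kubectl -n cnpg-postgres get pods
```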
NAME READY STATUS RESTARTS AGE
cnpg-postgres-1 1/1 Running 0 3h5m
cnpg-postgres-2 1/1 Running 0 3h5m
Retrieve the cnpg Service endpoints; you will need them when updating the Layer5 values.yaml file later.
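CloudNativePG creates read-write, read-only, and read Services named after the cluster, so listing the Services in the namespace shows the endpoints:

```shell
kubectl -n cnpg-postgres get svc
# Expect Services such as cnpg-postgres-rw (primary, read-write),
# cnpg-postgres-ro (replicas, read-only), and cnpg-postgres-r (any instance).
# Use the -rw Service host, e.g. cnpg-postgres-rw.cnpg-postgres.svc.cluster.local,
# for the database connection in values.yaml.
```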
Start by adding the Layer5 helm chart repo.
helm repo add layer5 https://docs.layer5.io/charts
Next, to modify values such as the database connection or other parameters, you will use the values.yaml file.
You can generate it using the following command:
helm show values layer5/layer5-cloud > values.yaml
Review and update values if necessary. If you have followed this tutorial exactly, no changes are required to get started.
Deploy Layer5 cloud using the helm install command.
helm install layer5-cloud layer5/layer5-cloud -f values.yaml -n layer5-cloud
Port forward the Hydra Admin service.
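For example (the Service name below is an assumption; check `kubectl -n layer5-cloud get svc` for the actual Hydra admin Service in your release — 4445 is Hydra's default admin port):

```shell
kubectl -n layer5-cloud port-forward svc/hydra-admin 4445:4445
```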
Run the following command to create the hydra client:
hydra clients create \
  --endpoint <port forwarded endpoint> \
  --id meshery-cloud \
  --secret some-secret \
  --grant-types authorization_code,refresh_token,client_credentials,implicit \
  --response-types token,code,id_token \
  --scope openid,offline,offline_access \
  --callbacks <Layer5 Cloud host>/callback

Ensure the --id matches env.oauthclientid and the --secret matches env.oauthsecret in your values.yaml.