Helm Chart

The CE and EE editions of the csghub helm chart have been merged into a single chart.

Advantages

As the native package management tool for Kubernetes, Helm is the preferred deployment method for CSGHub in production environments. The chart's design follows these principles:

  1. Backward compatibility: Standardized version control ensures a smooth upgrade path. Users can iterate versions seamlessly with the helm upgrade command, significantly reducing the risk of production environment changes.

  2. Continuous architecture optimization: The chart is regularly refactored to improve its parameterized configuration structure, deployment performance, configuration flexibility, and maintainability.

  3. Enterprise-level management: Supports per-environment configuration, version rollback (sketched below), and other enterprise features, in line with cloud-native best practices.
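
For instance, a routine upgrade and rollback cycle relies only on standard Helm commands. A minimal sketch, assuming the csghub repository alias and release name introduced later in this guide:

# Refresh the chart index and upgrade the release in place
helm repo update
helm upgrade csghub csghub/csghub --namespace csghub --reuse-values

# Inspect the revision history and roll back to revision 1 if needed
helm history csghub --namespace csghub
helm rollback csghub 1 --namespace csghub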

System Requirements

CSGHub uses the Kubernetes Helm Chart as its standard deployment solution for production environments. The software and hardware specifications required for operation are as follows:

Hardware Requirements

| Resource Type | Minimum Configuration | Recommended Configuration | Notes |
| --- | --- | --- | --- |
| CPU/Memory | 4 cores, 8GB | 8 cores, 16GB | |
| Processor Architecture | - | AMD64/ARM64 | Supports x86 and ARM architectures |

Kubernetes Basic Requirements

Optional Component Requirements

| Component Name | Recommended Version | Functional Description |
| --- | --- | --- |
| Knative Serving | 1.16.1+ | K8s 1.28+ is required when automatic configuration is enabled |
| Argo Workflow | v3.5.12+ | Model evaluation and image-building workflows |
| LeaderWorkSet | v0.6.1 | Multi-node, multi-GPU distributed training support |
| Nvidia Device Plugin | CUDA ≥ 12.1 | GPU acceleration support (requires NVIDIA driver ≥ 384.81) |

Production environments should use the recommended configuration for optimal performance and stability. Resource-constrained environments can run with the minimum configuration, though system responsiveness may be affected.

Tip: The components above (except the Nvidia Device Plugin) are configured automatically when the csghub helm chart is installed. They can be verified after installation as shown below.
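
A minimal verification sketch; the namespaces below are the upstream defaults and may differ in your cluster:

# Knative Serving control plane and its Kourier networking layer
kubectl get pods -n knative-serving
kubectl get pods -n kourier-system

# Argo Workflow controller
kubectl get pods -n argo

# LeaderWorkSet controller
kubectl get pods -n lws-system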

Quick deployment (for testing purposes only)

Note: Currently, only Ubuntu/Debian systems are supported.

One-click installation will automatically configure the following resources:

  • Single-node k3s cluster
  • csghub helm chart
  • nvidia-device-plugin (if enabled)

Install and configure using the following commands. The script is idempotent and can be executed repeatedly.

  • Default installation

    By default, the csghub service is exposed using NodePort.

    # example.com is just an example domain name
    curl -sfL http://quick-install.opencsg.com | bash -s -- example.com
  • Use LoadBalancer to expose services:

    Tip: When installing with the LoadBalancer service type, change the server's sshd port to a port other than 22 in advance, because this service type automatically claims port 22 as the git SSH port (see the sketch after this list).

    curl -sfL http://quick-install.opencsg.com | INGRESS_SERVICE_TYPE=LoadBalancer bash -s -- example.com
  • Enable NVIDIA GPU support

    curl -sfL http://quick-install.opencsg.com | ENABLE_NVIDIA_GPU=true bash -s -- example.com
  • Enable Starship support

    curl -sfL http://quick-install.opencsg.com | EDITION=ee ENABLE_STARSHIP=true bash -s -- example.com
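
Changing the sshd port ahead of a LoadBalancer installation might look like the following. A sketch for Ubuntu/Debian hosts; port 2222 is an arbitrary example:

# Move sshd off port 22 so the ingress-nginx controller can claim it for git SSH
sudo sed -i 's/^#\?Port 22$/Port 2222/' /etc/ssh/sshd_config
sudo systemctl restart ssh   # the sshd unit is named "ssh" on Ubuntu/Debian

# Reconnect afterwards with: ssh -p 2222 user@host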

Note: After the deployment is complete, view login.txt for the information needed to access CSGHub.

Configurable variable description (a combined usage example follows the list):

  • ENABLE_K3S

    Default true; installs a single-node K3s cluster.

  • ENABLE_NVIDIA_GPU

    Default false; when enabled, automatically configures the nvidia RuntimeClass and installs the Nvidia Device Plugin.

  • ENABLE_STARSHIP

    Default false; installs Starship. Available only in the EE edition (which is installed by default).

  • HOSTS_ALIAS

    Default true; after installation, automatically configures domain name resolution pointing to this host.

  • INSTALL_HELM

    Default true; installs the helm tool.

  • INGRESS_SERVICE_TYPE

    Default NodePort; exposes the csghub service using the NodePort service type.

    If ENABLE_K3S=true, this option can also be set to LoadBalancer (and only in that case, because the built-in K3s LoadBalancer service can only bind to the local host). Note that with LoadBalancer, port 22 is preempted by the ingress-nginx controller; if you still choose this type, change the default sshd port to something else.

  • KOURIER_SERVICE_TYPE

    Default NodePort; exposes the Knative Serving service using the NodePort service type.

  • EDITION

    Default ee; installs the EE edition of the csghub helm chart (without an EE license, it provides the same functionality as CE).
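
The variables can be combined in a single invocation. An illustrative sketch that installs the CE edition with GPU support onto an existing cluster, skipping the bundled K3s and helm setup:

curl -sfL http://quick-install.opencsg.com | ENABLE_K3S=false INSTALL_HELM=false EDITION=ce ENABLE_NVIDIA_GPU=true bash -s -- example.com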

Standard deployment

Install Helm Chart

  • Add helm repository

    helm repo add csghub https://charts.opencsg.com/repository/csghub
    helm repo update
  • Create Secret

    kubectl create ns csghub
    kubectl -n csghub create secret generic csghub-kube-configs --from-file=/root/.kube/

    See the FAQ below: Why do you need to inject kubeconfig.

  • Deploy Helm Chart

    The ee version is installed by default.

    helm upgrade --install csghub csghub/csghub \
    --namespace csghub \
    --create-namespace \
    --set global.ingress.domain="example.com" \
    --set global.deploy.knative.serving.services[0].type="LoadBalancer" \
    --set global.deploy.knative.serving.services[0].domain="app.internal" \
    --set global.deploy.knative.serving.services[0].host="192.168.18.10" \
    --set global.deploy.knative.serving.services[0].port="80"

    In the above command, 192.168.18.10 is only an example IP address; the real address is available only after installation. To override this setting, obtain the actual address after the first installation and run the upgrade again.

    By default, global.deploy.autoConfigure=true automatically installs Knative Serving, LWS, and Argo Workflow. You can also install these dependencies manually.

    For more configurations, please refer to values.yaml.

  • Get Knative Serving service configuration

    # If it is empty, it means that the address has not been successfully allocated. Please check the cluster-related services.
    # global.deploy.knative.serving.services[0].host
    kubectl get svc kourier -n kourier-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

    # global.deploy.knative.serving.services[0].port
    80
  • Update configuration

    If the LoadBalancer configuration method is used here, upgrade with the following command:

    • host: 10.6.0.10 (example IP only; use the actual address assigned by the LoadBalancer)

    • port: 80

    helm upgrade --install csghub csghub/csghub \
    --namespace csghub \
    --create-namespace \
    --set global.ingress.domain="example.com" \
    --set global.deploy.knative.serving.services[0].type="LoadBalancer" \
    --set global.deploy.knative.serving.services[0].domain="app.internal" \
    --set global.deploy.knative.serving.services[0].host="10.6.0.10" \
    --set global.deploy.knative.serving.services[0].port="80"

Login to csghub

After the installation completes, information like the following is printed. Use it to log in to the csghub instance.

Release "csghub" has been upgraded. Happy Helming!
......
Visit CSGHub at the following address:

Address: http://csghub.example.com
Credentials: root/OTc1M2M0ZWMzYWIwNGU3MTMx
......
For more details, visit:
https://github.com/OpenCSGs/csghub-charts

See the FAQ below: How to configure access using NodePort.

FAQ

1. Test cluster does not support Dynamic Volume Provisioning

When the test cluster does not support dynamic volume provisioning, manually create persistent volumes as follows:

  1. Create PV
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gitaly-0 # You can customize the PV name
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete # Retain or Delete is usually recommended
  storageClassName: hostpath
  hostPath:
    path: /data/hostpath/gitaly-0 # Replace with the actual host path, and ensure the path is shared among nodes
  claimRef:
    namespace: csghub # Replace with the namespace where the PVC is located; the default is csghub
    name: data-csghub-gitaly-0 # The csghub StatefulSet automatically creates a PVC with this name and binds it
EOF

Using the command above, create the following resources in sequence:

| metadata.name | spec.capacity.storage | hostPath.path | claimRef.name |
| --- | --- | --- | --- |
| pv-gitaly-0 | 200Gi | /data/gitaly-0 | data-csghub-gitaly-0 |
| pv-gitlab-shell-0 | 1Gi | /data/gitlab-shell-0 | data-csghub-gitlab-shell-0 |
| pv-minio-0 | 500Gi | /data/minio-0 | data-csghub-minio-0 |
| pv-nats-0 | 10Gi | /data/nats-0 | data-csghub-nats-0 |
| pv-postgresql-0 | 50Gi | /data/postgresql-0 | data-csghub-postgresql-0 |
| pv-redis-0 | 10Gi | /data/redis-0 | data-csghub-redis-0 |

For gitaly and minio, set the storage capacity according to actual usage.

  2. View resources
kubectl get pv
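
Once the csghub pods start, the PVCs created by the StatefulSets should bind to these PVs. A quick check:

# All PVCs in the csghub namespace should report STATUS "Bound"
kubectl get pvc -n csghub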

2. How to configure access using NodePort

The external URL is used not only by users but also by internal calls within the program, so manually changing the service type to NodePort alone does not guarantee that the instance works correctly. Configure it as follows.

  • Deploy csghub using NodePort

    helm upgrade --install csghub csghub/csghub \
    --namespace csghub \
    --create-namespace \
    --set global.ingress.domain="example.com" \
    --set global.ingress.service.type="NodePort" \
    --set ingress-nginx.controller.service.type="NodePort" \
    --set global.deploy.knative.serving.services[0].type="NodePort" \
    --set global.deploy.knative.serving.services[0].domain="app.internal" \
    --set global.deploy.knative.serving.services[0].host="192.168.18.10" \
    --set global.deploy.knative.serving.services[0].port="30213"
  • Get real service information

    # If it is empty, it means that the address has not been successfully allocated. Please check the cluster-related services.
    # global.deploy.knative.serving.services[0].host
    Current node IP address

    # global.deploy.knative.serving.services[0].port
    kubectl get svc kourier -n kourier-system -o jsonpath='{.spec.ports[0].nodePort}'
  • Update configuration

    Refer to the LoadBalancer update steps above; a sketch with example values follows.
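
Once the real node IP and nodePort are known, re-run the upgrade with the corrected values. A sketch using the example values above; --reuse-values keeps all other settings from the initial install:

helm upgrade --install csghub csghub/csghub \
  --namespace csghub \
  --reuse-values \
  --set global.deploy.knative.serving.services[0].host="192.168.18.10" \
  --set global.deploy.knative.serving.services[0].port="30213"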

3. How to prepare domain name

A domain name is required for csghub helm chart deployment because Ingress does not support routing by IP address.

  • Domain name type

    Public domain name: Use cloud resolution directly.

    Custom domain name: Configure address resolution by yourself.

    Configure domain name resolution in the following two places (see the sketch at the end of this FAQ):

    • CoreDNS resolution of Kubernetes cluster

    • Client host hosts resolution

  • Domain name usage

    For example, specify the domain name example.com during installation.

    The csghub helm chart uses this domain name as the parent domain and creates the following subdomains:

    • csghub.example.com

      Used for the access entry of the csghub main service.

      If --set global.ingress.useTop=true is specified during installation, example.com will be used as the access entry.

    • casdoor.example.com

      Used to access the casdoor unified login system.

    • minio.example.com

      Used to access object storage.

    • registry.example.com

      Used to access the container image repository.

    • temporal.example.com

      Used to access the scheduled task system.

    • starship.example.com

      Used to access the starship management configuration panel.

    • starship-api.example.com

      Used to access the starship management console, mainly used to configure the Codesouler model engine.

    • *.public.example.com

      Used to access all Knative instances. This resolution requires wildcard domain name resolution.
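
For a custom domain, the two resolution points might be configured as follows. A sketch assuming the ingress entry point is reachable at 192.168.18.10; note that /etc/hosts cannot express the *.public.example.com wildcard, so individual instance names must be added there as needed:

# Client host: append entries to /etc/hosts
192.168.18.10 csghub.example.com casdoor.example.com minio.example.com registry.example.com

# Kubernetes cluster: add a hosts block to the CoreDNS Corefile
# (edit with: kubectl -n kube-system edit configmap coredns)
hosts {
    192.168.18.10 csghub.example.com casdoor.example.com minio.example.com registry.example.com
    fallthrough
}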

4. Why do you need to inject kubeconfig

The .kube/config file is the key configuration file for accessing a Kubernetes cluster. During deployment, it must be provided to the csghub helm chart as a Secret. Because CSGHub supports cross-cluster features, a ServiceAccount alone cannot meet its operational requirements. The .kube/config must grant at least full read/write permissions on the namespace where deployment instances run in the target cluster. If automatic configuration of Argo and Knative Serving is enabled, additional permissions such as creating namespaces are required.
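
If you prepare a dedicated kubeconfig user rather than reusing an administrator config, the minimum namespace grant might look like this. A sketch; the user name csghub-deployer is illustrative:

# Grant full read/write on the deployment namespace via the built-in "admin" ClusterRole
kubectl create rolebinding csghub-deployer-admin \
  --clusterrole=admin \
  --user=csghub-deployer \
  --namespace=csghub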

5. Persistent Volume Description

There are multiple components in the csghub helm chart that need to persist data. The components are as follows:

  • PostgreSQL

    Default 50Gi, used to store database data files.

  • Redis

    Default 10Gi, used to store Redis AOF dump files.

  • Minio

    Default 500Gi, used to store avatar images, LFS files, and Docker images.

  • Gitaly

    Default 200Gi, used to store Git repository data.

  • Nats

    Default 10Gi, used to store message stream data.

  • GitLab-Shell

    Default 1Gi, used to store host key pairs.

In the actual deployment process, you need to adjust the size of the PVC according to the usage, or directly use an extensible StorageClass.

Note that the csghub helm chart does not create Persistent Volumes itself; it requests PV resources by creating Persistent Volume Claims, so your Kubernetes cluster must support Dynamic Volume Provisioning. On a self-managed cluster, dynamic provisioning can be emulated with a local provisioner; a sketch follows.
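
One common way to emulate dynamic provisioning on a self-managed cluster is a local provisioner such as Rancher's local-path-provisioner. A sketch; for production, pin a release tag instead of master:

# Install the provisioner and make its StorageClass the cluster default
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl patch storageclass local-path -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'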


6. Manually install dependent resources

Problem feedback

If you encounter any problems during use, you can submit feedback through the following methods: