
Harvester Developer Guide

This guide provides essential information for developers who want to contribute to Harvester.


Things You Need to Know Before Development

Understanding Harvester

We recommend installing a Harvester cluster to understand how its features are used. Several installation methods are available in the installation documentation.

Rules

Code changes must be submitted via GitHub pull requests. Here are some general guidelines:

  • Find or create an issue first. Every PR should link to at least one issue.
  • Fork the repository to which you want to contribute. Make and commit changes in your fork.
  • Create a pull request that targets the appropriate branch, using the PR description template.
  • Sign off your commits by including a Signed-off-by line in each commit message.
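Git can add the Signed-off-by trailer for you with the `-s` flag. A self-contained demo in a throwaway repository (the identity values below are placeholders; git builds the trailer from your configured `user.name` and `user.email`):

```shell
# Create a throwaway repository and make one signed-off commit.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.name "Jane Developer"        # placeholder identity
git config user.email "jane@example.com"
echo demo > file.txt
git add file.txt
git commit -q -s -m "docs: demo sign-off"

# The -s flag appended a Signed-off-by trailer to the commit message.
git log -1 --format=%B | grep Signed-off-by
```

Most repositories enforce this via a DCO check, so commits without the trailer will fail CI.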

Repository Relations

This section describes the scope of each repository:

Main Components

  • harvester/harvester
    • Basic features, including virtual machines, images, upgrades (package/upgrade), volumes, etc.
  • harvester/harvester-installer
    • Installation console and ISO building. It also packages the underlying OS (harvester/os2) and rancher/rancherd.
  • harvester/os2
    • The Harvester base OS provider.
  • harvester/docs
    • The Harvester documentation site.
  • harvester/network-controller-harvester
    • Manages host network configuration.
  • harvester/terraform-provider-harvester
    • Terraform provider for Harvester.
  • harvester/node-disk-manager
    • Provides an automated way to add storage to Longhorn as Harvester's backend storage, including multipath support.
  • harvester/harvester-ui-extension
    • The Harvester UI, delivered as a Rancher UI extension.
  • harvester/load-balancer-harvester
    • Provides load-balancer support for Harvester virtual machines and guest clusters.
  • harvester/node-manager
    • Manages the node kernel configuration of the Harvester cluster.

Check out the Component Maintainers for a complete list.

Add-ons

Built-in

  • harvester/pcidevices (Document)
    • PCI, USB, GPU, and vGPU device passthrough.
  • harvester/nvidia-driver-toolkit (Document)
    • Enables vGPU devices and assigns them to Harvester virtual machines. It's used together with harvester/pcidevices.
  • harvester/rancher-logging (Document)
    • Collects logs, events, and audit records from the Harvester cluster and routes them to a variety of destinations based on configured flows.
  • harvester/rancher-monitoring (Document)
    • Collects Harvester cluster and virtual machine metrics, displays them on the embedded dashboard, and sends alerts to remote servers.
  • harvester/vm-import-controller (Document)
    • Helps migrate VM workloads from external clusters to an existing Harvester cluster. It currently supports VMware and OpenStack.
  • harvester/kubeovn-operator (Document)
    • Manages the lifecycle of Kube-OVN as a secondary CNI on underlying Harvester clusters.

Experimental

  • harvester/experimental-addons/harvester-csi-driver-lvm (Document)
    • CSI driver that supports local path provisioning through LVM.
  • harvester/experimental-addons/vm-dhcp-controller (Document)
    • A managed DHCP service for virtual machines running on Harvester.
  • harvester/experimental-addons/rancher-vcluster (Document)
    • Runs Rancher Manager as a workload on the underlying Harvester cluster, implemented using vcluster. It's one way to start a Rancher service and use Rancher Integration with Harvester.

Rancher Related

  • harvester/docker-machine-driver-harvester
    • Use Harvester as a cloud provider to provision guest clusters in Rancher.
  • harvester/cloud-provider-harvester
    • Makes Harvester function as a Kubernetes cloud provider.
  • harvester/harvester-csi-driver
    • Provides the generic storage interface for downstream clusters.

Integrated Upstream Repositories

  • longhorn/longhorn
    • Harvester uses Longhorn for virtual machine and node volumes.
  • kubevirt/kubevirt
    • Harvester uses KubeVirt to provide virtualization.
  • kubevirt/containerized-data-importer
    • Harvester uses Containerized Data Importer to provide third-party storage integration.

Development

Prerequisites

Before you start, ensure the following are installed on your development machine:

  • OS: Linux or macOS (Linux is required to build ISOs; macOS is fine for day-to-day development).
  • Go: See go.mod and use that major.minor version to avoid module/tooling mismatches.
  • Docker-compatible container engine: required by Dapper for builds (Docker Engine or Docker Desktop; Rancher Desktop or Colima also work).
  • Make and Git: used to invoke build scripts and manage source.

Notes:

  • Dapper is downloaded automatically by make; you just need a working container engine.
  • To push locally built images, set environment variables such as REPO, PUSH, and USE_LOCAL_IMAGES (see Testing and Building).
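The Go-version prerequisite can be checked quickly. A self-contained sketch (a hypothetical go.mod is written to a temp file here so the snippet runs anywhere; in a real clone, run the awk line against the repository's go.mod and compare with `go version`):

```shell
# Write an example go.mod so the snippet is self-contained.
tmpmod=$(mktemp)
cat > "$tmpmod" <<'EOF'
module github.com/harvester/harvester

go 1.22
EOF

# Extract the required major.minor Go version from the go directive.
awk '/^go /{print $2}' "$tmpmod"
```

In the repository itself, the equivalent is `awk '/^go /{print $2}' go.mod`; match that against your installed toolchain to avoid module and tooling mismatches.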

Basic Knowledge

The most important components in Harvester are the custom resources, controllers, webhooks, and the API server.
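On a running cluster you can inspect these pieces directly. A sketch, assuming kubectl access to a Harvester cluster (Harvester's custom resources live under the harvesterhci.io API group):

```shell
# List Harvester's custom resource definitions and their resource kinds.
kubectl get crds | grep harvesterhci.io
kubectl api-resources --api-group=harvesterhci.io

# Webhooks are registered as admission webhook configurations.
kubectl get validatingwebhookconfigurations | grep harvester
```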

Deployment

The main deployment charts reside in deploy/charts/harvester and deploy/charts/harvester-crd.

In the underlying implementation, we use rancherd to install all resources needed in Harvester. We deploy a pod called harvester-cluster-repo-xxxx to serve the Harvester deploy/charts via an Nginx server. Other services that need the Helm charts will call this server to fetch the charts. To debug the deployment process, run journalctl -u rancherd on the host to examine the logs.
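A minimal debugging sketch, assuming shell access to a Harvester host (the pod name suffix varies per cluster):

```shell
# Rancherd drives the initial deployment; its logs show chart installation progress.
journalctl -u rancherd

# Find the pod that serves the bundled Helm charts.
kubectl get pods -A | grep harvester-cluster-repo
```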

Writing Test Cases

When you implement a feature, unit tests are required.

Most Harvester features follow the Kubernetes controller pattern, so it's important to test them. Existing controller tests in the repository are a good reference for basic test case structure.

In addition to unit tests, write automated tests where applicable. If the feature requires an automated test, refer to the Automation Testing section for how to write and run them.

Debugging and Troubleshooting

Debugging generally requires cluster information, which we usually obtain from users in the form of a support bundle.

Then, we use rancher/support-bundle-kit to emulate the cluster described by the support bundle. It's a toolkit for generating and analyzing support bundles for Harvester and Longhorn.

After building the binary from the project, run ./bin/support-bundle-kit-{amd64|arm64} simulator --bundle-path ./supportbundle_xxxx --reset. Building the simulated cluster takes some time. Afterward, use the kubeconfig at ~/.sim/admin.kubeconfig to operate the cluster. Most kubectl and helm commands work against the simulated cluster, but commands such as kubectl exec and kubectl port-forward, which require real workloads, will not.
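Putting the steps above together (the architecture suffix and bundle path are illustrative; use whatever your unpacked support bundle directory is called):

```shell
# Start the simulator from an unpacked support bundle (amd64 shown).
./bin/support-bundle-kit-amd64 simulator --bundle-path ./supportbundle_xxxx --reset

# Point kubectl and helm at the simulated cluster.
export KUBECONFIG=~/.sim/admin.kubeconfig
kubectl get nodes
```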

Finally, search existing GitHub issues and the documentation for possible solutions.

If you need help during development, ask questions in the harvester-dev channel in Rancher Slack.

Testing and Building

All repositories use Dapper to build. Ensure you have a Docker-compatible container engine before building.

Test Your Changes by Patching the Image

In general, each repository produces two images:

  • The controller image
  • The webhook image

For quick testing, you can patch the image in the cluster with a locally built one. Normally, you can build with make or ./scripts/build in each repository, retag the resulting image, and push it to a registry. If that doesn't work, check the scripts under ./scripts.

Which image to patch depends on the feature you're developing; for example, a virtual machine feature usually means patching the Harvester controller image. Before patching, identify which repositories need to be updated in the Harvester cluster.
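An illustrative sketch of the patch workflow. The deployment, container, and image names below are assumptions, not exact values; check the real names with `kubectl -n harvester-system get deployments` (and the deployment's container spec) before patching:

```shell
# Retag the locally built image and push it somewhere the cluster can pull from.
docker tag rancher/harvester:dev {docker user name}/harvester:dev
docker push {docker user name}/harvester:dev

# Point the controller deployment at the patched image (names are illustrative).
kubectl -n harvester-system set image deployment/harvester \
  apiserver={docker user name}/harvester:dev
```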

Test Your Changes with a Fresh ISO

Making changes to the Harvester installer requires a different workflow to generate the ISO files.

You need a Linux machine with Docker for development. After you make changes to the code, run the build with:

make

To build a Harvester ISO, run:

make build-iso

Additionally, see Build ISO images for more build options. If you'd like to push the images to a Docker registry, use the following commands:

export REPO={docker user name}
export PUSH=true
export USE_LOCAL_IMAGES=true
make
make build-iso

After building, check the dist/artifacts directory for the resulting files. You can test the ISO on physical servers or use Vagrant to test on virtual machines.

Other common environment variables:

  • TAG: Target image tag
  • RKE2_IMAGE_REPO: Used as the base RKE2 version while building the ISO.

Automation Testing

In addition to unit tests in each project, we have a dedicated repository, harvester/tests, to test Harvester features. See the README for project setup details.

Different directories in the repository target different kinds of tests (for example, API tests and integration tests).

If you're not sure how to start from scratch, look at previously merged test PRs in the repository for reference.

Here are example commands to run the tests:

# xxx.xxx.xxx.xxx -> This is your cluster IP.
# Example: run an API test file
pytest harvester_e2e_tests/apis/test_support_bundle.py --username {account} --password {password} --endpoint {https://xxx.xxx.xxx.xxx}

# Example: run an integration test file
pytest harvester_e2e_tests/integrations/test_1_images.py --username {account} --password {password} --endpoint {https://xxx.xxx.xxx.xxx}

Before Opening a Pull Request

Make sure you have an issue on GitHub. All PRs require an issue. Describe the problem you're trying to solve and how you plan to address it. If the issue has the require/hep label, a HEP (Harvester Enhancement Proposal) is required for discussion before implementation.

The general workflow is:

  • Create or find an issue.
  • Discuss why we need to fix it, what the goal is, and how to solve it.
  • Open a pull request for an HEP if needed.
  • Open a pull request to solve the issue.
  • Test the solution.

How to Find an Issue to Work On

All issues are tracked in the harvester/harvester repository. Issues with a milestone or a "good first issue" label are ready for development; you can filter open issues by milestone or label in the issue tracker to find them.

If an issue is already assigned but you want to work on it, leave a comment asking to be assigned or to coordinate with the current assignee. Discussion is welcome.

Branch Strategy

The branch strategy for the harvester/harvester repository is:

  • Create a pull request that targets the default branch: main, or master in some older repositories.

For more detailed information, see the Branch Strategy Wiki.

Code Style

The code must be linted with golangci-lint. You can run the linter manually, configure your IDE with the repository's golangci-lint config, or run the following command to validate:

make validate

Commit Message Format

We don't enforce a strict commit message format; any reasonable format is acceptable. One recommendation is Conventional Commits.
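For example, a hypothetical commit message in Conventional Commits style, including the sign-off trailer described earlier (subject, body, and identity are all made up for illustration):

```shell
# Print an example Conventional Commits message with a sign-off trailer.
printf '%s\n' \
  'fix(webhook): reject volume resize below current size' \
  '' \
  'Signed-off-by: Jane Developer <jane@example.com>'
```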

After Opening a Pull Request

  • Provide a test plan in the PR description. Describe how to test and the expected results. A short demo recording is welcome.
  • Ensure all checks pass in the PR.

After All PRs Are Merged

We use the GitHub Project to manage our issues (see Issue Management). Once all PRs are merged, we'll move the issue status to "Ready for Test". The bot will then create a comment titled "Pre Ready-For-Testing Checklist". Please fill out all necessary information in that comment.

Depending on the severity and scope of your changes, discuss with the maintainers whether your pull requests need to be backported to older supported versions of Harvester.

Example Issues

If you're not sure how to get started, take a look at the following examples. In general, we have different types of pull requests categorized by topic. Each topic might involve multiple PRs, including changes to charts, YAML files, and other component repositories.

  • Bumping Dependencies: #8642
  • Harvester Feature (Frontend + Backend): #7136
  • Harvester Upgrades and Documentation: #8163
  • Harvester Deploy YAML Changes: #8116, #8746
  • Rancherd and Harvester Installer: #7312
  • Node-Disk-Manager Feature: #8296
  • PCI Device Add-on: #6779