We’re excited to announce the release of OpenNebula Apps 7.0.0, the companion release to OpenNebula 7.0. This version includes updated base operating systems and several enhancements to existing service appliances.
AI-Driven Inference Appliances
This release introduces new appliances that enable managed AI inference workloads in your OpenNebula environments:
Deprecation notice: The Ray and NVIDIA Dynamo appliances have been deprecated in favor of the vLLM appliance.
- Ray – Distributed AI Framework
- NVIDIA Dynamo – Optimized Inference Appliance
- vLLM – High-performance Inference Engine
Kubernetes at Scale with Rancher & CAPI
We've also added a new Rancher/CAPI appliance, designed to streamline the deployment and lifecycle management of OpenNebula-based RKE2 clusters through Rancher's powerful web interface. This integration ensures full compatibility with Cluster API (CAPI) tooling.
Expanding ARM Architecture Support
We're continuing to expand our support for ARM-based architectures. This release brings updated base OS and service appliances for ARM, with additional improvements planned in the coming weeks.
Context Linux Packages
- The embedded OneGate client has been updated to properly handle HTTP headers, which is required to interface with the latest version of the server (#238).
Supported guests:
| Distribution | x86_64 | aarch64 |
|---|---|---|
| AlmaLinux: 8, 9, 10 | ✅ | ✅ |
| Alpine Linux: 3.18, 3.19, 3.20, 3.21 | ✅ | ✅ |
| Amazon Linux: 2, 2023 | ✅ | |
| Debian: 11, 12, 13 | ✅ | ✅ |
| Devuan: 5 | ✅ | |
| Fedora: 39, 40, 41, 42 | ✅ | ✅ |
| FreeBSD: 13, 14 | ✅ | |
| openSUSE: 15 | ✅ | ✅ |
| Oracle Linux: 8, 9 | ✅ | |
| Red Hat Enterprise Linux: 8, 9 | ✅ | ✅ |
| Rocky Linux: 8, 9, 10 | ✅ | ✅ |
| Ubuntu: 20.04, 22.04, 24.04 | ✅ | ✅ |
Note: AlmaLinux 10, Debian 13, and Rocky Linux 10 were added later (2025-11-12).
Context Windows Packages
- no changes
Supported guests:
- Windows 10
- Windows 11
- Windows 2022
Service appliances
All appliances have been updated to include the latest context packages in this release.
VRouter
- Fixes the OneGate client's VM update method (parameter-encoding issue)
OneKE
- Increases the base image size to 3 GB (#215)
- Adds missing proxy configuration for joined masters (#217)
- Adds the ONEAPP_K8S_SERVICE_CIDR parameter for specifying the Kubernetes cluster services IP CIDR block; it defaults to 10.43.0.0/16 and is also appended automatically to ONEAPP_K8S_NO_PROXY (the HTTP/S proxy exceptions list) (#206). More info: https://docs.rke2.io/reference/server_config#networking
- Pins the OneKE Ubuntu base image
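As a sketch of how the new parameter can be used, the service CIDR may be overridden in the VM template's context section when instantiating OneKE (the 10.96.0.0/16 value below is an illustrative example, not a recommended setting):

```
# Fragment of a OneKE VM template (illustrative value).
# ONEAPP_K8S_SERVICE_CIDR overrides the 10.43.0.0/16 default and is
# appended automatically to ONEAPP_K8S_NO_PROXY by the appliance.
CONTEXT = [
  ONEAPP_K8S_SERVICE_CIDR = "10.96.0.0/16" ]
```

If the parameter is omitted, the default 10.43.0.0/16 block is used.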
Update September 10th, 2025 (version 7.0.0-0-20250909)
- Released OneKE 1.33 (rke2 1.31.3->1.33.4, helm 3.16.3->3.18.6, longhorn 1.7.2->1.9.1, metallb 0.14.8->0.15.2, traefik 28.0.0->37.1.0, ruby 3.3-alpine3.18->3.3-alpine3.22)
Rancher CAPI appliance
This release includes the first beta version of the Rancher CAPI appliance, an integration that enables users to manage OpenNebula CAPI RKE2 clusters directly from the Rancher web interface:
- Rancher running on a lightweight K3s cluster, along with the Turtles extension.
- CAPONE imported as an infrastructure provider.
- RKE2 and Kubeadm charts to simplify cluster deployments.
Harbor Appliance
- no changes
MinIO Appliance
- no changes
Ray Appliance for Managed Inference
Deprecation notice: The Ray appliance has been deprecated in favor of the vLLM appliance.
- Allows pinning the versions of Ray and its dependencies
- Implements packaging of the NVIDIA driver from a URL or a local path
- Renames ONEAPP_RAY_MAX_NEW_TOKENS to ONEAPP_RAY_MODEL_MAX_NEW_TOKENS for consistency with the parameter naming convention
- Changes the default for ONEAPP_RAY_MODEL_MAX_NEW_TOKENS from 512 to 1024 to match the documentation
- Fixes an authorization failure against the Hugging Face API that occurred when the model application tried to download the language model while ONEAPP_RAY_MODEL_TOKEN was empty or unset
- Fixes a netplan conflict with NetworkManager (#244)
Dynamo Appliance
Deprecation notice: The Dynamo appliance has been deprecated in favor of the vLLM appliance.
- New AI inference appliance based on NVIDIA Dynamo. Refer to the appliance documentation for more details.
vLLM Appliance
- New AI appliance based on vLLM, a fast and easy-to-use library for LLM inference and serving.
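vLLM serves an OpenAI-compatible HTTP API, so standard OpenAI-style clients can talk to the appliance. As an illustrative sketch (the appliance IP, port 8000, and model name below are assumptions, not values from these release notes), a chat-completions request can be built with only the standard library:

```python
import json
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request for a vLLM server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Hypothetical appliance address and model; replace with your VM's IP and
# the model configured in the appliance.
req = build_chat_request(
    "http://10.0.0.10:8000", "meta-llama/Llama-3.1-8B-Instruct", "Hello"
)
# Sending the request would be: urllib.request.urlopen(req)
```

Any OpenAI SDK pointed at the appliance's base URL should work the same way, since the endpoint path and payload schema follow the OpenAI chat-completions format.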
Acknowledgements
Some of the software features included in this repository have been made possible through the funding of the following innovation projects: ONEnextgen and ONEedge5G.