Description
Version
Kubernetes 1.32.7
Expected Behavior
I expect Karpenter to successfully provision the Fsv2 SKUs in the NodePool below in the AKS cluster, as it does for other SKUs. If provisioning fails due to an incompatibility, I expect Karpenter to surface logs in Log Analytics that point to the cause of the incompatibility.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  annotations:
  labels:
    limited: testpool
  name: testpool
status:
  conditions:
    - lastTransitionTime: '2025-12-02T17:01:18Z'
      message: object is awaiting reconciliation
      observedGeneration: 3
      reason: AwaitingReconciliation
      status: Unknown
      type: NodeRegistrationHealthy
    - lastTransitionTime: '2025-12-02T17:01:18Z'
      message: ''
      observedGeneration: 3
      reason: ValidationSucceeded
      status: 'True'
      type: ValidationSucceeded
    - lastTransitionTime: '2025-12-02T17:34:55Z'
      message: NodeClass not found on cluster
      observedGeneration: 3
      reason: NodeClassNotFound
      status: 'False'
      type: NodeClassReady
    - lastTransitionTime: '2025-12-02T17:34:55Z'
      message: NodeClassReady=False
      observedGeneration: 3
      reason: UnhealthyDependents
      status: 'False'
      type: Ready
  nodeClassObservedGeneration: 3
  resources:
    cpu: '0'
    ephemeral-storage: '0'
    memory: '0'
    nodes: '0'
    pods: '0'
spec:
  disruption:
    budgets:
      - nodes: 30%
    consolidateAfter: 0s
    consolidationPolicy: WhenEmptyOrUnderutilized
  template:
    metadata:
      labels:
        kubernetes.azure.com/ebpf-dataplane: cilium
        limited: testpool
    spec:
      expireAfter: Never
      nodeClassRef:
        group: karpenter.azure.com
        kind: AKSNodeClass
        name: testing
      requirements:
        - key: karpenter.azure.com/sku-name
          operator: In
          values:
            - Standard_F8s_v2
            - Standard_F16s_v2
      startupTaints:
        - effect: NoExecute
          key: node.cilium.io/agent-not-ready
          value: 'true'
---
apiVersion: karpenter.azure.com/v1beta1
kind: AKSNodeClass
metadata:
  annotations:
  finalizers:
    - karpenter.azure.com/termination
  labels:
  name: testing
status:
  images:
    - id: >-
        /subscriptions/109a5e88-712a-48ae-9078-9ca8b3c81345/resourceGroups/AKS-Ubuntu/providers/Microsoft.Compute/galleries/AKSUbuntu/images/2204gen2containerd/versions/202511.07.0
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values:
            - amd64
        - key: karpenter.azure.com/sku-hyperv-generation
          operator: In
          values:
            - '2'
    - id: >-
        /subscriptions/109a5e88-712a-48ae-9078-9ca8b3c81345/resourceGroups/AKS-Ubuntu/providers/Microsoft.Compute/galleries/AKSUbuntu/images/2204containerd/versions/202511.07.0
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values:
            - amd64
        - key: karpenter.azure.com/sku-hyperv-generation
          operator: In
          values:
            - '1'
    - id: >-
        /subscriptions/109a5e88-712a-48ae-9078-9ca8b3c81345/resourceGroups/AKS-Ubuntu/providers/Microsoft.Compute/galleries/AKSUbuntu/images/2204gen2arm64containerd/versions/202511.07.0
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values:
            - arm64
        - key: karpenter.azure.com/sku-hyperv-generation
          operator: In
          values:
            - '2'
  kubernetesVersion: 1.32.7
spec:
  imageFamily: Ubuntu
  osDiskSizeGB: 60
Actual Behavior
Karpenter fails to provision the nodes in the cluster due to an incompatibility issue with the SKU. The warning surfaces in the portal and in events, but the cause of the incompatibility is not surfaced anywhere. Karpenter entries exist in the diagnostic logs, but none of them reference the incompatibility. Debugging the cause of the issue does not seem possible unless logs are published somewhere else. We would like to adopt Karpenter NAP in several of our environments, but seeing unpredictable outcomes related to SKUs with no ability to diagnose or debug them is problematic.
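For anyone trying to dig further, below is a minimal sketch of the standard checks where we would expect the cause to surface (assuming the usual Karpenter CRDs are installed in-cluster; the NodeClaim name is a placeholder):

```shell
# NodeClaim status conditions are where a launch/incompatibility failure
# would normally be reported.
kubectl get nodeclaims -o wide
kubectl describe nodeclaim <nodeclaim-name>  # <nodeclaim-name> is a placeholder

# Events recorded against the NodeClaims, plus the NodePool itself.
kubectl get events -A --field-selector involvedObject.kind=NodeClaim
kubectl describe nodepool testpool
```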
Steps to Reproduce the Problem
Deploy the above YAML in a 1.32.7 cluster; a minimal sketch follows.
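A minimal reproduction sketch, assuming the manifests above are saved as testpool.yaml (the filename and the pause workload are illustrative) and NAP/Karpenter is already enabled on the cluster:

```shell
# Apply the NodePool and AKSNodeClass.
kubectl apply -f testpool.yaml

# Force Karpenter to provision an Fsv2 node by scheduling a pod that
# selects the pool's `limited: testpool` label.
kubectl run fsv2-test --image=registry.k8s.io/pause:3.9 \
  --overrides='{"apiVersion":"v1","spec":{"nodeSelector":{"limited":"testpool"}}}'
```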
Resource Specs and Logs
I am unable to provide scrubbed logs right now, but I can in the future if necessary.
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment