Description
I'm following this guide. I don't really have any changes to test; I'm running through it mainly to get an idea of how to test things locally.
I tried both 1.25 and 1.26, but I hit the same error when I run make cluster-sync from the kubevirt directory:
+ /home/dshah/kubevirt/_out/cmd/dump/dump --kubeconfig=/home/dshah/kubevirtci/_ci-configs/k8s-1.25/.kubeconfig
failed to fetch vmis: the server could not find the requested resource (get virtualmachineinstances.kubevirt.io)
failed to fetch vmims: the server could not find the requested resource (get virtualmachineinstancemigrations.kubevirt.io)
dump network-attachment-definitions: the server could not find the requested resource
failed to fetch kubevirts: the server could not find the requested resource (get kubevirts.kubevirt.io)
failed to fetch vms: the server could not find the requested resource (get virtualmachines.kubevirt.io)
failed to fetch vmsnapshots: the server could not find the requested resource (get virtualmachinesnapshots.snapshot.kubevirt.io)
failed to fetch vmrestores: the server could not find the requested resource (get virtualmachinerestores.snapshot.kubevirt.io)
failed to fetch vm exports: the server could not find the requested resource (get virtualmachineexports.export.kubevirt.io)
vmi list is empty, skipping logDomainXMLs
failed to get vmis from namespace : the server could not find the requested resource (get virtualmachineinstances.kubevirt.io)
failed to get vmis from namespace : the server could not find the requested resource (get virtualmachineinstances.kubevirt.io)
make: *** [Makefile:158: cluster-sync] Error 1
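If I'm reading these errors right, the KubeVirt CRDs (virtualmachineinstances.kubevirt.io and friends) were never registered on the cluster. A check like the following (my own guess at a diagnostic, not a step from the guide) should list them if they exist:

$ ./cluster-up/kubectl.sh get crds | grep kubevirt.io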
The steps I followed are as below:

# clone the kubevirtci repo and export its path as $KUBEVIRTCI_DIR
# clone the kubevirt repo and export its path as $KUBEVIRT_DIR
$ cd $KUBEVIRTCI_DIR/cluster-provision/k8s/1.25
$ ../provision.sh # this ends successfully
$ export KUBEVIRTCI_PROVISION_CHECK=1
$ export KUBEVIRTCI_GOCLI_CONTAINER=quay.io/kubevirtci/gocli:latest
$ export KUBEVIRT_PROVIDER=k8s-1.25
$ export KUBECONFIG=$(./cluster-up/kubeconfig.sh)
$ export KUBEVIRT_NUM_NODES=2
$ make cluster-up # this ends successfully
$ rsync -av $KUBEVIRTCI_DIR/_ci-configs/ $KUBEVIRT_DIR/_ci-configs
$ cd $KUBEVIRT_DIR
$ make cluster-sync

Am I doing something wrong?
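One sanity check that might be worth adding here (my own addition, not something the guide calls for) is confirming that the exported KUBECONFIG really points at the freshly provisioned cluster before running cluster-sync:

$ kubectl config current-context
$ kubectl cluster-info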
Below are some outputs from the cluster. There don't seem to be any KubeVirt-related pods running on the cluster:
$ ./cluster-up/kubectl.sh get pods -A
selecting docker as container runtime
NAMESPACE NAME READY STATUS RESTARTS AGE
default local-volume-provisioner-v68w9 1/1 Running 0 27m
default local-volume-provisioner-vbfqp 1/1 Running 0 28m
kube-system calico-kube-controllers-8fdc956-rvsrq 1/1 Running 0 28m
kube-system calico-node-8khxc 1/1 Running 0 28m
kube-system calico-node-kgckh 1/1 Running 0 27m
kube-system coredns-6d6f78d859-2lm6w 1/1 Running 0 28m
kube-system coredns-6d6f78d859-9l9cp 1/1 Running 0 28m
kube-system etcd-node01 1/1 Running 1 28m
kube-system kube-apiserver-node01 1/1 Running 1 28m
kube-system kube-controller-manager-node01 1/1 Running 2 (5m45s ago) 28m
kube-system kube-proxy-7gk5h 1/1 Running 0 28m
kube-system kube-proxy-ldhkb 1/1 Running 0 27m
kube-system kube-scheduler-node01 1/1 Running 2 (5m45s ago) 28m
kubevirt disks-images-provider-nc4pf 1/1 Running 0 7m7s
kubevirt disks-images-provider-r2k88 1/1 Running 0 7m7s
$ ./cluster-up/kubectl.sh get deployments -A
selecting docker as container runtime
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system calico-kube-controllers 1/1 1 1 34m
kube-system coredns 2/2 2 2 34m
$ ./cluster-up/kubectl.sh get namespaces
selecting docker as container runtime
NAME STATUS AGE
default Active 34m
kube-node-lease Active 34m
kube-public Active 34m
kube-system Active 34m
kubevirt Active 13m
$ ./cluster-up/kubectl.sh get all -n kubevirt
selecting docker as container runtime
NAME READY STATUS RESTARTS AGE
pod/disks-images-provider-nc4pf 1/1 Running 0 13m
pod/disks-images-provider-r2k88 1/1 Running 0 13m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/disks-images-provider 2 2 2 2 2 <none> 13m
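What stands out to me is that only the disks-images-provider pods exist in the kubevirt namespace. As far as I understand, make cluster-sync should also deploy virt-operator (which in turn brings up virt-api, virt-controller, and virt-handler), so I would expect a check like this (my guess at the relevant command, not from the guide) to come back with NotFound:

$ ./cluster-up/kubectl.sh get deployment virt-operator -n kubevirt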