🌱 Add KubeVirt support to Tilt dev workflow #11697
Conversation
Is there someone familiar with KubeVirt who could do a first review?
Thanks for writing this down.
It would be great if we could go a step further and re-use the existing script for creating kind + registry, as well as create a new script automating most of the steps described in this doc.
Force-pushed from a560441 to 7bbd562
@fabriziopandini I've refactored the PR and converted most of the instructions to code. Thanks for the suggestion 👍
/lgtm
The changes make sense to me. I tried this out with make kind-cluster-kubevirt and everything seems to work as advertised in Tilt. I did make serve to follow through the docs, and I think they're clear and the tab formatting works fine.
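For anyone who wants to reproduce that check, the flow amounts to roughly the sketch below. It assumes the make targets named above plus the usual tilt up entry point for the Tilt workflow; the exact sequence is an assumption, not an excerpt from this PR.
# Rough local verification flow (sketch; target names as mentioned in this thread)
make kind-cluster-kubevirt   # create the kind cluster prepared for the KubeVirt/CAPK workflow
tilt up                      # start the Tilt dev loop against that cluster
make serve                   # serve the book locally to review the new docs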
LGTM label has been added. Git tree hash: b986b25002289d3e30b1412e76e01616859efe9d
Thx! /lgtm /assign @nunnatsa @fabriziopandini
Force-pushed from 56bb5e7 to fe92230
Just some comments on the doc.
Great work on the bash scripts!
Force-pushed from fe92230 to eb53bc0
I think I've addressed all the feedback. Happy to get a final review 🙏
Just a few nits, otherwise lgtm, but I'm not sure when I'll have bandwidth to test; hopefully someone can get there before me.
hack/kind-install-for-capk.sh (outdated)
while [[ -z $(kubectl -n metallb-system get pods \
  -l app=metallb,component=controller -o jsonpath="{.items[0].metadata.name}" 2>/dev/null) ]]; do
  sleep 2
done
It seems that waiting for the MetalLB Deployment and DaemonSet pods could be simplified a bit with the kubectl wait option:
kubectl wait --for condition=available deployment controller -n metallb-system --timeout 150s
kubectl wait --for condition=ready pod -l app=metallb,component=speaker -n metallb-system --timeout 150s
I used the loop because with kubectl wait I kept getting errors: the script was trying to check for Ready status on a non-existent pod. We need to allow some time for the Deployment or DaemonSet controller to create the pods.
Good point about checking for available status on the Deployment rather than on the pods; that works. But it looks like we don't have an equivalent for DaemonSets.
We could do what you suggested and check for an available Deployment and Ready pods, however there would be a race between speaker pod creation and the wait on the speaker pods.
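For illustration, one way to avoid that race is sketched below: block until the speaker DaemonSet has created at least one pod object, then hand off to kubectl wait. This is only a sketch of the idea, not the code adopted in this PR.
# Wait until at least one speaker pod object exists...
until kubectl -n metallb-system get pods -l app=metallb,component=speaker -o name 2>/dev/null | grep -q .; do
  sleep 2
done
# ...then wait for those pods to actually become Ready.
kubectl wait pods -n metallb-system -l app=metallb,component=speaker --for condition=Ready --timeout 5m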
I've adopted your suggestion for now to make progress. Maybe we can leave the race be since it's probably quite rare that it would trip someone.
Yes, I totally agree about the potential race between the controller and speaker pods.
I think I've found a solution for waiting on the DaemonSet's ready condition:
kubectl wait -n metallb-system daemonsets.apps speaker --for=jsonpath='{.status.numberReady}'=$(kubectl get daemonsets.apps speaker -n metallb-system -ojsonpath='{.status.currentNumberScheduled}')
Thanks, but it doesn't work: when trying to run
kubectl wait pods -n metallb-system -l app=metallb,component=speaker --for condition=Ready --timeout 5m
right after
kubectl wait -n metallb-system daemonsets.apps speaker \
  --for jsonpath='{.status.numberReady}'="$(kubectl get daemonsets.apps speaker -n metallb-system -ojsonpath='{.status.currentNumberScheduled}')" --timeout 5m
I get:
error: no matching resources found
It's strange, but let's keep the original approach. Thank you.
Actually, I've already adopted your suggestion partially, just without waiting for the DaemonSet (see the current state of the code). Do you think we should go back to the sleep loops?
No, the adopted part is good. Let's keep it without waiting for the DaemonSet, since that approach doesn't seem to work in some cases.
That said, this code should probably work; I tried running it locally and it works for me:
kubectl wait -n metallb-system daemonsets.apps -l app=metallb,component=speaker --for=jsonpath='{.status.numberReady}'=$(kubectl get daemonsets.apps -l app=metallb,component=speaker -n metallb-system -ojsonpath='{.items[0].status.currentNumberScheduled}')
It's "working" for me only because by the time the controller deployment has converged the speaker DS has converged, too. I ran
kubectl wait -n metallb-system daemonsets.apps -l app=metallb,component=speaker \
--for=jsonpath='{.status.numberReady}'="$(kubectl get daemonsets.apps -l app=metallb,component=speaker -n metallb-system -ojsonpath='{.items[0].status.currentNumberScheduled}')"
kubectl wait pods -n metallb-system -l app=metallb,component=speaker --for condition=Ready --timeout 5m
for testing and got:
error: no matching resources found
Let's keep the current state, and if anyone ever complains about the race we can switch to sleep loops.
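For reference, the approach that was kept amounts to roughly the following, based on the commands quoted in this thread; the exact flags and timeouts in the merged script may differ.
# Controller: the Deployment's available condition covers its pod being up.
kubectl wait --for condition=available deployment controller -n metallb-system --timeout 150s
# Speaker: wait on the pods directly, accepting the small race if the DaemonSet has not created them yet.
kubectl wait pods -n metallb-system -l app=metallb,component=speaker --for condition=Ready --timeout 5m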
Force-pushed from 2b628c3 to f45295f
I'd like to merge this ASAP since we depend on this PR for the Node Bootstrapping working group (https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/community/20241112-node-bootstrapping.md). We can't test Ignition functionality using CAPD, so we need the CAPK Tilt workflow. I think we can follow up with fixes or enhancements at any point later on, but for now let's have something that works. Also, this is a dev tool rather than a user-facing feature, so IMO it's OK if it's not 100% polished.
Last nit, happy to merge afterwards :-)
Signed-off-by: Johanan Liebermann <jliebermann@microsoft.com>
Signed-off-by: Johanan Liebermann <jliebermann@microsoft.com>
Thanks @chrischdi. I think we should be good to go.
/assign (I want to take a look at the delta since my last lgtm)
lgtm from my side, pending Stefan's review :-)
Thank you! /lgtm
LGTM label has been added. Git tree hash: e48a1686713852f0d48b09e29f0462214759ac18
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: sbueringer. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
What this PR does / why we need it:
This PR adds support for local development using KubeVirt as an alternative to CAPD.
This is useful in cases where CAPD can't be used for whatever reason, with one example being developing Ignition-related features (since Ignition runs in early boot and therefore can't be containerized easily).
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): None
/area devtools