
Quick Start with Docker provider generating 9 (nine) nodes cluster instead of 6 (six) #11955

Open
dmvolod opened this issue Mar 11, 2025 · 1 comment
Labels
area/provider/infrastructure-docker: Issues or PRs related to the docker infrastructure provider
kind/bug: Categorizes issue or PR as related to a bug.
needs-priority: Indicates an issue lacks a `priority/foo` label and requires one.
needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@dmvolod
Member

dmvolod commented Mar 11, 2025

What steps did you take and what happened?

I went through the Quick Start with the Docker provider; a sketch of the commands I assume were run is included below, followed by the results I got.
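
For reference, this is my best guess at the Quick Start invocation: the cluster name and Kubernetes version match the output below, while the development flavor and the machine counts of 3 control plane / 3 workers are taken from the guide, so treat the exact flags as assumptions rather than a capture of what I typed.

clusterctl generate cluster capi-quickstart \
  --flavor development \
  --kubernetes-version v1.32.0 \
  --control-plane-machine-count=3 \
  --worker-machine-count=3 \
  > capi-quickstart.yaml
kubectl apply -f capi-quickstart.yaml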

kubectl get machines
NAME                                     CLUSTER           NODENAME                                 PROVIDERID                                          PHASE     AGE   VERSION
capi-quickstart-7bvjq-7ddxm              capi-quickstart   capi-quickstart-7bvjq-7ddxm              docker:////capi-quickstart-7bvjq-7ddxm              Running   18m   v1.32.0
capi-quickstart-7bvjq-czg5m              capi-quickstart   capi-quickstart-7bvjq-czg5m              docker:////capi-quickstart-7bvjq-czg5m              Running   17m   v1.32.0
capi-quickstart-7bvjq-rjpmm              capi-quickstart   capi-quickstart-7bvjq-rjpmm              docker:////capi-quickstart-7bvjq-rjpmm              Running   11m   v1.32.0
capi-quickstart-md-0-bvhfn-kr69p-2j4t8   capi-quickstart   capi-quickstart-md-0-bvhfn-kr69p-2j4t8   docker:////capi-quickstart-md-0-bvhfn-kr69p-2j4t8   Running   18m   v1.32.0
capi-quickstart-md-0-bvhfn-kr69p-k8zb9   capi-quickstart   capi-quickstart-md-0-bvhfn-kr69p-k8zb9   docker:////capi-quickstart-md-0-bvhfn-kr69p-k8zb9   Running   18m   v1.32.0
capi-quickstart-md-0-bvhfn-kr69p-q27mz   capi-quickstart   capi-quickstart-md-0-bvhfn-kr69p-q27mz   docker:////capi-quickstart-md-0-bvhfn-kr69p-q27mz   Running   18m   v1.32.0
worker-3dkl6x                            capi-quickstart   capi-quickstart-worker-3dkl6x            docker:////capi-quickstart-worker-3dkl6x            Running   17m   v1.32.0
worker-3fq5r4                            capi-quickstart   capi-quickstart-worker-3fq5r4            docker:////capi-quickstart-worker-3fq5r4            Running   17m   v1.32.0
worker-uw8gfz                            capi-quickstart   capi-quickstart-worker-uw8gfz            docker:////capi-quickstart-worker-uw8gfz            Running   17m   v1.32.0

kubectl --kubeconfig capi-quickstart.kubeconfig get nodes
NAME                                     STATUS   ROLES           AGE     VERSION
capi-quickstart-7bvjq-7ddxm              Ready    control-plane   10m     v1.32.0
capi-quickstart-7bvjq-czg5m              Ready    control-plane   9m45s   v1.32.0
capi-quickstart-7bvjq-rjpmm              Ready    control-plane   3m49s   v1.32.0
capi-quickstart-md-0-bvhfn-kr69p-2j4t8   Ready    <none>          10m     v1.32.0
capi-quickstart-md-0-bvhfn-kr69p-k8zb9   Ready    <none>          9m53s   v1.32.0
capi-quickstart-md-0-bvhfn-kr69p-q27mz   Ready    <none>          10m     v1.32.0
capi-quickstart-worker-3dkl6x            Ready    <none>          9m28s   v1.32.0
capi-quickstart-worker-3fq5r4            Ready    <none>          9m28s   v1.32.0
capi-quickstart-worker-uw8gfz            Ready    <none>          9m28s   v1.32.0

It seems that the clusterctl generate logic has a bug for the Docker provider: besides the three control-plane machines and the three capi-quickstart-md-0-* workers, three extra worker-* machines are created.
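
As a follow-up diagnostic sketch (not output I captured from this run; machinedeployments and machinepools are the standard Cluster API resource names), listing the worker-owning objects in the management cluster should show whether the extra worker-* machines come from a second MachineDeployment or from a MachinePool:

kubectl get machinedeployments,machinepools -A
kubectl get machines -o custom-columns=NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].kind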

What did you expect to happen?

I expected a 6-node cluster to be generated, as described in the guide.

Cluster API version

clusterctl version
clusterctl version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.5", GitCommit:"068c0f3c8ed3a7e382952f8ae68b648aee240a60", GitTreeState:"clean", BuildDate:"2025-02-18T19:07:25Z", GoVersion:"go1.22.12", Compiler:"gc", Platform:"linux/amd64"}

Kubernetes version

kubectl version
Client Version: v1.29.14
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.32.2

Anything else you would like to add?

No response

Label(s) to be applied

/kind bug
/area provider/infrastructure-docker

@k8s-ci-robot added the kind/bug, area/provider/infrastructure-docker, needs-priority, and needs-triage labels on Mar 11, 2025
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
