Feature/v1beta1 #20
Conversation
Skipping CI for Draft Pull Request.

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rejoshed

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
Given the CAPI version upgrade, should we do a minor version bump to 0.4, with an updated entry in metadata.yaml? That should make it easier to work with clusterctl as well. Otherwise code LGTM, I'd just be curious to see how it was tested.
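For reference, a 0.4 entry in clusterctl's provider metadata format might look like the following. This is a sketch only; the exact release series values are illustrative, not taken from this PR:

```yaml
# metadata.yaml (clusterctl provider metadata)
apiVersion: clusterctl.cluster.x-k8s.io/v1alpha3
kind: Metadata
releaseSeries:
  - major: 0
    minor: 4
    contract: v1beta1
```

clusterctl uses the `contract` field to decide which provider release is compatible with which CAPI API version.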
config/rbac/service_account.yaml
Outdated
@@ -1,5 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: controller-manager
Why not controller-manager? I believe other CAPI providers have separate Docker images for a controller vs. a manager. In our case, they're combined. Shouldn't the name reflect that?
I was following along with the Cluster API book's upgrade instructions. They used the name manager, and I wasn't sure whether it was relevant or not.
This change wasn't intended for the final PR unless it was necessary.
I just swapped it back, as it wasn't.
It wasn't! Not at the time you reviewed, anyway. It's still a draft atm, but I'll add a description and write about some of my testing in the final PR. : ) Currently I've created and deleted a cluster. I'll probably try scaling up and down before the final PR.
Sure! I'll get'er done.
Makefile
Outdated
@@ -134,7 +134,7 @@ undeploy: bin/kustomize ## Undeploy controller from the K8s cluster specified in
.PHONY: binaries
binaries: bin/controller-gen bin/kustomize bin/ginkgo bin/golangci-lint bin/mockgen bin/kubectl ## Locally install all needed bins.
bin/controller-gen: ## Install controller-gen to bin.
-	GOBIN=$(PROJECT_DIR)/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.2.9
+	GOBIN=$(PROJECT_DIR)/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.8.0
Latest release is 0.8.0.
@@ -181,19 +181,12 @@ tilt-up: cluster-api kind-cluster cluster-api/tilt-settings.json manifests cloud
	export CLOUDSTACK_B64ENCODED_SECRET=$$(base64 -i cloud-config) && cd cluster-api && tilt up

.PHONY: kind-cluster
-kind-cluster: cluster-api cluster-api/hack/kind-install-for-capd.sh # Create a kind cluster with a local Docker repository.
+kind-cluster: cluster-api # Create a kind cluster with a local Docker repository.
Since we're now on v1.+, we no longer have to check out the kind install file, as it's present by default.
@@ -37,7 +37,7 @@ func (r *CloudStackCluster) SetupWebhookWithManager(mgr ctrl.Manager) error {
		Complete()
}

-//+kubebuilder:webhook:path=/mutate-infrastructure-cluster-x-k8s-io-v1alpha3-cloudstackcluster,mutating=true,failurePolicy=fail,sideEffects=None,groups=infrastructure.cluster.x-k8s.io,resources=cloudstackclusters,verbs=create;update,versions=v1alpha3,name=mcloudstackcluster.kb.io
+//+kubebuilder:webhook:path=/mutate-infrastructure-cluster-x-k8s-io-v1beta1-cloudstackcluster,mutating=true,failurePolicy=fail,sideEffects=None,groups=infrastructure.cluster.x-k8s.io,resources=cloudstackclusters,verbs=create;update,versions=v1beta1,name=mcloudstackcluster.kb.io,admissionReviewVersions=v1beta1
I did not see admissionReviewVersions in the CAPI book's upgrade instructions, so it took me a bit to realize it was needed. Grr...
ctx, cancel = context.WithCancel(context.TODO())

By("bootstrapping test environment")
testEnv = &envtest.Environment{
	CRDDirectoryPaths:     []string{filepath.Join("..", "..", "config", "crd", "bases")},
	ErrorIfCRDPathMissing: false,
	WebhookInstallOptions: envtest.WebhookInstallOptions{
		DirectoryPaths: []string{filepath.Join("..", "..", "config", "webhook")},
API name change apparently.
@@ -35,8 +35,6 @@ import (
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/envtest"
	"sigs.k8s.io/controller-runtime/pkg/envtest/printer"
-	logf "sigs.k8s.io/controller-runtime/pkg/log"
-	"sigs.k8s.io/controller-runtime/pkg/log/zap"
Consolidated our logging to use only klog. This seems to be the default across providers.
---
-apiVersion: admissionregistration.k8s.io/v1beta1
+apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
These file changes were generated by controller-gen.
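The reason the generated config changed: in admissionregistration.k8s.io/v1, sideEffects and admissionReviewVersions became required fields on each webhook, which is also why the kubebuilder marker had to grow admissionReviewVersions. A minimal sketch of a v1 MutatingWebhookConfiguration (names and paths here are illustrative, not copied from the generated file):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutating-webhook-configuration
webhooks:
  - name: mcloudstackcluster.kb.io
    admissionReviewVersions: ["v1beta1"]  # required in v1; optional in v1beta1
    sideEffects: None                     # required in v1
    failurePolicy: Fail
    clientConfig:
      service:
        name: webhook-service
        namespace: system
        path: /mutate-infrastructure-cluster-x-k8s-io-v1beta1-cloudstackcluster
    rules:
      - apiGroups: ["infrastructure.cluster.x-k8s.io"]
        apiVersions: ["v1beta1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["cloudstackclusters"]
```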
@@ -63,8 +63,7 @@ type CloudStackClusterReconciler struct {
//
// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile
-func (r *CloudStackClusterReconciler) Reconcile(req ctrl.Request) (retRes ctrl.Result, retErr error) {
-	ctx := context.Background()
+func (r *CloudStackClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (retRes ctrl.Result, retErr error) {
The Reconciler interface's method signature is different for v1beta1: a context is now passed in, instead of the reconciler creating its own.
		&source.Kind{Type: &capiv1.Cluster{}},
-		&handler.EnqueueRequestsFromMapFunc{
-			ToRequests: util.ClusterToInfrastructureMapFunc(infrav1.GroupVersion.WithKind("CloudStackCluster"))},
+		handler.EnqueueRequestsFromMapFunc(
EnqueueRequestsFromMapFunc is now a function and not a struct.
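A self-contained sketch of that API shape change, using local stand-ins (assumptions) for the controller-runtime handler types:

```go
package main

import "fmt"

// Stand-ins (assumptions) mirroring controller-runtime's handler package.
type Request struct{ Namespace, Name string }
type MapFunc func(obj string) []Request

type eventHandler struct{ mapFn MapFunc }

// Pre-v1 controller-runtime: callers built a struct literal by hand:
//
//	&handler.EnqueueRequestsFromMapFunc{ToRequests: mapper}
//
// v1+: it is a constructor function that takes the map func directly:
//
//	handler.EnqueueRequestsFromMapFunc(mapper)
func EnqueueRequestsFromMapFunc(fn MapFunc) *eventHandler {
	return &eventHandler{mapFn: fn}
}

func main() {
	h := EnqueueRequestsFromMapFunc(func(obj string) []Request {
		return []Request{{Namespace: "default", Name: obj}}
	})
	fmt.Println(h.mapFn("capi-cluster")[0].Name)
}
```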
@@ -51,33 +52,70 @@ func init() {
	//+kubebuilder:scaffold:scheme
}

-func main() {
+type managerOpts struct {
I really didn't like that all the options passed via args were just individual vars. In use, these vars did not indicate that they were input from the user at startup.
Now, instead of cloudConfigFile, we have opts.CloudConfigFile. This also helps to remove the opt vars from main, so for clarity alone, this is a win!
	// Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
	// to ensure that exec-entrypoint and run can make use of them.
	"github.com/aws/cluster-api-provider-cloudstack/pkg/cloud"
	flag "github.com/spf13/pflag"
pflag is a drop-in replacement for the builtin flag package that supports POSIX-compatible flags.
So --metrics-addr instead of -metrics-addr.
		"cloud-config-file",
		"/config/cloud-config",
		"Overrides the default path to the cloud-config file that contains the CloudStack credentials.")
	flag.StringVar(
There are a few changes to input params, as the upgrade instructions required renaming, adding, or changing default values.
Nothing too important, though.
/retest
metadata.yaml
Outdated
contract: v1alpha3
- major: 0
Should we still be including the v1alpha3 refs if we no longer support it?
I'm not really sure. I just made this match the way other providers set this.
Gotcha. They offer backward-compatible v1alpha3/4 APIs, though. If/since we're no longer offering those API versions, I'd rather we just make like they never existed. I figure this data is here to drive some sort of compatibility mechanism, which I'd rather not inadvertently activate if we're not compatible with v1alpha3 anymore.
Okay, I'll just delete the v1alpha3 stuff along with the early versionings.
to be clear, this commit itself won't serve as a version bump/release, right? We'll keep developing against it before we cut a final tag/release?
Yes, for sure. : )
The one surprise for me here was that we aren't offering v1alpha3 as an alternative supported API version anymore. I had imagined you were adding v1beta1 as a new hub, with conversion to/from v1alpha3. I'm on the fence about whether it's needed / worth the effort, given how cross-version applications seem to be blocked by other CAPI-isms like clusterctl.
go.sum
Outdated
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/daviddengcn/go-colortext v0.0.0-20160507010035-511bcaf42ccd/go.mod h1:dv4zxwHi5C/8AeI+4gX4dCWOIvNi7I6JCSX0HvlKPgE=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
This library is bad according to Dependabot. Let's keep the replace in the go.mod.
metadata.yaml
Outdated
contract: v1alpha3
- major: 0
to be clear, this commit itself won't serve as a version bump/release, right? We'll keep developing against it before we cut a final tag/release?
Can we confirm with the stakeholders that not supporting v1alpha3 is acceptable before we merge this change?
Force-pushed from bc4f5f3 to 2af5502.
/lgtm
Issue # Bump to v1beta1 -- can't get tickets to load atm, so can't get number.

Description of changes:
Upgraded to support CAPI v1beta1 resources. See comments in the Files Changed tab for further descriptions.

Testing performed:
Created a cluster in a shared network.
Deleted the cluster.
Created a cluster in an isolated network.
Scaled from 1:1 nodes to 2:3 back to 1:1.
Deleted the cluster.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.