Conversation
This issue is currently awaiting triage. If the repository maintainers determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is **APPROVED**

This pull-request has been approved by: LogicalShark

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
/assign hdp617
Force-pushed from eb87994 to 11f7455
For the quickstart, I was approaching it from a customer's perspective: "What's the fastest way to get a self-managed k8s cluster?" The current approach is great for local dev changes to the component, but I'm more interested in targeting the customer experience here. How about separating this into two?
Force-pushed from 03427c3 to 1cc9372
Force-pushed from 2ed2381 to 4102b67
Force-pushed from 3d21946 to b9ed23d
Signed-off-by: LogicalShark <maralder@google.com>
zhang-xuebin left a comment:
Nice work, thanks @LogicalShark !
```shell
# This automatically extracts your GCE node's service account using kubectl and gcloud.
NODE_NAME=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
NODE_ZONE=$(kubectl get node $NODE_NAME -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}')
NODE_SA=$(gcloud compute instances describe $NODE_NAME \
```
`NODE_SA`
The kubectl command here may be unnecessary. It looks like this is just getting the VM's default SA, right? Why not just use `gcloud compute instances describe $VM_name`? Since it's a manual cluster, the user should know which VM it is.
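The reviewer's suggestion might look like this; a sketch, where `VM_NAME` and `ZONE` are placeholder values the user supplies, since a manually created cluster implies they already know the VM's name:

```shell
# Sketch: skip kubectl entirely and read the VM's default service account
# straight from gcloud. VM_NAME and ZONE are placeholders for illustration.
VM_NAME=k8s-master
ZONE=us-central1-a
NODE_SA=$(gcloud compute instances describe "$VM_NAME" \
  --zone="$ZONE" \
  --format='value(serviceAccounts[0].email)')
echo "$NODE_SA"
```

This drops the dependency on a working kubeconfig, which the quickstart may not have at this point anyway.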
```yaml
- --cloud-provider=gce
- --allocate-node-cidrs=true
- --cluster-cidr=10.4.0.0/14
- --cluster-name=kops-k8s-local
```
`cluster-name`
Does this always exist? If I just create a cluster using kubeadm, it doesn't have a "cluster-name", does it? In that case, should we just make up a name?
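For context: kubeadm does give every cluster a name, defaulting to `kubernetes`, and a made-up name can be pinned in the kubeadm config so the controller-manager flag has something stable to match. A sketch, reusing `kops-k8s-local` from the doc:

```shell
# kubeadm's ClusterConfiguration defaults clusterName to "kubernetes";
# setting it explicitly lets --cluster-name on the CCM match it.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: kops-k8s-local   # made up, as the reviewer suggests; any stable name works
EOF
grep clusterName kubeadm-config.yaml
```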
> [!NOTE]
> If you skipped building your own image in Step 1 and chose to deploy the public upstream image (`k8scloudprovidergcp/cloud-controller-manager:latest`), you **must** also include `command: ["/cloud-controller-manager"]` in your patch's `containers` block. Locally built Dockerfile images automatically set the correct `ENTRYPOINT`, so they do not require this override!

> [!IMPORTANT]
`[!IMPORTANT]`

This applies no matter whether we build our own image, right? i.e. even if we build our own image, we still need to specify these args.
```shell
  - --use-service-account-credentials=true
  - --v=2
EOF
(cd deploy/packages/default && kustomize edit add patch --path args-patch.yaml)
```
`kustomize edit add patch --path args-patch.yaml`
If we use a locally built image, there is already a kustomization.yaml in this folder (probably from line 61). How do these two reconcile?
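For what it's worth, `kustomize edit add patch` appends to an existing kustomization.yaml rather than replacing it, so an earlier image override and this args patch should end up side by side. Roughly, as a sketch with a placeholder image entry:

```yaml
# deploy/packages/default/kustomization.yaml after both edits (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
  - name: example/cloud-controller-manager   # placeholder: earlier locally built image override
    newTag: dev
patches:
  - path: args-patch.yaml                    # appended by `kustomize edit add patch`
```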
```shell
gcloud compute instances create k8s-master \
  --zone=$ZONE \
  --machine-type=$MACHINE_TYPE \
  --image-family=$IMAGE_FAMILY \
```
Curious whether IMAGE_FAMILY/IMAGE_PROJECT are needed, or whether we can leave them at the default?
```shell
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0 # Match the version you installed
```
> version you installed

How do we find this out?
```shell
  --scopes=cloud-platform \
  --tags=k8s-master

# Worker instances
```
I see you created the worker VM, but I didn't see the worker node join the cluster. Did I miss anything?
Fixes #1019