In this blog post I will write about working on and tweaking a Laravel Kubernetes setup I am working on called Laravel K8. This post goes through setting things up locally and then running Laravel in the local cluster. It will also have some information on adding it to a remote cluster, but I won’t get into that too much now as this will be a long post already.
Table of Contents
- Repository
- Local setup
- Kubectl
- Minikube
- Minikube Initiation
- Incompatible Minikube Build
- Image Pull Issues
- Docker K8 Images
- Kubectl Configuration
- Minikube Admin
- Minikube and Docker
- Bye, Bye Tiller
- RBAC in Charts
- Namespace
- Nginx Ingress
- Ingress URL
- Minikube Load Balancer Issues
- Minikube and Ingress
- Cert Manager
- ImagePullBackOff Error
- Dnsmasq Fix
- ACME Production Cluster
- Kubectl Remote Connection
- Kubectl and Digital Ocean
Repository
The current setup is a Laravel 5.5 app made by Bestmomo, cloned by Eamon to work with Kubernetes, Helm and Docker, which I am upgrading to Laravel 6. I am also working on updating its Kubernetes and Docker settings. You are all welcome to participate.
Local Setup
Docker and the latest kubectl are running. Docker for Mac I just updated and it is at 2.2.0.0 with Docker Engine 19.03.5, Compose 1.25.2 and Kubernetes 1.15.5. Now we just need to make sure that Minikube is running and kubectl is working as needed.
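To double-check these versions yourself, something along these lines should work (output formats may differ slightly between releases):
docker version --format '{{.Server.Version}}'   # Docker Engine, e.g. 19.03.5
docker-compose version --short                  # Compose, e.g. 1.25.2
kubectl version --client --short                # kubectl client only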
Kubectl
Let’s check how kubectl is doing locally. Installed with Homebrew, but what about the version?
kubectl version
➜ laravel-k8 git:(master) kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-23T14:21:54Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
So it is working, but it cannot connect to the local Minikube cluster.
Minikube
Let’s see how Minikube is doing. It is probably the reason kubectl is not able to connect:
➜ laravel-k8 git:(master) minikube service list
zsh: command not found: minikube
So we need to make sure Minikube works on our Mac with Zsh:
➜ laravel-k8 git:(master) brew link minikube
Linking /usr/local/Cellar/minikube/1.6.2… 3 symlinks created
Now it does work
➜ laravel-k8 git:(master) minikube service list
💣 Failed to get service URL: machine does not exist
📌 Check that minikube is running and that you have specified the correct namespace (-n flag) if required.
Minikube Initiation
The issue you see here is that Minikube may not be running yet. So let’s start Minikube, shall we?
➜ laravel-k8 git:(master) minikube start
😄 minikube v1.6.2 on Darwin 10.15.2
✨ Selecting '' driver from existing profile (alternates: [virtualbox])
💥 The existing "minikube" VM that was created using the "virtualbox" driver, and is incompatible with the "hyperkit" driver.
👉 To proceed, either:
1) Delete the existing "minikube" cluster using: 'minikube delete' * or * 2) Start the existing "minikube" cluster using: 'minikube start --vm-driver=virtualbox'
💣 Exiting.
Incompatible Minikube Build
Well, another issue, this time with an incompatible driver. Let’s just remove the old VirtualBox machine and use the Hyperkit driver instead, as we have no use for it and are starting anew:
➜ laravel-k8 git:(master) minikube delete
⚠️ Unable to get the status of the minikube cluster.
🙄 "minikube" cluster does not exist. Proceeding ahead with cleanup.
🔥 Removing /Users/jasper/.minikube/machines/minikube …
💔 The "minikube" cluster has been deleted.
🔥 Successfully deleted profile "minikube"
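With the old VirtualBox machine gone, the next start can target Hyperkit explicitly if auto-detection does not pick it. A small sketch, assuming the minikube 1.6 flag name --vm-driver (newer releases call it --driver):
minikube start --vm-driver=hyperkit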
Image Pull Issues
Then we restarted Minikube and it tried to download the needed driver and images and hit a snag:
minikube start
➜ laravel-k8 git:(master) minikube start
😄 minikube v1.6.2 on Darwin 10.15.2
✨ Automatically selected the 'hyperkit' driver (alternates: [virtualbox])
💾 Downloading driver docker-machine-driver-hyperkit:
> docker-machine-driver-hyperkit.sha256: 65 B / 65 B [---] 100.00% ? p/s 0s
> docker-machine-driver-hyperkit: 10.81 MiB / 10.81 MiB 100.00% 172.43 KiB
🔑 The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /Users/jasper/.minikube/bin/docker-machine-driver-hyperkit $ sudo chmod u+s /Users/jasper/.minikube/bin/docker-machine-driver-hyperkit
Password:
💿 Downloading VM boot image …
> minikube-v1.6.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
> minikube-v1.6.0.iso: 150.93 MiB / 150.93 MiB [] 100.00% 13.71 MiB p/s 11s
🔥 Creating hyperkit VM (CPUs=2, Memory=2000MB, Disk=20000MB) …
And we ran into errors pulling in the images:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.2:49731->192.168.64.1:53: read: connection refused
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
Docker K8 Images
Reading https://github.com/kubernetes/kubernetes/issues/85498 I decided to pull in two images myself:
➜ laravel-k8 git:(master) docker pull k8s.gcr.io/kube-proxy:v1.16.3
v1.16.3: Pulling from kube-proxy
39fafc05754f: Pull complete
db3f71d0eb90: Pull complete
fa5e785d928f: Pull complete
Digest: sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34
Status: Downloaded newer image for k8s.gcr.io/kube-proxy:v1.16.3
k8s.gcr.io/kube-proxy:v1.16.3
➜ laravel-k8 git:(master) docker pull k8s.gcr.io/kube-proxy:v1.17.0
v1.17.0: Pulling from kube-proxy
597de8ba0c30: Pull complete
3f0663684f29: Pull complete
e1f7f878905c: Pull complete
3029977cf65d: Pull complete
cc627398eeaa: Pull complete
d3609306ce38: Pull complete
d7c1c982f192: Pull complete
Digest: sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75
Status: Downloaded newer image for k8s.gcr.io/kube-proxy:v1.17.0
k8s.gcr.io/kube-proxy:v1.17.0
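Pulling on the host alone is not automatically enough, as the Minikube VM runs its own Docker daemon. One way to hand host-pulled images to the VM, assuming your minikube version ships the cache subcommand, is:
minikube cache add k8s.gcr.io/kube-proxy:v1.17.0
minikube cache add k8s.gcr.io/coredns:1.6.5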
and then I did another minikube startup:
minikube start
laravel-k8 git:(master) minikube start
😄 minikube v1.6.2 on Darwin 10.15.2
✨ Selecting 'hyperkit' driver from existing profile (alternates: [virtualbox])
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🏃 Using the running hyperkit "minikube" VM …
⌛ Waiting for the host to be provisioned …
⚠️ VM may be unable to resolve external DNS records
⚠️ VM is unable to access k8s.gcr.io, you may need to configure a proxy or set --image-repository
🐳 Preparing Kubernetes v1.17.0 on Docker '19.03.5' …
🚀 Launching Kubernetes …
🏄 Done! kubectl is now configured to use "minikube"
The VM was, however, again unable to connect to k8s.gcr.io. Kubernetes v1.17.0, which I pulled in, was used on Docker 19.03.5 instead. So let’s see if kubectl is now working:
➜ laravel-k8 git:(master) kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-23T14:21:54Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Yes, Sir, it is!
Kubectl Configuration
Let’s double-check the configuration and where kubectl is connecting to:
kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/jasper/.minikube/ca.crt
    server: https://192.168.64.2:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /Users/jasper/.minikube/client.crt
    client-key: /Users/jasper/.minikube/client.key
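Two related commands to confirm and, if needed, switch the active context:
kubectl config current-context      # should print: minikube
kubectl config use-context minikube # switch to it if another context is active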
So it seems to use our local setup which is what we want for now. When I go to https://192.168.64.2:8443/ I get to see an insecure page and only this:
{ "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"", "reason": "Forbidden", "details": { }, "code": 403 }
When I list all services I see only two at the moment
NAMESPACE | NAME | TARGET PORT | URL |
---|---|---|---|
default | kubernetes | No node port | |
kube-system | kube-dns | No node port | |
And when I do a cluster check I see
➜ laravel-k8 git:(master) ✗ kubectl cluster-info
Kubernetes master is running at https://192.168.64.2:8443
KubeDNS is running at https://192.168.64.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
So this all confirms we are working locally.
Namespace Creation
The next step is to create a namespace for our cluster:
kubectl create namespace laravel6
That works just fine:
laravel-k8 git:(master) ✗ kubectl create namespace laravel6
namespace/laravel6 created
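To avoid typing --namespace laravel6 on every command you can make it the default namespace for the current context:
kubectl config set-context --current --namespace=laravel6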
Minikube Admin
A cool Minikube tool to check things in a GUI is the Minikube Dashboard. You can start it using
minikube dashboard
Minikube and Docker
If you want to use Minikube’s built-in Docker daemon use:
eval $(minikube docker-env)
Once done you can use
docker ps
and see all Docker containers running inside the Minikube VM.
See https://gist.github.com/kevin-smets/b91a34cea662d0c523968472a81788f7
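This is handy for local images: building against Minikube’s Docker daemon means the cluster can use the image without pushing it to a registry. A quick sketch with a hypothetical image name:
eval $(minikube docker-env)
docker build -t laravel-k8-app:local .
# reference laravel-k8-app:local in your manifests with imagePullPolicy: IfNotPresent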
Install RBAC for Tiller
Next up is the creation of Role-Based Access Control for Tiller using Helm. Tiller is the server-side part of Helm and RBAC is for secure role-based access:
Most cloud providers enable a feature called Role-Based Access Control – RBAC for short. If your cloud provider enables this feature, you will need to create a service account for Tiller with the right roles and permissions to access resources.
https://v2.helm.sh/docs/using_helm/
Let’s double check Helm is installed and working and if not do a
brew install kubernetes-helm
Once it is up and running we could do:
kubectl apply -f kubernetes/kubernetes-yaml/rbac-tiller.yaml
But we first checked the file in the repo. We had to update the API version and change the namespace to laravel6.
And then we wondered if we needed all this locally. Do we? It seems we can:
Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster. But for development, it can also be run locally, and configured to talk to a remote Kubernetes cluster.
So it seems we can continue. And we ran into errors here:
kubectl apply -f kubernetes/kubernetes-yaml/rbac-tiller.yaml
clusterrolebinding.rbac.authorization.k8s.io/tiller-clusterrolebinding created
error: unable to recognize "kubernetes/kubernetes-yaml/rbac-tiller.yaml": no matches for kind "ServiceAccount" in version "rbac.authorization.k8s.io/v1"
That was because the apiVersion for the ServiceAccount is supposed to be v1, not rbac.authorization.k8s.io/v1 as used for the ClusterRoleBinding.
apiVersion: v1
kind: ServiceAccount
Also see https://v2.helm.sh/docs/rbac/#example-service-account-with-cluster-admin-role
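For reference, the corrected file ends up looking roughly like this — a sketch based on the Helm v2 RBAC example, with our laravel6 namespace filled in:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: laravel6
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: laravel6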
When I reran the command I had success:
kubectl apply -f kubernetes/kubernetes-yaml/rbac-tiller.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller-clusterrolebinding configured
Don’t worry, we also updated the repository.
Bye, Bye Tiller
The command to init Helm with Tiller hit a snag:
helm init --tiller-namespace laravel6 --service-account tiller
It no longer works and only leads to Error: unknown flag: --tiller-namespace
Then I found out Helm 3 did away with Tiller. See: https://github.com/Azure/application-gateway-kubernetes-ingress/issues/697 and https://helm.sh/docs/faq/#changes-since-helm-2
Tiller is no longer used for security and RBAC. We can simply use the kubectl config for this: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
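In practice that means Helm 3 simply talks to whatever cluster your kubeconfig points at, no helm init required. A quick sanity check could look like this:
kubectl config current-context   # minikube
helm version --short             # should report a v3.x.x client
helm ls --namespace laravel6     # empty for now, but proves Helm can reach the cluster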
Restore to Pre Tiller Setup
So how to restore? Well delete:
kubectl delete -f kubernetes/kubernetes-yaml/rbac-tiller.yaml
is an option, however it implies:
- The resources were first created. It simply removes all of those, if you really want to “revert to the previous state” I’m not sure there are built-in tools in Kubernetes to do that (so you really would restore from a backup, if you have one)
- The containers did not modify the host machines: containers may mount the root filesystem and change it, or kernel subsystems (iptables, etc). The delete command would not revert it either, and in that case you really need to check the documentation for the product to see if they offer any official way to guarantee a proper cleanup
https://stackoverflow.com/a/57683241/460885
As we are testing locally I did a minikube delete and minikube start again. I will add my own backups to avoid this stuff at a later stage.
RBAC in Charts
We do however still need to add the new way to set Role-Based Access Control to our own custom charts. How would we go about that?
Content to be added here…
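In the meantime, here is a rough sketch of one common pattern: gate a ServiceAccount plus Role/RoleBinding behind an rbac.create value in the chart. The template name and rules below are hypothetical, not necessarily what the repo will end up using:
# templates/rbac.yaml
{{- if .Values.rbac.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Release.Name }}-app
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ .Release.Name }}-app
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ .Release.Name }}-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ .Release.Name }}-app
subjects:
  - kind: ServiceAccount
    name: {{ .Release.Name }}-app
{{- end }}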
Namespace
Let’s add a namespace, shall we? This is to put all our containers in their proper place.
kubectl create namespace laravel6
Nginx Ingress
To add Nginx Ingress with Helm 3 you first need to do a:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
See https://stackoverflow.com/a/57964140/460885
Then you should see
➜ laravel-k8 git:(master) helm repo add stable https://kubernetes-charts.storage.googleapis.com
"stable" has been added to your repositories
Then you can run
helm install nginx-ingress stable/nginx-ingress --wait --namespace laravel6 --set rbac.create=true,controller.service.externalTrafficPolicy=Local
And if all went well you should see Nginx Ingress running. In our case, though, the two Nginx Ingress pods were stuck in ImagePullBackOff (more on fixing that below):
kubectl get pod --all-namespaces
NAMESPACE     NAME                                             READY   STATUS             RESTARTS   AGE
kube-system   coredns-6955765f44-9k4nn                         1/1     Running            0          118m
kube-system   coredns-6955765f44-zp2qr                         1/1     Running            0          118m
kube-system   etcd-minikube                                    1/1     Running            0          118m
kube-system   kube-addon-manager-minikube                      1/1     Running            0          118m
kube-system   kube-apiserver-minikube                          1/1     Running            0          118m
kube-system   kube-controller-manager-minikube                 1/1     Running            0          118m
kube-system   kube-proxy-qrzc5                                 1/1     Running            0          118m
kube-system   kube-scheduler-minikube                          1/1     Running            0          118m
kube-system   storage-provisioner                              1/1     Running            0          118m
laravel6      nginx-ingress-controller-69d5dc598f-zfpwd        0/1     ImagePullBackOff   0          7m28s
laravel6      nginx-ingress-default-backend-659bd647bd-568kb   0/1     ImagePullBackOff   0          7m28s
Ingress URL
We also need to connect our IP address to Ingress. The URL variable needs to be stored in the shell before you run the command below, so do a
MY_URL=l6.domain.com # change this to your domain
This will only work if you have already set things up with your remote or local hosting provider to use that A record.
Then you can run the actual command.
INGRESS_IP=$(kubectl get svc --namespace laravel6 --selector=app=nginx-ingress,component=controller -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}');echo ${INGRESS_IP}
However, how do we do this locally? With
kubectl cluster-info
we can get the ip address:
➜ laravel-k8 git:(master) ✗ kubectl cluster-info
Kubernetes master is running at https://192.168.64.3:8443
KubeDNS is running at https://192.168.64.3:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Going to https://192.168.64.3:8443 we are again warned that the connection is not private, and we see the same raw data as before:
{ "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"", "reason": "Forbidden", "details": { }, "code": 403 }
So access is denied for us. The command kubectl cluster-info dump gives us more information, but it is kind of overwhelming.
Minikube Load Balancer Issues
When we check for ip addresses using get svc we do see that the load balancer is stuck in pending mode:
kubectl get svc --namespace laravel6
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
cert-manager                    ClusterIP      10.96.214.109   <none>        9402/TCP                     5d20h
cert-manager-webhook            ClusterIP      10.96.198.14    <none>        443/TCP                      5d20h
nginx-ingress-controller        LoadBalancer   10.96.54.46     <pending>     80:32258/TCP,443:32087/TCP   5d21h
nginx-ingress-default-backend   ClusterIP      10.96.102.145   <none>        80/TCP                       5d21h
The reason why:
Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.
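So locally the EXTERNAL-IP will stay pending. For local work you can fall back on the Minikube VM’s IP instead of a load balancer IP:
INGRESS_IP=$(minikube ip); echo ${INGRESS_IP}   # e.g. 192.168.64.3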
Minikube and Ingress
We can either use MetalLB or Minikube’s tutorial to set up an Ingress. We did the latter.
We started out with the tutorial this way:
kubectl run web --image=gcr.io/google-samples/hello-app:1.0 --port=8080
kubectl expose deployment web --target-port=8080 --type=NodePort
kubectl get service web
minikube service web --url
Created kubernetes/kubernetes-yaml/minikube-ingress.yml with
apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: laravel-k8.test
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 8080
Then you can apply it using
kubectl apply -f kubernetes/kubernetes-yaml/minikube-ingress.yml
ingress.networking.k8s.io/example-ingress created
You can then check for the IP address using
kubectl get ingress
NAME              HOSTS             ADDRESS        PORTS   AGE
example-ingress   laravel-k8.test   192.168.64.3   80      3m45s
Then add this to /etc/hosts
192.168.64.3 laravel-k8.test
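Instead of editing /etc/hosts by hand you can also append the entry like this:
echo "$(minikube ip) laravel-k8.test" | sudo tee -a /etc/hosts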
Now you can access the site in the browser or using curl
curl laravel-k8.test
Hello, world!
Version: 1.0.0
Hostname: web-9bbd7b488-qb6hz
But this does not send traffic to the Nginx Laravel app just yet. It simply routes to the Hello World app. More to be added…
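The idea is that once the Laravel chart is in place, the Ingress rule will point at that service instead of the hello-world one. A rough sketch of what the backend section could become, with a hypothetical service name and port:
spec:
  rules:
    - host: laravel-k8.test
      http:
        paths:
          - path: /
            backend:
              serviceName: laravel6-app   # hypothetical Laravel service
              servicePort: 80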
Cert Manager
To install the Certificate Manager a few steps need to be taken:
## IMPORTANT: you MUST install the cert-manager CRDs **before** installing the
## cert-manager Helm chart
$ kubectl apply --validate=false \
    -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.13/deploy/manifests/00-crds.yaml

## Add the Jetstack Helm repository
$ helm repo add jetstack https://charts.jetstack.io
Then the actual Helm Chart can be installed:
helm install cert-manager --namespace laravel6 jetstack/cert-manager --set ingressShim.extraArgs='{--default-issuer-name=letsencrypt-prod,--default-issuer-kind=ClusterIssuer}','extraArgs={--v=4}'
NAME: cert-manager
LAST DEPLOYED: Tue Feb 4 14:25:48 2020
NAMESPACE: laravel6
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager has been deployed successfully!
In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
More information on the different types of issuers and how to configure them
can be found in our documentation:
https://docs.cert-manager.io/en/latest/reference/issuers.html
For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the ingress-shim
documentation:
https://docs.cert-manager.io/en/latest/reference/ingress-shim.html
And when we check the pods we do see three new ones:
➜ laravel-k8 git:(master) ✗ kubectl get pod --all-namespaces
NAMESPACE     NAME                                             READY   STATUS              RESTARTS   AGE
kube-system   coredns-6955765f44-9k4nn                         1/1     Running             0          160m
kube-system   coredns-6955765f44-zp2qr                         1/1     Running             0          160m
kube-system   etcd-minikube                                    1/1     Running             0          160m
kube-system   kube-addon-manager-minikube                      1/1     Running             0          160m
kube-system   kube-apiserver-minikube                          1/1     Running             0          160m
kube-system   kube-controller-manager-minikube                 1/1     Running             0          160m
kube-system   kube-proxy-qrzc5                                 1/1     Running             0          160m
kube-system   kube-scheduler-minikube                          1/1     Running             0          160m
kube-system   storage-provisioner                              1/1     Running             0          160m
laravel6      cert-manager-7974f4ddf4-gkz58                    0/1     ErrImagePull        0          6m
laravel6      cert-manager-cainjector-76f7596c4-v8n6c          0/1     ErrImagePull        0          6m
laravel6      cert-manager-webhook-8575f88c85-j2sdm            0/1     ContainerCreating   0          6m
laravel6      nginx-ingress-controller-69d5dc598f-zfpwd        0/1     ImagePullBackOff    0          49m
laravel6      nginx-ingress-default-backend-659bd647bd-568kb   0/1     ImagePullBackOff    0          49m
and a little later on
laravel-k8 git:(master) kubectl get pod --namespace laravel6
NAME                                             READY   STATUS              RESTARTS   AGE
cert-manager-7974f4ddf4-gkz58                    0/1     ImagePullBackOff    0          15m
cert-manager-cainjector-76f7596c4-v8n6c          0/1     ImagePullBackOff    0          15m
cert-manager-webhook-8575f88c85-j2sdm            0/1     ContainerCreating   0          15m
nginx-ingress-controller-69d5dc598f-zfpwd        0/1     ImagePullBackOff    0          58m
nginx-ingress-default-backend-659bd647bd-568kb   0/1     ImagePullBackOff    0          58m
ImagePullBackOff
Reading https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/ I realized dnsmasq was messing with my connection. Adding the listen rule to the main /usr/local/etc/dnsmasq.conf and restarting it did not work. You do however have to turn off Minikube for dnsmasq not to die on you. Also, a separate Minikube dnsmasq config is better:
# cat /usr/local/etc/dnsmasq.d/minikube.conf
server=/kube.local/192.168.64.1
listen-address=192.168.64.1
See https://github.com/laravel/valet/issues/1016
Then I turned it off:
sudo brew services stop dnsmasq
which did not help either. I still had a failure to connect:
➜ laravel-k8 git:(master) minikube start
😄 minikube v1.6.2 on Darwin 10.15.2
✨ Selecting 'hyperkit' driver from existing profile (alternates: [virtualbox])
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄 Starting existing hyperkit VM for "minikube" …
⌛ Waiting for the host to be provisioned …
⚠️ VM may be unable to resolve external DNS records
⚠️ VM is unable to access k8s.gcr.io, you may need to configure a proxy or set --image-repository
🐳 Preparing Kubernetes v1.17.0 on Docker '19.03.5' …
🚀 Launching Kubernetes …
^C
Then I realized Minikube was still running and so was Valet, so I did a
laravel-k8 git:(master) valet stop
Stopping php…
Stopping nginx…
Valet services have been stopped.
➜ laravel-k8 git:(master) minikube start
😄 minikube v1.6.2 on Darwin 10.15.2
✨ Selecting 'hyperkit' driver from existing profile (alternates: [virtualbox])
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄 Starting existing hyperkit VM for "minikube" …
⌛ Waiting for the host to be provisioned …
^C
➜ laravel-k8 git:(master) sudo brew services stop dnsmasq
Stopping dnsmasq… (might take a while)
==> Successfully stopped dnsmasq (label: homebrew.mxcl.dnsmasq)
And then, after another minikube start, when I checked I hit the jackpot:
➜ laravel-k8 git:(master) kubectl get pod --all-namespaces
NAMESPACE     NAME                                             READY   STATUS              RESTARTS   AGE
kube-system   coredns-6955765f44-9k4nn                         1/1     Running             2          3h53m
kube-system   coredns-6955765f44-zp2qr                         1/1     Running             2          3h53m
kube-system   etcd-minikube                                    1/1     Running             2          3h53m
kube-system   kube-addon-manager-minikube                      1/1     Running             2          3h53m
kube-system   kube-apiserver-minikube                          1/1     Running             2          3h53m
kube-system   kube-controller-manager-minikube                 1/1     Running             2          3h53m
kube-system   kube-proxy-qrzc5                                 1/1     Running             2          3h53m
kube-system   kube-scheduler-minikube                          1/1     Running             2          3h53m
kube-system   storage-provisioner                              1/1     Running             3          3h53m
laravel6      cert-manager-7974f4ddf4-gkz58                    0/1     ContainerCreating   0          78m
laravel6      cert-manager-cainjector-76f7596c4-v8n6c          1/1     Running             0          78m
laravel6      cert-manager-webhook-8575f88c85-j2sdm            0/1     ContainerCreating   0          78m
laravel6      nginx-ingress-controller-69d5dc598f-zfpwd        1/1     Running             0          122m
laravel6      nginx-ingress-default-backend-659bd647bd-568kb   1/1     Running             0          122m
DNSMasq Clean Fix
Doing an edit of the included Laravel Valet dnsmasq configuration first:
nano /Users/jasper/.config/valet/dnsmasq.conf
and adding the following line to it:
listen-address=192.168.64.1
and restarting dnsmasq as root:
sudo brew services start dnsmasq
This also worked. Minikube could then be started again without issue.
NB Not sure why this root setup for dnsmasq is needed, but all other Valet services also run as root, probably due to choices I made with adding sudo on installation.
Update
The file /Users/jasper/.config/valet/dnsmasq.conf is no longer there since my latest Valet update to version 2.8.2. There is only /Users/jasper/.config/valet/dnsmasq.d/tld-test.conf
Just add listen-address=192.168.64.1 to /usr/local/etc/dnsmasq.conf instead or tweak the listen-address line and use
listen-address=127.0.0.1,192.168.64.1
and do a
sudo brew services restart dnsmasq
Once done Minikube will start without issues
➜ ~ minikube start
🎉 minikube 1.7.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.7.2
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
🙄 minikube v1.6.2 on Darwin 10.15.3
✨ Selecting 'hyperkit' driver from existing profile (alternates: [virtualbox])
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄 Starting existing hyperkit VM for "minikube" …
⌛ Waiting for the host to be provisioned …
🐳 Preparing Kubernetes v1.17.0 on Docker '19.03.5' …
🚀 Launching Kubernetes …
🏄 Done! kubectl is now configured to use "minikube"
NB The command
valet restart
will also restart dnsmasq, but it then refused to serve my Valet-based websites afterwards.
NBB To check what port is being used by what program use:
sudo lsof -iTCP -sTCP:LISTEN -n -P |grep dns
You should see something like
dnsmasq 179 nobody 5u IPv4 0xc7947fc473e40ac3 0t0 TCP 127.0.0.1:53 (LISTEN)
sudo is used as the process runs as root here. For non-root processes you can drop sudo.
ACME Production Cluster
The next step would be the ClusterIssuer. We want to use Let’s Encrypt’s ACME server and we want to set up a production cluster issuer. Locally we will use staging, and even on a server, staging or production, we will first use staging. This is so LE will not block us for abusing their system during tests.
kubectl apply -f kubernetes/kubernetes-yaml/acme-prod-cluster-issuer.yaml
will get us going, but do check the file and see whether it suits your needs. Currently we have this:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: user@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: example-issuer-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            class: nginx
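Since we will start with Let’s Encrypt staging locally, a staging variant of the same issuer would look roughly like this; only the name, server URL and secret name change:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: user@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: example-issuer-staging-account-key
    solvers:
      - http01:
          ingress:
            class: nginx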
To be continued..
Kubectl Remote Connection
If we want to use a different connection / config file we can use
kubectl cluster-info --kubeconfig=path_to_your_kubeconfig_file
or
kubectl --kubeconfig="use_your_kubeconfig.yaml" get nodes
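You can also point kubectl at several config files at once and let it merge them via the KUBECONFIG environment variable:
export KUBECONFIG=~/.kube/config:~/path/to/other-kubeconfig.yaml
kubectl config get-contexts
kubectl config use-context name-of-remote-context   # hypothetical context name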
Also see https://www.digitalocean.com/community/cheatsheets/getting-started-with-kubernetes-a-kubectl-cheat-sheet#connecting,-configuring-and-using-contexts and https://www.digitalocean.com/docs/kubernetes/how-to/connect-to-cluster/
Kubectl and Digital Ocean
So what if we wanted to connect to a remote Digital Ocean Managed Kubernetes cluster? Well, the DO docs tell us all about that. You need kubectl and doctl. With the latter you get a certificate or token to authenticate and connect to the remote cluster.
Doctl
Doctl can be used to get the authentication token or certificate
doctl kubernetes cluster kubeconfig save use_your_cluster_name
This downloads the kubeconfig for the cluster, merges it with any existing configuration from ~/.kube/config, and automatically handles the authentication token or certificate.
NB doctl can be installed on the Mac using brew install doctl.
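After saving, the cluster shows up as an extra context that you can switch to; the exact context name depends on your region and cluster name:
kubectl config get-contexts
kubectl config use-context do-ams3-your-cluster-name   # hypothetical DO context name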
Doctl Autocompletion Failure
Doctl autocompletion should also be possible, but did not quite work out:
brew install bash-completion
followed by
source $(brew --prefix)/etc/bash_completion
added to ~/.zshrc
as explained at https://github.com/digitalocean/doctl
resulted in errors:
/usr/local/etc/bash_completion:850: defining function based on alias `_expand'
/usr/local/etc/bash_completion:850: parse error near `()'
So I decided to remove this.
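An alternative that skips bash-completion entirely, assuming your doctl version ships a completion subcommand for zsh:
# in ~/.zshrc
autoload -Uz compinit && compinit
source <(doctl completion zsh)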
Bonus Zsh Kubectl Autocompletion
Quick autocompletion for zsh as well, as that was not added yet:
echo "if [ $commands[kubectl] ]; then source <(kubectl completion zsh); fi" >> ~/.zshrc
See https://kubernetes.io/docs/reference/kubectl/cheatsheet/ for more tricks