Lightweight Kubernetes (k3s) on a local machine with Grafana and Docker

@kondlawork
8 min read · Jul 3, 2023


How to get lightweight Kubernetes up and running on a workstation. K3s is a lightweight, certified Kubernetes distribution designed for resource-constrained environments such as edge devices, IoT gateways, and small-scale deployments. Developed by Rancher Labs (now part of SUSE), it aims to provide a minimal, easy-to-use Kubernetes distribution that consumes fewer resources while remaining fully compatible with the Kubernetes API.

Here are some key features and characteristics of K3s:

  1. Lightweight and resource-efficient: K3s is designed to have a small footprint and consume fewer resources compared to standard Kubernetes distributions. It has a reduced memory footprint, smaller binary size, and lower CPU overhead, making it suitable for environments with limited resources.
  2. Easy installation and management: K3s is designed to be easy to install and manage, shipping as a single binary with containerd as its default container runtime. Note that k3s itself runs on Linux; on macOS and Windows it is typically run inside Docker containers via k3d, which is the approach used in this article.
  3. High availability and resilience: K3s supports highly available clusters, including multi-server topologies backed by an embedded etcd datastore. It provides features like automatic etcd snapshots and backups and an integrated service load balancer (ServiceLB, formerly Klipper-LB).
  4. Security and compatibility: K3s maintains full compatibility with the Kubernetes API, ensuring that existing Kubernetes applications and tools can be used with K3s without modifications. It also includes security enhancements such as built-in TLS encryption, RBAC (Role-Based Access Control), and support for Seccomp and AppArmor for container security.

Use cases for K3s:

  1. Edge computing: K3s is well-suited for edge computing scenarios where resources are limited and a lightweight Kubernetes distribution is required. It enables the deployment and management of containerized applications on edge devices, enabling organizations to process data closer to its source and reduce latency.
  2. IoT deployments: K3s can be used in Internet of Things (IoT) deployments where Kubernetes capabilities are needed but the devices have limited resources. With K3s, you can orchestrate and manage containerized workloads on IoT devices, providing a scalable and flexible solution for IoT application development and deployment.
  3. Development and testing environments: K3s can be used to set up lightweight Kubernetes clusters for development and testing purposes. It allows developers to easily create local Kubernetes environments on their laptops or desktops without consuming excessive resources, enabling them to test and iterate their applications efficiently.
  4. Small-scale deployments: K3s is suitable for small-scale deployments where a full-fledged Kubernetes distribution might be overkill. It provides a simplified installation process and requires fewer resources, making it easier to deploy and manage Kubernetes clusters in small-scale production environments or for personal projects.

Overall, K3s offers a lightweight, easy-to-use, and resource-efficient Kubernetes distribution that is particularly useful in edge computing, IoT, development/testing, and small-scale deployment scenarios.
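Before diving into the walkthrough, it helps to see where the single-binary install applies. The sketch below only prints the appropriate path for the current OS (the version-pinned install command is the documented k3s quick-start; this script does not actually install anything):

```shell
# Sketch: choose the k3s install path based on the host OS.
# k3s needs a Linux kernel; on macOS/Windows it runs inside Docker via k3d.
os=$(uname -s)
if [ "$os" = "Linux" ]; then
  # The advertised single-binary install:
  echo 'Run: curl -sfL https://get.k3s.io | sh -'
else
  echo "$os detected: install k3d and run k3s inside Docker (next section)"
fi
```

This article runs on macOS, so the rest of the walkthrough uses k3d.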

Install k3d, a wrapper that runs k3s inside Docker

(base) skondla@Sams-MBP:Downloads $ brew search k3d
==> Formulae
k3d ✔ f3d

# k3d is already installed on my macbook

(base) skondla@Sams-MBP:Downloads $ brew update && brew install k3d
Updated 3 taps (weaveworks/tap, homebrew/core and homebrew/cask).
==> New Formulae
bbot erlang@25 trzsz-ssh
==> New Casks
whisky
==> Outdated Formulae
aws-iam-authenticator eksctl libuv

You have 3 outdated formulae installed.
You can upgrade them with brew upgrade
or list them with brew outdated.
Warning: k3d 5.5.1 is already installed and up-to-date.
To reinstall 5.5.1, run:
brew reinstall k3d

(base) skondla@Sams-MBP:Downloads $ which k3d
/usr/local/bin/k3d
(base) skondla@Sams-MBP:~ $ k3d cluster create devhacluster --servers 3 --agents 1
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-devhacluster'
INFO[0000] Created image volume k3d-devhacluster-images
INFO[0000] Starting new tools node...
INFO[0000] Creating initializing server node
INFO[0000] Creating node 'k3d-devhacluster-server-0'
INFO[0000] Starting Node 'k3d-devhacluster-tools'
INFO[0001] Creating node 'k3d-devhacluster-server-1'
INFO[0002] Creating node 'k3d-devhacluster-server-2'
INFO[0002] Creating node 'k3d-devhacluster-agent-0'
INFO[0002] Creating LoadBalancer 'k3d-devhacluster-serverlb'
INFO[0002] Using the k3d-tools node to gather environment information
INFO[0002] Starting new tools node...
INFO[0002] Starting Node 'k3d-devhacluster-tools'
INFO[0003] Starting cluster 'devhacluster'
INFO[0003] Starting the initializing server...
INFO[0004] Starting Node 'k3d-devhacluster-server-0'
INFO[0005] Starting servers...
INFO[0005] Starting Node 'k3d-devhacluster-server-1'
INFO[0027] Starting Node 'k3d-devhacluster-server-2'
INFO[0040] Starting agents...
INFO[0040] Starting Node 'k3d-devhacluster-agent-0'
INFO[0042] Starting helpers...
INFO[0042] Starting Node 'k3d-devhacluster-serverlb'
INFO[0049] Injecting records for hostAliases (incl. host.k3d.internal) and for 6 network members into CoreDNS configmap...
INFO[0051] Cluster 'devhacluster' created successfully!
INFO[0051] You can now use it like this:
kubectl cluster-info
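The same three-server, one-agent topology can also be captured declaratively in a k3d config file. A sketch (the apiVersion below matches the k3d 5.x series installed above; check `k3d version` on your machine):

```yaml
# devhacluster.yaml -- declarative equivalent of the CLI flags above
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: devhacluster
servers: 3   # control-plane nodes (embedded etcd quorum needs an odd count)
agents: 1    # worker node
```

It is applied with `k3d cluster create --config devhacluster.yaml`, which makes the cluster shape reviewable and repeatable.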
(base) skondla@Sams-MBP:~ $ k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3d-devhacluster-agent-0 Ready <none> 76s v1.26.4+k3s1 172.23.0.6 <none> K3s dev 5.15.49-linuxkit-pr containerd://1.6.19-k3s1
k3d-devhacluster-server-0 Ready control-plane,etcd,master 109s v1.26.4+k3s1 172.23.0.3 <none> K3s dev 5.15.49-linuxkit-pr containerd://1.6.19-k3s1
k3d-devhacluster-server-1 Ready control-plane,etcd,master 92s v1.26.4+k3s1 172.23.0.4 <none> K3s dev 5.15.49-linuxkit-pr containerd://1.6.19-k3s1
k3d-devhacluster-server-2 Ready control-plane,etcd,master 79s v1.26.4+k3s1 172.23.0.5 <none> K3s dev 5.15.49-linuxkit-pr containerd://1.6.19-k3s1
(base) skondla@Sams-MBP:~ $ k get po -o wide -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-59b4f5bbd5-hkdm6 1/1 Running 0 5m34s 10.42.0.5 k3d-devhacluster-server-0 <none> <none>
kube-system helm-install-traefik-crd-gphwk 0/1 Completed 0 5m34s 10.42.0.2 k3d-devhacluster-server-0 <none> <none>
kube-system helm-install-traefik-r8w4p 0/1 Completed 1 5m34s 10.42.0.3 k3d-devhacluster-server-0 <none> <none>
kube-system local-path-provisioner-76d776f6f9-dlkfm 1/1 Running 0 5m34s 10.42.0.4 k3d-devhacluster-server-0 <none> <none>
kube-system metrics-server-7b67f64457-2mgv8 1/1 Running 0 5m34s 10.42.0.6 k3d-devhacluster-server-0 <none> <none>
kube-system svclb-traefik-cabd407d-jz4v5 2/2 Running 0 5m23s 10.42.1.3 k3d-devhacluster-server-1 <none> <none>
kube-system svclb-traefik-cabd407d-lpn5n 2/2 Running 0 5m23s 10.42.0.7 k3d-devhacluster-server-0 <none> <none>
kube-system svclb-traefik-cabd407d-rzqpb 2/2 Running 0 5m14s 10.42.3.2 k3d-devhacluster-agent-0 <none> <none>
kube-system svclb-traefik-cabd407d-zgs5m 2/2 Running 0 5m16s 10.42.2.2 k3d-devhacluster-server-2 <none> <none>
kube-system traefik-56b8c5fb5c-2mtmf 1/1 Running 0 5m23s 10.42.1.2 k3d-devhacluster-server-1 <none> <none>
rabbitmq-system rabbitmq-cluster-operator-54b4bf5cbf-ghrrr 1/1 Running 0 11s 10.42.3.3 k3d-devhacluster-agent-0 <none> <none>

Deploy OLM (Operator Lifecycle Manager)

(base) skondla@Sams-MBP:~ $ curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.25.0/install.sh | bash -s v0.25.0
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/olmconfigs.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/olmconfigs.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com condition met
namespace/olm created
namespace/operators created
serviceaccount/olm-operator-serviceaccount created
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
olmconfig.operators.coreos.com/cluster created
deployment.apps/olm-operator created
deployment.apps/catalog-operator created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
operatorgroup.operators.coreos.com/global-operators created
operatorgroup.operators.coreos.com/olm-operators created
clusterserviceversion.operators.coreos.com/packageserver created
catalogsource.operators.coreos.com/operatorhubio-catalog created
Waiting for deployment "olm-operator" rollout to finish: 0 of 1 updated replicas are available...
deployment "olm-operator" successfully rolled out
deployment "catalog-operator" successfully rolled out
Package server phase: Succeeded
deployment "packageserver" successfully rolled out
(base) skondla@Sams-MBP:~ $ k get ns
NAME STATUS AGE
default Active 21m
flaskapp1-namespace Active 12m
kube-node-lease Active 21m
kube-public Active 21m
kube-system Active 21m
olm Active 36s
operators Active 36s
rabbitmq-system Active 16m
(base) skondla@Sams-MBP:~ $ k get po -o wide -n olm
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
b0f4d91aa5e8d8305a4304fac4f9323aaf88530b10a969c32fb08c5d094gk4x 0/1 Completed 0 23s 10.42.2.8 k3d-devhacluster-server-2 <none> <none>
catalog-operator-6bfcb6bfb4-mj8th 1/1 Running 0 68s 10.42.1.5 k3d-devhacluster-server-1 <none> <none>
olm-operator-596db48b74-88brk 1/1 Running 0 68s 10.42.2.5 k3d-devhacluster-server-2 <none> <none>
operatorhubio-catalog-bwbrn 1/1 Running 0 59s 10.42.2.6 k3d-devhacluster-server-2 <none> <none>
packageserver-7bb854f48b-92qms 1/1 Running 0 58s 10.42.1.6 k3d-devhacluster-server-1 <none> <none>
packageserver-7bb854f48b-ks7fz 1/1 Running 0 58s 10.42.2.7 k3d-devhacluster-server-2 <none> <none>
(base) skondla@Sams-MBP:~ $ kubectl create -f https://operatorhub.io/install/prometheus.yaml
subscription.operators.coreos.com/my-prometheus created
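The one-liner above just applies a Subscription object from OperatorHub. Its contents are roughly the following (the channel name is what operatorhub.io served at the time of writing — treat it as an assumption and check the operator's OperatorHub page):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-prometheus
  namespace: operators
spec:
  channel: beta              # channel is an assumption; verify on operatorhub.io
  name: prometheus
  source: operatorhubio-catalog
  sourceNamespace: olm
```

OLM's catalog-operator resolves this Subscription into an InstallPlan, which is what produces the ClusterServiceVersion (CSV) queried below.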
(base) skondla@Sams-MBP:~ $ kubectl get csv -n operators
NAME DISPLAY VERSION REPLACES PHASE
elastic-cloud-eck.v2.8.0 Elasticsearch (ECK) Operator 2.8.0 elastic-cloud-eck.v2.7.0 Succeeded
(base) skondla@Sams-MBP:~ $
(base) skondla@Sams-MBP:~ $ kubectl get csv -n operators
NAME DISPLAY VERSION REPLACES PHASE
elastic-cloud-eck.v2.8.0 Elasticsearch (ECK) Operator 2.8.0 elastic-cloud-eck.v2.7.0 Succeeded
prometheusoperator.v0.65.1 Prometheus Operator 0.65.1 prometheusoperator.0.47.0 Succeeded
(base) skondla@Sams-MBP:~ $ curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.25.0/install.sh | bash -s v0.25.0
OLM is already installed in olm namespace. Exiting...
(base) skondla@Sams-MBP:~ $ kubectl create -f https://operatorhub.io/install/grafana-operator.yaml
subscription.operators.coreos.com/my-grafana-operator created
(base) skondla@Sams-MBP:~ $ kubectl get csv -n operators
NAME DISPLAY VERSION REPLACES PHASE
elastic-cloud-eck.v2.8.0 Elasticsearch (ECK) Operator 2.8.0 elastic-cloud-eck.v2.7.0 Succeeded
prometheusoperator.v0.65.1 Prometheus Operator 0.65.1 prometheusoperator.0.47.0 Succeeded
The grafana-operator CSV can take a few minutes to appear. Grafana itself can also be deployed with a plain manifest — a PVC for storage, a Deployment, and a LoadBalancer Service (this is the stock example from the Grafana documentation, pinned to 9.1.0):

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        fsGroup: 472
        supplementalGroups:
          - 0
      containers:
        - name: grafana
          image: grafana/grafana:9.1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
              name: http-grafana
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /robots.txt
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 3000
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 250m
              memory: 750Mi
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-pv
      volumes:
        - name: grafana-pv
          persistentVolumeClaim:
            claimName: grafana-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: http-grafana
  selector:
    app: grafana
  sessionAffinity: None
  type: LoadBalancer
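Assuming the manifest above is saved as grafana.yaml (the filename is arbitrary), apply it and wait for the rollout before port-forwarding. The sketch below is guarded so it degrades gracefully when no cluster is reachable:

```shell
# Apply the Grafana manifest and wait for the Deployment to become ready.
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  kubectl apply -f grafana.yaml
  kubectl rollout status deployment/grafana --timeout=120s
else
  echo "no reachable cluster; run this inside the k3d environment above"
fi
```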
(base) skondla@Sams-MBP:grafana $ kubectl port-forward service/grafana 3000:3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
Handling connection for 3000
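With the port-forward running, Grafana answers on localhost:3000. Its health endpoint makes a quick smoke test (first login uses the default admin/admin credentials, which Grafana prompts you to change):

```shell
# Probe the forwarded Grafana port; falls through cleanly if nothing is listening.
curl -s --max-time 2 http://localhost:3000/api/health || echo "port-forward not running?"
```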

Grafana dashboard


Written by @kondlawork

I am a software engineering manager and cloud architect who designs, builds, deploys, scales, simplifies, and cost-optimizes platform architecture.
