kubectl df-pv

A kubectl plugin to see df for persistent volumes.
Requirements
☑ kube-apiserver has the api/v1/nodes/ endpoint enabled
☑ Appropriate RBAC. This utility is meant for a cluster-admin-like user; specifically, you need a service account with enough RBAC privileges to access api/v1/nodes/ from the kube-apiserver (a quick sanity check is shown below).
☑ A storage provisioner that populates PV metrics in a compatible manner (see what's been tested below)
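A quick sanity check, assuming your kubeconfig already points at the target cluster and that df-pv gathers volume stats via the kubelet summary endpoint proxied through the apiserver:

# can the current user list nodes through the kube-apiserver?
kubectl auth can-i get nodes
kubectl get --raw "/api/v1/nodes"

# optionally, check the kubelet stats proxied through the apiserver
# (replace <node-name> with one of your node names)
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary"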
Quick Start
Installation
Via Krew
curl https://krew.sh/df-pv | bash
# . ~/.bashrc # run if you use bash shell
# . ~/.zshrc # run if you use zsh shell
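If you already have krew set up, installing from the plugin index should also work (assuming df-pv is listed in your configured krew index):

kubectl krew install df-pv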
From source
cd $GOPATH/src/github.com/
mkdir -p yashbhutwala
cd yashbhutwala/
git clone git@github.com:yashbhutwala/kubectl-df-pv.git
cd kubectl-df-pv/
make install
df-pv --help
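After make install, assuming the resulting binary lands on your PATH under a kubectl- prefixed name, kubectl should discover it as a plugin:

kubectl plugin list
kubectl df-pv --help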
Via Release Binary
macOS
download_path="./kubectl-df-pv.tar.gz"
version="v0.2.2"
curl --fail -Lo "${download_path}" "https://github.com/yashbhutwala/kubectl-df-pv/releases/download/${version}/kubectl-df-pv_${version}_darwin_amd64.tar.gz"
tar -xzf "${download_path}"  # the archive is assumed to contain the df-pv binary
chmod +x ./df-pv
mv ./df-pv /some-dir-in-your-PATH/df-pv
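Linux should follow the same pattern, assuming a linux_amd64 archive is published for the same release tag:

version="v0.2.2"
curl --fail -Lo kubectl-df-pv.tar.gz "https://github.com/yashbhutwala/kubectl-df-pv/releases/download/${version}/kubectl-df-pv_${version}_linux_amd64.tar.gz"
tar -xzf kubectl-df-pv.tar.gz && chmod +x ./df-pv && mv ./df-pv /some-dir-in-your-PATH/df-pv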
Usage
kubectl df-pv
Flags
> kubectl df-pv --help
df-pv emulates Unix style df for persistent volumes w/ ability to filter by namespace
It autoconverts all "sizes" to IEC values (see: https://en.wikipedia.org/wiki/Binary_prefix and https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory)
It colors the values based on "severity" [red: > 75% (too high); yellow: < 25% (too low); green: >= 25 and <= 75 (OK)]
Usage:
df-pv [flags]
Flags:
-h, --help help for df-pv
-n, --namespace string if present, the namespace scope for this CLI request (default is all namespaces)
-v, --verbosity string log level; one of [info, debug, trace, warn, error, fatal, panic] (default "info")
Other useful commands
Enable trace logging, but redirect it to a file (the logs go to stderr):
df-pv -v trace 2> trace.log
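Scope the output to a single namespace with the -n flag documented above (my-namespace is a placeholder):

df-pv -n my-namespace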
Tested
Works on
☑ GKE (kubernetes.io/gce-pd dynamic provisioner; both with ssd and standard)
☑ kubeadm-configured bare-metal cluster (rook ceph block dynamic provisioner using script)
Does not work due to storage provisioner
☒ kind (rancher/local-path-provisioner dynamic provisioner)
☒ minikube (gcr.io/k8s-minikube/storage-provisioner minikube-hostpath dynamic provisioner)
TODO
[ ] EKS
[ ] AKS
TODO Features
Yet to be completed
☒ sort-by flag
☒ exclude namespaces
☒ only show a specific colored result ("red", "yellow", "green")
Completed
☑ df for all Persistent Volumes in the cluster
☑ human readable output as default (using IEC format)
☑ color based on usage [red: > 75% (too high); yellow: < 25% (too low); green: >= 25 and <= 75 (OK)]
☑ print PV name
☑ print volume mount name
Motivation
Have you ever wondered, "How much free disk space do all my PVs have?" Me too! That's why I built this plugin! I have always just wanted a quick way to see the disk usage of my Persistent Volumes (similar to df or du in Unix). It turns out I'm not the only one; there have been many upstream Kubernetes issues opened again and again about this, even some KEPs and PRs. I have compiled some of the issues and KEPs that I've seen in the wild here:
Issues
"this feature is needed .. !!!!" - @halradaideh
"There was a plan to implement this for 1.7 but we ran out of time."
KEPs
"PVC should show how much of the available capacity is used vs available"
"Expose storage metrics to end users"
"exposing storage metrics to users"... "Status? Was it done?"
Other relevant/rabbit-hole links
"Volume metrics exposed in /stats/summary not available in /metrics"
something similar to du in metrics
client-go issue about kubelet api by @DirectXMan12
blog about kubectl printers and columns