PureDevOps Community

Kubernetes namespace stuck in Terminating state: how to fix it

Why do some namespaces never delete?

Kubernetes stores each namespace as an API object, which you can inspect as YAML or JSON:

$ kubectl get namespace ${NAMESPACE} -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
  labels:
    kubernetes.io/metadata.name: tackle-operator
spec:
  finalizers:
  - kubernetes
status:
  conditions:
  - lastTransitionTime: "2022-01-19T19:05:31Z"
    message: 'Some content in the namespace has finalizers remaining: tackles.tackle.io/finalizer in 1 resource instances'
    reason: SomeFinalizersRemain
    status: "True"
    type: NamespaceFinalizersRemaining
  phase: Terminating

Notice the finalizers field in the YAML above: some namespaces have a finalizer defined under spec.

A finalizer is a special metadata key that tells Kubernetes to wait until a specific condition is met before it fully deletes a resource.
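Before force-clearing anything, it is worth finding out which leftover objects are actually holding the namespace open. A sketch (requires a live cluster, so adjust ${NAMESPACE} to your stuck namespace):

```shell
# List every namespaced resource type the API server knows about, then
# query each one inside the stuck namespace. Whatever objects remain
# (and their finalizers) are usually what blocks deletion.
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n ${NAMESPACE}
```

If this turns up resources from an operator (as in the tackles.tackle.io/finalizer message above), fixing or removing that operator's objects is safer than bypassing the namespace finalizer.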

Deleting a namespace stuck in a terminating state

Step 1: Dump the contents of the namespace into a temporary file called namespace-name.json:

$ kubectl get namespace ${NAMESPACE} -o json > namespace-name.json

Step 2: Edit the temporary file in your favorite text editor and remove kubernetes from the finalizers array under spec:

$ vi namespace-name.json

from 

  "spec": {
        "finalizers": [
            "kubernetes"
        ]
    },

to 

  "spec": {
        "finalizers": [
        ]
    },

Step 3: Use kubectl replace to submit the edited file to the namespace's finalize subresource:

$ kubectl replace --raw "/api/v1/namespaces/${NAMESPACE}/finalize" -f namespace-name.json
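The three steps can also be collapsed into a single pipeline with no temporary file. A sketch assuming jq is installed (again, this needs a live cluster):

```shell
# One-shot variant of steps 1-3: fetch the namespace as JSON, clear
# spec.finalizers with jq, and stream the result straight into the
# finalize subresource.
kubectl get namespace ${NAMESPACE} -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/${NAMESPACE}/finalize" -f -
```

After either variant, `kubectl get namespace ${NAMESPACE}` should report NotFound within a few seconds.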