Helm install or upgrade release failed on Kubernetes cluster: the server could not find the requested resource or UPGRADE FAILED: no deployed releases
Problem
I use helm to deploy charts on my Kubernetes cluster. Since one day, I can no longer deploy a new chart or upgrade an existing one: every time I use helm, I get an error telling me that it is not possible to install or upgrade resources.

If I run

```
helm install --name foo . -f values.yaml --namespace foo-namespace
```

I have this output:

```
Error: release foo failed: the server could not find the requested resource
```

If I run

```
helm upgrade --install foo . -f values.yaml --namespace foo-namespace
```

or

```
helm upgrade foo . -f values.yaml --namespace foo-namespace
```

I have this error:

```
Error: UPGRADE FAILED: "foo" has no deployed releases
```

I don't really understand why.

This is my helm version:

```
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
```

On my Kubernetes cluster I have Tiller deployed with the same version. When I run `kubectl describe pods tiller-deploy-84b... -n kube-system`:

```
Name:               tiller-deploy-84b8...
Namespace:          kube-system
Priority:           0
PriorityClassName:
Node:               k8s-worker-1/167.114.249.216
Start Time:         Tue, 26 Feb 2019 10:50:21 +0100
Labels:             app=helm
                    name=tiller
                    pod-template-hash=84b...
Annotations:
Status:             Running
IP:
Controlled By:      ReplicaSet/tiller-deploy-84b8...
Containers:
  tiller:
    Container ID:   docker://0302f9957d5d83db22...
    Image:          gcr.io/kubernetes-helm/tiller:v2.12.3
    Image ID:       docker-pullable://gcr.io/kubernetes-helm/tiller@sha256:cab750b402d24d...
    Ports:          44134/TCP, 44135/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Tue, 26 Feb 2019 10:50:28 +0100
    Ready:          True
    Restart Count:  0
    Liveness:
```
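When helm returns "the server could not find the requested resource", the Tiller logs usually contain the underlying API error. A quick way to check, using the pod labels shown in the describe output above (`app=helm`, `name=tiller`):

```shell
# Tail the Tiller logs; the label selector matches the labels
# from the pod description above, so the exact pod name (which
# is truncated here) is not needed.
kubectl logs -n kube-system -l app=helm,name=tiller --tail=50
```

This is only a diagnostic step; it does not change any release state.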
Solution
Yes, this happens frequently when debugging Helm releases. The problem occurs when a previously failed release is preventing you from upgrading it.

If you run

```
helm ls
```

you should see a release in state FAILED. If you have deleted it, it might only show up with `helm ls -a`. Such a release cannot be upgraded using the normal approach, where helm compares the new yaml to the old yaml to detect which objects to change, because a failed release has no successfully deployed revision to compare against.

As this normally happens when trying to get something new running, I typically `helm delete --purge` the failed release. That is slightly drastic, though, so you may want to try the upgrade again adding `--force`: https://github.com/helm/helm/pull/2280
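The recovery above can be sketched end to end. A hedged example, assuming Helm 2 with its default ConfigMap storage backend, and using the release name `foo` and chart flags from the question:

```shell
# 1. Confirm there is a FAILED (or deleted) release record;
#    plain `helm ls` hides releases that are not DEPLOYED.
helm ls --all foo

# Optional: Helm 2 stores release history as ConfigMaps in
# kube-system, labelled with the owner and release name.
kubectl get configmaps -n kube-system -l OWNER=TILLER,NAME=foo

# 2a. The drastic fix: purge the failed release record entirely,
#     then install again from scratch.
helm delete --purge foo
helm install --name foo . -f values.yaml --namespace foo-namespace

# 2b. The gentler alternative: retry the upgrade with --force,
#     which deletes and recreates resources as needed.
helm upgrade --install foo . -f values.yaml --namespace foo-namespace --force
```

Purging (2a) discards the release history, so prefer 2b first if you care about keeping previous revisions.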
Context
StackExchange DevOps Q#6482, answer score: 3