On my k3s cluster the kube-system pods local-path-provisioner and metrics-server are stuck in CrashLoopBackOff, and coredns is running but never becomes ready:

sudo kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system svclb-traefik-rpds2 2/2 Running 3 12h
kube-system svclb-traefik-x7ldj 2/2 Running 2 12h
kube-system svclb-traefik-6dw7w 2/2 Running 2 12h
kube-system svclb-traefik-kkhhr 2/2 Running 5 14h
kube-system coredns-854c77959c-z9j2l 0/1 Running 3 13h
kube-system svclb-traefik-8mwmf 2/2 Running 8 13h
kube-system traefik-6f9cbd9bd4-wd9gs 1/1 Running 3 13h
kube-system svclb-traefik-vxtgs 2/2 Running 5 13h
kube-system local-path-provisioner-5ff76fc89d-2llbx 0/1 CrashLoopBackOff 179 13h
kube-system metrics-server-86cbb8457f-5t5p6 0/1 CrashLoopBackOff 182 13h
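Two things stand out: local-path-provisioner and metrics-server are crash-looping with well over a hundred restarts each, and coredns is 0/1. One quick way to get an overview before digging into individual pods is to sort the recent cluster events:

sudo kubectl get events -A --sort-by=.lastTimestamp | tail -n 30

The per-pod logs below are more telling, though.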
sudo kubectl logs -n kube-system local-path-provisioner-5ff76fc89d-2llbx
time="2021-03-24T09:17:15Z" level=fatal msg="Error starting daemon: Cannot start Provisioner: failed to get Kubernetes server version: the server has asked for the client to provide credentials"
sudo kubectl describe -n kube-system pod local-path-provisioner-5ff76fc89d-2llbx
Name: local-path-provisioner-5ff76fc89d-2llbx
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Node: k8smaster131/192.168.10.136
Start Time: Tue, 23 Mar 2021 22:22:22 +0300
Labels: app=local-path-provisioner
pod-template-hash=5ff76fc89d
Annotations: <none>
Status: Running
IP: 10.42.1.18
IPs:
IP: 10.42.1.18
Controlled By: ReplicaSet/local-path-provisioner-5ff76fc89d
Containers:
local-path-provisioner:
Container ID: containerd://98f21203bdc8f12ef392e283c5f7babc025455516e64d94c1d2d697092805471
Image: rancher/local-path-provisioner:v0.0.19
Image ID: docker.io/rancher/local-path-provisioner@sha256:9666b1635fec95d4e2251661e135c90678b8f45fd0f8324c55db99c80e2a958c
Port: <none>
Host Port: <none>
Command:
local-path-provisioner
start
--config
/etc/config/config.json
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 24 Mar 2021 12:17:15 +0300
Finished: Wed, 24 Mar 2021 12:17:15 +0300
Ready: False
Restart Count: 180
Environment:
POD_NAMESPACE: kube-system (v1:metadata.namespace)
Mounts:
/etc/config/ from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from local-path-provisioner-service-account-token-ktl7n (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: local-path-config
Optional: false
local-path-provisioner-service-account-token-ktl7n:
Type: Secret (a volume populated by a Secret)
SecretName: local-path-provisioner-service-account-token-ktl7n
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly op=Exists
node-role.kubernetes.io/control-plane:NoSchedule op=Exists
node-role.kubernetes.io/master:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 43m (x1411 over 5h48m) kubelet Back-off restarting failed container
Normal Pulled 38m (x4 over 39m) kubelet Container image "rancher/local-path-provisioner:v0.0.19" already present on machine
Normal Created 38m (x4 over 39m) kubelet Created container local-path-provisioner
Normal Started 38m (x4 over 39m) kubelet Started container local-path-provisioner
Warning BackOff 4m45s (x163 over 39m) kubelet Back-off restarting failed container
Normal SandboxChanged <invalid> kubelet Pod sandbox changed, it will be killed and re-created.
sudo kubectl logs -n kube-system metrics-server-86cbb8457f-5t5p6
I0324 09:12:31.681087 1 serving.go:312] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
Error: Unauthorized
Usage:
...
panic: Unauthorized
goroutine 1 [running]:
main.main()
/go/src/github.com/kubernetes-incubator/metrics-server/cmd/metrics-server/metrics-server.go:39 +0x13b
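metrics-server fails the same way: it is rejected as a client of the API server before it can start serving, which is why it panics right after generating its self-signed cert. The same replay as above, swapping in the metrics-server token secret from the describe output below, should show an identical Unauthorized:

TOKEN=$(sudo kubectl -n kube-system get secret metrics-server-token-rjkgk -o jsonpath='{.data.token}' | base64 -d)
curl -sk https://192.168.10.136:6443/version -H "Authorization: Bearer $TOKEN"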
sudo kubectl describe -n kube-system pod metrics-server-86cbb8457f-5t5p6
Name: metrics-server-86cbb8457f-5t5p6
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Node: k8smaster131/192.168.10.136
Start Time: Tue, 23 Mar 2021 22:22:22 +0300
Labels: k8s-app=metrics-server
pod-template-hash=86cbb8457f
Annotations: <none>
Status: Running
IP: 10.42.1.20
IPs:
IP: 10.42.1.20
Controlled By: ReplicaSet/metrics-server-86cbb8457f
Containers:
metrics-server:
Container ID: containerd://85ac85a519d5a8edbdfeb6f80d08bd26695f6fa93e89747798207e225ddbd750
Image: rancher/metrics-server:v0.3.6
Image ID: docker.io/rancher/metrics-server@sha256:b85628b103169d7db52a32a48b46d8942accb7bde3709c0a4888a23d035f9f1e
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Wed, 24 Mar 2021 12:17:34 +0300
Finished: Wed, 24 Mar 2021 12:17:34 +0300
Ready: False
Restart Count: 183
Environment: <none>
Mounts:
/tmp from tmp-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from metrics-server-token-rjkgk (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
metrics-server-token-rjkgk:
Type: Secret (a volume populated by a Secret)
SecretName: metrics-server-token-rjkgk
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly op=Exists
node-role.kubernetes.io/control-plane:NoSchedule op=Exists
node-role.kubernetes.io/master:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 44m (x1403 over 5h49m) kubelet Back-off restarting failed container
Normal Pulled 39m (x4 over 40m) kubelet Container image "rancher/metrics-server:v0.3.6" already present on machine
Normal Created 39m (x4 over 40m) kubelet Created container metrics-server
Normal Started 39m (x4 over 40m) kubelet Started container metrics-server
Warning BackOff 41s (x186 over 40m) kubelet Back-off restarting failed container
Normal SandboxChanged <invalid> kubelet Pod sandbox changed, it will be killed and re-created.
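If both tokens really are being rejected, the likeliest explanation is that they were minted against credentials the API server no longer accepts (for example after a certificate rotation or a datastore restore). A sketch of a recovery path under that assumption: delete the two token secrets so the token controller re-mints them, then delete the pods so their ReplicaSets recreate them with the fresh tokens:

# Assumes stale service-account tokens; the controller regenerates the secrets automatically
sudo kubectl -n kube-system delete secret local-path-provisioner-service-account-token-ktl7n metrics-server-token-rjkgk
sudo kubectl -n kube-system delete pod local-path-provisioner-5ff76fc89d-2llbx metrics-server-86cbb8457f-5t5p6

If that does not help, restarting k3s on the server node (sudo systemctl restart k3s) is the heavier-handed option.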