error: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
1. When installing the application:
[root@node1 linux-amd64]# helm install ui stable/weave-scope
Error: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
[root@node1 linux-amd64]# kubectl api-resource
Error: unknown command "api-resource" for "kubectl"
Did you mean this?
api-resources
Run 'kubectl --help' for usage.
Fix: the command is plural, api-resources:
kubectl api-resources
[root@node1 linux-amd64]# kubectl api-resources
NAME                              SHORTNAMES  APIGROUP                       NAMESPACED  KIND
bindings                                                                     true        Binding
componentstatuses                 cs                                         false       ComponentStatus
configmaps                        cm                                         true        ConfigMap
endpoints                         ep                                         true        Endpoints
events                            ev                                         true        Event
limitranges                       limits                                     true        LimitRange
namespaces                        ns                                         false       Namespace
nodes                             no                                         false       Node
persistentvolumeclaims            pvc                                        true        PersistentVolumeClaim
persistentvolumes                 pv                                         false       PersistentVolume
pods                              po                                         true        Pod
podtemplates                                                                 true        PodTemplate
replicationcontrollers            rc                                         true        ReplicationController
resourcequotas                    quota                                      true        ResourceQuota
secrets                                                                      true        Secret
serviceaccounts                   sa                                         true        ServiceAccount
services                          svc                                        true        Service
mutatingwebhookconfigurations                 admissionregistration.k8s.io   false       MutatingWebhookConfiguration
validatingwebhookconfigurations               admissionregistration.k8s.io   false       ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds    apiextensions.k8s.io           false       CustomResourceDefinition
apiservices                                   apiregistration.k8s.io         false       APIService
controllerrevisions                           apps                           true        ControllerRevision
daemonsets                        ds          apps                           true        DaemonSet
deployments                       deploy      apps                           true        Deployment
replicasets                       rs          apps                           true        ReplicaSet
statefulsets                      sts         apps                           true        StatefulSet
tokenreviews                                  authentication.k8s.io          false       TokenReview
localsubjectaccessreviews                     authorization.k8s.io           true        LocalSubjectAccessReview
selfsubjectaccessreviews                      authorization.k8s.io           false       SelfSubjectAccessReview
selfsubjectrulesreviews                       authorization.k8s.io           false       SelfSubjectRulesReview
subjectaccessreviews                          authorization.k8s.io           false       SubjectAccessReview
horizontalpodautoscalers          hpa         autoscaling                    true        HorizontalPodAutoscaler
cronjobs                          cj          batch                          true        CronJob
jobs                                          batch                          true        Job
certificatesigningrequests        csr         certificates.k8s.io            false       CertificateSigningRequest
leases                                        coordination.k8s.io            true        Lease
endpointslices                                discovery.k8s.io               true        EndpointSlice
events                            ev          events.k8s.io                  true        Event
ingresses                         ing         extensions                     true        Ingress
ingressclasses                                networking.k8s.io              false       IngressClass
ingresses                         ing         networking.k8s.io              true        Ingress
networkpolicies                   netpol      networking.k8s.io              true        NetworkPolicy
runtimeclasses                                node.k8s.io                    false       RuntimeClass
poddisruptionbudgets              pdb         policy                         true        PodDisruptionBudget
podsecuritypolicies               psp         policy                         false       PodSecurityPolicy
clusterrolebindings                           rbac.authorization.k8s.io      false       ClusterRoleBinding
clusterroles                                  rbac.authorization.k8s.io      false       ClusterRole
rolebindings                                  rbac.authorization.k8s.io      true        RoleBinding
roles                                         rbac.authorization.k8s.io      true        Role
priorityclasses                   pc          scheduling.k8s.io              false       PriorityClass
csidrivers                                    storage.k8s.io                 false       CSIDriver
csinodes                                      storage.k8s.io                 false       CSINode
storageclasses                    sc          storage.k8s.io                 false       StorageClass
volumeattachments                             storage.k8s.io                 false       VolumeAttachment
2. Find the matching APIService from step 1:
kubectl get apiservice
[root@node1 linux-amd64]# kubectl get apiservice
NAME                                   SERVICE                      AVAILABLE                      AGE
v1.                                    Local                        True                           164d
v1.admissionregistration.k8s.io        Local                        True                           164d
v1.apiextensions.k8s.io                Local                        True                           164d
v1.apps                                Local                        True                           164d
v1.authentication.k8s.io               Local                        True                           164d
v1.authorization.k8s.io                Local                        True                           164d
v1.autoscaling                         Local                        True                           164d
v1.batch                               Local                        True                           164d
v1.coordination.k8s.io                 Local                        True                           164d
v1.networking.k8s.io                   Local                        True                           164d
v1.rbac.authorization.k8s.io           Local                        True                           164d
v1.scheduling.k8s.io                   Local                        True                           164d
v1.storage.k8s.io                      Local                        True                           164d
v1beta1.admissionregistration.k8s.io   Local                        True                           164d
v1beta1.apiextensions.k8s.io           Local                        True                           164d
v1beta1.authentication.k8s.io          Local                        True                           164d
v1beta1.authorization.k8s.io           Local                        True                           164d
v1beta1.batch                          Local                        True                           164d
v1beta1.certificates.k8s.io            Local                        True                           164d
v1beta1.coordination.k8s.io            Local                        True                           164d
v1beta1.discovery.k8s.io               Local                        True                           164d
v1beta1.events.k8s.io                  Local                        True                           164d
v1beta1.extensions                     Local                        True                           164d
v1beta1.metrics.k8s.io                 kube-system/metrics-server   False (FailedDiscoveryCheck)   13d
v1beta1.networking.k8s.io              Local                        True                           164d
v1beta1.node.k8s.io                    Local                        True                           164d
v1beta1.policy                         Local                        True                           164d
v1beta1.rbac.authorization.k8s.io      Local                        True                           164d
v1beta1.scheduling.k8s.io              Local                        True                           164d
v1beta1.storage.k8s.io                 Local                        True                           164d
v2beta1.autoscaling                    Local                        True                           164d
v2beta2.autoscaling                    Local                        True                           164d
You can see that one of them is False.
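The broken entry can also be picked out mechanically rather than by eye. Here is a small sketch assuming the four-column output format shown above; the excerpt is hard-coded so the filter can be demonstrated, but against a live cluster you would pipe kubectl get apiservice straight into the same awk program:

```shell
# Hard-coded sample excerpt of `kubectl get apiservice` output (for illustration).
apiservices='NAME                     SERVICE                      AVAILABLE                      AGE
v1.apps                  Local                        True                           164d
v1beta1.metrics.k8s.io   kube-system/metrics-server   False (FailedDiscoveryCheck)   13d'

# Print the name and status of every APIService whose AVAILABLE field ($3) is
# not True, skipping the header row (NR == 1).
# Live version: kubectl get apiservice | awk 'NR > 1 && $3 != "True" {print $1, $3}'
printf '%s\n' "$apiservices" | awk 'NR > 1 && $3 != "True" {print $1, $3}'
# → v1beta1.metrics.k8s.io False
```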
3. Delete it directly:
kubectl delete apiservice v1beta1.metrics.k8s.io
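Deleting the stale APIService is the quick fix, but if you still want the metrics API, a gentler first step is to check whether the backing workload is simply unhealthy. A sketch; the k8s-app=metrics-server label is the common default for this deployment and may differ in your install:

```shell
# The SERVICE column reads "namespace/name"; split it with parameter expansion.
svc="kube-system/metrics-server"   # value taken from the SERVICE column above
ns=${svc%/*}                       # namespace part, before the slash
name=${svc#*/}                     # workload name, after the slash
echo "namespace=$ns workload=$name"
# → namespace=kube-system workload=metrics-server

# Inspect the backing pods and endpoints; if they are crash-looping, a restart
# often restores discovery without deleting anything (commented out here):
#   kubectl -n "$ns" get pods -l k8s-app=metrics-server
#   kubectl -n "$ns" get endpoints "$name"
#   kubectl -n "$ns" rollout restart deployment "$name"
```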
4. Check again:
[root@node1 linux-amd64]# kubectl get apiservice
NAME                                   SERVICE   AVAILABLE   AGE
v1.                                    Local     True        164d
v1.admissionregistration.k8s.io        Local     True        164d
v1.apiextensions.k8s.io                Local     True        164d
v1.apps                                Local     True        164d
v1.authentication.k8s.io               Local     True        164d
v1.authorization.k8s.io                Local     True        164d
v1.autoscaling                         Local     True        164d
v1.batch                               Local     True        164d
v1.coordination.k8s.io                 Local     True        164d
v1.networking.k8s.io                   Local     True        164d
v1.rbac.authorization.k8s.io           Local     True        164d
v1.scheduling.k8s.io                   Local     True        164d
v1.storage.k8s.io                      Local     True        164d
v1beta1.admissionregistration.k8s.io   Local     True        164d
v1beta1.apiextensions.k8s.io           Local     True        164d
v1beta1.authentication.k8s.io          Local     True        164d
v1beta1.authorization.k8s.io           Local     True        164d
v1beta1.batch                          Local     True        164d
v1beta1.certificates.k8s.io            Local     True        164d
v1beta1.coordination.k8s.io            Local     True        164d
v1beta1.discovery.k8s.io               Local     True        164d
v1beta1.events.k8s.io                  Local     True        164d
v1beta1.extensions                     Local     True        164d
v1beta1.networking.k8s.io              Local     True        164d
v1beta1.node.k8s.io                    Local     True        164d
v1beta1.policy                         Local     True        164d
v1beta1.rbac.authorization.k8s.io      Local     True        164d
v1beta1.scheduling.k8s.io              Local     True        164d
v1beta1.storage.k8s.io                 Local     True        164d
v2beta1.autoscaling                    Local     True        164d
v2beta2.autoscaling                    Local     True        164d
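The removal can also be confirmed by querying the deleted name directly; kubectl typically answers with a NotFound error, which a script can key on. The error text below is the standard kubectl NotFound format, reproduced here as a hard-coded sample:

```shell
# Re-querying the deleted object, e.g.:
#   kubectl get apiservice v1beta1.metrics.k8s.io
# typically returns a NotFound error like the sample below.
msg='Error from server (NotFound): apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io" not found'
printf '%s\n' "$msg" | grep -q NotFound && echo "stale entry gone"
# → stale entry gone
```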
5. Reinstall:
[root@node1 linux-amd64]# helm install ui stable/weave-scope
NAME: ui
LAST DEPLOYED: Wed Mar 1 23:14:36 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
You should now be able to access the Scope frontend in your web browser, by
using kubectl port-forward:
kubectl -n default port-forward $(kubectl -n default get endpoints \
ui-weave-scope -o jsonpath='{.subsets[0].addresses[0].targetRef.name}') 8080:4040
then browsing to http://localhost:8080/.
For more details on using Weave Scope, see the Weave Scope documentation:
https://www.weave.works/docs/scope/latest/introducing/
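One caveat worth noting: deleting v1beta1.metrics.k8s.io removes the metrics API itself, so kubectl top and metrics-based autoscaling stop working until metrics-server is reinstalled and healthy. A sketch of restoring it from the official release manifest; the version pinned below is an assumption, so pick one compatible with your cluster:

```shell
# Build the components.yaml URL for a pinned metrics-server release.
version=v0.6.4   # assumed version; check the metrics-server releases page
url="https://github.com/kubernetes-sigs/metrics-server/releases/download/${version}/components.yaml"
echo "manifest: $url"
# Apply it on the cluster (commented out here):
#   kubectl apply -f "$url"
```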