kubectl get pod is very slow, it takes about 15 seconds, is there any way to improve it? #73570
It may be caused by network congestion, try:
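The suggested command was not preserved here; as a rough sketch, one way to check whether raw network latency to the apiserver is the problem (the endpoint below is a placeholder, adjust host and port for your cluster):
# Placeholder apiserver endpoint; substitute your cluster's address and port
time curl -k https://<apiserver-host>:6443/healthz
# Compare with the time kubectl itself needs for a simple request
time kubectl get pods
If the raw request is fast but kubectl is slow, the bottleneck is more likely client-side (caching, kubeconfig size, version skew) than the network.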
@WanLinghao
Then my best guess is that processing the JSON response consumes too much time.
@WanLinghao - did you happen to confirm this was an issue? We're still on 1.9.7 and I've got a box where no nodes are reporting pressure of any sort, the servers are quite beefy and under minimal load, but I'm seeing API response times of about 5500ms.
To provide additional information: the etcd pod has restarted 24 times. @Nayruden can you post what you saw here?
We have a beefy system with minimal load and are seeing large latency from the api-server.
No logs I can find are suggestive of what the issue might be. Server has plenty of extra CPU, memory, and HDD I/O that's not being used.
To confirm, is the etcd pod the backend of the cluster?
@WanLinghao correct
Please check if the etcd pod has something wrong by:
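The exact check the commenter suggested is missing above; a minimal sketch of how one might inspect the etcd pod, assuming a kubeadm-style cluster where etcd runs as a static pod in kube-system (the pod label, node name, and certificate paths are the kubeadm defaults and may differ in your setup):
# List the etcd pod(s) and look at the restart count
kubectl -n kube-system get pods -l component=etcd
# Tail the etcd logs (replace <node-name> with your control-plane node)
kubectl -n kube-system logs etcd-<node-name> --tail=50
# From the control-plane node, query etcd health directly
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health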
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I have this same issue.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I have the same issue. I ran it with -v=99 for verbose output. The response from the master is received quickly; however, the rendering takes time.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen |
@jerry3k: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The issue should be reopened as I can't find any solution either!
@WanLinghao I am having the same issue, and I tried this command:
As you can see, the first curl command takes a minute and a half. Is there some way to clear this network congestion?
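The curl command itself is not preserved above; a hedged sketch of the kind of comparison being described, timing the same list of pods through kubectl and through a raw API call (the apiserver address and $TOKEN are placeholders for your own endpoint and credentials):
# Through kubectl
time kubectl get pods -n default
# Raw API call; <apiserver-host> and $TOKEN are placeholders
time curl -k -H "Authorization: Bearer $TOKEN" \
  https://<apiserver-host>:6443/api/v1/namespaces/default/pods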
@rogperez did you install kubernetes natively or through rancher (rke)?
@jerry3k According to my symlinks I used brew
@rogperez What worked for me was to clean up the host nodes and reinstall kubernetes (but I was using rke). Here are the steps you must run on all your nodes:
# Remove every container, image, and volume left over from the old install
docker rm -f $(docker ps -qa)
docker rmi -f $(docker images -q)
docker volume rm $(docker volume ls -q)
# Unmount the kubelet tmpfs mounts plus the kubelet and rancher directories
for mount in $(mount | grep tmpfs | grep '/var/lib/kubelet' | awk '{ print $3 }') /var/lib/kubelet /var/lib/rancher; do umount $mount; done
# Wipe the Kubernetes, CNI, etcd, and Rancher state from the host
sudo rm -rf /etc/ceph \
/etc/cni \
/etc/kubernetes \
/opt/cni \
/opt/rke \
/run/secrets/kubernetes.io \
/run/calico \
/run/flannel \
/var/lib/calico \
/var/lib/etcd \
/var/lib/cni \
/var/lib/kubelet \
/var/lib/rancher/rke/log \
/var/log/containers \
/var/log/pods \
/var/run/calico
# List the interfaces, then remove the leftover flannel VXLAN interface
ip address show
ip link delete flannel.1
Like I said, I was using Rancher (RKE) for my setup and this worked. Every time I have installed k8s on a fresh VM (or bare metal) I have had this issue of kubectl being slow, and fortunately the only solution was to clean up and reinstall on the same machines.
We solved this problem. kubectl caches the results from the apiserver, but it's really slow because it fsyncs every time! So we linked the .kube/cache and http_cache dirs to /dev/shm. After that everything works really well!
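A minimal sketch of that workaround, assuming the default cache locations under ~/.kube (note that /dev/shm is tmpfs, so the cache is simply rebuilt after a reboot):
# Move the kubectl caches onto tmpfs and symlink them back into place
mkdir -p /dev/shm/kube
mv ~/.kube/cache /dev/shm/kube/cache && ln -s /dev/shm/kube/cache ~/.kube/cache
mv ~/.kube/http_cache /dev/shm/kube/http_cache && ln -s /dev/shm/kube/http_cache ~/.kube/http_cache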
@wingerted The
@gosoon Ok, I know there are very few files in the cache. But in our environment, the cache dir not being in memory costs a lot. We also have 8k pods in a namespace. Before we linked the cache to /dev/shm, it showed:
After
Well, 25% is not a big improvement, since 8k pods' info is a lot of data to transfer from the apiserver to the local machine. We can just get namespaces to see the difference clearly. Before:
After
Of course I cleaned the cache every time in the test. And our local disk is a SATA HDD.
Linking the cache to /dev/shm doesn't work very well; it's still very slow.
As an rke user I echo this. This worked for me too.
Just to provide some details: it seems to me that this is caused by a mismatch between the kubectl version and the k8s cluster version. Observed the same issue. Here are the results from the experiments that I did:
Here is the same test using
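A quick way to check for that kind of client/server skew is simply to compare the two reported versions:
# Prints both the client (kubectl) and server (apiserver) versions
kubectl version
kubectl is only supported within one minor version of the apiserver, so keeping the client close to the server version is the safer fix.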
I am not sure why, but this is something I am facing with k3s running on openSUSE Leap 15.2. Reinstalling hasn't worked so far.
jenkins@7b4cc536a0af:~$ time kubectl get namespace -v6
In my situation, the first call takes 20 seconds; all the rest are OK. Any idea? kubectl version is 1.2x
For me, deallocating the VM and creating a new system with a fresh installation started working without delays. Quite strange, because I wasn't able to pinpoint the reason, but it works and that's okay with me. May no one face the same issue in a production cluster 🙂
On my local instance - I followed these steps and it worked really well
I think it could be that my HDD was slow and kubectl is trying to read a lot of files on every command; you can see this in action if you run any kubectl command with high verbosity.
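For example (a hedged illustration; -v levels of 6 and above print each HTTP request kubectl makes and how long it took):
time kubectl get pods -v=8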
I'm not sure of the underlying workings but...
...the cache directory will be repopulated and subsequent commands appear to be much quicker thereafter.
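For anyone trying to reproduce this, the step being described appears to be deleting kubectl's local cache; a hedged sketch assuming the default locations (kubectl recreates these directories on the next command):
# Remove the discovery/http caches; they are rebuilt automatically
rm -rf ~/.kube/cache ~/.kube/http_cache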
Same here on 1.21. Even on minikube it takes about 2s to update the discovery cache. kubectl also does this pretty often, which is annoying.
version 1.27
[#] time kubectl get pod -o wide | wc -l
real 0m0.044s
[#] time kubectl get pod -n prod -o wide | wc -l
real 0m3.452s
PS: maybe check your .kube/config file to see if it's 1.8 million lines long like mine.
This helped me, and also removing some clutter from my kube config file.
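One hedged way to check for and trim an oversized kubeconfig (the output path is just an example; --minify keeps only the current context and --flatten inlines certificate data):
# How big is the config, really?
wc -l ~/.kube/config
# Write a config containing only the current context to a separate file
kubectl config view --minify --flatten > ~/.kube/config.trimmed
KUBECONFIG=~/.kube/config.trimmed kubectl get pods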
Our kubernetes cluster has 1000 nodes and 7100 pods. I don't think the cluster is large, but using kubectl is very slow. Is there any way to improve it?
What happened:
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
Kubernetes version (use kubectl version): v1.8.1
Kernel (e.g. uname -a): 3.10.0-514.16.1.el7.x86_64
/sig CLI