The UT issue is related to the kernel version
rpc.gssd/rpc.idmapd/rpc.svcgssd are specific to NFSv4.
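A quick way to confirm whether these daemons are actually running on a node (a rough sketch; daemon names and init layout may vary by distro):

    ps ax | egrep 'rpc\.(gssd|idmapd|svcgssd)'    # NFSv4-only helper daemons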
NLM: NFS Lock Manager
A quick glance shows that I don’t have rpc.lockd running, just the in-kernel lockd. Also, “man lockd” says: The rpc.lockd program starts the NFS lock manager (NLM) on kernels that don’t start it automatically. However, since most kernels do start it automatically, rpc.lockd is usually not required. Even so, running it anyway is harmless.
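To double-check this on a node (a sketch; the bracketed name in ps marks a kernel thread rather than a userspace daemon, and NLM registers with the portmapper):

    ps ax | grep -w '\[lockd\]'    # in-kernel lock manager thread
    rpcinfo -p | grep nlockmgr     # NLM service registered with portmap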
In the morning, I commented out “exportfs -r” in /etc/init.d/nfs, and as a result the NFS shares could no longer be listed with showmount. This means the command must be executed before the NFS daemons start up.
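The required ordering looks roughly like this (a sketch of the init sequence; the exact commands and thread count here are assumptions, not copied from the actual script):

    exportfs -r                  # sync the kernel export table with /etc/exports first
    rpc.nfsd 8                   # then start the kernel nfsd threads
    rpc.mountd                   # mountd is what answers showmount queries
    showmount -e localhost       # verify the exports are now visible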
To see how heavily each nfsd thread is being used, look at the file /proc/net/rpc/nfsd. The last ten numbers on the th line in that file indicate the number of seconds that thread usage was at that percentage of the maximum allowable.
The first number on the line is the number of threads available for serving requests, the second is the number of times all threads were needed at once, and the remaining ten form a histogram of how many seconds a given fraction of the threads were busy. If the last few numbers have accumulated a significant amount of time, then your server probably needs more threads.
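A quick look at that line (the sample output below is made up for illustration, not taken from the UT machines):

    grep '^th' /proc/net/rpc/nfsd
    # th 8 594 3733.14 83.85 96.66 0.00 0.00 0.00 0.00 0.00 0.00 2.35
    #    ^threads      ^histogram buckets: seconds at 10%..100% of capacity
    rpc.nfsd 16    # raise the thread count if the high buckets keep growing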
According to today’s testing to reproduce the UT issue, if only the NFS daemon is started, the mount operations are quick on each node, even without “-o tcp”. However, more tests are needed to verify the performance under heavy load. Perhaps, just as Gavin said, UT should update their kernel to 2.6.
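The test mounts looked like this (the server name and export path are placeholders, not the real UT values):

    mount -t nfs -o tcp server:/export /mnt/nfs    # with TCP
    mount -t nfs server:/export /mnt/nfs           # UDP is the default on these kernels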
By the way, it has been proven that this mount issue is not related to the mount version. With mount 2.12a on the RH9 (2.4) kernel, dlm_sendd is still a serious CPU hog. Later, we used mount 2.11y (the UT environment’s version) on the 2.6 kernel, and the mount operations were quick and smooth. So the issue seems to be caused by the difference between the 2.4 and 2.6 kernels.
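For the record, the versions on each box can be confirmed with standard queries (nothing UT-specific here):

    mount -V    # reports the util-linux mount version, e.g. 2.11y vs 2.12a
    uname -r    # reports the running kernel, 2.4.x vs 2.6.x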