I'm dealing with a similar NFS disconnection issue with NetApp and thought I would share another discussion and a KB link.
A discussion on NetApp's forum:
https://communities.netapp.com/thread/10396
A KB article from NetApp relating to the nfsd.tcp.close.idle.notify:warning message:
https://kb.netapp.com/support/index?page=content&id=2013194
Basically, they are saying this may just happen and recommend a takeover/giveback or a controller reset. The funny thing about our situation is that the KB article shows it was published on April 9, 2012, the exact day we first noticed the problem on our FAS. Anyway, thought it might help.
More details are coming from NetApp this week on the nfsd.tcp.close.idle.notify issue. There appears to be a non-public bug in which the TCP stack runs out of buffer space. A fix is being worked on.
If you have Oracle RAC on the same filer, you may be running into an issue there too. Another non-public bug, this one from Oracle, has to do with a misbehaving Direct NFS client. A patch is available for it and we're rolling it out now.
There are NetApp settings you can change to help alleviate the bug, although they didn't fix it for us, nor did the takeover/giveback; we tripped over the issue again within a couple of days. The two changes are listed below, with a quick console recap after the list.
1. Set the TCP receive window size from the default of 64K to 256K.
a. "options nfs.tcp.recvwindowsize 262144"
b. This recommendation comes from NetApp TR-3557, "HP-UX NFS Performance with Oracle".
c. Increasing the default fourfold gives the protocol stack more buffer room for the application.
2. Increase nfs.tcp.xfersize to put more data in flight, effectively reducing the handshaking at the protocol level.
a. "options nfs.tcp.xfersize 65536"
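For what it's worth, here is how the two changes look applied together from the 7-mode console. The "filer>" prompt is just a placeholder; as I understand 7-mode behavior, running "options" with only the option name prints the current value, but confirm that on your release, and you'll likely want to apply the settings on both controllers of an HA pair.

Check the current values first:
    filer> options nfs.tcp.recvwindowsize
    filer> options nfs.tcp.xfersize
Then apply the new values (re-run the two commands above afterwards to confirm they took):
    filer> options nfs.tcp.recvwindowsize 262144
    filer> options nfs.tcp.xfersize 65536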