SCIENTIFIC-LINUX-DEVEL Archives

June 2016

SCIENTIFIC-LINUX-DEVEL@LISTSERV.FNAL.GOV

Subject:
From: Valentin B <[log in to unmask]>
Reply-To: Valentin B <[log in to unmask]>
Date: Fri, 3 Jun 2016 21:59:37 +0200
Content-Type: text/plain
Parts/Attachments: text/plain (40 lines)

Hi Alec,

Just ran another test using 1M chunks. The result is:

$ time dd if=/dev/zero of=testfile2 bs=1M count=1500
1500+0 records in
1500+0 records out
1572864000 bytes (1.6 GB) copied, 42.8267 s, 36.7 MB/s

real    0m42.828s
user    0m0.000s
sys    0m0.633s


I used a count of 1500 here since quotas are enforced on user accounts.
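
For comparison I might rerun it with dd forced to flush the data before it
reports the timing, so the client page cache can't skew the figure; something
along these lines (testfile3 is just a placeholder name):

$ dd if=/dev/zero of=testfile3 bs=1M count=1500 conv=fdatasync

Without conv=fdatasync (or oflag=direct) dd can end up measuring how fast it
writes into the local cache rather than over NFS.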

But the MTU=9000 setting should apply to the NFS server as well, correct?
I'm not sure I understand your experience with jumbo frames. Did they have
a positive impact?  How was the performance, e.g. when opening browsers,
terminals, editors and other applications?  We use Zabbix to monitor our
systems, and from what I can see there is quite a large I/O wait on the
NFS server.
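
In case it's useful, the sort of checks I'd run here are roughly the
following (the interface name is just an example):

$ ip link show eth0 | grep mtu   # confirm the MTU on both the clients and the server
$ nfsstat -m                     # show the rsize/wsize actually in effect on the mounts
$ iostat -x 5                    # watch %iowait and per-disk utilization on the server

(iostat comes from the sysstat package.)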

Thanks for sharing this with me.


On 06/03/2016 06:38 PM, Alec T. Habig wrote:
> Patrick J. LoPresti writes:
>> How did you wind up with rsize/wsize of 8K? The default on Linux has
>> been 1 megabyte for a long time.
> Hmm.  I've (independently from the original poster) got the 8k sizes:
> with the network set up for jumbo frames (MTU=9000) the nfs server then
> chunks out network packets that get through the switches with minimal
> overhead.  Making this change from the default (1k, MTU=1500) made a
> huge throughput difference at the time we implemented it, which was a
> number of years ago, so certainly the world has changed since then.
>
> How do 1MB sized nfs chunks interact with the networking?
>
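
Just so I'm sure I follow the 8k setup described above: on the client side
that would be roughly an fstab line like the one below (server name, export
and mount point made up), plus MTU=9000 on the interfaces at both ends and
on the switches?

server:/export  /mnt/data  nfs  rsize=8192,wsize=8192,hard  0 0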
