SCIENTIFIC-LINUX-USERS Archives

May 2011

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject:
From: Nathan Yehle <[log in to unmask]>
Reply-To: Nathan Yehle <[log in to unmask]>
Date: Wed, 25 May 2011 18:48:55 -0500
Content-Type: multipart/signed
Parts/Attachments: text/plain (1779 bytes), smime.p7s (1677 bytes)
Greetings,

I have rebuilt 2.6.39-UL1 using the UltraLight RPM specs to create this RPM for use with the scientific computing servers at MWT2.org:

http://uct2-grid1.uchicago.edu/repo/MWT2/kernel-2.6.39-UL1.x86_64.rpm
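
If you want to try it, installation should be the usual rpm dance; a sketch (fetch from the URL above, or mirror the package locally first):

  # install alongside the running kernel (-i rather than -U, so the
  # old kernel stays in grub as a fallback)
  wget http://uct2-grid1.uchicago.edu/repo/MWT2/kernel-2.6.39-UL1.x86_64.rpm
  rpm -ivh kernel-2.6.39-UL1.x86_64.rpm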

As you might recall, we rebuild the UL kernel with CONFIG_PREEMPT_NONE=y and have seen a great performance gain on file servers, as documented here:

http://twiki.mwt2.org/bin/view/ITB/UltraLightKernel
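
For anyone rebuilding their own, the change boils down to selecting the no-preemption ("server") model in the kernel config before running rpmbuild. Roughly like this; the spec and config file names here are illustrative, not the exact UL layout:

  # in the kernel .config fed to the spec, pick the server model:
  CONFIG_PREEMPT_NONE=y
  # CONFIG_PREEMPT_VOLUNTARY is not set
  # CONFIG_PREEMPT is not set

  # then rebuild the binary RPM
  rpmbuild -bb kernel-2.6.39.spec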

2.6.39 has one interesting new ext4 scalability feature for larger servers, but sadly we run 25TB trays, which are still too large for ext4.

Does anyone know of a functional e2fsprogs on SL55 that can build ext4 partitions larger than 16TB? Until then we'll stick with XFS.
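
For context, the ceiling comes from ext4's 32-bit block numbers: at the default 4k block size that is 2^32 * 4096 bytes = 16TiB, and the mke2fs in SL55's stock e2fsprogs refuses anything larger. So on a 25TB tray (device name illustrative):

  # mkfs.ext4 from stock SL55 e2fsprogs bails out above 16TiB
  mkfs.ext4 /dev/sdb1
  # xfs has no comparable limit at these sizes, so the trays stay on it
  mkfs.xfs -f /dev/sdb1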


1.1. Ext4 SMP scalability

In 2.6.37, huge Ext4 scalability improvements were merged and mentioned in the changelog. But the feature was not ready for prime time and was disabled in the source before the release - something the changelog didn't mention. In 2.6.39 it has been enabled by default. This is the text from the previous changelog:

"In this release Ext4 will use the "bio" layer directly instead of the intermediate "buffer" layer. The "bio" layer (alias for Block I/O: it's the part of the kernel that sends the requests to the IO/O scheduler) was one of the first features merged in the Linux 2.5.1 kernel. The buffer layer has a lot of performance and SMP scalability issues that will get solved with this port. A FFSB benchmark in a 48 core AMD box using a 24 SAS-disk hardware RAID array with 192 simultaneous ffsb threads speeds up by 300% (400% disabling journaling), while reducing CPU usage by a factor of 3-4


We have been using SL55 with 2.6.38-UL1 in production for some time now with good results, although we still can't get kdump working...
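
In case anyone has kdump going on SL55 with the UL kernels: our setup follows the standard SL5 recipe, sketched below (the crashkernel reservation size is our guess, not a tested value), so pointers on what we're missing are welcome.

  # reserve crash-kernel memory on the kernel line in /boot/grub/grub.conf:
  #   ... crashkernel=128M@16M
  chkconfig kdump on
  service kdump start
  # test after a reboot: force a panic, then look for a vmcore under /var/crash
  echo 1 > /proc/sys/kernel/sysrq
  echo c > /proc/sysrq-trigger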

-Nate




