SCIENTIFIC-LINUX-USERS Archives

December 2005

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject:
From: John Wagner <[log in to unmask]>
Reply To: John Wagner <[log in to unmask]>
Date: Wed, 14 Dec 2005 20:22:20 +0100
Content-Type: text/plain
Parts/Attachments: text/plain (74 lines)

Troy Dawson wrote:

> Brett Viren wrote:
>
>> John Rowe <[log in to unmask]> writes:
>>
>>
>>> The options would seem to be:
>>>
>>> * AFS
>>> * GFS
>>
>
> GFS is really geared towards all of the machines having access to the 
> disk by Fibre Channel, or some similar way, and all being able to write 
> to the disk.
> Not for one machine having access to the files and sharing it to all 
> the others (which is more of what I picture an NFS replacement being.)

GFS also provides a two-tier access method by using the GNBD 
(Generalized Network Block Device) mechanism to serve up the data 
storage area. In this model a number of nodes are designated as I/O 
serving nodes, and they serve a SAN-based storage area to the other 
nodes. The only stipulation is that all the I/O nodes must see (or be 
connected to) the same storage area. So, for instance, if you had a 
10 TB SAN that allowed 4 hosts to connect to it, you could have up to 
4 I/O serving nodes with FC connections to the SAN. Each of these nodes 
would then export a quarter of the file storage space as a network 
block device. Each GFS client system (typically the compute nodes) 
imports the 4 network block devices and constructs a single filesystem 
using the GFS pooling mechanism (basically a logical volume manager 
built into GFS). Thus you get file block striping over the 4 I/O 
servers, with the obvious gains in I/O throughput. (Actually, you have 
to configure striping yourself, as it is not enabled by default.)
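
To make that a bit more concrete, here is a rough sketch of the steps 
involved. The device names, hostnames, pool and cluster names below are 
made up, and the exact commands and options differ between GFS 
releases, so treat it as an outline and check the Red Hat GFS and GNBD 
manuals before trying it:

   # On each of the 4 I/O serving nodes: export its slice of the SAN
   # over GNBD (export name and device are examples only)
   gnbd_export -e slice1 -d /dev/sdb1

   # On each GFS client (compute node): import the block devices from
   # all 4 I/O servers; they appear under /dev/gnbd/
   gnbd_import -i ioserver1
   gnbd_import -i ioserver2
   gnbd_import -i ioserver3
   gnbd_import -i ioserver4

   # Build a pool over the imported devices (striping is set in the
   # pool configuration file) and activate it on each client:
   pool_tool -c gfs_pool.cfg
   pool_assemble -a

   # From one node only: create the filesystem (lock_gulm or lock_dlm
   # depending on the GFS version), then mount it on every client:
   gfs_mkfs -p lock_gulm -t mycluster:gfs01 -j 8 /dev/pool/gfs_pool
   mount -t gfs /dev/pool/gfs_pool /gfs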

It is relatively painless to set up once you have tried it a few times. 
The manuals are not exactly perfect, but it is certainly one way to 
share a single file storage area across a number of nodes. Please note 
that it is not the most scalable filesystem if you are using a large 
cluster (more than about 400 nodes). In that case Lustre is a better 
bet, as it has a more scalable architecture for large clusters. I think 
Lustre's locking is more advanced than GFS's as well; sometimes there 
are long delays in GFS when the filesystem is under heavy use (like 5 
minutes to do an "ls").

Of course, both Lustre and GFS can be used freely without support. 
Lustre's latest versions have been available to the community since 
SC05, when CFS announced simultaneous releases for paying customers and 
community users.

Hope this helps.

Regards,

John

Fujitsu Systems Europe.

>
> I really think the G, standing for Global, is misleading.
>
>>> * Lustre
>>
>>
>>
>> One more:
>>
>> http://www.fs.net/sfswww/
>>
>> -Brett.
>
>
>
