SCIENTIFIC-LINUX-USERS Archives

December 2005

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Sender:       Mailing list for Scientific Linux users worldwide <[log in to unmask]>
Date:         Thu, 15 Dec 2005 10:24:08 +0100
From:         Enrico Sirola <[log in to unmask]>
Organization: StatPro Italia S.r.l.
To:           John Wagner <[log in to unmask]>
Cc:           Troy Dawson <[log in to unmask]>, John Rowe <[log in to unmask]>
Content-type: text/plain; charset=us-ascii

Hello John,


>>>>> "John" == John Wagner <[log in to unmask]> writes:

    John> Troy Dawson wrote:
    >> Brett Viren wrote:
    >>> John Rowe <[log in to unmask]> writes:
    >>> 
    >>> 
    >>>> The options would seem to be:
    >>>> 
    >>>> * AFS
    >>>> * GFS
    >>>
    >>  GFS is really geared towards all of the machines having access
    >> to the disk via Fibre Channel, or some similar way, and all being
    >> able to write to the disk.  Not for one machine having access
    >> to the files and sharing them with all the others (which is more
    >> of what I picture an NFS replacement being.)

    John> GFS also provides a two-tier access method by utilizing the
    John> GNBD (Global Network Block Device) mechanism to serve up the
    John> data storage area. In this model a number of nodes can
    John> be designated as I/O serving nodes and they serve up a SAN
    John> based storage area to the other nodes. The only stipulation
    John> is that all the I/O nodes must see (or be connected to) the
    John> same storage area. So for instance, if you had a 10TB SAN
    John> which allowed 4 hosts to connect to it then you could have
    John> up to 4 I/O serving nodes with FC connection to the
    John> SAN. Each of these nodes would then export a quarter of the
    John> file storage space as a network block device. Each GFS
    John> client system (typically the compute nodes) imports the 4
    John> network block devices and constructs a single filesystem
    John> using the GFS pooling mechanism (this is basically a logical
    John> volume manager built into GFS). Thus you get file block
    John> striping over the 4 I/O servers with the obvious gains in
    John> I/O throughput. (Striping actually has to be configured
    John> explicitly; it is not enabled by default.)
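
For the archives, the two-tier setup John describes would look roughly
like this with GFS 6.0/6.1 and GNBD. This is an untested sketch from
memory: the host names, device paths, export names, stripe size,
cluster/filesystem names and mount point below are all made up, and the
exact pool file syntax may differ on your version, so please check the
Red Hat GFS admin guide before copying anything.

    # On each of the 4 I/O servers: export its slice of the SAN as a GNBD
    gnbd_export -d /dev/sda1 -e gfs_srv1

    # On each compute node: import the block devices from all 4 servers
    # (the imports should show up under /dev/gnbd/)
    gnbd_import -i io-server1
    gnbd_import -i io-server2
    gnbd_import -i io-server3
    gnbd_import -i io-server4

    # Pool configuration file (pool_gfs.cfg) on the clients.  The
    # non-zero stripe size (in sectors) on the subpool line is what
    # actually turns on striping across the four GNBDs; 0 would just
    # concatenate them.
    poolname pool_gfs
    subpools 1
    subpool 0 128 4 gfs_data
    pooldevice 0 0 /dev/gnbd/gfs_srv1
    pooldevice 0 1 /dev/gnbd/gfs_srv2
    pooldevice 0 2 /dev/gnbd/gfs_srv3
    pooldevice 0 3 /dev/gnbd/gfs_srv4

    # Create and activate the pool, then make and mount the filesystem
    pool_tool -c pool_gfs.cfg
    pool_assemble -a
    gfs_mkfs -p lock_gulm -t mycluster:gfs1 -j 8 /dev/pool/pool_gfs
    mount -t gfs /dev/pool/pool_gfs /mnt/gfs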

Unfortunately, it seems Red Hat has discontinued support for GFS over
multipathed GNBD in the 4.x releases of RHEL. What they now advise
instead is to use iSCSI. I think this is related to the recent
(partial?) rewrite of the device mapper.
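
If you do go the iSCSI route on an SL/RHEL 4 box, the stock software
initiator gets pointed at the target more or less like this (again a
sketch from memory, the portal address is made up, and I have not tried
this on SL yet):

    # /etc/iscsi.conf: tell the initiator where to discover targets
    DiscoveryAddress=192.168.1.10

    # Start the initiator and list what it found; the imported LUNs
    # appear as ordinary /dev/sd* devices you can then put GFS or LVM on
    service iscsi start
    iscsi-ls
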
Bye,
e.

-- 
Enrico Sirola <[log in to unmask]>
