SCIENTIFIC-LINUX-USERS Archives

October 2018

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

From: Larry Linder <[log in to unmask]>
Date: Sat, 13 Oct 2018 10:20:22 -0400
Content-Type: text/plain
The problem is not associated with the file system.
We have a newer system with SL 7.5 and xfs and we have the same problem.

I omitted a lot of directories for reasons of time and importance.  fstab
lists what is mounted and used by the OS.

The fstab was copied exactly as SL 7.5 built it.  It does not give you a
clue as to what the directories are and it shouldn't.

The point is that I would like to use more physical drives on this system,
but because of the motherboard or the OS the last physical disk, "sde",
is not seen.  One of our older SCSI systems had 31 disks attached to it.

The BIOS does see 1 SSD, 4 Western Digital drives and 1 DVD:
SSD sda
WD  sdb
WD  sdc
WD  sdd
WD  sde is missing from "fstab" and not mounted.
Plextor DVD

We tried a manual mount and it works, but when you reboot it is gone
because it is not in "fstab".
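For reference, a mount survives a reboot only if it has an entry in /etc/fstab. A minimal sketch, assuming the missing disk has a single partition /dev/sde1 and using a made-up mount point /backup2 and placeholder UUID (none of these specifics are from the post):

```shell
# Look up the partition's UUID on the affected machine:
#   blkid /dev/sde1
# Then append an entry like the one below to /etc/fstab.  The UUID,
# mount point, and filesystem type here are placeholders.  'nofail'
# keeps boot from hanging when a rotated-out disk happens to be absent.
FSTAB_LINE='UUID=0a1b2c3d-1111-2222-3333-444455556666  /backup2  xfs  defaults,nofail  0 2'
echo "$FSTAB_LINE" >> /tmp/fstab.example   # demo target; the real file is /etc/fstab
grep '/backup2' /tmp/fstab.example
```

After editing the real /etc/fstab, `mount -a` will attempt all entries and report any mistakes before the next reboot does.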

Why so many disks:
Two of these disks are used for backups of users on the server, twice a
day, at 12:30 and at 0:30.  These are also kept in sync with two disks
at another physical location.  Using "rsync" you have to be careful or
it can become an eternal garbage collector.  This is off topic.

A disk has a finite life, so every 6 months we rotate in a new disk and
toss the oldest one.  It takes two and a half years to cycle through
the pack.
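Those numbers imply a pack of five disks (an inference from the stated figures, not something the post says explicitly): one swap every 6 months over a 30-month cycle.

```shell
# 2.5 years = 30 months; one disk swapped in every 6 months.
months_per_swap=6
cycle_months=30
echo $(( cycle_months / months_per_swap ))   # prints 5 -> disks in the rotation pack
```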
This scheme has worked for us for the last 20 years.  We have never had
a server die on us.  We have used SL Linux from version 4 to current,
and before that RH 7->9 and BSD 4.3.

We really do not have a performance problem, even on long 3D renderings.
The slowest thing in the room is the speed at which one can type or
point.  Models, simulations, and drawings are done before you can reach
for your cup.

Thank You
Larry Linder


On Fri, 2018-10-12 at 23:07 -0700, Bruce Ferrell wrote:
> On 10/12/18 8:09 PM, ~Stack~ wrote:
> > On 10/12/2018 07:35 PM, Nico Kadel-Garcia wrote:
> > [snip]
> >> On SL 7? Why? Is there any reason not to use xfs? I've appreciated the
> >> ext filesystems, I've known its original author for decades. (He was
> >> my little brother in my fraternity!) But there's not a compelling
> >> reason to use it in recent SL releases.
> >
> > Sure there is. Anyone who has to manage fluctuating disks in an LVM knows
> > precisely why you avoid XFS - shrink an XFS-formatted LVM partition. Oh,
> > wait. You can't. ;-)
> >
> > My server with EXT4 will be back on line with adjusted filesystem sizes
> > before the XFS partition has even finished backing up! It is a trivial,
> > well-documented, and quick process to adjust an ext4 file-system.
> >
> > Granted, I'm in a world where people can't seem to judge how they are
> > going to use the space on their server and frequently have to come to me
> > needing help because they did something silly like allocate 50G to /opt
> > and 1G to /var. *rolls eyes* (sadly that was a true event.) Adjusting
> > filesystems for others happens far too frequently for me. At least it is
> > easy for the EXT4 crowd.
> >
> > Also, I can't think of a single compelling reason to use XFS over EXT4.
> > Supposedly XFS is great for large files of 30+ GB, but I can promise you
> > that most of the servers and desktops I support have easily 95% of their
> > files under 100M (and I would guess ~70% are under 1M). I know this,
> > because I help the backup team on occasion. I've seen the histograms of
> > file size distributions.
> >
> > For all the arguments of performance, well I wouldn't use either XFS or
> > EXT4. I use ZFS and Ceph on the systems I want performance out of.
> >
> > Lastly, (I know - single data point) I almost never get the "help my
> > file system is corrupted" from the EXT4 crowd but I've long stopped
> > counting how many times I've heard XFS eating files. And the few times
> > it is EXT4 I don't worry because the tools for recovery are long and
> > well tested. The best that can be said for XFS recovery tools is "Well,
> > they are better now than they were."
> >
> > To me, it still boggles my mind why it is the default FS in the EL world.
> >
> > But that's me. :-)
> >
> > ~Stack~
> >
> 
> The one thing I'd offer you in terms of EXT4 vs XFS.... Do NOT have a system crash on very large filesystems (> 1 TB) with EXT4.
> 
> It will take days to fsck completely.  Trust me on this.  I did it (5.5TB RAID6)... and then converted to XFS.  Been running well for 3 years now.
