SCIENTIFIC-LINUX-USERS Archives

October 2018

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject:
From:     Bruce Ferrell <[log in to unmask]>
Reply To: Bruce Ferrell <[log in to unmask]>
Date:     Tue, 16 Oct 2018 19:59:37 -0700

On 10/15/18 6:39 AM, Larry Linder wrote:
> When you look in /dev/disk and its subdirectories, there is no occurrence
> of "sde".
>
> We tried to modify "fstab" manually, but the device-code decoding scheme
> didn't work.  The system booted to "rescue".
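>
> (For reference, the "device code" scheme SL 7 normally writes into fstab is
> the filesystem UUID; decoding one goes something like this, with placeholder
> values:
>
>    blkid /dev/sdb1
>    /dev/sdb1: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="ext4"
>
> and the corresponding fstab line then refers to the partition as
> UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx rather than /dev/sdb1.)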
>
> There are a number of problems with the GigaBit motherboard, and one has to
> do with the serial communication.
>
> I looked into the BIOS and all 4 WD disks are present.  Disk 5, "sde", is
> not seen there.  We tried moving disks around with the same result, so it's
> not a disk problem.
> These are all WD disks.
>
> However, we have noticed that when you count up the devices to be mounted
> in "fstab" there are 16.  A number of the mounts are due to the users and
> the SL OS.
>
> On this server we will stick with ext4 for the time being.
>
> We have investigated a port-expansion board to allow us to use more
> physical disks, but when you peek under the covers and look at how they
> work, the performance penalty is not worth the trouble.
>
> Larry Linder
>
> On Sat, 2018-10-13 at 09:55 -0700, Bruce Ferrell wrote:
>> My one and only question is: do you see the device for sde, in any
>> form (/dev/sdeX, /dev/disk/by-*, etc.), present in /etc/fstab with the
>> proper mount point(s)?
>>
>> It really doesn't matter WHAT the device tech is.  /etc/fstab just
>> tells the OS where to put the device into the filesystem... Or it did
>> before systemd  got into the mix.
>>
>> Just for grins and giggles, I'd put sde (and its correct
>> partition/mount point) into fstab and reboot during a maintenance
>> window.
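>>
>> Something along these lines (the partition number, filesystem type, and
>> mount point below are only guesses for illustration; use whatever is
>> actually on that disk):
>>
>>    /dev/sde1   /mnt/disk5   ext4   defaults   0 2
>>
>> or, better, the UUID= form that blkid reports, so a device rename can't
>> break the entry.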
>>
>> If that fails, I'd be taking a hard look at systemd and the units that
>> took over disk mounting.  Systemd is why I'm still running SL 6.x.
>>
>> Also, if you hot-swapped the drive, the kernel has a nasty habit of
>> assigning a new device name.  What WAS sde becomes sdf until the next
>> reboot... but fstab and systemd just don't get that.  Look for
>> anomalies: disk devices that you don't recognize in fstab or the
>> systemd configs.
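>>
>> One way to sidestep the renaming entirely (a sketch only; the id string
>> below is made up) is to mount by a persistent identifier instead of sdX:
>>
>>    ls -l /dev/disk/by-id/
>>    # e.g.  ata-WDC_WDxxxxxx_WD-SERIAL -> ../../sde
>>
>>    # and in /etc/fstab:
>>    /dev/disk/by-id/ata-WDC_WDxxxxxx_WD-SERIAL-part1  /mnt/disk5  ext4  defaults  0 2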
>>
>>
>> On 10/13/18 7:20 AM, Larry Linder wrote:
>>
>>> The problem is not associated with the file system.
>>> We have a newer system with SL 7.5 and xfs and we have the same problem.
>>>
>>> I omitted a lot of directories because of time and importance.  fstab is
>>> what is mounted and used by the OS.
>>>
>>> The fstab was copied exactly as SL 7.5 built it.  It does not give you a
>>> clue as to what the directories are and it shouldn't.
>>>
>>> The point is that I would like to use more physical drives on this system,
>>> but because of the motherboard or the OS the last physical disk, "sde", is
>>> not seen.  One of our older SCSI systems had 31 disks attached to it.
>>>
>>> The BIOS sees 1 SSD, 4 Western Digital drives, and 1 DVD:
>>> SSD      sda
>>> WD       sdb
>>> WD       sdc
>>> WD       sdd
>>> WD       sde  (missing from "fstab" and not mounted)
>>> Plextor  DVD
>>>
>>> We tried a manual mount and it works, but when you reboot it is gone
>>> because it is not in "fstab".
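>>>
>>> (The manual mount was just something like the following; the partition and
>>> mount point here are only what we happened to pick:
>>>
>>>    mkdir -p /mnt/disk5
>>>    mount /dev/sde1 /mnt/disk5
>>>
>>> which of course does not survive a reboot.)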
>>>
>>> Why so many disks:
>>> Two of these disks are used for backup of the users on the server, twice a
>>> day, at 12:30 and at 0:30.  These are also kept in sync with two disks
>>> that are at another physical location.  Using "rsync" you have to be
>>> careful or it can become an eternal garbage collector.  This is off topic.
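>>>
>>> (The jobs themselves are nothing exotic; a sketch, with placeholder paths,
>>> would be cron entries such as:
>>>
>>>    30 12 * * *  root  rsync -a /home/ /backup1/home/
>>>    30  0 * * *  root  rsync -a /home/ /backup2/home/
>>>
>>> The care is mostly about whether to pass --delete: without it the mirror
>>> accumulates stale files forever, and with it a mistake in the source path
>>> propagates deletions.)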
>>>
>>> A disk has a finite life, so every 6 months we rotate in a new disk and
>>> toss the oldest one.  It takes two and a half years to cycle through the
>>> pack.
>>> This scheme has worked for us for the last 20 years.  We have never had
>>> a server die on us.  We have used SL Linux from version 4 to current, and
>>> before that RH 7-9 and BSD 4.3.
>>>
>>> We really do not have a performance problem, even on long 3D renderings;
>>> the slowest thing in the room is the speed at which one can type or point.
>>> Models, simulations, and drawings are done before you can reach for your
>>> cup.
>>>
>>> Thank You
>>> Larry Linder
>>>
>>>
>>> On Fri, 2018-10-12 at 23:07 -0700, Bruce Ferrell wrote:
>>>> On 10/12/18 8:09 PM, ~Stack~ wrote:
>>>>> On 10/12/2018 07:35 PM, Nico Kadel-Garcia wrote:
>>>>> [snip]
>>>>>> On SL 7? Why? Is there any reason not to use xfs? I've appreciated the
>>>>>> ext filesystems, I've known its original author for decades. (He was
>>>>>> my little brother in my fraternity!) But there's not a compelling
>>>>>> reason to use it in recent SL releases.
>>>>> Sure there is.  Anyone who has to manage fluctuating disks in an LVM knows
>>>>> precisely why you avoid XFS: shrink an XFS-formatted LVM partition.  Oh,
>>>>> wait.  You can't. ;-)
>>>>>
>>>>> My server with EXT4 will be back online with adjusted filesystem sizes
>>>>> before the XFS partition has even finished backing up!  It is a trivial,
>>>>> well-documented, and quick process to adjust an ext4 filesystem.
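>>>>>
>>>>> The whole ext4 shrink is roughly (LV name and sizes are only an example;
>>>>> take a backup first regardless):
>>>>>
>>>>>    umount /opt
>>>>>    e2fsck -f /dev/vg0/opt
>>>>>    resize2fs /dev/vg0/opt 20G
>>>>>    lvreduce -L 20G /dev/vg0/opt
>>>>>    mount /dev/vg0/opt /opt
>>>>>
>>>>> XFS has no equivalent: xfs_growfs only grows, so shrinking means dump,
>>>>> re-create, and restore.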
>>>>>
>>>>> Granted, I'm in a world where people can't seem to judge how they are
>>>>> going to use the space on their server and frequently have to come to me
>>>>> needing help because they did something silly like allocate 50G to /opt
>>>>> and 1G to /var.  *rolls eyes*  (Sadly, that was a true event.)  Adjusting
>>>>> filesystems for others happens far too frequently for me.  At least it is
>>>>> easy for the EXT4 crowd.
>>>>>
>>>>> Also, I can't think of a single compelling reason to use XFS over EXT4.
>>>>> Supposedly XFS is great for large files of 30+ GB, but I can promise you
>>>>> that most of the servers and desktops I support have easily 95% of their
>>>>> files under 100 MB (and I would guess ~70% are under 1 MB).  I know this
>>>>> because I help the backup team on occasion.  I've seen the histograms of
>>>>> file size distributions.
>>>>>
>>>>> For all the arguments about performance, well, I wouldn't use either XFS
>>>>> or EXT4.  I use ZFS and Ceph on the systems I want performance out of.
>>>>>
>>>>> Lastly (I know - single data point), I almost never get the "help, my
>>>>> file system is corrupted" from the EXT4 crowd, but I've long stopped
>>>>> counting how many times I've heard of XFS eating files.  And the few times
>>>>> it is EXT4, I don't worry, because the recovery tools are long-established
>>>>> and well tested.  The best that can be said for the XFS recovery tools is
>>>>> "Well, they are better now than they were."
>>>>>
>>>>> It still boggles my mind why it is the default FS in the EL world.
>>>>>
>>>>> But that's me. :-)
>>>>>
>>>>> ~Stack~
>>>>>
>>>> The one thing I'd offer you in terms of EXT4 vs XFS: do NOT have a system crash on a very large filesystem (> 1 TB) with EXT4.
>>>>
>>>> It will take days to fsck completely.  Trust me on this.  I did it (5.5 TB RAID 6)... and then converted to XFS.  It's been running well for 3 years now.

One other thing to look at is the kernel ring buffer with the dmesg command.


When you pull and re-insert the drive, you'll see messages there from udev and the kernel detecting the drive and assigning device names.

The fact that you see no sde in /dev/disk (and its subdirectories) is an excellent indication that the drive was assigned a new name by udev.
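
A quick way to confirm, once the drive has been re-seated (the output and names below are only illustrative):

   dmesg | grep -i 'attached scsi'
   # look for a line like:  sd 5:0:0:0: [sdf] Attached SCSI disk
   ls -l /dev/disk/by-id/

If the drive now shows up as sdf (or anything other than sde), udev handed it a new name, and an fstab entry pinned to /dev/sde will never match.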
