SCIENTIFIC-LINUX-USERS Archives

June 2011

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject:
From: Nico Kadel-Garcia <[log in to unmask]>
Reply-To: Nico Kadel-Garcia <[log in to unmask]>
Date: Sun, 12 Jun 2011 22:38:57 -0400
Content-Type: text/plain
Parts/Attachments: text/plain (38 lines)
On Sat, Jun 11, 2011 at 12:45 PM, Lamar Owen <[log in to unmask]> wrote:
> On Friday, June 10, 2011 08:29:49 PM you wrote:
>> It's problematic, especially if one leaves the 32-bit versions of
>> components and libraries dual-installed alongside the 64-bit ones
>> and then deletes one and not the other.
>
> Multilib support can indeed be a problem.  On a very large-memory system one would be wise to make sure that the system is 'pure' 64-bit on x86_64.  (Other 64-bit systems vary in their recommendations.)
>
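If you want to audit a host for multilib leftovers, something like this (a rough sketch off the top of my head, assuming stock rpm on EL5) should flag every package name installed for both a 32-bit and the 64-bit architecture:

    # list names that have both an i?86 and an x86_64 copy installed
    rpm -qa --qf '%{NAME} %{ARCH}\n' | sort -u | awk '
        $2 ~ /^i[3-6]86$/ { m32[$1] = 1 }
        $2 == "x86_64"    { m64[$1] = 1 }
        END { for (n in m32) if (n in m64) print n }'

Anything that shows up there is a candidate for careful removal, or at least for checking that both arch copies are at the same version.
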
>> The codebase for SL 5 and
>> RHEL 5 uses significantly out-of-date kernels, glibc, and other core
>> utilities. So yes: if you stretch the environment beyond the resources
>> that were common when it was originally designed, you can enter the
>> world of surprising corner cases.
>
> Given the fact of backporting, how 'out-of-date' the kernel, glibc, and other core utilities really are is difficult to determine.  And one person's 'out-of-date' is another's 'stable'.  Reminds me of this VAX in the corner, driving a microdensitometer....
>
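For what it's worth, the version string alone won't tell you much there; the package changelog is the quick way to see how much has been backported into those 'old' packages. A minimal check, from memory:

    # skim the fixes backported into the "old" kernel and glibc
    rpm -q --changelog kernel | head -40
    # count security-related entries, case-insensitively
    rpm -q --changelog glibc | grep -ic cve

The version numbers look ancient; the changelogs usually don't.
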
> But that doesn't address the problem for the OP: why does SLC5.6 see things so differently from upstream's code of the same version, built from the same source?  That's what the OP was asking about, and I'm looking forward to seeing the OP post back with any new information.

Me, too. I'm not assuming that our OP actually had identically
configured RHEL 5.x and SL 5.x environments. But the CD-based reports
are interesting.
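
Before drawing conclusions, I'd diff the actual package manifests of
the two hosts. A rough sketch (the manifest file names here are just
placeholders):

    # on each host, dump the full package list with versions and arch
    rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' \
        | sort > /tmp/manifest.$(hostname -s)
    # copy one manifest over, then:
    diff /tmp/manifest.rhel5 /tmp/manifest.sl5

If that diff isn't empty, the two environments weren't identical to
begin with.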

> But what were the 'common' resources at the time of EL5's initial release?  What were the 'extreme' resources?  IA64 systems were certainly available with >32GB of RAM prior to EL5's introduction.  I specified and got quoted an Appro quad-socket Opteron system with 64GB of RAM a year or more before EL5's introduction in March of 2007; it was very expensive, too, with over 75% of the cost of the whole machine being RAM at the time.
>
> My current VMware hosts are a little over 4 years old, with 32GB of RAM (Dell PE6950's, quad-socket dual-core Opterons; I wouldn't mind upgrading to quad-core or hex-core chips if they were supported, and 64GB of RAM is an option with all four sockets populated as they are).  Those PE6950's shipped 8 months before EL5.0 went GA.  That may not have been common, though; I do know they were expensive.  But Dell at least has been pretty good about keeping drivers, BIOSes, and other critical things like SAS controller and DRAC firmware updated through the years, even for that hardware.

It was bloody expensive. Let's see, 4 years ago.... I'd finished some
work helping build and design blade servers, including a lot of RHEL 4
and some significant SuSE integration. 8 Gig was considered hefty;
64 Gig was considered wonderful, because the RAM was so expensive and
the profit margins so high.

> Uptime junkies need to get a life.  Uptime isn't the be-all and end-all, even if it is part of the whole availability equation... as you probably agree, Nico.  In an HA VMware setup, uptime of the individual hosts is a non-issue as long as the downtime is planned.  That should hold for other virtualization solutions, too, as long as you've configured them for HA.

Virtualization and its high availability are useful, but they're not a
be-all and end-all either. They don't exercise kernels, they don't
break stale locks, and if some error has quietly corrupted your
filesystem, they don't give you a chance to fsck it unless you
actually reboot. They *do* massively reduce the cost of a reboot.
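
That said, on EL5 you can at least schedule the filesystem check for
the next planned reboot rather than waiting for a surprise. From
memory (rc.sysinit honors the flag file):

    # force a full fsck on the next boot
    touch /forcefsck
    reboot
    # or, with sysvinit's shutdown, the -F flag does the same:
    shutdown -rF now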
