SCIENTIFIC-LINUX-USERS Archives

July 2013

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject:
From: Nico Kadel-Garcia <[log in to unmask]>
Reply-To: Nico Kadel-Garcia <[log in to unmask]>
Date: Wed, 24 Jul 2013 08:11:33 -0400
Content-Type: text/plain
On Tue, Jul 23, 2013 at 10:46 PM, Yasha Karant <[log in to unmask]> wrote:
> On 07/23/2013 06:02 PM, Nico Kadel-Garcia wrote:

>> I'm glad for you, and startled myself. Our favorite upstream vendor
>> certainly supported doing updates from major OS versions to major OS
>> versions: you just couldn't gracefully do it *live*, because changing
>> things like major versions of glibc and rpm while you're in the midst
>> of using them to do the update is... intriguingly problematic.
>> (Tried it once: don't recommend it!)
>>
>
> One should never have to deal with the "intriguingly problematic" situation
> to which you allude, at least not with a properly engineered software
> system.  The upgrading runtime system (that which is actually executing to
> do the upgrade) should not depend upon any executable images from the system
> being upgraded, but should stand alone -- installing / overwriting to new
> executable images.  The only primary issue would be a power/hardware failure
> during the upgrade, possibly leaving the system in an unstable state,
> "mixed" between incompatible executables.  Otherwise, upon completion, the
> upgrading environment (possibly executing from a "temp" area on the hard
> drive into some portion of main memory) would install the new standalone
> bootable system (with bootloader and boot files), and the system should
> do a full reboot equivalent to a "cold" power-on.

Theory is nice. Practice... can get a bit interesting. The problem
isn't the "properly engineered system", it's the practicalities of
handling a remote system without boot media access to provide exactly
the isolated wrapper environment you describe.

There are additional issues, unfortunately. When doing the software
installations, updates such as "rpm" installs are normally done by
handing the rpm command a directive to use an alternative directory
as if it were the "/" directory (the --root option). But when the
newer version of RPM is a lot newer, you run into format changes in
the old RPM database in /var/lib/rpm.
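
To make that concrete, here's a rough sketch of the sort of invocation
I mean. The /mnt/sysimage mount point and the package paths are just
for illustration, not a tested recipe:

    # Treat the target system's disk, mounted elsewhere, as if it
    # were "/" while installing the new release's packages:
    rpm --root /mnt/sysimage -Uvh /mnt/source/RPMS/*.rpm

    # After a big enough version jump, the old database has to be
    # regenerated in the new rpm's format:
    rpm --root /mnt/sysimage --rebuilddb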

> The fact that you did need to deal with an "intriguingly problematic"
> situation seems to indicate a not very good upgrade implementation.  The
> same thing could happen with an update, depending upon which system
> dependencies are changed (e.g., a new glibc that is not backward compatible
> with the one being used by the previous running image).

I simply didn't have boot media or boot-time console access on the
remotely installed systems, which had to be down for less than one
hour apiece. I was asked if I *could* do it, and with some testing
found that I could. Doing the testing to work out the procedure, now
*THAT* cost time. And mind you, this was years back, with the original
Red Hat 6.2, and I did a similar in-place upgrade years later with
RHEL 4.3. The latter suggests it's probably feasible from Scientific
Linux 4 to Scientific Linux 5, but it was problematic. I don't
recommend it.

Incompatible glibc versions are almost inevitable with major OS
updates. So are database software changes, such as when RPM went from
Berkeley DB to SQLite. (Thank you for that one, upstream vendor!)
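
If you ever have to cross one of those database format changes, the
one precaution I'd suggest (again a sketch, not a tested recipe; the
manifest file name is made up):

    # Save a package manifest while the old rpm can still read its
    # own database, in case the new tools can't:
    rpm -qa | sort > /root/manifest-before.txt

    # After the upgrade, regenerate the database in the new format
    # and check that nothing went missing:
    rpm --rebuilddb
    rpm -qa | sort | diff /root/manifest-before.txt -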
