SCIENTIFIC-LINUX-USERS Archives

March 2008

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject:
From: Stephan Wiesand <[log in to unmask]>
Reply To:
Date: Thu, 20 Mar 2008 18:29:03 +0100
Content-Type: TEXT/PLAIN
Parts/Attachments: TEXT/PLAIN (53 lines)

Hi Chris,

On Wed, 19 Mar 2008, Christopher Hunter wrote:

> We are trying to put together an intelligent policy for yum software
> updates for workstations and servers.  We would appreciate anyone
> willing to share how they control yum updates for servers & workstations.

None of our systems receives updates that haven't been cleared
by an admin first. We don't distinguish between servers and
workstations, but between three classes: (A) receiving updates very early
(this includes my desktop, those of a few colleagues, and some test
systems nobody is relying on), (B) receiving updates typically two days
later (selected systems representing all major categories, like desktops,
farm nodes, and several kinds of servers), and (C) everything else,
typically receiving updates three days after (B).

We mirror the SL repositories, base and errata. Separate errata
repositories for the three classes of systems are populated with hard
links by a script run by hand. The script reads a configuration file
defining how long an update is delayed before being applied to one of
the system classes, depending on a regular expression matched against
the package name. This procedure is semi-automatic at best: the admin
must know what he's doing, and temporary overrides in the config file
are not unusual.
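
To give an idea, here's a stripped-down sketch (Python) of the promotion
logic. The paths, the config file name and format are all invented for
this example; it's meant to illustrate the idea, not to reproduce our
actual script:

    import os
    import re
    import time
    from pathlib import Path

    MIRROR = Path("/srv/mirror/sl/errata")   # assumed mirror location
    CLASSES = {"A": 0, "B": 1, "C": 2}       # column per class in the config

    def load_rules(path):
        # one rule per line: <package regex> <delay A> <delay B> <delay C>
        rules = []
        for line in Path(path).read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            pattern, *delays = line.split()
            rules.append((re.compile(pattern), [int(d) for d in delays]))
        return rules

    def delay_for(name, rules, cls):
        # first matching rule wins; no match means "never promote"
        for pattern, delays in rules:
            if pattern.match(name):
                return delays[CLASSES[cls]]
        return None

    def promote(rules, cls):
        target = Path("/srv/mirror/sl/errata-%s" % cls)
        now = time.time()
        for rpm in sorted(MIRROR.glob("*.rpm")):
            days = delay_for(rpm.name, rules, cls)
            if days is None:
                continue
            dest = target / rpm.name
            age = (now - rpm.stat().st_mtime) / 86400.0
            if age >= days and not dest.exists():
                os.link(rpm, dest)   # hard link into the class repo
                print("%s: promoted %s" % (cls, rpm.name))

    if __name__ == "__main__":
        rules = load_rules("/etc/yum-delays.conf")   # invented name
        for cls in CLASSES:
            promote(rules, cls)

After linking in new packages, the repository metadata of course has to
be regenerated (createrepo) before clients pick anything up, and the
admin looks at the result first.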

Clients generate a yum.conf according to their class. There's actually
more than one yum.conf: a general one including everything, meant to be
used by admins only; one for daily updates; and one for updates during
boot (possibly covering more packages).
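
For a class (B) machine, the daily-update configuration might boil down
to something like this (server name and paths invented for the example):

    [main]
    gpgcheck=1
    # kernels are handled separately by our boot script, see below
    exclude=kernel*

    [sl-base]
    name=SL base (local mirror)
    baseurl=http://mirror.example.org/sl/5x/base

    [sl-errata-B]
    name=SL errata, class B
    baseurl=http://mirror.example.org/sl/5x/errata-B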

Kernel updates are not applied using yum but with a homegrown script,
and only during shutdown or boot. Systems generally have at least two
kernels installed (which ones is defined in our configuration database).
The running kernel is never removed, and a new kernel is installed only
if all required module packages are available. If kernel updates happen
during boot, there's a new default kernel (#1 in the list from the
CMDB), and the running kernel is the previous default one, the system
will reboot itself. The same happens if boot-only updates (like glibc)
were just applied.
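
The reboot decision itself boils down to something like this (again just
a sketch; the kernel list really comes from the CMDB, and the real check
also involves the module packages):

    def should_reboot(cmdb_kernels, running_kernel, boot_only_updates):
        # cmdb_kernels: the kernels this system should have, default first
        # running_kernel: in practice, the output of uname -r
        if boot_only_updates:
            return True   # e.g. a new glibc was just installed
        if len(cmdb_kernels) < 2:
            return False
        new_default, previous_default = cmdb_kernels[0], cmdb_kernels[1]
        return (running_kernel == previous_default
                and running_kernel != new_default)

    # hypothetical versions: new default installed, old one still running
    print(should_reboot(["2.6.9-89.EL", "2.6.9-78.EL"], "2.6.9-78.EL",
                        False))   # -> True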

I guess this is all quite similar to what Jon described for their systems.

Sorry, I have little to no experience with the yum plugins.

Cheers,
  	Stephan

-- 
Stephan Wiesand
    DESY - DV -
    Platanenallee 6
    15738 Zeuthen, Germany
