SCIENTIFIC-LINUX-USERS Archives

December 2010

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

From:         Stephen John Smoogen <[log in to unmask]>
Reply-To:     Stephen John Smoogen <[log in to unmask]>
Date:         Mon, 20 Dec 2010 14:37:25 -0700
Content-Type: text/plain
On Mon, Dec 20, 2010 at 12:44, Ken Schumacher <[log in to unmask]> wrote:
> Stephen,
>
>
> On Dec 17, 2010, at 7:02 PM, Stephen John Smoogen wrote:
>
>> On Thu, Dec 16, 2010 at 14:28, Ken Schumacher <[log in to unmask]> wrote:
>>> Greetings,
>>>
>>> I have a repeatable problem on at least one of our SLF 4.4 systems.  Running commands like 'yum --check-update' seems to run into some sort of memory leak.  The yum output gets to the point of saying "Reading repository metadata in from local files", and a top listing in another window shows the memory use simply climbing.  The original window will not respond to a Ctrl-C.
>>
>> 1) Various versions of yum do not respond to Ctrl-C because doing so
>> can cause the rpm package database to be left in a bad state.
>
> That's inconvenient in my current situation, but I understand the thinking behind it.  I can work around this by having a second window open allowing me to 'kill -15' the yum process once it gets into this bad state.
>
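(Side note: from that second window, something along these lines finds
the stuck process and sends it SIGTERM; <pid> is a placeholder for
whatever ps reports, and -15 lets yum exit more cleanly than -9 would.)

  ps -C yum -o pid,vsz,cmd
  kill -15 <pid>
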
>> 2) Yum will use a lot of memory depending on how much is installed. Of
>> course "a lot" is subjective and needs to be quantified. [100 MB was a
>> lot on one system and nothing on another.]
>
> I wait about 60 CPU seconds before killing the yum process.  According to 'top', at that point it is using 100% of one CPU and it has already allocated itself 2 GB of memory.  On this cluster head node, that is just a bit over 10% of the node's memory, but I am concerned about letting it go on consuming memory for fear of interfering with other services on the node.
>
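While you are experimenting, one cheap guard is to cap yum's address
space before starting it, so a runaway never gets past a fixed ceiling.
A sketch (the 2 GB figure is only an example value; ulimit -v takes KB):

  # in the shell that will run yum
  ulimit -v 2097152
  yum -d 5 check-update
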
> I have checked the version of the yum and yum.conf RPMs on this node and compared to other systems we maintain.  We have other systems running those same versions without this memory consumption problem.  I have run yum using the '-d 5' flag to get some verbose debug output.  The last output before this memory consumption starts says:

I would need to know what is installed on the system.
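
For a first pass, the installed-package count and the size of the rpm
database are cheap to collect (both are stock rpm/coreutils usage):

  rpm -qa | wc -l
  du -sh /var/lib/rpm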

>   Reading repository metadata in from local files
>   Setting up Package Sacks
>
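One generic thing worth trying when it stalls right at that stage,
though I can't promise it is the fix here: throw away the cached
repodata and let yum rebuild it on the next run.

  yum clean all
  yum -d 5 check-update
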
>> 3) 4.4 is really old. 4.8 is standard now, and 4.9 will be out the
>> door by summer (it will also probably be the last of the 4.x series,
>> just as 3.9 was the last of the 3.x series).
>
> The node was originally installed with the LTS 4.4 release (Wilson).  Until recently, we have been running daily yum updates against the node, so all the necessary errata and security updates have been applied.  Since this is a cluster head node, we can't jump it up to a 5.x release without proper planning and scheduling of downtime, etc.  Our user base expects the release to remain stable, so such upgrades are carefully considered.

Well, I had thought that applying all the updates would bring the system
to 4.8, but I realize that Scientific Linux does keep old point releases
alive.

What does 'rpm -Va --nofiles' tell you?
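
As I understand it, -Va verifies every installed package and --nofiles
skips the per-file checks, so anything it prints points at unmet
dependencies or failing verify scripts rather than changed files:

  # any output here is worth a second look
  rpm -Va --nofiles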

How do you get the repodata for these systems (a local mirror or a
remote one)? Can you try updating 1-2 packages directly? Or does even
'yum list' give you a 2 GB process?
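
For example (the package names here are placeholders only):

  # does a plain listing also balloon?
  yum -d 5 list updates
  # a narrow update of one or two packages
  yum update <pkg1> <pkg2>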

-- 
Stephen J Smoogen.
"The core skill of innovators is error recovery, not failure avoidance."
Randy Nelson, President of Pixar University.
"Let us be kind, one to another, for most of us are fighting a hard
battle." -- Ian MacLaren
