SCIENTIFIC-LINUX-USERS Archives

August 2019

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

From: Mark Stodola <[log in to unmask]>
Reply To: Mark Stodola <[log in to unmask]>
Date: Thu, 22 Aug 2019 08:19:37 -0500
Content-Type: text/plain
On 8/21/19 8:10 PM, Bill Maidment wrote:
> Hi
> While copying a large file (about 200 GB) to a backup hard drive, I am 
> getting a multitude of XFS "possible memory allocation deadlock" messages.
> The Red Hat Portal shows the following:
> 
> XFS issues "possible memory allocation deadlock in kmem_alloc" messages
> Solution Verified - Updated August 9 2019 at 2:51 AM - English
> Issue
> 
>      Seeing file system access issues on XFS based file systems.
>      dmesg shows continuous entries with:
> 
>      XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
> 
> Does anyone know what the solution is, and whether SL7 will get this 
> fix soon?
> 

Here are the good bits from that page...
Also, you can create a free Red Hat developer account to get access to 
these support articles and similar resources.  The page lists a bit more 
information, such as root cause and diagnostics.

Resolution

This is a long-standing issue with XFS and highly fragmented files.

kernel-3.10.0-1062.el7 from Errata RHSA-2019:2029 contains fixes to 
mitigate this issue when caused by individual file fragmentation. Please 
upgrade to this kernel or later.
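
On SL7 that means waiting for the rebuilt 7.7 kernel to reach the 
mirrors.  A quick way to check what you are running, and to pull the 
update once it is available (standard yum workflow):

     $ uname -r
     # yum update kernel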

Workarounds

There are several solutions that can be used to avoid high file 
fragmentation:

     Preallocate the space to be used by the file with unwritten 
extents. This gives the allocator the opportunity to allocate the whole 
file in one go, using the smallest number of extents. As the blocks are 
written, they will break up the unwritten extents into written/unwritten 
space, and when all of the unwritten space has been converted, the 
extent map will match the original optimal preallocated state.
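
     As a sketch (hypothetical path and size, assuming the final file 
size is known up front), either of these preallocates the space as 
unwritten extents before the data is written; -f tells xfs_io to create 
the file if it does not exist:

     $ xfs_io -f -c "falloc 0 200g" /backup/bigfile.img
     $ fallocate -l 200G /backup/bigfile.img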

     Use the extent size hint feature of XFS. This feature tells the 
allocator to allocate more space than may be needed by the current write 
request so that a minimum extent size is used. The extent will initially 
be allocated as an unwritten extent and will be converted as the 
individual blocks within the extent are written. As with preallocated 
files, when the entire extent has been written the extent size will 
match the original unwritten extent. The extent size hint feature can be 
set on a file or directory with this command:

     $ xfs_io -c "extsize <extent size>" <dir or file>

     If set on a directory then all files created within that directory 
after the hint is set will inherit the feature. You cannot set the hint 
on files that already have extents allocated. If it is not possible to 
modify the application then this is the suggested option to use.
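
     For example, to set a 16 MiB hint on a backup directory 
(hypothetical path and size) so files created there afterwards inherit 
it, and then read the hint back:

     $ xfs_io -c "extsize 16m" /backup
     $ xfs_io -c "extsize" /backup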

     Use asynchronous buffered I/O. This will offer the chance to have 
many logically consecutive pages build up in the cache before being 
written out. Extents can then be allocated for the entire range of 
outstanding pages instead of for each page individually. This not only 
reduces fragmentation but also means fewer I/Os need to be issued to the 
storage device.
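
     A plain buffered copy already behaves this way; for instance, a dd 
through the page cache with a generous block size (illustrative sizes, 
note there is no oflag=direct) lets the kernel gather many dirty pages 
before allocating:

     $ dd if=/source/bigfile of=/backup/bigfile.img bs=16M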

     Avoid writing the file in a random order. If blocks can be 
coalesced within the application before being written out using direct 
I/O, there is a chance the file can be written sequentially, which 
allows the allocator to allocate extents contiguously.
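
     If the application must use direct I/O, issuing large, in-order 
writes gives the allocator the same opportunity; for instance 
(illustrative block size):

     $ dd if=/source/bigfile of=/backup/bigfile.img bs=64M oflag=direct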

     Use xfs_fsr to defragment individual large files. Note that 
xfs_fsr is unable to defragment files that are currently in use; using 
the -v option is recommended so it reports any issues that prevent 
defragmentation.

     xfs_fsr -v /path/to/large/file
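
     To gauge how fragmented a file actually is before and after, you 
can dump its extent map with xfs_bmap; every output line after the 
first is one extent (or hole):

     $ xfs_bmap /path/to/large/file | wc -l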
