SCIENTIFIC-LINUX-USERS Archives

February 2012

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Date: Mon, 6 Feb 2012 14:19:48 -0800
Content-Type: text/plain

On 2012/02/06 13:37, Chris Schanzle wrote:
> On 02/06/2012 04:02 PM, jdow wrote:
>> (On a heavily loaded system, just when are you going to find 12 gigabytes
>> of fully contiguous storage?)
>
> Probably lots of places on the below 1.0 TB Dell R910 box: :-) [no, not heavily
> loaded at the moment, so your point is still valid, but don't forget times are
> a-changin'!]
>
> # numactl --hardware
> available: 4 nodes (0-3)
> node 0 size: 258511 MB
> node 0 free: 255268 MB
> node 1 size: 258560 MB
> node 1 free: 258467 MB
> node 2 size: 258560 MB
> node 2 free: 258356 MB
> node 3 size: 256540 MB
> node 3 free: 256450 MB
> node distances:
> node   0   1   2   3
>   0:  10  20  20  20
>   1:  20  10  20  20
>   2:  20  20  10  20
>   3:  20  20  20  10

I don't forget. My partner works on machines that dwarf that Dell. (He writes
emulator code for UniSys.) {^_-}

At the time I worked on HardFrame, I had a 16 meg machine. So 100k was a small
chunk of memory and usually easy to find, until the machine had been really
active for a week or so.

Fragmentation Happens. And it's simply another major factor in slowdowns.
Each high-level transaction involves its own transaction time, disk rotation
time, command queuing time, low-level transaction time, low-level command
queuing, rotational latency, data transfer time, and possible copying time.
(I worked in a zero-copy environment, which can be noticeably faster.)
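To put rough numbers on that breakdown, here is a minimal back-of-the-envelope
sketch in Python. Every figure in it is a made-up placeholder, not a
measurement of any particular controller or drive:

# Rough model of one high-level I/O transaction using the cost
# components named above.  Every value is a made-up placeholder (seconds).
overheads = {
    "high_level_transaction": 50e-6,   # host-side command setup
    "command_queuing":        20e-6,   # scheduler / queue insertion
    "low_level_transaction":  30e-6,   # controller command handling
    "low_level_queuing":      10e-6,
    "rotational_latency":     4.2e-3,  # about half a turn at 7200 rpm
    "copy_time":              40e-6,   # zero in a zero-copy environment
}

def transaction_time(bytes_moved, bandwidth_bytes_per_s):
    """Fixed per-transaction overhead plus the actual data transfer time."""
    fixed = sum(overheads.values())
    transfer = bytes_moved / bandwidth_bytes_per_s
    return fixed, transfer

fixed, transfer = transaction_time(128 * 1024, 150e6)   # 128 KB at 150 MB/s
print("overhead %.2f ms, transfer %.2f ms" % (fixed * 1e3, transfer * 1e3))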

In a nice world the low-level latency is a once-per-transaction thing. It
still needs to be considered, because sometimes machines really are heavily
loaded and free-memory numbers like the ones above aren't anywhere to be
found.

Once the transaction overhead is down to equal the actual data transfer time,
you're not going to get more than another factor of two in speed no how no
way. So REALLY large buffers bring little or no improvement, and in the case
of memory fragmentation they can start costing you time.
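A quick sketch of that factor-of-two argument, assuming only a fixed
per-transaction overhead and a constant raw bandwidth (both values
hypothetical):

# Effective throughput for a single transfer of `size` bytes when every
# transaction pays a fixed overhead.  Numbers are illustrative only.
OVERHEAD_S = 1e-3       # 1 ms of fixed per-transaction overhead
BANDWIDTH  = 500e6      # 500 MB/s raw transfer rate

def effective_throughput(size_bytes):
    return size_bytes / (OVERHEAD_S + size_bytes / BANDWIDTH)

breakeven = OVERHEAD_S * BANDWIDTH   # size where overhead == transfer time
for factor in (1, 4, 16, 64, 256):
    size = breakeven * factor
    gain = effective_throughput(size) / effective_throughput(breakeven)
    print("%4dx break-even size -> %.2fx the throughput" % (factor, gain))
# The gain creeps toward 2.0 but never reaches it: past the break-even
# size, bigger buffers buy almost nothing.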

Note that some systems have "fairness" built in. That means really large
transfers are automatically broken into smaller chunks so the machine isn't
blocked by extended DMA transfers. I've seen this in driver software,
disk firmware, and even bus-level DMA firmware. (The latter was an Amiga
feature in its last days.)
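For illustration, that fairness idea amounts to something like the loop
below; MAX_CHUNK and submit_dma() are hypothetical stand-ins, not any real
driver or firmware interface:

# Sketch of "fairness" chunking: one large request is issued as a series
# of bounded transfers so a single caller can't hog the channel.
MAX_CHUNK = 4 * 1024 * 1024          # cap any single DMA at 4 MB (made up)

def submit_dma(offset, length):
    # Placeholder for whatever actually queues one bounded transfer.
    print("DMA: offset=%d length=%d" % (offset, length))

def fair_transfer(offset, total_length):
    """Break one large request into MAX_CHUNK-sized pieces."""
    done = 0
    while done < total_length:
        chunk = min(MAX_CHUNK, total_length - done)
        submit_dma(offset + done, chunk)
        done += chunk                # other requesters get a turn in between

fair_transfer(0, 10 * 1024 * 1024)   # goes out as 4 MB + 4 MB + 2 MB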

Really, I don't see much need for actual low-level transfer sizes to exceed
2 GB even with SSD-type devices - YET. We're an order of magnitude or more
away from the conditions that will change that overly broad statement.

{^_^}
