SCIENTIFIC-LINUX-USERS Archives

June 2013

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

From: Vladimir Mosgalin <[log in to unmask]>
Date: Mon, 10 Jun 2013 21:29:25 +0400
Hi Chuck Munro!

 On 2013.06.10 at 09:42:38 -0700, Chuck Munro wrote:

> My question is how do I minimize writes to the disk array?  Is it
> possible to significantly reduce disk writes once the host SL6 OS
> has booted up and the guest OS's are running?

There is no real reason to do anything about it; any modern SSD, even
the cheapest consumer model, won't die from "too many writes" on a
lightly loaded server (it might die from controller failure or as a
result of a power failure, but not from NAND flash wear). The special
cases where you should care about wear are busy SQL databases (they
often push all data through a write buffer of sorts, such as the xlog
on PostgreSQL or the redo logs on Oracle DB), the ZFS ZIL, maybe ext4
with the data=journal option (unsure), and similar cases where a lot
of data constantly passes through the device.

I'm using SSDs on many servers and, except for the cases above, writes
never exceed typical desktop writes (1-3 TB / year). Any SSD should
endure such writes for many years; if it dies, something else will be
the cause.

So the main advice is "don't bother". If you are worried, make sure
the SSD you get reports lifetime writes in SMART; the Crucial M4 and
various Intel models do, among many others (but some don't).
Then just check SMART after a few months of usage and stop worrying
after seeing how small the write numbers are.
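For instance, a rough check with smartmontools might look like this
(the attribute ID and its unit are vendor-specific; 241
Total_LBAs_Written counted in 512-byte sectors is common on Intel and
Crucial drives, and the raw value below is a made-up example):

```shell
# List SMART attributes (smartmontools package) and look for a
# lifetime-writes attribute such as 241 Total_LBAs_Written:
#   smartctl -A /dev/sda
# Converting a hypothetical raw reading to terabytes, assuming the
# common 512-byte-sector unit:
lbas=1953125000                      # made-up raw value of attribute 241
bytes=$((lbas * 512))
echo "$((bytes / 1000000000000)) TB written so far"
```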

> The current hard-drive-based box has lots of RAM (8 GBytes) and each

You could mount /tmp as tmpfs, but then again, there isn't much point
for just a virtualization host.
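For reference, a tmpfs /tmp is a one-line fstab change (the size here
is an arbitrary example; pick one to suit your RAM):

```
# /etc/fstab -- keep /tmp in RAM; contents are lost at reboot
tmpfs   /tmp    tmpfs   defaults,size=1g   0 0
```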

> guest has a pretty small footprint, so swapping has never been
> invoked.  So far the current SL6-based firewall pretty well runs
> itself with very little effort on my part, so things like syslog are
> usually not monitored much.  Can the /var filesystem be safely
> mounted from a file server or does it have to be on a local drive
> during bootup?

It can, but it requires tricks. /var/run and /var/lock should be local
or you'll have to hack init scripts. It's easier to run the whole /
from NFS (dracut supports that, though I'm not 100% sure the SL6
version does).

I really advise you not to try this, at least on SL6. Future versions
will have changes that make it easier (/var/run and /var/lock in
tmpfs), but for now you'd better not.

Anyhow, writing logs hardly contributes to writes on an SSD, because
these writes are buffered. Even 1 GB of written logs per day is only
0.4 TB of writes per year. Even the cheapest SSDs will handle 10 TB of
lifetime writes, and most will handle far more.
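As a back-of-envelope sketch of that arithmetic (the 1 GB/day and
10 TB figures are just the examples from above, not measurements):

```shell
# Years for a given daily log volume to use up an endurance figure.
daily_gb=1                                 # written logs per day, in GB
yearly_gb=$((daily_gb * 365))              # ~365 GB, i.e. ~0.4 TB/year
endurance_tb=10                            # pessimistic lifetime writes
years=$((endurance_tb * 1000 / yearly_gb))
echo "~${yearly_gb} GB/year; ~${years} years to reach ${endurance_tb} TB"
```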

> I do let logwatch and logrotate run on the current box (with hard
> drives) but I'm tempted to reduce logging to a bare minimum and
> rotate logs to /dev/null

Well, once the logs WERE written, rotating them doesn't add writes
(it usually just renames the files).
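For what it's worth, that rotate-by-rename behaviour looks like this
(a hypothetical /etc/logrotate.d/myapp; the path and options are
examples, not anything from your setup):

```
# Hypothetical /etc/logrotate.d/myapp: the default "create" mode
# renames the old log and creates a fresh empty one, so rotation
# itself writes almost nothing. "copytruncate", by contrast, copies
# the whole file first and so would add writes.
/var/log/myapp.log {
    weekly
    rotate 4
    create 0640 root root
}
```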

However, if you really are desperate, you can do fully remote logging:
turn off local logs and send them to a remote system.
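With rsyslog (the stock SL6 syslog daemon), forwarding is a one-line
rule; loghost.example.com is a placeholder for your collector:

```
# /etc/rsyslog.conf fragment: send everything to a remote collector.
# A single "@" forwards over UDP, "@@" over TCP.
*.*    @@loghost.example.com:514
```

You'd then comment out the local file rules if you want nothing
written to disk at all.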


> At this point it's all somewhat academic, but I'd like to consider
> the possibility of reducing heat and eliminating as many moving

I wouldn't expect much difference from a heat perspective unless you
are replacing tons of drives. Under medium load, an HDD consumes
around 6 W and an SSD around 2 W. You could save ten times that amount
by switching to low-power CPU models, for example.

> Your suggestions??

I really believe you are looking in the wrong direction. There is no
need to do anything on the system you described to reduce writes to
the SSD. You would prolong the theoretical flash life from 200 years
to 210 years or so (real numbers: on most systems I've seen, SSD flash
wear is less than 1% after 2 years of usage), which won't change the
real-world lifespan of the SSD.

Stop bothering about writes unless we are talking about special cases
like the ones mentioned above (I've seen 0.5 PB of writes per year on
a ZFS ZIL, for example). If anything, not using TRIM (and thus
suffering huge write amplification) shortens SSD life far more than
saving on writes extends it.
Check out
http://blog.neutrino.es/2013/howto-properly-activate-trim-for-your-ssd-on-linux-fstrim-lvm-and-dmcrypt/
or google for "SSD write amplification" if you are curious.
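On that note, a minimal weekly batched-TRIM cron job could look like
the following, assuming / lives on the SSD and your util-linux
provides fstrim (newer releases do; the linked article covers
alternatives such as the discard mount option):

```
#!/bin/sh
# /etc/cron.weekly/fstrim -- discard unused blocks so the SSD's
# garbage collection has free space to work with. Requires a
# TRIM-capable drive and a filesystem that supports discards.
fstrim -v /
```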

-- 

Vladimir
