Date: Fri, 16 Feb 2007 07:38:38 -0600
Content-Type: text/plain
Hello all,
I have a small cluster of SL 4.4 machines with common NIS logins and
NFS shared home directories. In the short term, I'd rather not buy a
tape drive for backups. Instead, I've got a jury-rigged backup
scheme. The node that serves the home directories with NFS runs a
nightly tar job (through cron):
root@server> tar cf home_backup.tar ./home
root@server> mv home_backup.tar /data/backups/
where /data/backups is a folder that's shared (via NFS) across the
cluster. The actual backup then occurs when the other machines in
the cluster (via cron) copy home_backup.tar to a private
(root-access-only) local directory:
root@client> cp /mnt/server-data/backups/home_backup.tar /private_data/
where /mnt/server-data/backups/ is the client's mount point for the
server's /data/backups/, and /private_data/ is a directory on the
client's local disk.
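For concreteness, the two cron steps above could be sketched as a pair of small shell functions; the function names (backup_home, copy_backup) and the tar-to-a-temporary-name-then-rename trick are my own additions, not part of the scheme as described:

```shell
#!/bin/sh
# Server side: tar the home tree under a temporary name, then rename,
# so NFS clients never copy a half-written home_backup.tar while tar
# is still running.
backup_home() {
    src=$1    # e.g. /home
    dest=$2   # e.g. /data/backups
    tmp="$dest/home_backup.tar.inprogress"
    tar cf "$tmp" -C "$(dirname "$src")" "$(basename "$src")" || return 1
    mv "$tmp" "$dest/home_backup.tar"
}

# Client side: copy, then compare byte counts, so a silently truncated
# copy is reported instead of passing unnoticed.
copy_backup() {
    src=$1    # e.g. /mnt/server-data/backups/home_backup.tar
    dest=$2   # e.g. /private_data/home_backup.tar
    cp "$src" "$dest" || return 1
    src_size=$(stat -c %s "$src")
    dest_size=$(stat -c %s "$dest")
    if [ "$src_size" -ne "$dest_size" ]; then
        echo "truncated copy: $dest_size of $src_size bytes" >&2
        return 1
    fi
}

# cron would invoke these roughly as:
#   backup_home /home /data/backups                                 (server)
#   copy_backup /mnt/server-data/backups/home_backup.tar \
#               /private_data/home_backup.tar                       (client)
```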
Here's the problem I'm seeing with this scheme. Users on my cluster
have quite a bit of stuff stored in their home directories, and
home_backup.tar is large (~4GB). When I try the cp command on the
client, only 142MB of the 4.2GB is copied over (this is repeatable,
not a random error, and it's always about 142MB). The cp command
doesn't fail; rather, it quits quietly. Why would only part of the
file be copied over? Is there a limit on the size of files that can
be transferred via NFS? There's certainly sufficient space on disk
for the backups (both the client's and the server's disks are 300GB
SATA drives, formatted ext3).
I'm using the standard NFS that's available in SL43; the config is
basically the default.
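One thing worth checking (an assumption on my part, not something established above): if the share happens to be mounted with NFS protocol version 2, files past 2GB can't be addressed at all; `cat /proc/mounts` on the client shows the negotiated version. A quick probe for a size ceiling is to write a sparse file just over 2GB on the mount and see whether it survives:

```shell
#!/bin/sh
# Probe a directory (e.g. the NFS mount point) for a 2GB file-size
# ceiling: dd seeks one byte past 2^31, writes a single byte, and we
# check the resulting file size. Returns nonzero if large files fail.
bigfile_test() {
    f=$1/bigfile.test
    dd if=/dev/zero of="$f" bs=1 count=1 seek=2147483648 2>/dev/null || return 1
    size=$(stat -c %s "$f")
    rm -f "$f"
    [ "$size" -gt 2147483648 ]
}

# e.g.: bigfile_test /mnt/server-data && echo "large files OK"
```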
regards,
Nathan Moore
- - - - - - - - - - - - - - - - - - - - - - -
Nathan Moore
Physics, Pasteur 152
Winona State University
[log in to unmask]
AIM:nmoorewsu
- - - - - - - - - - - - - - - - - - - - - - -