I apologise for the hiatus. My mailing list account's SMTP server seemed
to dislike me immensely (and it still does), and I patiently waited a
while for it to "fix itself" before bypassing it entirely and going with
my uni's SMTP service instead to send e-mail.
Luke Scharf wrote:
> I've observed that, after a prolonged discussion with you, your adviser,
> your department head, etc., a compromise can usually be reached. Typically,
> the compromise looks like the following:
>
> 1. They will create some sort of official or unofficial exclusion for
> "clueful people doing intelligent things to get work done".
> 2. All security problems are on your head -- not theirs. They'll
> reserve the option to cut off your network access, if there's a
> problem.
> 3. Don't bother them, and they won't bother you. If your machines
> behave themselves, they'll let you do your thing.
>
>
Thank you, kind sir, for a most thorough and useful response. I plan to
use just about every single point you've mentioned below. I sort of
implicitly assumed what you said above, and was in the process of trying
to intelligently articulate my side of the story when I wrote to this
list for thoughts.
I've snipped most things below, not because they're irrelevant, but
because they're spot on and there's little left to discuss.
> 2. Firewall: Offer to firewall everything (put it behind a
> Linksys-style NAT box (with the latest firmware, of course!) or
> something and have people SSH'ing into the lab connect to a
> central box, and then SSH to where they need to go -- or do some
> fancy port redirection). If you're not running NFS, then the
> iptables firewall with an exception for SSH is good. Or both! If
> you do both, and then administer the machines as if they're on the
> open Internet, you're in good shape. Make sure to mention that
> you know the firewall doesn't make your machines secure -- these
> folks hear from a lot of people who think that they can remove
> their e-mail virus scanner and web-spyware-removal-tools when the
> machine is firewalled. But, if they don't trust your software on
> the open Internet, it will make them feel a bit better -
> especially when it's evident that you know how firewalls fit
> into the overall IT toolbox.
>
The IT staff are offering to do something of this sort. Their policy
seems to be: if a machine runs one of {a list of OSs} and is
patched/firewalled, it can access the internet directly. Otherwise, it
has to sit behind a NAT.
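For reference, the "iptables firewall with an exception for SSH" setup
mentioned in the quoted advice amounts to something like the sketch below.
This is a rough, illustrative fragment, not our actual ruleset; it needs
root, and the loopback/port details are just the usual defaults:

```shell
# Default-deny inbound policy: allow loopback, established connections,
# and new SSH sessions; silently drop everything else.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT
```

Outbound traffic is left open here, which matches the "trusted machines,
untrusted network" posture the IT folks seem comfortable with.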
I was opposed to this because people (including me) have their favourite
machines in the lab and like to access them directly. I hadn't thought
of what you suggested: expose the one box they trust on the network, set
up my internal network behind it, and let people hop through it to reach
their machines (directly enough).
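The hop-through arrangement is painless for users if they put OpenSSH's
ProxyJump in their client config; a single `ssh` command then goes via
the gateway transparently. The hostnames below are placeholders, not our
real machines:

```
# ~/.ssh/config on a user's own machine (hostnames are made up)
Host labgw
    HostName gateway.example.edu   # the one trusted, exposed box

Host labbox
    HostName mybox.lab.internal    # favourite machine behind the NAT
    ProxyJump labgw                # hop through the gateway automatically
```

After that, `ssh labbox` just works; on older OpenSSH clients without
ProxyJump, `ProxyCommand ssh -W %h:%p labgw` achieves the same thing.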
I use shfs instead of nfs (so there are no issues there).
> 3. Virus Scanner: You can install clamav from DAG's repository. It
> doesn't scan in realtime, but when they ask if you have a virus
> scanner on every machine, you can say "yes!". If you put a clamav
> command in the crontab ("@daily /usr/bin/freshclam ; nice -n 19
> clamscan --recursive --no-mail --infected /"), the machine is
> being automatically scanned.
>
This I hadn't even thought of, I must admit. Almost everything else
you'd mentioned came up. I was in a naive frame of mind which went
roughly like this: if someone does something stupid and gets infected,
at worst they lose all their data; the box won't necessarily be rooted.
After some thought, I realised that people don't care about the box
being up. I mean, they probably do, but they're probably more concerned
about their data.
Plus I do get to say "yes, we proactively monitor for viruses!"
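For my own notes, the quoted one-liner breaks out into a small daily
cron script like the one below. The log path is my own choice, not part
of the original suggestion:

```shell
#!/bin/sh
# Daily virus sweep: refresh ClamAV signatures, then scan the whole
# filesystem at the lowest CPU priority, recording only infected files.
/usr/bin/freshclam --quiet
nice -n 19 clamscan --recursive --infected --no-mail \
    --log=/var/log/clamscan.log /
```

Dropped into `/etc/cron.daily/`, this gives the same effect as the
`@daily` crontab entry while keeping a log I can show the IT staff.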
> These suggestions ought to convince the central IT folks that you're
> clueful, conscientious, and taking the administration of the machines
> seriously. In the end, this is all any IT organization really wants.
>
And that was exactly what I was shooting for.
Thank you very much; you've been extremely helpful.
Harish