Date: Mon, 20 Aug 2012 20:03:18 +0200
On 20/08/12 14:02, Janne Snabb wrote:
> Hello,
>
> I made some statistics and comparisons about security advisories
> published by three popular RHEL 6 clones: CentOS 6, Oracle Linux 6 and
> Scientific Linux 6.
>
> The article is available at the following URL:
>
> http://bitrate.epipe.com/rhel-vs-centos-scientific-oracle-linux-6_187
>
> I hope you find it interesting.
This is really interesting. However, there are five "Important" and
"Critical" errata which were delivered considerably more slowly
(RHSA-2012:0387, RHSA-2012:0388, RHSA-2012:1009, RHSA-2012:1054 and
RHSA-2012:1064). SL's average is heavily impacted by three of them.
What would be more interesting, from a statistical point of view, is to
take those extremes out of the comparison. On average, all distros
deliver errata updates fairly fast, but those extremes skew the overall
average. Of course, extreme delays have happened and will happen again
in the future; I'm not trying to hide that. But an average closer to the
"typically expected" value would probably give a better indication.
So my suggestion is to take the five errata I listed, which are tagged
as "Important" or "Critical", out of the equation. Those five errata are
a small minority in a larger set of data and certainly look like
exceptions, across all distros.
As I said, I'm not trying to hide the fact that SL had some slow
deliveries. We need those graphs too. Such things happen to all
distros; sometimes you just fall behind. But they do badly skew the
/typically expected/ delivery delay.
And people with a statistics background can probably explain even better
than I can why it's interesting to remove the extremes and compare that too.
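To make the point concrete, here is a minimal sketch of the effect using a
trimmed mean in Python. The delay numbers are made up for illustration only
and are not taken from the article's data set:

```python
# Hypothetical errata delivery delays in days (illustrative numbers only,
# NOT the article's measured data).
delays = [1, 1, 2, 2, 3, 3, 4, 60, 75, 90]

def mean(xs):
    """Plain arithmetic mean."""
    return sum(xs) / len(xs)

def trimmed_mean(xs, k):
    """Mean after dropping the k smallest and k largest values."""
    xs = sorted(xs)
    return mean(xs[k:len(xs) - k])

print(mean(delays))             # 24.1 -- pulled up by the three slow outliers
print(trimmed_mean(delays, 3))  # 3.0  -- closer to the "typically expected" delay
```

A handful of very slow deliveries can pull the plain average far away from
what a user typically experiences; trimming the extremes (or using a median)
gives a figure that better describes the usual case, while the outliers can
still be reported separately.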
kind regards,
David Sommerseth