-
January 5th, 2006, 11:06 AM
#11
-
January 5th, 2006, 12:04 PM
#12
Hey The Duck, I certainly do! And this report has been discussed in another thread recently.
My comments, as in the past, are based on the statistical validity of these reports. My major premise is that we just don't have sufficient data to compute really cutting edge stats here.
The CERT report, if you look at the detail in the tables, mainly includes third party commercial software that just happens to run on that operating system. This presents the first major logical problem, in that Microsoft have a lot of software, and embedded software at that.
So, are all the IE, office, etc. vulnerabilities to be classed as Microsoft, even though versions can be found that run on other platforms?
Furthermore, there is no accounting for exposure here. Say I write a Linux application with 1,000 vulnerabilities in it, that is used by 500 people Worldwide; that will clock up 1,000 Linux vulnerabilities. Yet, a single Windows vulnerability clocks up a score of 1 and may affect 50,000,000 users.
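That exposure argument can be sketched in a few lines of Python. The figures are the hypothetical ones from the paragraph above, not real data:

```python
# Raw vulnerability counts ignore how many users are actually exposed.
# Figures are the hypothetical ones from the post, not real measurements.

apps = [
    # (name, vulnerability_count, estimated_user_base)
    ("obscure Linux app", 1000, 500),
    ("single Windows flaw", 1, 50_000_000),
]

for name, vulns, users in apps:
    raw_score = vulns          # what the league tables report
    exposure = vulns * users   # one crude "user-exposure" metric
    print(f"{name}: raw count = {raw_score}, exposure = {exposure:,}")
```

The obscure app tops the raw count by a factor of 1,000, yet the single Windows flaw has 100 times its total user exposure (50,000,000 vs 500,000) — which is the whole point.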
Again, there is no allowance for "core vulnerabilities" as opposed to "optional vulnerabilities". Here, I consider a "core vulnerability" to be one that is essential to the operating system and comes "out of the box", so to speak; optional ones are those you install yourself. I do not install all the Microsoft patches and upgrades on my machines, because I don't have the relevant software or I never use that particular facility.
Lastly, Microsoft et al. have an inbuilt advantage over open source providers. If you look at the box that the third party software comes in, it has a little Microsoft logo (seal of approval) on it. That is because it is a closed, patented operating system.
All the open source suppliers can do is patent their particular distribution. They have no control over third party applications vendors.
Unfortunately, we do not have any "hardcore" statistics to enable us to determine true "exposure" to threats, vulnerabilities and the like.
Just my warped opinions
EDIT:
brokencow - Good question...anyone know the answer?
Once again, the statistics are corrupt. Some AV firms claim to detect 250,000 pieces of malware, others far less. It depends on whether they use a generic or a specific identification methodology.
Anyway, it is a fallacy: aren't the guys in FOP #1 going to get more incoming than those in GHQ?
-
January 5th, 2006, 12:24 PM
#13
Originally posted here by nihil
Furthermore, there is no accounting for exposure here. Say I write a Linux application with 1,000 vulnerabilities in it, that is used by 500 people Worldwide; that will clock up 1,000 Linux vulnerabilities. Yet, a single Windows vulnerability clocks up a score of 1 and may affect 50,000,000 users.
Good point - and there are so many of the same OSS apps that it's no wonder CERT have found so many vulnerabilities.
Heck the whole post's good - but I can't give you any APs because I've got to 'spread them around' first
-
January 5th, 2006, 01:12 PM
#14
Hey J_K9
I can't give you any APs because I've got to 'spread them around' first
Don't worry about that, it is very refreshing to have this kind of discussion without the usual "My OS/distro is better than your OS/distro" type argument.
I think that the IT industry is very "flat" in its analysis at the moment. Personally, I see this as a multivariate analysis situation, which really boils down to a risk/vulnerability modelling exercise.
I have messed around with that sort of thing from a project management viewpoint for a good few years (built tools in Access 2.0), but my problem has always been: whilst you can identify the parameters, how do you quantify and enumerate their values?
There are so many versions around, so many patches that have been applied or not... it is a statistician's nightmare!
Take Linux for example: SuSE has a vulnerability, so does Red Hat... does that count as 2 or as 1? How many machines are in that situation? It goes on and on...
I am afraid to mention pirate copies, as they complicate things even further...
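The SuSE/Red Hat counting question can be made concrete with a quick sketch — the advisory entries and CVE-style IDs below are made up for illustration:

```python
# Each distro publishes its own advisory, but several advisories may trace
# back to the same upstream flaw. IDs here are invented for illustration.
advisories = [
    ("SuSE",    "CVE-2005-0001"),
    ("Red Hat", "CVE-2005-0001"),  # same upstream bug, listed again
    ("Red Hat", "CVE-2005-0002"),
]

per_advisory_count = len(advisories)           # counting method A: 3
unique_flaws = {cve for _, cve in advisories}  # counting method B: 2

print(f"Counted per advisory: {per_advisory_count}")
print(f"Counted per unique flaw: {len(unique_flaws)}")
```

Same data, two defensible answers (3 or 2) depending on what you decide a "vulnerability" is — which is exactly why the league tables are so hard to compare.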
It is a very inexact science at the moment, so I get a bit miffed by people putting out interpretations of the meagre statistics that we have as if they were some religious tract
-
January 5th, 2006, 01:43 PM
#15
Originally posted here by nihil
Take Linux for example: SuSE has a vulnerability, so does Red Hat... does that count as 2 or as 1? How many machines are in that situation? It goes on and on...
I see what you mean - counting them really would be statistical hell! You see the same thing with AV scanners; some have 65,000 signatures whereas others have hundreds of thousands, because some count the variants while others don't.
So, CERT's results should just be taken with a pinch of salt then, eh?
-jk
-
January 5th, 2006, 02:19 PM
#16
Hi J_K9
So, CERT's results should just be taken with a pinch of salt then, eh?
NO!!! That is my point, or a part of it... CERT did not particularly comment on the results, so in statistical terms I would consider it "raw data". It is the subsequent "interpretation" by partisans and hack journalists (who feed off partisans) that causes the confusion.
I have every confidence in CERT, and can claim to have been a subscriber to their bulletins well back into the last century... hey, last millennium even. But you have to read all the small print about the statistics, and what they were based on. If you are prepared to read the whole lot, they do tell you.
My problem with the statistics is their problem, and our problem: we just do not have enough data to present a plausible risk model.
My gut feel is that most operating systems are capable of being secured. Another variable there is "out of the box" versus after expert tuning... which leads on to "how expert is your expert?"
Might I suggest that you just take a quick scroll through those CERT lists and ask yourself: how many of these have I ever even heard of, let alone encountered in real life?
Cheers
-
January 5th, 2006, 02:40 PM
#17
nihil - Ok. So CERT's report is correct, except that as the number of computers vulnerable to a certain vuln is not taken into account, the report can be interpreted in the wrong way?
I looked at the report, and there are tons of apps I have never heard of - for example, "Jim Faulkner Newspost".
My gut feel is that most operating systems are capable of being secured.
Yes, and as MsM says - an OS is only as secure as the admin makes it (or something like that).
-
January 5th, 2006, 03:15 PM
#18
J_K9 absolutely!!!
It is a question of how we choose to interpret this "raw data" and how it is collected. CERT have to rely upon the information they receive, and unfortunately there is no "absolute" method of data collection.
My point is that we do not have a realistic risk model; only some opinionated interpretations.
I have often wondered if it would be worthwhile writing an "other tutorial" on the subject of reviews and statistics? It might help stop a few flame wars.
I do hope that our initial discussions will prevent your thread being hijacked into the usual flamewar.
Otherwise my comment is "what about VMS then? How many vulnerabilities, exploits and malwares can you detail?"... that is usually a good opener.
-
January 5th, 2006, 03:40 PM
#19
Originally posted here by nihil
I have often wondered if it would be worthwhile writing an "other tutorial" on the subject of reviews and statistics? it might help stop a few flame wars?
Great idea! Imagine how much of everyone's bandwidth that would save in the long-run!
Oh well, I'm off to see why my nmap scan has been going on for half an hour and still hasn't finished - when the nessus scan finished about 15 mins ago...
-
January 5th, 2006, 04:51 PM
#20
Here's another thing to consider which I'm surprised no one caught on to. The total *nix vulnerabilities list covers about 10 different operating systems: there's OS X, AIX, HP-UX, OpenBSD, SCO UnixWare, Solaris, FreeBSD and others.
Of course you're going to have a higher number of vulnerabilities if you lump a bunch of different OSes together against one OS. Also, a lot of those vulnerabilities are duplicates that were updated somehow.
Also, they didn't count certain vulnerabilities against Microsoft. IIRC, there have been few if any Mozilla Firefox vulnerabilities that only affected the Linux platform; they affected just about everyone. So why am I seeing 5 Mozilla Firefox lines under the *nix section and only 1 under the Microsoft section?
The raw data is good, but I don't understand the logic behind some of this. They should have separated the different OS vulnerabilities.
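One hedged way to see how the lumping skews things — the entries below are illustrative, not CERT's actual data:

```python
# A cross-platform flaw gets one line per affected OS in a combined "*nix"
# column, but only one line under Windows, inflating the *nix total.
# Entries are illustrative, not CERT's actual data.
entries = [
    ("cross-platform Firefox flaw", ["Windows", "Linux", "FreeBSD",
                                     "Solaris", "AIX", "HP-UX"]),
    ("Windows-only flaw",           ["Windows"]),
]

windows_lines = sum(1 for _, oses in entries if "Windows" in oses)
nix_lines = sum(len([o for o in oses if o != "Windows"])
                for _, oses in entries)

print(f"Windows column: {windows_lines} line(s)")  # 2
print(f"*nix column:    {nix_lines} line(s)")      # 5
print(f"Distinct flaws: {len(entries)}")           # 2
```

Two distinct flaws become 5 lines in the *nix column against 2 under Windows, purely because one column aggregates half a dozen operating systems.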