Recent threads about Windows and UN*X/Linux security have really crystallized something for me… very few people have any idea what exactly constitutes an operating system's security. Needless to say, this situation demands a tutorial. ;)
First, let me discuss for a moment what the perceived idea of operating system security is, and why it is incorrect. Many arguments revolve around themes like: “Windows has way more viruses and exploits all the time!” “Apache.org got rooted, twice! Linux sucks.” “My professor said…” “Oh yeah? Hack my computer!” None of these really focus on operating system security at all. The Windows viruses and exploits, like the Apache.org roots, only deal with specific systems, be they highly configured or completely default. The operating system itself isn’t discussed. (I have no desire to go into the default configuration subject here, as it is a marketing choice and in no way affects the operating system’s security.) This is also why the argument “it is only as secure as the person running it” is flawed. Yes, any given system is only as secure as it was configured to be, but again this is no statement on the operating system’s security.
Why then do people constantly dance around and around this subject? Answers about specific configurations are of no use to someone wishing to know which system to use, or who is simply curious. Answers about each system only being as good as its admin merely raise further questions, because wouldn’t that mean that all systems can be exactly as secure? Obviously this isn’t true… so answers are never reached and the issue gets reopened time and time again.
I wish to close this topic by educating members on exactly what operating system security is and how it is quantified.
Security, as many of you know, revolves around the appropriate protection of three elements:

Confidentiality
Integrity
Availability
or "CIA" as someone who is studying for their CISSP will invariably spout. Confidentiality is of course keeping information secret, integrity is preventing the information from being altered incorrectly, and availability is ensuring that the system can perform its function. Now, availability is a whole different animal than the other two, and in this day and age essentially moot to discuss outside the context of a network; as such, it extends beyond the scope of this paper, sorry. Operating system security then consists of six (nine including availability) parts:
Confidentiality: Protection Model
Confidentiality: Capabilities
Confidentiality: Assurances
Integrity: Protection Model
Integrity: Capabilities
Integrity: Assurances
Each of the protection models should be proven (typically against a safety analysis model such as the Harrison, Ruzzo, and Ullman model) to ensure that they are free (or as close as possible) of theoretical exceptions, that is, instances where no matter how well the model is implemented it can be circumvented. Well-known protection models include the Bell-LaPadula hierarchical mandatory access confidentiality model and the Biba hierarchical integrity model. The model is the most important aspect of security: even if everything else in the system is perfect, it will still be exploitable if a weak model is used. Systems tend to use several protection models for each of the three roles, mostly because the more complicated a model gets, the harder it is to prove, so it is simpler to use a collection of simple models rather than a single comprehensive one.
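The two models named above are near-perfect duals of each other, which a toy sketch makes easy to see. (This is purely illustrative Python; the level names and the Entity class are made up, not any real system's API.)

```python
# Toy sketch of Bell-LaPadula (confidentiality) vs. Biba (integrity) rules.
# All names here are invented for illustration.
from dataclasses import dataclass

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

@dataclass
class Entity:
    name: str
    level: int  # clearance (for a subject) or classification (for an object)

def blp_can_read(subject: Entity, obj: Entity) -> bool:
    """Bell-LaPadula simple security property: no read up."""
    return subject.level >= obj.level

def blp_can_write(subject: Entity, obj: Entity) -> bool:
    """Bell-LaPadula *-property: no write down (would leak secrets)."""
    return subject.level <= obj.level

def biba_can_read(subject: Entity, obj: Entity) -> bool:
    """Biba simple integrity axiom: no read down (low-integrity data)."""
    return subject.level <= obj.level

def biba_can_write(subject: Entity, obj: Entity) -> bool:
    """Biba *-integrity axiom: no write up (can't taint cleaner data)."""
    return subject.level >= obj.level

analyst = Entity("analyst", LEVELS["secret"])
memo = Entity("memo", LEVELS["confidential"])
print(blp_can_read(analyst, memo))   # True: reading down is fine for secrecy
print(biba_can_read(analyst, memo))  # False: reading down taints integrity
```

Note how each Biba rule is the Bell-LaPadula rule with the comparison flipped; that symmetry is exactly why the two are usually taught as a pair.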
Capabilities are the tools and functionality the operating system uses to implement a given model and may include things like the specific access controls or what privileges are available and how they are defined. Examples include groups, how setting the system time is controlled, or having the system crash when it is unable to audit particular events.
Assurances are a way of determining that the models are implemented correctly and cannot be bypassed, and that the capabilities actually do what they are supposed to. Additionally, assurances can cover nearly all aspects of the operating system, from the maturity level of the development team to the quality and comprehensiveness of the documentation to the architecture of the operating system itself (though architecture also falls under the security model, I’ve listed it here because it is a function of a higher-assurance system rather than of a specific protection model). For example, using a microkernel architecture allows for much higher assurances, as all aspects of the protection models may be implemented at a single point known as a reference monitor (“An access control concept that refers to an abstract machine that mediates all accesses to objects by subjects” - Federal Standard 1037C).
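To make the reference monitor idea concrete, here is a hypothetical sketch (every name is invented) where all access decisions funnel through one tiny, auditable choke point instead of being scattered through the applications:

```python
# Hypothetical reference monitor sketch: a single small class mediates
# every (subject, object, access) request. Because the check lives in one
# tiny place, it is tractable to audit or even formally verify.
class ReferenceMonitor:
    def __init__(self):
        self._acl = {}  # (subject, obj) -> set of permitted access types

    def grant(self, subject: str, obj: str, access: str) -> None:
        self._acl.setdefault((subject, obj), set()).add(access)

    def check(self, subject: str, obj: str, access: str) -> bool:
        # The one point where the protection model is enforced.
        return access in self._acl.get((subject, obj), set())

rm = ReferenceMonitor()
rm.grant("alice", "/etc/motd", "read")
print(rm.check("alice", "/etc/motd", "read"))   # True
print(rm.check("alice", "/etc/motd", "write"))  # False
```

The design point is not the dictionary lookup but the funnel: no code path touches an object without passing through `check`, which is what "mediates all accesses" means in the definition above.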
Now, rather than making vague accusations about (random OS) sucking because you read some report in “(competing OS) Weekly” about how (random OS) has more reported defacements in the last three months, you could say something along the lines of: “I feel (competing OS) makes for a better web server because it uses the X integrity protection model, while (random OS) uses the Y integrity protection model, which has been proven flawed, so it doesn’t matter how much money/programmers they throw at it; the problem is too deep to fix without additional functionality.” Doesn’t that look much better? Only problem… if you knew what the models, capabilities, and assurances were, you wouldn’t be reading this document, right? So odds are you don’t actually know the details of any of these.
Real-world operating system protection models fall basically into one of two types: mandatory and discretionary. Mandatory access controls tend to be found in higher-security systems like those frequently used in the American (imagine my surprise in Australia at the ANZ and NAB… “Trusted operating system, what’s that?”) financial and aerospace (this includes a lot more companies than you might think, eg. GE) sectors and US military/government (also a growing number of technology companies including IBM, HDS, and HP.) Mandatory access controls can follow any number of models, but they essentially say the same thing: subjects cannot change an object’s permissions. Discretionary access controls are essentially the opposite: subjects (having the required permissions to do so) can change an object’s permissions. Effectively, this means that under mandatory controls the subject’s permissions are defined by the subject’s level/label/compartment/network flag/whatever, whereas under discretionary controls the subject’s user ID defines the subject’s permissions. This means both models have different traps; a common one for mandatory controls is that the system becomes unusable, and with discretionary controls subjects may attain more rights than they should.
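The core difference reduces to one question: who may change an object's permissions? A hypothetical sketch (invented classes, not any real OS interface) of that single distinction:

```python
# Illustrative contrast between mandatory and discretionary controls.
# All names are made up for this sketch.
class MACObject:
    """Mandatory: the label is fixed by system policy; no subject,
    not even the object's creator, may relabel it."""
    def __init__(self, label: str):
        self._label = label

    @property
    def label(self) -> str:
        return self._label  # read-only: there is no setter at all

class DACObject:
    """Discretionary: the owner may hand out rights at their discretion."""
    def __init__(self, owner: str):
        self.owner = owner
        self.perms = {owner: {"read", "write"}}

    def chmod(self, subject: str, trustee: str, rights: set) -> None:
        if subject != self.owner:
            raise PermissionError("only the owner may change permissions")
        self.perms[trustee] = set(rights)

doc = DACObject("alice")
doc.chmod("alice", "bob", {"read"})  # owner grants access: allowed
# doc.chmod("bob", "carol", {"read"}) would raise PermissionError
```

This is also why the traps differ: in the mandatory sketch nobody can loosen a label even when work demands it (the "system becomes unusable" trap), while in the discretionary sketch every owner is a potential source of permission creep.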
Like I said before, know your models; they define everything built on top of them, and knowing the model will help you identify the potential hot spots of a given system… as any good hacker or admin will tell you, this is essential when dealing with either side of the 0-day question.
Capabilities tend to revolve around not only the specifics of the security model’s access control implementation, but also other supporting elements. An example would be trusted subjects (subjects which are allowed to violate the security model in some predefined way) within a mandatory access control system. These allow the admin to intervene and prevent the system from migrating toward entropy. Other examples of supporting elements include Windows’ crash-on-audit-failure feature and Windows’ (among other operating systems) segregation of administrators and operators. Perhaps a better-known example would be discretionary access controls, which are found in more common operating systems like Windows and Linux. These need to be finely grained. Various rights (read, write, execute, delete, give/take ownership, read/write attributes, email, and print are good ones) should be defined for both allow and deny, with granularity down to a single subject (subjects, groups, services, and systems ideally). Systems with more anaemic controls than this are likely to have a number of problems, including the aforementioned organic propagation of permissions, and will be more complicated to maintain when unrelated subjects need similar access to the same object or to a wide set of different objects.
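A sketch of what "finely grained" buys you, assuming the common deny-overrides convention (the entry format and function here are hypothetical, not any real system's ACL API): explicit allow and deny entries let you grant a group access while carving out a single subject, which anaemic controls simply cannot express.

```python
# Hypothetical fine-grained DAC check with explicit allow and deny
# entries, where an explicit deny always wins (the usual convention).
def access_allowed(entries, subject_ids, right):
    """entries: list of (kind, trustee, right), kind is 'allow' or 'deny';
    subject_ids: the subject's own ID plus every group it belongs to."""
    decision = False
    for kind, trustee, r in entries:
        if trustee in subject_ids and r == right:
            if kind == "deny":
                return False  # explicit deny overrides any allow
            decision = True
    return decision

acl = [
    ("allow", "staff", "read"),
    ("deny",  "bob",   "read"),  # per-subject exception within the group
]
print(access_allowed(acl, {"alice", "staff"}, "read"))  # True
print(access_allowed(acl, {"bob", "staff"}, "read"))    # False
```

Without the per-subject deny entry, excluding bob would force you to invent a new "staff-except-bob" group and re-permission every object: exactly the maintenance headache described above.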
Just because a system utilizes a good model doesn’t mean it is secure; if the model is badly or incompletely implemented, the system will not only have grave security issues, but in the case of supporting capabilities attackers will actually know the exact weaknesses, as most supporting capabilities exist to fix specific theoretical flaws in the applied model.
Finally, to everyone’s favourite part… assurances. Yes, this includes code bugs and, to a lesser extent, configuration errors, which are responsible for at least 99% of the exploits discussed here. Unfortunately, most systems don’t segregate confidentiality and integrity much, so these code bugs typically affect both. Now comes the more complicated part: considering how the system handles security-related checks, and what level of assurance this process has. Clearly, having a single, very simple (ideally a finite state machine) security monitor that checks every process is the way to go. Unfortunately, systems with this type of assurance tend to be out of your or my price range, but the principle applies to lower-assurance systems as well… the closer you are to the theoretical ideal, the better. Obviously a system that requires every application to be responsible for its own security has lower assurances (countless security checks) than a system which handles security at the kernel level and effectively segregates applications from the rest of the system.

The next major aspect of assurance deals with configuration. Does the vendor provide adequate documentation? Note, I did not say “does documentation exist?” because different authors are likely to have different ideas about doing the same things, frequently with more and different types of shortcuts. The vendor should make available some sort of trusted facilities manual, and the better the vendor’s assurance practices, the more comprehensive it’ll be. In a perfect world the vendor would provide you with the exact configuration for the most secure stance in any role or combination of roles. Sadly, commercial systems are a loooong way from this ideal. In the meantime, however, guidelines for specific roles should be made available by the vendor and be clear enough for even a junior admin to implement.
Hopefully this will give you a better idea of what to compare when comparing the security of various operating systems. If you are still having trouble weighing the relative strengths and weaknesses of different models, capabilities, or assurances, I suggest you check out DOD-5200.28-STD or ISO-15408. Many people will argue that these are dated or just not applicable to real life; the reality is… this couldn’t be further from the truth. It is still possible to evaluate every aspect of the most modern operating system’s security against these documents. Meaning, they have not been outgrown. And why is that? Because they are fairly vague, merely roadmaps; basically they say: “What type of model does the system use? Can this model be proven? If not, does the system utilize supporting capabilities to adequately shore up any inadequacies in the model? Has the model been implemented correctly? Is it bypassable through poor assurances or flawed capabilities? Do you have documentation on how to use the system correctly?” So you can see how this is a more or less timeless yardstick, and reading a few evaluations will be very helpful in understanding further specifics.
Follow the same approach and you’ll be able to compare the security of any operating systems, organizational systems, applications, networks, anything. :)
Now I realize this makes for threads a little less fun than: “(random OS) sucks because it has too many holes”, but perhaps they might be a little more useful as well.
PS. I apologize about the length, my girlfriend is on night shift this week and I was bored outta my mind. ;)