January 27th, 2005, 11:19 PM
Criminal IT: Should you trust the Internet?
This is not news but a commentary that I found. I've bolded the interesting parts.
Source : http://news.zdnet.com/2100-1009_22-5553554.html
Commentary--Do you trust the Internet? It's a silly, meaningless question: of course you trust the Internet, under certain circumstances; and under other circumstances, of course you do not--or at least, you should not.
The Internet can be relied upon to route data between any two connected computers, finding its way automatically around any failed or damaged elements; that was, after all, what it was designed to achieve. Unfortunately, that is all it can be relied upon to manage. It provides no intrinsic guarantees of confidentiality, either of the existence or the contents of transmitted packets of data; and equally it provides no intrinsic guarantees of the transactional integrity of the messages it transmits.
There is no guarantee of delivery time, for example, simply what might be thought of as 'best effort' delivery for those messages which might form streaming media or voice over IP traffic--and for every other form of data, from e-mail to web pages, the packets all simply have to take their chance.
You can't trust the Internet to preserve confidentiality, you can't trust it to ensure integrity--other than the internal, checksum integrity of individual messages--and you have only the lowest level of assurances about availability, since the Internet comes with no 'quality of service' rating.
This means that, in fairness, you shouldn't really trust the Internet--any more than you would trust, say, notices pinned on a public board somewhere. But of course, the Internet isn't the element being trusted in activities such as ecommerce; instead, it is the medium over which the trustworthy services are being delivered. It is the medium that supports SSL, secure e-mail and reliable credit card processing; it is the medium which provides access to web servers, electronic mail and the reliable delivery of voice and data traffic. It is these services which we should be deciding whether or not to trust--along with the computers, or rather the operating systems and applications, which communicate over the Internet's wires and radio frequencies.
So can you trust those things? That is after all the most fundamental question for information security: can you and should you trust that the confidentiality, integrity and availability of information will be maintained by those components which claim to do precisely that? Should you trust products such as encryption, firewalls and the security components intrinsic to operating systems?
Perhaps the most insightful comment on the issues of trusting services is now nearly 21 years old, delivered by one of the founding fathers of the Unix operating system, Ken Thompson. Called Reflections On Trusting Trust, it was the acceptance speech delivered by Thompson when he collected the ACM Turing Award. Published in the August 1984 Communications of the ACM, it paints a scary picture of how much we should not trust components about which we have no knowledge.
Thompson's speech described the way in which it would have been possible for the creators of Unix--Thompson and Dennis Ritchie--to have modified the login program so as to allow them unconstrained access to every Unix machine--and incidentally to every computer derived from their first operating systems. A simple modification to the login program would allow a default password to have been used whenever the program encountered Ritchie's or Thompson's login names: instead of checking the passwords in the standard 'passwd' file, the program would have instead used a built-in password. Ritchie and Thompson would have been able to log in to any computer.
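The login modification Thompson describes can be sketched in a few lines of C. This is a minimal illustration, not Thompson's actual code: the login names match the article, but the function name, signature and the hard-coded password are invented, and a real login program would compare password hashes from the passwd file rather than plaintext.

```c
#include <string.h>

/* Hypothetical sketch of the backdoored check: if the user is one of
   the attackers' login names and types the built-in password, access
   is granted without ever consulting the stored password. */
int check_password(const char *user, const char *typed,
                   const char *stored) {
    /* The rogue branch: a default password that always works
       for "ken" and "dmr". */
    if ((strcmp(user, "ken") == 0 || strcmp(user, "dmr") == 0) &&
        strcmp(typed, "backdoor") == 0)
        return 1;
    /* Normal path: compare against the stored password (a real
       implementation would compare hashes, not plaintext). */
    return strcmp(typed, stored) == 0;
}
```

The point of the article is that these few rogue lines are trivially visible to anyone reading the login source, which is what motivates moving the Trojan into the compiler.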
Of course, a simple check of the source code of the login program would have shown the rogue lines of C, containing the check and the default password to apply. Thompson's trick would have been busted the first time anyone was curious to examine the source, which was distributed along with the operating system in the halcyon, innocent days of the early 1980s. To avoid this problem, though, Thompson said they would also have modified the C compiler. If the compiler detected it was compiling the source code for the login program, it would automatically include the Trojan code to allow them access; the lines didn't have to appear in the login program because they would be automatically included there by the compiler.
The problem then shifts, however, to the compiler: an examination of the compiler source code would have shown the rogue lines of code to create the login Trojan. To avoid this, Thompson also proposed that the C compiler include its own Trojan. The C compiler is itself written in C--in developing new programming languages, it is common for the compiler to be written so that it can compile itself, the ultimate test of the compiler and of the complexity of the language. Thompson proposed that their first compiler could have been modified to include a Trojan which detected whether or not the compiler was in fact compiling a C compiler. If it was, the Trojan would include the modifications to allow it to infect the login program.
Follow the logic of this. Every C compiler written in C needs to be compiled by a C compiler. The first C compiler was infected with the Trojan, so every version of the operating system compiled with that first compiler would be infected. If anyone created an alternative C compiler, again written in C, they would have to compile it--and would therefore have to use that first compiler. Because of this, the alternative compiler would be infected, as would then any further operating system compiled using this alternative.
If Thompson had indeed produced the Trojan, avoiding its effects would have been enormously hard work. Fortunately, neither he nor Ritchie actually introduced the Trojan he described, but the thought experiment does illuminate the dependencies which trust places on a wide variety of 'invisible' components, such as the compiler and the standard libraries.
To fully trust any component, you would need to know how it was produced--not merely at the level of the source code but across the entire development and production process. The so-called 'A1' secure systems feature comprehensive checks on the design, development, compilation, delivery, roll-out and operation of the trusted component, but these are as rare as hens' teeth. Most of us have to make do with systems which are, at best, C2-rated--systems in which some minimal security function is provided but not guaranteed.
So again, ask yourself quite how much you trust the Internet, a system built on invisible, unknown components, performing unknown functions in an unknown manner... still happy to shop, bank and flirt online?
Neil Barrett is visiting professor in the Centre for Forensic Computing at the Royal Military College of Science, Cranfield University, and the author of several books, papers and articles covering computer crime. A frequent computer expert witness for the prosecution, he has given evidence in cases of hacking, paedophilia, fraud and even murder.
It scares me because one of the most often used compilers is Microsoft Visual Studio. What stops M$ from compiling a Trojan into every Visual Studio program out there?
January 27th, 2005, 11:28 PM
The 10 million or so microsoft haters and suspicious buggers out there who take everything M$ ever releases and RE it, watch what it does to their systems and what it sends where on the internet..... M$ would be caught.... and when they were they would be dead in the water....
What stops M$ from compiling a Trojan into every Visual Studio program out there?
That's what's stopping them.... user oversight.....
Don't SYN us.... We'll SYN you.....
"A nation that draws too broad a difference between its scholars and its warriors will have its thinking done by cowards, and its fighting done by fools." - Thucydides
January 27th, 2005, 11:36 PM
It would be possible to embed something that is not active--asleep, waiting for activation. The issue is, we trust them a little. But that is why we have packet sniffers and Snort and firewalls. Who needs a "secret" backdoor? We find unintentional ones all the time.
West of House
You are standing in an open field west of a white house, with a boarded front door.
There is a small mailbox here.
January 28th, 2005, 12:00 AM
I remember this. It is the foundation of one of my major nightmares--that unintentional backdoors or bugs are left in the support libraries of the C compiler, which is then used to compile new applications and new compilers, which are used to compile new applications and new compilers ... until one day it is discovered and that is the end of civilization.
I'll never forgive Thompson for this.
January 28th, 2005, 05:54 PM
Erm, you guys do know that disassembly would show that **** right?
I used to be paranoid, but then I realized it was an utter waste of time. Logic is a far more efficient tool for living your life.
EDIT: N/m, technically a perl script that acts as a compiler I suppose is a compiler. It's just not a *compiled compiler*. :P
The Nelson-Shepherd cutoff: The point at which you realise someone is an idiot while trying to help them.
"Well as far as the spelling, I speak fluently both your native languages. Do you even can try spell mine ?" -- Failed Insult
Is your whole family retarded, or did they just catch it from you?