September 27th, 2005, 12:26 PM
Hi Agent... The reason they call it social engineering is that they don't always have to tell any lies. All they have to do is pretend they need help accessing some system. Many people tend to assume things and are willing to help even a total stranger who turns up at the office and says he needs access to one of the systems and thus needs a password.
He's in the office, so he must be either an unknown colleague or some outside advisor, and not everyone questions why such people need access. There's a job that needs to be done, and nobody wants to waste time on "formalities"...
"Hi, I am new here and need to access system X. Do you happen to know the password?" is a short and perfectly true statement for such a person. It's not a lie, but he knows the other person will assume he's a colleague or some hired help.
Do you know how many laptops get stolen from offices? A stranger just walks in among the other employees, avoiding any security checks. They look around for an unattended laptop, and once they find one, they pick it up and walk out again with it. It might take days before someone realises that the laptop isn't just stored in some safe place but actually got stolen. Maybe even longer.
September 27th, 2005, 12:30 PM
Humans are the weakest link in any IT system. Always have been, always will be.
Staff need to be aware of the consequences of their actions. 'Aware' being the key word. The only way to defeat a social-engineering attack is to have staff trained to expect it and to know how to deal with it. We are in the process of building a staff infosec awareness program which will cover password responsibilities and social engineering.
After that, they should know that if they give out a password to an external agency without authorisation, they will be collecting their last paycheck soon after.
Now you're starting to talk about risk. If the assets you are protecting are extremely valuable it starts to make sense to protect against the unlikely and to take more outlandish measures.
This would be extremely rare, but when the stakes are high and the information extremely valuable, some people might go as far as this.
I think Catch is very knowledgeable about risk and the management of protectively marked materials, etc.
September 27th, 2005, 12:48 PM
I think that we are slightly losing track of fundamental security principles?
1. All users have a unique ID and password.
2. Users are authorised to do ONLY what they need to do to perform their job function. No more and NO LESS (that way they have no reason to ask for help?)
3. Identification must be worn at all times.
4. Strangers/visitors must be identified to employees by line management, and their terms of reference clearly stated. Any deviation must be authorised by line management.
5. Where practicable, users should only be able to log on once, and only from their workstation.
6. Applications and databases should only be available during permitted hours.
I have no problems with shared passwords and IDs where these are purely "look up" applications of a non-sensitive nature. Basically, I mean "read only" stuff like part numbers, library books etc........where you only have an ID and password because the system requires it.
Many security problems can be put down to sloppy policies, ignorance and poor management.
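To make principle #2 concrete, here's a rough Python sketch of what "authorised to do ONLY what they need" looks like as a default-deny permission check. All the role names, user IDs and permission strings are invented for illustration; any real system would plug into its own directory and access-control lists.

```python
# Hypothetical sketch of the least-privilege principle: each user has a
# unique ID, each role maps to the minimal permission set for the job,
# and anything not explicitly granted is denied.

ROLE_PERMISSIONS = {
    "clerk":     {"read_parts", "read_library"},
    "developer": {"read_test_db", "write_test_db"},
    "dba":       {"read_prod_db", "write_prod_db"},
}

USERS = {
    "alice": "clerk",
    "bob":   "developer",
}

def is_authorised(user_id: str, permission: str) -> bool:
    """Allow an action only if the user's role explicitly grants it."""
    role = USERS.get(user_id)
    if role is None:
        return False  # unknown users get nothing (default deny)
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_authorised("bob", "write_test_db"))  # developer may touch test
print(is_authorised("bob", "write_prod_db"))  # but never production
```

The point of the default-deny shape is exactly what nihil says: if a user was never granted a permission, the system refuses, and the user has no reason to go asking colleagues for help (or passwords).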
September 27th, 2005, 01:31 PM
Also, in any organisation you will have different levels of security for different job profiles. The level of security needs to be relative to the responsibilities of the job. The biggest problem with security is that companies often go from one extreme, no security policy at all, to the other extreme of too much security. The end result is that the policy is often ignored because it is too restrictive to the business of the company.
Security awareness also needs to be instilled from the top down. It is not much good having the mail room worry about whether their passwords can stand up to a brute-force attack if the financial director is walking around with his password taped to the screen of his laptop.
"America is the only country that went from barbarism to decadence without civilization in between."
"The reason we are so pleased to find other people's secrets is that it distracts public attention from our own."
September 27th, 2005, 01:41 PM
Good points, nihil! Very wise comments, but unfortunately not everyone listens that well to good advice. Let's consider possible weaknesses in those rules...
1.) Sometimes several people all need regular access to the same system. Think about a cash register system in a restaurant or shop. Such a register won't provide lots of access, but it might still contain some valuable data (for example, the amount of cash in it). With proper hardware the employees might never need to touch the computer system behind it, but it's still there. And there can be other situations where multiple employees share the same computer system because they each need it only once in a while.
2.) The biggest problem is determining how much access someone needs. In general, a software developer will need lots of access to the system they are using. They might even need to install all kinds of additional tools on a regular basis. And unfortunately it's not always possible to fine-tune a system to the exact needs of each employee.
3.) Identification badges can be forged or even stolen. It would not be that troublesome for someone to just make a copy of such an identification, or to use the identification of someone with more access. Especially when identifications get copied, things become quite difficult, which is why biometric identification is becoming more popular.
4.) If you're in a big company where many people walk around many different departments, it becomes difficult to identify someone as a stranger. If a building houses over 2000 employees, keeping track of strangers becomes quite challenging, simply because most people are strangers to each other. Besides, strangers have all kinds of ways to get inside a building, with walking through the front door being one of the most daring techniques. Climbing through an open window or an open fire exit also works. With a fire exit, a regular employee can even just open the door for a stranger in return for cash or whatever else.
5.) Not always practicable, but it would be reasonably secure. The problem is that some systems need to be shared between multiple users. For example, consider a company where people are developing a client-server application. They would need access to multiple systems to test their work, preferably with administrator access. Test systems tend to be systems where multiple employees share a single account, and sometimes this test account has too much access to the whole system simply because one of the developers once needed that access.
6.) In a 24-hour economy there are many applications that must run 24/7. A helpdesk will need continuous access to the helpdesk database. And although people only work 8 hours per day, there's always a team that will take over their shift. (Sometimes those helpdesk people are located in different parts of the planet, so every team works during its own daytime: one team in the UK, one in India and the third in Brazil, and you have a requirement for 24/7 access.)
But still, you make very good points. It's just that poor managers will find poor reasons to ignore these simple rules.
One other consideration is when companies start to outsource their projects and those external companies need access to possibly sensitive company data. How much access should they be given, and how do you check that they keep the information you provide them as secure as possible? In that respect, outsourcing can be a real security problem. Unfortunately, it's also very inexpensive and thus a reason for management to choose this option.
September 27th, 2005, 03:06 PM
1. You are talking retail (EPOS) here. The only way to access the system is via an enquiry or a transaction. As people (customers) don't like to be kept waiting these systems are usually driven by a swipe card, key, token or whatever. As your performance is measured by your transactions, you would not want someone else's identity.
Anyone else has ID: Password: Workgroup: Profile. If they need to do things infrequently, then the task should be delegated to an authorised expert, remember "infrequent" often means "inaccurate".
2. Developers should not be allowed anywhere near the live production systems.
3. Beyond the scope of social engineering............more like breaking and entering?
4. "Walking around" that means they are not working.................we are now looking at some serious business process re-engineering requirements. There are also some quite nifty electronic surveillance and access control systems I have worked with.
5. See #2 above. You need a complete test environment or you will not be able to do End-to-End and User Acceptance Testing.
6. No problem you can run it on a "shift system" based on location/timezones.
Outsourcing?....................you do NOT give them sensitive data. You have the database design specifications, file layouts, screen layouts, transaction specifications, etc....................you give them test data which you have invented.
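The "invented test data" approach nihil describes could look something like this minimal Python sketch: generate fake records that match the live schema without ever touching real customer data. The field names, value ranges and record shape here are all assumptions for illustration, not any real system's layout.

```python
# Minimal sketch of invented test data: fake transactions matching an
# assumed live schema, so outsourcers and testers never see real records.
import random
import string

def fake_transaction(txn_id: int) -> dict:
    """Build one made-up transaction record (all values invented)."""
    return {
        "id": txn_id,
        "account": "".join(random.choices(string.digits, k=8)),  # fake account no.
        "amount_cents": random.randint(1, 500_000),              # invented amount
        "currency": random.choice(["GBP", "EUR", "USD"]),
    }

def fake_dataset(n: int, seed: int = 42) -> list:
    """Reproducible set of fake records: same seed, same data for everyone."""
    random.seed(seed)
    return [fake_transaction(i) for i in range(n)]

sample = fake_dataset(1000)
print(len(sample), "fake records generated")
```

Fixing the seed means every developer and every outsourced team can regenerate the identical data set, which also answers the volume objection: generating thousands of records for a load test is a one-liner, not weeks of work.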
September 27th, 2005, 03:49 PM
1. Retail is one area where many people can have access to the same system, and it's often overlooked from a security viewpoint. Especially when people pay with cash, it's quite easy for someone who operates the system to steal small amounts every day, hoping it won't get noticed. With access to the system they could even hide all traces, if they know how to modify the system. Then again, in retail many people operate the system through some additional hardware linked to the computer and never really have to deal with the computer itself. Still, the computer is often located very close to their work area.
2. True! But developers do need some data to test with, and often such data is simply retrieved from the live systems. A system that processes financial transactions needs some transaction data from a live environment just to make sure it handles it correctly. Will companies take the trouble to create spoof transactions for testing purposes, or do they trust their developers well enough to provide live data?
3. True. But breaking and entering is one way to collect sensitive information, especially if it's stored on laptops, small PDAs and mobile phones. Or maybe loose CD-ROMs, external disks, or even those small flash drives! A secure system also means making sure no sensitive data can be found on very portable devices.
4. Well, walking around could mean going from your work area to the local canteen, or even out the door for a sandwich in the shop next door. Or a visit to the toilet, or finding someone in another department. People will walk around the whole building. (They might just admire the architecture.) Of course, a good surveillance system would help, but how many companies are really willing to invest much in such systems?
My dad told me about a company he once visited. He saw security cameras all over the building and asked one of the employees whether it affected their privacy. The employee didn't care and pointed out that all the cameras were just made of cardboard and not at all operational. Rather than making things secure, they just made them appear secure...
5. As with 2, indeed. But test environments sometimes use data from live sources. (Which means they have read access to those sources.)
6. Shift system? Never thought of that. Now, I'm not a manager so I assume that managers will think of such a system. Yeah, I'm young and innocent...
Outsourcing has the same problem as regular development: they will need some kind of data to test their systems with. Creating (spoofed) test data costs time and thus money, so many companies just provide data from live systems. This means that outsiders get read access to what could be sensitive information. This is especially a problem when a company just needs a small project outsourced: the time required to generate test data could be more work than developing the whole project and testing it with data retrieved from the live system. Think about situations where thousands of records need to be processed within a small amount of time; you can only test that with large sets of test data.
In my opinion, most security risks occur because someone wants to save some time, because time equals money. And these days things need to be done fast, and often have to be finished by yesterday. So a good rule I've heard from my dad: if something saves you time, consider whether it also increases any security risks.
September 27th, 2005, 07:31 PM
You tell them the rules and they invent the data.................otherwise don't outsource to them? As in: "show it to me as you think it works", and I will tell you if you followed instructions and have half a chance of getting paid sometime.
"Creating (spoofed) test data costs time and thus money so many companies just provide data from live systems"
As for live systems.................any developer who is caught even thinking of going near them is shot!
What you do (as a workaround) is go to Infrastructure Support/Operations and tell them to restore any required files to the development/test system from backup.............that does not compromise your SLA with your users?
Hey Katja.................the sooner you start thinking like a manager, the sooner they will make you one. Unfortunately, this does involve some politics?
Also remember that real test data could be subject to legal compliancy?
September 27th, 2005, 07:59 PM
Okay, so you tell the other company what the data should look like and tell them to generate their own test data. And so they will, and charge you for the two additional weeks they need to generate it.
So not only does the company have to pay more, but the project will also be delayed by a fortnight. Many managers want things done as cheaply and as quickly as possible, so they are tempted to just provide some live test data from their systems.
And well... I am considering becoming a freelance developer, so I'd probably become my own manager. I already have a few contacts but want to finish my studies first. It's not an easy job, since I will have to do several things at the same time: write code, advertise my abilities, contact and visit customers, design products and even test it all. But it looks like a real challenge to me.
But that also explains my view on security. I look at it from a developers viewpoint, who needs full access and yet also a secure environment. As a developer, I don't care much about where any test data originates from, as long as it doesn't cost me any time to generate it. I'd have to charge more for something that's not challenging.
September 28th, 2005, 08:05 AM
Aside from educating your users about not giving their passwords out, you also need to educate them on choosing a GOOD password. As mentioned above, MANY people use passwords such as "god", "password", "administrator", etc...
Originally posted here by MURACU
Add a password scan with details on how long it took you to get 90% of the passwords in the company, and most people sit up and take notice. Especially with things like 25 people out of 50 using "god" as their password. Basically what I am saying is more or less the same as what has been said before: educate your users and communicate the importance of security to them.
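The kind of password scan MURACU describes is essentially a dictionary check: hash a small wordlist and compare against the stored hashes. Here's a tiny Python sketch of the idea; the usernames, hashes and wordlist are all invented, and unsalted MD5 is used only because that was a common (and weak) storage scheme at the time.

```python
# Hedged sketch of a dictionary-style password audit against a
# hypothetical dump of username -> unsalted MD5 hashes.
import hashlib

def md5_hex(word: str) -> str:
    return hashlib.md5(word.encode()).hexdigest()

WORDLIST = ["god", "password", "administrator", "letmein", "123456"]

# Invented example "dump" for illustration only.
stored = {
    "user1": md5_hex("god"),
    "user2": md5_hex("password"),
    "user3": md5_hex("S0me!Strong#Pass"),
}

# Any stored hash that matches a hashed wordlist entry falls immediately.
cracked = {
    user: word
    for user, digest in stored.items()
    for word in WORDLIST
    if md5_hex(word) == digest
}
print(f"{len(cracked)} of {len(stored)} passwords cracked: {cracked}")
```

Showing management a concrete "2 out of 3 cracked in under a second" figure is exactly the attention-getter the quote is talking about.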
A good password should include both letters and numbers, as well as - if possible and allowed - special characters. Examples (and these are just made up) might include "Bl4ck!Coffee" or "7ravel&Gnome".
While many places limit the length of a password, I have always felt that people should be able to set their own password lengths as long as they can remember them. Hence in my company, all employees are not only encouraged but required to use a minimum password length of 12 alphanumeric/special characters.
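A policy like that is easy to check mechanically. This is a minimal Python sketch assuming the rules described above (at least 12 characters, mixing letters, digits and specials); the exact thresholds are taken from the post, not from any particular product.

```python
# Minimal sketch of the password policy described above: minimum length 12,
# with at least one letter, one digit and one special character.
import string

def meets_policy(password: str, min_length: int = 12) -> bool:
    """Return True only if the password satisfies the assumed policy."""
    if len(password) < min_length:
        return False
    has_letter = any(c in string.ascii_letters for c in password)
    has_digit = any(c in string.digits for c in password)
    has_special = any(c in string.punctuation for c in password)
    return has_letter and has_digit and has_special

print(meets_policy("god"))              # far too short
print(meets_policy("Tr4vel!ing-2005"))  # long enough, all three classes
```

A check like this would typically run at password-change time, so weak choices like "god" never make it into the system in the first place.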
Windows XP = Windows Xtra Problems