
Thread: Third Linux Question

  1. #11
    Banned
    Join Date
    May 2003
    Posts
    1,004
    I think these Linux questions are very misunderstood... initially I suspected, and in fact hoped, that I would be proven wrong in my assumptions. I have not followed Linux development too closely in recent years because I saw no reason to.

    The original idea was to write a paper about "Why I think Linux doesn't meet my needs" and have it focus more on the difficulty of finding viable information in the land of open source.

    I was genuinely looking for some people to say: "Yes, Linux can do that like this blah." and then I would have replied "Wow, that exactly meets my requirements, excellent answer." And then I would have been off to research from the answer back to my initial understanding to determine why those gaps in knowledge existed in the hopes of finding a more reliable route to viable information.

    Unfortunately I seem to mostly get answers that don't address my question, yet get positive APs because people think I am trying to bash Linux and feel some burning need to defend arguments for the underdog regardless of how irrelevant or inept they may be.

    cheers,

    catch

  2. #12
    Senior Member
    Join Date
    Oct 2002
    Posts
    1,130
    So... under your system I have openoffice jailed... how do I share documents that I've created? And that is just the first of many concerns I have.
    Ok, rather than OpenOffice, I have chrooted pico. This chrooted instance of pico cannot see, and therefore cannot read, any data beyond its jail. Also, it has read/write access to the user home directories, which reside on a separate partition. This partition has been mounted at /chroot/pico/home with the noexec flag set. The /chroot partition on which the pico instance resides is set as read-only. So, it cannot write to its own jail -- only to the home directories of the users accessing it. Any data written by this instance of pico cannot be executed. I believe this solves both of your main questions... it also solves the problem of sharing documents you have created. Any user can write documents to his or her home directory with this jailed instance of pico and, after the fact, can share those documents as they normally would.

    If pico was used as an avenue of attack, the kernel would prevent any data from being written to the jail. The only place the kernel would allow data to be written would be in users' home directories, at which point it cannot be executed -- even with root permissions. Since the libraries containing the system calls necessary to remount this partition with execute permissions do not exist within the jail, one would find it very difficult to execute code by the remounting of the partition, which is the only way I can see execute permissions being granted. Pico was used for simplicity's sake; I have no doubt that if I can manage this with pico, that I can also manage it with OpenOffice, or for that matter any application which I may wish to jail in this manner.
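    As a rough sketch, the layout described above might look something like this. This requires root, and the device name and mount points here are assumptions standing in for the real layout, not a tested recipe:

    ```shell
    # Sketch only -- requires root; /dev/sdb1 and the /chroot paths are
    # illustrative assumptions.

    # The jail's own filesystem is read-only, so the jailed pico cannot
    # write anywhere inside its jail:
    mount -o remount,ro /chroot

    # User homes are a separate partition, mounted inside the jail with
    # noexec so anything written there cannot be executed:
    mount -o noexec /dev/sdb1 /chroot/pico/home
    ```

    With this in place, writes succeed only under /chroot/pico/home, and the kernel refuses to execute anything stored there.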

    Actually, jail is just a less secure, lower-overhead emulator as far as sandboxes go. Jail is frequently referred to as a secure subsystem... but this is simply not true, and Wikipedia is dead wrong here. Jailed subjects still use system resources with the same level of access as any other subject... you merely have a wrapper.
    Granted. But I cannot see how this is in any way less secure than the method you suggest using Windows. As far as I can tell, the creation of a chroot jail and the denial of directory traversal permissions basically create the same environment then.

    I use this method currently to allow my webserver access to user home directories, where they keep public html files. A chrooted shell server is also running in a separate jail, with access to the same public html directories that the webserver has.

    ALSO... the user database being used is currently a MySQL database, which is also jailed. This also solves one of your earlier questions about finding a centralized way to manage users. I believe it was your first Linux question that this problem came up in, although it does not solve every question asked in that post.

    If you want to have a look at my setup, PM me and I will provide you with access to my server. Tell me if my methods meet your requirements for this question.
    Government is like fire - a handy servant, but a dangerous master - George Washington
    Government is not reason, it is not eloquence - it is force. - George Washington.

    Join the UnError community!

  3. #13
    Banned
    Join Date
    May 2003
    Posts
    1,004
    If pico was used as an avenue of attack, the kernel would prevent any data from being written to the jail. The only place the kernel would allow data to be written would be in users' home directories, at which point it cannot be executed -- even with root permissions.
    Unfortunately this creates a few new problems... who owns the files created by this process.. root? What if users need to alter the permissions?

    What if users need to be able to develop and execute other applications? (I did say it was a development environment)

    What if the offending malware is a script rather than an executable?

    What if users need to acquire new executables to extend (in your example) pico, but they must be confined to the same sandbox?

    All of these are very real questions (the last is most obvious with web-based shipping/receiving applications, where an exe is downloaded to support advanced functionality like the handling of technical drawings, CAD, etc.).

    Granted. But I cannot see how this is in any way less secure than the method you suggest using Windows.
    I have never said it was less secure than Windows, I said it is less functional... and changes to make it equally functional will reduce the security.

    I use this method currently to allow my webserver access to user home directories, where they keep public html files. A chrooted shell server is also running in a separate jail, with access to the same public html directories that the webserver has.
    You keep going back to server environments... and yes I agree that for those environments they are fine since each user only deals with one jail... but when you have a single desktop accessing half a dozen jails... this gets insanely complicated and the lack of lateral sharing makes the issue even worse. Add thin clients and users changing workstations and again you suddenly need to manage the sharing of thousands of jailed environments.

    If client applications need the ability to write and execute data... clearly you can't have multiple instances of the same process (even if not at the same time) share the same jail, or you introduce new privacy risks if nothing else.

    See what I'm saying?

    cheers,

    catch

  4. #14
    Senior Member
    Join Date
    Oct 2002
    Posts
    1,130
    who owns the files created by this process.. root?
    The files created by this process are owned by the user running them. After the chroot() call is made, the user ID is changed to that of the user requesting the process, and the process runs as it normally would... with the permissions of the user running it.
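    That ordering (enter the jail as root, then take on the requesting user's IDs before the editor starts) is what GNU coreutils' chroot does with --userspec. The user name and paths here are illustrative assumptions:

    ```shell
    # Needs root for chroot(2); "alice" and the paths are illustrative.
    # chroot happens first, then the switch to alice's UID/GID, so any
    # files created belong to alice, not root.
    chroot --userspec=alice:users /chroot/pico /bin/pico
    ```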

    What if users need to be able to develop and execute other applications?
    If we are to allow users to develop and execute applications, what is to prevent a threat agent from using those users' credentials to execute other code? In my solution, any application process that a user requests would be jailed, and only able to run code within that jail, if we provide the necessary permissions to do so. These applications would be run as normal, with the permissions of the user running them, provided they are not setuid root.

    But, since any application would need to access data from a shared resource where users store that data, the shared resource (i.e. disk space with execute permissions) could become a vector with which to attack other applications. One solution might be to limit code execution permissions to the owner of each file, and deny those permissions to everyone else. While this would not prevent shared disk space from becoming an attack vector, it would slow the process down. But still, any threat agent with the credentials of that file's owner might execute it.

    You may have a development environment that uses this shared resource for code execution. A web browser might also use the same disk space. Any file placed on that resource by the web browser might then be read and executed by the development software with the proper credentials. But I cannot see how your solution would prevent this. In essence, if we are to provide code execution permissions at all, we must accept the risk that that code may be malicious. In your solution, user data would also be stored on a shared resource, and be readable by many different client applications, all of which may read and execute that code.

    Which if users may need to acquire new executables to extend (in your example) pico, but they must be confined to the same sand box?
    Again, if we are to allow users the permissions to install plugins and extensions, we must also accept the risk that those plugins and/or extensions may be malicious. Steps can be taken to limit damage and slow progress, but I doubt any security policy can entirely prevent either.

    What if the offending malware is a script rather than a executable?
    Now we open a whole new bag of beans. A script can be read and executed from any application with the appropriate programming, so long as the application executing it has sufficient permissions. But the same could be true of any executable code, whether a script or in binary form. A virtual machine may load a binary file from any place on the filesystem where it is allowed to read it, and execute that code from somewhere else. The problem is that the actual code being executed does not reside on a portion of the filesystem where we have denied execute permissions. With your solution, what is to prevent a virtual machine from reading binary code from a non-executable file, and then executing that code from somewhere else?
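    The script case can be demonstrated directly: denying the execute bit on a file (the same effect noexec has for a whole partition) does not stop an interpreter that can merely *read* the file from running it. The path below is an illustrative assumption:

    ```shell
    # A file with no execute permission for anyone:
    dir="${TMPDIR:-/tmp}/noexec_demo"
    mkdir -p "$dir"
    cat > "$dir/payload.sh" <<'EOF'
    echo "payload ran"
    EOF
    chmod a-x "$dir/payload.sh"

    # Direct execution is refused by the kernel...
    "$dir/payload.sh" 2>/dev/null || echo "direct exec denied"

    # ...but sh(1) only needs read access to run the same code:
    sh "$dir/payload.sh"
    ```

    So an execute-permission barrier stops binaries, but any reachable interpreter (sh, perl, a VM) re-opens the door for data it can read.
    
    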

    If client applications need the ability to write and execue data... clearly you can't have multiple instances of the same process
    Your original question was to find a method to prevent the execution of incoming data. If client applications can write and execute data, then we cannot prevent the execution of incoming data, unless I'm way off the mark on that one.

    I agree that creating jails for each application process would be very complicated. But you would hardly need a new jail for every process. You would, however, require a new process for each user accessing that application. But I disagree that that would make things overly complicated. Now, when you speak of a lack of lateral sharing... my solution is specifically designed to prohibit lateral sharing, as this becomes, as I said earlier in this post, an attack vector by which malicious code can spread from one application to another, and has privacy concerns as well. Where lateral sharing is required, I have allowed applications to read data from a shared resource, yet still limit the damage done by said application, should that shared data prove to be malicious, to a single sandbox at a time.

    I think the same problems you give to me in a linux based environment would also apply to a Windows environment, although the solutions may be different. If any code can be executed, malicious code can be executed. If lateral sharing is allowed, privacy risks are introduced. How would you plan to solve these problems in a Windows environment? Specifically, how would you allow users the ability to generate and execute code, extend application functionality with executable code, yet deny the same execution to malicious code? How would you allow lateral sharing, yet avoid associated privacy risks?

  5. #15
    Banned
    Join Date
    May 2003
    Posts
    1,004
    Ok your solution is going in circles... and you haven't introduced any new information over my original post.

    The files created by this process are owned by the user running them. After the chroot() call is made, the user ID is changed to the user requesting the process, and runs as it normally would run... with the permissions of the user running it.
    Then how are these files any different? The jailed process creates files outside of the jailed environment (why have the jail?) that fall under the original user's credentials. This seems like it meets the requirements to you?

    If we are to allow users to develop and execute applications, what is to prevent a threat agent from using those users' credentials to execute other code?
    In my solution, the threat agents lack the access... in yours, nothing.

    In my solution, any application process that a user requests would be jailed, and only able to run code within that jail, if we provide the necessary permissions to do so.
    Yet according to your point above, these jailed environments can create objects outside of their jailed environments... objects that lose the jailed credentials and acquire the user's credentials.

    In your solution, user data would also be stored on a shared resource, and be readable by many different client applications, all of which may read and execute that code.
    Not true at all... see, in my solution I can provide process credentials and user credentials at the same time... yours does not. Not only that, my credentials are more expressive, allowing again for greater granularity.
    The only way for you to get around this is to create a separate jail for every application used by every user (so userX, userX_firefox, userX_clientapp, etc.)... otherwise, if you have jail_firefox as an account, all that data is available to every user that uses that jail.

    With your solution, what is to prevent a virtual machine from reading binary code from a non-executable file, and then executing that code from somewhere else?
    How will the code get somewhere else? Yours requires moving objects around, mine does not.

    Your original question was to find a method to prevent the execution of incoming data.
    No it wasn't... this is the second time you have clearly misread my original question.
    "How can I restrict any number of client applications on a large-scale system, so that incoming data cannot be executed nor can it read or written to beyond the application’s sandbox?"
    "executed... read or written "
    "beyond the application's sandbox "

    I think if you take the time to review my question fully you will see that I addressed and dismissed your solution because it doesn't work. It doesn't matter if you use a jail, an emulator or any other subsystem... they all result in the same problems.

    Now when you speak of a lack of lateral sharing... my solution is specifically designed to prohibit lateral sharing, as this becomes, as I said eariler in this post, an attack vector by which malicious code can spread from one application to another, and has privacy concerns as well.
    Lateral sharing isn't an issue, as objects that move across will inherit new rights, and privacy isn't an issue since they are only laterally shared across a single user.

    If any code can be executed, malicious code can be executed.
    Hence the wish to sandbox it.

    Specifically, how would you allow users the ability to generate and execute code, extend application functionality with executable code, yet deny the same execution to malicious code? How would you allow lateral sharing, yet avoid associated privacy risks?
    Who said anything about denying malicious code? I merely wish to keep code that originated from the sandbox in the sandbox, and code that originated outside the sandbox to be usable nearly anywhere (the specifics of where, and those security concerns, are beyond the scope of this conversation... so nearly anywhere will have to do).
    Privacy issues do not occur because users still own the processes and are still bound by their access controls... additionally, they are bound by the process' restraints. Since everything occurs under the same UID... no privacy issues are introduced.

    cheers,

    catch

  6. #16
    Senior Member
    Join Date
    Sep 2005
    Posts
    221
    Quote Originally Posted by catch
    And I doubt you can apply all linux security models to Windows.
    Yup... every aspect of its access control model can be replicated, and then some.
    Really? I'm extremely interested in how that works; could you please explain this? It's way too relevant to my current job for me to leave this dormant. This is knowledge that I need.

    I agree with rcgreen and striek as far as the belief that your problems are Windows problems and that you are addressing them in a Windows way (but, after all, you're not disagreeing with this, are you? You offered us solutions for your problems in Windows together with the questions).

    If all code remains in the sandbox, how do you share the code, anyway? Do you give multiple users access to the sandbox? Where's your security then?
    You do realize that if you share your work, then people can steal it, right? Catch, you're being unfair because you always bring out one more layer where the system can be abused, and I would like to throw social engineering in your face, but your clever answer would probably be something along the lines of "Oh, but we're only talking about the system's security!".. And I hope it isn't, because the users are part of the system's security. So where's your security if we talk about social engineering? What exactly in all of your questions is helpful in strengthening the weakest link... The users?
    Definitions: Hacker vs. Cracker
    Gentoo Linux user, which probably says a lot about me..
    AGA member 14460 || KGS : Trevoke and games archived

  7. #17
    Senior Member
    Join Date
    Oct 2002
    Posts
    1,130
    Well, I created a method for jailed processes to create objects outside their jails to allow those objects to be shared with other applications and/or users. If a process can only create objects within its own jail, then how are we to share those documents without the use of a higher-level process outside of that jail? In your solution, mustn't there be some process that can read those objects from outside the jail so they may be used in other sandboxed applications?

    You wish to create a sandbox right? Am I not doing the same thing with a jail?

    But perhaps you're right, I may not fully understand the question. So... let's say we have a small system with three applications: a CAD application, to which we wish to give users the ability to extend functionality with their own executable code brought in from third parties; an IDE with which they need to be able to develop and execute their own code; and a web browser, which may also need to be extended with plugins. All three of these applications must be able to see and use the data created by the others, and store that data on a remote fileserver. We then have five users (let's call them Alpha, Beta, Gamma, Epsilon, and Omega) who each need to use all three of these applications. There are currently no access controls such as we have discussed in this thread.

    Provide me with a solution that meets your requirements in a Windows environment, and I will attempt to provide a similar solution for a Linux environment. I still say it can be done, although not as easily as in a Windows environment.

  8. #18
    Senior Member
    Join Date
    Jul 2003
    Posts
    813
    I don't know if this is pertinent, but it sounds like it could be employed... whenever Singularity becomes available:

    http://research.microsoft.com/os/singularity/

    The part about SIPs.

    Some comments on http://www.darksideprogramming.net/2...syst.html#more

    [It was on Slashdot, and I hate starting threads that simply copy that news.] But after a quick read it seemed that it could apply.
    /\\

  9. #19
    Banned
    Join Date
    May 2003
    Posts
    1,004
    Simplified Question:
    The problem is the very real issue of malware being surreptitiously delivered via client software.

    Functional Requirements:
    - Processes must not be able to process objects beyond their respective sandboxes.
    - Processes must be able to read, write, and execute objects within their sandboxes.
    - Users must be able to seamlessly move objects in and out of the sandboxes.
    - Users must be able to read, write, and execute objects within their environment.
    - Normal user permissions must not be extended via proxy agents or in any other manner.

    Striek, if you don't understand how your solution fails to meet these requirements, perhaps you are not the best person to be answering this question.

    Really? I'm extremely interested in how that works; could you please explain this?
    Linux uses an RWX scheme based on UID, GID, and everyone. This scheme is used for anything that can have permissions.
    Windows, on the other hand, allows you to set many more permissions (traverse folder, read attributes, change permissions, take ownership, read permissions, write attributes, delete, delete subfolder, etc.), and these permissions may be set to allow or deny (which is very useful in preventing rights from propagating via shared subjects). Windows allows these permissions to be set by UID, GID, and everyone... as well as by device or system. Additionally, while Linux only provides for one UID (the owner), one GID, and everyone... Windows allows as many UIDs and GIDs as you wish... the owner doesn't even need to be one of them.
    In addition to all of that, the security policy editor allows you to further define rights such as: backup files, change system time, bypass traverse checking, create global objects, replace process level tokens, restore files, take ownership, create permanent shared objects, etc. Again, these can be defined for many individual users and/or groups.

    Understanding this, it is clear to see that every expression of Linux's permission bits is contained within the expressiveness of Windows' ACLs.
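    The classic Linux model being compared here is small enough to show in full: exactly one owner, one group, and "everyone else", with three bits (rwx) per class. The file path below is an illustrative assumption:

    ```shell
    # One file, three permission classes, three bits each.
    f="${TMPDIR:-/tmp}/perm_demo.txt"
    echo "data" > "$f"

    chmod 640 "$f"          # owner: rw-, group: r--, other: ---
    stat -c '%a' "$f"       # prints the octal mode: 640

    # With mode bits alone there is no way to grant one *additional*
    # named user access without widening the group or "other" class --
    # the granularity gap described above. (POSIX ACLs via setfacl(1)
    # narrow this gap, but they are separate from the classic bits
    # being compared here.)
    ```
    
    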

    If all code remains in the sandbox, how do you share the code, anyway? Do you give multiple users access to the sandbox? Where's your security then?
    By assigning different security models to the sandboxed processes and the user using those processes. The security for the user is the same as if no sandbox existed.

    You do realize that if you share your work, then people can steal it, right? Catch, you're being unfair because you always bring out one more layer where the system can be abused
    People stealing the work isn't much of a concern... my concern is that users only share that which they intend to share... not things they are unaware of.
    Everything I've said has been at least implied by the original post if not outright declared.

    And I hope it isn't, because the users are part of the system's security. So where's your security if we talk about social engineering? What exactly in all of your questions is helpful in strengthening the weakest link... The users?
    The social engineering risk is largely decreased by the client application's inability to surreptitiously execute malware in a harmful manner. Additional steps are required on the user's part that would fall beyond their typical manner of operating, which would make automated social engineering very difficult. As a point beyond the scope of this question... each user environment is sandboxed from the system itself as well, with regard to the scope of executables and the limitations of where data can be written to and read from. So even socially engineered or intentional malware will be confined to a single user account... without some serious inside effort (which should all be auditable).

    cheers,

    catch

  10. #20
    Senior Member
    Join Date
    Oct 2001
    Posts
    748
    Processes must be able to read, write, and execute objects within their sandboxes.
    Catch,
    I understand where you are going with this, but I need you to clarify something. There is no permission called just "traverse directory"; it is "traverse directory/execute file."

    It's not a permission that I use a lot because in my environment I don't have terminal servers, or machines where clients have active logins and their own directories. But from the MS help it would appear that this setting stops directory traversal and executing files. So are you setting this permission on the applications or on the directories? Am I mistaken that if you put it on a directory you would not be able to run anything in that directory, or move out of that directory? How does this still allow a user to execute code?

    if you set this on the root of each client application-
    and disable directory traversal would be enabled on the root of each application’s paths.
    For which users?

    How do you run the applications? Your explanation does not really explain the actual configuration and how the client_app accounts tie into what the user is doing. For whom do you disable the right to traverse directories and execute files? Do you disable it for all users? That is what I think is missing from your explanation.

    Wouldn't you need to setup the application to always run as the client_app account? If you only deny the directory traversal/execute file permissions for the client_app account, and the user program is started in the context of the user, doesn't it still have that permission?

    I did some searching and I can't find any resources that detail this configuration method. The best I can come up with is this MS research article that talks about using restricted SIDs to prevent malware from being a problem. It doesn't detail how to actually do it though. Most notable is this reference from page 32: "Restricted contexts can implement simple security policies, such as disabling administrative rights and privileges for most programs, as well as more restrictive policies such as limiting a program to accessing only a single file."

    http://www.cs.washington.edu/homes/m...ers/tissec.pdf

    This definitely does what you are talking about, but it would appear in a much different fashion. Or is the secondary user account how you configure the restricted SID? Meaning is the restricted SID the client_app account?
