Thread: Process management problem

  1. #1
    Terr (Old-Fogey: Addicts founder)
    Join Date: Aug 2001
    Location: Seattle, WA
    Posts: 2,007

    Process management problem

    At work, I'm trying to set up an otherwise-very-neat networked backup system called Bacula. (Backup Dracula, as it were. Tagline: "It comes by night and sucks the vital essence from your computers")

    Anyway, I've gotten it configured the way I want, except that I'm having problems with its ability to run a script on the remote machine being backed up.

    Basically, it works fine as long as I don't create any child processes in the script. If I do, Bacula waits until they finish. (And in this case, that causes a deadlock.)

    This is Linux 2.6. I've tried launching the worker scripts with:
    Code:
    bash -c "worker.sh" &
    
    Code:
    nohup bash -c "worker.sh" &
    
    Code:
    setsid bash -c "worker.sh" &
    
    Code:
    bash -c "worker.sh" </dev/null >&/dev/null &
    

    None of these seems to work. Basically, I don't need this script to do anything except launch a bunch of entirely-independent scripts which I don't need input or output from. But... somehow... Bacula knows they're still running and refuses to stop even when the original script exits. (And it knows the script exits, because it receives a SIGCHLD signal...)

    When I look at the processes during this blockage, the workers have no discernible relation to the original process, yet it still blocks until they are killed off. Any *nix gurus have suggestions?
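
    For what it's worth, one way to hunt for a hidden link is to check what the stuck workers still have open; Linux's procfs lists every process's descriptors (the pgrep pattern below is just illustrative):
    Code:
    # list the open file descriptors of each worker; an entry such as
    # "1 -> pipe:[12345]" would reveal an inherited pipe still held open
    for pid in $(pgrep -f worker.sh); do
        echo "== PID $pid =="
        ls -l /proc/$pid/fd
    done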

    (Even if the main script is a Python script using spawnvp(), it still waits until the new processes are killed or end normally, even if the process is just "sleep 5" or something.)
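
    For kicks, this hang reproduces without Bacula too, assuming the launcher reads the script's output through a pipe: bash command substitution waits for EOF on the pipe, not for the script to exit, so any background child that inherits stdout keeps it blocked:
    Code:
    # takes ~5 seconds even though the subshell exits at once: the
    # backgrounded sleep inherits the write end of the pipe, so $( )
    # keeps reading until sleep exits and the pipe finally closes
    echo "$(sleep 5 & echo started)"
    
    # returns immediately: redirecting the child's stdout means nothing
    # holds the pipe open once the subshell exits (in this toy case)
    echo "$(sleep 5 >/dev/null & echo started)"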
    [HvC]Terr: L33T Technical Proficiency

  2. #2
    Terr (Old-Fogey: Addicts founder)
    Join Date: Aug 2001
    Location: Seattle, WA
    Posts: 2,007
    Solution: It seems that despite some "> /dev/null" bits, I somehow wasn't closing the file descriptors being used for STDIN/STDOUT/STDERR. Since the original process was being launched through some funkadelic pipe interface, it was waiting for the output descriptors to be closed, not for the script to terminate.

    The solution was this little in-script bit, which closes STDOUT and STDERR from inside a running script:
    Code:
    # close this shell's stdout and stderr outright, so whatever is
    # reading the pipe on the other end finally sees EOF
    exec >&- 2>&-
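
    One caveat, for anyone copying this: a write to a closed descriptor fails, and some programs treat that as fatal. If anything in the script still prints after this point, pointing the descriptors at /dev/null instead releases the pipe just the same while keeping later writes harmless:
    Code:
    # same effect on the pipe reader as >&-, but subsequent
    # writes to stdout/stderr quietly succeed
    exec >/dev/null 2>&1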
    [HvC]Terr: L33T Technical Proficiency
