April 22nd, 2011, 10:24 PM
Recovering deleted files on Ext3 filesystem
Today I mistakenly ran rm -r on the wrong directory on an Amazon EC2 instance with an EBS volume - Ubuntu 10.04, ext3 filesystem. Shortly afterward the panic began.
Naturally we don't have backups of these files.
Is anyone aware of any tools or pointers that can help to restore these files?
I've come across http://extundelete.sourceforge.net/ but I'm uncertain whether it's an appropriate solution for an Amazon instance.
A mind full of questions has no room for answers
April 23rd, 2011, 01:24 AM
I've used this in the past: http://www.xs4all.nl/~carlo17/howto/undelete_ext3.html
I've never used extundelete, but it sounds like it's closely related to the tool in the link above, if not a more actively maintained fork of it.
The most important thing, though, is to make sure you stop writing anything to that filesystem immediately - preferably unmount it entirely rather than just remounting it read-only - before you do anything else.
An EBS volume can be treated the same as any other block device, so handle it as if it were plugged directly into your system and you should be fine. Are you sure you don't have any EBS snapshots? You might be able to attach one of those snapshots to your instance too.
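If a snapshot does exist, the usual route is to create a fresh volume from it and attach that alongside the instance. A rough sketch with the EC2 API tools of the era - every ID, the zone, and the device names here are made up for illustration:

```shell
# Make a new volume from the snapshot, in the instance's availability zone
ec2-create-volume --snapshot snap-12345678 -z us-east-1a

# Attach the new volume (ID reported by the previous command) to the instance
ec2-attach-volume vol-87654321 -i i-abcdef01 -d /dev/sdg

# Mount it read-only and copy the lost files back out
mount -o ro /dev/sdg /mnt/snapshot
```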
Good luck. If ext3grep or extundelete don't get you your data then take a look at some file carvers to get as much information as you can. (I like foremost and scalpel, but there may be others out there that are better now)
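To make the stop-writing advice concrete, the whole flow might look something like this - the device and mount point names are assumptions, and the extundelete invocation follows its documented --restore-all mode:

```shell
# 1. Stop all writes immediately: unmount, or at minimum remount read-only
umount /mnt/data || mount -o remount,ro /mnt/data

# 2. Image the volume so the recovery tools never touch the original;
#    conv=noerror,sync pushes past any read errors instead of aborting
dd if=/dev/xvdf of=/recovery/ebs.img bs=1M conv=noerror,sync

# 3. Point extundelete at the image and restore whatever it can find
extundelete /recovery/ebs.img --restore-all
# recovered files are written under ./RECOVERED_FILES/
```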
April 23rd, 2011, 05:29 PM
No one makes snapshots? Time for a board meeting.
May 3rd, 2011, 05:30 AM
Dude... not having backups is ALWAYS a bad idea, first off, and I agree you need a board meeting to set this right. I set up a system myself: I took the VERY first computer I EVER bought, installed a 160 GB drive in it, and kept the drive it came with (43 GB... yes, 43). I installed Slackware Linux 12.0, made the old drive one big partition mounted at "/storage" and writable by users, and put the root partition on the new drive, so if the old one craps out I won't lose the OS and most of the data. The second drive is backed up to a little 80 GB external USB drive.
I set up FTP on it so that only users with valid accounts can log in, put a hardware firewall in front of it, and now every machine on the network my wife and I have here can log into that box and upload the files we need to back up - everything from music and movies to software, games and, of course, system files. My FreeBSD machine runs an FTP service too, which I use as a secondary since it has more disk space than I normally use. Then I took an OLD machine, whacked vsftpd on it, and now I have three machines running FTP services. The FreeBSD machine has two drives in it, and the Slackware FTP server has SSH and all that, so I can log in with a client, upload the system files I need, and have them stored in multiple locations. I have about eight drives here, and the REALLY important stuff is saved on all of them.
The point of all that is that you could have prevented this by doing just TWO things:
#1: Edit one of your shell's configuration files so that rm ALWAYS runs as rm -i, so you can't delete anything by accident. rm -i asks whether you really want to delete each file before doing it, which can save you from exactly what just happened. Even if you run rm -r /usr/home/username/files/* it will ask if you're sure. And it costs nothing but five minutes...
#2: Set up a cheap machine with Slackware or FreeBSD, or whatever you're comfortable with, run a small FTP service on it, configure it to only allow certain IPs to log in - or only certain users from certain IPs - and upload the files you need to it. If you only uploaded your /etc directory and other configuration files, you'd barely need a gig.
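For point #1 the alias is a one-liner in the shell's startup file; the lines for point #2 assume vsftpd built with TCP wrappers support, which is only one of several ways to restrict FTP by source address:

```shell
# Point 1: make rm prompt before each deletion (add to ~/.bashrc or similar)
alias rm='rm -i'

# Point 2: with vsftpd compiled against TCP wrappers, source addresses can
# be restricted in /etc/hosts.allow, e.g.:
#   vsftpd : 192.168.1. : allow
#   vsftpd : ALL : deny
```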
May 3rd, 2011, 01:28 PM
I don't know anything about this cloud-and-instance stuff, but if you can identify and access the drive, you might try Roadkil's "Unstoppable Copier". At the very least, clone the drive.
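Cloning first is cheap insurance. A demonstration on a scratch file - on the real system the input would be the block device itself (e.g. /dev/xvdf, a name assumed here, not taken from the thread):

```shell
# Create a 64 KB scratch "disk" to stand in for the real device
dd if=/dev/urandom of=/tmp/source.img bs=1K count=64 2>/dev/null

# Clone it; conv=noerror,sync keeps going past read errors, padding with zeros
dd if=/tmp/source.img of=/tmp/clone.img bs=1K conv=noerror,sync 2>/dev/null

# Verify the clone is bit-for-bit identical before touching it with any tool
cmp -s /tmp/source.img /tmp/clone.img && echo "clone verified"
```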
Also, it should be policy that before you do destructive maintenance on a production system, you make and test a backup first.
Power cuts do happen, and the consequences can be very unpredictable.
43 GB? ... yeah, I remember that sort of thing. I actually have a couple of 123.5 GB drives myself (that's what's printed on the label). I guess they all go for round numbers these days?
Last edited by nihil; May 3rd, 2011 at 01:32 PM.
If you cannot do someone any good: don't do them any harm....
As long as you did this to one of these, the least of my little ones............you did it unto Me.
What profiteth a man if he gains the entire World at the expense of his immortal soul?
May 3rd, 2011, 04:28 PM
Yea man, it's 43 GB, heh. And yeah, I think they go for nice round numbers now. I don't think I've seen any of those weird sizes since.
May 5th, 2011, 11:16 PM
I've regularly advised this client to create backups. They've learned now, though previously they had difficulty following that advice. Appreciate all the suggestions.
A mind full of questions has no room for answers
May 7th, 2011, 03:58 PM
Well, most of us in this industry - or hobby - have done it at least once. I remember once I was getting ready to install FreeBSD 4.0 on my box, which already dual-booted Windows 98 SE and Linux and was about to tri-boot with FreeBSD. For some reason (lack of sleep, I'm assuming) I thought for a split second that a good idea would be to delete my backups and re-make them AFTER the installation... The installer failed while writing the MBR, and I lost all my data. So yeah, I take a lot of backups now.
May 17th, 2011, 11:12 AM
apt-get install safe-rm, or:
alias rm='rm -i'
power doesn't corrupt, apparently it deletes.
you're screwed dude.
Every now and then, one of you won't annoy me.
May 17th, 2011, 07:40 PM
Well, not totally... I mean, this is magnetic media after all, right? So he DOES have the option to recover, but a few things come into play:
How long ago did it happen? I've heard "Unix wizard" stories where a decently skilled admin recovered deleted files by hand, without resorting to the next option I'm about to bring up:
Paying for data recovery. I don't personally have a clue about the prices, because I know I can't afford it. To put it mildly: you'd be charged PER MB. If you lost a 200 GB drive and a company recovered the whole thing, it would actually be cheaper to have you killed for the screw-up.
Another thing I know of, is back when I FIRST started in Computing, I was told "If you ever accidentally delete a file; TURN THE MACHINE OFF RIGHT AWAY! Do NOT wait for a shut down, just hit the Power Button, or pull the plug. If you're fast enough, the OS doesn't have time to write the data to disk, and if you're fast AND lucky, it'll have still been in Cache when you hit the power button, and it won't save it to disk".
Mind you, that was about ten years ago - I got into the game pretty late, lol. I've only owned a computer for about ten years, but even then they were saying you could beat the hard drive in speed.
There are just so many variables in this that it would take days to count them. The operating system itself matters, since most have some way to bring a file back - or "un-delete" it, for you DOS losers. And then the TYPE of filesystem matters: are we talking a crappy filesystem like FAT16, FAT32, NTFS or ext2? Or a decent filesystem with a journal, like ext3, ReiserFS or ext4? Or FFS/UFS, ZFS, XFS? XFS was done by SGI, does journaling well as far as I know, and Linux can use it too. BSD uses its own take on FFS (the Fast File System, written at Berkeley), plus soft updates.
I think when I was just getting started they were still working on making that better. For a time, the filesystem and its capabilities were one of the "selling points" in the Linux-vs-BSD debate: BSD's filesystem with soft updates could deliver the speed without fragmenting the disk to pieces, and without the performance loss normally associated with that.
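For what it's worth, whether a given ext filesystem actually has a journal can be checked directly with the e2fsprogs tools; a sketch on a throwaway image file (sizes and paths here are arbitrary, and a plain file needs no root access):

```shell
# Build a small ext3 image file to poke at
dd if=/dev/zero of=/tmp/fs.img bs=1M count=16 2>/dev/null
mke2fs -F -q -j /tmp/fs.img

# An ext3 volume advertises its journal in the feature list
tune2fs -l /tmp/fs.img | grep has_journal
```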
Again, after this long your only options are to pretend you had a backup all along, or to find a company that does data recovery and pay out the ass for it.
I know of one company a magazine actually wanted to put to the test, to see just how good their service was. They took a laptop with data on it and threw it into a bonfire, then sent it in. The company was actually able to get the data back for them. The price, of course, was well past that of most cars.