I quite enjoy being a DBA overall. Every now and then I get hit with an odd problem that makes me think, and hopefully I have time to think it through and resolve it rather than it being an "everything is broken" moment.
My worst day was actually a 2-day process. Wednesday, in the middle of my vacation, I get a phone call that nobody can connect to the SQL Servers. I have no laptop, no internet access, nothing I can use to be helpful. My suggestion - reboot server X. We have 3 servers, so reboot one of them and see if it comes back up. Normally I would have suggested checking the logs first, but this was taking the whole company offline, so get it back up ASAP and do root cause analysis after it is working again. They rebooted it, waited 5 minutes for it to come back up, and things were working again. So reboot the other 2 boxes and call it a day; I'd look at it more when I got back to work. I did mention that they should check the logs just to see why everything died, but I was told I could look at it when I got back.
Thursday it was rainy, so I cut the trip short and went home for the remainder of my vacation, figuring I could relax and still have fun at home.
Friday, I get a call that everything is down again and rebooting didn't help this time. I go into work, poke through the logs... all of the SQL disks were corrupt. I spent part of the weekend on the phone with support and ended up needing to format the disks, so the remainder of the weekend was spent getting everything restored from backup. Come Monday morning, everything was back up and running 100%, and thankfully we only had about 15 minutes of data loss, which was within our acceptable loss window.
Not as bad as some of the others who posted, but not exactly a good way to have a vacation either. No critical data was lost, so I was happy with that outcome, just not happy to have my vacation cut short. It was a good learning experience and a way (not a good way) to test the DR plan.
Another fun one I had was a database that had broken the 1 TB mark at a company that doesn't need THAT much data, and I got tasked with working with the data owners to reduce the size. I had quite a few meetings with the data owners (roughly a month of meetings and analysis), and it was determined that the data in the tables was unneeded as long as the table structure remained, since a 3rd party tool needed the tables to exist. So, the weekend comes around and, a bunch of truncate scripts later (a sketch of the approach is at the end of this story), the database is empty. I shrink it and get IT to reclaim some of the disk space (they were the ones pressuring me to shrink things to get some disk back). Come Monday, I get into work to 5 emails about things not working properly. Long story short: it turns out about 5 tables (small ones) were required by some in-house software, and when those got blown away, we could no longer get anything to pass our internal tests. So I restored the database onto the dev system, moved the tables over, and we were back up and running before lunch. Flash forward about 2 months (so now the backups are on tape and off-site, thus not easily restored): somebody loads up the 3rd party tool and cannot log in. I go to the database, dig through some tables and views, and realize that in my cleanup I had blown away the username table. Thankfully they told me they don't need the 3rd party tool.
The bad part of that story - they told me they would change their code to stop using the database, as nobody looks at that data and nothing should be reading it. That was 3 years ago, and I still see the database autogrow every now and then... Gotta love data graveyards, eh?
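For the curious, the truncate scripts were nothing fancy. Here is a minimal sketch of the approach, assuming plain tables with no foreign keys pointing at them (TRUNCATE TABLE fails on FK-referenced tables); the database name is made up for illustration:

-- Build TRUNCATE statements for every user table, keeping the table
-- structure intact for the 3rd party tool.
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql += N'TRUNCATE TABLE ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N';' + NCHAR(13)
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

PRINT @sql;  -- review the list first! PRINT cuts off long output, so check it in full
-- EXEC sys.sp_executesql @sql;

-- Then reclaim the disk (shrinking fragments indexes, so only do it
-- when you genuinely need the space back):
-- DBCC SHRINKDATABASE (MyBigDatabase);

And obviously, take a backup first and keep it around longer than the tape rotation window... ask me how I know.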
The above is all just my opinion on what you should do.
As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.