I used to work in the restaurant business. I got started after my first year of college, landing a job waiting tables in a new hotel. It was eye-opening to see just how stressful and difficult that job can be. However, I wanted to earn money and jumped at opportunities. When room service needed waiters to pick up shifts and the elevators didn't work, I carried trays up the staircases. When a bartender didn't show up, I volunteered to take the lunch shift. I had no idea where things were in the bar, I was 19 and didn't know how to make drinks, and I'd make less money, but it was an opportunity I knew would pay off at college.
It did, and at my next three jobs, I wound up getting hired, starting work, and having either no one to train me that day or a shift so busy that I was mostly on my own to survive. I had to muddle through and learn on the fly. The ability to do that, without panicking, being overwhelmed, or giving up, has served me in quite a few positions since then.
When I started working for various companies as a developer or DBA, I found myself in similar situations. Problems would arise, often in the first day or week, and I'd have to solve them. In DBA positions I was usually the only one there, so I couldn't depend on anyone else. That was a good thing, as I often found that sysadmins or developers were not managing or configuring databases efficiently. As I gained experience, I could make more of a difference earlier on at each organization.
Kevin Feasel wrote a bit about his experiences with Lucene (near the bottom), muddling through the need to write queries. I've seen similar stories from other friends working with SSIS, SSRS, Redis, Azure, and more. They don't know a lot, but they dig in and learn, making mistakes but getting tasks done for their employers.
The ability to work through adversity, have some confidence, learn quickly, and be effective is a valuable set of skills. Data professionals who can do so often find more opportunities and challenges, grow their skills, and earn more compensation. Those who find reasons to avoid learning, who lack confidence in their ability to find a way to solve a problem, or who are unwilling to tackle challenges often stagnate a bit. I'd like to think there are more of the former than the latter in this business, but I constantly seem to find people who just look to repeat the same work they've done over and over for a long time.
It can be mentally difficult to start a project using technology with which you have little familiarity. It can be disconcerting to have someone ask you a question you can't answer because you've barely begun to learn. But that ability to muddle through, to keep learning, to accept that you don't have answers but can find them, knowing that some of your answers will be wrong and you'll need to backtrack, will serve you well.
The Voice of the DBA podcast features music by Everyday Jones. No relation, but I stumbled onto them and really like the music.
Database migrations inside Visual Studio
Feeling the pain of managing and deploying database changes manually? Redgate ReadyRoll creates SQL migration scripts you can use to version control, build and release, and automate deployments. Try it free
Sign up for more free training from Redgate
Redgate has committed to hosting a free virtual event in every quarter of 2018, and will be kicking this off on February 28 with a livestream themed around data privacy and protection. The agenda has now been released, so you can see who will be presenting, what they will be presenting, and how you can tune in to watch. Find out more about the sessions and register your place
Automate your workload and manage more databases and instances with greater ease and efficiency by combining metadata-driven automation with powerful tools like PowerShell and SQL Server Agent. Automate your new instance-builds and use monitoring to drive ongoing automation, with the help of an inventory database and a management data warehouse. Get your copy from Amazon today.
Yesterday's Question of the Day
(by Steve Jones):
I have two data frames that I want to compare. The first is passing.2017, and contains this data:
  rank        player.name year yards2017
1    1          Tom Brady 2017       4577
2    2      Philip Rivers 2017       4515
3    3   Matthew Stafford 2017       4446
4    4         Drew Brees 2017       4334
5    5 Ben Roethlisberger 2017       4251
The second is passing.2016, with this data:
  rank   player.name year yards2016
1    1    Drew Brees 2016      5208
2    2     Matt Ryan 2016      4944
3    3  Kirk Cousins 2016      4917
4    4 Aaron Rodgers 2016      4428
5    5 Philip Rivers 2016      4386
If I want to combine these so that my final data set has 5 rows and compares the top-ranked passers, which function should I use?
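For illustration, here is a quick R sketch of the scenario, recreating the two frames shown above. One function that produces the 5-row comparison is merge(), joining on the shared rank column (the variable name combined is my own):

# Recreate the two data frames shown above
passing.2017 <- data.frame(
  rank = 1:5,
  player.name = c("Tom Brady", "Philip Rivers", "Matthew Stafford",
                  "Drew Brees", "Ben Roethlisberger"),
  year = 2017,
  yards2017 = c(4577, 4515, 4446, 4334, 4251)
)

passing.2016 <- data.frame(
  rank = 1:5,
  player.name = c("Drew Brees", "Matt Ryan", "Kirk Cousins",
                  "Aaron Rodgers", "Philip Rivers"),
  year = 2016,
  yards2016 = c(5208, 4944, 4917, 4428, 4386)
)

# merge() joins on the shared rank column, producing 5 rows that pair
# each 2017 passer with the same-ranked 2016 passer; the duplicated
# player.name and year columns get .x/.y suffixes automatically
combined <- merge(passing.2017, passing.2016, by = "rank")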
The script executes the procedure 'sp_estimate_data_compression_savings' for each physical object in the database on which page compression has not been implemented. It shows the results in tabular form, ordered by object size in descending order.
Just execute the script against the user database on which you are planning to implement compression. Bear in mind that compression is available only in certain editions of SQL Server - https://docs.microsoft.com/en-nz/sql/sql-server/editions-and-components-of-sql-server-2016.
I also found that on some databases the procedure 'sp_estimate_data_compression_savings' causes deadlocks, and the only way I found to fix that was to set the instance-wide MAXDOP option to 1 to prevent parallel execution.
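As a rough sketch of that approach (this is my own illustration of the technique described, not the featured script itself; the temp table and cursor names are invented):

SET NOCOUNT ON;

-- Capture the procedure's eight-column output for every table not yet page compressed
CREATE TABLE #estimates (
    object_name sysname,
    schema_name sysname,
    index_id int,
    partition_number int,
    size_current_kb bigint,   -- size with current compression setting (KB)
    size_page_kb bigint,      -- estimated size with PAGE compression (KB)
    sample_current_kb bigint,
    sample_page_kb bigint
);

DECLARE @schema sysname, @table sysname;

DECLARE tbl CURSOR LOCAL FAST_FORWARD FOR
    SELECT DISTINCT s.name, t.name
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    JOIN sys.partitions AS p ON p.object_id = t.object_id
    WHERE p.data_compression_desc <> 'PAGE';

OPEN tbl;
FETCH NEXT FROM tbl INTO @schema, @table;

WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO #estimates
    EXEC sp_estimate_data_compression_savings
        @schema_name = @schema,
        @object_name = @table,
        @index_id = NULL,
        @partition_number = NULL,
        @data_compression = 'PAGE';

    FETCH NEXT FROM tbl INTO @schema, @table;
END;

CLOSE tbl;
DEALLOCATE tbl;

-- Largest objects first, so the biggest potential savings stand out
SELECT *
FROM #estimates
ORDER BY size_current_kb DESC;

DROP TABLE #estimates;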
This newsletter was sent to you because you signed up at SQLServerCentral.com.
Feel free to forward this to any colleagues that you think might be interested.
If you have received this email from a colleague, you can register to receive it here.