Chris Harshman wrote:
MVDBA (Mike Vessey) wrote:
Michael L John wrote:
In today's meeting of the entire IT staff, we were discussing how interruptions prevent us from getting projects completed.
My boss, using me as an example, stated that I was too nice, that I typically drop everything to help people when they come running.
I told him that if he ever says I am too nice in a public meeting again that we are going to have words. He's ruining my reputation!
I'm also told (in private) that I'm too helpful and that I juggle too many projects... I'm tempted to take a week off and make him be the DBA
He-he, my boss told me a couple of months ago that I need to let go of some of the things I was worried about. I told him I'd gladly let go of them if there were someone competent to hand them off to. 😀
Well, this week just proved the need for a DBA. AGAIN.
Our devops team has essentially taken over the duties of provisioning resources in Azure. My group (infrastructure) has literally been reduced to nothing when it comes to the nice shiny cloud thingies. Why? We take the time to look deeper into things, as opposed to clicking through and ta-da! We have a new server.
A deployment to production failed miserably on Monday.
"Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached."
The devs pushed code that had never been reviewed. They were not closing their connections, so the app's connection pool was being exhausted. This SAME THING has occurred at least 10 times in the 5 years I have been here, so I simply re-sent the same email I sent in November, with the explanation and the fix. But hey, we don't need a DBA!
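For anyone who hasn't hit this failure mode before: the pool only holds so many connections (ADO.NET defaults to a max pool size of 100), and every connection that gets opened and never closed is one the pool never gets back. Under load the pool drains and new callers time out waiting, which is exactly the error above. I can't post the actual app code, so here's a rough sketch of the bug and the fix in Java/JDBC terms; OrderDao, dbo.Orders, and the DataSource wiring are all made-up names for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OrderDao {

    private final DataSource pool; // any pooled DataSource (HikariCP, the app server's pool, ...)

    public OrderDao(DataSource pool) {
        this.pool = pool;
    }

    // The bug: the connection is checked out of the pool and never returned.
    // Each call permanently removes one connection; under load the pool
    // empties and new requests time out waiting for a free connection.
    public int leakyCount() throws SQLException {
        Connection conn = pool.getConnection();
        PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM dbo.Orders");
        ResultSet rs = ps.executeQuery();
        rs.next();
        return rs.getInt(1); // conn.close() is never called
    }

    // The fix: try-with-resources closes everything, even when the query
    // throws, so the connection always goes back to the pool.
    public int fixedCount() throws SQLException {
        try (Connection conn = pool.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM dbo.Orders");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}

Same shape in .NET: a SqlConnection that isn't wrapped in a using block (or explicitly closed in a finally) never goes back to the pool, no matter how fast the query itself runs.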
They still wanted me to dig into the Azure database to see what was wrong. I had spent a ton of time architecting this app with the dev team who originally developed it. It was deployed last June. It SCREAMED. Every proc and every piece of ORM-generated code ran in milliseconds against the database, which is 500 GB. Aside from some minor index tweaks, I've not had to do much. Now, after 6 months, other dev teams have started deploying code against this database. They basically took the Mona Lisa and threw paint on it. I have 4 pages listing all the less-than-optimal things that have been done in the recent code.
When it comes to the DevOps configuration of the infrastructure, the database is being geo-replicated and is part of a failover group. DevOps had it all set up: the read-only connection strings to the secondary, etc., etc.
None of it works. No queries have ever been executed against the secondary. We are not at a high enough pricing tier to leverage these new shiny features. We are out of compliance with licenses. The backups were not configured.
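If anyone else inherits a setup like this, the five-minute sanity check is to connect through the "read-only" string and ask the server where you actually landed; DATABASEPROPERTYEX reports READ_ONLY only when the session is on a readable secondary. A rough sketch using the Microsoft JDBC driver (the endpoint, database name, and login below are all placeholders, not our real ones):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ReadOnlyRoutingCheck {
    public static void main(String[] args) throws SQLException {
        // Placeholder endpoint and credentials -- substitute the failover
        // group's real read-only listener. applicationIntent=ReadOnly is the
        // flag that is supposed to route the session to a readable secondary.
        String url = "jdbc:sqlserver://myfog.secondary.database.windows.net:1433;"
                   + "databaseName=MyAppDb;user=checker;password=***;"
                   + "applicationIntent=ReadOnly;encrypt=true";

        try (Connection conn = DriverManager.getConnection(url);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability')")) {
            rs.next();
            // READ_ONLY  -> the session really landed on a secondary
            // READ_WRITE -> the "read-only" string is quietly hitting the primary
            System.out.println("Updateability: " + rs.getString(1));
        }
    }
}

If that prints READ_WRITE, the "read-only" traffic has been going to the primary the whole time and the secondary is just an expense on the bill.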
The list goes on and on.
Yes, I am cranky. Because I am tired of being the guy with the shovel and broom who walks behind the horses in the parade.