Investing for AI

  • Comments posted to this topic are about the item Investing for AI, which is not currently available on the site.

  • I'm going to buckle down and watch the Satya interview you linked later, but right now I'm very firmly in the "distrustful of so-called AI" camp.

    I think part of the problem is that calling it "AI" (Artificial Intelligence) implies that it "thinks" and "reasons," and that gives people a certain expectation.  Which, right now, isn't what's being delivered.  Granted, calling it "Idiot Savant" wouldn't convince people to use it, no matter how much you could demonstrate that, within the confines of its training and available back-end data, it could be useful (think of someone with a LOT of information at their fingertips who is absolutely UNABLE to make inferences from that information; they can only work with the data itself and any references that already exist within it).

    I've puttered a bit with AIs, I've used the summaries when I search for something, but so far I've not been overly impressed.  Now, I will grant, I've not used some of the new tricks that have been coming out, such as Copilot in SSMS, but that's more a case of I can't access it from my work network (much less install it with SSMS), so maybe I'm missing out here.

    I know some of my co-workers have used our internal AI to "clean up" and "professionalize" some of their writing, but that's another use where, well, I'm stubborn.  Plus, generally, AI-rewritten documents all end up sounding the same, and how much does it impress the boss if everyone's using AI to make their work sound "better," rather than tackling it themselves?  I suppose the argument could be made that if it's getting things done faster AND better, then they'd be happy, but at some point they'll also think "why do I need this guy to write this stuff and have an AI clean it up?  I can do this myself / move the task to one of the other staff and reduce headcount..."

  • I find that the more talented people are with a skill, the less they trust an AI or want to use it. It's interesting, because it "reasons" about as well as lots of humans. Humans predict things based on past experience, which is somewhat how a GenAI LLM works.

    Writing does get genericized, but who really stands out? Very few do on the "great side". Lots do on the "how did you graduate high school" side.

    It is somewhat amazing at times, and horrific at times. Like lots of coworkers.


  • I read an article this morning about a consultancy that did a test with 5 teams of senior consultants.

    • 4 were told that reports had been written by a team of graduate interns
    • 1 was told that it was GenAI

    The four teams doing blind reviews scored the reports highly, at 95% accuracy.  The team that knew it was GenAI rejected the reports out of hand.

    When the teams were brought together, informed that GenAI had generated the reports, and asked to redo the review, they all gave them 95% accuracy.

    I think that there is a lot of fear and a wish that AI would go away.  I can understand that.  I'm near the end of my career, and I very much want the end to be my choice, not a tap on the shoulder on a Monday morning.

    The questions I am asking now are

    • How can I gain a benefit from AI?
    • What skills do I need?
    • What does data governance look like in the AI world?
    • For Agentic AI, what practices do I have to adopt or more likely champion to make it successful?  This feels like a turkey voting for Christmas.
  • David, I'm not one of those "reject AI output out-of-hand" people (and I know you weren't calling me out, either), but I do take the comment Steve made in the editorial to heart, this one:

    Like many of you, I think it can be a really poor partner and it produces output I can't trust. I think one of the major challenges is learning to treat an AI like a colleague whose work quality is erratic. It's not that I can't work with them and use their work, but I need to test, validate, and verify the code they give me does what I need, at some acceptable quality level.

    I've read comments elsewhere from a lawyer who's seen many articles about other lawyers using AI to write briefs where the AI basically hallucinated entire cases.  Said lawyers get slapped down HARD by the judges in these instances.

    Then there's vibe-coding: using AI to generate the code and just plug-and-playing (praying) said code.  When it doesn't perform well, they go back and vibe-code a fix, rinse and repeat.  Who's going to be able to support such code?

    For me, at least for the foreseeable future, I'm going to treat AI as that shiny new co-worker who has at least a very basic grasp of a topic but I still need to double- and triple-check their work before approving it.

  • Jason,

    Ask yourself this: is vibe coding any different from blindly approving a PR from a colleague?  Or from having to support code from someone you know takes shortcuts?

    Those lawyers are the same ones who take a brief from a para (paralegal) without reading it.  Paras make mistakes all the time, or misinterpret case law.  They might not hallucinate in the same way, but some do.

  • Steve,

    I'm torn on the vibe coding point you raise.  On the one hand, at least if the code is coming from a person, it's possible to go to them later, ask "What the heck?", and get an answer.  On the other hand, though, well, yes, it's not much different from the "vibe coding."

    Actually, that might be the big difference: being able to go back to a person to get more information, vs. going to an AI which will pretty much come back with "I have no knowledge of this place."

    As for the lawyers, well...

  • You can ask the AI what it did. I asked it how to restore a single table and got "RESTORE FROM DISK WITH SINGLE_TABLE". I told it that option doesn't exist (I could also have just pasted in the error message), and it then corrected itself. You're right, though: there's no real conversation, just "that didn't work, try this."
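    For reference, SQL Server has no single-table RESTORE option (which is why SINGLE_TABLE was a hallucination); the usual workaround is to restore the full backup under a new database name, then copy the one table across. A minimal sketch, where the database, backup path, logical file names, and table are all assumed for illustration:

    ```sql
    -- Restore the full backup under a new name; the hallucinated
    -- SINGLE_TABLE option does not exist in T-SQL.
    RESTORE DATABASE SalesDB_Restore
    FROM DISK = N'D:\Backups\SalesDB.bak'
    WITH MOVE 'SalesDB'     TO N'D:\Data\SalesDB_Restore.mdf',
         MOVE 'SalesDB_log' TO N'D:\Data\SalesDB_Restore_log.ldf',
         RECOVERY;

    -- Copy the one table you actually needed back into the live database.
    SELECT *
    INTO SalesDB.dbo.Orders_Recovered
    FROM SalesDB_Restore.dbo.Orders;

    -- Drop the scratch copy once the data is verified.
    DROP DATABASE SalesDB_Restore;
    ```

    (Third-party tools can read a table straight out of a .bak file, but plain T-SQL can't.)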

    One of the best tricks I've seen: when you get a good answer from an AI LLM, ask it what prompt would have gotten you there quicker.

  • I think I see the disconnect between what you're saying and what I'm saying.  I'm coming at this from a "production DBA" mindset, where I don't normally develop code for an application; you're coming from a "development DBA" mindset.  So in my mind, I've been thinking "I have this code running in a DB / application someone else developed somehow, my monitoring application was throwing warnings about it, and when I looked at it, it was like looking into an abyss of endless spaghetti code."  You're thinking "the AI gave me this code, which my experience tells me isn't quite right and won't work, so let's make the AI try again until it produces something that will work, and I can clean it up the rest of the way if needed."

    Neither view is wrong, obviously.

    As for your other trick of asking the AI what prompt would've gotten to the correct answer quicker, I recall that being brought up at one of the sessions at Summit, thank you for reminding me!

  • I saw this post earlier last week but didn't have the time to read it. It's Sunday morning, so I've got more time. Very interesting, Steve! I'm going to go back and watch the interview with Satya Nadella and Scott Guthrie when I have an hour and a half (not today).

    AI, as it now is with LLMs, is in my opinion the most significant thing to occur since the Internet. However, where I work, we will not have the opportunity to use it. I've mentioned here before that a coworker (I don't know who she is) unwisely used ChatGPT to process patient data, which is wrong. But rather than instruct the person not to do such a thing and then offer training for everyone on the proper and wise use of AI, the CISO took the nuclear option and banned all AI for everyone, forever. In the short term this will have the greatest impact upon the employees, as we won't be able to gain new skills that would best be gained in the workplace. (And no, WFH is not allowed and therefore not an option.) In the long term it will hurt the employer, because we'll still be working at a slower speed than other agencies. And it might encourage some to use their phones to capture data and input it into various chatbots for processing.

    Kindest Regards, Rod Connect with me on LinkedIn.
