How Helpful is a GenAI Copilot

  • Comments posted to this topic are about the item How Helpful is a GenAI Copilot

  • One very simple but useful task I use AI for is to summarise documents I author.

Adobe Reader allows you to generate a summary of the open document. I do this and check if the generated summary contains what I consider the main points of what I have written. If the summary misses what I thought important, then I know that my writing is not as clear as I had hoped, and I need to rework the document.

Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

We have a Co-Pilot subscription and I have mixed results with it.  I have been using it with two extensions in VS Code:

    • GitHub Co-Pilot
    • GitHub Co-Pilot Chat

The accuracy of GitHub Co-Pilot Chat is no different from ChatGPT or Claude.  It depends on how good you are at asking it questions.  This should be no surprise: you can choose which LLM backs the extension, and both ChatGPT and Claude are among the options available.

The problem I find is that there are so many chords (keystroke combinations) in VS Code that I keep forgetting which one starts the chat window, so I go back to using ChatGPT in a browser.  Once you have more than a handful of extensions, the chords in VS Code are like trying to play La Villa Strangiato on a sitar.

Some of us have switched off the GitHub Co-Pilot extension.  When it gets it right it is useful but, in general, it is over-enthusiastic and leaps in with suggestions that are not correct or not what I want/need.

The way the code suggestions are presented makes them look like they are already inserted.  They appear to be rendered with about 5% transparency, so they are almost indistinguishable from your actual code.  I find it distracting.  I think this is more of a user interface complaint about the extension.

If your CI/CD pipeline uses GitHub workflows, there is the option to choose Co-Pilot as a reviewer of pull requests.  Again, mixed results.  As a reviewer, Co-Pilot will try to summarise what your pull request is doing.  The results are quite generic but can save time if your approach to providing PR descriptions is laborious.  Our goal is to get everyone in our teams capable of reviewing code, so a human-readable description of a PR helps ease them in gently.

In terms of providing useful reviews I find it rather weak.  At present I am doing a lot of Terraform work and am really looking forward to not touching Terraform once I have finished.

For Terraform, Co-Pilot as a reviewer will give me a "Language Not Supported" message, which is surprising.  It will give a partial review, but little more than a description of what it can see and which files it has ignored.

For Python, it is slightly better but not as good as the Sourcery.ai plug-in for PyCharm and/or VS Code.  SonarQube and SonarLint also beat it.  To be honest, Sourcery.ai is such a great teacher that after a while it does itself out of a job, if you pay attention to what it tells you.

I haven't submitted a SQL PR for ages, so I can't comment there.

If the threshold for ROI on Co-Pilot is low, then I'd say get what benefits you can from it but monitor its usage.  Like all tools brought into the organisation, some are initially useful but only a few survive to become trusted friends.

  • EdVassie wrote:

    One very simple but useful task I use AI for is to summarise documents I author.

Adobe Reader allows you to generate a summary of the open document. I do this and check if the generated summary contains what I consider the main points of what I have written. If the summary misses what I thought important, then I know that my writing is not as clear as I had hoped, and I need to rework the document.

That's an interesting use case. I've tried it the other way around, letting it summarize things for me, and it is very hit and miss, so I can't trust it. I hadn't thought about using it to show where the author is being unclear.

    Thanks, I'll try this.

  • From the article:

    "The headline says that the staff rated the GenAI less useful than expected. Those last two words are interesting because your expectations shape a lot of how you view anything in the world.

Considering how they are advertising it as a replacement for programmers, why wouldn't it turn out to be "less useful than expected"?  It can't even do a proper conversion from DATETIME2 to DATETIME, or even from DATETIME2(7) to DATETIME2(3).  Of course, the Microsoft documentation in this area is also incorrect.
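
    To make that concrete, here is a minimal T-SQL sketch of the behaviour being described (the date is illustrative; the commented results assume DATETIME's documented rounding to the nearest 1/300 of a second and DATETIME2's round-half-up precision reduction, so verify on your own instance):

    -- Reducing DATETIME2 precision ROUNDS the fractional seconds; it does not truncate.
    -- A value at the very end of one day can land in the next day.
    DECLARE @d7 datetime2(7) = '2023-12-31 23:59:59.9999999';

    SELECT CAST(@d7 AS datetime)     AS to_datetime,    -- 2024-01-01 00:00:00.000, not 23:59:59.997
           CAST(@d7 AS datetime2(3)) AS to_datetime2_3; -- 2024-01-01 00:00:00.000, not 23:59:59.999

    If truncation is what you actually need, CAST alone won't give it to you; you have to strip the extra precision yourself before converting.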

    Blaming the performance (lack of, actually) of GenAI on human expectations isn't the right way to go here.


    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • ...and I think I still spend more time learning and typing with GenAI tools than I'd like.

If that means what I think it does, then I agree, and I think it is a point that I haven't seen anyone else bring up before. Yes, we've all brought up the fact that "I still have to review the code that the AI wrote." But while writing any of your code, you are constantly being shown the semi-transparent suggestion code of the AI, and it can slow you down, because you stop to look at it and see that it's not what you need. Do that dozens of times, and the one time the AI is correct, you have only just broken even on the time wasted versus the time saved.

  • Jeff Moden wrote:

Considering how they are advertising it as a replacement for programmers, why wouldn't it turn out to be "less useful than expected"?  It can't even do a proper conversion from DATETIME2 to DATETIME, or even from DATETIME2(7) to DATETIME2(3).  Of course, the Microsoft documentation in this area is also incorrect.

    Blaming the performance (lack of, actually) of GenAI on human expectations isn't the right way to go here.

Trying to force it to write a specific thing like the conversion, one that many people don't get right, isn't a good test. The benefits of the GenAI models are in saving time for tedious or simpler tasks, not in writing a specific thing you already know how to write. That's misusing the tool.

There are benefits, likely not at the level of the hype, but I find AI saving me time in small places, which is very useful. Less so with SQL, more so with other tasks.

  • kevin77 wrote:

    ...and I think I still spend more time learning and typing with GenAI tools than I'd like.

If that means what I think it does, then I agree, and I think it is a point that I haven't seen anyone else bring up before. Yes, we've all brought up the fact that "I still have to review the code that the AI wrote." But while writing any of your code, you are constantly being shown the semi-transparent suggestion code of the AI, and it can slow you down, because you stop to look at it and see that it's not what you need. Do that dozens of times, and the one time the AI is correct, you have only just broken even on the time wasted versus the time saved.

It's a weird calculus. The more I see the suggestions, the more they become part of my habit, and I know I can trust some things, hit Tab, and save time.

    Early on, however, I am losing time. I'm learning how to use the tool, changing a habit, so I should be less productive.
