Influencing a Local AI

  • Comments posted to this topic are about the item Influencing a Local AI

  • I've been attending a lot of Databricks webinars and AI meetup groups.

    There has been a lot of talk about agentic behaviour, that is, behaviour that acts like an AI agent.  An AI agent has three characteristics:

    • Autonomy
    • Goal-orientated
    • Iterative improvement

    Agentic behaviour is on a spectrum across those 3 characteristics.

    "Routing" would be regarded as low on the spectrum because you are telling the AI to take a particular path.  It is one step up from a CASE statement.

    "Tool calling" would be high on the scale because the AI decides which tool to call, when and under what circumstances.
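    The contrast can be sketched in a few lines of Python. This is an illustrative sketch only, with made-up names; the tool-calling part is shown as pseudo-structure rather than any specific framework's API.

```python
# "Routing": the developer hard-codes the path, much like a CASE statement.
def route(question: str) -> str:
    if "sales" in question.lower():
        return "sales_pipeline"
    elif "inventory" in question.lower():
        return "inventory_pipeline"
    return "general_pipeline"

# "Tool calling": the model is handed a list of tool descriptions and decides
# for itself which to invoke, when, and with what arguments (names assumed).
tools = [
    {"name": "run_sql", "description": "Execute a read-only SQL query"},
    {"name": "lookup_docs", "description": "Search the data dictionary"},
]
# response = model.invoke(question, tools=tools)  # the model picks the tool
```

    The routing function always takes the path the developer chose; with tool calling, the decision moves into the model, which is why it sits higher on the agentic spectrum.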

    The demo I was shown responds to text or voice questions.  Two products were demonstrated.

    • AI/BI Genie, which is currently in beta within Databricks.
    • LangChain

    The idea was to ask a database questions in natural language and have it write the SQL.  The demo used NeMo Guardrails to prevent people from asking for malicious things like DROP DATABASE.
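    The guardrail idea can be illustrated with a minimal sketch. This is not the actual NeMo Guardrails API, just an assumed keyword check applied to generated SQL before it is executed.

```python
# Destructive statement types we refuse to execute (illustrative list).
BLOCKED = ("DROP", "DELETE", "TRUNCATE", "ALTER", "GRANT")

def is_safe(sql: str) -> bool:
    """Reject generated SQL whose first keyword is a destructive statement."""
    stripped = sql.strip()
    if not stripped:
        return False
    first_word = stripped.split(None, 1)[0].upper()
    return first_word not in BLOCKED

# is_safe("SELECT * FROM sales")  -> True
# is_safe("DROP DATABASE prod")   -> False
```

    A real guardrail layer does considerably more (checking intent in the natural-language question, not just the generated SQL), but the principle is the same: the AI's output is vetted before it touches the database.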

    What was really interesting was that one of the tools (sorry, can't remember which) output its thought process in terms that a human could understand.  That to me is a HUGE deal.  Throughout my career "Trust in Data" has been a major issue.  The ability for a black box to expose its reasoning provides a means to build trust.

    Another important thing was that the AI picked up on the Databricks equivalent of the MS_Description extended property within the database to help it decide which tables, views, columns and functions it needed to do its job.  It also made use of any docstrings in Python code as context for the Python functions it might decide to call.
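    The docstring point is worth a sketch. Frameworks such as LangChain can read a function's docstring and use it as the tool's description, so a well-documented function is effectively self-advertising to the agent. The function below is hypothetical; only `inspect.getdoc` is real standard-library Python.

```python
import inspect

def top_customers(region: str, limit: int = 10) -> list:
    """Return the top customers by revenue for a region.

    An agent framework can surface this docstring as the tool description,
    which is how the model decides whether the function is relevant.
    """
    return []  # the real query logic would go here

# The docstring is retrievable at runtime and can be registered as the
# tool's description when the function is exposed to the agent.
description = inspect.getdoc(top_customers)
```

    The same principle applies to MS_Description-style metadata on tables and columns: descriptive documentation stops being optional polish and becomes input the agent actively uses.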

    Another interesting part was the use of "Judges".  These are tuneable measures of the reliability of information coming out of generative AI, designed to tell the AI how much freedom it has to interpolate information from facts.

    The people running the demo pointed out that if an AI can decide which functions to call, what queries to run and how to iterate over the answers to its questions until it gets a satisfactory answer, then that answer can be very expensive.

    An individual AI agent response may cost a fraction of a US cent, but the permutations and combinations of what the agent does to provide you with an answer can run into the tens of thousands of dollars.  The big worry for businesses is not the capability of AI, it is the cost of that capability.
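    Some back-of-envelope arithmetic shows how fractions of a cent compound. Every figure below is an assumption for illustration, not a quoted price.

```python
# All values are assumptions for illustration only.
cost_per_call = 0.005      # half a US cent per agent step (assumed)
steps_per_answer = 40      # tool calls, queries and retries while iterating (assumed)
answers_per_day = 10_000   # question volume for a busy internal app (assumed)

daily_cost = cost_per_call * steps_per_answer * answers_per_day
# 0.005 * 40 * 10,000 = 2,000 US dollars per day,
# i.e. tens of thousands of dollars per month.
```

    The multiplier that hurts is `steps_per_answer`: an agent that iterates freely until it is satisfied can take many more steps per answer than a fixed pipeline would.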

  • Interesting points. I feel AI has a lot of potential, but the guardrails and trust are huge. I hadn't thought about cost, but that can be a concern. I already wonder how well the economics of using GenAI will work for the large vendors. We saw things like Blockchain be a bit of a solution in search of a problem, but also a potentially huge cost item that isn't justified.

