The Evolution of AI

  • Comments posted to this topic are about the item The Evolution of AI

  • While I don't know that I want AI/ML systems making decisions for me, I wonder how much this is already happening without us knowing.

  • I think that would be a better way of phrasing it:

    I'd like to know the decisions AI and ML are making for me.

    I'd love to have AI go through my photos and give me a tag for the year and the identifiable content.

    At work I have hundreds of millions of rows of data that I'd just like more metrics on.

    No one is really providing an economic argument for smallish data sets, or even what 'smallish' means.

    It has also recently become very hard to find any open source facial recognition tools with a meaningful level of support.

    412-977-3526 call/text

  • It's all predicated on the false assumption "a billion flies can't be wrong"

    No one shall be able to expel us from the paradise that Cantor has created for us.

  • Your examples simply point out that the wrong questions were being asked. Of course, if you leave later, you tend to be late. Unless, of course, the intent of the question was to see if the AI could spot an obvious answer.

    The problem with AI systems isn't their stupidity, it's their lack of transparency. You can't ask an AI to explain how it arrived at a conclusion, any more than you can a human. (Think a human can? Think about all the unconscious biases humans fall prey to and how many times they suffer from logical fallacies. Still think they can? :))

  • The problem with AI systems isn't their stupidity, it's their lack of transparency. You can't ask an AI to explain how it arrived at a conclusion, any more than you can a human.

    I would have to categorically disagree with you on this. While I'm not an expert, from what I have seen one can go in and see how the model was formed in an AI system. A human, by contrast, will tend to cover up their preconceptions (especially today). Further, humans will often fail to acknowledge their bias and tend to gravitate heavily toward that same bias even when they know it is false.

    The thing is that you could have the AI system spit out a rating along with the factors that went into it, and that can help spot bias (a sketch of the idea follows below). The models can often be manually adjusted to help overcome that bias. The big thing is that data analysis and AI need to be considered together; then the model can be built on the correct factors.
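
    A minimal sketch of that idea, assuming a toy rating model built with scikit-learn's LinearRegression and made-up factor names (hypothetical, not from the discussion above): the system returns the rating together with each factor's contribution, so a reviewer can see which factor is pushing the score and spot a biased one.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Hypothetical factors and ratings, purely for illustration.
        factors = ["income", "debt_ratio", "years_at_job"]
        X = np.array([[55.0, 0.30, 4],
                      [72.0, 0.45, 9],
                      [38.0, 0.60, 1],
                      [90.0, 0.20, 12]])
        y = np.array([640.0, 700.0, 560.0, 760.0])

        model = LinearRegression().fit(X, y)

        def rate_with_factors(row):
            """Return the rating plus how much each factor contributed to it."""
            contributions = model.coef_ * row           # per-factor share of the score
            rating = model.intercept_ + contributions.sum()
            return rating, dict(zip(factors, contributions))

        rating, why = rate_with_factors(np.array([60.0, 0.35, 5]))
        print(round(rating, 1), why)                    # a score and the reasons behind it

    With a breakdown like that, a factor whose contribution looks out of line can be investigated and the model manually adjusted, which is the point made above.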

  • You can determine much of how an AI works, but it's very hard. Also, few people do it, so, as Roger noted, in the practical world we won't know without large lawsuits or some regulation to require this disclosure.

    We do ask the wrong questions of AI quite often, but we humans are still learning how to work with this tech and how to build these systems.

    What an AI/ML system is often doing is considering far more factors than a human can. We use linear regression often because that's easy for humans; considering 100 factors isn't something many of our minds can handle. When asking why a flight is late, the ML system digs through 100 factors to try to determine whether any of them are significant. In this case it found they weren't. While that's good to know, it's not terribly exciting or helpful.
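
    A minimal sketch of that kind of screening, using synthetic data and scikit-learn's f_regression (the flight-delay setup here is invented, not from the discussion above): score 100 candidate factors at once for a significant relationship with lateness, something a person running one regression at a time could not realistically do.

        import numpy as np
        from sklearn.feature_selection import f_regression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(5000, 100))        # 100 candidate factors (synthetic)
        minutes_late = rng.normal(size=5000)    # synthetic target: pure noise here

        f_stats, p_values = f_regression(X, minutes_late)
        significant = np.flatnonzero(p_values < 0.01)
        print(f"{significant.size} of 100 factors look significant")  # ~1 by chance alone

    On data like this, where nothing truly drives lateness, the screen comes back nearly empty, which mirrors the "good to know, not terribly exciting" result described above.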

  • While that's good to know, it's not terribly exciting or helpful.

    Exciting, no - but considering how often we consider the wrong thing, it is helpful. There are plenty of times when we wouldn't want this information to be generally available. In this particular case, there would seem to be an argument for more accountability in being on time (if the system considered everything that might have played into the situation). Airlines have long argued that they shouldn't be held to their times because too much is out of their control. In this case that argument becomes weaker, at least until someone analyzes the reasons aircraft are late in leaving.

  • The other thing is that not everyone needs to understand the model for releasing it to be useful. A single party that does understand it can validate or refute the model for many others.

  • We must remember that AI is based on machine analysis of data, and that someone had to create the logic that is used. That someone is going to have personal opinions and biases that will certainly affect the logic used. To me, THAT is the 'artificial' in AI.

    Rick
    Disaster Recovery = Backup ( Backup ( Your Backup ) )

