• David.Poole - Thursday, February 22, 2018 4:54 AM

    Perhaps the next step in AI evolution is the ability of the system to explain its workings.  AI makes its decisions based on observation and data, but humans are capable of making intuitive leaps.  If AI were able to explain itself, then the human element should be able to add a new, and hopefully ethical, dimension to it.

    This is surely the next evolution. I just went to a talk, given mostly by data scientists, on the subject of automated data science solutions and how they are going to disrupt the data science community, for the simple reason that a business can automate what the data scientist does through a new API or tool.

    It was also said that these tools would explain how they work and how they arrived at their results, alongside the results themselves, in order to help sell their value. In other words, having a data scientist interpret and explain the results will also be automated by these tools.

    I could totally see this happening and being a major disruption to the field. However, I do not believe it will be good enough just yet. You still have to put a lot of trust in a third party to be right and not be biased with your data, versus owning the process yourself as the business. I would also say that interpretation is not going to be easily tackled: I've seen even automated ETL and data management solutions that are dead easy to set up and run still fail in a business because no one has the skill set to troubleshoot and explain the smallest of data issues. This happens because the assumption is that the end user is not needed, but in reality, you always need at least ONE expert.