• patrickmcginnis59 10839 - Thursday, August 24, 2017 7:04 AM

    On one hand, you have the ability to look under the hood. On the other hand, you may not. Take any of the Windows applications we use on a daily basis: they're not open source, and we don't know what's going on inside. Why would AI or ML change the fact that we don't know what's going on with the applications we use to make our business thrive? We just trust them to work and make the magic happen regardless of how they were programmed.

    With closed-source apps, you can still log, set flags, and get dumps (that are actually useful, anyway). Neural nets aren't like this: the values given to interconnections aren't specifically set, they're implicitly set via the learning process. With closed source, you can often duplicate issues that the vendor can then act upon (that is, if the bugs are deterministic enough; race conditions and the like are often difficult to recreate by their very non-deterministic nature). With neural nets, it's terrifically difficult to understand them even when the individual nodes can dump the values of their inner connections (I'm obviously probably not even using the correct terminology here). Maybe one path would be to log all inputs to the network and try to replay them, but if the neural network CONTINUES to update its learning while in use, the exact state of failure might not even be reproducible.
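    To make that last point concrete, here's a toy sketch (hypothetical Python; a made-up one-weight "model" stands in for a real network) of why replaying logged inputs doesn't reproduce a failure once the model has kept learning in production:

        class OnlineModel:
            def __init__(self):
                self.w = 0.5  # internal state, set implicitly by learning, not by code

            def predict(self, x):
                return self.w * x

            def learn(self, x, target, lr=0.1):
                # every live prediction is followed by an update,
                # so the model's state drifts continuously in use
                error = self.predict(x) - target
                self.w -= lr * error * x

        model = OnlineModel()
        log = []  # we log every input, hoping replay will reproduce a failure

        for x, target in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
            log.append(x)
            print(f"live prediction for {x}: {model.predict(x):.3f}")
            model.learn(x, target)  # state changes after every interaction

        # Later, we replay the logged inputs against the *current* model...
        for x in log:
            print(f"replayed prediction for {x}: {model.predict(x):.3f}")
        # ...and get different outputs, because the weights that produced the
        # original behavior no longer exist. Without a snapshot of the exact
        # state at the moment of failure, that failure is gone for good.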

    The other issue is that with conventionally coded apps, if I don't upgrade, I can usually get a sense of how deterministic the app's behavior is. I know when it will do X based on Y.

    ML/AI models can grow and change later, and because there are often multiple data inputs (features), I may see behavior changes over time that I can't explain, and that a data scientist might struggle to explain as well. It's not likely to be dramatically different, but you never know.
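    Another hypothetical sketch (entirely made-up weights and features): the "same" model before and after a retrain, where each weight moves only slightly but the outcome for one input flips, and no single feature explains why:

        def score(weights, features):
            return sum(w * f for w, f in zip(weights, features))

        v1 = [0.30, 0.50, -0.40]  # weights learned last quarter
        v2 = [0.35, 0.55, -0.35]  # weights after retraining on newer data

        x = [1.0, 1.0, 2.2]       # one customer's feature vector, unchanged

        print(f"v1 score: {score(v1, x):+.3f}")  # -0.080 -> declined
        print(f"v2 score: {score(v2, x):+.3f}")  # +0.130 -> approved
        # Each weight moved by only 0.05, yet the decision flipped. With dozens
        # or hundreds of features, attributing the change to any one input is
        # hard even for the people who trained the model.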