• On one hand, you have the ability to look under the hood. On the other hand, you may not. Take any of the Windows applications we use on a daily basis: they're not open source, and we don't know what's going on inside them. Why would AI or ML change the fact that we don't know what's going on with the applications we use to make our business thrive? We just trust in their ability to work and make the magic happen regardless of how they were programmed.

    With closed-source apps, you can still log, set flags, and get dumps (that are actually useful). Neural nets aren't like this: the values given to the interconnections aren't explicitly set, they're implicitly set via the learning process. With closed source, you can often duplicate issues that the vendor can then act upon (that is, if the bugs are deterministic enough; race conditions and the like are often difficult to recreate by their very nondeterministic nature). With neural nets, it's terrifically difficult to understand what's going on even when the individual nodes can dump the values of their inner connections (I'm probably not even using the correct terminology here). Maybe one path would be to log all inputs to the network and try to replay them, but if the network CONTINUES to update its learning while in use, the exact state at the time of failure might not even be reproducible.
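    That last point can be sketched in a few lines. This is a hypothetical toy (a single-weight "model" with a made-up online update rule, not any real framework): because the internal state drifts with every prediction, replaying the same input later does not reproduce the earlier output.

    ```python
    # Toy sketch of why log-and-replay can fail for an online-learning model.
    # OnlineModel and its update rule are invented for illustration only.

    class OnlineModel:
        def __init__(self):
            self.w = 0.5  # weight "implicitly set" and mutated by learning

        def predict_and_learn(self, x, target):
            y = self.w * x
            # Online update: internal state changes on every single call.
            self.w += 0.1 * (target - y) * x
            return y

    model = OnlineModel()
    first = model.predict_and_learn(2.0, 0.0)     # output seen at failure time
    model.predict_and_learn(3.0, 0.0)             # ordinary traffic keeps mutating state
    replayed = model.predict_and_learn(2.0, 0.0)  # same input, drifted state

    print(first, replayed)  # different outputs: the "failure" may not replay
    ```

    Logging the inputs alone isn't enough here; you'd also have to snapshot the full weight state at every step to get a faithful replay.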