Disclosing the How

  • Comments posted to this topic are about the item Disclosing the How

  • I read about an experimental language where you designed a software "gene" that would evolve to perform a set task. The language was fiendishly complicated, so it didn't catch on.

    The story goes that one gene produced a highly optimised sort algorithm. It produced the right results in every test, but no one could work out how. Since no proof of the algorithm could be produced, it couldn't be used safely.

    It would be fantastic if these systems could teach us how they did it. Without that knowledge, we'd have to take too much on trust.
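    A toy sketch of that gap between testing and proof, in Python (the `looks_correct` and `opaque_sort` names are mine, purely for illustration): randomised spot-checks can pass thousands of times without telling you *why* the algorithm works, or that it always will.

    ```python
    import random

    def looks_correct(sort_fn, trials=1000, max_len=50):
        """Randomised spot-check: passing every trial builds confidence,
        but it is not a proof of correctness."""
        for _ in range(trials):
            xs = [random.randint(-1000, 1000)
                  for _ in range(random.randint(0, max_len))]
            if sort_fn(list(xs)) != sorted(xs):
                return False
        return True

    # Stand-in for the opaque evolved sorter; here it simply delegates to
    # Python's built-in sort, so it passes every check.
    def opaque_sort(xs):
        return sorted(xs)

    print(looks_correct(opaque_sort))  # True -- yet this says nothing about *how* it sorts
    ```

    Passing such a check is exactly the situation in the story: right results in all tests, but no proof.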

  • Unfortunately, I don't think we'll ever see companies disclose the how to such a degree that anyone (with the right expertise) could do a proper audit and confirm their claims about the process.

    I also think it's wrong to call a process impartial just because an algorithm makes the final decision. Whether it's impartial depends entirely on how it was implemented, which means disclosing what it was designed to do is largely irrelevant, especially in a machine-learning scenario.

    I do think this problem could be solved by requiring the software to be open source and only allowing it to use open datasets.

  • I hope regulatory agencies require disclosure. Otherwise, I don't think companies will do it. Without this, we can't determine whether things are impartial or fair.

