Unfortunately I don't think we'll ever see companies disclose their processes in enough detail that anyone (with the right expertise) could do a proper audit and confirm their claims about the process.
I also think it's wrong to call a process impartial just because an algorithm makes the final decision. Whether it's impartial or not depends entirely on how it was implemented, which means disclosing what it was designed to do is largely irrelevant, especially in a machine learning scenario, where the behavior comes from the training data rather than from the stated design.
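To make that concrete, here's a contrived sketch (invented data, not any real system's code) of how a rule "learned" from biased historical decisions reproduces that bias even though the decision procedure itself treats every group identically:

```python
# Hypothetical illustration: the decision rule applies the same threshold
# to everyone, yet the learned model inherits bias from its training data.
# All names and data below are invented for this example.

# Historical decisions (group, approved) -- group B was approved less often.
history = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

def train(records):
    """'Learn' an approval rate per group from past decisions."""
    totals, approvals = {}, {}
    for group, approved in records:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + approved
    return {g: approvals[g] / totals[g] for g in totals}

def decide(model, group, threshold=0.5):
    """The 'algorithm makes the final decision': one uniform threshold,
    but the outcome still differs by group because the model does."""
    return model[group] >= threshold

model = train(history)
print(model)                # {'A': 0.75, 'B': 0.25}
print(decide(model, "A"))   # True
print(decide(model, "B"))   # False
```

The design spec ("apply the same threshold to everyone") sounds impartial, but knowing it tells you nothing about the outcomes; you'd need the data and the trained model to see the disparity.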
I do think this problem could be solved by requiring the software to be open source and to use only open datasets.