Black Box Problem

Black Box Problem Definition:

The non-transparency of AI systems. An AI system reaches a decision, but no one, not even its creators, can clearly explain the reasoning behind it. The path from input to output is hidden, especially in complex deep learning models, which makes it difficult to detect bias or guarantee transparency.

Example

An AI system rejects a loan application, yet the developers cannot say which input data or decision steps led to the rejection, as in the sketch below.
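
To make the example concrete, here is a minimal sketch, assuming Python with scikit-learn and entirely hypothetical loan data and feature names. A small neural network produces an approve/reject decision, but the only artifacts it exposes are raw weight matrices rather than a human-readable rationale, which is the essence of the black box problem.

```python
# Minimal black-box sketch: hypothetical loan data, scikit-learn MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical applicant features: income, debt ratio, credit history length.
X = rng.random((500, 3))
# Hypothetical approval labels with no simple, documented rule behind them.
y = (X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2]
     + rng.normal(0, 0.3, 500) > 0.4).astype(int)

# A small neural network: usable for predictions, but its internals are
# just weight matrices, not a human-readable line of reasoning.
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
model.fit(X, y)

applicant = np.array([[0.35, 0.80, 0.10]])  # one hypothetical applicant
print("Decision:", "approved" if model.predict(applicant)[0] else "rejected")

# The only "explanation" the model itself offers is its raw parameters:
for i, layer in enumerate(model.coefs_):
    print(f"Layer {i} weights shape: {layer.shape}")  # opaque numeric arrays
```

Running the sketch prints a decision for the applicant and the shapes of the learned weight matrices; nothing in that output tells the developers which data or steps were actually behind the rejection.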