Could the involvement of AI in arbitration undermine the perception of impartiality, especially if parties believe the algorithm's training data, prompts, or prior awards used for fine-tuning reflect bias or structural inequality in the dataset?
It seems that it could, at least in theory. Perceptions of impartiality depend on trust and transparency. If the AI's training data or reasoning process appears biased or opaque, parties may come to question the fairness and procedural legitimacy of the proceedings. It's like walking on thin ice.