On the latest episode of Practically Intelligent, Akshay Bhushan and Sinan Ozdemir are joined by Giada Pistilli, Principal Ethicist at Hugging Face, to discuss how bias in AI training data has unpredictable downstream implications. When it comes to contentious, societally difficult questions – such as those related to public policy or cultural issues – how do we decide which LLM output is correct? Giada explains why creating ethical training guidelines, alignment principles, and a ‘moral charter’ for your model is essential, and why we need to avoid ‘ethics shopping’ as AI progresses.