On 22 April 2025, Colorintech hosted the latest instalment of its AI for Everyone event series at Fora – Sixty London Wall. The event brought together technologists, policymakers, executives, and curious minds to explore the evolving challenges and possibilities shaping AI safety, privacy, and inclusion today and tomorrow.
Co-hosted by Dion McKenzie, Co-founder and Chair of Colorintech, and Yvette Schmitter, CEO of Fusion Collective, the evening delivered a nuanced conversation on the current state of responsible AI, with perspectives from industry leaders navigating the frontlines of innovation and governance.
We kicked off the evening with a spotlight on Colorintech’s Inclusive AI Standards, a framework that aims to encourage organisations to build equity, accountability, and transparency into their AI strategies from the ground up. As AI continues to evolve at an unprecedented pace, so must our commitment to ensuring that it benefits everyone and not just a privileged few.
The central panel discussion brought together Sneha Bedi, Edward, and Jude Umeh, leaders working at the intersection of AI innovation and governance.
Moderated by Yvette and Dion, the panel unpacked the shifting definition of “AI safety.” Once primarily focused on technical failures, it now encompasses a broader set of social risks such as algorithmic misuse and the erosion of public trust. Sneha Bedi highlighted that companies are becoming increasingly aware of AI regulations and are actively seeking guidance on best practices from legal advisors and larger tech firms to ensure the safety of customer data, particularly as they navigate partnerships and their own innovation strategies.
Edward emphasised a crucial element for ensuring safety, stating the importance of "embedding interdisciplinary teams into the AI lifecycle." He elaborated that integrating diverse perspectives throughout the AI development process is vital for identifying and mitigating potential risks and biases before systems are deployed.
Jude highlighted the shared responsibility in AI safety, explaining, "responsible AI is always about making sure that there are guardrails, so it does not do beyond what you want it to do... but, responsible AI is also a responsibility for the customer, for the person doing implementation, not just the implementer."
Panelists also explored the role of international collaboration in confronting global data privacy challenges and designing systems that safeguard user agency.
Edward touched on the shifting landscape of responsibility, mentioning recent discussions around foundation model developers potentially relinquishing direct safety assessments to focus on misuse prevention at the user level (OpenAI 2025).
At a time when data fuels innovation, the panel explored a critical tension: how to protect privacy without stalling progress. The speakers discussed the limits of current governance models and stressed the urgency of user-centric design principles. Sneha Bedi noted that in the evolving AI landscape, understanding privacy implications is a continuous learning process for everyone, including legal teams, as clear governmental guidelines are still developing. Jude Umeh brought the conversation to intellectual property, a significant privacy-adjacent concern, noting, "The real issue with this is that the training data that AI uses in so many instances are based on... publicly available information, including copyright data."
He highlighted the legal complexities involved and the significant commercial opportunity awaiting those who find ways to fairly compensate content creators.
Edward raised concerns about data retention and potential breaches, referencing a DeepMind study that found large language models, such as ChatGPT, could eventually return training data, including personal information, when prompted repeatedly (VICE 2023).
The evening underscored that inclusion is the centrepiece of responsible AI development, from tackling algorithmic bias to making room for non-traditional voices in the AI workforce. Our panelists affirmed that equitable AI doesn't happen by accident; it takes intention, representation, and a culture that values ethical reflection just as much as technical achievement. Jude Umeh reminded us that, "You can't separate inclusion from innovation. Diverse teams aren't just a moral good; they're a business imperative."
Edward highlighted grassroots efforts within underrepresented communities, citing examples like the InkubaLM project for African languages, emphasising that these communities are "really taking it into their own hands to create something that works for them" (ICTworks 2024).
The event closed with a look ahead: What gives us hope? What should we be doing now?
Panelists were aligned on the need for shared responsibility. Governments must move quickly to set enforceable standards. Companies must lead with transparency. And communities must be brought to the centre of the AI conversation. Jude Umeh urged everyone to "be owning your part of this game. You've got to have your voice in the game to say, I don't want my data used this way. I don't want my content used for training your models unless I give you express permission."
Yvette summed it up powerfully: "The future of AI doesn't just happen to us. It's something we build together."
As part of Colorintech’s ongoing mission to make tech more inclusive, we’ll continue exploring the societal impact of emerging technologies and spotlighting the voices that need to be heard.
Want to stay updated on future events, tools, and insights from the AI for Everyone series? Visit www.colorintech.org or follow us on LinkedIn.