"Your Strength Is Your Aggregate GDP × Your Motivation to Stay Unified": Joanna Bryson on AI, Regulation, and Global Cooperation
Joanna Bryson is a Professor of Ethics and Technology at the Hertie School in Berlin, where she is a founding member of the Centre for Digital Governance. With degrees in psychology and artificial intelligence from the University of Chicago, the University of Edinburgh, and MIT, her research spans AI, ethics, human cooperation, and technology policy. She has advised governments and organizations globally, including the UN, EU, and OECD. Since 2020, she has been a leading voice in AI governance and ethics.
Gulan: Google and other tech companies are now using AI tools like NotebookLM and AlphaGenome to assist with scientific research and even analyze human genomes. As AI moves deeper into critical areas like healthcare and education, what kinds of legal standards, testing protocols, or independent oversight do you believe are essential to make sure these tools are safe, accurate, and used responsibly?
Joanna Bryson: I’m very persuaded by the efforts of the EU in these areas. Please see my recent paper on this (actually written in February 2024): https://ojs.weizenbaum-institut.de/index.php/wjds/article/view/3_3_8/111
Of course, even if these solutions would otherwise work, there is an enormous problem of enforcement, particularly now with Trump. I am excited about the new trade cooperation described here, which already has over 50% of the world’s GDP behind it:
https://www.dw.com/en/eu-and-germany-push-for-new-world-trade-organization-wto-amid-gridlocked-dispute-resolution/a-73143928
Gulan: Given your expertise in intellectual property and eDiscovery, we’d value your insight on an issue that’s generating growing concern in the media world. Many independent publishers and news organizations are reporting that AI systems—like chatbots and automated overviews—are repackaging their original journalism without proper credit or compensation.
Do you think current legal frameworks around fair use or IP could be extended to protect these creators? Or do we need entirely new mechanisms to ensure their work is acknowledged and fairly treated in the age of generative AI?
Joanna Bryson: I wouldn’t call myself an expert in IP. But I do think there are at least three points of intervention in present copyright law that need to be pursued. Again, as noted above, we will need trade treaties to enforce these:
1. Use of data – data should be paid for at agreed market rates. The US saw some legal breakthroughs on this in the Anthropic case: https://www.fastcompany.com/91357755/anthropics-ai-copyright-win-is-more-complicated-than-it-looks There is a question about what is fair, given that AI systems can make use of what they have learnt far more frequently than a human would, but they also compose outputs from far more inputs, so these factors may roughly balance out.
2. Avoidance of plagiarism – AI outputs that are too similar to any one input should be fined for plagiarism, just as a human would be. In fact, AI companies might want to build detectors for this and simply pay fees where appropriate (a minimal sketch of such a detector follows this list). This might also help defend IP against human plagiarism.
3. For really large entities, e.g. Disney, Universal Studios, musicians’ unions, etc., there might be a chance to demand a test of value to the AI product. It is VERY costly to train a model with and without a set of data, but there might conceivably be class-action ways to get at the real value produced from having access to a dataset.
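For illustration only, a detector of the kind mentioned in point 2 could start from something as simple as word n-gram overlap between a model output and each source text. This is a minimal sketch, not any company’s actual method; the shingle size and flagging threshold are assumptions, and a production system would need fuzzier, paraphrase-aware matching.

```python
# Illustrative near-duplicate detector: flags AI outputs that overlap
# heavily with any single source text, using word n-gram overlap.
# Shingle size (5) and threshold (0.3) are arbitrary assumptions.

def shingles(text: str, n: int = 5) -> set:
    """Return the set of word n-grams ("shingles") in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(output: str, source: str, n: int = 5) -> float:
    """Fraction of the output's n-grams that also appear in the source."""
    out, src = shingles(output, n), shingles(source, n)
    return len(out & src) / len(out) if out else 0.0

def flag_plagiarism(output: str, corpus: dict, threshold: float = 0.3):
    """Return (source_id, score) pairs where overlap exceeds the threshold."""
    return [(sid, score) for sid, text in corpus.items()
            if (score := overlap_score(output, text)) >= threshold]
```

Anything flagged this way could then trigger a licensing fee or a manual review, which is the kind of pay-where-appropriate mechanism the answer above envisions.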
Gulan: The upcoming WHO “AI for Good” summit is expected to promote new global standards for the use of AI in healthcare. We’re curious to hear your thoughts on this effort.
Do you see global AI governance in the health sector as a realistic goal, especially considering how fragmented and inconsistent privacy and liability laws still are across different countries? We’d love to hear your own perspective on what meaningful progress in this area might actually look like.
Joanna Bryson: I haven’t kept up on this. To be honest, I worry about ethics washing. By all means do good, and use AI doing it, but everyone uses AI, so why call it “AI for good”? Cf.
https://joanna-bryson.blogspot.com/2021/04/two-ways-ai-technology-is-like-nuclear.html
Gulan: Many AI systems today are still trained primarily on English-language, Western-centric data, which often doesn’t reflect the realities of countries in the Global South.
In your view, how can countries like ours—especially in regions like the Middle East—ensure that AI tools used in areas like healthcare, law, and education are culturally and linguistically relevant, and not simply importing bias from systems built elsewhere?
Joanna Bryson: The only way to avoid outside bias entirely is to build from your own datasets, and these are obviously limited in scale by the availability of data. There is just no way around this.
If you are willing to start from Western (or Chinese, or some other data-heavy) biases, then you can use local data to drag the model towards your culture, but you would always be using the larger dataset to fill in the gaps.
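As a purely illustrative sketch of what “dragging a model towards your culture” can mean in practice, continued pretraining of a large model on a local corpus might look roughly like the following, using the Hugging Face transformers and datasets libraries. The base model name, file path, and hyperparameters are placeholders, not recommendations.

```python
# Sketch: continued pretraining ("fine-tuning") of a pretrained model on a
# local-language corpus. Local data shifts the model towards local usage,
# while the original weights still fill in the gaps, as described above.
# Model name, file path, and hyperparameters are illustrative placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

base = "gpt2"  # stands in for any large, data-heavy pretrained model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Local corpus: one document per line in a plain-text file (placeholder path).
dataset = load_dataset("text", data_files={"train": "local_corpus.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="local-model", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # broad pretrained knowledge remains; local data shifts it
```

The trade-off the answer describes is visible here: the local corpus steers the model, but everything not covered by it is still answered from the original, foreign training distribution.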
Gulan: The recent SAG-AFTRA agreement in the U.S. included strong protections for voice actors against the misuse of their likeness and biometric data by AI systems. But in many parts of the world—including our region—there are still very few legal protections in place.
From your perspective, what steps can developing countries take, whether through law, policy, or public awareness, to safeguard their citizens from having their voices, faces, or personal data used or monetized by AI without their consent?
Joanna Bryson: This is closely related to the answers I gave to the first two questions. First you need to construct the legal framework; second, you need enough power to enforce it, and that power can only be established through cooperation. This is the real “Brussels Effect”: a group of countries motivated by a long history of blowing each other to bits finally harmonised their laws and policies enough that they could become an effective negotiator. Your strength is your aggregate GDP × your motivation to stay unified.
Gulan: We’ve seen how AI is increasingly being used in modern conflicts—whether through deepfakes, coordinated bot campaigns, or autonomous drones. In regions like the Middle East, where tensions are already high, these tools could easily be exploited by authoritarian actors or non-state groups.
From your point of view, what kind of international legal framework or cooperation is urgently needed to prevent AI from escalating conflicts or being misused in ways that destabilize already fragile regions?
Joanna Bryson: It isn’t useful to think of AI as any kind of actor, and certainly not a unitary actor. I think we need to do all the same things we’ve ever done, but realise that there are new capacities. In particular, digital technology makes it far too easy to exert influence at a great distance, and to turn on a dime. Everyone should be terrified by the fact that a government might have the power to simply pull the plug on their devices and/or the upgrades for these.
