Maura R. Grossman Warns: Over-Reliance on Generative AI Threatens Human Thinking and Analytical Skills
Maura R. Grossman is a research professor in the School of Computer Science at the University of Waterloo, an adjunct professor at Osgoode Hall Law School, and an eDiscovery attorney and consultant. She is internationally recognized for her pioneering work in technology-assisted review (TAR) and the use of artificial intelligence in legal practice. Prof. Grossman has published extensively on eDiscovery, information retrieval, and legal ethics, and she frequently advises law firms, corporations, and courts on the effective use of AI and advanced analytics in litigation and regulatory matters.
Gulan: Google and other tech companies are now using AI tools like AlphaGenome to assist with scientific research and even analyze human genomes. As AI moves deeper into critical areas like education and healthcare, what are your greatest concerns? How can we make sure these tools are safe, accurate, and used responsibly?
Maura R. Grossman: An issue that has appeared in recent media, and one that should give us all pause, is whether using generative AI tools for research and writing impacts our long-term ability to think. A recent MIT study suggests that using such tools for writing imposes a “cognitive debt”: those who use these tools show less brain activity and retain less memory of what they have written than those who do not.
I worry that over-reliance on generative AI tools in education will lead to a generation with less ability to think analytically and to solve problems, because they have grown accustomed to finding easy solutions in response to a prompt.
That said, it is indisputable that these tools can process information at a speed and scale beyond the capability of any human and that they will lead to new discoveries in medicine. For example, the 2024 Nobel Prize in Chemistry was awarded to David Baker, Demis Hassabis, and John Jumper for using AI to crack the code for proteins.
My biggest concern with the use of AI in healthcare is bias. Because health-related datasets often lack adequate representation of marginalized groups, the systems developed from such datasets may not make accurate predictions for those groups, which are already underserved in healthcare systems.
The key to all of this is establishing proper safeguards, including regulations that require independent testing for safety, accuracy, and fairness, and ongoing monitoring and updating of those safeguards as the technology evolves.
Gulan: Given your knowledge of intellectual property (IP) law, we’d value your insight on an issue that’s generating growing concern in the media world. Many independent publishers and news organizations are reporting that AI systems—like chatbots and automated overviews—are repackaging their original journalism without proper credit or compensation. Do you think current legal frameworks around fair use of IP could be extended to protect these creators? Or, do we need entirely new mechanisms to ensure their work is acknowledged and fairly treated in the age of generative AI?
Maura R. Grossman: Court rulings in recent U.S. legal cases have held that technology companies that use original journalism or creative media to train their generative AI systems are protected by the fair-use doctrine because the output of their systems is transformative and not an exact replica of the input. That is not to say that scraping the Internet for copyrighted or known pirated material is permissible; that question has not yet been ruled on definitively, and such scraping is likely to be found an IP violation.
There needs to be a fair balance between the rights of journalists and creators and those of tech companies. The obvious answer is licensing agreements that fairly compensate the former for the use of their work, and we are starting to see some of those agreements appear.
Of perhaps greater concern is that users are now getting their news and entertainment directly from generative AI tools like ChatGPT and never clicking through to the original sources. This is problematic not only because it will no doubt impair the businesses and livelihoods of the original creators, but also because it opens the door to the distribution of massive amounts of misinformation and disinformation with few, if any, guardrails.
Gulan: The upcoming WHO “AI for Good” summit is expected to promote new global standards for the use of AI in healthcare, and we're curious to hear your thoughts on this effort. Do you see global AI governance in the health sector as a realistic goal, especially considering how fragmented and inconsistent privacy and liability laws still are across different countries? We’d love to hear your own perspective on what meaningful progress in this area might actually look like.
Maura R. Grossman: While global regulation might be the ideal solution, sadly, in the current geopolitical climate, I fear it is extremely unlikely. Presently, the U.S. and China are in an AI “arms race” and view the stakes of who reaches artificial general intelligence (AGI) first as too high to risk cooperation. As long as these two global powers decline to place restrictions on the development and commercialization of AI systems, whether in healthcare or otherwise, I see very little hope for progress on a global front. Recently, even the European Union has backed off from its far-reaching AI Act because of concerns that it will inhibit innovation.
Gulan: Many AI systems today are still trained primarily on English-language, Western-centric data, which often doesn’t reflect the realities of countries in the Global South. In your view, how can countries like ours—especially in regions like the Middle East—ensure that AI tools used in areas like healthcare, law, and education are culturally and linguistically relevant, and not simply importing bias from systems built elsewhere?
Maura R. Grossman: You raise an issue of great importance. I think the answer is that English-language and Western-centric tools should not be used off-the-shelf without additional fine-tuning on local data that is more representative of the population(s) on which the tools will be used. Without that extra step, there is no doubt that the tools will exhibit bias. Ideally, the Global South and the Middle East should be given the resources to build their own tools that are linguistically and culturally relevant. That’s the long-term solution.
Gulan: The recent SAG-AFTRA agreement in the U.S. included strong protections for voice actors against the misuse of their likeness and biometric data by AI systems. But in many parts of the world—including our region—there are still very few legal protections in place. From your perspective, what steps can developing countries take, whether through law, policy, or public awareness, to safeguard their citizens from having their voices, faces, or personal data used or monetized by AI without their consent?
Maura R. Grossman: Other jurisdictions can enact legislation like Tennessee’s 2024 ELVIS Act to protect individuals from unauthorized AI-generated likenesses, including deepfakes and voice clones. That law safeguards both living and deceased individuals from digital exploitation. Beyond the promulgation of such laws and greater public education on the risks of deepfakes, I see very little else that individuals can do to protect themselves in this era.
Gulan: We’ve seen how AI is increasingly being used in modern conflicts—whether through deepfakes, coordinated bot campaigns, or autonomous drones. In regions like the Middle East, where tensions are already high, these tools could easily be exploited by authoritarian actors or non-state actors. From your point of view, what kind of international legal framework or cooperation is urgently needed to prevent AI from escalating conflicts or being misused in ways that destabilize already fragile regions?
Maura R. Grossman: This is a great question and an issue that concerns me deeply. We are already seeing state and non-state actors use bot farms and proliferating deepfake images to disseminate propaganda, with malevolent intent, to their own people as well as to citizens of other countries. We have seen this used as a military strategy in various ongoing conflicts around the world. As mentioned above, I am not hopeful about global cooperation in this regard, as I see very little interest in it on the part of countries such as China, Russia, and Israel, to cite a few examples of states engaged in such tactics to spread mis- or disinformation. As long as that is the case, no global solution is possible.
Right now, the best hope is (i) to increase public awareness about AI-generated deepfakes, misinformation, and disinformation, so that people are critical and verify what they see, hear, and read; and (ii) that, in due course, effective AI-detection technologies will become readily accessible to everyone. At present, the existing technical solutions are insufficiently reliable or robust to manage the constant barrage of fake media.
