Robin Feldman, AI Versus IP: Rewriting Creativity. Cambridge University Press, 228 pages, 6 x 9 inches, ISBN 978-1009646864
In a Nutshell
The United States Constitution laid the groundwork for the intellectual property regime—giving Congress the power: “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” Across hundreds of years, the core concepts of what we protect and why we protect it have remained relatively stable. Through tectonic technological shifts—the industrial revolution, the digital revolution, and the proliferation of the internet, smartphones, and social media—these core concepts have persisted. But artificial intelligence poses a different kind of challenge.
Artificial intelligence is commonly defined as ‘the use of computing systems for automating tasks that would normally require human intelligence.’ But AI involves far more than that simple phrase suggests: it is a combination of fields that together enable a system to function in a manner remarkably reminiscent of the human brain. AI is not a “thing” or an “it”; AI is a way of going about doing something.
Scholars have discussed numerous issues in the realm of AI and intellectual property (IP), including: whether AI itself should be deemed a creator; whether the output of an AI can be protected under intellectual property regimes; whether AI infringes on the IP rights of others; and how AI should be regulated. One issue, however, remains largely unexamined: As AI continues to embed itself throughout society, it will progressively shake loose the foundations of what we choose to protect with IP, forcing us to reconsider how IP derives its value.
My book AI Versus IP: Rewriting Creativity examines the rapidly developing intersections of AI and the IP regimes. Examining each pillar of IP in turn—copyright, patent, trademark, and trade secret—I delve into the specific challenges and possibilities AI offers in each case. Using analogies to the Bridgerton series and the Good Housekeeping “Seal of Approval,” I describe how AI is set to shrink not only the pool of materials eligible for intellectual property protection but also the value of IP regimes as we know them. In other words, AI may decrease the value of the protection umbrella itself.
These rapid changes do not doom the system wholesale. The book describes how the legal system can trim what is classed as protectable, casting the net only around the remarkable and thereby preserving value. Furthermore, the legal system could restore confidence in both AI and IP through the establishment of a public–private certification body. I conclude that, together, these approaches would mitigate the problems looming ahead for the four intellectual property regimes.
The Wide Angle
From a broad, theoretical perspective, intellectual property is underpinned by the philosophy of utilitarianism. At the risk of wildly oversimplifying moral philosophy: utilitarianism evaluates an action by its ‘utility,’ that is, by the action’s total outcome, on balance. There is broad agreement among scholars that US intellectual property regimes in the modern context are largely conceptualized in utilitarian terms – that is, they are conceived as vital policy mechanisms designed to promote broader societal benefit by incentivizing innovation and creative expression.
In this utilitarian context, intellectual property regimes have largely assumed the centrality of humans to the innovation and creativity process. The rapid progress in AI challenges that human-centered assumption, forcing us to confront our conceptions of what we protect and the value of human contribution to progress.
For example, if AI systems can easily produce much of what humans invent, create, or capture through trade secrets, many human contributions to creativity may no longer satisfy the requirements for protection. Do we care whether we protect human innovation? From another perspective, if consumers rely on influencers or private rating systems as their indicators of quality, rather than looking for the trademark, what is to become of the trademark system? As I comment in the book, with due apologies to my distinguished publisher Cambridge University Press, readers seem to be relying less and less on the hallowed publishing houses, whose reputations are protected by trademark, as reliable indicators of what to read.
Beyond the theoretical, the book addresses current conversations about AI and IP. Thus, the book examines the pending court cases concerning whether large language models (LLMs) such as ChatGPT, Gemini, Claude, and Grok infringe copyright by training on existing works or by reproducing near-copies of existing works in response to particular prompts. The book predicts where the courts and the industry are likely to go in resolving these problems and suggests possible points of resolution.
Finally, the book bridges gaps in knowledge. Legal scholars may understand intellectual property, but most do not understand the math and science of AI systems. Computer engineers may understand AI, but most do not understand the strange beasts that make up intellectual property. How can policymakers design appropriate AI policies without some depth of knowledge of what modern AI systems do, and how can engineers design AI systems without a basic understanding of the laws we would like these systems to respect? I wanted a book that would speak to both groups and leave everyone understanding the quandaries and potential pathways ahead.
As a law professor, I direct the AI Law & Innovation Institute at UC Law San Francisco, which frequently provides technical advice to government; as a scholar, I published my first piece on AI more than 20 years ago and have written about numerous aspects of AI and society; and as a science writer, I’m committed to making complex technical topics clear to the intelligent reader. Thus, the book attempts to help others identify the connections, ideas, and issues arising from this convergence of AI and IP.
A Close-Up
I would love it if a “just-browsing” reader were to land on the section explaining how LLMs work. I tried to craft an analogy that has the virtue of precision while using no math, so that anyone can understand it. Specifically, I use the analogy of mapping everything there is to know about Washington, D.C. by creating a multi-dimensional map – not just a three-dimensional map, but a 300-dimensional map. The objective is to create a map so detailed that it captures not just landmarks like the White House, but also the vibe of neighborhoods, the flow of the Potomac, and even the buzz of government in action. Readers will see how the mapping process employs an army of tour guides who begin with useless, random numbers and end up with finely developed expertise. At the end of the section, readers should have a much clearer and more accurate picture of how an LLM works.
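For readers who want to see the intuition behind the map analogy in code, here is a minimal sketch of my own, not drawn from the book: each word is given coordinates in a shared space, and words whose coordinates point in similar directions are treated as related. The three-dimensional, hand-picked numbers below are purely illustrative stand-ins for the hundreds of learned dimensions a real model uses.

```python
import math

# Hypothetical, hand-picked coordinates for illustration only.
# A real model learns these values from data and uses hundreds of
# dimensions rather than three.
embeddings = {
    "white house": [0.90, 0.80, 0.10],  # landmark, government
    "capitol":     [0.85, 0.90, 0.15],  # landmark, government
    "potomac":     [0.20, 0.10, 0.95],  # river, geography
}

def cosine_similarity(a, b):
    """How close two points on the 'map' are (1.0 means the same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Nearby points on the map behave like related concepts.
print(cosine_similarity(embeddings["white house"], embeddings["capitol"]))  # high (~0.99)
print(cosine_similarity(embeddings["white house"], embeddings["potomac"]))  # low (~0.30)
```

In a real system, the “army of tour guides” corresponds to training: the coordinates start out as random numbers and are nudged, example by example, until related words and phrases end up near one another on the map.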
One of my favorite quotes lies in the section describing how society bestows value on intangible things such as intellectual property. Here is the relevant excerpt, with the favorite part in italics: “[O]ne should begin by understanding a remarkable ability within our society: We are able to bestow value on things that do not exist simply by creating a myth that everyone believes. In other words, things we cannot see or touch have value simply because we believe they do. [For example,] it is only our collective belief in the value of money that grants it the status of “the root of all evil” (as noted in the King James version of the Bible) and the driving force that “makes the world go round” (as noted in the musical and movie, Cabaret). Or, as a country-western song explains, “Money can’t buy everything. Well, maybe so. But it could buy me a boat.” If humans simply stopped believing money has value, the global economy would collapse. We’d find our pockets full of little more than shreds of paper and shiny disks.”
Another favorite quote is tucked into the section on shared understandings. “To communicate effectively, all societies need a certain level of commitment to – and shared understanding of – those things that we believe exist. … These shared understandings help us grasp and categorize the world around us and enable us to communicate with one another coherently. Without them, we run into problems. If, for example, John believes sidewalks are infinitely expandable, then he may have difficulty explaining to a police officer – who believes in the finite nature of space – why he tried to drive his car on the sidewalk.”
Asking an author to pick favorite quotes is a little like asking a parent to pick a favorite child. How could I possibly choose? With that in mind, I can’t resist adding another. The following excerpt is from the section exploring solutions: “Consider the ubiquitous Nutrition Facts label, which can be found on most packaged foods. From bread and milk to cereal and seaweed snacks, virtually all packaged foods display basic nutrition information in an easy-to-assimilate format. Wouldn’t it be nice if people could similarly determine the extent to which products we consume from information-related industries are made of high-quality ingredients . . . so that people know the “nutritional quality” of what they are consuming. Eventually, a small box, about the size of the Nutrition Facts label, could become instantly recognizable and universally trusted for evaluating information products.”
And this final quote is a favorite because it captures issues of particular importance to me: “This is not to suggest that the certification body would provide an analysis of whether the contents represent ‘The Truth.’ Rather, the goal would be to provide information on the sources and methods, letting consumers make their own choices in the marketplace of ideas.”
Lastly
If the book is successful, it could help legal scholars and computer engineers find a common frame of reference. Beyond contributing to shared understandings, I hope the book prompts industry leaders and policymakers to engage in the joint enterprise of trust benchmarking. Developing trust mechanisms in this manner will strengthen the intellectual property regimes, bolster consumer confidence, and help AI develop to its full potential.
More broadly, the book can highlight the way in which AI may substantially shrink the pool of things eligible for IP protection, as well as shake confidence, dissolve mystique, and undermine the value proposition of the various IP regimes themselves. This awareness could lead policymakers to adjust the intellectual property regimes in the manner the book recommends.
Moving from the broad to the narrow, the chapter on Hot Topics points the way toward resolving the numerous lawsuits over whether LLMs violate copyright. I hope the analysis helps inspire settlement of those disputes.
Finally, and most important, I hope the book fosters caution about assuming that any of us knows the perfect answer to the questions AI will continue to bring. With this in mind, I close my comments with a combined excerpt from the introduction and conclusion: One can predict much wailing and gnashing of teeth as we step into this next iteration of human–technological interaction. Nevertheless, we should borrow a concept from both existential philosophers and their arch-opponents, theologians, to note that the enterprise we are embarking on demands a little humility. An expert in the field of AI recently told me—we always thought that when we reached this point with AI, we would understand much more about cognition than we do now. And, indeed, the gap between the state of our technology and our understanding of it, as well as its impact, is vast. Not only have we come very far, but we also have very far to go. In that context, one of Alexander Pope’s three-centuries-old observations remains remarkably prophetic: “Fools rush in where Angels fear to tread.” In truth, as AI continues to develop and society scrambles to adapt, we still have a lot of learning left to do—about the technology, about its impact, and most important, about how society should best approach it.