The Line We Draw: James Boyle on AI, Personhood, and Navigating a Future We're Still Defining

Recall that, in legal terms, corporations are recognized as "legal entities," meaning they hold rights and responsibilities separate from their owners (shareholders) and can sue or be sued in court.

Well, we've reached the point where that question extends beyond corporations. James Boyle, a Duke Law professor and founding board member of Creative Commons, brilliantly unpacks this intersection of AI technology with law and the humanities.

On Unsiloed, he and host Greg LeBlanc discuss Boyle's book "The Line," examining AI's urgent challenge to personhood. Their conversation explores how technology is forcing us to redefine who—or what—qualifies, a question we may not be ready to answer.

Original Podcast

Here are some of the key insights from their discussion:

  1. We’re Not Quite Ready for “The Talk” About AI Rights: Boyle kicks things off with a sobering thought: we're largely unprepared for the legal and ethical quandaries AI will bring. He suggests that the first real confrontations with these issues won't likely happen in a courtroom, but will brew in popular culture—literature, film, and everyday moral confusion—much like the environmental or animal rights movements did before they reached the legal system. Think about those eerie ChatGPT responses; they’re the early tremors. ⏰ 00:02:14
  2. The Multi-Layered Human Approach to Drawing Lines: When we grapple with these big questions about who or what deserves moral consideration, Boyle points out that we don’t operate in neat, isolated silos. Instead, we bring a whole toolkit to the table: our art, our capacity for empathy, our economic reasoning, and our historical experiences with how we've treated corporations and non-human animals. His book, he explains, isn't about telling us what to think, but rather offering a framework for how to think about these complex issues, in the true spirit of the humanities. ⏰ 00:03:56
  3. Beyond Expanding the Circle: Technology is Creating New Contenders: Greg LeBlanc brings up a crucial distinction: there's the historical "liberal story" of expanding the circle of who counts as fully human, gradually including marginalized groups. But Boyle argues that what we're facing now is different. Technology isn't just making us rethink our existing definitions; it's actively creating the possibility of entirely new forms of intelligences and organisms that might, even by our current standards, start to cross the line into what we consider "personhood." ⏰ 00:04:33
  4. The Unprecedented Power of Designing Beings: A truly novel aspect of our current technological moment, Boyle emphasizes, is that we are designing these potential new entities. He introduces a thought experiment from his book about "chimpeys"—transgenic beings designed with specific traits like an IQ of 60 and loyalty, but lacking the capacity to form unions. The inventor in his story declares, "I am their creator, and I gave them no such rights," highlighting a chilling new dynamic: we can potentially engineer beings to fall just short of whatever criteria we set for personhood. This raises profound ethical questions about hubris and our right to create and define life. ⏰ 00:06:16
  5. When Sentences Don't Equal Sentience: The conversation touches on recent events where people have attributed human-like consciousness to AI like ChatGPT. Boyle clarifies that current AI, while brilliant at syntax, lacks semantic understanding—it's like John Searle's "Chinese Room" thought experiment, an incredibly sophisticated autocomplete. He notes that historically, language (as Aristotle and Turing both suggested) has been a key marker for humanity. But ChatGPT's linguistic abilities, devoid of true consciousness, force us to realize that simply producing fluent sentences isn't enough to signify sentience. ⏰ 00:09:21
  6. Moving the Goalposts: Defensiveness or Deeper Insight?: Are we simply "moving the goalposts" for what defines human-like intelligence now that machines can pass the Turing test? Boyle acknowledges this is happening but poses a critical question: is it just defensiveness, a desire to maintain our uniqueness? Or, are these new AIs actually giving us deeper insights, making us realize that what we valued wasn't just the pattern of words or a pretty picture, but the thinking, feeling mind and the shared human experience behind them? Perhaps, he muses, this is the "humility square we have always needed," prompting us to reconsider if we ourselves are more like AI than we'd care to admit in our more scripted moments. ⏰ 00:13:04
  7. The Peril of Capacity-Based Rights: If we define personhood solely by a set of mental capacities (like moral philosophers often do when arguing for animal rights), we run into a serious problem, as Boyle highlights. What about humans in a coma, or anencephalic children? He argues passionately that the 20th-century fight for universal human rights taught us the desperate need not to condition rights on subjective assessments of intelligence or capability. He proposes a hybrid approach: if you're human, you're in the club, regardless of mental status; and additionally, if a non-human entity demonstrates certain capacities (which we'd debate), they too could be brought inside the line. ⏰ 00:17:51
  8. Beyond Philosophy: The Role of Imagination and Empathy: Boyle champions the idea that moral philosophy alone can't guide us through these uncharted waters. He underscores the vital role of literature, film, and other narrative forms in cultivating our imagination and empathic capacities. Drawing on his experience teaching law and literature, he explains that these art forms allow us to engage in thought experiments, to inhabit, even if imperfectly, the minds of others—whether human or imagined non-human entities like those in Philip K. Dick's work or Samuel Butler's “Erewhon”. This engagement of empathy, he believes, is crucial for the evolving debate on AI. ⏰ 00:24:18
  9. The Expediency of Artificial Persons: From Corporations to AI?: The conversation pivots to the very practical point that we already have "artificial persons": corporations. Boyle notes that this legal personhood was largely granted for reasons of expediency and efficiency. Could a similar path be followed for AI, perhaps to determine liability when a self-driving car errs, or as AI systems increasingly make complex decisions for companies? He points out that the EU was already discussing limited legal personality for advanced robots back in 2016, strictly for pragmatic reasons, though it sparked public outcry. The concern, of course, is the slippery slope from narrow economic rights to broader political and social ones. ⏰ 00:32:03
  10. Shades of Personhood: A Graduated Approach?: Just as corporations have some rights (like free speech) but not others (they can't vote), Boyle suggests that we're likely to see, and perhaps should see, gradations of personhood for other entities. He mentions becoming convinced that great apes, for instance, deserve more than just protection from cruelty; their complex cognitive and emotional lives warrant a greater recognition, though not necessarily full human rights. He predicts a similar staged approach for AI, regardless of whether we "should" grant them rights, starting with limited recognitions for practical reasons, followed by ongoing debate. ⏰ 00:36:33
  11. The American Courtroom: Crucible for Moral Evolution?: LeBlanc raises a pertinent question about whether the courtroom, particularly in the American context, is the right venue for these profound societal debates. Boyle acknowledges that courtrooms have profound limitations, including the influence of power and wealth, but also strengths. They operate with a different vernacular where certain arguments (like those based on evidence rather than unsubstantiated claims) are privileged. While not the be-all and end-all, and many decisions should be democratic, he sees courtrooms, historically and globally (citing France's Dreyfus case), as important crucibles where moral changes are forged through stylized, narrative-driven argument. ⏰ 00:41:55
  12. The Elusive Political Narrative: Predicting the political alignment on AI personhood is tricky, Boyle admits. He illustrates how easily narratives can shift, citing how vaccines went from a non-political issue to a highly divisive one in a short span. He can envision both conservative arguments against AI rights (idolatry, hubris) and for them (celebrating the "freely choosing mind," potential wealth generation). Similarly, liberals might champion expanding the circle of personhood yet balk if it echoes corporate rights arguments. It will likely depend heavily on how the initial issues are framed and which ongoing "personhood wars" (fetal rights, corporate rights) they get drawn into. ⏰ 00:46:57
  13. The "Yuck Factor" and Its Place: What role should visceral emotional reactions like repugnance or the "yuck factor" play when we consider things like transgenic hybrids? Boyle steers a middle path. He dismisses the idea that such feelings are irrelevant, but also cautions against letting them dictate law without scrutiny, given historical misapplications (e.g., revulsion towards miscegenation or homosexuality). He suggests these strong intuitions should serve as "alarm bells" or "intuition pumps," prompting us to pause and deeply consider why we're having that reaction. There might be a genuine, unarticulated wisdom there, a precautionary signal about where a new technology could lead. ⏰ 01:01:51

James Boyle & Greg LeBlanc on AI & Personhood.
