🛠️  Hacking with Hamlet  👑

The Burghers' Quiet Truth

This essay is unusual for this website: longer and more thoroughly sourced. The reason for that is that it has grown and changed over the months, as the world and my thinking have grown and changed. It has become a kind of living document for me to nurture. It was last updated in March 2023.

Once upon a time, when I was a young computer science student at Stanford, I got myself into trouble. Never mind how, exactly— that is not the point of this essay. What matters today is how I got out: walking home on campus that night, I passed a sculpture that changed my life.

If you’ve ever visited Stanford, you’ll know the sculpture I mean. It depicts a scene from the Hundred Years’ War: six noblemen in the besieged port city of Calais sacrifice themselves to save their fellow citizens. You would expect these heroes to be towering, glorified figures, but walk up to them and you’ll see them as the sculptor Rodin saw them—fear in their bowed eyes, despair in their knotted hands—posture so unconventional that Rodin’s patrons simply refused to display the sculpture at first. A century later, I stood among those “Burghers” and sobbed as I, too, saw the gritty everyday horror of doing the right thing.

This is what artists do: they see anew, they show anew. In ways small and large, they tap into our shared humanity to change our minds and our lives.

So why are we so eager to replace artists with AI— which has no humanity to share, no life to change?

The allure is clear, of course: today’s AI products output content instantly and for free. Language models like OpenAI’s GPT-3 output strikingly coherent text, even fooling some into thinking they are sentient. Diffusion models like DALL-E 2 or “Stable Diffusion” output visually attractive images that are already winning blue ribbons in art competitions. Google and Meta are exploring video and Stability is working on music. It’s no surprise that people are excited.

But as an AI researcher myself, now a PhD student studying AI and computer graphics at MIT, I do not feel excited. As much as I want to love these beautiful algorithms—as much as they are manifestations of my childhood dreams, dreams that guided me into my very career—I can only find myself heartbroken when I think about today’s AI products.


Heartbreak? But what is there to mourn? “Generative AI” products are marketed as “creative tools”—“magical” and the stuff of “dreams”—so you might wonder how they could possibly harm anyone. Know, first, that they are built by ransacking and misappropriating artists’ work: that is, by training on enormous corpora of human-created texts and images bulk-harvested without their consent. These corpora were collected for research purposes, but are now used in commercialized technology (some call this a kind of “data laundering”). The corpora may include contributions from you, maybe even from your private medical records.

Because this data is all that AI products have seen, it is all they can output: hence, language models are easily coaxed into regurgitating entire copyrighted paragraphs and sensitive information verbatim from the Internet. Diffusion models are known to perform best when deliberately “steal[ing] the look” of specific artists and their characteristic styles, occasionally outputting stock image watermarks and artists’ signatures, and researchers have shown that they “blatantly copy” and output images “memorized” from their training sets.

This behavior would be unacceptable from any human writer or artist, but is inevitable from AI optimized to reproduce its training data. The products are pastiche machines, clichĂ© factories. Even the human “prompt” given to the AI product is often largely an instruction for whose work to exploit. Prompts often end with the brazen phrase “in the style of so-and-so,” which leaves the exploited artist feeling violated, dehumanized, anxious, and harassed. It is not because a machine can so easily imitate their life’s work—even a photocopier could do that. Rather, it is because that machine’s users so easily treat pieces of the artist’s selfhood as cheap fuel for their own profits.

I do not think those users are malicious, for the most part. I think they are deceived. It is so easy to sit back and enjoy the illusion of originality these models give you, so easy to commend yourself for adapting to a new tool, so easy to fool yourself into thinking you are thinking. But, as Ted Chiang explains, “starting with a blurry copy of unoriginal work isn’t a good way to create original work.” To press that button is to stir an unfathomably deep cauldron of stolen labor, and I can’t bring myself to do it.


Though today’s AI content creation products are obscured by the mystique and impenetrability of neural networks, their business model is really nothing new. When I think about the products, I’m reminded of the infamous “First Quarto”: a bizarre bootleg Hamlet that was produced (scholars believe) by memorizing, reconstructing, and reselling Shakespeare’s script. “To be, or not to be / ay, there’s the point!” it parrots stupidly— it can capture the Bard’s cadence, perhaps, but never his piercing insight into human nature. Today’s AI products are no better than that 17th-century pirate: they peddle cheap patchworks of memorial reconstructions. It is creation born of greed and impatience, for those who crave aesthetic payoff without facing what one filmmaker calls the “development of the soul” and one songwriter calls “the complex, internal human struggle of creation.”

But of course, soulless imitation is still imitation. Well then, can the original artists at least demand acknowledgement and compensation from AI companies? You would think we in the AI community would know the value of citations, on which our careers are made— but for now, we have thrown up our hands and decided that it is simply impossible at this colossal scale.

Instead, AI companies claim that their products are “fair use,” likening their algorithms’ “training” procedures to a writer reading many books or a painter seeing many pictures in a lifetime—a legally untested and questionable theory, which at its worst is nihilistic and dehumanizing. To be clear, your media consumption is a minuscule fraction of AI’s diet, and as researchers have repeatedly argued, you don’t “learn” by memorizing patterns in data the way AI does (you didn’t need an entire Internet full of giraffe pictures to learn what a giraffe is— the first one or two were enough!). Your understanding of the world is grounded in lived experience in a physical and social world, and thus you have your own original, transformative ideas and experiences to add to your influences. We have always drawn such lines around questions of originality and engagement with source material: we distinguish morally between Thomas Kyd’s The Spanish Tragedy, which inspired Shakespeare, and the bootleg “First Quarto,” which plagiarized him.

Similarly, we accept parodies, remixes, collages, and “appropriation art,” because they require artists to engage thoughtfully and empathetically with the source materials, recontextualizing them through their own lived experiences. It is one thing for Mostly Other People Do the Killing to perform a note-by-note re-enactment of Kind of Blue, or for Andy Warhol to painstakingly reproduce Brillo Boxes by silkscreening, or for Pierre Menard to understand and embody Cervantes so well as to reproduce Don Quixote himself. It is entirely another to use an algorithm to conveniently stitch together words and images from a dataset without even looking at them—perhaps precisely to avoid looking at and learning from them.


I began writing this essay in August of 2022. Week after week it grew, and draft after draft spread around MIT. Patterns emerged in people’s responses. I learned that even as a practicing AI researcher, it is easy to find yourself dismissed as anti-progress or “Luddite” for questioning how the current generation of AI content creation products is being deployed. When I ask my colleagues what they think, they are likely to shrug and explain to me how art has long benefited from technological innovation—how the printing press, the camera, and the synthesizer all created “new artistic possibilities.”

It frustrates me: a tempting analogy, easy and candy-coated, superficial and erasing of nuance— like “AI art” itself. The analogy forgets that none of those innovations were powered solely by a monstrous injustice to artists past and present. In its monomaniacal preoccupation with “new artistic possibilities,” it forgets too that we already turn away from some existing modes of expression for their harms: we regulate the trade of ivory though its carvings can be beautiful; we hold blackface minstrel shows to be in horrifyingly poor taste though in their time they were popular and well-attended. (I am not writing about the broader harms of generative AI products today, but I would be remiss not to mention that they already facilitate widespread harm through abuse and negligence: nonconsensual pornography, academic dishonesty and cheating, insecure code, bias and falsehoods in scientific papers, floods of disinformation, irresponsible use in medical settings, and more.)

Most disappointingly, the analogy forgets that at its best, AI does create genuinely new artistic possibilities— as a medium for interrogating what it means to be human, but also as a technology that complements and even grows the artistic enterprise. Think of NVIDIA’s AI-accelerated rendering, which enables the beautiful real-time 3D graphics of today’s video games, or my labmates’ work with Adobe on AI-assisted cameras that enable “night mode” on smartphones. The complaint is not with all AI technologies, current, past, or future: it is with the few truly problematic ones that dominate our attention and discourse.

In general, there is nothing intrinsically wrong with using AI technology to offset meaningless drudgery for humans, in art or otherwise. For example, automated programming assistants built on the same GPT-3 technology are broadly accepted and welcomed (as long as they are built fairly, without ignoring licensing terms of the training data or regurgitating proprietary code verbatim, behavior for which a class-action lawsuit has been filed against GitHub’s “Copilot” product). But what the current wave of “generative AI” products targets isn’t meaningless drudgery—it is precisely that irreplaceable work of being human, of seeing-and-showing-anew that is ceded to databases of the past. Totalizing by design, and at unprecedented scale, AI companies propose that a few minutes of unskilled “prompt engineering” (“engineering,” not “artistry”) is all anyone needs now that they have conquered and colonized creation. They speak of “democratizing” art—as if freelance illustrators and novelists were somehow holding it hostage!—but to me their language of “force multipliers,” “killer apps” and “explosions of creativity” to which artists “adapt or die” rings only of violence and devastation. “Art is dead, dude,” says the enthusiast who won the blue ribbon with his AI output, “It’s over. AI won. Humans lost.”


It may be a little soon to say “lost.” But it is true that at stake here is not only artists’ professional dignity, but also their livelihoods. In a world where passable, just-good-enough content can be summoned instantly for pennies, it is hard to imagine a sustainable market for all but the highest strata of artists (AI products acknowledge this themselves). As Roald Dahl writes in a prophetic 1954 short story, “The Great Automatic Grammatizator,”

The quality may be inferior, but that doesn’t matter. It’s the cost of production that counts. And stories—well—they’re just another product, like carpets and chairs, and no one cares how you produce them so long as you deliver the goods. We’ll sell them wholesale, Mr Bohlen! We’ll undercut every writer in the country! We’ll corner the market!

Of course, some artists will have the economic security to keep pursuing their passions. Some will think deeply about AI and use it thoughtfully to create marvelous experiences. And some will mindlessly ride the wave to fame and fortune: “Make money from this sector if you want to make money,” suggests the founder of Stability AI, “it’ll be far more fun.”

But how about the rest? How cruel and entitled would we be to displace the very artists who fed the machines replacing them? Not only the renowned poets and painters, but also the freelance anime artists, voice actors, and beloved children’s storybook authors, who each bring the little bit of love and creative oomph that in aggregate turns our cultural wheels: why are we so eager to deny them their already-grueling livelihoods?

Even if we decide that artists are the inevitable economic casualties of technical progress, I do not think we are ready for the impact of their loss. Are we sure we want a world of endless simulacra, churning and re-churning the past, pressing that button until we are given something we like—what one former algorithm artist calls the slot machine-ification of art, the “corporate capture of the imagination”—what the founder of Stability AI himself calls “poop[ing] rainbows,” as if mixing all our paints produces anything brighter than a dull gray sludge? Do we really wish to be served up an infinite feed of sterile content to consume, as if we are livestock to be fattened and not live minds to be challenged—comforted—changed?

I have to say, I have not prepared myself for the overwhelming loneliness of that world. I don’t know how. In that world I am no longer primed to attribute expressive intent—that fundamental affirmation of shared humanity—to texts and images. The bully on my left taps my shoulder on the right; day by day, I learn not to look. I become skeptical of honest expression and anxious that even my therapist is a mirage; even in moments of profound sorrow and loss I am given hollow AI-output text to chew on for solace. Where will I turn for the wisdom-intimacy-truth of one mind engaging another, for the moment of communion that an artist facilitates with their work—to be shown anew? Will my future children doodle me on their tablets, or will they ask a diffusion model for an aging Indian man? Will they write their own wedding vows? Will someone stop to write my obituary, or will a language model be prompted to synthesize my visible achievements from the statistical wash of countless others?

Call me dramatic if you will— even that would acknowledge my human vulnerability. I am young and this is my field and my future. I worry. Day after day I walk past the graffiti on Massachusetts Avenue, and I find myself hoping that artists won’t let us get away with it.


I hope because I know it doesn’t have to be this way. Tech enthusiasts speak in the language of inevitability: “cat out of bag,” “genie out of bottle,” “floodgates open,” “no going back now.” But while new technological discoveries may be inevitable, their place in our society is up to us all. We embrace most of them, but not everything makes the cut— and when we exercise that agency, we are not stepping back, we are stepping forward. Just like our predecessors dared to imagine a future beyond leaded fuel, CFCs and DDT (often to great ridicule from oil and chemical companies), I challenge us now to imagine a future beyond the current generation of AI content creation products: a future that regards humanity not as an inefficiency gumming up the creative enterprise, but as the very enterprise itself.

Some days I wake up and find the world reacting already. AI output has been banned or limited by artistic communities like Newgrounds, marketplaces like Getty Images, writers’ platforms like Medium, and even programming communities like StackOverflow, AI conferences like ICML, journals like Science and Nature, and NYC schools. Artists have pushed platforms like DeviantArt and Kickstarter to enact protections, and they are organizing to protest platforms that refuse, such as ArtStation. Organizations like Bad Hand Books, 3dtotal, the Society of Illustrators, and even machine learning publication Towards Data Science have stated that they will proudly commission original, thoughtful work, as publishers have for centuries. Researchers are designing new techniques to protect artists. Meanwhile, artists continue to speak out against publications that economize with lazy AI illustration, as readers did at The Atlantic, publishing houses like Tor Publishing that try to pass off AI output for cover art, video game developers that use AI content creation to generate free assets, artistic tools like Adobe’s that threaten to scrape artists’ works-in-progress, news sites like CNET that are publishing AI-written articles rife with plagiarism, and apps like Lensa that profit from their users’ images. AI output has for now been deemed ineligible for copyright protection. A class-action lawsuit has been filed against Stability AI, DeviantArt, and Midjourney; Getty Images has sued Stability AI as well.

Yes, on those days I feel some hope— but even that hope is swaddled in a dark sorrow. Ad-hoc bans are stop-gap solutions, and thoughtful regulation takes time to forge. And what does that boarded-up future look like, anyway?

Between you and me, I’m playing for higher stakes here. If there is anything I stand for, it is a profound unity between the arts and the sciences—in Lisel Mueller’s words, a vision of islands as lost children of one great continent. Today’s AI products threaten that oneness: technologists who are embarrassingly hostile to artists, artists who then take up arms against technology. But I love both my parents. I want us to do better. I want us—all—to create with intention and empathy. And I want us to listen: if not to each other, then at least to Rodin’s Burghers whispering their quiet truth into the heart of Silicon Valley.