Children's literature about homework machines
Social media sent me this piece in The New Yorker that discusses Jay Williams and Raymond Abrashkin’s novel Danny Dunn and the Homework Machine in relation to today’s AI products. Students the world over are using AI to cheat on their homework, much as Danny uses a machine to do his in the novel. The article seems to side with Danny, who resolves the moral issues at play by deciding that homework is simply useless.
I never read that book, but I was reminded of two other pieces of children’s literature about “homework machines.” Because I cannot bring myself to do my own homework this snowy Sunday morning, let me put it aside and tell you about homework machines instead.
The first piece of children’s literature on my mind is Dan Gutman’s The Homework Machine (2007), a short novel about a group of four fifth graders who invent a “homework machine.” Looking back at the novel, I am struck by its prescience: not only in the ideas it offers, but in the characters through which it offers them.
Take Brenton, the clever elementary school “geek” who invents “Belch,” the titular homework machine. Belch can solve fill-in-the-blank tasks (a mode of operation eerily reminiscent of BERT, a major language model), finding information by looking it up on the Internet and cross-checking it against multiple websites. We do not get more details of how exactly Belch works, but it would not be a stretch to imagine Brenton training a little language-model-based search engine. After all, Brenton is clearly preoccupied with the statistics of language, at one point graphing the rate at which people call him “geek” over time.
In Brenton I see the irresponsibility of a researcher so enraptured by the possibility of technology that he cannot pause to consider its effects. For example, in one incident he creates and deploys a computer virus just “for the fun of it.”
Somebody somewhere has to be the first one to do something. I thought it would be interesting to create a fad.
So I designed this software program. It was fairly simple. It took the words “wear red socks to school on Thursday” and duplicated it and inserted it randomly into documents. I guess you’d call it a virus because I sort of let it loose all over the Internet and people passed it around. I didn’t tell anyone at school about it. I just did it for the fun of it.
Brenton also shows a behavior oddly reminiscent of today’s language model researchers: giving vague, anthropomorphized diagnoses of poorly-understood capabilities, and expressing pride rather than consternation at that lack of understanding.
It had probably discovered some obscure Web site that described an alternative energy source that we couldn’t begin to understand. I was frustrated that I could not seem to turn the thing off, but at the same time I marveled at the power of artificial intelligence. I was proud of it, in a way. It had evolved, with no help from me.
For all his foresight, though, Brenton gets one thing wrong: he thinks it is harder to make machines that are sometimes-wrong than machines that are never-wrong (“It’s easy to design a machine that will work perfectly all the time. It’s harder to design one that will work perfectly just most of the time. It goes against the nature of machines”). Of course, today we know the opposite to be true: language models “hallucinate” plausible but incorrect information all the time.
If Brenton is the “brains” of the Belch operation, then the “business side” is handled by his classmate Sam Dawkins. Sam is opportunistic and enterprising: his response to Brenton’s red-sock virus is simply “Think of the power!” As you might expect, he is immediately enamored with Belch, and his first reaction is to capitalize on it by befriending its creator (“Suddenly I realized that for all his dorkiness, Brenton [who built Belch] was the kind of kid I really want to hang out with”). His defense of Belch is all but identical to an AI enthusiast’s defense of ChatGPT today: he begins by accusing his accuser of being a Luddite. He is smooth and manipulative, and can speak at length about why using Belch is not just not-immoral, but perhaps even moral.
When cars were invented, people didn’t keep using their horses and buggies, did they? When the telephone was invented, people didn’t keep sending telegrams. When the computer was invented, people gave up their typewriters. Same thing here. …
… Wouldn’t you like to take it easy a little? Wouldn’t you like to have more time to do other things besides homework? For all your work, you’re still only the second smartest kid in the class. Brenton is the smartest, and he doesn’t have to do any homework at all. He’s got a machine that does it for him. Is that fair to you?
In short, Sam reminds me of the typical Silicon Valley entrepreneur, hustling to extract all he can from this new technology (“We could call it McHomework”), with characteristically little regard for those who make it possible.
For all his smarts, the guy [Brenton] just didn’t know how to cut a deal. Fine with me. That was the way I looked at it. If somebody’s gonna give you something for nothing, take it.
Then there is their classmate Judy, who teaches us the peril of becoming dependent on homework machines. Though she intends to keep her distance from Belch, she is soon hooked on its convenience.
I thought that I would just try Brenton’s machine a few times and then go back to doing my homework the old-fashioned way. But I realized that I liked not having to work so hard on my homework. …
… I had stopped doing my homework on my own entirely. I never even thought about doing it that way anymore. It was so much easier just using Belch.
She compares the machine first to the convenience of microwave popcorn, and later to the addiction of cigarettes. By the end, she is convinced of its harmfulness: “The machine was using us instead of us using it,” she says.
Judy’s mother has no idea what Judy is up to, and has an interesting reflection on the way homework machines can make their users look smarter than they really are, particularly if that is how we want them to look.
She [Judy’s teacher] said that Judy was just an excellent student, and that was probably why she finished her homework so quickly. I wanted to believe that, and so I did.
Finally, there’s the teacher Miss Rasmussen, who shows us readers that overworked public-school teachers simply won’t be able to help students who appear, on paper, to understand everything: not only because homework is a means of diagnosing students’ strengths and weaknesses, but also because teachers are easily lulled into a false sense of efficacy.
I tend to notice the kids who make a lot of mistakes, not the kids who get everything right. … I hope that after I’ve been teaching for a few years, I won’t be so overwhelmed with work and will be able to pay more attention to things like this. … I so wanted my students to be successful, and I wanted to think they were successful because of my teaching.
She eventually decides to give a “surprise test” to see whether her students have really learned anything, and indeed, the homework machine’s users do suspiciously poorly. In the end, she reminds her students that they already have a homework machine: their brains.
As I reflect on these children and the adults in their lives, I can’t help but see us (AI researchers, engineers, and entrepreneurs) as children, and wonder if the adults in our lives (policymakers, journalists) are just as oblivious to what is going on as Judy’s mother and Miss Rasmussen were to Brenton and his friends.
Just as Judy’s mother wanted to believe that Judy was an excellent student, and just as Miss Rasmussen wanted to believe that her students were successful because of her excellent teaching, perhaps those who should be keeping an eye on us also want to believe that our technological endeavors are harmless and beneficial. But it is still their job to check, to throw out “surprise tests” from time to time—and I hope they do, because if we are being honest, I do not think we have “done our homework” before deploying our current generation of AI products.
I said I had a second piece of literature in mind. It is Shel Silverstein’s poem “The Homework Machine,” from A Light in the Attic (2005). Read it below, or listen to it read aloud by the poet.
The Homework Machine
by Shel Silverstein
The Homework Machine,
Oh, the Homework Machine,
Most perfect
contraption that’s ever been seen.
Just put in your homework, then drop in a dime,
Snap on the switch, and in ten seconds’ time,
Your homework comes out, quick and clean as can be.
Here it is— ‘nine plus four?’ and the answer is ‘three.’
Three?
Oh me . . .
I guess it’s not as perfect
As I thought it would be.
I almost needn’t say anything more at all! The poem so perfectly captures the contemporary AI experience: to be captivated by a product (note the need to “drop in a dime”), to be mesmerized by its superficial eloquence (“quick and clean as can be”), but ultimately to be disillusioned by its endless common-sense failures and “hallucinations” (“‘nine plus four?’ and the answer is ‘three’”).