Last summer I got involved in some conversations with our OpenStax team about the role of AI in education, specifically because I was worried that the newest forms of AI would be very harmful to students and schools. I thought AI's capacity to entrench historical norms by presenting them as new ideas would do nothing good for an education system in need of more resources and new ideas. I also thought that the wild emissions these tools produced would consistently make life worse for the very populations they purported to help.
It turns out I failed to recognize the most terrifying aspect of the 2024 version of AI – its perniciousness. The convo would not go away. In fact, despite my intervening, it started taking over work conversations with more and more gusto. Here I am, about a year later, being asked to outline a grant for a teacher-facing AI tool because I am the most involved in the conversation... wut.
This comes in the wake of a project I ran that attempted to disprove the utility of generative AI by comparing its effectiveness to a non-AI tool. It was during this project that I first noticed this phenomenon of perniciousness: no matter how bad the AI tool currently is, everyone thinks it will improve infinitely and eventually become valuable. So regardless of my insistence that this technology is remarkably bloated, expensive, and unreliable right now, my company remains enthusiastic about my capacity to go out, gather more money, and fix it.
This thing cannot seem to escape the minds of our funders, our organization's leadership, or even some of the teachers, even though it's hard to tell exactly when or how it will make a 9th grader's learning journey more fruitful. I can, very easily, imagine a 9th grader being confused about why he's learning to write, or learning certain topics at all, when it seems like this machine can do it for him.
The Spicy Autocomplete mind-worm has also metastasized at a time when investor dollars are in relatively short supply, and I've now seen first-hand how a company with a steadily growing user base can get upended by funders whose minds are on AI instead. Now, if we truly are on the precipice of a new world order where AI assistants and their big tech masters support every digital task of one's day, then perhaps this focal shift is for the better. But what if... nothing truly pivotal is happening, and instead the pattern-matching machine has simply taken our imagination and run away with it?... Just worth a pause. People love to forget that blockchains and DeFi had a very similar frenzy in 2020, and when that fizzled out, the "metaverse" narratives stepped in and occupied our minds for another 12 months. Oh how I would love that to be the case.
There is a difference to this excitement, for sure. ChatGPT is a much more understandable technology, and interestingly, software developers have found real use cases for these LLMs – writing SQL queries, making sense of gibberish code, or producing starter code – and so their bullishness has spread to the tech business heads. But as someone who's spent the last year being pushed to find use cases for LLMs that go beyond search, I would caution that this rocket-engine of a product doesn't need to be wheeled out to solve household problems. For the most part, a much smaller, battery-sized solution will do.
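To make that concrete, here's roughly what the SQL-writing use case looks like from a developer's seat – a minimal sketch using the OpenAI Python SDK, where the orders table, its columns, and the model choice are all hypothetical stand-ins:

```python
# A minimal sketch of the "LLM drafts my SQL" workflow.
# The schema, question, and model name below are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

schema = "orders(id, customer_id, total_cents, created_at)"  # hypothetical table
question = "Total revenue per customer over the last 30 days, highest first."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You write one PostgreSQL query. Return only SQL."},
        {"role": "user", "content": f"Schema: {schema}\nQuestion: {question}"},
    ],
)

print(response.choices[0].message.content)  # a draft, not gospel – a human still reviews it
```

The point isn't that this is magic; it's that the task is small, well-bounded, and cheap to verify – exactly the battery-sized shape of problem these tools handle well.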
Anyways, I wanted to connect this craziness to something I'd come across recently. Henry David Thoreau, the 19th-century essayist, once wrote a theory of value that my energy anthropology professor would enjoy: "the price of anything is the amount of life that you exchange for it." At face value it reads as the sweat and energy that went into building a thing, but of course it can also refer to the carbon life required to run something, or the life-hours devoted to staring at something. Product-market fit be damned; under this idea, the pricey things in capitalism are those that burn as much energy as a jet engine, or that directly consume life-hours via eyeballs-on-screens.
I was hoping this idea would be helpful so that we could start identifying whose lives go into certain products, instead of lumping them all together into one brilliantly valuable number. Is it the anxiety-ridden children of the future who have only seen screens? Is it the families whose houses flooded? Is it the Kenyan digital workers doing data labeling all day? Idk. Do we think those lives get to haunt their products? Do we think they peddle curses? How can I explain the risk associated with tortured souls to my shareholders in terms of revenue?
I would contrast this kind of haunting with the kind generated by an honorable harvest – one in which the lives of the plants and animals are honored, thanked, and never wasted. Those plants and animals love the people who need and sustain them, their populations work better together, and they share a spiritual connection that extends them both. This is the opposite of haunting – and something I learned about from Robin Wall Kimmerer in Braiding Sweetgrass.
I think I would like to build tools that do not haunt their users. I want to source the tools and the products I make efficiently, and help my users understand each and every life that went into those tools so that they too can share in the spiritual gratification of thanking those lives. I think people will realize it feels better.
For material products, I think this means being upfront about where materials were found, who crafted them and how, and what kinds of networks were used to deliver those tools into people's hands. For software, it means open-sourcing code, making it easy to credit and appreciate tireless contributors, and, most importantly, illustrating the grand history of unsexy software tools in ways that the everyday person can enjoy and revel in.