Good afternoon support group!
I’m on a plane right now departing from Newark Airport headed to Houston, and it was so lovely to emerge out of the disgusting New York pollution to see how many freaking trees actually exist in New Jersey. Makes me wonder what the ratio of trees to people is in the world.
Anyways, today I wanted to complain about that Google employee who got (rightfully) fired for claiming that the LaMDA model he was working on was sentient, because,,, god,,, why. I’m pretty sure he was actually fired for leaking company materials, but still. How… deranged do you have to be.
This might get mean, but I feel like in the same way that Stanford undergrads are often deranged into weird superiority complexes, Google employees really do live in a world of their own - where they can do no wrong, the world is at their fingertips, and they have been anointed by God as technological warrior-martyrs. (I love my Stanford friends tho, Haley Hodge you’re fantastic :)
Because your statistical model is not sentient, sir. And to believe so is… embarrassing… it’s a testament to your pathetic faith in humanity and your cloistered view of the world.
The things that are widely known as “Machine Learning” models or “AI” are statistical pattern-matching models. They do nothing more than regurgitate outputs based on the inputs they’ve seen in the past, by assigning statistical weights that associate things they’ve seen before with the inputs they’re given now. This also means they are completely incapable of creating anything wholly new by synthesizing mediums in a way that has not already been done by a human.
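For the nerds following along, here’s a toy sketch of what I mean by statistical pattern matching. This is a hypothetical bigram model I made up for illustration, not any real production “AI” system, but the regurgitation principle is the same:

```python
# A toy bigram "language model": it can only ever emit words it has
# already seen, weighted by how often they followed the current word
# during training. Pure regurgitation, no invention.
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict[str, list[str]]:
    """Map each word to the list of words observed to follow it."""
    table = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def generate(table: dict[str, list[str]], start: str, length: int = 10) -> str:
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:  # nothing ever followed this word in training
            break          # the model has no "ideas" of its own to fall back on
        out.append(random.choice(followers))  # replay a pattern it has seen
    return " ".join(out)

table = train_bigrams("the model sees the data and the model repeats the data")
print(generate(table, "the"))  # every word here appeared in the training corpus
```

Scale that table up by a few billion parameters and you have the general idea: the statistics get fancier, but nothing in the output exists outside what went in.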
For example, if you were to ask a large language model to generate a brand new medium in which to respond to input questions, it could not do it. If you asked a large multi-modal model to incorporate and create in mediums it was not designed for, it could not do it. The simple fact that statistical models are limited to the mediums through which they measure and then synthesize reality means they are wholly incapable of “Creating,” because they cannot fathom beyond those mediums. This ability to imitate in a known space based on various inputs is not creativity; it is regurgitation. Creativity only happens when non-digital beings synthesize ideas from different mediums to create a new medium.
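And the “limited to its mediums” point isn’t mystical, it’s baked into the architecture. Here’s a minimal sketch (made-up vocabulary and scores, obviously) of how a language model’s output layer works: a probability distribution over a vocabulary that was frozen when the model was built.

```python
# A language model's final layer is a softmax over a FIXED vocabulary,
# so every possible output is one of the tokens chosen at design time.
# Illustrative numbers only; VOCAB and the logits are invented.
import numpy as np

VOCAB = ["cat", "sat", "mat", "<eos>"]  # fixed before training ever starts

def sample_next_token(logits: np.ndarray) -> str:
    """Turn raw scores into probabilities over the fixed vocabulary."""
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    # No matter what the logits say, the output is drawn from VOCAB.
    return np.random.choice(VOCAB, p=probs)

print(sample_next_token(np.array([2.0, 1.0, 0.5, 0.1])))
# There is no logit for a token, medium, or modality outside VOCAB -
# that is the sense in which the model cannot "fathom beyond" its mediums.
```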
For clarity, I am associating the ability to Create with sentience.
I’m very sorry to my English teachers for the structure of this argument.
To think that your statistical algorithm is sentient is to belittle the Holy process that is the brain’s capacity to jumble individual inputs from our material environment in a synesthetic manner before storing them in an equally quantum and evolving space. Creativity comes from this quantum spew. It happens in the infinite dimensions between one medium of thought and another, and it requires a quantum processing power that plots those various mediums on a cohesive frame of reference. Only then can one unfold the space, transpose and transform those ideas onto a new frame of reference, and build something wholly original. This is entirely impossible to model with digital technologies.
Regardless, the argument can be summed up as: no digital statistical model is going to replicate the quantum processes that handle information in our brains. Nor will any digital system fully understand and incorporate the heap of information that we process in between and separate from time and space. Shout out to the witches. Love y’all.
To conclude (what I really wanted to get to): when technologists claim that their “AI” is sentient or that their machine is “Learning,” all they’re really attempting to do is shift any responsibility for the negative impacts of these models’ decisions from the creator of the model to the “Sentient Model” itself.
Sir, just because the statistical model you trained on 4chan uses a neural network does not mean that we should blame it, and not you, for how it was used. I do not care that you cannot understand its black-box inner workings; you designed it that way, and you should be responsible for its fucking insanity.
As you may have noticed by now, I am very opposed to using the words “AI” or “machine learning” without quotes, because those labels work to reify the idea that these creations are somehow not the responsibility of their creators. They are statistical models, and they should be referred to as such.