In all of this discussion of AI and its impact, there is a certain myopia, as if the technologically driven modern world already has all the data on life on earth to feed into AI. I was reminded of how little we actually know this week when I posted an article on learning the Wolof language, an oral tribal language that only within the last few decades has begun to acquire a standardized alphabet and a few grammar manuals. As an experiment, I found an internet AI translator that claimed it could translate Wolof and entered a well-known Wolof proverb. The translation it generated wasn't remotely close to the actual meaning, because too little internet data on Wolof exists. There are hundreds of such languages spoken by people in mainly oral cultures with little to no access to the internet. And language is only one of many areas of human knowledge where we lack all the data, and it will take human effort and ingenuity to find what is missing. Since it has taken several thousand years of recorded human history just to get this far, this present civilization will probably decay, as all previous civilizations have, long before AI has all the data.
Furthermore, the useful applications of AI require human knowledge to restrict the information AI receives, or the results will be inaccurate. As the reports that triggered the stock market crash in July showed, AI gets stupid when fed irrelevant data, and learning from itself only confuses it further. An AI medical imaging application for humans would become inaccurate if fed information on the anatomy of non-human mammals and reptiles. It would serve no purpose to feed the AI used to decode the Herculaneum scrolls information about languages other than those in which the scrolls are written. An AI poetry generator that has ingested everything from beatnik poetry to Petrarchan sonnets has to be told which type of poetry is wanted.
But there is no limit to the ways in which humans can use information. Our five senses and our knowledge of what it means to be human mean that we can quickly learn to lay aside and rule out irrelevant data for any given task, but also that we can recombine apparently unrelated data in creative ways to make something new. An AI fed only poetry written before 1944 won't create beatnik poetry, but humans did.
AI is definitely not good at independent activity. It has to be directed and effectively prompted to produce results we want. What it is good at doing is the intellectual grunt work of data processing, combing through data looking for patterns, and so on. At a fundamental level it’s not creative, nor can it be creative, because it can’t frame the problems it works on. The humans prompting and managing it do that.
Exactly, which makes AI only a very sophisticated data processing unit. We really have had limited versions of AI for years - spellcheck and its frequently aggravating updates, autocorrect and autofill, for instance, or internet search algorithms.
What makes AI seem like such a monstrous leviathan now is that the tech giants are releasing their AI into tools we have used for a long time, like search engines and word processing software, in the mistaken belief that AI is sophisticated enough to weed out irrelevant data. They assume it will make processes that my generation, the millennials, have handled for ourselves since we started learning on Windows 95 easier. But, because AI gets stupider when fed irrelevant data, it is actually starting to make those tasks harder. Take googling information. I used to be able to type a complicated phrase about a relatively obscure topic into the search engine and get a cluster of useful links on the first page of results - even the marketing technique of prioritizing pages that paid to show up first didn't foul it up too badly. Now the first page is far too often filled with irrelevant nonsense, and I have to scroll through the results much longer to find anything useful. When you see personal computer functions you have been familiar with for decades start to noticeably decay in quality due to AI interference, it is genuinely troubling.
Here are some well-known sites that have suddenly become much more difficult to navigate since the release of AI.
- YouTube: I used to love finding new music on YouTube. But now when I type in genre phrases that used to bring up live performances, original music videos, and official album tracks, I get dozens of AI-generated "mixes", all with AI-generated screen-grab images. I sometimes scroll past an estimated hundred of these before spotting a screen grab of a live performance or an album cover.
- Facebook: I only climbed on the FB bandwagon a decade ago to keep up with very scattered friends and family, and my interactions there were almost entirely limited to the social side. In the last couple of years, my feed has suddenly filled with AI-generated pages spewing out content that reads like robotic Wikipedia entries, and I can barely see posts by my friends and family anymore. When I do see them, many of the older generation are sharing AI-generated images and memes and liking AI-generated posts. It is as if they have all become cyborgs of themselves - sometimes I wonder if they've all been hacked by bots.
- Etsy: this site was ostensibly for artisans to sell their handmade items. It was amazing to see the creativity on display. I sew and do handcrafts myself, so I often look for patterns to download. There has been a sudden increase in the number of downloadable patterns, but it is buyer beware, because the often cheaper AI-generated patterns do not actually work.
As an author, this makes me uncomfortable. “… writers with a PhD or master’s degree.” Great, MFA writers helping to program AI writing machines. Hasn’t Big Publishing already been taken over by young, inexperienced, entitled ‘new-think’ writers as it is?
But I suppose as everyone says, ‘change is good.’ Our betters in the scientific and political realm have already inflicted National Socialism and Communism on humanity. Next it will be AI-driven paradise where everyone will sit around and eat rainbow stew and God knows what will happen.
But, on the other hand… AI is the sorcerer’s stone, and if China, Russia, Iran, and North Korea have it, so must we. I can already hear, “If we don’t put more money/effort/etc. into AI, China’s AI will kick our AI’s ass.”
“I see tremendous upside from [AI’s] ongoing development as a creative partner in all sorts of human endeavors,”
Yeah. Me too. But ‘partner’ implies equality. Will it stay a partnership? I doubt it. I’m inclined to say, “it’s human nature that one party will, in time, consider itself a more worthy partner than the other.” But we’re not speculating about ‘human nature.’ Or… are we? Aren’t we designing that into the machine somehow? Human nature and all its flaws?
“As our tools allow us to transcend our native limitations, we are free to pursue other, higher aims—such as, for instance, art.”
Hmmm. I find myself recalling how mental hospitals in the 20th century provided ‘arts & crafts’ to fill up the patients’ time, and prisons had ‘shop’ where the prisoners could work at something so they wouldn’t go ‘crazy.’ And weren't both of these institutions places where humans were prevented from leaving?
Did I ever tell you I was paranoid? Well, I am.
Joel, this is fascinating. Thanks for sharing it!
Sometimes the paranoid are right :)
I don’t think so in this case, but I could be wrong. One thing seems very apparent to me: Regardless of what we think of technological change, it’s pretty much impossible to stall or reverse. Once humans imagine a use for a tool, they will continue to use the tool.
Joel, you're right about that. From the wheel to the cotton gin, to the safety pin to the washing machine - change, invention, technological progress has been, on the whole, benign and good. But this tool, AI... Above you said we were partnering with AI. Maybe. A tool? I don't see AI as a tool. I see AI as a new species of intelligence (not life). Yes, we can use it as a tool now, for busy work, crunching numbers, remembering what 75 thousand pictures of a certain cancer look like... that kind of work. Even our name for 'it' is not quite right. It's an intelligence, but we call it 'artificial.' When (and if, and I believe it's a when) it gains sentience, will it still be artificial? And will it still be a tool, like a slave, or will it be a partner, like an equal? Not so sure.
Love going down this rabbit hole with you.
Such a hopeful reflection. It made me think a lot of Tolkien and his essay “On Fairy Stories,” where he discusses the importance of the subcreative act.
Yes! I’m glad you latched onto the hopefulness I was attempting to communicate. I think a lot of people see the AI question in such black-and-white terms that they can’t imagine a scenario in which we use it for valuable, life-giving ends. That’s something humans can bring to any technology.
Good read. Enjoyed it.
My pleasure. Glad you enjoyed it!
Joel, I love the article. Off and on, I have been reading a biography of Bonhoeffer. This spring I went to the dark side and spent 3 months training AI. I found the process chaotic and AI dumb. AI cannot function without humans.
I’ve been playing around with custom GPTs for work and find them fascinating and very helpful—but that’s only because I’m learning to use them well. As you say, AI cannot function without humans.
Have you found that prompting AI takes a lot of time? I have talked with people who have spent hours and days working to produce an image they wanted.
What an inspiring piece. Thank you. For various reasons, I have been thinking about abandoning Substack. Your writing is a good reason for staying around.
How kind of you to say, Christopher! I’m glad you enjoy the work. I certainly enjoy doing it and engaging with readers who care about it.
I read Eric Metaxas’ biography of Bonhoeffer and was truly in amazement at his life and work. Thank you for this thoughtful piece
A very interesting combination of topics! I think, as a Christian, thinking about AI can be informed by a theology of work as well.
Work is not an evil thing. In and of itself, it is good to be productive; it's satisfying, and it's a wonderful part of a healthy society. It's a lot more than just paid labour. But work also channels a lot of human injustice. It's a means for dehumanisation (a cog in the machine, or a person only having value if they're 'productive' in paid labour) and exploitation (unfair conditions, dependence on employers, signing rights away, those most affected not having a say or reaping any benefits). It can be an idol - we can put our hope in it, make it an end in itself. It's easy for someone's identity to become their work, and for that to impact how they view others.
All of this can be applied to AI. Thinking of AI as a solution to the evil of humans having to do any work is always going to cause problems. Likewise, AI will have the exact same societal problems of exploitation. We need to be aware and responsible and act appropriately. Pattern recognition across datasets is a clever application of human ingenuity.
Probably not the best expressed, but hopefully those with an interest in the topic can follow. Also, I haven't touched on the technological side (the many technologies and applications grouped under the one name, the resources required, where the data comes from, etc).