4 Comments
Ruth Gaskovski:

Wow, this was a long read! I wish these conversations could happen live, but presenting your perspectives in an epistolary format is the next best thing. Framing student LLM use as "triple performance" hit the spot, as did Hollis' point about the importance of individualized relationship (although I think the learning outcomes are vastly different with ChatGPT as teacher). I appreciate that you spent time digging deeply into finding the right metaphors, as it helps to clarify the muddied waters of conversation around AI.

Joel J Miller:

Thanks for weighing in, Ruth! I know you and Peco are busy trying to sort these issues out on a daily basis. Interesting—and challenging—times!

Mark Armstrong:

Jazz vs. Twine-- kinda fitting since "cool cats" are associated with both.

I see your header image was created by ChatGPT. It's a three-quarter shot, which means the upper part of the cat's back hind leg should have been visible, framing his stomach. "Chat" needs to repeat his course in Figure Drawing.

Had to smile about Hollis reading an advance copy of your "brilliant book." 😽😅

Holly A.J.:

That was a lot of words, and the wordiness reminded me that yesterday was Pentecost, the reversal of Babel. But what was the cause of the division at Babel?

"If they have begun to do this as one people, all having the same language, then nothing they plan to do will be impossible for them."

The creators of AI are telling themselves the same thing as the builders of Babel, "Let us make a name for ourselves; otherwise, we will be scattered throughout the earth."

Yet they are building on the sand of human nature, and their tower too will fall. Every mighty civilization in history has crashed; why should the one inventing AI be any different? AI is even more easily obliterated than fortresses: all one has to do is pull the plug. I often think wryly of how blank our civilization would appear to archaeologists of the distant future, how unreadable the information on hard drives and microchips will be when the software and hardware to read them no longer exist. There is already information lost to digital rot. And much information exists that AI has never read and never will read. It does not even have five senses, much less a body of flesh and bone.

But enough philosophizing. The reason I was assigned papers while earning my Bachelor of Science was less to show how I organized my thoughts and more to show that I knew how to gain information from the scientific literature and apply it practically, i.e., to benefit my patients. Here is the problem with using AI in such a case. The studies I was consulting involved real-world effects on real patients, and I was preparing to use those studies to help other real patients. But AI hallucinates references, which means AI would be hallucinating studies that were never actually done. AI also tends to respond to the biases of its interrogator, which means my line of questioning could produce answers that agreed with my thesis. Do you see how dangerous that could be when looking for treatments for patients?
