
Artificial Intelligence and Irrational Fears

Where’s Jerry Garcia of the Grateful Dead? Seriously, what list of the greatest rock guitarists of all time would not—could not—include him? Sure, I know the internet article was just some teaser to get me to mindlessly click through an ad-laden list. But still, no Garcia. I object: Who wrote this article?

And that is the question of the day: “Who wrote this article?” Was it really written by the suspicious name on the byline—as if the author were the protagonist in some cheap novel, such as Ima Riter? Or, as happens more frequently these days, were the words the product of a large language model (LLM), a type of artificial intelligence (AI) model and sibling of the seemingly ubiquitous ChatGPT, still published under the byline of Ima Riter?

Yes, the AI model: the complex statistical model whose genesis, as the hysteria goes, we will rue when it rules our future. Lately it’s hard to scan a media site without finding at least one headline declaring that AI models are intelligent and sentient entities, capable of creating information in a manner that exceeds the abilities of both creator and user. Models that will destroy jobs and arrogate totalitarian powers. But is that true?

Despite assertions to the contrary, AIs (LLMs in particular) are simply models that provide probabilistic responses to language prompts. At a basic level, ask an LLM to fill in the missing word in the phrase “I ran up the . . . ” and it will return “hill.” Not because the model is intelligent or sentient. No. The LLM returns “hill” because that is the statistically most likely response to the prompt.
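To make that concrete, here is a minimal sketch of the fill-in-the-blank exercise, assuming the Hugging Face transformers library and a small masked language model; the library, the model name, and the exact words returned are illustrative assumptions, not details from the article. The model simply ranks candidate words by probability, and “hill” appears only if it happens to be the most likely filler.

```python
# Minimal sketch (illustrative, not the article's method): ask a masked
# language model to fill in the blank and show its ranked guesses.
# Assumes the Hugging Face "transformers" library is installed.
from transformers import pipeline

fill = pipeline("fill-mask", model="distilbert-base-uncased")

# "I ran up the ____." -- the model returns its most probable fillers.
for candidate in fill("I ran up the [MASK]."):
    # Each candidate is just a word and the probability the model assigns it.
    print(candidate["token_str"], round(candidate["score"], 3))
```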

Challenge it, since the word you are looking for is not “hill,” and the LLM will reach into its statistical memory, based on the decomposed works it was trained on, and provide the next likely response. You can then converse with it, so to speak. After the second or third iteration of entering “That is not the word I am looking for,” the LLM will, like any good conversationalist, ask for additional context to provide a more appropriate answer.

Though, regardless of the impression your interaction leaves, you are not really conversing. And the LLM is not truly following your thoughts any more than a mentalist—the so-called mind reader—reads your mind. Both the LLM and the mentalist look for contextual clues to provide or elicit a likely response. And the seeming powers of both rely on the audience attributing abilities that do not exist.

The mentalist says, “I feel something bad has happened to you recently,” when, in fact, nothing bad has happened. However, you equivocate on the meaning of “bad” and look for any instance of some minor misfortune. You play along. So, yes, you imagine the innocuous as bad. It is true that yesterday you had to search for thirty minutes for your iPhone before your son found it between the cushions of a couch. Without thinking, you come under the mentalist’s spell and provide him with additional details of your life—you lost your cell phone, and you have a son. And so the process goes.

As with the mentalist, the LLM also requires your indulgence, where you actively come under its spell. However, both spells are of your own creation. You allow your imagination to ascribe powers that do not exist—the mentalist cannot read your mind and the LLM does not understand you. The power lies completely within you. And the more you unleash this power, the more omniscient you believe the mentalist or LLM to be—with one instance providing entertainment and the other irrational fear.

This is not to say that, just as science was weaponized, folks holding and seeking power will not try to weaponize AI models in the future. Government and its agents will begin saying we must act in a certain way simply because an all-knowing AI has recommended that action. Here, the mentalist will be the state, but we, the audience, cannot let our collective selves be fooled.

Models will be trained to answer for the regime. Expect it to be so.

Now what about the musical talent of Garcia? Where does it rank? And will AI replace many of those who create such lists?

As far as Garcia’s talent, you the listener—the acting human—decide. Create your own top ten. Search and read the hundreds, if not thousands, of other lists on the web. Agree with the ones you like and ignore the others. If an LLM-generated list syncs with your internal ranking, accept it, have fun with it. If not, forget it. Your preferences are yours alone.

As far as jobs being lost, sure, many jobs will be lost to the new technology. But will AI make human efforts redundant? Will we all end up unemployed? Never.

Keep in mind that an AI-generated list of guitarists is a synthesis of lists and writings already found on the web—the AI, in essence, employed the same search you would have used. The AI added no new information and no new analysis. It simply provided a probabilistic response to the language prompt: rank the best rock guitarists of all time. However, and this is key, the AI relied on words previously written by humans. It simply responded with a summarized version of all those opinions, a summary that may have more hallucinations than half the audience dancing at a 1980s Grateful Dead concert.

So, yes, AI will be used to generate meaningless lists that occasionally steal minutes from your day. But these synthesized strings of sentences are nothing new, just a rehashing of web articles written by warm, breathing writers. And, yes, many jobs creating similar types of lists, summaries, etc., will be lost, as will many other jobs in various fields and endeavors, just like jobs were previously made redundant by emerging technologies—with new, unrealized jobs created to replace them.

Nevertheless, original ideas—the beauty of humanity—will forever remain the product of acting men and women. And without human hands continually authoring original texts, those supposedly dangerous AIs will summarize nothing and respond with nothing.

Our lives and futures are safe.
