Wednesday, May 17, 2023

The Frightening Prospects of Artificial Intelligence

OpenAI CEO Sam Altman testifying in the Senate yesterday (AP)
2001: A Space Odyssey. That 1968 film by Stanley Kubrick took a look into the future (33 years ahead, to be exact) to see what the world could realistically look like. It was science fiction at its best. Although the primary focus was on space travel, Kubrick was quite prescient about the future of Artificial Intelligence (AI). For those of us who saw the movie, who can forget the final scenes where HAL - a computer that was supposed to follow the astronaut's voice commands - instead took control away from him, since it was programmed to save itself first and override any other command.

Think that could never happen? Think again.

With AI in the news, I have had some disturbing thoughts about it - especially since it continues to advance at what seems like breakneck speed. Last night, out of curiosity, I decided to give ChatGPT a whirl. I asked it to write an essay about Orthodox Judaism and LGBTQ issues. After a couple of instructional tweaks, it generated an essay that could mistakenly but credibly be attributed to me. Although it isn’t exactly what I would say, it was pretty darn close. (For those interested, it can be read at my (far) less traveled blog EvE II.)

It is indeed disturbing that there is technology that can so easily be used this way. Anyone unscrupulous enough could easily decide to ask ChatGPT to write an essay that is the exact opposite of what I would say – and sign my name to it. Who would know the difference if it were written in my style?!

But even without that, AI can destroy creativity. I take great care to try to be an original thinker - and then transfer my thoughts to (virtual) paper. I try to write in ways that will be thought provoking and yet easy to read. My opinions have been formed by a lifetime of learning from a variety of mentors of varying degrees of influence - and by life experiences. All of which is filtered through the lens of my Hashkafos.

I have honed my writing skills (such as they are) over many years of practice - writing a new post almost every day of the year. I have been doing this for nearly two decades. My following of a few thousand readers (not all at the same time) is relatively modest compared to influencers that have millions of followers. Even though I have readers who are heterodox, secular, and even a few non-Jews who drop in to see what I have to say, my target audience is Orthodox Jews. I take pride in reaching as many of them as I do. Agree or disagree, I hope that I've built some credibility over the years.

But AI challenges that credibility. How is anyone to know if what is posted was actually written by me? Who is to stop me from using AI to write a post - other than my own conscience? 

Using AI to generate a post in a matter of seconds is very tempting. But it would be a fraud and something I would absolutely never do. 

AI can kill more than creative writing. It can kill jobs that do not rely on physical activity - like journalism, accounting, legislating, and teaching.

Who is to say that an unscrupulous ideologue will not publish an article with ‘quotes’ by public officials that they never said – and link to a fake article as ‘proof’ they said it?

Voices can be faked with pristine accuracy. Even visuals can be faked. I saw a video not long ago of a politician saying something he never said. You could not tell it was faked. AI did that. 

With the incredibly fast advances taking place in technology these days, there is no telling where this will lead. One scenario could usher in World War III: an unscrupulous programmer with mass-murdering tendencies hacking the codes the President uses to initiate nuclear war. And I’m sure that I haven’t even scratched the surface. That is frightening.

If you don’t believe me, believe what OpenAI CEO Sam Altman said at a Senate hearing yesterday. From AP:

Pressed on his own worst fear about AI, Altman mostly avoided specifics, except to say that the industry could cause “significant harm to the world” and that “if this technology goes wrong, it can go quite wrong.”

Altman suggested government regulation that would prevent any damaging misuse. Normally I am inclined to be a libertarian when it comes to government control. (Translation – I don’t like it because it stifles freedom.) But in this case, I don’t think we have a choice.

That said, I’m not sure it will help. The horse is out of the barn. People are going to ride it to the best of their ability, regardless of any so-called preventative controls. As we all know, they can be hacked - sometimes even by a bright 12-year-old in his bedroom.

True, AI can be a boon to mankind by shortening the time it takes to advance technologies. It isn’t that far-fetched to say that a cure for cancer will be found a lot faster through the use of AI. That is the upside that mandates using the technology. But at the same time, the downside is too terrible to think about. Because just as it might advance cures, in the wrong hands it might also advance diseases that would be as devastating to mankind as a nuclear war.