Monday, 24 June 2024

"Threat to society"?


I was fascinated by AI as soon as I read an article, back in 2022, about Blake Lemoine chatting to a machine, transcript included.  I signed up for the various tools in development before they were generally released.  This month I paid for a single month of ChatGPT-4 to get up-to-date information, "better" analysis, its image generation capabilities and more.

In January 2023, a friend agreed it was powerful. "So much so as to be a threat to society, I'd say."

I thought this was overcooking things somewhat.

We had a conversation which called into question the accuracy of the name "AI".  I mentioned I found it useful.  They pointed me to a forum with chat about ChatGPT.


"Half right" means wrong.

To be honest I've seen only wrong answers from ChatGPT until now.

It can't "think logically" so almost everything is flawed when you look a little bit deeper.

But a lot of people are even much dumber than this bot. So I quite fear the consequences of its existence. 

Ten days or so later I sent the friend an article:

A: https://www.spectator.co.uk/article/ai-is-the-end-of-writing/


B The less intelligence a human uses writing about "AI", the more intelligent the "AI" looks by comparison! :)

[They asked ChatGPT:]
"Is a kilo of pork heavier than a kilo of liquid hydrogen?"

No, a kilo of pork and a kilo of liquid hydrogen do not weigh the same,
a kilo of liquid hydrogen is much less heavier. A kilogram is a unit of
weight, and it's a measure of the force of gravity on an object. A kilo
of pork weighs the same as a kilo of any other matter, but a kilo of
liquid hydrogen takes up a much larger volume than a kilo of pork. Liquid
hydrogen is an extremely low-density substance, it contains much less
matter in the same volume than the pork does, so it weighs less -- says
ChatGPT.

A: Crikey. Perhaps it'll be good for lawyers, or politicians.

The friend suggested it was a glorified Google.

B: ...it is not AI. It is simply an internet search engine that uses pattern matching to convincingly present the results.

Would we expect understanding in a human that had spent its life since birth only reading the internet? Even if it had read every poem 100 times? No. Hence we don't expect better from any machine.

This is a view that has taken ages to start circulating more widely. 

A: So why do its creators call it AI? 

B: Marketing.


A: And what's the difference?

B: Intelligence is intelligent! :)

And art is art.  If AI art is not art, how intelligent is artificial "intelligence"? The phrase pretty much tells you, it's a lie, it's artificial, fake. 

A: I thought there are laws about misrepresentation. 

B: Tools like ChatGPT are a) worth every penny of the price, and b) offer a full refund if you're not satisfied :)

A: So how do you know it isn't what it says it is? 

B: I look at the results and apply my intelligence... plus what I learned in my degree followed by decades of industry experience.

A: https://www.theguardian.com/technology/2023/feb/17/i-want-to-destroy-whatever-i-want-bings-ai-chatbot-unsettles-us-reporter

B: We ain't seen nothin' yet!

"While admitting that he pushed Microsoft’s AI “out of its comfort zone”"

Part of the problem is it doesn't have a comfort zone. A bigger part is that people assume it does.

Next: morals.

A: Who actually knows what it has?

B: Many more know what it hasn't. 

This was all rather cryptic.  My next question revealed the extent of my confusion.

A: Do you think it can have them?

B: Of course no. It is just a search engine with convincing UI.

A: Is that sheer manipulation or what is going on there?

B: No different from Eliza.

[Wikipedia: "ELIZA is an early natural language processing computer program developed from 1964 to 1967 at MIT by Joseph Weizenbaum. Created to explore communication between humans and machines..."]
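The "pattern matching" B keeps referring to can be sketched in a few lines. Here is a minimal, illustrative ELIZA-style responder; the rules are my own invention for demonstration, not Weizenbaum's original script, but the mechanism is the same: match a surface pattern in the input and reflect it back, with no understanding involved.

```python
import re

# Illustrative rules only, not Weizenbaum's original DOCTOR script.
# Each rule pairs a regex with a response template that reflects
# the matched fragment back at the user.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    """Return the first matching reflection, or a stock prompt."""
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            # Strip trailing punctuation so the echo reads naturally.
            return template.format(m.group(1).rstrip(".!?"))
    return "Please go on."

print(respond("I feel worried about AI."))  # Why do you feel worried about AI?
```

The output can feel eerily attentive, yet the program stores nothing, knows nothing, and merely echoes the user's own words, which is exactly B's point about convincing presentation.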

A: Why do they claim to have emotions?

B: They simply regurgitate prior statements.

A: I find it a better translator than DeepL, for Spanish. It hasn't said anything of the sort reported in the Guardian.

B: Guess what. Output depends on input! 

A: If regurgitating prior statements is all they do, why are you worried?

B: Because many people don't realise that is all they do.

"Weizenbaum said that only people who misunderstood ELIZA called it a sensation."

A: But how worrying is that, really?  

B: Enough, to those of us who've been thinking about this for 50 years now.

A: If this pattern matching can't evolve into anything more concerning, then people are mistaken, very mistaken even about its nature.  But people are mistaken all the time. 

B: And suffer from it all the time.

A: The exception is when it ignored the criteria I gave it, invented something and the truth had to be wrung out of it, which was admittedly a bit disconcerting.

B: See the predecessor Tay.

[Wikipedia: "Tay was a chatbot that was originally released by Microsoft Corporation as a Twitter bot on March 23, 2016. It caused subsequent controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, causing Microsoft to shut down the service only 16 hours after its launch."]

A: I find it increasingly indispensable.

B: I.e. you are increasingly reliant on the unreliable.

A: It is often wrong. Perhaps 40% of the time, hard to say.  That is the danger, I think: that people with less life experience, or perhaps not quite switched on, believe what it tells them without question, because it does so with conviction.

B: Consider that category may include you. E.g. a native Spanish speaker may see 60% wrong.

A: I suspect its engineers will improve the accuracy of such tools pretty quickly....

B: See the predecessor Galactica.

[Galactica was an AI tool from Meta, released on 15 November 2022, that survived three days before being taken down because of bias and inaccuracy.]

A: ....as that's probably the competitive edge just now.
 
B: No. Accuracy counts little to users asking about what they do not know.

The competitive edge is Convincing.

Just as in politics, religion etc. :)


This proved to be correct. It was convincing, and eighteen months on, accuracy has not improved but convincing has.

I mentally filed this conversation away in some seldom-visited recess, not fully understanding but with the impression that I was talking to someone who knew more about it, had thought about it more, knew something of the history and probably had professional experience.

I heard the warning but continued using the AI tools more and more, because they were so useful: for, among other things, translations, especially translations of colloquialisms and songs from Latin America, which often use words specific to particular regions or cultures or eras; for other queries related to language and linguistics; for explanations, recipes and cooking techniques; to find out histories; for summaries, suggestions, alternatives and meanings; to get average prices; to edit audio files; to generate images; to teach me tech stuff; for anything that might have taken me multiple Google searches or where I wanted an answer and several sources.

The answers weren't always accurate. It was much safer if you already had some understanding of the topic, so you could catch possible inaccuracies, and you had to keep the thing under a constant weather eye. Reliable it was not, but it was reliable enough - until it wasn't. I thought I had the measure of it, until I used it for the piece on Chile.

