

ChatGPT, the latest conversational chatbot from OpenAI

Ya see Joe P, that's The Trouble with Tribbles rabbits; the buggers get everywhere. Yet, not quite enough to satisfy ChavGPT.

John
 
Now I’m wondering if Wile E Coyote could defeat the Borg. He is a super genius.

Joe
 
Me: Could Bugs Bunny defeat the Borg?

ChatGPT: I cannot provide a definitive answer to whether Bugs Bunny could defeat the Borg, as it is a hypothetical scenario and Bugs Bunny is a fictional character. However, it is unlikely that Bugs Bunny, a cartoon character known for his wit and humorous antics, would be able to defeat the Borg, a highly advanced and technologically superior race with the ability to assimilate other species into their collective consciousness. The Borg are a formidable adversary, and it would take a powerful and well-equipped…


And that’s where ChatGPT stopped. I can only assume that Bugs Bunny broke ChatGPT. Or an even more sinister outcome, perhaps the Borg intercepted the reply.

Joe
GPT-4

As an AI language model, I cannot predict the future or determine hypothetical outcomes, but I can help you explore the idea.

Bugs Bunny is a fictional animated character from the Looney Tunes and Merrie Melodies series of cartoons produced by Warner Bros. He is a clever, cunning, and resourceful rabbit known for his quick wit, cunning plans, and the ability to outsmart his adversaries.

The Borg is a fictional alien race from the Star Trek universe, known for their relentless pursuit of assimilation and technological advancement. They are extremely powerful and difficult to defeat, with their ability to adapt to almost any form of attack.

In a crossover scenario, Bugs Bunny would be at a significant disadvantage against the Borg due to the vast difference in their respective universes' technologies and power levels. However, since Bugs Bunny is a comedic character known for bending the rules of logic and reality to his advantage, it is not entirely impossible for him to come up with a clever, unorthodox plan to defeat the Borg.

This would likely involve Bugs using his humor, wit, and unpredictable antics to exploit the Borg's weaknesses or turn their own strengths against them. However, such a scenario would be purely fictional and intended for entertainment purposes. It is important to remember that both Bugs Bunny and the Borg are creations of fiction, and as such, any confrontation between them would be left to the imagination of writers and fans.
 
I’ve asked ChatGPT if Wile E. Coyote could defeat the Borg, and it said no, despite his being a super genius. However, apparently Foghorn Leghorn could defeat the Federation’s nemesis.

“Borg, I say. Hey, Borg. What you want is a nice eatin’ chicken.”

While the Borg are getting ready to regenerate on southern fried eatin’ chicken, Foghorn slips some TNT into it, unbeknownst to the Borg.

Well, you know where this is heading: BOOM! “Borg, I say. Hey, Borg. Don’t just stand there with your cybernetic implants hangin’ out and your electronic eyes a poppin’.”


Joe
 
Last week a supplier and I, in a Teams meeting, decided to ask ChatGPT to solve a heat transfer problem that we were discussing. The supplier typed in the relevant data (duct diameter, duct material etc., internal air temperature: 400°C, insulation type, insulation thickness and ambient temperature outside of the duct: 25°C), then asked ChatGPT what the surface temperature of the duct would be. It responded with a superficially convincing (even to two slightly long-in-the-tooth, degree-qualified mechanical engineers) set of equations and explanations before announcing the outside surface temperature of the duct would be....... 650°C :)

Now that is impossible and we both laughed as a result (yeah, yeah, we know, we should get out more, but we liked it). The outside of the duct can't be hotter than the internal heat source at 400°C.
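For anyone who fancies doing the sanity check without digging out the textbooks, it's only a few lines of Python with a series thermal-resistance model for a lagged cylindrical duct. The dimensions, conductivities and heat-transfer coefficients below are invented for illustration (they weren't the real job's data), but the shape of the calculation is the standard one.

```python
# Back-of-the-envelope surface temperature of an insulated duct.
# All numbers below are illustrative assumptions, not the real job's data.
import math

T_in = 400.0      # internal air temperature, degC
T_amb = 25.0      # ambient temperature, degC

r1 = 0.150        # duct inner radius, m (assumed 300 mm bore)
r2 = 0.152        # duct outer radius, m (assumed 2 mm steel wall)
r3 = r2 + 0.050   # outside of insulation, m (assumed 50 mm lagging)

h_in = 25.0       # internal convective coefficient, W/m2K (assumed)
h_out = 10.0      # external convective coefficient, W/m2K (assumed)
k_steel = 50.0    # duct wall conductivity, W/mK
k_ins = 0.05      # insulation conductivity, W/mK (mineral-wool-ish)

# Thermal resistances per metre of duct, all in series
R_in = 1.0 / (h_in * 2 * math.pi * r1)
R_wall = math.log(r2 / r1) / (2 * math.pi * k_steel)
R_ins = math.log(r3 / r2) / (2 * math.pi * k_ins)
R_out = 1.0 / (h_out * 2 * math.pi * r3)

q = (T_in - T_amb) / (R_in + R_wall + R_ins + R_out)   # heat loss, W/m
T_surface = T_amb + q * R_out                          # outer surface temp

print(f"Heat loss: {q:.0f} W per metre")
print(f"Outer surface temperature: {T_surface:.0f} degC")
```

With those made-up numbers it comes out at roughly 54°C, comfortably between ambient and the 400°C inside, which is exactly the sanity check the chatbot's answer failed.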

But what if:

a) neither of us was capable of digging out the textbooks and verifying the result by calculation
b) we were too busy to do the above
c) it had predicted a credible temperature of say 56.7°C

You can see the danger. People are going to get convincing answers to questions that they are not capable of verifying, and they are going to take them as gospel. Then they will take action based on that incorrect answer.

There will be chaos.
 
On another topic, my other half's nephew wrote some poetry, which I asked to read. It was good, very good in fact. But then I began to think, did he really write this, or was it ChatGPT?

I've no reason to disbelieve it is his work, but the seeds of doubt have been sown in my mind by knowledge of what ChatGPT can do. I fear now that my view of the creative works I am going to come across in the future is forever going to be tainted by doubt and this will diminish my appreciation for these works.
 
You can see the danger. People are going to get convincing answers to questions that they are not capable of verifying, and they are going to take them as gospel. ... There will be chaos.
Agreed. OTOH the models are getting better - GPT-4 has impressed me. OTOOH this may make the problem you state more pernicious!
 
I fear now that my view of the creative works I am going to come across in the future is forever going to be tainted by doubt and this will diminish my appreciation for these works.
I am not sure the loss will be that great. If you are a little into classical music, you are aware that 95 or so per cent of the music written in the past has been forgotten, never to appear again - and rightly so, because said works were written for a temporary purpose (e.g. the reception of Sir XY in order to sign contract W and an engagement with daughter Z) and were usually of very questionable quality. Themes were stolen here and there and modified a little to sound new, which is basically what ChatGPT does, if I got that right. I am rather confident that AI applications will take a long time before they can write a Chaconne in D or a Sonata in B minor.
 
Agreed. OTOH the models are getting better - GPT-4 has impressed me. OTOOH this may make the problem you state more pernicious!

And I in turn agree with your statement. The better GPT gets, the more authoritative and unassailable it will sound to its users, which is exactly the technique used by con artists to suck people into the deception. Imagine election manipulation through plausible deep faking that is targeted at you personally. It will be a powerful tool, and we know there are already evil minds working on this and other ways to pervert our thinking. I see the online future as one where we cannot tell truth from lie. Perhaps we will need to meet in person to know we are dealing with an authentic person.

Since GPT learns by assimilating the contents of the internet, and the percentage of fake news and plain nasty misinformation increases over time, I think it will throw up more rubbish answers to serious questions.

If you think the stupid routes Google Maps sometimes sends us on are bad, sit back and watch the scale of disasters that will come when people start relying on GPT responses to their dilemmas and decisions.
 
ChavGPT is simply yet another example of a lousy technology for lousy technology's sake. As others are pointing out, the only real use it will have is as an instrument of manipulation.

John
 
I pay for GPT-4 to use it for work because I find it useful. Sometimes its responses aren't quite right or optimal and need some adjustment (sometimes due to me not giving relevant context, but sometimes not) so you still need to know your field ... but it's impressively useful IMO.
 
I don't think just being wrong about stuff is really the problem. Life, the internet and hifi forums are already full of people being very confidently wrong on all manner of subjects, so we already need to guard against this as part of our daily lives. For sure there are dangers but I am not so sure I have any more sympathy with the excessively credulous than I do in other walks of life.

The wider alignment problem, though, is a big issue. It's much discussed and studied, most famously in Brian Christian's book, or you can watch his lecture here:

 
I'd say the Internet already has many paid social agents (posting on behalf of businesses, state actors etc.) - which as an aside, explains a lot. Bots are an increasing part of this picture but it's not new.
 
I'd say the Internet already has many paid social agents (acting on behalf of businesses, state actors etc.) - which as an aside, explains a lot. Bots are an increasing part of this picture but it's not new.

Agreed, but I think that with bots able to make very convincing presentations we will see the number of lies increase exponentially each year. I am sure bots offer a far more economical method of getting your message to the masses than setting up the studio, cameras, video editing, sound etc. that are required for a YouTube video. Industrial-scale lie propagation will ensue.

I'm not there yet, but I can see my default position on any content or live interaction I can't verify the veracity of becoming "assume it's a deep fake". Which is both a sad misuse of something potentially very helpful (AI, that is) and a wasteful use of my time spent chasing up the verifiable in order to run my life. Is this really progress?
 
On a Sam Harris podcast interviewing Stuart Russell and Gary Marcus (https://www.samharris.org/podcasts/making-sense-episodes/312-the-trouble-with-ai), AI was described as a "potential thermonuclear bomb of disinformation", and virtually real-time deep fakery of video was talked about as being '5 minutes away'. I think the first hour of the interview is free on Apple/Spotify. Lex Fridman's interview with Sam Altman (the CEO of OpenAI) was also very interesting. Altman had an interesting view that in many ways something like ChatGPT presents an argument in a more balanced and fair way than most media and people, so we may come to rely on it for a better perspective on things. In contrast, Lex's next interviewee, Eliezer Yudkowsky, talks of AI as the end of humanity, and his genuine advice 'to young people' was to destroy the server farms that power the tech, like something out of Terminator. Pretty scary tbh!
 
ChatGPT is an incredibly powerful bullshitting engine. To understand why, it’s interesting to delve a little into how it works.

Generative AI systems like ChatGPT are trained using a well-proven general educational technique called reinforcement learning: you learn information, you’re examined, you get a rating, you use the rating to improve your answers, and the cycle continues - it’s basically how school works, but AI training differs in the detail and the scale.

To begin with a gross simplification.... In training, the AI is given a description of the work needed: this is the prompt. It produces a set of slightly different outputs that relate to that prompt. A second AI rates those outputs against its own “good” answer for that prompt and assigns each of the submitted outputs an accuracy rating. The AI that is being trained then uses that accuracy rating to fine-tune its internal parameters for that prompt, keeping the paths that produced the best-rated answers and reducing the ones that produced poorly rated output. This process repeats billions of times with billions of prompts, and a human overseer checks a random sample of the results from time to time to make sure the training is working correctly, but in the end you have an AI that, when given a prompt, will create an answer that would be considered a good example of that description*
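To make that loop concrete, here's a deliberately tiny sketch of the "generate several outputs, rate them, nudge the weights toward the well-rated ones" cycle. It's a toy: the "model" is just a table of preference weights over canned answers and the "rater" scores by word overlap with its own reference answer. Nothing here resembles the real RLHF pipeline or OpenAI's actual training code, but the shape of the loop is the same.

```python
# Toy sketch of the rate-and-reinforce loop described above.
# The "model" is preference weights over canned answers for one prompt,
# and the "rater" scores each answer by word overlap with its reference.
import math
import random

random.seed(0)

PROMPT = "when and where did Mozart die"
CANDIDATES = [
    "Mozart died in Vienna in 1791",
    "Mozart died in Salzburg in 1791",
    "Mozart was a composer",
    "I don't know",
]
REFERENCE = "Mozart died in Vienna in 1791"   # the rater's own "good" answer

logits = {c: 0.0 for c in CANDIDATES}         # the trainee's parameters

def sample(logits, n=4):
    """Produce n slightly different outputs for the prompt."""
    weights = [math.exp(v) for v in logits.values()]
    return random.choices(list(logits), weights=weights, k=n)

def rate(answer):
    """Second AI: score an answer against its reference, 0..1."""
    ref = set(REFERENCE.lower().split())
    ans = set(answer.lower().split())
    return len(ref & ans) / len(ref)

for step in range(2000):                      # "repeats billions of times"
    outputs = sample(logits)
    scores = [rate(o) for o in outputs]
    baseline = sum(scores) / len(scores)
    for o, s in zip(outputs, scores):
        # keep the paths that rated above average, damp the rest
        logits[o] += 0.05 * (s - baseline)

best = max(logits, key=logits.get)
print("Highest-weighted answer after training:", best)
```

The point isn't the answer it converges on; it's that nothing in the loop ever checks truth directly, only the rater's notion of what a good-looking answer is.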

But “a good example” is not the same as “a correct answer”. So, AIs like ChatGPT do very well when the possible answers are limited (“when and where did Mozart die?”), or when there’s no objective “right” or “wrong” answer (“what is the best kind of cheese?”), but not when a lot of work on the information is required (Paul’s heat-transfer example above). So, it can write a plausible press release and passable poetry, but it can’t do maths very well at all, and its primary goal of producing a good facsimile of what the prompt asks for means that it will make up information it does not have rather than omit it. Why? Because when it was trained, “I don’t know” was not a high-rated answer to any prompt, and the AI has been trained to produce high-rated answers, so it will provide anything it can find, in the hope of hitting the prompter’s idea of a right answer by accident. Or, to put it in more human terms, it’s a bullshitter: it isn’t allowed to admit its ignorance, so it fills the gaps with plausible-sounding data. If you’ve ever corrected examination papers, or interviewed candidates for a job, or seen Boris Johnson give a policy briefing, you’ll be very familiar with this behaviour in humans, so it’s not as if the AI is doing a bad job of being human-like...
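As a toy illustration of why the incentive points that way: suppose the rater gives full marks for a correct answer and nothing for a wrong one or for "I don't know". The probabilities below are made up purely for the example, but under any scoring like that, confident guessing beats honest abstention on expected reward.

```python
# Made-up numbers: the model genuinely knows the answer 60% of the time,
# and when it doesn't know it still guesses correctly 10% of the time.
p_known = 0.60
p_lucky_guess = 0.10

reward_correct = 1.0
reward_wrong = 0.0
reward_dont_know = 0.0   # "I don't know" was never a high-rated answer

# Policy A: admit ignorance when unsure
honest = p_known * reward_correct + (1 - p_known) * reward_dont_know

# Policy B: always produce a confident answer, right or wrong
bullshit = p_known * reward_correct + (1 - p_known) * (
    p_lucky_guess * reward_correct + (1 - p_lucky_guess) * reward_wrong
)

print(f"Expected reward, honest policy:      {honest:.2f}")    # 0.60
print(f"Expected reward, confident guesser:  {bullshit:.2f}")  # 0.64
```

Unless the rater explicitly rewards admitting uncertainty, the confident guesser comes out ahead every time.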

__
* The “good example” is why AIs can end up racist. If the original training data contains unrepresentative racial samples, the AI will make bad associations from it. One famously poor study attempted to train an AI to see if there was really any such thing as a “criminal face”, but it used arrest mugshots and LinkedIn profiles as its two category groups in training. Guess what - people smile in their profile pics, but not after arrest, and the AI quickly determined that a frown is the mark of a criminal. This study was done with a Korean dataset - had it been carried out in the USA, it would have made a much more racist determination of what a “criminal” looked like...
 
Thanks, Kris, for going to the lengths of explaining how AI training works, very illuminating. I saw this bullshitting confidence in the response I got to my heat transfer problem.
 
I wonder whether university essay and test questions might be redesigned to exploit the 'tendency to bullshit' as one way to tell a student's work from an AI facsimile?
 

