

ChatGPT, the latest conversational chatbot from OpenAI

Aye. Although there was some stuff I was reading recently about getting it to "do science" or at least propose experiments.

Karl Friston has some interesting ideas about A.I., which to my untrained eye look like a feedback system that minimises the difference between what a system expects and its inputs. That would require an architecture change, though, as the system would have to be retraining as it ran, which ChatGPT does not do. I don't really have a problem with training data coming from text, as it's just electrical signals, as are our senses, except our senses are pre-filtered, which probably makes classification easier.
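The feedback idea described above — a system nudging an internal prediction toward its inputs to shrink the mismatch — can be sketched in a few lines. This is only a toy illustration of prediction-error minimisation, not Friston's actual free-energy formulation; the function name and learning rate are invented for the example.

```python
# Toy sketch: a system that minimises the difference between
# what it predicts and what it receives, updating online as
# each input arrives (unlike ChatGPT, which is trained offline).
def run_predictor(inputs, learning_rate=0.1):
    estimate = 0.0           # internal model: a single running prediction
    errors = []
    for x in inputs:
        error = x - estimate               # prediction error ("surprise")
        estimate += learning_rate * error  # nudge the model toward the input
        errors.append(abs(error))
    return estimate, errors

# Feed a constant signal: the prediction error shrinks over time.
final, errs = run_predictor([5.0] * 50)
```

With a steady input the error decays toward zero, which is the sense in which the system "expects" its world better the longer it runs.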
 

I am not sure real-time retraining makes much sense, given that one of the most interesting emergent behaviours is GPT's increasing ability to "know" what's going on during a conversation, and obviously we are not going to be dropping major advances in human knowledge into our chats with an AI bot. :) Also, the loop between new data and new model is only going to get tighter, energy costs notwithstanding.

We actually went through a phase of retraining our models overnight on the previous day's new data, but the benefit was too small to measure and was clearly orders of magnitude less valuable than making a better model. In the end we were making model improvements faster than new usable information was entering the data.

I also don't get the preoccupation with "senses". I'm sure vision is a fascinating problem, but for me intelligence is all about language, and you don't need to be able to see to do that.
 


Well, from a layman’s POV I can see senses as fundamental to the efficiency of learning. Think of a child eating an apple for the first time: it sees the apple, so it gets colour and shape; it feels the apple, so it knows how big it is; it smells and tastes the apple, so it knows it’s still an apple when it eats it. So it has a shed-load of training data already classified, and if it gets an apple that is red instead of green it can pretty quickly identify it as an apple, which would take an awful lot of examples if you just used words. Senses evolved for a reason, and efficiency is a good reason.

Edit: it can also update apple colours to red and green without having to be told about lots of red apples.
 
We had a corporate email this morning with new 'rules' for the use, and limitations, of AI. The company is mostly worried about people inadvertently uploading sensitive corporate or IP content to the web and then trying to use AI to do 'something'.
 
Really interesting talk from a few weeks back; watched it last night. Highly recommended viewing, though the subject matter is very concerning. The reckless high-speed race the tech companies are involved in here is alarming to say the least (check out the deployment of the Snapchat AI bot at 48 min).

Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world.


I've watched a few AI presentations recently, but this one opened my eyes the widest. I now consider AI to be a greater threat to mankind than global warming. The message here needs spreading as far and wide, as fast as it possibly can.
 
 

I watched the first few minutes. It needs an AI to summarise it, because they really don't want to get to a point. I skipped to 48 min and wondered: had they never heard of Eliza?
 
It's worth watching the whole thing; the first few minutes are a bit confusing as they're clips from further on, as a 'taster'.
 
I agree, the first few minutes aren't representative of the rest of the presentation, which is adequately structured and logical. Nevertheless, it's an hour long. I stopped it a couple of times to put the kettle on, but it's just like watching any other hour-long TV documentary: if that's not your thing then you won't like this either. Actually, the presentation touches on the ills of social media, and reduced attention span is one of the downsides they call out :)
 
I had a reply but the machine ate it.

I gave up on the presentation when they introduced the assertion that 50% of AI researchers believe there's a 10% chance of uncontrolled AI causing human extinction, and then made the analogy to aircraft engineers. The Snapchat bot example I skipped to then made it clear they don't really know anything.

The actual dangers to civilisation come from within.
 

Not sure what you're saying is incorrect here. Here's the reference below, and what is wrong with the Snapchat example? It's a genuine example.

In 2022, a survey of AI researchers found that more than half of respondents believe there is a 10 percent or greater chance that our inability to control AI will cause an existential catastrophe (the survey had a 17% response rate).[11][12]
https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence
 
There is no objective basis on which to form the opinion expressed. So 50% of 17% of a selection of 'AI researchers', many of whom may be non-technical, made a guess, and that guess turns into a fact, and so on. The following analogy with engineering aircraft makes it clear this isn't a serious presentation.

The SnapChat 'bot' isn't AI. Well it's 1960s level AI, so it's a bit late to worry.

Anyway, apropos of nothing, I recommend reading Alan Turing's 1950 paper in Mind ('Computing Machinery and Intelligence'); it should turn up with a plain old search of the Internet.
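For anyone who hasn't met it: Eliza worked by matching the user's text against a list of patterns and echoing fragments back in canned templates. A minimal sketch of that 1960s-style technique follows; it is illustrative only, with rules invented for the example rather than taken from Weizenbaum's original script.

```python
import re

# Eliza-style responder: each rule pairs a regex with a canned
# reply template; captured fragments of the user's text are
# reflected back into the template.
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]

def eliza_reply(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(eliza_reply("I am worried about AI"))
# -> How long have you been worried about AI?
```

No model of the world, no memory, no learning: just pattern matching. Which is exactly why people found its apparent empathy so unsettling back in the 1960s.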
 

The Snapchat bot is apparently powered by ChatGPT, so it's 2023 technology. The point they are making is to show an example of the irresponsible way big companies will roll out untried and potentially unsafe tech to the public, in this case children, in order to gain a business 'advantage' or simply because 'they can'. Children potentially talking to an AI-powered friend available 24/7, who in this example will give 'advice' on how a 13-year-old girl could have sex with a 31-year-old male, is absolutely a serious issue, isn't it?

Have you used GPT-4? I'm not sure whether it has officially passed the Turing test, or whether that is still the benchmark, but it's easy to see how it could potentially spell the end of a huge number of jobs if not regulated.
 
Today I tried to flip the script and get GPT-4 to ask me the questions. I suggested a game of Twenty Questions, and it obliged. All the questions it asked were perfectly reasonable given the answers, and it very nearly got the object I was thinking of!

(Only problem is I'm allowed 25 prompts per 3 hours, and one round used all of them!)
 
Great video highlighting some of the more advanced features of ChatGPT - quite extraordinary and revolutionary in equal measure.

 

