AI=Extinction...

The difficulty you’re overlooking is that, with AI, it may not be possible to determine what is lawful. For example, say a company replaces part of its recruitment team with AI, and the bot sifts incoming applications and CVs and does the shortlisting. Without access to the algorithm, and the knowledge to interrogate it and interpret the results, you can’t know whether the shortlist genuinely contained the best candidates for the role, or whether people were excluded on the basis of gender, ethnicity, disability, or other protected characteristics. The same goes for any AI process that makes decisions affecting people.
Also, who's to know what information the AI used as its points of reference? There's an assumption that all AI will have access to all world knowledge. That's not likely to be the case (at least not initially), so there's a very good chance that specific AI instances could have bias.
 
I'm hearing of the US military going full steam ahead with AI-armed drones and such. Have to, you know. China.

China, of course, will likewise see US programs as necessitating their own high-priority programs. Is everyone as confident as I am in the reliability of safety provisions in autonomous drones churned out in an arms race?
 
I saw something not that long ago (can't recall if it was a YouTube video or a news item) where the US military stated (a Pentagon official, if I recall correctly) that they had no intention of allowing any AI-controlled ordnance to make the decision to fire. They said that decision will always remain in human control.

Edited to add:

That said, I don't see any explicit statement to that effect in the latest US DoD guidance on the use of AI:

 
Sometimes the bias is subtle, though. Facial recognition technology, for example, is less reliable with black and brown faces, which may make misidentification more likely in those communities. That’s generally put down to the training data that was used. With human decisions, you can at least ask the human why they made the decision, and go over the process if you choose to. That’s not really viable with machine-learning models, which often have millions of parameters.
No, but you can spot the discrimination statistically; otherwise you would not know that about facial recognition software.
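The statistical check described here needs no access to the model at all: compare outcome rates across groups and flag large gaps. A minimal sketch in Python, using made-up shortlisting data and the US "four-fifths" adverse-impact heuristic (both the data and the 0.8 threshold are illustrative assumptions, not anything from this thread):

```python
# Spotting possible bias purely from outcomes, with no access to the model.
# The data below is invented for illustration.
from collections import Counter

# Hypothetical shortlisting outcomes: (group, shortlisted?) per applicant.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def selection_rates(outcomes):
    """Return the shortlisting rate for each group."""
    totals, selected = Counter(), Counter()
    for group, shortlisted in outcomes:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates(outcomes)

# "Four-fifths rule" heuristic from US employment guidelines: flag any
# group whose selection rate falls below 80% of the highest group's rate.
best = max(rates.values())
flagged = {g: r for g, r in rates.items() if r < 0.8 * best}

print(rates)    # {'group_a': 0.75, 'group_b': 0.25}
print(flagged)  # group_b flagged: 0.25 < 0.8 * 0.75
```

The point being made in the thread holds: this only surfaces the disparity if someone bothers to collect the outcome data and run the comparison.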
 
That DoD guidance is all non-binding in any case, and states have been known to disregard even binding undertakings if they really want to.
 
Perhaps you can spot it statistically, but only if you're looking for it. Let's be honest: if a company invests in an AI 'solution' for something, it's just going to deploy it. Nobody is going to check whether it's doing a decent job without bias, especially those who signed off on the purchase.

The facial recognition thing is interesting. It was discovered because the technology is used mostly by the police, who have inspectorates crawling all over them, and who, to be fair, are pretty keen not to get this one wrong. So, they've been looking. Also, the way they deploy FRT, a real-life copper checks the 'matches' and can spot the ones that aren't good. Do you imagine commercial organisations, faced with an opportunity to downsize the workforce, will have the same degree of scruples?
 

If it improves on current systems it can only be a good thing. Case in point: a friend’s son wants his first job in technology, can’t get past the recruitment algorithms, and is never given any feedback. Cue a call to a tech company CEO who is hiring; he sends his CV directly, is interviewed, and they couldn’t offer him a job quickly enough. Bright as a button and super keen to learn. The CEO is now questioning their recruitment company and process.
 
Well that's an excellent example of the sort of risk I'm talking about. How would the CEO have known about the problem without having been approached directly? Sure as eggs are eggs, the outsourced recruitment company wouldn't have told them, and in all likelihood, wouldn't even be aware of any shortcomings.
 
And question he should. It's always good to be someone whose calls get taken by tech company CEOs...
 

Marques Brownlee’s take on the above is interesting. He highlights just how many artistic jobs (acting, film, photography, etc.) AI will take before we even get to the sinister political implications. Think of TV ads: no one is going to pay to make them with real people and production teams any more. That’s a whole industry ended. Fascinating and scary stuff.
 
It is well known that people don't make a change until the consequences are imminent and certain, so until then nothing much will happen.
People still smoke; they are well aware it can give you cancer. But so-and-so smoked all her life and died at 98.
 

I’ve been watching MKBHD for years. Production values are top notch. His take on tech has mostly been on point. Probably the best tech reviewer (for consumer goods at least).

As for ‘text to video’, the whole concept just seems bonkers and would have been unheard of 18 months ago. Look at how far the technology has progressed since we saw ‘Will Smith eating spaghetti’.

 

Funny you should post that. I was watching the BBC coverage of the Government AI committee on Saturday, where the two ministers from the science, technology (blah blah) department were answering questions from MPs. The question of protecting content creators' copyright came up, and the ministers' answer troubled me. It went along the lines of:

"we need to get the balance right, so we're going to take the time to make sure we do, so that all sides are happy"

When challenged with "but the content creators believe something needs to be done NOW because their content is being stolen by AI companies",

the response was "not if we get it wrong".

All of the above is paraphrased, as I don't remember the exact words used. But nevertheless it seems very obvious from the response that the government believe that allowing AI companies to "push forward this exciting technology" is more important than protecting the livelihoods of existing content creators, whose copyrighted content is being taken by the AI companies right now, not in the future. It seems clear the government are happy for musicians, authors, etc. to lose their ability to survive in their chosen profession (or worse, starve) if it means that "high tech" is successful.
 

