AI and human communication

It's not pedantic at all. An AI is fundamentally not a computer program, in that it does not do what it does by executing one. If you want to be pedantic, we could say its behaviour is a higher-order function unrelated to the computer programming involved.
I don't have your experience, but a few years ago I did build a PC to study ML and DL. I don't like to use the term AI as it's misleading, as can be seen on pfm. I had to build separate models for each intended purpose, train those models on the huge but free online databases, and test each model with some raw data, then retune and keep retesting until I got good results (~97% correct). To an onlooker that model appeared to be thinking and intelligent, but it's not. At the end of the day the model relied on multidimensional arrays containing just a number in each element. Every test was whittled down to comparing numbers in these arrays to see if there was a close hit - 97% in my case.
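The "just numbers in arrays" point can be made concrete with a toy sketch. This is purely illustrative (the weights and the cat/dog labels are made up, not from any real trained model): a single "neuron" classifying a two-value input is nothing but multiply-and-add over arrays of numbers, with no rules anywhere.

```python
# A toy "model": what training produces is just numbers in arrays.
weights = [0.8, -0.5]   # hypothetical values a training run might have left behind
bias = 0.1

def predict(pixels):
    # Inference is arithmetic over the arrays, not a sequence of if/then rules.
    score = sum(w * p for w, p in zip(weights, pixels)) + bias
    return "cat" if score > 0 else "dog"

print(predict([0.9, 0.2]))  # "cat"  (score 0.72)
print(predict([0.1, 0.9]))  # "dog"  (score -0.27)
```

The "intelligence" an onlooker sees lives entirely in those numeric values; change them and the behaviour changes, with nothing human-readable to inspect.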

As you say, this is not programming in the sense of applications, utilities and tools, but we do use code to build the models, and from the little I've seen it's very clever: an immense amount of thought and effort has gone into this by a huge number of people over the years.

For those who want to understand AI/ML/DL, Kaggle is the place to go - take a look: https://www.kaggle.com/learn

DV
 
The excellent 3Blue1Brown with a deep dive on how AI models like GPT work, as part of his broader series on machine learning. It is quite math-y, which is unavoidable if you want to know more than the very basics, but anyone reasonably numerate should be fine.

Spoiler: it is indeed matrices all the way down, and what we might call the unreasonable effectiveness of doing this at massive scale produces this emergent behaviour seemingly from nowhere.
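A minimal illustration of "matrices all the way down" (hand-rolled toy weights, nothing like GPT's actual sizes): each layer is just a matrix multiply followed by a simple nonlinearity, and the whole forward pass is those two operations repeated.

```python
def matvec(M, v):
    # Multiply a matrix (given as a list of rows) by a vector.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def relu(v):
    # A common nonlinearity: clip negatives to zero.
    return [max(0.0, x) for x in v]

# Two tiny layers with made-up weights; real models chain thousands of
# much larger matrices, but the arithmetic is the same kind.
W1 = [[1.0, -1.0], [0.5, 0.5]]
W2 = [[2.0, 0.0]]

def forward(v):
    return matvec(W2, relu(matvec(W1, v)))

print(forward([2.0, 1.0]))  # [2.0]
```

Scale this up by several orders of magnitude and, as the video explains, surprisingly capable behaviour emerges from nothing but this arithmetic.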

 
There's been a lot of discussion in economics about the impact of AI on jobs. See this Noah Smith essay on comparative advantage including the comments.


Although he is right on the economics, one wonders how the current trend almost everywhere towards minimal oversight and regulation means that, despite these effects, AI will likely get a free pass and unintended consequences will abound.

PSA: as ever with comparative advantage, read very carefully - years of neoliberal economics mean it's quite easy to get the wrong end of the stick and think it's about competitive advantage.
 
The excellent 3Blue1Brown with a deep dive on how AI models like GPT work, as part of his broader series on machine learning. It is quite math-y, which is unavoidable if you want to know more than the very basics, but anyone reasonably numerate should be fine.

Spoiler: it is indeed matrices all the way down, and what we might call the unreasonable effectiveness of doing this at massive scale produces this emergent behaviour seemingly from nowhere.

Not touching the point that the neural networks that are the foundation of the models are implemented in software. They were physical things in the past, but haven't been for a while now. All those "transforms" etc. that happen in the Transformer don't happen magically. Even if it's not a high-level language in use, something more akin to machine code has to read/write data between physical data structures or do operations on it. Machine code is still programming, irrespective of the fact that it's very low level.
 
Not touching the point that the neural networks that are the foundation of the models are implemented in software. They were physical things in the past, but haven't been for a while now. All those "transforms" etc. that happen in the Transformer don't happen magically. Even if it's not a high-level language in use, something more akin to machine code has to read/write data between physical data structures or do operations on it. Machine code is still programming, irrespective of the fact that it's very low level.

You are conflating the means by which it is built with the mechanism by which the function itself operates. I.e. you need computers, and therefore programming, to make a modern AI from a purely practical point of view, but when it performs the actual functionality (the AI bit) it's not executing a computer program or doing anything related to computer programming. It's doing the maths described in the video.

When you use an AI to, say, translate your document into another language, yes, you are running a computer program so you can use it on your phone or computer, but the means by which the program does the translation is fundamentally not by executing some combination of logic and instructions. I.e. there is no computer program that "does" AI, only one that allows you to use AI. Compare this to your game example, where the game itself is a computer program that contains logic like "if bullet hits player then player health = current health - 10". There is no equivalent of this in AI.
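The contrast can be sketched in a few lines (illustrative names and values, not from any real game or model): the game rule is explicit, inspectable logic you can read and change; the learned function's behaviour lives entirely in opaque numbers, with no rule anywhere to point at.

```python
# Explicit program: the rule is written down, readable, and editable.
def on_bullet_hit(health):
    return health - 10  # "if bullet hits player then health = health - 10"

# Learned function: behaviour is encoded in numbers, not in any stated rule.
weights = [0.37, -1.12, 0.88]  # hypothetical trained values

def learned(inputs):
    return sum(w * x for w, x in zip(weights, inputs))

print(on_bullet_hit(100))  # 90
```

To change `on_bullet_hit` you edit one line; to change `learned` you must retrain and hope the new numbers do what you want, which is exactly the explainability problem discussed below.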

You could argue that somewhere in the several billion interconnected weighted nodes there is some sort of implicit program that could achieve the same results, but working out what that is is practically impossible. And, indeed, we only started making progress in AI when we realised that trying to write a program that can do AI just doesn't work. And yes, building a physical one out of cogs and gears is impossible, but you very much can make them out of silicon, and it's not a great leap to imagine how in the future biochemistry might allow us to make them out of a grey, gelatinous goo getting input from a pair of fancy light sensors and a couple of microphones, at which point you don't even need the program on your phone.

One final point I would make about this is that the fact that there is no programming involved is why all these difficult regulatory and even ethical issues arise. If your AI does or recommends something but you have no real way of explaining why and you cannot change it to stop it happening again then how do we control that?
 
One final point I would make about this is that the fact that there is no programming involved is why all these difficult regulatory and even ethical issues arise. If your AI does or recommends something but you have no real way of explaining why and you cannot change it to stop it happening again then how do we control that?
Yep, this is the real problem. Regulators want 'explainability' and 'accountability' so it's clear that decisions made about people can be scrutinised and, where necessary, challenged. For example, data protection law in the UK and EU forbids most 'automated decision making', without some element of human intervention (either as part of the process, or as a post hoc review and re-make of the decision). How you achieve this if you act on what the AI says, and have no way of verifying or scrutinising the method used by the AI to reach its conclusion, is occupying greater minds than mine.
 
One final point I would make about this is that the fact that there is no programming involved is why all these difficult regulatory and even ethical issues arise. If your AI does or recommends something but you have no real way of explaining why and you cannot change it to stop it happening again then how do we control that?
Ain't that the truth - we have several profs and their PhD students working in the area of explainable AI. One is working with our AV. I am writing a paper ATM with a colleague on AI and ethical issues. There is a long thread on a closed list that I am a member of, about "Testing AI Safety".

Lots of interesting research in AI and Data Science going on through the Turing Network
 
Yep, this is the real problem. Regulators want 'explainability' and 'accountability' so it's clear that decisions made about people can be scrutinised and, where necessary, challenged. For example, data protection law in the UK and EU forbids most 'automated decision making', without some element of human intervention (either as part of the process, or as a post hoc review and re-make of the decision). How you achieve this if you act on what the AI says, and have no way of verifying or scrutinising the method used by the AI to reach its conclusion, is occupying greater minds than mine.
Luckily it's only things like mortgage applications and it's not as if people will be using it to decide who to fire a state-of-the-art missile at.

D'oh!


(Although to be fair I think in this case the problem is Israeli intelligence rather than AI per se.)
 
Ain't that the truth - we have several profs and their PhD students working in the area of explainable AI. One is working with our AV. I am writing a paper ATM with a colleague on AI and ethical issues. There is a long thread on a closed list that I am a member of, about "Testing AI Safety".

It's a fascinating area for sure. At a very simple, practical level in financial markets, we had challenges justifying decisions to a regulator for simple conditional decisions ("Why did it buy this amount at this price?", etc.), and there were cases where our human supervisors intervened when the AI might have been playing some sort of 5D chess and making good choices we just didn't understand. One of my colleagues once explained how you can have pairs of AIs, so one is making suggestions and the other is deciding which one to act on, and this is *waves hands* better? But it all sounded a bit unconvincing.

Once you move up to more real world, people level decisions then we might have reached the point where there is nothing for it but to have the AIs read Kant and Mill for several years so they can explain themselves in 5,000 words of dense prose :)
 
Training an AI involves a few steps, but they aren't exactly programming.

First, build/collate a dataset which expresses the property that you would like the AI to learn. Let's say you want to feed images of cats and dogs to an AI, and have it tell you which is which. The dataset would be a set of dog images, and a set of cat images. No programming here.

Next up, choose a model - this is quite often determined by reading previously published attempts at solving something similar, and making a decision. Again, no programming.

Training the model - this will typically involve writing a script, which you could call a program, to configure an ML package such as TensorFlow, to set up the model, and to feed the training data in. The script is really just a quick way of automating the software configuration; you could easily have a GUI package which allowed the user to achieve the same result, and so again I'm not considering this to be programming.

Once you've got a trained model, and you want to 'run' it, you need some sort of inference program. Now if you want to create a 'dog vs cat' website which allows stuff to be fed to the model and the results displayed, this is a programming job, but it's not the ML that is being programmed here, just a UI to display the results. The actual inference will be carried out by the existing inference program.

If you classify running the model and producing results as programming, you will have to explain why running a spell checker on a document isn't "programming", or running a filter on an image in Photoshop. It's closer to this than anything a real programmer would do.
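The "training" step in the workflow above can be sketched without any ML package at all. This is a dependency-free toy (made-up data, a single weight, plain gradient descent), not what TensorFlow actually executes internally, but it shows what a training loop fundamentally does: nudge numbers to reduce error, rather than write rules.

```python
# Toy training loop: fit y = w * x on made-up data by gradient descent.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # hypothetical (x, y) pairs; true w is 2

w = 0.0      # the entire "model" is this one number
lr = 0.05    # learning rate

for _ in range(200):               # epochs
    for x, y in data:
        err = w * x - y            # prediction error on this example
        w -= lr * err * x          # gradient step on the squared error

print(round(w, 3))  # converges to 2.0
```

Nobody wrote "multiply by 2" anywhere; the rule emerged from data, which is the sense in which the posts above argue this isn't programming in the conventional sense.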
 
You are conflating the means by which it is built with the mechanism by which the function itself operates. I.e. you need computers, and therefore programming, to make a modern AI from a purely practical point of view, but when it performs the actual functionality (the AI bit) it's not executing a computer program or doing anything related to computer programming. It's doing the maths described in the video.

When you use an AI to, say, translate your document into another language, yes, you are running a computer program so you can use it on your phone or computer, but the means by which the program does the translation is fundamentally not by executing some combination of logic and instructions. I.e. there is no computer program that "does" AI, only one that allows you to use AI. Compare this to your game example, where the game itself is a computer program that contains logic like "if bullet hits player then player health = current health - 10". There is no equivalent of this in AI.

You could argue that somewhere in the several billion interconnected weighted nodes there is some sort of implicit program that could achieve the same results, but working out what that is is practically impossible. And, indeed, we only started making progress in AI when we realised that trying to write a program that can do AI just doesn't work. And yes, building a physical one out of cogs and gears is impossible, but you very much can make them out of silicon, and it's not a great leap to imagine how in the future biochemistry might allow us to make them out of a grey, gelatinous goo getting input from a pair of fancy light sensors and a couple of microphones, at which point you don't even need the program on your phone.

One final point I would make about this is that the fact that there is no programming involved is why all these difficult regulatory and even ethical issues arise. If your AI does or recommends something but you have no real way of explaining why and you cannot change it to stop it happening again then how do we control that?
We're clearly going to have to agree to disagree on the definition of what constitutes software execution then. It's apparent that your definition relates to the traditional execution of an explicitly written sequence of lines of code (in a given programming language), i.e. what even someone with the most basic understanding of "software" would consider a "program". In my view any instruction that is executed by a computing device is programming, because in the first instance at least some human would have had to write and design the instruction sets, model and processes by which they are executed.

BTW - a very large part of what any software does is just maths, so I don't consider that a distinction worthy of consideration.
 
If you classify running the model and producing results as programming, you will have to explain why running a spell checker on a document isn't "programming", or running a filter on an image in Photoshop. It's closer to this than anything a real programmer would do.
But those functions are executed by software code nonetheless, and code that someone would have had to explicitly write as part of the overall program. If a "real programmer" hadn't written those sections of code, Photoshop, Word etc. wouldn't have those functions available.

Again, it's my view that people are using a very selective and narrow definition of what is software/programming/code in this discussion.

Anyway, I don't really see any value in continuing this discussion as we're clearly going around in circles and aren't ever going to agree.
 
We're clearly going to have to agree to disagree on the definition of what constitutes software execution then. It's apparent that your definition relates to the traditional execution of an explicitly written sequence of lines of code (in a given programming language), i.e. what even someone with the most basic understanding of "software" would consider a "program". In my view any instruction that is executed by a computing device is programming, because in the first instance at least some human would have had to write and design the instruction sets, model and processes by which they are executed.

BTW - a very large part of what any software does is just maths, so I don't consider that a distinction worthy of consideration.

Happy to move on, but for the record, what you are saying is just wrong. AI models do not work in any sense by executing instructions or code, and to think of them like this is to misunderstand what they are and how they work. AI models run on computers, but they are not themselves computers and they do not work like computers.

Which, in a thread about AI, is I think a pretty foundational aspect that is important to get right.
 

