
AI Today Podcast #008: Weak, Strong AI – Do these Terms Matter?


Artificial intelligence spans a broad spectrum of ability. Many people try to classify AI systems as weak or strong, depending on how generally intelligent the system is meant to be. The more an AI system approaches the abilities of a human, with all of a human’s intelligence, emotion, and broad applicability of knowledge, the more “strong” it is. Conversely, the more narrow in scope and specific to a particular application a system is, the more “weak” it is by comparison. But do these terms mean anything? And does it matter whether we have strong or weak AI systems?

In this podcast, we go over what these terms are commonly understood to mean in the AI ecosystem, and pose the question of whether they are even useful or relevant to enterprise AI adopters and AI technology vendors.

Episode Sponsors:

For over 25 years, QS has been helping prospective MBA candidates just like you make informed decisions about choosing the right business school. At our upcoming Dallas event, you can meet face-to-face with admissions directors from top-ranked US and international business schools, including UT Austin, SMU, Rice, IE, Hult, and many more! You will also be able to participate in interactive GMAT sessions by GMAC – creators of the GMAT exam – apply for $7 million in scholarships, attend panels, and network with alumni and your future peers. Learn more about exclusive opportunities on the day by registering now to claim your free ticket at https://goo.gl/iRF9PR!

Show Notes:

 

___________________________________________

A transcript of the podcast is available below:

[00:00:00] Kathleen: Hello and welcome to the AI today podcast. I’m your host Kathleen Walch.

[00:00:04] Ronald: And I’m your host Ronald Schmelzer.

[00:00:06] Kathleen: And today we’re going to be discussing weak, strong, narrow, broad, and general AI. So artificial intelligence has a broad spectrum of ability, and right now many people try to classify AI as either weak or strong – depending on how generally intelligent we want the system to be. So the more an AI system approaches the abilities of a human, with all of human intelligence, emotion, and a broad applicability of knowledge, the more ‘strong’ we call it. On the other hand, the more narrow in scope it is – specific to a particular application and a particular task – the weaker it is in comparison. But do all these terms mean anything? And does it even matter whether or not we have ‘strong’ or ‘weak’ AI systems?

[00:01:03] Ronald: So let’s get into this. Maybe one good place to start is to talk about what we mean by a ‘strong AI’ system, because it seems that whenever we see the term ‘weak AI’, it’s defined as everything that’s not strong. So… A number of people have defined ‘strong’ as meaning ‘broad’. You can take the word ‘strong’ and make it equivalent to the definition of broad AI, meaning systems that are just generally intelligent. So what is meant by ‘general intelligence’? The term ‘artificial general intelligence’ generally means the intelligence of a machine that can successfully perform any intellectual task that a human can perform. And this comes down to, I think, three general abilities. One is the ability to generalize knowledge from one domain to another: the system has learned to perform some task within one particular set of capabilities, and a generally intelligent system would be able to take that knowledge and apply it somewhere else. The second is the ability to make plans for the future, based on knowledge and experience. So a generally intelligent system will not just respond to whatever it’s been trained to respond to; it’ll be able to make plans for future things it needs to do, based on the goals of the system and the particular task or set of things it needs to accomplish. And inherent in that is the third part: the ability to adapt to changes as they happen in the ecosystem. So this is one definition of ‘strong’ – strong as defined by ‘broad’. And there are a bunch of things that come with it: the ability to reason and solve puzzles, to represent knowledge and so-called common sense, the ability to plan and adapt, and to tie all these things together into common goals. We haven’t been able to do that yet. So we call systems that can successfully do all these things ‘strong’, because they’re broad.
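To make those three criteria a bit more concrete, here’s a minimal sketch of what a generally intelligent system’s interface might look like as code. Everything here is hypothetical – the class and method names simply mirror the three abilities Ronald describes:

```python
from abc import ABC, abstractmethod

class GeneralIntelligence(ABC):
    """Hypothetical interface mirroring the three AGI abilities discussed
    above. No real system today implements all three."""

    @abstractmethod
    def transfer(self, knowledge, target_domain):
        """Generalize knowledge learned in one domain and apply it in another."""

    @abstractmethod
    def plan(self, goal, experience):
        """Form plans for future actions based on prior knowledge and experience."""

    @abstractmethod
    def adapt(self, change):
        """Adjust behavior as the environment changes."""
```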

[00:03:07] Kathleen: Right. But some people say that this definition of strong AI as general intelligence is actually not even strong enough, and that just being able to perform tasks and communicate like a human is not enough to be classified as truly intelligent. So another definition of strong AI is ‘systems in which humans are unable to distinguish between a human and a machine’, with strong AI defined by the ability to experience consciousness. When people discuss this kind of strong AI, they usually bring up two tests of intelligence and consciousness. The first is the Turing Test. Here you have three parties: a human, a machine, and an interrogator. The interrogator needs to determine which one is the human and which one is the machine. If the interrogator can’t distinguish between them, the machine passes the Turing Test. The second test is the ‘Chinese room’, and it builds upon the Turing Test. It assumes a machine has already been built that passes the Turing Test, convincing a human Chinese speaker that the program is itself a live Chinese speaker. This was introduced in 1980 by John Searle. The question Searle wants to answer is: does the machine literally understand Chinese, or is it merely simulating the ability to understand Chinese? To recap his thought experiment: he places himself in a closed room with an English-language book of instructions. People pass Chinese characters through a slot; he reads the instructions in English and then provides output in Chinese characters – similar to what the machine would do in the Turing Test to prove it was indistinguishable. He believes there is no essential difference between the role of the computer and his own role in this experiment, because each simply follows a program, step by step, producing behavior that’s deemed ‘intelligent’. However, he argues that it’s not really intelligent, because at the end of the day he still doesn’t understand Chinese, even though he’s producing something that people interpret as intelligent. So he argues that the computer also doesn’t understand Chinese; and without ‘understanding’, you can’t say that a machine is ‘thinking’ – and you have to ‘think’ in order to have a mind. So from Searle’s perspective, a strong AI system must have understanding. Otherwise it’s just a less intelligent simulation.
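For a concrete picture of the Turing Test setup Kathleen describes, here’s a rough sketch of the protocol as code. The `interrogator`, `human`, and `machine` callables are all hypothetical stand-ins for illustration, not a real API:

```python
import random

def turing_test(interrogator, human, machine, rounds=5):
    """Blind Q&A session: the interrogator questions 'A' and 'B' without
    knowing which is the human and which is the machine."""
    # Hide the two parties behind anonymous labels.
    parties = [("A", human), ("B", machine)]
    random.shuffle(parties)

    transcript = []
    for _ in range(rounds):
        question = interrogator.ask(transcript)  # interrogator may ask anything
        for label, respond in parties:
            transcript.append((label, question, respond(question)))

    # The interrogator must now guess which label ("A" or "B") is the human.
    guess = interrogator.identify_human(transcript)
    machine_label = next(label for label, fn in parties if fn is machine)

    # The machine "passes" if the interrogator mistakes it for the human
    # (or, over many runs, can do no better than chance).
    return guess == machine_label
```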

[00:05:59] Ronald: Yeah, we could definitely… For those who are really interested in this topic of Searle and the Chinese Room and the Turing Test: there is a lot of writing about this. It gets very philosophical, very quickly.

[00:06:09] Kathleen: Well, John Searle was a philosopher.

[00:06:10] Ronald: Exactly. And actually, we’ve found this really interesting about artificial intelligence in general: one of the things that uniquely separates AI from other areas of computing is how much it overlaps with philosophy. Other computing is about the mechanics of getting systems to work – data and compute and storage and networking and all that sort of stuff. You find yourself really getting wrapped up in this. But we don’t want to dive too deeply into the philosophy, because we have to think about how this is applicable today. So, if I can continue on this a little bit… For Searle, only a system that truly understands and is truly conscious counts as ‘strong’. The ‘strong AI’ position that he describes – and argues against – holds that a program is enough to explain how the mind works, that the study of the brain is therefore not relevant to the study of the mind, and that passing the Turing Test is sufficient to establish the existence of mental states. So anyway, now that we have all this clarity about ‘strong’, let me get back to the definition of ‘weak’. What is weak AI? Of course you can say that anything that isn’t strong is weak. But that’s not particularly helpful, because we haven’t been able to build anything so far that’s really strong. So is everything we’ve built so far weak? Well, things look pretty good. They may not be ‘strong’ as defined by either of the two previous definitions, but they’re pretty good. So let’s toss out the term ‘weak’, because it’s not particularly useful, and instead think about the terms ‘narrow’ or ‘applied’. What we mean by that is ‘narrow’ as in applied to a specific task: we take the various things that AI can do and apply them to one very specific task, and that intelligence is really not meant – or even able – to be applied to other tasks. Think of things like image recognition, voice recognition, conversational technology, and recommendation engines. Pretty much all of our experiences today have been with narrow and applied versions of AI. And a long time ago, things we wouldn’t call intelligent today were considered intelligent, so maybe we can…
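As a toy illustration of just how narrow an ‘applied’ AI system can be, here’s a deliberately simple single-task sketch. The keyword rules stand in for a trained model; the function and its rules are hypothetical:

```python
def is_spam(email_text: str) -> bool:
    """A deliberately narrow 'AI': it performs one task (spam detection)
    and nothing else. The keyword rules stand in for a trained model."""
    spam_markers = ("free money", "act now", "winner", "click here")
    return any(marker in email_text.lower() for marker in spam_markers)

# The system has no way to transfer this 'knowledge' to another task:
# asking it to recognize an image or plan a route is a category error.
print(is_spam("You are a WINNER! Click here for free money"))  # True
print(is_spam("Meeting moved to 3pm, see agenda attached"))    # False
```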

[00:08:38] Kathleen: Yeah. Which brings up another point: that we’re slowly creeping our way up the ladder of intelligence. So as technology continues to advance, people’s definition of artificial intelligence also advances. Take our children’s generation, for example. They have grown up with Siri and Alexa; they know that as their baseline, and they no longer consider it ‘intelligent’. They want a system that can do more. And that’s where it’s interesting, because I didn’t grow up with that, and Ron didn’t grow up with that. So to us, that is artificial intelligence. But to our children, now that it’s their baseline, it’s no longer intelligent. It’s the same with cars: now we have cars that can self-park, that are self-driving. Just a few decades ago, people thought that cruise control was a high-end technology, and now, if your car doesn’t have cruise control… I mean, it’s expected. And now there’s cruise control that can sense if your car is getting too close to another car, and will gently brake so that you don’t have to. But again: for our children, that’s their baseline, so they don’t consider that artificial intelligence. They want something beyond that.

[00:10:03] Ronald: Right. So as you say, even if you define something as ‘weak’ or ‘strong’ AI today – or ‘weaker’ or ‘stronger’ – that definition is just going to keep changing over time.

[00:10:12] Kathleen: Right.

[00:10:12] Ronald: So you could say ‘well, this technology is weak’, but, you know, 30 years ago it wasn’t. So I think from our perspective, given all that, is it even useful to classify systems as strong or weak? And if we then use the terms ‘narrow’ or ‘applied’ or ‘focused’… That really doesn’t give us any specificity to tell us just how intelligent a system is. Can you actually measure – in some sort of concrete way – a system’s intelligence, without using a generic or relative term like ‘weak’ or ‘strong’? Because when we say ‘strong’ or ‘broad’ or ‘general’, that doesn’t say much either, especially because we disagree about how strong a system should be. To some people – to John Searle and the rest of the folks who think about consciousness – an AGI system is weak, even though to many people that’s something we haven’t been able to accomplish yet. So ‘strong’ and ‘weak’ are relative terms, just like ‘dark’ and ‘light’. How light is light, and how dark is dark? It isn’t really helpful. So we think it’s actually better to define this sort of concept in terms of a spectrum. In particular, a spectrum of maturity: of how intelligent a system is, measured against the sort of tasks and range of tasks that need to be done. So, for example, at one end of the spectrum we can have AI that is so narrow and so focused on a single task that it’s really barely above what you can do with straightforward programming…

[00:11:37] Kathleen: Right. It’s like: does it even qualify as AI? Maybe it’s just some very narrowly specific deep learning task.

Ronald: And I know Kathleen is like me, thinking of the hotdog versus not-hotdog example – we’ve been watching Silicon Valley. That’s… that’s barely AI. But anyway. At the other end of the spectrum, the AI is so mature and so advanced that we’ve basically created a new kind of sentient being – we’ve basically created a species. Between these two ends, we have many degrees of intelligence and applicability, and that’s why terms like ‘weak’, ‘narrow’, and ‘strong’ don’t really mean anything. So our take is this: we’re producing research that provides more detail on what we’re calling Cognilytica’s AI maturity spectrum. It will help enterprise users and vendors understand how to apply this spectrum to various AI systems and implementations, and why you’d want a system to be at a particular level of AI maturity versus other levels.
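To show the idea of a maturity spectrum – as opposed to a weak/strong binary – here’s a hypothetical sketch in code. The level names are invented placeholders for illustration only; the episode doesn’t enumerate Cognilytica’s actual spectrum:

```python
from enum import IntEnum

class AIMaturity(IntEnum):
    """Placeholder maturity levels, invented for illustration --
    not Cognilytica's actual spectrum."""
    RULE_BASED = 1    # barely beyond straightforward programming
    SINGLE_TASK = 2   # one narrow task, e.g. hotdog / not-hotdog
    MULTI_TASK = 3    # several related tasks within one domain
    CROSS_DOMAIN = 4  # transfers knowledge between domains
    GENERAL = 5       # plans, adapts, generalizes (AGI-like)
    SENTIENT = 6      # the far end: a new kind of being

def classify(system_capabilities: set) -> AIMaturity:
    """Toy mapping from observed capabilities to a maturity level."""
    if not system_capabilities:
        return AIMaturity.RULE_BASED
    if {"generalize", "plan", "adapt"} <= system_capabilities:
        return AIMaturity.GENERAL
    if "cross_domain_transfer" in system_capabilities:
        return AIMaturity.CROSS_DOMAIN
    if len(system_capabilities) > 1:
        return AIMaturity.MULTI_TASK
    return AIMaturity.SINGLE_TASK

print(classify({"image_classification"}))  # AIMaturity.SINGLE_TASK
```

The point of a spectrum like this is that a system’s placement answers a concrete question – what range of tasks can it handle? – rather than the relative question of how ‘strong’ it feels.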

 

[00:12:41] Kathleen: So building off that – to the enterprise users and vendor listeners: we don’t want you to get fixated on this terminology. And we also don’t want you to get lost in the philosophy. Understanding the history of AI and how we got to this point is good to know, and so is the vision people have in mind for the future of AI. So know how the boundary of maturity is evolving, as we just talked about, and then figure out what level of AI maturity your particular problem can be solved with. Not everybody needs a superintelligent system to solve some of the basic problems they have.

[00:13:21] Ronald: Exactly. So what matters is how you’re applying the technology and how it’s evolving to meet new needs. At Cognilytica, we’re not going to use the vague and unhelpful terms ‘weak’ or ‘strong’ to define AI systems. Obviously, to the extent that other people are using those terms, we’re going to be aware of them and know how and why they’re being used – but we’re not going to use them when we define AI systems.

[00:13:42] Kathleen: And I think another reason we’re not going to use them is that most people consider what we have now ‘weak’ at best. So, one: we don’t think ‘weak’ carries the right connotation. And two: there’s also a lot that can be done with the systems we currently have in place.

[00:14:01] Ronald: Just thinking about something comedic. Two or three podcasts ago, we were talking about whether we should be scared of AI, and we were thinking about the bad actors and what they’re doing. And it’s like: “oh look, these AI systems are launching nuclear missiles – but don’t worry, they’re ‘weak’”. ‘Don’t worry, it’s a weak AI.’ That doesn’t seem to matter, does it? You could get something really ‘strong’ but useless, or you could get something really ‘weak’ and extremely…

[00:14:28] Kathleen: …Very powerful.

[00:14:29] Ronald: And powerful. So that’s kind of why we don’t think it matters. At Cognilytica we’re just not going to use these vague and unhelpful terms. Instead we’re going to look at the capabilities of these AI systems and map those capabilities across the spectrum of what we imagine AI can do – even things we have not been able to do yet. And of course, we’re going to keep track of how this boundary of what’s imaginable becomes increasingly more possible.

[00:14:56] Kathleen: Right. Alright, listeners. Well, thank you for joining us today. And as always, we’ll post articles and concepts discussed in the show notes.

[00:15:05] Ronald: Yes, thank you for joining us.

[00:15:06] Kathleen: And we’ll catch you at the next podcast.
