Since I’m talking about “AI” here, and took the time to define IUI, I figured it was probably worth some time offering my thoughts on AI…
Let me start off with a disclaimer: I’m not a machine learning engineer. I’ve never written a line of production AI code… I say production code on purpose, because the truth is, I have written some code that I consider to be “smart”, but more as a hobby. I actually began my career back in 1998 as a software engineer and took an interest in AI quite some time ago. I moved from software development to design about 5 years later and haven’t written production software since about 2003.
I first started digging into AI back in the early 2000s, and the book that served as my introduction was ‘Constructing Intelligent Agents using Java’. The principles I learned from reading it, and from playing around with some of the sample applications, really became the foundation of my understanding of AI. I’ve built a few little things along the way, mostly just hobby projects – a little spam-detection app and a news-reading app that grouped similar stories (so I didn’t have to see 10 different versions of the same “Apple announces new iPhone” headline).
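That story-grouping idea is simple enough to sketch. Below is a minimal illustration of one way it can work, greedily clustering headlines by word overlap (Jaccard similarity). The function names and the 0.4 threshold are my own inventions for this sketch, not the actual code from that old app.

```python
def words(headline: str) -> set:
    """Lowercased set of words in a headline."""
    return set(headline.lower().split())

def similarity(a: str, b: str) -> float:
    """Jaccard similarity: shared words / total distinct words."""
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb)

def group_stories(headlines, threshold=0.4):
    """Greedily place each headline into the first group it resembles."""
    groups = []
    for h in headlines:
        for group in groups:
            if similarity(h, group[0]) >= threshold:
                group.append(h)
                break
        else:
            groups.append([h])
    return groups

stories = [
    "Apple announces new iPhone",
    "Apple announces new iPhone with larger screen",
    "New iPhone announced by Apple today",
    "Stock markets close higher",
]
for g in group_stories(stories):
    print(g)
```

Run on the sample list above, the three iPhone headlines end up in one group and the unrelated story in its own. A real app would use something sturdier (TF-IDF weighting, stemming), but the core idea is just a similarity measure plus a grouping rule.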
As someone who conceives and designs products for a living, I think it’s really helpful to have a solid understanding of what is possible with technology, so I’ve made sure to stay up to date with what is happening in the field as much as possible.
The way that I think about it is, there are really two kinds of “AI”, sometimes referred to as Narrow AI and Strong AI.
Narrow AI applications are optimized for a single problem or domain. As an example, there is a company that makes a system called Smart Airport; its domain is airport logistics. This is Narrow AI aimed at optimizing the flow of an airport (planes, people, etc.). Just think about the domino effect of one delayed plane, a bad snowstorm, or a mechanical issue with a plane: a smart system can run through all the scenarios for recovering from that much faster than humans can. Narrow AI can be incredibly smart; however, if you tried to use a system tuned to the operational efficiency of an airport to run the logistics of a different domain, say a sports venue, it would probably fail miserably.
Strong AI, on the other hand, is what most people think of when they talk about AI: “human-like intelligence”. As an example, we weren’t born with the knowledge of how to use the Internet; we learned how to use it. That was not “programmed in”. Nor were we born with the knowledge of how to design things; we learned that. Strong AI, or “General Intelligence”, is best characterized by its ability to learn to operate in any domain.
Recently there has been a lot of talk about Deep Learning, and the way that I like to think about this is that it falls somewhere between the two.
DeepMind, in case you haven’t heard of them, is an Artificial Intelligence company that Google acquired. They developed a Deep Learning algorithm that actually learned how to play – and win – video games. It started with some of the classic arcade-style video games many of us grew up with, like Pac-Man. Okay, you’re probably thinking: so? Who cares? Well, what made it so astonishing is that they didn’t teach the system the rules of the games; they just let it play until it learned the rules on its own. One of the games it learned to play was Boxing, and not only did it learn how to win, it learned how to optimize winning. It found, on its own, that you could pin the opponent in a corner and run up the score. At the time Google acquired them, it had mastered about two-thirds of the 35 or so games it was learning to play.
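DeepMind’s system combined deep neural networks with reinforcement learning, but the core idea (an agent that is never told the rules and learns purely from reward) can be sketched with plain tabular Q-learning on a toy problem. To be clear, this is my own illustrative toy, not DeepMind’s algorithm: a five-cell corridor where the agent earns a reward only for reaching the last cell, and has to discover that on its own.

```python
import random

random.seed(0)

# Toy "game": a corridor of 5 cells. The agent starts at cell 0 and
# earns a reward only upon reaching cell 4. It is never told the
# rules -- it just acts, observes rewards, and updates its estimates.
N_CELLS = 5
ACTIONS = (-1, +1)  # move left, move right

# Q-table: estimated value of taking each action in each cell.
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != N_CELLS - 1:
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_CELLS - 1)
        reward = 1.0 if next_state == N_CELLS - 1 else 0.0
        # Q-learning update: nudge the estimate toward the reward plus
        # the discounted value of the best action available next.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The learned greedy policy: the best-known action in each cell.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_CELLS - 1)}
print(policy)
```

After training, the printed policy should map every cell to +1 (always move right), even though “move right to reach the goal” was never programmed in anywhere. DeepMind’s Atari work applies the same update rule, with a neural network standing in for the Q-table and raw screen pixels standing in for the cell number.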
Fast forward to 2015, and DeepMind accomplished what most people thought was an impossible task for Artificial Intelligence: it beat a human champion at Go. In case you’re not familiar with it, Go is an ancient Chinese game in which you place stones on a 19-by-19 board and capture your opponent’s stones by surrounding them. The rules are very simple, but they give rise to a complex, subtle game.
There are a number of articles online that describe why this accomplishment is such a big deal, but the simple explanation is that unlike chess, which computers conquered largely through brute-force search, Go can’t be brute-forced. Every single move in Go gives rise to vastly more possible responses. Where the average position in chess offers a choice of about 35 moves, in Go the “branching factor” is about 250. To give you a sense of what that means: if you want to think 2 moves ahead in chess, there are about 1,225 sequences to consider (35 x 35). In Go, it is about 62,500 (250 x 250), and three moves ahead would be 15,625,000 (250 x 250 x 250).
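The arithmetic is easy to verify; a few lines of Python make the gap between the two games concrete (the ~35 and ~250 branching factors are the commonly cited averages, and real values vary by position):

```python
# Rough game-tree sizes from average branching factors.
CHESS_BRANCHING = 35
GO_BRANCHING = 250

def positions_to_consider(branching_factor: int, moves_ahead: int) -> int:
    """Number of move sequences when looking N moves ahead."""
    return branching_factor ** moves_ahead

for depth in (2, 3):
    chess = positions_to_consider(CHESS_BRANCHING, depth)
    go = positions_to_consider(GO_BRANCHING, depth)
    print(f"{depth} moves ahead: chess ~{chess:,}, Go ~{go:,}")
# 2 moves ahead: chess ~1,225, Go ~62,500
# 3 moves ahead: chess ~42,875, Go ~15,625,000
```

By just three moves ahead, Go’s tree is already more than 350 times larger than chess’s, and the gap keeps multiplying with every additional move.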
Winning at Go requires a sort of intuition about the board, something that a champion develops over time. This kind of intuition is something that brute-force algorithms just can’t replicate.
Based on what I wrote above, you’d think that DeepMind’s system was Strong AI, because it learned on its own and made decisions to achieve its goals. But the real test of whether Deep Learning is Strong AI is whether it could function in other domains. Could the same Deep Learning algorithm that learned to play and win those video games also run a supply chain? I’m not sure; my suspicion is that it couldn’t, so my guess is that it falls somewhere between Narrow and Strong (but honestly, I’m not really smart enough to know for sure).
Just to be complete here, there is actually another category called Super Intelligence, which is really a natural evolution of General Intelligence: if AGI can learn, then why can’t it learn to improve itself? This is what all the commotion has been about over the last couple of years, with people like Elon Musk and others warning about our impending AI doom.
Back in 2014, I had the good fortune of attending ICRA, the International Conference on Robotics & Automation. One of the workshops I attended was the Workshop on General Intelligence for Humanoid Robots. The guy who organized that is a pioneer in the field of General Intelligence, Ben Goertzel. During his presentation he said something that has really stuck with me:
…so, AI includes, conceptually, making systems that are intelligent like C3P0, HAL9000 or a thousand times smarter than any human being. The field of AI also includes “Expert Systems” for, say, medical diagnosis, that just go through a hand-coded list of rules, or, say, a neural-net control system for a self-driving car, which is highly specialized for that type of car. AI is a very big umbrella, so it’s not particularly clear where AI leaves off and algorithms begin. Is there really a big difference between all the algorithms in an AI textbook and the algorithms in an algorithms textbook? Drawing borders between disciplines is not the most interesting thing; the world cuts across all the disciplinary boundaries anyway.
– Ben Goertzel
That notion, that it’s all just algorithms, is something I’ve always kept in mind when talking about “AI”.
At the end of the day, for the purposes of this blog, I’m going to consider something intelligent as long as it loosely conforms to the definition I laid out in my first post here (What is IUI?):
Improving the acumen, acuity, and productivity of people by applying computational intelligence to experience design.
What do you think?