AI: Intelligence, Learning and Understanding

By LEONARD MCGAVIN
Published: June 3, 2008

AI is still plagued by fundamental issues despite many years of research and development in the field. One of the biggest problems is defining what "intelligence", "understanding" and "learning" actually consist of. Even setting aside the precise wording of the definitions, scientists cannot agree on how to test whether something is intelligent.

The Turing test is the most famous AI test. It measures a person's difficulty in determining which responses come from a human and which from a machine. Anyone working in AI will be quick to point out where the Turing test breaks down: just because something looks intelligent and can fool an intelligent being doesn't make it intelligent.

Intelligence tests, such as an IQ test or the Turing test, tend to break down when the point is made that dogs and other animals are deemed intelligent yet couldn't come close to passing them. The tests seem to be focused on the by-product of intelligence and not on intelligence itself. This is precisely where the difficulty lies: determining the act of intelligence and not the effects of intelligence.

In looking for intelligence, one feature stands out as a key factor behind it: learning. "Learning" is another difficult term to define, and when we consider what learning consists of, it seems impossible to separate it from intelligence. When we consider learning we must also consider "understanding".

Learning and understanding seem to be part of the same thing, but it is possible to learn something without understanding what has been learned. This is called rote memorisation. In some ways computers can already do this. Consider the following example:

In teaching a child that 2 + 2 = 4 you could ask the child, "what does 2 + 2 equal?".

The child may say they don't know.

You tell them the answer is "4" then you ask them again, "what does 2 + 2 equal?"

The child may say "4" and the answer would be deemed correct, but the child really has no clue as to what they are being asked.

Say we create a small computer application that does much the same thing. The application allows two related values to be entered and stores them (and their relationship) in memory. If the first value is not already stored in memory with a second value, both values entered are stored. If the first value already exists in memory, the application displays the second value it has stored, and will accept a replacement value if a new second value is entered.

Using the small application, if we enter the value "2 + 2 =" with the value "4" as our first entry, and then enter "2 + 2 =" on its own as the second entry, the value "4" should be returned to us from memory. The result from the computer application seems to show about the same level of understanding as the child. However, we still feel that the child is intelligent and the computer, with its application, is not.
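The application described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the article; the class and method names (`RoteMemory`, `enter`) are invented for the example.

```python
class RoteMemory:
    """Rote memorisation: store related pairs of values and recall them.

    There is no understanding here -- only storage and lookup.
    """

    def __init__(self):
        self.memory = {}

    def enter(self, first, second=None):
        """Enter a first value, optionally with a related second value.

        - If the first value is unknown, store the pair (when a second
          value is given) and return nothing.
        - If the first value is known, return the stored second value;
          a newly supplied second value replaces the stored one.
        """
        if first not in self.memory:
            if second is not None:
                self.memory[first] = second
            return None
        stored = self.memory[first]
        if second is not None:
            self.memory[first] = second
        return stored


app = RoteMemory()
app.enter("2 + 2 =", "4")    # first entry: store the pair
print(app.enter("2 + 2 ="))  # second entry: recalls "4" from memory
```

Like the child who repeats "4" on request, the application returns the right answer without any grasp of what addition is.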

Learning is generally deemed something more involved than simply storing values and being able to recall them. Sometimes, however, there are exceptions. Intelligence can seem to appear in computer "learning" applications such as the one in the example, but there is a massive leap from the intelligence of a child to the intelligence of a computer application.

When you consider that some of the sub-fields of AI are trying to create intelligence while not one person seems to know what "intelligence", "learning" and "understanding" actually are, it all seems a bit ridiculous.

Comments

1. tkorrovi on August 6, 2008

Well I have always said that the most important criterion for True AI is that it should be unrestricted; unrestricted means that it should be possible for whatever system to emerge within a bigger system, as a result of self-development. Mostly they consider every kind of other criteria, but never this criterion, while creating yet another AI system, which then again proves to be restricted, therefore useless, and becomes another failed project. There are an immense number of different possibilities for AI systems; all the programmers in the world could never try them all in any limited period of time. I have seen even very dynamic systems, like neural network systems that are able to grow new neurons, but never have the authors of such systems contemplated whether there is any reason why their system can be considered unrestricted or not. You may sometimes visit the Artificial Consciousness Forum, which is about creating an unrestricted system. Please don't answer me in the AI Forum, as I cannot reply to you there.

2. Cobalt on August 10, 2008

Great article, except for one fundamental part. In the article, you assume that the child, and we as humans, are "intelligent". Are WE really intelligent? Are WE really capable of truly understanding the concepts of space and nature? I think not. I won't go deeper into this at the moment.

3. Anonymous on November 11, 2008

I agree with tkorrovi about the unrestricted system, which is the only kind that should be considered really intelligent. I don't consider the Turing test a proof of intelligence at all. Well, an unrestricted system will surely pass it, but passing it is not a sufficient condition to consider the system intelligent.

If you want you can visit my forum about AI at http://www.aitalk.net

We're not many, but some nice discussions or projects can come up :)



More...

The Forever Web App Project

By LEONARD MCGAVIN
Published: November 15, 2009

The Forever Web App Project is an AI project to demonstrate a web app's ability to exist on the web unassisted (except by strangers) for as long as possible after a given date.

Sudoku & Artificial Intelligence

By LEONARD MCGAVIN
Published: March 16, 2009

Using AI on Sudoku could be considered overkill. Either way, any algorithm written to solve a Sudoku puzzle could be considered intelligent, given what it accomplishes.