As artificial systems become more sophisticated, how will we distinguish mere computation from true intelligence? How will we identify intelligence in any extraterrestrial creatures we may one day meet? And for that matter, how do we evaluate intelligence in each other? José Hernández-Orallo of the Universitat Politècnica de València and David L. Dowe of Monash University, Australia, propose designing a new universal intelligence test to address these questions.
Among other specifics, the researchers propose that any measure of universal intelligence have the following criteria:
- It must be applicable to any intelligent system, biological or synthetic, now or in the future.
- It must distinguish between different levels of intelligence, from very basic to incredibly brilliant.
- It must be informative regardless of the amount of time available for testing.
There are a few tests of artificial intelligence available today, though none fulfills all of the requirements above. For example, an artificial intelligence is said to have passed the ‘Turing test’ if a human judge cannot distinguish its answers from those of a human. Thus far, no computer has passed this test, but even if one did, it would show only that the machine could pass for human, not that it possessed superior intelligence. In other words, the intelligence of any artificial system that passed the Turing test could be rated no higher than somewhere on the human continuum, so the test fails the second requirement above.
Another proposed test relies partly on Kolmogorov complexity, which measures the length of the shortest set of instructions needed to produce a string of characters relative to the length of the string itself. For example, the instruction ‘repeat xy 20 times’ is far shorter than writing out ‘xyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxy’ in full. The authors expect any universal intelligence test to incorporate components of these existing tests along with new elements, perhaps drawn from games or from collaborative platforms such as social networks.
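The idea behind that comparison can be made concrete with an ordinary compression library. The minimal Python sketch below uses zlib-compressed length as a rough, computable stand-in for true Kolmogorov complexity, which cannot be computed exactly; the function name `compressed_length` and the sample strings are illustrative choices, not part of the researchers' proposal.

```python
import zlib

def compressed_length(s: str) -> int:
    """Length of the zlib-compressed string, in bytes: a rough,
    computable upper bound on its (uncomputable) Kolmogorov complexity."""
    return len(zlib.compress(s.encode("utf-8")))

repetitive = "xy" * 20                                    # 'xyxyxy...xy', 40 characters
irregular = "k3q9z1vm7w2xj8rp5tn0bd6hcu4fyea9s0dl2gh7"    # 40 arbitrary characters

# The repetitive string admits a description much shorter than itself,
# while the irregular one compresses little, if at all.
print(len(repetitive), compressed_length(repetitive))
print(len(irregular), compressed_length(irregular))
```

A string with a short description (low complexity relative to its length) is highly patterned; one that resists compression is closer to random, which is why description length is attractive as an ingredient in a test of pattern-finding ability.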