The 12th IASTED International Conference on
Artificial Intelligence and Applications
AIA 2013

February 11 – 13, 2013
Innsbruck, Austria

KEYNOTE SPEAKER

Artificial Un-Intelligence by Natural Intelligence: Case Study Mathematical Algorithm Synthesis

Dr. Bruno Buchberger
Johannes Kepler University, Austria

Abstract

In the literature, Artificial Intelligence (AI) has two meanings:
• Difficult problems whose solution is deemed to require "intelligence" and for which, at some stage, no general algorithm is known are often called AI Problems. If somebody then gives an algorithm (often a "heuristic" algorithm) that solves the problem in a way that surpasses the performance of human intelligence, one says "the problem can now be solved by AI". (Examples of AI Problems that are, or have been, considered in this sense: playing chess, symbolic integration, semantic text analysis, learning.)
• Processes that imitate the behavior of "intelligent" living systems and that can be used to solve, or at least partially solve, difficult problems (in particular, AI Problems) are often called *AI Methods*. (Examples of AI Methods: neural networks, genetic algorithms, ant colony optimization.)
Independent of these various flavors of using the attribute "AI" for problems or methods, the fact remains that natural (human) intelligence is used and needed for establishing AI methods for AI problems. Once an AI method has been established by natural intelligence, the application of the method (i.e. implementing and running it on a computer) is not intelligent but rather utterly non-intelligent, namely mechanical. Seen in this way, AI is not at all different from what has happened in mathematics since its dawn in early history: given a mathematical problem whose solution, at a given historical moment, requires an extra mathematical invention (human insight, human intelligence, human experience, ..., i.e. "natural" intelligence) in each of its infinitely many instances, one strives to invent, again by human intelligence, one general method, algorithm, procedure, ... that solves the problem (or at least approximates solutions) in all the infinitely many problem instances. In other words, it is the essence of mathematics to think once deeply (by "natural" intelligence) about a problem and its context in order to replace thinking (i.e. to allow "artificial non-intelligence" instead) infinitely many times when solving the individual problem instances.
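To make this concrete with a standard illustration (not part of the abstract itself): Euclid's algorithm captures, once and for all, the insight needed to compute greatest common divisors, so that every individual instance can afterwards be handled purely mechanically. A minimal sketch in Haskell:

    -- Euclid's algorithm: the single insight gcd(a, b) = gcd(b, a mod b),
    -- found once by "natural" intelligence, is then applied mechanically
    -- ("artificial non-intelligence") to infinitely many problem instances.
    euclid :: Integer -> Integer -> Integer
    euclid a 0 = a
    euclid a b = euclid b (a `mod` b)

    main :: IO ()
    main = print (euclid 1071 462)   -- prints 21, with no further human insight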
The mathematical principle of replacing the application of natural intelligence in infinitely many problem instances by one application of natural intelligence on the "meta-level" is recursive: over and over again, mathematics proceeds from more special problem classes to more general ones, requiring higher and higher "natural" intelligence on the higher levels in order to allow "non-intelligence" (i.e. computerization, mechanization, algorithmization) on the lower levels.
In particular, the problem of automated algorithm synthesis (i.e. the problem of inventing methods that automatically invent algorithms from given problem specifications) is a “higher-order” mathematical problem which people like to consider as an AI Problem (i.e., in my terminology, a problem of generating “artificial non-intelligence by natural intelligence”).
In this talk, we will explain our recent method for the algorithm synthesis problem, which is derived from observing two important principles that human mathematicians apply when they try to establish an algorithm for a given problem:
• Using general principles of algorithm design that occur over and over again in various contexts (we cast these principles in the form of higher-order “algorithm schemes”).
• Analyzing unsuccessful proofs of theorems and inventing auxiliary conjectures that may lead to a successful proof.
We will first explain this automated algorithm synthesis method, which we call the “Lazy Thinking Method”, in a simple example. Then, we will demonstrate the power of the method by automatically synthesizing the algorithmic solution of the so-called “Gröbner bases problem”, one of the fundamental problems in non-linear systems theory. This problem had been open for 65 years before a solution was found in 1965 by “natural intelligence”.
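As a rough illustration of what an "algorithm scheme" is, here is a sketch in Haskell (the names and the particular scheme are illustrative assumptions, not the notation used in the talk or in the Theorema system): a divide-and-conquer scheme is a higher-order template whose auxiliary functions are left as unknowns; synthesis then consists in finding instances of these unknowns, guided by the analysis of failed correctness proofs, so that the given specification becomes provable.

    -- A higher-order "divide-and-conquer" algorithm scheme (illustrative sketch):
    -- the functions isTrivial, solveTrivial, split and combine are the "unknowns"
    -- that a synthesis method must discover from the given problem specification.
    divideAndConquer
      :: (a -> Bool)        -- isTrivial: is the instance directly solvable?
      -> (a -> b)           -- solveTrivial: solve a trivial instance directly
      -> (a -> (a, a))      -- split: decompose into two smaller instances
      -> (b -> b -> b)      -- combine: merge the partial solutions
      -> a -> b
    divideAndConquer isTrivial solveTrivial split combine = go
      where
        go x
          | isTrivial x = solveTrivial x
          | otherwise   = let (l, r) = split x
                          in combine (go l) (go r)

    -- One possible instance of the scheme: filling the unknowns with a splitting
    -- and a merging function for lists yields merge sort.
    mergeSort :: Ord a => [a] -> [a]
    mergeSort = divideAndConquer ((<= 1) . length) id halve merge
      where
        halve xs = splitAt (length xs `div` 2) xs
        merge [] ys = ys
        merge xs [] = xs
        merge (x:xs) (y:ys)
          | x <= y    = x : merge xs (y:ys)
          | otherwise = y : merge (x:xs) ys

    main :: IO ()
    main = print (mergeSort [3, 1, 4, 1, 5, 9, 2, 6])   -- [1,1,2,3,4,5,6,9]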
In conclusion, some implications will be drawn for the future of the area of “thinking technology” (i.e. mathematics and computer science viewed as “natural intelligence” for producing “artificial non-intelligence”).

Biography of the Keynote Speaker

Keynote Speaker Portrait

Bruno Buchberger is Professor of Computer Mathematics at the Research Institute for Symbolic Computation (RISC) of the Johannes Kepler University (JKU) in Linz, Austria.
Buchberger is best known for the invention of the theory of Groebner bases, which has found numerous applications in mathematics, science, and engineering. For his Groebner bases theory he received the prestigious ACM Kanellakis Award 2007 (http://awards.acm.org/kanellakis/), was elected (1991) a member of the Academia Europaea, London (www.acadeuro.org), and received five honorary doctorates.
His current main research topic is automated mathematical theory exploration (the "Theorema Project"). This project aims at the (semi-)automation of the mathematical invention and verification process, see www.theorema.org.
Buchberger is the founder of:
- RISC, the Research Institute for Symbolic Computation (1987), see www.risc.jku.at
- The Journal of Symbolic Computation (Elsevier, 1985), see http://www.journals.elsevier.com/journal-of-symbolic-computation/
- The JKU Softwarepark Hagenberg (1990), a technology center with over 1000 R&D coworkers, see www.softwarepark-hagenberg.com.
Buchberger's general homepage: www.brunobuchberger.com