Reactive models of intelligence are based on independent behaviour modules, each mediating directly between the input stimulus and the output action. Except for inhibition signals, inter-process communication is kept to a minimum. In a series of early successes, this model has challenged the traditional AI model, in which different subproblem solvers act sequentially on a global knowledge base, and has given rise to considerable debate. In complex implementations of this model, however, fatal interactions have been observed in which behaviours are triggered in an endless cycle. For example, a can-collecting robot programmed by Jon Connell at MIT sometimes tended to pick up the can it had just deposited: the Scan module would detect the freshly deposited can, resulting in a cycle. Such cycles are also very common in potential-field-based robots (e.g., near local minima).
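To make this failure mode concrete, the following minimal sketch simulates two behaviour modules whose triggering conditions close a loop. The module names (Scan, Pickup, Deposit), the predicates, and the toy world model are our own illustrative assumptions, not Connell's implementation.

```python
# Minimal sketch (not Connell's actual code) of independent behaviour
# modules whose trigger conditions form a cycle: Scan cannot tell a
# freshly deposited can from a new one, so pick-up and deposit
# alternate forever.

class World:
    def __init__(self):
        self.can_visible = True    # a can is in view
        self.holding_can = False   # gripper state

def scan(world):
    # Scan fires whenever a can is visible and the gripper is empty;
    # it has no memory of which cans were already handled.
    return world.can_visible and not world.holding_can

def pickup(world):
    world.holding_can = True
    world.can_visible = False

def deposit(world):
    world.holding_can = False
    world.can_visible = True       # the deposited can re-enters view

world = World()
for step in range(6):              # bounded here; the real cycle is endless
    if scan(world):
        pickup(world)
        print(f"step {step}: picked up can")
    elif world.holding_can:
        deposit(world)
        print(f"step {step}: deposited can -- Scan will fire again")
```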
This work analyzes the fundamental claim that complex functionalities can be achieved by putting together simpler behaviours. In particular, we investigate this type of cyclic conflict through a formalization of notions such as power, usefulness, and modularity.
Our principal results are as follows. We compare this problem with the similar cyclicity observed in early planning systems, and we show how attempts to introduce learning (a necessary component of behaviour-based models) erode modularity further. We also consider whether other architectures, such as meta-level planners or hybrid models, can avoid this cyclicity. One practical benefit of this work is a cycle-detection test, useful to behaviour-based designers because it provides a mechanism for detecting unforeseen conflicts before a system is implemented; a sketch of such a test appears below. We conclude by identifying some methods by which reactive intelligence can be combined with multi-agent knowledge-based systems to provide greater functionality.
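As an illustration of how such a test might be mechanized, the sketch below assumes (this is our simplification, not the paper's formalization) that each behaviour declares a trigger set and an effect set of atomic predicates; an edge A -> B exists when A's effects can satisfy B's trigger, and a cycle in the induced trigger graph flags a potential runtime loop.

```python
# Hedged sketch of a design-time cycle-detection test over behaviour
# specifications.  The behaviour names and predicate sets are
# illustrative assumptions.

from typing import Dict, Set

behaviours: Dict[str, Dict[str, Set[str]]] = {
    "Scan":    {"trigger": {"can_visible"}, "effect": {"can_located"}},
    "Pickup":  {"trigger": {"can_located"}, "effect": {"holding_can"}},
    "Deposit": {"trigger": {"holding_can"}, "effect": {"can_visible"}},
}

def trigger_graph(specs):
    # Edge A -> B iff some effect of A appears in B's trigger set.
    return {a: {b for b in specs
                if specs[a]["effect"] & specs[b]["trigger"]}
            for a in specs}

def has_cycle(graph):
    # Standard depth-first search with a recursion stack.
    visited, stack = set(), set()
    def dfs(node):
        visited.add(node)
        stack.add(node)
        for nxt in graph[node]:
            if nxt in stack or (nxt not in visited and dfs(nxt)):
                return True
        stack.discard(node)
        return False
    return any(dfs(n) for n in graph if n not in visited)

# True: Scan -> Pickup -> Deposit -> Scan, the conflict described above.
print(has_cycle(trigger_graph(behaviours)))
```

The test is conservative: an edge only records that a trigger *can* be satisfied, so a reported cycle is a candidate conflict to be inspected rather than a guaranteed runtime loop.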