Fear Itself?

You often hear people saying that robots will be better at X or Y because they "have no fear." But is that really true? Chris von Csefalvay, doodler of data, chaser of Bavarian Pokemon, and author of an upcoming book on using Julia for data science, wrote a fascinating post on that question that quotes yours truly. A sampling:

Fear is best represented as a vector, having both magnitude and direction (example: “I’m very afraid (magnitude) of spiders (direction)”). Different magnitudes help prioritising for immediacy and apprehended risk (likelihood times expected loss). Of course, it is not possible to simply bestow this upon a computer, and there are other methods of prioritising risk, but the great benefit of fear is that it distills signals down into a simple and fast calculation that is remarkably rarely wrong. It does so by considering a current signal in the context of all signals, the signal space of all possible signals, as well as learned patterns and the wider context in which the entire process is taking place. The decision whether to be afraid of something is, actually, quite complex.
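The quoted passage treats fear as a prioritization mechanism: apprehended risk is likelihood times expected loss. A toy sketch of that arithmetic, with threat names and numbers invented purely for illustration (none of this is from von Csefalvay's post):

```python
# Toy sketch: ranking threats by apprehended risk
# (likelihood times expected loss), per the quoted definition.
# The threats and their numbers are invented for illustration.
threats = {
    'spider':      {'likelihood': 0.30,  'loss': 1.0},
    'house-fire':  {'likelihood': 0.001, 'loss': 1000.0},
    'stubbed-toe': {'likelihood': 0.50,  'loss': 0.1},
}

def apprehended_risk(threat):
    """Likelihood times expected loss."""
    return threat['likelihood'] * threat['loss']

# Sort threats from most to least apprehended risk.
ranked = sorted(threats, key=lambda name: apprehended_risk(threats[name]),
                reverse=True)
print(ranked)  # -> ['house-fire', 'spider', 'stubbed-toe']
```

The point of the sketch is only that a single scalar priority, however crude, lets an agent decide quickly; the interesting part of fear, as the quote notes, is how that scalar gets computed from the whole signal space and context.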

Fear, von Csefalvay argues, is both a component of "fast and frugal" decision-making that might be useful for a robot and something that might be necessary for a robot to work with a human and vice versa. A robot that "fears nothing" might not accomplish the mission because it lacks the capability to deal with threats quickly and effectively. A robot that "fears nothing" may also be an ineffective co-worker for a human who fears many things, as the robot would not be able to predict or properly process the behavior of the human co-worker and vice versa. Of course, a computer is unlikely to be able to feel "fear" in the way we understand it, nor is it wise or useful to give a robot the ability to feel fear or pain in a "human-like" manner. You should read von Csefalvay's post for more, especially the biochemical underpinnings.

Beyond the specific issue of fear, von Csefalvay's post illustrates a larger truth about the sciences of the artificial. At the beginning of the post, he states that he will discuss two things:

  1. "Fear, and the function it has in the human mind (not just the psyche – fear is a primarily neural response, secondarily perhaps cognitive and far behind it is any psychological aspect thereof), and.."

  2. "What a robot ought to do/know."

Focusing purely on item #2, even simple animal-like robots require some kind of system of motivations and drives for autonomous behavior. David McFarland explains this as follows in his book Guilty Robots, Happy Dogs: The Question of Alien Minds:

In Chapter 1 we saw that self-sufficient robots require a degree of energy autonomy and motivational autonomy. Autonomy implies freedom from outside control. What does this mean? Energy autonomy implies that the robot’s energy supply is free from outside control, and that the robot is able to gain its energy by itself, from sunlight, or by foraging for a suitable energy source. The degree of energy autonomy depends upon the number of resources that the robot requires for survival. Like a plant or animal, a robot is likely to require other commodities in addition to energy, such as water and oil. So the completely self-sufficient robot would have to be able to find all its ‘nutritional’ requirements by itself. Whereas energy autonomy implies that the robot is free from outside control with respect to its needs, motivational autonomy implies that the robot is free with respect to its ‘wants’. Like an animal, a well-designed robot will tend to want whatever it needs. Not only does this apply to the ‘nutrients’ necessary for survival, but also the safety, territory, and freedom of choice necessary for useful work (e.g. it would be advantageous for a slug-catching robot to be territorial, because slugs removed from a productive area are soon replaced by others).

Similarly, animal motivation is aimed at those needs necessary for survival and reproduction, such as food, water, safety, and territory, plus the freedom to choose a mate. The degree of autonomy depends upon the number of resources that are addressed motivationally. ... In the cases of energy autonomy and motivational autonomy, there is no suggestion that the robot has ‘a mind of its own’. All that is implied is a degree of freedom from outside control that allows the robot to decide by itself when to switch from one activity to another. The decisions made by such robots are entirely the result of procedures based upon information about the robot’s internal state and its registration of external stimuli. The designer of this type of robot must set things up in such a way that the robot can cope with any contingency likely to arise in its environment. Of course, the designer chooses the environment within which the robot is to operate. This may be an office environment, a marine environment, the planet Mars, or whatever environment the robot sponsors think is appropriate for deploying self-sufficient robots.
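McFarland's point that such a robot's decisions are "entirely the result of procedures based upon information about the robot's internal state and its registration of external stimuli" can be caricatured in a few lines of code. The winner-take-all drive loop below is my own minimal sketch, not McFarland's model; the drive names (borrowing his slug-catching robot) and numbers are invented:

```python
# Minimal winner-take-all action selection driven by internal state.
# A sketch only: drive names and urgency values are invented.

def select_action(drives):
    """Pick the activity serving the currently most urgent drive."""
    return max(drives, key=drives.get)

# Internal state of a hypothetical slug-catching robot.
drives = {'recharge': 0.2, 'catch-slugs': 0.7, 'defend-territory': 0.4}
print(select_action(drives))  # -> 'catch-slugs'

drives['recharge'] = 0.9      # battery running low raises that drive
print(select_action(drives))  # -> 'recharge'
```

Nothing here requires the robot to have "a mind of its own": the switch from slug-catching to recharging is just a procedure over internal-state variables, which is exactly McFarland's point about motivational autonomy.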

To many, this may seem tremendously obvious if not trivial. However, it was (crudely speaking) far from obvious to the people in the Good Old Fashioned Artificial Intelligence (GOFAI) and traditional cognitive science traditions. Herbert Simon and Allen Newell's General Problem Solver (GPS) is a case in point; readers are encouraged to take a look at the LISP code of a GPS model implementation. For Simon and Newell, human problem-solving was a question of how, given a well-structured domain model, an agent could compose a plan that would yield the desired goal state. Since it is difficult or impossible for both humans and machines to exhaustively enumerate and evaluate all possible plans for many problems, GPS uses a strategy often seen in GOFAI and cognitivism: means-end analysis, which works by repeatedly reducing the difference between the current state (S) and a desired goal state (G).

However, for means-end analysis to work, one must supply the analyzer with a way of distinguishing general actions that change the state of the problem from actions that reduce the difference between S and G, as well as a means of keeping track of its progress toward G. Thus, in Simon and Newell's GPS model, the most important constructs are problem operators. Suppose you have a son and need to drive him to school. In order to do so, your son needs to be at home and your car needs to work. And for your car to work... if you thought hard about the problem, you could decompose it into a tree of goals and subgoals that need to be satisfied. From this you would create a set of generic steps, each taking you closer to your goal of being able to drive your son to school.

Here is part of the LISP code to do this:

(defparameter *school-ops*
  (list
    (make-op :action 'drive-son-to-school
             :preconds '(son-at-home car-works)
             :add-list '(son-at-school)
             :del-list '(son-at-home))
    ;; ... the remaining operators follow below ...
    ))

The action of driving your son to school requires your son being at home and your car working. It has the side effects of your son being at school and your son not being at home.

(make-op :action 'shop-installs-battery
        :preconds '(car-needs-battery shop-knows-problem shop-has-money)
        :add-list '(car-works))

....but in order for that to happen, the auto shop needs to install a new battery, which has the effect of making your car work. The preconditions for this are that your car needs a new battery, that the shop knows the problem is a need for a new battery, and that the shop has the money you paid it. The rest of the code fills in the full picture.

(defstruct op "An operation."
  (action nil) (preconds nil) (add-list nil) (del-list nil))

(defun GPS (*state* goals *ops*)
  "General Problem Solver: achieve all goals using *ops*."
  (if (every #'achieve goals) 'solved))

(defun achieve (goal)
  "A goal is achieved if it already holds,
  or if there is an appropriate op for it that is applicable."
  (or (member goal *state*)
      (some #'apply-op
            ;; find-all is a utility function defined elsewhere in the book.
            (find-all goal *ops* :test #'appropriate-p))))

(defun appropriate-p (goal op)
  "An op is appropriate to a goal if it is in its add list."
  (member goal (op-add-list op)))

(defun apply-op (op)
  "Print a message and update *state* if op is applicable."
  (when (every #'achieve (op-preconds op))
    (print (list 'executing (op-action op)))
    (setf *state* (set-difference *state* (op-del-list op)))
    (setf *state* (union *state* (op-add-list op)))))

The GPS is given a beginning state, a set of goals, and a set of problem operators, and it attempts to achieve every goal it has been given. It achieves a goal by finding an operator whose add-list contains that goal and recursively achieving that operator's preconditions. (Newell and Simon's original GPS also used a "table of differences" connecting differences to operators; this simplified version just searches the operators' add-lists.) This sort of solution assumes that the difficulty is combinatorial complexity rather than ignorance: the domain structure is already well known, which allows us to supply the GPS with the current state, the goal, and the relevant problem operators of the domain. The school domain, for example, is defined below in this code:

(defparameter *school-ops*
  (list
    (make-op :action 'drive-son-to-school
             :preconds '(son-at-home car-works)
             :add-list '(son-at-school)
             :del-list '(son-at-home))
    (make-op :action 'shop-installs-battery
             :preconds '(car-needs-battery shop-knows-problem shop-has-money)
             :add-list '(car-works))
    (make-op :action 'tell-shop-problem
             :preconds '(in-communication-with-shop)
             :add-list '(shop-knows-problem))
    (make-op :action 'telephone-shop
             :preconds '(know-phone-number)
             :add-list '(in-communication-with-shop))
    (make-op :action 'look-up-number
             :preconds '(have-phone-book)
             :add-list '(know-phone-number))
    (make-op :action 'give-shop-money
             :preconds '(have-money)
             :add-list '(shop-has-money)
             :del-list '(have-money))))

Here is a sample run from Peter Norvig's Paradigms of Artificial Intelligence Programming, the famous LISP book this code comes from:

(gps '(son-at-home car-needs-battery have-money have-phone-book)
    '(son-at-school) *school-ops*)

(EXECUTING LOOK-UP-NUMBER)
(EXECUTING TELEPHONE-SHOP)
(EXECUTING TELL-SHOP-PROBLEM)
(EXECUTING GIVE-SHOP-MONEY)
(EXECUTING SHOP-INSTALLS-BATTERY)
(EXECUTING DRIVE-SON-TO-SCHOOL)
SOLVED


With knowledge of the domain, the system reasons about how to satisfy a goal (driving the son to school) via subgoals. A plan is built that begins with the most basic action (looking up the number of the auto shop) and eventually ends with a terminal action (driving the son to school). This is a highly idealized way of making decisions. It has some similarity to "ends, ways, and means" in US military planning at various levels of decision-making. Yet it becomes more difficult when there is more than one goal (achieving one goal can undo another) and there is the threat of infinite regress in the achievement of a subgoal. Moreover, the entire assumption of the GPS itself is that you have a very rich internal model of the domain to use. Surprisingly (or not), these kinds of assumptions tend to be quite brittle.
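To make the algorithm concrete outside of LISP, here is my own minimal Python sketch of the same means-end analysis, not Norvig's code; the `Op` class and `gps`/`achieve` names are mine, and only the school-domain operators are carried over from the original:

```python
# A minimal Python sketch of PAIP-style GPS means-end analysis.
# The class and function names are my own, not Norvig's.

class Op:
    """A problem operator: an action with preconditions and effects."""
    def __init__(self, action, preconds, add_list, del_list=()):
        self.action = action
        self.preconds = list(preconds)
        self.add_list = set(add_list)   # conditions the action makes true
        self.del_list = set(del_list)   # conditions the action makes false

def gps(state, goals, ops):
    """Achieve every goal; return the plan (list of actions) or None."""
    state = set(state)
    plan = []

    def achieve(goal):
        if goal in state:
            return True
        # Find an operator whose add-list contains the goal, then
        # recursively achieve its preconditions (means-end analysis).
        for op in ops:
            if goal in op.add_list and all(achieve(p) for p in op.preconds):
                plan.append(op.action)
                state.difference_update(op.del_list)
                state.update(op.add_list)
                return True
        return False

    return plan if all(achieve(g) for g in goals) else None

school_ops = [
    Op('drive-son-to-school', ['son-at-home', 'car-works'],
       ['son-at-school'], ['son-at-home']),
    Op('shop-installs-battery',
       ['car-needs-battery', 'shop-knows-problem', 'shop-has-money'],
       ['car-works']),
    Op('tell-shop-problem', ['in-communication-with-shop'],
       ['shop-knows-problem']),
    Op('telephone-shop', ['know-phone-number'],
       ['in-communication-with-shop']),
    Op('look-up-number', ['have-phone-book'], ['know-phone-number']),
    Op('give-shop-money', ['have-money'], ['shop-has-money'],
       ['have-money']),
]

plan = gps(['son-at-home', 'car-needs-battery', 'have-money',
            'have-phone-book'],
           ['son-at-school'], school_ops)
print(plan)
```

Run on the same initial state as the sample run above, this recovers the same plan, from look-up-number through drive-son-to-school. Note that this naive version shares the classic GPS flaws the paragraph above mentions: achieving a precondition can clobber a sibling goal, and mutually dependent subgoals can recurse without bound.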

There's certainly value in trying to make decisions in as orderly and deliberate a manner as you can. But the ability to make elaborate plans is not as fundamental as the regulatory role of motivational drives and emotions such as fear in helping us act day to day in the world. Both the "fast and frugal" and the GPS perspectives capture real features of human behavior: a capacity for quick, instinctual action that doesn't require much in the way of elaborate representations and problem structure, and a capacity for hierarchical planning and reasoning that requires some of the kinds of bells and whistles seen in GPS. Some of the better examples of how these two perspectives can be brought together can be found in Aaron Sloman's work (especially his CogAff system) and Joanna Bryson's research on biological action selection with her Artificial Models of Natural Intelligence software.

In any event, this to me is the primary justification for using simulation with synthetic agents. It doesn't just tell us about engineering methods for the next Google or Facebook self-driving car system (though that's pretty important!). It helps us learn more about ourselves. It's a reason to work with and take an interest in robots, autonomous agents and artificial life, agent-based models, cognitive models, etc., even if you have no desire to build the latest and greatest Silicon Valley AI or couldn't care less about it. Frankly, there is too little interest these days in the role of synthetic agents in the self-understanding of *homo sapiens*, but that interest is nonetheless growing. No need to fear, I guess.