Rethinking Autonomy

Richard Alterman



Designed Environment

Everyday Activity

FLOABN ("For Lack Of A Better Name")

Ways to extract information from the environment:

In instructions (e.g. textbooks), the structure of the information is important; not only that, but users explicitly acknowledge relying on that structure.

Get ready and Run!

The paper makes sweeping generalizations about how other AI researchers separate the 'development' stage of their systems from the 'deployment' stage, the claim being that the 'deployment' stage is fully autonomous.

But in reality (because the researchers end up tweaking the system based on usage and user feedback), the system cycles between autonomous and non-autonomous learning.

Furthermore, because the system was designed by humans with a particular goal in mind, it is non-autonomous from the outset, having taken all of its initial input and knowledge directly from the programmer.

So what does autonomy mean, then?

The Work and Practice Of The User

Here the paper shifts gears and starts discussing why it might be a good idea to make the system non-autonomous.

It then switches tacks and argues why the system should be adaptive, and should reason about the user's practice.

System as Artifact

See the mediation triangle on pg 13.

The idea is that the system should alter the way in which the user thinks about the task, preferably making it easier cognitively.

Joint Runtime Learning

Pick up data from the use of the system and use it to adapt the system. This is precisely the adaptation discussed above, now made concrete.

Types of run-time learning:

Type 1 (learning from ordinary use of the system):
No examples are given in the paper, but here are a few off the top of my head:
- An inventory-tracking system at a video store that prompts the manager to order more copies of popular tapes
- Money-management software that notices monthly bills and offers to set up a recurring payment

Type 2 (learning from explicit user feedback):
- dang (Dan Griffin)'s manual server, which asked you to explicitly mark search results as useful or irrelevant
- Reading and marking text (highlighting, etc.)
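To make the two types concrete, here is a minimal sketch of my own (the class, its names, and the scoring scheme are inventions for illustration, not from the paper): a toy search assistant that adapts implicitly from usage data (Type 1) and explicitly from user relevance marks (Type 2).

```python
from collections import defaultdict

class AdaptiveSearch:
    """Toy assistant that adapts to the user's practice at run time."""

    def __init__(self, documents):
        self.documents = documents           # doc id -> text
        self.open_counts = defaultdict(int)  # Type 1: implicit usage data
        self.marks = defaultdict(int)        # Type 2: explicit user feedback

    def open_document(self, doc_id):
        # Type 1 learning: every ordinary use of the system is data.
        self.open_counts[doc_id] += 1
        return self.documents[doc_id]

    def mark(self, doc_id, useful):
        # Type 2 learning: the user explicitly labels a result.
        self.marks[doc_id] += 1 if useful else -1

    def search(self, term):
        hits = [d for d, text in self.documents.items() if term in text]
        # Rank by explicit marks first, breaking ties with usage counts.
        return sorted(hits,
                      key=lambda d: (self.marks[d], self.open_counts[d]),
                      reverse=True)

s = AdaptiveSearch({"a": "cats and dogs", "b": "dogs only", "c": "dogs dogs"})
s.open_document("c")      # implicit signal: the user opened "c"
s.mark("b", useful=True)  # explicit signal: the user marked "b" useful
print(s.search("dogs"))   # -> ['b', 'c', 'a']
```

The point of the sketch is that neither signal requires stopping the system: both kinds of learning happen during use, which is the sense in which the intelligence emerges from the history of interaction rather than residing in the system alone.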

Other Models of Non-Autonomy

System as:

Concluding Remarks

Friends don't let friends think autonomously.

"Success for an AI system depends on drawing practical inferences from the practice of the user in applying the system to a set of problems in a particular task environment. Once a user becomes a part of the running system, intelligence is no longer located in the system alone; rather it emerges from the history of interactions between system and user."