- He's against it (autonomy, that is).
- "by autonomous it is meant systems that can think, reason or act
on their own."
- by "on their own", it is meant that the system, once created, no longer
alters itself in response to explicit instruction from a human user or programmer.
- the paper's position is that even after deployment, "the system must
continue to develop, a project jointly undertaken by user and system."
- Reading the world - putting an autonomous agent in an environment
designed to give it information will help it complete its tasks
- "Traditional" AI looks down upon this as simplifying the problem, or
relying too much on human intervention
- But this is just what the real world does for humans; devices are
designed to communicate their use. ("Affordance", etc.)
FLOABN ("For Lack Of A Better Name")
- System implemented by whom? Zito-Wolf?
- Attempts everyday activities (using a photocopier) by reading the
instructions for the task and attempting to match them to what it sees.
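The matching described here can be sketched very roughly: the agent reads a step from the written instructions and looks for a visible object whose label (and afforded action) appears in the step. This is a minimal, hypothetical sketch; all names and the toy "vision" dictionary are illustrative and not from the paper's actual implementation.

```python
# Hypothetical FLOABN-style instruction matching: pair each written
# instruction step with a visible object that affords the named action.

INSTRUCTIONS = [
    "lift the cover",
    "place the original face down",
    "press the start button",
]

# What the agent currently "sees": object labels and the action each affords.
VISIBLE_OBJECTS = {
    "cover": "lift",
    "original": "place",
    "start button": "press",
}

def match_step(step, visible):
    """Return the (object, action) pair an instruction step refers to."""
    for obj, action in visible.items():
        if obj in step and action in step:
            return obj, action
    return None  # no match: the agent would have to ask or explore

plan = [match_step(s, VISIBLE_OBJECTS) for s in INSTRUCTIONS]
print(plan)
# → [('cover', 'lift'), ('original', 'place'), ('start button', 'press')]
```

The point of the sketch is that the environment does half the work: the copier's parts are labeled and designed to communicate their use, so shallow matching gets surprisingly far.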
Ways to extract information from the environment:
- Norman (1988) talks about other activities where the environment
communicates the task or the solution to the task:
opening a door, flipping a light switch, using a pay phone, etc.
- The actor can get information from other actors, either explicitly
(e.g., asking them) or implicitly (following the flow of people at a movie
theatre to find the bathroom).
- Finally, there is a default knowledge (cultural common ground) about the
layout of buildings, etc. that can be used.
In instructions (e.g., textbooks), the structure of the information is
important; not only that, but users explicitly acknowledge using this
structure.
Get ready and Run!
Sweeping generalizations about how other researchers in AI separate the
'development' stage of their system from the 'deployment' stage, the
claim being that the 'deployment' stage is fully autonomous.
But really (because the researchers end up tweaking the system based
on usage and user feedback) the system goes through cycles of autonomous
and non-autonomous learning.
Furthermore, because the system was designed by humans with a particular
goal in mind, it is immediately non-autonomous, having taken all its
initial input and knowledge directly from the programmer.
So what does autonomy mean, then?
The Work and Practice Of The User
Here the paper shifts gears, and starts talking about why it might be
a good idea to make the system non-autonomous:
- Allows the programmer to continuously adjust the system
- Allows the melding of areas of expertise between the designer
(programmer) and user (domain expert)
- Allows exploitation of regularities in the domain
Then we switch tacks and talk about why the system should be adaptive, and
should reason about the user's practice:
- The system can learn specifics of the task environment
- The system can mitigate imperfections in the design process
- Allows exploration of just what is the best solution to a problem; if
a solution arises that is better than the designed one, a non-adaptive
system is stuck and has to go 'Get Ready' again.
System as Artifact
See the mediation triangle on pg 13.
The idea is that the system should alter the way in which the user thinks
about the task, preferably making it easier cognitively.
Joint Runtime Learning
Pick up data from use of the system and use it to adapt the system.
This is a definition of the adaptation discussed above.
Types of run-time learning:
- System runs and learns autonomously
- Type 1: System learns by interpreting user behavior
- Type 2: user makes explicit some of the things he learns
- User responsible for all runtime adjustments to system
- Type 1:
- No examples are given in the paper, but here are a few off the top of my head:
- An inventory-tracking system at a video store that prompts the manager
to order more copies of popular tapes
- Money-management software that notices monthly bills and offers to set
up a recurring payment
- Type 2:
- dang (Dan Griffin)'s manual server, which asked you to explicitly
mark search results as useful or irrelevant.
- Reading and marking text (highlighting, etc.)
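The Type 1 / Type 2 distinction can be made concrete with a toy sketch: a search tool that learns both implicitly (interpreting the user's ordinary behavior, Type 1) and explicitly (the user marks results useful or irrelevant, Type 2). The class, the scoring weights, and the document names are all illustrative assumptions, not anything from the paper or from dang's manual server.

```python
# Toy contrast between the two types of joint run-time learning.
# Type 1: infer preferences from ordinary use (implicit signal).
# Type 2: the user makes their judgment explicit (strong signal).

from collections import defaultdict

class AdaptiveSearch:
    def __init__(self):
        self.score = defaultdict(float)

    # Type 1: merely opening a result is weak evidence of relevance.
    def record_open(self, doc):
        self.score[doc] += 1.0

    # Type 2: an explicit mark from the user is strong evidence.
    def mark(self, doc, useful):
        self.score[doc] += 5.0 if useful else -5.0

    def rank(self, docs):
        return sorted(docs, key=lambda d: -self.score[d])

s = AdaptiveSearch()
s.record_open("copier-manual")          # Type 1: implicit
s.mark("copier-manual", useful=True)    # Type 2: explicit
s.mark("fax-manual", useful=False)
print(s.rank(["fax-manual", "copier-manual", "printer-manual"]))
# → ['copier-manual', 'printer-manual', 'fax-manual']
```

Either signal alone would adapt the system; the paper's point is that both come from the running user-plus-system pair, not from an autonomous learner.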
Other Models of Non-Autonomy
- An administrative assistant
- A collaborator
- A cognitive artifact
- A part of the user's "surround" (cf. Perkins 1993)
Friends don't let friends think autonomously.
"Success for an AI system depends on drawing practical inferences from the
practice of the user in applying the system to a set of problems in
a particular task environment. Once a user becomes a part of the running
system, intelligence is no longer located in the system alone; rather it
emerges from the history of interactions between system and user."