Although many researchers hold that an autonomous system capable of behaving appropriately in an uncertain environment must have an internal representation (world model) of the entities, events and situations it perceives in the world, research into active vision and inattentional amnesia has implications for our views on the content of represented knowledge, and it raises issues concerning the coupling of knowledge held over the longer term with dynamically perceived sense data. This includes implications for the formalisms we employ and for ontology. Importantly, in the case of the latter, evidence on the micro-structure of natural vision suggests that ontological description should perhaps be (task-related) feature-oriented rather than object-oriented. These issues are discussed in the context of existing work on developing autonomous agents for a simulated driving world. The view is presented that the reliability of represented knowledge guides information seeking and perhaps explains why some things are ignored.
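To make the feature-oriented versus object-oriented contrast concrete, the following is a minimal, hypothetical Python sketch, not drawn from the paper itself: the class and method names (Car, FeatureStore, bind, lookup) and the driving-task details are invented purely for illustration. It contrasts storing a perceived entity as a complete object description with storing only the task-related features that have actually been attended to and bound.

```python
from dataclasses import dataclass, field

# Hypothetical, illustrative only; not the paper's implementation.

# --- Object-oriented ontology: the world model keeps whole objects ---
@dataclass
class Car:
    # every attribute is represented, whether or not the task needs it
    colour: str
    make: str
    speed_mps: float
    lane: int

# --- Feature-oriented ontology: the world model keeps task-indexed features ---
@dataclass
class FeatureStore:
    # maps (entity_id, feature_name) -> value, populated on demand
    bindings: dict = field(default_factory=dict)

    def bind(self, entity_id: str, feature: str, value):
        self.bindings[(entity_id, feature)] = value

    def lookup(self, entity_id: str, feature: str):
        # a miss would trigger renewed perception (active vision)
        # rather than reliance on a possibly stale stored object
        return self.bindings.get((entity_id, feature))

# An overtaking task needs relative speed and lane, not colour or make;
# features irrelevant to the current task are simply never represented.
store = FeatureStore()
store.bind("car-17", "speed_mps", 31.0)
store.bind("car-17", "lane", 2)
print(store.lookup("car-17", "lane"))    # 2
print(store.lookup("car-17", "colour"))  # None -> look at the scene again
```

On this reading, a lookup miss is not an error but a cue to re-sample the world, which is one way the reliability of represented knowledge could guide information seeking, and why unattended features of otherwise perceived objects may simply never enter the representation.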
Originally published in Robotics and Autonomous Systems, Volume 49, Issues 1-2, 30 November 2004, Pages 79-90, containing publisher's errors (see the publisher's note in Robotics and Autonomous Systems, Volume 51, Issues 2-3, 31 May 2005, Page 215).