Accession Number:

ADA458916

Title:

Belief and Incompleteness

Descriptive Note:

Technical note no. 319

Corporate Author:

SRI INTERNATIONAL MENLO PARK CA ARTIFICIAL INTELLIGENCE CENTER

Personal Author(s):

Report Date:

1984-07-13

Pagination or Media Count:

70

Abstract:

Both agents are state-of-the-art constructions, incorporating the latest AI research in chess playing, natural-language understanding, planning, etc. But because of the overwhelming combinatorics of chess, neither they nor the fastest foreseeable computers would be able to search the entire game tree to find out whether White has a forced win. Why, then, do they come to such an odd conclusion about their own knowledge of the game? The chess scenario is an anecdotal example of the way inaccurate cognitive models can lead to behavior that is less than intelligent in artificial agents. In this case, the agents' model of belief is not correct. They make the assumption that an agent actually knows all the consequences of his beliefs. S1 knows that chess is a finite game, and thus reasons that, in principle, knowing the rules of chess is all that is required to figure out whether White has a forced initial win. After learning that S2 does indeed know the rules of chess, he comes to the erroneous conclusion that S2 also knows this particular consequence of the rules. And S2 himself, reflecting on his own knowledge in the same manner, arrives at the same conclusion, even though in actual fact he could never carry out the computations necessary to demonstrate it.
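
The assumption criticized here is the logical-omniscience property of standard possible-worlds models of belief. As a minimal sketch in conventional modal-logic notation (the report's own formalism may differ): writing $B_a\varphi$ for "agent $a$ believes $\varphi$", the distribution axiom K together with the necessitation rule,

\[
B_a(\varphi \rightarrow \psi) \rightarrow (B_a\varphi \rightarrow B_a\psi),
\qquad\qquad
\frac{\vdash \varphi}{\vdash B_a\varphi},
\]

closes belief under logical consequence: if $\varphi_1,\ldots,\varphi_n \vdash \psi$ and the agent believes each $\varphi_i$, the model attributes belief in $\psi$ as well, irrespective of the computation actually needed to derive it. This is exactly the idealization that lets S1 and S2 conclude that knowing the rules of chess entails knowing whether White has a forced win.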

Subject Categories:

  • Linguistics
  • Operations Research
  • Cybernetics

Distribution Statement:

APPROVED FOR PUBLIC RELEASE