COMPUTATIONAL STUDIES OF PRESENTATION STRATEGIES FOR A MULTILEVEL MODEL OF LEARNING.
SYSTEM DEVELOPMENT CORP SANTA MONICA CALIF
We consider a class of look-ahead rules for generating stimulus presentation strategies in learning experiments, i.e., rules based on local optimization over the next one, two, or more trials, given the subject's state of conditioning at the current trial. In previous studies using a two-level single-element model from the stimulus-sampling theory of learning, we proved that the rule R1 generates only globally optimal strategies. In the present work we hypothesize a more general multilevel learning model and put forth two conjectures concerning the look-ahead rule Rh. We report on computational studies performed to test these conjectures. The computations did not refute the conjectures, although they did lead to some modification. The conjectures have not yielded to analytical treatment. The primary conjecture asserts that for an m-level model of learning, the rule Rm-1 generates a globally optimal strategy. Roughly, the second conjecture is the intuitive one that Rk is at least as good as Rh for k ≥ h. (Author)
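To make the Rh rule concrete, the following is a minimal illustrative sketch, not the report's actual model: we assume n stimulus items, each at a conditioning level 0..m, where presenting an item advances it one level with a fixed probability c, and the score of a state is the number of fully conditioned items. Under these assumptions, Rh presents the item whose expected score after an exhaustive h-trial look-ahead is largest. The function names, the transition rule, and the scoring criterion are all hypothetical.

```python
# Illustrative sketch of an R_h look-ahead presentation rule for an
# m-level learning model. ASSUMPTIONS (not from the report): n items,
# each at conditioning level 0..M; presenting an item advances it one
# level with probability C; score = number of items at level M.
from functools import lru_cache

M = 2      # number of conditioning levels (an m-level model, m = 2)
C = 0.3    # probability that a presentation advances the item one level

def score(levels):
    """Number of fully conditioned items in this state."""
    return sum(1 for lv in levels if lv == M)

def present_value(levels, i, h):
    """Expected score after presenting item i now, then following the
    best look-ahead policy for the remaining h-1 trials."""
    lv = levels[i]
    if lv == M:  # already conditioned; presentation changes nothing
        return expected_score(levels, h - 1)
    up = levels[:i] + (lv + 1,) + levels[i + 1:]
    return C * expected_score(up, h - 1) + (1 - C) * expected_score(levels, h - 1)

@lru_cache(maxsize=None)
def expected_score(levels, h):
    """Best expected score reachable from `levels` in h more trials."""
    if h == 0:
        return score(levels)
    return max(present_value(levels, i, h) for i in range(len(levels)))

def r_h(levels, h):
    """R_h rule: index of the item to present at the current trial."""
    return max(range(len(levels)), key=lambda i: present_value(levels, i, h))
```

For example, with two items at levels (0, 1), even the one-trial rule R1 presents item 1, since only advancing an item already at level m-1 can raise the score on the next trial. The conjectures in the abstract concern how large h must be (Rm-1 for an m-level model) before such locally optimal choices are also globally optimal.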