Deep Belief Networks Learn Context Dependent Behavior
BOSTON UNIV MA
With the goal of understanding behavioral mechanisms of generalization, we analyzed the ability of neural networks to generalize across context. We modeled a behavioral task in which the correct responses to a set of specific sensory stimuli varied systematically across different contexts. The correct response depended on the stimulus (A, B, C, D) and the context quadrant (1, 2, 3, 4). Each of the 16 possible stimulus-context combinations was associated with one of two responses (X, Y), each of which was correct for half of the combinations. The correct responses varied symmetrically across contexts, which allowed responses to previously unseen stimuli (probe stimuli) to be generalized from stimuli that had been presented previously. By testing the simulation on two or more stimuli that the network had never seen in a particular context, we could test whether the correct response to the novel stimuli could be generated from knowledge of the correct responses in other contexts. We tested this generalization capability with a Deep Belief Network (DBN), a Multi-Layer Perceptron (MLP) network, and the combination of a DBN with a linear perceptron (LP). Overall, the combination of the DBN and LP had the highest success rate for generalization.
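The task structure described above can be sketched in code. The sketch below is a minimal, hypothetical instantiation: the report does not specify the exact response mapping, so the symmetric XOR-style rule, the probe pairs, and the small one-hidden-layer network (a stand-in for the MLP condition, not the study's actual architecture) are all illustrative assumptions.

```python
import itertools
import numpy as np

STIMULI = ["A", "B", "C", "D"]
CONTEXTS = [1, 2, 3, 4]

def correct_response(stim, ctx):
    # Hypothetical symmetric rule: the response flips when the stimulus half
    # (A,B vs C,D) and the context half (1,2 vs 3,4) disagree.
    return "X" if (STIMULI.index(stim) // 2) ^ ((ctx - 1) // 2) == 0 else "Y"

def encode(stim, ctx):
    # One-hot stimulus (4 units) concatenated with one-hot context (4 units).
    x = np.zeros(8)
    x[STIMULI.index(stim)] = 1.0
    x[4 + ctx - 1] = 1.0
    return x

pairs = list(itertools.product(STIMULI, CONTEXTS))  # all 16 combinations
probes = {("A", 4), ("D", 1)}                       # held-out stimulus-context pairs
train = [p for p in pairs if p not in probes]

X = np.array([encode(s, c) for s, c in train])
y = np.array([1.0 if correct_response(s, c) == "Y" else 0.0 for s, c in train])

# Tiny one-hidden-layer network trained by full-batch backprop.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(8, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=8);      b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    err = p - y                          # cross-entropy gradient w.r.t. the logit
    gW2 = h.T @ err; gb2 = err.sum()
    dh = np.outer(err, W2) * (1 - h**2)  # backprop through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(axis=0)
    W1 -= 0.1 * gW1; b1 -= 0.1 * gb1
    W2 -= 0.1 * gW2; b2 -= 0.1 * gb2

def predict(stim, ctx):
    h = np.tanh(encode(stim, ctx) @ W1 + b1)
    return "Y" if sigmoid(h @ W2 + b2) > 0.5 else "X"

train_acc = np.mean([predict(s, c) == correct_response(s, c) for s, c in train])
probe_acc = np.mean([predict(s, c) == correct_response(s, c) for s, c in probes])
print(f"train accuracy: {train_acc:.2f}, probe accuracy: {probe_acc:.2f}")
```

Because the one-hot stimulus and context codes carry no information about the symmetric rule individually, the mapping is not linearly separable in the input space; a hidden layer (or, in the study, DBN-learned features feeding a linear perceptron) is needed, and probe accuracy measures whether the learned representation supports cross-context generalization.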