A major challenge for research in Artificial Intelligence (AI) is to develop systems that can infer humans' goals and beliefs by observing their behavior alone (i.e., systems that have Theory of Mind, ToM). In this research we use a theoretically grounded, pre-existing cognitive model to demonstrate the development of ToM from observation of other agents' behavior. The cognitive model relies on Instance-Based Learning Theory (IBLT) of experiential decision making, which distinguishes it from previous models that are hand-crafted for particular settings, complex, or unable to explain the cognitive development of ToM. An IBL model was designed to observe agents' navigation in gridworld environments and was queried afterwards to predict the actions of new agents in new (not previously experienced) gridworlds. The IBL observer can infer and predict potential behaviors from just a few samples of the past behavior of random and goal-directed reinforcement learning agents. Furthermore, the IBL observer is able to infer an agent's false belief and pass a classic ToM test commonly used with humans. We discuss the advantages of using IBLT to develop models of ToM, and the potential to predict human ToM.
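The general IBLT mechanism underlying such an observer can be sketched as follows: instances of (context, action, utility) are stored with timestamps, recency-weighted activations determine each instance's retrieval probability, and actions are ranked by blended utility. This is a minimal illustrative sketch, not the paper's implementation; the class name, parameter values, and use of Gaussian activation noise are assumptions.

```python
import math
import random
from collections import defaultdict

DECAY = 0.5   # memory decay parameter d (assumed value)
NOISE = 0.25  # activation noise sigma (assumed value)

class IBLObserver:
    """Minimal instance-based learner: stores (context, action, utility)
    instances and ranks actions by blended utility, following IBLT."""

    def __init__(self):
        # (context, action, utility) -> list of observation timestamps
        self.instances = defaultdict(list)
        self.t = 0

    def observe(self, context, action, utility):
        # Record one observed step of another agent's behavior.
        self.t += 1
        self.instances[(context, action, utility)].append(self.t)

    def activation(self, occurrences):
        # Base-level activation: recency/frequency term plus noise.
        base = math.log(sum((self.t + 1 - ti) ** -DECAY for ti in occurrences))
        return base + random.gauss(0, NOISE)

    def blended_value(self, context, action):
        # Blended utility: retrieval-probability-weighted average of
        # stored utilities, with a Boltzmann softmax over activations.
        entries = [(u, self.activation(occ))
                   for (c, a, u), occ in self.instances.items()
                   if c == context and a == action]
        if not entries:
            return 0.0
        tau = NOISE * math.sqrt(2)
        m = max(act for _, act in entries)
        weights = [math.exp((act - m) / tau) for _, act in entries]
        total = sum(weights)
        return sum(u * w / total for (u, _), w in zip(entries, weights))

    def predict(self, context, actions):
        # Predict the action with the highest blended utility.
        return max(actions, key=lambda a: self.blended_value(context, a))
```

After observing a few steps of an agent's behavior in a given gridworld cell, `predict` returns the action the observer expects that agent to take there next.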