Explanation-based learning is a recently developed approach to concept acquisition by computer. In this type of machine learning, a specific problem's solution is generalized into a form that can later be used to solve conceptually similar problems. A number of explanation-based generalization algorithms have been developed. Most do not alter the structure of the explanation of the specific problem: neither additional objects nor additional inference rules are incorporated. Instead, these algorithms generalize by converting constants in the observed example to variables with constraints. However, many important concepts, in order to be properly learned, require that the structure of explanations be generalized. This can involve generalizing such things as the number of entities involved in a concept or the number of times some action is performed. For example, concepts such as momentum and energy conservation apply to arbitrary numbers of physical objects, clearing the top of a desk can require an arbitrary number of object relocations, and setting a table can involve an arbitrary number of guests. Two theories of extending explanations during the generalization process have been developed, and computer implementations have been created to test these approaches. The Physics 101 system utilizes characteristics of mathematically-based problem solving to extend mathematical calculations in a psychologically plausible way, while the BAGGER system implements a domain-independent approach to generalizing explanation structures. Both of these systems are described and the details of their algorithms are presented. An approach to the operationality/generality trade-off and an empirical analysis of explanation-based learning are also presented.
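The constants-to-variables step mentioned above can be illustrated with a toy sketch (this is not the actual code of either system described): each distinct constant in a ground explanation is replaced by a fresh variable, and repeated constants map to the same variable, which is how the example's inter-fact constraints survive generalization.

```python
def variablize(explanation):
    """Generalize a ground explanation by turning constants into variables.

    `explanation` is a list of (predicate, arg1, arg2, ...) tuples whose
    arguments are constants. Shared constants become the same variable,
    preserving the structural constraints of the specific example.
    """
    mapping = {}       # constant -> variable
    generalized = []
    for pred, *args in explanation:
        new_args = []
        for const in args:
            if const not in mapping:
                mapping[const] = f"?x{len(mapping)}"  # fresh variable
            new_args.append(mapping[const])
        generalized.append((pred, *new_args))
    return generalized, mapping

# A specific observation: block A is on block B, and B is on block C.
specific = [("on", "A", "B"), ("on", "B", "C")]
general, bindings = variablize(specific)
# general == [("on", "?x0", "?x1"), ("on", "?x1", "?x2")]
# The shared constant B became the shared variable ?x1.
```

Note that this kind of generalization leaves the explanation's structure fixed: it always describes exactly two "on" facts, which is precisely the limitation that structure-generalizing approaches such as BAGGER address.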