Evaluations of system operational effectiveness, suitability, and survivability increasingly rely on models to supplement live testing. For these supplements to be valuable, testers must understand how well the models represent the systems or processes they simulate. This means testers must quantify the uncertainty in that representation and understand its impact. Two broad categories of uncertainty, statistical and knowledge, are of central importance to test and evaluation (T&E), particularly as testers extrapolate model output and live test data into predictions of performance in combat. The validation process should include parametric analyses and a comparison of simulation output to live data to support quantification of statistical uncertainty. However, qualitative and non-statistical techniques may also be required to compare the hypothetical future combat environment with the non-quantitative portions of the validation referent.
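One way the comparison of simulation output to live data might be carried out is sketched below. This is an illustrative assumption, not a method prescribed by the text: it uses a two-sample Kolmogorov-Smirnov statistic to compare the two samples and a bootstrap interval to quantify statistical uncertainty, with hypothetical miss-distance data standing in for real test results.

```python
# Hedged sketch: comparing simulation output to live test data.
# All sample sizes, distributions, and quantities are illustrative
# assumptions, not values from the source document.
import random
import statistics

random.seed(0)

# Hypothetical miss-distance samples (meters): many cheap simulation
# runs versus a small number of expensive live test events.
sim_output = [random.gauss(10.0, 2.0) for _ in range(1000)]
live_data = [random.gauss(10.5, 2.2) for _ in range(25)]

def ks_statistic(a, b):
    """Largest vertical gap between the two empirical CDFs."""
    a, b = sorted(a), sorted(b)

    def ecdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

d = ks_statistic(sim_output, live_data)
print(f"KS statistic = {d:.3f}")  # larger values suggest a worse match

def bootstrap_mean_diff(a, b, reps=2000):
    """95% bootstrap interval on the difference in sample means."""
    diffs = sorted(
        statistics.fmean(random.choices(a, k=len(a)))
        - statistics.fmean(random.choices(b, k=len(b)))
        for _ in range(reps)
    )
    return diffs[int(0.025 * reps)], diffs[int(0.975 * reps)]

lo, hi = bootstrap_mean_diff(sim_output, live_data)
print(f"95% bootstrap CI on mean difference: [{lo:.2f}, {hi:.2f}]")
```

A wide bootstrap interval here would signal that the small live sample limits how precisely the model's fidelity can be judged, which is exactly the statistical uncertainty the validation process is meant to quantify.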