Without corrective updates from the Global Positioning System (GPS), navigation accuracy degrades significantly when the inertial navigation system becomes the only source of an unmanned aerial vehicle's (UAV's) motion estimate. Today, unmanned vehicles are readily equipped with a variety of passive sensors, such as video cameras, owing to falling prices and improving sensor resolution. The concept of applying an image-matching technique to an input video camera stream was demonstrated earlier with real flight data using a single low-grade onboard sensor. This technique matches the camera's data stream against a pre-stored repository of geo-referenced reference images to estimate the UAV's current attitude and position. Preliminary results indicated that unfiltered position estimates can be accurate to within roughly 100 meters when flying two kilometers above the surface, and that unfiltered orientation estimates are accurate to within a few degrees. This thesis evaluates the developed algorithms on a suite of video data, seeking to reduce the errors in estimating a UAV's attitude and position. Data sets collected at King City and Camp Roberts, California, are also studied to determine the effect of altitude, terrain pattern, elevation map, lighting conditions, age of reference data, and other parameters on estimation accuracy. This thesis concludes that, in the absence of other sources of navigational information, imagery from a camera is a viable option for providing positional information to a UAV.