Recent advances across multiple technology disciplines have made small, low-cost fixed-wing unmanned air vehicles (UAVs) and more complex unmanned ground vehicles (UGVs) feasible solutions for many scientific, civil, and military applications. Cameras can be mounted on board unmanned vehicles for scientific data gathering, surveillance for law enforcement and homeland security, and to provide visual information for detecting and avoiding imminent collisions during autonomous navigation. However, most current computer vision algorithms are computationally complex and usually constitute the bottleneck of the guidance and control loop. In this paper, we present a novel computer vision algorithm for collision detection and time-to-impact calculation based on feature density distribution (FDD) analysis. It requires neither accurate feature extraction and tracking nor estimation of the focus of expansion (FOE). Under a few reasonable assumptions, time-to-impact can be accurately estimated by calculating the spatial expansion rate of the FDD. A sequence of monocular images is studied, and different features are used simultaneously in the FDD analysis to show that our algorithm achieves good accuracy in collision detection. We also discuss reactive path planning and trajectory generation techniques that can be accomplished without violating the velocity and heading-rate constraints of the UAV.