Conventional orthorectification software cannot handle
surface occlusions or account for image visibility. The approach
presented here synthesizes related work in photogrammetry
and computer graphics/vision to automatically produce
orthographic and perspective views based on fully 3D
surface data (supplied by laser scanning). Surface occlusions
in the direction of projection are detected to create the
depth map of the new image. Through visibility checking by
back-projection of surface triangles, this depth map identifies
all source images entitled to contribute color to each
pixel of the novel image. Weighted
texture blending regulates the local radiometric
contribution of each participating source image, while outlying
color values are automatically discarded by a basic
statistical test. Experimental results from a close-range
project indicate that this fusion of laser scanning with multiview
photogrammetry could indeed combine geometric
accuracy with high visual quality and speed. A discussion of
planned improvements to the algorithm is also included.
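The weighted blending and outlier-rejection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-channel sigma test and the equal weighting used in the example are assumptions, since the abstract does not specify the exact statistical test or weighting scheme.

```python
import numpy as np

def blend_pixel_colors(colors, weights, k=2.0):
    """Blend candidate colors for one pixel of the novel image.

    colors  : (n, 3) array, one RGB sample per visible source image
    weights : (n,) array of blending weights (e.g. derived from viewing
              angle or image resolution; the weighting is an assumption)
    k       : outlier threshold in standard deviations (illustrative)
    """
    colors = np.asarray(colors, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Basic statistical test (assumed here: per-channel sigma test
    # around the mean) to discard outlying color values, e.g. from
    # occlusions missed by the depth map or specular highlights.
    mean = colors.mean(axis=0)
    std = colors.std(axis=0) + 1e-9
    inlier = np.all(np.abs(colors - mean) <= k * std, axis=1)
    if not inlier.any():          # degenerate case: keep everything
        inlier[:] = True
    c, w = colors[inlier], weights[inlier]
    # Weighted blending of the surviving contributions.
    return (w[:, None] * c).sum(axis=0) / w.sum()
```

For example, with two consistent red samples and one gray outlier, a tight threshold rejects the gray sample and returns the weighted mean of the remaining two.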