The mapping from a 3D scene to the 2D retinal or camera image is many-to-one. It follows that the inverse problem of recovering 3D objects and scenes from a single 2D image is ill-posed. The human visual system solves this problem nearly perfectly: we see objects around us the way they are “out there.” This visual ability, called “veridical vision,” remained, until very recently, unexplained. I will describe a new theory of human 3D vision in which a regularization method is used to solve this ill-posed inverse problem. In this theory, the symmetry of objects is the main a priori constraint. Using symmetry as a constraint makes sense because all natural objects are symmetrical, or nearly so. This is true of animal bodies and plants, as well as of man-made objects. Symmetry proves to be computationally effective because it captures abstract 3D characteristics of objects, rather than the shapes of particular objects. Using an abstract constraint allows the visual system to recover both familiar and unfamiliar 3D shapes and scenes. Note that because symmetry is a mathematical concept, the human visual system does not have to learn it from experience; we are simply born with it. The talk will conclude with a brief discussion of how this new theory generalizes to the case of multiple 2D views and how it compares to a least-action principle used in physics.
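To see how a symmetry constraint can make the inverse problem well-posed, consider a minimal toy sketch (not the talk's actual model): a pair of 3D points mirror-symmetric about the plane x = 0, viewed under orthographic projection after a rotation by a known angle about the vertical axis. Without the symmetry assumption, depth is unrecoverable from a single image; with it, the two image points determine the 3D pair in closed form. The function name, coordinate conventions, and known view angle are all illustrative assumptions.

```python
import numpy as np

def recover_symmetric_pair(u1, u2, v, theta):
    """Recover a mirror-symmetric 3D point pair (x, y, z) and (-x, y, z)
    from their orthographic image coordinates.

    Toy illustration only. Assumes: symmetry plane x = 0 in object
    coordinates, object rotated by a KNOWN angle theta about the y-axis,
    orthographic projection onto the (u, v) image plane. In the general
    problem theta is unknown and must also be estimated.
    """
    # Image u-coordinates of the pair are:
    #   u1 =  x*cos(theta) + z*sin(theta)
    #   u2 = -x*cos(theta) + z*sin(theta)
    # so the symmetry constraint yields a unique solution:
    z = (u1 + u2) / (2.0 * np.sin(theta))  # depth in the object frame
    x = (u1 - u2) / (2.0 * np.cos(theta))  # offset from the symmetry plane
    y = v                                  # unchanged by rotation about y
    return np.array([x, y, z]), np.array([-x, y, z])

# Forward check: project a known symmetric pair, then invert.
theta = np.deg2rad(30)
x, y, z = 0.4, 0.7, 1.2
u1 = x * np.cos(theta) + z * np.sin(theta)
u2 = -x * np.cos(theta) + z * np.sin(theta)
p, p_mirror = recover_symmetric_pair(u1, u2, y, theta)
print(p)  # recovers [0.4, 0.7, 1.2]
```

The point of the sketch is the general one made in the abstract: the constraint is stated abstractly (mirror symmetry), not as a stored model of a particular object, so the same equations apply to any symmetric shape, familiar or not.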