Techniques for texturing reconstructed surfaces in real time?

I am hoping to get some suggestions for techniques for texturing reconstructed surfaces. Maybe this is the right place to get some input, and if not I would appreciate a nudge in the right direction. The problem, stated plainly: for a given reconstructed mesh, efficiently calculate or retrieve the observed color from a previously seen view of it. I have included two videos of prior attempts at solving this that should explain what I am getting at.

I have a pretty robust engine such that for every camera frame I know the world-space pose of the frame, the world position of every pixel, the normal of every pixel, its color, and the surface data in a sparse voxel hashmap (TSDF). I have tried two naive approaches myself, but both have issues. The first, and easiest to understand, is to store a per-vertex color for every voxel seen by the camera. This is always accurate, but it can't scale to larger worlds, it needs a high density of triangles on the mesh to look 'right', and it adds 24 bits to every voxel. This is seen in https://youtu.be/3Cv2CYkK_wU (100fps+)
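For reference, the per-voxel color accumulation is roughly the following (a minimal C++ sketch rather than my actual code; the Voxel layout, the integrateColor name, and the weighting scheme are illustrative):

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative voxel record for a sparse TSDF hashmap. The three color bytes
// are the extra 24 bits per voxel mentioned above.
struct Voxel {
    float   tsdf   = 1.0f;   // truncated signed distance
    float   weight = 0.0f;   // accumulated integration weight
    uint8_t r = 0, g = 0, b = 0;
};

// Fold a new color observation into the voxel as a weighted running average,
// mirroring the usual TSDF weight update.
inline void integrateColor(Voxel& v, uint8_t r, uint8_t g, uint8_t b, float obsWeight)
{
    const float wOld = v.weight;
    const float wSum = wOld + obsWeight;
    if (wSum <= 0.0f) return;
    v.r = static_cast<uint8_t>((v.r * wOld + r * obsWeight) / wSum);
    v.g = static_cast<uint8_t>((v.g * wOld + g * obsWeight) / wSum);
    v.b = static_cast<uint8_t>((v.b * wOld + b * obsWeight) / wSum);
    v.weight = std::min(wSum, 255.0f);  // cap so old observations do not freeze the color
}
```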

The second technique I have tried is planar mapping. The idea is to treat a mesh's bounding box as having 6 textures, one per face. Whenever there is a direct ray from the camera to the surface, the surface normal is used to project the color onto several of these 6 textures, and the shader in turn samples from those same 6 textures. This is pretty memory efficient and certainly renders quickly, as seen in https://youtu.be/6RaeNFNfm4c (60fps+). The problem is that if there are two surfaces (overhangs) whose normals point to the same texture, you can effectively only know the color of the one closest to that bounding-box face. The other issue is that the 6 textures are updated as more observations are made, so one blurry frame or a very small misalignment can bleed into the textures and corrupt their colors. I have tried to reduce this by adding velocity and translational-speed gates on the camera, but that only hides the issue and doesn't fundamentally solve it.
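To make that concrete, here is a simplified sketch of the face selection and UV derivation (C++ with GLM; it picks a single dominant face rather than splatting into several, and BoundingBox / dominantFace / planarUV are just illustrative names):

```cpp
#include <glm/glm.hpp>

struct BoundingBox { glm::vec3 min, max; };

// Pick one of the 6 bounding-box faces from the surface normal.
// Returns 0..5 for +X, -X, +Y, -Y, +Z, -Z.
int dominantFace(const glm::vec3& n)
{
    glm::vec3 a = glm::abs(n);
    if (a.x >= a.y && a.x >= a.z) return n.x >= 0.0f ? 0 : 1;
    if (a.y >= a.z)               return n.y >= 0.0f ? 2 : 3;
    return n.z >= 0.0f ? 4 : 5;
}

// Project a world-space point onto the chosen face and normalize to [0,1] UVs.
glm::vec2 planarUV(const glm::vec3& p, const BoundingBox& box, int face)
{
    glm::vec3 t = (p - box.min) / (box.max - box.min);  // normalized box coordinates
    switch (face) {
        case 0: case 1: return glm::vec2(t.z, t.y);  // X faces
        case 2: case 3: return glm::vec2(t.x, t.z);  // Y faces
        default:        return glm::vec2(t.x, t.y);  // Z faces
    }
}
```

The overhang problem falls straight out of this: two surfaces whose normals share the same dominant axis map to the same (u, v), so only one of them can own that texel.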

I have been reading about view-based rendering and backprojection, but the academic literature is hard to follow (at least for me) and is often not friendly to real-time rendering. So I am here hoping to get some advice. One gap in my understanding: how would a shader, given a fragment position, grab the corresponding pixel from a set of textures (previously captured views) in which that fragment was visible?
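From what I can piece together, the math would be projective texturing against each stored view: transform the fragment's world position by that view's view-projection matrix, perspective-divide, remap to [0,1] UVs, then compare against that view's depth map to reject fragments that were occluded when the frame was captured. Here is a rough C++/GLM sketch standing in for the shader logic (the Keyframe struct, the sample callbacks, and the OpenGL-style depth remap are assumptions on my part, not something I have working):

```cpp
#include <glm/glm.hpp>
#include <cmath>
#include <optional>

// One stored view (keyframe). The sample callbacks stand in for texture
// lookups on that view's color and depth images.
struct Keyframe {
    glm::mat4 viewProj;                       // projection * view at capture time
    glm::vec3 (*sampleColor)(glm::vec2 uv);
    float     (*sampleDepth)(glm::vec2 uv);   // depth stored in [0,1]
};

// Reproject a fragment's world position into the keyframe; return its observed
// color, or nothing if it was outside the frustum or occluded in that view.
std::optional<glm::vec3> fetchObservedColor(const glm::vec3& worldPos,
                                            const Keyframe& kf,
                                            float depthEps = 0.01f)
{
    glm::vec4 clip = kf.viewProj * glm::vec4(worldPos, 1.0f);
    if (clip.w <= 0.0f) return std::nullopt;                // behind the camera
    glm::vec3 ndc = glm::vec3(clip) / clip.w;               // perspective divide
    if (glm::any(glm::lessThan(ndc, glm::vec3(-1.0f))) ||
        glm::any(glm::greaterThan(ndc, glm::vec3(1.0f))))
        return std::nullopt;                                // outside that view's frustum
    glm::vec2 uv = glm::vec2(ndc) * 0.5f + 0.5f;            // NDC -> [0,1] texture coords
    float fragDepth = ndc.z * 0.5f + 0.5f;                  // OpenGL-style depth remap
    // Visibility test: if the stored depth disagrees, something else was in
    // front of this point when the keyframe was captured.
    if (std::abs(kf.sampleDepth(uv) - fragDepth) > depthEps) return std::nullopt;
    return kf.sampleColor(uv);
}
```

This assumes I keep a color and a depth image per keyframe, which is where I suspect the memory cost and the question of which views to even check per fragment come in.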

submitted by /u/JRhapsodus