CVPR 2019 Day 1

What an early flight to Long Beach! I woke up at 3:30 am and found there were still Lyft drivers available in the middle of the night. Have to say that Lyft/Uber makes life easier. Just a reminder, though: SJC doesn't open check-in until 4:00 am… so no need to rush there anyway.

So Sunday and Monday are for the workshops. In the morning I went to the 3D Scene Understanding workshop and listened to a good talk on "What Do Single-view 3D Reconstruction Networks Learn?" It points out that the current state-of-the-art single-image reconstruction methods are, to a large extent, just doing image retrieval. This is because the shape similarity metrics in use are not good enough, and the training set is contaminated with models that already look very similar to the ones in the test set. Also, using a fixed model pose as the single-image input fits the 2D image case, but it is not really the best setup for the 3D mesh case. The talk really clears up some issues in 3D reconstruction research, and I think the paper is worth reading. You can find the paper here, and here is the YouTube video for the talk.
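If I remember the talk correctly, one of its points is that shape quality should be measured with an F-score over sampled surface points rather than Chamfer distance or IoU. Here is a minimal sketch of that kind of metric, just my own illustrative code (not the authors' implementation), with an arbitrary placeholder threshold:

```python
import numpy as np
from scipy.spatial import cKDTree

def fscore(pred_points, gt_points, threshold=0.01):
    """F-score between two point clouds at a distance threshold.

    pred_points, gt_points: (N, 3) and (M, 3) arrays of XYZ points,
    e.g. sampled from the predicted and ground-truth mesh surfaces.
    """
    # Distance from each predicted point to its nearest ground-truth point.
    d_pred_to_gt, _ = cKDTree(gt_points).query(pred_points)
    # Distance from each ground-truth point to its nearest predicted point.
    d_gt_to_pred, _ = cKDTree(pred_points).query(gt_points)

    precision = np.mean(d_pred_to_gt < threshold)  # how much of the prediction is "correct"
    recall = np.mean(d_gt_to_pred < threshold)     # how much of the ground truth is "covered"
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A prediction identical to the ground truth scores 1.0.
gt = np.random.rand(2048, 3)
print(fscore(gt.copy(), gt, threshold=0.01))
```

The nice property is that both halves matter: a retrieved-but-wrong shape can't score well just by covering space the way it sometimes can under Chamfer distance.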

However, on the same day Facebook AI also presented their Mesh R-CNN, which basically reconstructs a mesh from a single image, much like their Mask R-CNN creates 2D masks from a single image. It would be interesting to check that paper and see whether it runs into any of the issues pointed out by the work above.

In the afternoon my colleague led me to the ScanNet benchmark challenge workshop. Professor Matthias Nießner has been really active in facial/body reconstruction work, and his work has now also expanded to general-scene 3D capture and registration. ScanNet is trying to create a dataset with vertex-level labeling plus 3D bounding boxes, like a 3D version of ImageNet. The workshop was basically an exhibition of everyone who participated in the detection tasks on the ScanNet dataset. The Stanford work achieved very good results by taking advantage of temporal coherence information. It is a very interesting idea that fundamentally optimizes the data representation and training procedure. Very nice result.

Later in the afternoon I went back to my original research domain to take a look at the survey-style talk on state-of-the-art human body/facial capture given by Michael J. Black. I do feel there is great potential here. Need to investigate when I have some spare time. And here is the video for that.

I think on Monday I will be in the AR/VR session. Hope to learn more in this area, or at least see which parts people still haven't covered yet…