The first day at the LA Convention Center was enjoyable and fun!
About the people I was lucky to meet:
The morning started with introduction-level information about what will happen at SIGGRAPH 2015. I went to the VFX talk where the Double Negative crew talked about how they made the T-1000 in the new Terminator. Surprisingly, when I looked back at the entrance, I saw my DWA supervisor! Then things became crazy. In the afternoon, after the ON AND UNDER THE SURFACE talk, I met all the people I had screen or phone interviews with, including a Sr. Software Engineer from Disney and the lead who created the T-rex in the first Jurassic Park. All the people I needed to thank, I could now say THANK YOU to in person! I also spotted a colleague from DWA; I only saw his back, but I felt it was him. Since I also list his name on my poster, I hope he can come and check it out. Basically, everyone was nice (at least during the conference…).
About my poster presentation:
I had prepared it for a week, and at the station we talked through the slides with about 7 judges, some from Disney, some from NASA. It was a great experience. The major question was: when different people show different expressions for the same emotion, how does the system handle that? Another was whether the current design supports multiple people in the scene.
I think the first one is really about how to link expressions and emotions. I mentioned the Action Unit description and clarified that this is still an open question. For the second one, we explained that if multiple skeleton tracking streams are available, it is achievable.
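As a toy illustration of the Action Unit idea (this is not the poster's actual system; the AU-to-emotion table below is a simplified assumption based on common FACS textbook examples, and the overlap scoring is my own sketch):

```python
# Toy sketch: map a set of detected FACS Action Units (AUs) to a coarse
# emotion label. The AU combinations are simplified textbook patterns
# (e.g. AU6 + AU12 for happiness), not a real classifier.
EMOTION_AU_PATTERNS = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
}

def classify_emotion(detected_aus):
    """Return the emotion whose AU pattern best overlaps the detected AUs."""
    best, best_score = "neutral", 0.0
    for emotion, pattern in EMOTION_AU_PATTERNS.items():
        score = len(pattern & detected_aus) / len(pattern)
        if score > best_score:
            best, best_score = emotion, score
    return best

print(classify_emotion({6, 12}))  # -> happiness
```

The open question the judges raised is exactly why such fixed tables fall short: different people express the same emotion with different AU combinations.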
About the On and Under the Surface talk:
This was a multi-section talk covering how the D-Rex's multi-resolution model was created and maintained in the ILM pipeline, and how the Auto SIM multi-layer muscle simulation system simulates tissues from the muscle layer to the fat layer for bodies like the HULK in Age of Ultron. Disney talked about their animation rendering system in Maya and how it reaches real-time playback by using a game-style rendering engine, Nitro, with rig caching to give artists the ability to preview their animation in real time.
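The rig-caching idea can be sketched roughly like this (a minimal illustration I wrote, assuming a simple per-frame cache of deformed vertices; the real Nitro system is far more sophisticated):

```python
# Minimal sketch of rig caching for playback: evaluate the (expensive)
# rig once per frame, cache the deformed vertex positions, then replay
# from the cache at interactive rates. Purely illustrative.
class RigCache:
    def __init__(self, evaluate_rig):
        # evaluate_rig(frame) -> list of vertex positions (the heavy call)
        self.evaluate_rig = evaluate_rig
        self.cache = {}

    def get_frame(self, frame):
        if frame not in self.cache:       # cold: evaluate once and store
            self.cache[frame] = self.evaluate_rig(frame)
        return self.cache[frame]          # warm: instant replay

calls = []
def slow_rig(frame):
    calls.append(frame)                   # stands in for heavy deformation
    return [(float(frame), 0.0, 0.0)]

cache = RigCache(slow_rig)
cache.get_frame(1)
cache.get_frame(1)                        # second playback hits the cache
print(len(calls))  # -> 1
```

The trade-off is memory for speed: once a frame is cached, scrubbing the timeline never re-runs the rig for it.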
About the fast-forward technical paper presentation (2 hours for 156 technical papers!):
Each technical paper group has only 30 seconds to sell their research. Some went the traditional route and raced to fit everything into the 30 seconds; some just played their video with audio and finally said THANK YOU. One presenter tried to solve a Rubik's Cube in 30 seconds while pitching their 3D puzzle printing work (unfortunately the cube broke in his hands; he must have practiced it too much!). As for the always-cool Professor Hao Li, his two papers really spoke for themselves. In the Oculus one, the presenter just wore the headset and danced like the Japanese rainbow cat for 30 seconds! His students are also cool, but personally I find them a bit too geeky…
Anyway, here is a super good resource for most of the SIGGRAPH papers:
I am interested in these sections:
Let’s do time warping
Video Processing: High-Quality Streamable Free-Viewpoint Video
Simulating with Surface