SIGGRAPH 2015 Day 3

Sorry for the late post, but I will definitely finish the five-day series of this journey. :)

In the morning I attended the studio course, which covered general-purpose GPU computing (GPGPU) on mobile phones. Personally, I felt the presentation was not that clear on the technical details, but it definitely covered a lot of ground about what GPGPU technology looks like on mobile devices. The picture below compares the current computing interfaces. I suppose that when there is heavy computation that does not need to be sent to a server (like real-time tracking on a mobile system), utilizing the GPU for the task should be a good option.

[Photo: comparison of current mobile computing interfaces]
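As a minimal sketch of the idea (my own toy example, not from the course), this is what offloading a data-parallel job to the GPU looks like through a portable API such as OpenCL, here via the pyopencl bindings on a desktop; on a phone you would compile the same kernel against the vendor's OpenCL driver:

```python
# Toy GPGPU example: square a large vector on whatever OpenCL device
# (a GPU, if one is available) the runtime finds.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()              # pick an available OpenCL device
queue = cl.CommandQueue(ctx)

a = np.random.rand(1 << 20).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel body runs once per element, in parallel, on the device.
prg = cl.Program(ctx, """
__kernel void square(__global const float *a, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] * a[gid];
}
""").build()
prg.square(queue, a.shape, None, a_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)     # copy the result back to the host
```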

Exhibition

The main entrance of the exhibition hall. Besides hosting GTC talks, Nvidia has a booth to show off its GPU technology for games and movie production. Intel, on the right side, shows its graphics technology for the CPU. Fittingly, these pictures show the current leaders of computer graphics.

Besides the academic mingling, the exhibition is another important platform where companies show off their “coming soon” technologies. Below are some highlights.

Multiple companies here focus on body skeleton tracking using infrared camera arrays and reflective markers. Their pipelines also handle motion retargeting. These companies provide third-party solutions for movie and game studios.
EPSON demonstrates its augmented reality eyewear. The AR scene is displayed on a small section of the glasses. You need to connect the glasses to a dedicated piece of hardware (about the size of a phone) for touch-based navigation. I feel the system is still something of a prototype, perhaps targeting certain professional users rather than everyone. And yes, we have the Pixar booth at the back!!!!!
This French company shows a very smooth eye-tracking demo here. Within a limited distance, the system can track eye gaze movement and use it to control reading. It is very smooth and effective, and could definitely be used in hospitals for patients who cannot move.
A plug-in style 3D scanning system for mobile devices that very efficiently creates a still 3D model of any object.
Qualcomm shows a lot of advanced computer vision technology this year. Here is a cloud-based object recognition service for mobile devices. It will come with their new CPU, so it may take a while to arrive.
USense is a startup targeting VR/AR experiences. Applause for a Chinese startup in the US! They propose two systems. The mobile one has two cameras inside the headset to help recognize hand gestures. The wired version has two cameras to sense the 3D world plus, I think, a Leap Motion sensor for hand gestures.
I met Evan at SIGGRAPH! What a surprise to run into the guy I had emailed with before about the DI4D system. Their data quality is good, and the new head-mounted capture system is untethered and lightweight. So great to meet him.

 

Advanced VR Development Experience Sharing

In the afternoon I attended the talks about VR applications and experience. Sony discussed what they learned from their new headset. To guide the user in a fully 3D environment, using 3D sound to draw attention is very important, since it is super easy to lose the thread of the story in 3D video. Also, to enhance the VR experience, hand gestures, especially the grasp motion, need to be detected efficiently.
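As an illustration of that audio-guidance point (my own toy sketch, not Sony's system), a cue can be panned toward the event's direction relative to where the viewer is looking, so the louder ear nudges them to turn:

```python
# Toy constant-power stereo pan: make the audio cue louder in the ear
# closer to the event, nudging the viewer to turn toward the story.
import math

def stereo_gains(event_azimuth_deg, head_yaw_deg):
    """Return (left, right) gains from the event's angle relative to gaze."""
    rel = math.radians(event_azimuth_deg - head_yaw_deg)
    pan = max(-1.0, min(1.0, math.sin(rel)))   # -1 = hard left, +1 = hard right
    theta = (pan + 1.0) * math.pi / 4.0        # map pan to [0, pi/2]
    return math.cos(theta), math.sin(theta)    # constant power: L^2 + R^2 = 1

# Event 90 degrees to the viewer's right -> right channel carries the cue.
left, right = stereo_gains(event_azimuth_deg=90, head_yaw_deg=0)
print(round(left, 3), round(right, 3))         # ~0.0 1.0
```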

Another talk was about using VR for realistic journalism: creating a new medium that brings the audience to the scene of a crime or a war zone to experience the negative emotions firsthand. For the effect, please refer to the Buzzfeed video below. The last talk was from a startup building a drone that can capture 360-degree video.


A secret everyone knows

I feel lucky because I caught this chance. Every SIGGRAPH, Pixar gives away 1,500 RenderMan teapots (in the shape of the famous Utah teapot) to anyone: three days, 500 each day. The line is crazy, and normally the teapots are gone in less than 15 minutes. Each teapot comes with a metal box and a unique ID. OMG, it really will be one of the things that encourages me to come back next year!

So if you want one, remember to check with the Pixar folks about the timing once the exhibition opens. Oh, and Pixar also likes to host a RenderMan party during SIGGRAPH. It requires an invitation and fills up super fast, so also ask the employees how to register.


Make your Python experience feel like working in Visual Studio!

So suppose you are a Windows programmer used to the way Visual Studio arranges and debugs code for you. When you get into Python, setting up a new IDE and a debugging environment with pdb.set_trace() is, how should I say, a new experience you may not work efficiently with. Today I found a plug-in for Visual Studio (multiple versions) that makes Python programming and debugging feel like working with C++ in VS: the same debugging experience, and even a Python interactive window! I have to say I love it so much.
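For contrast, here is the terminal pdb workflow I mean (the function is just a made-up example): you plant a breakpoint in the source by hand and step through with typed commands instead of clicking in an IDE:

```python
# The classic terminal workflow: plant a breakpoint in the source by hand,
# then step through with pdb commands (n = next, s = step, p = print expr).
import pdb

def blend_weights(raw):            # made-up example function
    total = sum(raw)
    pdb.set_trace()                # execution pauses here in the terminal
    return [w / total for w in raw]

print(blend_weights([1.0, 2.0, 3.0]))
```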

So suppose you already have VS installed. Just go HERE and look at the top right corner for the plug-in link. For VS 2013 and later, please use v2.2. During installation it asks for the location of your Python installation; point it at the one you already have. Then you can program Python in Visual Studio. You can even create a new Python project from your existing Python code. The tool will create two files in the same directory as your code: a .pyproj file and a .sln file. You can also follow the video below for more of the tool's features.

SIGGRAPH 2015 Day 2

On Monday, after our poster presentation, we could fully enjoy the conference!

The first thing I chose to do was join the panel offered by research leaders from the University of Arkansas at Little Rock, Intel, Ford Motor, and Microsoft on the topic “The Renaissance of VR: Are We Going to Do It Right This Time?” From it, we learned that VR is not new; all these veterans had worked on VR/AR even before I was born. The key message is that now that the hardware is within reach of ordinary people, this may be a good chance for VR/AR to reach the public.

 

A great keynote speech from the MIT Media Lab Director about what will lead graphics in the coming future.

Secondly, I went to the large hall to listen to the keynote speech by MIT Media Lab director Joi Ito. I learned that the MIT Media Lab (MML) does not hire tenured professors, since they want to keep the research active and new. In terms of research, MML focuses on antidisciplinary areas, the completely empty space between research fields. That is so cool. He also brought up the manufacturing power of Shenzhen, China and the innovation coming from that region. It sounds like this is partly because Shenzhen does not enforce copyright (sometimes rules become boundaries, but I still feel that ignoring copyright is not a good sign for long-term development).

 

The magicians of Industrial Light and Magic!

The most amazing thing was the ILM 40th anniversary! OMG, it was like a life dream come true. I sat in the first group of chairs, just below the big heroes who worked on the first Star Wars movie.

These are the people behind the first Star Wars!

They talked about how they made the first Star Wars, and why they started ILM in the first place: to create the special effects, specifically a mechanical motor system that made it possible to capture the same scene from the same camera angles repeatedly. In this way, multiple layers of footage could be obtained, and the films could then be composited, much as happens now in digital production. However, since the event only lasted about an hour and a half, I think they talked too much about the old days. At the end, ILMxLAB was announced, without surprise. Mr. Rob Bredow gave an amazing demo of how interactive, high-quality rendering can be achieved in the cloud and streamed to a tablet to tell more of the story from different characters' points of view. I personally feel that ILMxLAB showed everything they have for advanced movie production this time. But I still feel that exposing so many details of one story line to the audience is not a good way to build a VR/AR experience. Too much brain power is required, and we are looking for entertainment, right? Without doubt, though, this is an amazing new direction!

A dream come true with this talented artist + tech genius.

In the afternoon, it was time for some academic topics! The Face Reality technical papers session attracted a huge audience, including supervisors from ILM and other studios. The four presentations covered many of the technologies for representing facial animation, either in real time or as high-resolution model sequences. EPFL's paper is really interesting; to follow it in more detail you can access the document here. It was also amazing to say hi to my old friend Laura, and to have a chance to meet the superstar Dr. Hao Li. His facial retargeting work with depth sensors, his amazing experience at ILM and Weta, and his multiple SIGGRAPH hits really left me speechless. The idea of working with him as a post-doc really jumped into my mind during our meeting.

Then came the reception, where people meet and talk. I met another amazing professor, Mark Sagar from New Zealand. He is the leader of the Baby-X project (as shown below).

FX Guide PhD course about the Baby-X project

He is so nice; I highly recommend studying with this MIT PhD at the University of Auckland and, of course, visiting Weta!

I first learned about Baby-X from FX Guide (thanks to my colleagues at DreamWorks for telling me about this amazing website!). At that moment I could not help sending him an email to express my excitement about his ambitious project. He came to SIGGRAPH alone this year, and I recognized him at the reception. He was really generous in sharing his work, and we had a great conversation. This is also the first time Baby-X has been revealed at SIGGRAPH, so I wish him a successful presentation during the Real-Time Live! event. I also noticed that a research leader from Microsoft is attending SIGGRAPH 2015, but that is another story for the next post.

Great Monday!!!!

SIGGRAPH 2015 Day 1

The first day at the LA Convention Center was enjoyable and fun!


About the people I was lucky to meet:

So this is basically introductory-level information about what happened on day one of SIGGRAPH 2015. In the morning, I went to the VFX talk where the Double Negative crew discussed how they made the T-1000 in the new Terminator. To my surprise, when I looked back at the entrance, I saw my DWA supervisor! Then things became crazy. In the afternoon, after the On and Under the Surface talk, I met all the people who had screen- or phone-interviewed me, including a senior software engineer from Disney and the lead who created the T-rex in the first Jurassic Park. Everyone I needed to thank, I could finally thank in person! I also spotted a colleague from DWA. I only saw his back, but I felt sure it was him. Since I list his name on my poster, I hope he comes to check it out. Basically, everyone is nice (at least during the conference...).

About my poster presentation:

I had prepared for a week, and at the station we walked through the slides with about seven judges, some from Disney, some from NASA. It was a great experience. The major question was: when people show different expressions that convey the same emotion, how does the system handle this condition? Another was: does the current design support multiple people in the scene?

I think the first question is really about how to link expressions and emotions. I mentioned the Action Unit description and clarified that this is still an open question. For the second one, we stated that it is achievable as long as skeleton tracking data is available for each person.
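For readers unfamiliar with Action Units, here is a toy sketch of what a fixed AU-to-emotion lookup might look like, using commonly cited FACS-style combinations (an illustration only; as I said above, the real expression-to-emotion mapping is an open question, and our system does not use a hard-coded table like this):

```python
# Toy sketch: commonly cited FACS Action Unit combinations for the six
# basic emotions. A fixed table like this is exactly what the open
# question is about -- different expressions can convey the same emotion.
EMOTION_AUS = {
    "happiness": {6, 12},           # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},        # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},     # brow raisers + upper lid raiser + jaw drop
    "anger":     {4, 5, 7, 23},     # brow lowerer + lid raiser/tightener + lip tightener
    "fear":      {1, 2, 4, 5, 20},  # raised and lowered brows, lid raiser, lip stretcher
    "disgust":   {9, 15},           # nose wrinkler + lip corner depressor
}

def guess_emotion(active_aus):
    """Return the emotion whose AU set overlaps most with the detected AUs."""
    aus = set(active_aus)
    return max(EMOTION_AUS, key=lambda e: len(EMOTION_AUS[e] & aus))

print(guess_emotion([6, 12, 25]))   # -> "happiness"
```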

About the On and Under the Surface talk:

This was a multi-part session. It covered how the D-Rex's multi-resolution model is created and maintained in the ILM pipeline, and how the Auto SIM multi-layer muscle simulation system simulates the tissues from the muscle layer to the fat layer for a body like the Hulk in Age of Ultron. Disney talked about their animation system in Maya and how it reaches real-time playback by using a game-style rendering engine, Nitro, with rig caching to give artists the ability to view their animation previews in real time.
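The rig-caching idea is simple enough to sketch (a toy of the general concept, not Disney's actual pipeline): pay for the expensive rig evaluation once per frame, then serve playback and scrubbing from the cache:

```python
# Toy sketch of rig caching: evaluate the expensive rig once per frame,
# then serve playback/scrubbing from the cache so it stays real time.
from functools import lru_cache

def evaluate_rig(frame):
    """Stand-in for a costly rig evaluation (deformers, constraints, skinning)."""
    return [0.1 * frame, 0.2 * frame]   # dummy pose data

@lru_cache(maxsize=None)
def cached_pose(frame):
    return tuple(evaluate_rig(frame))   # tuples are hashable and immutable

# The first playback pass pays the evaluation cost; later scrubs hit the cache.
for frame in range(1, 25):
    pose = cached_pose(frame)
```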

About the technical papers fast forward (2 hours for 156 technical papers!):

Each technical paper group has only 30 seconds to sell their research. I saw some traditional presenters racing to fit into the 30 seconds; some just played their video with audio and finally said THANK YOU. One presenter tried to solve a Rubik's cube in 30 seconds while pitching his 3D puzzle printing work (unfortunately the cube broke in his hand; he must have practiced too much!). The papers of the always-cool Professor Hao Li really spoke for themselves: for the Oculus one, the presenter just wore the headset and danced like the Japanese rainbow cat for 30 seconds! His students are also cool, though personally I find them a bit too geeky...

Anyway, here is a super good resource for most of the SIGGRAPH papers:

http://kesen.realtimerendering.com/sig2015.html

I am interested in these sessions:

Face Reality

Let’s do time warping

Video Processing: High-Quality Streamable Free-Viewpoint Video

Geometry Zoo

Simulating with Surface

 

My welcome to you.

Welcome to Xing Zhang (张幸)’s official webpage. Here I would like to share the journey after my Ph.D. from the Graphics and Image Computing Laboratory at Binghamton University, covering Computer Graphics, Computer Vision, Virtual Reality, Augmented Reality, VFX… basically, everything I am passionate about.

The first big news is SIGGRAPH 2015. I have a poster accepted this year. It will be my first SIGGRAPH experience, and I hope I can learn a lot and keep attending from now on. Check out the fast forward video!