SIGGRAPH 2015 Day 5 Last day!!!!

First, I have to say, this is my first SIGGRAPH, an event I had dreamed about for a long time. It happened right after my internship at DWA, so I could meet a lot of old pals. Today is the last day of this intensive experience. Before diving in, I would like to share some tips for attending SIGGRAPH.

0. Carry a good camera with you (video is not allowed, but pictures are OK). If you plan to use your cellphone for photos, remember to bring a fully charged portable charger or you won’t survive half the day!

1. Sync the daily schedule with your Google Calendar, and find a way to narrow down the things you want to do.

2. What should you do there? It depends on your purpose. If you are hunting for a job as an artist, recruiters from the big animation studios will be hanging around all over the place. Carry a tablet to show them the best of your work; they may introduce you to a director on the spot, so do your homework on each studio’s productions. You may not get a job offer here, but making people remember your work and see your progress is not a bad way to make future interviews easier. Large graphics companies also host special sessions for young people who want to get into graphics, where you get plenty of chances to ask questions and learn the stories behind the screen. So if SIGGRAPH is hosted near you, it is definitely worth the price to attend, even if you don’t have any contributions of your own.

If you are a technology zealot, the exhibition hall is the place for you. It is also good for job hunting, since many front-line employees of the companies are there, giving you the opportunity to ask anything about their working environment and experience.

I don’t feel the tutorials or courses are that beneficial unless you have studied the topic for at least a year. Even when they say it is intro level, the professors and instructors basically try to cover a semester’s worth of knowledge in 45 minutes, so don’t expect to understand too much if you haven’t been in the area for a while. If you have been studying the area for a while, though, a tutorial can help you sort out your ideas, since it is rare to hear the creator of a theory explain how the work is actually done.

If you are a movie fan, well, the world’s top VFX studios are here!

3. Find a place to stay close to the convention center. You can find one on Airbnb easily, and the price should be much better than a hotel.

4. Rent a car if you can drive, and download ParkWhiz from the App Store. It makes finding parking near the convention center super easy, and it may give you your first parking session for free.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

OK, now back to the last day. In the morning I didn’t feel there was much to do, so I went to the color session. Surprisingly, I met the professor from UC Irvine who taught the ray tracing graphics course; I recognized him and thanked him for his good YouTube explanations of the physics of light. I also met Dr. Hoppe, a research lead at Microsoft: my PhD work on edge features is actually derived from his NPR rendering. It felt like everything finally came together and made sense.

The height difference is also the academic difference between us…

In the afternoon, I mainly listened to Dr. Hao Li’s tutorial about digital humans, which he hosted with colleagues from the MIT Media Lab. It was a great way to resolve some questions in my mind about face tracking and animation transfer. For the PDF, you can find more information on Dr. Hao Li’s website: Click Me. It should be under the section “MODELING AND CAPTURING THE HUMAN BODY: FOR RENDERING, HEALTH, AND VISUALIZATION”.

Trust me, the seats were full during the talk. Everyone is excited about faces.

I also tried to catch the behind-the-scenes talk about Avengers: Age of Ultron. It was super great to learn how each character is handled differently by the international VFX studios that make these amazing movies. Of course, ILM still has its super advanced muscle simulation system for simulating the real layers of tissue, so the bodies of the Hulk and the dinosaurs do not need much manual detailing work from artists; the rig movement directly drives the different layers all the way up to the skin. The real-time visual feedback from the rigging suit Mr. Spader wore also helped him see what Ultron looked like during rehearsal. One thing I noticed is that the boundary between offline rendering and real-time rendering is blurring: the movie industry and the game industry need talent from each other’s side.


Yes, it was such an amazing experience, and I am glad I can record my memories here on the first day of 2016. It gave me a hint about what I want to do, and I know how I can do it: be a geek, go to the gym, eat healthy, and spend all the remaining time on the thing you are trying to create. This is the way of the NERD, but it is also my choice.

Hope to see you around in 2016!

SIGGRAPH 2015 Day 4

Hey, here we continue the journey of SIGGRAPH 2015. In the morning, Umur and I redid our presentation so we could have a video recording of it. As you can see, although the presentation does not require a computer, all the SRC participants carried one to demo their work. So here is a good hint for you: BRING YOUR DEMO LAPTOP, even if they say there won’t be enough power outlets.

Umur with the poster.
Todd with the poster.

After that, the major topic I followed that day was AR/VR. Several companies shared a little about their experience with VR/AR and mobile rendering architectures. It feels like everyone is trying to set the rules for VR/AR rendering on mobile systems. This is important, and I will discuss it later.

Samsung London discussing their work.

We also had a chance to listen to the final presentations of the Student Research Competition (SRC) at this SIGGRAPH. Not too many people attended, but talking about your work in the big hall definitely promotes you well. The judges came from Disney Research, Facebook, and Oculus; treat it as an interview with those companies. That is one thing I feel about the SIGGRAPH event: it gives you an extraordinary way to show the top companies what you can do and what you are good at without going through coding questions. In my opinion, if you are good at creating really nice graphics- and vision-related products, spending time on those and maintaining a good website is better than implementing a stack with two queues in C++…
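
Just for fun, since I keep mentioning it: here is a minimal sketch of that classic interview exercise, a stack built from two queues in C++ (the class and names are my own illustration):

```cpp
#include <iostream>
#include <queue>
#include <utility>

// Classic interview exercise: a LIFO stack built from two FIFO queues.
// push() is O(1); pop() is O(n), moving all but the last element aside.
template <typename T>
class TwoQueueStack {
    std::queue<T> main_, spare_;
public:
    void push(const T& value) { main_.push(value); }

    T pop() {
        // Drain main_ into spare_ until only the newest element remains.
        while (main_.size() > 1) {
            spare_.push(main_.front());
            main_.pop();
        }
        T top = main_.front();
        main_.pop();
        std::swap(main_, spare_);  // spare_ now holds the remaining stack
        return top;
    }

    bool empty() const { return main_.empty(); }
};

int main() {
    TwoQueueStack<int> s;
    s.push(1); s.push(2); s.push(3);
    while (!s.empty()) std::cout << s.pop() << ' ';  // prints: 3 2 1
}
```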

The App Hour at the exhibition hall involved dozens of start-up companies, all demonstrating products related to AR/VR. As you can see below, there was a cube-based game about tabletop battles with different armies, and an app that plays videos of famous tourist sights when you point your camera at a picture or statue. An FX lead from DreamWorks Animation showed his VR shooting game, which was very successful since he also presented it in the big hall. I definitely think it is a good way to promote your product (I hope one day I can do the same).


There were also some companies doing nice work on face tracking on mobile devices. They can give your face zombie or vampire makeup easily; even contact lenses can be added! I hope I have time to look into more of the related technology.

At night, we planned to go somewhere besides the convention center. The SIGGRAPH after-party sounded nice, so Umur and I walked to that hotel. On the way we saw a long queue of people waiting. Isn’t it similar to the queue when people wait for the RenderMan teapot? Yes, no wonder: it was for Pixar… Another tradition Pixar has is to host an exhibition during SIGGRAPH (for free) to show how other companies and artists use RenderMan to do amazing things. This event needs advance tickets, so next time, visit the Pixar booth ASAP.


So the Khronos Group, who maintain the OpenGL standards, noticed (as I mentioned at the beginning) the rendering power of all the different platforms (PC, console, mobile devices, etc.), and that a more general programmable model should be defined and implemented to provide a thinner layer between the developer and the graphics hardware (GPU). This is one reason they invented Vulkan. Several large graphics companies have been building products on it for a while and showed how amazingly and efficiently they can render on different platforms with the same API. Please google for more information on Vulkan.
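
To give a flavor of how explicit that thin layer is, here is my own minimal sketch (not from any of the demos) of just the Vulkan boilerplate to create an instance and count the GPUs it can see; a real renderer would continue with devices, queues, and pipelines:

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>

int main() {
    // Unlike classic OpenGL, Vulkan makes the application describe itself
    // and explicitly create every object, starting with the instance.
    VkApplicationInfo appInfo = {};
    appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.pApplicationName = "siggraph-demo";  // illustrative name
    appInfo.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo createInfo = {};
    createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo = &appInfo;

    VkInstance instance;
    if (vkCreateInstance(&createInfo, nullptr, &instance) != VK_SUCCESS) {
        std::fprintf(stderr, "failed to create Vulkan instance\n");
        return 1;
    }

    // The same code enumerates GPUs whether it runs on PC, console, or mobile.
    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    std::printf("found %u Vulkan-capable device(s)\n", gpuCount);

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```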


SIGGRAPH 2015 Day 3

Sorry for the late post, but I will definitely finish the five-day series of this journey :)

In the morning I attended the studio course covering general-purpose GPU computing on mobile phones. Personally I felt the presentation was not that clear on the technical details, but it definitely covered a huge amount about what GPGPU technology looks like on mobile devices. The picture below compares the current computing interfaces. I suppose that when there is heavy computation that does not need to be sent to a server (like real-time tracking on a mobile system), utilizing the GPU for the task should be a good option.
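
As an entirely illustrative flavor of what offloading such a task looks like (my own sketch, not from the course, with error handling omitted), here is a minimal OpenCL program that squares an array on whichever GPU the platform exposes:

```cpp
#include <CL/cl.h>
#include <cstdio>

// Kernel source: one work-item squares one element.
static const char* kSrc =
    "__kernel void square(__global float* v) {"
    "    int i = get_global_id(0);"
    "    v[i] = v[i] * v[i];"
    "}";

int main() {
    float data[8] = {0, 1, 2, 3, 4, 5, 6, 7};

    // Grab the first platform/GPU; on a phone this would be the mobile GPU.
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, nullptr);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, nullptr);

    // Build the kernel and upload the data.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
    cl_kernel kern = clCreateKernel(prog, "square", nullptr);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, nullptr);
    clSetKernelArg(kern, 0, sizeof(buf), &buf);

    // Launch 8 work-items and read the results back.
    size_t n = 8;
    clEnqueueNDRangeKernel(q, kern, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, nullptr, nullptr);

    for (float v : data) std::printf("%g ", v);  // 0 1 4 9 16 25 36 49
    return 0;
}
```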

Exhibition

The main entrance of the exhibition hall. Besides hosting GTC talks, Nvidia had a booth showing off GPU technology for games and movie production. Intel, on the right side, showed its graphics technology for the CPU. Ironically, these pictures show the current leaders of computer graphics.

Besides the academic mind-mixing, another important part of the platform is industrial companies showing off their “coming soon” technologies. Below are some highlights.

Multiple companies focus on body skeleton tracking using infrared camera arrays and reflective markers; their pipelines also handle motion retargeting. These companies provide third-party solutions for movie and game studios.
EPSON demonstrated its augmented reality eyewear. The AR scene is displayed on a small section of the glasses, and you need to connect them to a dedicated piece of hardware (about the size of a phone) for touch-based navigation. I feel the system is still kind of a prototype, perhaps targeting certain professional users rather than everyone. And yes, we have the Pixar booth at the back!!!!!
This French company showed a very smooth eye tracking demo. Within a limited distance, the system can track eye gaze and use it to control reading. It is very smooth and effective, and could definitely be used in hospitals for patients who cannot move.
A plug-in style 3D scanning system for mobile devices that very efficiently creates still 3D models of any object.
Qualcomm showed a lot of advanced computer vision technology this year. Here is a cloud-based object recognition service for mobile devices. It will ship with their new CPU, so it may take a while.
USense is a startup targeting VR/AR experiences. Applause for a Chinese start-up in the US! They propose two systems: a mobile one, with two cameras inside the headset to help recognize hand gestures, and a wired version with two cameras to sense the 3D world plus, I think, a Leap Motion sensor for hand gestures.
I met Evan at SIGGRAPH! So surprising to meet the guy I had emailed with before about the DI4D system. Their data quality is good, and the new head-mounted capture system is remote and lightweight. So great to meet him.

Advanced VR Development Experience Sharing

In the afternoon I attended the talks about VR applications and experience. Sony discussed what they learned from their new headset: to guide the user in a fully 3D environment, using 3D sound to draw attention is very important, since it is super easy to lose the focus of the story in 3D video. Also, to enhance the VR experience, hand gestures, especially the grasping motion, need to be detected efficiently.

Another talk was about using VR for realistic journalism: creating a new medium that brings the audience to the scene of a crime or a war zone to experience the negative emotions. For the effect, please refer to the BuzzFeed video below. The last talk was from a start-up about a drone that can shoot 360-degree video.


A secret everyone knows

I feel lucky because I caught this chance. Every SIGGRAPH, Pixar gives away 1,500 RenderMan teapots (in the shape of the famous Utah teapot): three days, 500 each day. The wait list is crazy, and the teapots are normally gone in less than 15 minutes. Each teapot comes in a metal box and has a unique ID. OMG, it really is one of the things that will encourage me to come back next year!

So if you want one, remember to check with the Pixar folks about the time as soon as the exhibition opens. Oh, Pixar also likes to host a RenderMan party during SIGGRAPH. It needs an invitation and fills up super fast, so also ask the employees how to register.


SIGGRAPH 2015 Day 2

On Monday, after our poster presentation, we could fully enjoy the conference!

The first thing I chose to do was join the panel offered by research leaders from the University of Arkansas at Little Rock, Intel, Ford Motor, and Microsoft on the topic “The Renaissance of VR: Are We Going to Do It Right This Time?” From it, we learned that VR is not new: all these veterans had tried VR/AR even before I was born. The key message is that, now that the hardware is affordable for normal people, this may be a good chance for VR/AR to reach the public.

A great keynote speech from the MIT Media Lab Director about what will lead graphics in the coming future.

Secondly, I went to the large hall to listen to the keynote by MIT Media Lab director Joichi Ito. I learned that the MIT Media Lab (MML) does not hire tenured professors, because they want to keep the research active and new. In terms of research, MML focuses on anti-disciplinary areas, the totally empty spaces between research fields. That is so cool. He also praised the manufacturing power of Shenzhen, China, and the innovation coming from that region. To me it sounds like this is partly because Shenzhen does not enforce copyright (sometimes rules become boundaries, but I still feel that ignoring copyright is not a good sign for long-term development).

The magicians of Industrial Light and Magic!

The most amazing thing was the ILM 40th anniversary! OMG, it was like a life dream come true. I sat in the first group of chairs, just below the big heroes who worked on the first Star Wars movie.

These are the people behind the first Star Wars!

They talked about how they made the first Star Wars, and why they started ILM in the first place: to create the special effects, specifically the motorized camera control system that made shooting the same scene from the same camera angles repeatable, so that multiple layers of footage could be obtained and composited, just as happens now in digital production. However, since the event only lasted about an hour and a half, I think they talked too much about the old days. At the end, ILMxLAB was announced, without surprise. Mr. Rob Bredow gave an amazing demo of how interactive, high-quality rendering can be achieved in the cloud and streamed to a tablet to tell more of the story from different characters’ points of view. I personally feel ILMxLAB showed everything they have for advanced movie production this time. Still, I feel that exposing so many details of one story line to the audience is not the best way to build a VR/AR experience; it demands too much of the brain, and we are looking for entertainment, right? But without doubt, this is an amazing new direction!

A dream come true with this talented artist + tech genius.

In the afternoon, it was time for some academic topics! The Face Reality technical papers session attracted a big audience, including supervisors from ILM and other studios. Four presentations covered many of the technologies for representing facial animation, either in real time or as high-resolution model sequences. EPFL’s paper is really interesting; to follow it in more detail you can access the document here. It was also amazing to say hi to my old friend Laura, and to have the chance to meet the superstar Dr. Hao Li. His facial retargeting work with depth sensors, his super amazing experience at ILM and Weta, and his multiple hits at SIGGRAPH really left me speechless. The idea of working with him as a post-doc jumped into my mind during our meeting.

Then it was the reception, where people meet and talk. I met another amazing professor, Mark Sagar from New Zealand. He is the leader of the Baby-X project (shown below).

fxphd course about the Baby-X project

He is so nice; I highly recommend studying with this MIT PhD at the University of Auckland and, of course, visiting Weta!

I first learned about Baby-X from FX Guide (thanks to my colleagues at DreamWorks for telling me about this amazing website!). At that moment I could not help but send him an email to express my excitement about his ambitious project. He came to SIGGRAPH alone this year, and I recognized him at the reception. He was really generous in sharing his work, and we had a great conversation. This was also the first time Baby-X was revealed at SIGGRAPH, so I wished him a successful presentation at the Real-Time Live! event. I also noticed that the research lead from Microsoft was attending SIGGRAPH 2015, but that is another story, which I will tell in the next post.

Great Monday!!!!

SIGGRAPH 2015 Day 1

The first day was enjoyable and fun at the LA Convention Center!


About the people I was lucky to meet:

So here is some basic, introductory-level information about what happens at SIGGRAPH 2015. In the morning, I went to the VFX talk where the Double Negative crew discussed how they made the T-1000 in the new Terminator. Then, surprisingly, when I looked back at the entrance, I saw my DWA supervisor! Things became crazy after that. In the afternoon, after the ON AND UNDER THE SURFACE talk, I met all the people I had screen or phone interviews with, including a senior software engineer from Disney and the lead who created the T. rex in the first Jurassic Park. Everyone I needed to say thank you to, I could now thank in person! There was also a colleague from DWA: I only saw his back, but I felt it was him. Since I also listed his name on my poster, I hope he came by to check it out. Basically, all the people are nice (at least during the conference…).

About my poster presentation:

I had prepared for a week, and we presented the slides at our station to about seven judges, some from Disney, some from NASA. It was a great experience. The major questions were: when people show different expressions that convey the same emotion, how does the system handle that? And: does the current design support multiple people in the scene?

I think the first one is really about how to link expressions and emotions. I mentioned the Action Unit description and clarified that this is still an open question. For the second one, we explained that if multiple skeleton tracking streams are available, then it is achievable.
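
To make that second answer a bit more concrete, here is a toy sketch (my own illustration, not our actual system, with hypothetical names): if the tracker reports a stable skeleton ID per person, per-person expression state can simply be keyed by that ID, which is why multiple people are achievable in principle:

```cpp
#include <cstdio>
#include <map>
#include <vector>

struct SkeletonFrame {
    int trackingId;                  // stable ID assigned by the skeleton tracker
    std::vector<float> actionUnits;  // hypothetical per-frame AU intensities
};

class MultiPersonExpressionState {
    std::map<int, std::vector<float>> smoothed_;  // per-person AU state
public:
    void update(const SkeletonFrame& f) {
        auto& aus = smoothed_[f.trackingId];     // creates state for new people
        aus.resize(f.actionUnits.size(), 0.0f);
        for (size_t i = 0; i < aus.size(); ++i)  // smooth each AU independently
            aus[i] = 0.8f * aus[i] + 0.2f * f.actionUnits[i];
    }
    size_t peopleTracked() const { return smoothed_.size(); }
};

int main() {
    MultiPersonExpressionState state;
    state.update({1, {0.1f, 0.9f}});  // person 1
    state.update({2, {0.7f, 0.0f}});  // person 2 arrives; state stays separate
    std::printf("tracking %zu people\n", state.peopleTracked());
}
```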

About the On and Under the Surface:

This was a multi-part talk. It covered how the D-Rex’s multi-resolution model is created and maintained in the ILM pipeline, and how the Auto SIM multi-layer muscle simulation system is used to simulate the tissue from the muscle layer to the fat layer for bodies like the Hulk in Age of Ultron. Disney talked about their animation rendering system in Maya, and how it reaches real-time playback by using a game-style rendering engine, Nitro, with rig caching to give artists the possibility of viewing their animation preview in real time.

About the technical papers fast forward (two hours for 156 technical papers!):

Each paper team has only 30 seconds to sell their research. I saw some traditional presenters trying to cram everything into the 30 seconds; some just played their video with audio and finally said THANK YOU. One tried to solve a Rubik’s Cube in 30 seconds while pitching his 3D puzzle printing work (unfortunately the cube broke in his hands! He must have practiced it too much). As for the always cool Professor Hao Li, his two papers really spoke for themselves: for the Oculus one, the presenter just wore the headset and danced like the Japanese rainbow cat for 30 seconds! His students are cool too, but personally I find them a bit too geeky…

Anyway, here is a super good resource for most of the SIGGRAPH papers:

http://kesen.realtimerendering.com/sig2015.html

I am interested in these sessions:

Face Reality

Let’s do time warping

Video Processing: High-Quality Streamable Free-Viewpoint Video

Geometry Zoo

Simulating with Surface