SIGGRAPH 2018 Day 4

Today is a little more casual. In the morning, I visited Nvidia's ray tracing/path tracing session. They emphasized that, much like the first GPU card in 1998, RTX is a new thing that everyone should try to catch up with.

Then I also went to the 3D capture session. The papers there are all very interesting, and I think this area is at an important stage right now.

In the afternoon, I went to the material capture session. It was great to see how a deep learning model can be trained with a differentiable renderer to generate material maps from a single image. I do need to look into this work.
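
As a note to myself, here is a minimal toy sketch of the inverse-rendering idea behind this kind of work: fit material parameters by gradient descent so that a differentiable render matches the input image. This is my own simplified illustration, not the paper's actual pipeline (which regresses full SVBRDF maps with a neural network); the Lambertian model, the random normals, and the single albedo are assumptions I made up for the example.

```python
import numpy as np

def render(albedo, normals, light_dir):
    """Differentiable Lambertian shading: pixel_i = albedo * max(n_i . l, 0)."""
    n_dot_l = np.clip(normals @ light_dir, 0.0, None)   # (N,)
    return n_dot_l[:, None] * albedo                     # (N, 3)

# Synthetic scene standing in for the single input photo (all made up).
rng = np.random.default_rng(0)
num_pixels = 1024
normals = rng.normal(size=(num_pixels, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light_dir = np.array([0.0, 0.0, 1.0])

true_albedo = np.array([0.8, 0.3, 0.1])                  # "ground-truth" material
target = render(true_albedo, normals, light_dir)         # the observed image

albedo = np.array([0.5, 0.5, 0.5])                       # initial guess
lr = 1.0
for _ in range(300):
    pred = render(albedo, normals, light_dir)
    residual = pred - target                                   # (N, 3)
    n_dot_l = np.clip(normals @ light_dir, 0.0, None)[:, None]
    # Analytic gradient of the mean squared image error w.r.t. the albedo,
    # differentiated through the shading model.
    grad = 2.0 * np.mean(residual * n_dot_l, axis=0)           # (3,)
    albedo -= lr * grad

print("recovered albedo:", albedo)   # should approach [0.8, 0.3, 0.1]
```

The real systems scale up the same idea: because the renderer stays differentiable, the image-matching loss can be backpropagated all the way into a network that predicts the material maps.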

SIGGRAPH 2018 Day 3

Today is a time-and-space-mixing adventure. In the morning I tried to get into the talks for two state-of-the-art face-related papers in the VR session. One is from TUM and the other is from Facebook Reality Lab. Both try to tackle the problem of showing a genuine, full-face expression in VR while both sides are wearing headsets.

On one side, Matthias Niessner and his celebrated face synthesis team explore how to deal with this issue based on their Face2Face work. The advantage is that because they use a generic face model, the representation is not strongly subject-dependent, so no calibration or pre-capture is necessary. However, because only an infrared camera inside the headset is used for eye gaze tracking, the upper face's expression may not be fully preserved.

Facebook, on the other hand, uses a subject-dependent, high-quality model, plus deep learning for compositing the teeth. The quality looks better, but it needs a pre-capture session for each subject.

And thanks to my friends from Pixar: this year we noticed there is no booth for the animation studios, so we didn't know where to pick up the RenderMan teapot. It turns out they hand them out after their hour-long RenderMan 22 demo talk, which is actually a really good talk: 30 years of RenderMan development, from scanline rendering to ray tracing and then path tracing, giving up the old infrastructure in favor of physically correct, simpler models. It is great to see that, at this stage, ray-traced lighting can be achieved at interactive speed. With the help of Nvidia's RTX, I think production time for every stage of animation can shrink, and we could see more ideas make it into movies, since the cost of trying out new story lines, cameras, actions, etc. is lower. But the most important thing is that I got my teapot!

The Real-Time Live! demo session is also crazy. The combined Nvidia RTX, ILMxLAB, and Unreal VR virtual movie shot demo is a total game changer for how we can make movie-quality shots in real time, with everyone inside a virtual environment. I can imagine that in the near future individual shots may be captured in this real-time ray-traced environment; the director can then cut the movie for review and hand the shots to an offline renderer, if necessary, for the final frames.

SIGGRAPH 2018 Day 2

So today's major coverage is two talks: one from Rob Bredow, VP of ILM, and the other from the CEO of NVIDIA.

Rob's talk is about the power of the creative process, in which he talked about his experience as a first-time VFX producer on the Star Wars movie Solo.

He mentioned that people go through three different stages during the creative process:

  • Just starting: when you want to get into the field.

At the beginning, people should study and try to build things on top of others' work, more like interdisciplinary study; it is easier to create something based on existing work.

  • Knowing the theme: when you already know the tools and are actively working in the field.

  • Leading: how to lead the creative process.

During this stage, people need to first define the theme, the concept you are trying to follow, and make sure to stay on that path before diving into the details. He used the example of Solo, where he hoped to go back to the classic 70s film style. Hence the production explicitly used practical rigs for the hyperspace-travel set and the underwater explosion, relying on real hardware (a huge 180-degree LED screen, and a 20,000 fps camera) to get real lighting and an "explosion never seen before".

Then it is about learning the constraints, so people can focus on the right thing. He mentioned how the roller coaster in Disney's Animal Kingdom was created: at the beginning it did not fit the park's style, so people visited Nepal and found the story of the Yeti, building the Everest-and-Yeti story around the roller coaster.

Third is to simplify: try to make the target simple. He mentioned a shot in World War where a rig pops out during a crash scene, which might have needed retouching to remove. However, no one actually knows what it is, and people pay attention to the character's face, so it was not worth spending extra time removing it from the film.

Then there is sharing. Rob mentioned the start of the ASWF, the Academy Software Foundation, where the film industry is for the first time trying to organize its software together and share tools between companies.

The topic title.
ASWF actually starts with a lot of big names behind it. I think exploring these repositories could also help newcomers get into the business.

He also presented the photo book he made during the making of Solo; I think it is a very good collection.

Nvidia's special event was crazy, attracting a lot of people. It was also my first time seeing the CEO's iconic gesture: holding the Nvidia card up on stage. The event was basically the announcement of the next big thing since CUDA was introduced in 2006: the Turing architecture, with which Nvidia makes real-time ray-traced rendering possible.

10 giga-rays per second, mixed GPU operation at 16 TFLOPS and 16 TIPS, 500 trillion tensor ops per second, and an 8K image decoder. This monster makes real-time ray tracing possible. It dramatically reduces the time of physically based rendering for movie-quality images, hence could be very attractive to the movie industry. And since the basic version is not that expensive ($2,300; I think it is a better buy than some AR glasses), we may expect that soon game developers won't need to play so many tricks with shading effects and can just let things follow the laws of physics.

Mr. Huang really enjoys using the glossy RTX card to play with the audience.
Demo of the real-time ray-traced Star Wars shot. The lighting does look real!
An introduction to how different the hardware/software stack is for the new architecture.

SIGGRAPH 2018 Day 1

Day one has so many people! Next time, if I arrive a day early, I should do registration first.

So in the morning I went to the Vulkan course; it was really helpful for understanding the API, and I am glad it has all the support we expected. I think it is the way to go.

Then we visited the product exhibition; it was nice to see the props from Infinity War and Solo.

The AR session hosted by Apple basically went over what they said at WWDC, which shows AR is still a pretty new thing for the graphics industry. I can sense that people are looking for new directions, but they hesitate about the future.

In this case, what should we do? The Jurassic Park 25th-anniversary screening gives the answer: you just spend your spare time and do it, then disrupt the old business. From 0 to 1, that is how we make progress.

See everyone on day 2.

Hello to SIGGRAPH 2018!

Time indeed goes fast; it has been 3 years since my first, amazing SIGGRAPH experience. Now it is Vancouver, with a new me working on Amazon's AR platform and trying to make it better.

So Sunday is the beginning. I plan to check in and take the Intro to Vulkan course, and maybe the deep learning one in the late afternoon (though I feel it may be too basic).