Make your Python experience the same as working in Visual Studio!

Suppose you are a Windows programmer, used to the way Visual Studio arranges and debugs code for you. When you get into Python, setting up a new IDE and a debugging environment based on pdb.set_trace() is, how to say, a new experience that you may not work efficiently with. Today I found a plug-in for Visual Studio (multiple versions) that makes Python programming and debugging feel just like working with C++ in VS. Same debugging workflow, and even a Python interactive window! I have to say I love it so much.
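For context, the command-line pdb workflow mentioned above looks roughly like this (a minimal sketch with a made-up `average` function; in practice you would uncomment the `set_trace()` call to break into the debugger at that line):

```python
# Minimal sketch of command-line debugging with pdb, the workflow the
# plug-in replaces with Visual Studio's graphical debugger.
import pdb


def average(values):
    total = sum(values)
    # Uncomment the next line to drop into the pdb prompt here, where you
    # can inspect `total`, step with `n`, or continue with `c`.
    # pdb.set_trace()
    return total / len(values)


print(average([2, 4, 6]))  # prints 4.0
```

With the plug-in, the same breakpoint is set by clicking in the editor margin, just as in C++.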

Suppose you already have VS installed. Just go HERE and look at the top right corner for the plug-in link. For VS 2013 and later, please use v2.2. During installation, it asks for the location of your Python installation; point it at the one you already have. Then you can program Python in Visual Studio. You can even create a new Python project from your existing Python code. The tool will create two files in the same directory as your code: a .pyproj file and a .sln file. You can also follow the video below for more features of the tool.

SIGGRAPH 2015 Day 2

On Monday, after our poster presentation, we could fully enjoy the conference!

The first thing I chose to do was join the panel talk given by research leaders from the University of Arkansas at Little Rock, Intel, Ford Motor, and Microsoft on the topic “The Renaissance of VR: Are We Going to Do It Right This Time?” From it we learned that VR is not new; all these veterans had tried VR/AR even before I was born. The key message is that now that the hardware is affordable for ordinary people, this may be a good chance for VR/AR to reach the public.

 

A great keynote speech from the MIT Media Lab director about what will lead graphics in the coming future.

Secondly, I went to the large hall to listen to the keynote speech by MIT Media Lab director Joichi Ito. I learned that the MIT Media Lab (MML) does not hire tenured professors, because they want to keep the research active and new. In terms of research, MML focuses on anti-disciplinary areas, the totally empty spaces in research. That is so cool. He also praised the manufacturing power of Shenzhen, China, and the innovation coming from that region. To me it sounds like that is partly because Shenzhen does not enforce copyright (sometimes rules become boundaries, but I still feel that ignoring copyright is not a good sign for long-term development).

 

The magicians of Industrial Light and Magic!

The most amazing thing was the ILM 40th anniversary event! OMG, it was like a life dream come true. I sat in the first group of chairs, just below the big heroes who worked on the first Star Wars movie.

These are the people behind the first Star Wars!

They talked about how they made the first Star Wars, and why they founded ILM in the first place: to create the special effects, specifically the mechanical motor systems that made it possible to capture the same scene from the same camera angles repeatedly. In this way, multiple layers of footage could be obtained, and the films could then be composited, just as happens now in digital production. However, since the event only lasted about an hour and a half, I think they spent too much time on the old days. At the end, ILMxLAB was announced, without surprise. Mr. Rob Bredow gave an amazing demo of how interactive, high-quality rendering can be achieved in the cloud and streamed to a tablet, revealing more of the story from different characters' points of view. I personally feel that ILMxLAB showed everything they have for advanced movie production this time. But I still feel that exposing so many details of a single story line to the audience is not a good approach for a VR/AR experience. Too much brain power is required, and we are looking for entertainment, right? Without doubt, though, this is an amazing new direction!

A dream come true with this talented artist + tech genius.

In the afternoon, it was time for some academic topics! The Face Reality technical papers session attracted a large audience, including supervisors from ILM and other studios. Four presentations covered many of the technologies for representing facial animation, either in real time or as high-resolution model sequences. EPFL's paper is really interesting; to follow it in more detail, you can access the document here. It was also amazing to say hi to my old friend Laura, and to have a chance to meet the superstar Dr. Hao Li. His facial retargeting work with depth sensors, his super amazing experience at ILM and Weta, and his multiple hits at SIGGRAPH really left me speechless. The idea of working with him as a post-doc really jumped into my mind during our meeting.

Then came the reception, where people meet and talk. I met another amazing professor, Mark Sagar from New Zealand. He is the leader of the Baby-X project (as shown below).

FX Guide PhD course about the Baby-X project

He is so nice. I highly recommend studying with this MIT PhD at the University of Auckland and, of course, visiting Weta!

I learned about Baby-X from FX Guide (thanks to my colleagues from DreamWorks for telling me about this amazing website!). At that moment I could not help sending him an email to express my excitement about his ambitious project. He came to SIGGRAPH alone this year, and I recognized him at the reception. He was really generous in sharing his work, and we had a great conversation. This is also the first time Baby-X has been revealed at SIGGRAPH, so I wish him a successful presentation during the Real-Time Live! event. I also noticed that a research leader from Microsoft is attending SIGGRAPH 2015, but that is another story for the next post.

Great Monday!!!!

SIGGRAPH 2015 Day 1

The first day at the LA Convention Center was enjoyable and fun!


About the people I was lucky to meet:

The first day is basically introduction-level information about what will happen at SIGGRAPH 2015. In the morning, I went to the VFX talk where the Double Negative crew talked about how they made the T-1000 in the new Terminator. Then, surprisingly, when I looked back at the entrance, I saw my DWA supervisor! Things became crazy after that. In the afternoon, after the On and Under the Surface talk, I met all the people I had screen or phone interviews with, including a Sr. Software Engineer from Disney and the lead who created the T-rex in the first Jurassic Park. Now I could say THANK YOU in person to all the people I needed to thank! I also spotted a colleague from DWA; I only saw his back, but I felt it was him. Since I also list his name on my poster, I hope he can come and check it out. Basically, everyone was nice (at least during the conference...).

About my poster presentation:

I had prepared for a week, and at the station we walked through the slides with about seven judges, some from Disney, some from NASA. It was a great experience. The major question was: when people show different expressions that convey the same emotion, how does the system handle this? Another was: does the current design support multiple people in the scene?

I think the first question is really about how to link expressions and emotions. I mentioned the Action Unit description and clarified that this is still an open question. For the second one, we stated that if skeleton tracking data for multiple people is available, then it is achievable.
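To make the Action Unit idea concrete, here is a toy sketch (my own illustration, not the poster's actual system) of describing an expression as a set of FACS Action Units and matching it against prototypical AU patterns for basic emotions:

```python
# Toy illustration of linking expressions to emotions via FACS Action Units.
# AU codes follow standard FACS numbering (e.g. AU6 = cheek raiser,
# AU12 = lip corner puller); the prototype sets below are the commonly
# cited combinations for a few basic emotions.
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},
    "surprise": {1, 2, 5, 26},
    "sadness": {1, 4, 15},
}


def guess_emotion(active_aus):
    """Return the emotion whose AU prototype best overlaps the detected AUs."""
    best, best_score = None, 0.0
    for emotion, prototype in EMOTION_PROTOTYPES.items():
        score = len(prototype & active_aus) / len(prototype)
        if score > best_score:
            best, best_score = emotion, score
    return best


print(guess_emotion({6, 12}))    # prints happiness
print(guess_emotion({1, 2, 5}))  # prints surprise
```

The open question mentioned above is exactly that this mapping is not one-to-one: different AU combinations can convey the same emotion, so a fixed lookup like this is only a starting point.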

About the On and Under the Surface talk:

This was a multi-section talk. It covered how the D-Rex's multi-resolution model is created and maintained in the ILM pipeline, and how the Auto SIM multi-layer muscle simulation system simulates the tissues from the muscle layer to the fat layer for bodies like the Hulk in Age of Ultron. Disney talked about their animation rendering system in Maya and how it reaches real-time playback by using a game-style rendering engine, Nitro, with rig caching to give artists the ability to preview their animation in real time.
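To illustrate the rig-caching idea in general terms (this is my own toy sketch, not Disney's actual Nitro implementation): evaluate the expensive rig once per frame, cache the resulting geometry, and let playback read from the cache at full frame rate.

```python
# Toy sketch of rig caching: the expensive rig evaluation runs only on a
# cache miss, so scrubbing back over cached frames plays back instantly.
import time


def evaluate_rig(frame):
    """Stand-in for an expensive rig evaluation (deformers, solvers, ...)."""
    time.sleep(0.001)  # pretend this is slow
    return [frame * 0.1, frame * 0.2]  # fake vertex data


class RigCache:
    def __init__(self):
        self._cache = {}

    def mesh_for(self, frame):
        # Evaluate the rig only if this frame has not been seen before.
        if frame not in self._cache:
            self._cache[frame] = evaluate_rig(frame)
        return self._cache[frame]


cache = RigCache()
first = cache.mesh_for(10)   # slow: evaluates the rig
second = cache.mesh_for(10)  # fast: served from the cache
assert first == second
```

The real system invalidates cached frames when the animator changes the rig, but the core trade of memory for playback speed is the same.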

About the fast forward technical paper presentations (two hours for 156 technical papers!):

Each technical paper group has only 30 seconds to sell their research. Some traditional presenters tried to squeeze a full talk into the 30 seconds; some just played their video with audio and finished with a THANK YOU. One presenter tried to solve a Rubik's Cube in 30 seconds while pitching his 3D puzzle printing work (unfortunately the cube broke in his hands; he must have practiced too much!). As for the always cool Professor Hao Li, his two papers really spoke for themselves. In the Oculus one, the presenter just wore the headset and tried to dance like the Japanese rainbow cat for 30 seconds! His students are also cool, though personally I find them a bit too geeky...

Anyway, here is a super good resource for most of the SIGGRAPH papers:

http://kesen.realtimerendering.com/sig2015.html

I am interested in these sections:

Face Reality

Let’s do time warping

Video Processing: High-Quality Streamable Free-Viewpoint Video

Geometry Zoo

Simulating with Surface

 

My welcome to you.

Welcome to Xing Zhang (张幸)'s official webpage. I would like to share here my journey after my Ph.D. from the Graphics and Image Computing Laboratory at Binghamton University, covering Computer Graphics, Computer Vision, Virtual Reality, Augmented Reality, VFX... basically, everything I am passionate about.

The first big news is SIGGRAPH 2015: I have a poster accepted this year. It will be my first SIGGRAPH experience, and I hope I can learn a lot and keep attending from now on. Check out the fast forward video!