Todd currently has two cabins (8 beds) reserved in Curry Village (the interior is roughly what you see in the video below), from Friday, July 29 to check-out on Sunday, July 31.
We leave at 6 am and should arrive around 9 am. Friday's theme is eating: we plan to grab an available BBQ rack and grill the food we bring. We can also walk a trail to the lower falls and wade along the lakeshore.
Today we hike the very distinctive Vernal Fall and Nevada Fall Trails. It takes about 4 hours, but the view from the top is truly magnificent.
In the afternoon, after hiking down, we can soak our feet in the spring water. In the evening we can visit the famous Ahwahnee hotel, whose interior served as a filming location for The Shining. The hotel hosts a campfire storytelling session at night, which is worth sitting in on.
Location: Pescadero State Beach, on the South Bay side. Note that there are two entrances; we want the southern one, from which you can cross over to the tidal island. So take Pescadero Creek Rd all the way to the coast.
9:00 am: meet at the CCIC-Cupertino main church, or head directly to Pescadero State Beach on your own.
Today is May 6, 2017. I worked half a day on the project so that, later, the app can make changes independent of the data set. From experiment to implementation, then error hunting and debugging, days like today run through my mind, and this feeling is what I have endured for the past two years.
It is always about focusing on one problem and being unable to do anything else. This applies to other domains as well, such as job hunting or paper writing. I think this is my advantage: focus. But it is also my weakness when it happens too often in daily life. It is bad if it is 24/7, because I become very skilled at one thing but lose the big picture of the work, or even of life.
Even PhD study, as I see it, is not only about concentrating on my own tiny region. It needs a deep understanding of one domain while, at the same time, exploring other domains for new ideas.
That is why I feel my start-up life was not good for me. It has mostly been a journey of learning new things that do not transfer into general knowledge. This became especially obvious when I prepared for CV/CG-related job hunting. My start-up job did give me a very advanced view of the topic, and that is what I truly appreciate. However, work-life balance is also critical in the long run. Besides, in my specific domain I worked alone for those two years; I need communication to spark new ideas and better solutions. Learning alone is not a good habit.
The past two months really showed me the true face of living in the real world rather than in the ivory tower. Making plans and doing multiple things in small pieces of time is what I will try to do from now on.
I am a little regretful that I was not brave enough to choose to work on face. I hope I will still have the chance, but I do feel I should start with a more general topic. Face could be my side project, especially since I have gathered so many resources on possible approaches. But the job is the highest priority. After so many years, I think I have come back to where I began, and this time I hope I can polish my skills and create something elegant and useful.
So I finally got the grit to try some deep learning frameworks. This post covers basic tips for installing Ubuntu 16+ on an ASUS ROG laptop with the Torch deep learning framework, so we can try Facebook's DeepMask+SharpMask.
- Install Ubuntu. I chose to create a USB installer for this; Win32DiskImager is a good tool for writing the install image.
- On the ASUS ROG, some fast-boot features need to be turned off before Linux can boot. Please check online for how to set the BIOS.
- Look at this post from Taiwan for tips, especially about installing the video card driver. Just in case, I leave the commands for the Nvidia driver here:
sudo apt-get purge nvidia*
sudo apt-get purge bumblebee* primus
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-get install nvidia-352 nvidia-prime
sudo add-apt-repository -r ppa:bumblebee/stable
- On YouTube there is a very good tutorial on installing a dual-boot system so that Ubuntu and Windows 10 can run together. In my case, Windows 10 is installed on the 500 GB SSD, so I put the entire Linux system on a 200 GB partition of my 1 TB data disk. In my experience, following that video tutorial should leave you with a dual-bootable system. Since I normally don't hibernate Linux and I have enough RAM, I set swap to 2 GB and kept the remaining 198 GB for the system.
- Do remember to follow the post in step 3 for the video card update, since Facebook's DeepMask needs an Nvidia GPU with compute capability 3.5+. By default, Ubuntu uses the Intel integrated graphics, so installing the Nvidia driver is necessary.
- Now you can follow the Torch web page to install Torch, just as the Facebook DeepMask page suggests.
- Note that even after you have installed Torch, you still need the CUDA package and other components to run the demo.
- First, install the CUDA SDK so that cutorch can be built. Follow the link at Nvidia on how to install the CUDA SDK. Then go to the torch folder on your system and run: luarocks install cutorch — it should work like a charm. Please follow the pre- and post-installation guides, since the commands differ by system. To get the correct CUDA SDK, go to here.
Starting Unity development, especially with Vuforia, will be an exciting learning opportunity for me this year. I hope I can build solid knowledge of this game engine. I feel that we will be successful, and I wish we can really get the project running like hell!
So the first thing to know is that a Vuforia project needs the 32-bit Unity. To get that, you either need a 32-bit machine or the 32-bit editor.
I will try to post some common knowledge here about Unity development.
First, how to source control your project:
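As a starting point, here is a minimal .gitignore sketch for a Unity project (the folder names assume a standard Unity 5-era layout; adjust to your version):

```
# Generated by Unity; can be rebuilt from Assets/ and ProjectSettings/
[Ll]ibrary/
[Tt]emp/
[Oo]bj/
[Bb]uild/

# IDE files Unity regenerates on demand
*.csproj
*.sln
*.userprefs
```

With this in place, you commit only Assets/ and ProjectSettings/. In the editor, setting Version Control Mode to "Visible Meta Files" and Asset Serialization to "Force Text" (under Edit > Project Settings > Editor) keeps .meta files and scenes in diffable text form, at least in the Unity versions I have seen.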
So suppose you are a Windows programmer used to the way Visual Studio arranges and debugs code for you. When you get into Python, setting up a new IDE and a debugging workflow around pdb.set_trace() is, how to say, a new experience you may not work efficiently with. Today I found a plug-in for Visual Studio (multiple versions) that makes Python programming and debugging feel like working with C++ in VS: the same debugging, and even a Python interactive window! I have to say I love it so much.
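For contrast, the plain pdb workflow mentioned above looks roughly like this (a minimal sketch; the function and values are made up for illustration):

```python
import pdb

def scale(values, factor):
    """Multiply every element of values by factor."""
    # Uncomment the next line to drop into the pdb prompt right here --
    # the same breakpoint the VS plug-in would give you graphically.
    # pdb.set_trace()
    return [v * factor for v in values]

print(scale([1, 2, 3], 10))  # prints [10, 20, 30]
```

Sprinkling pdb.set_trace() calls like this works, but you have to edit the source each time, which is exactly the friction a graphical debugger removes.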
So suppose you already have VS installed. Just go to HERE and look at the top-right corner for the plug-in link; for VS 2013 and later, please go to v2.2. During installation it asks for the location of your Python installation; point it to the one you already have. Then you can program Python with Visual Studio. You can even create a new Python project from your existing Python code: the tool will create two files in the same directory as your code, a .pyproj file and a .sln file. You can also follow the video below for more features of the tool.
On Monday after our poster presentation, we can fully enjoy the conference!
The first thing I chose to do was join the panel talk given by research leaders from the University of Arkansas at Little Rock, Intel, Ford Motor, and Microsoft on the topic "The Renaissance of VR: Are We Going to do it Right This Time?" From it, we learned that VR is not new; all of these veterans had worked on VR/AR even before I was born. The key message is that now that the hardware is within reach of ordinary people, this may be a real chance for VR/AR to reach them.
Secondly, I went to the large hall to listen to the keynote speech by MIT Media Lab director Joichi Ito. I learned that the MIT Media Lab (MML) does not hire tenured professors, because they want to keep the research active and fresh. In terms of research, MML focuses on the areas between disciplines, the empty space that existing research leaves uncovered. That is so cool. He also brought up the manufacturing power of Shenzhen, China, and the innovation coming from that region. To me it sounds like this is partly because Shenzhen does not enforce copyright (sometimes rules become a boundary, but I still feel that ignoring copyright is not a good sign for long-term development).
The most amazing thing was the ILM 40th anniversary! OMG, it was like a life dream come true. I sat in the first group of chairs, just below the big heroes who worked on the first Star Wars movie. They talked about how they made the first Star Wars, why they started ILM in the first place to create the special effects, and specifically about the motorized camera rigs that made it possible to capture the same scene from the same camera angles, so that multiple layers of footage could be obtained and the films composited, just as happens now in digital production. However, since the event lasted only about an hour and a half, I think they spent too much time on the old days. At the end, ILM X Lab was announced, without surprise. Mr. Rob Bredow gave an amazing demo of how interactive high-quality rendering can be done in the cloud and streamed to a tablet, revealing more of the story from different characters' points of view. I personally feel that ILM X Lab showed everything they have for advanced movie production this time. But I still feel that exposing so many details of one story line to the audience is not a good recipe for a VR/AR experience: too much brain power is required, and we are looking for entertainment, right? Without doubt, though, this is an amazing new direction!
In the afternoon, it was time for some academic topics! The Face Reality technical papers session attracted a large audience, including supervisors from ILM and other studios. The four presentations covered many technologies for representing facial animation, either in real time or as high-resolution model sequences. EPFL's paper is really interesting; to follow it in more detail you can access the document here. It was also wonderful to say hi to my old friend Laura, and to have a chance to meet the superstar Dr. Hao Li. His facial retargeting work with depth sensors, his amazing experience at ILM and Weta, and his multiple SIGGRAPH hits really left me speechless. The idea of working with him as a post-doc jumped into my mind during our meeting.
Then came the reception, where people meet and talk. I met another amazing professor, Mark Sagar from New Zealand. He is the leader of the Baby-X project (as shown below).
I first learned about Baby-X from FX Guide (thanks to my colleagues at DreamWorks for telling me about this amazing website!). Back then I could not help sending him an email to express my excitement about his ambitious project. He came to SIGGRAPH alone this year, and I recognized him at the reception. He was really generous in sharing his work, and we had a good conversation. This is also the first time Baby-X has been shown at SIGGRAPH, so I wish him a successful presentation at the Real-time Live! event. I also noticed that the research leader of Microsoft attended SIGGRAPH 2015, but that is another story, which I will tell in the next post.
Welcome to Xing Zhang (张幸)'s official webpage. Here I would like to share my journey after earning my Ph.D. from the Graphics and Image Computing Laboratory at Binghamton University: Computer Graphics, Computer Vision, Virtual Reality, Augmented Reality, VFX… basically, everything I am passionate about.
The first big news is SIGGRAPH 2015: I have a poster accepted this year. It will be my first SIGGRAPH experience, and I hope I can learn a lot and keep attending from now on. Check out the fast forward video!