Let delicious barbecue kick off your bright summer! On Saturday (May 14), the experienced grill masters of CCIC-SUNNYVALE will host an outdoor BBQ at Blackberry Farm Park (21979 San Fernando Ave, Cupertino, CA 95014). The event runs from 11am to 6pm, and food should be ready around noon. The spread looks generous: besides the meats pictured above, there will also be regional noodle dishes, fruit, and drinks.
The trails around Old Faithful pass many other geysers, like this bulging one, which can actually shoot water up to 20 meters; its eruptions are irregular, though, with intervals of a dozen hours or even a few days. As it happened, when we returned to wait near Old Faithful later that day, we were lucky enough to see it erupt, and its eruption actually looked quite a bit taller than Old Faithful's. Old Faithful got its name because it erupts roughly every 90 minutes, each eruption lasting about 5 minutes, almost on schedule (hence "faithful"). There are plenty of seats nearby for waiting; in fall and winter they never fill up, but I imagine it gets crowded in summer. My recommended viewing spot is the seating in the direction of this sign: the view is open, and with some luck you can also see eruptions from other hot springs on the high ground to the left.
Our flight home was at 11pm that night, so we had a full day in Salt Lake City. We wanted to see some completely different terrain, and since the weather still looked good, we decided to visit the nearest park, Arches National Park. There are other parks nearby as well, such as Canyonlands National Park (which, as it turned out, we were glad to have as a backup). The drive from Salt Lake City is about 3.5 hours.
Fortunately, Canyonlands National Park was nearby and wasn't full. It turned out that Canyonlands is also wonderful, and it's perfect for a short trip. Driving up the mountain brings you into Canyonlands, which offers great canyon views as well as Mesa Arch, a weathered stone arch on top of the mesa and one of Utah's most distinctive landforms. Looking out over the canyon from the top of the mesa even feels a bit like the Grand Canyon. In Canyonlands we visited an overlook near the visitor center plus the stunning Mesa Arch. By then it was past 2pm, just in time to drive back. So it was lucky we skipped Arches National Park; otherwise there really wouldn't have been enough time.
For GPS, you can use: Pillar Point RV Park, 4000 Cabrillo Hwy, Half Moon Bay, CA 94019
The best place to park is Parking A, which is Pillar Point RV Park. It doesn't seem to have many spots, though, so if you're coming from the south, consider turning right onto Coronado St (at the green line; there's a traffic light, easy to spot) before you reach Pillar Point RV Park and using Parking B, then crossing Highway 1 on foot to reach Parking A. Be careful crossing the road if you have kids with you.
2020 is passing by. Overall, I feel not much great happened this year. Most of it felt like my PhD years: endless work and little social or personal conversation. I am glad my parents stayed with me, and we did have a great time together.
Looking back on the year, it was mostly about Covid-19 and the US presidential election. As I write this, I still do not know where the virus came from or who will get into the White House. But one truth I learned is that people really do not care about what you have said. It is just history, and it can be forgotten. Or, to put it another way, the line defining truth is not clear anymore.
So is there a truth? I choose to believe so. That is what I chose in 2013 in Binghamton. Jesus makes life simple, and that makes it easier for me to be happy. At the end of the year, I was so blessed to say something from deep in my heart to someone from my past about what I believe, and to get positive feedback from her about her own belief on my birthday. I also heard from a friend that she started going to church in Taiwan, to put herself before the Lord. I simply feel happy to see someone I really care about, even though I did not express that care enough when I had the opportunity, get a chance to hear the gospel. I do not sound like a crazy person to them after all.
In the end, I think what I learned this year is that I should do good things because I have a good heart, not because I want the rewards. Let God do the job and make the right one, someone I feel comfortable with, show up (please also be quick).
I think I like working from home. I like jumping rope and doing some weight lifting just outside my room. I also like seeing that I got a brick on the EB wall with a lot of words written on it (maybe the wordiest one). I did some meaningful things this year.
I also played piano at a friend's funeral. Yes, this year I saw marriage, and I also saw death, and I know I cannot explain them well yet. What I learned is that we should, at the right time, say the words we want to say, and say them clearly, as if the end of days were tomorrow. Tell others how much we love them as if we won't see each other again until we face Jesus. Try to tell everyone that you hope to meet them when you can, and hope to be with them forever, if possible, after we leave this world, no matter how suddenly that happens. Make sure they hear the gospel.
For 2021, I hope:
I can keep a good habit of writing useful blog posts here again, at least once a month.
I can finish some religious books so I have my own understanding of my belief and can talk to my friends with a complete chain of logic, and maybe create some slides so I can preach in my own way.
I hope I can work continuously on my GitHub and build some good projects. This needs to be frequent; small progress every day would make a lot of difference.
I hope I can finish my Game Engine Architecture book.
I hope I can write a ray tracer in Taichi.
I hope I can get a good sense of AWS usage.
I hope I can find the one who will be with me on the road to God, and please make sure I feel she is the right one.
Focus more on myself, take good care of myself, and enjoy what God has planned ahead.
What an early flight to Long Beach! I woke up at 3:30am and found that Lyft drivers were available even in the middle of the night. I have to say Lyft/Uber makes life easier. Just a reminder, though, that SJC won't open check-in until 4:00am… so don't rush there too early.
So Sunday and Monday are for the workshops. In the morning I went to the 3D Scene Understanding workshop and listened to a good talk on “What Do Single-view 3D Reconstruction Networks Learn?” It points out that current state-of-the-art single-image reconstruction work is, with high probability, just doing image retrieval. This is because the shape similarity metric is not good enough, and the training set is contaminated by models that already look very similar to the ones in the test set. Also, using a fixed model pose as the single-image input fits the 2D image case but is not really the best choice for the 3D mesh case. The talk really clears up some issues in 3D reconstruction research, and I think the paper is worth reading. You can find the paper Here. And here is the YouTube video for the talk.
However, on the same day Facebook AI also presented their Mesh R-CNN, which basically reconstructs a mesh from a single image, much as their Mask R-CNN creates 2D masks from a single image. It would be interesting to check that paper and see whether it runs into any of the issues pointed out by the work above.
In the afternoon my colleague took me to the ScanNet benchmark challenge workshop. Professor Matthias Nießner is really active in facial/body reconstruction work, and his work has now expanded to general 3D scene capture and registration. ScanNet aims to create a dataset with vertex-level labels plus 3D bounding boxes, like a 3D version of ImageNet. The workshop is basically a showcase of everyone who participated in the detection task on the ScanNet dataset. The Stanford work achieved very good results by taking advantage of temporal coherence information. It is a very interesting idea that fundamentally improves the data representation and training procedure. Very nice results.
Later in the afternoon I went back to my original research domain for a survey-style talk on state-of-the-art human body/facial capture given by Michael J. Black. I do feel there is great potential here; I need to investigate when I have some spare time. And here is the video for that.
I think on Monday I will be in the AR/VR session. I hope to learn more in this area, or at least see which parts people still have not covered yet…
Today was a little casual. In the morning, I visited Nvidia's ray tracing/path tracing session. They emphasized that, much like the first GPU card in 1998, RTX is a new thing that everyone should try to catch up with.
Then I also went to the 3D capture session. The papers there were all very interesting; I think this area is at an important stage at the moment.
In the afternoon, I went to the material capture session. It was nice to see how a deep learning model can be trained with a differentiable renderer to generate materials from a single image. I do need to look into this work.
Today was a space-time mixture adventure. In the morning I tried to get into the talks for two state-of-the-art face-related papers in the VR session. One is from TUM and the other is from Facebook Reality Lab. Both try to tackle the issue of how to show a genuine whole-face expression in VR while both parties wear headsets.
On one side, Matthias Niessner and his stellar face synthesis team explore how to deal with this issue based on their Face2Face work. The advantage is that, because they use a generic face model, the representation is not strongly subject-dependent, so no calibration or pre-capture is necessary. However, because only an infrared camera inside the headset is used for eye gaze tracking, the upper face's expression may not be preserved.
Facebook, on the other hand, uses a subject-dependent, high-quality model for this work, and uses deep learning for teeth composition. The quality looks better; however, it requires pre-capture of the subject.
And thanks to my friends from Pixar: this time we noticed there was no booth for the animation studio, so we didn't know where to pick up the RenderMan teapot. It turns out they hand them out after their RenderMan 22 demo talk, which lasts an hour. It is actually a really good talk: 30 years of RenderMan development, from scanline rendering to ray tracing, and then path tracing. They gave up the old infrastructure in favor of physically correct and simple models. It is good to see that, at this stage, ray-traced lighting can be achieved at interactive speed. With the help of Nvidia's RTX, I think production time for all stages of animation can shrink, and we could see more ideas make it into movies since the cost of trying out new story lines, cameras, actions, etc. is lower. But the most important thing is getting my teapot!
The Real-Time Live! demo session was also crazy. The combined VR virtual movie shot demo from Nvidia RTX, ILMxLAB, and Unreal is a total game changer for how we can make movie-quality shots in real time with everyone inside a virtual environment. I can imagine that in the near future, individual shots may be captured in this real-time ray-tracing environment; the director can then cut the movie for review and hand the shot to the offline renderer, if necessary, for the final frames.