ARBA: Augmented Reality Bay Area meetup

On Jan 24, 2017, we joined the first Augmented Reality meetup hosted at the Runway Incubator in San Francisco. This was also my first time attending the meetup.

The company above basically builds a Kinect-like device that works with a VR headset to provide body tracking.

This is an interesting device: a company named Ultrahaptics creates an ultrasound matrix (the green dots under the palm) that produces air pressure driven by what the computer displays. In VR, this makes it possible to feel virtual objects. However, a flat plane only exerts force from one direction; to resolve this, the blocks can be assembled into a cube-shaped array. This could be a good add-on for automobile control panels, where Ultrahaptics could provide the feel of real buttons so drivers don't need to look at the panel.

Wearing a HoloLens in daily life really requires a strong mind! All right, the presentations start. First, Occipital demonstrated a clever mixture of AR and VR. They created a headset for the iPhone that also includes a depth camera. In the demo, they showed how the depth camera can be used to scan a room and create a 3D mesh model of the space, with the texture generated from the iPhone camera. After the 3D mesh of the space is generated, the model is loaded into the VR headset so it becomes an AR environment. In my opinion, this is an offline real-world mapping trick that takes advantage of 3D reconstruction functions; as a result, the system may not be ready to process real-time point cloud data.


The next company, Yowza, showed a new idea for converting our real living space into the digital world. In their approach, a raw mesh of the space is captured and uploaded to their cloud, where the point cloud is segmented and classified against complete furniture models in their dataset. The 3D models then replace the raw mesh, ideally producing a complete 3D scene for a VR environment. 3D object recognition and 3D segmentation are hot topics at SIGGRAPH and CVPR, and this company's idea would be a very good feature for VR.

The last demo was a Tango-based one from Clever Robot Labs. It analyzes the 3D point cloud from a Tango phone to recognize the ceiling, floor, tables, beds, etc. in real time, then replaces them with VR content dynamically. Interestingly, the algorithm can scale the virtual table based on the point cloud of the real one. Please see the video for the result.

After that, some new members introduced themselves and shared job openings. It was a very nice experience, focused mostly on the technical side. I am also glad that our CEO An Li had a good conversation with Ori Inbar. I hope we can join more AR meetups this year!

I485 timeline

Luckily, I got the acceptance message for my I-485 application just before the fee increase. I hope everything goes smoothly so it can be done before the summer of 2017. In this post I will keep updating my I-485 timeline. For my I-140 EB1A, I used Premium Processing directly: I submitted the application on Nov 20, 2016 and got the approval notice on Nov 30, 2016. I then spent 1.5 weeks on I-485 preparation and the medical exam, and mailed my package on Dec 12, 2016.

  1. 12/12/2016 Mailed the package to Phoenix.
  2. 12/14/2016 The package was delivered to the mailbox in the afternoon.
  3. 12/21/2016 The check was cashed.
  4. 12/22/2016 At noon I received the text message and email saying my application was accepted and transferred to NSC.
  5. 12/24/2016 Received my AP application back with a note that my form had expired. This was a processing error by USCIS, since someone else actually used the expired form and got accepted.
  6. 12/26/2016 Received the receipts for my EAD and I-485 (official receipt date is Dec 15, 2016).
  7. 12/28/2016 Mailed my new AP application package using the new form.
  8. 01/11/2017 In the afternoon I received a text message indicating that my AP application was accepted and transferred to NSC. Now the long wait begins…
  9. 01/29/2017 USCIS mailed the fingerprint appointment notice; I received it on 02/04/2017.
  10. 02/16/2017 Fingerprint appointment.
  11. 02/25/2017 USCIS notified me that my new EAD card was produced.
  12. 03/01/2017 USCIS indicated that my new EAD was mailed to me.
  13. 03/03/2017 Received the EAD card in the mailbox. It uses the photo taken during the fingerprint appointment and carries my fingerprint.
  14. 4/19/2017 Received a text message that my AP was mailed.
  15. 4/24/2017 Received my AP papers, two copies, valid from 4/14/2017 to 4/18/2018.
  16. 8/23/2017 Went to USCIS for a status check; the name check had just completed on 8/22/2017, so that does take a while. BTW, my NIW I-485 has been approved, running from 9/26/2016 to 9/8/2017. NIW cannot be expedited, so that is basically how long it takes.
  17. 10/20/2017 Submitted a service request.
  18. 11/17/2017 Got a reply to the service request, indicating that the file is in the queue for review.
  19. 11/20/2017 Status updated to "Card is being produced."
  20. 11/27/2017 Status updated to "Card was mailed to me."

In memory of the blue fish

Today I found the blue fish dead in the tank. I hadn't even picked a name for it…

I feel sad because it had been with us in the office for less than a week, and I was mainly the one taking care of it. What matters most is that it died with white mold covering its body. I actually saw the white mold yesterday, while it was still alive, but forgot to spend one minute googling it. I saw the pattern but did not take it seriously. Then, after just one night, it was gone.

Another sad thing: no one noticed it except me. I suspected no one would ask where the fish went for an entire week. Let's see… UPDATE: Yes, some people did notice, so it's not that disappointing.

This makes me think about the company at the same time. How should we run a healthy business? First, we need a lot of experience in running a successful business. If we do not have a veteran in the group with enough experience to recognize the pattern of sickness, we need to learn as much as possible, as soon as possible. A lack of knowledge is dangerous and may lead to catastrophe overnight. Yet the problem may not be that hard to fix in the early stage; miss the chance to do the easy thing, and the damage can no longer be stopped.

Pay attention to details. If we do not treat abnormal cases in a company as seriously as possible, we lose the chance to fix them.

Why might nobody notice the death of the fish? Because no one really took it as his or her responsibility. I fed the fish, but I did not extend that responsibility to keeping it alive; I did not bear the consequence of its death, so I did not treat it seriously. For a company, I see two remedies. One: make a very clear responsibility list for everyone, so that in the whole picture every corner of the company is taken care of. The other: be self-motivated to look after the development of the company as part of your general duty. Self-motivation needs stimuli. For me, I need to prove myself and get more than I expect from the company in order to treat it as my own business. Understanding what you want from your job is critical: it is good for your own development, and good for the company, since people then have a direction in which to motivate themselves. Once people have that inner drive, a leader can guide it, or even redirect it toward the general benefit of the company. If an employee lacks that drive, pushing them is useless and really won't go anywhere.

Employees should not ignore abnormal situations in the company. It takes trained eyes to detect the "white mold" on the body, which requires studying more than just technical and theoretical knowledge. In a small company, I feel every employee needs to be trained in this leadership skill. In return, the boss should also watch for the employees' own "white mold": try to understand where each employee's drive lies, and help them build a career path and grow in a healthy way. It costs much more if we ignore a small problem we noticed in the early stage.

I buried the little blue fish in front of building 3150, under a bush. I am so sorry, little one.

December 14, 2016

China Tech Day Take-away

On December 10, 2016, the Santa Clara Convention Center hosted the "China Tech Day" event, which invited a group of CEOs to visit Silicon Valley. Here is some interesting information to take home.

In the opening talk from Chun Li, CTO of Alibaba, I noticed that what Ali accomplished was to build a platform for all the people who want to do business but lack the capacity to handle the technology part. For example, in the early 2000s they solved problems like creating a general API for handling money across all the different banks in China.

After some business trial and error, they started to design and develop a general platform that can support business in China, mainly with a multi-layer design. When thinking through a problem, they keep the question "How do different cases run on this system?" in mind.

They also think about how to organize people to develop the platform together.

They also mentioned the differences between Silicon Valley and China:

  • Silicon Valley focuses on tech and product.
  • China is about making the business big, so it involves more focus on getting orders and scaling as soon as possible, since it is not that hard to duplicate a similar product in China. The real question is who will become the big boss.

The next report was from Zhengrong Tang, vipabc's CTO, a veteran who worked 10 years in Silicon Valley and then 10 years back in China. His talk was really informative on how to start a business.


Then we had a panel on how Asian engineers should plan their career paths. But I feel the panel went a little off the "Asian" topic; it was more about the difference between being an engineer in Silicon Valley versus China.

  • There are basically two professional career pathways: technical and management. Both can go pretty high, but the technical path seems to need a strong research passion to keep things going, which is not easy.
  • When you choose a job, if you actually have the power to choose, look for culture fit, challenging work, fun, and room for self-improvement (such as working under the supervision of a big star).
  • Management is about how to help others from your own perspective.
  • A rating and evaluation system is also critical, since it provides an objective way to see what people think about your work performance. Normally the rating covers Leadership, Leverage, and Results.
  • For people who choose to go back to China to work: the days when overseas students enjoyed special attention are gone. If you were not exceptional while studying or working in the US, coming back to China won't make you any different. You also need to avoid relying on English at work; some technical terms need to be learned in their Chinese form. This group should think about what they can do for a job in China with the knowledge they learned in the US.

Then came the technical part, an AR/VR introduction given by Wanmin Wu. This was an intro-level talk about AR/VR, but it was very helpful for sorting out my knowledge.

  • The non-technical challenge for VR is anti-social behavior: in its current form, it does not actually help people communicate.
  • The challenging part of AR is eye tracking, so the system can dynamically refocus. Interaction needs to be natural and effective. Displays are hard to miniaturize. Content is an issue. Power drains fast. How do you reduce the heat? How do you increase the FOV and resolution? How do you render pure black on a transparent surface?

In conclusion, I feel this was a very interesting event and gave me more insight into how a person becomes bigger than just one.

My Green Card Application Experience with WeGreened

Looking back, I was very lucky to work with WeGreened (North America Immigration Law Group) on a combined EB2-NIW and EB1A package, with the EB1A ultimately approved within 7 days under Premium Processing.

I chose WeGreened mainly because of their confident initial evaluation of my case. I finished my PhD in July 2015 and started working in California, then began contacting immigration lawyers. My profile at the time:

  • PhD in Computer Science from a US public university
  • 19 papers: 5 journal and 14 conference, 7 first-author. In fact only one first-author journal paper, and 2 were collaborative psychology journal papers that only required submitting an abstract, so they were not full articles.
  • 60 citations, concentrated in one or two papers
  • No major international awards, only small school awards and student competition finalist awards
  • No patents, no media coverage
  • Although I reviewed many papers during my studies, none came with a thank-you letter addressed directly to me.

Most lawyers thought this profile could support EB2-NIW but not EB1A. I contacted WeGreened and sent them my CV; the firm quickly replied that they could handle a combined EB2-NIW and EB1A application, with the EB2-NIW under their money-back guarantee. This guarantee is special: if WeGreened offers it for your case and USCIS ultimately denies your application, the firm refunds the full attorney fee (the USCIS filing fee probably cannot be refunded…). So if the firm dares to guarantee a case, your chance of approval is high. In fact, WeGreened's EB2-NIW and EB1A approval rates in 2014-2015 were both above 95%, and above 80% even for the cases they did not dare to guarantee. I think this confidence rests on two things: 1. the data and experience gained from handling a large volume of cases; 2. strict screening of clients.

So I signed the agreement in September 2015 and started collecting the materials the lawyers needed on WeGreened's online information platform.

  1. The lawyers provide a 10-page template asking you to summarize your 3-4 highlights, your list of recommendation letter writers, and related background. The template is organized as questions, so you can write in a very structured way. This document is crucial: the lawyers are certainly not experts in your research area, so the content of your petition letter and recommendation letters will be based on this template, sometimes copied verbatim. So it needs some real thought, but it still feels much easier than drafting your own petition letter from someone else's example.
  2. Based on the corresponding highlights and recommenders, the firm then drafts the recommendation letters; after that, it's up to you to ask the big names to sign. For EB1A, at least one recommender must be from outside the US. Also, don't single-mindedly chase the top stars in your own field: the officer may compare you against them and conclude you look weak next to them, which backfires.

Meanwhile they complete your petition letter and the list of required items, so you prepare the items while collecting recommendation signatures. The firm prepares a petition letter draft from your template within 10 days. As I said, since they are not experts in your field, their writing about your area will inevitably contain things that look wrong to you; please revise patiently. After several rounds of revision, and once the materials are ready, you can package everything and mail it to the firm.

By the time I formally submitted the EB1A application, my profile was:

  • PhD in Computer Science from a US public university
  • 20 papers: 5 journal and 14 conference, 7 first-author. In fact only one first-author journal paper, and 2 were collaborative psychology journal papers that only required submitting an abstract, so they were not full articles.
  • 160 citations, concentrated in one or two papers
  • No major international awards, only small school awards and student competition finalist awards
  • No patents, no media coverage
  • 16 journal/conference review thank-you letters.

I filed the Premium Processing application with NSC around Thanksgiving 2016 and received the electronic approval notice on November 30.

Written at the Eight-Year Mark

I have been in the US for 8 years, and today I finally put a period on the PhD chapter of my life. My EB1A application was approved within 7 days: on November 30, 2016, an officer in Nebraska reviewed the path I have walked under God's keeping these eight years and gave the answer "Approved."

When I opened the browser, I happened to see the page I had checked yesterday pop up again, so I typed "L" in the address bar, selected the receipt number from the history, and hit enter. The page showed the approval message. I couldn't believe it, and pulled out the original receipt to compare again and again. It felt like it couldn't be me; I couldn't be this lucky.

Yes, this is me, someone down-to-earth who always feels not good enough. When I started my PhD I never imagined I would make it to SIGGRAPH, meet so many brilliant people, say hello to the head of Industrial Light & Magic, finish the dissertation that felt like it would never end (before I started writing, it truly felt like Mission Impossible), start a company with my classmates, or that what I did over the past eight years would so accelerate my green card process.

This is God's good will. In my hardest times, it was the responsibility and tenderness in my heart, and the words of my church family in Binghamton, that let me return to God when I was lost, and understand that God does not grant our reckless requests because He has prepared something better ahead, and that every setback is part of His plan for our lives.

I also understand that those who deny God can live wonderfully, even more freely, and I am no match for more devout Christians. But what I keep in mind is this: at Easter 2013 I made this choice, and I will keep walking the Lord's path. I may disobey, I may doubt, but I will not give up. It is just like boarding the flight to New York State on August 3, 2008; time flies, and how much do all the resentment, joy, and sorrow really matter now? A person's mindset can completely change one's behavior and even the trajectory of one's life. I want to be more positive while keeping my caution and seriousness, and keep walking the path the Lord has arranged for me.

That said, I got a ticket this morning while picking someone up, so God's work is truly unfathomable…

Seeing Fish Leong (Liang Jingru) and old friends at Christmas; this year I am truly content.

Bay Area Hiking Notes, Part One

This post kicks off a series about one of the San Francisco Bay Area's favorite pastimes: hiking. Straight to the point; today I cover two places I have visited with completely different styles: Mission Peak in Fremont, and Muir Woods National Monument north of the Golden Gate Bridge.

  • Muir Woods National Monument

Address: 1 Muir Woods Rd, Mill Valley, CA 94941

Sun exposure: ★

Difficulty: ★★★

Classic rating: ★★★★★

Fee: $10

Muir Woods is a Northern California attraction and truly worth a visit. Established over 100 years ago, it is full of ancient trees blocking out the sun; walking through it in the daytime feels like stepping back into prehistoric times. It was also a filming location for Star Wars (see the video above). The valley floor is boardwalk, with several different trails branching off from both sides. I recommend the Skyline and Dipsea trails, both of which climb the hillside on stone steps and finally open up to an overlook of the Pacific. The Dipsea trail takes about 2 hours one way; Skyline should be a bit shorter.

One key tip for this trail: go early! Parking is really limited, so arriving after 9 am makes it hard to find a spot. Also, if you arrive before the park rangers, you probably won't be charged admission.

  • Mission Peak Trail


Address: Stanford Ave, Fremont, CA 94539

Sun exposure: ★★★★★

Difficulty: ★★★★

Classic rating: ★★★★

Fee: Free

If you want a serious workout, come to Mission Peak! Weekend parking is hard to find; after 8 am you will probably have to park in the mansion neighborhood next door. Since the hill is ranch land, cows often cross the road, and of course watch out for cow dung. The trail winds its way up, with views back over the entire Bay Area. The last stretch is rocky, so be careful; after summiting you can line up to take a photo with the marker at the peak (as in the photo above). It takes about 2 hours to the top.

Since it is in Fremont, places to eat and drink after the hike are all very convenient: Boiling Point, Shaoshan Chong, and the newly opened Tao Wei seafood buffet in Union City… (er, that kind of defeats the purpose of hiking)

Hiking preparation

I usually steam some sweet potatoes and eggs the day before and put them in my pack, then bring two bottles of water. Remember sunscreen (or apply it before leaving), bring a hat, and wear athletic shoes. For beginners, trekking poles are a good idea; Costco sells them, as do sporting goods stores.

That's about it for now. I will reply to questions as they come, and corrections are welcome.

Start with deep learning

So I finally got the grit to try a deep learning framework. This post covers the basic tips for installing Ubuntu 16+ on an ASUS ROG laptop with the Torch deep learning framework, so we can try Facebook's DeepMask+SharpMask.

  1. Install Ubuntu. I chose to create a USB installer for this; Win32DiskImager is a good tool for writing the install image.
  2. For the ASUS ROG, the fast boot feature needs to be turned off so Linux can boot up. Please check online for the BIOS settings.
  3. Look at this post from Taiwan for tips, especially about installing the video card driver. Just in case, I leave the commands for the Nvidia driver here:
    sudo apt-get purge nvidia*
    sudo apt-get purge bumblebee* primus
    sudo add-apt-repository ppa:graphics-drivers/ppa
    sudo apt-get update
    sudo apt-get install nvidia-352 nvidia-prime
    sudo add-apt-repository -r ppa:bumblebee/stable
  4. On YouTube there is a very good tutorial on installing a dual-boot system so Ubuntu and Windows 10 can run together. In my case, Windows 10 is installed on the 500 GB SSD, so I put Linux on a 200 GB partition of my 1 TB data disk. My experience confirms that following the video tutorial gives you a dual-bootable system. Since I normally don't need to hibernate Linux and I have enough RAM, I set swap to 2 GB and gave the remaining 198 GB to the system.
  5. Do remember to follow the post in step 3 for the video card update, since Facebook's DeepMask needs an Nvidia GPU with compute capability 3.5+. By default Ubuntu uses the Intel integrated graphics, so installing the Nvidia driver is necessary.
  6. Now you can follow the Torch web page to install Torch, just as the Facebook DeepMask page suggests.
  7. Note that even after installing Torch, you still need the CUDA package and other components to run the demo.
  8. First install the CUDA SDK so that cutorch can be installed; follow the link here at Nvidia for how to install the CUDA SDK, and get the correct CUDA SDK from their download page. Please follow the pre- and post-installation guides, since the commands differ by system. Then go to the torch folder on your system and type: luarocks install cutorch. It should work like a charm.
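For reference, steps 6-8 above can be condensed into a short shell sketch. This is an assumption-laden summary, not the official procedure: the repository URL and package names follow the Torch instructions as they stood around 2016 and may have changed, and the final sanity check assumes the CUDA SDK and Nvidia driver are already installed:

```shell
# Install Torch into ~/torch (per the official Torch instructions)
git clone https://github.com/torch/distro.git ~/torch --recursive
cd ~/torch
bash install-deps        # installs LuaJIT and other dependencies
./install.sh             # builds Torch; answer yes to update your shell profile
source ~/.bashrc         # pick up the new th and luarocks commands

# With the CUDA SDK already installed (see Nvidia's guide),
# add the CUDA backends for Torch:
luarocks install cutorch
luarocks install cunn

# Sanity check: should print the name of your Nvidia GPU
th -e "require 'cutorch'; print(cutorch.getDeviceProperties(1).name)"
```

If the last command prints your GPU's name, Torch can see the CUDA device and the DeepMask demo has a chance of running.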

CVPR Experiences: Conference Session 2

On Tuesday, the major show was the face session! However, we have to say that face-related research is no longer the main thing at CVPR, as the session name indicates: "Computational Photography and Faces." Sure, there were a lot of posters about face modeling and expression detection, but the limited oral slots tell the current trend. No worries, though; at SIGGRAPH, the face session is always packed!

In the afternoon oral session we had 5 presentations, which definitely showcase the state of the art in this area. I loved this session, because this is where I belong, and of course our lab contributed to one of the papers!

13. Recurrent Face Aging: a cool 2D dataset containing many faces covering a large age span is created and used to predict a person's aged face.

14. Face2Face: Real-Time Face Capture and Reenactment of RGB Videos: What can I say; it's the jaw-dropping demo video from last SIGGRAPH Asia. This time they updated the model to work with only 2D RGB images. The presentation was cool because it ended with a live demo with Putin as the target agent! The Basel morphable face model is used for identity fitting, which requires the user to show a frontal face, then rotate left and right, to create a subject-dependent 3D face model. The initialization takes about 30 seconds; after that you have a fully controllable avatar. The texture albedo is also learned. Based on my demo test, the system is pretty nice and smooth; however, don't expect it to handle directional light, as it seems to assume a global light source. Even without a tongue model, it can still accurately model lip movement, so normal speech should be OK. There are more interesting stories behind it. For me, it was such a nice experience to meet the authors here at CVPR!

15. Self-Adaptive Matrix Completion for Heart Rate Estimation From Face Videos Under Realistic Conditions: a stable region is located in the face image, and the model can be used to estimate heart rate from the image data. I was so glad to see that the demo and illustration videos/images are actually from our database!


16. Visually Indicated Sounds: MIT always has the guts to do cool stuff. The authors note that human beings can predict the sound of materials pretty well from an image alone. So they spent a lot of time hitting "A LOT" of objects with a drumstick while recording with a video camera. Then they trained a deep learning model so the machine can pick up the motion of the drumstick hitting certain objects and synthesize the corresponding audio.
We know that in movies, the audio composers sometimes cannot capture the real sound of a scene for various reasons and have to create the audio effect with other objects. This CVPR paper is like an automatic way to do that.

17. Image Style Transfer Using Convolutional Neural Networks: want to transfer Van Gogh's painting style to your photo automatically? This is the instruction manual.

Here are some more photos covering the topics of the second day.

CVPR Experiences: Conference Session 1

The first day of CVPR was packed with good talks showing the current trends of computer vision research. Day one was dominated by object detection work, especially work using convolutional neural networks (CNNs, a.k.a. the deep learning approach).

Here I report some of the interesting work:

Matching and Alignment:

    1. Learning to Assign Orientations to Feature Points: implicitly including orientation learning via a CNN in 3D reconstruction helps recover the missing parts of the alignment, so you get fewer holes. It seems the orientation of an image patch can play a key role in image alignment.
    2. Learning Dense Correspondence via 3D-Guided Cycle Consistency: applied directly to cars, this paper shows how to find matches between two images. The similarity needs to be at the component level; this way you can reconstruct image B from image A's pixels while still maintaining the structure and orientation of image B. It shows how to align a 3D model to a 2D image, and to cope with possible occlusion they try matchability learning. A possible extension is to expand from patches to the entire target, so that even under occlusion we can recover a full image.
    3. The Global Patch Collider: finds patches that match across different images, via forest voting.
    4. Joint Probabilistic Matching Using m-Best Solutions: a small optimization that uses a sampled weighting function to choose several sub-optimal solutions.
    5. Face Alignment Across Large Poses: A 3D Solution. Traditionally, face alignment relies on all tracked landmarks being available, which is too strong an assumption, and labeled large-pose training data is rarely available. In this paper, the authors synthesize training data by aligning a morphable model to faces with known pose, obtaining the 3D positions, the corresponding 2D intensities, and the pose. A CNN can then be trained to locate the correspondences.

During the spotlight session, segmentation and contour detection were covered.

  1. Affinity CNN: Learning Pixel-Centric Pairwise Relations for Figure/Ground Embedding: worth looking into.

Then I basically went to the poster session and took some photos of the posters I am interested in. One about a real-time (80 fps) CNN detector, with lower accuracy, got my attention: low memory bandwidth, with full code and a "How to run" tutorial. This could be a very good base for trying some cool ideas. Details can be found at Pjreddie.com/yolo.

Here are some poster photos:

At the end of the first day, the best paper award and related honors were announced. MSRA's new deep learning model, "Deep Residual Learning for Image Recognition," shows Microsoft's position in this deep learning battle. Having won all the major competitions in 2015, the model may not sound very elegant, but it works. Giving the best paper award to this work sets the tone of this CVPR as, still, "Deep Learning." Later during the conference we heard that one of the paper's authors, Jian Sun, had been poached from Microsoft Research Asia by Face++ with a super-high salary (like 8 digits in Chinese yuan). As far as I know, good PhD students focusing on deep learning normally don't worry about jobs or salary at all; they are a scarce commodity in the market, because there is so much data and so few people who know how to mine it.