ARBA: Augmented Reality Bay Area meetup

On Jan 24, 2017, we joined the first Augmented Reality meetup hosted at the Runway Incubator in San Francisco. This was also the first time I attended the meetup.

The company above essentially builds a Kinect-like depth sensor that works with a VR headset to provide body tracking.

This is an interesting device: a company named Ultrahaptics builds an ultrasound transducer matrix (where the green dot is, under the palm) that applies air pressure based on what the computer displays. In VR, this makes it possible to feel virtual objects. However, a flat plane only applies force from one direction; to resolve this, a cube-shaped array can be assembled from the blocks. This could be a good add-on for automobile control panels, where Ultrahaptics could provide the feeling of real buttons so people don't need to look at the panel while driving.
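The core idea behind such haptic arrays is phased-array focusing: each transducer fires with a phase offset that cancels its propagation delay, so all the waves arrive in phase at one point above the array and create a pressure peak there. As a rough sketch (the transducer layout, frequency, and focal point below are my own illustrative assumptions, not Ultrahaptics specifics):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air
FREQ = 40_000.0         # Hz, a common choice for ultrasonic haptics
WAVELENGTH = SPEED_OF_SOUND / FREQ

def phase_delays(transducers, focus):
    """Phase (radians) for each transducer so that all emitted waves
    arrive at `focus` in phase, creating a pressure peak there."""
    delays = []
    for t in transducers:
        d = math.dist(t, focus)
        # Phase that cancels the propagation delay, wrapped to [0, 2*pi)
        delays.append((2 * math.pi * d / WAVELENGTH) % (2 * math.pi))
    return delays

# Hypothetical 4x4 grid of transducers 1 cm apart in the z=0 plane
grid = [(0.01 * i, 0.01 * j, 0.0) for i in range(4) for j in range(4)]
# Focal point 15 cm above the array centre, roughly where a palm would hover
delays = phase_delays(grid, (0.015, 0.015, 0.15))
```

Steering the focal point over time is then just recomputing these delays per frame, which is how the display can "draw" shapes on the palm.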

Wearing a HoloLens in daily life really takes a strong mind! All right, on to the presentations. First, Occipital demonstrated a smart mixture of AR and VR. They created a headset for the iPhone that also includes a depth camera. In the demo, they showed how the depth camera can be used to scan a room and create a 3D mesh model of the space, with the texture generated from the iPhone camera. After the 3D mesh of the space is generated, the model is loaded into the VR headset so it becomes an AR environment. In my opinion, this is an offline real-world mapping trick that takes advantage of 3D reconstruction; as a result, the system may not be ready to process real-time point cloud data.
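The first step of such a scan is back-projecting each depth pixel into a 3D point using the pinhole camera model, and the mesh is then built on top of that cloud. A minimal sketch of the back-projection (the intrinsics and the tiny depth image are made-up illustrative values):

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into camera-space 3D points
    via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # skip invalid (zero-depth) pixels
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# Toy 2x2 depth image with one invalid pixel; hypothetical intrinsics
cloud = depth_to_points([[1.0, 0.0], [2.0, 1.0]],
                        fx=100.0, fy=100.0, cx=0.5, cy=0.5)
```

A real pipeline would fuse many such frames (e.g. via TSDF fusion) before extracting the mesh, but per-frame back-projection is where it starts.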


The next company, Yowza, showed a new idea for converting our real living spaces into a digital world. In their approach, the raw mesh of the space is captured and uploaded to their cloud, where the point cloud is segmented and classified into complete furniture models from their dataset. The 3D models then replace the raw mesh, ideally producing a complete 3D scene for a VR environment. 3D object recognition and 3D segmentation are hot topics at SIGGRAPH and CVPR, and this company's idea would be a very good feature for VR.
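At its simplest, the "segment, then swap in a catalogue model" step is a matching problem: extract a crude feature from each segmented cluster and find the nearest catalogue entry. The sketch below uses bounding-box dimensions as the feature, which is my own stand-in for whatever real 3D recognition Yowza runs; the catalogue sizes are hypothetical:

```python
import math

# Hypothetical catalogue: furniture class -> typical (width, depth, height), metres
CATALOG = {
    "chair": (0.45, 0.45, 0.90),
    "table": (1.50, 0.80, 0.75),
    "sofa":  (2.00, 0.90, 0.85),
    "shelf": (0.80, 0.30, 1.80),
}

def bbox_dims(points):
    """Axis-aligned bounding-box dimensions of a point cluster."""
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

def classify_segment(points):
    """Match a segmented cluster to the catalogue entry whose typical
    dimensions are closest (a toy stand-in for real 3D recognition)."""
    dims = bbox_dims(points)
    return min(CATALOG, key=lambda name: math.dist(dims, CATALOG[name]))

# A cluster roughly the size of a table
cluster = [(0.0, 0.0, 0.0), (1.4, 0.0, 0.0), (0.0, 0.75, 0.0), (1.4, 0.75, 0.7)]
```

Once a cluster is classified, the raw mesh for that region can be deleted and the clean catalogue model placed at the cluster's pose.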

The last demo was a Tango-based one from Clever Robot Labs. It analyzes the 3D point cloud from a Tango phone to recognize the ceiling, floor, tables, beds, etc. in real time, and can then replace them with VR content dynamically. Interestingly, the algorithm can use the point cloud of the real table to scale the virtual table that replaces it. Please see the video for the result.
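Floor and ceiling are the easiest surfaces to pick out of a gravity-aligned point cloud: they are the lowest and highest large horizontal planes. A toy sketch of that labelling (heights only; a real system like this one would also use surface normals and plane fitting):

```python
def label_planes(points, tol=0.05):
    """Label points as 'floor', 'ceiling', or 'other' by comparing each
    point's height (z) to the scene's lowest and highest extents.
    Heights alone are a rough stand-in for proper plane detection."""
    zs = [p[2] for p in points]
    floor_z, ceiling_z = min(zs), max(zs)
    labels = []
    for (_, _, z) in points:
        if abs(z - floor_z) < tol:
            labels.append("floor")
        elif abs(z - ceiling_z) < tol:
            labels.append("ceiling")
        else:
            labels.append("other")
    return labels

# Toy room: two floor points, one ceiling point, one mid-height point
room = [(0, 0, 0.0), (1, 1, 0.01), (0.5, 0.5, 2.5), (1, 0, 1.2)]
labels = label_planes(room)
```

Tables and beds are harder, since they are horizontal planes at intermediate heights, which is presumably where the interesting part of their algorithm lives.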

After that, some new members introduced themselves and shared some job openings. It was a very nice experience, with a strong focus on the technical side. I am also glad that our CEO An Li had a good conversation with Ori Inbar. I hope we can join more AR meetups this year!

Canon Mixed Reality Project MReal Hands-On

Today at Stanford I tried MReal, an augmented reality device that camera giant Canon has been building on its own. It is a wired head-mounted display with three cameras, plus a companion decoding unit. Combined with the markers Canon designed specifically for it, the product can render solid 3D objects in the space around you. Unlike the AR devices from the other major vendors, Canon's design is video see-through: the imagery the user sees is actually captured by two cameras placed at the eyes. There is also a third, simpler camera on top of the helmet that points downward when the head is level, used to track a marker map laid out on the floor.

MReal HMD set.

The device runs at 30 fps with a field of view of around 110°. As an AR headset aimed at the industrial design market (I get the feeling "mixed reality" is just the friendlier name for the somewhat obscure term "augmented reality"), the HMD itself is little more than a mount for the three cameras; the video streams are aggregated into a decoder-like box and then passed to a host machine for processing, which looks a bit clunky. But unlike other MR rigs hacked together from VR headsets, MReal reproduces the true depth of the scene in front of your eyes, so objects appear the same size through the device as they do to the naked eye, which is a nice touch.
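With two eye-position cameras, true depth comes from the classic stereo relation Z = f·B/d: the farther an object, the smaller the disparity d between the two images. A minimal sketch (the focal length and baseline below are hypothetical numbers, not Canon's specs):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo relation Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: camera separation in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical: 800 px focal length, 6.5 cm baseline (about human eye spacing)
z = depth_from_disparity(20.0, 800.0, 0.065)
```

Matching the baseline to human interpupillary distance is what makes the rendered scale agree with what the naked eye would see.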


Because the floor is covered with markers, the device needs no external cameras to track the headset's position in space, and the user interacts with the MR environment through a wand that is likewise covered in markers; essentially everything relies on image-based tracking. Notably, to enable more natural interaction, the system provides a colour-picking feature that lets the user define a colour range, say for their hands, so the hand regions can be cut out of the 2D image. Reportedly, by using the disparity between the two cameras' images, the 3D scene can be reconstructed, so the hands can then interact with virtually rendered objects.
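The colour-picking step is essentially a per-pixel range test: keep pixels whose channels all fall inside the user-chosen range. A toy sketch (the specific RGB thresholds and the tiny "image" are made-up; Canon's actual implementation is not public):

```python
def color_mask(image, lo, hi):
    """Binary mask selecting pixels whose R, G, B channels all fall
    inside the user-picked [lo, hi] range (e.g. skin tones)."""
    return [[all(lo[c] <= px[c] <= hi[c] for c in range(3)) for px in row]
            for row in image]

# Tiny 2x2 "image": two skin-coloured pixels among background pixels
img = [[(210, 160, 140), (30, 30, 30)],
       [(40, 200, 60), (200, 150, 130)]]
mask = color_mask(img, lo=(180, 120, 110), hi=(255, 190, 170))
```

The mask lets the real hand pixels composite in front of the rendered objects, so your hand occludes virtual content correctly.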


As for applications, the product mainly targets industrial design. In traditional industrial design, building a physical mock-up directly costs around a million dollars on average, and at the final sign-off stage a physical model must be handcrafted before discussions can proceed. With a device like this, many designs can be displayed at true scale in an augmented reality space. And because the headset wearer is tracked, people thousands of miles apart can discuss the same product model: when one user points the handheld tracker at the model, the user on the other side sees a virtually rendered hand pointing at that spot and can walk over to examine it. Presenting designs in a virtual environment like this can greatly reduce the number of physical mock-ups, cutting design costs. Because the model is shown at full size in 3D, designers also get a much more direct sense of the product's appearance. And as a video see-through device, it avoids the translucent look of optical see-through displays: rendered objects have solid colours, which suits industrial design better. Beyond that, for professional training, trainees can quickly try out many machines in this virtual environment without needing the physical hardware, which is also very helpful for training, say, the car mechanics who service a manufacturer's entire production line.


Canon's MReal has been out for about three years. It is not aimed at ordinary consumers, and while an SDK is provided, it is not open source. Still, as a video see-through device it genuinely solves the translucency problem of optical see-through displays, and that is worth learning from.



SIGGRAPH 2015 Day 4

Hey, here we continue the journey through SIGGRAPH 2015. In the morning, Umur and I gave our presentation again so we could have a video recording of it. As you can see, although the presentation does not require a computer, all the SRC participants carried one to demo their work. So here is a good hint for you: BRING YOUR DEMO LAPTOP, even if they say there won't be enough power outlets.

Umur with the poster. Todd with the poster.




After that, the main topic I followed on this day was AR/VR. Several companies shared a little about their experience with VR/AR and mobile rendering architectures. It feels like everyone is trying to set the rules for VR/AR rendering on mobile systems. This is important, and I will discuss it later.

Samsung London discussing their work.

We also had a chance to listen to the final presentations of the Student Research Competition (SRC) at this SIGGRAPH. Not too many people actually attended, but talking about your work in the big hall definitely promotes you well. The judges came from Disney Research, Facebook, and Oculus. Treat it as an interview with those companies. That is one thing I feel about SIGGRAPH: it gives you an extraordinary way to show the top companies what you can do and what you are good at, without going through coding questions. In my view, if you are good at creating really nice graphics- and vision-related products, spending time on those things and maintaining a good website is better than implementing a stack with two queues in C++…

The App Hour at the exhibition hall involved dozens of startup companies, all demonstrating products related to AR/VR. As you can see below, there was a cube-based game that stages tabletop battles with different armories, and an app that shows videos of famous tourist sights when you point your camera at a picture or a statue. The FX lead from DreamWorks Animation showed his VR shooting game, which was very successful since he also presented it in the big hall. I definitely think it is a good way to promote your product (I hope one day I can do the same).









There were also some companies doing nice work on face tracking on mobile devices. They can easily give your face zombie or vampire makeup; even contact lenses can be added! I hope I have time to look into more of the related technology.

At night, we planned to go somewhere other than the convention center. The SIGGRAPH after-party sounded nice, so Umur and I walked to that hotel. On the way we saw a long queue. Doesn't it look like the queue of people waiting for the RenderMan teapot? Yes, no wonder: it was for Pixar… Another tradition Pixar has is to host an exhibition during SIGGRAPH (for free) showing how other companies and artists use RenderMan to do amazing things. The event needs advance tickets, so next time, visit the Pixar booth ASAP.


So the Khronos Group, which maintains the OpenGL standards, noticed, as I mentioned at the beginning, the rendering power of all the different platforms (PC, console, mobile devices, etc.), and that a more general programmable model should be defined and implemented to provide a thinner layer between the developer and the graphics hardware (GPU). This is one reason they invented Vulkan. Several large graphics companies have been building products with it for a while and showed how amazingly and efficiently they can render on different platforms with the same API. Please google for more information on Vulkan.


SIGGRAPH 2015 Day 3

Sorry for the late post, but I definitely will finish the five-day season of this journey :)

In the morning I attended the studio course, which covered general-purpose GPU computing on mobile phones. Personally, I felt the presentation was not that clear on the technical details, but it definitely covered a lot about what GPGPU technology looks like on mobile devices. The picture below compares the current computing interfaces. I suppose if there is some heavy computation that does not need to be sent to a server (like real-time tracking on a mobile system), utilizing the GPU for the task should be a good option.


The main entrance of the exhibition hall. Besides giving GTC talks, Nvidia had a booth showing off GPU technology for games and movie production. Intel, on the right side, showed its graphics technology for the CPU. Ironically, these pictures show the current leaders of computer graphics.

Besides the academic mingling, another important part of the show is the industrial companies showing off their "coming soon" technologies. Below are some highlights.

There were also multiple companies focusing on body skeleton tracking using infrared camera arrays and reflective markers. Their pipelines also handle motion retargeting. These companies provide third-party solutions for movie and game studios.
EPSON demonstrated its augmented reality eyewear. The AR scene is displayed on a small section of the glasses. You need to connect the glasses to a dedicated piece of hardware (about the size of a phone) for touch-based navigation. I feel the system is still kind of a prototype, perhaps targeting certain professional users rather than everyone. And yes, we have the Pixar booth at the back!!!!!
This French company showed a very smooth eye tracking demo. Within a limited distance, the system can track eye gaze movement and use it to control reading. It is very smooth and effective, and could definitely be used in hospitals for patients who cannot move.
A mobile plug-in style 3D scanning system that very efficiently creates still 3D models of any object.
Qualcomm showed a lot of advanced computer vision technology this year. Here is a cloud-based object recognition service for mobile devices. It will come with their new CPU, so it may take a while.
USense is a startup targeting VR/AR experiences. Applause for a Chinese startup in the US! They propose two systems: one for mobile, which has two cameras inside the headset to help recognize hand gestures; the wired version has two cameras to sense the 3D world and, I think, a Leap Motion sensor for hand gestures.
I met Evan at SIGGRAPH! What a surprise to meet the guy I had emailed with before about the DI4D system. Their data quality is good, and the new head-mounted capture system works remotely and is lightweight. It was great to meet him.


Advanced VR Development Experience Sharing

In the afternoon I attended the talks about VR applications and experiences. Sony discussed what they learned from their new headset. To guide the user in a fully 3D environment, using 3D sound to draw attention is very important, since it is super easy to lose the thread of the story in 3D video. Also, to enhance the VR experience, hand gestures, especially the grasping motion, need to be detected efficiently.
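The simplest building block of directional audio cues is panning: the apparent direction of a sound is set by splitting its energy between the ears according to its azimuth. The sketch below shows constant-power stereo panning, which is my own minimal stand-in for the full HRTF-based spatial audio a headset like Sony's would actually use:

```python
import math

def stereo_gains(azimuth_deg):
    """Constant-power panning: map a source azimuth (-90 = hard left,
    +90 = hard right) to (left, right) channel gains whose squared sum
    is 1, so perceived loudness stays constant as the source moves."""
    # Map azimuth in [-90, 90] to a pan angle in [0, pi/2]
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2)
    return math.cos(theta), math.sin(theta)

left, right = stereo_gains(45.0)  # a sound to the front-right
```

Animating the azimuth toward where the story wants you to look is exactly the "drag attention with sound" trick the talk described.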

Another talk was about using VR for realistic journalism: creating a new medium that brings the audience to the scene of a crime or a war zone to experience the negative emotions firsthand. For the effect, please refer to the BuzzFeed video below. The last was a startup building a drone that can take 360-degree videos.


A secret everyone knows

I feel lucky because I caught this chance. Every SIGGRAPH, Pixar gives away 1,500 RenderMan teapots (in the shape of the famous Utah teapot) to anyone: three days, 500 each day. The queue is crazy, and normally the teapots are gone in less than 15 minutes. Each teapot comes in a metal box and has a unique ID. OMG, this really is one of the things that will encourage me to come back next year!

So if you want one, remember to check with the Pixar folks about the schedule as soon as the exhibition opens. Oh, Pixar also likes to host a RenderMan party during SIGGRAPH. It needs an invitation and fills up super fast, so also ask the employees how to register.