Half Moon Bay Clam Digging Plan

Update
The morning of June 18, 2022, around 7 am, should be another low tide. Friends from CCIC-SUNNYVALE will run the activity below once more, at the same location. If you want to dig, contact me; for tools, a big garden shovel from home is enough, and the five big white tubes mentioned in this post will cover us. If you can't get up that early, just come over at noon to eat, haha.

This post is the announcement and plan for our first low-tide clam digging trip at Pillar Point beach, Half Moon Bay, on July 24, 2021, for everyone's reference.

Most Important Reminder

Please keep a close eye on the kids! If possible, have them wear life jackets. Even though we go at low tide, the water will slowly creep back toward the beach from start to finish, so watch the children's safety at all times.

Meeting Time

According to the tide chart for Pillar Point Harbor, the lowest tide on the 24th is around 6 am, when the most shallow sandflat is exposed, so we need to arrive about an hour earlier.

So the meeting time is currently set for 5:20 am. At 5:40 am we will head down to the beach and start digging. Expect about two hours total.
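If you want to double-check the tide times yourself instead of reading the chart, below is a minimal Python sketch (my own addition, not part of the original plan) that queries the public NOAA CO-OPS tide prediction API; the station ID is a placeholder and would need to be replaced with the actual station for Pillar Point Harbor.

import requests

# Query NOAA CO-OPS tide predictions and print the low tides for the trip date.
# NOTE: the station ID below is a placeholder; look up the real Pillar Point
# Harbor station on https://tidesandcurrents.noaa.gov before relying on it.
params = {
    "product": "predictions",
    "interval": "hilo",        # only the high/low extremes
    "datum": "MLLW",
    "units": "english",
    "time_zone": "lst_ldt",    # local time, with daylight saving
    "format": "json",
    "begin_date": "20210724",
    "end_date": "20210724",
    "station": "9414131",      # placeholder station ID
}
resp = requests.get("https://api.tidesandcurrents.noaa.gov/api/prod/datagetter",
                    params=params, timeout=10)
resp.raise_for_status()
for p in resp.json()["predictions"]:
    if p["type"] == "L":       # keep only the low tides
        print(p["t"], p["v"], "ft above MLLW")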

Parking and Meeting Spot

For GPS, use: Pillar Point RV Park, 4000 Cabrillo Hwy, Half Moon Bay, CA 94019.

The best place to park is Parking A, which is Pillar Point RV Park itself. It does not seem to have many spots, though, so coming from the south you can turn right before the RV Park at Coronado St, marked by the green line (there is a traffic light, easy to spot), and use Parking B, then cross Highway 1 on foot to reach Parking A. Be careful crossing the road with kids.

The digging spot is on the beach below; just follow the right side of the big reef rocks toward the ocean and you will get there. I will be wearing a red-and-black wetsuit, waiting for everyone near the big rocks.

The sand path down to the tidal beach
My outfit for the day~~

Clam Species

California regulates different shellfish differently, and the rules are usually seasonal. The species shown with a green light on this site are the ones currently open for take: https://wildlife.ca.gov/Fishing/Ocean/Regulations/Fishing-Map/SF-Bay#clams

At Half Moon Bay we will mainly be digging horse clams (horseneck/gaper clams), and we may also catch some razor clams. Please note that sea snails and abalone are protected and cannot be taken right now.

What to Bring

Reminder: this trip is mainly for fun (no guarantee we will dig up very many), so prepare at your own discretion. There is no need to spend a fortune on gear.

Fishing license (a one-day fishing license for 6/26): if you only plan to fish or dig clams once this year, consider buying a license for just that day. Kids under 16 do not need one, so if you have children with you, consider letting them carry the bucket of clams; adults who take clams need a license. Under California law, each licensed digger may take 10 clams.

You can buy the license at sporting goods stores such as Big 5, the outdoor department at Walmart, or Dick's; just tell them the date you need. It is about $15 each, and one per family should be enough.

You can also buy it online at https://wildlife.ca.gov/Licensing/Online-Sales; remember to print it out and bring it with you.

Clothing: since you will need to lie down and reach into the dug-out sand pit, the main diggers should wear clothes that can get dirty and dry quickly. A wetsuit is a good choice if you have one, or waist-high fishing waders (see Amazon for reference).

Gloves: the sand contains broken shell fragments that can cut your fingers while you are feeling around, so bring a pair of gloves.

Water shoes or sandals: your feet will get wet, so bring a pair.

Salt: people online apparently catch razor clams by sprinkling salt over holes in the sand, which makes the clam pop out by itself. I have never succeeded with this, but feel free to bring a can of salt and try. Reference video:

https://www.youtube.com/watch?v=0xsKVIKM1ac
Catching razor clams with salt

Shovel: as shown above, a garden shovel works; you will need to dig down about one meter.

Hand trowel: (see Amazon for reference). Kids can use this to dig for fun, and it is also handy for the last bit of digging when you are almost at the clam.

Plastic bucket: for carrying the clams and tools.

Big clam tube: 30 inches long, 12 inches in diameter. I have not yet found a place that sells these directly; Home Depot and Lowe's do not carry anything this wide. If you find somewhere else to buy them, please leave a comment.

The only place I know that sells them is the tackle shop at Lawson's Landing, which is itself a great spot for clamming and crabbing, just too far away.

As shown in the photo of me digging above, the tube's main purpose is to keep the surrounding sand from collapsing into the hole as you dig deeper, which would make it impossible to pull the clam out.

Alternatively, you can use a clam gun (see Amazon for reference). I have seen people using them at Half Moon Bay as well.

Digging Technique

A quick note on how to find and dig the clams. When you get to the beach, walk toward the wetter sand. The vibration of footsteps makes the clams retreat deeper into the sand and squirt out the seawater in their bodies, forming little jets of water shooting up from the sand. The size and height of a jet tell you roughly how big the clam is. Pick a spout and start digging down next to it; dig alongside the spout instead of plunging the shovel straight into it, or you may cut into the clam's flesh. After about a meter, start probing with your hand. When you feel the rough skin of the siphon, do not rush to pull; the neck tears off easily. Instead, take the hand trowel and dig a bit further down until you can lift the whole shell out easily.

If you have any questions, feel free to discuss them in the WeChat group. Wishing everyone a big haul.

2020 to 2021

2020 is passing by. Overall I feel not much great happened this year. Most of it was similar to my PhD years: endless work and little social or personal conversation. I am glad my parents stayed with me, and we did have a great time together.

Looking back at the year, it was mostly about Covid-19 and the US presidential election. As I write this, I still do not know where the virus came from or who will end up in the White House. But one thing I learned is that people really do not care about what you have said. It is just history, and it can be forgotten. Or to put it another way: the line defining truth is not clear anymore.

So is there a truth? I choose to believe so. That is what I chose in 2013 in Binghamton. Jesus makes life simple, and that makes it easier for me to be happy. At the end of the year, I was so blessed to say something from deep in my heart to someone from my past about what I believe, and to get positive feedback from her about her own belief on my birthday. I also heard from a friend that she started going to church in Taiwan, to put herself before the Lord. It truly makes me happy to see someone I really care about, even though I did not express that care enough when I had the opportunity, have a chance to hear the gospel. I do not sound like a crazy person to them after all.

In the end, I think this year taught me that I should do good things because I have a good heart, not because I want the rewards. Let God do the work and bring the right one, someone I feel comfortable with, into my life (please also be quick).

I think I like working from home. I like jumping rope and doing some weight lifting right outside my room. I am also glad to see I got a brick on the EB wall with a lot of words written on it (maybe the wordiest one). I did some meaningful things this year.

I also played piano at a friend's funeral. Yes, this year I saw marriage, and I also saw death, and I know I cannot explain either of them well yet. What I learned is that we should, at the right time, say the words we want to say, and say them clearly, as if the end were tomorrow. Tell others how much we love them as if we will not see each other again until we face Jesus. Tell everyone, while you can, that you hope to meet them again and to be with them forever after we leave this world, no matter how suddenly that happens. Make sure they hear the gospel.

For 2021, I hope:

I can keep the good habit of writing useful blog posts here again, at least once a month.

I can finish some books on religion so I have my own understanding of belief and can talk to my friends with a complete chain of logic, and maybe create some slides so I can preach in my own way.

I hope I can keep working on my GitHub and do some good projects. This needs to happen frequently; small progress every day makes a big difference.

I hope I can finish my Game Engine Architecture book.

I hope I can write a ray tracer in Taichi.

I hope I can get a good sense of AWS usage.

I hope I can find the one who will walk with me on the road to God, and please make sure I feel she is the right one.

Focus more on myself, run my own life well, and enjoy what God has planned ahead.

See you tomorrow, 2021.

CVPR 2019 Day 1

What an early flight to Long Beach! I woke up at 3:30 am and noticed there were Lyft drivers available in the middle of the night. I have to say Lyft/Uber makes life easier. Just a reminder, though: SJC does not open check-in until 4:00 am… so there is no need to rush there.

Sunday and Monday are for the workshops. In the morning I went to the 3D Scene Understanding workshop and listened to a good talk on "What Do Single-view 3D Reconstruction Networks Learn?". It points out that current state-of-the-art single-image reconstruction work is, with high probability, just doing image retrieval. This is because the shape similarity metric is not good enough and the training set is contaminated by models that already look very similar to the ones in the test set. Also, using one particular model pose as the single-image input fits the 2D image case but is not really the best choice for the 3D mesh case. The talk really surfaces some issues in 3D reconstruction research, and I think the paper is worth reading. You can find the paper here, and here is the YouTube video for the talk.

However, on the same day Facebook AI also presented their Mesh R-CNN, which basically reconstructs a mesh from a single image the way their Mask R-CNN creates 2D masks from a single image. It would be interesting to check that paper and see whether it runs into any of the issues pointed out by the work above.

In the afternoon my colleague took me to the ScanNet benchmark challenge workshop. Professor Matthias Nießner is really active in facial/body reconstruction work, and his work has now expanded to general 3D scene capture and registration. ScanNet aims to build a dataset with vertex-level labels plus 3D bounding boxes, like a 3D version of ImageNet. The workshop is basically a showcase of everyone who participated in the detection task on the ScanNet dataset. Stanford's work achieved very good results by taking advantage of temporal coherence information; it is a very interesting idea that fundamentally optimizes the data representation and training procedure. Very nice result.

Later in the afternoon I went back to my original research domain to catch the survey-style talk on state-of-the-art human body/facial capture given by Michael J. Black. I do feel there is great potential there and need to investigate when I have some spare time. Here is the video for that.

I think on Monday I will be in the AR/VR session. I hope to learn more in this area, or at least see which parts people have not covered yet…

Applying for a Japan Five-Year Multiple-Entry Visa in San Francisco

Here I summarize my experience applying for a visa at the Japanese consulate in San Francisco. The San Francisco consulate handles visa services for central and northern California as well as Nevada, and requires you to submit the application in person.

First, preparing the documents. This site summarizes it well: https://piao.tips/japan-multi-visa-in-the-us-san-francisco/. Still, based on my own experience there are a few things worth emphasizing.

The official website has a very helpful visa checklist for each nationality: https://www.sf.us.emb-japan.go.jp/itpr_en/e_m02_01_04.html

At my appointment they collected documents exactly according to this checklist, no more and no less. Slightly different from the travel blogger's summary above, the personal letter explaining why you need a multiple-entry visa is actually important; mine really was collected. Also, the sample letter in that post was written for someone re-applying after a previous trip to Japan, so do not copy it verbatim.

On the application form, besides writing your name in English on the first page, you also need to write it in Chinese characters in the corresponding field; I signed the back in Chinese characters as well. Note the date format on Japanese forms: DD/MM/YY rather than the usual month-first order.

For item 6 on the checklist, the financial documents, I prepared both options. The PDF states emphatically that if you choose the employment verification letter plus recent pay stubs, the employment letter cannot be an offer letter. Each company issues employment verification differently. At Amazon, for example, searching for Employment Verification on the internal site easily turns up the steps to get the letter; it can all be done online without contacting HR. Mind the default PIN.

As for the printed flight itinerary, hotel bookings, and trip plan mentioned in most application write-ups, the consulate staff never asked me for them, but I prepared them anyway. Make sure the hotel booking has your own name on it. It is best to book a refundable reservation yourself, use it for the visa, and cancel it afterwards; that avoids the trouble of a booking with multiple names, or without your name on it, in case the visa officer actually looks at it.

During the interview, the officer asked whether I had friends in Japan. I suggest saying no; otherwise you will be asked to write down very detailed information about them.

The whole process is quick, about 5 minutes. You then receive a receipt, and five business days later you bring the receipt back to the consulate window to pick up your passport. Remember to bring the exact amount in cash (the amount is written on the receipt); they do not give change, and it did not seem like credit cards were accepted.

If you do not want to make another trip and have a friend in the city, you can have someone pick it up for you. Just write the proxy's name on the receipt and sign your own name in Chinese. The reason for the Chinese signature is that the form requires it to match the signature in your passport; with a Chinese passport, you should sign in Chinese. The proxy's name should probably be written in English, because the proxy has to show a photo ID, and if they use a US driver's license that will of course be in English only. The Japanese are very by-the-book, so it is best to follow the rules.

Getting There

The consulate opens at 9:30 am. Coming from the South Bay, I suggest taking a 7-8 am Caltrain all the way to the San Francisco terminus, then walking two minutes to catch the 10 bus; this route is easy to find on Google Maps. The Muni fare is $2.75, you get a receipt after paying, and transfers within 2 hours are free, so in theory, if the consulate is not busy, you can ride back to the Caltrain terminus on the same fare. My visit took about an hour in total: when I arrived I got number 15 and they had just started serving number 1, so that gives you an idea of the pace.

SIGGRAPH 2018 Day 4

Today was a little casual. In the morning I visited the Nvidia ray tracing/path tracing session. They emphasized that, much like the first GPU card in 1998, RTX is a new thing everyone should try to catch up with.

Then I went to the 3D capture session. The papers there were all very interesting; I think we are at an important stage at the moment.

In the afternoon I went to the material capture session. It was good to see how a deep learning model can be trained with a differentiable renderer to generate materials from a single image. I do need to look into this work.

SIGGRAPH 2018 Day 3

Today was a time-space mixture adventure. In the morning I tried to get into the talks for two state-of-the-art face-related papers in the VR session, one from TUM and the other from Facebook Reality Labs. Both try to tackle the problem of showing genuine whole-face expressions in VR while both parties are wearing headsets.

On one side, Matthias Niessner and his golden face synthesis team explore how to handle this based on their Face2Face work. The advantage is that, because they use a generic face model, the representation is not strongly subject-dependent, so no calibration or pre-capture is necessary. However, because they only use the infrared camera inside the headset for eye gaze tracking, the upper face's expression may not be fully preserved.

Facebook, on the other hand, uses a subject-dependent, high-quality model for this work and applies deep learning to teeth composition. The quality looks better, but it requires a pre-capture of the subject.

And thanks to my friends from Pixar: this time we noticed there was no booth for the animation studio, so we did not know where to pick up the RenderMan teapot. It turns out they hand them out after their RenderMan 22 demo talk, which lasts an hour. It is actually a really good talk: 30 years of RenderMan development, from scanline rendering to ray tracing, and then path tracing. They gave up old infrastructure in favor of physically correct and simple models. It is good to see that, at this stage, ray-traced lighting can be achieved at interactive speed. With the help of Nvidia's RTX, I think production time for all stages of animation can shrink, and we could see more ideas in movies, since the cost of trying out new story lines, cameras, actions, etc. is lower. But the most important thing is getting my teapot!

The Real-Time Live! demo session was also crazy. The combined VR virtual movie shot demo from Nvidia RTX, ILMxLAB, and Unreal is a total game changer for making movie-quality shots in real time with everyone inside a virtual environment. I can imagine that in the near future individual shots may be captured in this real-time ray-traced environment; the director can then cut the movie for review and hand the shot to an offline renderer, if necessary, for the final frames.

SIGGRAPH 2018 Day 2

Today's major coverage is two talks: one from Rob Bredow, VP of ILM, the other from the CEO of NVIDIA.

Rob's talk was about the power of the creative process, in which he shared his experience being a first-time VFX producer on the Star Wars movie Solo.

He mentioned that people go through three different stages in the creative process:

  • Just starting: when you want to get into the field.

At the beginning, people should study and try to build things on top of others' work, more like interdisciplinary study. It is easier to create something based on existing material.

  • Knowing the theme: when you already know the tools and are actively working in the field.

  • Leading: how to lead the creative process.

In this stage, people first need to define the theme, the concept you are trying to follow, and make sure to stay on that path before diving into the details. He used the example of Solo, where he hoped to go back to the classic 70s film style. So the production explicitly used practical rigs for the hyperspeed travel set and the underwater explosion, relying on real hardware (a huge 180-degree LED screen and a 20,000 fps camera) to get real lighting and an "explosion never seen before".

Then it is about learning the constraints, so people can focus on the right thing. He mentioned how the roller coaster in Disney's Animal Kingdom was created: at the beginning it did not fit the park's style, so the team visited Nepal, found the story of the Yeti, and built the Everest-and-Yeti story around the roller coaster.

Third is simplifying: try to make the target simple. He mentioned a shot in World War where a rig jumped into view during a crash scene and might have needed retouching to remove. However, no one actually knows what it is and the audience is watching the character's face, so it was not worth spending extra time removing it from the film.

Then there is sharing. Rob mentioned the launch of the ASWF, the Academy Software Foundation, where the film industry is for the first time trying to organize its software and share tools between companies.

The title of the talk.

ASWF actually launches with a lot of big names. I think exploring these repositories could also help newcomers get into the business.

He also showed the photo book he made during the Solo shoot; I think it is a very good collection.

Nvidia's special event was crazy and attracted a lot of people. It was also my first time seeing the CEO's iconic gesture: holding the Nvidia card up on stage. The event was basically the announcement of the next big thing since CUDA was introduced: the Turing architecture, with which Nvidia makes real-time ray-traced rendering possible.

10 giga rays per second, mixed GPU operation at 16 TFLOPS and 16 TIPS, 500 trillion tensor ops per second, and an 8K image decoder: this monster makes real-time ray tracing possible. It dramatically reduces the time of physically based rendering for movie-quality images, which could be very attractive to the movie industry. And since the base version is not that expensive ($2,300; I think it is more worthwhile than some AR glasses), we may soon expect game developers to stop playing so many tricks with shading effects and just let things follow the laws of physics.
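As a back-of-the-envelope check (my own arithmetic, not a number from the talk), 10 giga rays per second works out to roughly 80 rays per pixel per frame at 1080p and 60 fps, which is why interactive ray tracing suddenly looks plausible:

rays_per_second = 10e9                 # the Turing figure quoted above
pixels_per_frame = 1920 * 1080         # 1080p
frames_per_second = 60
rays_per_pixel = rays_per_second / (pixels_per_frame * frames_per_second)
print(round(rays_per_pixel))           # ~80 rays per pixel per frame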

Mr. Huang really enjoyed playing to the audience with the glossy RTX card.

Demo of the real-time ray-traced Star Wars shot. The lighting does look real!

Introducing how different the hardware/software stack is for the new architecture.

SIGGRAPH 2018 Day 1

Day one had so many people! Next time, if I arrive a day early, I should register first.

In the morning I went to the Vulkan course; it was really helpful for understanding it, and I am glad it has all the support we expect. I think it is the way to go.

Then we visited the product exhibition; it was nice to see the props from Infinity War and Solo.

The AR session hosted by Apple basically went over what they said at WWDC, which shows AR is still a pretty new thing for the graphics industry. I can sense that people are looking for new things, but they hesitate about the future.

In that case, what should we do? The Jurassic Park 25th anniversary screening gives the answer: just spend your spare time and do it, then disrupt the old business. From 0 to 1, that is how we make progress.

See everyone on day 2.

Hello to SIGGRAPH 2018!

Time indeed goes fast: it has been three years since my first, amazing SIGGRAPH experience. Now it is Vancouver, with a new me working on Amazon's AR platform and trying to make it better.

Sunday is the beginning. I plan to check in and take the Intro to Vulkan course, and maybe the deep learning one later in the afternoon (though I feel it may be too simple).