Massive Sensing


Research figure summary


[Figure: http://www.image.esys.tsukuba.ac.jp/wp-content/uploads/2014/02/esss.png]

Facial micro-expressions are fast and subtle facial motions that are considered one of the most useful external signs for detecting hidden emotional changes in a person. However, they are not easy to detect and measure because they appear only for a short time, with small muscle contractions in facial areas where salient features are not available. We propose a new computer vision method for detecting and measuring the timing characteristics of facial micro-expressions. The core of this method is a descriptor that combines pre-processing masks, histograms, and the concatenation of spatio-temporal gradient vectors. The presented 3D gradient histogram descriptor is able to detect and measure the timing characteristics of the fast and subtle changes of the facial skin surface. The method is specifically designed for the analysis of videos recorded with a high-speed 200 fps camera. The final classification of micro-expressions is done using a k-means classifier and a voting procedure. The Facial Action Coding System (FACS) was used to annotate the appearance and dynamics of the expressions in our new high-speed micro-expression video database, which was also used to validate the efficiency of the proposed approach.
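The descriptor can be pictured as follows: spatio-temporal gradients are computed over a video cube of a facial region, quantized into orientation bins, and accumulated into per-block histograms that are concatenated into one vector. The Python sketch below illustrates this idea; the block size, bin count, and the onset/offset split of the temporal axis are our illustrative assumptions, not the parameters published in the paper.

```python
# A minimal sketch of a 3D (spatio-temporal) gradient histogram descriptor,
# in the spirit of the method above; parameters are illustrative guesses.
import numpy as np

def gradient_histogram_3d(clip, n_bins=8, block=(6, 12, 12)):
    """clip: (T, H, W) float array of grayscale frames of one facial region."""
    # Spatio-temporal gradients along the time, vertical and horizontal axes.
    gt, gy, gx = np.gradient(clip.astype(np.float64))
    mag = np.sqrt(gx**2 + gy**2 + gt**2)
    # Quantize the in-plane gradient orientation; the sign of the temporal
    # gradient splits each orientation bin into an onset/offset pair.
    ang = np.arctan2(gy, gx)                                    # [-pi, pi]
    ori = ((ang + np.pi) / (2 * np.pi) * (n_bins // 2)).astype(int) % (n_bins // 2)
    bins = ori * 2 + (gt > 0).astype(int)                       # 0 .. n_bins-1

    T, H, W = clip.shape
    bt, by, bx = block
    descriptor = []
    for t0 in range(0, T - bt + 1, bt):
        for y0 in range(0, H - by + 1, by):
            for x0 in range(0, W - bx + 1, bx):
                b = bins[t0:t0+bt, y0:y0+by, x0:x0+bx].ravel()
                m = mag[t0:t0+bt, y0:y0+by, x0:x0+bx].ravel()
                hist = np.bincount(b, weights=m, minlength=n_bins)
                hist /= np.linalg.norm(hist) + 1e-9             # L2-normalize per block
                descriptor.append(hist)
    # The concatenated per-block histograms form the descriptor; descriptors
    # like this would then feed the k-means classifier and the voting step.
    return np.concatenate(descriptor)
```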

Senya Polikovsky, Yoshinari Kameda, Yuichi Ohta, Facial Micro-Expression Detection in Hi-Speed Video Based on Facial Action Coding System (FACS); The Transactions of the IEICE D, E96-D, 1, pp.81-92 (2013.1)


[Figure: http://www.image.esys.tsukuba.ac.jp/wp-content/uploads/2014/02/tarumi.jpg]

We propose a method for estimating the shooting location of a snapshot by referring to a pre-recorded video captured along a route. We build a database of key features detected in every frame of the pre-recorded route video. The location of a snapshot is estimated by counting the number of matched key pairs at every frame of the pre-recorded video; the frame that gives the largest number of key pairs indicates the location of the snapshot. Since the similarity of key pairs strongly influences the accuracy of frame selection, we investigate the relation between retrieval precision and the similarity degree of the image key features.
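As an illustration of this counting idea, the Python sketch below matches snapshot features against the per-frame feature database and returns the frame with the most surviving matches. ORB features, a brute-force matcher, and a Lowe-style ratio test are stand-ins we chose; the paper's actual detector and similarity threshold may differ.

```python
# A minimal sketch of snapshot localization by counting key pairs per frame.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def build_database(video_path):
    """Detect key features in every frame of the pre-recorded route video."""
    cap = cv2.VideoCapture(video_path)
    db = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, desc = orb.detectAndCompute(gray, None)
        db.append(desc)
    cap.release()
    return db

def locate_snapshot(snapshot, db, ratio=0.75):
    """Return the index of the frame with the largest number of key pairs."""
    gray = cv2.cvtColor(snapshot, cv2.COLOR_BGR2GRAY)
    _, q_desc = orb.detectAndCompute(gray, None)
    best_frame, best_count = -1, -1
    for i, desc in enumerate(db):
        if desc is None or q_desc is None:
            continue
        pairs = matcher.knnMatch(q_desc, desc, k=2)
        # A key pair survives only if the best match is clearly better than
        # the second best; this threshold plays the role of the similarity
        # degree whose influence on frame selection is investigated above.
        count = sum(1 for p in pairs
                    if len(p) == 2 and p[0].distance < ratio * p[1].distance)
        if count > best_count:
            best_frame, best_count = i, count
    return best_frame, best_count
```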

Yusuke Tarumi, Yoshinari Kameda, Yuichi Ohta, Estimation of the Shooting Location of a Snapshot Using Pre-Recorded Video Captured Along a Route; IEICE Technical Report MVE, 113, 470, pp.1-5 (2014.3)


[Figure: http://www.image.esys.tsukuba.ac.jp/wp-content/uploads/2014/02/liu.jpg]

We propose “AR replay”, a framework that records a working scene including a tutor's action in a small workspace and then replays the tutor's action in front of a learner's view in an AR fashion (Figure 1). The framework uses a single RGB-D camera for both recording and replaying. When learning a task in a small workspace where the tutor cannot be present, it is useful for a learner to check the tutor's action in a video taken in advance in the same workspace, and even more useful if that video can be replayed in an AR fashion. In our “AR replay”, the tutor's action is aligned to the right place, and the learner can check it from various viewpoints. The action is shown as a 3D dynamic shape with color and is aligned to the workspace by the static geometric clues in the workspace. Since we expect the RGB-D camera to be maneuvered so as to frame the interaction between the tutor and the static workspace environment, we assume that the demand for shifting the viewpoint away from the originally recorded camera viewpoint is limited to some extent when checking the “AR replay”. Our preliminary experimental system can acquire the 3D shapes of the tutor's action and the workspace environment. Moreover, it can produce the “AR replay” on a video see-through display, with which a learner can shift the viewpoint away from the original path of the RGB-D camera in order to have a better view of the interaction between the tutor and the static workspace environment.
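To make the alignment step concrete, the sketch below registers the recorded static workspace to the learner's current view with point-to-plane ICP and applies the same rigid transform to every frame of the tutor's dynamic 3D shape. Open3D, the voxel size, and the correspondence threshold are our assumptions; the paper does not specify this implementation.

```python
# A minimal sketch of aligning an "AR replay" via the static workspace
# geometry; library choice and all thresholds are illustrative assumptions.
import open3d as o3d

def align_replay(recorded_static, live_static, tutor_frames, voxel=0.01):
    """recorded_static / live_static: o3d.geometry.PointCloud of the static
    workspace; tutor_frames: list of point clouds of the tutor's action."""
    src = recorded_static.voxel_down_sample(voxel)
    dst = live_static.voxel_down_sample(voxel)
    for pc in (src, dst):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 4, max_nn=30))
    # The static geometric clues anchor the registration: the tutor is absent
    # from live_static, so only the workspace constrains the transform.
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_correspondence_distance=voxel * 5,
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPlane())
    T = result.transformation
    # Apply the same rigid transform to every frame of the tutor's action so
    # the replay lands at the right place in the learner's workspace.
    for frame in tutor_frames:
        frame.transform(T)  # transforms the point cloud in place
    return tutor_frames
```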

Yun Li, Yoshinari Kameda, Yuichi Ohta, AR Replay in a Small Workspace; Proceedings of the 23rd International Conference on Artificial Reality and Telexistence (ICAT2013), pp.97-101 (2013.12)