Visual Support by Using Environmental and Mobile Cameras (VSEM)


Research
figure summary




We present a novel view interpolation method for snapshots. Image morphing techniques can generate compelling 2D transitions between images, but differences in object pose or viewpoint often cause unnatural distortions that are difficult to correct manually. View morphing addresses this by prewarping the two input images before computing the image morph and then postwarping the interpolated images. However, view morphing still produces unnatural results when the input lines span both the foreground and the background. Our method interpolates the foreground and background separately.

Nao Akechi, Itaru Kitahara, Ryuuki Sakamoto, Yuichi Ohta, “Multi-Resolution Bullet-Time Effect”, ACM SIGGRAPH Asia 2014 Posters, 2014.12.3-6 (Shenzhen Convention and Exhibition Center, China)
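The prewarp/interpolate/postwarp pipeline mentioned above rests on two geometric primitives: applying a rectifying homography to image points and linearly interpolating corresponding points between the two views. The following NumPy sketch illustrates only those primitives; the function names are ours, and it assumes the correspondences and rectifying homographies have already been computed (the papers' own foreground/background separation is not shown).

```python
import numpy as np

def apply_homography(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of image points
    (e.g., the prewarp or postwarp step of view morphing)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]                   # back to Euclidean

def interpolate_points(p0, p1, s):
    """Linearly interpolate corresponding prewarped points; s in [0, 1]
    selects the in-between view (the morphing step)."""
    return (1.0 - s) * p0 + s * p1

pts = np.array([[10.0, 20.0], [30.0, 40.0]])
mid = interpolate_points(pts, pts + 2.0, 0.5)  # halfway view
```

In a full view morph the interpolated point positions would drive a dense warp and cross-dissolve of the two prewarped images; here only the point geometry is sketched.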


[Figure: atsumi.jpg (http://www.image.esys.tsukuba.ac.jp/wp-content/uploads/2014/02/atsumi.jpg)]

We propose a method of generating 3D layout maps of rocks using multi-view images. To prevent rocks from falling, we need information about the shape of each rock, the conditions around it, and the distribution of the rocks. However, it is difficult to generate a single 3D model that contains all of this information at once. A 3D model reconstructed from high-spatial-resolution multi-view images, taken close to a rock with a mobile camera, captures the precise form and texture of the rock but loses information about the surroundings. Conversely, a 3D model of the surroundings can be reconstructed from low-spatial-resolution multi-view images taken far from the rock, but the rock's precise form and texture cannot be recovered at that resolution. Integrating the two models yields a 3D model that contains all of the necessary information. Integration requires corresponding points, but it is difficult to directly match two 3D models with different spatial resolutions. We therefore obtain the corresponding points by matching an image of the surroundings against a rendered image of the rock's 3D model and calculating the 3D coordinates of the matched points. In an experiment, we captured images of a rock and its surroundings and confirmed that the two 3D models can be reconstructed and integrated with this method.

Shogo Atsumi, Itaru Kitahara, Yuichi Ohta, "Modeling and Layout-Map Generation of Rocks Using Multi-View Images"; Kanto Branch Conference of the Japanese Geotechnical Society 2013, 4 pages (2013.10)
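Once 3D corresponding points between the close-range rock model and the wide-area surroundings model are known, integrating the two models amounts to estimating a similarity transform (scale, rotation, translation) between the two point sets, since the reconstructions differ in scale as well as pose. A minimal least-squares sketch (Umeyama-style alignment) in NumPy, assuming the correspondences have already been found as described above; this is a generic illustration, not the paper's exact procedure.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares estimate of scale s, rotation R, translation t
    such that s * R @ src_i + t ~= dst_i for corresponding 3D points
    (Umeyama's method). src, dst are (N, 3) arrays, N >= 3."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                         # avoid a reflection
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)    # variance of source points
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

With the transform estimated from matched points, every vertex of the high-resolution rock model can be mapped into the coordinate frame of the surroundings model.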


[Figure: tsuru.jpg (http://www.image.esys.tsukuba.ac.jp/wp-content/uploads/2014/02/tsuru.jpg)]

Pose estimation (calibration) of a mobile camera is one of the most important research issues for realizing geometrical consistency between the real and virtual worlds in mixed reality. This paper proposes a method to estimate the pose of a mobile camera in a dynamic scene by using an environmental stereo camera. Sequential 3D maps of the captured environment, including both static objects and dynamic objects such as people, are generated in real time from the stereo images. By using the 3D points of dynamic objects as landmarks for camera calibration, robust pose estimation in a dynamic environment becomes possible. Experimental evaluations were conducted using both simulated CG images and captured images of a real scene to demonstrate the effectiveness of the proposed method.

Hiroyoshi Tsuru, Itaru Kitahara, Yuichi Ohta, "A Mobile Camera Calibration Method Using an Environmental Stereo Camera"; IEEJ Transactions on Electronics, Information and Systems, Vol.133, No.1, pp.47-53 (2013.1)
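Given 3D landmark positions from the environmental stereo camera and their 2D projections in the mobile camera image, the calibration step can be illustrated with the textbook direct linear transform (DLT), which recovers a 3x4 projection matrix from six or more 3D-2D correspondences. This is a generic sketch under idealized assumptions (noise-free matches, no coordinate normalization), not the paper's exact algorithm.

```python
import numpy as np

def estimate_projection_matrix(X3d, x2d):
    """DLT: estimate a 3x4 projection matrix P with x ~ P X from
    N >= 6 correspondences. X3d is (N, 3), x2d is (N, 2)."""
    A = []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        Xh = [X, Y, Z, 1.0]
        # two linear constraints per correspondence
        A.append([0.0] * 4 + [-c for c in Xh] + [v * c for c in Xh])
        A.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)    # null-space vector, reshaped to P

def project(P, X3d):
    """Project (N, 3) world points with P and dehomogenize."""
    Xh = np.hstack([X3d, np.ones((len(X3d), 1))])
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]
```

In practice the intrinsics K are usually known, so P = K [R | t] would be decomposed to obtain the mobile camera's rotation and translation; robust estimation is needed when some landmarks are mismatched.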


[Figure: toriya.png (http://www.image.esys.tsukuba.ac.jp/wp-content/uploads/2014/02/toriya.png)]

This paper proposes a method to localize a mobile camera by searching for corresponding points between the mobile camera image and aerial images from a GIS database. The same object appears differently in the two images, because mobile camera images are taken from the user's viewpoint (i.e., on the ground) while aerial images are taken from much higher viewpoints. To reduce these differences in appearance, the mobile camera image is transformed into a virtual top-view image using the gravity direction given by an inertial sensor embedded in the mobile camera. The SIFT algorithm is applied to find corresponding points between the virtual top-view image and the aerial images. As a result, a homography matrix that transforms the virtual top-view image into the aerial image is obtained, and the position and orientation of the mobile camera are estimated from this matrix. If the ground region captured by the mobile camera has poor texture, it is difficult to obtain enough correct correspondences to compute an accurate homography matrix. To handle such cases, we developed an optional process that stitches multiple virtual top-view images together to cover a larger region of the ground. An experimental evaluation was conducted with a developed pilot system.

Hisatoshi Toriya, Itaru Kitahara, Yuichi Ohta, "A Mobile Camera Localization Method Using Aerial-View Images"; Asian Conference on Pattern Recognition 2013 (ACPR2013), 5 pages (2013.11)
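The core geometric step above, estimating the homography that maps the virtual top-view image onto the aerial image from SIFT correspondences, can be sketched with the standard DLT for homographies, which needs at least four point pairs. This is a generic textbook sketch, not the paper's implementation; in practice the SIFT matches contain outliers and a robust estimator would wrap this solver.

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: estimate the 3x3 homography H with dst ~ H src from
    N >= 4 point pairs. src, dst are (N, 2) arrays."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)       # null-space vector of the constraints
    return H / H[2, 2]             # fix the projective scale

def warp_points(H, pts):
    """Map (N, 2) points through H and dehomogenize."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]
```

Once H is known, the mobile camera's position on the aerial map follows from where H sends the top-view image center, and its heading from the rotational component of H.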