Shared Mixed Reality


Research


[Figure: hsato — http://www.image.esys.tsukuba.ac.jp/wp-content/uploads/2014/02/hsato.png]

MR-mirror is a novel Mixed-Reality (MR) display system built from a real mirror and a virtual one. It merges real visual information reflected in the real mirror with virtual information displayed on an electronic monitor. The user's body is presented by the reflection in the real mirror, while virtual objects are presented on a monitor visible through the real mirror. Users can observe an MR scene without wearing devices such as a head-mounted display, and can interact with the virtual objects around them through body motion in the MR space. We implemented a prototype MR-mirror and a demonstration system.
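To align the virtual scene with what the real mirror reflects, each virtual object can be mirrored across the mirror plane before it is rendered on the monitor. A minimal sketch of that reflection, assuming the mirror plane is given by a point on the plane and a unit normal (function and parameter names are illustrative, not from the paper):

```python
def reflect_point(p, plane_point, plane_normal):
    """Reflect a 3D point across a mirror plane.

    p, plane_point, plane_normal: 3-tuples; plane_normal must be unit length.
    Uses p' = p - 2 * ((p - o) . n) * n, the standard mirror reflection.
    """
    # Signed distance from the point to the plane along the normal.
    d = sum((pi - oi) * ni for pi, oi, ni in zip(p, plane_point, plane_normal))
    # Move the point twice that distance back through the plane.
    return tuple(pi - 2 * d * ni for pi, ni in zip(p, plane_normal))
```

For example, a virtual object at (1, 2, 3) reflected across the plane z = 0 (normal (0, 0, 1)) lands at (1, 2, -3), matching where its mirror image would appear.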

Hideaki Sato, Itaru Kitahara, Yuichi Ohta, "MR-Mirror: A Complex of Real and Virtual Mirrors," Proceedings of the 3rd International Conference on Virtual and Mixed Reality (HCI International 2009), San Diego, July 2009, pp. 482-492.


[Figure: ueda — http://www.image.esys.tsukuba.ac.jp/wp-content/uploads/2014/02/ueda.png]

This paper proposes a wallpaper replacement method for a free-hand movie using Mixed-Reality (MR) techniques. Our method overlays Computer Graphics (CG) wallpaper onto the real wall region of a movie taken by a handheld camera (free-hand movie) looking around a room. To overlay CG wallpaper with an MR technique, the wallpaper plane must be extracted from the free-hand movie. Ordinarily, 3D models of the target room are required; in particular, to exclude thin objects pasted on the wall (such as posters) from the wallpaper region, the 3D models must be very precise. Generating such models requires a special and expensive 3D survey instrument, which is not available in the homes and offices where many users want a wallpaper replacement simulation. To solve this problem, we extract the wallpaper region using an image segmentation technique with user interaction. With appropriate user interaction, image segmentation can extract target objects, including thin ones, from a source image, and it requires no special equipment such as a 3D survey instrument. With our method, wallpaper replacement can be easily realized using only a handheld camera and a PC.
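Once the wallpaper region has been segmented, the final frame is a per-pixel composite: mask pixels take the CG wallpaper, everything else (including thin objects like posters excluded from the mask) keeps the original frame. A minimal sketch of that compositing step, assuming binary masks and grayscale frames stored as nested lists (names are illustrative):

```python
def composite_wallpaper(frame, cg_wall, mask):
    """Overlay CG wallpaper onto the segmented wall region of a frame.

    frame, cg_wall: 2D lists of pixel values, same size.
    mask: 2D list; truthy = wallpaper region, falsy = keep original pixel.
    """
    h, w = len(frame), len(frame[0])
    # For each pixel, choose the CG wallpaper inside the mask and the
    # original frame outside it (posters, furniture, etc.).
    return [[cg_wall[y][x] if mask[y][x] else frame[y][x]
             for x in range(w)] for y in range(h)]
```

In practice the mask would come from an interactive segmentation pass per keyframe and be propagated across the free-hand movie, but the compositing itself reduces to this per-pixel selection.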

Masashi Ueda, Itaru Kitahara, and Yuichi Ohta, "MR Simulation for Re-Wallpapering a Room in a Free-Hand Movie," The 20th Anniversary International Conference on MultiMedia Modeling (MMM2014), Dublin, Ireland, January 6-10, 2014.


[Figure: ssato — http://www.image.esys.tsukuba.ac.jp/wp-content/uploads/2014/02/ssato.png]

Facial expression, one of the most important non-verbal communication media, makes communication smoother by compensating for what is missed in verbal communication. However, some shy people cannot use facial expressions as well as they would like. Such poor emotional expression by a conversation partner makes it difficult to read their feelings correctly, and as a result, smooth communication is hindered. To solve this problem, this paper proposes a facial expression enhancement method that realizes smooth communication with rich facial expression. To enhance facial expression, facial shapes and textures are expressed as parameters in parametric spaces reconstructed from the person's natural facial images. In these parametric spaces, the difference between two facial expressions can be handled as a multidimensional vector. By scaling the difference vector between the input vector and the norm vector, the facial expression is enhanced without having to recognize which expression it is. We then generate an enhanced expression texture by re-projecting into image space. Finally, we overlay the synthesized facial image of the conversation partner onto the face region in the video chatting sequence. We conducted evaluations to confirm the enhancement effect of our method using CG faces and a real video sequence.
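The core enhancement step above is simple in the parametric space: take the difference vector between the input expression parameters and the neutral (norm) parameters, scale it by a gain larger than one, and add it back to the neutral face. A minimal sketch under those assumptions (the parameter vectors and gain value are illustrative; the paper's actual PDM fitting and re-projection are not shown):

```python
def enhance_expression(params, neutral, gain=1.5):
    """Exaggerate an expression in a parametric (e.g. PDM) space.

    params:  expression parameter vector of the input face.
    neutral: parameter vector of the neutral (norm) face.
    gain:    > 1 enhances the expression, < 1 attenuates it,
             1.0 leaves it unchanged.
    """
    # enhanced = neutral + gain * (params - neutral), per dimension.
    return [n + gain * (p - n) for p, n in zip(params, neutral)]
```

Because only the magnitude of the difference vector is changed, no expression classification is needed: a slight smile and a slight frown are both pushed further along their own directions.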

Shogo Sato, Itaru Kitahara, Yuichi Ohta, "Augmented-Reality Facial Expression Enhancement for Video Chatting Using Point Distribution Models" (Poster), 5th Joint Virtual Reality Conference (JVRC2013), pp. 51-56 (2013.12).


[Figure: suzuki — http://www.image.esys.tsukuba.ac.jp/wp-content/uploads/2014/02/suzuki.jpg]

This paper proposes a method to visually transport tabletop objects using Virtualized Reality and Mixed Reality (MR). Our method provides a portable workspace: it captures the 3D model of a real object on one table and places it on another. The placement position of the 3D virtual object on the target table is decided automatically by recognizing the table's condition. The position is chosen from the free space on the display-site table so as to avoid physical conflicts with other real objects, while respecting the original layout of the captured table. We conducted experimental evaluations of an implementation of the proposed method to confirm its effectiveness. The results show that users find the virtual object arrangement decided by our method suitable, and that the position of an object changes the impression of the table. Furthermore, the impression given by a virtual object on the table is similar to that of the real object. These results show that the portable tabletop realized by our method can provide a similar impression across different working environments.
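One way to realize the automatic placement described above is to model the target tabletop as an occupancy grid and search it for a region of free cells large enough for the virtual object's footprint. A minimal sketch along those lines (the grid representation and scan order are assumptions for illustration, not the paper's exact algorithm, which also weighs the original layout):

```python
def find_free_spot(occupancy, fw, fh):
    """Find a free fw x fh footprint on a tabletop occupancy grid.

    occupancy: 2D list of booleans, True = cell occupied by a real object.
    Returns the (row, col) of the footprint's top-left cell, scanning
    top-to-bottom, left-to-right, or None if no spot fits.
    """
    rows, cols = len(occupancy), len(occupancy[0])
    for y in range(rows - fh + 1):
        for x in range(cols - fw + 1):
            # Accept the first window whose cells are all unoccupied.
            if all(not occupancy[y + dy][x + dx]
                   for dy in range(fh) for dx in range(fw)):
                return (y, x)
    return None
```

Returning None when no footprint fits lets the system fall back to, for example, shrinking the virtual object or asking the user to clear space.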

鈴木友規, 北原格, 大田友一, "Portable Tabletop Workspace Using Virtualized Reality and Mixed-Reality Presentation," IEICE Technical Report MVE, vol. 113, no. 470, pp. 211-216 (2014.3).


[Figure: shizuku — http://www.image.esys.tsukuba.ac.jp/wp-content/uploads/2014/02/shizuku.png]

This research supports visitors so that they can browse more comfortably, by enabling them to share each other's interests. To realize this sharing, speech is visualized in three dimensions at the place where a conversation occurred. The position and direction of the visualized characters are matched to the speaker's position and direction, so that viewers can infer what the speaker said, and from where. Moreover, a visitor can hear the recorded conversation by "touching" the visualized characters. By listening to the audio, visitors can gauge the degree of interest in an exhibit, which we expect to improve their comprehension of the conversation.

雫泰裕, 北原格, 大田友一, "Visualization of Speech Content and 3D Interaction Using Mixed Reality," IEICE Technical Report MVE, vol. 113, no. 470, pp. 205-210 (2014.3).