Updated on 2024/04/02


 
SEJIMA, Yoshihiro
 
Organization
Faculty of Informatics
Title
Professor
Contact information
Email address
External link

Degree

  • Doctor of Engineering ( 2010.3 )

Research Interests

  • Line-of-sight

  • Social robotics

  • Affinity

  • Human robot interaction

  • Pupil

  • Interactive atmosphere

Research Areas

  • Life Science / Gerontological nursing and community health nursing

  • Informatics / Human interface and interaction

  • Informatics / Intelligent robotics

Education

  • Okayama Prefectural University   Graduate School of Systems Engineering

    2007.4 - 2010.3

      More details

  • Okayama Prefectural University   Graduate School of Systems Engineering

    2005.4 - 2007.3

      More details

  • Okayama Prefectural University   Computer Science and Systems Engineering   Department of System Engineering

    2001.4 - 2005.3

      More details

Research History

  • Kansai University   Faculty of Informatics, Department of Informatics   Associate Professor

    2019.4

      More details

  • Okayama Prefectural University   Assistant Professor

    2014.4 - 2019.3

      More details

  • Yamaguchi University   Assistant Professor

    2010.4 - 2014.3

      More details

  • Japan Science and Technology Agency (JST), Strategic Basic Research Programs (CREST)   Full-time Research Assistant

    2009.4 - 2010.3

      More details

Professional Memberships

Committee Memberships

  • Journal of Advanced Mechanical Design, Systems, and Manufacturing, Design & Systems Editorial Committee   Editor  

    2023.4 - Present   

      More details

    Committee type:Academic society

    researchmap

  • The Japan Society of Mechanical Engineers, Design & Systems Division, Planning Activities Revitalization Committee   Secretary  

    2023.4 - Present   

      More details

    Committee type:Academic society

    researchmap

  •   Secretary  

    2022.4 - 2023.3   

      More details

  • The Japan Society for Welfare Engineering   Board Member  

    2021.11 - Present   

      More details

    Committee type:Academic society

    researchmap

  •   Committee Member  

    2021.3   

      More details

  • Human Interface Society   Secretary  

    2021.3   

      More details

  • Ministry of Education   National Institute of Science and Technology Policy  

    2020.4 - Present   

      More details

    Committee type:Government

    researchmap

  • Human Interface Society   Councilor  

    2020.4 - Present   

      More details

    Committee type:Academic society

    researchmap

  • Robotics and Mechatronics Division, District 4 Technical Committee   Chair  

    2020.4 - 2022.3   

      More details

  • Japan Society for Welfare Engineering   Councilor  

    2019.11 - 2021.11   

      More details

    Committee type:Academic society

    researchmap

  •   Design & Systems Division, Steering Committee Member  

    2018.4   

      More details

  • Japan Society for Welfare Engineering   Paper Review Committee, Chair  

    2018.4   

      More details

    Committee type:Academic society

    researchmap

  • Human Interface Society   Secretary  

    2014.10   

      More details


Papers

  • A speech-driven avatar robot system with changing complexion for the Visualization of an interactive atmosphere Reviewed

    Yoshihiro Sejima, Liheng Yang, Saki Inagaki, Daiki Morita

    Journal of Robotics and Mechatronics   35 ( 5 )   2023.10

     More details

    Authorship:Lead author   Language:English   Publishing type:Research paper (scientific journal)  

    researchmap

  • Eye-power communication in the with-COVID era — Special issue: Interaction and distance between humans and robots: learning and moving forward through the COVID-19 pandemic Invited Reviewed

    Yoshihiro Sejima

    Keisoku to Seigyo (Journal of the Society of Instrument and Control Engineers)   61 ( 3 )   198 - 202   2022.3

     More details

    Language:Japanese  

    CiNii Books

    researchmap

  • A speech-driven pupil response system with affective highlight by virtual lighting

    Yoshihiro Sejima, Hiroki Kawamoto, Yoichiro Sato, Tomio Watanabe

    Journal of Advanced Mechanical Design, Systems and Manufacturing   16 ( 5 )   2022

     More details

    Publishing type:Research paper (international conference proceedings)  

    In face-to-face communication, talkers perceive each other's affects and emotions through eye impressions such as highlights or dilated pupils. Previous studies on eye impressions in social robots proposed and developed various expression methods, such as changing the LED color or blinking the LED. However, an LED cannot easily generate affective expressions with a highlight, which arises from optical reflection on the surface of the eyeball. Therefore, to enhance the eye impressions of social robots, a method capable of expressing an affective highlight is needed. In this study, we proposed an affective highlight for enhancing the eye impressions of social robots. The method introduces a virtual light source into virtual space and can generate rich affective expressions by controlling parameters of the virtual lighting such as brightness and color. In addition, we developed a speech-driven pupil response system that applies the affective highlight to our previously developed pupil response interface. The system generates affective expressions through pupil dilation synchronized with the talker's speech, as well as highlight expressions triggered by the talker's intentional input. The effectiveness of the developed system with the affective highlight was demonstrated by sensory evaluations.

    DOI: 10.1299/jamdsm.2022jamdsm0058

    Scopus

    researchmap

  • Effects of Virtual Character's Eye Movements in Reach Motion on Target Prediction. Reviewed

    Liheng Yang, Yoshihiro Sejima, Tomio Watanabe

    HCI (5)   162 - 171   2022

     More details

    Publishing type:Research paper (international conference proceedings)  

    DOI: 10.1007/978-3-031-06509-5_12

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/hci/hci2022-5.html#YangSW22

  • A fast and accurate tube diameter visualization method for digestive system medical images Reviewed

    Mitsuru Ueda, Masafumi Kondo, Isao Kayano, Yoshihiro Sejima, Tomoyuki Yokogawa, Yoichiro Sato, Yuusaku Sugihara, Kazuyuki Matsumoto

    IEEJ Transactions on Electronics, Information and Systems   141 ( 9 )   982 - 991   2021.9

     More details

    Publishing type:Research paper (scientific journal)  

    Although the tube diameter and stenosis rate are important observations in the diagnosis using medical images of digestive organ, such as digestive systems, the diagnosis is based on the physician’s subjective evaluation. On the other hand, automatic measurement of tube diameters by image processing has been used for blood vessels, which are tubular tissues similar to digestive systems, and it is also possible to visualize the vessel diameter by assigning pseudo-colors according to the length of the vessel. However, it is difficult to apply the existing diameter visualization methods directly to digestive system medical images due to the processing time and accuracy problems because the tubes are large and tortuous, unlike blood vessels. In this paper, we propose a method to measure the tube diameter quickly by dynamically calculating the next featured pixel using the history of the midpoints of the tube diameter line. Furthermore, we derive an approximate curve corresponding to the centerline of the tube from the history of the featured pixels and measure the tube diameter with high accuracy. We applied the proposed method to various types of digestive system medical images and confirmed that the proposed method could accurately measure and visualize the tube diameter along the shape of the tube and reduce the processing time by up to 99%.

    DOI: 10.1541/ieejeiss.141.982

    Scopus

    researchmap

  • A Robot that Tells You It is Watching You with Its Eyes. Reviewed

    Saizo Aoyagi, Yoshihiro Sejima, Michiya Yamamoto

    Human-Computer Interaction. Interaction Techniques and Novel Applications - Thematic Area   12763   201 - 215   2021

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:Springer  

    The eyes play important roles in human communication. In this study, a robot tells a user it is watching her/him in a shopping scenario. First, we conducted experiments to determine the parameters of the eyes on the screen of the robot. Next, we conducted a scenario experiment, assuming a shopping scene, to demonstrate the effectiveness of the eye-based interaction compared to common push-type interaction. The results showed that the robot achieved modest and casual interaction in a shopping scene.

    DOI: 10.1007/978-3-030-78465-2_15

    Web of Science

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/hci/hci2021-2.html#AoyagiSY21

  • Development of a Presentation Support System Using Group Pupil Response Interfaces. Reviewed

    Yoshihiro Sejima, Yoichiro Sato, Tomio Watanabe

    Human Interface and the Management of Information. Information-Rich and Intelligent Environments - Thematic Area   429 - 438   2021

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:Springer  

    DOI: 10.1007/978-3-030-78361-7_33

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/hci/hci2021-5.html#SejimaSW21

  • Effects of pupil area on impression formation in pupil expression media Reviewed

    SEJIMA Yoshihiro, KAWAMOTO Hiroki, SATO Yoichiro, WATANABE Tomio

    Transactions of the JSME (in Japanese)   87 ( 903 )   21-00187 - 21-00187   2021

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    Pupils give favorable impressions to humans depending on their size. Focusing on the effects of pupil size, previous studies evaluated various pupil sizes using human face images as stimuli. However, it is difficult to express an extreme pupil size in a face image because of the limited area of the eyes. In addition, evaluating pupil size on its own is hard, because impressions of face images depend not only on the pupil size but also on other facial parts. To evaluate impressions of the pupil size itself, it is desirable to present only the pupil and exclude the other facial parts. In this study, we analyzed impressions of various pupil sizes, combining dilation and contraction, using two kinds of previously developed pupil expression media. In the analysis, impression ratings obtained with the semantic differential (SD) method were factor-analyzed. Three factors were extracted: acceptability, reliability, and curiosity. The acceptability and reliability factors each had a peak, and the curiosity factor was proportional to the pupil size. These results demonstrated that a certain dilated pupil area forms a favorable impression regardless of the pupil expression medium.

    DOI: 10.1299/transjsme.21-00187

    researchmap

  • Development of a paralysis-degree estimation system based on finger motion characteristics during mirror therapy Reviewed

    Yoshihiro Sejima

    Journal of the Japan Society for Welfare Engineering   22 ( 2 )   35 - 40   2020.11

     More details

    Language:Japanese   Publisher:Japan Society for Welfare Engineering  

    CiNii Books

    researchmap

    Other Link: https://search.jamas.or.jp/link/ui/2021102526

  • Development of a Pupil Response System with Empathy Expression in Face-to-Face Body Contact Reviewed

    Yoshihiro Sejima, Yoichiro Sato, Tomio Watanabe

    Advances in Intelligent Systems and Computing   952   95 - 102   2020

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:SPRINGER INTERNATIONAL PUBLISHING AG  

    © 2020, Springer Nature Switzerland AG. Pupil response is closely related to human affects and emotions. Focusing on the pupil response in human-robot interaction, we developed a pupil response interface using hemispherical displays for enhancing affective expression. This interface can generate a human-like pupil response from speech input and thereby enhance affective expression. In this study, as basic research toward forming intimate communication between a human and a pet robot, we used a pupil measurement device to analyze the pupil response during body contact, such as stroking a forearm or head. Based on the analysis, we developed an advanced pupil response system for enhancing intimacy. This system generates an empathy expression when the talker touches the surface of the hemispherical displays. The effectiveness of the system was confirmed experimentally.

    DOI: 10.1007/978-3-030-20441-9_11

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/ahfe/ahfe2019-1.html#SejimaSW19

  • Development of an Interface that Expresses Twinkling Eyes by Superimposing Human Shadows on Pupils. Reviewed

    Yoshihiro Sejima, Makiko Nishida, Tomio Watanabe

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   12424 LNCS   271 - 279   2020

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:Springer  

    Human eyes reflect their owner's affects and emotions, and seem to twinkle when a person has positive affects and emotions such as strong interest and eagerness. Humans are subconsciously attracted to twinkling eyes. In this study, aiming to develop a novel communication interface that attracts humans, we focused on the reflected glare on the surface of the eyeballs and proposed an expression method that superimposes human shadows on the pupils. We conducted an impression experiment to evaluate the proposed method and demonstrated that it is effective for attracting humans. Based on this result, we developed an advanced interface that expresses twinkling eyes to attract humans. The developed interface generates a pseudo reflected glare by superimposing a human shadow on each pupil as a pseudo self-image.

    DOI: 10.1007/978-3-030-60117-1_20

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/hci/hci2020-42.html#SejimaNW20

  • A video communication system with a virtual pupil CG superimposed on the partner's pupil Reviewed

    Yoshihiro Sejima, Ryosuke Maeda, Yoichiro Sato, Tomio Watanabe

    JOURNAL OF ADVANCED MECHANICAL DESIGN SYSTEMS AND MANUFACTURING   14 ( 6 )   336 - 345   2020

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:JAPAN SOC MECHANICAL ENGINEERS  

    Pupil response plays an important role in expression of talker's affect in an embodied interaction and communication. Focusing on the pupil response in human voice communication, we analyzed the pupil response during utterance, and demonstrated that the pupil enlarges and contracts in synchronization with the burst-pause (ON-OFF) of the utterance. In addition, it was confirmed that the pupil response is effective for enhancing affective conveyance by using the developed system in which an interactive CG character generates the pupil response based on the synchronization with the burst-pause of utterance. In this study, we developed a video communication system with a virtual pupil CG superimposed on the partner's pupil for enhancing affective conveyance. This system generates a virtual pupil response in synchronization with the talker's burst-pause of utterance. We performed a communication experiment under the condition that the virtual pupil CG is generated by being synchronized with the talker's burst-pause of utterance. The effectiveness of the system was demonstrated by means of sensory evaluations of 12 pairs of participants in the video communication.

    DOI: 10.1299/jamdsm.2020jamdsm0091

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/hci/hci2018-4.html#SejimaMHSW18
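
    As a rough illustration of the burst-pause (ON-OFF) synchronization described in the entry above, the following Python sketch drives a simulated pupil diameter from a binary speech signal: the diameter approaches a dilated value while speech is ON and relaxes toward a resting value while speech is OFF. The diameters, time constants, and first-order dynamics are illustrative assumptions, not the model used in the paper.

        # Illustrative only: first-order pupil dynamics driven by a binary speech signal.
        # Diameters and time constants are assumed values, not those of the published model.

        def simulate_pupil(speech_on, dt=0.1, d_rest=3.0, d_max=6.0,
                           tau_dilate=0.5, tau_relax=1.5):
            """Return a list of pupil diameters (mm) for a binary ON/OFF speech sequence."""
            d = d_rest
            trace = []
            for on in speech_on:
                target = d_max if on else d_rest
                tau = tau_dilate if on else tau_relax
                d += (target - d) * (dt / tau)   # simple exponential approach to the target
                trace.append(d)
            return trace

        if __name__ == "__main__":
            # 3 s of speech (ON) followed by 3 s of pause (OFF), sampled at 10 Hz
            signal = [1] * 30 + [0] * 30
            diameters = simulate_pupil(signal)
            print(f"peak {max(diameters):.2f} mm, final {diameters[-1]:.2f} mm")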

  • A pupil response system using hemispherical displays for enhancing affective conveyance Reviewed

    SEJIMA, Yoshihiro

    Journal of Advanced Mechanical Design, Systems, and Manufacturing   Vol.13, No.2, pp.JAMDSM0032-1--JAMDSM0032-9   2019.4

     More details

  • Proposal of an Estimation Method of Emotional Centroid Based on the Russell’s Circumplex Model for Quantitative Evaluation of Affect Reviewed

    EGAWA Shoichi, SEJIMA Yoshihiro, SATO Yoichiro

    Transactions of Japan Society of Kansei Engineering   18 ( 3 )   187 - 193   2019.4

     More details

    Language:Japanese   Publisher:Japan Society of Kansei Engineering  

    In human face-to-face communication, not only verbal messages but also non-verbal behaviors such as facial expressions, body movements, and gazes are conveyed; these non-verbal behaviors can express human affect. They also lead to unconscious sharing of empathy in human interaction and enhance intimacy between humans. In particular, cognitive empathy as well as emotional empathy plays an important role in this sharing of empathy. It is therefore desirable to evaluate emotional empathy based on the features of affect in order to enhance intimacy. In this study, as basic research toward evaluating emotional empathy, a method that estimates the emotional centroid in the coordinate system of Russell's circumplex model was proposed. An experiment was conducted to evaluate the proposed method, and the results demonstrated that the method can potentially estimate the features of affect.

    DOI: 10.5057/jjske.tjske-d-18-00069

    researchmap

    Other Link: https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-16K01560/
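
    For readers unfamiliar with the circumplex coordinate system referred to in the entry above, the sketch below simply averages affect samples placed on a valence-arousal plane to obtain a centroid. The sample values and the plain averaging are illustrative assumptions; the estimation method proposed in the paper is not reproduced here.

        # Illustrative only: centroid of affect samples on a valence-arousal plane
        # (the two axes of Russell's circumplex model). Data values are made up.

        def emotional_centroid(samples):
            """samples: iterable of (valence, arousal) pairs, each in [-1, 1]."""
            xs, ys = zip(*samples)
            return sum(xs) / len(xs), sum(ys) / len(ys)

        if __name__ == "__main__":
            observed = [(0.6, 0.4), (0.8, 0.2), (0.5, 0.7)]   # hypothetical ratings
            v, a = emotional_centroid(observed)
            print(f"centroid: valence={v:.2f}, arousal={a:.2f}")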

  • A virtual window communication system with pointing function for remote communication Reviewed

    Shunsuke Ota, Mitsuru Jindai, Toshiyuki Yasuda, Yoshihiro Sejima

    Journal of Advanced Mechanical Design, Systems and Manufacturing   13 ( 2 )   2019

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:JAPAN SOC MECHANICAL ENGINEERS  

    © 2019 The Japan Society of Mechanical Engineers. In recent years, video communication systems have found wide application, along with cellphones and e-mail, for remote communication. With video communication, the partner's expression and personal space, which help provide smooth communication between two remote locations, can be seen. However, it is difficult to perceive depth, since the projected images can display only two-dimensional information. Furthermore, during video communication, even if the user moves around, the view of the partner remains unchanged. Seeing an image that includes depth information is important for communication, and people are expected to communicate more smoothly with such images because they increase the feeling of a shared place. Therefore, in this paper, we develop a virtual window communication system in which an image of a remote location, as seen through a virtual window, is projected based on the head position of the user. In this system, when the positional relationship between the user's head position and the window changes, the projected image changes accordingly. Thus, the user can feel as though he or she is looking directly at the conversational partner and the surrounding environment through the virtual window. Furthermore, remote pointing is realized using a pointing function in this communication system. This pointing function does not require calibration and can be placed at an arbitrary position using image processing. Sensory evaluations were then performed to demonstrate the effectiveness of the developed virtual window communication system.

    DOI: 10.1299/JAMDSM.2019JAMDSM0030

    Web of Science

    Scopus

    researchmap

  • A Speech-Driven Pupil Response System with Affective Expression Using Hemispherical Displays Reviewed

    Yoshihiro Sejima, Shoichi Egawa, Ryosuke Maeda, Yoichiro Sato, Tomio Watanabe

    RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication   228 - 233   2018.11

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    © 2018 IEEE. We developed an expressible pupil response interface using hemispherical displays for enhancing human-robot communication. This interface looks like a robot's eyeballs and expresses a vivid pupil response from speech input. In particular, the interface can express an exaggerated pupil response that humans cannot produce. In this study, as basic research toward realizing affinitive interaction in human-robot communication, we analyzed the pupil response with affective expression during utterance using a pupil measurement device. On the basis of the analysis results, we developed a speech-driven pupil response system with affective expression using hemispherical displays. This system expresses pupil responses with various affective expressions from speech input. We carried out an evaluation experiment using a sensory evaluation with the system acting as the speaker. The results demonstrated that the system with affective expression is effective for enhancing affinitive interaction.

    DOI: 10.1109/ROMAN.2018.8525764

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/ro-man/ro-man2018.html#SejimaEMSW18

  • Development of a hand-up request motion model based on analysis of hand-up motion between humans Reviewed

    Shunsuke Ota, Toshiyuki Yasuda, Mitsuru Jindai, Yoshihiro Sejima

    Proceedings - 2nd IEEE International Conference on Robotic Computing, IRC 2018   2018-January   228 - 231   2018.4

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    © 2018 IEEE. Robots are expected to work in fields involving direct interaction with humans. However, in order to work in these fields, a robot has to be accepted favorably by humans, and it therefore needs to be able to start communication smoothly. In previous research, a hand-up response motion model for robots was discussed, and its effectiveness was demonstrated by experiments using a hand-up robot system that employed the developed model. On the other hand, robots should also be able to generate an active hand-up motion in order to promote embodied interaction with humans; however, the request side of the hand-up greeting interaction for robots had not been discussed. Therefore, in this paper, a hand-up request motion model is proposed for generating a hand-up greeting together with a voice greeting toward a human. First, hand-up greetings between humans are measured and analyzed. Based on the motion characteristics obtained from the analysis, a hand-up request motion model in which a robot requests a hand-up greeting with a voice greeting from a human is proposed. The effectiveness of the developed request motion model is demonstrated through sensory evaluation experiments.

    DOI: 10.1109/IRC.2018.00048

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/irc/irc2018.html#OtaYJS18

  • A video communication system with a virtual pupil CG superimposed on the partner’s pupil Reviewed

    Yoshihiro Sejima, Ryosuke Maeda, Daichi Hasegawa, Yoichiro Sato, Tomio Watanabe

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   10904 LNCS   336 - 345   2018

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:Springer  

    © Springer International Publishing AG, part of Springer Nature 2018. Pupil response plays an important role in the expression of a talker's affect. Focusing on the pupil response in human voice communication, we analyzed the pupil response in embodied interaction and demonstrated that the speaker's pupil was clearly dilated during the burst-pause of utterance. In addition, it was confirmed that the pupil response is effective for enhancing affective conveyance by using the developed system in which an interactive CG character generates the pupil response based on the burst-pause of utterance. In this study, we develop a video communication system with a virtual pupil CG superimposed on the partner's pupil for enhancing affective conveyance. This system generates a virtual pupil response in synchronization with the talker's utterance. The effectiveness of the system is demonstrated by means of sensory evaluations of 12 pairs of subjects in video communication.

    DOI: 10.1007/978-3-319-92043-6_28

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/hci/hci2018-4.html#SejimaMHSW18

  • A speech-driven pupil response robot synchronized with burst-pause of utterance Reviewed

    Yoshihiro Sejima, Shoichi Egawa, Ryosuke Maeda, Yoichiro Sato, Tomio Watanabe

    RO-MAN 2017 - 26th IEEE International Symposium on Robot and Human Interactive Communication   2017-January   437 - 442   2017.12

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    © 2017 IEEE. We have developed a pupil response robot, called Pupiloid, that generates pupil responses that are closely related to human emotions as well as gaze. Pupiloid can express a human-like pupil response by using a mechanism that rotates feathers sterically. In this study, in order to create a smooth interaction between human and robot, we performed an analysis of the pupil response during utterance by using a pupil measurement device. Based on the results, we propose a method in which the robot's pupil is dilated in synchronization with the burst-pause of utterance. We then developed an advanced communication robot that is used with the Pupiloid in order to enhance the robot's affect during utterance. This advanced robot generates a vivid pupil response via mechanical structures based on the burst-pause of utterance. We carried out a sensory evaluation experiment under the condition that the robot speaks. The results demonstrated that the developed robot effectively enhances affect.

    DOI: 10.1109/ROMAN.2017.8172339

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/ro-man/ro-man2017.html#SejimaEMSW17

  • Development of a hug request motion model during active approach to human Reviewed

    Mitsuru Jindai, Shunsuke Ota, Toshiyuki Yasuda, Tohru Sasaki, Yoshihiro Sejima

    2017 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2017   2017-January   612 - 617   2017.11

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    © 2017 IEEE. In human face-to-face communication, embodied sharing through the synchrony of embodied rhythms is promoted by embodied interactions. Embodied interactions are therefore important for smoothly initiating coexistence and communication. Embodied interactions that involve direct contact are especially effective at synchronizing embodied rhythms. Hug behavior is one such interaction with direct contact, in which humans make whole-body contact with each other. In interaction between a human and a robot, the robot can synchronize embodied rhythms effectively by using hug behaviors. Furthermore, embodied interactions are expected to be promoted by hug behaviors in which the robot actively approaches the human and requests a hug. Therefore, in this paper, we develop a hug request motion model for a robot that actively approaches a human. First, mutual hug behaviors between humans are analyzed in an environment with a voice greeting. Then, on the basis of the analysis results, a hug request motion model during active approach to a human is developed. This model generates a hug behavior in which a robot actively approaches a human and requests a hug. In addition, we developed a hug robot system that uses the developed hug request motion model. Using this robot system, the effectiveness of the proposed model is demonstrated by a sensory evaluation.

    DOI: 10.1109/SMC.2017.8122674

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/smc/smc2017.html#JindaiOYSS17

  • Development of nodding detection using neural network based on communication characteristics Reviewed

    Shunsuke Ota, Mitsuru Jindai, Toshiyuki Yasuda, Yoshihiro Sejima

    2017 56th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2017   2017-November   943 - 945   2017.11

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    © 2017 The Society of Instrument and Control Engineers - SICE. In order to communicate smoothly, humans use not only verbal information but also non-verbal information such as nodding. Nodding can be defined as the action of rhythmically moving the head vertically up and down. It plays a major role as a form of non-verbal information showing approval, agreement, or understanding during communication. However, a system that detects nodding by integrating head motion and voice rhythm had not been proposed. Therefore, in this paper, we developed a nodding detection method using a neural network (NN) based on the communication characteristics of head motion and voice rhythm. First, the voice data of the speaker and the head motion of the listener are measured, and human nodding is then detected from the measured data using the NN.

    DOI: 10.23919/SICE.2017.8105683

    Web of Science

    Scopus

    researchmap
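
    The entry above combines the speaker's voice rhythm and the listener's head motion as inputs to a neural network. The sketch below shows that general setup with a small off-the-shelf classifier trained on synthetic features; the feature definitions, window length, and network size are assumptions and are not taken from the paper.

        # Illustrative only: a small neural-network classifier over joint
        # voice-rhythm / head-motion features. Features and labels are synthetic.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)

        # Hypothetical per-window features: [speech on-ratio, speech burst count,
        #                                    vertical head velocity (rms), head range]
        X_nod = rng.normal([0.6, 3.0, 1.2, 0.8], 0.2, size=(100, 4))
        X_none = rng.normal([0.6, 3.0, 0.2, 0.1], 0.2, size=(100, 4))
        X = np.vstack([X_nod, X_none])
        y = np.array([1] * 100 + [0] * 100)   # 1 = nodding, 0 = no nodding

        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
        clf.fit(X, y)
        print("training accuracy:", clf.score(X, y))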

  • Proposal of a pupil response model synchronized with burst-pause of utterance based on the heat conduction equation Reviewed

    Shoichi Egawa, Yoshihiro Sejima, Ryosuke Maeda, Yoichiro Sato, Tomio Watanabe

    2017 56th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2017   2017-November   932 - 934   2017.11

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    © 2017 The Society of Instrument and Control Engineers - SICE. In our previous study, we analyzed the pupil response during a speaker's utterance using a pupil measurement device and demonstrated that the speaker's pupil dilates in synchronization with the burst-pause of utterance. In addition, we developed a pupil response robot called 'Pupiloid' that generates the pupil response with a mechanical structure, and demonstrated that the pupil response is effective for expressing the robot's own affect. In this paper, in order to enhance affective conveyance in human-robot interaction, we propose a pupil response model synchronized with the burst-pause of utterance based on the heat conduction equation. This model estimates the degree of affective conveyance and generates the pupil response based on the estimated value.

    DOI: 10.23919/SICE.2017.8105598

    Web of Science

    Scopus

    researchmap

  • A configuration method of a serial multiply-accumulate unit for DSPs in digital hearing aids Reviewed

    Daichi Okamoto, Masafumi Kondo, Yoshihiro Sejima, Isao Kayano, Tomoyuki Yokogawa, Kazutami Arimoto, Yoichiro Sato

    IEICE Transactions on Information and Systems (Japanese Edition), D (Web)   J100-D ( 3 )   321 - 330   2017.3

     More details

    Language:Japanese   Publishing type:Research paper (scientific journal)  

    J-GLOBAL

    researchmap

  • A laughing-driven pupil response system for inducing empathy Reviewed

    Shoichi Egawa, Yoshihiro Sejima, Yoichiro Sato, Tomio Watanabe

    SII 2016 - 2016 IEEE/SICE International Symposium on System Integration   520 - 525   2017.2

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    © 2016 IEEE. Laughing plays an important role in supporting human interaction and communication, and sharing laughter with each other enhances empathy. Therefore, to develop communication systems that enhance empathy, it is desirable to design a media representation using the pupil response, which is related to affective responses such as pleasure and displeasure. In this paper, aiming to enhance empathy in human-robot interaction and communication, we develop a pupil response system that induces empathy through a laughing response using a hemispherical display. In addition, we evaluate the pupil response combined with the laughing response using the developed system. The results demonstrate that the dilated pupil response with the laughing response is effective for enhancing empathy.

    DOI: 10.1109/SII.2016.7844051

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/sii/sii2016.html#EgawaSSW16

  • A serial booth multiplier using ring oscillator Reviewed

    Daichi Okamoto, Masafumi Kondo, Tomoyuki Yokogawa, Yoshihiro Sejima, Kazutami Arimoto, Yoichiro Sato

    Proceedings - 2016 4th International Symposium on Computing and Networking, CANDAR 2016   458 - 461   2017.1

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    © 2016 IEEE. The increase in the number of people with hearing loss caused by the progressive aging of Japanese society leads to growing demand for digital hearing aids equipped with a DSP. Because of the hard physical limit on battery capacity imposed by the wearable form factor, the battery life of existing digital hearing aids is only about a few days. In this paper, we propose an implementation of a bit-serial multiplier for a hearing-aid DSP with a high operating frequency and low power consumption. To reduce the power consumption associated with clock generation, we use a ring oscillator to dynamically generate clock pulses only during the calculation period. In addition, we adopt Booth encoding to reduce the number of partial products in multiplication and thereby reduce the calculation time and the associated power consumption. We implement the proposed multiplier and show its effectiveness through comparison experiments.

    DOI: 10.1109/CANDAR.2016.105

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/ic-nc/candar2016.html#OkamotoKYSAS16
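
    The entry above adopts Booth encoding to reduce the number of partial products. The following Python sketch evaluates radix-2 Booth recoding in software to show how the recoded digits reproduce an ordinary signed multiplication; the bit-serial datapath and the ring-oscillator clocking of the proposed hardware are not modeled here.

        # Illustrative only: radix-2 Booth recoding of the multiplier, evaluated in
        # software. The serial datapath and ring-oscillator clocking are not modeled.

        def booth_multiply(multiplicand, multiplier, n_bits=16):
            """Multiply two signed integers via radix-2 Booth recoding of the multiplier."""
            # Two's-complement bits of the multiplier, LSB first, plus the appended q_{-1} = 0.
            q = [0] + [(multiplier >> i) & 1 for i in range(n_bits)]
            product = 0
            for i in range(n_bits):
                digit = q[i] - q[i + 1]      # Booth digit in {-1, 0, +1}
                if digit:
                    product += (digit * multiplicand) << i
            return product

        if __name__ == "__main__":
            for a, b in [(1234, -567), (-250, -4), (7, 7)]:
                assert booth_multiply(a, b) == a * b
            print("Booth recoding multiplication matches Python's * operator")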

  • A speech-driven embodied communication system based on an eye gaze model in interaction-activated communication Reviewed

    Yoshihiro Sejima, Koki Ono, Tomio Watanabe

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   10273 LNCS   607 - 616   2017

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:SPRINGER INTERNATIONAL PUBLISHING AG  

    © Springer International Publishing AG 2017. Line-of-sight behavior such as gaze and eye contact plays an important role in enhancing embodied interaction and communication through avatars, and many gaze models and communication systems using avatar line-of-sight have been proposed and developed. However, the gaze behaviors generated by these models are not designed to enhance embodied interaction such as activated communication, because the models stochastically generate eyeball movements based on human gaze behavior alone. Therefore, we analyzed the interaction between human gaze behavior and activated communication by using line-of-sight measurement devices, and proposed an eye gaze model based on this analysis. In this study, we develop an advanced avatar-mediated communication system in which the proposed eye gaze model is applied to speech-driven embodied entrainment characters called "InterActor." This system generates the avatar's eyeball movements, such as gazing and looking away, based on the activated communication, and provides a communication environment in which embodied interaction is promoted. The effectiveness of the system is demonstrated by means of sensory evaluations of 24 pairs of subjects involved in avatar-mediated communication.

    DOI: 10.1007/978-3-319-58521-5_48

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/hci/hci2017-3.html#SejimaOW17

  • A pupil response system for inducing empathy by laugh response in voice communication

    EGAWA Shoichi, SEJIMA Yoshihiro, SATO Yoichiro, WATANABE Tomio

    Transactions of the JSME (in Japanese)   83 ( 853 )   17-00076 - 17-00076   2017

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    We have already developed an embodied communication system with a pupil response and demonstrated that the pupil response plays an important role in realizing smooth human interaction and communication. The pupil response therefore has the potential to enhance the sharing of empathy and to convey rich affects such as the pleasure of laughter. Hence, in order to develop communication systems that enhance empathy, it is desirable to design the media representation of the pupil response. In this paper, focusing on laughter with a pleasure emotion as a typical pleasant affect, we analyzed the relation between laughter and pupil response using a pupil measurement device, and developed a pupil response system that induces empathy through a laugh response based on speech input. In addition, we evaluated the pupil response with laughter using the developed system. The results demonstrated that the dilated pupil response accompanying laughter is effective for enhancing empathy.

    DOI: 10.1299/transjsme.17-00076

    researchmap

    Other Link: https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-16K01560/

  • Development of a Speech-Driven Pupil Response Robot Synchronizing with Burst-Pause of Utterance

    SEJIMA Yoshihiro, EGAWA Shoichi, MAEDA Ryosuke, SATO Yoichiro, WATANABE Tomio

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec)   2017   1P2-L03   2017

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    We have already developed a pupil response robot called "Pupiloid" that generates the robot's gaze as well as its pupil response, both of which are closely related to human emotions. This robot can express a human-like pupil response by using a mechanism that rotates feathers sterically. In this study, as basic research toward realizing smooth communication during embodied interaction between a human and a robot, we analyze the pupil response during a speaker's utterance using a pupil measurement device. On the basis of the analysis, we propose a concept in which the robot's pupil is dilated in synchronization with the burst-pause of utterance. We then develop an advanced communication robot that is used with the Pupiloid to enhance the robot's affects, such as enthusiasm, during utterance. This advanced robot generates a vivid pupil response via mechanical structures based on the burst-pause of utterance. The effectiveness of the developed robot is demonstrated by a sensory evaluation in a communication experiment with 20 subjects.

    DOI: 10.1299/jsmermd.2017.1P2-L03

    researchmap

  • Development of an embodied avatar system using avatar-Shadow's color expressions with an interaction-activated communication model Reviewed

    Yutaka Ishii, Tomio Watanabe, Yoshihiro Sejima

    HAI 2016 - Proceedings of the 4th International Conference on Human Agent Interaction   337 - 340   2016.10

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:Association for Computing Machinery, Inc  

    Copyright © 2016 ACM. In reality, shadows are usually natural and unintentional. In virtual reality, however, they play an important role in three-dimensional effects and the perceived reality of the virtual space. An avatar's shadow can have interactive effects with the avatar itself in the virtual space. In this study, we develop an embodied avatar system using avatar-shadow color expressions with an interaction-activated communication model. This model is based on the heat conduction equation in heat-transfer engineering, and has been developed to enhance empathy during embodied interaction in avatar-mediated communication. A communication experiment is performed with 12 pairs of participants to confirm the effectiveness of the system. The results of the sensory evaluation show that interaction activation is visualized by changing avatar-shadow color.

    DOI: 10.1145/2974804.2980487

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/conf/hai/2016

  • Effect of Changes in Face Color on Emotion Perception in Color Vision Deficiency Reviewed

    SEJIMA Yoshihiro, TAKIMOTO Hironori, SATO Yoichiro, MATSUDA Ken

    Transactions of Japan Society of Kansei Engineering   Vol.15, No.1, pp.7-14 ( 1 )   7 - 14   2016.2

     More details

    Language:English   Publisher:Japan Society of Kansei Engineering  

    Face color plays an important role in enhancing empathy in human face-to-face communication, and supports recognition of human emotions such as delight and sorrow. However, these effects cannot be applied to people with a color vision deficiency, who differ in color vision compared to people with normal color vision. Therefore, it is essential to investigate differences in the role of face color in perceiving human emotions. This paper focuses on face color in color vision deficiency, and the experiments evaluate the influence of face color on perception in people with normal color vision and a color vision deficiency. In addition, we evaluated differences in emotion perception between people with normal color vision and people with a color vision deficiency. The results demonstrate that orange improves perception of delight.

    DOI: 10.5057/jjske.TJSKE-D-15-00056

    researchmap

  • A pupil response system using hemispherical displays for enhancing affective conveyance Reviewed

    Yoshihiro Sejima, Shoichi Egawa, Yoichiro Sato, Tomio Watanabe

    Journal of Advanced Mechanical Design, Systems and Manufacturing   10 ( 4 )   2016

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:JAPAN SOC MECHANICAL ENGINEERS  

    © 2019 The Japan Society of Mechanical Engineers. In human interaction and communication, not only verbal messages but also nonverbal behaviors such as facial expressions, body movements, gaze, and pupil responses play an important role in expressing a talker's affect. These expressions encourage reading of emotional cues and bring about the sharing of embodiment and empathy. We focused on the pupil response, which is closely related to human affect, and developed an embodied communication system in which an interactive CG character generates the pupil response as well as communicative actions and movements, such as nodding and body movements, from speech input. In addition, it was confirmed with the developed system that the pupil response is effective for supporting embodied interaction and communication. In this paper, in order to realize smooth interaction between a human and a robot, we developed a pupil response system using hemispherical displays for enhancing affective conveyance. This system looks like a robot's eyeballs and expresses a vivid pupil response from speech input. We carried out a sensory evaluation experiment under the condition that the developed system speaks. The results demonstrated that the system effectively enhances affective conveyance.

    DOI: 10.1299/JAMDSM.2019JAMDSM0032

    Web of Science

    Scopus

    researchmap

  • DEVELOPMENT OF A SMALL-SIZE HANDSHAKE ROBOT SYSTEM FOR GENERATION OF HANDSHAKE RESPONSE MOTION USING NON-CONTACT MEASUREMENTS Reviewed

    Shunsuke Ota, Yoshihiro Sejima, Mitsuru Jindai

    ICIM'2016: PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON INDUSTRIAL MANAGEMENT   493 - 501   2016

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:CHINA AVIATION INDUSTRY PRESS  

    A handshake is an embodied interaction that displays closeness through physical contact. In our previous study, a small-size handshake robot system was developed. This robot system can shake hands with humans, and the generated handshake motion is preferred by humans. In that system, a magnetic sensor was used to measure the positions of the hands and the joint angles of the human, so receivers of the magnetic sensor had to be attached to the human arm. However, for a natural handshake interaction, the hand position and posture of the human should be measured without contact. In this paper, we develop a small-size handshake robot system that generates a handshake response motion using non-contact measurements. In this system, a Kinect and a Leap Motion controller are installed: the human position at long range is measured by the Kinect, and the hand position at close range is measured with high accuracy by the Leap Motion controller. Furthermore, sensory evaluations are performed to confirm the hand motion generated by the robot system using non-contact measurement. The results show that the handshake robot system can generate a handshake response motion using non-contact measurements in the same way as the previous system, and that the generated response motion is preferred by humans. Thus, the effectiveness of the system is demonstrated.

    Web of Science

    researchmap

  • Speech-driven embodied entrainment character system with pupillary response Reviewed

    Yoshihiro Sejima, Yoichiro Sato, Tomio Watanabe, Mitsuru Jindai

    MECHANICAL ENGINEERING JOURNAL   3 ( 4 )   2016

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:JAPAN SOC MECHANICAL ENGINEERS  

    We have developed a speech-driven embodied entrainment character called "InterActor" that has functions of both speaker and listener for supporting human interaction and communication. This character would generate communicative actions and movements such as nodding, body movements, and eyeball movements by using only speech input. In this study, we focus specifically on the pupillary responses related to human emotions in embodied interaction and communication. Pupillary responses in human face-to-face communication are analyzed using an embodied communication system with a line-of-sight measurement device. Based on the analysis results, we enhance the functions of the character and develop an advanced speech-driven embodied entrainment character system for conveying empathy. Using only speech input, this system changes the pupil size of characters based on the analysis. Through sensory evaluation, we perform experiments to determine the effects of the developed system. The results reveal that the system is effective in human interaction and communication.

    DOI: 10.1299/mej.15-00314

    Web of Science

    researchmap

  • Estimation model of interaction-activated communication based on the heat conduction equation Reviewed

    Yoshihiro Sejima, Tomio Watanabe, Mitsuru Jindai

    Journal of Advanced Mechanical Design, Systems and Manufacturing   10 ( 9 )   2016

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:JAPAN SOC MECHANICAL ENGINEERS  

    © 2016 The Japan Society of Mechanical Engineers. In human interaction and communication, not only verbal messages but also nonverbal behavior such as nodding and paralanguage are rhythmically related and mutually synchronized among speakers. This synchrony of embodied rhythms unconsciously enhances a sense of unification and causes an interaction-activated communication in which nonverbal behaviors such as body movements and speech activity increase, and the embodied interaction is activated. In this paper, we propose the concept of an estimation model of interaction-activated communication based on the heat conduction equation with the characteristics of precipitous speed fluctuation and develop a model that estimates the degree of interaction-activated communication by using speech input only. Further, we evaluate the developed model in estimating the period of the interaction-activated communication in an avatar-mediated communication. The results demonstrate that the developed model is effective in estimating the interaction-activated communication.

    DOI: 10.1299/jamdsm.2016jamdsm0103

    Web of Science

    Scopus

    researchmap
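
    As background for the entry above, the classical one-dimensional heat conduction equation on which the estimation model is reported to be based has the standard textbook form shown below; how the paper maps speech activity onto this equation is not reproduced here.

        % Standard 1-D heat conduction equation (background only; the paper's
        % adaptation of it to speech activity is not shown here).
        \[
          \frac{\partial u(x,t)}{\partial t} = \alpha \, \frac{\partial^{2} u(x,t)}{\partial x^{2}}
        \]
        % u(x,t): temperature field, \alpha: thermal diffusivity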

  • Development of an expressible pupil response interface using hemispherical displays Reviewed

    Yoshihiro Sejima, Yoichiro Sato, Tomio Watanabe

    Proceedings - IEEE International Workshop on Robot and Human Interactive Communication   2015-November   285 - 290   2015.11

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    © 2015 IEEE. We have analyzed the entrainment between a speaker's speech and a listener's nodding in face-to-face communication, and developed iRT (InterRobot Technology) to generate a variety of communicative actions and movements such as nodding and body movements by using a speech input based on the entrainment analysis. In this study, to conduct basic research for realizing smooth communication during embodied interactions between humans and robots, we focus on the pupil response, which is related to human emotions during such interactions. We analyze the pupil response in human face-to-face communication by using an embodied communication system with a line-of-sight measurement device. On the basis of this analysis, using hemispherical displays, we develop an expressible pupil response interface in which the iRT is applied to enhance embodied interaction between humans and robots. This system enables expression of the pupil response by using only speech input. In addition, the effectiveness of the developed system is demonstrated experimentally.

    DOI: 10.1109/ROMAN.2015.7333618

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/ro-man/ro-man2015.html#SejimaSW15

  • Development of a speech-driven embodied entrainment character system with pupil response Reviewed

    Yoshihiro Sejima, Yoichiro Sato, Tomio Watanabe, Mitsuru Jindai

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   9173   378 - 386   2015

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:Springer Verlag  

    © Springer International Publishing Switzerland 2015. We have developed a speech-driven embodied entrainment character called “InterActor” that had functions of both speaker and listener for supporting human interaction and communication. This character would generate communicative actions and movements such as nodding, body movements, and eyeball movements by using only speech input. In this paper, we analyze the pupil response during the face-to-face communication and non-face-to-face communication with the typical users of the character system. On the basis of the analysis results, we enhance the functionalities of the character and develop an advanced speech-driven embodied entrainment character system for expressing the pupil response.

    DOI: 10.1007/978-3-319-20618-9_38

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/hci/hci2015-5.html#SejimaSWJ15

  • Development of an interaction-activated communication model based on a heat conduction equation in voice communication Reviewed

    Yoshihiro Sejima, Tomio Watanabe, Mitsuru Jindai

    IEEE RO-MAN 2014 - 23rd IEEE International Symposium on Robot and Human Interactive Communication: Human-Robot Co-Existence: Adaptive Interfaces and Systems for Daily Life, Therapy, Assistance and Socially Engaging Interactions   2014-October ( October )   832 - 837   2014.10

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    © 2014 IEEE. In a previous study, we developed an embodied virtual communication system for human interaction analysis by synthesis in avatar-mediated communication and confirmed the close relationship between speech overlap and the period for activating embodied interaction and communication through avatars. In this paper, we propose an interaction-activated communication model based on the heat conduction equation in heat-transfer engineering for enhancing empathy between a human and a robot during embodied interaction in avatar-mediated communication. Further, we perform an evaluation experiment to demonstrate the effectiveness of the proposed model in estimating the period of interaction-activated communication in avatar-mediated communication. The results suggest that the proposed model is effective in estimating interaction-activated communication.

    DOI: 10.1109/ROMAN.2014.6926356

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/ro-man/ro-man2014.html#SejimaWJ14

  • An embodied group entrainment characters system based on the model of lecturer's eyeball movement in voice communication Reviewed

    Yoshihiro Sejima, Tomio Watanabe, Mitsuru Jindai, Atsushi Osa, Yukari Zushi

    ACHI 2014 - 7th International Conference on Advances in Computer-Human Interactions   351 - 358   2014

     More details

    Publishing type:Research paper (international conference proceedings)  

    Copyright © IARIA, 2014. We have developed a speech-driven embodied group entrained communication system called "SAKURA" for enabling group interaction and communication. In this system, speech-driven computer-generated (CG) characters called InterActors with functions of both speakers and listeners are entrained as a teacher and some students in a virtual classroom by generating communicative actions and movements. In this study, for enhancing group interaction and communication, we analyze the eyeball movements of a lecturer communicating in a virtual group by using an embodied communication system with a line-of-sight measurement device. On the basis of the analysis results, we propose an eyeball movement model that consists of a saccade model and a model of the lecturer's gaze at the audience, called "group gaze model." The saccade model reveals eyeball movement with a delay of 0.20 s with respect to the lecturer's head movement. A group gaze model reveals the rate of the lecturer's gaze (Center: 60%, Left-side: 27%, Right-side: 13%). Then, we develop an advanced communication system in which the proposed model is used with SAKURA. Using this system, we perform experiments and carry out sensory evaluation for determining the effects of the proposed model. The results reveal that the proposed model is effective for group interaction and communication in the speech-driven embodied group entrainment characters system.

    Scopus

    researchmap
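
    The entry above reports a 0.20 s eyeball-movement delay relative to the lecturer's head movement and gaze rates of Center 60%, Left 27%, Right 13%. The Python sketch below only samples gaze directions at those reported rates and shifts a head trajectory by the reported delay; the sampling scheme, update rate, and trajectory format are assumptions for illustration.

        # Illustrative only: gaze rates and the 0.20 s delay come from the abstract above;
        # everything else (update rate, trajectory format) is assumed.
        import random

        GAZE_RATES = {"center": 0.60, "left": 0.27, "right": 0.13}
        SACCADE_DELAY_S = 0.20

        def sample_gaze_targets(n, seed=0):
            """Draw n gaze directions according to the reported group-gaze rates."""
            rng = random.Random(seed)
            return rng.choices(list(GAZE_RATES), weights=list(GAZE_RATES.values()), k=n)

        def delayed_eye_motion(head_positions, dt=0.05):
            """Shift a sampled head trajectory by the 0.20 s saccade delay."""
            shift = int(round(SACCADE_DELAY_S / dt))
            return head_positions[:1] * shift + head_positions[:-shift or None]

        if __name__ == "__main__":
            print(sample_gaze_targets(10))
            head = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
            print(delayed_eye_motion(head))   # eyes follow the head 0.20 s later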

  • A study on an eye-contact measurement method for mental health care Reviewed

    Yukari Zushi, Yoshihiro Sejima, Atsushi Osa, Mitsuru Jindai, Tomio Watanabe

    Proceedings of the SICE Annual Conference   2694 - 2695   2013

     More details

    Publishing type:Research paper (international conference proceedings)  

    In this study, we performed an experiment examining an advanced line-of-sight measurement method in human face-to-face communication in order to investigate an eye-contact measurement method that supports mental health care. In this experiment, an acrylic board was placed between two talkers as a virtual display, and the gaze points were analyzed to examine the effects of the presence of the acrylic board on the talkers. The result demonstrated that the talkers' lines of sight were focused on the acrylic board.

    Scopus

    researchmap

  • Eyeball movement model for lecturer character in speech-driven embodied group entrainment system Reviewed

    Yoshihiro Sejima, Tomio Watanabe, Mitsuru Jindai, Atsushi Osa

    Proceedings - 2013 IEEE International Symposium on Multimedia, ISM 2013   506 - 507   2013

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    In our previous research, we proposed an eyeball movement model that consists of a saccade model and a group gaze model for enhancing group interaction and communication. In this study, in order to evaluate the effects of the proposed model, we develop an advanced communication system in which the proposed model is used with a speech-driven embodied group entrained communication system. The effectiveness of the proposed model is demonstrated by performing communication experiments with a sensory evaluation using the developed system. © 2013 IEEE.

    DOI: 10.1109/ISM.2013.99

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/ism/ism2013.html#SejimaWJO13

  • A speech-driven embodied group entrainment system with the model of lecturer's eyeball movement. Reviewed

    Yoshihiro Sejima, Tomio Watanabe, Mitsuru Jindai, Atsushi Osa, Yukari Zushi

    The 21st IEEE International Symposium on Robot and Human Interactive Communication, IEEE RO-MAN 2012, Paris, France, September 9-13, 2012   1086 - 1091   2012

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    DOI: 10.1109/ROMAN.2012.6343893

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/ro-man/ro-man2012.html#SejimaWJOZ12

  • A virtual audience system for enhancing embodied interaction based on conversational activity Reviewed

    Yoshihiro Sejima, Yutaka Ishii, Tomio Watanabe

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   6772 LNCS ( PART 2 )   180 - 189   2011

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:SPRINGER-VERLAG BERLIN  

    In this paper, we propose a model for estimating conversational activity based on the analysis of enhanced embodied interaction, and develop a virtual audience system. The proposed model is applied to a speech-driven embodied entrainment wall picture, which is a part of the virtual audience system, for promoting enhanced embodied interaction. This system generates activated movements based on the estimated value of conversational activity in enhanced interaction and provides a communication environment wherein embodied interaction is promoted by the virtual audience. The effectiveness of the system was demonstrated by means of sensory evaluations and behavioral analysis of 20 pairs of subjects involved in avatar-mediated communication. © 2011 Springer-Verlag.

    DOI: 10.1007/978-3-642-21669-5_22

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/hci/hci2011-12.html#SejimaIW11

  • Effects of delayed presentation of self-embodied avatar motion with network delay Reviewed

    Yutaka Ishii, Yoshihiro Sejima, Tomio Watanabe

    2010 4th International Universal Communication Symposium, IUCS 2010 - Proceedings   262 - 267   2010

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    A large network delay is likely to obstruct human interaction in telecommunication systems such as telephony or video conferencing systems. In spite of the extensive investigations that have been carried out on network delays of voice and image data, there have been few studies regarding support for embodied communication under the conditions of network delay. To maintain smooth human interaction, it is important that the various ways in which delay is manifested are understood. We have already developed an embodied virtual communication system that uses an avatar called "VirtualActor," in which speakers who are remotely located from one another can share embodied interaction in the same virtual space. Responses to a questionnaire that was used in a communication experiment confirmed that a fixed 500-ms network delay has no effect on interactions via VirtualActors. In this paper, we propose a method of presenting a speaker's voice and an avatar's motion feedback in the case of a 1.5-s network delay using VirtualActors. We perform two communication experiments under different conditions of network delay. The aim of the first experiment is to examine the effect of a random time delay on the conversation. The second experiment is conducted under the conditions of a free-form conversation that takes place in 5 scenarios - 1 real-time scenario without a network delay and 4 scenarios with network delay that involve a combination of a delay in the talker's voice and in his/her avatar's motion feedback. The subjects consisted of a total of 30 students who worked in 15 pairs and who were familiar with each other. A sensory evaluation shows the effects upon communication of delays in the avatar's motion feedback, from the viewpoint of supporting the interaction. ©2010 IEEE.

    DOI: 10.1109/IUCS.2010.5666007

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/iucs/iucs2010.html#IshiiSW10

  • A speech-driven embodied entrainment wall picture system for supporting virtual communication Reviewed

    Yoshihiro Sejima, Tomio Watanabe

    ACM International Conference Proceeding Series   309 - 314   2009

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:ACM  

    We have developed a speech-driven embodied entrainment system called "InterPicture" and have demonstrated the effectiveness of the system using an embodied virtual communication system. InterPicture is an image containing flowers that react to the speech input of talkers. We confirmed the importance of providing a communication environment in which not only avatars but also CG objects placed around the avatars are related to virtual communication. In this study, we have developed an advanced speech-driven embodied entrainment system called "InterWall". This system projects a wall picture widely onto the wall surrounding the avatars and behaves as a listener by producing nodding and body movements on the basis of the speech input of a talker. Further, a communication experiment has been performed, and the effectiveness of "InterWall" has been demonstrated by carrying out a sensory evaluation and a speech-overlap analysis for 20 pairs of 40 talkers. Copyright 2009 ACM.

    DOI: 10.1145/1667780.1667844

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/iucs/iucs2009.html#SejimaW09

  • An embodied virtual communication system with a speech-driven embodied entrainment picture. Reviewed

    Yoshihiro Sejima, Tomio Watanabe

    RO-MAN 2009: THE 18TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1 AND 2   979 - 984   2009

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    We have already developed an embodied virtual communication system for human interaction analysis by synthesis. This system provides two remote talkers with a communication environment in which embodied interaction is shared by Virtual Actors including the talkers themselves through a virtual face-to-face scene. We confirmed the importance of embodied sharing in embodied communication by using the analysis-by-synthesis system. We have also demonstrated the effects of nodding responses for embodied interaction and communication support. In this paper, we develop an embodied virtual communication system with a speech-driven embodied entrainment picture "InterPicture" for supporting virtual communication. The effects of the developed system are demonstrated by a sensory evaluation and speech-overlap analysis in the communication experiment for 20 pairs of talkers.

    DOI: 10.1109/ROMAN.2009.5326243

    Web of Science

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/ro-man/ro-man2009.html#SejimaW09

  • Analysis by synthesis of embodied communication via VirtualActor with a nodding response model Reviewed

    Yoshihiro Sejima, Tomio Watanabe, Michiya Yamamoto

    Proceedings of the 2nd International Symposium on Universal Communication, ISUC 2008   225 - 230   2008

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE COMPUTER SOC  

    In this study, we develop the embodied virtual communication system with the speech-driven nodding response model for the analysis by synthesis of embodied communication. Using the proposed system in embodied virtual communication, we perform experiments and carry out sensory evaluation and voice-motion analysis to demonstrate the effects of nodding responses on a talker's avatar called VirtualActor. The result of the study shows that superimposed nodding responses in a virtual space promote communication. © 2008 IEEE.

    DOI: 10.1109/ISUC.2008.71

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/iucs/isuc2008.html#SejimaWY08

▼display all

MISC

  • 人を惹き込むコミュニケーションシステム「かかわりEye」の紹介

    瀬島 吉裕

    情報研究 = Journal of informatics : 関西大学総合情報学部紀要 / 関西大学総合情報学部 編   ( 55 )   25 - 32   2022.7

     More details

    Language:Japanese  

    CiNii Books

    researchmap

  • オンラインコミュニケーションにおける背景色の動的変化による雰囲気表現法の開発—Development of a method to express the atmosphere by dynamically changing the background color in online communication—知覚情報研究会・VR心理,複合現実型実応用および一般

    稲垣 早紀, 森田 大樹, 井上 遼介, 松本 康希, 瀬島 吉裕

    電気学会研究会資料. PI = The papers of Technical Meeting on "Perception Information", IEE Japan, / 知覚情報研究会 [編]   2021 ( 31-45 )   5 - 9   2021.8

     More details

    Language:Japanese   Publisher:電気学会  

    researchmap

  • Development of a Speech-Driven Pupil Response Pet-Robot for Promoting Self-Disclosure

    OKUBO Kiho, SEJIMA Yoshihiro

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec)   2021   1P3-E03   2021

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    Dilated pupils make a favorable impression. Focusing on this benefit, we have developed a pupil response pet-robot for improving familiarity. This robot can enhance familiarity by enlarging its pupils in synchronization with body contact. In this study, focusing on creating an atmosphere that makes it easy to talk with the pet-robot so as to promote self-disclosure during communication, we investigated the pupil size that gives a favorable impression with or without speech input. Based on the experimental results, we developed a speech-driven pupil response pet-robot that generates both a clearly enlarged pupil response and a blink synchronized with the speech rhythm.

    DOI: 10.1299/jsmermd.2021.1p3-e03

    researchmap
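
    The behavior described in the abstract above (a clearly enlarged pupil while speech input is present, and a blink synchronized with the speech rhythm) is illustrated by the minimal sketch below. The diameters, the threshold, the time constant, and the blink-on-speech-onset rule are assumed values for illustration only, not parameters taken from the paper.

        BASE_DIAMETER_MM = 4.0     # assumed resting pupil diameter
        DILATED_DIAMETER_MM = 7.0  # assumed "clearly enlarged" diameter during speech
        SPEECH_THRESHOLD = 0.05    # assumed RMS level that counts as speech input
        TAU_S = 0.3                # assumed smoothing time constant
        DT_S = 0.05                # assumed 20 Hz update period

        def update_pupil(diameter_mm, speech_rms):
            """Move the pupil diameter toward the dilated size while speech is detected."""
            target = DILATED_DIAMETER_MM if speech_rms > SPEECH_THRESHOLD else BASE_DIAMETER_MM
            alpha = DT_S / (TAU_S + DT_S)   # first-order low-pass step
            return diameter_mm + alpha * (target - diameter_mm)

        def blink_now(prev_rms, speech_rms):
            """Trigger a blink on each speech onset as a stand-in for synchronization with the speech rhythm."""
            return prev_rms <= SPEECH_THRESHOLD < speech_rms

        if __name__ == "__main__":
            d, prev = BASE_DIAMETER_MM, 0.0
            for rms in [0.0, 0.0, 0.2, 0.3, 0.1, 0.0, 0.0, 0.4]:
                d = update_pupil(d, rms)
                print(round(d, 2), "blink" if blink_now(prev, rms) else "")
                prev = rms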

  • Development of an interaction-activated communication model with eye-gaze in voice communication

    SEJIMA Yoshihiro, OKAMOTO Ryohei, WATANABE Tomio

    The Proceedings of Design & Systems Conference   2021.31   2401   2021

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    In human communication, nonverbal behaviors such as facial expression, body movement, eye-gaze, and pupil response play an important role in realizing activated interaction and communication. In our previous study, an activated interaction through avatars was analyzed, and an estimation model for the activated interaction was developed by introducing the concept of heat-transfer engineering. This model estimates the amount of heat in the communication field as the degree of activated interaction, based on the assumption that each talker's heat is transmitted to the communication field. However, it is hard to estimate interactions that include the conveyance of eye-gaze, because the model has no eye-gaze parameter. Sharing the conveyance of mutual eye-gaze is important for producing activated interaction in voice communication, so a model that estimates this conveyance is desired. In this paper, focusing on eye-gaze, which plays an important role in producing activated interaction, we developed a model that estimates the degree of conveyance by the talker's eye-gaze in voice communication. This model estimates the degree of interaction-activated communication as heat transfer by combining the speech input and the eye-gaze input.

    DOI: 10.1299/jsmedsd.2021.31.2401

    researchmap

  • Impressions for the Difference of Pupillary Area in the Media of Pupil Expression

    SEJIMA Yoshihiro, KAWAMOTO Hiroki, SATO Yoichiro, WATANABE Tomio

    The Proceedings of Design & Systems Conference   2020.30   2409   2020

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    Pupils are related to one's own affect and can give favorable impressions to others depending on their size. Focusing on the effects of pupil size, previous research evaluated various pupil sizes using human images as stimuli. However, it is difficult to generate extreme pupil sizes in human images, and evaluating pupil size itself is difficult because the impression of a human image is formed not only by the pupil size but also by other facial parts. To evaluate the impression of pupil size itself, it is desirable to extract only the pupil and exclude the other facial parts. In this study, we analyzed the impressions of various pupil sizes, combining dilation and contraction, using two kinds of pupil expression media that we have developed. The analysis results demonstrated that a certain dilated pupil area has the potential to form favorable impressions regardless of the pupil expression medium.

    DOI: 10.1299/jsmedsd.2020.30.2409

    researchmap

  • Development of an interface with twinkling eyes by superimposing self-shadows on pupils

    SEJIMA Yoshihiro, NISHIDA Makiko, WATANABE Tomio

    The Proceedings of Mechanical Engineering Congress, Japan   2020   S12101   2020

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    DOI: 10.1299/jsmemecj.2020.s12101

    researchmap

  • Development of a teary-eyed robot that displays emotional empathy

    SEJIMA Yoshihiro, WATANABE Tomio

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec)   2020   1A1-F03   2020

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    Empathy is very important in the enhancement of rapport and is broadly divided into emotional empathy and cognitive empathy. However, there are few approaches to expressing emotional empathy that can be understood intuitively. In this study, we designed and developed a teary-eyed robot for displaying emotional empathy during human-robot interaction. This robot mimics the human lacrimal structure and can generate teary eyes by controlling the inflow of water. The effectiveness of the teary-eyed robot was confirmed experimentally.

    DOI: 10.1299/jsmermd.2020.1a1-f03

    researchmap

  • Minimal Design of Pupil Responses in Eye-Communication

    54 ( 11 )   723 - 728   2019.11

     More details

    Language:Japanese  

    CiNii Books

    researchmap

  • Efficient Visualization Method of Tube Diameter Based on Center Coordinates and its History

    上田満, 近藤真史, 茅野功, 瀬島吉裕, 佐藤洋一郎, 杉原雄策, 松本和幸

    電子情報通信学会技術研究報告   118 ( 412(MI2018 59-115)(Web) )   165‐168 (WEB ONLY)   2019.1

     More details

    Language:Japanese  

    J-GLOBAL

    researchmap

  • Effects of Difference in Pupillary Area on Impressions in the Pupil Response Interface

    21   9 - 14   2019

     More details

    Language:Japanese  

    CiNii Books

    researchmap

  • An embodied communication system with avatar‐shadow’s color expressions based on an interaction‐activated communication model in voice communication

    瀬島吉裕, 石井裕, 渡辺富夫

    日本機械学会論文集(Web)   85 ( 873 )   18-00074   2019

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    Shadows play an important role in the effects of three-dimension and the enhancement of reality in virtual space. On the other hand, addition of color change in human interaction and communication is effective for enhancing expressions such as affects and interactions. Therefore, by changing the color of avatar’s shadows based on the interaction, it is expected to activate and enhance an avatar-mediated interaction and communication. In this study, focusing on the effects of avatar-shadow’s color, we develop an embodied communication system which expresses the change of avatar-shadow color based on an interaction-activated communication model. The model estimates activation levels of embodied communication based on speech input, and the color of avatar-shadow is changed on the basis of the estimated values. A communication experiment is performed with 12 pairs of participants to evaluate the system. The results of the sensory evaluation demonstrate that the color expressions of avatar-shadow are effective for visualizing embodied interaction and communication.

    DOI: 10.1299/transjsme.18-00074

    J-GLOBAL

    researchmap
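
    As a minimal sketch of the idea in the abstract above, an activation estimate in the range 0 to 1 (assumed to come from the speech-based estimation model) is mapped to an avatar-shadow color. The linear color ramp and the two endpoint colors are hypothetical and do not reproduce the paper's actual estimation model or color design.

        # Hypothetical mapping from an activation estimate (0.0-1.0) to an RGB shadow color.
        COOL = (60, 60, 90)      # assumed shadow color for calm conversation
        WARM = (255, 120, 40)    # assumed shadow color for highly activated conversation

        def shadow_color(activation):
            """Linearly interpolate between the two endpoint colors."""
            a = max(0.0, min(1.0, activation))
            return tuple(round(c0 + a * (c1 - c0)) for c0, c1 in zip(COOL, WARM))

        if __name__ == "__main__":
            for a in (0.0, 0.5, 1.0):
                print(a, shadow_color(a))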

  • ソーシャルロボットにおける時間的眼色変化を伴う感情表現に対する色覚多様性者の印象評価

    瀬島吉裕, 佐藤洋一郎

    日本福祉工学会学術講演会講演論文集   22nd   125‐126   2018.11

     More details

    Language:Japanese  

    J-GLOBAL

    researchmap

  • 複数の組み込みプロセッサとGPUを併用した俯瞰画像合成システムの開発

    田所勇生, 近藤真史, 瀬島吉裕, 佐藤洋一郎, 河本崇幸, 石原洋之

    情報科学技術フォーラム講演論文集   17th   119‐120   2018.9

     More details

    Language:Japanese  

    J-GLOBAL

    researchmap

  • 医用画像における管状組織を対象とした管径可視化システムの開発

    上田満, 近藤真史, 茅野功, 瀬島吉裕, 佐藤洋一郎, 杉原雄策, 松本和幸

    情報科学技術フォーラム講演論文集   17th   333‐334   2018.9

     More details

    Language:Japanese  

    J-GLOBAL

    researchmap

  • 対面における身体接触を伴う共感表現を付加した瞳孔反応システムの開発

    瀬島吉裕, 前田涼介, 長谷川大地, 佐藤洋一郎, 渡辺富夫

    日本機械学会ロボティクス・メカトロニクス講演会講演論文集(CD-ROM)   2018   ROMBUNNO.1P2‐F11   2018.6

     More details

    Language:Japanese  

    J-GLOBAL

    researchmap

  • A Recursive Approximation Approach of Projective Transformation for Free-Viewpoint Cameras

    田所勇生, 近藤真史, 瀬島吉裕, 佐藤洋一郎

    電子情報通信学会技術研究報告   117 ( 400(CAS2017 109-132) )   45‐50   2018.1

     More details

    Language:Japanese  

    J-GLOBAL

    researchmap

  • コミュニケーションロボットのための時間的眼色変化を付加した感情表現

    瀬島吉裕, 佐藤洋一郎

    日本感性工学会大会予稿集(CD-ROM)   20th (Web)   ROMBUNNO.P‐44 (WEB ONLY)   2018

     More details

    Language:Japanese  

    J-GLOBAL

    researchmap

  • Analysis of Impressions on Color Vision Variation for Attentional Color in Dynamic Information Presentation

    瀬島 吉裕, 滝本 裕則, 佐藤 洋一郎, 神代 充

    日本福祉工学会誌   20 ( 1 )   27 - 34   2018

     More details

    Language:Japanese   Publisher:日本福祉工学会  

    CiNii Books

    J-GLOBAL

    researchmap

    Other Link: http://search.jamas.or.jp/link/ui/2018372157

  • 深層ニューラルネットワークを用いた外観的“かわいい”の定量的評価

    長谷川大地, 瀬島吉裕, 田所勇生, 前田涼介, 佐藤洋一郎

    日本感性工学会大会予稿集(CD-ROM)   20th (Web)   ROMBUNNO.F5‐01 (WEB ONLY)   2018

     More details

    Language:Japanese  

    J-GLOBAL

    researchmap

  • 複数ロボットからの画像を集約する多方向画像共有システムの開発

    太田俊介, 瀬島吉裕, 福田忠生, 山内仁, 保田俊行, 神代充

    計測自動制御学会システムインテグレーション部門講演会(CD-ROM)   18th   ROMBUNNO.2C1‐04   2017.12

     More details

    Language:Japanese  

    J-GLOBAL

    researchmap

  • 動的情報提示における注目色の速度変化に対する色覚多様性者の印象評価

    瀬島吉裕, 滝本裕則, 佐藤洋一郎, 神代充

    日本福祉工学会学術講演会講演論文集   21st   71‐72   2017.11

     More details

    Language:Japanese  

    J-GLOBAL

    researchmap

  • 場の盛り上がり推定モデルに基づく身体的アバタ影色表現システム

    瀬島吉裕, 石井裕, 渡辺富夫

    日本機械学会設計工学・システム部門講演会論文集(CD-ROM)   27th   ROMBUNNO.1208   2017.9

     More details

    Language:Japanese  

    J-GLOBAL

    researchmap

  • 半球ディスプレイを用いた瞳孔反応インタフェース

    江川翔一, 瀬島吉裕, 前田涼介, 佐藤洋一郎, 渡辺富夫

    日本機械学会設計工学・システム部門講演会論文集(CD-ROM)   27th   ROMBUNNO.1205   2017.9

     More details

    Language:Japanese  

    J-GLOBAL

    researchmap

  • スポーツビジョンと運動系を統合した周辺視野トレーニングシステムの開発

    瀬島吉裕, 岡本拓也, 江川翔一, 前田涼介, 佐藤洋一郎

    日本機械学会設計工学・システム部門講演会論文集(CD-ROM)   27th   ROMBUNNO.1207   2017.9

     More details

    Language:Japanese  

    J-GLOBAL

    researchmap

  • 発話呼気に同期する瞳孔CGモデルを瞳孔に重畳合成したビデオチャットシステムの開発

    前田涼介, 江川翔一, 瀬島吉裕, 佐藤洋一郎, 渡辺富夫

    日本機械学会設計工学・システム部門講演会論文集(CD-ROM)   27th   ROMBUNNO.2607   2017.9

     More details

    Language:Japanese  

    J-GLOBAL

    researchmap

  • 熱伝導方程式に基づく発話呼気に同調する瞳孔反応生成モデルの開発

    瀬島吉裕, 江川翔一, 前田涼介, 佐藤洋一郎, 渡辺富夫

    日本機械学会年次大会講演論文集(CD-ROM)   2017   ROMBUNNO.S1210107   2017.9

     More details

    Language:Japanese  

    J-GLOBAL

    researchmap

  • A method of reducing amount of operations on the bit serial multiply-accumulator and its application

    岡本 大地, 近藤 真史, 瀬島 吉裕, 横川 智教, 有本 和民, 佐藤 洋一郎

    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報   116 ( 96 )   35 - 40   2016.6

     More details

    Language:Japanese   Publisher:電子情報通信学会  

    CiNii Books

    researchmap

  • A speech-driven embodied entrainment character system for supporting line-of-sight interaction using an interaction-activated communication model

    SEJIMA Yoshihiro, ONO Koki, WATANABE Tomio

    Transactions of the JSME (in Japanese)   82 ( 842 )   16-00114   2016

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    Line-of-sight cues such as gaze and eye-contact play an important role in enhancing embodied interaction and communication through avatars. In addition, many eyeball movement models and communication systems using avatars with line-of-sight have been proposed and developed. However, the eyeball movements generated by these models were not designed to enhance embodied interaction such as activated communication, because the models generate eyeball movements stochastically based on human gaze behavior. Therefore, in order to enhance and promote embodied interaction from the viewpoint of line-of-sight, it is desirable to design a line-of-sight interaction that relates to interaction-activated communication. In this study, we analyze the relationship between human gaze behavior and interaction-activated communication by using an embodied communication system with line-of-sight measurement devices. We then propose a line-of-sight model based on this analysis and develop an advanced embodied communication system by applying the proposed model to a speech-driven embodied entrainment character. This system generates eyeball movements such as gaze and looking away based on the proposed model and provides a communication environment wherein embodied interaction is promoted. The effectiveness of the system is demonstrated by means of sensory evaluations of 15 pairs of subjects involved in avatar-mediated communication.

    DOI: 10.1299/transjsme.16-00114

    researchmap
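
    The behavior described above (gaze and looking away generated from speech input alone) can be sketched as follows: an assumed activation estimate rises while speech is detected and decays otherwise, and the per-frame probability of a look-away falls as the activation rises. The update rule and all coefficients are hypothetical and only illustrate the general idea, not the proposed line-of-sight model itself.

        import random

        DECAY = 0.95             # assumed per-frame decay of the activation estimate
        GAIN = 0.3               # assumed per-frame contribution of detected speech
        MAX_LOOK_AWAY_P = 0.02   # assumed look-away probability when activation is 0

        def step(activation, speech_on, rng=random):
            """Update the activation estimate from speech and decide gaze vs. look-away."""
            activation = min(1.0, DECAY * activation + (GAIN if speech_on else 0.0))
            p_away = MAX_LOOK_AWAY_P * (1.0 - activation)   # look away less often when activated
            state = "look_away" if rng.random() < p_away else "gaze"
            return activation, state

        if __name__ == "__main__":
            act = 0.0
            for speech in [True, True, False, True, False, False, False, True]:
                act, state = step(act, speech)
                print(round(act, 2), state)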

  • A bit serial multiply and accumulator with negative number operation

    岡本 大地, 近藤 真史, 瀬島 吉裕

    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報   115 ( 316 )   115 - 120   2015.11

     More details

    Language:Japanese   Publisher:電子情報通信学会  

    CiNii Books

    researchmap

  • A bit serial multiply and accumulator with negative number operation

    岡本 大地, 近藤 真史, 瀬島 吉裕

    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報   115 ( 315 )   115 - 120   2015.11

     More details

    Language:Japanese   Publisher:電子情報通信学会  

    CiNii Books

    researchmap

  • An Attempt of PBL Education Using Delivery Gymnastic Experiment by Regional Collaboration

    SEJIMA Yoshihiro, YAMAUCHI Hitoshi, MATSUI Toshiki, SATO Yoichiro, TAKASUGI Seiji

    Journal of JSEE   63 ( 5 )   5_122 - 5_127   2015

     More details

    Language:Japanese   Publisher:Japanese Society for Engineering Education  

    DOI: 10.4307/jsee.63.5_122

    researchmap

  • 1110 Development of an Embodied Communication System with Line-of-Sight Model for Speech-Driven Embodied Entrainment Character

    SEJIMA Yoshihiro, ONO Koki, YAMAMOTO Mayo, ISHII Yutaka, WATANABE Tomio

    The Proceedings of Design & Systems Conference   2015 ( 0 )   1110-1 - 1110-9   2015

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    We have already analyzed enhanced interaction through avatars using an embodied virtual communication system and proposed an interaction-activated communication model based on the heat conduction equation. In this paper, we analyzed the interaction between the talker's gaze and the degree of interaction-activated communication in face-to-face communication, by using an embodied communication system with a line-of-sight measurement device. On the basis of this analysis, we proposed a line-of-sight model, which consists of an eyeball delay movement model and a look away model. Then, we developed an advanced communication system in which the proposed model is applied to speech-driven embodied entrainment character for enhancing embodied interaction and communication. This system generates the eyeball movement on the basis of the proposed model by using the speech input alone to realize smooth and enjoyable interaction and communication.

    DOI: 10.1299/jsmedsd.2015.25._1110-1_

    CiNii Books

    researchmap

  • Assessing the Educational Effectiveness of Human Resource Development Education with Information System Engineers through a Rescue Robot Contest

    FUKUTA Tadao, SEJIMA Yoshihiro, OBUNAI Kiyotaka, YAMAUCHI Hitoshi, MATSUI Toshiki

    Journal of JSEE   63 ( 5 )   5_87 - 5_92   2015

     More details

    Language:Japanese   Publisher:Japanese Society for Engineering Education  

    Project-based learning (PBL) is effective for teaching students about the engineering design process, as it leads to higher educational effectiveness among lower-grade students. In this paper, we introduced an original PBL program that involved the use of an autonomous mobile robot in a rescue robot contest, and we assessed its educational effectiveness among several categories of students who participated in the contest. The results demonstrated that essential abilities for information engineering could be acquired by participating in the contest, and both the PBL program and contest provided a good educational environment for lower-grade students.

    DOI: 10.4307/jsee.63.5_87

    researchmap

  • Development of an Eye Contact Measurement System for Quantitative Evaluation of Mental Health

    瀬島 吉裕, 長 篤志, 神代 充

    日本福祉工学会誌   17 ( 1 )   11 - 19   2015

     More details

    Language:Japanese   Publisher:日本福祉工学会  

    CiNii Books

    researchmap

    Other Link: http://search.jamas.or.jp/link/ui/2017235199

  • The task dependence in left and right deviation of gazing time and application for a remote education support system

    Tanabe Makoto, Sejima Yoshihiro, Yamamoto Masayuki, Osa Atsushi

    ITE Technical Report   38 ( 0 )   13 - 16   2014

     More details

    Language:Japanese   Publisher:The Institute of Image Information and Television Engineers  

    A previous study reported that the lecturer's avatar in a remote-education communication support system with a group gaze model, in which the avatar gazes at the audience at rates of 13% right, 60% center, and 27% left in the virtual classroom, is effective for group interaction and communication, especially for the sense of unity. We believe this result can be explained by an experimental finding that a lecturer in a real classroom gazes for a longer duration to the left side than to the right side. In this study, we investigated the relationship between gaze duration and the experimental tasks assigned to the participants. The results show that gazing time depends on the experimental task, and two tasks produced a left-right deviation. Furthermore, we discuss why the group gaze model can improve the sense of unity in the virtual classroom.

    DOI: 10.11485/itetr.38.10.0_13

    CiNii Books

    researchmap

    Other Link: http://www.lib.yamaguchi-u.ac.jp/yunoca/handle/2014010498

  • 3A04 Development of Fundamental Competencies for Working Persons by Creative Design Education Program

    SAKIYAMA Satoshi, OKADA Hideki, Sejima Yoshihiro, KIKUMA Yoshie, Miura Fusanori

    Proceedings of Annual Conference of Japanese Society for Engineering Education   2014 ( 0 )   420 - 421   2014

     More details

    Language:Japanese   Publisher:Japanese Society for Engineering Education  

    DOI: 10.20549/jseeja.2014.0_420

    CiNii Books

    researchmap

  • Novel Educational Project for Science and Technology developed in Yamaguchi Prefecture

    SAKIYAMA Satoshi, OKADA Hideki, SEJIMA Yoshihiro, MIURA Fusanori

    Proceedings of the Annual Meeting of Japan Society for Science Education   37 ( 0 )   172 - 173   2013

     More details

    Language:Japanese   Publisher:Japan Society for Science Education  

    DOI: 10.14935/jssep.37.0_172

    CiNii Books

    researchmap

  • An interaction-activated communication support system using a virtual audience with an estimated model of conversational activity Reviewed

    Yoshihiro Sejima, Tomio Watanabe, Yutaka Ishii

    Nihon Kikai Gakkai Ronbunshu, C Hen/Transactions of the Japan Society of Mechanical Engineers, Part C   79 ( 807 )   4095 - 4107   2013

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    Previously, we developed an embodied virtual communication system (EVCOS) for human interaction analysis by synthesis, and confirmed the importance of embodied sharing in avatar-mediated communication. In this paper, we analyze human interaction and communication in interaction-activated conversation for supporting avatar-mediated communication by using EVCOS, and we propose a model for estimating conversational activity on the basis of this analysis. Further, we develop an interaction-activated communication support system for enhancing embodied interaction and communication by applying the proposed model to a virtual audience comprising interactive CG objects. In this system, the virtual audience generates entrained nodding responses as well as dynamic movements based on the estimated conversational activity during the interaction-activated communication period. The effectiveness of the system is demonstrated by means of sensory evaluations and behavioral analysis of 20 pairs of subjects involved in avatar-mediated communication. © 2013 The Japan Society of Mechanical Engineers.

    DOI: 10.1299/kikaic.79.4095

    Scopus

    researchmap

  • 9-336 Development of a Measurement System Estimating Degree of Concentration Based on Head Direction Using Image Processing

    SEJIMA Yoshihiro, OKADA Hideki, SAKIYAMA Satoshi, MIURA Fusanori

    Proceedings of Annual Conference of Japanese Society for Engineering Education   2013 ( 0 )   680 - 681   2013

     More details

    Language:Japanese   Publisher:Japanese Society for Engineering Education  

    DOI: 10.20549/jseeja.2013.0_680

    CiNii Books

    researchmap

  • A speech-driven embodied group entrained system with an eyeball movement model for lecturer character Reviewed

    Yoshihiro Sejima, Tomio Watanabe, Mitsuru Jindai, Atsushi Osa

    Nihon Kikai Gakkai Ronbunshu, C Hen/Transactions of the Japan Society of Mechanical Engineers, Part C   79 ( 799 )   827 - 836   2013

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    We have already developed a speech-driven embodied group entrained communication system called "SAKURA" for activating group interaction and communication, in which speech-driven CG characters called InterActors, with the functions of both speaker and listener, are entrained to one another as a teacher and some students in the same virtual classroom by generating communicative actions and motions. In this paper, the eyeball movement of a lecturer in virtual group communication is analyzed by using an embodied communication system with a line-of-sight measurement device. On the basis of the analysis, we propose an eyeball movement model, which consists of a saccade model and a model of the lecturer's gaze at the audience called "group gaze model." Then, an advanced communication system in which the proposed model is applied to SAKURA is developed for enhancing group interaction and communication. This system generates the lecturer's eyeball movement based on the proposed model by using only speech input. Using the system, we perform experiments on the effects of the proposed model by sensory evaluation. The results demonstrate that the system with the model is effective for group interaction and communication. © 2013 The Japan Society of Mechanical Engineers.

    DOI: 10.1299/kikaic.79.827

    Scopus

    CiNii Books

    researchmap

    Other Link: http://www.lib.yamaguchi-u.ac.jp/yunoca/handle/2013010010

  • The Effects of CG Characters' Physical Characteristics and Background on Impression Formation:- Examination based on Repeated Exposure -

    MATSUDA Ken, KUSUMI Takashi, SEJIMA Yoshihiro

    Transactions of Japan Society of Kansei Engineering   12 ( 1 )   67 - 75   2013

     More details

    Language:English   Publisher:Japan Society of Kansei Engineering  

    We investigated the effects of pre-preferences for CG characters (avatar), gender of CG characters, exposure frequency, and the concordance between CG characters and the background on the impression formed about CG characters. During 8 sessions, 36 participants were shown a succession of CG characters presented against concordant or discordant backgrounds. During sessions 1, 3, 5, and 7, participants evaluated each CG character in terms of post-preference, intelligence, and reliability using a 7-point scale. Session 8 involved a delay condition. The results showed that post-preference evaluation scores in the high pre-preference condition were associated with better first impressions of CG characters, and increased with exposure frequency. However, background conformity influenced the impressions of CG characters when pre-preferences were medium or low. In addition, post-preference and reliability evaluation scores were higher for female than male characters.

    DOI: 10.5057/jjske.12.67

    researchmap

  • 3202 Development of an Eye-Contact Measurement System Using a Dichroic Mirror in Face-to-Face Communication

    SEJIMA Yoshihiro, ZUSHI Yukari, OSA Atsushi, WATANABE Tomio, JINDAI Mitsuru

    The Proceedings of Design & Systems Conference   2013 ( 0 )   3202-1 - 3202-6   2013

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    We have developed speech-driven embodied entrainment characters with an eyeball movement model, which consists of an eyeball delay movement model and a gaze withdrawal model. These characters can support embodied interaction and communication in avatar-mediated communication and demonstrate that line-of-sight plays an important role in enhancing embodied interaction and communication. In this paper, we propose a concept for an eye-contact measurement method and develop an eye-contact measurement system using a dichroic mirror in face-to-face communication for enhancing embodied interaction and communication. In this system, a dichroic mirror, which transmits visible rays and reflects infrared rays, is arranged between two talkers as a virtual display, and the gaze points on the virtual display are estimated. When the estimated gaze points are present in the eye area of the opposite talker, an eye-contact is established in face-to-face communication.

    DOI: 10.1299/jsmedsd.2013.23._3202-1_

    CiNii Books

    researchmap

  • S121013 Development of an Interaction-activated Communication Model Based on the Heat Conduction Equation in Voice Communication

    SEJIMA Yoshihiro, KONISHI Hirofumi, WATANABE Tomio, JINDAI Mitsuru

    The Proceedings of Mechanical Engineering Congress, Japan   2013 ( 0 )   S121013-1 - S121013-4   2013

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    Previously, we developed an embodied virtual communication system for human interaction analysis by synthesis in avatar-mediated communication and confirmed that the speech-overlap between talkers is closely related with the period in which embodied interaction and communication is activated through their avatars. In this paper, we propose an interaction-activated communication model based on the heat conduction equation for enhancing embodied interaction in avatar-mediated communication. Further, we perform the evaluation experiment and demonstrate that the proposed model is effective for estimating the period of interaction-activated communication in avatar-mediated communication.

    DOI: 10.1299/jsmemecj.2013._S121013-1

    CiNii Books

    researchmap
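
    The abstract above names a heat conduction equation but does not reproduce it, so the sketch below only illustrates the general idea under stated assumptions: each talker's speech is treated as heat flowing into a shared communication field whose heat dissipates over time. The coefficients, the time step, and the explicit Euler discretization are hypothetical.

        ALPHA = 0.2    # assumed rate at which a talker's speech heats the field
        LAMBDA = 0.05  # assumed rate at which the field cools
        DT_S = 0.1     # assumed time step

        def update_field(field_heat, speech_levels):
            """One explicit Euler step: inflow from each talker's speech minus dissipation."""
            inflow = ALPHA * sum(speech_levels)
            outflow = LAMBDA * field_heat
            return field_heat + DT_S * (inflow - outflow)

        if __name__ == "__main__":
            heat = 0.0
            # Two talkers; 1.0 means the talker is speaking in that frame, 0.0 means silent.
            for frame in [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.0, 0.0), (0.0, 0.0)]:
                heat = update_field(heat, frame)
                print(round(heat, 3))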

  • RO-MAN 2012

    瀬島 吉裕

    ヒューマンインタフェース学会誌 = Journal of Human Interface Society : human interface   14 ( 4 )   320 - 320   2012.11

     More details

    Language:Japanese  

    CiNii Books

    researchmap

  • 7-324 A Curriculum of Creativity Education in Design Engineering for Developing Manufacturing Mind

    SEJIMA Yoshihiro, KANEMORI Kaoru, SAKIYAMA Satoshi, MIURA Fusanori

    Proceedings of Annual Conference of Japanese Society for Engineering Education   2012 ( 0 )   676 - 677   2012

     More details

    Language:Japanese   Publisher:Japanese Society for Engineering Education  

    DOI: 10.20549/jseeja.2012.0_676

    CiNii Books

    researchmap

  • A speech-driven embodied group entrainment system with the model of lecturer's eyeball movement Reviewed

    Yoshihiro Sejima, Tomio Watanabe, Mitsuru Jindai, Atsushi Osa, Yukari Zushi

    Proceedings - IEEE International Workshop on Robot and Human Interactive Communication   2012 ( 0 )   1086 - 1091   2012

     More details

    Language:English   Publisher:IEEE  

    We have already developed a speech-driven embodied group entrained communication system called SAKURA for activating group interaction and communication. In this system, speech-driven computer graphics (CG) characters called InterActors with functions of both speaker and listener are entrained to one another as a teacher and some students in a virtual classroom by generating communicative actions and movements. In this study, for the basic research of realizing smooth communication during embodied interaction between human and robot, we analyzed the eyeball movements of a lecturer communicating in a virtual group by using an embodied communication system with a line-of-sight measurement device. On the basis of the analysis results, we propose an eyeball movement model that consists of a saccade model and a model of a lecturer's gaze at an audience, called group gaze model. Then, we developed an advanced communication system in which the proposed model was used with SAKURA for enhancing group interaction and communication. This advanced system generates a lecturer's eyeball movement on the basis of the proposed model by using only speech input. We used sensory evaluation in the experiments to determine the effects of the proposed model. The results showed that the system with the proposed model is effective in group interaction and communication. © 2012 IEEE.

    DOI: 10.1109/ROMAN.2012.6343893

    Scopus

    researchmap

  • 1G3-G5 Development of educational system for science by regional collaboration (II)

    SAKIYAMA Satoshi, HAYAKAWA Seiji, OKADA Hideki, SEJIMA Yoshihiro, MIURA Fusanori

    Proceedings of the Annual Meeting of Japan Society for Science Education   36 ( 0 )   353 - 354   2012

     More details

    Language:Japanese   Publisher:Japan Society for Science Education  

    DOI: 10.14935/jssep.36.0_353

    CiNii Books

    researchmap

  • A virtual audience system by using a speech-driven embodied entrainment picture for supporting avatar-mediated communication Reviewed

    Yoshihiro Sejima, Yutaka Ishii, Tomio Watanabe

    Nihon Kikai Gakkai Ronbunshu, C Hen/Transactions of the Japan Society of Mechanical Engineers, Part C   78 ( 786 )   523 - 534   2012

     More details

    Publisher:The Japan Society of Mechanical Engineers  

    We have already confirmed the importance of embodied sharing in avatar-mediated communication and have demonstrated the effectiveness of the entrained nodding responses for human interaction and communication support. In this paper, in order to support the avatar-mediated communication, a virtual audience system is developed by using a speech-driven embodied entrainment picture in which interactive CG objects behave as listeners such as nodding responses. This system provides a communication environment in which not only avatars but also the CG objects placed around the avatars are related to embodied communication. By using this system, the effectiveness of the present system in supporting embodied communication has been demonstrated by carrying out a sensory evaluation and a speech-overlap analysis for 20 pairs of 40 talkers. © 2012 The Japan Society of Mechanical Engineers.

    DOI: 10.1299/kikaic.78.523

    Scopus

    CiNii Books

    researchmap

  • Effects of Delayed Presentation of Self-Embodied Avatar Motion with Network Delay

    ISHII Yutaka, SEJIMA Yoshihiro, WATANABE Tomio

    The Transactions of Human Interface Society   13 ( 1 )   23 - 30   2011.2

     More details

    Language:Japanese   Publisher:Human Interface Society  

    A large network delay is likely to obstruct human interaction in telecommunication systems such as telephony or video conferencing systems. In spite of the extensive investigations that have been carried out on network delays of voice and image data, there have been few studies regarding support for embodied communication under the conditions of network delay. To maintain smooth human interaction, it is important that the various ways in which delay is manifested are understood. We have already developed an embodied virtual communication system that uses an avatar called "VirtualActor," in which speakers who are remotely located from one another can share embodied interaction in the same virtual space. Responses to a questionnaire that was used in a communication experiment confirmed that a fixed 500-ms network delay has no effect on interactions via VirtualActors. In this paper, we propose a method of presenting a speaker's voice and an avatar's motion feedback in the case of a 1.5-s network delay using VirtualActors. We perform two communication experiments under different conditions of network delay. The aim of the first experiment is to examine the effect of a random time delay on the conversation. The second experiment is conducted under the conditions of a free-form conversation that takes place in 5 scenarios - 1 real-time scenario without a network delay and 4 scenarios with network delay that involve a combination of a delay in the talker's voice and in his/her avatar's motion feedback. The subjects consisted of a total of 30 students who worked in 15 pairs and who were familiar with each other. A sensory evaluation shows the effects upon communication of delays in the avatar's motion feedback, from the viewpoint of supporting the interaction.

    DOI: 10.11184/his.13.1_23

    CiNii Books

    researchmap

  • 10-328 Development of Regional Cooperation Network for Science Education

    SAKIYAMA Satoshi, OKADA Hideki, SEJIMA Yoshihiro, MIURA Fusanori

    Proceedings of Annual Conference of Japanese Society for Engineering Education   2011 ( 0 )   678 - 679   2011

     More details

    Language:Japanese   Publisher:Japanese Society for Engineering Education  

    DOI: 10.20549/jseeja.2011.0_678

    CiNii Books

    researchmap

  • 3107 Development of a Speech-driven Embodied Group Entrainment System with an Eyeball Movement Model for Lecturer Character

    SEJIMA Yoshihiro, WATANABE Tomio, JINDAI Mitsuru, OSA Atsushi

    The Proceedings of Design & Systems Conference   2011 ( 0 )   525 - 528   2011

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    We have already developed a speech-driven embodied group entrained communication system called "SAKURA" for activating group interaction and communication, in which speech-driven CG characters called InterActors, with the functions of both speaker and listener, are entrained to one another as a teacher and some students in the same virtual classroom by generating communicative actions and motions. In this paper, we analyze the eyeball movement of a lecturer in group communication by using an embodied communication system with a line-of-sight measurement device. On the basis of this analysis, we propose an eyeball movement model, which consists of a saccade model and a group gaze model. Then, we develop an advanced communication system in which the proposed eyeball movement model is applied to a speech-driven embodied group entrained communication system. The communication system generates the eyeball movement of the lecturer on the basis of the model, and it generates the head and body entrained motions of the characters by using the speech input alone. The system would be effective for group embodied interaction and communication as well as remote interaction and communication support.

    DOI: 10.1299/jsmedsd.2011.21.525

    CiNii Books

    researchmap

  • 1G3-F1 Development of educational system for science by regional collaboration

    SAKIYAMA Satoshi, OKADA Hideki, SEJIMA Yoshihiro, MIURA Fusanori

    Proceedings of the Annual Meeting of Japan Society for Science Education   35 ( 0 )   265 - 266   2011

     More details

    Language:Japanese   Publisher:Japan Society for Science Education  

    The ultimate goal of this research is to develop a method by which members of the local community collaborate with one another to effectively foster the science-oriented human resources who will carry the science and technology of tomorrow's Japan. As a first step, this paper describes the 長州科楽維新 project, which was launched with the aim of building a new science education system based on industry-academia-government-citizen collaboration in Yamaguchi Prefecture. We further examine the factors that play a useful role in building and maintaining such an education system.

    DOI: 10.14935/jssep.35.0_265

    CiNii Books

    researchmap

  • An embodied communication system using speech-driven embodied entrainment characters with an eyeball movement model Reviewed

    Yoshihiro Sejima, Tomio Watanabe, Mitsuru Jindai

    Nihon Kikai Gakkai Ronbunshu, C Hen/Transactions of the Japan Society of Mechanical Engineers, Part C   76 ( 762 )   340 - 350   2010.2

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    In this paper, we analyze human eyeball movement through avatars by using an embodied virtual communication system with a line-of-sight measurement device. On the basis of this analysis, we propose an eyeball movement model, which consists of an eyeball delay movement model and a gaze withdrawal model. Then, we develop an advanced communication system in which the proposed eyeball movement model is applied to speech-driven embodied entrainment characters called InterActors. The communication system generates the eyeball movement on the basis of the model, and it generates the head and body entrained motions of InterActors by using the speech input alone. The effectiveness of the proposed eyeball movement model and the developed communication system is demonstrated by means of sensory evaluations in an avatar-mediated communication system.

    DOI: 10.1299/kikaic.76.340

    Scopus

    CiNii Books

    researchmap

  • 3-334 Development of Science Education for Primary School Students and Junior High School Students

    SAKIYAMA Satoshi, OKADA Hideki, SEJIMA Yoshihiro, MIIKE Hidetoshi, MIURA Fusanori

    Proceedings of Annual Conference of Japanese Society for Engineering Education   2010 ( 0 )   528 - 529   2010

     More details

    Language:Japanese   Publisher:Japanese Society for Engineering Education  

    DOI: 10.20549/jseejaarc.2010.0_528

    CiNii Books

    researchmap

  • S1201-2-4 Development of an Enhanced Interaction Support System Based on a Model Estimating Conversational Activity

    Sejima Yoshihiro, Ishii Yutaka, Watanabe Tomio

    The proceedings of the JSME annual meeting   2010 ( 0 )   15 - 16   2010

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    We have developed an embodied virtual communication system called EVCOS for human interaction analysis by synthesis. In this paper, focusing on the speech-overlap in enhanced interaction, the characteristics are analyzed in the virtual face-to-face communication using EVCOS, and a model estimating conversational activity on the basis of the analysis is proposed. In addition, an enhanced interaction support system is developed by applying the proposed model to an interactive wall as a virtual audience.

    DOI: 10.1299/jsmemecjo.2010.4.0_15

    CiNii Books

    researchmap

  • Analysis by synthesis of embodied communication via virtualactor with a nodding response model Reviewed

    Yoshihiro Sejima, Tomio Watanabe, Michiya Yamamoto

    Nihon Kikai Gakkai Ronbunshu, C Hen/Transactions of the Japan Society of Mechanical Engineers, Part C   75 ( 758 )   2773 - 2782   2009.10

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    We have already developed an embodied virtual communication system (EVCOS) for human interaction analysis by synthesis. This system provides two remote talkers with a communication environment in which embodied interaction is shared by VirtualActors (VAs) including the talkers themselves through a virtual face-to-face scene. We confirmed the importance of head movements including nodding in embodied communication by using this system. In this paper, we develop the EVCOS with a speech-driven nodding response model for the analysis by synthesis of embodied communication under the promoted condition of superimposed nodding responses in a virtual space. By using the system, we perform experiments for the effects of superimposed nodding responses by the sensory evaluation and the voice-motion analysis by superimposing nodding responses on VA by the nodding response model. It is found that the cross-correlation between the talker's voice and the listener's head motion in the inconsistently activated condition increases at a significance level of 1% compared to that observed under normal condition. The result demonstrates that the system with the model promotes embodied interaction.

    DOI: 10.1299/kikaic.75.2773

    Web of Science

    Scopus

    CiNii Books

    researchmap

  • Transmission Timing Sharing of Talker's own voice and motion in Embodied Avatar Mediated Interaction under the Network Delay

    ISHII Yutaka, SEJIMA Yoshihiro, WATANABE Tomio

    Human Interface   11 ( 2 )   75 - 80   2009.5

     More details

    Language:Japanese   Publisher:ヒュ-マンインタフェ-ス学会  

    CiNii Books

    researchmap

  • Transmission Timing Sharing of Talker's own voice and motion in Embodied Avatar Mediated Interaction under the Network Delay

    ISHII Yutaka, SEJIMA Yoshihiro, WATANABE Tomio

    IEICE technical report   109 ( 27 )   75 - 80   2009.5

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    A large network delay can obstruct human interaction in telecommunication systems such as telephony or video conferencing. Few studies have addressed embodied communication support under network delay conditions, despite considerable investigation of network delays of voice and image data. To maintain smooth human interaction, it is important to discuss the ways in which the delay is revealed. We have already developed an embodied virtual communication system using an avatar called "VirtualActor", in which remote talkers can share embodied interaction in the same virtual space. A questionnaire in a previous communication experiment confirmed that a fixed 500 ms network delay has no effect on interaction via VirtualActors. In this paper, we propose a method of presenting the talker's voice and the avatar's motion feedback under a 1.5 s network delay condition using VirtualActors. A communication experiment is performed under the conditions of a free conversation in 5 scenes: a real-time scene without network delay, and 4 network-delay scenes combining a delay of the talker's voice and of his/her avatar's motion feedback. The subjects consisted of a total of 15 pairs of 30 students who were familiar with each other. A sensory evaluation shows the communication effects of the delay of the avatar's motion feedback on interaction support.

    CiNii Books

    researchmap

  • Transmission Timing Sharing of Talker's own voice and motion in Embodied Avatar Mediated Interaction under the Network Delay

    ISHII Yutaka, SEJIMA Yoshihiro, WATANABE Tomio

    IEICE technical report   109 ( 28 )   75 - 80   2009.5

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    A large network delay can obstruct human interaction in telecommunication systems such as telephony or video conferencing. Few studies have addressed embodied communication support under network delay conditions, despite considerable investigation of network delays of voice and image data. To maintain smooth human interaction, it is important to discuss the ways in which the delay is revealed. We have already developed an embodied virtual communication system using an avatar called "VirtualActor", in which remote talkers can share embodied interaction in the same virtual space. A questionnaire in a previous communication experiment confirmed that a fixed 500 ms network delay has no effect on interaction via VirtualActors. In this paper, we propose a method of presenting the talker's voice and the avatar's motion feedback under a 1.5 s network delay condition using VirtualActors. A communication experiment is performed under the conditions of a free conversation in 5 scenes: a real-time scene without network delay, and 4 network-delay scenes combining a delay of the talker's voice and of his/her avatar's motion feedback. The subjects consisted of a total of 15 pairs of 30 students who were familiar with each other. A sensory evaluation shows the communication effects of the delay of the avatar's motion feedback on interaction support.

    CiNii Books

    researchmap

  • Transmission Timing Sharing of Talker's own voice and motion in Embodied Avatar Mediated Interaction under the Network Delay

    ISHII Yutaka, SEJIMA Yoshihiro, WATANABE Tomio

    IEICE technical report   109 ( 29 )   75 - 80   2009.5

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    A large network delay can obstruct human interaction in telecommunication systems such as telephony or video conferencing. Few studies have addressed embodied communication support under network delay conditions, despite considerable investigation of network delays of voice and image data. To maintain smooth human interaction, it is important to discuss the ways in which the delay is revealed. We have already developed an embodied virtual communication system using an avatar called "VirtualActor", in which remote talkers can share embodied interaction in the same virtual space. A questionnaire in a previous communication experiment confirmed that a fixed 500 ms network delay has no effect on interaction via VirtualActors. In this paper, we propose a method of presenting the talker's voice and the avatar's motion feedback under a 1.5 s network delay condition using VirtualActors. A communication experiment is performed under the conditions of a free conversation in 5 scenes: a real-time scene without network delay, and 4 network-delay scenes combining a delay of the talker's voice and of his/her avatar's motion feedback. The subjects consisted of a total of 15 pairs of 30 students who were familiar with each other. A sensory evaluation shows the communication effects of the delay of the avatar's motion feedback on interaction support.

    CiNii Books

    researchmap

  • An embodied virtual communication system with a speech-driven embodied entrainment picture Reviewed

    Yoshihiro Sejima, Tomio Watanabe

    Proceedings - IEEE International Workshop on Robot and Human Interactive Communication   2008 ( 0 )   979 - 984   2009

     More details

    Language:English   Publisher:IEEE  

    We have already developed an embodied virtual communication system for human interaction analysis by synthesis. This system provides two remote talkers with a communication environment in which embodied interaction is shared by VirtualActors including the talkers themselves through a virtual face-to-face scene. We confirmed the importance of embodied sharing in embodied communication by using the analysis-by-synthesis system. We have also demonstrated the effects of nodding responses for embodied interaction and communication support. In this paper, we develop an embodied virtual communication system with a speech-driven embodied entrainment picture "InterPicture" for supporting virtual communication. The effects of the developed system are demonstrated by a sensory evaluation and speech-overlap analysis in the communication experiment for 20 pairs of talkers. © 2009 IEEE.

    DOI: 10.1109/ROMAN.2009.5326243

    Web of Science

    Scopus

    researchmap

  • 2114 A Virtual Communication Support System with a Speech-Driven Embodied Entrainment Wall Picture

    Sejima Yoshihiro, Watanabe Tomio

    The Proceedings of Design & Systems Conference   2009 ( 0 )   279 - 283   2009

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    We have already developed an embodied virtual communication system for human interaction analysis by synthesis. This system provides two remote talkers with a communication environment in which embodied interaction is shared by VirtualActors, including the talkers themselves, through a virtual face-to-face scene. We confirmed the importance of embodied sharing in embodied communication by using the system. We have also demonstrated the effects of nodding responses for embodied interaction and communication support. In this paper, we develop a virtual communication support system using a speech-driven embodied entrainment wall picture, focusing on the immersive effect that creates a strong visual impression. The effectiveness of the system is demonstrated by sensory evaluation and speech-overlap analysis in a communication experiment with 20 pairs (40 talkers).

    DOI: 10.1299/jsmedsd.2009.19.279

    CiNii Books

    researchmap

  • A Speech-Driven Embodied Entrainment System Based on an Eye-Movement Model in Voice Communication

    SEJIMA Yoshihiro, WATANABE Tomio, JINDAI Mitsuru

    IPSJ SIG technical reports   127 ( 11 )   1 - 8   2008.1

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    In this paper, an eye-movement model that generates eyeball movement from head movement is proposed on the basis of an analysis of eyeball movement characteristics in an avatar communication experiment. An advanced embodied entrainment communication system is developed in which the proposed model is applied to the speech-driven embodied entrainment character InterActor. The effectiveness of the system is demonstrated by sensory evaluation in an avatar communication experiment.

    CiNii Books

    researchmap

  • C15 Speech-Driven Embodied Entrainment Character System with an Eyeball Movement Model

    Sejima Yoshihiro, Watanabe Tomio, Jindai Mitsuru

    The Proceedings of Conference of Kyushu Branch   2008 ( 0 )   115 - 116   2008

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    DOI: 10.1299/jsmekyushu.2008.115

    CiNii Books

    researchmap

  • 3123 An Embodied Virtual Communication System with Line-of-Sight Information

    Sejima Yoshihiro, Watanabe Tomio, Jindai Mitsuru

    The proceedings of the JSME annual meeting   2007 ( 0 )   217 - 218   2007

     More details

    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    We have developed an embodied virtual communication system for human interaction analysis by synthesis. In this paper, focusing on the role of line-of-sight in facilitating smooth face-to-face communication, an embodied virtual communication system with line-of-sight information is developed. In this system, the avatar's body motions and line-of-sight are reproduced from the talker's own, measured by magnetic sensors and a newly developed line-of-sight measurement device. The effectiveness of the system is demonstrated by sensory evaluation in a remote communication experiment.

    DOI: 10.1299/jsmemecjo.2007.4.0_217

    CiNii Books

    researchmap

  • Analysis by Synthesis of Embodied Communication via VirtualActors with a Nodding Reaction Model

    瀬島 吉裕, 渡辺 富夫, 山本 倫也

    Proceedings of Human Interface Symposium 2006   1 - 6   2006.9

     More details

  • Self-Reference Effect of the Face-Skin (面の皮) Interface in an Embodied Virtual Communication System

    新徳 健, 渡辺 富夫, 山本 倫也, 瀬島 吉裕

    The 6th SICE Symposium on System Integration (SI2005)   635 - 636   2006

     More details

  • An Embodied Virtual Communication System with a Nodding Reaction Model Added to VirtualActor

    瀬島 吉裕, 渡辺 富夫, 山本 倫也

    Proceedings of the 6th IEEE Hiroshima Section Student Symposium (HISS)   216 - 219   2006

     More details

    <Best Presentation Award>

    researchmap

  • Evaluation of the Face-Skin (面の皮) Interface in an Embodied Virtual Communication System

    瀬島 吉裕, 山本 倫也, 渡辺 富夫, 新徳 健

    Proceedings of Human Interface Symposium 2005   929 - 934   2005.9

     More details


Industrial property rights

  • Estimation Device, Estimation System, Estimation Method, and Control Program

    瀬島吉裕

     More details

    Application no:特願2023-35026  Date applied:2023.3

    researchmap

  • Robot Device

    瀬島吉裕, 佐藤洋一郎

     More details

    Application no:特願2020-137687  Date applied:2019.2

    researchmap

  • Information Processing System Using Pupil Response

    瀬島吉裕

     More details

    Applicant:Okayama Prefectural University

    Application no:特願2017-170764  Date applied:2017.9

    Announcement no:特開2019-046331  Date announced:2019.3

    J-GLOBAL

    researchmap

Awards

  • Best Presentation Award

    2023.9   Human Interface Symposium 2023  

    森田大樹, 稲垣早紀, 瀬島吉裕

     More details

  • Excellent Presentation Award

    2022.12   The 23rd SICE Symposium on System Integration (SI2022)  

    瀬島吉裕, 橋本翔太, 渡辺富夫

     More details

  • Best Presentation Award

    2022.9   Human Interface Symposium  

     More details

  • Director-General's Award, Kansai Bureau of Economy, Trade and Industry

    2022.3   Kansai Bureau of Economy, Trade and Industry  

    瀬島吉裕

     More details

    Country:Japan

    researchmap

  • Encouragement Award for Achievement

    2019.9   The Japan Society of Mechanical Engineers, Design & Systems Division  

    瀬島吉裕

     More details

  • Excellent Presentation Award

    2019.9   Japan Society of Kansei Engineering  

    瀬島吉裕

     More details

  • Paper Award

    2016.11   The Japan Society for Welfare Engineering  

    瀬島吉裕,長篤志,神代充,渡辺富夫

     More details

    Award type:Honored in official journal of a scientific society, scientific journal 

    researchmap

  • KAZUO TANIE AWARD (Most Outstanding Research Award)

    2015.9   IEEE International Conference on Robot & Human Interactive Communication (RO-MAN2015)   Development of an Expressible Pupil Response Interface Using Hemispherical Displays

    Yoshihiro Sejima, Yoichiro Sato, Tomio Watanabe

     More details

  • Science and Technology Award

    2015.7   岡山工学振興会  

    瀬島吉裕

     More details

  • Best Paper Award

    2014.9   The Japan Society of Kansei Engineering  

    Sejima Yoshihiro

     More details

  • Best Paper Award

    2012.9   IEEE International Conference on Robot & Human Interactive Communication (RO-MAN2012)   A Speech-driven Embodied Group Entrainment System with the Model of Lecturer's Eyeball Movement

    Yoshihiro Sejima, Tomio Watanabe, Mitsuru Jindai, Atsushi Osa, Yukari Zushi

     More details

  • Best Poster Presentation Award

    2011.10   2nd Asian Conference on Engineering Education (ACEE2011)  

     More details

  • Best Poster Presentation Award

    2011.10   ACEE2011  

    Sejima Yoshihiro

     More details

  • Nishina Award

    2010.3   科学振興仁科財団 仁科顕彰会  

    瀬島吉裕

     More details

  • Excellent Research Award

    2009.11   IEEE Hiroshima Section Student Symposium (HISS)  

     More details

    Country:Japan

    researchmap

  • HISS Commemorative Paper Award

    2008.11   IEEE Hiroshima Section Student Symposium (HISS)  

     More details

    Country:Japan

    researchmap

  • Best Presentation Award

    2006.11   IEEE Hiroshima Section Student Symposium (HISS)  

     More details

    Country:Japan

    researchmap


Research Projects

  • Development of attractive communication systems based on reciprocal liking

    Grant number:22H04871  2022.4 - 2024.3

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research on Innovative Areas (Research in a proposed research area)

      More details

    Grant amount:\11,440,000 (Direct Cost: \8,800,000, Indirect Cost: \2,640,000)

    researchmap

  • Development of an avatar robot system to support online drinking parties

    2021.5 - 2022.3

    JST 

      More details

    Authorship:Principal investigator 

    researchmap

  • A meddling care robot that listens in consideration of the privacy of the elderly

    2021.4 - 2022.3

      More details

    Authorship:Principal investigator 

    researchmap

  • Development of a pet robot that promotes intimacy using the pupil response for close rapport

    Grant number:19K12890  2019.4 - 2022.3

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (C)

      More details

    Authorship:Principal investigator  Grant type:Competitive

    researchmap

  • Development of a pupil-brightness (瞳輝) interface that attracts people

    2018.10 - 2019.9

    Ministry of Internal Affairs and Communications  INNO-vation Program, "Disruptive Challenge" Division 

    瀬島吉裕

      More details

    Authorship:Principal investigator  Grant type:Competitive

    researchmap

  • Development of an animal-type pupil response robot with empathy expression for preventing dementia

    Grant number:16K01560  2016.4 - 2019.3

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (C)

    Sejima Yoshihiro, WATANABE Tomio

      More details

    Grant amount:\4,550,000 (Direct Cost: \3,500,000, Indirect Cost: \1,050,000)

    In this study, we aimed to establish expressions of pupil response that convey empathy to the user in human-robot communication. We took two complementary approaches: "Analysis," which analyzes the characteristics of pupil response in communication, and "Design," which develops and evaluates advanced pupil response systems and robots. In the Analysis approach, using a pupil measurement device, we analyzed the pupil response with and without pleasant emotion during utterance and revealed that the pupil dilates significantly with the utterance. In the Design approach, we applied the resulting pupil response model to the previously developed pupil response interface and investigated expressions of pupil response that promote empathy. As a result, dilation of the pupil response tended to enhance the user's sense of familiarity and empathy.
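
    As an illustrative, non-authoritative sketch of the kind of speech-driven pupil-dilation expression examined above (written in Python; the first-order-lag mapping and all parameter values are assumptions for illustration, not the model developed in this project):

        # Minimal sketch (assumption): dilate a displayed pupil while the user's
        # utterance is detected, and let it decay back to baseline afterwards.
        BASELINE_MM = 4.0   # resting pupil diameter (illustrative value)
        DILATED_MM = 6.0    # target diameter during utterance (illustrative value)
        GAIN = 0.1          # first-order-lag gain per frame

        def update_pupil(diameter_mm: float, is_speaking: bool) -> float:
            """Move the displayed pupil diameter toward its target with a first-order lag."""
            target = DILATED_MM if is_speaking else BASELINE_MM
            return diameter_mm + GAIN * (target - diameter_mm)

        if __name__ == "__main__":
            d = BASELINE_MM
            # 30 frames of speech followed by 30 frames of silence
            for frame, speaking in enumerate([True] * 30 + [False] * 30):
                d = update_pupil(d, speaking)
                print(f"frame {frame:02d}  speaking={speaking}  pupil diameter={d:.2f} mm")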

    researchmap

  • Development of the Pupil Response Robot for Preventing Dementia

    Grant number:26750223  2014.4 - 2016.3

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for Young Scientists (B)

    Sejima Yoshihiro, WATANABE Tomio, JINDAI Mitsuru

      More details

    Authorship:Principal investigator  Grant type:Competitive

    The purpose of this research was to develop an objective criterion for measuring pupil responses during human-robot interaction and communication. We focused on the pupil responses related to human emotions and carried out two approaches: Analysis and Design. In the Analysis approach, pupil responses in human face-to-face communication were analyzed using an embodied communication system with a line-of-sight measurement device, and the possibility of an objective criterion was indicated. In the Design approach, based on these analysis results, we developed an advanced communication system incorporating the pupil response and an expressible pupil response interface using hemispherical displays. The effectiveness of the developed system and interface was demonstrated by sensory evaluation in communication experiments.

    researchmap

  • Development of an Eye-Contact Measurement System for Mental Health Care

    Grant number:24700536  2012.4 - 2014.3

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for Young Scientists (B)

    SEJIMA Yoshihiro

      More details

    Authorship:Principal investigator  Grant type:Competitive

    In this research, we developed an eye-contact measurement system for face-to-face communication using a dichroic mirror, aimed at mental health care. In this system, a dichroic mirror, which transmits visible light and reflects infrared light, is placed between two talkers as a virtual display, and each talker's gaze points on the virtual display are estimated. When the estimated gaze points fall within the eye area of the opposite talker, eye contact is judged to be established. In addition, we performed a communication experiment to examine the eye-contact measurement method: an acrylic board was placed between two talkers as a virtual display, and the gaze points were analyzed to examine the effects of the board's presence. The results demonstrated that the talkers' line of sight tends to concentrate on the center of the acrylic board.
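
    As an illustrative, non-authoritative sketch of the eye-contact decision described above (written in Python; the coordinates, eye-region sizes, and data structures are placeholders, and the gaze-estimation and calibration steps of the actual system are not shown):

        # Minimal sketch (assumption): eye contact is declared only when each talker's
        # estimated gaze point on the virtual display lies inside the other talker's eye region.
        from dataclasses import dataclass

        @dataclass
        class Rect:
            """Axis-aligned eye region on the virtual display, in display coordinates."""
            x: float
            y: float
            width: float
            height: float

            def contains(self, px: float, py: float) -> bool:
                return (self.x <= px <= self.x + self.width
                        and self.y <= py <= self.y + self.height)

        def eye_contact(gaze_a, eye_region_b, gaze_b, eye_region_a) -> bool:
            """Both gaze points must fall in the opposite talker's eye region."""
            return eye_region_b.contains(*gaze_a) and eye_region_a.contains(*gaze_b)

        if __name__ == "__main__":
            region_a = Rect(x=100, y=80, width=60, height=25)   # talker A's eyes as seen by B
            region_b = Rect(x=420, y=85, width=60, height=25)   # talker B's eyes as seen by A
            print(eye_contact((430, 90), region_b, (120, 95), region_a))   # True
            print(eye_contact((430, 90), region_b, (300, 300), region_a))  # False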

    researchmap

  • Research and development of a listening-skill training system for healthcare professionals using interaction-atmosphere visualization technology

    2012.4 - 2013.3

    Hiroshima Bank  University Researcher Grant Program 

    瀬島吉裕

      More details

    Authorship:Principal investigator  Grant type:Competitive

    researchmap
