Using a virtual reality interview simulator to explore factors influencing people’s behavior.
Luo, X.; Wang, Y.; Lee, L.; Xing, Z.; Jin, S.; Dong, B.; Hu, Y.; Chen, Z.; Yan, J.; and Hui, P.
Virtual Reality, 28(1): 56, March 2024.
@article{luo2024vris,
  title = {Using a virtual reality interview simulator to explore factors influencing people’s behavior},
  type = {article},
  year = {2024},
  pages = {56},
  volume = {28},
  websites = {https://link.springer.com/10.1007/s10055-023-00934-5},
  month = {3},
  day = {28},
  id = {12dec4f3-705a-3902-a8fd-7b620d7ed955},
  created = {2023-05-18T02:58:43.475Z},
  file_attached = {true},
  profile_id = {4b66b327-35ad-3956-a9a2-307331dd9988},
  last_modified = {2024-04-14T08:26:11.885Z},
  read = {false},
  starred = {false},
  authored = {true},
  confirmed = {true},
  hidden = {false},
  private_publication = {false},
  abstract = {Virtual reality interview simulator (VRIS) is an effective and valid tool that uses virtual reality technology to train people’s interview skills. Typically, it offers candidates prone to being very nervous during interviews the opportunity to practice interviews in a safe and manageable virtual environment and realistic settings, providing real-time feedback from a virtual interviewer on their performance. It helps interviewees improve their skills, reduce their fears, gain confidence, and minimize the cost and time associated with traditional interview preparation. Yet, the major anxiety-inducing elements remain unknown. During an interview, the anxiety levels, overall experience, and performance of interviewees might be affected by various circumstances. By analyzing electrodermal activity and questionnaires, we investigated the influence of five variables: (I) Realism; (II) Question type; (III) Interviewer attitude; (IV) Timing; and (V) Preparation. As such, an orthogonal design $L_8(4^1 \times 2^4)$ with eight experiments ($OA_8$ matrix) was implemented, in which 19 college students took part in the experiments. Considering the anxiety, overall experience, and performance of the interviewees, we found that Question type plays a major role; secondly, Realism, Preparation, and Interviewer attitude all have middle influence; lastly, Timing has little to no impact. Specifically, professional interview questions elicited a greater degree of anxiety than personal ones among the categories of interview questions. This work contributes to our understanding of anxiety-stimulating factors during job interviews in virtual reality and provides cues for designing future VRIS.},
  bibtype = {article},
  author = {Luo, Xinyi and Wang, Yuyang and Lee, Lik-Hang and Xing, Zihan and Jin, Shan and Dong, Boya and Hu, Yuanyi and Chen, Zeming and Yan, Jing and Hui, Pan},
  doi = {10.1007/s10055-023-00934-5},
  journal = {Virtual Reality},
  number = {1}
}
Virtual reality interview simulator (VRIS) is an effective and valid tool that uses virtual reality technology to train people’s interview skills. Typically, it offers candidates prone to being very nervous during interviews the opportunity to practice interviews in a safe and manageable virtual environment and realistic settings, providing real-time feedback from a virtual interviewer on their performance. It helps interviewees improve their skills, reduce their fears, gain confidence, and minimize the cost and time associated with traditional interview preparation. Yet, the major anxiety-inducing elements remain unknown. During an interview, the anxiety levels, overall experience, and performance of interviewees might be affected by various circumstances. By analyzing electrodermal activity and questionnaires, we investigated the influence of five variables: (I) Realism; (II) Question type; (III) Interviewer attitude; (IV) Timing; and (V) Preparation. As such, an orthogonal design $L_8(4^1 \times 2^4)$ with eight experiments ($OA_8$ matrix) was implemented, in which 19 college students took part in the experiments. Considering the anxiety, overall experience, and performance of the interviewees, we found that Question type plays a major role; secondly, Realism, Preparation, and Interviewer attitude all have middle influence; lastly, Timing has little to no impact. Specifically, professional interview questions elicited a greater degree of anxiety than personal ones among the categories of interview questions. This work contributes to our understanding of anxiety-stimulating factors during job interviews in virtual reality and provides cues for designing future VRIS.
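The $L_8(4^1 \times 2^4)$ design mentioned in the abstract is an eight-run mixed-level orthogonal array: one factor at four levels and four factors at two levels, so all five main effects can be screened in eight interview sessions rather than a full 4 × 2^4 = 64-condition factorial. Below is a minimal sketch of the textbook Taguchi OA_8 layout together with a balance check; the column labels stay generic because the abstract does not state which of the five study variables took the four-level column, so any mapping would be an assumption.

# A minimal sketch (not from the paper) of the standard mixed-level
# L8(4^1 x 2^4) Taguchi orthogonal array: 8 runs, one 4-level factor (A)
# and four 2-level factors (B-E).
import numpy as np
from itertools import combinations

OA8 = np.array([
    # A  B  C  D  E
    [1, 1, 1, 1, 1],
    [1, 2, 2, 2, 2],
    [2, 1, 1, 2, 2],
    [2, 2, 2, 1, 1],
    [3, 1, 2, 1, 2],
    [3, 2, 1, 2, 1],
    [4, 1, 2, 2, 1],
    [4, 2, 1, 1, 2],
])

# Orthogonality check: every pair of columns contains each level
# combination equally often across the eight runs.
for i, j in combinations(range(OA8.shape[1]), 2):
    _, counts = np.unique(OA8[:, [i, j]], axis=0, return_counts=True)
    assert counts.min() == counts.max(), f"columns {i},{j} are unbalanced"
print("All column pairs balanced: valid OA_8 array")

In a Taguchi-style analysis, factor influence is typically ranked by comparing mean responses (here, the anxiety, experience, and performance measures) across the levels of each column, which is the kind of ranking the abstract reports.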
Modeling Online Adaptive Navigation in Virtual Environments Based on PID Control.
Wang, Y.; Chardonnet, J.; and Merienne, F.
Pages 325-346, 2024.
@inbook{wang2024pid,
  type = {inbook},
  year = {2024},
  pages = {325-346},
  websites = {https://link.springer.com/10.1007/978-981-99-8141-0_25},
  id = {544d0e58-3576-3a38-bd7d-e63a34d65abe},
  created = {2023-11-30T01:57:24.357Z},
  file_attached = {false},
  profile_id = {4b66b327-35ad-3956-a9a2-307331dd9988},
  last_modified = {2023-11-30T01:57:24.357Z},
  read = {false},
  starred = {false},
  authored = {true},
  confirmed = {true},
  hidden = {false},
  private_publication = {false},
  bibtype = {inbook},
  author = {Wang, Yuyang and Chardonnet, Jean-Rémy and Merienne, Frédéric},
  doi = {10.1007/978-981-99-8141-0_25},
  chapter = {Modeling Online Adaptive Navigation in Virtual Environments Based on PID Control}
}
Text2VRScene: Exploring the Framework of Automated Text-driven Generation System for VR Experience.
Yin, Z.; Wang, Y.; Papatheodorou, T.; and Hui, P.
In 2024 IEEE Conference Virtual Reality and 3D User Interfaces (VR), pages 701-711, March 2024. IEEE.
@inproceedings{yin2024text2vrscene,
  title = {Text2VRScene: Exploring the Framework of Automated Text-driven Generation System for VR Experience},
  type = {inproceedings},
  year = {2024},
  keywords = {Index Terms},
  pages = {701-711},
  websites = {https://github.com/Williamy946/Text2VRScene,https://ieeexplore.ieee.org/document/10494137/},
  month = {3},
  publisher = {IEEE},
  day = {16},
  id = {48697da0-ede0-34a2-b461-85fe47498a5f},
  created = {2024-04-15T09:13:18.633Z},
  file_attached = {true},
  profile_id = {4b66b327-35ad-3956-a9a2-307331dd9988},
  last_modified = {2024-04-18T04:33:30.580Z},
  read = {false},
  starred = {false},
  authored = {true},
  confirmed = {true},
  hidden = {false},
  private_publication = {false},
  abstract = {With the recent development of the Virtual Reality (VR) industry, the increasing number of VR users pushes the demand for the massive production of immersive and expressive VR scenes in related industries. However, creating expressive VR scenes involves the reasonable organization of various digital content to express a coherent and logical theme, which is time-consuming and labor-intensive. In recent years, Large Language Models (LLMs) such as ChatGPT 3.5 and generative models such as stable diffusion have emerged as powerful tools for comprehending natural language and generating digital contents such as text, code, images, and 3D objects. In this paper, we have explored how we can generate VR scenes from text by incorporating LLMs and various generative models into an automated system. To achieve this, we first identify the possible limitations of LLMs for an automated system and propose a systematic framework to mitigate them. Subsequently, we developed Text2VRScene, a VR scene generation system, based on our proposed framework with well-designed prompts. To validate the effectiveness of our proposed framework and the designed prompts, we carry out a series of test cases. The results show that the proposed framework contributes to improving the reliability of the system and the quality of the generated VR scenes. The results also illustrate the promising performance of the Text2VRScene in generating satisfying VR scenes with a clear theme regularized by our well-designed prompts. This paper ends with a discussion about the limitations of the current system and the potential of developing similar generation systems based on our framework.},
  bibtype = {inproceedings},
  author = {Yin, Zhizhuo and Wang, Yuyang and Papatheodorou, Theodoros and Hui, Pan},
  doi = {10.1109/VR58804.2024.00090},
  booktitle = {2024 IEEE Conference Virtual Reality and 3D User Interfaces (VR)}
}
With the recent development of the Virtual Reality (VR) industry, the increasing number of VR users pushes the demand for the massive production of immersive and expressive VR scenes in related industries. However, creating expressive VR scenes involves the reasonable organization of various digital content to express a coherent and logical theme, which is time-consuming and labor-intensive. In recent years, Large Language Models (LLMs) such as ChatGPT 3.5 and generative models such as stable diffusion have emerged as powerful tools for comprehending natural language and generating digital contents such as text, code, images, and 3D objects. In this paper, we have explored how we can generate VR scenes from text by incorporating LLMs and various generative models into an automated system. To achieve this, we first identify the possible limitations of LLMs for an automated system and propose a systematic framework to mitigate them. Subsequently, we developed Text2VRScene, a VR scene generation system, based on our proposed framework with well-designed prompts. To validate the effectiveness of our proposed framework and the designed prompts, we carry out a series of test cases. The results show that the proposed framework contributes to improving the reliability of the system and the quality of the generated VR scenes. The results also illustrate the promising performance of the Text2VRScene in generating satisfying VR scenes with a clear theme regularized by our well-designed prompts. This paper ends with a discussion about the limitations of the current system and the potential of developing similar generation systems based on our framework.
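The system described above chains a large language model with downstream generative models through carefully constrained prompts. The sketch below only illustrates that general pipeline under stated assumptions; the function names, the JSON schema, and the injected model callables (llm, text_to_image, text_to_3d) are hypothetical and are not the authors' Text2VRScene implementation.

# Hypothetical pipeline sketch (assumptions, not the paper's code): an LLM is
# prompted to return a machine-readable scene plan, and each item in the plan
# is handed to a generative model to produce the corresponding asset.
import json

def plan_scene(theme: str, llm) -> dict:
    # Constraining the LLM to strict JSON is one common way to make its
    # output consumable by an automated system.
    prompt = (
        "Return ONLY valid JSON with keys 'skybox' (a text-to-image prompt) "
        "and 'objects' (a list of {'name', 'prompt', 'position'}) describing "
        f"a VR scene for the theme: {theme}"
    )
    return json.loads(llm(prompt))

def build_scene(plan: dict, text_to_image, text_to_3d) -> dict:
    # Fan each planned item out to the relevant generator and collect assets.
    return {
        "skybox": text_to_image(plan["skybox"]),
        "objects": [
            {"name": o["name"],
             "asset": text_to_3d(o["prompt"]),
             "position": o["position"]}
            for o in plan["objects"]
        ],
    }

Wrapping the json.loads call in a validate-and-retry loop is one plausible way to handle the LLM reliability issues the abstract alludes to.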
Jump Cut Effects in Cinematic Virtual Reality: Editing with the 30-degree Rule and 180-degree Rule.
Zhang, J.; Lee, L.; Wang, Y.; Jin, S.; Fei, D.; and Hui, P.
In 2024 IEEE Conference Virtual Reality and 3D User Interfaces (VR), pages 51-60, March 2024. IEEE.
@inproceedings{zhang2024jumpcut,
  title = {Jump Cut Effects in Cinematic Virtual Reality: Editing with the 30-degree Rule and 180-degree Rule},
  type = {inproceedings},
  year = {2024},
  pages = {51-60},
  websites = {https://ieeexplore.ieee.org/document/10494084/},
  month = {3},
  publisher = {IEEE},
  day = {16},
  id = {59cb2b6c-fd58-313a-ae83-6a6a6004ad93},
  created = {2024-04-15T09:13:19.042Z},
  file_attached = {true},
  profile_id = {4b66b327-35ad-3956-a9a2-307331dd9988},
  last_modified = {2024-04-18T04:33:30.555Z},
  read = {false},
  starred = {false},
  authored = {true},
  confirmed = {true},
  hidden = {false},
  private_publication = {false},
  bibtype = {inproceedings},
  author = {Zhang, Junjie and Lee, Lik-hang and Wang, Yuyang and Jin, Shan and Fei, Dan-Lu and Hui, Pan},
  doi = {10.1109/VR58804.2024.00029},
  booktitle = {2024 IEEE Conference Virtual Reality and 3D User Interfaces (VR)}
}