Roles: Experience & Interaction Designer, Rapid Prototyper
Development Time: 3 Months
Team Size: 5
Aesthetic: Training simulation
Platform: Oculus Rift + Microsoft Kinect 2.0
Re-Present is a virtual reality application that helps users develop their public speaking skills with focus on identifying their strengths and weaknesses. The system places a user into a public speaking setting with a virtual audience. Performance data (e.g. eye contact, pace, body postures & hand gestures, time, voice inflections, pauses, filler words) is collected while the user is giving a presentation, which is visualized in the self-evaluation section. The application allows the user to review their performance by playing back the presentation from the perspective of the audience where the user watches themselves represented as an avatar.
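To make the kinds of signals listed above concrete, here is a minimal sketch of a per-presentation record. This is purely illustrative: the field names, thresholds, and Python itself are assumptions for the sketch, not Re-Present's actual schema or implementation language.

```python
# Hedged sketch of a per-presentation metrics record; all field names
# and the pacing thresholds are illustrative assumptions, not the
# actual Re-Present data model.
from dataclasses import dataclass, field

@dataclass
class PresentationMetrics:
    duration_sec: float = 0.0
    filler_word_count: int = 0                           # "um", "uh", etc.
    pause_timestamps: list = field(default_factory=list) # long silences
    words_per_minute: float = 0.0                        # speaking pace
    gaze_targets: list = field(default_factory=list)     # eye-contact log
    joint_frames: list = field(default_factory=list)     # posture/gesture capture

    def pace_label(self):
        """Coarse pacing bucket for a self-evaluation summary
        (thresholds are assumptions for illustration)."""
        if self.words_per_minute < 110:
            return "slow"
        if self.words_per_minute > 170:
            return "fast"
        return "moderate"
```

A record like this would be filled in during Practice mode and then drive the visualizations in the self-evaluation section.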
The project started at Carnegie Mellon University’s Entertainment Technology Center, where the team was approached by Dave Culyba, an assistant teaching professor specializing in serious games, and Kim Hyatt, an associate teaching professor at the university’s Heinz College of Information Systems and Public Policy where she teaches a course in Strategic Presentation Skills.
The clients’ intention for the project was twofold; they asked the team to develop:
Given that the application was intended for classroom use, the team first focused on its academic context. Since Kim was the subject-matter expert, the team consulted her actively to learn as much as possible from her pedagogical experience and expertise. Using her rubric
Using ourselves as the target audience, the team conducted focus groups with our peers to learn about existing practices for improving one’s public speaking skills, such as recording oneself with a camera or presenting in front of a mirror. This helped to identify the major pain points and barriers to entry. One user commented, “I don’t really know what I’m doing wrong”: each person has a different level of self-awareness, and there was a general lack of opportunities for self-observation.
For competitive analysis of the market, we tested the existing VR applications on ourselves and our peers. From this exercise, we identified several shortcomings that undermined the intended goal. First, the objective evaluations came as summary reports with little to no context, making it difficult to examine each flagged problem in its entirety. Second, the user experience of going through each practice cycle felt tedious, often aggravated by ambiguous UI elements and pages that slowed down the process.
We discovered that while other existing VR public speaking applications focused solely on objective evaluation, there was an untapped niche for introducing subjective self- and peer-evaluations into the mix, enabling a more holistic approach to learning. This area in assessment and evaluation also presented an opportunity for critical research on pedagogy.
We came up with the following design pillars to guide the brainstorming process. The application had to be:
During the ideation phase, the team adopted Sabrina Culyba’s Transformational Framework to brainstorm potential solutions.
Using real-time feedback during the presentation so that the user could adjust accordingly. We found that this distracted the user and polluted the data being collected.
Gamifying the experience to provide incentives and drive motivation. Game reward systems proved too abstract and were not indicative of the underlying problems. They also risked confining improvement to the bounds of the system, restricting the potential to improve beyond it.
The overall experience flow had to be seamless, so the team designed a solution that tightened the iteration cycle between Practice and Review modes.
Practice Mode: Prototype
The main goal of the Practice mode was to provide an optimal environment for users to practice their presentation, and to offer the tools needed for their preferred practice methods.
-Initially, we tried out different settings for the user, from a 10-seat meeting room to a 500-seat auditorium. Ultimately, we chose a familiar classroom setting to best emulate the real-life scenario.
-For interaction, the user holds a virtual clicker in one hand. The clicker’s A/B buttons move back and forth through the presentation slides. The user can also use the clicker as a pointer, either to gesture at specific highlights in the slides or to interact with UI elements like the Play button by pointing and pressing the front trigger. To accommodate both left-handed and right-handed users, the user can toggle their dominant hand – the one holding the clicker – by pressing down the joystick on the desired hand.
-The audience was initially represented in the same way as the user. After playtesting, however, users felt intimidated by the army of faceless mannequins staring at them during the presentation. It soon became clear that the audience needed a level of realism to establish a comfortable environment for presenting, so the virtual audience was swapped for a more realistic visual style.
-Initially, the audience members were designed to provide implicit real-time feedback based on the user’s performance, such as yawning or slumping back into their chairs if the pacing was too slow. However, designing an evaluation system that felt “fair” to users proved difficult, and the feedback unnecessarily distracted the user from the presentation itself. Hence the audience was designed to display only neutral, random idle behaviours that evoke a sense of realism, but no more.
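The clicker interaction described above can be sketched as a small state machine. This is a hedged illustration in Python (the actual system presumably runs in a game engine); the class and method names are invented for the sketch.

```python
# Illustrative sketch (not the project's actual code) of the clicker
# logic: A/B buttons step through slides, and pressing the joystick on
# either hand makes that hand the dominant (clicker-holding) hand.
class ClickerState:
    def __init__(self, slide_count):
        self.slide_count = slide_count
        self.slide_index = 0
        self.dominant_hand = "right"  # assumed default

    def press_a(self):
        """Advance to the next slide, clamping at the last one."""
        self.slide_index = min(self.slide_index + 1, self.slide_count - 1)

    def press_b(self):
        """Step back to the previous slide, clamping at the first."""
        self.slide_index = max(self.slide_index - 1, 0)

    def press_joystick(self, hand):
        """Move the virtual clicker to whichever hand pressed its joystick."""
        assert hand in ("left", "right")
        self.dominant_hand = hand
```

Clamping at both ends keeps the slide index valid no matter how many times the user presses A or B, which matches the goal of an interaction that never needs error recovery mid-presentation.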
Practice Mode: Iteration
-Putting a pedestal in front of the user fixed them in place. It was also a visual obstacle in the user’s view of their avatar in Review mode.
-Putting a laptop screen in front of the user prompted them to interact with it and press the keyboard, which we did not want. It also encouraged them to look down and read their slides instead of referencing them.
-To encourage freedom, expressiveness, and mobility, we opened up the presentation space around the user by removing the pedestal and consolidating the helper screen and UI buttons onto a prop on the floor.
-The audience’s feedback was distracting the user from the presentation, so the audience was rescripted with more subtle, neutral idle behaviours, occasionally looking around the environment to feel more “natural.”
-The audience’s visual appearance was making the user uncomfortable due to the uncanny valley effect, so the visual style was changed to a more realistic representation.
-Other real-time visual indications of the user’s performance, such as the clock and the glow on the projection slide when the user looked at them, were removed to eliminate further distraction.
Review Mode: Prototype
-Initially, we considered using pass-through ZED cameras to overlay real-life video footage of the user onto the virtual classroom, but the inconsistency in visual style led to a jarring experience. We also explored interpolating the body movements from the headset and touch controllers alone; however, this proved too inaccurate and unreliable.
Hence we resorted to the Microsoft Kinect 2.0 sensor, which can capture full-body motion. We created a prototype to see whether users could discern their own body movements and the intended message behind them. We asked users to act out a cooking session, then afterwards self-identify and recollect their message purely from the body movements. To our surprise, they were successful.
-We tried several different ways to represent the user’s avatar: animated characters, stick figures, a gingerbread man, and more. [Insert pictures here]
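The record-and-replay idea behind Review mode can be sketched as follows: during Practice, each Kinect frame’s joint positions are stored with a timestamp; during Review, the frame nearest to the playback clock drives the avatar. This Python sketch is a hedged illustration; the frame format and class names are assumptions, not the project’s actual code.

```python
# Hedged sketch of recording Kinect skeleton frames during Practice and
# retrieving the nearest frame during Review playback. The frame format
# (a dict of joint name -> (x, y, z)) is an assumption for illustration.
import bisect

class MotionRecorder:
    def __init__(self):
        self.timestamps = []   # seconds since presentation start, ascending
        self.frames = []       # dicts: joint name -> (x, y, z)

    def record(self, t, joints):
        """Append one captured skeleton frame (assumes monotonic t)."""
        self.timestamps.append(t)
        self.frames.append(joints)

    def frame_at(self, t):
        """Return the recorded frame closest in time to playback time t."""
        i = bisect.bisect_left(self.timestamps, t)
        if i == 0:
            return self.frames[0]
        if i == len(self.timestamps):
            return self.frames[-1]
        before, after = self.timestamps[i - 1], self.timestamps[i]
        return self.frames[i] if after - t < t - before else self.frames[i - 1]
```

Nearest-frame lookup is the simplest choice; a production system could instead interpolate joint positions between the two neighbouring frames for smoother playback.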
Review Mode: Iteration
-However, users commented that the misalignment between how their avatars were represented and how they perceived themselves was uncomfortable – a phenomenon known as the uncanny valley. They felt misrepresented, and some were even offended. To address this feedback, the avatar was changed into a neutral mannequin that conveys the functional movements while removing any subjective appearance.
-Since the user could not see their own face, it was hard to tell exactly where the user was looking at a given moment. We initially visualized the user’s head orientation as a line, but this was hard to see, so we visualized the user’s field of view as a cone-shaped volume to improve readability.
-In addition, it was important to understand the user’s eye-contact patterns and their impact on the audience. However, we found that having the actual audience present was unnecessary and distracting for the purposes of Review mode, so we created a heat map visualizing each audience member’s gaze-engagement level. To simplify the environment and keep the emphasis on the user’s avatar, we represented the virtual audience as simple UI elements. We tried multiple 3D primitive shapes, but cubes looked different depending on the viewing angle, so a sphere was chosen for its visual uniformity. These balls were positioned where the audience members’ heads would have been, which allowed the user to intuitively identify them as the “audience.”
-The user needed control over playback so they could decide which specific sections to focus on, and pause to allow enough time to think and reflect. Initially, the timeline UI was placed horizontally on the table, but users did not perceive it as an interactable object, so the UI was spatialized to address this problem. [Insert picture of initial timeline UI design here]
-Before entering Review mode, it was important that the user skim a summary of their performance data to get a general idea and set expectations for the experience. The data was initially intended to be a concrete evaluation filtered by the system; however, we determined that “indicative” data would be more informative than “evaluative” data, letting users ultimately judge for themselves.
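The gaze heat map described above can be computed from recorded head poses: for each playback frame, any audience position that falls within a half-angle of the presenter’s gaze direction accrues “engagement” time, which is then normalized to colour the spheres. This Python sketch is illustrative only; the 20-degree half-angle, 30 Hz frame rate, and data layout are assumptions, not the project’s actual parameters.

```python
# Hedged sketch of the gaze heat map: accumulate looked-at time per
# audience position, then normalize to 0..1 heat values. The cone
# half-angle and frame interval are illustrative assumptions.
import math

def gaze_heat(frames, audience_positions, half_angle_deg=20.0, dt=1/30):
    """frames: list of (head_pos, gaze_dir) tuples; gaze_dir is unit-length."""
    cos_limit = math.cos(math.radians(half_angle_deg))
    heat = [0.0] * len(audience_positions)
    for head, gaze in frames:
        for i, seat in enumerate(audience_positions):
            to_seat = tuple(s - h for s, h in zip(seat, head))
            norm = math.sqrt(sum(c * c for c in to_seat))
            if norm == 0:
                continue
            cos_angle = sum(g * c for g, c in zip(gaze, to_seat)) / norm
            if cos_angle >= cos_limit:     # seat falls inside the gaze cone
                heat[i] += dt              # accumulate looked-at time
    peak = max(heat) or 1.0
    return [h / peak for h in heat]        # normalized 0..1 heat values
```

Normalizing against the most-looked-at seat keeps the map “indicative” rather than “evaluative”: it shows where attention went relative to the rest of the room without scoring the presenter.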
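The playback control described above (pause, resume, and scrubbing the timeline) can be sketched as a small controller. The Python class and method names here are invented for illustration; the real system’s implementation is not shown in this writeup.

```python
# Illustrative sketch of Review-mode playback control: the user can
# pause, resume, and scrub the timeline to dwell on specific sections.
class PlaybackController:
    def __init__(self, duration):
        self.duration = duration   # recording length in seconds
        self.position = 0.0        # current playhead time
        self.playing = False

    def play(self):
        self.playing = True

    def pause(self):
        self.playing = False

    def seek(self, t):
        """Jump the playhead, clamped to the recording's bounds."""
        self.position = min(max(t, 0.0), self.duration)

    def tick(self, dt):
        """Advance the playhead while playing; stop at the end."""
        if self.playing:
            self.position = min(self.position + dt, self.duration)
            if self.position >= self.duration:
                self.playing = False
```

The playhead position from a controller like this would drive both the avatar frame lookup and the timeline UI.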
Playtesting on Peers
-Playtests were conducted frequently and consistently throughout development; we engaged closely with our target audience from the beginning to the end of the product development cycle.
-We had direct access to our target demographic, especially non-native English speakers, since a significant portion of the class cohort were international students.
-Initially, we playtested each feature separately to validate the design questions relevant to that feature. As the individual features were integrated into the system, the playtest sessions grew longer and longer.
Playtesting on Ourselves
-As an ultimate test of the system’s efficacy, we tested the application on ourselves.
-During the final weeks of the semester, we used the application extensively to prepare for our final presentation of the project. This was the most important and nerve-racking experience for many of us: standing in front of the school’s entire assembly of faculty, staff, fellow classmates, and even strangers, all the more so for non-native speakers of English. In fact, we took it a step further and demonstrated the use case by rehearsing the final presentation inside the application itself: a presentation within a presentation.
-The team’s performance and confidence improved significantly over a period of two weeks using the application on a daily basis.
-By the end of the semester, we delivered the following product package to our clients:
Installation & Deployment in Classroom
-We visited the Heinz school to set up the system for use.
-Along with the documentation, we hosted training sessions with Kim’s Teaching Assistants to ensure that they had a comprehensive knowledge of the system as the facilitators / supervisors overseeing the students.
-Currently, the Re-Present system is being used in Professor Hyatt’s courses in strategic presentation.
-Kim was awarded Carnegie Mellon University’s Teaching Innovation Award in 2020.
-The performance data and results from Kim’s class have been published in research papers.