Sensitized me to the difficulties of designing for Mixed Reality
Showed me the importance of understanding the technology you are designing for

The Problem

The project's previous client application ran on a smartphone. The team wanted to switch to smart glasses so that clients could use both hands. They bought the Vuzix M400 smart glasses and wanted to build a new prototype with them, but had no prior experience with the technology.

My Process

Step 1: Understanding the needs of the project

I had meetings with multiple team members to understand their concept and their expectations for the new technology.

Step 2: Experimenting with and educating about the new technology

I started researching the glasses and tried out different methods of developing for them. I evaluated different ways of accessing the camera stream, streaming data to a server, and checking which AR libraries work with the Vuzix M400.
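One of the approaches I evaluated, streaming camera data to a server, can be illustrated with a minimal length-prefixed framing protocol. This is a generic sketch, not the Vuzix SDK or the actual prototype code; the byte strings stand in for JPEG-encoded camera frames:

```python
import socket
import struct

def send_frame(sock: socket.socket, frame: bytes) -> None:
    """Send one frame, prefixed with its 4-byte big-endian length."""
    sock.sendall(struct.pack(">I", len(frame)) + frame)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes (TCP may deliver partial reads)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock: socket.socket) -> bytes:
    """Receive one length-prefixed frame."""
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# Loopback demo: in the prototype, the sender side would run on the
# glasses and the receiver side on the server.
client, server = socket.socketpair()
send_frame(client, b"jpeg-frame-bytes")
received = recv_frame(server)
```

The explicit length prefix matters because a TCP stream has no message boundaries of its own; without it, consecutive frames would blur together on the receiving end.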

After that, I started with the prototype. A previous designer had prepared screens for it, but they turned out to be impractical. Lacking information about the smart glasses' capabilities, their design still targeted smartphones.

This led to the following issues:

  • The existing screens contained many UI elements and did not fit the smart glasses' context, where selecting and clicking buttons are more challenging.
  • There were misconceptions about the smart glasses. For example, clients thought the glasses' Field of View was much larger than it actually is.

Problems I encountered, and how I handled them

I reported these issues to the team and proposed compromises and workarounds. Because of the misconceptions mentioned above, I mediated between project members, sometimes as a designer and sometimes as a developer.

Step 3: Building a proof of concept

I developed a Unity application for the prototype. Due to limited resources and time constraints, the focus lay more on proving that the functionality works than on refining the application's interaction.
While the initial idea was to use gestures for interaction, we decided on voice commands instead, because optical gesture detection turned out to be unreliable within the FOV of the Vuzix M400. The application still uses hand detection to select objects.
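The voice-command interaction can be sketched as a simple dispatch table mapping recognized phrases to actions. This is a hedged illustration only; the actual prototype was a Unity application, and the command phrases and handler names here are made up:

```python
from typing import Callable, Dict

class VoiceCommandRouter:
    """Maps recognized voice phrases to handler functions."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[], str]] = {}

    def register(self, phrase: str, handler: Callable[[], str]) -> None:
        # Normalize phrases so recognition casing does not matter.
        self._handlers[phrase.strip().lower()] = handler

    def dispatch(self, recognized: str) -> str:
        handler = self._handlers.get(recognized.strip().lower())
        if handler is None:
            return "unknown command"
        return handler()

# Hypothetical commands, for illustration only.
router = VoiceCommandRouter()
router.register("select", lambda: "object selected")
router.register("take photo", lambda: "photo captured")
```

A lookup table like this keeps the set of supported phrases small and explicit, which suits smart glasses, where a short, predictable vocabulary is easier for users to remember than free-form speech.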

The Outcome

The result demonstrates the core functionality of the concept. We shot a demo video and presented the product to the manager and the clients. The application was later tested with users.

Thank you for taking the time
to read about my project!