I designed an interface for Loki’s iconic scepter.
WHY I DID IT
Superheroes and supervillains do a lot of cool stuff with their gadgets and weapons in the movies, but we don’t actually know how most of it is done. So, as part of my HCC 613 – Prototyping class, I decided to install a screen onto Loki’s scepter and explain how he does the things he does with it.
HOW I DID IT
Collecting information about the user is essential in any design process; it helps me connect with the user empathetically and design better products. So, I collected data about Loki from various sources: movies, comics, and Wikipedia. Also, as Loki is one of the most famous characters in the MCU, there are many fandom pages with tons of information.
User Persona
- Name: Loki, Norse God of Fire
- Gender: Male (but can change genders via shapeshifting)
- Languages: Speaks multiple languages, including English
- Known for: His mischief
- Abilities: Can do everything an average human can; he is also a shapeshifter and a trickster
- Nature / character: Cowardly and selfish, but can be good and help others sometimes
- Dominant hand: Right, with which he wields his scepter
- Experience with technology: Very little
- Empathy: Cannot be read
- Favorite color: Green; he always wears green
- Weapon: Scepter with a Mind Stone
- How he got the scepter: Thanos gave it to him
Task and Environment Analysis
Once I had gathered enough information about the user, I started collecting data about the different tasks he can perform and all the locations where he can perform them.
- Firing energy blasts – Loki has done this in many environments.
- Astral projection – He astral projects himself to the Chitauri realm to contact the enemy leader and report the status of the attack, after capturing Hawkeye and Dr. Selvig while they were working on building a portal to the Chitauri realm.
- Mind control – He controls the minds of Hawkeye and Dr. Selvig when he first arrives on Earth through the portal opened by the Tesseract. This happens in an underground facility built by SHIELD to perform experiments on the Tesseract.
After gathering enough information about the user, tasks, and environment, I started ideating for the screen. I chose mind mapping to ideate and selected a direct-touch interface, such as a fingerprint sensor or a button on the device.
Finally, I started sketching the screen to point myself in the right direction.