Human communication over digital devices has seen major advances in the last decades. We communicate over text messages, video calls and more, yet it still feels somewhat flat and emotionless. Can haptics bring back some of the qualities of direct human-to-human communication?
With my bachelor thesis I want to explore the possibilities and limitations that haptic feedback offers when we communicate with each other.
While my intention is not to target one specific device, I've decided to use the Taptic Engine present in the latest Apple phones, as it offers high resolution and precision while allowing me to program and iterate easily.
The focus of my thesis is to understand digital communication and find ways in which haptics can transmit and amplify some of the characteristics of direct human interaction.
First of all: What are haptics?
The Merriam-Webster dictionary has defined it since 2018 as:
„the use of electronically or mechanically generated movement that a user experiences through the sense of touch as part of an interface (such as on a gaming console or smartphone).“ (Merriam-Webster, n.d.).
Vibrotactile technology was present in the first smartphones and offered a good alternative to audio cues for notifying the user. Since then, haptic feedback generators have become more precise and responsive, allowing a seamless integration into user interface interaction. Apple forged the path to get the public used to haptic interactions with the introduction of the Taptic Engine in 2014 (Apple, 2014), which is now present in most modern iPhones, Apple Watches and MacBooks.
As part of my work I will explore the advantages and limitations of the Taptic Engine in detail and the possibilities it presents to enhance human communication.
I want to develop a real, working prototype to test my assumptions as well as possible, but first I have to understand how digital communication evolved to this point, how haptics have changed over time and how exactly the Taptic Engine works.
Language has undergone a crucial development with the introduction of digital forms of communication. It mostly has to do with the kind of language we see every day, as we are surrounded by informal writing on the internet. As Gretchen McCulloch points out: „The internet and mobile devices have brought us an explosion of writing by normal people […] We write all the time now, and most of what we are writing is informal" (McCulloch, 2019, p. 2).
The number of mobile internet users has increased from one to five billion from 2003 to 2018 (GSMA Intelligence, 2017) and the number of text messages sent has increased by 7700% (Statistic Brain, 2017) over the last decade.
McCulloch is a linguist who specializes in studying the evolution of communication and in understanding how the rules of language have changed with the introduction of the internet.
In her book „Because Internet" she describes this evolution as starting in the early 90s with the people who used computers before they were mainstream and went online before their friends did. Excited about the possibilities this new form of communication brought, they were highly skilled technologically and willing to pave the way for the next generation.
From the late 90s to the early 00s, internet users split into two kinds of people: those who embraced the internet, learned its rules by immersing themselves and used it as an escape, and those who got online for work and expanded from there to other tasks like reading newspapers or online shopping…
From then until now, there has also been a split between people who don't remember a time before the internet, and were therefore socially primed from the beginning to accept the new rules, and those who tried their best to resist going online as long as possible (McCulloch, 2019, Chapter 3).
In the same way that informal speech uses gestures and different tones of voice to better express a message, writing has adopted similar means to express emotion and to better transmit what we are thinking and feeling.
McCulloch points out the development of a typographical tone of voice (McCulloch, 2019, Chapter 4) that came with the use of digital communication to express emotion or assimilate human characteristics only using text:
The creation of pauses between sentences or phrases each time we press the „return" button and send the message off. This proves an efficient form of live communication, as the reader can already see part of the message and prepare a response before the sender has even finished making their point.
The way we use the ellipsis (…) between sentences may indicate that there is something left unsaid. This can also take on different meanings depending on the context and people speaking.
The period is used in longer messages that tend to have a more serious tone, often expressing anger or strong emotion (Steinmetz, 2016).
ALL CAPS usually amplifies the meaning of a message, making happy messages even happier and sad ones even sadder (Heath, 2018).
Repeated letters also add emphasis and emotion („Yessss", „Nooo") or convey sounds („Ahhh", „hmmm").
The single exclamation mark may convey warmth and sincerity. In contrast, using multiple ones after another expresses enthusiasm (Beck, 2018).
An obvious example of the integration of emotion and gestures into our day-to-day writing is the use of gifs in text messages. These soundless short videos can capture a wide range of emotions and compress a lot of information into something that's easily shareable. The most engaging gifs have been found to be those with faces in them (Bakhshi et al., 2016), trying to convey our own reaction through a pre-recorded representation of it.
Emojis serve a similar purpose: they are static and focus on describing a specific emotion or object. A study conducted by SwiftKey (a keyboard app for Android) in 2016 showed these results (Medlock et al., 2016):
Among the emoji categories, the most used were, in descending order: happy faces, sad faces, hearts, hand gestures and romantic emojis.
Of the top ten most used emojis, eight are faces.
Emojis were generally used to communicate happiness: the data shows that 70% of usage expressed positive emotions, 15% neutral and 15% negative.
The „tears-of-joy" emoji was the most popular in terms of online usage and was selected as Word of the Year 2015 by Oxford Dictionaries (Oxford Dictionaries, 2017).
As popular as emojis are, they still can't replace written language when trying to communicate something more specific than a happy reaction. The same SwiftKey study concluded that the majority of emoji-only messages contained one or two emojis, and longer ones were hard to find. This can also be illustrated by the app Emojli, launched in 2014 as the first emoji-only messenger. It was created as a joke and saw short-lived viral success with over 60,000 downloads, but closed nine months later because of its low usage and maintenance costs (Gray et al., 2015).
Erika Darics's analysis of politeness in workplace-related, text-based communication (Darics, 2010, pp. 129-150) indicates that the emoticons and emotion expressed over text change in a work context and amplify politeness. This politeness is generated to construct an informal working environment and achieve successful cooperation.
These two real examples from her study illustrate how politeness is amplified.
The first one is an excerpt of a social conversation between two coworkers; Liz has just returned after a long-term illness. Both try to handle the conversation politely: Sarah suggests that it is her own fault that they haven't kept in touch, while Liz simultaneously thanks her for keeping in touch and signals that it wasn't Sarah's fault by multiplying the vowels in the word „no".
The second is an excerpt of a deadline reminder from a boss to an employee. It changes the significance of the smiley, which clearly doesn't function as the representation of a smile, and the capitals are not meant to be read as shouting; instead, they symbolize a friendly reminder.
In conclusion, human communication shows certain patterns for transforming emotion into written text, but we have to keep in mind that the context in which the communication happens matters: sometimes it amplifies certain aspects, sometimes it shifts the meaning.
Haptic feedback has developed rapidly in the recent past, but its history began with the discovery and subsequent exploration of electricity.
The Italian physicist Alessandro Volta, credited with the invention of the electric battery, began exploring the effects of electricity applied to the human body. At the end of the 18th century he applied electric shocks to his own eyes, ears, nose, tongue and skin and precisely documented their impact, using electricity as a tool to explore the nature of sensory perception. David Parisi, associate professor of emerging media at the College of Charleston in South Carolina, describes this as the first of five waves in which haptic feedback has developed (Parisi, 2018).
The second wave of progress dates to the 19th century and introduced the term „haptics", later defined as the „doctrine of touch" (Baldwin, 1911). It centered on touch itself and studied the sense through specialized laboratory instruments. Thanks to the experiments of Ernst Heinrich Weber in the 1820s, the sense of touch was divided into subcomponents: pressure, weight, temperature, pain and movement. With his „two-point threshold" technique he helped map out the body's sensitivity and touch acuity (Parisi, 2018, p. 6).
In the next phase this knowledge was instrumentalized, as researchers recognized touch as something that could be studied and improved upon. The machines built by Frank Geldard around the 1950s could translate sounds, images and words into signals for the skin. His experiments pushed haptics toward becoming a legitimate way of transmitting information (Geldard, 1956).
Late in the 20th century came the fourth wave, as computer scientists found ways to create touch experiences that could be stored, transmitted and synthesized by computers, making it possible to feel digital experiences. A good example of this is the „Tactile Television" developed by Paul Bach-y-Rita.
This led designers to work together with hapticians to develop more effective user interfaces. An important shift in the focus of touch studies occurred when Marvin Minsky argued that it was time to translate „feel into feel" rather than trying to transcribe images and sounds into touch (Minsky, 1980).
The fifth wave saw the commercialization of haptics. Big brands like Nintendo, Sony and Apple incorporated haptics into their electronics. Markets created a demand for touch and saw an opportunity to reconnect with the lost sense. Up to this point, haptics had presented a design problem; from then on, it also presented a marketing challenge.
Precise haptic feedback is increasingly integrated into smartphones to simulate and enhance certain interactions. It is even possible to trigger haptic feedback from the browser on modern Android devices (MDN Web Docs, n.d.).
As Ivan Sutherland already predicted in 1965, the evolution of computers would allow information to be presented to as many senses as possible. He envisioned a system that could capture human movement, combined with a force feedback generator that would simulate the body's physical interaction with matter. As Michael Abrash, software developer at Oculus, explained at the Oculus Connect 2 conference in 2015: „haptics is at the core of the way we interact with our surroundings, and without it, we'll never be embodied in a virtual world" (Oculus, 2015, October 3).
A prime example of this tendency are some of the virtual reality systems developed in recent years. There are multiple approaches to achieving a good level of immersion, and each presents different challenges.
The Teslasuit, for example, incorporates haptic feedback, biometrics, motion capture and temperature control into a full-body suit that connects to a computer to simulate immersive experiences. With 80 individually controllable electrostimulation channels it achieves a convincing full-body immersion, but because it tries to solve multiple problems at once, it can't provide fine-tuned haptic feedback to specific parts of the body (Teslasuit, n.d.).
The complete opposite approach is taken by HaptX, a company focused on the sense of touch of the hands in a VR environment. Its glove incorporates multiple arrays of air channels that change shape to simulate touch. This is combined with a force feedback system that controls how far the individual fingers can move, achieved by connecting the tip of each finger to a release system. This technology offers far more precise interactions, but restricts the user's movement, as the air-supply equipment must be directly connected to the glove (HaptX, n.d.).
There are multiple approaches in between that try to find the right balance between accuracy and the user's comfort and freedom of movement.
Ultrahaptics takes a very different approach. With the Stratos Inspire module, they have created a haptic feedback generator that simulates the sense of touch in mid-air using ultrasound. It can recreate the surface of nearby objects that the user can feel without putting on or strapping on any device, as it is detached from the user's body (Ultrahaptics, n.d.). The module combines the feedback generator with a Leap Motion camera to capture the exact position of the user's hand. It is mostly used in installations to let visitors interact with a 3D object and feel it at the same time (Leap Motion, n.d.).
Vibrotactile generators are the most common type of haptic technology; they produce vibrations through the movement of a mass.
They are used in mobile phones to notify the user in situations where sound would be disruptive, in gaming controllers to enhance the experience, and in many other devices.
The movement is detected by the Pacinian corpuscle, a skin receptor specialized in vibrations. This onion-like receptor senses vibration optimally at 250 hertz (cycles per second, a measure of frequency) (Skedung et al., 2013). It can sense vibrations originating centimeters away from the skin and, under the right conditions, low-frequency sounds or even human speech (Kandel, 2000).
This allows for a very efficient way to simulate touch, as devices can produce accurate feedback with very little movement. Sensitivity varies across the body. The hand, for example, is one of the most sensitive parts, as it has the highest number of receptors, about 300, with 48 to 60% of them in the fingers (Stark et al., 1998).
There are different types of vibrotactile generator technologies. We can differentiate between ERM (Eccentric Rotating Mass), LRA (Linear Resonant Actuator) and forced-impact actuators.
ERM motors are the most common and have been widely used since the invention of DC motors. They consist of a fixed motor that rotates an unbalanced mass attached to its shaft. The imbalance of the rotating mass produces movement, and as a result the device vibrates. As DC motors are cheap, small and uncomplicated, this approach is very cost-effective. ERMs are also easy to program, as the applied voltage translates directly into vibration frequency. They have been used in most mobile phones since their introduction and in video game controllers since the 90s. The two most recent popular gaming controllers, the Xbox One controller and the DualShock 4 for PlayStation 4, both use ERM haptic generators to create a more immersive gaming experience.
They also present some major problems:
They are often imprecise, because they can only be controlled through the amount of voltage provided. This can be improved with PWM (pulse-width modulation) and extensive testing of the supplied voltage, but in general they are not a good option for precise haptic feedback.
They can't provide instant feedback to a signal, as the motor has to accelerate to a specific speed to produce the desired vibration and also can't decelerate instantly.
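To make the voltage-based control concrete, here is a minimal Python sketch (hypothetical, not taken from any real driver library) of how a desired intensity could be mapped to a PWM duty cycle; the minimum duty cycle of 0.3, below which the motor stalls, is an assumed value for illustration:

```python
def erm_duty_cycle(intensity: float, min_duty: float = 0.3, max_duty: float = 1.0) -> float:
    """Map a desired vibration intensity (0.0 to 1.0) to a PWM duty cycle.

    ERM motors stall below a minimum drive voltage, so the usable
    duty-cycle range starts above zero; min_duty = 0.3 is an assumed
    value that would need to be found by testing on real hardware.
    """
    if intensity <= 0.0:
        return 0.0  # motor off
    intensity = min(intensity, 1.0)
    # Linear mapping: higher duty cycle -> higher average voltage -> faster
    # rotation -> stronger *and* higher-frequency vibration. This coupling
    # of frequency and amplitude is the core imprecision of ERMs.
    return min_duty + (max_duty - min_duty) * intensity
```

The sketch also makes the first limitation visible: intensity and frequency cannot be chosen independently, because both depend on the same rotation speed.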
LRAs take a different approach. As the name suggests, instead of generating vibrations from rotation, a linear actuator is used: a piece of metal held in place by springs is accelerated along a linear path by electromagnets. This acceleration makes the device vibrate. An LRA works much like a speaker: it can change the frequency and strength of a vibration and create both subtle and sharp sensations. We will analyze this in detail in the Taptic Engine chapter.
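Because an LRA is driven like a speaker, its input is an oscillating signal at the actuator's resonant frequency, with the amplitude controlling strength. The following Python sketch is purely illustrative (the 175 Hz resonant frequency, the sample rate and the linear fade-out are assumed values, not taken from any datasheet):

```python
import math

def lra_drive_samples(duration_s: float = 0.02, freq_hz: float = 175.0,
                      amplitude: float = 1.0, sample_rate: int = 8000) -> list:
    """Generate drive-signal samples for an LRA: a sine wave at the
    actuator's resonant frequency with a linear fade-out envelope,
    so the vibration decays smoothly instead of stopping abruptly."""
    n = int(duration_s * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        envelope = 1.0 - i / n  # fade amplitude linearly from 1 to 0
        samples.append(amplitude * envelope * math.sin(2 * math.pi * freq_hz * t))
    return samples
```

LRAs respond strongly only in a narrow band around their resonant frequency, which is why real drivers keep `freq_hz` essentially fixed and shape the sensation through the amplitude envelope instead.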
The best example for this kind of feedback generator is the Taptic Engine presented by Apple in September 2014 with the introduction of the Apple Watch. It has evolved over the years, and it is used in different devices for different purposes, but the main principle is still the same: To simulate precise and responsive haptic feedback.
This precision also comes at some cost:
They are expensive to manufacture, as they are composed of multiple small, delicate parts. This is clearly changing, as demand for this kind of technology has increased in recent years.
They are relatively large. As the movement is linear rather than rotating around an axis, the actuator needs room to travel (see image).
There is an obvious shift towards LRA technology, as it represents the future of precise haptic feedback. A good example is the announcement of the PlayStation 5: in the official blog post, the subheadline directly hints at new haptic technology in the controller, and the article expands on that, presenting LRA feedback as one of the main features.
“One of our goals with the next generation is to deepen the feeling of immersion when you play games, and we had the opportunity with our new controller to reimagine how the sense of touch can add to that immersion.” (Ryan, 2018)
To ground the assumption that haptic feedback enhances digital experiences, it is worth looking at some studies that have been done in this regard:
A paper focused on haptic instant messaging (Rovers et al., 2004) found that haptics can help compensate for the loss of subtle non-verbal communication cues. Implementing haptics in a text messaging context proved to be an effective way of enhancing the communication between users.
The setup of the messaging application was static, as it only ran on desktop computers, but it still showed a lot of potential.
Other studies show that haptic feedback is fast to process and feels close to us, as it interacts with our perceived personal space.
When implemented in cars it proves to be an effective substitute to visual and audio feedback as it is perceived faster (Scott and Gray, 2008). It can reduce the braking reaction time from around 1.6 to 1.4 seconds (Ahtamad et al. 2015).
IPG Media Lab released a report in 2017, in partnership with Immersion Corp., that combines haptic feedback with advertisements and analyzes the improvement in reception compared to ads without haptics (IPGLab, Magma, Immersion, 2017).
Immersion Corp. focuses on creating haptic experiences and licensing those to third parties. They hold over 3200 patents related to haptic technology and implementation and they are at the forefront of haptic technology (Immersion, 2019).
The report collected data from 1100 participants who watched two pre-roll ads before accessing digital content. Each participant was randomly shown one of six versions: a control ad without haptics, and five with:
High Density Haptics
Low Density Haptics
No Notification of Haptics
Unbranded Notification of Haptics
Branded Notification of Haptics
The results of the report show that in general haptics improve:
The reported excitement and happiness while watching the ad.
The brand's image as exciting.
The connection the user feels to the brand.
The brand's favorability.
They also found that low-density haptic feedback (with approximately ⅔ of the effects of high density) is more effective in brand favorability, purchase consideration, recommendation intent and relevance than high density, which was reported as more overwhelming.
As already mentioned, the Taptic Engine was released by Apple in September 2014 with the presentation of the first generation Apple Watch (Apple, 2014).
Apple designed various features for the Apple Watch that sync audio, visuals and haptics to further improve the experience, enabling the user, for example, to send a heartbeat via iMessage on the Apple Watch, which the receiver feels as haptic feedback.
After five years, the Taptic Engine has found a place in almost all Apple devices, as it is present in modern iPhones, MacBooks and Watches. It also comes in various shapes and sizes, as it tries to solve different problems in each type of device. We will explore the role it has in each of them in the following sections.
In the Apple Watch it plays an essential part of the experience. As the device is always directly touching the skin, the vibration feedback is more noticeable than in other devices.
Some features could not function without haptic feedback:
In other features haptics significantly help to improve the experience but aren't as essential:
Some patterns play a less important role but still help enhance the experience:
I wanted to know how users perceived haptics on the Apple Watch and whether they relied on it as heavily as I thought, so I asked on Reddit. Of the 61 users who responded in the first days, 57 reported relying solely on haptics. Seven of them said they tried audio feedback for some time but decided to leave it off because the notifications were distracting, while 50 said they turned audio off completely from day one (varusgarcia, 2019).
In conclusion, the Apple Watch serves as an example of technology that integrates haptics as a core part of the experience and at the same time achieves a high user acceptance.
In the case of the iPhone, the Taptic Engine was introduced with the launch of the iPhone 7 in 2016. As the phone became waterproof, the physical home button had to be replaced with a touch sensor and a simulated „click", as this allowed the phone to be completely sealed from the outside. Apple solved this with the Taptic Engine, which also gave users the ability to choose between different button „feels" (Apple, 2016, 1:00:10).
Compared to the Apple Watch, haptics don't play such a major role in the iPhone, as it can't rely on the user being always in direct contact with the phone.
The main task of haptics on the iPhone, besides notifying the user, is to add a feel to certain interactions. Here are the most common ones:
Failed authentication. When Face ID or Touch ID fails to identify the user, a light pattern synced with animation is played to make clear that the authentication process has failed.
Success. To underline that a process has finished successfully, a prominent haptic pattern is played. The best example is a confirmed authentication in the App Store: the green check mark is animated while a sound and a haptic pattern play at the same time.
Some apps and games also use Haptics:
On the MacBook lineup (MacBook Air and MacBook Pro), haptics have been used since the launch of the 12-inch model in early 2015. Apple removed all moving parts from the trackpad's „click" and instead uses a force sensor to measure the pressure of the finger and the Taptic Engine to simulate the „click", an approach later also seen on the iPhone. This allows the user to click anywhere on the trackpad and feel a similar „click" feedback, instead of having to press the bottom edge. It also lets the user tune that sensation to feel light, medium or hard, or disable it entirely (Apple, 2015, 35:26). I've noticed this personalisation most when I use someone else's computer and realise how used I am to my own settings.
The approach to haptics on the MacBook differs from the other devices, as the feedback can only be noticed when the user is actively touching the trackpad; the device itself is, in most cases, not held by the user. This limits haptics to fewer situations, and they are not used as a reliable channel for conveying information but rather as support for visual or audio cues.
The MacBook uses haptics on some other occasions:
QuickTime forward/backward. When playing a video in QuickTime Player, the user can control the navigation speed when scrubbing forward or backward by applying more or less force on the corresponding button. When the force level changes, a light haptic hint indicates it.
Keynote. Haptics play a very subtle role: when moving an element on the canvas, a small haptic feedback is played whenever it aligns to the grid or a ruler.
I can see a pattern emerging from the analysis of these devices:
The more attached a device is to the body, the more haptic features it gets, as it can rely more on this type of feedback.
If we extrapolate this rule into the future, we can see the potential this technology has to support interface design, as wearables are starting to be of real use rather than a gimmick.
With my thesis I wanted to explore the state of the art of haptics to apply it and try to enhance human communication. For that I also needed to understand how haptics are generated and programmed as well as analyze the tools available for developers to create these experiences.
Apple has some recommendations for developers on how to handle haptics, setting the ground rules for the use of this technology (Apple, n.d.-a; Apple, n.d.-b; Apple, n.d.-c).
Not to overuse haptics, reserving them for important situations where they provide long-lasting value to the app beyond the novelty factor.
To use the patterns Apple provides only for their intended purpose, as Apple wants to establish a haptic language that carries across the whole Apple experience, and apps should follow these conventions.
To use haptics in a consistent way, so that the user can get used to the patterns.
To create visual and audio cues that play in sync with the haptics to provide a coherent experience.
To take into account the delay that haptics bring with them, as the Taptic Engine needs some time to execute the feedback.
It is clear that Apple itself follows these rules very closely when developing its own apps.
As I have described before, the Apple Watch is the device in Apple's lineup where haptic feedback makes the most sense, as it almost always stays attached to the skin, and for some features it manages to transmit valuable information by itself. From a developer standpoint though, the Apple Watch doesn't allow for much customization, only having a few patterns to choose from: Notification, Up, Down, Success, Failure, Retry, Start, Stop, Click. (Apple, n.d.-a). These are directly tied to sound and can't be played exclusively with haptics.
This closed system gives Apple an advantage over third-party apps, because only Apple can utilize custom patterns, but it also limits the overload users might experience if custom haptics played constantly.
On the MacBook, developers have only three patterns to choose from (Apple, n.d.-c), underlining the limited role haptics play there. They can choose from:
Alignment feedback, used when a dragged element comes into contact with other elements or snaps to a guide or grid.
Level change, triggered when the user applies pressure to a button with multiple force levels; it plays when the level changes to notify the user.
Generic feedback, a light tap intended for situations the other two can't cover.
On the iPhone developers can easily implement the standard three haptics for notifying the user about the state of a task or action with Success, Warning, Failure. They can also choose between five different taps to provide other feedback: Light, Medium, Heavy, Rigid and Soft. Additionally there is the Selection feedback (Apple, n.d.-b).
Since the launch of iOS 13 in October 2019, developers can play custom haptic patterns on the iPhone while their app is in use. This means that new patterns can also be generated dynamically at runtime.
This is precisely what motivated me to pursue the claim that communication could be enhanced, as it allows for user-generated patterns that diverge from the standardized ones Apple provides.
Apple has defined two kinds of events to construct a haptic pattern:
Transient events have a fixed duration, they are brief and compact impulses.
Continuous events have no defined duration, they sustain the vibration for longer times.
These two events each have intensity and sharpness values that must be specified:
The intensity controls the strength of the event. This means that the event will feel more prominent with more intensity and more discreet and subtle with less.
The sharpness of an event determines if it will feel soft, organic (less sharpness) or crisp and precise (more sharpness).
These two combined can create many different sensations, from strong organic events (high intensity and low sharpness) to discreet but sharp events (using low intensity and high sharpness) and everything in between, to best adapt to particular situations.
Each event must also have a start time specified, as events are played one after another in a specific timing and order. In addition, continuous events need a duration, to define how long each has to be played.
Finally, the values for sharpness and intensity can be changed over time with a parameter curve, which interpolates one of these values progressively. This creates a smooth transition from one value to another without having to specify every intermediate value, and allows a vibration, for example, to change from feeling organic to feeling crisp.
Each event is defined and included in a pattern that the iPhone can interpret and play.
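To illustrate, the sketch below builds such a pattern in Python, in the shape of Apple's AHAP format (the JSON representation of a Core Haptics pattern): a sharp transient tap, followed by a continuous event whose intensity is faded out by a parameter curve. The timing and parameter values are my own illustrative choices, not a pattern from any Apple sample:

```python
import json

def haptic_event(event_type, time, intensity, sharpness, duration=None):
    """Build one event in the shape of Apple's AHAP pattern format."""
    event = {
        "Time": time,
        "EventType": event_type,  # "HapticTransient" or "HapticContinuous"
        "EventParameters": [
            {"ParameterID": "HapticIntensity", "ParameterValue": intensity},
            {"ParameterID": "HapticSharpness", "ParameterValue": sharpness},
        ],
    }
    if duration is not None:  # only continuous events carry a duration
        event["EventDuration"] = duration
    return {"Event": event}

pattern = {
    "Version": 1.0,
    "Pattern": [
        # A brief, crisp tap at t = 0 (high intensity, high sharpness).
        haptic_event("HapticTransient", 0.0, intensity=1.0, sharpness=0.8),
        # A sustained, soft vibration starting at t = 0.2 for half a second.
        haptic_event("HapticContinuous", 0.2, intensity=0.8, sharpness=0.2,
                     duration=0.5),
        # A parameter curve fading the continuous event's intensity to zero.
        {"ParameterCurve": {
            "ParameterID": "HapticIntensityControl",
            "Time": 0.2,
            "ParameterCurveControlPoints": [
                {"Time": 0.0, "ParameterValue": 1.0},
                {"Time": 0.5, "ParameterValue": 0.0},
            ],
        }},
    ],
}

# Serialize to an .ahap-style JSON document that an iPhone could interpret.
print(json.dumps(pattern, indent=2))
```

On the device itself, such a pattern would be played with Core Haptics in Swift; this Python version only demonstrates the structure of events, parameters and curves described above.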
There are tools that help generate these patterns without manually defining each value. While experimenting with custom haptics I used Captain Ahap, a web tool by Fancy Pixel where I could create haptic patterns directly in the browser and play them on the iPhone by scanning a QR code. It was easy to use but also quite imprecise, and it didn't allow for transitions, a vital ingredient in making haptics feel natural (Fancy Pixel, n.d.).
Haptrix is a Mac app that offers more precise features, like the creation of transitions and the option to sync a pattern with audio (Core Haptic Designer, n.d.).
Before I describe the concept I conceived, I want to reflect on the lessons I learned throughout my research that led me to develop this possible solution.
The first concepts I came up with focused on solving too many problems at once. I wanted to define a system for people to communicate over haptics, as this proved to be a valid method to transmit information.
While this might be a correct assumption in theory, since haptics can transmit organic patterns that humans can interpret, it proves difficult in practice: haptic patterns are hard to decipher without the help of a more objective medium.
I quickly realized that I had to focus on one of two directions: using haptics to enhance other, established communication methods by providing a subjective signal that supports the more objective information, or using them to generate small haptic patterns with a simple meaning that could be universally understood.
While researching the basics of human communication, a clear parallel between real-life interactions and haptics emerged for me.
In direct communication with one another, we use all kinds of gestures and subjective cues to support what we are trying to convey. All this additional information plays a part in transmitting and understanding emotion, but it is always meant as a support for speech, never a replacement. This becomes apparent when we cannot hear each other because of loud noise or a physical barrier between us: we tend to fall back on simple gestures that are embedded in our culture and would not even try to explain a complex concept without a more objective means of communication.
This led me to the idea that gestures could be transmitted to one another using haptics while communicating digitally. It would be limited to live, real-time communication and primarily focus on handheld video calls. The movement of the hands could be tracked by the device itself or by a companion device such as the Apple Watch, then processed and sent along with the audiovisual signal to the other device in real time. This way, the video call conversation would be enhanced by the addition of an organic transmission of gestures.
This option has real potential and could be developed further, but I also saw some problems from the start. It would be difficult for me to implement in real time, as the movement of the gesture would need to be analyzed, converted to a haptic signal, transmitted to the receiver's device and played with minimal delay. Even a slight delay would probably make the feature feel detached from the gesture it is trying to enhance. I also realised that video and audio calls already solve many of the problems digital communication presents: tone of voice, facial expressions and gestures are already being transmitted, so haptics would not play such a big role.
I saw more potential in text messages: they presented a clearer path for haptics to communicate emotion, and I could see more possible approaches. As I presented at the beginning, text communication strives to convey emotions using symbolic characterizations of real interactions, gestures and facial expressions. I could also see possibilities in analyzing the text for cues that express emotion and interpreting them as haptic feedback.
The usual design process for a usability concept starts from a theoretical base generated from scenarios, statistics, user research and surveys. A concept is then developed and the end result replicated as closely as possible in order to test it. This process repeats until the desired goal is achieved, so that the result rests on solid evidence.
I saw that I had to approach the practical part at the same time as the theoretical one. Of course I had to understand the audience and their needs, but coming up with a haptic concept that conveys something as personal as the feeling of touch, without a real example to show, is very difficult. This meant that I had to assume the roles of designer and developer at the same time.
My approach to a new idea was to develop it down to its fundamental part and test it on myself and the people I had close by.
For me this process proved to be successful, as it could deliver a product that felt almost finished in a matter of days. Especially for haptics, it was essential to have a real prototype to show and not only a theoretical conclusion.
The downside of this approach is that app development can be tedious, as problems in one part of the code can block the rest from working. How quickly such problems are solved also depends on how established a technology is: for popular frameworks, the community has already figured out solutions to almost every problem. This was not really the case for Core Haptics (the framework for handling haptic feedback), which had only recently been released.
I developed a concept for a text messenger composed of various features that work together to provide a closer and more dynamic experience using haptic feedback.
The premise I wanted to explore came together while researching the evolution of communication: although we have become more dependent on digital devices and somewhat isolated by communicating over text, we still seek the human interaction and emotion that these devices replace.
HapText offers four features that help the user to express themselves in a more organic way. Each one focuses on a different aspect of messaging: text, typing, emoticons and notifications. These four are the basic building blocks of any text messaging experience so for each case I've added haptics with a different approach:
As I explored earlier, text can be a tool to convey emotions in many different ways. We have developed a set of unwritten rules that we apply constantly when texting each other. I wanted to apply subtle haptics to each of these, to further amplify the typographical language. I've chosen four rules:
Capitalization. As capital letters symbolize a louder tone of voice, the haptics feel intense but at the same time soft, creating a heavier, more organic sensation.
Exclamation Points. They feel intense and sharp. The intensity level rises with every consecutive one.
Questions. A rising continuous sensation is played, from low and soft to more intense and sharp, signifying the anticipation the question poses.
Repeating letters. A soft and subtle sensation is played for each repeating letter.
These rules can be combined, so that a capitalized message ending in a question mark would steadily increase its intensity up to the end.
This haptic sensation is always active, as its intent is only to indicate and enhance the tone of voice each message already carries. A message written in capitals already seeks attention; haptics only help to transmit it better.
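To make the mapping concrete, the four rules above could be sketched roughly like this; the regular expressions, thresholds and intensity values are my own illustrative choices, not the exact tuning used in the concept.

```python
import re

def text_to_events(message):
    """Map typographic cues in a message to rough haptic event specs.
    Rules and values are illustrative, not the thesis's exact tuning."""
    events = []
    # Rule 1: all-caps message -> intense but soft (heavy, organic).
    letters = [c for c in message if c.isalpha()]
    if letters and all(c.isupper() for c in letters):
        events.append(("continuous", {"intensity": 0.9, "sharpness": 0.2}))
    # Rule 2: exclamation points -> sharp, intensity rising per repeat.
    for run in re.findall(r"!+", message):
        for i in range(len(run)):
            events.append(("transient",
                           {"intensity": min(0.5 + 0.15 * i, 1.0),
                            "sharpness": 0.8}))
    # Rule 3: question -> a rising curve from soft/low to sharp/intense.
    if message.rstrip().endswith("?"):
        events.append(("curve", {"intensity": (0.2, 0.9),
                                 "sharpness": (0.1, 0.8)}))
    # Rule 4: runs of repeated letters -> one soft tap per extra letter.
    for m in re.finditer(r"([a-zA-Z])\1\1+", message):
        for _ in range(len(m.group()) - 1):
            events.append(("transient", {"intensity": 0.25,
                                         "sharpness": 0.2}))
    return events

print(text_to_events("NOOO WAY!!"))
```

A real implementation would merge these events into one timed pattern; the sketch only shows the rule-to-sensation mapping.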
This feature also applies to text but presents a completely different approach.
The feedback is generated automatically while typing: the force applied when pressing each letter is enough to imply the emphasis the user wants to convey with that keystroke.
This works because the high-precision gyroscopes and accelerometers built into iPhones can detect the slightest change in movement and rotation.
The receiver sees the message animated, with timing and haptic impulses playing together. The feature is meant for important messages, so constant use is discouraged, as it would lose its impact when overused. Because of this, it is not activated by default; the user has to long-press the send arrow to use it.
In this little test I conducted, the feature can be seen working. I typed the letters “A” and “L” repeatedly, one after another. With the letter “A” I tried to be as gentle as possible, while the letter “L” I pressed with more impulse.
Time graphs of gyroscope and accelerometer data for letter “A”
Time graphs of gyroscope and accelerometer data for letter “L”
If we analyze all the values generated by the sensors, a clear pattern emerges: the X and Y axes of the gyroscope and the X axis of the accelerometer provide the most reliable values. This makes sense, as these are the axes that shift slightly with each typed letter.
From there the data is analyzed and converted into sharpness and intensity values. Here there is room for improvement, as there are multiple ways to analyze the values.
I developed functions that would:
Determine the intensity value by analyzing how much a value has changed: the greater the change, the more intense the haptic pattern.
Determine the sharpness by checking how much time passed for the change in values to occur: a faster impact generates sharper haptic feedback.
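The two mappings could look roughly like this; the normalization constants `max_delta` and `slow` are assumed calibration values, since the exact figures are not part of the text.

```python
def clamp01(x):
    """Keep intensity and sharpness inside the valid 0..1 range."""
    return max(0.0, min(1.0, x))

def intensity_from_delta(delta, max_delta=2.0):
    """The larger the change in a sensor value, the stronger the haptic.
    max_delta is an assumed calibration constant, not a measured one."""
    return clamp01(abs(delta) / max_delta)

def sharpness_from_duration(dt, slow=0.25):
    """The faster the change happens, the sharper the feedback.
    A change taking `slow` seconds or longer maps to sharpness 0."""
    return clamp01(1.0 - dt / slow)

# A hard, fast keystroke vs. a gentle, slow one:
print(intensity_from_delta(1.6), sharpness_from_duration(0.05))  # strong, sharp
print(intensity_from_delta(0.2), sharpness_from_duration(0.20))  # subtle, soft
```

Per-user calibration, as suggested later for machine learning, would amount to adjusting these two constants over time.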
Haptic emoticons are animated and synced with a custom haptic pattern. Both visual and haptics are designed together to convey the emotion with the most fidelity.
As a proof of concept I used a few animations from the messaging app Telegram and designed a haptic pattern for each one. This proved to be a very effective way of showcasing the potential of haptics, as the feedback was manually created to fit the movement and mood of the animation.
From the available animated emoticons in the Telegram app, I chose these five as the movement in the animation presented the greatest potential:
Party face. The emoji blows a party horn and confetti is launched over his face.
My haptic pattern starts with medium intensity and sharpness and increases until the release of air. After that, individual transient events play as the confetti is launched.
Sad pensive face. The emoji slowly inhales and exhales and shakes its head in disappointment.
I used continuous haptic events with low intensity synchronized with the movement of the head to extend the melancholic feeling.
Party popper. The party popper rotates before launching a confetti cloud.
The Pattern I used here starts with low intensity and increases until the release of confetti.
Hatching chick. The egg shakes around before multiple cracks happen in the eggshell and the chick hatches and thrusts the top eggshell into the air.
The haptic pattern starts with low intensity and sharpness for the movement of the egg. They increase momentarily for each crack appearing in the eggshell. For the release of the shell into the air, the intensity increases quickly. As the chick looks around and flaps its wings, a more organic pattern is played using low intensity and medium sharpness.
Closed lock with key. The key rotates and moves until it is inserted in the lock and opens it. This is then reversed to its initial position.
The haptic feedback is low in sharpness and intensity for the movement of the key and increases rapidly when the lock opens to simulate the mechanical feeling.
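As an illustration of how such a designed pattern is structured in data, here is a sketch of the party popper: a ramping continuous event followed by a burst of transients for the confetti. All timings and values are illustrative guesses, not the patterns actually shipped with the prototype.

```python
import random

def party_popper_pattern(seed=0):
    """Sketch of the party popper pattern: a continuous event whose
    intensity ramps up, followed by a burst of short transients as
    the confetti flies. Timings and values are illustrative guesses."""
    random.seed(seed)
    events = [{
        "type": "continuous", "time": 0.0, "duration": 0.6,
        "intensity": (0.2, 0.9),  # ramped up by a parameter curve
        "sharpness": 0.3,
    }]
    t = 0.6
    for _ in range(8):  # one transient per piece of confetti
        t += random.uniform(0.03, 0.09)
        events.append({"type": "transient", "time": round(t, 3),
                       "intensity": random.uniform(0.3, 0.7),
                       "sharpness": 0.8})
    return events

for event in party_popper_pattern():
    print(event)
```

The slight randomness in timing and strength is what keeps the confetti burst from feeling mechanical.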
This sends a one-time haptic notification generated by the user. They can create this pattern by pressing and releasing on the pad.
Depending on the time the user leaves the finger pressed, it generates a transient (short) or continuous (long) event. It uses the same function as the haptic typing to generate the intensity and sharpness from the force the user applies while tapping.
Once the pattern is created, the user can play it back and retry until it matches the desired outcome.
I see potential for it to be used in a subjective and organic way and for users to develop their own rhythms to communicate with each other in a more personal way.
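The pad logic described above could be sketched like this; the 0.15-second threshold separating transient from continuous events and the force scale are assumed values, not taken from the thesis.

```python
def pad_event(press_duration, force, long_press=0.15, max_force=2.0):
    """Classify a tap on the pad: short presses become transient events,
    long presses continuous ones. Intensity is derived from the applied
    force, reusing the same mapping idea as the haptic typing feature.
    The threshold and force scale are assumed calibration values."""
    intensity = max(0.0, min(1.0, force / max_force))
    if press_duration < long_press:
        return {"type": "transient", "intensity": intensity}
    return {"type": "continuous", "duration": press_duration,
            "intensity": intensity}

print(pad_event(0.05, 1.0))   # quick tap  -> transient
print(pad_event(0.40, 0.6))   # held press -> continuous
```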
Throughout my research and intensive analysis of the Taptic Engine I’ve learned that haptics can be a very helpful tool to support different types of interface elements and interactions. It provides a sense of depth that the user quickly learns and adapts to. My concept for enhancing digital communication has to be thoroughly tested before it can be implemented in real life. As a last note, I want to describe the steps I want to take to get real user feedback, the problems that I see if it’s not implemented correctly and the potential that I see with the combination of other kinds of technology.
As I already mentioned, I took a very practical approach during development, building the concepts in parallel with the theory. I'm now in the process of developing a user test to find out how well my concept holds up in real life.
I want to create an environment that closely resembles a real text messenger experience without the technical hurdles of a real message application, as this would present multiple issues that are out of my hands.
As a compromise, I'm in the process of creating a fake message application in which the user can try out all the features and chat with a generated bot. This way, I can program it to explain the actual features to the user while directly using the application itself as well as ask the user directly for feedback.
I hope to have real user data and feedback I can showcase at my bachelor's presentation.
With these four features I tried to enhance the texting experience and provide a tool users can utilize to express their emotions in a more organic way, but I'm aware that, if not implemented correctly, some problems may arise.
If haptics were overused, the complete concept would lose its novelty, become annoying and work against its own premise, which is to enable better communication.
My concept doesn't really focus on solving these potential problems, as I first have to prove that the features work in a real-life scenario. But I think some sort of limitation would need to be established; here are some examples:
The user could only use haptics with people they trust and are close to. To mitigate the overload, haptics could only be used when both parties agree.
Set a daily limit for haptics. This way, haptics would remain a novelty that users spend with care, and receiving a haptic message would feel like something important.
These potential solutions would need to be tested at their core, as they completely change the dynamic of the concept.
The development and fine-tuning of haptic patterns could be improved by implementing machine learning to create a customized experience for each user.
For example, when using the Enhanced Type feature, the force the user inputs for each character could be calibrated over time to better fit them, as everybody uses smartphones in their own way. Similarly, machine learning could also improve the Enhanced Text feature, as it could decipher the context and deliver a better result.
All references can be found in the PDF.