On Affective Computing: Past Imperfect, Future Impactful

by Carl Carrie, November 11th, 2017

The science and business of emotional computing form a fast-growing, multi-billion-dollar industry. The field, plagued with ethical and privacy issues, has a sordid past and a promising future. Affective computing, a recently coined term, will dramatically disrupt many industries over the next decade, including smartphones, customer relationship management (CRM), security, health care, virtual reality (VR), and robotics. A coterie of startups will help lead the way.

We are driven by emotion and by rational thought. Indeed, research suggests that perception and decision making share a crucial component: emotions. Arguably, a fully actualized affective computing device will be able to harvest information, both rationally and emotionally, better than we do.

A Trekkie analogy: Michael Burnham in the new Star Trek Discovery is a human who was raised in the Vulcan culture and taught to shed her emotional self in favor of pure logic. She struggles to embrace the idea that she needs both her Vulcan logic and her human emotional layers to be self-actualized. Similarly, Artificial Intelligence (AI) will not reach its highest levels of effectiveness until both are integrated into its decision making.

Figure 1: Star Trek Discovery’s Michael Burnham — harnessing her emotions and her logic

We can all detect a happy face and respond to that.

But many of us could miss subtle visual cues. We might miss the slight change in tone of voice that a trained psychologist or FBI officer would detect. We would all miss the small uptick in temperature or the electrodermal or infrared signals that only electronic sensors can capture. The promise of affective computing is to process all of that emotional byproduct information quickly and integrate it with AI to make rational decisions.

Responding to faces is something we have been doing for a very long time.

As early as 500 B.C., Pythagoras accepted or rejected students based on how gifted they looked. Pythagoras, whom some believe originated physiognomics, once dismissed a prospective follower named Cylon because, to Pythagoras, his appearance indicated an unsavory character.¹

Despite its long historical pedigree and popularity, physiognomy was considered a form of witchcraft by 1597. But that was not quite the end of the term's peculiar history: it has been resurrected, in a more scientific guise, in what is now known as affective computing.

Emotions are at the heart of the human experience. Before there was technology, before there was even language, our emotions played crucial roles in communication, social bonding, and decision making. Today, emotions remain at the core of who we are and how we communicate. Emotion is our most natural means of interacting with the world, and it has led us to build ways to interface emotions with our machines as well. That emotional bridge will start with our smartphones.

Figure 2: iPhone X Launched with FaceID Recognition as a precursor to other facial expression recognition capabilities

Modern smartphones can digitize faces and apply machine learning algorithms on specialized processors to improve on what humans have done for eons: identify faces. The iPhone X, in particular, uses a 3D face model that relies on measurements of fixed facial features such as bone structure, rather than features like hair or skin color that can be altered with dye and makeup. FaceID uses a suite of sensors to map your face in 3D: infrared light illuminates your face while a projector casts an array of infrared dots onto it. An IR camera captures an image of these dots, which the iPhone X compares against a stored representation of your face to authenticate you.
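
Under the hood, this kind of face verification boils down to comparing a numerical embedding of the live capture against an enrolled template. The minimal Python sketch below illustrates only that matching step, assuming some hypothetical upstream model has already turned each face into a fixed-length vector; it is not Apple's implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live_embedding: np.ndarray,
           enrolled_embedding: np.ndarray,
           threshold: float = 0.8) -> bool:
    """Accept the live capture if it is close enough to the enrolled template.

    The threshold is illustrative; a real system tunes it to balance
    false accepts against false rejects.
    """
    return cosine_similarity(live_embedding, enrolled_embedding) >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    enrolled = rng.normal(size=128)                     # template stored at enrollment
    live = enrolled + rng.normal(scale=0.1, size=128)   # new capture, same person
    impostor = rng.normal(size=128)                     # unrelated face
    print(verify(live, enrolled))      # True  (same person)
    print(verify(impostor, enrolled))  # False (different person)
```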

Prologue: Arc of Innovation

Smartphones equipped with high-fidelity microphones, high-definition cameras, and a powerful array of sensors are being trained on the prodigious amounts of data we create for them simply by using them.

Soon, facial emotion recognition will come to consumers, who will expect more than the ability to be authenticated on their mobile devices; they will expect existing applications like Facebook, Snapchat, and Twitter to make sense of the imagery for them. What are people seeing? What are they doing? What are they thinking?

Smartphones in the very near future will also begin to interpret your body language to determine how you are feeling and tailor their responses, just as we do with each other intuitively. Emotional intelligence will be integrated into the technologies we use every day, running in the background and making our technology interactions more personalized, relevant, and authentic.

The arc of innovation for emotional computing depends heavily on the success of startups and of large smartphone, automotive, and home-automation companies such as Apple, Google, and Samsung. Affective computing sits alongside data science and requires similar data sets, analytical tools, and GPUs. Apple acquired Emotient, one of the early startups in facial emotion detection. Many other major tech companies are investing in the affective space, notably Amazon with its Rekognition capability on AWS, Microsoft with its Emotion API, and IBM with its Tone Analyzer. Recently, Facebook acquired Faciometrics to augment its ability to embed emotional analytics into Facebook.

In October, NVIDIA released its new DRIVE PX Pegasus chip, which processes over 320 trillion operations per second, more than 10x the performance of its predecessor. Intel is developing new chips that are neuromorphic. Neuromorphic chips model in silicon the massively parallel way the brain processes information, with billions of neurons responding to sensory inputs such as visual and auditory stimuli; like our brain's neurons, they can adjust the connections between one another to adapt to new tasks. The new neuromorphic chip consists of 128 computing cores, each with 1,024 artificial neurons, totaling more than 130,000 neurons and 130 million synaptic connections.

Last year, Intel paid $408 million to buy Nervana, accelerating the development of a dedicated chip for training and executing neuromorphic networks.² Emulation using conventional CPUs would be far too slow and would consume far too much energy to be of practical use.

Figure 3: Emotions are crucial to perception, learning and decision making

So why do we want computers to empathize with us? It starts with improving the efficiency of human-computer interaction, but it crosses into many other realms as well.

Low-cost, wearable sensors could enable companies to measure how environment and experiences affect employee mood. Organizations could use this knowledge to design more effective work settings and processes to increase productivity and employee satisfaction. Empathy could be built into enterprise software systems to improve the user experience by, for example, sensing when employees become frustrated with a task and offering feedback or suggestions for help.

In healthcare, doctors could better diagnose conditions and levels of discomfort. Students could learn faster with content that adjusts dynamically, offering a different explanation when the student is frustrated, speeding up when they are bored, and slowing down in moments of confusion. At airports, security systems could identify people who might be carrying a bomb or smuggling contraband. Roguish or excessively risky behaviors could be detected in advance for operators of retail stores or investment firms. Athletes could get the real-time feedback they need to achieve peak performance. Our homes could adjust lighting, music, and other ambient settings without our having to ask.

Already, wearables such as the Apple Watch can do rudimentary ‘emotion’ measurements of your heart rate. And examples of emotionally aware devices are popping up in unexpected places.³

The Emerging Emotion Economy

A fledgling emotion economy has emerged as human emotions are increasingly collected and analyzed. 'Affect' is defined as the experience of feeling or emotion. The term 'affective computing' describes technology that bridges the cognitive-emotional gap between computers and humans; in doing so, it can interpret, adapt, and respond to the emotional state of its human users.

Figure 4: Will computers be able to accelerate an emotional business?

As a way of communicating, emotion is ancient indeed. Long before humans invented spoken language, we communicated non-verbally at an emotional level. The adage that most of a message's content is conveyed by body language, a smaller share by tone of voice, and only a tiny share by the spoken words themselves is now supported by research.

Emotional communication originates in our physiology. Our physiology programs us to display emotions publicly: grimaces of disgust alert others to poisonous food, pursed lips and arched brows warn of mounting aggression, and spontaneous smiles relay our joy and friendship. It sounds crazy, but in a way it isn't. We're evolutionarily programmed to read our peers' faces, and now we are teaching our machines to do exactly that. Is that wrong?

Physiognomy Roots

The word physiognomy derives from the ancient Greek physis (nature) and gnomon (judge or interpreter). While some may see physiognomy as immoral, stereotyping, inaccurate, inconsistent, and racist, aspects of it may soon be re-imagined and re-institutionalized with the aid of artificial intelligence and chipsets optimized for deep learning and graphics processing.

Around 500 B.C., Pythagoras was accepting or rejecting students based on how intelligent they looked. Pythagoras, whom some believe originated physiognomics, once dismissed a prospective follower named Cylon because, to Pythagoras, his appearance indicated an inferior character.

On 27 June 1831, 26-year-old Captain FitzRoy was commissioned as commander of a multi-year voyage to survey South America. FitzRoy knew that a long journey could involve extreme stress, loneliness, and even mutiny. The ship's previous captain had committed suicide, and FitzRoy was determined to succeed in his mission.

In addition to his officers and crew, his ship would carry several supernumeraries, passengers with non-standard responsibilities during the voyage. One of those supernumerary candidates was a 22-year-old whose job would be to classify rocks and other natural phenomena and, of course, dine with the Captain and his officers. This young man's long nose, however, would conspire against him. Captain FitzRoy was an ardent disciple of Lavater, the famed physiognomist who used facial characteristics to judge the character of men, and this young man had exactly the type of nose that should not be on this voyage.

The young man later wrote about Captain FitzRoy and his practice of Physiognomy, “Afterwards, on becoming very intimate with Fitz-Roy, I heard that I had run a very narrow risk of being rejected, on account of the shape of my nose! He was an ardent disciple of Lavater, and was convinced that he could judge a man’s character by the outline of his features; and he doubted whether anyone with my nose could possess sufficient energy and determination for the voyage. But I think he was afterwards well-satisfied that my nose had spoken falsely.”⁴

Figure 5: Charles Darwin

Had the 22-year-old Charles Darwin not been so highly recommended and so amiable over dinner with FitzRoy, his opportunity to impact science and society would have been lost. There were consequences for misapplying facial characteristics then, and there are now. The notion that character is etched onto an individual's face, mirroring personality and emotion, is at odds with the modern ideal of free will and has made physiognomy unfashionable today.

Ironically, linking facial expressions to moods and personalities was a fascinating subject for Darwin, the scientist. Darwin was the first to suggest that expressions of emotion were universal; his ideas about emotions were a centerpiece of his theory of evolution. Darwin believed expressions of emotions were biologically innate and evolutionarily adaptive, and that similarities in them could be seen phylogenetically.⁵

Darwin's 1872 book, The Expression of the Emotions in Man and Animals, contained a vast collection of photographs of facial expressions, showing the commonality of primitive emotions across animals, infants, and adults in insane asylums. Darwin noted expressions that were exclusively human, including blushing and grief. Moreover, Darwin classified the facial deformations that occur for each facial expression: "the contraction of the muscles round the eyes when in grief," and "the firm closure of the mouth when in reflection."

Our evolution and our physiology program us to socially display emotions: grimaces of disgust alert others to poisonous food, arched brows, and pursed lips warn of expected aggression while spontaneous smiles relay our friendship.

Following Darwin, emotional theorists charted six emotional states that were universal and able to be expressed and recognized. These were: happiness, sadness, anger, surprise, disgust, and fear. These classifications form a foundation for the science of Affective Computing.

In an era of pervasive cameras and big data, machine-learned affective computing can also be applied at unprecedented scale. Given society’s increasing reliance on machine learning for the automation of routine cognitive tasks, it is urgent that developers, critics, and users of artificial intelligence understand both the limits of the technology and the history of physiognomy, a set of practices and beliefs now being dressed in modern clothes.

Affective Algorithms

In general, a facial expression recognition system consists of four main steps. First, the face is localized and extracted from the background. Second, the facial geometry is estimated. Third, based on that geometry, alignment methods reduce the variance of local and global descriptors with respect to rigid and non-rigid variations, which greatly improves robustness to in-plane rotations and head pose. Finally, representations of the face are computed, either globally (features extracted from the whole facial region) or locally, and models are trained for classification or regression.
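
Here is a minimal sketch of that four-step pipeline in Python, using OpenCV's stock Haar-cascade detector for localization, a simple crop-and-resize as a stand-in for geometric alignment, and histogram-of-oriented-gradients (HOG) features feeding a generic classifier. The classifier at the end is a placeholder comment rather than a trained model; a real system would fit it on a labeled expression dataset.

```python
import cv2
import numpy as np

# Step 1: localize the face with OpenCV's bundled Haar cascade.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Steps 3-4: a HOG descriptor computed over a fixed-size, crudely aligned crop.
HOG = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "disgust", "fear"]

def expression_features(image_bgr):
    """Detect the largest face, align it by cropping and resizing,
    and return a HOG feature vector (or None if no face is found)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])    # largest detection
    crop = cv2.resize(gray[y:y + h, x:x + w], (64, 64))   # crude alignment
    return HOG.compute(crop).ravel()

# A real system would then train a classifier on labeled examples, e.g.:
#   features = np.stack([expression_features(img) for img in training_images])
#   classifier.fit(features, labels)          # SVM, random forest, CNN, ...
#   predicted = EMOTIONS[classifier.predict([expression_features(frame)])[0]]
```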

Affective computing often borrows from psychological and neurological research. To illustrate, ample evidence shows that ongoing brain activity influences how the brain processes incoming sensory information, and that neurons fire intrinsically within large networks without any need for external stimuli. The implications of these insights for affective computing are profound: it appears that emotions compete with cognition for control of behavior. This means classical accounts of emotion, which rely on simple stimulus-response narratives, are unlikely to hold up, and it implies that the current state of the affective computing art is still nascent. We do not yet understand the complex feedback loops and complicated decision making that drive and control emotional responses.

Advances in processing speeds and in the disciplines of computer science, artificial intelligence, machine learning, psychology, and neuroscience are all leading to a new flourishing of the emergent affective computing field. Computers, cameras, and sensors can capture points on the face, posture, gestures, tone of voice, speech, and even the rhythm or force of keystrokes, as well as the temperature of your hands, to register changes in a user's emotional state.

From a computer vision perspective, facial expression analysis from digitized photos or videos is a very challenging task, for a variety of reasons. First and foremost, the training set is assumed to be incontrovertibly true: subjects or viewers are reporting their own emotional content, and that can introduce modeling or estimation errors.

Other factors, several of which the augmentation sketch after this list addresses, include:

1. the viewing angle to the subject,

2. optical conditions (e.g., lighting, image stabilization, filters, shadows, orientation, resolution),

3. structures on the face such as beards or glasses,

4. partial obstruction (occlusion) of the face by other objects, and

5. person-specific morphological deformations.

Of course, any model is also subject to overfitting as well as modeling and estimation errors.⁶
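
Many of these nuisance factors are typically attacked on the data side, by augmenting training images so a model sees the same face under varied rotation, lighting, and partial occlusion. The sketch below illustrates that idea with NumPy and OpenCV; the parameter ranges are illustrative choices, not tuned values from any particular system.

```python
import cv2
import numpy as np

def augment(face: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return a randomly perturbed copy of a grayscale face crop."""
    h, w = face.shape

    # 1. Small in-plane rotation (head tilt).
    angle = rng.uniform(-15, 15)
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    out = cv2.warpAffine(face, rot, (w, h), borderMode=cv2.BORDER_REFLECT)

    # 2. Brightness / contrast jitter (lighting conditions).
    alpha = rng.uniform(0.7, 1.3)   # contrast
    beta = rng.uniform(-20, 20)     # brightness
    out = np.clip(alpha * out.astype(np.float32) + beta, 0, 255).astype(np.uint8)

    # 3. Random occlusion patch (glasses, hands, shadows).
    ph, pw = int(h * 0.25), int(w * 0.25)
    y0 = rng.integers(0, h - ph)
    x0 = rng.integers(0, w - pw)
    out[y0:y0 + ph, x0:x0 + pw] = 0
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    face = np.full((64, 64), 128, dtype=np.uint8)   # stand-in for a real crop
    variants = [augment(face, rng) for _ in range(8)]
    print(len(variants), variants[0].shape)
```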

Shockingly simple and chillingly controversial, one early line of research relates aggressive tendencies to the ratio of facial width to facial height (fWHR).

Figure 6: Seemingly Arbitrary Correlation of Aggressive Tendencies to Facial Width to Height ratio

Michael Haselhuhn and Elaine Wong "demonstrate a robust positive link between fWHR and aggression suggesting that fWHR is a reliable marker (and signal) of aggression in men."⁷

Others find positive associations between fWHR not only with fearless dominance, but also with the factor self-centered impulsivity, and with overall psychopathy scores and a link to testosterone levels.⁸ Perhaps surprisingly, research also shows fWHR to be linked positively with success in competitive social contexts ranging from corporate to religious leadership and athletic prowess.
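
The fWHR itself is trivial to compute once facial landmarks are available: bizygomatic width (cheekbone to cheekbone) divided by upper-face height (upper lip to brow). A minimal sketch follows; the landmark names and coordinates are hypothetical placeholders for whatever landmark detector is in use.

```python
def facial_width_to_height_ratio(landmarks: dict) -> float:
    """Compute fWHR from four (x, y) landmark points.

    landmarks is assumed to contain:
      'left_zygion', 'right_zygion'  -- outermost cheekbone points
      'upper_lip', 'brow'            -- top of the upper lip, mid-brow
    """
    width = abs(landmarks["right_zygion"][0] - landmarks["left_zygion"][0])
    height = abs(landmarks["brow"][1] - landmarks["upper_lip"][1])
    return width / height

# Illustrative pixel coordinates; a real system would take these from a
# landmark detector such as the pipeline sketched earlier.
example = {
    "left_zygion": (40, 120),
    "right_zygion": (180, 122),
    "upper_lip": (110, 170),
    "brow": (110, 95),
}
print(round(facial_width_to_height_ratio(example), 2))  # ~1.87
```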

Computer languages popular with machine learning researchers, such as Python and R, are often used to demonstrate how quickly these models can be built and how fast the resulting algorithms run. In 2017, a group of researchers published open source code for affective computing on GitHub. The study, called "Predicting First Impressions with Deep Learning," looked at participants' first impressions of people's photographs in terms of dominance, trustworthiness, age, and IQ.⁹ ¹⁰ ¹¹
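
The general recipe in this line of work is to train a convolutional network to regress continuous trait scores directly from a face image. The compact PyTorch sketch below illustrates the shape of such a model; the architecture and training snippet are assumptions for illustration, not the published network from that repository.

```python
import torch
import torch.nn as nn

TRAITS = ["dominance", "trustworthiness", "age", "iq"]  # as in the study

class FirstImpressionNet(nn.Module):
    """Tiny CNN that maps a 64x64 RGB face to one score per trait."""
    def __init__(self, n_traits: int = len(TRAITS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_traits)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = FirstImpressionNet()
faces = torch.randn(8, 3, 64, 64)           # a dummy batch of face crops
targets = torch.randn(8, len(TRAITS))       # crowd-sourced impression scores
loss = nn.MSELoss()(model(faces), targets)  # regression objective
loss.backward()                             # gradients for one training step
print(loss.item())
```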

Affective Computing Disruption

Affective computing disruption typically comes in the form of systems that recognize, express or synthesize emotions or adjust to the changing moods of their human counterparts. They can see what we see in our faces and our gestures. A trained professional might be able to catch a bit more — dilated pupils or a slight change in intonation. A machine with sensors would be able to detect minuscule changes in perspiration or temperature.

Affective computing can disrupt many industries and is already a big business that is expected to grow in size and prominence. According to market research firm Research and Markets, the global affective computing market will grow from $12.2 billion in 2016 to more than $53.98 billion by 2021, a compound annual growth rate (CAGR) of 34.7%.¹²

In fact, the firm predicts that affective computing will disrupt the way firms, especially in retail, healthcare, government, defense, and academia sectors gather, organize, collaborate, and deliver information.
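
Those figures are at least internally consistent: compounding $12.2 billion at 34.7% per year over the report's five-year horizon lands close to the $53.98 billion headline, as the quick check below shows.

```python
start, cagr, years = 12.2, 0.347, 5        # $B in 2016, growth rate, years to 2021
projected = start * (1 + cagr) ** years
print(f"${projected:.1f}B")                # ~$54.1B, in line with the $53.98B cited
```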

Automotive Disruption

BRAIQ (https://braiq.ai) is a New York startup developing technology that detects how you feel about the autonomous car that's carting you around. Automobile manufacturers know that 75% of Americans are "afraid" of self-driving cars and don't trust them. BRAIQ hopes to earn that trust with technology that intuitively reads emotional signals, so that you enjoy the ride rather than worrying about getting there.

Figure 7: Affective Automobiles

Darwin's facial analysis needed to be expanded before companies like BRAIQ would have a chance to be successful. The person who expanded it was Dr. Paul Ekman.

Consumer Advertising & Product Management Disruption

Dr. Ekman had traveled the globe with photographs that showed faces experiencing six basic emotions — happiness, sadness, fear, disgust, anger, and surprise. Everywhere he went, from Japan to Brazil to the remotest village of Papua New Guinea, he asked subjects to look at those faces and then to identify the emotions they saw in them. To do so, they had to pick from a set list of options presented to them by Ekman. The results were impressive. Everybody, it turned out, even preliterate Fore tribesmen in New Guinea who’d never seen a foreigner before in their lives, matched the same emotions to the same faces. Darwin, it seemed, had been right.¹³

In 1967, Dr. Ekman began to study deception with clinical cases in which the patients falsely claimed they were not depressed. These patients later committed suicide when not under supervision. When patients’ films were examined in slow motion, Ekman and Friesen saw microexpressions which revealed strong negative feelings that the patient was trying to hide.

Figure 8: Facial Expressions

Dr. Ekman coined the terms microexpression and macroexpression. Microexpressions are typically signs of concealed emotions and differ from macroexpressions in duration and scope. Macroexpressions usually last between 0.5 and 4 seconds, involve the entire face, and are relatively easy to see if one knows what to look for. However, in varied personal, social, or even cultural situations, people sometimes conceal or mask their true emotions via suppression or unconscious repression. Microexpressions, by contrast, flash on and off the face in a fraction of a second, sometimes as fast as 1/30 of a second; they are so quick that if you blink you could miss them. In Ekman's view, microexpressions are involuntary and expose a person's actual emotions, and they are universal to everyone around the world.

Microexpressions are spontaneous expressions that map onto the six basic facial expressions. Microexpressions from others may leave an impression after an interaction, but they lack the certainty or explicit labeling of algorithmic verification, a shift that makes the subtle a virtual roar. Deception is a notoriously tricky expression and social cue to pick up on, but studies have shown that individuals who can acutely pick up on microexpressions are better able to identify other deceptive behaviors.

Figure 9: Shameful Deception



Specialized algorithms are needed to magnify expressions, identifying the parts of the face that move as expressions change and distorting the face to extrapolate the microexpressions, so that the subtle becomes a virtual roar that classification algorithms can work with. Today, machines equipped with the best affective computing algorithms can routinely outperform professional microexpression experts such as law enforcement officers and psychologists. UK startup Wearehuman (https://wearehuman.io) raised a small round of capital in April to expand on the potential of analyzing microexpressions. Founded in London in 2016, the startup already has offices in the UK, China, and the US, staffed with data scientists, psychologists, and developers expert in machine learning techniques and microexpressions.
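
The magnification idea can be sketched very simply: estimate the change of a face region relative to a reference frame and add an amplified copy of that change back, so tiny motions and intensity shifts become visible to a downstream classifier. The toy NumPy example below amplifies temporal differences; production approaches (e.g., Eulerian video magnification) work per spatial frequency band and are considerably more sophisticated.

```python
import numpy as np

def magnify_motion(frames: np.ndarray, alpha: float = 10.0) -> np.ndarray:
    """Amplify small frame-to-frame changes in a grayscale video clip.

    frames: array of shape (T, H, W), values in [0, 255].
    alpha:  amplification factor; higher makes subtle motion more visible.
    """
    frames = frames.astype(np.float32)
    baseline = frames[0]                       # reference (neutral) frame
    diffs = frames - baseline                  # subtle expression changes
    magnified = baseline + alpha * diffs       # exaggerate those changes
    return np.clip(magnified, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clip = np.full((30, 64, 64), 120, dtype=np.float32)
    clip += rng.normal(scale=0.5, size=clip.shape)   # barely visible motion
    out = magnify_motion(clip, alpha=10.0)
    print(out.shape, out.std())   # variation is now roughly 10x easier to see
```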

Another startup, Switzerland's nViso (http://www.nviso.ch), uses 3D facial imaging analysis: its algorithms capture hundreds of measurement points and map facial movements to 43 facial muscles in real time. nViso, like several other affective computing startups, provides a dashboard to analyze responses to ads and products.

Figure 10: nViso Dashboard

Nuralogix (http://www.nuralogix.com/) developed a technique for "reading" human emotional state called Transdermal Optical Imaging, which uses a conventional video camera in an unconventional way to extract facial blood flow information. Facial skin is translucent: light of different wavelengths reflects off the layers beneath the skin, which contain blood vessels and melanin. Like infrared-based imaging, Transdermal Optical Imaging is often more robust than conventional imagery, which is substantially affected by light intensity, and it provides unique data.
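
The underlying signal-processing idea is related to remote photoplethysmography: average a color channel over a skin region of the face in each frame, then track how that average fluctuates with the pulse. A bare-bones NumPy sketch follows, with synthetic frames standing in for real video; Nuralogix's actual method is proprietary and certainly more involved.

```python
import numpy as np

def blood_flow_signal(frames: np.ndarray, roi: tuple) -> np.ndarray:
    """Mean green-channel intensity of a facial skin region over time.

    frames: (T, H, W, 3) RGB video; roi: (y0, y1, x0, x1) patch, e.g. the forehead.
    Returns a zero-mean 1-D signal whose periodicity reflects the pulse.
    """
    y0, y1, x0, x1 = roi
    green = frames[:, y0:y1, x0:x1, 1].astype(np.float32)
    signal = green.mean(axis=(1, 2))
    return signal - signal.mean()

if __name__ == "__main__":
    # Synthesize 10 s of 30 fps video with a faint 1.2 Hz (72 bpm) pulse.
    t = np.arange(300) / 30.0
    frames = np.full((300, 64, 64, 3), 128.0)
    frames[..., 1] += 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
    sig = blood_flow_signal(frames, roi=(10, 50, 10, 50))
    freqs = np.fft.rfftfreq(len(sig), d=1 / 30.0)
    peak_hz = freqs[np.abs(np.fft.rfft(sig)).argmax()]
    print(f"Estimated pulse: {peak_hz * 60:.0f} bpm")   # ~72 bpm
```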

SkyBiometry (https://skybiometry.com) is not quite a startup, having started its efforts in 2012. SkyBiometry automatically adjusts for obstructions (e.g., glasses) and angles (e.g., from above), allowing it to classify and group facial emotions in crowds.

Figure 11: SkyBiometry

RealEyes (https://www.realeyesit.com) also leverages webcams and sophisticated machine learning algorithms to measure how people feel when watching video content. RealEyes, a London startup, has already raised $13.7 million. "We know the behavior is driven by emotions, so an effective video will be one that creates a strong emotional response," its CEO says. "Thus, the best way to measure how effective a video will be to measure the emotional response to it — because the higher the engagement is, the more likely a viewer will take action. For example, a study we did with Mars across 35 brands established a link between the emotional perception of an ad and its impact on sales with 75% certainty."¹⁴

CrowdEmotion (http://www.crowdemotion.co.uk) Founded in 2013, CrowdEmotion is a privately held, London-based startup with technology to gauge human emotions visually by mapping the movements of 43 muscles on the human face. BBC StoryWorks used this technique in a 2016 study called 'The Science of Engagement,' measuring the second-by-second facial movements of 5,153 people while they viewed English-language international news in six of the BBC's key markets.

For businesses that are call-center centric, the voice remains the affective computing frontier. For example, Humana uses affective computing software that treats a steady rise in the pitch of a customer's voice, or instances of an agent and customer talking over one another, as a cause for concern.
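
To get a rough sense of how a "rising pitch" cue might be computed, the sketch below estimates the fundamental frequency of short audio frames with a simple autocorrelation pitch tracker and fits a trend line; a persistently positive slope would flag an escalating call. This is a toy example on synthetic audio, not Humana's production approach.

```python
import numpy as np

def frame_pitch(frame: np.ndarray, sr: int, fmin=75.0, fmax=400.0) -> float:
    """Crude autocorrelation pitch estimate (Hz) for one audio frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def pitch_trend(audio: np.ndarray, sr: int, frame_s: float = 0.5) -> float:
    """Slope (Hz per second) of a line fit through per-frame pitch estimates."""
    n = int(frame_s * sr)
    pitches = [frame_pitch(audio[i:i + n], sr)
               for i in range(0, len(audio) - n, n)]
    times = np.arange(len(pitches)) * frame_s
    return float(np.polyfit(times, pitches, 1)[0])

if __name__ == "__main__":
    sr = 16000
    t = np.arange(0, 10, 1 / sr)
    f0 = 120 + 6 * t                         # voice rising ~6 Hz per second
    audio = np.sin(2 * np.pi * np.cumsum(f0) / sr)
    slope = pitch_trend(audio, sr)
    print(f"pitch slope: {slope:.1f} Hz/s")  # positive slope -> escalating call
```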

Customer Relationship Management Disruption

A critical ethical issue raised by giving computers the ability to “see” our emotions is the potential for emotional control and manipulation, though some would argue that some emotional manipulation may be a good thing. According to Daniel Kahneman, Nobel-prize winning behavioral economist and psychologist, much of human error is not even attributable to a systematic cause or biases, but to “noise.” Noise is random, unpredictable, and impossible to explain. By using emotionally curated algorithms, our machines can temper human judgment with “disciplined thinking” and help guide us collectively to a path of higher self-actualization.

In an age where companies highly value marketers' ability to accurately target a population across the myriad devices and apps we use, one should wonder how our emotions could be used. Should employers be able to read our levels of dissatisfaction at work? Should pollsters be able to track and quantify our emotions to improve their forecasting accuracy? Should Facebook be able to communicate aggregate emotional states on varying issues? Should emoticons be automatically detected when we type or when we use an app? For early entrants into the affective computing space, it will be essential to build trust with consumers first, by providing them the choice to share their data and by informing them of the type of data a company is looking to collect, how it will be used, and how it will be made available to others.

A notable discussion of ‘persuasive technologies’ by Berdichevsky & Neuenschwander addresses the issue. It proposes a guideline based on the Golden Rule: “The creators of a persuasive technology should never seek to persuade anyone of something they would not consent to be persuaded of.”¹⁵

Data privacy and emotional manipulation issues notwithstanding, there are compelling reasons to use affective computing when trying to maximize value for customers or to present ideas to purchase products through ads targeted at both their physical and emotional needs.

Cogito (http://www.cogitocorp.com) is the company aiding Humana. An MIT Media Lab spinout, Cogito raised a $15M Series B round in November 2016. Cogito is based on behavioral science born of years of research at MIT's Human Dynamics Lab, initially funded by DARPA and later brought to commercial use.

Cogito tracks dialog speed, pauses, interruptions, volume changes, and tone, among other metrics, in a real-time dashboard that gives feedback on call performance to shape better conversation habits.¹⁶ The insights presented in Cogito's dashboards can also be sent, via its programmatic interfaces, to Customer Relationship Management (CRM) systems.
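
Several of those metrics fall out directly from diarized call segments (who spoke, from when to when). The sketch below computes talk ratio, talk-over, silence, and speaking rate from such segments; the segment format is an assumption for illustration, not Cogito's API.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str      # "agent" or "customer"
    start: float      # seconds from call start
    end: float
    words: int        # word count, for speaking rate

def call_metrics(segments, call_length):
    """Simple conversation metrics from diarized segments."""
    talk = {"agent": 0.0, "customer": 0.0}
    overlap = 0.0
    for i, a in enumerate(segments):
        talk[a.speaker] += a.end - a.start
        for b in segments[i + 1:]:            # pairwise talk-over time
            overlap += max(0.0, min(a.end, b.end) - max(a.start, b.start))
    spoken = sum(talk.values())
    words = sum(s.words for s in segments)
    return {
        "talk_ratio_agent": talk["agent"] / spoken,
        "overlap_s": overlap,
        "silence_s": max(0.0, call_length - spoken + overlap),
        "words_per_minute": 60.0 * words / call_length,
    }

segments = [
    Segment("agent", 0.0, 8.0, 30),
    Segment("customer", 7.0, 15.0, 28),   # 1 s of talking over the agent
    Segment("agent", 17.0, 25.0, 32),
]
print(call_metrics(segments, call_length=30.0))
```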

Emotibot (http://www.emotibot.com/)¹⁷ Shanghai-based Emotibot classifies 22 emotional states across facial and other modalities; it can also detect fatigue and provide useful emotion-based learning feedback. The firm developed software that can identify 400 different variations of human "moods" and is now integrating it into call centers, where it can help a sales assistant understand and react to a customer's emotions in real time.

Spanish startup RelEyeAble (http://www.releyeble.com/en/index.html) provides simplified real-time emotion analytics for retail products in a brick and mortar or home computing context.

So much information about us can be digitally quantified now; what we read, what we buy, and even our vitals can be tracked on our cell phones to give us an idea of our long-term health. Of course, our activities, web history, and places we visit are tracked extensively. But how comfortable would you be with a machine tracking, analyzing and even responding to your emotions without you also being aware? That is starting to happen, and as the technology is expanding, some ethical and privacy considerations are emerging.

Figure 12

Surveillance and Criminology Disruption

These startups and their business missions raise concerns about the potential for scenarios like the science-fiction movie Minority Report, in which people can be arrested based solely on the prediction that they will commit a crime. The principles behind Minority Report go back to the 1870s and the work of Cesare Lombroso.


Lombroso was an Italian physician and criminologist who popularized the notion that criminal behavior was innate and only partly caused by psychological and environmental conditions. In short, he believed that some people were simply 'born criminal.' Lombroso classified many facial features as regressive and inherently criminal. The expression "stuck-up" comes from this time, when a person with a nose bending slightly upwards was read as having a contemptuous, superior attitude. In Lombroso's view, whereas most individuals evolve, the violent criminal had devolved and therefore constituted a societal or evolutionary regression. If criminality was inherited, then, Lombroso proposed, the "born criminal" could be distinguished by physical traits:¹⁸

• large jaws, forward projection of jaw,
• low sloping forehead,
• high cheekbones,
• flattened or upturned nose,
• handle-shaped ears,
• hawk-like noses or fleshy lips,
• hard, shifty eyes,
• scanty beard or baldness,
• insensitivity to pain,
• long arms relative to lower limbs.

It remains to be seen whether the availability of data and a symphony of algorithms will be enough to thwart nefarious activities. But an intriguing MIT Media Lab-spawned startup, Humanyze (https://www.humanyze.com),¹⁹ harvests emotional content from wearable badge sensors. Each badge generates about 4GB of data per day, which is uploaded to the cloud, where Humanyze analyzes it and distills relevant information through a dashboard. Wearable sensors are also having an impact on office design, a development discussed by the New York Times.²⁰

Although 19th-century physiognomists like Cesare Lombroso were wrong about a causal relationship between face shape and moral behavior, the truth is that human beings do tend to associate certain morphologies with moral and emotional content. One challenge will be to keep biases, ethnicity, and race out of algorithmic results, or to handle them in a way that is morally and legally justifiable.

Kairos (https://www.kairos.com) provides visual identification and emotion recognition APIs, with features for ethnicity and attention capture, under a SaaS (Software as a Service) business model.

The ethical challenges are far broader than ethnicity bias alone. Is affective computing a form of pseudoscience that is, unfortunately, sneaking back into the world disguised in new clothes thanks to technology? What if we're training our machines to amplify our biases, and, in doing so, giving new life to old ideas we have correctly dismissed? Will we know the difference?

The New York Times tells the story of two Stanford affective computing researchers who shifted their attention from identifying terrorists to identifying sexuality.²¹

Figure 13: What private information can be gained from a face? How much of that information will be based on biases or stereotypes?

The researchers, Michal Kosinski and Yilun Wang, took more than 35,000 facial images of men and women that were publicly available on a U.S. dating website and found that a computer algorithm was correct 81% of the time when used to distinguish between straight and gay men, and accurate 74% of the time for women. Accuracy improved to 91% when the computer evaluated five images per person. Humans who looked at the same photos were accurate only 61% of the time. Another trend the machines identified was that gay women tended to have larger jaws and smaller foreheads than straight women, while gay men had larger foreheads, longer noses, and narrower jaws than straight men.²² Other analysts have pointed to problems in the study's scientific methods and to its confusion of correlation and causality.²³

Affective technology appears to be capable of learning about a person’s most intimate preferences based on visual cues that the human eye may not pick up. Those details could include things like genetic traits, our race, ethnicity and psychological disorders, or even political leanings — in addition to stated sexual preferences. What if Google’s new Lens app were trained to analyze using similar algorithms and tag our photos with the results? Who designs the algorithm, the training sets, the bias reductions, the constraints on fair use?

One startup that can classify emotions by age, sex, and gender is NTechLab (http://ntechlab.com).²⁴ NTechLab is a Russian company with an app called FindFace that can track anyone on VKontakte, Russia's equivalent of Facebook, using profiling techniques. FindFace's neural network receives part of a photo containing a face and generates a 160-point facial feature vector. NTechLab claims its software can search through a database of a billion faces in less than half a second.
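
Searching a large gallery with feature vectors like these is, at its core, a nearest-neighbor problem over embeddings. The brute-force NumPy sketch below shows the idea at small scale; billion-scale deployments rely on approximate nearest-neighbor indexes (libraries such as FAISS) rather than an exhaustive scan like this one.

```python
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    """L2-normalize embeddings so a dot product equals cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

rng = np.random.default_rng(1)
gallery = normalize(rng.normal(size=(50_000, 160)))                 # enrolled face vectors
query = normalize(gallery[42_000] + 0.02 * rng.normal(size=160))    # noisy re-capture

scores = gallery @ query              # cosine similarity to every gallery face
best = int(np.argmax(scores))
print(best, float(scores[best]))      # 42000, similarity near 1.0
```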

Another company applying affective computing techniques to individuals and crowds for surveillance purposes is Sightcorp (http://sightcorp.com). Sightcorp can track several people simultaneously while mapping their spontaneous reactions and interest toward content in real time, in real-life applications ranging from surveillance to retail.

Health Care Disruption

In ancient China, there was a saying: "If you want to know whether someone is wise, just look at their forehead; if you want to know about a person's reputation, nobleness, wealth, blessings, and longevity, look at their eyebrows, eyes, nose, mouth, ears, and jaw." The earliest Chinese writing on face reading is commonly credited to Guiguzi (the "Ghost Valley Scholar," 481–221 BC). Mien Shiang is a 3,000-year-old Taoist practice whose name means face (mien) reading (shiang).²⁵

Even Empress Wu, China's most notorious and only female empress, was recognized by the great Tang physiognomist Yuan Tiangang while still in diapers and dressed in gender-neutral clothing, already overshadowing her brothers.

The young lord has dragon eyes and a phoenix neck, the highest possible indicators of nobility! When he then continued to examine the child from the side, he was even more surprised: “Should this child be a girl, then her career would be beyond all estimation. She might well become ruler of the empire.”²⁶

Figure 14: Empress Wu's face

There are a handful of startups that continue the journey to identify medical uses of face reading.

For example, we may take for granted the ability to smile, kiss, or close our eyes at night, all of which can be affected by facial paralysis. Facial paralysis can strike anyone at any time. Wearable technology can discreetly provide real-time facial muscle information to patients and therapists, and it may offer a significant improvement in rehabilitation of the condition.

Co-founded by Dr. Charles Nduka, a plastic and reconstructive surgeon, EmTeq is a British startup based at the University of Sussex that is looking to digitize facial expressions and use artificial intelligence to interpret them. EmTeq hopes to shape the way we treat facial palsy and autism spectrum disorders (ASD) in the near future.

Planexta (http://planexta.com/) makes SenceBand, an emotion-tracking smart bracelet that uses clinical-grade ECG (EKG) technology for advanced heart rate analysis, including heart rate variability (HRV), to predict 64 emotional states. The companion SenceHub uses algorithms to turn the raw data into actionable analytics and sends notifications about the emotional states of the user and of the people in his or her network.
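
HRV is derived from the intervals between successive heartbeats (R-R intervals). Two of the standard time-domain measures, SDNN and RMSSD, take only a few lines to compute, as in this illustrative NumPy sketch (not Planexta's algorithm).

```python
import numpy as np

def hrv_metrics(rr_intervals_ms) -> dict:
    """Time-domain heart rate variability from R-R intervals in milliseconds.

    SDNN:  standard deviation of the intervals.
    RMSSD: root mean square of successive differences; lower values are
           commonly associated with stress or arousal.
    """
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_hr_bpm": 60_000.0 / rr.mean(),
        "sdnn_ms": rr.std(ddof=1),
        "rmssd_ms": float(np.sqrt(np.mean(diffs ** 2))),
    }

# Example: ~75 bpm with modest beat-to-beat variation.
rr = [810, 795, 820, 805, 790, 815, 800, 808]
print(hrv_metrics(rr))
```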

Affectiva (https://www.affectiva.com)²⁷ The term “affective computing” was coined by Rosalind Picard, a computer scientist at MIT and co-founder of Affectiva — an emotion measurement tech company that spun out of MIT’s explorative Media Lab in 2009. Rosalind Picard remains an active investor but has stayed at MIT to focus on her research in related medical applications.

Affectiva is one of the most important startups in the affective computing space and has raised over $26M in venture and strategic financing from Kleiner Perkins and several other notable investors. One of its projects is a close collaboration with a "very large Japanese car company" (Toyota used to sponsor El Kaliouby's lab at MIT) that is building an in-car emotion sensor that knows when you're drowsy or distracted. The affectively empowered car could take action in an emergency, calling 911 or alerting a friend or family member.

Affectiva has compiled a vast corpus of data consisting of six million face videos collected in 87 countries, allowing its computing engine to be tuned for real expressions of emotion in the wild and to account for cultural and other differences in emotional expression.

Affectiva’s automated facial coding technology captures facial action units and identifies valence and arousal/intensity and a range of discrete emotion states, e.g., enjoyment, surprise, disgust/dislike, confusion, and skepticism.²⁸

Figure 15: Quantify

To quantify emotional response, facial expression classification algorithms process a video frame by frame and locate the main features of the face. Movement, shape, and other context are used to identify facial action units such as an eyebrow raise, a smirk, or pursed lips. Machine learning classifiers subsequently map facial texture and movement to emotional states.
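
That last step, mapping action units to emotion labels, can be approximated with simple FACS-inspired rules. The combinations below are common textbook prototypes used purely for illustration, not Affectiva's proprietary classifier.

```python
# FACS-inspired toy mapping from detected action units (AUs) to basic emotions.
# AU numbers follow the Facial Action Coding System convention, e.g.
# AU6 = cheek raiser, AU12 = lip corner puller, AU4 = brow lowerer.
EMOTION_RULES = {
    "happiness": {6, 12},
    "sadness":   {1, 4, 15},
    "surprise":  {1, 2, 5, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15},
    "fear":      {1, 2, 4, 5, 20, 26},
}

def classify_emotion(active_aus: set) -> str:
    """Pick the basic emotion whose rule overlaps most with the detected AUs."""
    def overlap(rule):
        return len(rule & active_aus) / len(rule)
    best = max(EMOTION_RULES, key=lambda e: overlap(EMOTION_RULES[e]))
    return best if overlap(EMOTION_RULES[best]) > 0 else "neutral"

print(classify_emotion({6, 12}))        # happiness (Duchenne smile)
print(classify_emotion({1, 4, 15}))     # sadness
print(classify_emotion(set()))          # neutral
```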

When we speak, our voices sometimes offer subtle cues about our emotions. Whether our voices are loud or soft, or ever-so-slightly stressed, can suggest what we are feeling inside. Affectiva recently announced a new cloud API that can discover a range of emotions in human speech.

Blending facial, vocal, and physiological elements into a multimodal framework often introduces conflicting analytical results; facial expressions can be perceived as conveying strikingly different emotions depending on the bodily context in which they appear. Despite those results, many believe there is predictive promise in combining sensory input using an ensemble of deep learning techniques.
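
One common way to combine modalities is late fusion: each channel produces its own probability distribution over emotions, and the distributions are averaged with weights reflecting how much each channel is trusted. A minimal NumPy sketch, with made-up numbers purely for illustration:

```python
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "disgust", "fear"]

def late_fusion(modality_probs: dict, weights: dict) -> str:
    """Weighted average of per-modality emotion probabilities."""
    combined = np.zeros(len(EMOTIONS))
    for name, probs in modality_probs.items():
        combined += weights[name] * np.asarray(probs)
    combined /= sum(weights[m] for m in modality_probs)
    return EMOTIONS[int(np.argmax(combined))]

# Three modalities vote; face carries the most weight in this example.
probs = {
    "face":       [0.10, 0.05, 0.55, 0.10, 0.10, 0.10],
    "voice":      [0.05, 0.10, 0.40, 0.05, 0.10, 0.30],
    "physiology": [0.05, 0.05, 0.30, 0.05, 0.05, 0.50],
}
weights = {"face": 0.5, "voice": 0.3, "physiology": 0.2}
print(late_fusion(probs, weights))   # anger
```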

Rosalind Picard is now also chief scientist at Empatica, whose newest device, the Embrace, applies an ensemble of deep learning techniques in a medical-grade wearable. The $200 Embrace is based on an earlier Empatica model, the E4, which researchers have already used to study stress, autism, epilepsy, and other disorders in clinical studies with NASA, Intel, Microsoft, and MIT. Empatica streams biometric data in real time: temperature, motion, and electrodermal readings. It came as a complete surprise that the Embrace proved helpful in detecting dangerous epileptic seizures as well.

VR and Robotics Disruption

Affectiva’s software also powers a new live-streaming app called Chubble with an emotional component that allows you to stream, hypothetically, a live concert, to friends while their emotions are conveyed back to you via avatars.

MindMaze (https://www.mindmaze.com), a Swiss unicorn with a billion-dollar valuation better known for its VR than its affective computing, has developed MASK, a VR device that reads your emotions. MASK's foam face-pad liner contains electrodes that make contact with your head; they sense your facial muscles and can predict your emotion shortly before you fully complete each facial expression. MindMaze technology is also being used in 50 hospitals globally, where its affective VR is helping stroke victims and amputees rehabilitate.²⁹


Robotics-focused Emoshape (https://emoshape.com) synthesizes emotions so that our robots can work, play, and make us believe they understand us better.

Affective computing seems to be disrupting our lives in waves. While the exact timing and pace of such waves of disruption are often hard to predict, the basic pattern of layered preconditions for accelerated growth is easy to recognize: a technology substrate, data availability, and specialized algorithms. Smartphones, with their advanced cameras and sensors, are one wave. The convergence of photographic data in clouds like Google's seems to be part of another. The advancement of algorithms may be part of the current wave that stimulates usage and exponential disruption. Peter Diamandis defines the Six Ds of Exponentials (digitization, deception, disruption, demonetization, dematerialization, and democratization) as the necessary preconditions. It wouldn't be hard to reframe the affective computing story this way as well. Either way, it's here.

For those interested in exploring more on how sentiment and natural language programming align with affective computing, this ebook is a good start. I tweeted about it recently.

Carl Carrie is new to Medium. You can also find me on LinkedIn. Please say hello on Twitter too.

  1. Riedweg, Christoph. Pythagoras: His Life, Teaching, and Influence.
  2. Link: http://www.financialexpress.com/industry/technology/intel-is-building-a-chip-that-works-like-human-brain-heres-all-you-need-to-know/877281/
  3. Link: http://www.telegraph.co.uk/technology/2016/01/21/affective-computing-how-emotional-machines-are-about-to-take-ove/
  4. Darwin, Charles Robert. The Voyage of the Beagle. Vol. XXIX. The Harvard Classics. New York: P.F. Collier & Son, 1909–14; Bartleby.com, 2001. www.bartleby.com/29/. [Nov 10, 2017].
  5. Matsumoto, D., Keltner, D., Shiota, M. N., Frank, M. G., & O’Sullivan, M. (2008). What’s in a face? Facial expressions as signals of discrete emotions. In M. Lewis, J. M. Haviland, & L. Feldman Barrett (Eds.), Handbook of emotions (pp. 211–234). New York: Guilford Press.
  6. Barrett, Lisa Feldman. “The theory of constructed emotion: an active inference account of interoception and categorization.” Social cognitive and affective neuroscience 12.1 (2017): 1–23.
  7. Haselhuhn MP, Ormiston ME, Wong EM. Men’s facial width-to-height ratio predicts aggression: a meta-analysis. PLOS ONE. 2015; 10:e0122637. pmid:25849992
  8. Costa, Manuela, et al. “How components of facial width to height ratio differently contribute to the perception of social traits.” PLoS ONE, vol. 12, no. 2, 2017, p. e0172739. Academic OneFile, Accessed 2 Oct. 2017.
  9. Github repository: https://github.com/mel-2445/Predicting-First-Impressions
  10. McCurrie, M., Beletti, F., Parzianello, L., Westendorp, A., Anthony, S., & Scheirer, W. (2016). Predicting first impressions with deep learning. arXiv preprint arXiv:1610.08119.
  11. McCurrie, Mel, et al. “Predicting First Impressions with Deep Learning.” Automatic Face & Gesture Recognition (FG 2017), 2017 12th IEEE International Conference on. IEEE, 2017.
  12. Link: http://www.marketsandmarkets.com/Market-Reports/affective-computing-market-130730395.html?gclid=EAIaIQobChMIhtTlp63u1gIVCAppCh2yFA-mEAAYASAAEgJDCPD_BwE
  13. Ekman, P. (2003). Emotions revealed (2nd ed.). New York: Times Books
  14. Link: https://www.marketingweek.com/2017/08/03/measuring-video-effectiveness/
  15. Link: http://ceur-ws.org/Vol-690/paper5.pdf
  16. Link: https://techcrunch.com/2016/08/16/cogito-leverages-human-behavior-to-nudge-customer-relationships/
  17. Additional Techcrunch Resource Link: https://techcrunch.com/2016/12/02/emotibot-wants-to-help-chatbots-know-how-you-really-feel/
  18. Link: http://www.newworldencyclopedia.org/entry/Cesare_Lombroso
  19. Additional Article Resource: http://tcrn.ch/2yq67pW
  20. NYT Article Link: http://nyti.ms/2yqexh5
  21. Link: Murphy, Heather. “Why Stanford Researchers Tried to Create a ‘Gaydar’ Machine”, The New York Times, October 9, 2017, https://www.nytimes.com/2017/10/09/science/stanford-sexual-orientation-study.html
  22. Link to the controversial paper: https://osf.io/fk3xr/
  23. Link: https://scatter.wordpress.com/2017/09/10/guest-post-artificial-intelligence-discovers-gayface-sigh/
  24. Link: http://voc.tv/2idhUC0
  25. Link: https://nirc.nanzan-u.ac.jp/nfile/1365
  26. Related link: https://nirc.nanzan-u.ac.jp/nfile/1480
  27. Demo link: https://itunes.apple.com/us/app/affdexme/id971529011?mt=8
  28. Link: https://www.affectiva.com/wp-content/uploads/2017/03/Do_Emotions_in_Advertising_Drive_Sales_Use_of_Facial_Coding_to_Understand_The_Relati.pdf
  29. Link: https://cdn.technologyreview.com/v/files/videoapr11110813am_0.mp4?sw=600