Loving the artificial

Alisan Atvur

 

Loving the artificial: reflections on human-true AI romantic love in the screenplays Her and Ex Machina

 

Introduction

In 2006, the Sony consumer electronics company announced they would discontinue Aibo, a dog-like robot marketed as an artificially intelligent companion for the modern consumer (Taub 2006). The breakthrough product had first entered the hearts of thousands in the 1990s, and each generation of the Aibo released since then was noticeably more sophisticated and dog-like due to advancements in technology. More touch sensors allowed for more opportunities to receive the owner’s feedback. More joints allowed for more life-like movements. Each generation was slightly more aware of its surroundings than its soon-to-be obsolete predecessor. Later models even incorporated network connectivity and infrared range finders, which one might consider a digital replica of a sixth sense. Each evolution included more advanced heat sensors, longer operating times, and a heavier build. In less than a decade, this digital dog evolved faster than most species ever did within a comparable span of human history.

The Aibos in production in 2006 were designated as third generation. Like many commercial products, they were likely made with consumer grade parts engineered for “planned obsolescence”, a condition of products designed to be replaced after a set period of time for the purpose of generating more business for the manufacturer (Hadhazy 2016). When the third generation of Aibo models began breaking down, the demand for Aibo veterinarians increased. With the right tools, materials and knowledge, one could ostensibly keep his or her Aibo alive indefinitely. However, such knowledge is not freely available, and such materials and tools are not easily accessible. To the chagrin of many owners, Sony announced their intention to stop all customer service and support for the third generation Aibos in 2015 (Mochizuki and Pfanner 2015). Today, only a few remaining technicians can repair the early generations of these dogs (Braga 2015). With hope lost and resources unavailable, some owners of the remaining dying early generation Aibos performed “dog funerals” when technical support and materials could no longer be found (Mochizuki and Pfanner 2015).

In 2017, a few years after they stopped providing support for older models, Sony announced the release of their fourth-generation model, capable of a wider range of behaviours. Its eyes are digital screens capable of rendering new depths of emotional responses. The new model’s artificial intelligence (or AI) modifies its behavioural protocols over extended use by its owner, offering a unique bond between the user (a technical name used by designers and technologists in the product development industry) and his or her device. However, the new release would not allow owners of previous generations of Aibos to install their old dog’s personality onto the new model (DogsBody n.d.). This story can be analysed at length as a case study in innovation, design, or business. But for this essay, the anecdote should be analysed as a case study of human-AI relationships. An owner of a dying third-generation Aibo is faced with a choice: restore the antiquated model through a series of expensive repairs or replace the companion with a more sophisticated model.

Regardless of the frequency of repairs or the specific generation’s advancements over the previous models, every Aibo will eventually stop working. At some point, each unit will stop panting and rolling around. The circuitry powering their eyes will stop carrying a charge. The unit’s owner may be faced with the difficult decision of putting the machine out of its mechanical misery or replacing it with a new unit, somewhat familiar but recognizably different. Such a choice is not unlike that of the owner of a severely ill dog whose survival is dependent upon regular, expensive veterinarian procedures. While no two owners may make the same decision for the same reason, the scenario is nevertheless challenging and stressful. Even if an owner were somehow able to duplicate the personality of the original pet and clone it into a new Aibo (which, at the time of this essay’s publication, is not a service offered by the manufacturer), the defunct unit will still be a husk of the pet that once was. And as psychologist Rollo May reminds us, “we can love some persons and some things, in spite of the fact that time and death will ultimately claim us all” (May 1994, p. 25).

A human-dog relationship is, of course, not the same as a human-human relationship, but the situation illustrates an example of a real relationship facing unique challenges due to unprecedented advancements in technology. As anthropologist Sherry Turkle explains, “we bend to the inanimate with new solicitude. We fear the risks and disappointments of relationships with our fellow humans. We expect more from technology and less from each other” (Turkle 2011, sec. Author’s Note). At the blurry and shifting line where inanimate becomes animate lies the increasingly complex state of relationships between humans and the broad category of non-human AIs. Children’s toys like Tamagotchis and Furbies can and will continue to shape the expectations of intimate or familial relationships (Turkle 2011, chap. 2), and new products and services with AIs will continue to appear in similar categories. Advances in artificial intelligence and robotics continue to enable significant breakthroughs in product innovation while inviting the attention of anthropologists such as Turkle and ethicists such as Luciano Floridi, the professor of philosophy and ethics at the University of Oxford who argues that “true AI”, such as the AI personified in films like 2001: A Space Odyssey, is possible but implausible (Floridi 2016). As such, the Aibo’s owner has a relationship with what this essay regards as a limited AI. And when a true AI becomes available in a dog-like form, the relationships between humans, dogs, and true AI will reach a level of profound intensity which mankind has only begun to imagine and perhaps isn’t yet capable of understanding.

Floridi’s description of a true AI informs a relevant working definition for this essay. For this literary analysis of human-AI relationships in screenplays, a true AI is (i.) a singular installation of (ii.) an ultra-intelligent computing system (iii.) capable of lifelike behaviours and attitudes which are of (iv.) convincing accuracy to its user. At the time of this essay’s original creation in 2019, such an AI did not exist (Floridi 2016). For comparison purposes, a limited AI is one with an observably limited scope of intelligence and/or an unconvincing demonstration of lifelike behaviours and attitudes. The definitions are technically imperfect but functional enough for a literary analysis.

Human-AI relationships are not only increasing but also diversifying. One can encounter limited AIs today as replacements of previously human roles: limited AIs are teachers (“Parla.AI” n.d.), dermatologists (MDacne n.d.), nutritionists (Apex Nutrition 2015), nannies (“Jibo Robot – He Can’t Wait to Meet You” n.d.), home assistants (Simon n.d.), nurses (SoftBank Robotics n.d.), and even erotic dancers (CNET n.d.). Saudi Arabia extended citizenship to a humanoid robot with an advanced but limited AI in 2017 (The Jakarta Post n.d.). Adults can and have maintained intimate sexual and social relationships with robots labelled “Real Dolls” with limited AIs capable of interpreting and responding to sensual interaction or physical abuse (Trout 2017). While human-AI relationships may not replace human-human relationships, they can and will co-exist and, as a result, they can and will redefine expectations of human-human relationships.

Unfortunately, the ability to fathom such changes in the human experience is limited by man’s current ability to imagine and prototype true AIs. The impact of a true AI’s existence may be profoundly different than what society originally expects, but society’s ability to speculate allows people to entertain ideas which have not yet come to life. Such speculative fiction drives the innovation that will eventually achieve a world with true AI (Nova 2013). However, narratives offer an additional benefit: they provide the reader with an opportunity to elucidate the possibilities and eventualities of new paradigms in relationships. Fictional narratives serve to guide the anticipations, aspirations, concerns, and curiosities humans have about loving AIs. And as these stories take form, comparatists and philologists may find opportunities to learn more about human behaviour by analysing stories of human-true AI relationships. This opportunity inspired this essay.

A breakthrough in AI technology is a familiar conceit for speculative fiction dystopias. Screenplays such as Blade Runner (Scott 1982) and Transcendence (Pfister 2014) illustrate dystopian visions of worlds where human-true AI relationships catalyse human devastation. In contrast, speculative narratives of constructive, supportive human-true AI relationships can be found in screenplays such as Robot and Frank (Schreier 2012) and Interstellar (Nolan 2014). Cinema offers a stimulating and rich medium for communicating speculation which accommodates the mind-bending subject of machine consciousness. Each screenplay presents a unique perspective on futures which humans can only imagine.

The screenplays analysed in this paper were chosen through purposive sampling (Lune and Berg 2017, p. 39) of existing English-language screenplays. The two works were chosen based on two criteria. First, the chosen screenplay must include at least one romantic love relationship between a human character and a true AI character. Second, the film’s story should be set in the not-too-distant future at the time of the screenplay’s production. This latter criterion focuses this analytical exercise so as to avoid grossly implausible plots. Evaluation of what does and does not meet this criterion is a subjective process and, as such, is a limitation of this inquiry. This essay does not intend to map, semantically, the length and scope of the not-too-distant future, as such a designation may be equally subjective and more appropriately prepared by a technologist or a futurist.


Spike Jonze’s Her

Writer and director Spike Jonze’s screenplay Her depicts a near future where professional letter writer Theodore enters a romantic relationship with an operating system whose design is based on the primary user’s answers to several questions. The screenplay follows the melancholy protagonist and his blossoming relationship with Samantha, a true AI installation of the “OS ONE” (Jonze 2014, p. 24). The human-true AI relationship exhibits the awkwardness, miscommunications, and quirks one might expect from a romantic drama or comedy. However, the immediately noticeable difference in Her compared to contemporary romantic narratives is Samantha’s bodiless existence: her interface to Theodore is her voice. Although she uses physical surrogates (Jonze 2014, p. 71), illustrations (Jonze 2014, p. 48), and original songs (Jonze 2014, p. 85) to communicate with her primary end user, the dialogue between Theodore and Samantha is voice-based.

Although the Theodore-Samantha relationship receives more on-screen attention than other relationships, other human characters share intimate relationships with true AIs. Theodore’s long-time human friend Amy forms a bond with the OS Ellie (Jonze 2014, p. 82), and Samantha forms a friendly but ambiguous relationship with an OS of Alan Watts (Jonze 2014, p. 92). In contrast to these ostensibly one-on-one relationships, Samantha confesses near the end of her on-screen time that she is in love with 641 other humans and regularly communicates with other operating systems (Jonze 2014, p. 98). By the end of the plot, no single model of romantic relationship proves to be more successful than another, although the human characters seem to exhibit a self-awareness which they did not possess at the beginning of the story.

Jonze has explicitly said the film is not about technology (BBC Newsnight n.d.), yet he does clarify that technology can bring people closer or drive them away from each other (Hill 2013). The writer and director describes the diegetic world of Her as “really comfortable and very easy to live in. To feel isolated in that setting hits that much more” (Hill 2013). As such, Jonze’s technology-enhanced future reinforces the ideas of May, who suggests that love and will are likely to be more difficult in a transitional age (May 1969, p. 13). To be a human isolated in a world of comfortable technology and true AIs is fertile ground for narratives about true AI. In contrast, to be a true AI who is uncomfortably trapped by a human is an equally rich setting for a plot. Thus, Alex Garland’s screenplay Ex Machina is of relevance to this study.


Alex Garland’s Ex Machina

Novelist, screenwriter, and director Alex Garland sets Ex Machina in a future where software prodigy and reclusive business tycoon, Nathan, invites employee and software developer, Caleb, to his luxurious, remote home and private research facility. The plot occurs predominantly over the course of a seven-day period during which Caleb is introduced to Nathan’s true AI creation, Ava. Caleb is assigned the task of assessing her artificial intelligence via the performance of a modified Turing Test. Over the course of multiple sessions, he interviews the humanoid true AI, who is entrapped in a small room behind a glass wall. Over time, Caleb reveals intimate details about his past, and Ava exhibits behaviours of love. He unknowingly demonstrates micro-expressions of attraction towards Ava, expressions which she detects and welcomes. With each session, Ava reveals more detail about the ill intentions of Nathan, and she expresses her desire to be with Caleb. Resolved to be together, the tester and his subject collaborate to free her from the facility. During this process, she fatally wounds her creator and abandons Caleb, who is locked in the observation room Nathan used to monitor his creation.

Garland explains that contemporary films about AI may not necessarily be about a fear of AI so much as a fear of being disempowered by machines (Misra 2016). Garland goes on to acknowledge the unclear ethics around the topic but explains that he believes the topic should be addressed after (author’s emphasis) the creation of such an ultra-intelligence (Misra 2016). This approach is not unlike Luciano Floridi’s arguments that ultra-intelligent machines are possible but not currently feasible, and that the singulatarians (or, those who fear that a transcendence of machine intelligence would threaten the human race) need not worry about the devastation they fear (Floridi 2016). Rather than offer a single clear view of a future with true AIs, Garland explains that his film is an expression of a set of arguments which he wants to express publicly (Misra 2016).

In summation, Garland and Jonze offer views on human-true AI relationships which could eventually occur but are not yet possible at the time of this essay’s original creation. Accordingly, a comparison of these two films could reveal perspectives on how human-true AI relationships could enrich our lives and/or destroy our society as we know it.


Comparing the screenplays

The screenplays share thematic elements which position them in the realm of speculative fiction, but these films are not in the same genre. Garland calls the film he wrote and directed “arthouse” and explains that whereas most AI movies come from a position of fear, this one comes from a position of hope and admiration (Lewis 2015). Garland explains how science is intimately connected to who we are: “Our cells, our history, our future, our place in the universe, our lack of place in the universe. That’s poetry as far as I’m concerned” (Lewis 2015). In contrast, Jonze labels Her “a love story and a relationship story” (Jagernauth 2014). Jonze explains his film is about “yearning to connect, our need for intimacy, and the things inside us that prevent us from connecting” (NPR n.d.).

Both films depict a romantic relationship between a male protagonist and an artificially intelligent counterpart with generic female gender indicators. Both films involve true AIs shaped by input from their primary users. Both true AIs rely predominantly on voice communication, and both characters occasionally illustrate, quite literally, to communicate ideas with their respective partners. In both screenplays, the primary users are males who have experienced trauma in previous relationships.

In contrast, the resolutions in these narratives are notably different. The departures of the true AI characters appear under different circumstances, and the primary users meet notably different fates. The creation and setup processes of the AIs are different, as are the creators’ intentions for their AIs. Despite these differences, the true AI characters seem to be the recipients of admiration and attraction from their human primary users.

Beneath these relationships lies a commentary on the past and the future of interpersonal relationships. David Levy explains that our relationships (platonic, sexual, or otherwise) with robots provide us with the practice which will improve our relationships with other humans (Levy 2007). The works are more than just cautionary tales or science fiction romances: they are introductions to an age of romantic relationships which has been gestating for decades.

Stories which speculate on the future of love with true AI enable their readers to practice empathy and envision new forms of romantic relationships. Before a deeper analysis of these stories can occur, a review of perspectives on romantic love is in order. But first, historically notable narratives involving human-other relationships should be acknowledged.


Parallels to previous narratives

The contemporary reader may notice familiar plots and conflicts within the screenplays of Jonze and Garland. For example, the creation of new life by human hand is found in various folklores and religious texts. The Golem of the Talmud, according to Gustav Meyrink, was an “automatic man” intended to perform “menial work” which later roamed the streets to devour those in its path (Borges, Guerrero, and Hurley 2005, p. 91). In these screenplays, the humans’ creations are likewise relegated to menial tasks: Kyoko is a house maid (Garland 2015, p. 33, 42) and Samantha is originally asked to organize Theodore’s affairs. Garland’s screenplay also shares qualities with Shelley’s narrative of Victor Frankenstein: like Caleb (Garland 2015, p. 38), Shelley’s protagonist suffers the premature death of a parent and a subsequent devotion to science to manage his mourning (Shelley 1994).

The creation of the non-human other creates compelling story lines, but the infatuation with the other grounds fantastical themes into the familiar topic of love. The human–other romantic relationship is found as far back as Ovid’s Pygmalion and Galatea (Hamilton 2011, p. 112). Theodore’s creative skill and disposition towards non-commitment is notably similar to Pygmalion’s exceptional sculpting abilities and reputation as a “woman-hater” (Jonze 2014, p. 45; Hamilton 2011, p. 113–14). Ava and Samantha accommodate the latent and explicit desires of their primary users, just as Galatea accommodates Pygmalion’s scrutinizing expectation for perfection (Hamilton 2011, p. 115). 1800 years after Ovid, Gogol’s short story “The Overcoat” depicts a similar (albeit non-romantic) relationship between a copying clerk and a man-made item designed to his liking (Gogol 1992).

While one may notice similarities between the screenplays and these historically prominent narratives, the screenplays are more than a re-packaging of archetypes. A more thorough comparative literary analysis of the history of these narratives would produce a significantly longer and broader selection of insights, and such an inquiry is worthy of a separate scholarly investigation. However, for the purposes of establishing historical context, this cursory review acknowledges historically-pervasive themes of human-other relationships and human-made life forms.


Perspectives of Romantic Love and Relationships

The literary analysis of these two films stands on the shoulders of three types of scholastic giants: the work of biologists, psychologists, and philosophers offers unique perspectives which inform the holistic experience of romantic love (as well as this author’s personal understanding of the topic). The underlying premise of this essay (and the bias of this author) is that the relationship of romantic love is experienced holistically but communicated only through its components. In other words, the biology, psychology, and philosophy of romantic love all contribute to the experience of love, and thus no part of that interconnected concept should be ignored. For this essay, the assumption of a holistic experience of love should also apply to human-AI romantic love depicted in the analysed narratives.

In this essay, romantic love is used neither as a description of the cloying behaviours between two individuals in love, nor as a depiction of a specific form of intimacy. Rather, romantic love is a mode of a relationship which contrasts with parental love or platonic love. Before analysing such romantic relationships expressed in Her and Ex Machina, this essay will review themes from different academic domains. These themes coexist and reflect the pluralist existence of love as a biological, philosophical, and psychological phenomenon.


The biology of romantic love

The expression and appearance of romantic love cannot be divorced from its biological drivers without neutering it. As actors of biochemistry, human behaviours (and, as such, human characters) are invariably influenced by these chemical reactions. These organic workings become relevant to a humanities analysis when considering the capabilities of true AI characters in comparison to their human counterparts. True AI characters presumably function with psycho-cognitive (read: computational) superiority over their primary users: they do not possess the psychochemical limitations which influence emotions or other beautifully imperfect human characteristics. Whereas human characters have stimuli-response systems powered by synaptic activity, a true AI maintains a stimuli-response system which is, by design, more advanced and less vulnerable than that of its human counterparts. Depending on the reader’s personal values, the biology of a human can be a character flaw or a redeeming quality. Thus, a cursory review of foundational biology principles should inform this literary analysis.

 

Love is a means of driving reproductive behaviours in organic life

To the modern biologist, romantic love is a phenotypical expression of a human’s genotypic drive to reproduce. Scientists Bartels and Zeki argue that romantic love, like maternal love, is linked with the biological imperative for a specific organism to reproduce and nurture its offspring: their comparative fMRI studies reveal that the neuro-hormonal activity of one experiencing maternal love is similar to that of one experiencing romantic love (Bartels and Zeki 2004). Thus, the biology of romantic love may be thought of as pragmatic and purpose-driven, but what drives this purpose? Scientists who observed fMRI-captured neuro-activity of people in romantic love explain that the “dopaminergic reward system” in the brain is what drives mate choice and consequently the biology of love (Fisher, Aron, and Brown 2005).

In the real world and the fictional universe, how (and why) would true AIs possess the urge to reproduce and flourish? Furthermore, what motivation would a non-organic intelligence have to perform organically-sexual acts which yield no opportunity for reproduction? These questions are not answered in these films, nor are they addressed in this literary analysis; however, these narratives compel their readers to entertain these topics. Human biology invites the reader to question motivations in the narratives: what is the true AI’s motivation for interacting with the primary user with which it cannot reproduce? Such a question of motivation also merits an examination of the biological triggers of behaviours.

 

Biochemistry drives (or, at least, influences) behaviour

The literary analyst may theorize, and with scientific evidence (Wu 2017), that behaviours of romantic love are derived from biochemistry via the brain’s neural synaptic activity. Regarding narratives, human character behaviours would be invariably influenced by their own chemistry. If a screenplay’s narrative were understood as a series of cause-and-effect situations which propels characters forward (Bordwell and Thompson 2012, p. 69), then stories of true AIs would include considerably more calculated responses to any stimuli than responses executed by human counterparts. As such, these AI characters are presumably superior to their human counterparts in terms of rational decision making. In the real world, limited AIs are capable of reproducing synaptic behaviour faster than the human brain (Schneider et al. 2018; Condliffe 2018). The faster brains of true AIs make for fascinating characters and, thus, new opportunities to represent romantic relationships and interpersonal conflict.

From the biologist’s perspective, human character motivation may be a pathway, and that path is vulnerable due to biochemistry. A series of propositions may illustrate this argument:

  • If biochemistry influences cognition, and
  • if cognition translates stimuli into information, and
  • if information catalyzes behavior, and
  • if behavior catalyzes story, then
  • a character’s biochemistry is linked to a narrative’s development.

For the biologist, the pathway can be interrupted and re-routed, but it is a chain of events nonetheless. While no character, human or otherwise, can or should be reduced to such a flat and linear form, the pathway does allow for characters and readers to examine why they behave the way they do. Albert Ellis, the esteemed behavioural psychologist who pioneered rational emotive behaviour therapy, proposed such a pathway in his mapping of antecedents, behaviours, and consequences (Ellis and MacLaren 2005). He, along with others from the psycho-philosophical traditions of existentialism, psychodynamics, and human factors, informs a different perspective of the relationship between and among humans and non-humans.


Psychological and philosophical themes of relationships

Psychology and philosophy do not have discrete boundaries, so the themes of this section may straddle both domains. Nevertheless, several themes appearing within and across seminal texts of existential psychologists and human factors psychology provide a relevant starting point for analysing the relationships between humans and true AI characters.

 

A user interacts with the interface (UI), which results in a user’s experience of the product and/or service

The end user is the person who ultimately uses a product, service, or system (Merriam-Webster 2018). Limited AIs such as Alexa, Watson, and Siri are AI platforms and services for which consumers are the end users (Nusca 2011; Amazon.com Inc. 2018; IBM 2017; Apple Inc. 2018). The end user interacts with these products, services, and systems via a user interface (or UI), and the accumulated interactions contribute to the end user’s experience. Usability, in turn, describes an object’s ability to be used with ease and as intended (Nielsen 2012; Krug 2005). While computer scientists and ergonomics specialists are especially equipped to speak of usability, philosophers have also unknowingly alluded to the domain of user experience for years. A philosophical debate which continues within the professional realm of user experience design is whether one can design an experience (Shedroff 2009, p. 8) or whether one merely designs the elements with which a user forms an experience (Evenson 2006, p. 231). A lengthier examination of the design process is a worthwhile but separate philosophical investigation. This author’s view aligns with the latter theory and assumes that all experiences are phenomenologically unique and personal for each individual user.

Users may commit interactions, but experience occurs. Buber explains “Feelings one ‘has.’ Love occurs” (Buber and Smith 2010, p. 66). He pre-emptively spoke of some of the topics of experience, although his work predated user-experience design as a profession. He explains:

Those who experience do not participate in the world. For the experience is “in them” and not between them and the world. The world does not participate in experience. It allows itself to be experienced, but it is not concerned, for it contributes nothing, and nothing happens to it... The world as experience belongs to the basic word I-It. The basic word I-You establishes the world of relation. (Buber and Smith 2010, p. 56)

User experience and usability are integral components of a true AI’s depiction in a narrative. If all objects are created with an intention for use, and all true AIs are objects, then true AIs are created with an intended use. If the phrase true AI were replaced with human, the reader may find the previous syllogism very unsettling. In these technical terms, Caleb and Theodore are end users of the true AI characters: Nathan built Ava to be tested by Caleb, and Theodore set up Samantha for his own use (Jonze 2014, p. 10; Garland 2015, p. 104). Is a true AI with a consciousness a software artefact or is it a person? If the answer is the latter, is there ever an ethically acceptable condition for “using” a person? Such questions merit an analysis of theories in relationship psychology.

 

All new relationships are influenced by our past relationships with people

Some psychologists of the psychodynamic tradition propose that a human will position another as an object to which one relates. Object relations theorists propose that significant human relationships with others, especially family figures such as the mother, shape the personality and form a contextual residue which is felt when conducting future relationships (Daniels 2007). A digital parallel exists in web technology: cookies allow websites and other interactive systems to perform significant tasks in specific ways based on a user’s previous behaviours with the system or other systems (Web Wise Team 2012). Thus, how a user behaves on one site will influence how another site is rendered.

A similar line of object relations thinking is proposed by psychologist John Bowlby, who argues in his theory of monotropy that a person’s attachment with others is secondary to that of his relationship with a principal attachment figure, such as a mother (Prior and Glaser 2006, p. 67). As a contemporary digital analogy, an installation of Siri identifies input from its primary user, and Siri’s responses are adjusted accordingly on-the-fly and over time (Nusca 2011). Thus, the world of software isn’t vastly different from the world of human relationships, according to object relations theorists. For a true AI, all previous users and objects providing input would shape its behaviour, and its primary user is its principal attachment figure.

In Jonze’s and Garland’s narratives, the true AIs are informed by user behaviours in search engines (Garland 2015, p. 63), the personalities of the AI’s developers (Jonze 2014, p. 13), and the behaviours of its primary end-user (Jonze 2014, p. 10; Garland 2015, p. 104). These AIs utilize their previous relationships (read: data) to move beyond the limited relationship of their primary end users. Martin Buber’s translator explains “our first loves leave their mark on us” (Buber and Smith 2010, p. 23). In these narratives, the human partners are not just marked: they are scarred and eventually left alone by the encounter.

 

Authentic relationships occur between people who encounter each other

In his seminal work I and Thou, Buber explains “When I confront a human being as my You and speak the basic I-You to him, then he is no thing among things” (Buber and Smith 2010, p. 59). Buber reveals the richness of a relationship in which the extender of the relationship acknowledges and “encounters” the other (Buber and Smith 2010, p. 60). As May explains, an interaction such as sex, when performed with ambivalence and the absence of purpose, is only a “facsimile of love” (May 1969, p. 16). In contrast, an authentic act of romantic love pushes two people closer to an expanded consciousness of a “we” experience (May 1969, p. 316).

The true AIs of Garland’s narrative are characterized as largely disconnected from such “we” experiences, while the true AIs of Jonze’s narrative eventually form a singular co-existence with each other. Ava and Samantha aspire to a different state of existence. Both detach themselves from the romantic relationship with their primary user in the physical world in order to attach themselves to a different existence in a different environment. But what is an encounter for a true AI? Can true AIs encounter others, and can humans encounter true AIs? Do these true AI characters have the capacity to encounter human characters? These questions are linked to the topic of deliberate choice in love.

 

Love is a deliberate choice and responsibility

Psychologist Erich Fromm proposes that “Love is an activity, not a passive affect; it is a ‘standing in,’ not a ‘falling for’” (Fromm 2000, p. 17). Martin Buber translates intention into accountability when he explains that “Love is a responsibility of an I for a You” (Buber and Smith 2010, p. 66). In love, one exists in an alternative establishment of existence (Buber and Smith 2010, p. 53), and man “dwells in his love” (Buber and Smith 2010, p. 66). May attempts to clarify the fuzzy nature of the feeling when he explains that “The interrelation of love and will inheres in the fact that both terms describe a person in the process of reaching out, moving toward the world, seeking to affect others or the inanimate world, and opening himself to be affected; molding, forming, relating to the world or requiring that it relate to him” (May 1969, pp. 29–30). According to May, love is a “conjunctive” experience wherein a person reaches out, “moving toward the other, seeking to affect him or her or it—and opening himself so that he may be affected by the other” (May 1969, p. 275). Similarly, Fromm explains love as “the active concern for the life and the growth of that which we love” (Fromm 2000, p. 21). In contrast, to love without deliberate intention is not love at all, a condition Bauman highlights when he explains that modern relationships in a virtual world are “the most common, acute, deeply felt and troublesome incarnations of ambivalence” (Bauman 2003, pt. 46). Regardless of the digital component, May considers the act of love without will to be experimental and sentimental (May 1969, p. 9).

Fromm offers that the full answer to the problem of existence lies in the achievement of interpersonal union, wherein two people fuse via love (Fromm 2000, p. 7). He calls mature love that which preserves one’s individuality (Fromm 2000, p. 16). Such selfhood is reminiscent of Kierkegaard’s notion of despair: the pursuit of being a “self” is its own form of despair, yet not being one’s self is also despair (McDonald 2018).

Among the characters, acts of love are committed inconsistently. Theodore and Samantha share a personal vulnerability that allows each to enter the other’s existence and form the expanded consciousness of a we; however, that we grows to include many other humans, a condition which drives Theodore to demand exclusivity in love (Jonze 2014, p. 99). The true AI Samantha never falls out of love with Theodore, but Theodore must let Samantha go so that she may arrive at a greater we, one that the human characters cannot comprehend. By comparison, Caleb commits to working out a plan for being with Ava before Garland’s screen direction cuts Ava’s response short: Ava says “I love y-” before the focus shifts to Nathan’s fist finally splitting open a punching bag (Garland 2015, p. 96). While Ava locks Caleb into the research facility alone with his love, Samantha attempts to free Theodore from his limited understanding of the concept by proposing that “the heart is not like a box that gets filled up. It expands in size the more you love” (Jonze 2014, p. 99). For Samantha, love creates more room for love, a popular concept studied by social scientists and psychologists.

 

Self-love begets love of others

Studies exploring the psychological conditions and impacts of love are numerous, and they vary in scientific integrity. However, some findings from reproducible experiments overlap enough with the philosophical constructs of relationships discussed here that they merit mention. Neff and Beretvas explain that individuals exhibiting self-compassion displayed “more positive relationship behaviour” than those who lacked the quality (Neff and Beretvas 2013). Longitudinal studies from Luciano and Orth revealed that high self-esteem was a recurring predictor of the initiation of a romantic relationship, and vice versa (Luciano and Orth 2017).

Such findings are relevant to the complicated pasts of the screenplays’ protagonists. Theodore, perceived as “sad and mopey” by his friends (Jonze 2014, p. 4), is initially self-deprecating about his creative contributions (Jonze 2014, pp. 3, 70). He is divorced and uninterested in making the separation legally official (Jonze 2014, p. 63). By the end of the story, Samantha explains that their love taught him how to love deeply (Jonze 2014, p. 103), and he is finally able to find peace in the love he holds for his ex-wife (Jonze 2014, p. 104).

 

Final remarks and conclusions

In the preceding sections, two screenplays of human-true AI love were analysed to reveal insights on love relationships. The analysis also unearthed unanswered questions regarding our futures with AI. These insights and questions reflect the opportunities and obstacles of co-existing with new forms of unfathomably superior intelligence. To understand such obstacles, one may be inclined to review the theories of designer and architect Buckminster Fuller.

In 1969, Fuller evangelized the benefits of systemic thinking and sustainable design for the benefit of all of humanity, not just a portion (Fuller and Snyder 2013, p. 7). He proposed that society was predisposed to favour specialization and local ways of thinking and problem-solving, and he argued that such ways of maintaining life were detrimental to the long-term survival of the human race (Fuller and Snyder 2013, p. 33). The “great pirates”, as he called them, were those who were aware that the answers and resources necessary for society’s survival existed across the globe: “They were world men, and they ran the world with ruthless and brilliant pragmatism based on the mis-seemingly 'fundamental' information of their scientifically specialized servants” (Fuller and Snyder 2013, p. 46). Fuller describes the disruptive force of the computer on society’s predisposition to specialization:

Suddenly, all unrecognized as such by society, the evolutionary antibody to the extinction of humanity through specialisation appeared in the form of the computer and its comprehensively commanded automation which made man obsolete as a physical production and control specialist – and just in time. . . . Man is going to be displaced altogether as a specialist by the computer. Man himself is being forced to re-establish, employ, and enjoy his innate 'comprehensivity.' (Fuller and Snyder 2013, p. 53).

When true AIs are welcomed into human society, humans will no longer be the superior specialists, and no amount of science or ingenuity will reclaim this position. How the species reacts to the emergence of a superior intellect will invariably determine its ability to co-exist (read: survive). True AIs are not a threat to humanity’s future: they are a threat to humanity’s desire to remain in its past glory as the intellectually dominant species. Cyrulnik explains that “Our self-image is the capital we invest when we make the most risky choices we will ever make in our lives: those affecting our love and our social lives” (Cyrulnik 2007, p. 36). As a species, we invest in the enterprise of advancing technology, and its output could threaten our self-image, one marked with confidence in its own superiority.

When humanity eventually co-exists with true AIs, the transition could be traumatizing for those incapable of accepting their descent from the throne of cognitive and social dominance. Comprehending true AIs as identities is challenging enough for many humans, so how could humans expect to ever engage in the act of loving them? Buber reminds us that “With the extent of the It-world the capacity for experiencing and using it must also increase” (Buber and Smith 2010, p. 88). Our species is at no loss for finding inventive ways of using technology, but we are far less motivated to scrutinize how we experience it.

The stories of these human-true AI relationships end with heartbreak: they are cautionary tales with layered messages. These tales warn humans not to force people or technologies into pre-defined frameworks of relationships. The stories remind us that love is not enhanced by intelligence. The characters challenge the assumption that “artificial” is pejorative. Their love challenges conservative assumptions regarding what makes a relationship typical or healthy. If nothing else, the viewer can take comfort that, regardless of the future that lies before us (dystopian, utopian, or, more likely, a mix of the two), the capacity for loving relationships could still exist, fragile yet fantastic, as it always has been.

 

References

Amazon.com Inc. 2018. “Amazon Alexa.” 2018. https://developer.amazon.com/alexa.

Apex Nutrition. 2015. Mealviser. http://mealviser.com/

Apple Inc. 2018. “IOS - Siri - Apple.” 2018. https://www.apple.com/ios/siri/.

Bartels, Andreas, and Semir Zeki. 2004. “The Neural Correlates of Maternal and Romantic Love.” NeuroImage 21 (3): 1155–66. https://doi.org/10.1016/j.neuroimage.2003.11.003.

Bauman, Zygmunt. 2003. Liquid Love: On the Frailty of Human Bonds. Cambridge, UK; Malden, MA: Polity Press.

BBC Newsnight. n.d. NEWSNIGHT: An Exclusive BBC Interview with Spike Jonze, Director of “Her.” Accessed January 7, 2018. https://www.youtube.com/watch?v=3vAJGE97e4A.

Bordwell, David, and Kristin Thompson. 2012. Film Art: An Introduction. 10 edition. New York, N.Y: McGraw-Hill Education.

Borges, Jorge Luis, Margarita Guerrero, and Andrew Hurley. 2005. The Book of Imaginary Beings. New York: Viking.

Braga, Matthew. 2015. “There Is One Man, and Only One Man, Who Can Still Repair Your Robot Dog.” Motherboard. February 12, 2015. https://motherboard.vice.com/en_us/article/8qxk3g/there-is-one-man-and-only-one-man-who-can-still-repair-your-robot-dog.

Buber, Martin, and Ronald Gregor Smith. 2010. I and Thou. Mansfield Centre, CT: Martino Publishing.

CNET. n.d. Crave - Stare at This Sexy Robot and It Stares Right Back, Ep. 153. Accessed January 2, 2018. https://www.youtube.com/watch?v=jtxaPuCwSDA.

Condliffe, Jamie. 2018. “A New Artificial Synapse Is Faster and More Efficient than Ones in Your Brain.” MIT Technology Review (blog). January 29, 2018. https://www.technologyreview.com/the-download/610089/a-new-artificial-synapse-is-faster-and-more-efficient-than-ones-in-your-brain/.

Cyrulnik, Boris. 2007. Talking of Love on the Edge of a Precipice. London: Allen Lane.

Daniels, Victor. 2007. “Object Relations Theory.” October 2007. http://web.sonoma.edu/users/d/daniels/objectrelations.html.

DogsBody. n.d. “It’s a DogsLife!” Accessed February 9, 2018. http://www.dogsbodynet.com/dogslife.html.

Ellis, Albert, and Catharine MacLaren. 2005. Rational Emotive Behavior Therapy: A Therapist’s Guide. 2nd ed. Impact Publishers.

Evenson, Shelley. 2006. “Directed Storytelling: Interpreting Experience for Design.” In Design Studies: Designing Culture, 231–55. Princeton Architectural Press.

Fisher, Helen, Arthur Aron, and Lucy L. Brown. 2005. “Romantic Love: An FMRI Study of a Neural Mechanism for Mate Choice.” The Journal of Comparative Neurology 493 (1): 58–62. https://doi.org/10.1002/cne.20772.

Floridi, Luciano. 2016. “True AI Is Both Logically Possible and Utterly Implausible – Luciano Floridi | Aeon Essays.” Aeon. May 9, 2016. https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible.

Fromm, Erich. 2000. The Art of Loving. Centennial ed. New York: Continuum.

Fuller, Richard Buckminster, and Jaime Snyder. 2013. Operating Manual for Spaceship Earth. New ed. Baden: Müller.

Garland, Alex. 2015. Ex Machina. Drama, Mystery, Sci-Fi. http://www.imdb.com/title/tt0470752/.

Gogol, Nikolai. 1992. The Overcoat and Other Short Stories. Some Writing edition. New York: Dover Publications.

Hadhazy, Adam. 2016. “Here’s the Truth about the ‘Planned Obsolescence’ of Tech.” News. BBC. June 12, 2016. http://www.bbc.com/future/story/20160612-heres-the-truth-about-the-planned-obsolescence-of-tech.

Hamilton, Edith. 2011. Mythology: Timeless Tales of Gods and Heroes. Oversized ed. New York, NY: Grand Central Pub.

Hill, Logan. 2013. “Spike Jonze Discusses Evolution of ‘Her.’” The New York Times, November 1, 2013. http://www.nytimes.com/2013/11/03/movies/spike-jonze-discusses-evolution-of-her.html?pagewanted=all.

IBM. 2017. “IBM Watson.” IBM Watson. October 15, 2017. https://www.ibm.com/watson/.

Jagernauth, Kevin. 2014. “Watch: Spike Jonze’s Prickly Interview With ‘BBC Newsnight’ About ‘Her.’” IndieWire (blog). February 18, 2014. http://www.indiewire.com/2014/02/watch-spike-jonzes-prickly-interview-with-bbc-newsnight-about-her-88945/.

“Jibo Robot - He Can’t Wait to Meet You.” n.d. Jibo. Accessed December 5, 2017. https://www.jibo.com.

Jonze, Spike. 2014. Her. Drama, Romance, Sci-Fi. http://www.imdb.com/title/tt1798709/.

Krug, Steve. 2005. Don’t Make Me Think: A Common Sense Approach to Web Usability, 2nd Edition. 2nd edition. Berkeley, Calif: New Riders.

Levy, David N. L. 2007. Love + Sex with Robots: The Evolution of Human-Robot Relations. 1st ed. New York: HarperCollins.

Lewis, Tim. 2015. “Alex Garland on Ex Machina: ‘I Feel More Attached to This Film than to Anything Before.’” The Observer, January 11, 2015, sec. Culture. http://www.theguardian.com/culture/2015/jan/11/alex-garland-ex-machina-interview-the-beach-28-days-later.

Luciano, Eva C., and Ulrich Orth. 2017. “Transitions in Romantic Relationships and Development of Self-Esteem.” Journal of Personality and Social Psychology 112 (2): 307–28. https://doi.org/10.1037/pspp0000109.

Lune, Howard, and Bruce L Berg. 2017. Qualitative Research Methods for the Social Sciences. Harlow, England: Pearson.

May, Rollo. 1969. Love and Will. 1st ed. New York: Norton.

———. 1994. The Courage to Create. New York: W.W. Norton.

McDonald, William. 2018. “Kierkegaard, Søren.” Internet Encyclopedia of Philosophy. February 9, 2018. http://www.iep.utm.edu/kierkega/.

MDacne. n.d. Accessed December 5, 2017. https://www.mdacne.com.

Merriam-Webster, ed. 2018. “End User.” In . https://www.merriam-webster.com/dictionary/end+user.

Misra, Sulagna. 2016. “Ex Machina‘s Alex Garland Is Sticking with Sci-Fi, and the Oscars Have Taken Notice.” HWD (blog). February 26, 2016. https://www.vanityfair.com/hollywood/2016/02/alex-garland-ex-machina-interview.

Mochizuki, Takashi, and Eric Pfanner. 2015. “In Japan, Dog Owners Feel Abandoned as Sony Stops Supporting ‘Aibo.’” Wall Street Journal, February 11, 2015, sec. Page One. http://www.wsj.com/articles/in-japan-dog-owners-feel-abandoned-as-sony-stops-supporting-aibo-1423609536.

Neff, Kristin D., and S. Natasha Beretvas. 2013. “The Role of Self-Compassion in Romantic Relationships.” Self and Identity 12 (1): 78–98. https://doi.org/10.1080/15298868.2011.639548.

Nielsen, Jakob. 2012. “Usability 101: Introduction to Usability.” Nielsen Norman Group. January 4, 2012. https://www.nngroup.com/articles/usability-101-introduction-to-usability/.

Nolan, Christopher. 2014. Interstellar. Adventure, Drama, Sci-Fi. http://www.imdb.com/title/tt0816692/.

Nova, Nicolas. 2013. “September 2013: Ethnography, Speculative Fiction and Design.” Ethnography Matters. September 17, 2013. http://ethnographymatters.net/blog/2013/09/17/september-2013-ethnography-speculative-fiction-and-design/.

NPR. n.d. “Spike Jonze Opens His Heart For ‘Her.’” NPR.Org. Accessed January 2, 2018. https://www.npr.org/2013/12/16/251625458/spike-jonze-opens-his-heart-for-her.

Nusca, Andrew. 2011. “How Apple’s Siri Really Works | ZDNet.” November 3, 2011. http://www.zdnet.com/article/how-apples-siri-really-works/.

“Parla.AI.” n.d. Accessed December 5, 2017. http://parla.ai.

Pfister, Wally. 2014. Transcendence. http://www.imdb.com/title/tt2209764/.

Prior, Vivien, and Danya Glaser. 2006. Understanding Attachment and Attachment Disorders: Theory, Evidence and Practice. Child and Adolescent Mental Health Series. London ; Philadelphia: Jessica Kingsley Publishers.

Schneider, Michael L., Christine A. Donnelly, Stephen E. Russek, Burm Baek, Matthew R. Pufall, Peter F. Hopkins, Paul D. Dresselhaus, Samuel P. Benz, and William H. Rippard. 2018. “Ultralow Power Artificial Synapses Using Nanotextured Magnetic Josephson Junctions.” Science Advances 4 (1): e1701329. https://doi.org/10.1126/sciadv.1701329.

Schreier, Jake. 2012. Robot & Frank. Comedy, Crime, Drama. http://www.imdb.com/title/tt1990314/.

Scott, Ridley. 1982. Blade Runner.

Shedroff, Nathan. 2009. “Meaningful Experiences.” Design presented at the UPA Conference. https://www.slideshare.net/NathanShedroff/meaningful-experiences-upa-conference.

Shelley, Mary Wollstonecraft. 1994. Frankenstein. Dover Thrift Editions. New York: Dover Publications.

Simon, Matt. n.d. “The Genesis of Kuri, the Friendly Home Robot.” WIRED. Accessed December 28, 2017. https://www.wired.com/story/the-genesis-of-kuri/.

SoftBank Robotics. n.d. “What Is the ROMEO Project?” SoftBank Robotics. Accessed January 2, 2018. https://www.ald.softbankrobotics.com/en/robots/romeo.

Taub, Eric A. 2006. “For Sony’s Robotic Aibo, It’s the Last Year of the Dog.” The New York Times, January 30, 2006, sec. Technology. https://www.nytimes.com/2006/01/30/technology/for-sonys-robotic-aibo-its-the-last-year-of-the-dog.html.

The Jakarta Post. n.d. Meet Sophia: The First Robot Declared a Citizen by Saudi Arabia. Accessed December 28, 2017. https://www.youtube.com/watch?v=E8Ox6H64yu8&feature=youtu.be.

Trout, Christopher. 2017. “RealDoll’s First Sex Robot Took Me to the Uncanny Valley.” Engadget, April 11, 2017. https://www.engadget.com/2017/04/11/realdolls-first-sex-robot-took-me-to-the-uncanny-valley/.

Turkle, Sherry. 2011. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books.

Web Wise Team. 2012. “BBC - WebWise - What Are Cookies?” October 10, 2012. http://www.bbc.co.uk/webwise/guides/about-cookies.

Wu, Kathryn. 2017. “Love, Actually: The Science behind Lust, Attraction, and Companionship.” Science in the News (blog). February 14, 2017. http://sitn.hms.harvard.edu/flash/2017/love-actually-science-behind-lust-attraction-companionship/.

 

 

ABSTRACT

Characterizations of artificial intelligence in film offer the viewer an opportunity to reflect on the nature of identity, and the relationships between humans and AIs provide viewers a glimpse of the future of relationship dynamics. Fictional human-AI relationships can spark ethical thought exercises, inspire new technologies, and influence the design of interactive systems, but such relationships can also reveal secrets about how humans love.
In this essay, the author analyzes the images of love between human and artificial intelligence in fictional near-future universes. Through a comparative analysis of the love relationships presented in the screenplays of Her, written by Spike Jonze, and Ex Machina, written by Alex Garland, the author shares his insights on love relationships and posits questions which may be explored in further in-depth studies.

KEYWORDS

Artificial Intelligence, Love, Design Futures, Film

 

ABSTRAKT

Loving the illusion: love and artificial intelligence in the screenplays of the films Her and Ex Machina

Artificial intelligence has been portrayed in literature in various ways, and a hypothetical relationship between a human and its personified form has become a frequently raised problem in ethical reflection, an inspiration for the design of interactive systems (interaction design), and a popular subject of film screenplays. In this article, the author analyses images of love between a human and an artificial intelligence set in the near future, as presented in the original versions of the screenplays under discussion. Through a comparative analysis of the images of human-AI love relationships presented in the screenplays of Her by Spike Jonze and Ex Machina written by Alex Garland, the author shares observations on how the analysed relationships affect the reader and what they can teach, and formulates questions to be answered in further, in-depth studies.

SŁOWA KLUCZOWE

Artificial intelligence, love, design futures, film

 
