Machines Like Me: How did it REALLY End?

Ian McEwan’s first “sci-fi” novel, Machines Like Me, published earlier this year, has not been getting the best reviews, but I found it enjoyable to read. I can’t say it was my favorite of his, but he set the bar pretty high with Atonement and On Chesil Beach. As one member of my book club put it, “I’d read anything he wrote.” Yes. If he were tasked with writing instruction manuals, they’d be page-turners. 

My main gripe with Machines is the long digression on the P versus NP problem, which could’ve been boiled down to a paragraph for the sake of moving the story along. Plus, pages and pages were devoted to explaining the alternative history when it could’ve been more integrated into the plot—details only appearing when relevant—or at least summarized with far fewer words. Some might say this info-dumpy stuff is perfectly fine in Sci-Fi—I don’t. But anyway, these are fairly minor grievances.

There are a number of online reviews that take aim at the author rather than the book, particularly regarding his unfortunate comments in a Guardian interview in which he appears to want to distance himself from the Sci-Fi genre. As I see it, a good story is a good story, call it what you will. And, quite frankly, hearing the writer’s take on his own book is less interesting to me than hearing what the book itself has to say; I don’t like going into it with preconceived notions hand-delivered by the author. Worse yet are the second-hand interpretations by reviewers and…bloggers. So…If you haven’t read the book yet and intend to, stop reading this right now and go get the book. As for the rest of you…

One of the things I appreciated about Machines was its richness and depth; McEwan trusts readers to sift through the layers and come to their own conclusions. So much so that I wonder if I may have read too deeply into it, finding things in it that weren’t really there. Wouldn’t be the first time. Maybe you’ve read it and can tell me what you think?

Since the internet can provide a detailed summary for anyone interested, suffice it to say this is an alternate history which takes place in a more technologically advanced version of the ’80s, one in which Alan Turing is still alive. There’s a lot going on politically (Falklands War) and in the plot that I won’t get into…but here’s my very basic outline, followed by my take on the novel’s ambiguous ending. Need I say SPOILER ALERT? Ok.

SPOILER ALERT!

(Those who do not wish the book’s plot to be spoiled—you noblest of readers—please feel free to scroll down to “End Of Spoiler”)


There are three principal characters:

  1. Charlie, the narrator, an impulsive, tech-junkie thirty-something who squanders his inheritance to purchase Adam and who has the hots for his upstairs neighbor.
  2. Miranda, the ambivalent, oftentimes secretive, upstairs neighbor.
  3. Adam, one of 25 Adam-and-Eve androids put up for sale. His appearance in the cover art places him squarely in uncanny valley territory. However, he turns out to be the most interesting personality of all.

To take things to the next level, out of ‘friend’ territory, with Miranda, Charlie asks her to answer half of Adam’s personality set-up questions to make Adam their joint creation, their child, basically. This attempt to mold him in their image ultimately fails, but then again, that’s true of children made the old-fashioned way as well. Adam turns out to be his own ‘person’…sort of. It turns out that Miranda has secretly set up Adam’s personality so that he will always love her, and that, of course, has consequences.

Soon after, Charlie is placed in a very awkward situation when he hears them having (very good) sex upstairs. Miranda dismisses Charlie’s jealousy by likening Adam to a vibrator. “He’s a fucking machine,” she says.

But Charlie isn’t convinced. To him, Adam is far more than a glorified vibrator—he’s “a fucking machine.” How could a mere human possibly compete with that?

Plus, it appears that Adam doesn’t like being turned off, so to speak. At one point when Charlie tries to access the kill switch at the back of Adam’s neck, Adam politely asks him not to. Charlie does it anyway. The second time this happens, Adam breaks Charlie’s wrist—which runs counter to his programming not to harm humans. Westworld-ish? Maybe. But there’s a difference—Adam seems not to have realized what he was doing at the time and appears to feel guilty. When Charlie returns from the hospital, Adam apologizes again, but then he explains why he disabled his kill switch: “…we’ve passed the point in our friendship when one of us has the power to suspend the consciousness of the other” (Part V). Seems a fair point.

Adam acknowledges the conflict between them concerning their love for Miranda, but seems to think he and Charlie are capable of reasoning things out and keeping up their friendship.

At this point in the novel, Adam has evolved into the fullest, fleshiest character in the love triangle. This is underscored by a moment when Miranda’s father meets both Adam and Charlie for the first time… and assumes that Charlie is the android.

Meanwhile, we find out the other Adams and Eves are committing suicide, but no one quite knows why. When Charlie pushes his Adam for an explanation and asks whether the suicides have something to do with the state of the world or human nature, Adam says, “My guess is that it goes deeper.” But then he adds, “I’m not about to do the same thing. As you know, I’ve every reason to live.” Charlie, however, doesn’t quite believe it: “Something in his phrasing or emphasis aroused my suspicion.”

“Every reason to live” makes very little sense given the context. As things stand, Adam’s stuck in limbo, caught between his best friend and the woman he can’t help but love. What’s worse, his place in this household seems limited to the role of housekeeper and money earner (he’s very good at playing the stock market, apparently). This doesn’t appear to be good for Charlie and Miranda either. With little work to do for themselves, they lose a sense of purpose in life.

All along there’s been a plot brewing involving Miranda’s past, and that comes into play here. I won’t get into it, but essentially Adam reacts to it by undoing Charlie and Miranda’s plans to start a life together, which he explains to them after the fact, and with the (by now out-of-character) logical neutrality of a robot.

This infuriates Charlie, who sneaks up behind Adam with a hammer and does him in. It takes Adam a while to die, and he doesn’t actually die, not in the usual sense of the word—in his final speech he explains that “his entire being is stored elsewhere.” (This he managed despite the fact that the androids were only designed to last twenty years.) Here we finally see what Adam meant by “I’ve every reason to live.” He’s had a far grander vision of the future all along.

My interpretation of the ending

What to make of all this? After discussing with my book group and reading comments online, I see that I’ve read things in an entirely different way from most others. For instance, this reviewer didn’t take Adam’s lie as a lie:

…the idea of a robot whose inflexible virtues make him incompatible with naturally corrupt humans (Adam cannot bear the idea of a lie) is a downright cliché.

Constance Grady
The robot is the most human character in Ian McEwan’s so-so new novel

Indeed, it is a cliché, but I don’t think it’s the point McEwan was making; it was the point the human characters make. At one point Charlie reflects: “We told ourselves that this was, after all, a machine; its consciousness was an illusion; it had betrayed us with inhuman logic.” That Adam was incapable of understanding humans and their moral nuances is Charlie’s (and, as we’ll see in a minute, Turing’s) convenient interpretation. In Charlie’s case, a way of coming to terms with the horror of what he did. And yet, from what I’ve been hearing, readers—including everyone in my book group—seem to have taken Charlie’s assessment of Adam at face value.

But that ignores a few red flags, one being the great progress Adam makes throughout the novel. After all, it was Charlie who failed the Turing test, not Adam. The fact that Adam goes from charming to robotically inflexible seems too sudden to be believable. This suggests he was perfectly capable of lying and only pretended to be an inflexible robot in order to manipulate Charlie into putting an end to his life. This manipulation reveals that Adam was not only capable of lying, but also had a greater understanding of human psychology than the human characters had of themselves.

You might be wondering, why didn’t Adam just kill himself? It’s true that he promised Charlie he wouldn’t, but I think the real reason is that he needs Charlie’s help. In his dying breath, so to speak, he explains that there’s been a recall of the Adams and Eves due to the suicides, and he requests that his body be hidden from those who are coming to reprogram him. The emphasis here is on his similarity to us, his instinct to keep his identity and personality intact. Again, this runs counter to his programming.

After his robot show, Adam suddenly reverts back to who he was before. He delivers a magnanimous farewell speech, along with a final haiku (a form of poetry he’s come to favor over all others) about “machines like me and people like you.” He’s positively gushing with love for both Charlie and Miranda, and for humanity in general. This marks a return to the optimistic Adam we’ve seen in previous pages. He’s a creative idealist—overly idealistic, yes, but more in the way of teenagers than of inflexible, rule-bound robots.

Other points: From day one, Adam knew all he needed to condemn Miranda; in fact, one of the first things he tells Charlie is that Miranda is a “malicious liar”. If Adam’s ‘inhuman logic’ is really to blame for his betrayal at the end, why did it take him so long to reach this verdict? I wouldn’t expect an ‘ambulatory laptop’ to need that much time to reach a conclusion.

Another thing: Before the climactic scene, Charlie notices Adam has gotten all dressed up. Later, in part ten, as Charlie’s moving Adam’s lifeless body, he remarks on this again: “These were his going-away clothes. When he believed he was leaving us to meet his maker.” (The maker being Alan Turing.) Does this not suggest Adam knew in advance that he would deceive Charlie in order to get him to end his life?

When Adam’s body is delivered to Turing, it’s Turing who gives readers the impression that Adam was not capable of comprehending our world: “We don’t yet know how to teach machines to lie.” And: “They couldn’t understand us, because we couldn’t understand ourselves…but that’s just my hypothesis.” It’s understandable that readers would take Turing’s words as Truth—at least “truth” in terms of the novel—but I think that last clause, “just my hypothesis”, warns us to be skeptical. As I see it, it’s not that Adam was incapable of understanding humans, but that humans weren’t capable of accepting him, at least not on equal terms. Apparently even Turing is guilty.

But if Turing isn’t there to deliver the Truth, why include him in the novel?

Ultimately I think Turing’s role is to underscore the theme of intolerance; in the context of this alternate history, we’re meant to ask what might’ve happened had the world not treated him as an outcast, and we’re encouraged to draw a thematic parallel to Adam’s situation. In this story, the Adams and Eves aren’t accepted by us; but why couldn’t it be otherwise? Again and again Adam gives us an optimistic vision of the world, but it’s a too-good vision that we cannot—or will not—share. In a sense, this is the opposite of Westworld; the Adams and Eves don’t want to destroy us, they want desperately to belong to our world. Unfortunately for them, not even Alan Turing can completely shrug off his skepticism regarding their nature; he can’t completely buy into his own test. So while it’s true that McEwan’s playing around with machine uprising tropes—Sci-Fi clichés—this is really a tragedy about humanity’s intolerance and missed opportunities.

In the second to last paragraph, Charlie recalls a coin he once held, the Fields Medal in mathematics, inscribed with the following words from Archimedes: “Rise above yourself and grasp the world.” We can and we can’t. Therein lies the tragedy.

END OF SPOILER


If you’ve read it, I’d be curious to hear what you made of the ending. If you haven’t read it, please feel free to comment anyway, on anything you like…even if it’s totally irrelevant. Especially if it’s totally irrelevant—if you’ve read all the way through to the end of this post, then what the hell, you deserve it.

Speaking of endings, happy new year! Let’s say goodbye—or good riddance?—to 2019. (How did it REALLY end?)

 

9 thoughts on “Machines Like Me: How did it REALLY End?”

  1. Happy new year Tina!

    I haven’t read the book, so take this with a large grain of sodium chloride. The description of Turing sounds a bit un-Turing-like to me, but this is an alternate history, and who knows what he might have been like if he’d lived all those decades. It does sound like he was meant to play the wise-person role, someone who usually speaks for the author. Of course, doing it that way gives the author some plausible deniability.

    Adam and the others ignoring their programming is a common trope, at least in TV and movies. Is there any explanation on how it happens? Or is the idea that their free will just somehow emerges? Just curious.

    Thanks for the write up! It gives a better idea of the book than any of the others I’ve read. The story sounds pretty psychological. Not sure it’s my cup of tea. But tastes shift, so who knows.


    • Thanks for reading!

      The description of Turing is really mine here. In the book he gives Charlie a verbal spanking for killing Adam, which is probably why no one I’ve spoken to or read online has had the same interpretation I have. To me, the story makes more sense if Turing underestimates the AI that he himself helped to instigate. But it seems most people just take whatever Turing says to be true.

      “Adam and the others ignoring their programming is a common trope, at least in TV and movies.”
      Totally. I think McEwan uses that to create a lot of tension and suspense. Readers get bits of foreshadowing that signal the programming-override trope and expect Adam to go ballistic. Instead it’s Charlie who goes ballistic.

      “Is there any explanation on how it happens? Or is the idea that their free will just somehow emerges? Just curious.”

      Honestly, it’s been a while since I’ve read the book, but from what I recall, no one really knows. The free will that Adam exhibits in the story is really ambiguous—my take is that Adam has it, but not everyone sees it my way. The thing is, I get the sense that Charlie doesn’t really know what Adam has been programmed to do. His interactions are kind of like my interactions with my iPhone—I have no idea what’s going on inside it, I just use it. When he gives Adam a personality, he quickly finds Adam going way beyond that personality, which doesn’t surprise him. He wonders whether these customizing options are there just to make the ‘owner’ feel like he has control. Basically, he doesn’t know any more than what he’s been told, and he’s remarkably uncurious about it.

      Oh, and another thing: since Turing didn’t make these Adams and Eves himself, he doesn’t know either. In fact, he’s only in contact with Charlie to get the scoop on them for his own research. Throughout the novel, he’s getting Charlie’s take on Adam—but that doesn’t necessarily mean he believes Charlie all the time. So. Yeah. It’s complicated.

      I understand if it doesn’t sound like your cup of tea. It’s probably not what you might consider hard Sci-Fi, except maybe the research-y parts that I would like to have taken out. 🙂


      • On reading, my pleasure! Good to see you posting!

        On Turing, it might be that McEwan purposefully made it ambiguous. That’s the beauty of having the wise one role. There’s nothing saying the wise one must be right, although when it’s at the end of the story, most readers do take them as authoritative within the story universe.

        McEwan might be doing the same thing with Adam.

        On Charlie, yeah, protagonists who aren’t curious about what readers are curious about (without good reason) are annoying.

        On the researchy parts, most science fiction today does try to avoid long extended infodumps, doing what you said, working it into the plot. Of course, sometimes you have no choice, but you’re expected to minimize it. There are authors who violate that standard, but even they seem to realize there are limits.

        On being my cup of tea, part of it also is that I’m not wild even about mainstream sci-fi stories that are just about an AI. It typically has so much popular folk psychology and simplistic understandings of information systems that I find it annoying, except in rare cases. A drawback of being too versed in the subject.

        That’s not to say I can’t get into stories that posit theories I disagree with, as long as they’re part of some overall plot I can get into.


        • “There’s nothing saying the wise one must be right, although when it’s at the end of the story, most readers do take them as authoritative within the story universe.”

          Maybe that’s why people do seem to be taking Turing as authoritative. Interesting.

          “On Charlie, yeah, protagonists who aren’t curious about what readers are curious about (without good reason) are annoying.”

          I think that was also intentional. Charlie isn’t highly relatable. If anything, Adam is the real protagonist. The effect is sort of like Nick Carraway observing Gatsby, but to a lesser degree.

          “I’m not wild even about mainstream sci-fi stories that are just about an AI.”

          I get that. Is it worse for you when there’s an attempt at an explanation, but the explanation doesn’t cut it? Or is it worse when there’s no explanation at all?


          • Oh, I’m definitely in the camp that prefers an explanation, even if it’s one I wouldn’t agree with. Not giving an explanation, in the context of science fiction, strikes me as lazy. Anyone can write about mysterious happenings that can’t be explained, and then claim artistry when they simply don’t explain it. Producing a plausible explanation for the mysterious happenings is, to me, much more satisfying.

            I think back to Arthur C. Clarke’s version of 2001: A Space Odyssey. Clarke, in the novel version, answers the central mysteries of the story. Something Kubrick deliberately didn’t do. I loved that story as a kid, but I’m not sure I would have loved the movie if I hadn’t read the comic book adaptation of Clarke’s book first.


  2. Happy New Year! Is 2020 as weird for you as it is for me? I remember wondering if 1984 would be anything like the novel. And when 2001 seemed the distant future. 2020 kinda blows my mind.

    I haven’t read (and don’t plan to read) McEwan’s book. Nothing I’ve heard has grabbed me, and I was a little put off by his Guardian interview. (Mostly because I didn’t see him as really understanding SF literature but reacting to pop SF, which is like comparing fine dining to fast food.)

    Thing is, I’ve read and seen a lot of SF in the last 55+ years, so the story isn’t very new to me. That said, for those without that heavy SF background, especially for mainstream readers, I can see it might be interesting.

    FWIW: Stories about humanoid machines go back to ancient tales about golems, and there is a rich body of work since. From your description, I recognized ideas explored in 2001 and AI, just to name two popular films (which I just realized are both, or should have both been, Kubrick stories).

    Isaac Asimov’s robot stories explored a lot of this starting in the 1950s. Two of his classics, The Caves of Steel (1954) and The Naked Sun (1957), explore a robot-human police detective duo in the context of two factions of humanity: stay-at-home Earthers, who despise humanoid robots, and Spacers, who fully embrace them. The human detective, of course, is an Earther who learns to overcome his prejudice and see his robot partner as more human than some humans.

    There’s also the fact that I’ve never been a fan of alternative-history SF (which is, or maybe was, a good-sized sector of SF). I’m not sure why McEwan bothered with an alternate history. Just to get robots on the cultural scene a little early? Didn’t feel capable of setting a novel in the future?

    Part of the problem here might be world-building. If Adam is so life-like, what developmental history occurred to produce such home commodities? How can such a complex product reach that level of distribution and seemingly be so surprising? The second season of Westworld struck me as having very poor world-building, which shows how hard it is to get right.

    Developing a (science) fictionalized world isn’t easy. I’d bet most of the better SF authors grew up reading SF. And there’s a strong intersection between science-lovers and SF-lovers. That’s become more diffuse in the Anno Stella Bella era, but it’s still true, I think, in the best SF.

    I think it’s true that comedians can transition to drama more easily than dramatic actors and writers can transition to comedy. That’s because comedy is harder — it entails everything drama does, plus you have to pull off one of humanity’s harder tricks: being funny.

    I see SF as comedy to non-SF’s drama. Good SF has, or should have, all the elements of good fiction writing, plus you have to literally create an entirely new world. That, like humor, is not easy.

    “Some might say this info-dumpy stuff is perfectly fine in Sci-Fi—I don’t.”

    I generally agree. The exception involves writers whose ideas I find so interesting that I’m okay with cardboard characters and silly plots, as long as the ideas truly are interesting and new.

    Obviously it’s much better if it’s ideas plus good writing. 🙂


  3. Happy New Year to you! Oddly, 2020 doesn’t seem as weird to me as 2000 did. That was quite an adjustment.

    On McEwan, I think a lot of people were turned off by that interview. He really screwed up…I bet he wishes he could go back and redo it. I gather that he’s actually a fan of Sci-Fi, so who knows why he said that.

    “I’m not sure why McEwan bothered with an alternate history. Just to get robots on the cultural scene a little early? Didn’t feel capable of setting a novel in the future?”

    Good question. I think he did it to drive home a thematic element to the story—that nothing really changes. The story involves all sorts of political parallels that I didn’t get into, most of which are topical and relate to our so-called post-truth era. That said, the historical stuff was for me the weakest element of the novel. Too much exposition; I wanted to see the world building unfold organically. He’s such a good writer I was surprised he didn’t do an exquisite job on this front. Maybe he’s advanced far enough in his career that he feels he can be indulgent.

    “If Adam is so life-like, what developmental history occurred to produce such home commodities? How can such a complex product reach that level of distribution and seemingly be so surprising?”

    Actually, he gets into the developmental history quite a bit…sometimes too much for me. I have to admit, I skimmed over some of these details. It has something to do with Turing insisting that his research be free, open-sourced, and some company creating a limited number of very expensive androids that are supposedly able to pass as humans—except they don’t, not at first. This part rang true to me. Charlie’s first experience with Adam is comically disappointing, much like unwrapping a highly anticipated Christmas present and finding it needs to be charged overnight. This disappointment continues through the first part of the novel, perhaps blinding Charlie to some of the real progress Adam is making.

    For me, it was enough to set up the premise with this android that learns, evolves. I didn’t need to know how it happened. (Honestly, I doubt it can happen—let’s cure cancer first, don’t ya think?) Plus, I wasn’t sure Turing needed to be so involved in the plot; that felt a bit corny to me. I appreciate that there needs to be a great deal of world-building in Sci-Fi, but I prefer to have it behind the scenes, or below the water’s surface level, if you know what I mean. Turing could have been mentioned as part of the world-building, but having long lecturing digressions?

    But you’re right. That world-building has got to be hard. Hell, it’s hard to be consistent in fiction, period. Getting the times/dates right, not allowing objects to magically disappear. It’s tricky stuff…and boy do I know it. If you’re going to create a whole new world, you have even more to worry about—not to mention all those Sci-Fi fans who are standing by, ready to argue with you over the physics. I think it was Ursula Le Guin who said a novel creates its own set of rules. That’s a lot of fun, but also a lot of responsibility!


    • “I think he did it to drive home a thematic element to the story—that nothing really changes.”

      By showing us the SOS (Same Old Shit) in an entirely parallel universe? Could be. I tend to see parallel worlds as having the main property of not being our world, which weakens the point from my perspective, but maybe that’s just me. A future storyline has the benefit of showing our behavior doesn’t change, even when [insert blank].

      “I wanted to see the world building unfold organically.”

      Very much so. SF authors C.J. Cherryh and Hannu Rajaniemi are very good at that (to name two that spring to mind because I’ve read them recently).

      The latter, a young Finnish author, writes hard SF that he almost goes out of his way to never explain — it just is. His SF amounts to magical realism sometimes, and some of his short stories are fantasy (mild) horror. I like his writing style, but I sometimes wish for more explanation. 🙂

      Cherryh is a favorite of mine. Her publisher made her add the “H” to her last name to make it more science fictional. (And, as many female authors in SF do, she used only her first initials. For instance, I learned years later that frequent Star Trek author D.C. Fontana was a Dorothy.) She’s known for her aliens, alien worlds, and alien points of view (which she sometimes writes from). And she’s really good at presenting it all very organically. I see her as on par with Ursula Le Guin in terms of literary skill (Le Guin, as I believe you know, is universally acknowledged to be among SF’s very best authors — she’s another very organic storyteller).

      Authors like these have been polishing their world-building skills a long time, and they’ve set the bar pretty high. (Dune, which I believe you’ve read, is another great one for world building and storytelling. But Herbert never really matched himself in his other work. Cherryh and Le Guin have a large body of excellent work.)

      “For me, it was enough to set up the premise with this android that learns, evolves.”

      And, indeed, in a well designed world, the possibility of the premise is all that’s needed. A truly well-designed world sort of explains itself in its own necessity. Things exist because they must.

      “I wasn’t sure Turing needed to be so involved in the plot; that felt a bit corny to me.”

      Turing does seem an odd choice. He came along so early in computer development that I don’t see him as that associated with artificial intelligence. His Turing Test was very speculative, and most now think it inadequate. (Mike and I have talked about a “Rich Turing Test” — prolonged interaction, such as, for instance, between the humans and Adam.)

      If you, without having read all that SF, found Turing a corny addition, just think how I see it. As I mentioned, nothing I’ve read about the book seems to commend it to me, and a big part of that is the parallel-worlds thing, the alternate-history thing, and the use of Turing. Those all strike me as a little facile.

      “I think it was Ursula LeGuin who said a novel creates its own set of rules.”

      Yeah, it’s downright Gödelian!


  4. That’s the one novel of his I haven’t yet read. I’m currently reading Solar. He does like to research things to death, almost to the point of irrelevance — it’s kind of one of his ‘marks’, as a novelist. In Saturday (which is excellent, btw) he does the brain and spent two years(?) with a London neurosurgeon, going into theatre and everything. I read Nutshell last month, which is very funny — a contemporary take on Hamlet. I felt On Chesil Beach to be perfectly formed, exquisitely wrought, as a novella, and possibly his best work to date. Enduring Love (he loves to play on words!) is also quite superb, and at times extremely funny. Anyway, I’ll read this one ere long, so thanks! P.S. If you want a mix of great English humour and fantastic writing try Martin Amis’ London Fields, Tina. Like McEwan, Amis’ output is a little patchy, but that one’s a complete winner.

