Americans awoke yesterday to revolting news: a gunman opened fire on a Las Vegas concert crowd, killing at least 59 people and wounding hundreds more. This horrifying event has rekindled the country’s long-smoldering debate on gun violence. Why are mass shootings a uniquely American problem?
Inadequate mental health care and lax gun laws are likely causes, but violent video games are also frequently blamed. Critics point out that the gaming industry’s most popular franchises are war simulators, which let players wield many of the same weapons used to perpetrate real-world attacks.
Is there a causal connection between Call of Duty and mass violence? The evidence isn’t clear. One recent study contradicted previous research by suggesting that playing violent video games has no negative long-term impacts on gamers’ empathy or aggression.
The data is the data, but that conclusion feels wrong to me. Marco Arment’s reflections yesterday on Twitter resonated with me:
We can’t honestly believe that hyper-realistic renderings of war and gun violence selling to millions of people is completely harmless.
— Marco Arment (@marcoarment) October 2, 2017
Back when I still played video games, I loved the Metal Gear Solid franchise, which encouraged stealth over brutality. In many situations, it was actually easier to tranquilize the enemy or sneak around him, rather than fire live rounds. Knock out the stooge, then hide his comatose body so that his comrades didn’t detect your presence.
Playing Metal Gear “nonviolently” took longer, but at least I didn’t have Henchman #14’s blood on my conscience. I was plagued by the thought that violence—even imaginary violence—could damage my soul somehow. I worried that rehearsing murder would make me less empathetic. Even though the “victims” were algorithms and polygons, attacking them might undermine my own reverence for real-life humanity.
Video games aren’t unique in their tendency to normalize violence. Many other American pastimes do the same thing. We hand our children toy pistols, then encourage them to mime-murder each other. Before our sporting events, professional soldiers toting firearms preside over a national anthem whose lyrics celebrate war. We ravenously consume MMA, boxing, and football—“sports” that treat gladiatorial violence as marketable entertainment.
These “harmless” practices aren’t the direct causes of our mass shooting epidemic. But they’re symptoms of a sickness in the American psyche, a perverse reverence for violence. We happily steep ourselves in foul waters, then act surprised when monsters emerge from the depths. ■
— Matt Hauger (@matthauger) October 3, 2017
This might be the first time I’ve been genuinely intrigued by virtual reality. This Dalí Museum exhibit allows visitors to “go inside and beyond Dali’s 1935 painting Archeological Reminiscence of Millet’s ‘Angelus’ and explore the world of the surrealist master like never before.”
Jean-François Millet’s seminal work Angelus haunted Dalí as a child, and there’s a nightmarish quality both to his painted meditation and to this digital recreation. Dalí’s signature surrealistic style translates well to VR; ironically, his abstract imagery somehow feels more real than a photorealistic equivalent might.
If you can’t make it to the Florida museum, check out the 360° video in the YouTube app on Android or iPhone; the on-rails experience tracks movement (via your phone’s accelerometer) and allows you to peer in any direction.
How did he do it? First, Garrett relied heavily on the game’s soundscape to orient himself around its 3D space. He even used the venerable Zelda hookshot “as a form of echolocation,” listening for the difference between the weapon striking walls and whiffing through open air. He also leaned on software emulation—saving his game state every few seconds, then restoring it when experiments went awry.
Garrett’s achievement testifies to his perseverance and ingenuity; it took five years of occasional gameplay to finish the task. Few gamers have the patience to do that sort of repetitive, time-consuming work.
Nintendo also deserves credit—for putting such care into Ocarina’s soundscape. The game’s sound engine places each noise in its proper stereo location. Plus, key occurrences on-screen have discernible audio equivalents. For example, when Link chaperones Zelda through Ganondorf’s castle, Zelda’s feet make tiny, just-perceptible noises.
What if every game developer took low-vision accessibility more seriously? What if game studios put the same care into their sound engines that they put into graphics and physics? What if every game’s sound design made it possible for blind gamers to play—without resorting to trial and error?
Imagine, for example, if your avatar’s footsteps reverberated more like real life. The sound would echo differently depending on your distance from the nearest wall, the texture of the floor, or the proximity of a deadly chasm. Just this one feature would allow a blind gamer to navigate virtual realms much like Daniel Kish explores the real world.
Games might even implement a “low-vision mode.” With this setting enabled, on-screen events would create constant, audible cues.
Take the recent Arkham Batman series as a theoretical example. How might these games sound if they were programmed with the sight-impaired gamer in mind? Each mob thug would grumble and yell incessantly; that way, the player could tell exactly where each foe stood, relative to Batman’s current position. Or, as the Batmobile motored through Gotham City, audio cues could distinguish open street intersections from adjacent buildings. That way, a gamer could hear exactly when to hit that e-brake. Finally, for less action-heavy sequences, Batman might speak his inner monologue out loud—describing the environment or the puzzle at hand in exhaustive detail.
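To make the idea concrete, here’s a minimal sketch of how such a positional cue might be computed. Everything here—the function name, the units, the assumption that the player faces “up” the y-axis—is hypothetical, not code from any actual game engine: it simply maps an enemy’s position to a stereo pan and a distance-faded volume.

```python
import math

def positional_cue(player_xy, enemy_xy, max_audible_dist=30.0):
    """Map an enemy's position to a stereo audio cue.

    Returns (pan, volume): pan runs from -1.0 (hard left) to 1.0
    (hard right), and volume fades linearly from 1.0 to 0.0 as the
    enemy approaches max_audible_dist. All names and units are
    hypothetical, for illustration only.
    """
    dx = enemy_xy[0] - player_xy[0]
    dy = enemy_xy[1] - player_xy[1]
    dist = math.hypot(dx, dy)
    if dist >= max_audible_dist:
        return 0.0, 0.0  # too far away to hear at all

    # Angle off the player's facing direction (assumed to be +y).
    angle = math.atan2(dx, dy)
    pan = math.sin(angle)  # fully left/right when the enemy is at 90 degrees
    volume = 1.0 - dist / max_audible_dist
    return pan, volume
```

A thug standing ten units to Batman’s right would come through as a cue panned hard right at roughly two-thirds volume; a thug directly ahead would sit dead center. A real engine would layer on reverb, occlusion, and head-related filtering, but even this crude mapping illustrates how much spatial information audio alone can carry.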
If more game developers attended to such details, a standard “low-vision vocabulary” would solidify over time. These conventions would guide devs’ work and allow blind gamers to quickly grok new games. Game engines (e.g. Unreal, Unity) would incorporate these features, giving developers a head-start on building blind-accessible titles. Design studios might even hire blind game developers to ensure that their games met the needs of the sight-impaired.
UPDATE: Reader Ian Hamilton responded via Twitter with a series of helpful thoughts. In particular, he notes that many fighting games (e.g. ‘Mortal Kombat X’) already include audio cues that make it easier for sight-impaired gamers to compete. Ian also linked to an interesting Game Developers Conference panel on “Reaching the Visually Impaired Gamer”.
Edwin Evans-Thirlwell, writing for Eurogamer:
[Kinect] had become central to Microsoft’s efforts to transform Xbox into an all-singing, all-dancing delivery vector for every kind of media, backed by a futuristic UI, with video games merely part of the package… But when the dream of an all-in-One tomorrow fell over — demolished by Sony’s focus on specs and gaming applications with the substantially cheaper PlayStation 4 — Kinect went down with it.
Fascinating oral history of the Xbox Kinect, which set a Guinness world record as the “fastest-selling consumer electronics device” back in 2011.
I’ve skipped the last two console generations, but Microsoft’s motion-detecting peripheral nearly sold me on the Xbox—in a way that first-person shooters never could. Even my wife, who generally ignores video games, was impressed by Dance Central.
Since those promising early days, the Kinect has lost all momentum. Lackluster console sales forced Microsoft to drop the peripheral from its hardware bundles. That move may have saved the Xbox One, but it also dried up the market for Kinect-targeted games.
That limited selection makes it unlikely that I’ll purchase a game console anytime soon.
From kindergarten through eighth grade, I attended a private Christian school. After nine years, that sheltered environment felt familiar and comforting. It was also expensive; by the summer before my freshman year of high school, my family could no longer afford the tuition, and I was forced to transfer to the local public school.
It was a rough transition. I now had 150 classmates instead of twenty; I felt lost in the crowd. To my naïve astonishment, kids brazenly smoked in the restrooms. Fist-fights broke out on the lawn outside the school almost daily. Like clockwork each day before lunch, snickering bullies shouldered me into the lockers. Worst of all, I knew absolutely no one in my class; I had to start new friendships from scratch, years after most cliques had been set in stone.
Starved for social contact, I treasured those few friendships I had outside of school. In particular, I clung to a younger neighbor from our low-income neighborhood. We rode the same bus (he attended the junior high), so each day we’d hunker down in the same seat.
And there, on that bus, we’d invent worlds.
Over time, we had developed a sort of spoken role-playing game that translated well to the bus trip. My friend would talk his way through an interactive adventure that I imagined and described. I’d place his character in some godforsaken place—an abandoned warehouse, a subterranean lair, a tall tower—and he’d have to “battle” his way out. He’d tell me each move he wanted to make, and I’d tell him what happened as a result. Each bus ride became an impromptu, oral performance of a text-based adventure game—think Zork or Hitchhiker’s Guide.
This imagined world was haunted by pop culture’s most famous arch-villains: the Joker. Chucky from the Child’s Play horror movies. Nightmare on Elm Street’s Freddy Krueger. My friend’s avatar faced off against each in turn—a series of “boss battles,” advancing from the least threatening to the most vile.
Who was the chief bad guy? None other than the Terminator, everyone’s favorite homicidal android. We so adored the Terminator films that we even named our game “Zzz-ching”—the noise the robot made as it stalked my friend through deserted corridors. Zzz-ching became our default pastime, on the bus and off.
That year in public school was scary. I felt lonely and overwhelmed by an unfamiliar, chaotic context. As silly as it might seem, our little game represented a welcome escape. It was a world I could control completely, when the real world seemed dangerously unpredictable. It was creative work that someone else appreciated, when I felt ignored in the mass of other students. For a few minutes each morning and afternoon, Zzz-ching provided some distraction and camaraderie—just enough to make public school a bit more bearable.
You will always have a soft spot for the films you loved when you were twelve. For me, 1993 was the golden era of film-making. The Fugitive, released that year, remains my favorite Harrison Ford movie—even besting my beloved Star Wars and Indiana Jones series. Similarly, I could watch Groundhog Day a thousand times and still laugh out loud.
But one 1993 film had a bigger impact on me than any other: Jurassic Park. Unlike with most movies, I can remember seeing it in the theater with my older brother. I couldn’t believe what I was seeing; how had they created such believable monsters? Afterwards, I bought (and nearly wore out) the all-symphonic soundtrack on cassette tape. The newly-released sequel, Jurassic World, even intrigues me, though the reviews say it’s middling at best. I still day-dream about the “science” cited in Park—whether geneticists might clone dinosaurs within my lifetime.
Spoiler alert: they won’t. Not in my lifetime—not ever. DNA degrades too quickly to survive sixty-plus million years. And even if we could somehow sequence a species’ DNA, we have no way to bring that animal to term. A real-life Jurassic Park will never happen.
So… what about “un-real” life? The film may provide the blueprint for a convincing dinosaur experience—in virtual reality. As technology advances, VR’s limitations become more apparent. Moving our material bodies around a digital landscape is awkward. There’s no convincing, seamless way to interact physically with these virtual environments, no “Holodeck” tech that could convince us that a freely-explorable Mesozoic landscape is real.
But Jurassic Park’s marquee theme ride—the automated Jeep safari—provides the perfect constraints for a fully-engaging dinosaur experience in VR.
Of course, it wouldn’t be so much a game as a themed experience. Imagine climbing into the familiar jungle-painted SUV, then donning a set of VR goggles. Then you’d experience an on-the-rails ride through Jurassic Park itself. You’d be locked inside the car—not for your safety, but to preserve the illusion. Within the constraints of the Jeep, you’d be free to customize your experience. You could choose which window to gaze through. You could crane your head to gape up at a brachiosaur through the sunroof. You could peer through the rain to catch a glimpse of a feasting T-Rex. You could track a pterodactyl’s soaring flight across the sky through the windshield.
It’s a far cry from actual, living, roaring dinos. But it’s also the closest we’ll ever get to seeing them with our own eyes. If Steven Spielberg could create convincing dinosaurs on-screen twenty-two years ago, surely today’s visual effects wizards could do the same in VR.
One last bonus? Virtual dinosaurs always show up and perform on cue.
Is Mesozoic still a thing? ↩
When I was a kid, my life revolved around video games. I spent every free moment mashing buttons. When I couldn’t game—at school, on the bus, or drifting off to sleep—I dreamt about gaming. Each month, I’d pore over the latest Electronic Gaming Monthly magazine, scrutinizing each screenshot meticulously.
My childhood memories can be divided into distinct console eras. First, the NES epoch, when my brothers and I salivated over—then received—Super Mario Brothers 3. During the Sega Genesis years, I became an adrenaline junkie, addicted to Sonic the Hedgehog’s reckless speed. In high school, I graduated to the PlayStation and immersed myself in the dense gameplay of Metal Gear Solid.
Then something changed. Somewhere along the way, I began to lose interest in video games.
Part of it was simple cost; my family sometimes struggled to pay the bills. Video games were a luxury we couldn’t afford. Even back then, single titles sold for $50-60 a pop.
But even after I started earning my own spending money, my love for games waned. In high school, girlfriends, sports, and the nascent Internet claimed my free time.
Then came college. For many young adults (especially men), college is when gaming takes hold. Even then (in 1999), network gaming was huge. Many guys in my dorm played Madden or Halo day and night.
Meanwhile, I was overwhelmed with schoolwork: piano practice, ensemble rehearsal, papers and assigned readings. There was no time for Halo LAN parties. Besides, I was exhausted. Often, I’d leave the dorm room before seven, then not return ’til long after midnight, when both roommates had powered down the Nintendo 64 and climbed into bed. I’d stumble through the dark and collapse onto my bed. Video games had dropped off my radar entirely.
Eventually, I graduated from college, found a job, and was surprised to find myself with hours of free time every evening. I tried to recapture that teenage magic and leap back into the gaming scene, picking up where I left off. I blew through Metal Gear Solid 2 and, later, Metal Gear Solid 3.
But that was the end of my gaming renaissance. Somehow, I couldn’t bring myself to spend time or money on games again. Part of me still enjoyed playing, of course. But another, louder part of me despised myself for binging away a weekend. I’d feel guilty, grimy, and unhealthy by the time I dropped the controller. Eventually, I buried my PlayStation 2 beneath a pile of DVDs; it’s sat there ever since.
Since then, I’ve watched two hardware generations pass me by. I can’t justify plunking down three or four hundred dollars on a modern console. A cheaper product—say, an Apple TV with an app store—might tempt me back. I doubt it, though. At some point, it seems, my gaming addiction lost its grip over me.
I wonder… is this the normal progression—to abandon games as the demands of work and home ownership and family press in? Or am I a millennial aberration? I’d love to hear from other one-time, adolescent gamers; did you maintain your obsession into adulthood?
Once upon a time, I was an elderly hobbit.
Or, at least, I played one online.
ElendorMUSH is an Internet-based role-playing game set in J.R.R. Tolkien’s Middle-earth universe (familiar from The Hobbit and Lord of the Rings). The “MUSH” stands for “Multi-User Shared Hallucination.” Each new user creates a character, choosing a race (e.g. hobbit, elf, orc) and a name that fits the selected culture. Over time, the character’s details get fleshed out, as the player decides on an appearance, a backstory, and a weapon of choice. Then, game time is spent “role-playing”–that is, acting out Tolkienesque scenes with other players.
The twist? MUSHes are entirely text-based. Elendor boasts no slick graphics, no beautifully-rendered landscapes, and no orchestral soundtracks. The entire game experience is mediated through the written word.
If that seems needlessly ascetic, consider the fact that the MUSH’s heyday was the mid-90s. Broadband hadn’t yet been widely embraced, and most users had piss-poor Internet connections. Text consumed far less bandwidth than graphics; a text-based game could be played even over the slowest dial-up connection.
Fifteen-year-old Matt would fire up our wheezy old Apple IIgs, dial into the local university’s network, and sign into ElendorMUSH. There, I acted out a fairly mundane hobbitish existence. I played Osmbise Bushet, of the Shire Bushets. Osmbise had left his Northfarthing home and moved to Bree to escape a suffocating family life. He served as a police constable in that frontier town, keeping rabble-rousers and drunkards in line. He spent his off-hours drinking tea–never ale–at the Inn of the Prancing Pony.
It didn’t take long for Elendor to take over my life. I role-played for hours on end and soon earned a reputation as a good writer. To my delight, no one on the MUSH knew (or, likely, cared) that I was a gangly, awkward teenager. Before long, I was invited to become a local administrator over the Bree user group. Excited, I upped my commitment and connected even more often. Often, I wouldn’t crawl into bed until just before dawn.
My virtual obsession may seem lame. But, in my defense, I felt that I was contributing to something communal and artistic. No, Elendor’s role-playing transcripts wouldn’t compare to Tolkien’s inspired prose. But the shared writing experience encouraged improvisation, literacy, and creativity. I can think of worse habits.
More importantly, Elendor represented a sanctuary from a disjointed, dysfunctional family life. In the wake of my parents’ divorce, things had fallen apart. Addiction, abuse, and mental illness reigned. MUSHing offered a safe, predictable alternative to the (frightening and uncontrollable) real world.
And, although I didn’t realize it at the time, my “Osmbise” character embodied my emotional response to this fractured home life. My hobbit abstained from alcohol; I wished for my parents to overcome their own addictions. Osmbise fled the Shire to avoid his family; I wanted to get away, too. And Osmbise served as a constable, keeping the peace; I had desperately tried to establish order at home, caring for younger siblings and cleaning far more than a fifteen-year-old ever should.
Soon after my promotion to local admin, connecting to Elendor grew more challenging. My dial-up connection frequently flaked out, booting me off the MUSH and interrupting my role-playing sessions. Other times, I’d be forced offline by real-world demands; we only had one phone line, and my mom didn’t appreciate me hogging the connection with weirdly cultish Internet games.
Eventually, the local university cancelled the old student account I used to connect to the Internet. I was forced to quit Elendor, cold turkey. I managed to detox and eventually moved on to more typical teenage pastimes: part-time jobs, dating, sports. By the time we finally got broadband at home, the virtual world held less appeal for me. In college, I never even owned a computer. My MUSHing days were done.
Elendor itself never went away. It’s still kicking out there on the Internet. But, like Middle-earth itself, the virtual world was doomed to diminish with time. By the mid-2000s, the broadband revolution had arrived, and online gaming exploded. Graphics-heavy MMORPGs like World of Warcraft siphoned users away from the text-centric games. Elendor’s virtual population plummeted. Today, you’re lucky to see a dozen players logged in at any one time. Back in the late 90s, hundreds of users joined the game each evening.
Last year, I reconnected to the MUSH for the first time in decades. I was instantly transported back to adolescence, with all its hurt and happiness. And, to my surprise, several players there recognized me immediately. There are still a few Elendor diehards who remember the grey-haired hobbit with the funny name.
Don’t ask me where the name came from. I invented the most ridiculous, un-hobbit-like moniker I could think of, hoping to stand out. ↩
After decades of dreaming, virtual reality lies within reach. Valve, bastion of traditional PC gaming, now openly discusses a timetable ’til we have legit VR. Oculus, a VR headset start-up, has garnered accolades from the tech press. It’s no longer a question of “If we ever get VR,” but “When?”
What’s holding VR back? It’s not the visuals; we’re already close enough to photo-realism, even on gaming consoles. The audio’s not a problem, either; games gained CD-quality audio twenty years ago. Even the headset technology is catching up, bundling sight and sound into a lightweight, inexpensive package. No, the real problem is touch; how can you believe a virtual world that you can’t walk around or touch or climb?
Even without touch technology, I worry about VR addiction. Once compelling virtual worlds exist, will gamers escape dissatisfying real-world lives by slipping on their headsets? Too many young men already float through life in a gaming-centric haze, stumbling from bed straight into MMOs and FPSs, then back into bed again. Will affordable, convincing VR accelerate our decline into a Ready Player One dystopia?
Despite the risks to our humanity, and despite the remaining technological challenges, I find VR intriguing. My fascination has more to do with Star Trek than with the flashy neon worlds of Tron or Lawnmower Man. I don’t want a first-person version of traditional video games; I want the Holodeck. Picard, Data, and the gang escaped into Sherlock Holmes mysteries, private eye capers, and high-seas adventure.
I want to experience my favorite fiction, first-hand. Imagine being able to step into your favorite movies and TV shows—to try on your favorite roles. You might play Bilbo in The Hobbit, or Mikey from The Goonies. A tiny heads-up display would show you the script (if you needed it). Or maybe the AI would be more sophisticated, and non-player characters would understand your words and actions, then respond accordingly. You could rewrite the script. Go ahead; find out what happens if Bilbo refuses to pass along the Ring. Go ahead; let Chunk get killed. It’d be the ultimate Choose Your Own Adventure story.
Holodeck episodes were reliably terrible. But the technology sparks my imagination. ↩