Americans awoke yesterday to revolting news: a gunman opened fire on a Las Vegas concert crowd, killing at least 59 people and wounding hundreds more. This horrifying event has rekindled the country’s long-smoldering debate on gun violence. Why are mass shootings a uniquely American problem?
Inadequate mental health care and lax gun laws are likely causes, but violent video games are also frequently blamed. Critics point out that the gaming industry’s most popular franchises are war simulators, which let players wield many of the same weapons used to perpetrate real-world attacks.
Is there a causal connection between Call of Duty and mass violence? The evidence isn’t clear. One recent study contradicted previous research by suggesting that playing violent video games has no negative long-term impacts on gamers’ empathy or aggression.
The data is the data, but that conclusion feels wrong to me. Marco Arment’s reflections yesterday on Twitter resonated with me:
We can’t honestly believe that hyper-realistic renderings of war and gun violence selling to millions of people is completely harmless.
— Marco Arment (@marcoarment) October 2, 2017
Back when I still played video games, I loved the Metal Gear Solid franchise, which encouraged stealth over brutality. In many situations, it was actually easier to tranquilize the enemy or sneak around him, rather than fire live rounds. Knock out the stooge, then hide his comatose body so that his comrades didn’t detect your presence.
Playing Metal Gear “nonviolently” took longer, but at least I didn’t have Henchman #14’s blood on my conscience. I was plagued by the thought that violence—even imaginary violence—could damage my soul somehow. I worried that rehearsing murder would make me less empathetic. Even though the “victims” were algorithms and polygons, attacking them might undermine my own reverence for real-life humanity.
Video games aren’t unique in their tendency to normalize violence. Many other American pastimes do the same thing. We hand our children toy pistols, then encourage them to mime-murder each other. Before our sporting events, professional soldiers toting firearms preside over a national anthem whose lyrics celebrate war. We ravenously consume MMA, boxing, and football—“sports” that treat gladiatorial violence as marketable entertainment.
These “harmless” practices aren’t the direct causes of our mass shooting epidemic. But they’re symptoms of a sickness in the American psyche, a perverse reverence for violence. We happily steep ourselves in foul waters, then act surprised when monsters emerge from the depths. ■
— Matt Hauger (@matthauger) October 3, 2017
How did he do it? First, Garrett relied heavily on the game’s soundscape to orient himself in its 3D space. He even used the venerable Zelda hookshot “as a form of echolocation,” listening for the difference between the weapon striking walls and whiffing through open air. He also leaned on software emulation: Garrett saved his game state every few seconds, then restored it whenever an experiment went awry.
Garrett’s achievement testifies to his perseverance and ingenuity; it took five years of occasional gameplay to finish the task. Few gamers have the patience to do that sort of repetitive, time-consuming work.
Nintendo also deserves credit—for putting such care into Ocarina’s soundscape. The game’s sound engine places each noise in its proper stereo location. Plus, key occurrences on-screen have discernible audio equivalents. For example, when Link chaperones Zelda through Ganondorf’s castle, Zelda’s feet make tiny, just-perceptible noises.
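To make the idea concrete, here’s a rough sketch of how a sound engine might place a noise in the stereo field from the listener’s and source’s positions. (This is my own illustration with hypothetical names—I have no insight into Nintendo’s actual engine—and which side counts as “left” depends on your coordinate convention.)

```python
import math

def stereo_gains(listener_pos, listener_facing, source_pos):
    """Compute left/right channel gains for a sound source, based on
    the angle between the listener's facing direction and the source.
    Equal-power panning keeps perceived loudness constant as the
    source sweeps across the stereo field."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    # Angle of the source relative to where the listener is facing.
    angle = math.atan2(dy, dx) - listener_facing
    # Map the angle to a pan value in [-1, 1]: -1 and 1 are hard pans.
    pan = max(-1.0, min(1.0, math.sin(angle)))
    # Equal-power panning law: gains trace a quarter circle,
    # so left**2 + right**2 always equals 1.
    left = math.cos((pan + 1) * math.pi / 4)
    right = math.sin((pan + 1) * math.pi / 4)
    return left, right
```

A source dead ahead gets equal gain in both ears; a source off to one side collapses into a single channel—exactly the difference Garrett was listening for.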
What if every game developer took low-vision accessibility more seriously? What if game studios put the same care into their sound engines that they put into graphics and physics? What if every game’s sound design made it possible for blind gamers to play—without resorting to trial and error?
Imagine, for example, if your avatar’s footsteps reverberated more like real life. The sound would echo differently depending on your distance from the nearest wall, the texture of the floor, or the proximity of a deadly chasm. Just this one feature would allow a blind gamer to navigate virtual realms much like Daniel Kish explores the real world.
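The echo timing could fall out of simple physics. As a back-of-the-envelope sketch (hypothetical function names, not any real engine’s API): sound travels to the wall and back at roughly 343 m/s, and the reflection gets quieter with distance.

```python
def footstep_echo(distance_to_wall_m, speed_of_sound=343.0):
    """Return the echo delay (seconds) and attenuation for a footstep
    reflecting off the nearest wall. Sound makes a round trip, so the
    delay is twice the distance over the speed of sound; the echo's
    loudness falls off roughly with the square of distance (clamped
    so very near walls don't amplify the sound)."""
    delay = 2.0 * distance_to_wall_m / speed_of_sound
    attenuation = min(1.0, 1.0 / max(distance_to_wall_m, 1.0) ** 2)
    return delay, attenuation
```

A wall ten meters away echoes back about 58 milliseconds later and much fainter; a wall half a meter away is nearly instantaneous and full volume. That gradient is what a Daniel Kish–style player would navigate by.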
Games might even implement a “low-vision mode.” With this setting enabled, on-screen events would create constant, audible cues.
Take the recent Arkham Batman series as a theoretical example. How might these games sound if they were programmed with the sight-impaired gamer in mind? Each mob thug would grumble and yell incessantly; that way, the player could tell exactly where each foe stood, relative to Batman’s current position. Or, as the Batmobile motored through Gotham City, audio cues could distinguish open street intersections from adjacent buildings. That way, a gamer could hear exactly when to hit that e-brake. Finally, for less action-heavy sequences, Batman might speak his inner monologue out loud—describing the environment or the puzzle at hand in exhaustive detail.
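To illustrate (purely hypothetically—this is not how the Arkham games are actually built), a low-vision mode might translate on-screen events into clock-face directions, the way sighted players read a minimap:

```python
import math

def direction_cue(player_pos, target_pos):
    """Describe a target's position relative to the player as a clock
    direction, with "12 o'clock" meaning straight ahead (+y axis in
    this toy coordinate system)."""
    dx = target_pos[0] - player_pos[0]
    dy = target_pos[1] - player_pos[1]
    angle = math.degrees(math.atan2(dx, dy)) % 360  # 0 deg = ahead
    hour = round(angle / 30) % 12  # 360 degrees / 12 hours
    return f"{12 if hour == 0 else hour} o'clock"

def low_vision_cue(event, player_pos, target_pos):
    """Combine an event name with a direction, ready to hand to a
    screen reader or a spatialized voice line."""
    return f"{event} at {direction_cue(player_pos, target_pos)}"
```

A thug directly to Batman’s right would announce himself at “3 o’clock”; one behind, at “6 o’clock.” The same scheme could voice street intersections from the Batmobile.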
If more game developers attended to such details, a standard “low-vision vocabulary” would solidify over time. These conventions would guide devs’ work and allow blind gamers to quickly grok new games. Game engines (e.g. Unreal, Unity) would incorporate these features, giving developers a head-start on building blind-accessible titles. Design studios might even hire blind game developers to ensure that their games met the needs of the sight-impaired.
UPDATE: Reader Ian Hamilton responded via Twitter with a series of helpful thoughts. In particular, he notes that many fighting games (e.g. ‘Mortal Kombat X’) already include audio cues that make it easier for sight-impaired gamers to compete. Ian also linked to an interesting Game Developers Conference panel on “Reaching the Visually Impaired Gamer”.
Edwin Evans-Thirlwell, writing for Eurogamer:
[Kinect] had become central to Microsoft’s efforts to transform Xbox into an all-singing, all-dancing delivery vector for every kind of media, backed by a futuristic UI, with video games merely part of the package… But when the dream of an all-in-One tomorrow fell over — demolished by Sony’s focus on specs and gaming applications with the substantially cheaper PlayStation 4 — Kinect went down with it.
Fascinating oral history of the Xbox Kinect, which set a Guinness world record as the “fastest-selling consumer electronics device” back in 2011.
I’ve skipped the last two console generations, but Microsoft’s motion-detecting peripheral nearly sold me on the Xbox—in a way that first-person shooters never could. Even my wife, who generally ignores video games, was impressed by Dance Central.
Since those promising early days, the Kinect has lost all momentum. Lackluster console sales forced Microsoft to drop the peripheral from its hardware bundles. That move may have saved the Xbox One, but it also dried up the market for Kinect-targeted games.
That limited selection makes it unlikely that I’ll purchase a game console anytime soon.
When I was a kid, my life revolved around video games. I spent every free moment mashing buttons. When I couldn’t game—at school, on the bus, or drifting off to sleep—I dreamt about gaming. Each month, I’d pore over the latest issue of Electronic Gaming Monthly, scrutinizing every screenshot.
My childhood memories can be divided into distinct console eras. First, the NES epoch, when my brothers and I salivated over—then received—Super Mario Brothers 3. During the Sega Genesis years, I became an adrenaline junkie, addicted to Sonic the Hedgehog’s reckless speed. In high school, I graduated to the PlayStation and immersed myself in the dense gameplay of Metal Gear Solid.
Then something changed. Somewhere along the way, I began to lose interest in video games.
Part of it was simple cost; my family sometimes struggled to pay the bills. Video games were a luxury we couldn’t afford. Even back then, single titles sold for $50-60 a pop.
But even after I started earning my own spending money, my love for games waned. In high school, girlfriends, sports, and the nascent Internet claimed my free time.
Then came college. For many young adults (especially men), college is when gaming takes hold. Even then (in 1999), network gaming was huge. Many guys in my dorm played Madden or Halo day and night.
Meanwhile, I was overwhelmed with schoolwork: piano practice, ensemble rehearsal, papers and assigned readings. There was no time for Halo LAN parties. Besides, I was exhausted. Often, I’d leave the dorm room before seven, then not return ’til long after midnight, when both roommates had powered down the Nintendo 64 and climbed into bed. I’d stumble through the dark and collapse onto my bed. Video games had dropped off my radar entirely.
Eventually, I graduated from college, found a job, and was surprised to find myself with hours of free time every evening. I tried to recapture that teenage magic and leap back into the gaming scene, picking up where I left off. I blew through Metal Gear Solid 2 and, later, Metal Gear Solid 3.
But that was the end of my gaming renaissance. Somehow, I couldn’t bring myself to spend time or money on games again. Part of me still enjoyed playing, of course. But another, louder part of me despised myself for binging away a weekend. I’d feel guilty, grimy, and unhealthy by the time I dropped the controller. Eventually, I buried my PlayStation 2 beneath a pile of DVDs; it’s sat there ever since.
Since then, I’ve watched two hardware generations pass me by. I can’t justify plunking down three or four hundred dollars on a modern console. A cheaper product—say, an Apple TV with an app store—might tempt me back. I doubt it, though. At some point, it seems, my gaming addiction lost its grip over me.
I wonder… is this the normal progression—to abandon games as the demands of work and home ownership and family press in? Or am I a millennial aberration? I’d love to hear from other one-time, adolescent gamers; did you maintain your obsession into adulthood?
I’ve been underwhelmed by “next-generation” gameplay videos for the Xbox One and PlayStation 4.
Though I haven’t been an active gamer for nearly a decade, I pay attention when new consoles get released. I’m eager to see just how photo-realistic the new games can get. But this time around, I’m hard-pressed to tell the difference between the first crop of next-gen games and the previous generation’s latest and greatest. Maybe I’ve been out of the gaming scene too long?
Or… maybe we’ve reached the point of diminishing returns when it comes to game graphics—the point where even substantial increases in processing power yield less and less dramatic improvements.
This wouldn’t necessarily be bad news! I’d love to see gamemakers focus less on whiz-bang visuals and more on “under the hood” features. The games already look real. Now, make them feel real; dedicate that extra horsepower to better artificial intelligence, more responsive surroundings, more open-ended gameplay, and dynamically-generated storylines.
- What if the bad guys learned your battle strategies? They “notice” that you tend to hide in A/C ducts, so they start tossing grenades into every vent.
- Or what if you could destroy any in-game building, wall by wall, and watch the rubble crumble?
- Or what if you could infiltrate the enemy’s headquarters any way you wished: parachute onto the roof, skulk in through the sewers, zip-line in from an adjacent skyscraper, or don a disguise and saunter through the front door?
- Finally, what if the storyline was truly unscripted, and the game could spontaneously generate new character dialogue, responding to your decisions?
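The first idea, at least, doesn’t require exotic AI. Here’s a toy sketch (hypothetical names, borrowed from no real game) of an enemy that tallies the player’s habits and changes tactics once a pattern emerges:

```python
from collections import Counter

class AdaptiveGuard:
    """Toy sketch of an enemy that adapts to the player's habits: it
    tallies where the player has been caught hiding, and once one spot
    becomes a clear pattern, it starts searching there first."""

    def __init__(self, pattern_threshold=3):
        self.sightings = Counter()
        self.threshold = pattern_threshold

    def observe_player_hiding(self, spot):
        """Record a place the guard has caught the player hiding."""
        self.sightings[spot] += 1

    def next_search_spot(self, default="patrol route"):
        """Search the player's favorite hiding spot if it has become a
        habit; otherwise fall back to the normal patrol."""
        if not self.sightings:
            return default
        spot, count = self.sightings.most_common(1)[0]
        return spot if count >= self.threshold else default
```

Catch the player in the A/C ducts three times, and the guard starts checking the ducts—grenade in hand—before resuming patrol. Scale that simple bookkeeping up, and enemies stop feeling scripted.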
Up ’til now, games have felt static and rote; they pull the user along preset (if pretty) paths by invisible strings. But imagine if developers focused on features like these (instead of pixel-painting). Gamers would become creative agents instead of puppets. And that, more than piled-on polygons, would make games feel “next-gen”.
I’ve heard that it takes a few years for developers to really unlock a console’s potential. Thus, you might not see the best graphics the PS4 or Xbox One can produce for a while. But previous console releases (e.g. going from the PS2 to the PS3) boasted dramatic visual improvements over their predecessors; I just don’t see it this time. ↩
Apple pundits keep clamoring for Nintendo to make iOS games. The argument goes like this: Nintendo’s hardware business is circling the drain. To save itself from disaster, the Japanese gamemaker must adapt its many valuable franchises into kick-ass iPhone versions. After all, what developer wouldn’t want to leverage Apple’s thriving App Store to bolster its sagging quarterly results?
Let’s assume, for a moment, that Nintendo did release its most popular games on iOS. Imagine an iPhone version of Super Mario Brothers, or Zelda on the iPad. And let’s assume that the games prove hugely successful and send the gamemaker’s profits soaring. Why wouldn’t Nintendo be thrilled?
What if this isn’t (just) a profit deal? What if Nintendo has higher priorities than sheer earning potential? What if Nintendo has evaluated iOS as a gaming platform—and found it wanting?
For example, maybe Nintendo balks at the prospect of developing touchscreen-only control schemes. “Finger-paint” gaming works great for Angry Birds and Scrabble. But it fails miserably for intricate platformers like Mario and Metroid. It’s hard to envision Nintendo’s developers—so committed to quality gameplay—plastering a D-pad over their careful level design. Would the button-mashing battles of Super Smash Bros. work with no buttons? Until Apple (or some third-party partner) bundles a credible hardware controller with every iOS device, you’re asking Nintendo to compromise on user experience—to risk alienating their biggest fans.
Even if Nintendo were satisfied with the hardware, the iOS gaming ecosystem itself might turn them off. What if the microtransaction economy repulses them (as it should)? Nearly every top-grossing iOS game these days is “free to play”, demanding frequent in-app purchases to unlock the full game experience. What if Nintendo refuses to pervert its classic franchises in this way? What if they’d rather bow out gracefully than prey upon their users’ base, lizard-brain impulses? What if they’d rather go bankrupt than treat Mario as a glorified Skinner box?
No true geek wants Nintendo to operate in the red. Its loyal fans, grateful for decades of incredible games, are rooting for the gamemaker to stave off fiscal catastrophe. But the best companies—companies like Nintendo and like Apple—refuse to prioritize short-term profit margins over user experience. Sacrificing the experience for the margins is, in the long run, bad for business.
An exclusive iOS version of Mario would undoubtedly help Apple (it would permanently establish iOS as the definitive mobile gaming platform). But it’s a riskier bet for Nintendo, whose treasured franchises could quickly lose their cultural cachet.
Sports video games have come a long way, baby. Back in the day, you needed a Ph.D. in modern art just to identify what sport you were actually playing. This red block here blips its way to that green block, and… touchdown! Er, or is it… home run? Oh, sorry, no… uh, birdie? Not exactly lifelike. But fast-forward thirty years or so, and you get this:
I don’t own a console from the current generation, so screenshots like this one flabbergast me. Pop in a disc, and this is what your Xbox or PlayStation will pump out right now. And this from game systems that are already growing long in the tooth. In just a few decades, we’ve graduated from abstract bricks to cartoonish sprites to grotesque polygons to this.
Now, I’ll admit: we’re not at photorealism yet. The bodies lack grit. The collision detection wonks out. The player models wear their faces like death masks, devoid of expression. Sideline characters lurch and jerk like animatronic puppets. But we’re getting closer to true simulation; digital LeBron may seem unhuman—but you can tell it’s him. Virtual Tiger stares back with creepy, soulless eyes—but how’s that any different from the real thing? (Ha! J/K, Mr. Woods.)
So… Here’s my question: what happens when the final hurdles to realism are overcome? When video games can pump out enough pixels to render video indistinguishable from a TV broadcast? When programmers can compose algorithms that bestow faux personality on the players? When developers perfect the stadium details—the crowd’s chaotic swell, a jersey rippling in full sprint, spittle flying from coaches’ mouths? When 3D games can project Bill Cowher’s chin straight out into your living room?
Will fans eventually forsake real-world sports for virtual versions? Watching in HD already trumps sitting in the nosebleed section… wouldn’t playing the game be better still? Why settle for mere spectating when you could command life-like athletes to and fro across the pitch? Could video games eventually pose a competitive threat to the major leagues?
It sounds ridiculous. After all, how can “fake” simulations replicate the “real” human drama that sports offer? We fans relish the human-to-human storylines that make sports so fascinating. But aren’t those storylines already manufactured by the sports industrial complex? Twenty-four/seven ESPN coverage and sound bite journalism overhype and artificially inflate the drama. If it’s not real conflict in the first place, why couldn’t it be replicated by my game console?
So… what if simulated sports trumped their live counterparts? What would we lose? “Tradition!” some immediately reply. After all, yanks_slugger_158 hardly belongs in Cooperstown alongside Ruth and Mays. Others would fret about our children’s ballooning waistlines. Without toned sports heroes to adore and emulate, our kids would never go outside again. Instead, they’d park their ample posteriors before the idiot box, ’til their fingers grew too pudgy to push the buttons.
Fair enough, but virtual sports might gain us some things, too. We could assuage the national guilt about the lavish wastefulness that surrounds our professional leagues. No more disposable billion-dollar stadiums. No more ridiculous multi-million-dollar contracts for muscle-bound shlubs. We might even (what a concept!) pay our teachers and nurses a fair and proportionate wage instead!
There’s another guilt-inducing plague that virtual sports could alleviate. For decades now, our culture has condoned–even celebrated–a perverse sort of prostitution. For entertainment, we pay athletes to destroy themselves. Contact sports cripple their bodies and rot out their brains. See, as an example, several recent studies, which suggest that playing football reduces brains to slurry. The hockey rink, the soccer pitch, the football field: these are our Coliseums, the players our gladiators. Shouldn’t we demand that these brutal exhibitions stop?
And how better to stop them than to make virtual avatars suffer, rather than flesh-and-blood-and-brain humans? Let digital athletes brave the brain-shocking blows. Let simulated players pump up with poisonous performance enhancers! Problem solved.
At least until the artificial jocks become self-aware. But let’s not get ahead of ourselves.