
Daniel Kish sees more than you might expect, for a blind man. Like many individuals deprived of sight, he relies on his non-visual senses to perceive, map, and navigate the world. But people tend to find Kish's abilities rather remarkable. Reason being: Kish can echolocate. Yes, like a bat.

As a child, Kish taught himself to generate sharp clicking noises with his mouth, and to translate the sound reflected by surrounding objects into spatial information. Perhaps you've seen videos like this one, in which Kish uses his skills to navigate a new environment, describe the shape of a car, identify the architectural features of a distant building—even ride a bike:

Impressive as his abilities are, Kish insists he isn't special. "People who are blind have been using various forms of echolocation to varying degrees of efficiency for a very long time," he says. What's more, echolocation can be taught. As president of World Access For the Blind, one of Kish's missions is helping blind people learn to cook, travel, hike, run errands, and otherwise live their lives more independently—with sound. "But there’s never been any systematic look at how we echolocate, how it works, and how it might be used to best effect."

A study published Thursday in PLOS Computational Biology takes a big step toward answering these questions, by measuring the mouth-clicks of Kish and two other expert echolocators and converting those measurements into computer-generated signals.

Researchers led by Durham University psychologist Lore Thaler performed the study in what's known in acoustic circles as an anechoic chamber. The room features double walls, a heavy steel door, and an ample helping of sound-dampening materials like foam. To stand inside an anechoic chamber is to be sonically isolated from the outside world. To speak inside of one is to experience the uncanny effect of an environment practically devoid of echoes.

But to echolocate inside of one? I asked Kish what it was like, fully expecting him to describe it as a form of sensory deprivation. Wrong. Kish says that, to him, the space sounded like standing before a wire fence, in the middle of an infinitely vast field of grass.

This unique space allowed Thaler and her team to record and analyze thousands of mouth-clicks produced by Kish and the other expert echolocators. The team used tiny microphones—one at mouth level, with others surrounding the echolocators at 10-degree intervals, suspended at various heights from thin steel rods. Small microphones and rods were essential; the bigger the equipment, the more sound it would reflect, reducing the fidelity of the measurements.

Thaler's team began the study expecting the acoustic properties of mouth-clicks to vary between echolocators. But the noises they produced were very similar. Thaler characterizes them as bright (a pair of high-pitched frequencies at around 3 and 10 kilohertz) and brief. They tended to last just three milliseconds before tapering off into silence. Here's a looped recording of one of Kish's clicks:

The researchers also analyzed the spatial path that the sound waves traveled after leaving the echolocators' mouths. "You can think of it as an acoustic flashlight," Thaler says. "When you turn a flashlight on, the light distributes through space. A lot of it travels forward, but there's scattering to the left and right, as well." The beam patterns for clicks occupy space in a similar fashion—only with sound instead of light.

Thaler's team found that the beam pattern for the mouth-clicks roughly concentrated in a 60-degree cone, emanating from the echolocators' mouths—a narrower path than has been observed for speech. Thaler attributes that narrowness to the brightness of the click's pitch. Higher frequencies tend to be more directional than lower ones, which is why, if you've ever set up a surround sound system, you know that a subwoofer's placement is less important than that of a higher-frequency tweeter.

Thaler and her team used these measurements to create artificial clicks with acoustic properties similar to the real thing. Have a listen:

These synthetic clicks could be a boon for studies of human echolocation, which are often restricted by the availability of expert practitioners like Kish. "What we can do now is simulate echolocation, in the real world with speakers or in virtual environments, to develop hypotheses before testing them with human subjects," Thaler says. "We can create avatars, objects, and environments in space like you would in a video game, and model what the avatar hears." Preliminary studies like these could allow Thaler and other researchers to refine their hypotheses before inviting echolocation experts in to see how their models match the real thing.
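For a sense of what such a signal might look like in code, here is a minimal sketch that synthesizes a click-like sound from the properties described above: two components near 3 and 10 kilohertz that fade to silence within about 3 milliseconds. It is not the model published in the paper; the decay constant, relative amplitudes, and sample rate are assumptions chosen purely for illustration.

```python
# A rough illustration, not the study's published click model: a short,
# "bright" click built from two damped sine components near 3 kHz and
# 10 kHz, fading out within about 3 milliseconds.
import numpy as np
from scipy.io import wavfile

rate = 96_000                       # samples per second (assumed)
t = np.arange(0, 0.003, 1 / rate)   # roughly 3 ms of samples

# Two frequency components with a fast exponential decay (decay constant
# and relative amplitude are guesses for this sketch).
click = (np.sin(2 * np.pi * 3_000 * t) +
         0.5 * np.sin(2 * np.pi * 10_000 * t)) * np.exp(-t / 0.0006)

# Scale to 16-bit PCM and write a file you can loop in any audio player.
pcm = np.int16(click / np.max(np.abs(click)) * 32767)
wavfile.write("synthetic_click.wav", rate, pcm)
```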

These models won’t be perfect. To keep measurements consistent, Kish and the other echolocators had to keep still while inside the chamber. “But in the real world, they move their heads and vary the acoustic properties of their clicks, which can help them gain additional information about where things are in the world,” says Cynthia Moss, a neuroscientist at Johns Hopkins University whose lab studies the mechanisms of spatial perception. (Thaler says her team is currently analyzing a dynamic study, the results of which they hope to publish soon.)

Still, Moss says the study represents a valuable step toward understanding how humans echolocate, and perhaps even building devices that could make the skill more broadly achievable. Not everyone can click like Kish. “I’ve worked with a guy who used finger-snaps, but his hand would get tired really fast,” Moss says. Imagine being able to hand someone a device that emits a pitch-perfect signal—one that they could learn to use before, or perhaps instead of, mastering mouth-clicks.

I ask Kish what he thinks about a hypothetical device that could one day produce sounds like he does. He says it already exists. About a third of his students are unable or unwilling to produce clicks with their mouths. "But you pop a castanet in their hands and you get instant results," he says. "The sound they produce, it's like ear candy. It's uncanny how bright, clear, and consistent it is."

But Kish says he's all for more devices—and more research. "We know that these signals are critical to the echolocation process. Bats use them. Whales use them. Humans use them. It makes sense that those signals should be studied, understood, optimized." With the help of models like Thaler's, Kish might just get his wish.

Mixing technology and romance can create a dangerous cocktail. Few people know that better than we do here at WIRED. Not only do we report on the chaos, but as a tech-curious bunch, we’ve had more than our fair share of mishaps and dud apps deleted after just a few days. (Ever wondered what it’s like being on dating apps in San Francisco? “People think it’s interesting that I work for WIRED,” says editorial assistant Lydia Horne. “But sometimes they just want to pitch me their startup’s latest products.”) Still, we’ve persevered. And some of us have found ways to weave technology into our romantic lives in genuinely useful ways.

The key is to mostly steer clear of apps directly marketed at couples. No one really needs a new messaging service—or calendar, or photo album, or shared list generator—preloaded with extra-cutesy emojis.

As a throng of proud, self-described nerds, we’ve discovered that many of the most practical apps and devices for streamlining relationship matters are the same ones you use in your social life, and even at your office. Here are some of the unlikely relationship heroes WIRED staffers swear by.

The Business (Apps) of Love

Kimberly Chua, Senior Digital Producer: “I’ve been a bridesmaid in three weddings, and any method of communication was always woefully inadequate. To plan my own wedding, I’ve been using Slack for messaging and the spreadsheet app Airtable. Slack lets me organize conversations with my bridesmaids by topic: Here’s where we talk logistics, here’s where we talk about dresses—I even gave them a private channel where they can talk about me. Airtable lets you sort tasks into separate views so my fiancé and I could easily divide the work. It felt silly at first to use the tools I was using at work, but it became very obvious we needed a way to create as little stress as possible, and that this was it.”

Emily Dreyfuss, Senior Writer: “We used an Excel spreadsheet to name our son. We had 72 name choices and we couldn’t agree, so we put them all into a spreadsheet. My husband wrote an algorithm to let us do blind voting and narrow it down by half. The algorithm let us rank each name and got rid of the bottom ones, and it let us save one so that the other person couldn’t vote it out. It was all very complicated! When we were left with four names, we left technology behind, headed to the hospital with all four choices, and picked based on the baby’s face.”
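(For the curious, here's a toy sketch of how that kind of blind-voting elimination might look in code. The ranking and "save" rules below are guesses made for illustration, not the actual spreadsheet algorithm Dreyfuss describes.)

```python
# Toy version of a blind-voting elimination: drop roughly the bottom half
# of names by combined rank each round, but honor each person's one "save."
import random

def elimination_round(names, rank_a, rank_b, save_a, save_b):
    """Keep the better-ranked half of the names, plus each person's save."""
    ordered = sorted(names, key=lambda n: rank_a[n] + rank_b[n])  # lower = better
    keep = set(ordered[:len(ordered) // 2]) | {save_a, save_b}
    return [n for n in names if n in keep]

names = [f"name{i}" for i in range(1, 73)]          # 72 candidate names
rank_a = {n: r for r, n in enumerate(random.sample(names, len(names)))}
rank_b = {n: r for r, n in enumerate(random.sample(names, len(names)))}

while len(names) > 4:
    names = elimination_round(names, rank_a, rank_b,
                              save_a=random.choice(names),
                              save_b=random.choice(names))

print(names)   # the finalists to take to the hospital
```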

Joanna Pearlstein, Deputy Editor, Newsroom Standards: “I married a fellow geek, and as is evident to anyone who knows me, I love spreadsheets. For our wedding, we used Google Sheets to track costs, guests, seating arrangements, the dates on which thank-you notes were written and sent. Using Google allows me to make sure the spreadsheet is available on every computer, which is important."

Saraswati Rathod, Researcher: “I moved to San Francisco after the last lesbian bar closed down, so I didn’t really get to participate in that community until I found Meetup. On Meetup anyone with an interest can easily get a group of queer women together and basically take over a bar for the night. To me, these Meetup groups have become a staple of San Francisco’s queer community and it’s how I met the woman I’m currently dating—so it really is kind of like a digital-age bar that way.”

Logistics

Chua: “My fiancé does all the shopping because he does all the cooking, but we needed a place where we could easily keep one shopping list. The Google Home app makes it really easy, even though he’s on Android and I’m on iOS. It’s perfect: The second one of us opens the fridge and realizes we’re out of milk, they can just shout ‘Hey Google, add milk to the shopping list.’ I can even add last-minute things while he’s shopping. We never forget anything anymore.”

Emma Grey Ellis: “So this is a good example of a tech application that would be stalky in a casual relationship, but is supremely useful in a marriage. Neither my partner nor I are particularly good at keeping an eye on our texts while we’re moving about the city. To make things worse, neither of us has any chill. So to avoid hassling each other with texts like ‘Where are you? Did you die?’ (or, less dramatically, ‘Are you still at the store?’) while we’re driving, we constantly share our locations through Google Maps.”

Anonymous: “There is an app called Private Photo Vault for those things you don’t want uploading to the cloud. It’s password-protected, and more importantly, as soon as you upload an image to the vault, it disappears from your Photos app. That way you don’t have to worry about anyone running into something racy (or just weird) if they scroll too far.”

Rebecca Heilweil, Reporting Fellow: “I’m in an old relationship that’s new to long distance. Besides always existing in each other’s lives through text messages, we’ve been prescheduling surprise meals for each other with Caviar. The trick is to put the other person’s phone number in the delivery contact information, and to not ask what they want first.”

Alex Baker-Whitcomb, Manager of Audience Development: “When we were at a distance and feeling terrible that we couldn’t celebrate together, we’d send each other bottles of alcohol with Saucey. It was the perfect way to say ‘Sorry I can’t be there with you, but here’s some champs.’ ”

Peter Rubin, Platforms Editor: “Beyond the obvious three words—Kindle Shared Library—my wife and I carved out a little piece of the cloud for ourselves. We got a home server, and set up various devices to auto-backup. Now, if our phones or Dropboxes fill up with photos or other docs, we just keep the essentials and nuke the rest, knowing they're nestled in the sweet embrace of our NAS.”


On the last Monday of September, 32 field workers stepped onto a 15-acre experimental plot in an undisclosed part of Washington and made apple harvest history. The fruits they plucked from each tree were only a few months old. But they were two decades and millions of dollars in the making. And when they landed, pre-sliced and bagged, on grocery store shelves earlier this month, they became the first genetically modified apples to go on sale in the United States.

The Arctic Apple, as it’s known, is strategically missing an enzyme so it doesn’t go brown when you take a bite and leave it sitting out on the counter. It’s one of the first foods engineered to appeal directly to the senses, rather than a farmer’s bottom line. And in a bid to attract consumers, it’s unapologetic about its alterations.

The apple has courted plenty of controversy to get where it is today—in about a hundred small supermarkets clustered around Oklahoma City. But now that it’s here, the question is, will consumers bite? Dozens of biotech companies with similar products in the pipeline, from small startups to agrochemical colossuses like Monsanto and DuPont, are watching, eager to find out if the Arctic Apple will be a bellwether for the next generation of GMOs, or just another science project skewered on the altar of public opinion.

Neal Carter bought his first apple orchard in 1995, up in the gently sloping valley of Summerland, British Columbia. When he started, the future president of Okanagan Specialty Fruits didn’t have grand plans for upending the industry. But in his first few seasons he was struck by just how many apples (and how much money) he had to throw away on account of browning from the routine bumps and jostles of transit and packaging. Most years it was around 40 percent of his crop.

When you cut an apple, or handle it roughly, its cells rupture, and compounds that had been neatly compartmentalized come in contact with each other. When that happens, an enzyme called polyphenol oxidase, or PPO, triggers a chemical reaction that produces brown-colored melanin within just a few minutes. Carter thought there had to be a way to breed or engineer around that. So when he came across Australian researchers already doing it in potatoes, he wasted no time in licensing their technology, a technique known as gene silencing. Rather than knocking out a gene, the idea is to hijack the RNA instructions it sends out to make a protein.

The problem, Carter found out later, was that apples were a lot more complicated, genetically speaking, than the potato. In taters, the browning enzyme was coded into a family of four sub-genes that were chemically very similar. All you had to do was silence the dominant one, and it would cross-react with the other three, taking them all down in one go. Apples, on the other hand, had four families of PPO genes, none of which reacted with the others. So Carter’s team had to design a system to target all of them at once—no simple task in the early aughts.

To do it, the Okanagan scientists inserted a four-sequence apple gene (one for each member of the apple PPO family) whose base pairs run in reverse orientation to the native copies. To make sure it got expressed, they also attached some promoter regions taken from a cauliflower virus. The transgene’s RNA binds to the natural PPO-coding RNA, and the double-stranded sequence is read as a mistake and destroyed by the cell’s surveillance system. The result is a 90 percent reduction in the PPO enzyme. And without it, the apples don’t go brown.
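To make the antisense idea concrete, here is a toy sketch of why a reverse-oriented copy pairs base-for-base with the native transcript. The sequence is invented for illustration; it is not an actual PPO gene fragment, and real gene silencing is of course far messier than a string operation.

```python
# Toy illustration of the antisense mechanism described above: a transcript
# that is the reverse complement of the target pairs base-for-base with the
# target's RNA, forming the double strand the cell then destroys.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the antisense strand for a given RNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

ppo_mrna = "AUGGCUUACCGAUUC"                 # hypothetical PPO-coding RNA snippet
antisense = reverse_complement(ppo_mrna)

# Aligning the sense strand with the antisense strand (read in reverse)
# shows that every base is matched, i.e. the RNA becomes double-stranded.
print(all(COMPLEMENT[s] == a for s, a in zip(ppo_mrna, reversed(antisense))))
```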

It took Okanagan years to perfect the technique, which was subject to regulatory scrutiny on account of the viral DNA needed to make it work. Today, with the arrival of gene editing technologies like Crispr/Cas9, turning genes on and off or adding new ones has become much more straightforward. Del Monte is already growing pink pineapples, Monsanto and Pioneer are working on antioxidant-boosted tomatoes and sweeter melons, J.R. Simplot has a potato that doesn’t produce cancer-causing chemicals when it’s fried. Smaller startups are busy engineering all kinds of other designer fruits and veggies. And it’s not obvious how exactly this new wave of gene-edited foods will be regulated.

Gene editing gets around most of the existing laws that give the Food and Drug Administration and the Department of Agriculture authority over biotech food crops. In January, the Obama administration proposed a rule change that would look more closely at gene-edited crops before automatically approving them. But earlier this month the USDA withdrew that proposed rule, citing science and biotech industry concerns that it would unnecessarily hinder research and development.

Carter, whose fruits were cleared by the USDA and the FDA in 2015, says his Arctic Apples are evidence the existing process works. But there were times when he wasn’t sure they were going to make it. “It took us close to 10 years, where we had the apples, we had the data, we kept submitting answers to questions, and then wouldn’t hear anything back,” says Carter. “It’s a bit of a black hole, and that whole time you’re not sure if you’re going to even be able to pay your electricity bills and keep your lights on.”


Talking to Carter, Okanagan still feels like a small family business, especially when he says the word “process” with that endearing, long Canadian “O”. This year’s Arctic Apple harvest amounted to 175,000 pounds—just a drop in the apple bucket. But shortly after its US regulatory approvals, his company was acquired by Intrexon Corporation, a multinational synthetic biology conglomerate that owns all the other big-name GMOs you might have heard of. Like Oxitec’s Zika-fighting mosquitoes, and the fast-growing AquAdvantage salmon.

That’s one reason customers might be wary of the Arctic Apple. Another is transparency. While Carter says they’re taking that literally—the bags have a plastic see-through window to view the not-brown slices for yourself—others say Okanagan hasn’t gone far enough in telling people how its apple was made. The letters G-M-O don’t appear anywhere on the bag. Instead, in accordance with a 2016 GMO food labeling law, there’s a QR code, which you can scan with a smartphone to get more information online.

Some consumer groups think that doesn’t go far enough, but scientists counter that those groups are focusing on the wrong things. “Breeding technologies are just a distraction from the big questions,” says Pam Ronald, who studies plant genetics at the University of California and who is married to an organic farmer. “Like, how do we produce enough food without using up all our soil and water? How do we reduce toxic inputs? Those are the grand challenges of agriculture today, that technology can help address.”

Ronald works on food crops designed to fight food insecurity in the developing world—like drought-resistant corn and vitamin-enriched rice. When she first heard of the Arctic Apple at a conference in 2015, she wasn’t that impressed. It’s not exactly a famine-fighter. But when Carter sent her a box of fruits a few weeks later, her kids had a different take. They brought them into school to show to their biology classes, and according to Ronald, their classmates just went wild. “Kids really hate brown apples, and it made me realize I don’t really like them either,” she says.

Living where food is abundant, most people don’t really grasp how GMOs touch their lives. “It’s that distance that consumers are removed from agriculture that creates the fear,” says Ronald. “But if you see a brown apple you’re probably aware that you throw it away, and maybe you feel guilty about that. Connecting biotechnology to something you can see and feel and taste like that could be transformational.”


According to Google's "Frightgeist" map of trending costumes, there's no way around it: it's gonna be a very Fortnite Halloween. From Miami up to Boston and Anchorage down to LA, the Epic Games title is dominating people's searches for holiday getups. (At least, mostly; New England seems to be really into unicorns and fairies, and Glendive, Montana, is inexplicably into "The ’50s" as a costume idea.) You, though, are a free thinker. Just because you can put on some cargo pants or an orange shirt and do the "Shoot" dance doesn't mean you want to look like every other yob out there getting pushed around in a shopping cart.

But at the same time, Halloween is on a Wednesday this year, which means that the parties start tonight—and you, my friend, have likely not gone further than Googling how to turn "The ’50s" into a costume. (The answer is Brylcreem and taffeta. However, the ’50s was not a better time. Please do not dress up like it.) You need a fix, and fast. What, you're going to drag out that Sexy Pizza Rat costume for yet another year? The madness ends, and it ends here. Thanks to the ceaseless churn of the internet, Fortnite is far from the only easy solution to life's sartorial conundra. So take a cue from us, and from current events and meme culture. Most of these costumes are already lurking in your home, just waiting to be thrown together hastily before you leave the house to dodge the shopping carts.

Spot Mini, the Boston Dynamics Robot

Of course robots aren't going to take over the world. So what if they can open doors? Or fight through human resistance to open more doors? Or do parkour? The only thing that would truly strike fear is if they could robo-twerk to Bruno Mars, which is obviously never gonna hap—sorry, what's that? Ah. I see. So then putting on knee-high black compression socks and a yellow tracksuit and doing a herky-jerky Running Man is less of a Halloween costume than a last-ditch effort to blend in. Got it. —Peter Rubin

The American Chopper Argument

This meme, derived from an episode of the reality show American Chopper in which Paul Teutul Sr. fired his son Paul Teutul Jr., is actually a great way to lay out the simple facets of a debate. (Just like Plato used to do!) As the argument escalates, each frame of the meme lays out a different point from each side of the debate. For this reason, it probably lends itself most readily to a duo or couple's costume. To execute it, one person needs to wear jeans, a black T-shirt with the sleeves cut off, and a grey handlebar moustache (also platinum hair and some fake arm tattoos, if you can swing it). The other person needs to wear jeans, a black T-shirt (with sleeves), and a black baseball cap. As for the debate itself, it's possible to pick a debate laid out in one of the many examples of the meme or create your own. Put the points of the debate on signs that you both can walk around with to act out the argument. Also, if you want to encourage interactivity, wear the costumes while carrying dry-erase boards so that your fellow celebrants can use you as props to lay out arguments of their own. —Angela Watercutter

Yanny—or Is It Laurel?

"Who are you supposed to be?"
"Larnef."
"Who?"
"Yannel."
"Wait, what?"
"Sorry, did you not understand me? Laruvel." Yes, simply by printing this out and hanging it around your neck (or just buying a t-shirt from its creator), you too can be as annoying as the many, many news stories about the senseless auditory illusion, none of which actually found the opera singer who recorded the snippet more than a decade ago. Well, none that weren't the one by our own Louise Matsakis, anyway. Nice one, Louise! —P.R.

The A Star Is Born Meme

No doubt Halloween parties this year will have more than a few variations on Lady Gaga and Bradley Cooper's characters from A Star Is Born. But the meme, that four-panel "I just wanted to take another look at you" bit, that one is not likely to be left in the shallow, either. So how do you do it? A couple of ways: You could make a couple T-shirts, like these superfans. If you have a spare car door lying around—or are handy enough to fashion one by hand—you could roll down the window, get a greasy mop like Jackson Maine's, and just walk around all night saying "hey!" and seeing who turns around to take a look at you. Bonus points if you can get a companion to play Ally/Gaga and find you charming all night long. —A.W.

An Absolute Unit

While the meme officially got its start in 2017, this spring is when the internet's love of majestic rotundity really took root. But here's the thing: you can't just be an absolute unit and expect people to know that's what you're doing. The key here is wordplay. So maybe carry a bottle of vodka in one hand and two plums in the other; now you're an Absolut eunuch! If you're heading somewhere with a lot of programmers, go as an Absolute Unix Terminal. (If you're not going somewhere with a lot of programmers, though, do not do this. Please.)

Gritty

OK, so full disclosure: you might actually need a lion mask to be able to pull off professional hockey's terrifying new mascot. And an Ernie mask to paste it on top of, just to give you the perfect slavering murder-rictus. Oh, and giant googly eyes. And a Philadelphia Flyers jersey. But you've got black sweatpants somewhere, right? I mean, who doesn't have black sweatpants? What, you've never dressed up like someone who skates around during commercial breaks and then lurks in the bowels of sports arenas, waiting to prey on innocents? I'm beginning to question whether you even want this. —P.R.

The "Him Too" Kid

You gotta hand it to Pieter Hanson. After his mom tweeted a (since-deleted) picture of him in his Navy uniform, saying he "won’t go on solo dates due to the current climate of false sexual accusations by radical feminists with an axe to grind” and adding the #HimToo hashtag, Hanson followed it up by creating his own Twitter account denouncing what his mom said, and #HimToo as well. This costume, then, is pretty easy. If you have Navy whites, complete with cap, just wear those. And maybe carry a sign with a big red slash through "#HimToo" for people who may not know the meme and just think you're a sailor. Also, be ready to strike Hanson's now-famous pose. —A.W.

Is This a Pigeon?

You may not recognize the name, but you've definitely seen the meme. Derived from a scene in the anime The Brave Fighter of Sun Fighbird in which a humanoid thinks a butterfly is a pigeon, this year it became the go-to way to demonstrate any person or group's misunderstanding of some concept or topic. To make this a costume, you'll need a pair of black-rimmed glasses, a short black haircut (or wig), a red button-up shirt and grey trenchcoat, a thick hardcover book, and a fake butterfly that you can attach to your shoulder with some kind of spring-y wire. As for how you interpret the meme, that's up to you. Either consult Know Your Meme for some well-known examples, or be a smarty-pants and come up with one on your own. Once you have it, render it in bold type and attach it to your face, chest, and butterfly in the appropriate places. Then, flutter away. —A.W.

Blunt-Hitting Elon Musk

Live in Canada? Or the District of Columbia, or one of the eight states that have fully legalized the recreational use of cannabis? Just grab an OCCUPY t-shirt and a pack of Backwoods, and you can take it from here. Just don't threaten to take Tesla private and drive, if you know what we're saying. —P.R.

Lime Scooter

Not all memes live on Twitter. Some of them live strewn in the bushes in front of that church down your block, or clustered more than 20 deep in front of your subway station. They're great memes, and they provide a real service to people, they're just a little … annoying sometimes. Why not be a little annoying yourself? All it takes is a QR code slapped on your forehead, a black t-shirt/white shorts/green sneakers combo, and a willingness to fall over in front of people at the worst possible moment. Or, if you've got a friend of high-school age, just give them piggyback rides all night—as long as those rides involve you sprinting into traffic or down crowded sidewalks, with no helmets involved whatsoever. "Last-mile solution" never sounded so spooky! —P.R.

Redpilled Kanye West

MAGA hat? Check. Trump photo? Check. Now just call the photo "dad," call for overturning the 13th Amendment, and watch everyone's face crumple with grief and pity. Don't worry, you're still a genius—it's just that we're trying to bring you down. Yeah, that's it. —P.R.

It was 2016, and Aneesh Chaganty was fumbling through the most important phone call of his barely-begun career. The young filmmaker had been given 15 minutes to convince actor John Cho to star in Search, a mystery about a father trying to track down his missing teenage daughter. The characters’ ordeal—and their entire relationship—would be told via a series of screens, as its hero uses everything from Facebook to FaceTime to Reddit to solve his kid’s disappearance.

Other films have taken the same web-centered approach, like 2015’s horror hit Unfriended, but Chaganty wanted to do something different: “The Memento of screen movies,” he says. For Cho, however, the concept didn’t click. “It was the first time I’d spoken to a celebrity in my life, and I completely botched the call,” says Chaganty. “I didn’t tell him we were trying to do something new. His hesitation was that this wouldn't be a movie movie—that it would just be a YouTube video.”

Chaganty, 27, didn’t give up. He still had Cho’s number, so he decided to text him, to see if they could get some actual face-time together. The two eventually met for coffee in Los Angeles, no longer separated by a device. “He sat down, and I stood up and just pitched my ass off,” Chaganty says. He wound up selling Cho on the film, and a year and a half later, Search was screening at the Sundance Film Festival, where it would win multiple awards—and be scooped up for $5 million by Sony’s Screen Gems division. Newly retitled Searching, the movie opens today in several cities, following a highly promising limited-release opening weekend.


Co-written with Sev Ohanian—who produced Fruitvale Station, Ryan Coogler’s breakout debut—Searching is the latest film to deftly employ small-screen habits to tell a big-screen story: This year alone, there’s been the speedy text-message chain that kicks off Crazy Rich Asians; the soul-baring YouTube videos of Eighth Grade; and the parent-baffling emoji-missives of Blockers. It’s a marked change from the countless goofy interfaces and Finder-Spyder searches that dominated movies and TV shows starting in the ‘90s. Now a new generation of filmmakers, many of whom came of age in the digital era, are finding ways to ensure the online experience is presented as IRL-ish as possible.

“Ever since [1998’s] You've Got Mail, people have been trying to figure out how technology falls into a story,” says Chaganty. “House of Cards was one of the first shows to have text messages pop up, and that was revolutionary in its own time. But I think the success stories we’re seeing now is because people are saying, ‘How can we keep it accurate and realistic, and still serve the tone of the movie?’”

In Searching, Cho plays David Kim, a recent widower who’s constantly in touch with his only daughter, Margot (Michelle La). When Margot doesn’t come home after school one day, David begins scouring her online life—Venmo transactions, Facebook friends, livestream archives, even old Tumblr posts—in an attempt to figure out where she went. He also enlists the support of a deeply concerned detective (Will & Grace’s Debra Messing), whom he communicates with largely by FaceTime.

Much of the early part of the film plays out on David’s computer, including an opening montage that condenses the first 16 years of his daughter’s life, as well as the final years of his wife’s cancer battle, into a series of clickable videos, calendar events, and email exchanges. It’s a niftily constructed and unexpectedly moving sequence, one that serves as a reminder of just how much of our existence plays out in front of us on-screen.

Chaganty began thinking about filmmaking when he was 8 years old, having seen a photo of writer-director M. Night Shyamalan in the newspaper India West. “I vividly remember thinking, ‘He looks like me. I want to do that,’” Chaganty wrote in a letter he posted to Twitter last week. As a middle-school student growing up in San Jose, California, he began using hand-me-down, slightly outdated cameras he’d been gifted by his parents, two movie-loving software entrepreneurs. Chaganty made home movies with titles like The Shed and The Attic, inspired in part by the no-budget short films Shyamalan had made as a kid. “I thought, ‘Oh, he made movies using the means I have now—which is no means,’” says Chaganty. “Those really got me into filmmaking.”

He and Ohanian, 31, first met at University of Southern California’s School of Cinematic Arts. Afterward, the two collaborated on Seeds, a touching short-form travelogue shot using Google Glass. Chaganty would end up working in the company’s Creative Labs division for two years until quitting his job to pursue Searching. “I called my dad up after I started at Google, and told him about my day,” says Chaganty. “He was like, ‘Awesome. But remember: You’re coming back here to make movies.’ They’ve been very much, ‘Don’t chase a paycheck, chase your dream’ since I was growing up.”

While writing Searching, Chaganty and Ohanian took in as many crime-culture artifacts as possible, from the big-screen version of Gone Girl to Netflix’s Making a Murderer to episodes of Serial. Because the movie takes place on screen, the two wrote what they called “a scriptment”—essentially a 50-page outline featuring dialogue and action descriptions, but downplaying the technical specifics.

“Early on, we realized this couldn't be a script that said, ‘INT. — GOOGLE CHROME — FACEBOOK — TAGGED PHOTOS — NIGHT,’” says Chaganty. “If you're trying to convince actors to be in a movie, that’s the worst way to do it.” And while the film itself was shot in less than two weeks, Searching’s editors spent months putting together David Kim’s on-screen world: “Every single asset you see—whether it's a line of text on a text message, or an email window—had to be created from scratch.”

The glut of online ephemera in Searching means the filmmakers were able to spread hidden in-jokes, clues, and messages throughout the film—including a nod to the man who unknowingly helped launch Chaganty’s career nearly two decades ago. “There's a moment where we log onto Facebook,” he says, “and a news item that says, ‘M. Night Shyamalan: Filmmaker agrees to meet with super-fan director after director’s surprise cameo in film.’ Hopefully, someone will send him a screenshot, or tell him, ‘Go watch Searching.’” If so, it would give Chaganty a near-perfect twist ending of his own.

One of the first things you notice about videos of calving glaciers is the utter absence of scale. The craggy chunks of ice that break away could be the size of football fields or cities or maybe even whole states—but without a point of reference it can be next to impossible to say. They are unintelligibly immense.

That perceptual effect happens in person, too. "There's no real way to determine its size just by looking at it," says New York University oceanographer David Holland, whose research team has spent a decade observing glacier behavior in Greenland. A distant, dislodged iceberg might look small at first glance, "but then you'll watch a helicopter fly towards it, and the helicopter will shrink and shrink and shrink and pretty soon it just disappears."

Which is why you probably can't tell that the newly born berg in this time-lapse video is in fact 4 miles wide, half a mile deep, and over a mile long. A sizable chunk of Greenland's Helheim glacier, it's roughly the size of lower Manhattan and weighs between 10 billion and 14 billion tons. When it dislodged from Helheim and crashed into the ocean on June 22, it accounted, in the span of just 30 minutes, for some 3 percent of the ice that Greenland is expected to contribute to the sea in 2018. Much of that contribution will come in dramatic, short-lived events such as this.

That's exactly why this video is so valuable to Holland, whose team is studying how calving glaciers could contribute to catastrophic sea-level rise across the globe. "Abrupt sea level change is only going to happen one way, and that's with some big part of western Antarctica becoming violently unstable due to calving—or not," he says. "If not, then there will not be major, abrupt sea level changes." And to model whether and how Antarctica might fall apart, you need to understand the rate and processes by which ice breaks off. Greenland's glaciers—including Helheim—serve as fabulous natural laboratories.

Glaciers often shed pieces of themselves, but only rarely do researchers capture large events on camera. In the course of his career, Holland has seen it happen just three times. (The largest calving ever filmed was shot during the production of the documentary Chasing Ice, on the 17th day of a glacier-watching stakeout.) "You can be out in the field for two weeks with your camera on and the glacier just sits there doing nothing," says Denise Holland, David's wife and the logistics coordinator for NYU’s Environmental Fluid Dynamics Laboratory and NYU Abu Dhabi’s Center for Global Sea Level Change. But that kind of documentation is essential to understanding—and modeling—how and why glaciers calve.

Consider the video above, which begins with a big so-called tabular iceberg breaking off from the main part of Helheim glacier. Almost immediately thereafter, a second type of iceberg, called a pinnacle iceberg, can be seen calving off toward camera-right. The tabular iceberg is built like a pancake: large, flat, relatively stable. But the pinnacle berg has an aspect ratio like a slice of bread. Tall and skinny, it wants to lie down, so as it separates from the glacier, bottom first, its feet shoot forward out from under it as it slides into the sea. Sheets of pinnacle icebergs proceed to rip off from the glacier in sequence, driving the tabular berg farther down the fjord and breaking it into smaller chunks. "It's like a house of cards: One piece falls off, and the rest of the pieces peel off one after the other," David says. "It's complete chaos."

That chaos can be difficult to model. Look closely and you'll notice that not all of the pinnacle icebergs in this video detach from the bottom first. Some separate from the top, reflecting a different type of structural failure. Different structural failures occur at different rates. If you don't know what those rates are, it's hard to say how accurate your models are.

"If you're going to project sea levels, you need to pass through the eye of the needle first and get the delivery of ice to the ocean correct—and that's not possible right now," David says. "It could be in the future, with more observation and more modeling, but this event had too much going on for anyone to responsibly say they could predict or understand what happened."

Until that future arrives, we'll have video like this one to remind us of the enormous complexity—and just plain enormity—of calving glaciers.

As we’ve taken our small-screen destiny into our own hands—skinny bundles, “over the top” content, a device-agnostic smorgasbord of streaming—our hands have become empty, idle. Channel surfing feels futile, if not obsolete. TV is no longer a remote-controlled menu to peruse as much as it’s a Tube Goldberg machine carrying our eyes from one diversion to the next. Choice is everywhere; agency, not so much.

Algorithms forever recommend what to watch. Autoplay functions cue up the next episode without waiting for your input. With nothing left to do but gaze and glaze, a viewer’s chief responsibility is to not fall asleep (lest you wake to find yourself five episodes into an unwitting binge of Hell’s Kitchen). It’s strange, then, given its role as the architect of programmatic passivity, that Netflix is handing back the reins via choose-your-own-adventure experiences it’s calling “interactive content.”

Starting in late 2017, Netflix piloted the idea in a handful of children’s shows, peppering installments of Puss in Boots and Buddy Thunderstruck with moments that asked viewers to pick a prompt: Should Puss kiss Dulcinea or shake her hand? Should Buddy and Darnell have a Wet Willie contest or work out and “get jacked”? The decisions gave you a glimmer of control, but Netflix’s latest ambitions lie more in a Sliding Doors or Clue direction: complex stories for grown-ups that reward their choices with starker consequences.


Netflix’s first concerted push into interactive TV, “Bandersnatch,” aired at the end of 2018. A standalone episode of dystopian sci-fi satire Black Mirror (of course), it told the tale of a videogame designer who tries to adapt a choose-your-own-adventure novel that drove its author insane (oh, of course). Not a fourth wall was left standing. The result, a time-bending existential thriller with terrifying overtones, was twisty and meta enough not to feel like a gimmick. But it’s difficult to imagine another, less shrewd show pulling off such structural contortions.

Not to say they won’t try. As Todd Yellin, Netflix’s vice president of product, told me before “Bandersnatch” premiered, “We’re starting to hear other stories. There’s a rich vein.” Corporate coquettishness aside, more experiences are in the offing—and judging by the company’s prodigious investments in anime, romantic comedy, and other genres, plenty of them.

Netflix knows the value of our choices well. We’re already being prompted to navigate narrative junctures; it’s called “personalization.” We watch shows, so we’re offered new shows. We watch those shows, then learn about still other shows. Each bump from one to the next unravels a Boolean knot, an if-then dance of demographics and precedent—who you are, what you’ve watched—that seeks to keep you right where you are rather than off discovering the charms of another streaming platform.

Interactive TV may support more insidious ends, though. We’re already on the cusp of relinquishing our subconscious to technology: VR headsets that track our gaze and see our pupils dilate; virtual assistants that read our mood; sneakers that can tell we’re getting tired because our running stride falters. These are reactions, not choices. They don’t have an opt-out feature. And while they might not seem it, our narrative choices add up to a near-biometric signature too, a portrait visible only in aggregate. Do we seek chaos? Play it safe? How long does it take us to select an option about breakfast cereal versus one where we can urge a character to commit suicide? Netflix already famously pores over every byte of viewer behavior data. Now the buttons we choose, the prompts we pick, the tastes they suggest could become part of that great graph that defines how the company sees us. Television in the age of psychographics.


Officially, Netflix sees the interactive option as a “lean in” alternative to the “lean back” nature of conventional TV. But what really changes, experientially? Choose-your-own-adventure storytelling is, at its root, curiosity dressed up as control. By the third time you’ve followed one of the paths in “Bandersnatch” to an arbitrary ending, the only reason to loop back to try another tributary is a completist’s sense of duty. (What’s a watercooler moment when everyone at the watercooler saw only a portion of what’s possible?) When the show finally ends, you feel respect for creator Charlie Brooker’s ingenuity, but you don’t come away feeling changed, as you might after a tightly written, sharply edited, well-constructed hour of television. The more malleable the story, the less cogent the experience.

Videogames, the only real analog for interactive storytelling, have always balanced the trade-off by choosing their illusion, giving players pockets of free will in a straitjacket. You may not affect the outcome in an adventure game like God of War or Red Dead Redemption 2—you’ll get there or you won’t—but navigating the challenges in the story offsets the determinism with a visceral sense of autonomy. (Multiplayer games like Overwatch and Fortnite do away with explicit narrative entirely, baking their lore into the background so as not to interfere with their compete-die-repeat Groundhog Day-ness.)

Netflix’s choose-your-own-adventure content will find its audience—first through novelty, then because creators will tease ever more fireworks out of the form. But interactive TV starts at a disadvantage: It is arriving just as we’ve learned, in so many ways, not to interact at all.


Peter Rubin (@provenself) wrote about the Tetris effect in issue 26.11.

This article appears in the February issue.

Who doesn't love a good slow motion video? The Slow Mo Guys—Gav and Dan—sure do! In this video of theirs, they use a high speed camera to capture the motion of four different bullets. And lucky for me, the motion looks to be perfect for a video analysis: They give both a reference scale (the black and white markers in the back) as well as the frame rate (100,000 frames per second).

Let's just jump right into an analysis. I will be using Tracker Video Analysis to get position and time data for each bullet after it leaves the weapon. The bullets are so small that it can be difficult to always see them—for all but the largest bullets, I can only mark the bullets when they are passing in front of the white backgrounds. Still, this should be enough for an analysis.

Now for the data. I marked all the bullet positions so you don't have to. Here's a plot of position vs. time for each one (you can also view the plotly version).

I'm pretty happy with this—however, there is a problem. During the video, Gav and Dan switched from the slow-mo view back to a commentary view because the 45 caliber bullet was taking too long. When they switched back to the slow motion view, their timing was off. You can see this in a graph of position vs. time for that bullet. Oh, you can also notice the missing data when the bullet passed in front of the black parts of the background.

But how far off is it? Let me first make the assumption that the bullet has a constant velocity in the horizontal direction. If this is the case, then a linear fit to the first part of the data gives a speed of 287.6 m/s. I should add that this speed would convert to 642 mph, which is faster than the speed listed on the video at 577 mph. Perhaps the displayed frame rate is different from the recorded frame rate? Maybe Gav and Dan could give me the answer here.
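Here's a quick sketch of that kind of fit, with made-up position-time samples standing in for the Tracker output (the method, not the numbers, is the point):

```python
# Sketch of the linear fit described above. The (t, x) samples are
# hypothetical stand-ins for the Tracker data; the slope of x vs. t is
# the bullet's horizontal speed.
import numpy as np

t = np.array([0.0005, 0.0015, 0.0025, 0.0035, 0.0045])   # seconds (hypothetical)
x = np.array([0.124, 0.412, 0.699, 0.987, 1.274])         # meters (hypothetical)

speed, intercept = np.polyfit(t, x, 1)     # fit x = speed * t + intercept
print(f"speed = {speed:.1f} m/s ({speed * 2.23694:.0f} mph)")
```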

Anyway, back to the data. From the linear fit, I get the following equation of motion for the bullet, where x₀ is the starting position from the fit: x(t) = x₀ + (287.6 m/s)t.

This equation of motion should give the position of the bullet for any time. The "jump" time is at 0.00825 seconds. The constant velocity equation says that the bullet should have a position of 1.795 meters, but the data from the video puts it at 1.886 meters. What about the reverse of this problem? If I know that the position is 1.886 meters, what time should it be? That's a pretty straightforward problem to solve (algebraically). You can do that for yourself as a homework assignment, but I get a time of 0.008567 seconds. So, they were "late" by 0.000317 seconds. But wait! That's how far they were off in "real" time, but the video was played back in slow motion. It was recorded at 100,000 fps—but I assume it was displayed at 30 fps. That means this short time interval corresponds to about 1 second of playback. That's the mistake.
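For anyone who wants to skip the homework, here is a rough version of that check in code, using the numbers quoted above and assuming a constant 287.6 m/s:

```python
# Rough check of the timing mismatch, using the numbers in the text and
# assuming a constant horizontal speed.
v = 287.6                  # m/s, speed from the linear fit
t_jump = 0.00825           # s, moment the footage jumps
x_expected = 1.795         # m, where the fit says the bullet should be
x_observed = 1.886         # m, where the video actually puts it

t_actual = t_jump + (x_observed - x_expected) / v    # invert x(t) around the jump
offset_real = t_actual - t_jump                      # roughly 0.0003 s of real time
offset_screen = offset_real * 100_000 / 30           # recorded fps / playback fps
print(f"{offset_real:.6f} s real time, about {offset_screen:.2f} s on screen")
```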

But that's just a cosmetic error, not really what I wanted to look at. Instead, I want to know if it's possible to estimate the amount of air resistance on these bullets as they leave the muzzle. I have to admit that air resistance on bullets can be pretty tricky. When these suckers are moving super fast, the simpler models for air resistance don't always work. But no matter what, an air drag force on a bullet should push in the opposite direction of the motion of the bullet and slow it down. So I will see if I can estimate the acceleration of the bullet during this short flight.

In one dimension, the acceleration is defined as the change in velocity divided by the change in time. That can be written as the following equation: a = Δv/Δt.

I just need to find the velocity at the beginning of the trajectory and then at the end. This will just be the slope of the position-time graph at these two points. Then I can divide the change in velocity by the time of flight for a rough approximation of the acceleration. Here's what I get (the short script after the list reproduces this arithmetic):

  • Barrett: v1 = 934 m/s, v2 = 854 m/s, Δt = 0.0051 sec, acceleration = 15,686 m/s². This seems very high.
  • AK-47: v1 = 752 m/s, v2 = 698 m/s, Δt = 0.0062 sec, acceleration = 8,710 m/s².
  • 45 cal: v1 = 246 m/s, v2 = 242 m/s, Δt = 0.012 sec, acceleration = 333 m/s².
  • 9 mm: v1 = 351 m/s, v2 = 330 m/s, Δt = 0.0105 sec, acceleration = 2,000 m/s².
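As promised, here is the short script behind those numbers. The speeds and flight times are the ones listed above; everything else is just the Δv/Δt arithmetic:

```python
# Reproducing the deceleration estimates above: change in speed divided by
# the time of flight, using the values read off the position-time fits.
bullets = {
    "Barrett": (934, 854, 0.0051),
    "AK-47":   (752, 698, 0.0062),
    "45 cal":  (246, 242, 0.012),
    "9 mm":    (351, 330, 0.0105),
}

for name, (v1, v2, dt) in bullets.items():
    a = (v1 - v2) / dt                 # magnitude of the deceleration
    print(f"{name}: {a:,.0f} m/s^2")
```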

Since these values for the acceleration seem super high, I am going to roughly estimate the acceleration using a basic model for air drag. Here is the equation I will use: F_air = ½ρACv², which divided by the bullet's mass m gives a drag acceleration of a = ρACv²/(2m).

In this expression, ρ is the density of air (about 1.2 kg/m³), A is the cross-sectional area of the bullet, and C is the drag coefficient. I can approximate the bullet size and mass from this Wikipedia page, and I will just use a drag coefficient of 0.295. With these values and the velocity right out of the barrel, I get an acceleration of 624 m/s². OK, that is high—but not quite as high as the measured acceleration. Still, I think the values from the video aren't super crazy. That bullet is moving really fast, and its interaction with the air will slow it down quite a bit—especially at first.
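Here is a rough version of that estimate in code. The bullet diameter and mass are ballpark figures for a 7.62x39 mm round, chosen as assumptions because they land near the quoted value; they are not numbers taken from the article or the video:

```python
# A rough drag estimate using the model above: a = rho * A * C * v^2 / (2 * m).
# Diameter and mass are assumed ballpark values for a 7.62x39 mm (AK-47) bullet.
import math

rho = 1.2          # kg/m^3, density of air
C = 0.295          # drag coefficient used above
v = 752.0          # m/s, AK-47 muzzle speed from the fit
d = 0.00792        # m, bullet diameter (assumed)
m = 0.0079         # kg, bullet mass (assumed)

A = math.pi * (d / 2) ** 2             # cross-sectional area
a = rho * A * C * v ** 2 / (2 * m)     # drag force divided by mass
print(f"{a:.0f} m/s^2")                # on the order of 600 m/s^2
```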

Of course ballistics physics can get pretty complicated, but that will never stop me from making some rough estimates.


Professor Christine Blasey Ford was a teenager when she says Supreme Court nominee Brett Kavanaugh tried to rape her. You know the story by now. She didn’t report it at the time, but has come forward now that Kavanaugh is close to being confirmed as a justice to the highest court in the land. On Friday morning, President Trump tweeted that he had “no doubt” that if it had happened, Blasey Ford would have reported it right away.

That’s not how this works. That’s not how any of this works. I know this because this is my story, too, and the story of millions of people. Don’t believe me? Look at Twitter today. Look at the hashtag #WhyIDidntReport. Read the cacophony of stories—each different but the same. Stories of assault by strangers, friends, family members, teachers. The hashtag exposes the sheer banality of rape in America. Sexual assault is not rare. It’s common. According to the National Crime Victimization Survey, there were 320,000 sexual assaults in the US in 2016. And 77 percent of people who experienced rape or sexual assault say they did not tell police.

That number is likely much higher. Though the NCVS data is the best the US has for now, critics have long warned that in addition to the underreporting risk that befalls all self-reported surveys, its methodology specifically discourages reporting. In a study from five years ago, the National Academy of Sciences found that the government’s survey was probably vastly undercounting sexual crimes, and that a separate survey devoted to sexual assault and rape would yield more accurate results.

Tweets are not a replacement for this data. But they can augment it. The stories told today give texture to the statistics that tell us this is common. Three hundred and twenty thousand—even if that number is low—is too big and abstract a number to really fathom. But the tweets shared this morning are real, and individual, and impossible to forget.

In an era of misinformation and bots on social media, when we have daily coverage of the pain that can be inflicted by social media, this hashtag is a reminder of how powerful these mediums can be in bringing people together. (Of course, it was also Twitter that the president used to share the tweet that so startled sexual assault survivors this morning.)

But it’s also worth remembering that a hashtag doesn’t tell the whole story of sexual assault in America. Not everyone is on Twitter, and many people aren’t comfortable sharing their stories—even vaguely—in such a public place. But for some, it’s a crucial outlet to validate our identities at a time when it feels like those in power would like us to be silent. Or invisible.

I say our, because I am included in this. When I read Trump’s tweet this morning, first I stopped breathing. When the most powerful person in the land denies your lived experience, it feels like someone punching you in the diaphragm.

When I breathed again, I paced the room, thinking about when I was a teenager, three years older than Ford at the time of her alleged assault. I was in college, and a boy I trusted date raped me in his room. I told a few friends and then didn’t mention it for years. I didn’t report it. I had a lot of reasons not to, but chief among them was: I didn’t think anyone would care. Why were you in his room, I thought they’d ask. I had previously reported a much less serious sexual assault—groping—in high school, and nothing had happened. Why go through the public embarrassment of that again? I didn’t even tell my family about it for 15 years.

This morning, I picked up my phone and tweeted about that incident. I wanted to speak directly to the president, or anyone reading his tweet and thinking it sounded right. Like the women and men who took to Twitter this morning, I wanted to declare: I exist, here is my story.

Reading through the tweets on the hashtag drives home the innumerable reasons people do not report these events. Chief among them is that they won’t be believed, and then they’ll be punished by whoever has an interest in protecting the status quo. Yet, the collectivism in a hashtag gives us all solidarity. Though it is at once the most public airing of our most personal story, it somehow feels less intimate to tweet about this kind of experience than to sit across the table from a family member or friend and tell them.

Why don’t people report? Here’s what some said.

I’m a man and it would make me seem weak.

It would ruin my career before it had even begun.

Nothing happened the first time I reported.

The person who raped me is the person I would have needed to report to.

They were a friend and I was in denial.

He told me he’d kill me if I told anyone.

Men are tweeting about how, for them, the stigma of coming out and reporting their sexual assault was too much to bear. That’s in line with research that’s been saying the same thing for years. People are sharing how they didn't report professors or bosses who had power over their professional lives. Or how they didn't report family members on whom they literally depended for everything. They’re tweeting about police officers and administrators whom they did tell, but who doubted and blamed them.

This hashtag has power. After I tweeted and later saw the trending hashtag, I felt like my story was a raindrop in a lake, at once singular but part of something bigger. I was grateful.

I was floored by what so many people have gone through, even while not being surprised. The specifics of their pain: “He held my face so I couldn’t breathe.” “He was stronger than me, and my cousin.” “I was 13.”

Every woman and many men I know have a story. Or many stories. In 2016, in the weeks after the Access Hollywood tape came out, I wrote a list of the sexual assault and harassment in my life that I could remember. It wasn’t exhaustive, but it was exhausting. It had never occurred to me to write them down before because that kind of experience is so much an accepted part of life for women. “After we are leered at and groped, we get off the train, and go to work, and we don’t mention it, because why would we? This is part of being a woman,” I wrote at the time. I assumed everyone knew.

But everyone doesn’t know. That’s what the #metoo movement, and the backlash to it, has taught us. And that’s why so many people are reliving their own assaults today to share their stories. It hurts to educate people about the ordinariness of sexual assault. It means having to think about something someone might not want to think about. It means remembering the reasons you felt stifled from sharing in the first place. For many of us, it means remembering how violated and embarrassed and guilty, and above all, alone we felt.

I hesitated to tweet this morning. Even though I’d already written about my experience and told my family, and even though I really don’t feel as traumatized by it as I used to, I worried it could in some way seem unprofessional to tell my story. But this thing that happened to me when I was 18: it’s a truth I carry inside me every day.

Even now, telling feels dangerous, despite the fact that the story being told is so universal, which is exactly the point. These are our stories to tell.

For the first time since launching the Curiosity rover in 2011, NASA is sending a spacecraft to the surface of Mars. Exciting! Surface missions are sexy missions: Everyone loves roving robots and panoramic imagery of other worlds. But the agency's latest interplanetary emissary won't be doing any traveling (it's a lander, not a rover). And while it might snap some pictures of dreamy Martian vistas, it's not the surface that it's targeting.

InSight—short for Interior Exploration using Seismic Investigations, Geodesy, and Heat Transport—will be the first mission to peer deep into Mars' interior, a sweeping geophysical investigation that will help scientists answer questions about the formation, evolution, and composition of the red planet and other rocky bodies in our solar system.

The mission is scheduled to launch sometime this month, with a window opening May 5. When the lander arrives at Mars on November 26 of this year, it will touch down a few degrees north of the equator in a broad, low-lying plain dubbed Elysium Planitia. The locale will afford InSight—a solar-powered, burrowing spacecraft—two major perks: maximum sun exposure and smooth, penetrable terrain. It is here that InSight will unfan its twin solar arrays, deploy its hardware, and settle in for two years of work.

Using a five-fingered grapple at the end of a 2.4-meter robotic arm, the lander will grab its research instruments from its deck (a horizontal surface affixed to the spacecraft itself), lift them into the air, and carefully place them onto the planet's surface. A camera attached to the arm and a second one closer to the ground will help InSight engineers scope out the lander's immediate surroundings and plan how to deploy its equipment.

"Have you ever played the claw game at arcades?” asks payload systems engineer Farah Alibay. “That's essentially what we're doing, millions of miles away." The process will require weeks to prepare, plan, and execute, and involve JPL's In-Situ Instrument Lab—a simulation facility in Pasadena, California where mission planners can practice maneuvering the lander before beaming instructions to Mars. But if the InSight team can pull it off, it will be the first time a robotic arm has been used to set down hardware on another planet.

InSight has two main instruments, the first of which is the Seismic Experiment for Interior Structure, or SEIS. An exquisitely sensitive suite of seismometers, SEIS is designed to detect the size, speed, and frequency of seismic waves produced by the shifting and cracking of the Red Planet's interior. You know: Marsquakes.

"It's as good as any of the Earth-based seismometers that we have," says InSight project manager Tom Hoffman; it can measure ground movements smaller than the width of a hydrogen atom. "If there happened to be a butterfly on Mars, and it landed very lightly on this seismometer, we'd actually be able to detect that," Hoffman says. Other things it could detect, besides Marsquakes, include liquid water, meteorite impacts, and plumes from active volcanoes.

For as sensitive as it is, SEIS is damn hardy. "Seismometer designs on Earth are meant to be delicately handled, placed down, and never touched again," says lead payload systems engineer Johnathan Grinblat. SEIS's journey to Mars will be a little more exciting, what with the rocket launch, atmospheric entry, descent, and landing. "It's going to vibrate and experience lots of shocks, so it has to be robust to that," Grinblat says.

It'll also need to withstand dramatic swings in temperature; temperatures at Mars' equatorial regions can reach 70° Fahrenheit on a sunny summer day, and plummet as low as -100° Fahrenheit at night. To make sure it does, InSight engineers matrioshka-d its instruments inside multiple layers of protection. The first is a vacuum-sealed titanium sphere, the second an insulating honeycomb structure. The third is a domed wind and thermal shield that will cover the sensors like a high-tech barbecue lid.

Those systems in place, InSight will reach for its second instrument, the Heat Flow and Physical Properties Probe. Also known as HP3, the 18-inch probe is effectively a giant, self-driving nail. It will jackhammer itself some 16 feet into Mars' soil—deep enough to be unaffected by temperature fluctuations on the planet's surface. "When scientists study temperature flow on Earth, they have to burrow even deeper," says Suzanne Smrekar, InSight's deputy principal investigator, because the moist soil conducts heat deep underground. “So Mars is actually pretty easy, relatively speaking.”

Tell that to the probe. Its descent through the Martian terrain will take weeks. As it burrows, it will pause periodically to measure how effectively the surrounding soil conducts heat. Temperature sensors will trail the probe on a tether, like thermometric beads on a string. Together, the temperature readings and conductivity measurements will tell InSight's scientists how much heat is emanating from the planet's insides—and that heat, or lack of it, will help tell researchers what the planet is made of, and how its composition compares to Earth's.
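
To make that last step concrete, here is a minimal sketch, in Python, of how a heat-flow figure falls out of those two measurements via Fourier's law (flux equals conductivity times the thermal gradient). The function and every number below are placeholders chosen for illustration; this is not InSight data or mission code.

```python
# Illustrative sketch: combining the two quantities HP3 measures, soil
# conductivity and temperature versus depth, into a heat-flow estimate.
# All names and values are hypothetical, not actual InSight readings.

def surface_heat_flow(conductivity_w_per_m_k: float,
                      temp_deep_k: float,
                      temp_shallow_k: float,
                      sensor_separation_m: float) -> float:
    """Fourier's law: outward flux q = k * dT/dz, where dT/dz is the
    geothermal gradient (kelvins of warming per meter of depth)."""
    geothermal_gradient = (temp_deep_k - temp_shallow_k) / sensor_separation_m
    return conductivity_w_per_m_k * geothermal_gradient  # watts per square meter

# Hypothetical readings: two tether sensors one meter apart, the deeper one
# half a degree warmer, in soil that conducts 0.04 watts per meter-kelvin.
q = surface_heat_flow(0.04, temp_deep_k=210.5, temp_shallow_k=210.0,
                      sensor_separation_m=1.0)
print(f"Estimated heat flow: {q * 1000:.0f} mW per square meter")  # prints 20
```

The real analysis is far more involved, with corrections for surface temperature cycles among other things, but the core relationship is just that product of a gradient and a conductivity.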

But before InSight takes Mars' temperature and senses for quakes, it'll have to launch, brave the desolate wilds of interplanetary space, and land. Exciting? Unquestionably. But also: "Everything about going to Mars is terrifying," Alibay says. "We're launching on a rocket that is a barely controlled bomb. We're going through six months of vacuum, being bombarded by solar energetic particles. We're going to a planet that we have to target, because if we miss it, we can't just turn around. And we have to land. And once we're on the surface, doing the deployments, any number of things could go wrong."

Alibay's not a pessimist. She's an engineer; anticipating misfires and miscalculations, she says, is part of the job description. Plus, she knows her history: Fewer than 50 percent of Mars missions succeed. "Not because we don't know what we're doing," she says, "but because it's really hard."

Not that that should ever prevent NASA from trying. After all: We do not go to space because it is easy.

More Mars

  • Check out the clean room where NASA prepared InSight for launch.

  • Researchers recently discovered clean water ice just below Mars' surface. InSight could detect even more.

  • Go behind the scenes as NASA tests the most powerful rocket ever, part of the agency's decades-spanning effort to send astronauts to explore asteroids, Mars, and beyond.

Related Video


NASA Discovers Evidence for Liquid Water on Mars

For years, scientists have known that Mars has ice. More elusive, though, is figuring out how much of that water is actually in liquid form. Now, NASA scientists have found compelling evidence that liquid water—life-giving, gloriously wet H2O—exists on Mars.