
Nick Goldschmidt has been lucky so far. A wildfire has burned more than 8,000 acres just north of his vineyards in Geyserville, California, but so far his vines are OK. So is his house in Healdsburg, roughly midway between Geyserville and a 36,000-acre fire that destroyed more than 2,800 homes in Santa Rosa.

But now, amid the charred, empty spaces that scar northern California’s winegrowing region, under skies yellowed by smoke, Goldschmidt has a race to win. Wildfires can ruin the flavor of wine grapes, a problem called smoke taint. “I’ve worked with smoke before,” Goldschmidt says. “It is not an easy thing to fix. But in my experience, it’s more about contact time. So the key thing is, if you have vineyards near the fire, you’ve got to get the grapes off.”

Depending on wind, smoke from the Atlas Fire could potentially reach Goldschmidt’s Napa vineyard, where about 15 percent of the fruit remains to be harvested. He now plans to harvest the rest by the weekend.

That’s typical of Napa, where 80 to 85 percent of the 2017 harvest is done. In nearby Sonoma, 90 percent of the grapes are in. But that still means that a few grapes could get exposed to smoke, and fire and heat could damage the vines. In a region key to California’s $34 billion wine industry—and that figure doesn’t even include the enormous tourist business—that’s a big deal. Fires have killed 31 people so far, destroyed thousands of homes, and consumed the efforts of more than 8,000 firefighters. And the winemakers of the area are trying to make sure the damage to their livelihoods doesn’t get worse.

Winemaking regions around the world, especially in Australia, have been dealing with the consequences of more active fire seasons near vineyards since at least the turn of the century—but the problems haven’t really hit California yet. The state’s frequent fires haven’t intersected with its vineyards. Until now.

Smoke is complicated stuff. Everyone in the Bay Area has gotten a taste in the past few days—that medicinal, ashy, burnt flavor comes from, among other ingredients, molecules called polycyclic aromatic hydrocarbons, nitrogen and sulfur oxides, other organic compounds, and even tiny particles carried aloft by heat and air currents. If you’ve ever sat near a campfire or cooked on a grill, you know it’s not necessarily an unpleasant aroma, as cognitively dissonant as that may feel when you realize it comes from blazes that have destroyed the lives of thousands.

But what’s delicious in bacon or lox generally isn’t—depending on how much you have—in wine. The actual flavor compounds are molecules called volatile phenols. “Volatile” means they evaporate, and in chemistry “phenols” are benzene rings (a hexagon of carbon atoms with hydrogen atoms sticking off like a snowflake) connected to an oxygen-and-hydrogen pair, a hydroxyl group. You might know them better as the aroma of peat in some whiskies or of antiseptic or Band-Aids. Their volatility means that in your mouth, they turn into a vapor that gets sucked up retronasally, through the back of the throat to the sensitive layer of nerve endings behind your nose that translates chemicals into odors.

Smoke taint in grapes has two specific markers: guaiacol and 4-methylguaiacol. They taste like, well, smoke. Expose grapes on the vine to them, and the wine will taste smoky. Obvious, right? Except no. “The mechanism is a little bit unclear,” says Kerry Wilkinson, an oenologist who studies smoke taint at the University of Adelaide. Leaves have pores called stomata involved in respiration, “but when grapevines are exposed to smoke, the stomata close almost immediately and photosynthesis stops,” she says. “The guaiacol conjugates are getting to not only the skins but the pulp of the fruit. I think it’s just permeation, but I don’t think anyone’s done the research.”

Making things even more complicated, grapevines have their own way of dealing with a barbecue. “Those compounds, once they’re taken up, the grapevine will stick one or more sugar molecules on them,” Wilkinson says. “We think that’s to make them less toxic to the plant.” This process—it’s called glycosylation, and the sugars are called glycosides—turns the volatile phenols non-volatile. Which means you can’t taste them in the grape juice.

But ferment that juice into wine, and acids in it will break those sugars off. Poof: Smoke gets in your wines.

I don’t mean to be flip here; smoke taint from the Canberra bushfires of 2003 cost Australian vineyards more than $4 million; fires in 2004 cost another $7 million. Once grapes are tainted, the wine isn’t easy to fix. Those ashy flavors are too strong; you can’t just try to blend in other, untainted wine to cover it up. Efforts to filter it with activated charcoal and reverse osmosis can filter out flavors you might want in the wine, too. Heck, guaiacol and 4-methylguaiacol are markers for oak-barrel aging, too. Nobody ever describes an over-oaked Chardonnay as “smoke-tainted,” but—well, maybe they should, actually. And some grape varietals—shiraz, particularly—already have naturally high guaiacol levels.

All of which might be fine. Most of the grapes were picked before the fires came. In general, “if the fruit’s already been harvested this year, it should be OK,” Wilkinson says. A couple dozen wineries have suffered damage so far, from minor to total. But Northern California has almost 250,000 acres of wine grape vines—more than 100,000 of them in Napa and Sonoma Counties as of 2016.

It looks like the grapes those vines produce will be OK next year. “There’s no carryover effect from one season to the other,” Wilkinson says. “We haven't seen any evidence to suggest that any of those smoke compounds are bound up from one season to another in the grapevine.” They get into the grapes, which come off, and the leaves, which fall off or get pruned. (And a little more luck: The grapes left to harvest in Napa are mostly Cabernet Sauvignon, which turns out to be more resistant to smoke taint than some other varietals.)

But vines themselves are sensitive to heat. “They can be scorched, and if it’s severe, that can permanently damage or kill grapevines,” Wilkinson says. “If there’s just a little bit of scorching, vines can recover, but the yield can be decreased in the season immediately after.”

It turns out it’s pretty hard to burn down a California vineyard. In part that’s because most of them are irrigated, so they’re wet and thus resistant to fire. Even when the cover crop growing between the vines gets burned in a fast-moving wildfire, “it’s pretty damn hard” to get a California vineyard to catch, Goldschmidt says.

When he was working in Chile, though, he saw a wind-driven wildfire very much like the ones affecting California destroy a vineyard. “That was devastating,” Goldschmidt says. “A lot of those vineyards are dry-farmed, so they burned much more easily.”

Heat damage is a lot like frost damage, something California vintners know a lot about. Proper pruning and treatment can save an injured vine. The trick is knowing if they’re injured, and how badly. Sometimes vintners will have to cut through the trunks of the vines to assess whether the phloem, the living and respiring part of the wood, is still healthy. But that’s a destructive test—sometimes destroying the vine in an attempt to save it.

So grape-growers look for other ways to assess their vines. “We look at things like, are the irrigation lines melted? Is there indication of scorching of the trunk and canopy? How much fire damage was there to anything growing in between the vines?” Wilkinson says.

That’s an assessment that’ll probably have to wait until the fires are under control. Maybe Goldschmidt’s luck will hold out. “This is my 29th vintage in Sonoma. It’s the first time the Alexander Valley was earlier in maturity than the Napa Valley. Usually it’s 10 days later,” he says. “If it had been the other way I would have really been hammered.”

Ordinarily I might end the story with that, but in this case it’s not as lucky as it sounds. Napa and Sonoma did indeed have a weird year. It rained hard after years of drought, and then over the summer it got really hot. Vintners irrigated when they might not have, which lowered the sugar levels in the grapes as they took up the water…and then it got hot again. “It’s been really hard to make a harvest decision based on sugar,” Goldschmidt says. “It’s been more about flavor and tannin.”

Based on those organoleptic assessments—the fanciest possible way of saying “how it tastes”—most of Napa and Sonoma brought in their fruit in July and August instead of, well, now.

To whom should the vintners send a thank-you? “Over the last five years or so we’ve had this period of very high temperatures that coincided with low precipitation, punctuated by very wet conditions,” says Noah Diffenbaugh, a climate researcher at Stanford. It’s exactly what Diffenbaugh’s group warned would happen in a prescient 2006 paper in the Proceedings of the National Academy of Sciences titled “Extreme heat reduces and shifts United States premium wine production in the 21st century.”

    Wildfires

  • Joe Eaton

    How Bizarre Is This Year’s Wildfire Season, Really?

  • Adam Rogers

    The Napa Fire Is a Perfectly Normal Apocalypse

  • Laura Mallonee

    Photo of the Week: Hell Descends on California's Wine Country

Their point? At first, heat and rare-but-extreme rain are going to change how winegrowing regions work. Eventually more northerly regions will be better for grapes—hello, Oregon’s Willamette Valley—and existing grape-growing regions will change the varieties they grow.

It’s in the nature of global warming that extreme climate events will become less rare. “We’ve done a lot of work trying to understand how global warming impacts temperature extremes,” Diffenbaugh says. “The extremes are really where we feel the climate.”

The vagaries of climate change let the 2017 vintage mostly dodge the economic devastation that smoke taint would have caused. It’s a faintly silver lining to the clouds of ash and smoke now parked over thousands of acres of death and destruction. But that silver lining won’t last. These won’t be the last fires; next time, maybe the harvest won’t happen first. The most frightening truth about the extreme climate event that is the northern California fires is that such events won’t always be extreme. They’ll be normal.

Related Video

Culture

Booze Science | Ice

Booze Science is better drinking through chemistry. WIRED articles editor Adam Rogers explores the scientific ways ice can influence a cocktail with Jennifer Colliau, beverage director at San Francisco's innovative bar The Interval at The Long Now.

Inside a red-brick building on the north side of Washington DC, internist Shantanu Nundy rushes from one examining room to the next, trying to see all 30 patients on his schedule. Most days, five of them will need to follow up with some kind of specialist. And odds are they never will. Year-long waits, hundred-mile drives, and huge out-of-pocket costs mean 90 percent of America’s most needy citizens can’t follow through on a specialist referral from their primary care doc.

But Nundy’s patients are different. They have access to something most people don’t: a digital braintrust of more than 6,000 doctors, with expert insights neatly collected, curated, and delivered back to Nundy through an artificial intelligence platform. The online system, known as the Human Diagnosis Project, allows primary care doctors to plug into a collective medical superintelligence, helping them order tests or prescribe medications they’d otherwise have to outsource. Which means most of the time, Nundy’s patients wait days, not months, to get answers and get on with their lives.

In the not-too-distant future, that could be the standard of care for all 30 million people currently uninsured or on Medicaid. On Thursday, Human Dx announced a partnership with seven of the country’s top medical institutions to scale up the project, aiming to recruit 100,000 specialists—and their expert assessments—in the next five years. Their goal: close the specialty care gap for 3 million Americans by 2022.

In January, a single mom in her 30s came to see Nundy about pain and joint stiffness in her hands. It had gotten so bad that she had to stop working as a housekeeper, and she was growing desperate. When Nundy pulled up her chart, he realized she had seen another doctor at his clinic a few months prior who referred her to a specialist. But once the patient realized she’d have to pay a few hundred dollars out of pocket for the visit, she didn’t go. Instead, she tried to get on a wait list at the public hospital, where she couldn’t navigate the paperwork—English wasn’t her first language.

Now, back where she started, Nundy examined the patient’s hands, which were angrily inflamed. He thought it was probably rheumatoid arthritis, but because the standard treatment can be pretty toxic, he was hesitant to prescribe drugs on his own. So he opened up the Human Dx portal and created a new case description: “35F with pain and joint stiffness in L/R hands x 6 months, suspected RA.” Then he uploaded a picture of her hands and sent out the query.

Within a few hours a few rheumatologists had weighed in, and by the next day they’d confirmed his diagnosis. They’d even suggested a few follow-up tests just to be sure and offered advice about a course of treatment. “I wouldn’t have had the expertise or confidence to be able to do that on my own,” he says.

Nundy joined Human Dx in 2015, after founder Jayanth Komarneni recruited him to pilot the platform’s core technologies. But the goal was always to go big. Komarneni likens the network to Wikipedia and Linux, but instead of contributors donating encyclopedia entries or code, they donate medical expertise. When a primary care doc gets a perplexing patient, they describe the patient’s background, medical history, and presenting symptoms—maybe adding an image of an X-ray, a photo of a rash, or an audio recording of lung sounds. Human Dx’s natural language processing algorithms will mine each case entry for keywords to funnel it to specialists who can create a list of likely diagnoses and recommend treatment.

Now, getting back 10 or 20 different doctors’ takes on a single patient is about as useful as having 20 friends respond individually via email to a potluck invitation. So Human Dx’s machine learning algorithms comb through all the responses to check them against all the project’s previously stored case reports. The network uses them to validate each specialist's finding, weight each one according to confidence level, and combine it with others into a single suggested diagnosis. And with every solved case, Human Dx gets a little bit smarter. “With other online tools if you help one patient you help one patient,” Komarneni says. “What’s different here is that the insights gained for one patient can help so many others. Instead of using AI to replace jobs or make things cheaper we’re using it to provide capacity where none exists.”
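To picture what that aggregation step does, here is a toy sketch in Python of a confidence-weighted vote over specialists' responses. It is purely illustrative, not Human Dx's actual proprietary algorithm, which also validates each response against its library of previously solved cases; the diagnoses and confidence numbers below are made up.

```python
from collections import defaultdict

# Toy illustration of combining specialists' responses with a
# confidence-weighted vote. Not Human Dx's real algorithm; the
# diagnoses and confidence values below are invented for the example.
responses = [
    ("rheumatoid arthritis", 0.9),
    ("rheumatoid arthritis", 0.7),
    ("psoriatic arthritis", 0.4),
]

def combine(responses):
    scores = defaultdict(float)
    for diagnosis, confidence in responses:
        scores[diagnosis] += confidence
    total = sum(scores.values())
    # Rank diagnoses by their share of the total weighted vote.
    return sorted(((d, s / total) for d, s in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

print(combine(responses))
# roughly: rheumatoid arthritis ~0.8, psoriatic arthritis ~0.2
```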

Komarneni estimates that those electronic consults can handle 35 to 40 percent of specialist visits, leaving more time for people who really need to get into the office. That’s based on other models implemented around the country at places such as San Francisco General Hospital, UCLA Health System, and Brigham and Women’s Hospital. SFGH’s eReferral system cut the average waiting time for an initial consult from 112 days to 49 within its first year.

That system, which is now the default for every SFGH specialty, relies on dedicated reviewers who get paid to respond to cases in a timely way. But Human Dx doesn’t have those financial incentives—its service is free. Now, though, by partnering with the American Board of Medical Specialties, Human Dx can offer continuing education and improvement credits to satisfy at least some of the 200 hours doctors are required to complete every four years. And the American Medical Association, the nation’s largest physician group, has committed to getting its members to volunteer, as well as supporting program integrity by verifying physicians on the platform.

    Related Stories

  • Nick Stockton

    Veritas Genetics Scoops Up an AI Company to Sort Out Its DNA

  • Megan Molteni

    Thanks to AI, Computers Can Now See Your Health Problems

  • Megan Molteni

    The Chatbot Therapist Will See You Now

It’s a big deal to have the AMA on board. Physicians have historically been wary of attempts to supplant or complement their jobs with AI-enabled tools. But it’s important to not mistake the organization’s participation in the alliance for a formal pro-artificial intelligence stance. The AMA doesn’t yet have an official AI policy, and it doesn’t endorse any specific companies, products, or technologies, including Human Dx’s proprietary algorithms. The medical AI field is still young, with plenty of potential for unintended consequences.

Like discrepancies in quality of care. Alice Chen, the chief medical officer for the San Francisco Health Network and co-director of SFGH’s Center for Innovation in Access and Quality, worries that something like Human Dx might create a two-tiered medical system, where some people get to actually see specialists and some people just get a computerized composite of specialist opinions. “This is the edge of medicine right now,” Chen says. “You just have to find the sweet spot where you can leverage expertise and experience beyond traditional channels and at the same time ensure quality care.”

Researchers at Johns Hopkins, Harvard, and UCSF have been assessing the platform for accuracy and recently submitted results for peer review. The next big hurdle is money. The project is currently one of eight organizations in contention for a $100 million John D. and Catherine T. MacArthur Foundation grant. If Human Dx wins, they’ll spend the money to roll out nationwide. The alliance isn’t contingent on the $100 million award, but it would certainly be a nice way to kickstart the process—especially with specialty visits accounting for more than half of all trips to the doctor’s office.

So it’s possible that the next time you go in for something that stumps your regular physician, instead of seeing a specialist across town, you’ll see five or 10 from around the country. All it takes is a few minutes over lunch or in an elevator to put on a Sherlock Holmes hat, hop into the cloud, and sleuth through your case.

Related Video

Technology

The Robot Will See You Now – AI and Health Care

Artificial intelligence is now detecting cancer and robots are doing nursing tasks. But are there risks to handing over elements of our health to machines, no matter how sophisticated?

Climate Change Is Killing Us Right Now


This story originally appeared on New Republic and is part of the Climate Desk collaboration.

A young, fit US soldier is marching in a Middle Eastern desert, under a blazing summer sun. He’s wearing insulated clothing and lugging more than 100 pounds of gear, and thus sweating profusely as his body attempts to regulate the heat. But it’s 108 degrees out and humid, too much for him to bear. The brain is one of the first organs affected by heat, so his judgment becomes impaired; he does not recognize the severity of his situation. Just as his organs begin to fail, he passes out. His internal temperature is in excess of 106 degrees when he dies.

An elderly woman with cardiovascular disease is sitting alone in her Chicago apartment on the second day of a massive heatwave. She has an air conditioner, but she’s on a fixed income and can’t afford to turn it on again—or maybe it broke and she can’t afford to fix it. Either way, she attempts to sleep through the heat again, and her core temperature rises. To cool off, her body’s response is to work the heart harder, pumping more blood to her skin. But the strain on her heart is too much; it triggers cardiac arrest, and she dies.

Such scenarios could surely happen today, if they haven’t already. But as the world warms due to climate change, they’ll become all too common in just a few decades—and that’s according to modest projections.

This is not meant to scare you quite like this month’s cover story in New York magazine, “The Uninhabitable Earth.” That story was both a sensation and quite literally sensational, attracting more than two million readers with its depiction of “where the planet is heading absent aggressive action.” In this future world, humans in many places won’t be able to adapt to rising temperatures. “In the jungles of Costa Rica, where humidity routinely tops 90 percent, simply moving around outside when it’s over 105 degrees Fahrenheit would be lethal. And the effect would be fast: Within a few hours, a human body would be cooked to death from both inside and out,” David Wallace-Wells writes. “[H]eat stress in New York City would exceed that of present-day Bahrain, one of the planet’s hottest spots, and the temperature in Bahrain ‘would induce hyperthermia in even sleeping humans.’”

These scenarios are supported by the science. “For heat waves, our options are now between bad or terrible,” Camilo Mora, a geography professor at University of Hawaii at Manoa, told CNN last month. Mora was the lead author of a recent study, published in the journal Nature, showing that deadly heat days are expected to increase across the world. Around 30 percent of the world’s population today is exposed to so-called “lethal heat” conditions for at least 20 days a year. If we don’t reduce fossil-fuel emissions, the percentage will skyrocket to 74 percent by the year 2100. Put another way, by the end of the century nearly three-quarters of the Earth’s population will face a high risk of dying from heat exposure for more than three weeks every year.

This is the worst-case scenario. Even the study’s best-case scenario—a drastic reduction in greenhouse gases across the world—shows that 48 percent of humanity will be exposed regularly to deadly heat by the year 2100. That’s because even small increases in temperature can have a devastating impact. A study published in Science Advances in June, for instance, found that an increase of less than one degree Fahrenheit in India between 1960 and 2009 increased the probability of mass heat-related deaths by nearly 150 percent.

And make no mistake: Temperatures are rising, in multiple ways. “We’ve got a new normal,” said Howard Frumkin, a professor at the School of Public Health at the University of Washington. “I think all of the studies of trends to date show that we’re having more extreme heat, and we’re having higher average temperatures. Superimposed on that, we’re seeing more short-term periods of extreme heat. Those are two different trends, and they’re both moving in the wrong direction.” Based on those trends, the US Global Change Research Program predicts “an increase of thousands to tens of thousands of premature heat-related deaths in the summer … each year as a result of climate change by the end of the century.” And that’s along with the deaths we’ve already seen: In 2015, Scientific American noted that nine out of the ten deadliest heat waves ever have occurred since 2000; together, they’ve killed 128,885 people.

In other words, to understand how global warming wreaks havoc on the human body, we don’t need to be transported to some imagined dystopia. Extreme heat isn’t a doomsday scenario but an existing, deadly phenomenon—and it’s getting worse by the day. The question is whether we’ll act and adapt, thereby saving countless lives.

There are two ways a human body can fail from heat. One is a direct heat stroke. “Your ability to cool yourself down through sweating isn’t infinite,” said Georges Benjamin, executive director of the American Public Health Association. “At some point, your body begins to heat up just like any other object. You go through a variety of problems. You become dehydrated. Your skin dries out. Your various organs begin to shut down. Your kidneys, your liver, your brain. As gross as this may sound, you in effect, cook.” (So maybe Wallace-Wells wasn’t being hyperbolic after all.)

Heat death can also happen due to a pre-existing condition, the fatal effects of which were triggered by high temperature. “Heat stress provokes huge amounts of cardiovascular strain,” said Matthew Cramer of the Institute of Exercise and Environmental Medicine. “For these people, it’s not necessarily that they’ve cooked, but the strain on their cardiovascular system has led to death.” This is much more common than death by heat stroke, but is harder to quantify since death certificates cite the explicit cause of death—“cardiac arrest,” for instance, rather than “heat-related cardiac arrest.”

In both scenarios, the body’s natural ability to cool itself off through sweating has either reached its capacity or has been compromised through illness, injury, or medication. There are many people who have reduced capacity for sweating, such as those who have suffered severe burns over large parts of their bodies. Cramer, who studies heat impacts on burned people, says 50,000 people suffer severe burn injuries per year in America, and the World Health Organization considers burns “a global public health problem,” with the majority of severe burn cases occurring in low- and middle-income countries.

Bodies that are battling illness or on medication may also struggle with heat regulation. Diuretics tend to dehydrate people; anticholinergics and antipsychotics reduce sweating and inhibit heat dissipation. An analysis of the 2003 heat wave in France that killed 15,000 people suggested that many of these deaths could have been avoided had people been made aware of the side effects of their drugs. As for illnesses, “Anything that impairs the respiratory or circulatory system will increase risk,” said Mike McGeehin, who spent 33 years as an environmental epidemiologist at the Centers for Disease Control and Prevention. “Obesity, diabetes, COPD, heart disease, and renal disease.” Kidney disease, mental illness, and multiple sclerosis. The list goes on and on.

This summer has presented many opportunities for bodies to break down from heat. Temperature records, some more than a century old, have been broken across California, Nevada, Utah, Idaho and Arizona. (Speaking of Arizona, it’s been so hot there that planes can’t fly.) And it’s not just America. Last month, Iran nearly set the world record for highest temperature ever recorded. The May heatwave that hit India and Pakistan set new world records as well, including what the New York Times called “potentially the hottest temperature ever recorded in Asia”: 129.2 degrees Fahrenheit. Worldwide, 2017 is widely expected to be the second-hottest year, after 2016, since we began keeping global average temperature records in 1880.

These trends have public health professionals concerned about how people are going to deal with the heat when it comes their way. “Clearly this is one of the most important problems we’re going to see from a public health perspective,” Benjamin said. “This is not a tomorrow problem. It’s a significant public health problem that we need to address today.”

    Related Stories

  • Eric Niiler

    Thanks, Climate Change: Heat Waves Will Keep on Grounding Planes

  • Adam Rogers

    The West Is on Fire. Blame the Housing Crisis

  • Nick Stockton

    How Climate Change Denial Threatens National Security

It’s a public health problem especially in cities, says Brian Stone, a professor at Georgia Tech’s City and Regional Planning Program. “Our fundamental work shows that larger cities are warming at twice the rate of the planet,” he said, describing a phenomenon known as urban heat islands, where built-up areas tend to be hotter than surrounding rural areas, mainly because plants have been replaced by heat-absorbing concrete. Global warming is making that phenomenon worse. “We’re really worried about the rate of how quickly we’re starting to see cities heat up,” Stone said.

According to Stone’s analysis, the most rapidly warming city is Louisville, Kentucky, followed by Phoenix, Arizona, and Atlanta, Georgia. But he’s less concerned about cities like Phoenix, which already have infrastructure to deal with brutally high temperatures, than he is about Chicago, Buffalo, and other cities in the northern United States that have really never had to deal with extreme heat. That is precisely why the Chicago heat wave of 1995 that killed 759 people was so deadly. According to the Chicago Tribune, the city was “caught off guard,” and had “a power grid that couldn’t meet demand and a lack of awareness on the perils of brutal heat.”

In other words, Stone and others say, excessive death rates are not always due to just extreme temperatures, but unusual temperatures. People are more likely to die when they are confronted with temperatures they don’t expect and thus aren’t prepared for. That’s why officials in cities not experiencing heat-related extremes need to improve emergency response systems, now. “Those people have got to start thinking in term of, ‘two years ago we had four hot days, the year after we had eight hot days,’” Benjamin said. “Public health systems should be put in place to respond to prolonged heat waves. Emergency cooling centers where people can go should be built. Identify where the people who are most socially isolated live.” Absent preventative action, heat-related deaths in New York City could quintuple by the year 2080, according to recent research.

Some cities have already started to prepare. Stone recently completed a heat adaptation study for Louisville that includes not only emergency management planning but also ways the city can prevent itself from getting so hot (by improving energy efficiency and installing green roofs, for instance). But as for now, he said, it’s rare to see a city actually adopt policies supportive of heat management. “We do see flooding adaptation plans—New York City has one, and New Orleans has one—but heat adaptation planning is a very new idea, in the US and really around the world,” he said. “It takes a lot to convince a mayor that a city can actually cool itself down. It’s not intuitive.”

The good news is that humans adapt to heat, both physiologically (through acclimatization) and socially (with air conditioning, for instance). That will continue, according to the US Global Change Research Program, which states with very high confidence that adaptation efforts in humans “will reduce the projected increase in deaths from heat.”

But there’s a limit to this. “There’s no way to adapt to heat that’s more than a certain amount,” Frumkin said. “And socially, there’s always going to be people we miss, who don’t have access to air conditioning.” McGeehin noted those people will likely be poor, elderly, and minority populations. “It’s a quintessential public health problem in that it impacts the most disenfranchised of our society. Young, healthy, middle-class people will largely be left alone,” he said.

Air conditioners also have limits, especially in cities where blackouts can occur. “It is inevitable,” Stone said, that large cities will see blackouts during future heat waves. “The number of blackouts we see year over year is increasing dramatically,” he said. “Whether that’s caused by the heatwave or just happens during the heatwave doesn’t really matter…. The likelihood of an extensive blackout during a heatwave is high, and getting higher as we add more devices and stressors to the grid.”

It’s a “cruel irony,” Frumkin said, that as the world gets hotter, we need more air conditioning, and thus consume more electricity. And if that electricity comes from fossil fuel sources, it will create more global warming, which in turn will increase the demand for air conditioning. The answer, he said, is to “decarbonize the electric grid.” But that’s easier said than done, especially when the Trump administration is devoted to increasing the use of fossil fuels to support the country’s electrical grid.

As with many other efforts to fight climate change, though, cities don’t need Washington’s help to take action on heat adaptation. “Cities can manage their own heat islands on their own, and that’s where we most need to be focused,” Stone said. But that will require convincing elected leaders that extreme heat is as big a threat as, say, rising seas—and one that can’t be addressed with something as obvious as a sea wall. That’s the challenge, says McGeehin: “Heat as a major natural disaster is mostly overlooked in this country.” It’s a quiet killer, and perhaps more lethal because of it.

Related Video

Science

How Climate Change Is Already Affecting Earth

Though the planet has only warmed by one degree Celsius since the Industrial Revolution, climate change's effect on Earth has been anything but subtle. Here are some of the most astonishing developments over the past few years.

As I understand it, the whole point of cooking a turkey is to take it from some temperature and bring it up to a higher temperature. Sure, maybe there's something about family togetherness in there, but really, Thanksgiving is all about thermal transfer. The USDA recommends a minimum internal temperature of 165°F (74°C). I guess this is the minimum temperature to kill all the bad stuff in there—or maybe it is the lowest temperature that it can be and still taste great.

Either way, if you want to increase the temperature of the turkey you need to add energy. Perhaps this energy comes from fire, or an oven or even from hot oil—but it needs energy. But be careful. There is a difference between energy and temperature. Let me give you an example.

Suppose you put some leftover pizza in the oven to heat it up. Since you don't want to make a mess, you just rip off a sheet of aluminum foil and put the pizza on that and then into the oven. The oven is set to 350 degrees Fahrenheit so that after 10 minutes, both the pizza and the foil are probably close to that temperature. Now for the demonstration. You can easily grab the aluminum foil without burning yourself, but you can't do the same to the pizza. Even though these two objects have the same temperature, they have different amounts of thermal energy.

The thermal energy in an object depends on the object's mass, the object's material and the object's temperature. The change in thermal energy for an object then depends on the change in temperature.
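ΔE = m · c · ΔT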

In this expression, m is the mass of the object and the variable c is the specific heat capacity. The specific heat capacity is a quantity that tells you how much energy it takes to raise the temperature of one gram of the object by 1 degree Celsius. The specific heat capacity of water is 4.18 Joules per gram per degree Celsius. For copper, the specific heat capacity is 0.385 J/g/°C (yes, water has a very high specific heat capacity).

But what about turkey? What is the energy needed to heat up 1 gram of turkey by 1°C? That is the question I want to answer. Oh sure, I could probably just do a quick search online for this answer, but that's no fun. Instead I want to calculate this myself.

    More on Turkey

  • Arielle Pardes

    Thanksgiving Hack: Cook Your Turkey Sous Vide

  • Maryn McKenna

    Why It's So Tough to Keep Antibiotics Out of Your Turkey

  • Jennifer Chaussee

    Here's the Real Reason Thanksgiving Makes You Sleepy

Here is the basic experimental setup. I am going to take a turkey breast (because I am too impatient to use the whole turkey) and put it in a known amount of hot water. I will then record the change in temperature of the water and the change in temperature of the turkey. Of course, this will have to be in an insulated container such that all of the energy that leaves the water will go into the turkey.

With the change in temperature of the water, I can calculate (based on the known specific heat capacity of water) the energy lost. Assuming all this energy goes into the turkey, I will then know the increase in energy of the turkey. With the mass and change in turkey temperature, I will have the specific heat capacity of a turkey.

Just to be clear, I can set the changes in energy to be opposite from each other and then solve for the specific heat capacity of the turkey. Like this.
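m_water · c_water · ΔT_water = −(m_turkey · c_turkey · ΔT_turkey)

Solving for the specific heat capacity of the turkey:

c_turkey = −(m_water · c_water · ΔT_water) / (m_turkey · ΔT_turkey)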

OK, it's experiment time. I am going to start with 2,000 mL (2 kilograms) of hot water and add it to a foam box with my turkey breast. I will monitor both the temperature of the water and the turkey. Oh, the turkey has a mass of 1.1 kilograms. Here's what this looks like (without the box lid).

I collected data for quite a while and I assumed that the water and the turkey would reach an equilibrium temperature—but I was wrong. Apparently it takes quite a significant amount of time for this turkey to heat up. Still, the data should be good enough for a calculation.

Hopefully it's clear that the red curve is the hot water and the blue is for the turkey. From this plot, the water had a change in temperature of -21.7°C and the turkey had +27°C. Plugging in these values, along with the masses of the water and the turkey, I get a turkey specific heat capacity of 6.018 J/g/°C. That's a little bit higher than what I was expecting—but at least it is in the ballpark of the value for water. But overall, I'm pretty happy.
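If you want to check the arithmetic yourself, here is a quick sketch of that calculation in Python, using the rounded values quoted above (so the result lands near, rather than exactly on, the number from my data):

```python
# Rough check of the calculation above, using the rounded values quoted
# in the text (water: 2,000 g with c = 4.18 J/g/°C and dT = -21.7°C;
# turkey: 1,100 g with dT = +27°C).
m_water, c_water, dT_water = 2000.0, 4.18, -21.7
m_turkey, dT_turkey = 1100.0, 27.0

# Energy lost by the water equals energy gained by the turkey:
#   m_w * c_w * dT_w = -(m_t * c_t * dT_t)
c_turkey = -(m_water * c_water * dT_water) / (m_turkey * dT_turkey)
print(f"specific heat of turkey = {c_turkey:.2f} J/g/°C")
# prints roughly 6.1 J/g/°C with these rounded inputs; same ballpark
# as the value quoted above
```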

But what can you do with the specific heat capacity for a turkey? What if you want to do a type of sous-vide cooking in which the turkey is placed in a vacuum-sealed bag and then added to water at a particular temperature? Normally, the water is kept at some constant temperature. But what if you want to start with hot water and cold turkey and then end up with perfect temperature turkey? In order to do this, you could calculate the starting mass and temperature of water that would give you the best ending turkey temperature. I will let you do this as a homework assignment.

Of course there is another way to cook a turkey. You could drop it from some great height such that it heats up when it lands. Oh, wait—I already did this calculation.

Related Video

Science

Food Myths: Does Turkey Make You Sleepy?

You finish that Thanksgiving feast and immediately all you want to do is sleep. Many people blame the turkey for their sudden comatose state, but that may not be 100% true.

Get Ready for a Schooling in Angular Momentum


It's almost always the last topic in the first semester of introductory physics—angular momentum. Best for last, or something? I've used this concept to describe everything from fidget spinners to standing double back flips to the movement of strange interstellar asteroids.

But really, what the heck is angular momentum?

Let me start with the following situation. Imagine that there are two balls in space connected by a spring. Why are there two balls in space? I don't know—just use your imagination.

Not only are these balls connected by a spring, but the red ball has a mass that is three times the mass of the yellow ball—just for fun. Now the two balls are pushed such that they move around each other—just like this.

Yes, this is a numerical calculation. If you want to take a look at the code and play with it yourself (and you should), here it is. If you want all the details about how to make something like this, take a look at this post on the three body problem.

When we see stuff like these rotating spring-balls, we think about what is conserved—what doesn't change. Momentum is a good example of a conserved quantity. We can define momentum as:
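p = m · v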

Let me just make a plot of the total momentum as a function of time for this spring-ball system. Since momentum is a vector, I will have to plot one component of the momentum—just for fun, I will choose the x-coordinate. Here's what I get.

In that plot, the red curve is the x-momentum of the red (heavier) ball and the blue curve is for the yellow ball (yellow doesn't show up in the graph very well). The black line is the total momentum. Notice that as one object increases in momentum, the other object decreases. Momentum is conserved. You could do the same thing in the y-direction or the z-direction, but I think you get the idea.

What about energy? I can calculate two types of energy for this system consisting of the balls and the spring. There is kinetic energy and there is a spring potential energy:
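E_kinetic = (1/2) m v²

E_spring = (1/2) k s²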

The kinetic energy depends on the mass (m) and velocity (v) of the objects, while the potential energy is related to the stiffness of the spring (k) and the stretch (s). Now I can plot the total energy of this system. Note that energy is a scalar quantity, so I don't have to plot just one component of it.

The black curve is again the total energy. Notice that it is constant. Energy is also conserved.

But is there another conserved quantity that could be calculated? Is the angular velocity conserved? Clearly it is not. As the balls come closer together, they seem to spin faster. How about a quick check, using a plot of the angular velocity as a function of time.

Nope: Clearly, this is not conserved. I could plot the angular velocity of each ball—but they would just have the same value and not add up to a constant.

OK, but there is something else that can be calculated that will perhaps be conserved. You guessed it: It's called the angular momentum. The angular momentum of a single particle depends on both the momentum of that particle and its vector location from some point. The angular momentum can be calculated as:
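L = r × p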

Although this seems like a simple expression, there is much to go over. First, the L vector represents the angular momentum—yes, it's a vector. Second, the r vector is a distance vector from some point to the object and finally the p vector represents the momentum (product of mass and velocity). But what about that "X"? That is the cross product operator. The cross product is an operation between two vectors that produces a vector result (because you can't use scalar multiplication between two vectors).

I don't want to go into a bunch of maths regarding the cross product, so instead I will just show it to you. Here is a quick python program showing two vectors (A and B) as well as A x B (you would say that as A cross B).

You can click and drag the yellow A vector around and see what happens to the resultant of A x B. Also, don't forget that you can always look at the code by clicking the "pencil" icon and then click the "play" to run it. Notice that A X B is always perpendicular to both A and B—thus this is always a three-dimensional problem. Oh, you can also rotate the vectors by using the right-click or ctrl-click and drag.

But now I can calculate (and plot) the total angular momentum of this ball-spring system. Actually, I can't plot the angular momentum since that's a vector. Instead I will plot the z-component of the angular momentum. Also, I need to pick a point about which to calculate the angular momentum. I will use the center of mass for the ball-spring system.
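Here is a minimal sketch of that bookkeeping in plain Python with numpy, separate from the embedded interactive code above; the masses, positions, and velocities are made-up example values, not the ones from the simulation. For each ball, take its position relative to the center of mass, cross it with its momentum, and add up the results.

```python
import numpy as np

# Made-up example values, not the ones from the spring-ball simulation.
masses = np.array([3.0, 1.0])                  # red ball is 3x the yellow ball
positions = np.array([[0.5, 0.0, 0.0],         # meters
                      [-1.5, 0.0, 0.0]])
velocities = np.array([[0.0, 0.4, 0.0],        # m/s
                       [0.0, -1.2, 0.0]])

# Position of the center of mass and momentum of each ball.
r_cm = (masses[:, None] * positions).sum(axis=0) / masses.sum()
momenta = masses[:, None] * velocities

# Angular momentum about the center of mass: sum of (r - r_cm) x p.
L_total = np.cross(positions - r_cm, momenta).sum(axis=0)
print("z-component of total angular momentum:", L_total[2])
```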

There are some important things to notice in this plot. First, both the balls have constant z-component of angular momentum so of course the total angular momentum is also constant. Second, the z-component of angular momentum is negative. This means the angular momentum vector is pointing in a direction that would appear to be into the screen (from your view).

So it appears that this quantity called angular momentum is indeed conserved. If you want, you can check that the angular momentum is also conserved in the x- and y-directions (it is).

But wait! you say. Maybe angular momentum is only conserved because I am calculating it with respect to the center of mass for the ball-spring system. OK, fine. Let's move this point to somewhere else such that the momentum vectors will be the same, but now the r-vectors for the two balls will be something different. Here's what I get for the z-component of angular momentum.

Now you can see that the z-component for the two balls both individually change, but the total angular momentum is constant. So angular momentum is still conserved. In the end, angular momentum is something that is conserved for situations that have no external torque like these spring balls. But why do we even need angular momentum? In this case, we really don't need it. It is quite simple to model the motion of the objects just using the momentum principle and forces (which is how I made the python model you see).

But what about something else? Take a look at this quick experiment. There is a rotating platform with another disk attached to a motor. What happens when the motor-disk starts to spin? Watch. (There's a YouTube version here.)

Again, angular momentum is conserved. As the motor disk starts to spin one way, the rest of the platform spins the other way such that the total angular momentum is constant (and zero in this case). For a situation like this, it would be pretty darn difficult to model this situation with just forces and momentum. Oh, you could indeed do it—but you would have to consider both the platform and the disk as many, many small masses each with different momentum vectors and position vectors. It would be pretty much impossible to explain with that method. However, by using angular momentum for these rigid objects, it's not such a bad physics problem.

In the end, angular momentum is yet another thing that we can calculate—and it turns out to be useful in quite a number of situations. If you can find some other quantity that is conserved in different situations, you will probably be famous. You can also name the quantity after yourself if that makes you happy.

Related Video

Science

Science of Sport: Gymnastics

Charlotte Drury, Maggie Nichols, and Aly Raisman talk to WIRED about the skill, precision, and control they employ when performing various gymnastics moves and when training for the Olympics.

On Monday night, residents of the Los Angeles neighborhoods of Westwood, Los Feliz, Silver Lake, and parts of the San Fernando Valley experienced a mild earthquake—a magnitude 3.6. Most people slept through the temblor and no damage was reported.

But a select group of 150 LA residents got a text alert on their mobile phone a full eight seconds before the quake hit at 11:10 pm—enough time for people to drop, cover, and hold on. Along with a pinned location of the quake's epicenter, the text gave its magnitude and intensity, the number of seconds left before the shaking, and instructions on what to do. The system detects an earthquake's up-and-down p-wave, which travels faster and precedes the destructive horizontal s-wave, and converts that signal into a broadcast warning.
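To get a rough sense of where those few seconds come from, here is a back-of-the-envelope sketch; it is illustrative only, and the wave speeds are assumed typical crustal values rather than numbers from the USGS.

```python
# Illustrative only: the gap between P-wave and S-wave arrival at a given
# distance from an earthquake. Speeds are assumed typical values, and real
# alerts also lose time to detection and delivery.
P_WAVE_KM_S = 6.0   # assumed average P-wave speed
S_WAVE_KM_S = 3.5   # assumed average S-wave speed

def warning_window_seconds(distance_km: float) -> float:
    """Seconds between P-wave and S-wave arrival at a site distance_km away."""
    return distance_km / S_WAVE_KM_S - distance_km / P_WAVE_KM_S

for km in (20, 50, 100):
    print(f"{km} km from the quake: about {warning_window_seconds(km):.0f} s between waves")
```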

Other parts of the world have similar systems—but accessible to a wider population. On Tuesday afternoon, Mexico City sirens blared a few seconds before a magnitude 7.1 earthquake struck the capital, flattening hundreds of buildings and killing at least 200 people. When an 8.1 magnitude quake hit on September 7 off the coast of Mexico, the SASMEX alert system, which collects data from sensors along Mexico’s western coast, gave residents more than a minute’s warning via sirens and even news reports on radio and TV. A complementary smartphone app is used by millions of Mexicans. And Japan also has a sophisticated earthquake text-alert system, giving tsunami and earthquake warnings to the entire nation.

So why is the US earthquake system stuck in beta mode with only a lucky few getting an earthquake heads-up? The LA residents received their early warning as part of a pilot study conducted by the US Geological Survey and Santa Monica-based Early Warning Labs. But experts say lack of money and bureaucratic inertia has stymied the USGS ShakeAlert warning system, despite a decade of promises and positive trial runs.

The USGS has only installed about 40 percent of the 1,675 sensors it needs to protect seismically vulnerable areas of the West Coast in Los Angeles, the San Francisco Bay Area, and Seattle, says Doug Given, who coordinates the ShakeAlert system at the USGS Pasadena office. “We still don’t have full funding,” says Given. “We are on a continuing resolution through December 8 and are operating at the level of last year’s budget.”

ShakeAlert costs a measly $16 million each year to build and operate, but the USGS has only been given $10 million each year. The Trump administration's proposed budget had zeroed-out the entire ShakeAlert program, but dozens of lawmakers from San Diego to Seattle protested. A House committee blocked the cuts in July, but the final budget document is still awaiting passage.

The promise of ShakeAlert—which goes beyond the smartphone app tested by those LA residents—has already been shown in many ways. The system gives automated early warnings to slow BART trains in the Bay Area and protect California oil and gas refinery operations. ShakeAlert will even automatically put NASA’s deep space telescope in Goldstone, California into a safe mode. A few luxury condo buildings in Marina del Rey, Calif., and Santa Monica College have also purchased a commercial version of the ShakeAlert warning, which piggybacks off the USGS sensors but offers a direct signal to the building that slows elevators inside.

But getting a widespread text alert system up and running for the millions of Californians (and Oregonians and Washingtonians) is a tougher sell. The engineers and scientists working on the project have to be confident there won’t be false alarms that would weaken the warning’s credibility.

    More on Earthquakes

  • Lizzie Wade

    Mexico City’s Earthquake Alert Worked. The Rest of the Country Wasn’t So Lucky

  • Nick Stockton

    Experts Answer Your Biggest Questions About Earthquakes

  • Sarah Zhang

    The Way We Measure Earthquakes Is Stupid

They are also dealing with a bottleneck from US phone companies who haven’t been able to embed the warning signal into existing wireless networks, according to Josh Bashioum, founder and principal investigator of Early Warning Labs. “Unfortunately, the way our telcos are set up, they aren’t fast enough to deliver an early warning,” Bashioum says.

The providers don't have the ability to send an automated text message to the millions of people living in Southern California, for example, that could also override all the other signals that phones are processing at the same time. These texts have to go out in the narrow window between the detection of the p-wave and the arrival of the potentially deadly s-wave, or they aren't any good. Then again, Japanese cell companies have figured it out.

The USGS and Bashioum have been meeting with the cell providers to push the effort, but Given expects it won’t happen for another three to five years. In the meantime, he hopes to at least get more seismic sensors in the ground so that scientists can alert first responders when a big quake hits. “The closer your [seismic] station is to the earthquake, the quicker you are going to recognize it, detect it, and send the alert,” Given says. “Given that we don’t know where the earthquake is going to occur, we have to have sensors all over the potential area of coverage.”

Sure, he could put a lot more sensors along the San Andreas fault, which has the highest odds of another quake. But that won't stop other quakes from hitting. For now, residents who live near seismic zones will have to make do with a real-time warning, and hope their building is up to code.

Related Video

Science

Cal Stadium Quake Retrofit

The rift under UC Berkeley's arena has been called a tectonic time bomb. Here's the university's $321 million retrofit plan.

Marathon wisdom told you it was too rainy, too slippery, and too warm for fast times at this morning’s Berlin Marathon, but Eliud Kipchoge refused to be overcome, either by the conditions or by his competitors. He won a race against perhaps the strongest field assembled in the past decade, even after a surprise attack by a debutant marathoner, Guye Adola, threatened to spoil his day. Kipchoge eventually missed the world record by 35 seconds, finishing in 2:03:32—a miraculous time in the circumstances. In both the fact and the manner of his victory, he has laid to rest any debate about who is the best marathon runner of this generation.

Berlin woke up in a cloud. In the forested Tiergarten, where the race starts, it was 57 degrees—significantly too hot for the fastest times—and the air was thick and moist. The official weather forecast said it was 99 percent humidity, but it’s hard to imagine how they missed that final one percent. The air was like soup. Humidity is a problem for elite athletes.

If the atmosphere was thick, so was the sense of expectation. As the three star athletes—Eliud Kipchoge, who ran 2:00:25 in Nike’s Breaking2 experiment earlier this year; Wilson Kipsang, the only man ever to win New York, London, and Berlin; and Kenenisa Bekele, world and Olympic record holder in 5,000 and 10,000 meters, and last year’s Berlin winner—warmed up in front of the start line, they betrayed their states of mind. Bekele looked tight with nerves as he stretched out his arms above his head, while Kipsang and Kipchoge ran some fast sprints and smiled easily to the crowd. Kipsang’s grin cracked briefly when the starter announced his rival, Kipchoge, as “the world’s best marathon runner.”

Thick Air and Slippery Turns

From the start, Kipchoge, wearing a white singlet, black half-tights, and red shoes, tucked in behind the three elite pacers, who had been asked to lead the fastest athletes to halfway in a previously unthinkable split time of 60 minutes and 50 seconds. The rain soon became intense, and it became obvious that nobody was going to run so fast for the first half. Simply turning a corner required care and concentration. Every time the lead pack did so, they slowed considerably. As the rain intensified, Gideon Kipketer, the rangy pacemaker (and Kipchoge’s training partner) screwed his face up into the weather.

The lead pack, which included not just the three big names but the Ethiopian debutant Adola and the Kenyan Vincent Kipruto, made halfway in 61:30, a second or two outside world record pace. In the conditions, it was an excellent split. The weather also started to lift a little, and Kipchoge looked increasingly comfortable.

Bekele, though, was dropped from the lead pack at halfway, unable to live with the pace. He did not finish the race. By 17 miles, only one pacemaker had survived—Sammy Kitwara. He dropped out at the 30-kilometer (18.6 mile) mark, and so—to everyone’s surprise—did Wilson Kipsang, clutching his stomach.

Almost everyone was suffering. Not only was the road slippery, but the athletes’ clothes were sticking to the skin, and—most importantly—all the runners would have found it hard to regulate their temperature. One of the limiting factors in marathon running is an athlete’s ability to dissipate the heat generated while synthesizing the energy needed to run so fast. Mostly, body heat is lost through sweating. But, the thicker and warmer the air, the harder that process becomes.

For the final seven and a half miles, it was Kipchoge, the master, versus Adola, the newcomer. Adola, who is taller and has a scruffier gait, seemed relaxed, and Kipchoge looked actively irritated by the close attention the Ethiopian was paying him. Kipchoge asked Adola more than once to move either in front of or behind him. Adola continued as he was, shoulder to shoulder with the senior man. As they jostled, the world record drifted away. At the 35-kilometer (21.7 mile) marker, Kipchoge was around six seconds outside world record pace. But, oddly, it was at this moment that Kipchoge began to smile. Battle was joined.

    More Racing

  • Ed Caesar

    The Blockbuster Showdown At This Year's Berlin Marathon

  • Nicholas Thompson

    Sex, Drugs, and the Inside Lane: Recapping the 2017 World Championships of Track

  • Ed Caesar

    The Epic Untold Story of Nike’s (Almost) Perfect Marathon

Race to the Finish

At around 23 miles, Adola attacked, opening a gap of 10 meters and moving to the other side of the road as if to accentuate the distance between him and Kipchoge. The Kenyan responded, and seemed to be reeling Adola in, but the Ethiopian pressed again. Even as the world record drifted toward impossibility, nobody who was watching the race cared. This was thrilling sport, a true duel. With two miles to go, Kipchoge seemed visibly to muster reserves of energy for a final attempt to break Adola, and at the final drinks station at 40 kilometers (24.8 miles), he caught him, and then blew past him.

Kipchoge finished with a kick. When he crested the line, he looked as happy as a lottery winner. He hugged his coach, Patrick Sang, and saluted the crowd. Sang is not normally given to hyperbole, but his pride, minutes after the race had ended, was uncontainable.

“In these terrible conditions, two-oh-three is amazing,” Sang told me. “There was the mental challenge, the physical challenge, the environmental challenge… He is one of the great runners.”

I’d go a step further. Eliud Kipchoge has never broken the world record, but I’ve now watched four races in which he was in shape to do so: the London Marathon of 2016, which he won in 2:03:05; the Rio Olympic Marathon, which he won in 2:08:44; the Breaking2 race at Monza, which he won in 2:00:25; and today’s Berlin Marathon. In each case, he would have ripped chunks out of the world record in perfect conditions. But he has either been running on a slow course or in slow conditions, and the title of world-record holder has eluded him. That’s marathon racing. In this sport, you have to be good and lucky.

Kipchoge may never break the world record now. The years, and the marathons, are piling up. He would never admit this, but it’s possible his chance has come and gone. In the final reckoning, it won’t matter. Nobody who watched Kipchoge win those four races could be in any doubt of his superiority. Today’s race was a reminder not just of his physical talents but of his mental fortitude. World record or no world record, he is the greatest.

When someone takes their own life, they leave behind an inheritance of unanswered questions. “Why did they do it?” “Why didn’t we see this coming?” “Why didn’t I help them sooner?” If suicidal intent were easy to spot from the outside, suicide wouldn’t be the public health curse it is today. In 2014 suicide rates surged to a 30-year high in the US, and suicide is now the second leading cause of death among young adults. But what if you could get inside someone’s head, to see when dark thoughts might turn to action?

That’s what scientists are now attempting to do with the help of brain scans and artificial intelligence. In a study published today in Nature Human Behaviour, researchers at Carnegie Mellon and the University of Pittsburgh analyzed how suicidal individuals think and feel differently about life and death by looking at patterns of how their brains light up in an fMRI machine. Then they trained a machine learning algorithm to isolate those signals—a frontal lobe flare at the mention of the word “death,” for example. The computational classifier was able to pick out the suicidal ideators with more than 90 percent accuracy. Furthermore, it was able to distinguish people who had actually attempted self-harm from those who had only thought about it.

Thing is, fMRI studies like this suffer from some well-known shortcomings. The study had a small sample size—34 subjects—so while the algorithm might excel at spotting particular blobs in this set of brains, it’s not obvious it would work as well in a broader population. Another dilemma that bedevils fMRI studies: Just because two things occur at the same time doesn’t prove one causes the other. And then there’s the whole taint of tautology to worry about; scientists decide certain parts of the brain do certain things, then when they observe a hand-picked set of triggers lighting them up, boom, confirmation.

In today’s study, the researchers started with 17 young adults between the ages of 18 and 30 who had recently reported suicidal ideation to their therapists. Then they recruited 17 neurotypical control participants and put each of the 34 subjects inside an fMRI scanner. While inside the tube, subjects saw a random series of 30 words. Ten were generally positive, 10 were generally negative, and 10 were specifically associated with death and suicide. The researchers asked the subjects to think about each word for three seconds as it showed up on a screen in front of them. “What does ‘trouble’ mean for you?” “What about ‘carefree,’ what’s the key concept there?” For each word, the researchers recorded the subjects' cerebral blood flow to find out which parts of their brains seemed to be at work.

Then they took those brain scans and fed them to a machine learning classifier. For each word, they told the algorithm which scans belonged to the suicidal ideators and which belonged to the control group, leaving one person out of the training set. Once it got good at telling the two apart, they gave it the left-out person. They did this for all 30 words, each time excluding a different subject. At the end, the classifier could reliably look at a scan and say whether or not that person had thought about killing themselves 91 percent of the time. To see how well it could more generally parse people, they then turned it on 21 additional suicidal ideators, who had been excluded from the main analyses because their brain scans had been too messy. Using the six most discriminating concepts—death, cruelty, trouble, carefree, good, and praise—the classifier spotted the ones who’d thought about suicide 87 percent of the time.
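
That hold-one-out routine is standard leave-one-out cross-validation. Here is a minimal sketch of the idea, with placeholder data and a simple Gaussian Naive Bayes model standing in for the study’s actual classifier:

```python
# Minimal sketch of leave-one-out cross-validation over subjects.
# The features, labels, and classifier are placeholders, not the paper's pipeline.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(34, 500))     # 34 subjects x 500 voxel features (placeholder data)
y = np.array([1] * 17 + [0] * 17)  # 1 = suicidal ideator, 0 = control

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = GaussianNB()                    # simple stand-in for the study's classifier
    clf.fit(X[train_idx], y[train_idx])   # train on everyone except the held-out subject
    correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])

print(f"leave-one-out accuracy: {correct / len(y):.2f}")
```

With only 34 subjects, leave-one-out squeezes the most out of a small sample, which is also why the result needs to be replicated in larger groups before anyone leans on that 91 percent figure.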

“The fact that it still performed well with noisier data tells us that the model is more broadly generalizable,” says Marcel Just, a psychologist at Carnegie Mellon and lead author on the paper. But he says the approach needs more testing to determine if it could successfully monitor or predict future suicide attempts. Comparing groups of individuals with and without suicide risk isn’t the same thing as holding up a brain scan and assigning its owner a likelihood of going through with it.

But that’s where this is all headed. Right now, the only way doctors can know if a patient is thinking of harming themselves is if they report it to a therapist, and many don’t. In a study of people who committed suicide either in the hospital or immediately following discharge, nearly 80 percent denied thinking about it to the last mental healthcare professional they saw. So there is a real need for better predictive tools. And a real opportunity for AI to fill that void. But probably not with fMRI data.

It’s just not practical. The scans can cost a few thousand dollars, and insurers only cover them if there is a valid clinical reason. That is, if a doctor thinks the only way to diagnose what’s wrong with you is to stick you in a giant magnet. While plenty of neuroscience papers make use of fMRI, in the clinic the imaging procedure is reserved for very rare cases; most hospitals aren’t equipped with the machinery, for that very reason. Which is why Just is planning to replicate the study—but with patients wearing electronic sensors on their heads while they're in the tube. Electroencephalograms, or EEGs, cost roughly one-hundredth as much as fMRI equipment. The idea is to tie predictive brain scan signals to corresponding EEG readouts, so that doctors can use the much cheaper test to identify high-risk patients.

Other scientists are already mining more accessible kinds of data to find telltale signatures of impending suicide. Researchers at Florida State and Vanderbilt recently trained a machine learning algorithm on 3,250 electronic medical records for people who had attempted suicide sometime in the last 20 years. It identifies people not by their brain activity patterns, but by things like age, sex, prescriptions, and medical history. And it correctly predicts future suicide attempts about 85 percent of the time.
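
To see how different that is from brain imaging, here is a hypothetical sketch in the same spirit: a gradient-boosted classifier over a handful of invented, record-derived features. The features, labels, and model choice are placeholders for illustration, not the Vanderbilt team’s actual pipeline.

```python
# Hypothetical records-based risk model: tabular features, no brain imaging.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 3250  # same order as the number of records in the study

# Invented record-derived features: age, sex, prescription count, prior self-harm code.
X = np.column_stack([
    rng.integers(18, 90, n),
    rng.integers(0, 2, n),
    rng.integers(0, 15, n),
    rng.integers(0, 2, n),
])
y = rng.integers(0, 2, n)  # 1 = later suicide attempt (placeholder labels)

model = GradientBoostingClassifier()
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", round(scores.mean(), 2))  # near 0.5 here, since the data are random
```

Everything such a model sees is already sitting in routine records; no scanner required.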

“As a practicing doctor, none of those things on their own might pop out to me, but the computer can spot which combinations of features are predictive of suicide risk,” says Colin Walsh, an internist and clinical informatician at Vanderbilt who’s working to turn the algorithm he helped develop into a monitoring tool that doctors and other healthcare professionals in Nashville can use to keep tabs on patients. “To actually get used, it’s got to revolve around data that’s already routinely collected. No new tests. No new imaging studies. We’re looking at medical records because that’s where so much medical care is already delivered.”

And others are mining data even further upstream. Public health researchers are poring over Google searches for evidence of upticks in suicidal ideation. Facebook is scanning users’ wall posts and live videos for combinations of words that suggest a risk of self-harm. The VA is currently piloting an app that passively picks up vocal cues that can signal depression and mood swings. Verily is looking for similar biomarkers in smart watches and blood draws. The goal for all these efforts is to reach people where they are—on the internet and social media—instead of waiting for them to walk through a hospital door or hop in an fMRI tube.

Scientists have been using quantum theory for almost a century now, but embarrassingly they still don’t know what it means. An informal poll taken at a 2011 conference on Quantum Physics and the Nature of Reality showed that there’s still no consensus on what quantum theory says about reality—the participants remained deeply divided about how the theory should be interpreted.

Some physicists just shrug and say we have to live with the fact that quantum mechanics is weird. So particles can be in two places at once, or communicate instantaneously over vast distances? Get over it. After all, the theory works fine. If you want to calculate what experiments will reveal about subatomic particles, atoms, molecules and light, then quantum mechanics succeeds brilliantly.

But some researchers want to dig deeper. They want to know why quantum mechanics has the form it does, and they are engaged in an ambitious program to find out. It is called quantum reconstruction, and it amounts to trying to rebuild the theory from scratch based on a few simple principles.

If these efforts succeed, it’s possible that all the apparent oddness and confusion of quantum mechanics will melt away, and we will finally grasp what the theory has been trying to tell us. “For me, the ultimate goal is to prove that quantum theory is the only theory where our imperfect experiences allow us to build an ideal picture of the world,” said Giulio Chiribella, a theoretical physicist at the University of Hong Kong.

There’s no guarantee of success—no assurance that quantum mechanics really does have something plain and simple at its heart, rather than the abstruse collection of mathematical concepts used today. But even if quantum reconstruction efforts don’t pan out, they might point the way to an equally tantalizing goal: getting beyond quantum mechanics itself to a still deeper theory. “I think it might help us move towards a theory of quantum gravity,” said Lucien Hardy, a theoretical physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada.

The Flimsy Foundations of Quantum Mechanics

The basic premise of the quantum reconstruction game is summed up by the joke about the driver who, lost in rural Ireland, asks a passer-by how to get to Dublin. “I wouldn’t start from here,” comes the reply.

Where, in quantum mechanics, is “here”? The theory arose out of attempts to understand how atoms and molecules interact with light and other radiation, phenomena that classical physics couldn’t explain. Quantum theory was empirically motivated, and its rules were simply ones that seemed to fit what was observed. It uses mathematical formulas that, while tried and trusted, were essentially pulled out of a hat by the pioneers of the theory in the early 20th century.

Take Erwin Schrödinger’s equation for calculating the probabilistic properties of quantum particles. The particle is described by a “wave function” that encodes all we can know about it. It’s basically a wavelike mathematical expression, reflecting the well-known fact that quantum particles can sometimes seem to behave like waves. Want to know the probability that the particle will be observed in a particular place? Just calculate the square of the wave function (or, more precisely, the square of its absolute value, since the wave function is generally a complex quantity), and from that you can deduce how likely you are to detect the particle there. The probability of measuring some of its other observable properties can be found by, crudely speaking, applying a mathematical function called an operator to the wave function.
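
Written out, the recipe described above is the textbook Born rule (standard notation, nothing specific to the reconstruction programs discussed below):

```latex
P(x) = |\psi(x)|^{2} = \psi^{*}(x)\,\psi(x),
\qquad
\langle A \rangle = \int \psi^{*}(x)\,\hat{A}\,\psi(x)\,dx
```

The first expression gives the probability of finding the particle at position x; the second gives the average value to expect for an observable A, represented by the operator Â.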

But this so-called rule for calculating probabilities was really just an intuitive guess by the German physicist Max Born. So was Schrödinger’s equation itself. Neither was supported by rigorous derivation. Quantum mechanics seems largely built of arbitrary rules like this, some of them—such as the mathematical properties of operators that correspond to observable properties of the system—rather arcane. It’s a complex framework, but it’s also an ad hoc patchwork, lacking any obvious physical interpretation or justification.

Compare this with the ground rules, or axioms, of Einstein’s theory of special relativity, which was as revolutionary in its way as quantum mechanics. (Einstein launched them both, rather miraculously, in 1905.) Before Einstein, there was an untidy collection of equations to describe how light behaves from the point of view of a moving observer. Einstein dispelled the mathematical fog with two simple and intuitive principles: that the speed of light is constant, and that the laws of physics are the same for two observers moving at constant speed relative to one another. Grant these basic principles, and the rest of the theory follows. Not only are the axioms simple, but we can see at once what they mean in physical terms.

What are the analogous statements for quantum mechanics? The eminent physicist John Wheeler once asserted that if we really understood the central point of quantum theory, we would be able to state it in one simple sentence that anyone could understand. If such a statement exists, some quantum reconstructionists suspect that we’ll find it only by rebuilding quantum theory from scratch: by tearing up the work of Bohr, Heisenberg and Schrödinger and starting again.

Quantum Roulette

One of the first efforts at quantum reconstruction was made in 2001 by Hardy, then at the University of Oxford. He ignored everything that we typically associate with quantum mechanics, such as quantum jumps, wave-particle duality and uncertainty. Instead, Hardy focused on probability: specifically, the probabilities that relate the possible states of a system with the chance of observing each state in a measurement. Hardy found that these bare bones were enough to get all that familiar quantum stuff back again.

Hardy assumed that any system can be described by some list of properties and their possible values. For example, in the case of a tossed coin, the salient values might be whether it comes up heads or tails. Then he considered the possibilities for measuring those values definitively in a single observation. You might think any distinct state of any system can always be reliably distinguished (at least in principle) by a measurement or observation. And that’s true for objects in classical physics.

In quantum mechanics, however, a particle can exist not just in distinct states, like the heads and tails of a coin, but in a so-called superposition—roughly speaking, a combination of those states. In other words, a quantum bit, or qubit, can be not just in the binary state of 0 or 1, but in a superposition of the two.

But if you make a measurement of that qubit, you’ll only ever get a result of 1 or 0. That is the mystery of quantum mechanics, often referred to as the collapse of the wave function: Measurements elicit only one of the possible outcomes. To put it another way, a quantum object commonly has more options for measurements encoded in the wave function than can be seen in practice.
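
In standard notation (again, nothing specific to Hardy’s framework), a qubit and the measurement rule look like this:

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^{2} + |\beta|^{2} = 1,
\qquad P(0) = |\alpha|^{2}, \quad P(1) = |\beta|^{2}
```

A measurement returns 0 or 1 and nothing in between; the amplitudes α and β fix only the odds.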

Hardy’s rules governing possible states and their relationship to measurement outcomes acknowledged this property of quantum bits. In essence the rules were (probabilistic) ones about how systems can carry information and how they can be combined and interconverted.

Hardy then showed that the simplest possible theory to describe such systems is quantum mechanics, with all its characteristic phenomena such as wavelike interference and entanglement, in which the properties of different objects become interdependent. “Hardy’s 2001 paper was the ‘Yes, we can!’ moment of the reconstruction program,” Chiribella said. “It told us that in some way or another we can get to a reconstruction of quantum theory.”

More specifically, it implied that the core trait of quantum theory is that it is inherently probabilistic. “Quantum theory can be seen as a generalized probability theory, an abstract thing that can be studied detached from its application to physics,” Chiribella said. This approach doesn’t address any underlying physics at all, but just considers how outputs are related to inputs: what we can measure given how a state is prepared (a so-called operational perspective). “What the physical system is is not specified and plays no role in the results,” Chiribella said. These generalized probability theories are “pure syntax,” he added — they relate states and measurements, just as linguistic syntax relates categories of words, without regard to what the words mean. In other words, Chiribella explained, generalized probability theories “are the syntax of physical theories, once we strip them of the semantics.”

The general idea for all approaches in quantum reconstruction, then, is to start by listing the probabilities that a user of the theory assigns to each of the possible outcomes of all the measurements the user can perform on a system. That list is the “state of the system.” The only other ingredients are the ways in which states can be transformed into one another, and the probability of the outputs given certain inputs. This operational approach to reconstruction “doesn’t assume space-time or causality or anything, only a distinction between these two types of data,” said Alexei Grinbaum, a philosopher of physics at the CEA Saclay in France.

To distinguish quantum theory from a generalized probability theory, you need specific kinds of constraints on the probabilities and possible outcomes of measurement. But those constraints aren’t unique. So lots of possible theories of probability look quantum-like. How then do you pick out the right one?

“We can look for probabilistic theories that are similar to quantum theory but differ in specific aspects,” said Matthias Kleinmann, a theoretical physicist at the University of the Basque Country in Bilbao, Spain. If you can then find postulates that select quantum mechanics specifically, he explained, you can “drop or weaken some of them and work out mathematically what other theories appear as solutions.” Such exploration of what lies beyond quantum mechanics is not just academic doodling, for it’s possible—indeed, likely—that quantum mechanics is itself just an approximation of a deeper theory. That theory might emerge, as quantum theory did from classical physics, from violations in quantum theory that appear if we push it hard enough.

Bits and Pieces

Some researchers suspect that ultimately the axioms of a quantum reconstruction will be about information: what can and can’t be done with it. One such derivation of quantum theory based on axioms about information was proposed in 2010 by Chiribella, then working at the Perimeter Institute, and his collaborators Giacomo Mauro D’Ariano and Paolo Perinotti of the University of Pavia in Italy. “Loosely speaking,” explained Jacques Pienaar, a theoretical physicist at the University of Vienna, “their principles state that information should be localized in space and time, that systems should be able to encode information about each other, and that every process should in principle be reversible, so that information is conserved.” (In irreversible processes, by contrast, information is typically lost—just as it is when you erase a file on your hard drive.)

What’s more, said Pienaar, these axioms can all be explained using ordinary language. “They all pertain directly to the elements of human experience, namely, what real experimenters ought to be able to do with the systems in their laboratories,” he said. “And they all seem quite reasonable, so that it is easy to accept their truth.” Chiribella and his colleagues showed that a system governed by these rules shows all the familiar quantum behaviors, such as superposition and entanglement.

One challenge is to decide what should be designated an axiom and what physicists should try to derive from the axioms. Take the quantum no-cloning rule, which is another of the principles that naturally arises from Chiribella’s reconstruction. One of the deep findings of modern quantum theory, this principle states that it is impossible to make a duplicate of an arbitrary, unknown quantum state.

It sounds like a technicality (albeit a highly inconvenient one for scientists and mathematicians seeking to design quantum computers). But in an effort in 2002 to derive quantum mechanics from rules about what is permitted with quantum information, Jeffrey Bub of the University of Maryland and his colleagues Rob Clifton of the University of Pittsburgh and Hans Halvorson of Princeton University made no-cloning one of three fundamental axioms. One of the others was a straightforward consequence of special relativity: You can’t transmit information between two objects more quickly than the speed of light by making a measurement on one of the objects. The third axiom was harder to state, but it also crops up as a constraint on quantum information technology. In essence, it limits how securely a bit of information can be exchanged without being tampered with: The rule is a prohibition on what is called “unconditionally secure bit commitment.”

These axioms seem to relate to the practicalities of managing quantum information. But if we consider them instead to be fundamental, and if we additionally assume that the algebra of quantum theory has a property called non-commutation, meaning that the order in which you do calculations matters (in contrast to the multiplication of two numbers, which can be done in any order), Clifton, Bub and Halvorson have shown that these rules too give rise to superposition, entanglement, uncertainty, nonlocality and so on: the core phenomena of quantum theory.

Another information-focused reconstruction was suggested in 2009 by Borivoje Dakić and Časlav Brukner, physicists at the University of Vienna. They proposed three “reasonable axioms” having to do with information capacity: that the most elementary component of all systems can carry no more than one bit of information, that the state of a composite system made up of subsystems is completely determined by measurements on its subsystems, and that you can convert any “pure” state to another and back again (like flipping a coin between heads and tails).

Dakić and Brukner showed that these assumptions lead inevitably to classical and quantum-style probability, and to no other kinds. What’s more, if you modify axiom three to say that states get converted continuously—little by little, rather than in one big jump—you get only quantum theory, not classical. (Yes, it really is that way round, contrary to what the “quantum jump” idea would have you expect—you can interconvert states of quantum spins by rotating their orientation smoothly, but you can’t gradually convert a classical heads to a tails.) “If we don’t have continuity, then we don’t have quantum theory,” Grinbaum said.
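
The spin-rotation remark can be made concrete with one line of standard qubit algebra (an illustration, not something taken from the Dakić and Brukner paper): a continuous rotation by an angle θ carries |0⟩ to |1⟩ through a family of intermediate superpositions, whereas a classical heads has no halfway house on the way to tails.

```latex
U(\theta)\,|0\rangle = \cos\tfrac{\theta}{2}\,|0\rangle + \sin\tfrac{\theta}{2}\,|1\rangle,
\qquad U(0)\,|0\rangle = |0\rangle,
\qquad U(\pi)\,|0\rangle = |1\rangle
```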

A further approach in the spirit of quantum reconstruction is called quantum Bayesianism, or QBism. Devised by Carlton Caves, Christopher Fuchs and Rüdiger Schack in the early 2000s, it takes the provocative position that the mathematical machinery of quantum mechanics has nothing to do with the way the world really is; rather, it is just the appropriate framework that lets us develop expectations and beliefs about the outcomes of our interventions. It takes its cue from the Bayesian approach to classical probability developed in the 18th century, in which probabilities stem from personal beliefs rather than observed frequencies. In QBism, quantum probabilities calculated by the Born rule don’t tell us what we’ll measure, but only what we should rationally expect to measure.

In this view, the world isn’t bound by rules—or at least, not by quantum rules. Indeed, there may be no fundamental laws governing the way particles interact; instead, laws emerge at the scale of our observations. This possibility was considered by John Wheeler, who dubbed the scenario Law Without Law. It would mean that “quantum theory is merely a tool to make comprehensible a lawless slicing-up of nature,” said Adán Cabello, a physicist at the University of Seville. Can we derive quantum theory from these premises alone?

“At first sight, it seems impossible,” Cabello admitted—the ingredients seem far too thin, not to mention arbitrary and alien to the usual assumptions of science. “But what if we manage to do it?” he asked. “Shouldn’t this shock anyone who thinks of quantum theory as an expression of properties of nature?”

Making Space for Gravity

In Hardy’s view, quantum reconstructions have been almost too successful, in one sense: Various sets of axioms all give rise to the basic structure of quantum mechanics. “We have these different sets of axioms, but when you look at them, you can see the connections between them,” he said. “They all seem reasonably good and are in a formal sense equivalent because they all give you quantum theory.” And that’s not quite what he’d hoped for. “When I started on this, what I wanted to see was two or so obvious, compelling axioms that would give you quantum theory and which no one would argue with.”

So how do we choose between the options available? “My suspicion now is that there is still a deeper level to go to in understanding quantum theory,” Hardy said. And he hopes that this deeper level will point beyond quantum theory, to the elusive goal of a quantum theory of gravity. “That’s the next step,” he said. Several researchers working on reconstructions now hope that the axiomatic approach will help us see how to pose quantum theory in a way that forges a connection with the modern theory of gravitation—Einstein’s general relativity.

Look at the Schrödinger equation and you will find no clues about how to take that step. But quantum reconstructions with an “informational” flavor speak about how information-carrying systems can affect one another, a framework of causation that hints at a link to the space-time picture of general relativity. Causation imposes chronological ordering: An effect can’t precede its cause. But Hardy suspects that the axioms we need to build quantum theory will be ones that embrace a lack of definite causal structure—no unique time-ordering of events—which he says is what we should expect when quantum theory is combined with general relativity. “I’d like to see axioms that are as causally neutral as possible, because they’d be better candidates as axioms that come from quantum gravity,” he said.

Hardy first suggested that quantum-gravitational systems might show indefinite causal structure in 2007. And indefinite causal structure, it turns out, is something only quantum mechanics can display. While working on quantum reconstructions, Chiribella was inspired to propose an experiment to create causal superpositions of quantum systems, in which there is no definite series of cause-and-effect events. This experiment has now been carried out by Philip Walther’s lab at the University of Vienna—and it might incidentally point to a way of making quantum computing more efficient.

“I find this a striking illustration of the usefulness of the reconstruction approach,” Chiribella said. “Capturing quantum theory with axioms is not just an intellectual exercise. We want the axioms to do something useful for us—to help us reason about quantum theory, invent new communication protocols and new algorithms for quantum computers, and to be a guide for the formulation of new physics.”

But can quantum reconstructions also help us understand the “meaning” of quantum mechanics? Hardy doubts that these efforts can resolve arguments about interpretation—whether we need many worlds or just one, for example. After all, precisely because the reconstructionist program is inherently “operational,” meaning that it focuses on the “user experience”—probabilities about what we measure—it may never speak about the “underlying reality” that creates those probabilities.

“When I went into this approach, I hoped it would help to resolve these interpretational problems,” Hardy admitted. “But I would say it hasn’t.” Cabello agrees. “One can argue that previous reconstructions failed to make quantum theory less puzzling or to explain where quantum theory comes from,” he said. “All of them seem to miss the mark for an ultimate understanding of the theory.” But he remains optimistic: “I still think that the right approach will dissolve the problems and we will understand the theory.”

Maybe, Hardy said, these challenges stem from the fact that the more fundamental description of reality is rooted in that still undiscovered theory of quantum gravity. “Perhaps when we finally get our hands on quantum gravity, the interpretation will suggest itself,” he said. “Or it might be worse!”

Right now, quantum reconstruction has few adherents—which pleases Hardy, as it means that it’s still a relatively tranquil field. But if it makes serious inroads into quantum gravity, that will surely change. In the 2011 poll, about a quarter of the respondents felt that quantum reconstructions would lead to a new, deeper theory. A one-in-four chance certainly seems worth a shot.

Grinbaum thinks that the task of building the whole of quantum theory from scratch with a handful of axioms may ultimately be unsuccessful. “I’m now very pessimistic about complete reconstructions,” he said. But, he suggested, why not try to do it piece by piece instead—to just reconstruct particular aspects, such as nonlocality or causality? “Why would one try to reconstruct the entire edifice of quantum theory if we know that it’s made of different bricks?” he asked. “Reconstruct the bricks first. Maybe remove some and look at what kind of new theory may emerge.”

“I think quantum theory as we know it will not stand,” Grinbaum said. “Which of its feet of clay will break first is what reconstructions are trying to explore.” He thinks that, as this daunting task proceeds, some of the most vexing and vague issues in standard quantum theory—such as the process of measurement and the role of the observer—will disappear, and we’ll see that the real challenges are elsewhere. “What is needed is new mathematics that will render these notions scientific,” he said. Then, perhaps, we’ll understand what we’ve been arguing about for so long.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Tech companies are eyeing the next frontier: the human face. Should you desire, you can now superimpose any variety of animal snouts onto a video of yourself in real time. If you choose to hemorrhage money on the new iPhone X, you can unlock your smartphone with a glance. At a KFC location in Hangzhou, China, you can even pay for a chicken sandwich by smiling at a camera. And at least one in four police departments in the US has access to facial recognition software to help them identify suspects.

But the tech isn’t perfect. Your iPhone X might not always unlock; a cop might arrest the wrong person. In order for software to always recognize your face as you, an entire sequence of algorithms has to work. First, the software has to be able to determine whether an image has a face in it at all. If you’re a cop trying to find a missing kid in a photo of a crowd, you might want the software to sort the faces by age. And ultimately, you need an algorithm that can compare each face with another photo in a database, perhaps with different lighting and at a different angle, and determine whether they’re the same person.

To improve these algorithms, researchers have found themselves using the tools of pollsters and social scientists: demographics. When they teach face recognition software about race, gender, and age, it can often perform certain tasks better. “This is not a surprising result,” says biometrics researcher Anil Jain of Michigan State University, “that if you model subpopulations separately you’ll get better results.” With better algorithms, maybe that cop won’t arrest the wrong person. Great news for everybody, right?

It’s not so simple. Demographic data may contribute to algorithms’ accuracy, but it also complicates their use.

Take a recent example. Researchers based at the University of Surrey in the UK and Jiangnan University in China were trying to improve an algorithm used in specific facial recognition applications. The algorithm, based on something called a 3-D morphable model, digitally converts a selfie into a 3-D head in less than a second. Model in hand, you can use it to rotate the angle of someone’s selfie, for example, to compare it to another photograph. The iPhone X and Snapchat use similar 3-D models.

The researchers gave their algorithm some basic instructions: Here’s a template of a head, and here’s the ability to stretch or compress it to get the 2-D image to drape over it as smoothly as possible. The template they used is essentially the average human face—average nose length, average pupil distance, average cheek diameter, calculated from 3-D scans they took of real people. When people made these models in the past, it was hard to collect a lot of scans because they’re time-consuming. So frequently, they’d just lump all their data together and calculate an average face, regardless of race, gender, or age.

The group used a database of 942 faces—3-D scans collected in the UK and in China—to make their templates. But instead of calculating the average of all 942 faces at once, they categorized the face data by race. They made separate templates for each race—an average Asian face, white face, and black face—and based their algorithm on these three templates. And even though they had only 10 scans of black faces—they had 100 white faces and over 800 Asian faces—they found that their algorithm generated a 3-D model that matched a real person’s head better than the previous one-template model.

“It’s not only for race,” says computer scientist Zhenhua Feng of the University of Surrey. “If you have a model for an infant, you can construct an infant’s 3-D face better. If you have a model for an old person, you can construct that type of 3-D face better.” So if you teach biometric software explicitly about social categories, it does a better job.
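
A rough sketch of that grouping step in code. Everything here is invented for illustration (the scan arrays, group labels, and function names), and a real 3-D morphable model also learns how faces vary around the mean, not just the mean shape itself:

```python
# Per-group face templates: average the scans within each group, then start
# model fitting from the template of the group a classifier predicts.
import numpy as np

def build_templates(scans, groups):
    """Average the 3-D scans (each an N_vertices x 3 array) within each group label."""
    templates = {}
    for g in set(groups):
        members = [s for s, lbl in zip(scans, groups) if lbl == g]
        templates[g] = np.mean(members, axis=0)  # the per-group mean shape
    return templates

def pick_template(templates, predicted_group):
    """Initialize fitting from the template of the predicted group."""
    return templates[predicted_group]

# Placeholder data: 942 scans, each a 5,000-vertex mesh.
rng = np.random.default_rng(2)
scans = [rng.normal(size=(5000, 3)) for _ in range(942)]
groups = list(rng.choice(["group_a", "group_b", "group_c"], size=942))
templates = build_templates(scans, groups)
start_shape = pick_template(templates, "group_b")
print(start_shape.shape)  # (5000, 3): the mean shape used to start the fit
```

Starting the fit from a template closer to the person’s actual face shape gives the fitting procedure less distance to cover, which is the intuition behind the accuracy gains the researchers report.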

Feng’s particular 3-D models are a niche approach in facial recognition, says Jain—the trendy algorithms right now use 2-D photos, because 3-D face data is hard to work with. But other, more widespread techniques also lump people into categories to improve their performance. A more common 3-D face model, known as a person-specific model, also often uses face templates. Depending on whether the person in the picture is a man, woman, infant, or an elderly person, the algorithm will start with a different template. For certain 2-D machine learning algorithms that verify whether two photographs contain the same person, researchers have demonstrated that if you break down different appearance attributes—gender, race, but also eye color, expression—the algorithms also perform more accurately.

So if you teach an algorithm about race, does that make it racist? Not necessarily, says sociologist Alondra Nelson of Columbia University, who studies the ethics of new technologies. Social scientists categorize data using demographic information all the time, in response to how society has already structured itself. For example, sociologists often analyze behaviors along gender or racial lines. “We live in a world that uses race for everything,” says Nelson. “I don’t understand the argument that we’re not supposed to here.” Existing databases—the FBI’s face depository and the census—already stick people in predetermined boxes, so if you want an algorithm to work with these databases, you’ll have to use those categories.

However, Nelson points out, it’s important that computer scientists think through why they’ve chosen to use race over other categories. It’s possible that other variables with less potential for discrimination or bias would be just as effective. “Would it be OK to pick categories like blue eyes, brown eyes, thin nose, not thin nose, or whatever—and not have it to do with race at all?” says Nelson.

Researchers need to imagine the possible applications of their work, particularly the ones that governments or institutions of power might use, says Nelson. Last year, the FBI released surveillance footage it took to monitor Black Lives Matter protests in Baltimore, in a state whose police department has been using facial recognition software since 2011. “As this work gets more technically complicated, it falls on researchers not just to do the technical work, but the ethical work as well,” Nelson says. In other words, that face-mapping software in Snapchat—how could the cops use it?
