This story originally appeared on CityLab and is part of the Climate Desk collaboration.

If you want an unusual but punchy telling of the world’s explosion of climate-warping gases, look no further than this visualization of CO2 levels over the past centuries soaring like skyscrapers into space.

“A Brief History of CO2 Emissions” portrays the cumulative amount of this common greenhouse gas that humans have produced since the mid-1700s. It also projects to the end of the 21st century to show what might happen if the world disregards the Paris Agreement, an ambitious effort to limit warming that 200 countries signed onto in 2015. (President Donald Trump still wants to renege on it.) At that point, the CO2-plagued atmosphere could see jumps in average temperature as high as 6 to 9 degrees Fahrenheit, the animation’s narrator warns, displaying a model of Earth looking less like a planet than a porcupine.

“We wanted to show where and when CO2 was emitted in the last 250 years—and might be emitted in the coming 80 years if no climate action is taken,” emails Boris Mueller, a creator of the viz along with designer Julian Braun and others at Germany’s University of Applied Sciences Potsdam and the Potsdam Institute for Climate Impact Research. “By visualizing the global distribution and the local amount of cumulated CO2, we were able to create a strong image that demonstrates very clearly the dominant CO2-emitting regions and time spans.”

The visualization begins with a small, white lump growing on London around 1760—the start of the Industrial Revolution. More white dots quickly appear throughout Europe, rising prominently in Paris and Brussels in the mid-1800s, then throughout Asia and the US, where in the early 1900s emissions skyrocket in the New York region, Chicago, and Southern California.

By the time the present day rolls around, the world looks like home to the biggest construction project in existence, with spires that’d put the Burj Khalifa to shame ascending in the US, China, and Europe—currently the worst emitters in terms of volume of CO2.

For this project, the team pulled historical data from the US Department of Energy-affiliated Carbon Dioxide Information Analysis Center. The “CO2 emission estimates are deduced from information on the amount and location of fossil-fuel combustion and cement production over time,” says Elmar Kriegler, the viz’s scientific lead. “Therefore, the visualization also tells the history of the Industrial Revolution which started in England, spread across Europe and the United States, and finally across the world.”

Astute observers will notice a couple of troubling things, such as the huge amount of emissions pouring out of urban areas like London, New York, and Tokyo. Cities and the power plants that keep them humming remain the world’s largest source of anthropogenic greenhouse gases. Also notable: the relative absence of emissions in some parts of the planet. That isn’t necessarily a good thing. “Some regions, in particular Africa, still do not show a significant cumulative CO2-emissions signal,” says Kriegler, “highlighting that they are still in the beginning of industrialization and may increase their emissions rapidly in the future, if they follow the path of Europe, the U.S., Japan, and recently China and Southeast Asia.”

How likely is it that the worst-case scenario portrayed in this viz is nearing our doorstep? The viz’s creators argue that some current damage is here to stay. But they have some cause for optimism, too. “Reducing CO2 emissions to zero in the second half of the century can be achieved with decisive, global-scale emissions-reductions policies and efforts,” Kriegler says. “The Paris Agreement can be an important [catalyst] for this development if embraced fully by the world’s leading emitters and powers. But as we say in the movie, the time to act is now.”

Huntington’s disease is brutal in its simplicity. The disorder, which slowly bulldozes your ability to control your body, starts with just a single mutation, in the gene for huntingtin protein. That tweak tacks an unwelcome glob of glutamines—extra amino acids—onto the protein, turning it into a destroyer that attacks neurons.

Huntington’s simplicity is exciting, because theoretically, it means you could treat it with a single drug targeted at that errant protein. But in the 24 years since scientists discovered the gene for huntingtin, the search for suitable drugs has come up empty. This century’s riches of genetic and chemical data seem like they should have sped up research, but so far, the drug pipeline is more faucet than fire hydrant.

Part of the problem is simply that drug design is hard. But many researchers point to the systems of paywalls and patents that lock up data, slowing the flow of information. So a nonprofit called the Structural Genomics Consortium is countering with a strategy of extreme openness. They’re partnering with nine pharmaceutical companies and labs at six universities, including Oxford, the University of Toronto, and UNC Chapel Hill. They’re pledging to share everything with each other—drug wish lists, results in open access journals, and experimental samples—hoping to speed up the long, expensive drug design process for tough diseases like Huntington’s.

Rachel Harding, a postdoc at the University of Toronto arm of the collaboration, joined up to study the Huntington’s protein after she finished her PhD at Oxford. In a recent round of experiments, her lab grew insect cells in stacks of lab flasks fed with pink media. After slipping the cells a DNA vector that directed them to produce huntingtin, Harding purified and stabilized the protein—and once it has hung out in a deep freezer for a while, she’ll map it with an electron microscope at Oxford.

Harding’s approach deviates from the norm in one major way: She doesn’t wait to publish a paper before sharing her results. After each of her experiments, “we’ll just put that into the public domain so that more people can use our stuff for free,” she says: protocols, the genetic sequences that worked for making proteins, experimental data. She’d even like to share protein samples with interested researchers, as she’s offered on Twitter. All this work is to create a map of huntingtin, “how all the atoms are connected to each other in three-dimensional space,” Harding says, including potential binding sites for drugs.

The next step is to ping that protein structure with thousands of molecules—chemical probes—to see if any bind in a helpful way. That’s what Kilian Huber, a medicinal chemistry researcher at Oxford University’s arm of the Structural Genomics Consortium, spends his days working on. Given a certain protein, he develops a way to measure its activity in cells, and then tests it against chemicals from pharmaceutical companies’ compound libraries, full of thousands of potential drug molecules.

If they score a hit, Huber and his consortium collaborators have pledged not to patent any of these chemicals. On the contrary, they want to share any chemical probe that works so it can quickly get more replication and testing. Many times, at other researchers’ requests, he has “put these compounds in an envelope, and sent them over,” he says. Recipient researchers generally cover shipping costs, and the organization as a whole has shipped off more than 10,000 samples since it started in 2004.

Under the umbrella of the SGC, about 200 scientists like Kilian and Rachel have agreed to never file any patents, and to publish only open access papers. CEO Aled Edwards beams when he talks about the group’s “metastatic openness.” Asking researchers to agree to share their work hasn’t been a problem. “There’s a willingness to be open,” he says. “You just have to show the way.”

Is Sharing Caring?

There are a few challenges to such a high degree of openness. The academic labs have some say in which projects they tackle first—but it’s their funders that ultimately decide which tricky proteins everyone will work on. Each government, pharmaceutical company, or nonprofit that gifts $8 million to the organization can nominate proteins to a master to-do list, which researchers at these companies and affiliate universities tackle together.

That list could be a risk for the pharma companies at the table: While it doesn’t specify which company nominated which protein, the entire group can see that somebody is interested in a Huntington’s strategy, for example. But they’re hedging their bets on a selective reveal of their priorities. For several million dollars—a fraction of most of these companies’ R&D budgets—companies including Pfizer, Novartis, and Bayer buy into the scientific expertise of this group and stand to get results a bit faster. And since no one is patenting any of the genes, protein structures, or experimental chemicals they produce, the companies can still file their own patents for whatever drugs they create as a result of this research.

That might seem like a bum deal for the scientists doing all the work of discovery. But mostly, scientists at the SGC seem thrilled that collaborating can accelerate their research.

“Rather than trying to do everything yourself, I can just share whatever I'm generating, and give it to the people that I think are experts in that area,” says Huber. “Then they will share the information back with us, and that, to me, is the key, from a personal point of view, on top of hopefully being able to support the development of new medicines.” Because all the work is published open access, technically anyone in the world could benefit.

Edwards has pushed the SGC to slowly open up new steps of the drug discovery process. They started out working on genes, which is why they’re named a ‘genomics consortium’, then inched their way to sharing protein structures like the ones Harding works on. Creating and sharing tool compounds like Huber’s is their latest advance. “We’re trying to create a parallel universe where we can invent medicines in the open, where we can share our data,” Edwards says.

He hopes their approach will expand into a wider movement, so that other life science researchers get on board with data sharing, and open-source science improves repeatability and speeds up research. The Montreal Neurological Institute stopped filing patents on any of its discoveries last year. And there are other groups, like the Open Source Malaria Project, that have made a point of keeping all of their science in the open.

Sharing data won’t necessarily rein in the rising prices of certain drugs. But it could certainly speed up understanding of new compounds, and shore up their chances of getting through clinical trials. The drug-making process is so complicated that if data sharing shaved just a bit of time off each step, it could save people years of waiting. The Huntington’s patients are waiting.

Every week, two million people across the world will sit for hours, hooked up to a whirring, blinking, blood-cleaning dialysis machine. Their alternatives: Find a kidney transplant or die.

In the US, dialysis is a roughly 40-billion-dollar business keeping 468,000 people with end-stage renal disease alive. The process is far from perfect, but that hasn't hindered the industry's growth. That's thanks to a federally mandated Medicare entitlement that guarantees any American who needs dialysis—regardless of age or financial status—can get it, and get it paid for.

The legally enshrined coverage of dialysis has doubtless saved thousands of lives since its enactment 45 years ago, but the procedure’s history of special treatment has also stymied innovation. Today, the US government spends about 50 times more on private dialysis companies than it does on kidney disease research to improve treatments and find new cures. In this funding atmosphere, scientists have made slow progress coming up with something better than the dialysis-machine-filled storefronts and strip malls that provide a vital service to so many of the country's sickest people.

Now, after more than 20 years of work, one team of doctors and researchers is close to offering patients an implantable artificial kidney, a bionic device that uses the same technology that makes the chips that power your laptop and smartphone. Stacks of carefully designed silicon nanopore filters combine with live kidney cells grown in a bioreactor. The bundle is enclosed in a body-friendly box and connected to a patient’s circulatory system and bladder—no external tubing required.

The device would do more than detach dialysis patients—who experience much higher rates of fatigue, chronic pain, and depression than the average American—from a grueling treatment schedule. It would also address a critical shortfall of organs for transplant that continues despite a recent uptick in donations. For every person who received a kidney last year, five more on the waiting list didn’t. And 4,000 of them died.

There are still plenty of regulatory hurdles ahead—human testing is scheduled to begin early next year¹—but this bioartificial kidney is already bringing hope to patients desperate to unhook for good.

Innovation, Interrupted

Kidneys are the body’s bookkeepers. They sort the good from the bad—a process crucial to maintaining a stable balance of bodily chemicals. But sometimes they stop working. Diabetes, high blood pressure, and some forms of cancer can all cause kidney damage and impair the organs' ability to function. Which is why doctors have long been on the lookout for ways to mimic their operations outside the body.

The first successful attempt at a human artificial kidney was a feat of Rube Goldberg-ian ingenuity, necessitated in large part by wartime austerity measures. In the spring of 1940, a young Dutch doctor named Willem Kolff decamped from his university post to wait out the Nazi occupation of the Netherlands in a rural hospital on the IJssel river. There he constructed an unwieldy contraption for treating people dying from kidney failure using some 50 yards of sausage casing, a rotating wooden drum, and a bath of saltwater. The semi-permeable casing filtered out small molecules of toxic kidney waste while keeping larger blood cells and other molecules intact. Kolff's apparatus enabled him to draw blood from his patients, push it through the 150 feet of submerged casings, and return it to them cleansed of deadly impurities.

In some ways, dialysis has advanced quite a bit since 1943. (Vaarwel, sausage casing, hello mass-produced cellulose tubing.) But its basic function has remained unchanged for more than 70 years.

Not because there aren’t plenty of things to improve on. Design and manufacturing flaws make dialysis much less efficient than a real kidney at taking bad stuff out of the body and keeping the good stuff in. Other biological functions it can’t duplicate at all. But any effort to substantially upgrade (or, heaven forbid, supplant) the technology has been undercut by a political promise made four and a half decades ago with unforeseen economic repercussions.

In the 1960s, when dialysis started gaining traction among doctors treating chronic kidney failure, most patients couldn't afford its $30,000 price tag—and it wasn’t covered by insurance. This led to treatment rationing and the arrival of death panels in the American consciousness. In 1972, Richard Nixon signed a government mandate to pay for dialysis for anyone who needed it. At the time, the moral cost of failing to provide lifesaving care was deemed greater than the financial setback of doing so.

But the government accountants, unable to see the country’s coming obesity epidemic and all its attendant health problems, greatly underestimated the future need of the nation. In the decades since, the number of patients requiring dialysis has increased fiftyfold. Today the federal government spends as much on treating kidney disease—nearly $31 billion per year—as it does on the entire annual budget for the National Institutes of Health. The NIH devotes $574 million of its funding to kidney disease research to improve therapies and discover cures. That represents just 1.7 percent of the annual total cost of care for the condition.

But Shuvo Roy, a professor in the School of Pharmacy at UC San Francisco, didn’t know any of this back in the late 1990s when he was studying how to apply his electrical engineering chops to medical devices. Fresh off his PhD and starting a new job at the Cleveland Clinic, Roy was a hammer looking for interesting problems to solve. Cardiology and neurosurgery seemed like exciting, well-funded places to do that. So he started working on cardiac ultrasound. But one day, a few months in, an internal medicine resident at nearby Case Western Reserve University named William Fissell came up to Roy and asked: “Have you ever thought about working on the kidney?”

Roy hadn’t. But the more Fissell told him about how stagnant the field of kidney research had been, how ripe dialysis was for a technological overhaul, the more interested he got. And as he familiarized himself with the machines and the engineering behind them, Roy began to realize the extent of dialysis' limitations—and the potential for innovation.

Limitations like the pore-size problem. Dialysis does a decent job cleansing blood of waste products, but it also filters out good stuff: salts, sugars, amino acids. Blame the polymer manufacturing process, which can’t replicate the 7-nanometer precision of nephrons—the kidney's natural filters. Making dialysis membranes involves a process called extrusion, which yields a distribution of pore sizes—most are about 7 nm, but you also get some portion that are much smaller, some that are much larger, and everything in between. This is a problem because it means some of the bad stuff (like urea and excess salts) stays in the blood, while some of the good stuff (necessary blood sugars and amino acids) slips out. Seven nanometers is the size of albumin—a critical protein that keeps fluid from leaking out of blood vessels, nourishes tissues, and transports hormones, vitamins, drugs, and substances like calcium throughout the body. Taking too much of it out of the bloodstream would be a bad thing. And when it comes to the kidney’s other natural functions, like secreting hormones that regulate blood pressure, dialysis can’t do them at all. Only living cells can.

“We were talking about making a better Band-Aid,” Roy says. But as he and Fissell looked around them at the advances being made in live tissue engineering, they started thinking beyond a better, smaller, faster filter. “We thought, if people are growing ears on the backs of mice, why can’t we grow a kidney?”

It turned out, someone had already tried. Sort of.

Dialysis, Disrupted

Back in 1997, when Fissell and Roy were finishing up their advanced training at Case Western, a nephrologist named David Humes at the University of Michigan began working to isolate a particular kind of kidney cell found on the back end of the nephron. Humes figured out how to extract them from cadaver kidneys not suitable for transplant and grow them in his lab. Then he took those cells and coated the insides of tubes filled with hollow fiber membranes, similar to the filter cartridge on modern dialysis machines. He had invented an artificial kidney that could live outside the human body on a continuous flow of blood from the patient and do more than just filter.

The results were incredibly encouraging. In clinical trials at the University of Michigan Hospital, it cut mortality rates for ICU patients with acute renal failure in half. There was just one problem. To work, the patient had to be permanently hooked up to half a hospital room’s worth of tubes and pumps.

The first time Roy saw Humes’ set-up, he immediately recognized its promise—and its limitations. Fissell had convinced him to drive from Cleveland to Ann Arbor in the middle of a snowstorm to check it out. The trip convinced them that the technology worked. It was just way too cumbersome for anyone to actually use it.

In 2000, Fissell joined Humes to do his nephrology fellowship at Michigan. Roy stayed at the Cleveland Clinic to work on cardiac medical devices. But for the next three years, nearly every Thursday afternoon Fissell hopped in his car and drove three hours east on I-90 to spend long weekends in Roy’s lab tackling a quintessentially 21st century engineering problem: miniaturization. They had no money, and no employees. But they were able to ride the wave of advancements in silicon manufacturing that was shrinking screens and battery packs across the electronics industry. “Silicon is the most perfected man-made material on Earth,” Roy says from the entrance to the vacuum-sealed clean room at UCSF, where his grad students produce the filters. If they want to make a slit that’s 7 nanometers wide, they can do that with silicon every time. It has a less than one percent variation rate.

The silicon filters had another advantage, too. Because Roy and Fissell wanted to create a small implantable device, they needed a way to make sure there wasn’t an immune response—similar to transplant rejection. Stacks of silicon filters could act as a screen to keep the body’s immune cells physically separated from Humes’ kidney cells, which would be embedded in a microscopic scaffold on the other side. The only thing getting through to them would be the salt- and waste-filled water, which the cells would further concentrate into urine and route to the bladder.

By 2007 the three researchers had made enough progress to apply for and receive a three-year, $3 million grant from the NIH to prove the concept of their implantable bioartificial kidney in an animal model. On the line was a second phase of funding, this time for $15 million, enough to take the project through human clinical trials. Roy moved out west to UCSF to be closer to the semiconductor manufacturing expertise in the Bay Area. Fissell worked on the project for a few more years at the Cleveland Clinic before being recruited to Vanderbilt, while Humes stayed at the University of Michigan to keep working with his cells. But they didn’t make the cut. And without money, the research began to stall.

By then though, their kidney project had taken on a following of its own. Patients from all over the world wanted to see it succeed. And over the next few years they began donating to the project—some sent in five-dollar bills, others signed checks for a million dollars. One six-year-old girl from upstate New York whose brother is on dialysis convinced her mother to let her hold a roadside garden vegetable sale and send in the proceeds. The universities chipped in too, and the scientists started to make more progress. They used 3D printing to test new prototypes and computer models of hydraulic flow to optimize how all the parts would fit together. They began collaborating with the surgeons in their medical schools to figure out the best procedure for implanting the devices. By 2015 the NIH was interested again. It signed on for another $6 million over the next four years. And then the FDA got interested.

That fall the agency selected the Kidney Project to participate in a new expedited regulatory approval plan intended to get medical innovations to patients faster. While Roy and Fissell have continued to tweak their device, helped along by weekly shipments of cryogenically frozen cells from Humes’ lab, FDA officials have shepherded them through two years of preclinical testing, most of which has been done in pigs and has shown good results. In April, the agency sent 20 of its scientists out to California to advise on the next step: moving into humans.

The plan is to start small—maybe ten patients tops—to test the safety of the silicon filter’s materials. Clotting is the biggest concern, so they’ll surgically implant the device in each participant’s abdomen for a month to make sure that doesn’t happen. If that goes well, they will do a follow-up study to make sure it actually filters blood in humans the way it’s supposed to. Only then can they combine the filter with the bioreactor portion of the device, aka Humes’ renal cells, to test the full capacity of the artificial kidney.

The scientists expect to arrive at this final stage of clinical trials, and regulatory approval, by 2020. That may sound fast, but one thing they’ve already got a jump on is patient recruiting. Nearly 9,000 of them have already signed up to the project’s waitlist, ready to be contacted when clinical trials get the green light.

These patients are willing to accept the risk of pioneering a third option besides transplants (too expensive and too hard to get for most people) and the drudgery of dialysis. Joseph Vassalotti, a nephrologist in Manhattan and the chief medical officer for the National Kidney Foundation, says “the more choices patients have the better,” even though he’s skeptical the device will become a reality within the next few years. An implantable kidney would dramatically improve patients’ quality of life and be a welcome innovation after so many years of treatment status quo. “During World War II we didn’t think dialysis would be possible,” Vassalotti says. “Now half a million Americans are being treated with it. It’s amazing the progress just a few decades makes.”

¹ Correction, 12:50 pm ET: The Kidney Project is now slated to begin clinical trials in early 2018. A previous version of this article incorrectly stated they would take place later this year. Changes have also been made to correctly identify the size and timing of grants to the Kidney Project.

The Napa Fire Is a Perfectly Normal Apocalypse

Blame the wind, if you want. In Southern California they call it the Santa Ana; in the north, the Diablos. Every autumn, from 4,000 feet up in the Great Basin deserts of Nevada and Utah, air drops down over the mountains and through the canyons. By the time it gets near the coast it’s hot, dry, and can gust as fast as a hurricane.

Or blame lightning, or carelessness, or downed power lines. No one yet knows the cause of the more than a dozen fires ablaze around California, but fires start where humans meet the wild forests, where people build for solitude or space or beauty. Things go wrong in those liminal spaces, at the interface between the wilds and the built.

So blame sprawl, or civilization’s cycling of wilderness into rural into exurban into suburban—urban agglomerations with an ever-expanding wavefront.

Blame all of it. There’s a reason the great Californian writer Raymond Chandler called it the Red Wind—winds “that come down through the mountain passes and curl your hair and make your nerves jump and your skin itch.” Those winds blast down from the mountains and fan small fires into infernos, and sometimes those infernos maim or kill a city. In 1991 it was in the hills of Oakland. And this past weekend it was Napa and Sonoma, and the town of Santa Rosa. At least 15 people are dead. More than 1,500 houses are gone. The skies of the West are full of dust and ash.

Pushed by the wind, fires can throw burning embers a mile and a half ahead. The fire front starts moving faster than anyone can respond, jumping from ridgeline to ridgeline.

A fire’s progress through the forests and wildlands of North America isn’t exactly formulaic, but scientists understand it reasonably well. In the city, though? “Most wildland firefighters are not trained in structural protection, but the urban fire departments are not trained to deal with dozens or hundreds of houses burning at the same time,” says Volker Radeloff, a forestry researcher at the University of Wisconsin. “When these areas with lots of houses burn, the fires become very unpredictable.”

Buildings, the material bits of cities, don’t burn like woodlands. “A wildfire typically doesn’t last in one spot more than a minute or two. In grass it can be like 10 seconds,” says Mark Finney, a US Forest Service researcher at the Missoula Fire Sciences Laboratory. “But structures can burn for a long time. That means they have a long time to be able to spread the fire, to be able to ignite adjacent structures.” They throw off embers as they decompose, and those wide walls emit and transfer heat.

In Southern California, Santa Ana fires push into populated areas more frequently. They kill more people and destroy more buildings. Diablo-powered fires aren't as common in the state's northern half, but they're not unknown.

Fires happen without Santa Anas too, of course, but “they typically don’t grow bigger,” says Yufang Jin, an ecosystem dynamics researcher at UC Davis and lead author on a 2015 paper about the difference. “During summertime in southern California, the typical wind pattern blows from the ocean to inland. The wind speed is usually not that strong, and the relative humidity is usually high.” That can tamp a fire down.

During Santa Ana season, conditions are the opposite. And the particularly bad Diablo winds in the north this year come after the end of a drought that left plenty of fuel. Fire researchers sometimes fight about whether meteorology or fuel conditions are more important to wildfires; this past weekend had both—the perfect firestorm. Cal Fire, the agency responsible for wildfires in the state, has issued another Red Flag Warning for the same conditions later this week. According to a spokesperson, roughly 4,000 firefighters are already deployed.

California housing policies are more likely to push single-family houses out into the edges of communities than encourage the construction of dense city centers. Climate change makes wet seasons wetter and hot seasons hotter—which builds fuel. “Based on analysis using climate model projections, the frequency of Santa Ana events is uncertain,” Jin says. “But all the models agree that the intensity of Santa Ana events is going to be much stronger.”

Models say the same thing about sea level rise and hurricanes. A continent away from the fires in California, cities along the Gulf of Mexico and in the Caribbean have been battered by tropical cyclones, one after the other. This year, ocean water heated by a warming climate, unusually wet weather, and a lack of the vertical wind shear that can tame a big storm combined to produce an anomalous season. It has already been a fire season and a hurricane season that are, as researchers say, consistent with models of a changing climate.

Cities are not immortal. Economics and wars can kill them, but so can storms and fires. That’s especially true if cities aren’t built to resist—if cities are built in ways that make the change worse instead of fighting it.

So keep thinking about blame as northern California rebuilds—if regulations get brave enough to insist on denser cities, less flammable materials, different ornamental vegetation, underground power lines. The risk of fire will never be zero, but everyone knows what would knock a few points off. Whether anyone will make those changes—well, the red wind makes people do crazy things.

Daniel Kish sees more than you might expect, for a blind man. Like many individuals deprived of sight, he relies on his non-visual senses to perceive, map, and navigate the world. But people tend to find Kish's abilities rather remarkable. Reason being: Kish can echolocate. Yes, like a bat.

As a child, Kish taught himself to generate sharp clicking noises with his mouth, and to translate the sound reflected by surrounding objects into spatial information. Perhaps you've seen videos like this one, in which Kish uses his skills to navigate a new environment, describe the shape of a car, identify the architectural features of a distant building—even ride a bike:

Impressive as his abilities are, Kish insists he isn't special. "People who are blind have been using various forms of echolocation to varying degrees of efficiency for a very long time," he says. What's more, echolocation can be taught. As president of World Access For the Blind, one of Kish's missions is helping blind people learn to cook, travel, hike, run errands, and otherwise live their lives more independently—with sound. "But there’s never been any systematic look at how we echolocate, how it works, and how it might be used to best effect."

A study published Thursday in PLOS Computational Biology takes a big step toward answering these questions, by measuring the mouth-clicks of Kish and two other expert echolocators and converting those measurements into computer-generated signals.

Researchers led by Durham University psychologist Lore Thaler performed the study in what's known in acoustic circles as an anechoic chamber. The room features double walls, a heavy steel door, and an ample helping of sound-dampening materials like foam. To stand inside an anechoic chamber is to be sonically isolated from the outside world. To speak inside of one is to experience the uncanny effect of an environment practically devoid of echoes.

But to echolocate inside of one? I asked Kish what it was like, fully expecting him to describe it as a form of sensory deprivation. Wrong. Kish says that, to him, the space sounded like standing before a wire fence, in the middle of an infinitely vast field of grass.

This unique space allowed Thaler and her team to record and analyze thousands of mouth-clicks produced by Kish and the other expert echolocators. The team used tiny microphones—one at mouth level, with others surrounding the echolocators at 10-degree intervals, suspended at various heights from thin steel rods. Small microphones and rods were essential; the bigger the equipment, the more sound it would reflect, reducing the fidelity of the measurements.

Thaler's team began the study expecting the acoustic properties of mouth-clicks to vary between echolocators. But the noises they produced were very similar. Thaler characterizes them as bright (a pair of high-pitched frequencies at around 3 and 10 kilohertz) and brief. They tended to last just three milliseconds before tapering off into silence. Here's a looped recording of one of Kish's clicks:

The researchers also analyzed the spatial path that the sound waves traveled after leaving the echolocators' mouths. “You can think of it as an acoustic flashlight,” Thaler says. “When you turn a flashlight on, the light distributes through space. A lot of it travels forward, but there's scattering to the left and right, as well.” The beam patterns for clicks occupy space in a similar fashion—only with sound instead of light.

Thaler's team found that the beam pattern for the mouth-clicks roughly concentrated in a 60-degree cone, emanating from the echolocators' mouths—a narrower path than has been observed for speech. Thaler attributes that narrowness to the brightness of the click's pitch. Higher frequencies tend to be more directional than lower ones, which is why, if you've ever set up a surround sound system, you know that a subwoofer's placement is less important than that of a higher-frequency tweeter.

Thaler and her team used these measurements to create artificial clicks with acoustic properties similar to the real thing. Have a listen:

These synthetic clicks could be a boon for studies of human echolocation, which are often restricted by the availability of expert practitioners like Kish. "What we can do now is simulate echolocation, in the real world with speakers or in virtual environments, to develop hypotheses before testing them with human subjects," Thaler says. "We can create avatars, objects, and environments in space like you would in a video game, and model what the avatar hears." Preliminary studies like these could allow Thaler and other researchers to refine their hypotheses before inviting echolocation experts in to see how their models match the real thing.

These models won’t be perfect. To keep measurements consistent, Kish and the other echolocators had to keep still while inside the chamber. “But in the real world, they move their heads and vary the acoustic properties of their clicks, which can help them gain additional information about where things are in the world,” says Cynthia Moss, a neuroscientist at Johns Hopkins University whose lab studies the mechanisms of spatial perception. (Thaler says her team is currently analyzing the results of a dynamic study, the results of which they hope to publish soon.)

Still, Moss says the study represents a valuable step toward understanding how humans echolocate, and perhaps even building devices that could make the skill more broadly achievable. Not everyone can click like Kish. “I’ve worked with a guy who used finger-snaps, but his hand would get tired really fast,” Moss says. Imagine being able to hand someone a device that emits a pitch-perfect signal—one that they could learn to use before, or perhaps instead of, mastering mouth-clicks.

I ask Kish what he thinks about a hypothetical device that could one day produce sounds like he does. He says it already exists. About a third of his students are unable or unwilling to produce clicks with their mouths. "But you pop a castanet in their hands and you get instant results," he says. "The sound they produce, it's like ear candy. It's uncanny how bright, clear, and consistent it is."

But Kish says he's all for more devices—and more research. "We know that these signals are critical to the echolocation process. Bats use them. Whales use them. Humans use them. It makes sense that those signals should be studied, understood, optimized." With the help of models like Thaler's, Kish might just get his wish.

On the last Monday of September, 32 field workers stepped onto a 15-acre experimental plot in an undisclosed part of Washington and made apple harvest history. The fruits they plucked from each tree were only a few months old. But they were two decades and millions of dollars in the making. And when they landed, pre-sliced and bagged on grocery store shelves earlier this month, they became the first genetically modified apple to go on sale in the United States.

The Arctic Apple, as it’s known, is strategically missing an enzyme so it doesn’t go brown when you take a bite and leave it sitting out on the counter. It’s one of the first foods engineered to appeal directly to the senses, rather than a farmer’s bottom line. And in a bid to attract consumers, it’s unapologetic about its alterations.

The apple has courted plenty of controversy to get where it is today—in about a hundred small supermarkets clustered around Oklahoma City. But now that it’s here, the question is, will consumers bite? Dozens of biotech companies with similar products in the pipeline, from small startups to agrochemical colossuses like Monsanto and DuPont, are watching, eager to find out if the Arctic Apple will be a bellwether for the next generation of GMOs, or just another science project skewered on the altar of public opinion.

Neal Carter bought his first apple orchard in 1995, up in the gently sloping valley of Summerland, British Columbia. When he started, the future president of Okanagan Specialty Fruits didn’t have grand plans for upending the industry. But in his first few seasons he was struck by just how many apples (and how much money) he had to throw away on account of browning from the routine bumps and jostles of transit and packaging. Most years it was around 40 percent of his crop.

When you cut an apple, or handle it roughly, its cells rupture, and compounds that had been neatly compartmentalized come in contact with each other. When that happens, an enzyme called polyphenol oxidase, or PPO, triggers a chemical reaction that produces brown-colored melanin within just a few minutes. Carter thought there had to be a way to breed or engineer around that. So when he came across Australian researchers already doing it in potatoes, he wasted no time in licensing their technology, a technique known as gene silencing. Rather than knocking out a gene, the idea is to hijack the RNA instructions it sends out to make a protein.

The problem, Carter found out later, was that apples were a lot more complicated, genetically speaking, than the potato. In taters, the browning enzyme was coded into a family of four sub-genes that were chemically very similar. All you had to do was silence the dominant one, and it would cross-react with the other three, taking them all down in one go. Apples, on the other hand, had four families of PPO genes, none of which reacted with the others. So Carter’s team had to design a system to target all of them at once—no simple task in the early aughts.

To do it, the Okanagan scientists inserted a four-sequence apple gene (one for each member of the apple PPO family) whose base pairs run in reverse orientation to the native copies. To make sure it got expressed, they also attached some promoter regions taken from a cauliflower virus. The transgene’s RNA binds to the natural PPO-coding RNA, and the double-stranded sequence is read as a mistake and destroyed by the cell’s surveillance system. The result is a 90 percent reduction in the PPO enzyme. And without it, the apples don’t go brown.

It took Okanagan years to perfect the technique, which was subject to regulatory scrutiny on account of the viral DNA needed to make it work. Today, with the arrival of gene editing technologies like Crispr/Cas9, turning genes on and off or adding new ones has become much more straightforward. Del Monte is already growing pink pineapples, Monsanto and Pioneer are working on antioxidant-boosted tomatoes and sweeter melons, J.R. Simplot has a potato that doesn’t produce cancer-causing chemicals when it’s fried. Smaller startups are busy engineering all kinds of other designer fruits and veggies. And it’s not obvious how exactly this new wave of gene-edited foods will be regulated.

Gene editing gets around most of the existing laws that give the Food and Drug Administration and the Department of Agriculture authority over biotech food crops. In January, the Obama administration proposed a rule change that would look more closely at gene-edited crops before automatically approving them. But earlier this month the USDA withdrew that proposed rule, citing science and biotech industry concerns that it would unnecessarily hinder research and development.

Carter, whose fruits were cleared by the USDA and the FDA in 2015, says his Arctic Apples are evidence the existing process works. But there were times when he wasn’t sure they were going to make it. “It took us close to 10 years, where we had the apples, we had the data, we kept submitting answers to questions, and then wouldn’t hear anything back,” says Carter. “It’s a bit of a black hole, and that whole time you’re not sure if you’re going to even be able to pay your electricity bills and keep your lights on.”

Talking to Carter, Okanagan still feels like a small family business, especially when he says the word “process” with that endearing, long Canadian “O”. This year’s Arctic Apple harvest amounted to 175,000 pounds—just a drop in the apple bucket. But shortly after its US regulatory approvals, his company was acquired by Intrexon Corporation, a multinational synthetic biology conglomerate that owns all the other big-name GMOs you might have heard of. Like Oxitec’s Zika-fighting mosquitoes, and the fast-growing AquAdvantage salmon.

That’s one reason customers might be wary of the Arctic Apple. Another is transparency. While Carter says they’re taking that literally—the bags have a plastic see-through window to view the not-brown slices for yourself—others say Okanagan hasn’t gone far enough in telling people how its apple was made. The letters G-M-O don’t appear anywhere on the bag. Instead, in accordance with a 2016 GMO food labeling law, there’s a QR code, which you can scan with a smartphone to get more information online.

Some consumer groups think that doesn’t go far enough, but scientists counter that they’re focusing on the wrong things. “Breeding technologies are just a distraction from the big questions,” says Pam Ronald, who studies plant genetics at the University of California and who is married to an organic farmer. “Like, how do we produce enough food without using up all our soil and water? How do we reduce toxic inputs? Those are the grand challenges of agriculture today, that technology can help address.”

Ronald works on food crops designed to fight food insecurity in the developing world—like drought-resistant corn and vitamin-enriched rice. When she first heard of the Arctic Apple at a conference in 2015, she wasn’t that impressed. It’s not exactly a famine-fighter. But when Carter sent her a box of fruits a few weeks later, her kids had a different take. They brought them into school to show to their biology classes, and according to Ronald, their classmates just went wild. “Kids really hate brown apples, and it made me realize I don’t really like them either,” she says.

Living where food is abundant, most people don’t really grasp how GMOs touch their lives. “It’s that distance that consumers are removed from agriculture that creates the fear,” says Ronald. “But if you see a brown apple you’re probably aware that you throw it away, and maybe you feel guilty about that. Connecting biotechnology to something you can see and feel and taste like that could be transformational.”

One of the first things you notice about videos of calving glaciers is the utter absence of scale. The craggy chunks of ice that break away could be the size of football fields or cities or maybe even whole states—but without a point of reference it can be next to impossible to say. They are unintelligibly immense.

That perceptual effect happens in person, too. "There's no real way to determine its size just by looking at it," says New York University oceanographer David Holland, whose research team has spent a decade observing glacier behavior in Greenland. A distant, dislodged iceberg might look small at first glance, "but then you'll watch a helicopter fly towards it, and the helicopter will shrink and shrink and shrink and pretty soon it just disappears."

Which is why you probably can't tell that the newly born berg in this time-lapse video is in fact 4 miles wide, half a mile deep, and over a mile long. A sizable chunk of Greenland's Helheim glacier, it's roughly the size of lower Manhattan and weighs between 10 billion and 14 billion tons. When it dislodged from Helheim and crashed into the ocean on June 22, it accounted, in the span of just 30 minutes, for some 3 percent of the ice that Greenland is expected to contribute to the sea in 2018. Much of that contribution will come in dramatic, short-lived events such as this.

That's exactly why this video is so valuable to Holland, whose team is studying how calving glaciers could contribute to catastrophic sea-level rise across the globe. "Abrupt sea level change is only going to happen one way, and that's with some big part of western Antarctica becoming violently unstable due to calving—or not," he says. "If not, then there will not be major, abrupt sea level changes." And to model whether and how Antarctica might fall apart, you need to understand the rate and processes by which ice breaks off. Greenland's icebergs—including Helheim—serve as fabulous natural laboratories.

Glaciers often shed pieces of themselves, but only rarely do researchers capture large events on camera. In the course of his career, Holland has seen it happen just three times. (The largest calving ever filmed was shot during the production of the documentary Chasing Ice, on the 17th day of a glacier-watching stakeout.) "You can be out in the field for two weeks with your camera on and the glacier just sits there doing nothing," says Denise Holland, David's wife and the logistics coordinator for NYU’s Environmental Fluid Dynamics Laboratory and NYU Abu Dhabi’s Center for Global Sea Level Change. But that kind of documentation is essential to understanding—and modeling—how and why glaciers calve.

Consider the video above, which begins with a big so-called tabular iceberg breaking off from the main part of Helheim glacier. Almost immediately thereafter, a second type of iceberg, called a pinnacle iceberg, can be seen calving off toward camera-right. The tabular iceberg is built like a pancake: large, flat, relatively stable. But the pinnacle berg has an aspect ratio like a slice of bread. Tall and skinny, it wants to lie down, so as it separates from the glacier, bottom first, its feet shoot forward out from under it as it slides into the sea. Sheets of pinnacle icebergs proceed to rip off from the glacier in sequence, driving the tabular berg farther down the fjord and breaking it into smaller chunks. "It's like a house of cards: One piece falls off, and the rest of the pieces peel off one after the other," David says. "It's complete chaos."

That chaos can be difficult to model. Look closely and you'll notice that not all of the pinnacle icebergs in this video detach from the bottom first. Some separate from the top, reflecting a different type of structural failure. Different structural failures occur at different rates. If you don't know what those rates are, it's hard to say how accurate your models are.

"If you're going to project sea levels, you need to pass through the eye of the needle first and get the delivery of ice to the ocean correct—and that's not possible right now," David says. "It could be in the future, with more observation and more modeling, but this event had too much going on for anyone to responsibly say they could predict or understand what happened."

Until that future arrives, we'll have video like this one to remind us of the enormous complexity—and just plain enormity—of calving glaciers.

Who doesn't love a good slow motion video? The Slow Mo Guys—Gav and Dan—sure do! In this video of theirs, they use a high-speed camera to capture the motion of four different bullets. And lucky for me, the motion looks to be perfect for a video analysis: They give both a reference scale (the black and white markers in the back) as well as the frame rate (100,000 frames per second).

Let's just jump right into an analysis. I will be using Tracker Video Analysis to get position and time data for each bullet after it leaves the weapon. The bullets are so small that it can be difficult to always see them—for all but the largest bullets, I can only mark the bullets when they are passing in front of the white backgrounds. Still, this should be enough for an analysis.

Now for the data. I marked all the bullet positions so you don't have to. Here's a plot of position vs. time for each one (you can also view the plotly version).

I'm pretty happy with this—however, there is a problem. During the video, Gav and Dan switched from the slow-mo view back to a commentary view because the 45 caliber bullet was taking too long. When they switched back to the slow motion view, their timing was off. You can see this in a graph of position vs. time for that bullet. Oh, you can also notice the missing data when the bullet passed in front of the black parts of the background.

But how off is it? Let me first make the assumption that the bullet has a constant velocity in the horizontal direction. If this is the case, then a linear fit to the first part of the data gives a speed of 287.6 m/s. I should add that this speed would convert to 642 mph, which is faster than the speed listed on the video at 577 mph. Perhaps the displayed frame rate is different than the recorded frame rate? Maybe Gav and Dan could give me the answer here.
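If you want to reproduce that kind of fit, it's one line of numpy. The sample points below are fabricated to lie on the reported fit line, not the actual Tracker data, so treat this strictly as a sketch of the method:

    import numpy as np

    # Hypothetical (t, x) samples standing in for the marked video positions.
    t = np.array([0.000, 0.002, 0.004, 0.006, 0.008])
    x = 287.6 * t - 0.578           # fabricated to match the reported fit

    v, x0 = np.polyfit(t, x, 1)     # slope = speed (m/s), intercept = offset (m)
    print(v, v * 3600 / 1609.34)    # ~287.6 m/s  ->  ~643 mph

The same conversion factor (3600 seconds per hour over 1609.34 meters per mile) is where the 642-ish mph figure comes from.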

Anyway, back to the data. The linear fit gives the following (approximate) equation of motion for the bullet, with position in meters and time in seconds:

x(t) = (287.6 m/s) t − 0.578 m

This equation of motion should give the position of the bullet for any time. The "jump" time is at 0.00825 seconds. The constant velocity equation says that the bullet should have a position of 1.795 meters, but the data from the video puts it at 1.886 meters. What about the reverse of this problem? If I know that the position is 1.886 meters, what time should it be? That's a pretty straightforward problem to solve algebraically. You can do that for yourself as a homework assignment, but I get a time of 0.008567 seconds. So, they were "late" by 0.000317 seconds. But wait! That's how far they were off in "real" time, but the video was played back in slow motion. It was recorded at 100,000 fps—but I assume it was displayed at 30 fps. Multiply that short interval by the slowdown factor of 100,000/30 ≈ 3,333 and you get about 1 second of playback time. That's the mistake.
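Here's a minimal sketch of that arithmetic in Python. The fit intercept is back-solved from the positions reported above, so treat the exact numbers as approximations:

    # Estimate the timing "jump" in the slow-mo footage.
    v = 287.6           # bullet speed from the linear fit (m/s)
    x0 = -0.578         # fit intercept (m), back-solved from reported positions

    t_jump = 0.00825              # time of the cut in recorded ("real") seconds
    x_expected = v * t_jump + x0  # where the bullet should be (~1.795 m)
    x_observed = 1.886            # where the video actually puts it (m)

    t_observed = (x_observed - x0) / v   # invert x(t) to get the real time
    dt_real = t_observed - t_jump        # offset in recorded time (~0.000317 s)

    slowdown = 100_000 / 30              # recorded fps / assumed playback fps
    print(round(dt_real, 6), round(dt_real * slowdown, 2))  # 0.000317 s, ~1.06 s

Played back at 30 frames per second, those 317 microseconds of recorded time stretch into about a second of video, which is the size of the glitch.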

But that's just a cosmetic error, not really what I wanted to look at. Instead, I want to know if it's possible to estimate the amount of air resistance on these bullets as they leave the muzzle. I have to admit that air resistance on bullets can be pretty tricky. When these suckers are moving super fast, the simpler models for air resistance don't always work. But no matter what, an air drag force on a bullet should push in the opposite direction of the motion of the bullet and slow it down. So I will see if I can estimate the acceleration of the bullet during this short flight.

In one dimension, the acceleration is defined as the change in velocity divided by the change in time. That can be written as the following equation:

a = Δv/Δt = (v2 − v1)/Δt

I just need to find the velocity at the beginning of the trajectory and then at the end. These will just be the slopes of the position-time graph at those two points. Then I can divide the difference by the time of flight for a rough approximation of the acceleration. Here's what I get:

  • Barrett: v1 = 934 m/s, v2 = 854 m/s, Δt = 0.0051 sec, acceleration = 15,686 m/s². This seems very high.
  • AK-47: v1 = 752 m/s, v2 = 698 m/s, Δt = 0.0062 sec, acceleration = 8,710 m/s².
  • 45 cal: v1 = 246 m/s, v2 = 242 m/s, Δt = 0.012 sec, acceleration = 333 m/s².
  • 9 mm: v1 = 351 m/s, v2 = 330 m/s, Δt = 0.0105 sec, acceleration = 2,000 m/s².
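Those numbers come straight from the Δv/Δt definition; here's a quick Python check, with velocities and times as read off the fits above:

    # Rough acceleration magnitudes: |a| = (v1 - v2) / dt for each bullet.
    bullets = {
        "Barrett": (934, 854, 0.0051),
        "AK-47":   (752, 698, 0.0062),
        "45 cal":  (246, 242, 0.012),
        "9 mm":    (351, 330, 0.0105),
    }

    for name, (v1, v2, dt) in bullets.items():
        a = (v1 - v2) / dt   # the bullet slows down, so this is a deceleration
        print(f"{name}: {a:,.0f} m/s^2")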

Since these values for the acceleration seem super high, I am going to roughly estimate the acceleration using a basic model for air drag. Here is the equation I will use:

a = (ρ A C v²) / (2m)

In this expression, ρ is the density of air (about 1.2 kg/m³), A is the cross-sectional area of the bullet, C is the drag coefficient, m is the bullet's mass, and v is its speed. I can approximate the bullet size and mass from this wikipedia page and I will just use a drag coefficient of 0.295. With these values and the velocity right out of the barrel, I get an acceleration of 624 m/s². OK, that is high—but still well below the measured acceleration. Still, I think the values from the video aren't super crazy. That bullet is moving really fast, and its interaction with the air will slow it down quite a bit—especially at first.
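Here's that estimate as a short sketch. The air density and drag coefficient are from the text; the bullet diameter and mass are my round-number assumptions for a .50 BMG round, so the result shifts with whatever values you pick:

    import math

    # Drag-based acceleration estimate: a = (rho * A * C * v**2) / (2 * m)
    rho = 1.2      # air density (kg/m^3)
    C = 0.295      # drag coefficient (from the text)
    d = 0.0130     # bullet diameter in meters (assumed, ~.50 BMG)
    m = 0.045      # bullet mass in kg (assumed, ~700 grain)
    v = 934        # Barrett muzzle velocity from the fit (m/s)

    A = math.pi * (d / 2) ** 2        # cross-sectional area
    a = rho * A * C * v**2 / (2 * m)
    print(round(a))                   # a few hundred m/s^2

With these assumptions I get roughly 450 m/s², the same ballpark as the figure above; the difference comes down to the mass and diameter you plug in.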

Of course ballistics physics can get pretty complicated, but that will never stop me from making some rough estimates.

Related Video

Science

The Slow Mo Guys Answer Slow Motion Questions From Twitter

The Slow Mo Guys (Gavin Free and Dan Gruchy) use the power of Twitter to answer some common questions about The Slow Mo Guys, The Super Slow Show, and filming in slow motion. What is their process like when coming up with new video ideas? What's their favorite video they've done? Where do they get all the food for the Super Slow Show?
The Slow Mo Guys star in the YouTube original series The Super Slow Show. Catch the final episodes April 11th.

For the first time since launching the Curiosity rover in 2011, NASA is sending a spacecraft to the surface of Mars. Exciting! Surface missions are sexy missions: Everyone loves roving robots and panoramic imagery of other worlds. But the agency's latest interplanetary emissary won't be doing any traveling (it's a lander, not a rover). And while it might snap some pictures of dreamy Martian vistas, it's not the surface that it's targeting.

InSight—short for Interior Exploration using Seismic Investigations, Geodesy, and Heat Transport—will be the first mission to peer deep into Mars' interior, a sweeping geophysical investigation that will help scientists answer questions about the formation, evolution, and composition of the red planet and other rocky bodies in our solar system.

The mission is scheduled to launch sometime this month, with a window opening May 5. When the lander arrives at Mars on November 26 of this year, it will land a few degrees north of the equator in a broad, low-lying plain dubbed Elysium Planitia. The locale will afford InSight—a solar-powered, burrowing spacecraft—two major perks: maximum sun exposure and smooth, penetrable terrain. It is here that InSight will unfan its twin solar arrays, deploy its hardware, and settle in for two years of work.

Using a five-fingered grapple at the end of a 2.4-meter robotic arm, the lander will grab its research instruments from its deck (a horizontal surface affixed to the spacecraft itself), lift them into the air, and carefully place them onto the planet's surface. A camera attached to the arm and a second one closer to the ground will help InSight engineers scope out the lander's immediate surroundings and plan how to deploy its equipment.

"Have you ever played the claw game at arcades?” asks payload systems engineer Farah Alibay. “That's essentially what we're doing, millions of miles away." The process will require weeks to prepare, plan, and execute, and involve JPL's In-Situ Instrument Lab—a simulation facility in Pasadena, California where mission planners can practice maneuvering the lander before beaming instructions to Mars. But if the InSight team can pull it off, it will be the first time a robotic arm has been used to set down hardware on another planet.

InSight has two main instruments, the first of which is the Seismic Experiment for Interior Structure, or SEIS. An exquisitely sensitive suite of seismometers, SEIS is designed to detect the size, speed, and frequency of seismic waves produced by the shifting and cracking of the Red Planet's interior. You know: Marsquakes.

"It's as good as any of the Earth-based seismometers that we have," says InSight project manager Tom Hoffman; it can measure ground movements smaller than the width of a hydrogen atom. "If there happened to be a butterfly on Mars, and it landed very lightly on this seismometer, we'd actually be able to detect that," Hoffman says. Other things it could detect, besides Marsquakes, include liquid water, meteorite impacts, and plumes from active volcanoes.

For as sensitive as it is, SEIS is damn hardy. "Seismometer designs on Earth are meant to be delicately handled, placed down, and never touched again," says lead payload systems engineer Johnathan Grinblat. SEIS's journey to Mars will be a little more exciting, what with the rocket launch, atmospheric entry, descent, and landing. "It's going to vibrate and experience lots of shocks, so it has to be robust to that," Grinblat says.

It'll also need to withstand dramatic swings in temperature; temperatures at Mars' equatorial regions can reach 70° Fahrenheit on a sunny summer day, and plummet as low as -100° Fahrenheit at night. To make sure it does, InSight engineers matryoshka'd its instruments inside multiple layers of protection. The first is a vacuum-sealed titanium sphere, the second an insulating honeycomb structure. The third is a domed wind and thermal shield that will cover the sensors like a high-tech barbecue lid.

Those systems in place, InSight will reach for its second instrument, the Heat Flow and Physical Properties Package. Also known as HP3, the 18-inch probe is effectively a giant, self-driving nail. It will jackhammer itself some 16 feet into Mars' soil—deep enough to be unaffected by temperature fluctuations on the planet's surface. "When scientists study temperature flow on Earth, they have to burrow even deeper," says Suzanne Smrekar, InSight's deputy principal investigator, because the moist soil conducts heat deep underground. "So Mars is actually pretty easy, relatively speaking."

Tell that to the probe. Its descent through the Martian terrain will take weeks. As it burrows, it will pause periodically to measure how effectively the surrounding soil conducts heat. Temperature sensors will trail the probe on a tether, like thermometric beads on a string. Together, the temperature readings and conductivity measurements will tell InSight's scientists how much heat is emanating from the planet's insides—and that heat, or lack of it, will help tell researchers what the planet is made of, and how its composition compares to Earth's.

But before InSight takes Mars' temperature and listens for quakes, it'll have to launch, brave the desolate wilds of interplanetary space, and land. Exciting? Unquestionably. But also: "Everything about going to Mars is terrifying," Alibay says. "We're launching on a rocket that is a barely controlled bomb. We're going through six months of vacuum, being bombarded by solar energetic particles. We're going to a planet that we have to target, because if we miss it, we can't just turn around. And we have to land. And once we're on the surface, doing the deployments, any number of things could go wrong."

Alibay's not a pessimist. She's an engineer; anticipating misfires and miscalculations, she says, is part of the job description. Plus, she knows her history: Fewer than 50 percent of Mars missions succeed. "Not because we don't know what we're doing," she says, "but because it's really hard."

Not that that should ever prevent NASA from trying. After all: We do not go to space because it is easy.

More Mars

  • Check out the clean room where NASA prepared InSight for launch.

  • Researchers recently discovered clean water ice just below Mars' surface. InSight could detect even more.

  • Go behind the scenes as NASA tests the most powerful rocket ever, part of the agency's decades-spanning effort to send astronauts to explore asteroids, Mars, and beyond.

Related Video

Science

NASA Discovers Evidence for Liquid Water on Mars

For years, scientists have known that Mars has ice. More elusive, though, is figuring out how much of that water is actually in liquid form. Now, NASA scientists have found compelling evidence that liquid water—life-giving, gloriously wet H2O—exists on Mars.

It took a while, but Russia finally got body-checked out of the Olympic Games. The road to ruin began in 2015, when two Russian track athletes-turned-whistleblowers raised suspicion about widespread state-sponsored doping at the 2012 London Games, followed by an independent report about problems at the 2014 Sochi Winter Olympics. Now, the International Olympic Committee has slammed the door on Russia's Olympic dreams, accusing the country of running a state-sponsored program involving more than a thousand athletes since 2011. The Russian team and all of its sports officials were banned from the upcoming winter games in PyeongChang, South Korea, in February, although individual Russian athletes who prove they are clean could compete under a neutral flag.

IOC president Thomas Bach announced the ban at a press conference in Lausanne, Switzerland, on Tuesday, citing a 17-month investigative report by Samuel Schmid, Switzerland’s former president. “The report clearly lays out an unprecedented attack on the integrity of the Olympic games and sport,” Bach said. “As an athlete myself, I’m feeling very sorry for all the clean athletes who have suffered from this manipulation.”

Despite the earlier warnings of Russian foul play, Bach said Tuesday the IOC didn’t have all the information needed back then to make its decision. The Schmid report detailed the structure of the Russian sports bureaucracy and how it is intertwined with the Russian government. It also gave interesting details about how Russian intelligence agents were able to unlock tamper-proof urine sample bottles—using a dental instrument and a lot of hard work.

Swiss forensic investigators spent two months unlocking the secrets of the supposedly impregnable BEREG-KIT bottles. These Swiss-made bottles are considered sealed tight after five clicks of the sealing ring, with a maximum closure of 15 clicks. But by using a long, thin, pointy metal instrument, the investigators were able to jimmy open the seal by carefully inserting it into the plastic ring and pushing it up. The process left tiny scratches on the inside of the bottle, though they were only visible under a microscope. That's how investigators were able to identify tampering in the Sochi samples.

More than a quarter of the Russian urine samples were likely tainted or swapped out with clean urine collected from the same athletes months earlier. The report also found that suspect Russian samples contained salt levels several times higher than those found in the human body, apparently from salt used to reconstitute the swapped urine.

The Russians didn’t invent any new performance-enhancing drugs for Sochi. “They just bought them from the pharmacy,” says Mark Johnson, a San Diego-based author who has written about doping in sports. “It shows that when you take the resources of a government, both the scientific and financial resources and their research resources, and apply it to a problem, they can find a solution,” Johnson says. “If it is one athlete trying to pry open a bottle, you can’t do it.”

The culture of doping in Russia and the Soviet Union goes back to the 1960s, when success in sports brought glory to the nation. Between 1968 and 2017, Russian athletes were stripped of 50 Olympic medals—including a third of the 33 they won at the Sochi games.

The IOC’s Bach said that international athletes who finished behind the Russians who doped will get a special ceremony in South Korea. “We will do our best to reorganize ceremonies in PyeongChang in 2018 to try to make up for the moment they missed on the finish line or the podium,” Bach said. “The IOC will propose or will be taking measures for a more robust anti-doping system under [the World Anti-Doping Agency] so that something like this cannot happen again.”

More on the Olympics

  • Eric Niiler: Olympic Drug Cops Will Scan for Genetically Modified Athletes

  • Chelsea Leu: It’ll Be Really Hard to Test for Doping at the Rio Olympics

  • Emma Grey Ellis: What Would Happen if the Olympics Banned Russia?

But Johnson and other critics are skeptical that the IOC’s Russia ban, or any new testing system it institutes, will make a difference in future Olympic Games. He notes that record-breaking athletic performance—regardless of whether it is the result of drugs, or perhaps soon of gene-editing techniques—is part of what draws TV viewers, advertisers, and national prestige.

“The objective of pro sports is to entertain and push the boundaries of performance; it’s not to be a moralistic teacher or imposer of values,” says Johnson. Of course, Olympic officials beg to differ. They point to the stated "Olympism" values of fair play, ethics, and hard work.

But sports and national prestige will always go together, and Russians have long since decided doping is worth the risk. Vitaly Mutko, Russia’s former minister of sport, was implicated as having a direct role in the doping program by the Schmid report and an earlier 2015 investigation by WADA. On Tuesday, the IOC gave him a lifetime ban. But even though Mutko can’t go to the Olympics, he won't be leaving sports behind. Now the country’s deputy prime minister, he is also president of the Russian football union—and next summer, he will be an official host of Russia’s World Cup soccer tournament.

Related Video

Sports

New Technology At The Olympics

This new technology is changing the Olympic Games.