Three hundred and sixty-four days ago, Jiwoo Lee’s friends helped her celebrate her 18th birthday by baking her a Rice Crispr cake. They bedecked the gooey, cereal-based treat with blue and red frosted double helices in honor of her favorite high school hobby—gene editing. Lee, who won top awards at the 2016 Intel International Science and Engineering Fair, is one of the youngest champions of the “Crisprize everything!” brigade. Her teenage passion and talent with the molecular tool even caught the eye of Crispr co-discoverer Jennifer Doudna. On Monday, the eve of her 19th birthday, Lee explained to the audience at the WIRED25 Summit how Crispr works, and her hopes that the disruptive technology might one day snip away all human disease.

“The next five to ten years hold enormous potential for discovery and innovation in medicine,” she said. Now a sophomore at Stanford, Lee described just how quickly things are moving. In the last year alone scientists have used Crispr to annihilate malaria-causing mosquitoes, cure Huntington’s disease in mice, and supercharge human immune cells to better seek and destroy cancer.

Crispr-based cancer treatments are of particular interest to Silicon Valley’s tech elite. The first human trial in the US kicked off this year, financed by the Parker Institute for Cancer Immunotherapy, a charity set up by Sean Parker of Napster and Facebook fame.

“At some point I got frustrated with the monoculture of the consumer internet world,” he remarked onstage. “It was unsatisfying spending all our time making products that were as addictive as possible.” And working with scientists like Alex Marson, a biologist and infectious disease doctor at UC San Francisco, takes him back to a time when the work, not the valuation, was the true reward.

“Where we are now with biotech feels quite a bit like where we were with information technology in the late 1990s,” said Parker. “When we were just interested in building these products that we thought would make the world a better place.”

Marson is a pioneer in the field, using Crispr to rewire T cells—the immune sentinels responsible for attacking bodily threats. In a recent Nature paper, he showed that with the right mix of genome-editing machinery and a zap of electricity, it was possible to rewrite vast stretches of code to give T cells dramatic new functions. That means they can be made more effective at killing cancer, becoming an assassin squad of Manchurian candidates targeting tumors. But that’s just the beginning.

“We think we’ll be able to start putting in new logic to the underlying code of immune cells to treat broad spectrums of disease, not just cancer,” said Marson. While he was hesitant to attempt a tech analogy in front of Parker, it was hard to avoid. Scientists have begun to think about cells as hardware, and the DNA inside them as the software that tells them what to do. Marson noted that advances in the ability to make and edit vast quantities of human cells have already delivered the first cell-based medicines to market—the first treatments, for cancer, were approved last year by the US Food and Drug Administration.

With the hardware problem largely solved, Marson believes the next step is to get better at building the instruction packages. And Crispr is the tool that’s making it possible. “It’s helping us iterate faster and faster to discover which software programs will work for which diseases.”

The government’s new weather forecast model has a slight problem: It predicts that outside temperatures will be a few degrees colder than what nature delivers. This “cold bias” means that local meteorologists are abandoning the National Weather Service in favor of forecasts produced by British and European weather agencies.

For the past few weeks, the National Weather Service has been forecasting snowfall that ends up disappearing, according to Doug Kammerer, chief meteorologist at WRC-TV in Washington, DC. “It’s just not performing well,” Kammerer says. “It has continued to show us getting big-time snowstorms in this area, where the European model will not show it.”

The new model, known as GFS-FV3 (Finite Volume on a Cubed Sphere dynamical core), has often overpredicted snow in the Northeast Corridor between Washington and Boston, a region where incorrect forecasts affect the lives of tens of millions of people.

The existing NWS forecast model, called the Global Forecast System, or GFS, has long been considered second in accuracy to the European models. Now Kammerer and others say the new FV3 upgrade is worse than the forecast model put out by our neighbors to the north. “The running joke now among meteorologists is that [the FV3] is looking more like the Canadian model,” Kammerer says. For those not plugged into weather humor, apparently the Canadian model also predicts big snowstorms that ultimately vanish.

The FV3 was developed over the past three years by NOAA’s Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. FV3 forecasts were released a few weeks ago for testing by local meteorologists, and many of them took to Twitter to complain about the results. “I have no faith in the FV3 [for snowfall],” tweeted Boston-based Judah Cohen, a meteorologist at Atmospheric Environmental Research, a private firm that provides forecasts to commercial and government clients.

On Wednesday, the National Weather Service tweeted that the FV3 will be fully operational on March 20. But an NWS official told WIRED on Friday that the agency might push that date back a few weeks because of all the complaints.

The FV3 upgrade uses an enhanced set of algorithms that have been developed in the past few years by climate scientists to describe the interaction between the atmosphere and the oceans. These algorithms, which capture the physics of cloud formation, tropical storms, and polar winds, among other things, are then populated with temperature data from satellites and surface observations to generate a three- or ten-day forecast.

“No model is perfect,” says David Novak, acting director of the NWS’ National Centers for Environmental Prediction. “The weather community knows this.” Novak acknowledges that the FV3 has a “cold bias” and that the agency is working to fix it. “It tends to be colder than what is observed. It appears to be a systematic issue; we are doing our due diligence and investigating these reports.”

Novak says the 35-day government shutdown slowed final testing of the FV3. When federal climate scientists and programmers got back to work on January 25, the agency expected the model to be almost ready to go live. It looks like that deadline will now be pushed back.

He argues, however, that the FV3 isn’t all bad. He says it produces more accurate forecasts of hurricane intensity and the jet stream, the current of high-altitude air around the northern hemisphere that drives much of the United States’ weather patterns. “We found a lot of the good things,” Novak says. “We do know there are some areas that may need additional improvement.”

NOAA recently signed an agreement with the National Center for Atmospheric Research, a Boulder-based research facility that also develops forecast models. Antonio Busalacchi, director of NCAR’s parent agency, says he’s optimistic that the new NWS model will get better over time. “It’s premature to evaluate any one modeling system based on a snapshot with snowfall forecasts,” Busalacchi says. “One needs to look at the totality of the system.”

At the same time, Busalacchi says that NWS and its parent agency, NOAA, might want to rely on help from academic scientists who are developing their own forecast models. “We want to get in a position where the research community and operational community are more collaborative than we have been in the past,” he says.

As for Kammerer, he says he'll keep watching the new NWS model as he prepares his own forecast for the weather in Washington, DC. But maybe not for the next snow day.

In 2015, the 12-person organizing committee of the first International Summit on Human Gene Editing—which included Crispr co-inventors Jennifer Doudna and Emmanuelle Charpentier—issued a statement on how the world should responsibly push forward the science of permanently altering the DNA of Homo sapiens. Given how easy Crispr is to use, combined with other changes that had sprung biology free from the ivory tower, the concern was that someone with a modicum of expertise would go rogue and start a Crispr-It-Yourself baby project.

Only three years later, we now know that someone is named He Jiankui. On Wednesday, the Chinese-born, American-trained scientist shared details of the first such experiment, in which He claims to have Crispr’d a pair of twin girls and implanted one additional edited embryo in a woman’s womb. The news was met with nearly universal condemnation from the world’s scientists. Chinese government authorities ordered an investigation, calling it a brazen violation of Chinese law and a breach of an ethical bottom line, “which is both shocking and unacceptable.”

On top of all these alleged transgressions, He may have committed one more: the unauthorized use of Crispr components intended only for research purposes. According to the consent form he gave to potential parents, He’s team was using materials purchased from two American biotechnology companies to make edits to human embryos bound for implantation. The documents named Massachusetts-based Thermo Fisher Scientific as the supplier of Cas9—the bacterial protein that clamps onto DNA and delivers a double-stranded slice—and Bay Area startup Synthego as the maker of its synthetic guide RNA. If gene editing were a butcher shop, Thermo Fisher would craft the knives and Synthego would instruct on which cuts to make.

He may have spent the last two years working in secret, but he hasn’t been working in a void. American Crispr companies, many of them founded or advised by the field’s biggest stars, have also been hard at work to lower the costs and labor associated with gene editing. Their mission is to make Crispr accessible to everyone. And now they’re getting a lesson in what democratization of that technology really looks like.

“The difficult thing is that when materials leave our hands and they’re in someone else’s hands, there’s no way of ultimately controlling it,” says Paul Dabrowski, who co-founded Synthego in 2012 with his brother Michael. Back then, Doudna and Charpentier, along with fellow gene editing pioneer Feng Zhang, had just introduced the world to what was possible with Crispr—faster and easier genome manipulation than ever before. But Silicon Valley technologists like the Dabrowskis (who formerly worked as rocket engineers at SpaceX) looked at all that micropipetting and lines of hand-typed genetic code and saw an opportunity to go even faster.

They started Synthego to bring the best of Moore’s Law—miniaturization, automation, parallelization—to gene editing. By adding in some slick software and artificially intelligent design, Synthego made ordering Crispr constructs to target any human gene a matter of a few clicks, a few hundred dollars, and waiting for the FedEx driver to show up at your door.

When Doudna joined Synthego’s advisory board earlier this year, she described it as an essential company, one “poised to transform the industry by making the application of Crispr simpler, faster, and more valuable to innovators previously unable to realize its full potential,” as she put it in a press release at the time.

Of course, He’s team could have obtained Crispr components elsewhere or made them from scratch in his lab at Southern University of Science and Technology, in Shenzhen. But both time and money were of the essence in He’s race to make biological history.

At the Second International Summit on Human Genome Editing, held this week in Hong Kong, He said that he personally was covering patient medical care and experiment expenses, without funding from either of his companies or the university. And He, who trained as a biophysicist, isn’t regarded as a Crispr expert. Ordering components from a company that guarantees high editing efficiency—Synthego’s press material claims to put “high-quality gene editing results within reach of all Crispr researchers”—would have made a lot of sense for someone with more ambition than experience.

Synthego acknowledged to WIRED that its synthetic RNAs may have been used in He’s human embryonic engineering experiment, and noted that any clinical use of its products explicitly violates both the product labeling and the company’s terms of sale, which state in all caps, “FOR RESEARCH USE ONLY, AND NOT FOR HUMAN OR ANIMAL THERAPEUTIC OR DIAGNOSTIC USE.” In response to the revelations of the past few days, Synthego says it is now re-evaluating its ordering and customer screening processes.

Currently, it employs a two-step system. The first is an automated university email authentication, followed by a manual evaluation of the purchaser’s scientific resume and publications, to determine if they have a legitimate research history. Neither would likely have flagged He’s project, given that he held a post at Southern University of Science and Technology and had a solid publication record—though mostly in the adjacent field of single-cell sequencing.

Dabrowski says it’s too soon to say what future precautions the company will employ. But he’s interested in borrowing some lessons from other industries that bank on trust. In the same way Lyft and Uber built trust among drivers and riders with their star-rating schemes, maybe a transparent credit system for scientists could offer additional screening. “I don’t know if this is a solvable problem,” says Dabrowski. “But if it is, it’s going to take everyone working together, the whole research ecosystem.”

That ecosystem includes other companies. With licenses to foundational Crispr patents from Zhang, Doudna, and Charpentier’s work, Thermo Fisher is an industry-leading gene editing supply company. Besides Crispr proteins, guides, and design tools, it offers hands-on training courses and a free webinar series called “Master the Art of Crispr Editing” in English, Mandarin, and Korean. Thermo Fisher did not respond to WIRED’s questions by the time of publication.

A global consensus wasn’t enough to stop one rogue scientist from bringing Crispr babies into the world. Science, as a self-regulating enterprise, failed. If the goal for the time being is to keep the technology in check, the burden is going to have to be shared by industry as well.

This story originally appeared on Reveal and is part of the Climate Desk collaboration.

William Whitt suffered violent diarrhea for days. But once he began vomiting blood, he knew it was time to rush to the hospital. His body swelled up so much that his wife thought he looked like the Michelin Man, and on the inside, his intestines were inflamed and bleeding.

For four days last spring, doctors struggled to control the infection that was ravaging Whitt, a father of three in western Idaho. The pain was excruciating, even though he was given opioid painkillers intravenously every 10 minutes for days.

His family feared they would lose him.

“I was terrified. I wouldn’t leave the hospital because I wasn’t sure he was still going to be there when I got back,” said Whitt’s wife, Melinda.

Whitt and his family were baffled: How could a healthy 37-year-old suddenly get so sick? While he was fighting for his life, the U.S. Centers for Disease Control and Prevention quizzed Whitt, seeking information about what had sickened him.

Finally, the agency’s second call offered a clue: “They kept drilling me about salad,” Whitt recalled. Before he fell ill, he had eaten two salads from a pizza shop.

The culprit turned out to be E. coli, a powerful pathogen that had contaminated romaine lettuce grown in Yuma, Arizona, and distributed nationwide. At least 210 people in 36 states were sickened. Five died and 27 suffered kidney failure. The same strain of E. coli that sickened them was detected in a Yuma canal used to irrigate some crops.

For more than a decade, it’s been clear that there’s a gaping hole in American food safety: Growers aren’t required to test their irrigation water for pathogens such as E. coli. As a result, contaminated water can end up on fruits and vegetables.

After several high-profile disease outbreaks linked to food, Congress in 2011 ordered a fix, and produce growers this year would have begun testing their water under rules crafted by the Obama administration’s Food and Drug Administration.

But six months before people were sickened by the contaminated romaine, President Donald Trump’s FDA – responding to pressure from the farm industry and Trump’s order to eliminate regulations – shelved the water-testing rules for at least four years.

Despite this deadly outbreak, the FDA has shown no sign of reconsidering its plan to postpone the rules. The agency also is considering major changes, such as allowing some produce growers to test less frequently or find alternatives to water testing to ensure the safety of their crops.

The FDA’s lack of urgency dumbfounds food safety scientists.

“Mystifying, isn’t it?” said Trevor Suslow, a food safety expert at the University of California, Davis. “If the risk factor associated with agricultural water use is that closely tied to contamination and outbreaks, there needs to be something now. … I can’t think of a reason to justify waiting four to six to eight years to get started.”

The deadly Yuma outbreak underscores that irrigation water is a prime source of foodborne illnesses. In some cases, the feces of livestock or wild animals flow into a creek. Then the tainted water seeps into wells or is sprayed onto produce, which is then harvested, processed and sold at stores and restaurants. Salad greens are particularly vulnerable because they often are eaten raw and can harbor bacteria when torn.

After an E. coli outbreak killed three people who ate spinach grown in California’s Salinas Valley in 2006, most California and Arizona growers of leafy greens signed agreements to voluntarily test their irrigation water.

Whitt’s lettuce would have been covered by those agreements. But his story illustrates the limits of a voluntary safety program and how lethal E. coli can be even when precautions are taken by farms and processors.

Farm groups contend that water testing is too expensive and should not apply to produce such as apples or onions, which are less likely to carry pathogens.

“I think the whole thing is an overblown attempt to exert government power over us,” said Bob Allen, a Washington state apple farmer.

While postponing the water-testing rules would save growers $12 million per year, it also would cost consumers $108 million per year in medical expenses, according to an FDA analysis.

For Whitt and his family, his illness has been traumatic as well as costly. After returning home from his nine-day hospital stay, he relied on narcotic painkillers for about six weeks. The infection caused a hernia and tore holes in the lining of his stomach that surgeons had to patch with mesh. Five months later, he still has numbness from the surgery and diarrhea every week.

Whitt and his wife said it is irresponsible for the FDA to postpone the water-testing requirements when officials knew that people like Whitt could pay a hefty price.

“People should be able to know that the food they’re buying is not going to harm them and their loved ones,” Melinda Whitt said. “At this point, we question everything that goes into our mouths.”

FDA shows no urgency

The federal government often requires water testing to protect the public: Tap water is tested to make sure it meets health standards, and so are beaches, lakes and swimming pools.

But under the Trump administration plan, large growers wouldn’t have to start inspecting their water systems and annually test surface waters for pathogens until 2022.

Then they will have an additional two years to ensure irrigation water that comes in contact with vegetables and fruit does not contain E. coli above a certain concentration.

For the smallest farms, inspections and annual testing will begin in 2024, and they will have until 2026 to meet E. coli standards.

That means full compliance with the safeguards wouldn’t come until 20 years after three people died from eating California spinach, 15 years after Congress passed the Food Safety Modernization Act and eight years after Whitt and more than 200 others were sickened by romaine lettuce.

While the delay is just a proposal for now, the FDA has assured growers that it will not enforce the requirements in the meantime.

FDA officials declined interview requests. But a spokeswoman said the agency proposed the delay to ensure the testing requirements are effective.

“The Yuma outbreak does indeed emphasize the urgency of putting agricultural water standards in place, but it is important that they be the right standards, ones that both meet our public health mission and are feasible for growers to meet,” FDA spokeswoman Juli Putnam said in response to written questions.

In addition, the FDA did not sample water in a Yuma irrigation canal until seven weeks after the area’s lettuce was identified as the cause of last spring’s outbreak. And university scientists trying to learn from the outbreak say farmers have not shared water data with them as they try to figure out how the contamination occurred and how to avoid future outbreaks.

Why farmers should test water

The FDA has yet to unravel the mystery of how the Yuma romaine sickened so many people. But irrigation water is a “viable explanation,” the FDA said in an August update. Analysis of water samples from canals detected E. coli with the same genetic fingerprint as the bacteria that sickened Whitt and others. A large cattle feedlot is under investigation as a possible source.

The romaine outbreak is reminiscent of the 2006 spinach outbreak, which sickened at least 200 people in 26 states, killing a 2-year-old boy and two elderly women. Inspectors traced the E. coli strain to a stream contaminated with feces from cattle and wild pigs that then seeped into well water.

Many growers irrigate with water straight from streams or wells without testing it for pathogens. Pathogens from water can be absorbed by a plant’s roots. A CDC review reported that almost half of all foodborne illnesses from 1998 to 2008 were caused by produce.

Scientists from Rutgers, The State University of New Jersey, found in 2014 that investigations of tainted produce “often implicate agricultural water as a source of contamination.” Another study by FDA researchers in May noted that salmonella in irrigation water “has been regarded as one of the major sources for fresh produce contamination, and this has become a public health concern.”

In the wake of the public outcry over the spinach outbreak, California and Arizona suppliers of salad greens created their own voluntary safety program in 2007. Since then, water testing has become commonplace in the Salinas Valley, known as the nation’s “salad bowl” because about 60 percent of all leafy greens are grown there.

On one recent foggy summer morning, Gary and Kara Waugaman stood in the fields of a ranch near the Salinas Valley town of Watsonville. The Waugamans are food safety coordinators for Lakeside Organic Gardens, a vegetable grower and shipper. Clad in neon vests and jeans, they drove from field to field, examining soil, surveying plants and testing water.

“We got red chard, green chard, rainbow chard, green kale, red kale, lacinato and then collards,” Gary Waugaman said, pointing at row after row of colorful leafy plants.

Kara Waugaman stepped onto an open, concrete-lined reservoir. A single duck floated on the surface. It appeared clean, “but you can’t tell anything by looking,” she warned.

In her years of testing water from this underground well, she never has found a sample with fecal contamination high enough to violate industry standards. Using a special stick, she dipped a small glass bottle into the reservoir; it disappeared with a tiny glug, then emerged full of clear water.

Next, the Waugamans drove to another farm. Baby Brussels sprouts poked out of leafy plants. A powerful rotating sprinkler showered Kara Waugaman as she ran toward it and quickly filled a small bottle.

For about 10 years, the Waugamans have sent samples to a laboratory that tests for generic E. coli. If a certain concentration of what is known as “indicator” bacteria is detected, it could be a sign of more dangerous pathogens like the one that sickened Whitt.

The two farms the Waugamans visited that day participate in the voluntary California Leafy Greens Marketing Agreement. Members test agricultural water once a month and submit to audits by state inspectors.

Mike Villaneva, the agreement’s technical director, said he hopes growers elsewhere soon will get on board with water testing.

“Our feeling is that everyone ought to know their water quality, and the only way you know that is by testing,” he said.

But if the Yuma farms were voluntarily testing their water for pathogens, how did E. coli contaminate the lettuce? There may never be an answer.

“Everyone is in shock because the (growers) really felt their (voluntary) program would prevent not every and all sporadic illnesses, but a large outbreak like this,” Suslow said. “They’re reeling with that failure and working to figure out what to do to prevent it from taking place again.”

He hopes this failure will persuade them to give researchers access to water data collected before the romaine outbreak and in the future.

Villaneva and Gary Waugaman said the monthly testing is not foolproof; it minimizes, but doesn’t eliminate, the risks. Also, pathogens from livestock and other animals can get into crops from wind, dust and other means.

The contaminated lettuce likely came from multiple farms. But the only grower named so far, Harrison Farms, is a member of the Arizona alliance that agreed to follow the voluntary safety measures, including water testing.

Harrison Farms said in a statement that it has tested its irrigation water on a monthly basis for the past 10 years and that it met federal standards for E. coli during the last growing season. The farm said its fields and water supply “underwent a thorough investigation” by the FDA in May that “did not yield any significant findings.”

Although the federal rules may not have prevented the Yuma outbreak, experts say they could help prevent the next one. The requirements would have been mandatory nationwide and applied to all produce.

But Patty Lovera of Food & Water Watch, a Washington, D.C.-based group that advocates for safe food and water, called the Obama-era rules “unmanageable.” She said produce contaminated by tainted water is unacceptable, but so is shutting down small farms that can’t afford the testing.

“It’s a terrible situation,” she said. “The (federal rule) solution could have a lot of casualties. That’s not acceptable either.”

Stuart Reitz thinks onion growers shouldn’t have to test water at all.

“We haven’t seen any evidence that there’s contamination of onions from any pathogenic bacteria in irrigation water,” said Reitz, a scientific adviser to the Malheur County Onion Growers Association in Oregon.

Allen, the Washington apple farmer, estimates that it would cost him about $5,000 for the first two years of testing his irrigation water. He thinks it’s a waste of time and money because no outbreaks have been tied to the state’s apples.

“I’m not gonna test,” he said. “If they want to throw me in jail, well then, OK, guess I have to go to jail.”

FDA to growers: ‘Keep doing what you’re doing’

The FDA’s deference to growers was on full display at a February meeting, two months before the romaine outbreak made national headlines.

During a two-day recorded workshop with growers and other industry officials, Stephen Ostroff, the FDA’s deputy commissioner for foods and veterinary medicine, told growers that federal scientists had investigated “far too many produce-related outbreaks over the years where water turned out to be the culprit. There is no question that to reduce the risk of contamination of produce by the water that’s used on the crops, we need water standards.”

But Ostroff reassured the audience members that the FDA wants their feedback to develop new “requirements that are less burdensome while protecting public health.”

“We see revisiting the water standards as a collaboration with stakeholders, including all of the stakeholders in this room,” he said.

“All options are on the table, including reopening the rule,” he told them.

The safety requirements would not be implemented anytime soon, FDA officials told the group.

“Rather than kind of rushing to make a set decision, (we’re) just focusing on, you know, working with you guys for now,” said FDA staff fellow Chelsea Davidson.

James Gorny, a former industry lobbyist whom the FDA hired in February to implement produce safety rules, told the group that the agency would not ask anything of growers in the interim.

“The FDA has clearly stated, ‘Keep doing what you’re doing.’ We’re not asking you to do any more at this point in time,” he said.

Gorny’s career is a classic example of the revolving door between federal agencies and the industries they regulate.

In 2006 and 2007, Gorny was a registered lobbyist for the United Fresh Produce Association. Then he worked for the FDA as a food safety scientist for several years. In 2013, he became a vice president of another growers group, the Produce Marketing Association, which has spent $120,000 on lobbying so far this year, according to the Center for Responsive Politics, a campaign finance watchdog group.

Gorny’s hiring by the FDA mirrors a pattern across public health and environmental agencies. The Trump administration has appointed dozens of former industry officials and lobbyists to relax regulations designed to protect public health.

Whitt is suing the restaurant in Nampa, Idaho, that sold him the contaminated salads, and his anger flares when he talks about the FDA delay, as well as all the growers, shippers and processors that played a role in the outbreak and haven’t been identified yet.

“I think everybody is at fault,” Whitt said.

Now his family doesn’t trust the nation’s food supply.

“I’m terrified to eat vegetables,” Whitt said. “I won’t eat them unless they’re cooked. We won’t eat salads. I personally think it’s a broken system right now.”


If you are a Spider-Man fan (like me), surely you've seen Spider-Man: Homecoming. It's great.

OK, you know what comes next. Here is the part where I look at a particular scene and talk about the physics. It's just what I do, I can't help myself. But even if there is a physics problem, I still love the movie.

SPOILER ALERT

There, you have been warned—just in case.

Now I will describe the scene without revealing any major plot points. So Spider-Man is on a plane. No, not flying inside a plane. He is literally ON the top of the plane. This plane is crashing. It's not a nose-dive crash, but more like a crash landing. The aircraft hits an empty beach and makes a spectacular display of debris flying all over the place as it slides to a stop. The impact throws Spider-Man off the plane. He falls back onto the beach along with the rest of the aircraft and cargo pieces—much of it on fire.

But wait! That's not what would happen. The key physics idea involved in this scene is the momentum principle. This says that a net force changes the momentum of an object and the momentum is the product of mass and velocity. If you want Spider-Man to slow down (and move off the back of the plane), then there must be a force pushing in that direction to decrease his momentum. Here is a diagram that might help.
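In symbols (a compact restatement of the momentum principle, with p for momentum, m for mass, v for velocity, and F_net for the net force):

$$\vec{p} = m\vec{v}, \qquad \vec{F}_{\text{net}} = \frac{\Delta \vec{p}}{\Delta t}$$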

Yes, there is a force pushing on the plane. It's a frictional force due to the interaction between the body of the plane and sand. This backward force decreases the momentum of the plane and makes it slow down. But what about Spider-Man? You could argue that he also has a backward-pushing force from the air pushing on him as he moves forward. That might be true, but it would be a very small backward force. Remember that in order for Spider-Man to fall off the plane he needs to slow down MORE than the plane. This tiny force won't do the job.

How about a nice physics demo showing what should happen? Here is a cart on a low friction track with a small block on top. The cart is like the plane and the block is supposed to be Spider-Man. On one section of the track, there is some paper to create a frictional force pushing in the opposite direction as the momentum of the plane—this friction slows down the plane. Watch what happens.

What? Spider-Man (the block) falls FORWARD off the plane (the cart). This is what should happen. The plane slows down, but Spider-Man doesn't. He would fall off the plane because it slows down more than him and he would end up in front of the plane. What about all the debris? Well, that might end up behind the plane. If stuff is falling off the plane because of an interaction with the ground, then there will be a frictional force pushing it backward and slowing it down. It could indeed end up behind the plane.
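If you want to check the momentum bookkeeping yourself, here is a rough numerical version of that demo in Python. All of the masses, speeds, and friction values are made up for illustration:

```python
# Toy model of the cart (the plane) and the block (Spider-Man).
# The cart slides over a strip of paper that exerts friction on it;
# the block on top feels only a tiny air-drag force.
# All numbers are made up for illustration.

dt = 0.001                   # time step (s)
g = 9.8                      # gravitational field (N/kg)

m_cart, v_cart = 2.0, 1.0    # cart mass (kg) and speed (m/s)
m_block, v_block = 0.2, 1.0  # block starts moving with the cart

mu = 0.3                     # friction coefficient, cart on paper
drag = 0.001                 # tiny backward force on the block (N)

x_cart = x_block = 0.0
for _ in range(1000):        # simulate one second
    if v_cart > 0:
        # Momentum principle: dv = (F_net / m) * dt
        friction = mu * (m_cart + m_block) * g
        v_cart = max(v_cart - (friction / m_cart) * dt, 0.0)
    v_block = max(v_block - (drag / m_block) * dt, 0.0)
    x_cart += v_cart * dt
    x_block += v_block * dt

print(f"cart traveled {x_cart:.2f} m, block traveled {x_block:.2f} m")
# The block ends up well ahead of the cart: "Spider-Man" falls forward.
```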

So this scene has a physics mistake. It's not a big deal—but it does bring up an interesting point. Why doesn't this particular scene throw viewers off with its incorrect physics? The answer is that the motion of Spider-Man agrees with the common sense idea about force and motion. The idea (which does not agree with the momentum principle) is that forces make things move. If you want to move at a constant speed, you need a constant force. This is a "common sense" idea because it seems to work. If you push on a book sitting on the table, it moves. When you stop pushing it, it stops. So, when the plane is sliding on the sand there should be some force (let's call it a motion force—which isn't real) pushing it forward. Once Spider-Man lets go of the plane, he no longer has a force pushing him forward so he stops and falls off the back.

OK, but let's say we want Spider-Man to end up behind the plane, for the plot to work. There are two ways we could make this happen. The first is to have an accelerating (speeding up) plane. As the plane speeds up, Spider-Man holds on to the plane, which creates a force to keep him speeding up too. Once he lets go of the plane, the plane speeds up but he doesn't. The result is that Spider-Man would end up behind the plane. Why would the plane accelerate? I guess it would have to be due to the engines—but if the plane is accelerating, how will it end up stopped on the sand? I don't know.

The second option is to give some backward force on Spider-Man. Suppose Spider-Man has something like a web-parachute (I don't know why). This would create a significant amount of air drag and maybe slow him down enough that he slows down more than the plane and "falls" behind it. Or maybe Spider-Man grabs something on the ground (I'm picturing a lamp post) as he moves past it. This also would exert a significant backward-pushing force to slow him down more than the plane.

OK, let's just be clear. Even though the physics in this scene doesn't completely match real life—that's OK. I still think that Spider-Man: Homecoming might be my favorite movie in the Marvel Cinematic Universe.

As a fan of science fiction and science, I have to say that The Expanse has a bunch of great science. It's not just the science in the show. The characters also seem to demonstrate an understanding of physics. One scene from the first season stands out in particular as a classic physics example.

I guess I should give a spoiler alert, but I'm not really giving away any major plot elements. But you have been warned.

OK, since you are still here let me describe the scene. Two main characters (Jim and Naomi) are running on a gangway connected to a spaceship. This gangway is inside a bigger ship that is accelerating (with the engines on) to produce artificial gravity. But wait! They are under fire. Some other dude wants to stop them from getting into the ship, so he fires his weapon. Eventually, someone shoots an important part of the bigger spaceship and its engines cut off. With no thrust, Jim and Naomi lose their artificial gravity and start floating off the gangway. They have magnetic boots, but the boots only work on the gangway. They are doomed.

Now for the physics. Jim took physics in high school and he even paid attention. Let's review the key physics concepts so that you can fully understand what he did next.

It's all about forces and momentum. A force is an interaction between two objects. Yes, you can have many objects all interacting, but you just deal with two objects at a time. Suppose the two objects are you and a ball. If you move your hand forward, you can exert a force on the ball. However, since forces deal with two objects, the ball also pushes back on you in the opposite direction. In fact, the force you exert on the ball is EXACTLY the same magnitude as the force the ball exerts back on you. That's just the nature of a force.

This leads us to the first part of Jim's physics trick. He uses his feet to push on Naomi. He pushes her up and away which means that she pushes back on him with the same force but in the opposite direction. Here is a diagram.

The second key idea is momentum. Momentum is the product of an object's mass and velocity. Physicists use the symbol "p" to represent momentum because we think that makes us cool. Actually, some claim that it has something to do with the Latin word impetus—but who knows.

Momentum has a connection to forces with the momentum principle. This says that the net force on an object is equal to the time rate of change of the momentum. If there is a constant force on an object for some short time interval, then the following would be true.

$$\vec{F}_{\text{net}} = \frac{\Delta \vec{p}}{\Delta t}$$

Just a couple of comments. The arrows above the "F" and the "p" mean those quantities are vectors (variables with multiple dimensions). You don't really need to know about that (but here is something in case you want to look it up)—I just don't like leaving the vectors off. It makes me feel icky. The other symbol is the Δ (delta). We use that to mean "change in". So you have change in momentum divided by the change in time. But the key thing here is change. Forces change momentum.

Now let's put these two ideas together. Jim pushes on Naomi with some force, but that means there is also a force pushing on Jim with an equal magnitude but opposite direction. What do these forces do to Jim and Naomi? They change their momentums. Since their masses probably won't change (they won't), that turns into a change in velocity. If Jim has a larger mass than Naomi, he will have a smaller change in velocity to have the same change in momentum as Naomi (but in the opposite direction).

Do you see Jim's physics trick yet? OK, let's go over what happens after the artificial gravity stops. First, why do they float up anyway? Is it because you float in zero gravity? Nope. They were pushing on the gangway to support themselves. When the artificial gravity ended, they were still pushing for a short time and this increased their momentum upward.

Once they are moving away from the gangway, Jim does something rude. He pushes Naomi AWAY from the gangway. However, pushing Naomi away from the gangway means that there is a force pushing him towards the gangway. This means that Jim moves towards the gangway and Naomi moves away. But before Jim pushed Naomi, he clipped a cable to her. Then once he reaches the gangway, he can pull her in, since he can hold on to the gangway and exert a force on her without being pushed away himself. In the end, they both get to the gangway and then can use their magnetic boots. Good job, Jim.

Now for one more thing. You don't really understand something until you can model it. Well, here is a python model. I included some comments in the code for your enjoyment.
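This is a bare-bones sketch of the scene; the masses, the strength of Jim's push, and the starting drift speed are all made up for illustration:

```python
# Jim and Naomi float off the gangway; Jim pushes Naomi away, which
# (forces come in equal-and-opposite pairs) pushes Jim back toward it.
# All numbers are made up for illustration, not taken from the show.

dt = 0.01                      # time step (s)
m_jim, m_naomi = 90.0, 60.0    # masses (kg)
v_jim = v_naomi = 0.5          # both drifting away at 0.5 m/s
x_jim = x_naomi = 1.0          # distance from the gangway (m)

F, push_time = 300.0, 0.2      # Jim pushes with 300 N for 0.2 s
t = 0.0
while x_jim > 0:
    if t < push_time:
        # Equal-magnitude, opposite-direction forces change each
        # momentum (p = m*v): dv = (F / m) * dt
        v_naomi += (F / m_naomi) * dt
        v_jim -= (F / m_jim) * dt
    x_jim += v_jim * dt
    x_naomi += v_naomi * dt
    t += dt

print(f"after {t:.1f} s Jim is back at the gangway")
print(f"Naomi is {x_naomi:.1f} m out, moving at {v_naomi:.2f} m/s")
print("Jim grabs on and reels her in with the cable he clipped to her.")
```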

Feel free to edit the code and play with it. That's what models are for—to play with.

In the spring of 2017, Urmila Mahadev found herself in what most graduate students would consider a pretty sweet position. She had just solved a major problem in quantum computation, the study of computers that derive their power from the strange laws of quantum physics. Combined with her earlier papers, Mahadev’s new result, on what is called blind computation, made it “clear she was a rising star,” said Scott Aaronson, a computer scientist at the University of Texas, Austin.

Mahadev, who was 28 at the time, was already in her seventh year of graduate school at the University of California, Berkeley — long past the stage when most students become impatient to graduate. Now, finally, she had the makings of a “very beautiful Ph.D. dissertation,” said Umesh Vazirani, her doctoral adviser at Berkeley.

But Mahadev did not graduate that year. She didn’t even consider graduating. She wasn’t finished.

For more than five years, she’d had a different research problem in her sights, one that Aaronson called “one of the most basic questions you can ask in quantum computation.” Namely: If you ask a quantum computer to perform a computation for you, how can you know whether it has really followed your instructions, or even done anything quantum at all?

This question may soon be far from academic. Before too many years have elapsed, researchers hope, quantum computers may be able to offer exponential speedups on a host of problems, from modeling the behavior around a black hole to simulating how a large protein folds up.

But once a quantum computer can perform computations a classical computer can’t, how will we know if it has done them correctly? If you distrust an ordinary computer, you can, in theory, scrutinize every step of its computations for yourself. But quantum systems are fundamentally resistant to this kind of checking. For one thing, their inner workings are incredibly complex: Writing down a description of the internal state of a computer with just a few hundred quantum bits (or “qubits”) would require a hard drive larger than the entire visible universe.

And even if you somehow had enough space to write down this description, there would be no way to get at it. The inner state of a quantum computer is generally a superposition of many different non-quantum, “classical” states (like Schrödinger’s cat, which is simultaneously dead and alive). But as soon as you measure a quantum state, it collapses into just one of these classical states. Peer inside a 300-qubit quantum computer, and essentially all you will see is 300 classical bits — zeros and ones — smiling blandly up at you.

“A quantum computer is very powerful, but it’s also very secretive,” Vazirani said.

Given these constraints, computer scientists have long wondered whether it is possible for a quantum computer to provide any ironclad guarantee that it really has done what it claimed. “Is the interaction between the quantum and the classical worlds strong enough so that a dialogue is possible?” asked Dorit Aharonov, a computer scientist at the Hebrew University of Jerusalem.

During her second year of graduate school, Mahadev became captivated by this problem, for reasons even she doesn’t fully understand. In the years that followed, she tried one approach after another. “I’ve had a lot of moments where I think I’m doing things right, and then they break, either very quickly or after a year,” she said.

But she refused to give up. Mahadev displayed a level of sustained determination that Vazirani has never seen matched. “Urmila is just absolutely extraordinary in this sense,” he said.

Now, after eight years of graduate school, Mahadev has succeeded. She has come up with an interactive protocol by which users with no quantum powers of their own can nevertheless employ cryptography to put a harness on a quantum computer and drive it wherever they want, with the certainty that the quantum computer is following their orders. Mahadev’s approach, Vazirani said, gives the user “leverage that the computer just can’t shake off.”

For a graduate student to achieve such a result as a solo effort is “pretty astounding,” Aaronson said.

Mahadev, who is now a postdoctoral researcher at Berkeley, presented her protocol recently at the annual Symposium on Foundations of Computer Science, one of theoretical computer science’s biggest conferences, held this year in Paris. Her work has been awarded the meeting’s “best paper” and “best student paper” prizes, a rare honor for a theoretical computer scientist.

In a blog post, Thomas Vidick, a computer scientist at the California Institute of Technology who has collaborated with Mahadev in the past, called her result “one of the most outstanding ideas to have emerged at the interface of quantum computing and theoretical computer science in recent years.”

Quantum computation researchers are excited not just about what Mahadev’s protocol achieves, but also about the radically new approach she has brought to bear on the problem. Using classical cryptography in the quantum realm is a “truly novel idea,” Vidick wrote. “I expect many more results to continue building on these ideas.”

A Long Road

Raised in Los Angeles in a family of doctors, Mahadev attended the University of Southern California, where she wandered from one area of study to another, at first convinced only that she did not want to become a doctor herself. Then a class taught by the computer scientist Leonard Adleman, one of the creators of the famous RSA encryption algorithm, got her excited about theoretical computer science. She applied to graduate school at Berkeley, explaining in her application that she was interested in all aspects of theoretical computer science — except for quantum computation.

“It sounded like the most foreign thing, the thing I knew least about,” she said.

But once she was at Berkeley, Vazirani’s accessible explanations soon changed her mind. He introduced her to the question of finding a protocol for verifying a quantum computation, and the problem “really fired up her imagination,” Vazirani said.

“Protocols are like puzzles,” Mahadev explained. “To me, they seem easier to get into than other questions, because you can immediately start thinking of protocols yourself and then breaking them, and that lets you see how they work.” She chose the problem for her doctoral research, launching herself on what Vazirani called “a very long road.”

If a quantum computer can solve a problem that a classical computer cannot, that doesn’t automatically mean the solution will be hard to check. Take, for example, the problem of factoring large numbers, a task that a big quantum computer could solve efficiently, but which is thought to be beyond the reach of any classical computer. Even if a classical computer can’t factor a number, it can easily check whether a quantum computer’s factorization is correct — it just needs to multiply the factors together and see if they produce the right answer.
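You can see how lopsided that is in a few lines of Python: finding the factors may be hard, but the check itself is a single multiplication.

```python
# Verifying a claimed factorization is cheap, even when finding the
# factors is hard: just multiply and compare.
def verify_factorization(n, factors):
    product = 1
    for f in factors:
        product *= f
    return product == n and all(f > 1 for f in factors)

# The factors could come from a quantum computer or anywhere else;
# the classical check is the same.
print(verify_factorization(15, [3, 5]))  # True
print(verify_factorization(15, [2, 7]))  # False
```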

Yet computer scientists believe (and have recently taken a step toward proving) that many of the problems a quantum computer could solve do not have this feature. In other words, a classical computer not only cannot solve them, but cannot even recognize whether a proposed solution is correct. In light of this, around 2004, Daniel Gottesman — a physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario — posed the question of whether it is possible to come up with any protocol by which a quantum computer can prove to a non-quantum observer that it really has done what it claimed.

Within four years, quantum computation researchers had achieved a partial answer. It is possible, two different teams showed, for a quantum computer to prove its computations, not to a purely classical verifier, but to a verifier who has access to a very small quantum computer of her own. Researchers later refined this approach to show that all the verifier needs is the capacity to measure a single qubit at a time.

And in 2012, a team of researchers including Vazirani showed that a completely classical verifier could check quantum computations if they were carried out by a pair of quantum computers that can’t communicate with each other. But that paper’s approach was tailored to this specific scenario, and the problem seemed to hit a dead end there, Gottesman said. “I think there were probably people who thought you couldn’t go further.”

It was around this time that Mahadev encountered the verification problem. At first, she tried to come up with an “unconditional” result, one that makes no assumptions about what a quantum computer can or cannot do. But after she had worked on the problem for a while with no progress, Vazirani proposed instead the possibility of using “post-quantum” cryptography — that is, cryptography that researchers believe is beyond the capability of even a quantum computer to break, although they don’t know for sure. (Methods such as the RSA algorithm that are used to encrypt things like online transactions are not post-quantum — a large quantum computer could break them, because their security depends on the hardness of factoring large numbers.)

In 2016, while working on a different problem, Mahadev and Vazirani made an advance that would later prove crucial. In collaboration with Paul Christiano, a computer scientist now at OpenAI, a company in San Francisco, they developed a way to use cryptography to get a quantum computer to build what we’ll call a “secret state” — one whose description is known to the classical verifier, but not to the quantum computer itself.

Their procedure relies on what’s called a “trapdoor” function — one that is easy to carry out, but hard to reverse unless you possess a secret cryptographic key. (The researchers didn’t know how to actually build a suitable trapdoor function yet — that would come later.) The function is also required to be “two-to-one,” meaning that every output corresponds to two different inputs. Think, for example, of the function that squares numbers — apart from the number 0, each output (such as 9) has two corresponding inputs (3 and −3).

Armed with such a function, you can get a quantum computer to create a secret state as follows: First, you ask the computer to build a superposition of all the possible inputs to the function (this might sound complicated for the computer to carry out, but it’s actually easy). Then, you tell the computer to apply the function to this giant superposition, creating a new state that is a superposition of all the possible outputs of the function. The input and output superpositions will be entangled, which means that a measurement on one of them will instantly affect the other.

Next, you ask the computer to measure the output state and tell you the result. This measurement collapses the output state down to just one of the possible outputs, and the input state instantly collapses to match it, since they are entangled — for instance, if you use the squaring function, then if the output is the 9 state, the input will collapse down to a superposition of the 3 and −3 states.

But remember that you’re using a trapdoor function. You have the trapdoor’s secret key, so you can easily figure out the two states that make up the input superposition. But the quantum computer cannot. And it can’t simply measure the input superposition to figure out what it is made of, because that measurement would collapse it further, leaving the computer with one of the two inputs but no way to figure out the other.
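Here is a toy classical sketch of that bookkeeping, using the squaring function to stand in for the trapdoor function. (The real protocol uses cryptographic functions, and no classical program can truly mimic a superposition; this only shows who ends up knowing what.)

```python
import math
import random

# The two-to-one function: f(x) == f(-x) for any nonzero x.
def f(x):
    return x * x

inputs = [x for x in range(-10, 11) if x != 0]

# The "quantum computer" holds a superposition of all inputs, applies f,
# then measures the output register and reports what it saw.
measured_output = f(random.choice(inputs))

# The input register has now collapsed to a superposition of the two
# preimages. The verifier holds the "trapdoor" (here, taking a square
# root), so the verifier knows both of them.
root = math.isqrt(measured_output)
secret_state = (root, -root)
print("verifier knows the secret state:", secret_state)

# The computer can only measure, which collapses the pair to a single
# value and destroys the other one.
computer_sees = random.choice(secret_state)
print("the computer, if it measures, learns only:", computer_sees)
```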

In 2017, Mahadev figured out how to build the trapdoor functions at the core of the secret-state method by using a type of cryptography called Learning With Errors (LWE). Using these trapdoor functions, she was able to create a quantum version of “blind” computation, by which cloud-computing users can mask their data so the cloud computer can’t read it, even while it is computing on it. And shortly after that, Mahadev, Vazirani and Christiano teamed up with Vidick and Zvika Brakerski (of the Weizmann Institute of Science in Israel) to refine these trapdoor functions still further, using the secret-state method to develop a foolproof way for a quantum computer to generate provably random numbers.

Mahadev could have graduated on the strength of these results, but she was determined to keep working until she had solved the verification problem. “I was never thinking of graduation, because my goal was never graduation,” she said.

Not knowing whether she would be able to solve it was stressful at times. But, she said, “I was spending time learning about things that I was interested in, so it couldn’t really be a waste of time.”

Set in Stone

Mahadev tried various ways of getting from the secret-state method to a verification protocol, but for a while she got nowhere. Then she had a thought: Researchers had already shown that a verifier can check a quantum computer if the verifier is capable of measuring quantum bits. A classical verifier lacks this capability, by definition. But what if the classical verifier could somehow force the quantum computer to perform the measurements itself and report them honestly?

The tricky part, Mahadev realized, would be to get the quantum computer to commit to which state it was going to measure before it knew which kind of measurement the verifier would ask for — otherwise, it would be easy for the computer to fool the verifier. That’s where the secret-state method comes into play: Mahadev’s protocol requires the quantum computer to first create a secret state and then entangle it with the state it is supposed to measure. Only then does the computer find out what kind of measurement to perform.

Since the computer doesn’t know the makeup of the secret state but the verifier does, Mahadev showed that it’s impossible for the quantum computer to cheat significantly without leaving unmistakable traces of its duplicity. Essentially, Vidick wrote, the qubits the computer is to measure have been “set in cryptographic stone.” Because of this, if the measurement results look like a correct proof, the verifier can feel confident that they really are.

“It is such a wonderful idea!” Vidick wrote. “It stuns me every time Urmila explains it.”

Mahadev’s verification protocol — along with the random-number generator and the blind encryption method — depends on the assumption that quantum computers cannot crack LWE. At present, LWE is widely regarded as a leading candidate for post-quantum cryptography, and it may soon be adopted by the National Institute of Standards and Technology as its new cryptographic standard, to replace the ones a quantum computer could break. That doesn’t guarantee that it really is secure against quantum computers, Gottesman cautioned. “But so far it’s solid,” he said. “No one has found evidence that it’s likely to be breakable.”

In any case, the protocol’s reliance on LWE gives Mahadev’s work a win-win flavor, Vidick wrote. The only way that a quantum computer could fool the protocol is if someone in the quantum computing world figured out how to break LWE, which would itself be a remarkable achievement.

Mahadev’s protocol is unlikely to be implemented in a real quantum computer in the immediate future. For the time being, the protocol requires too much computing power to be practical. But that could change in the coming years, as quantum computers get larger and researchers streamline the protocol.

Mahadev’s protocol probably won’t be feasible within, say, the next five years, but “it is not completely off in fantasyland either,” Aaronson said. “It is something you could start thinking about, if all goes well, at the next stage of the evolution of quantum computers.”

And given how quickly the field is now moving, that stage could arrive sooner rather than later. After all, just five years ago, Vidick said, researchers thought that it would be many years before a quantum computer could solve any problem that a classical computer cannot. “Now,” he said, “people think it’s going to happen in a year or two.”

As for Mahadev, solving her favorite problem has left her feeling a bit at sea. She wishes she could understand just what it was about that problem that made it right for her, she said. “I have to find a new question now, so it would be nice to know.”

But theoretical computer scientists see Mahadev’s unification of quantum computation and cryptography not so much as the end of a story, but as the initial exploration of what will hopefully prove a rich vein of ideas.

“My feeling is that there are going to be lots of follow-ups,” Aharonov said. “I’m looking forward to more results from Urmila.”

Original story reprinted with permission from Quanta Magazine, an editorially independent division of SimonsFoundation.org whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

You Can Power a Calculator With Some LEDs

March 20, 2019 | Story | No Comments

Suppose you are getting ready to take a physics test. Everything is set—but wait! Your calculator battery died. What do you do? If you're extra crafty, you could grab an LED (light-emitting diode) and use it to get your calculator to function again. I know this seems crazy, but it's true. In fact, I did indeed run a calculator using some LEDs, which I will show you below.

Of course, to really understand how this works we need to look at what an LED actually is. I'm sure you have a few in the smartphone in your pocket. Many video displays use LEDs. It's very possible you've got one screwed into your ceiling light. They are everywhere.

Let's start off with just a diode. A diode is a device that is made from two types of semiconductors that are connected together. In one of the semiconductors, there are extra electrons (negative charges) that can move around to make the material a conductor. We call this an n-type semiconductor (the n stands for negative). The other type of material is called a p-type semiconductor. I bet you can guess what the p stands for—yup, positive charges. In the p-type there are actually atoms with missing electrons. These are called positive holes because an electron should be there. But these holes essentially behave like positive charges.

When you put a p-type together with an n-type, you get a diode. If a current of negative electrons (which is the way most electrical currents work) enters the n-type side of the diode, everything works fine. The negative electrons can move through the n-type part of the diode with no problems. When these charges get to the p-type side, they combine with positive holes (they fill in the holes). This makes it look as if a positive hole is moving in the opposite direction from the negative charge, so there is a constant current across the diode.

If you switch the direction of the electric current, something different happens. To do that, you have to change the direction of the electric field inside the diode. This field then pushes the negative charges in the n-type and the positive holes in the p-type farther apart. Now it is much harder for the n's and p's to combine, so you essentially get no current.
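
The standard way to put numbers on this one-way behavior is the Shockley diode equation, I = I_s(e^(V/V_T) - 1). It isn't spelled out in the text above, but it's textbook physics, and a few lines of Python show the asymmetry:

```python
# Shockley diode equation: the textbook model of one-way conduction.
# The saturation current below is an assumed typical small-signal value.
import math

I_S = 1e-12    # saturation current, amps (assumed)
V_T = 0.02585  # thermal voltage at room temperature, volts

def diode_current(v):
    """Current through an ideal diode at applied voltage v, in volts."""
    return I_S * (math.exp(v / V_T) - 1)

print(diode_current(0.6))   # forward bias: roughly 0.01 A -- current flows
print(diode_current(-0.6))  # reverse bias: about -1e-12 A -- essentially none
```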

That's the essence of a diode. Current can go one way through it, but not the other way. But wait! What about the light part? It turns out that a negative charge in the n-type side has a greater energy than the positive holes in the p-type side. So when a negative charge combines with a hole, there is a decrease in energy for the charge. Since energy has to be conserved, that energy has to go somewhere. It does. It makes light.

It's actually even crazier than that. It turns out that the frequency of the light produced is proportional to the change in energy. Yes, this is from quantum mechanics, but it is still real. Here is that relationship:

ΔE = hf

In this expression, ΔE is the change in energy of the electron and f is the frequency of light. The h is Planck's constant—it's kind of a big deal in quantum mechanics. But that is your LED, the light-emitting diode. I use them. You use them. Everyone uses them. They are great for lights because they mostly just create light and don't get very hot like incandescent or fluorescent bulbs.
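
To put numbers on that relationship: a red LED drops roughly 1.9 electron volts per electron (an assumed typical value, not a figure from this experiment), and ΔE = hf pins down the light's frequency and wavelength:

```python
# Using delta-E = h * f to find the frequency and wavelength of LED light.
# The 1.9 eV energy drop is an assumed typical value for a red LED.
h = 6.626e-34   # Planck's constant, joule-seconds
c = 3.0e8       # speed of light, meters per second
eV = 1.602e-19  # joules per electron volt

delta_E = 1.9 * eV   # energy released per electron, in joules
f = delta_E / h      # frequency of the emitted light, in hertz
wavelength = c / f   # in meters

print(f"frequency  = {f:.2e} Hz")                 # about 4.6e14 Hz
print(f"wavelength = {wavelength * 1e9:.0f} nm")  # about 650 nm: red light
```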

Now let's get super crazy. What if you take an LED and, instead of connecting it to a battery, you connect it to a voltmeter and measure the electric potential across its leads? Check this out.

Notice that by connecting the LED to the voltmeter, you get a voltage right away. This LED comes from an overhead light. When I cover up the LED, the voltage drops. Shining a bright light on it increases the voltage quite a bit. But why? Essentially, the diode is acting like a solar panel. OK, it IS a solar panel. The light gives energy to the electrons in the n-type material so that they have enough energy to move to the p-type side. This movement of charges builds up a potential difference (the LED is essentially acting like a capacitor here), so you get a voltage.

In case you can't tell, I think this is awesome. The LED is a two-way device. Run a current through and you get light. Shine light on it and you can get an electric current (if you connect it to something). OK. Game on. Can I use some LEDs to power something? In fact, YES. Check this out. Here are a bunch of LEDs connected in parallel such that the current from each LED adds to the total current. This LED bank is connected to a solar power calculator with the solar cell removed.
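
As a rough sanity check on the LED bank (every number here is an assumed ballpark figure, not a measurement from this experiment): currents from parallel sources add, so enough LEDs can cover a calculator's tiny draw.

```python
# Back-of-the-envelope budget for the LED bank. All values are assumed
# ballpark figures, not measurements from this experiment.
current_per_led = 5e-6   # amps each LED sources in bright light (assumed)
num_leds = 8             # LEDs wired in parallel
calculator_draw = 3e-5   # amps a solar calculator needs (assumed)

total = current_per_led * num_leds  # parallel currents simply add
verdict = "enough" if total >= calculator_draw else "not enough"
print(f"bank supplies {total * 1e6:.0f} uA, "
      f"calculator needs {calculator_draw * 1e6:.0f} uA: {verdict}")
```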

It works. OK, I had plans for something bigger. I wanted to have this run some tiny electric motor, but I couldn't get that to work. The calculator draws very little power, so it's perfect for this job.

But wait. If an LED can be both a light and a solar panel, can a solar panel also be a light? Apparently, yes. I didn't get this to work, but I've been told that if you connect a solar panel to a power supply, it will glow. Oh, you can't see it—it glows in the near infrared (like your TV remote). This means you will need a camera without an infrared filter to see it. I'm going to keep trying with this.

Let me tell you my real plan (since it didn't work). I was going to connect a motor to an LED and shine a light on the LED to run the motor. Then I was going to turn the motor really fast so that it acts like a generator and lights up the LED. That would be pretty cool.

Yes, an electric motor and an electric generator are the same thing. If you run current through it, it spins. If you spin it, you can get a current. Boom. Double duty. There are other devices that go both ways. What about a speaker? If you connect a speaker to the audio input on your computer, it acts as a microphone. Also, there is the TEG (thermoelectric generator). This is a device that is essentially just two different metals connected together. If you heat up one of the metals you can create an electric current. This sort of device is used with spacecraft (and a radioactive source for heat) to provide deep-space power. However, if you take this same device and run current through it, one side gets hot and one side gets cold. It's an electric cooler with zero moving parts.

So, now I'm adding the LED to this list of dual-purpose devices. Now I just need to figure out how to build an LED solar panel from scratch. That will be fun.

Friday morning began with delays at New York’s LaGuardia Airport. That’s not unusual—New York’s airports are famously balky. But this time, the cause wasn't something prosaic, like a blizzard. It was staffing. Because of the federal government shutdown, the airport didn’t have enough Transportation Security Administration agents and air traffic controllers; things slowed to a ground stop.

Then it started to spread—Newark, Philadelphia, even the key hub of Atlanta all began to wind down. And that’s terrifying. Airports are nodes on a global network, and the science that guides how that network behaves means that if one node has a problem, that problem will spread. The international air travel network exists on something of a knife-edge. It doesn’t take much to knock it out of optimal flow.

Basically, the delay problem is one of "connected resources"; planes land and have to get turned around to perform other flights, and some of the passengers on them are getting onto other flights, too. If you’ve ever flown, you know all that, but what it means in practice is that small mistakes or delays at one airport get magnified as they move down the line, propagating and sometimes intensifying. “The systems operating these queues are very close to capacity,” says Hamsa Balakrishnan, an aerospace engineer at MIT who studies the air transport network. “Both LaGuardia and Newark had wind-related delays today. With full staffing you might have been able to manage, but with a decrease in staffing as well you have delays, which then end up spreading to other airports as well, because of connectivity.”

Typically you might expect that the biggest airports in the world—the ones with the most flights in and out, say, or that move the most people—would have the biggest effect on overall movement across the network. But in fact, an airport's "delay propagation multiplier" depends on all kinds of things, from how the airport is scheduled to its overall capacity to the weather. By one calculation, a minute of delay causes an average of 30 seconds of slowdown elsewhere in the network. But some airports are more resilient than others, and the flight time between airports has an effect too. It's so complicated that it daunts even the most intrepid network modelers.

Airlines try to account for all this by building slack into the schedule. They calculate the amount of time a given flight should take—the “scheduled block time”—and the amount of time the plane should have to spend on the ground, the “scheduled turnaround time.” But then they have a choice. “They insert buffer time in their schedules and ground operations,” says Bo Zou, a transportation engineer at the University of Illinois. “They still encounter delays, and a newly formed delay for one flight will propagate to the second and third flights. Part of it will be absorbed by the buffer, but not all of it.” Build too small a buffer, and the delays propagate further. Build too large a buffer and you’re not using your fleet efficiently, and losing money. “One side is efficiency, the other is robustness,” Zou says.
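
A toy model makes that trade-off concrete. This is a sketch under simple assumptions, not the models Zou or Balakrishnan actually use: one aircraft flies a chain of flights, and each turnaround buffer absorbs what it can of the inherited delay while the rest rolls forward.

```python
# Toy delay-propagation model: one aircraft flying a chain of flights.
# A sketch under simple assumptions, not an airline's actual scheduling model.

def propagate(initial_delay, buffers):
    """Delay (in minutes) carried into each successive flight, given the
    buffer minutes built into each turnaround."""
    delays = [initial_delay]
    for buffer in buffers:
        # Each buffer absorbs what it can; the remainder rolls forward.
        delays.append(max(0.0, delays[-1] - buffer))
    return delays

# A 40-minute delay hitting a four-flight chain:
print(propagate(40, buffers=[10, 10, 10]))  # [40, 30, 20, 10]: slow recovery
print(propagate(40, buffers=[25, 25, 25]))  # [40, 15, 0.0, 0.0]: fatter buffers
                                            # recover faster, at the cost of
                                            # idle aircraft time
```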

And it changes all the time, as conditions shift—some shifts are predictable, like winter storms, and some are not, like government shutdowns and informal sick-outs. That's called a "dynamical complex network." It has to adapt, constantly.

Because if it doesn’t? According to one study, flight delays cost the US economy over $30 billion a year. It’s not just lost time or flight expenses; it’s whatever the people on those flights were planning on doing when they arrived. “A prolonged shutdown, or even slow down, would likely affect all kinds of unforeseen things,” says Luís Bettencourt, a network scientist at the University of Chicago. “The reliability of time­-sensitive logistics will degrade, and the hub character of some of these cities will have to be bypassed, at least temporarily. A prolonged slow down would be most disastrous to large cities, their influence, and their economies.”

Having shut down the shutdown, the government can now get its TSA agents and air traffic controllers back on station. That'll build some resilience back into the airports just in time for a big-ass snowstorm due to hit the Midwest next week. But the overall health of the air travel network will still be precarious.

That’s why researchers are working on accumulating more and more data on how it all works (or doesn’t). If humans can’t schedule all these flights in an efficient and robust way, maybe an algorithm can. Balakrishnan has even cofounded a startup that’s trying to make it happen. “There are so many moving pieces that it’s hard for a human being to come up with all possible solutions,” she says. “But that’s something we know how to get computers to do.” If you enjoy flying on an intractable and incomprehensible network now, wait until it’s run by an intractable, incomprehensible robot.