A Clever and Simple Robot Hand

March 20, 2019

If you want to survive the robot apocalypse—the nerd joke goes—just close the door. For all that they’re great at (precision, speed, consistency), robots still suck at manipulating door handles, among other basic tasks. Part of the problem is that they have to navigate a world built for humans, designed for hands like ours. And those are among the most complex mechanical structures in nature.

Relief for the machines, though, is in sight. Researchers at the University of Pisa and the Italian Institute of Technology have developed a stunningly simple, yet stunningly capable robotic hand, known as the SoftHand 2, that operates with just two motors. Compare that to the Shadow Dexterous Hand, which is hypnotizingly skillful, but also has 20 motors. The SoftHand promises to help robots get a grip at a fraction of the price.

Like other robot hands out there, the SoftHand uses “tendons,” aka cables, to tug on the fingers. But it’s arranged in a fundamentally different way. Instead of a bunch of cables running to individual fingers, it uses just one cable that snakes through pulleys in each finger. Which gives it a bit less dexterity, but also cuts down on cost and power usage. And that’s just fine: There’s no such thing as a one-technique-fits-all robotic manipulator. More complex robot hands will undoubtedly have their place in certain use cases, as might SoftHand.

To create this hand, the researchers originally built a simpler SoftHand with just one motor. “The idea is that when you turn the motor, the length of the tendon shrinks and in this way you force the hand to close,” says roboticist Cosimo Della Santina, who helped develop the system.

Let out the tendon and the fingers once again unfurl into a flat palm, thanks to elasticity in the joints. It works great if you want to, say, grip a ball. But because the fingers move more or less in unison, fine manipulation isn’t possible.

By adding one more motor, SoftHand 2 ups the dexterity significantly. Take a look at the images above. Each end of the tendon—which still snakes through all the fingers—is attached to one of two motors in the wrist. If you move the motors in the same direction, the tendon shortens, and you get the gestures in the top row: A, B, C, and D. Same principle as the original SoftHand.

But run the motors in opposite directions, and something more complex unravels in E, F, G, and H. In this case, one motor lets out the tendon, while the other reels it in. “If you have a tendon moving through a lot of pulleys, the tension of the tendon is not constant,” says Della Santina.

If one motor is pulling, the tension on that end of the tendon will be higher. If the other is letting out the tendon, the tension on that end will be lower. By exploiting tension this way, the SoftHand requires far fewer cables than your typical robotic hand, yet can still get all those fingers a-wiggling.
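To make the tension trick concrete, here is a toy model (my own sketch, not the Pisa/IIT controller): treat the tendon as one cable routed over a chain of pulleys with capstan friction, so tension falls off along the tendon the farther you get from whichever motor is pulling harder. The friction coefficient, wrap angle, and pulley count below are made-up illustrative values.

```python
# Toy model of a single tendon routed through a chain of finger pulleys and
# driven by a motor at each end. Assumption: capstan friction at every pulley,
# so tension decays exponentially with distance from the end doing the pulling.
import numpy as np

MU = 0.2          # assumed friction coefficient at each pulley
WRAP = np.pi / 2  # assumed wrap angle per pulley, in radians
N_PULLEYS = 10    # e.g. two pulleys per finger, five fingers (illustrative)

def pulley_tensions(t_left, t_right):
    """Tension felt at each pulley when the two motors hold tensions
    t_left and t_right at the two ends of the tendon."""
    decay = np.exp(-MU * WRAP * np.arange(N_PULLEYS))
    from_left = t_left * decay          # decays moving away from the left motor
    from_right = t_right * decay[::-1]  # decays moving away from the right motor
    return np.maximum(from_left, from_right)  # each pulley feels the larger pull

# Motors acting together: both ends pull equally, the tension profile is
# symmetric, and the fingers flex more or less in unison.
print(pulley_tensions(10, 10).round(2))

# Motors opposed: one end reels in hard while the other pays out, so tension
# (and therefore flexion) concentrates in the fingers nearest the pulling motor.
print(pulley_tensions(10, 2).round(2))
```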

Take a look at the GIF above and you can see the difference an extra motor makes. That’s one motor in the hand on the left, and two in the hand on the right. The former sort of brute-forces it, collapsing all its fingers around the ball. The latter, though, can more deliberately pinch the ball, thanks to the differences in tension of the tendon. Same principle below with the bank note.

Given that it’s working with just two motors, SoftHand can pull off an impressive array of maneuvers. It can extend an index finger to unlatch a toolbox or slide a piece of paper off a table. It can even unscrew a jar. All of it on the (relative) cheap. Because lots of motors = lots of money.

“For robots to learn and do cool stuff, we need cheap, reliable, and complex systems,” says Carnegie Mellon University roboticist Lerrel Pinto, who works on robot manipulation. “I think their hand strikes this balance,” he adds, but the real test is whether other researchers find uses for it. “Can it be used to autonomously learn? Is it reliable and robust over thousands of grasps? These questions remain unanswered.”

So SoftHand has promise, but more complicated robotic manipulators like the Shadow Dexterous Hand still have lots to offer. The SoftHand might be good for stereotyped behaviors, like unscrewing jars, while the Shadow and its many actuators might adapt better to more intricate tasks.

Fist bumps, though? Leave that to old Softie.

Police departments around the country are getting increasingly comfortable using DNA from non-criminal databases in the pursuit of criminal cases. On Tuesday, investigators in North and South Carolina announced that a public genealogy website had helped them identify two bodies found decades ago on opposite sides of the state line as a mother and son; the boy’s father, who is currently serving time on an unrelated charge, has reportedly confessed to the crime. It was just the latest in a string of nearly two dozen cold cases cracked open by the technique—called genetic genealogy—in the past nine months.

This powerful new method for tracking potential suspects through forests of family trees has been made possible, in part, by the booming popularity of consumer DNA tests. The two largest testing providers, Ancestry and 23andMe, have policies in place to prevent law enforcement agencies from directly accessing the genetic data of their millions of customers. Yet both companies make it possible for customers to download a digital copy of their DNA and upload the file to public databases where it becomes available to police. Searches conducted on these open genetic troves aren’t currently regulated by any laws.

But that might not be true for much longer, at least in Maryland. Last month, the state’s House of Delegates introduced a bill that would ban police officers from using any DNA database to look for people who might be biologically related to the unknown person whose DNA was left behind at a crime scene. If it passes, Maryland investigators would no longer have access to the technique first made famous for its role in cracking the Golden State Killer case.

Maryland has been a leader in genetic privacy since 2008, when the state banned the practice of so-called “familial searches.” This method involves comparing crime scene DNA with genetic registries of convicted felons and arrestees, in an attempt to identify not only suspects but their relatives. Privacy advocates argue that this practice turns family members into “genetic informants,” a violation of the Fourth Amendment. A handful of other states, including California, have also reined in the practice. But only Maryland and the District of Columbia have outlawed familial searches outright.

“Everyday law enforcement should never trump the Constitution,” says delegate Charles Sydnor III, an attorney and two-term Democrat from Baltimore. Sydnor is sponsoring the current bill, which would expand Maryland’s protections of its residents’ DNA even further. “If the state doesn’t want law enforcement searching databases full of its criminals, why would it allow the same kind of search conducted on citizens who haven’t committed any crimes?”

But opponents of House Bill 30 dispute that the two methods share anything in common. At a hearing for the proposed law in late January, Chevy Chase police chief John Fitzgerald, speaking on behalf of Maryland chiefs and sheriffs, called the bill a “mistake” that would tie investigators’ hands. “A search is a government intrusion into a person’s reasonable expectation of privacy,” he said. Because public databases house DNA from people who have freely consented to its use, as opposed to being compelled by police, there can be no expectation of privacy, said Fitzgerald. “Therefore, there is no search.”

Here is where it might help to have a better idea of how genetic genealogy works. Some police departments enlist the help of skilled sleuths like Barbara Rae-Venter, who worked on both the Golden State Killer and recent North and South Carolina murder cases. But most hire a Virginia-based company called Parabon. Until last spring, Parabon was best known for its work turning unknown DNA into forensic sketches. In May it began recruiting people skilled in the art of family-tree-building to form a unit devoted to offering genetic genealogy services to law enforcement.

The method involves extracting DNA from a crime scene sample and creating a digital file made up of a few hundred thousand letters of genetic code. That file is then uploaded to GEDMatch, a public warehouse of more than a million voluntarily uploaded DNA files from hobby genealogists trying to find a birth parent or long-lost relative. GEDMatch’s algorithms hunt through the database, looking for any shared segments of DNA and adding them up. The more DNA shared between the crime scene sample and any matches, the closer the relationship.
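For a rough sense of how that adding-up works, here is an illustrative sketch (not GEDMatch’s actual algorithm; the total map length and the expected-sharing fractions are approximate textbook averages): sum the shared segment lengths in centimorgans and compare the total with how much DNA different classes of relatives share on average.

```python
# Illustrative only: estimate a relationship from the total length of shared
# DNA segments. Values are rough averages, not GEDMatch's real thresholds.

TOTAL_CM = 7000.0  # approximate total autosomal map length, in centimorgans

def rough_relationship(segments_cm):
    shared = sum(segments_cm)
    # Expected sharing roughly halves with each additional degree of separation.
    expectations = {
        "parent/child or full sibling": TOTAL_CM / 2,
        "grandparent, aunt/uncle, or half sibling": TOTAL_CM / 4,
        "first cousin": TOTAL_CM / 8,
        "second cousin": TOTAL_CM / 32,
    }
    # Pick the class whose expected sharing is closest to what we observed.
    best = min(expectations, key=lambda k: abs(expectations[k] - shared))
    return shared, best

segments = [280.0, 122.5, 96.0, 51.2, 40.3]  # hypothetical matching segments
print(rough_relationship(segments))          # -> (590.0, 'first cousin')
```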

Parabon’s genealogists take that list of names and, using public records like the US Census, birth and death certificates, newspaper clippings, and social media, build out family networks that can include many thousands of individuals. They then narrow down the list to a smaller cohort of likely suspects, which they pass on to their law enforcement clients. In both genetic genealogy and familial search, these lists of relatives generated by shared DNA are treated as leads, for police to investigate further using conventional detective work.

It’s easy to understand why law enforcement agencies in Maryland would want to halt the bill in its tracks. Last year, Parabon helped police in Montgomery and Anne Arundel Counties arrest suspects in two cold cases—a home invasion that turned deadly and a serial rapist who targeted elderly victims. The company declined to disclose how many open cases it is currently pursuing with the state, saying only that it has working relationships with a number of police departments across Maryland. The cost of each case varies, based on the number of hours Parabon’s genealogists put into the search, but on average it runs about $5,000.

Parabon’s CEO, Steven Armentrout, who also spoke out against the bill at the hearing last month, suggests that forensic genetic genealogy is no different than police knocking on doors. “A lead is a lead, whether it’s generated by a phone tip or security camera footage or a consenting individual in a public DNA database.” When police canvass a neighborhood after a crime has taken place, some people will decline to answer, while others will speak freely. Some might speak so freely that they implicate one of their neighbors. “How is this any different?” Armentrout asks.

The difference, say privacy advocates, is that genetic genealogy has the potential to ensnare many more innocent people in a net of police suspicion based solely on their unalterable biology. Today, more than 60 percent of Americans of European ancestry can be identified using open genetic genealogy databases, regardless of whether they’ve ever consented to a DNA test themselves. Experts estimate it will be only a few years before the same will hold true for everyone residing in the US.

“There isn’t anything resembling consent, because the scope of information you can glean from these types of genetic databases is so extensive,” says Erin Murphy, a law professor at New York University. Using just a criminal database, a DNA search would merely add up stutters of junk DNA, much like identifying the whorls on a pair of fingerprints. DNA in databases like GEDMatch, however, can tell someone what color eyes you have, or if you have a higher-than-average risk of certain kinds of cancer. The same properties that make that kind of data much more powerful for identifying distant relatives accurately also make it much more sensitive in the hands of police.

Until very recently, GEDMatch was police investigators’ only source of consumer DNA data. Companies like 23andMe and Ancestry have policies to rebuff requests by law enforcement. But last week Buzzfeed revealed that another large testing firm, Family Tree DNA, has been working with the FBI since last fall to test crime scene samples. The arrangement marked the first time a commercial company has voluntarily cooperated with authorities. The news came as a shock to Family Tree DNA customers, who were not notified that the company’s terms of service had changed.

“That’s why a bill like this is so important,” says Murphy. Because the fine print can change at any time, she argues that demanding more transparency from companies or more vigilance from consumers is insufficient. She also points out that the proposed legislation should actually encourage Marylanders to engage in recreational genomics, because they can worry less about the prying eyes of law enforcement.

But despite Maryland’s history, HB 30 faces an uphill battle. In part, that’s because the ban this time around has been introduced as a stand-alone measure. The 2008 prohibition was folded into a larger package that expanded DNA collection from just-convicted felons to anyone arrested on suspicion of a violent crime. The other hurdle is that in 2008 there were not yet any well-publicized familial search success stories. With resolutions to several high-profile cold cases, forensic genetic genealogy has already captured the public imagination.

Delegate Sydnor, who comes from a law enforcement family—his father is a probation officer, and he has one uncle who is a homicide detective and another who is an FBI agent—says he wants to catch criminals as much as anyone. He just wants to do it the right way. “DNA is not a fingerprint,” he says. “A fingerprint ends with you. DNA extends beyond you to your past, present, and future. Before we decide if this is the route we really want to take, citizens and policymakers have to have a frank and honest conversation about what we’re really signing up for.” Over the next few months, that’s exactly what Marylanders will do.

If you want to watch sunrise from the national park at the top of Mount Haleakala, the volcano that makes up around 75 percent of the island of Maui, you have to make a reservation. At 10,023 feet, the summit provides a spectacular—and very popular, ticket-controlled—view.

Just about a mile down the road from the visitors center sits “Science City,” where civilian and military telescopes curl around the road, their domes bubbling up toward the sky. Like the park’s visitors, they’re looking out beyond Earth’s atmosphere—toward the sun, satellites, asteroids, or distant galaxies. And one of them, called the Panoramic Survey Telescope and Rapid Response System, or Pan-STARRS, just released the biggest digital astro data set ever, amounting to 1.6 petabytes, the equivalent of around 500,000 HD movies.

From its start in 2010, Pan-STARRS has been watching the 75 percent of the sky it can see from its perch and recording cosmic states and changes on its 1.4 billion-pixel camera. It even discovered the strange 'Oumuamua, the interstellar object that a Harvard astronomer has suggested could be an alien spaceship. Now, as of late January, anyone can access all of those observations, which contain phenomena astronomers don’t yet know about and that—hey, who knows—you could beat them to discovering.

Big surveys like this one, which watch swaths of sky agnostically rather than homing in on specific stuff, represent a big chunk of modern astronomy. They are an efficient, pseudo-egalitarian way to collect data, uncover the unexpected, and allow for discovery long after the lens cap closes. With better computing power, astronomers can see the universe not just as it was and is but also as it's changing, by comparing, say, how a given part of the sky looks on Tuesday to how it looks on Wednesday. Pan-STARRS's latest data dump, in particular, gives everyone access to the in-process cosmos, opening up the "time domain" to all earthlings with a good internet connection.
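The core of that time-domain trick is simple image subtraction. Here is a minimal sketch on synthetic data (this is not the Pan-STARRS pipeline; the noise level, the injected source, and the 5-sigma cut are all made up for illustration):

```python
# Difference imaging in miniature: subtract last month's exposure from
# tonight's and flag pixels that brightened well beyond the noise.
import numpy as np

rng = np.random.default_rng(0)
epoch1 = rng.normal(100, 5, size=(512, 512))         # earlier image (counts)
epoch2 = epoch1 + rng.normal(0, 5, size=(512, 512))  # same sky, later night

# Inject a fake transient: a small source that wasn't there before.
epoch2[200:205, 300:305] += 400

diff = epoch2 - epoch1
threshold = 5 * np.std(diff)                # crude 5-sigma detection cut
candidates = np.argwhere(diff > threshold)  # pixel coordinates that brightened
print(f"{len(candidates)} pixels flagged as possible transients")
```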

Pan-STARRS, like all projects, was once just an idea. It started around the turn of this century, when astronomers Nick Kaiser, John Tonry, and Gerry Luppino at Hawaii’s Institute for Astronomy suggested that relatively “modest” telescopes—hooked to huge cameras—were the best way to image large skyfields.

Today, that idea has morphed into Pan-STARRS, a many-pixeled instrument attached to a 1.8-meter telescope (big optical telescopes may measure around 10 meters). It takes multiple images of each part of the sky to show how it’s changing. Over the course of four years, Pan-STARRS imaged the heavens above 12 times, using five different filters. These pictures may show supernovae flaring up and dimming back down, active galaxies whose centers glare as their black holes digest material, and strange bursts from cataclysmic events. “When you visit the same piece of sky again and again, you can recognize, ‘Oh, this galaxy has a new star in it that was not there when we were there a year or three months ago,’” says Rick White, an astronomer at the Space Telescope Science Institute, which hosts Pan-STARRS’s archive. In this way, Pan-STARRS is a forerunner of the massive Large Synoptic Survey Telescope, or LSST, which will snap 800 panoramic images every evening with a 3.2-billion-pixel camera, capturing the whole sky twice a week.

Plus, by comparing bright dots that move between images, astronomers can uncover closer-by objects, like rocks whose path might sweep uncomfortably close to Earth.

That latter part is interesting not just to scientists but also to the military. “It’s considered a defense function to find asteroids that might cause us to go extinct,” White says. That's (at least part of) why the Air Force, which also operates a satellite-tracking system on Haleakala, pushed $60 million into Pan-STARRS’s development. NASA, the state of Hawaii, a consortium of scientists, and some private donations ponied up the rest.

But when the telescope first got to work, its operations hit some snags. Its initial images were about half as sharp as they should have been, because the system that adjusted the telescope’s mirror to make up for distortions wasn’t working right.

Also, the Air Force redacted parts of the sky. It used software called Magic to detect streaks of light that might be satellites (including the US government's own). Magic masked those streaks, essentially placing a dead-pixel black bar across that section of sky, to “prevent the determination of any orbital element of the artificial satellite before the images left the [Institute for Astronomy] servers,” according to a recent paper by the Pan-STARRS group. The article says the Air Force dropped the requirement in December 2011. The magic was gone, and the scientists reprocessed the original raw data, removing the black boxes.

The first tranche of data, from the world’s most substantial digital sky survey, came in December 2016. It was full of stars, galaxies, space rocks, and strangeness. The telescope and its associated scientists have already found an eponymous comet, crafted a 3D model of the Milky Way’s dust, unearthed way-old active galaxies, and spotted everyone’s favorite probably-not-an-alien-spaceship, ’Oumuamua.

The real deal, though, entered the world late last month, when astronomers publicly released and put online all the individual snapshots, including auto-generated catalogs of some 800 million objects. With that data set, astronomers and regular people everywhere (once they've read a fair number of help-me files) can check out a patch of sky and see how it evolved as time marched on. The curious can do more of the “time domain” science Pan-STARRS was made for: catching explosions, watching rocks, and squinting at unexplained bursts.

Pan-STARRS might never have gotten its observations online if NASA hadn't seen its own future in the observatory's massive data pileup. That 1.6-petabyte archive is now housed at the Space Telescope Science Institute in Maryland, in a repository called the Mikulski Archive for Space Telescopes. The institute is also the home of bytes from Hubble, Kepler, GALEX, and 15 other missions, mostly belonging to NASA. “At the beginning they didn’t have any commitment to release the data publicly,” White says. “It’s such a large quantity they didn’t think they could manage to do it.” The institute, though, welcomed this outsider data in part so it could learn how to deal with such huge quantities.

The hope is that Pan-STARRS’s freely available data will make a big contribution to astronomy. Just look at the discoveries people publish using Hubble data, White says. “The majority of papers being published are from archival data, by scientists that have no connection to the original observations,” he says. That, he believes, will hold true for Pan-STARRS too.

But surveys are beautiful not just because they can be shared online. They’re also A+ because their observations aren’t narrow. In much of astronomy, scientists look at specific objects in specific ways at specific times. Maybe they zoom in on the magnetic field of pulsar J1745–2900, or the hydrogen gas in the farthest reaches of the Milky Way’s Perseus arm, or that one alien spaceship rock. Those observations are perfect for that individual astronomer to learn about that field, arm, or ship—but they’re not as great for anything or anyone else. Surveys, on the other hand, serve everyone.

“The Sloan Digital Sky Survey set the standard for these huge survey projects,” says White. Sloan, which started operations in 2000, is on its fourth iteration, collecting light with telescopes at Apache Point Observatory in New Mexico and Las Campanas Observatory in Northern Chile. From the early universe to the modern state of the Milky Way’s union, Sloan data has painted a full-on portrait of the universe that, like those creepy Renaissance portraits, will stick around for years to come.

Over in a different part of New Mexico, on the high Plains of San Agustin, radio astronomers recently set the Very Large Array’s sights on a new survey. Having started in 2017, the Very Large Array Sky Survey is still at the beginning of its seven years of operation. But astronomers don't have to wait for it to finish its observations, as happened with the first Pan-STARRS survey. “Within several days of the data coming off the telescope, the images are available to everybody,” says Brian Kent, who since 2012 has worked on the software that processes the data. That's no small task: For every four hours of skywatching, the telescope spits out 300 gigabytes, which the software then has to make useful and usable. “You have to put the collective smarts of the astronomers into the software,” he says.

Kent is excited about the same kinds of time-domain discoveries as White is: about seeing the universe at work rather than as a set of static images. Including the chronological dimension is hot in astronomy right now, from these surveys to future instruments like the LSST and the massive Square Kilometre Array, a radio telescope that will spread across two continents.

By watching for quick changes in their observations, astronomers have sought and found comets, asteroids, supernovae, fast radio bursts, and gamma-ray bursts. As they keep capturing a cosmos that evolves, moves, and bursts forth—not one trapped forever in whatever pose they found it—who knows what else they'll unearth.

Kent, though, is also psyched about the idea of bringing the universe to more people, through the regular internet and more formal initiatives, such as one that has, among other projects, helped train students from the University of the West Indies and the University of the Virgin Islands to dig into the data.

“There’s tons of data to go around,” says White. “And there’s more data than one person can do anything with. It allows people who might not use the telescope facilities to be able to be brought into the observatory.”

No reservations required.



The laser is a tool of many talents, as the Nobel committee well knows. On Tuesday morning in Stockholm, its members announced the year’s physics prize and rattled off a short list of the technologies it has made possible: barcodes, eye surgery, cancer treatment, welding, cutting materials more precisely than a scalpel. They failed to acknowledge the whimsy it has brought cat owners, although one committee member did mention laser light shows.

The laser’s resume keeps growing. This year, the Nobel committee awarded the prize in physics to three scientists who invented two groundbreaking ways to use lasers: Arthur Ashkin for developing a technique for grabbing and studying microscopic objects, known as optical tweezers, and Donna Strickland and Gérard Mourou for inventing a method that now allows scientists to produce intense pulses of light that, for a billionth of a billionth of a second, contain more power than the entire U.S. electricity grid. These laser techniques have transformed medical procedures, manufacturing, and biology research, the committee said. The three will share the 9 million kronor (about $1 million) prize, with Ashkin receiving half, and Strickland and Mourou splitting the other half.

At 96, Ashkin is the oldest ever Nobel recipient. He developed optical tweezers in 1970, while he was a scientist working at Bell Labs. He found that, if you focus a laser in a specific way, you can create a sweet spot in the beam where certain microscopic beads can reside harmlessly and motionlessly. By affixing proteins or other tiny biological objects onto the bead, you can precisely steer and prod them. Since Ashkin’s invention, scientists have trapped and played with individual viruses, bacteria, proteins, DNA, and more using the tweezers. They allow scientists to grab and sort individual cells, for example, and to watch cells called phagocytes engulf bacteria to keep humans healthy. Scientists have even used the tweezers to measure the forces during mitosis—a cell dividing into two.

In particular, optical tweezers let scientists study the springiness and bendiness of biological structures, says physicist Michelle Wang of Cornell University. The tweezers are delicate enough to stretch a DNA molecule. Wang uses optical tweezers to study the twisting motion of motor proteins, which are structures that move molecules and other objects around the body.

Strickland and Mourou’s invention—a technique known as chirped pulse amplification—allows scientists to amplify laser light into the petawatts, which is more power than a trillion solar panels under direct sunlight. Prior to their innovation in 1985, this level of intensity was impossible. The beam was so powerful that it would destroy parts of the laser itself, says physicist Arvinder Sandhu of the University of Arizona. Strickland, who now works at the University of Waterloo, and Mourou of the École Polytechnique near Paris, figured out how to first mellow out the laser pulse—deliver the photons in a slow stream—and then squish them together into a sharp burst in another part of the laser that could handle the intensity.

The lasers don’t emit continuously at such high power; instead, this level of intensity endures as briefly as a billionth of a billionth of a second (an “attosecond”). These pulses are especially useful because they can precisely cut away materials like biological tissue without damaging their surroundings. That’s why they’re used in corrective eye surgery, says Sandhu. Similarly, industrial processes use them to carefully cut construction materials—“just like it machines the eye,” said Strickland during the prize announcement.

Like the optical tweezers, these short pulses can also be used to observe microscopic processes. Sandhu points the short bursts at exotic new materials, using them somewhat like a camera. In particular, he is interested in studying what electrons do inside a material, as they determine the material’s bulk properties, such as its electrical conductivity, magnetism, and melting point. The intensity of the light rips electrons off atoms in the material so that Sandhu can study their behavior. Short pulses provide a benefit like a fast shutter speed: the shorter the pulse, the more clearly you can capture what the electrons are doing in slow motion.

This year’s award is especially notable because the Nobel committee recognized both Mourou and Strickland, says Sandhu. When they developed the technique, Strickland was Mourou’s graduate student, a demographic whose achievements have historically been overlooked by the Nobel committee. “Junior researchers play an important role in physics,” he says. “It’s good to see that it’s not just advisers being recognized.”

It’s also “very heartening” that they gave the prize to a woman physicist this year, says Sandhu. Strickland is the first woman to receive the physics Nobel in 55 years. Only three women—Marie Curie, Maria Goeppert-Mayer, and now, Strickland—have won the Nobel Prize in physics. “Is that all? Really?” said Strickland during the conference. That brings the total fraction of female physics laureates to about 1.5 percent. The American Physical Society found that in 2017, women earned about 21 percent of undergraduate physics degrees.

“Obviously, we need to celebrate women physicists because we’re out there,” said Strickland. “Hopefully in time it’ll start moving forward at a faster rate.” At the very least, during the ceremony, someone finally made Strickland a page on Wikipedia.

When Charles Darwin articulated his theory of evolution by natural selection in On the Origin of Species in 1859, he focused on adaptations—the changes that enable organisms to survive in new or changing environments. Selection for favorable adaptations, he suggested, allowed ancient ancestral forms to gradually diversify into countless species.


That concept was so powerful that we might assume evolution is all about adaptation. So it can be surprising to learn that for half a century, a prevailing view in scholarly circles has been that it’s not.

Selection isn’t in doubt, but many scientists have argued that most evolutionary changes appear at the level of the genome and are essentially random and neutral. Adaptive changes groomed by natural selection might indeed sculpt a fin into a primitive foot, they said, but those changes make only a small contribution to the evolutionary process, in which the composition of DNA varies most often without any real consequences.

But now some scientists are pushing back against this idea, known as neutral theory, saying that genomes show much more evidence of evolved adaptation than the theory would dictate. This debate is important because it affects our understanding of the mechanisms that generate biodiversity, our inferences about how the sizes of natural populations have changed over time and our ability to reconstruct the evolutionary history of species (including our own). What lies in the future might be a new era that draws from the best of neutral theory while also recognizing the real, empirically supported influence of selection.

An 'Appreciable Fraction' of Variation

Darwin’s core insight was that organisms with disadvantageous traits would slowly be weeded out through negative (or purifying) selection, while those with advantageous features would reproduce more often and pass those features on to the next generation (positive selection). Selection would help to spread and refine those valuable traits. For most of the first half of the 20th century, population geneticists largely attributed genetic differences between populations and species to adaptation through positive selection.

But in 1968, the famed population geneticist Motoo Kimura resisted the adaptationist perspective with his neutral theory of molecular evolution. In a nutshell, he argued that an “appreciable fraction” of the genetic variation within and between species is the result of genetic drift — that is, the effects of randomness in a finite population — rather than natural selection, and that most of these differences have no functional consequences for survival and reproduction.

The following year, the biologists Jack Lester King and Thomas Jukes published “Non-Darwinian Evolution,” an article that likewise emphasized the importance of random genetic changes in the course of evolution. A polarized debate subsequently emerged between the new neutralists and the more traditional adaptationists. Although everyone agreed that purifying selection would weed out deleterious mutations, the neutralists were convinced that genetic drift accounts for most differences between populations or species, whereas the adaptationists credited them to positive selection for adaptive traits.

Much of the debate has hinged on exactly what Kimura meant by “appreciable fraction” of genetic variation, according to Jeffrey Townsend, a biostatistician and professor of evolutionary biology at the Yale School of Public Health. “Is that 50 percent? Is it 5 percent, 0.5 percent? I don’t know,” he said. Because Kimura’s original statement of the theory was qualitative rather than quantitative, “his theory could not be invalidated by later data.”

Nevertheless, neutral theory was rapidly adopted by many biologists. This was partly a result of Kimura’s reputation as one of the most prominent theoretical population geneticists of the time, but it also helped that the mathematics of the theory was relatively simple and intuitive. “One of the reasons for the popularity of the neutral theory was that it made things a lot easier,” said Andrew Kern, a population geneticist now at the University of Oregon, who contributed an article with Matthew Hahn, a population geneticist at Indiana University, to a special issue of Molecular Biology and Evolution celebrating the 50th anniversary of neutral theory.

To apply a neutral model of evolution to a population, Hahn explained, you don’t have to know how strong selection is, how large the population is, whether mutations are dominant or recessive, or whether mutations interact with other mutations. In neutral theory, “all of those very hard parameters to estimate go away.”

The only key input required by the neutral model is the product of the population size and the mutation rate per generation. From this information, the neutral model can predict how the frequency of mutations in the population will change over time. Because of its simplicity, many researchers adopted the neutral model as a convenient “null model,” or default explanation for the patterns of genetic variation they observed.
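As a concrete (and deliberately bare-bones) illustration of such a null model, here is a neutral Wright-Fisher sketch; the population size, mutation rate, and starting frequency are values chosen for the example, not anything from Kimura’s papers:

```python
# Neutral Wright-Fisher drift: an allele's frequency changes only because each
# generation is a finite random sample of the previous one. The population
# size N and mutation rate MU below are assumed, illustrative values.
import numpy as np

rng = np.random.default_rng(42)

N = 1000          # diploid population size -> 2N gene copies
MU = 1e-5         # mutation rate per site per generation
GENERATIONS = 2000

def drift_trajectory(start_freq=0.05):
    """Track one neutral allele's frequency under pure genetic drift."""
    freq = start_freq
    history = [freq]
    for _ in range(GENERATIONS):
        count = rng.binomial(2 * N, freq)  # binomial sampling of 2N copies
        freq = count / (2 * N)
        history.append(freq)
        if freq in (0.0, 1.0):             # lost or fixed purely by chance
            break
    return history

traj = drift_trajectory()
print(f"ended at frequency {traj[-1]:.3f} after {len(traj) - 1} generations")
# The model's one key input, as described above: population size times
# mutation rate, usually expressed as theta = 4 * N * MU for diploids.
print(f"theta = {4 * N * MU:.2f}")
```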

Some population geneticists were not convinced by Kimura’s argument, however. For instance, John Gillespie, a theoretical population geneticist at the University of California, Davis (and Kern’s doctoral adviser), showed in the early 1970s that some natural selection-based models could explain patterns observed in nature as well as neutral models, if not better.

More fundamentally, even when there aren’t enough data to disprove a neutral-theory null model, it doesn’t mean that natural selection isn’t happening, said Rebekah Rogers, an evolutionary geneticist at the University of North Carolina, Charlotte. “Any time you have limited data, the arguments get really fierce,” she said.

For decades, that was the crux of the problem: Kimura had proposed neutral theory at a time before inexpensive sequencing technology and the polymerase chain reaction became available, when gene sequence data were sparse. There was no simple way to broadly prove or disprove its tenets except on theoretical grounds because we didn’t know enough about genomic variation to resolve the dispute.

Strong Feelings About Neutrality

Today, 50 years after Kimura’s article, more affordable genomic sequencing and sophisticated statistical methods are allowing evolutionary theorists to make headway on quantifying the contribution of adaptive variation and neutral evolution to species differences. In species like humans and fruit flies, the data have revealed extensive selection and adaptation, which has led to strong pushback against Kimura’s original idea, at least by some researchers.

“The ubiquity of adaptive variation both within and between species means that a more comprehensive theory of molecular evolution must be sought,” Kern and Hahn wrote in their recent article.

Although the vast majority of researchers agree that strict neutrality as originally formulated is false, many also point out that refinements of neutral theory have addressed weaknesses in it. One of the original shortcomings was that neutral theory could not explain the varying patterns of genome evolution observed among species with different population sizes. For instance, species with smaller population sizes have on average more mutations that are deleterious.

To address this, one of Kimura’s students, Tomoko Ohta, now professor emeritus at Japan’s National Institute of Genetics, proposed the nearly neutral theory of molecular evolution in 1973. This modified version of neutral theory suggests that many mutations are not strictly neutral, but slightly deleterious. Ohta argued that if population sizes are large enough, purifying selection will purge them of even slightly deleterious mutations. In small populations, however, purifying selection is less effective and allows slightly deleterious mutations to behave neutrally.

Nearly neutral theory also had problems, Kern said: It did not explain, for example, why the rate of evolution varies as observed among different lineages of organisms. In response to such challenges, Ohta and Hidenori Tachida, now a professor of biology at Kyushu University, developed yet another variation of the nearly neutral model in 1990.

Opinions about the standing of nearly neutral theory can still differ sharply. “The predictions of nearly neutral theory have been confirmed very well,” said Jianzhi Zhang, who studies the evolution of genomes at the University of Michigan and also contributed to the special issue of Molecular Biology and Evolution.

Kern and Hahn disagree: Nearly neutral theory “didn’t explain much from the start and then was shuffled around in attempts to save an appealing idea from the harsh glare of data,” Kern wrote in an email.

How Much Evolves Neutrally?

For Townsend, the ongoing debate between neutralists and selectionists isn’t particularly fruitful. Instead, he said, “it’s just a quantitative question of how much selection is going on. And that includes some sites that are completely neutral, and some sites that are moderately selected, and some sites that are really strongly selected. There’s a whole distribution there.”

When Townsend first started studying cancer about a decade ago after training as an evolutionary biologist, he saw that cancer biologists had begun to study mutations at a level of detail that could reveal information about mutation rates at individual sites in the genome. That’s precious information that most population geneticists don’t get from the wild populations they study. Yet few cancer biologists study natural selection, and that’s what Townsend has brought to the cancer field with his background in evolutionary biology.

In a paper published in late October in the Journal of the National Cancer Institute, Townsend and his Yale colleagues presented the results of their evolutionary analysis of mutations in cancers. “What we’ve been able to do is actually quantify, on a site-by-site basis, what the selection intensities of different mutations are,” he said. Cancer cells are rife with mutations, but only a small subset of those are functionally important to the cancer. The selection intensities reveal how important the different mutations are for driving growth in an individual case of cancer—and therefore which ones would be most promising as therapeutic targets.

“This quantification of the selection intensity is absolutely essential, I think, to guiding how we treat cancer,” Townsend said. “My point is, doctors today are encountering the question: Which drug should I give to this patient? And they don’t have a quantification of how important the mutations that those drugs target actually are.” Someday, Townsend hopes, this evolutionary framework will offer a genetic basis for choosing the right drug and even predicting how a specific tumor might develop resistance to a treatment.

Although identifying the mutations undergoing the strongest selection is clearly useful, selection can also have subtle but important indirect effects on regions of the genome neighboring the target of selection.

The first hint of these indirect effects came in the 1980s and ’90s with the advent of the polymerase chain reaction, a technique that enabled researchers to look at nucleotide-level variation in gene sequences for the first time. One thing they discovered was an apparent correlation between the level of genetic variation and the rate of recombination at any specified region of the genome.

Recombination is a process in which the maternal and paternal copies of chromosomes exchange blocks of DNA with each other during meiosis, the production of sperm and egg cells. These recombinations shuffle genetic variation throughout the genome, splitting up alleles that might have previously been together.

By 2005, researchers could get whole-genome data from a variety of organisms, and they started to find this apparent correlation between levels of genetic variation and the rates of recombination everywhere, Kern said. That correlation meant that forces beyond direct purifying selection and neutral drift were creating differences in levels of variation across the genomic landscape.

Kern argues that the differences in the rates of recombination across the genome reveal a phenomenon called genetic hitchhiking. When beneficial alleles are closely linked to neighboring neutral mutations, natural selection tends to act on all of them as a unit.

Genetic hitchhiking meant that evolutionary geneticists suddenly had a whole new force called linked selection to worry about, Kern said. If only 10 percent of the genome is under direct selection in a population, then linked selection means that a much larger percentage—maybe 30 or 40 percent—might show its effects.

And if that is true, then selection for adaptive variants indirectly shapes neighboring genomic regions, leading to “a situation where neutral alleles have their frequencies determined by more than genetic drift, and instead have a new layer of stochasticity induced by selection,” Kern explained by email: Linked selection would produce more variance between generations than one would expect under neutrality.
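To see why linked selection adds that extra noise, here is a toy two-locus Wright-Fisher simulation (my own sketch, not an analysis from Kern or Hahn; the population size, selection strength, sweep-arrival probability, and assumption of complete linkage are all simplifications). A neutral allele starts at 50 percent; in the "linked selection" runs, beneficial mutations occasionally arise at a fully linked site and drag along whichever neutral allele they happen to land beside, widening the spread of final frequencies across replicate populations.

```python
# Toy illustration of genetic hitchhiking / linked selection: a neutral allele
# linked (here, with no recombination) to a site where beneficial mutations
# occasionally arise and sweep. Parameters are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)

N = 1000      # diploid population size -> 2N haplotypes
S = 0.05      # selective advantage of the beneficial allele B
GENS = 500
REPS = 200

def final_neutral_freq(with_sweeps):
    # Haplotype counts, ordered [A-B, A-b, a-B, a-b]; the neutral allele A
    # starts at 50 percent, the selected site starts all wild type (b).
    counts = np.array([0, N, 0, N], dtype=float)
    for _ in range(GENS):
        if with_sweeps and counts[0] + counts[2] == 0 and rng.random() < 0.05:
            # A new beneficial mutation lands on one random chromosome; the
            # neutral allele on that chromosome will hitchhike if B sweeps.
            i = 1 if rng.random() < counts[1] / (counts[1] + counts[3]) else 3
            counts[i] -= 1
            counts[i - 1] += 1   # A-b becomes A-B, or a-b becomes a-B
        w = np.array([1 + S, 1.0, 1 + S, 1.0])      # fitness by haplotype
        p = counts * w / (counts * w).sum()
        counts = rng.multinomial(2 * N, p).astype(float)
    return (counts[0] + counts[1]) / (2 * N)        # final frequency of A

drift_only = [final_neutral_freq(False) for _ in range(REPS)]
linked_sel = [final_neutral_freq(True) for _ in range(REPS)]
print("variance of neutral frequency, drift only      :", round(np.var(drift_only), 3))
print("variance of neutral frequency, linked selection:", round(np.var(linked_sel), 3))
```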

Zhang points out that linked neutral mutations are still neutral. They might be hitchhiking with beneficial alleles, but that linkage is random—they could as easily be linked to harmful alleles and weeded out through “background selection.” So the neutral mutations’ fate is still determined by chance.

Kern agrees: The neutral mutations are still neutral—but they are not behaving as neutral theory would predict. Purifying selection at linked sites, he wrote, would “add noise to allele frequencies beyond drift,” while background selection and hitchhiking would lead to less genetic variation than under neutrality.

Neutral Models and Human Evolution

“While neutral models have without doubt begat tremendous theoretical fruits … the explanatory power of the neutral theory has never been exceptional,” Kern and Hahn wrote in their paper. “Five decades after its proposal, in the age of cheap genome sequencing and tremendous population genomic data sets, the explanatory power of the neutral theory looks even worse.”

In humans, recent evidence suggests “there’s a lot more adaptation than we ever thought was present,” Kern said. Recent human evolution is largely a history of migrations to new geographical locations where humans encountered new climates and pathogens to which they had to adapt. In 2017, Kern published a paper showing that most human adaptations arose from existing genetic variation within the genome, not novel mutations that spread rapidly through the population.

Even so, only about 1 percent of the human genome actually codes for proteins, said Omar Cornejo, an evolutionary genomicist at Washington State University. Maybe about 20 percent of the genome regulates when and where those coding regions are expressed. But that still leaves about 80 percent of the genome with unknown function.

Parts of this noncoding portion of the genome are riddled with repetitive DNA sequences, caused by transposable genetic elements, or transposons, that copy and insert themselves throughout the genome. According to Irina Arkhipova, a molecular evolutionary geneticist who studies the role of transposons at the Marine Biological Laboratory at the University of Chicago, “this portion of the genome is quintessentially neutral in Kimura’s sense,” even if some fraction of those transposons do affect the expression of genes. Because of this, neutral models applied to the nonfunctional regions of the genome can be used to infer the demographic history of human populations (and a variety of other organisms) quite accurately, Cornejo said.

Kern disagrees. “I’d argue we have no idea if we are accurately estimating human demographic history,” he wrote in an email. If you computationally simulate a population evolving neutrally, then methods for estimating demography will work; but introduce linked selection, and those methods fail.

Kern is agnostic about what percentage of the human genome is functional, but he believes that genetic linkage touches a large—yet unknown—fraction of the genome. With the accumulating evidence for adaptation in the human genome, it seems likely that some large fraction of the genome would be subject to the effects of linked selection, he suggested. “We just don’t know how large that fraction is.”

A recent paper in eLife by Fanny Pouyet and her computational-geneticist colleagues at the University of Bern and the Swiss Institute of Bioinformatics pins down that number. “Up to 80-85 percent of the human genome is probably affected by background selection,” the authors wrote.

After they additionally accounted for biased changes in genes that recombination can introduce during DNA repair, they concluded that less than 5 percent of the human genome evolved by chance alone. As the editors of eLife noted in their summary of the paper, “This suggests that while most of our genetic material is formed of non-functional sequences, the vast majority of it evolves indirectly under some type of selection.”

It’s possible that this estimate may creep even higher as biologists learn to recognize more subtle hints of selection. The new frontier in population genomics is focusing on traits such as height, skin color and blood pressure (among many others) that are polygenic, meaning that they result from hundreds or thousands of genes acting in concert. Selection for greater height, for instance, requires piling up changes at a number of dispersed genes to have an effect. Similarly, when farmers select strains of corn for higher yields, the impact generally shows up in many genes simultaneously.

But detecting polygenic adaptations in natural populations is a “very tricky business,” according to Kern, because those multitudes of genes are likely to be interacting in complex, nonlinear ways. The statistical methods for spotting those suites of changes are only beginning to be developed. To Kern, it will involve learning to appreciate “a whole other flavor” of adaptation because it will involve many small changes in individual mutation frequencies that collectively matter to natural selection.

In other words, it’s yet another non-neutral mechanism affecting genome evolution. As useful as the neutral theory has been in its various forms over the past half-century, the future of evolutionary theory may inevitably depend on finding ever-better ways to do the hard work of figuring out exactly how—and how much—selection is inexorably shaping our genomes after all.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Just how fast is the X-Men character Quicksilver, also known as Peter Maximoff (and son of Magneto)? This would be a popular topic of debate at the comic book bar—if a comic book bar existed. If such a bar does exist, I have to go.

We can take a crack at answering this question, starting with a scene from the 2016 movie X-Men: Apocalypse. In it, Quicksilver arrives at Xavier's School for Gifted Youngsters (the X-Men mansion). Right as he gets there, he realizes it's in the process of blowing up. That means it's up to Peter to save everyone. He makes multiple trips in and out of the mansion to bring the students out to safety. And this is where I will estimate his speed.

Preemptive comment. Yes, I know it's just a movie. Yes, I know it's not real. I don't care. This is how I show my appreciation of superhero movies, by pulling out the physics moves. That's just what I do.

Calculating Speed

Suppose I run across the yard, touch a tree and then come back to where I started. If I know the time this takes and the total distance, I can calculate the average speed as:

average speed = (distance traveled) / (time it takes) = Δs / Δt

That seems straightforward, but I need to point out that this is the average speed. That's what you get when you divide the distance traveled (Δs) by the time it takes (Δt). Do not confuse this with the average velocity. Normally, when we say average velocity we are talking about the vector velocity. This depends on the starting and ending position of the motion. If I run to the tree and back, my average velocity would be the zero vector.

Now I just need to estimate the distance and the time and boom—I have the average Quicksilver speed. Of course that sounds simple, but there are some problems. I'm going to need to estimate some things. Such as:

  • The total time it takes Quicksilver to save all these people.
  • The number of trips he makes in and out of the X-Men Mansion.
  • The percent of time he is running vs. the time he spends just making silly gestures and playing with stuff.

Let's start with the easiest estimation—the time. Quicksilver seems to (somehow) notice the explosion right when it starts inside the mansion. I have no idea how he knows it exploded. He can't possibly hear it because the sound would travel with the shockwave. Also, he can't see the explosion since it's inside the mansion. Oh well. I guess it doesn't matter. But still—the total time has to be the time it takes for the shockwave from the explosion to expand to the outside yard. That's how much time Quicksilver has to get everyone out.

So, just how fast is an explosion? Of course this is no ordinary explosion. It's somehow connected to his nemesis, Apocalypse—it's not some conventional chemical-based explosive. But still, that's a good place to start. The speed of an expanding shockwave is called the detonation velocity. Luckily, Wikipedia has a table of detonation velocities.

The slowest detonation velocity is from ammonium nitrate with a speed of 2700 m/s and the highest (DDF) is around 10,000 m/s. In order to accommodate this range of speeds, I am going to use a detonation velocity of 6000 +/- 3000 meters per second. The "+/-" means "plus or minus," to display the range of uncertainty. I am going to be using this for all my estimates.

Now I need the explosion distance. Let me call this dy (for distance to the yard). I have no idea where this mansion is, so I'm just going to guess the distance is 100 +/- 50 meters. Just to be clear, with this notation that means the real yard distance is somewhere between 50 meters and 150 meters. I can put these two estimates together to get the total saving time:

t_total = dy / (detonation velocity)

I'm not going to put values in just yet. I'll do that at the end. But the next thing I need to estimate is the total number of trips Quicksilver makes into and out of the mansion. This isn't too difficult, because you can just watch the clip and count the number of times he grabs someone (sometimes he grabs two people). Using my fingers and toes, I count 21 trips. I will call this variable N and let it be equal to 21 +/- 1 (in case I added one or missed one).

Now that I have the number of trips, the trip distance (to the yard), and the total time, I have just about everything I need. I can divide the total explosion time by the number of trips to get the time for one trip. The only problem is that this doesn't quite work. Quicksilver has to act like a kid and stop and play with stuff while saving people. I can't really estimate the time he wastes since the movie is playing in slow motion. Instead, I am going to estimate the percent of the total time wasted. Let's call this Pw (percent wasted) with a value of 10 +/- 10 percent. That means the time for one trip into and out of the mansion can be calculated as:

t_trip = (1 - Pw) * t_total / N

Now for the final expression. This is what you have been waiting for. This is the expression for the speed of Quicksilver:

speed = (2 * dy * N) / ((1 - Pw) * t_total) = 2 * N * (detonation velocity) / (1 - Pw)

That's a nice equation. I like it. Notice something here? The yard distance doesn't matter (it cancels). Since both the total time and the total distance depend on this yard distance, it doesn't factor into the average speed. See—we estimated that for nothing.

But here comes the best part. Now that I have both estimates and uncertainties for the estimates, I can get a value for Quicksilver's speed WITH UNCERTAINTY. Yes, I can get a range for his speeds. It's going to be great. So, how do you calculate the speed when the quantities have uncertainties? This is the nightmare of your introductory physics lab course. But wait. I am going to make it simple. I will use the "crank three times" method for calculating uncertainties. Here's how it works.

  • Calculate the speed. Ignore all the uncertainty stuff.
  • Calculate the minimum speed. Use the values that give the lowest possible speed. Notice that if you are dividing by time, you would use the largest possible time in order to get the smallest possible speed.
  • Calculate the maximum speed. This is just like the minimum speed—except it's the maximum.

For the final answer, I will report the plain speed, and the uncertainty will be the average of how far the maximum and minimum values deviate from it. That's it. Let's do it. Of course I am going to write this up as a python script—both because it's easier and because that means you can change the values if you like (I know you want to change stuff).
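Here is a sketch of that script, following the "crank three times" recipe above (a reconstruction, not necessarily the original code); it reproduces the quoted result, and you can change the values to try your own estimates.

```python
# "Crank three times" estimate of Quicksilver's average speed.
# Estimates from above: N trips, detonation velocity v, and wasted-time
# fraction Pw, each with a plus-or-minus range. The yard distance cancels.

N_trips, dN = 21, 1        # trips in and out of the mansion, +/- 1
v_det, dv = 6000, 3000     # detonation velocity in m/s, +/- 3000
Pw, dPw = 0.10, 0.10       # fraction of time wasted playing around, +/- 0.10

def quicksilver_speed(n, v, pw):
    # total time = dy / v ; running time = (1 - pw) * total time
    # distance covered = 2 * dy * n  ->  speed = 2 * n * v / (1 - pw)
    return 2 * n * v / (1 - pw)

v_plain = quicksilver_speed(N_trips, v_det, Pw)
v_min = quicksilver_speed(N_trips - dN, v_det - dv, Pw - dPw)  # slowest combo
v_max = quicksilver_speed(N_trips + dN, v_det + dv, Pw + dPw)  # fastest combo
uncertainty = ((v_plain - v_min) + (v_max - v_plain)) / 2

print(f"speed = {v_plain / 1000:.0f} +/- {uncertainty / 1000:.0f} km/s")
# prints: speed = 280 +/- 188 km/s
```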

OK, just to be clear. That gives Quicksilver a speed of 280 +/- 188 kilometers per second (or, if you insist on thinking in miles per second, 174 +/- 117 mps). Yes, that's way faster than the speed of sound, but way slower than the speed of light. That's about the right range of speeds. But wait. There are too many questions left unanswered. Here are some homework questions.

Homework

  • Estimate the acceleration of Quicksilver as he runs up to speed and then stops. Find the value of this acceleration in units of "g's" where 1 g = 9.8 m/s².
  • At one point, Quicksilver throws some people out of the window. How fast would they have to be thrown and at what angle?
  • Estimate the acceleration of the people thrown from the window as they collide with the curtains (that catch them).
  • What are some other calculated speeds of Quicksilver from other scenes? Hint: I know this is out there since I wrote it—but you have to search for it. Oh, what about Flash? How fast is he?
  • At the beginning of the scene, Quicksilver is eating a Twinkie. How many Twinkies would he need to eat in order to get enough energy to save all these people?
  • What about air resistance? Estimate the air resistance on a human running this fast.
  • He also lets go of the Twinkie at the beginning. It looks like it just floats in the air—but it doesn't. No, it's falling, but things are running in super slow motion. Use this to estimate the slow motion rate of the film.
  • Estimate the coefficient of static friction between Quicksilver and the ground in order to run like he does.

One last point. I don't think Quicksilver runs fast. Instead, I think he has the ability to control time. He can make time slow down around him such that it looks like he is running fast. That means that he doesn't have to have super high accelerations or coefficients of friction and stuff. But who cares how he works. He's still a superhero.

Jake Misch’s family has been growing corn in the sandy soils of northwestern Indiana for four generations. Like other farmers in the area, the Misches spray their fields with a nitrogen-rich fertilizer once in the spring when the seeds are planted, and once later in the year, when the corn is going through its growth spurt. Fertilizing is essential to yielding a healthy harvest, but it’s expensive enough that he stresses about it, and, as he’s well aware, it’s not great for the planet.

Which is why next year, Misch is trying something new. In the spring he’ll douse his freshly furrowed corn seeds with a liquid probiotic for plants. As the seedlings grow, these special microbes will colonize their roots, forming hairy nodes and converting atmospheric nitrogen into a form that plants can use to turn sunlight into sugar. If all goes as planned, those little critters will produce 25 pounds of usable nitrogen for every acre of corn.

At least, that’s what Pivot Bio is promising. The Berkeley, California, biotech startup announced today its commercial launch of the first and only nitrogen-producing microbial treatment for US corn farmers. It’s not a complete replacement for fertilizer, but the product aims to reduce farmers’ reliance on it. Fertilizer production is a huge contributor to greenhouse gases. Once it’s on fields, it can leach into aquifers or run off into rivers, leading to toxic algae blooms. According to Pivot, if every corn farmer in the US followed Misch’s lead, it would be the environmental equivalent of removing one million cars from the road.

“We’re trying to create an ecological zeitgeist,” says Karsten Temme, CEO and cofounder of Pivot. The company also announced that it had raised $70 million in series B funding, led by Bill Gates’ Breakthrough Energy Ventures, an investment fund that aims to drastically cut greenhouse gas emissions.

Using underground microbes to solve modern agriculture’s biggest challenges is not an altogether new idea. Farmers have been lacing their fields with pest-killing bacteria for decades. But money only recently started flowing into Big Ag microbials. Startups like Pivot and Indigo Ag began hunting down useful organisms to turn into products in 2014. Last year, German biotech giant Bayer launched a $100 million joint venture with Ginkgo Bioworks, an organism-engineering outfit, to create self-fertilizing crops with the help of designer bacteria.

That company, now called Joyn Bio, is using every trick in the synthetic biology trade to engineer organisms that can do for corn and wheat what naturally occurring microbes do for legume crops like soybeans. It’s still two to three years from field testing its first product.

Pivot, on the other hand, saw a way forward with what nature had already provided. There were bacteria, they knew, living on corn roots with nitrogen-fixing genes encoded in their DNA. But because the process is so energy-expensive, those bacteria flipped the genes on only when needed. And because farmers always plant in fertilized fields, those genes had gone dormant over the decades. Someone just had to turn them back on. “What we’re trying to do is actually reawaken this function that the microbe has had all along,” says Temme.

But first, they had to find ’em. To start, Pivot bought buckets of soil from farmers throughout the US corn belt. Into that soil, the company’s scientists planted young corn seedlings called “bait plants” because of the substances they excrete to attract beneficial bacteria. Think of it like agricultural Tinder; the corn swipes right on microbes that make its life easier. Out of thousands of bugs that might be in the soil, the plant pulls out maybe a dozen or so. Then Pivot’s scientists grind up the roots and dab the mixture onto a plate of agar devoid of nitrogen. Anything that survives must be making its own.

Once they’ve got some good candidate bacteria, says Pivot scientist Sarah Bloch, they use gene editing tools to rewire their gene expression programs. The goal is to keep the nitrogen-fixing activity turned on, even in the presence of fertilizer. “We’ll take a promising strain and remodel it 100 different ways and test all those approaches to see which one is best,” she says. Sometimes, big breakthroughs come from surprising places. The strain that makes up Pivot’s first product, which will be available to farmers in select states for the next growing season, came from land in Missouri belonging to Bloch’s father.

“A lot of our early samples came from people we know—friends and family,” says Bloch. “It just turned out that we hit the jackpot on my dad’s sample.”

That product was tested in five generations of small field trials before going into larger trials this summer. Pivot worked with twenty-five farmers across the US to each grow a few acres using the liquid probiotic treatment. The harvest has yet to come in, and the data with it, but farmers like Misch are already excited. He learned on Twitter that an associate of his was participating in the trial, so Misch stopped by his farm to walk the test rows last week. What he saw convinced him to sign up to be Pivot’s first customer in Indiana. “If you go back 10 years, biologics get a mixed review from farmers,” says Misch. “These are living organisms and they act with every environment differently, but the science has come a long way.” The way Misch crunches the numbers, using this kind of probiotic plant treatment could save him $20 an acre, or about a third of his current annual nitrogen investment.

Is it enough to save the planet? Maybe not. But at least it's a step in the right direction.

WIRED ICON

Bill Gates, cofounder of Microsoft

NOMINATES

Stephen Quake, professor of bioengineering and applied physics at Stanford University and copresident of the Chan Zuckerberg Biohub


Few things trouble me as much as the fact that many cutting-edge medical advances aren’t available to everyone who needs them. Many lifesaving procedures require specialized equipment and trained technicians. If you don’t have a lot of money or live near a major hospital, you’re out of luck.

Stephen Quake wants to change that. By sampling the small amount of genetic material that circulates in the bloodstream, he’s replacing invasive, often painful procedures with cheaper, easier blood tests. He’s built a career out of turning highly specialized procedures into something simple that can be done anywhere, including the most remote places in the world.

Consider, for example, that doctors do not have a good way to predict if a baby will be born prematurely. This information would be lifesaving, as more than 600,000 infant deaths a year are caused by premature birth. In June, Quake and his team published a groundbreaking study (which our foundation helped fund) that showed you can predict a woman’s due date within a two-week window from a blood test. It works by looking at how RNA in her blood changes over the course of a pregnancy. We’re years away from doctors using this test during a checkup, but it could have a big impact worldwide. If a woman knows she might deliver early, she can work with her doctor to minimize risks.

The prematurity test is just the latest remarkable innovation from Quake. I first met him in 2007 when he was working on a blood test to detect genetic disorders like Down syndrome in a fetus. In the past year alone, more than 3 million women have taken it. Many of them were then able to avoid amniocentesis, the invasive and sometimes risky procedure previously required.

Quake and others are also pioneering research into blood tests for infectious diseases and even some cancers. Just like Quake’s prematurity screening, these tests are potentially less costly and require minimal training. Any health provider, anywhere in the world, could draw a blood sample and mail it to a lab for analysis.

I believe that noninvasive blood tests are the future of health care. More accurate, less expensive, and earlier diagnosis of issues will revolutionize how we treat people and prevent disease while reducing costs. This is the direction medicine is headed, and Stephen Quake is leading the way.

Grooming by Michelle Marshall

This article appears in the October issue.

MORE FROM WIRED@25: 1993-1998

  • Editor's Letter: Tech has turned the world upside down. Who will shake up the next 25 years?
  • Opening essay by Louis Rossetto: It's time for techies to embrace militant optimism again
  • Joi Ito and Neha Narula: Blockchain … for tyrants?
  • Jeff Bezos and the 10,000-year clock: Beyond civilization
  • Jaron Lanier and Glen Weyl: 3 radical paths to equality
  • Infinite Loop and Apple Park: A tale of two buildings

Join us for a four-day celebration of our anniversary in San Francisco, October 12–15. From a robot petting zoo to provocative onstage conversations, you won't want to miss it. More information at www.Wired.com/25.


Three hundred and sixty-four days ago, Jiwoo Lee’s friends helped her celebrate her 18th birthday by baking her a Rice Crispr cake. They bedecked the gooey, cereal-based treat with blue and red frosted double helixes in honor of her favorite high school hobby—gene editing. Lee, who won top awards at the 2016 Intel International Science and Engineering Fair, is one of the youngest champions of the “Crisprize everything!” brigade. Her teenage passion and talent with the molecular tool even caught the eye of Crispr co-discoverer Jennifer Doudna. On Monday, the eve of her 19th birthday, Lee explained to the audience at the WIRED25 Summit how Crispr works and what her hopes are for the potential of the disruptive technology to one day snip away all human disease.

“The next five to ten years hold enormous potential for discovery and innovation in medicine,” she said. Now a sophomore at Stanford, Lee described just how quickly things are moving. In the last year alone scientists have used Crispr to annihilate malaria-causing mosquitoes, cure Huntington’s disease in mice, and supercharge human immune cells to better seek and destroy cancer.

Crispr-based cancer treatments are of particular interest to Silicon Valley’s tech elite. The first human trial in the US kicked off this year, financed by the Parker Institute for Cancer Immunotherapy, a charity set up by Sean Parker of Napster and Facebook fame.

“At some point I got frustrated with the monoculture of the consumer internet world,” Parker remarked onstage. “It was unsatisfying spending all our time making products that were as addictive as possible.” And working with scientists like Alex Marson, a biologist and infectious disease doctor at UC San Francisco, takes him back to a time when the work, not the valuation, was the true reward.

“Where we are now with biotech feels quite a bit like where we were with information technology in the late 1990s,” said Parker. “When we were just interested in building these products that we thought would make the world a better place.”

Marson is a pioneer in the field, using Crispr to rewire T cells—the immune sentinels responsible for attacking bodily threats. In a recent Nature paper, he showed that with the right mix of genome-editing machinery and a zap of electricity, it was possible to rewrite vast stretches of code to give T cells dramatic new functions. That means they can be made to be more effective at killing cancer, becoming an assassin squad of Manchurian candidates targeting tumors. But that’s just the beginning.

“We think we’ll be able to start putting in new logic to the underlying code of immune cells to treat broad spectrums of disease, not just cancer,” said Marson. While he was hesitant to attempt a tech analogy in front of Parker, it was hard to avoid. Scientists have begun to think about cells as hardware, and the DNA inside them as the software packages that tell them what to do. Marson noted that advances in the ability to make and edit vast quantities of human cells have already delivered the first cell-based medicines to market—the first treatments, for cancer, were approved last year by the US Food and Drug Administration.

With the hardware problem largely solved, Marson believes the next step is to get better at building the instruction packages. And Crispr is the tool that’s making it possible. “It’s helping us iterate faster and faster to discover which software programs will work for which diseases.”

The government’s new weather forecast model has a slight problem: It predicts that outside temperatures will be a few degrees colder than what nature delivers. This “cold bias” means that local meteorologists are abandoning the National Weather Service in favor of forecasts produced by British and European weather agencies.

For the past few weeks, the National Weather Service has been forecasting snowfall that ends up disappearing, according to Doug Kammerer, chief meteorologist at WRC-TV in Washington, DC. “It’s just not performing well,” Kammerer says. “It has continued to show us getting big-time snowstorms in this area, where the European model will not show it.”

The new model, known as GFS-FV3 (Finite Volume on a Cubed Sphere dynamical core), has often overpredicted snow in the Northeast Corridor between Washington and Boston, a region where incorrect forecasts affect the lives of tens of millions of people.

The existing NWS forecast model, called the Global Forecast System, or GFS, has long been considered second in accuracy to the European models. Now Kammerer and others say the new FV3 upgrade is worse than the forecast model put out by our neighbors to the north. “The running joke now among meteorologists is that [the FV3] is looking more like the Canadian model,” Kammerer says. For those not plugged into weather humor, apparently the Canadian model also predicts big snowstorms that ultimately vanish.

The FV3 was developed over the past three years by NOAA’s Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. FV3 forecasts were released a few weeks ago for testing by local meteorologists, and many of them took to Twitter to complain about the results. “I have no faith in the FV3 [for snowfall],” tweeted Boston-based Judah Cohen, a meteorologist at Atmospheric and Environmental Research, a private firm that provides forecasts to commercial and government clients.

On Wednesday, the National Weather Service tweeted that the FV3 would be fully operational on March 20. But an NWS official told WIRED on Friday that the agency might push it back a few weeks because of all the complaints.

The FV3 upgrade uses an enhanced set of algorithms that have been developed in the past few years by climate scientists to describe the interaction between the atmosphere and the oceans. These algorithms, which capture the physics of cloud formation, tropical storms, and polar winds, among other things, are then populated with temperature data from satellites and surface observations to generate a three- or ten-day forecast.

“No model is perfect,” says David Novak, acting director of the NWS’ National Centers for Environmental Prediction. “The weather community knows this.” Novak acknowledges that the FV3 has a “cold bias” and that the agency is working to fix it. “It tends to be colder than what is observed. It appears to be a systematic issue, we are doing our due diligence and investigating these reports.”

Novak says the 35-day government shutdown slowed final testing of the FV3. When federal climate scientists and programmers got back to work on January 25, the agency expected the model to be almost ready to go live. It looks like that deadline will now be pushed back.

He argues, however, that the FV3 isn’t all bad. He says it produces more accurate forecasts of hurricane intensity and the jet stream, the current of high-altitude air around the northern hemisphere that drives much of the United States’ weather patterns. “We found a lot of the good things,” Novak says. “We do know there are some areas that may need additional improvement.”

NOAA recently signed an agreement with the National Center for Atmospheric Research, a Boulder-based research facility that also develops forecast models. Antonio Busalacchi, president of NCAR’s parent organization, says he’s optimistic that the new NWS model will get better over time. “It’s premature to evaluate any one modeling system based on a snapshot with snowfall forecasts,” Busalacchi says. “One needs to look at the totality of the system.”

At the same time, Busalacchi says that NWS and its parent agency, NOAA, might want to rely on help from academic scientists who are developing their own forecast models. “We want to get in a position where the research community and operational community are more collaborative than we have been in the past,” he says.

As for Kammerer, he says he'll keep watching the new NWS model as he prepares his own forecast for the weather in Washington, DC. But maybe not for the next snow day.