
And just like that, humanity draws one step closer to the singularity, the moment when the machines grow so advanced that humans become obsolete: A robot has learned to autonomously assemble an Ikea chair without throwing anything or cursing the family dog.

Researchers report today in Science Robotics that they’ve used entirely off-the-shelf parts—two industrial robot arms with force sensors and a 3-D camera—to piece together one of those Stefan Ikea chairs we all had in college before it collapsed after two months of use. From planning to execution, it only took 20 minutes, compared to the human average of a lifetime of misery. It may all seem trivial, but this is in fact a big deal for robots, which struggle mightily to manipulate objects in a world built for human hands.

To start, the researchers give the pair of robot arms some basic instructions—like those cartoony illustrations, but in code: this piece goes into that one first, then the next, and so on. Then they place the pieces in a random pattern in front of the robots, which eyeball the wood with the 3-D camera. So the researchers give the robots a list of tasks, then the robots take it from there.

“What the robot does is to first figure out where exactly is the original position of the frame,” says engineer Quang-Cuong Pham of Nanyang Technological University in Singapore, “and then calculates the motion of the two arms automatically to go and grasp it and transport it.”

As one arm grasps, say, the back of the chair, the other arm picks up one of those infernal wooden pegs and tries inserting it into a hole at the joint. That 3-D camera only has an accuracy of a few millimeters, so the robot has to feel around. The robot makes swirling motions around the hole, and when it feels the force pattern change, it knows the peg has dropped in slightly, then will apply more force to fully insert the thing.

This, though, is where the robot tends to have problems. If it hasn’t scanned the hole accurately enough, it might start swirling too far away—all the way over the edge of the piece. “Then the changes in force pattern are the same, so it would think that it has found the hole and it would go and insert in the void,” says Pham.
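The team’s actual control code isn’t in the story, but the search Pham describes—spiral around the estimated hole position, watch the force reading for the telltale drop, then push—looks roughly like the sketch below. The arm interface, thresholds, and spiral parameters here are all invented for illustration.

```python
import math

def spiral_insert(arm, hole_estimate, step=0.0005, max_radius=0.004,
                  drop_threshold=2.0, insert_force=15.0):
    """Rough sketch of a spiral peg search using force feedback.

    hole_estimate  -- (x, y) from the 3-D camera, accurate to a few millimeters
    step           -- how much the spiral radius grows per revolution (meters)
    max_radius     -- give up past this radius, so we never swirl off the edge
    drop_threshold -- drop in vertical force (newtons) read as "peg fell in"
    """
    angle, radius = 0.0, 0.0
    baseline = arm.read_vertical_force()            # hypothetical sensor call
    while radius < max_radius:
        x = hole_estimate[0] + radius * math.cos(angle)
        y = hole_estimate[1] + radius * math.sin(angle)
        arm.move_to(x, y, press_force=5.0)           # keep light downward pressure
        if baseline - arm.read_vertical_force() > drop_threshold:
            # Force pattern changed: the peg has slipped partway into the hole.
            arm.push_down(insert_force)
            return True
        angle += 0.3                                 # advance along the spiral
        radius = step * angle / (2 * math.pi)
    return False  # never felt the drop; better to re-scan than insert into the void
```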

Matters grow more complicated when the robot arms have to grip either end of a larger piece of the chair. Not only does each robot arm have to calculate its own grasping and lifting motion, but it has to do so in consideration of the other arm. Imagine grasping the ends of a baseball bat with both hands and swirling it around—each arm is restricted by the movements of the other.

The stakes are even higher for the robot because it’s making calculations as it’s eyeballing the pieces, and has to commit to the plan it works out. “If there is a small error, for example in the modeling of the object, then the arms would fight each other, pulling this direction and the other pulling in another direction,” says Pham. “If that happens the robot will break the object.”

The solution is the force sensors. “When we sense that the force is too much, then it would change the motion of the robot to accommodate the errors,” Pham adds.

Pretty impressive stuff, but the fact remains that the researchers have to do a good amount of hand-holding. “This is a nice result,” says UC Berkeley’s Ken Goldberg, who works in robotic manipulation. “The big challenge is to replace such carefully engineered special purpose programming with new approaches that could learn from demonstrations and/or self-learn to perform tasks like this.”

Which is exactly what the researchers are now working on. The next level of autonomy could be something called imitation learning, in which a human either joysticks the robots through the tasks in the right sequence, or the robot watches a human do the job and then mimics it.

The ultimate goal? “The final level is we show the robot an image of the assembled chair and then it has to figure it out,” says Pham. “But I would envision this last step not in the next probably five or six years or so.”

This kind of advanced learning will be essential for robots going forward, because there’s just no way engineers can program them to manipulate every object they come across in the complicated world of humans. That means facing challenges including but not limited to bringing down the tyranny of flat-packed Ikea furniture.

Curse you, Stefan. Curse you.

More helpful robots

  • Over at UC Berkeley, engineers have taught Brett the robot to teach itself to master a children's game by failing over and over.

  • As far as imitation learning is concerned, a startup called Kindred is helping picker robots learn to manipulate products in fulfillment centers.

  • Journey inside the Panoptic Studio, which is giving robots the super senses necessary to explore our world.


About 40 years ago, Louise Brown, the first human created using in vitro fertilization, was conceived in a petri dish. Not long after her birth, Leon Kass, a prominent biologist and ethicist at the University of Chicago, wrung his hands about the then-revolutionary technology of joining sperm and egg outside the body. The mere existence of the baby girl, he wrote in an article, called into question “the idea of the humanness of our human life and the meaning of our embodiment, our sexual being, and our relation to ancestors and descendants.” The editors of Nova magazine suggested in vitro fertilization was “the biggest threat since the atom bomb.” The American Medical Association wanted to halt research altogether.

Yet a funny thing happened, or didn’t, in the decades that followed: Millions of babies were conceived using IVF. They were born healthy and perfectly normal babies, and they grew to become healthy and perfectly normal adults. Brown is one of them. She lives in Bristol, England, and works as a clerk for a sea freight company. She’s married and has two healthy boys. Everyone is doing fine.

Nothing so excites the forces of reaction and revolution like changes in human reproduction. When our ideas of sex are nudged aside by technologies, we become especially agitated. Some loathe the new possibilities and call for restrictions or bans; others claim untrammeled rights to the new thing. Eventually, almost everyone settles down, and the changes, no matter how implausible they once seemed, become part of who we are.

We are now on the brink of another revolution in reproduction, one that could make IVF look quaint. Through an emerging technology called in vitro gametogenesis (or IVG), scientists are learning how to convert adult human cells—taken perhaps from the inside of a cheek or from a piece of skin on the arm—into artificial gametes, lab-made eggs and sperm, that could be combined to create an embryo and then be implanted in a womb. For the infertile or people having trouble conceiving, it would be a huge breakthrough. Even adults with no sperm or eggs could conceivably become biological parents.

In the future, new kinds of families might become possible: a child could have a single biological parent because an individual could theoretically make both their own eggs and sperm; a same-sex couple could have a child who is biologically related to both of them; or a grieving widow might use fresh hair follicles from a dead spouse’s brush to have a child her late husband didn’t live to see.

At the same time, modern gene-editing technologies such as Crispr-Cas9 would make it relatively easy to repair, add, or remove genes during the IVG process, eliminating diseases or conferring advantages that would ripple through a child’s genome. This all may sound like science fiction, but to those following the research, the combination of IVG and gene editing appears highly likely, if not inevitable. Eli Adashi, who was dean of medicine at Brown University and has written about the policy challenges of IVG, is astounded by what researchers have achieved so far. “It’s mind-boggling,” he says, although he cautions that popular understanding of the technology has not kept pace with the speed of the advances: “The public is almost entirely unaware of these technologies, and before they become broadly feasible, a conversation needs to begin.”

The story of artificial gametes truly begins in 2006, when a Japanese researcher named Shinya Yamanaka reported that he had induced adult mouse cells into becoming pluripotent stem cells. A year later, he demonstrated that he could do the same with human cells. Unlike most other cells, which are coded to perform specific, dedicated tasks, pluripotent stem cells can develop into any type of cell at all, making them invaluable for researchers studying human development and the origins of diseases. (They are also invaluable to humans: Embryos are composed of stem cells, and babies are the products of their maturation.) Before Yamanaka’s breakthrough, researchers who wanted to work with stem cells had to extract them from embryos discarded during IVF or from eggs that had been harvested from women and later fertilized; in both cases, the embryos were destroyed in the process of isolating the stem cells. The process was expensive, controversial, and subject to intense government oversight in the United States. After Yamanaka’s discovery, scientists possessed a virtually inexhaustible supply of these so-called induced pluripotent stem cells (or iPSCs), and all over the world, they have since been trying to replicate each stage of cellular development, refining the recipes that can coax stem cells to become one cell or another.

In 2014, as a consequence of Yamanaka’s work, a Stanford researcher named Renee Reijo Pera cut skin from infertile men’s forearms, reprogrammed the skin cells to become iPSCs, and transplanted them into the testicles of mice to create human germ cells, the primitive precursors to eggs and sperm. (No embryos were created using these germ cells.) Two years later, in a paper published in Nature, two scientists in Japan, Mitinori Saitou and Katsuhiko Hayashi, described how they had turned cells from a mouse’s tail into iPSCs and from there into eggs. It was the first time that artificial eggs had been made outside of an organism’s body, and there was even more extraordinary news: Using the synthetic eggs, Saitou and Hayashi created eight healthy, fertile pups.

But baby mice do not a human make, and Saitou and another scientist, Azim Surani, are each working directly with human cells, trying to understand the differences between how mice and human iPSCs become primordial germ cells. In December 2017, Surani announced a crucial milestone concerning the eight-week cycle, after which germ cells begin the process of transforming into gametes. His lab had successfully nudged the development of stem cells to around week three of that cycle, inching closer to the development of a human gamete. Once adult human cells can be made into gametes, editing the stem cells will be relatively easy.

How soon before humans have children using IVG? Hayashi, one of the Japanese scientists, guesses it will take five years to produce egg-like cells from other human cells, with another 10 to 20 years of testing before doctors and regulators feel the process is safe enough to use in a clinic. Eli Adashi is less sure of the timing than he is of the outcome. “I don’t think any of us can say how long,” he says. “But the progress in rodents was remarkable: In six years, we went from nothing to everything. To suggest that this won’t be possible in humans is naive.”

Some cautiousness about IVG and gene editing is appropriate. Most medicines that succeed in so-called mouse models never find a clinical use. Yet IVG and gene editing are different from, say, cancer drugs: IVG induces cells to develop along certain pathways, which nature does all the time. As for gene editing, we are already beginning to use that in non-germ-line cells, where such changes are not heritable, in order to treat blood, neurological, and other types of diseases. Once scientists and regulators are confident they have minimized the potential risks of IVG, we could easily make heritable changes to germ cells like eggs, sperm, or early-stage embryos, and with those changes, we’d be altering the germ line, our shared human inheritance.

With the two technologies used together, we can imagine would-be parents who have genetic diseases, or are infertile, or want to confer various genetic advantages on their children going to a clinic and swabbing their cheeks or losing a little piece of skin. Some 40 weeks later, they’ll have a healthy baby.


The demand for IVG coupled with gene editing would be significant. Around 7 percent of men and 11 percent of women of reproductive age in the US have reported problems with fertility, according to the National Institutes of Health. And IVF, which is typically the last, best hope for those struggling to conceive, is invasive, often doesn’t work, and can’t work for women who have no eggs at all.

Then there is genetic disease. Of the more than 130 million children who will be born next year, around 7 million will have serious genetic disorders. Today, parents who don’t want to pass on genetic abnormalities (and who have the thousands of dollars often required) might resort to IVF with preimplantation genetic diagnosis, where embryos are genetically tested before they are transferred to a woman’s uterus. But that process necessarily involves the same invasive process of IVF, and it entails rejecting and often destroying embryos with the unwanted genes, an act that some parents find morally impermissible. With IVG and gene editing, prospective parents would think it unremarkable to give doctors permission to test or alter stem cells or gametes. A doctor might say, “Your child will have a higher chance of developing X. Would you like us to fix that for you?”

Proving that IVG and gene editing are broadly safe and reliable will be necessary before regulatory agencies around the world relax the laws that currently preclude creating a human being from synthetic gametes or tinkering with the human germ line. Although IVF was greeted with alarm by many mainstream physicians and scientists, it nonetheless was subject to little regulation; it slipped through the federal regulatory machinery charged with overseeing drugs or medical devices, as it was neither. Because IVG and gene editing are so strange, there may be popular and expert demand for their oversight. But in what form? Richard Hynes, a professor of cancer research at MIT, helped oversee a landmark 2017 report on the science and ethics of human genome editing. “We set out a long list of criteria,” Hynes says, “including only changing a defect to a gene that was common in the population. In other words, no enhancements; just back to normal.”

Critics imagine other ethical quandaries. Parents with undesirable traits might be coerced by laws—or, more likely, preferential insurance rates—to use the technologies. Or parents might choose traits in their children that others might consider disabilities. “Everyone thinks about parents eliminating disease or [about] augmentation, but it’s a big world,” says Hank Greely, a professor of law at Stanford University and the author of The End of Sex and the Future of Human Reproduction. “What if there are parents who wanted to select for Tay-Sachs disease? There are plenty of people in Silicon Valley who are somewhere on the spectrum, and some of them will want children who are neuro-atypical.”

And what of unknown risks? Even if Saitou, Hayashi, and their peers can prove that their techniques don’t create immediate genetic abnormalities, how can we know for sure that children born using IVG and gene editing won’t get sick later in life, or that their descendants won’t lack an important adaptation? Carriers of the gene for sickle cell, for example, enjoy a protective advantage against malaria. How can we know if we are shortsightedly eliminating a disorder whose genes confer some sort of protection?

George Daley, the dean of Harvard Medical School, has a simple answer to that question: We can’t. “There are always unknowns. No innovative therapy, whether it is a drug for a disease or something so bold and disruptive as germ line intervention, can ever remove all possible risk. Fear of the unknown and unquantifiable risks shouldn’t absolutely prohibit us from making interventions that could have great benefits. The risks of a genetic, inherited disease are quantifiable, known, and in many cases devastating. So we go forward, accepting the risks.”

Among the current unknowns are the name and sex of the first child who will be born using IVG. But somewhere there might be two people who will become her parents. They may not know each other yet or the difficulties with fertility or genetic disease that will prompt their physician to suggest IVG and gene editing. But sometime before the end of the century, their child will have her picture taken for a birthday profile in whatever media exists. In the likeness, her smile, like Louise Brown’s today, will be radiant with the joy of being here.



Jason Pontin (@jason_pontin) is the former editor in chief and publisher of MIT Technology Review.



If you think your on-the-job training was tough, imagine what life is like for newbie surgeons. Under the supervision of a veteran doctor, known as an attending, trainees help operate on a real live human, who might have a spouse and kids—and, if something goes awry, a very angry lawyer.

Now add to the mix the da Vinci robotic surgery system, which operators control from across the room, precisely guiding instruments from a specially designed console. In traditional surgery, the resident gets hands-on action, holding back tissue, for instance. Robotic systems might have two control consoles, but attendings rarely grant residents simultaneous control. UC Santa Barbara’s Matt Beane—who recently published a less-than-rosy report on robot training for residents—says he never once saw this happen.

Beane judged the state of the field by collecting interviews with surgeons and observations of hundreds of traditional and robotic procedures. (Robots, by the way, are good for things like hysterectomies or removing cancerous tissue from a prostate.) What he found was troubling: During minimally invasive robotic procedures, residents sometimes get just five or 10 minutes at the controls on their own.

“Even during that five or 10 minutes during practice, I'm helicopter-teaching you,” Beane says. “Like, ‘No no no no!’ Literally that kind of stuff. ‘Why would you ever do that?’ So after five minutes you're out of the pool and you feel like a kid in the corner with your dunce cap on.”

Some medical schools put more emphasis on robotic training than others. But Beane has found that a worrying number of residents struggle mightily in this environment. “I realized, good god, almost none of these residents are actually learning how to do surgery,” he says. “It's just failing.” Beane reckons that, at most, one out of five residents at top-tier institutions is succeeding at robotic surgery.

That’s especially troubling considering that the da Vinci robot, the pioneer in a growing class of medical robo-assistants, has been in service for almost two decades. The benefits of the system are obvious: precision, cleanliness, reduced fatigue. But those benefits only materialize if medical schools are properly training their residents on the system. (Intuitive Surgical, maker of the da Vinci system, declined to comment for this story.)

The da Vinci system is indeed designed to accommodate residents in training, thanks to that secondary console. “The resident will be watching either on a monitor or on the second console,” says Jake McCoy, a urology resident at Louisiana State University. “At some point either the attending decides it's an appropriate time for the resident to take over, or if the resident wants to speak up and say something, then the resident might get control of the robot.” But McCoy is nearly done with his training, and he says he's never worked a case from start to finish. “There are certain parts that they just absolutely won't let me do.”

“I think at this point I'm going to be a little bit hesitant, or at least a little bit wary, to go out and unsupervised do any case that has any bit of complexity,” he adds.

Which is not to say that no residents are getting fully trained on robotic surgery. “I think we get excellent robotic experience, and I’m very comfortable doing certainly standard procedures and maybe even more complex ones on my own,” says Ross McCaslin, a urologic surgery resident at Tulane. For McCaslin, that competence came in part from doing base-level training in simulators, just like a pilot would, supplemented by real patient experience. (All programs that Beane studied required simulator training.)

Same as it ever was, though. “It's a dirty little secret, but even when we did open surgery there are residents that are trained better than others depending on what program they're at and who their mentors are,” says Jonathan Silberstein, chief of the urologic oncology section at Tulane.

Good training, whether in open or robotic surgery, requires extreme patience. “Training can slow us down,” Silberstein adds. “It can add significant time to an operation. It certainly increases my stress level, my blood pressure, the number of gray hairs I have. But that's our duty as physicians who have accepted this responsibility to train the next generation.”

While teaching by way of robotics may have its challenges, it also has its perks. For one, in open surgery, a resident and an attending have literally a different view of the procedure, which gets particularly complicated in a labyrinthine system of overlapping organs. But with the robot, they see the exact same picture through a camera. And after the surgery, the attending can walk the resident through a recording of the procedure, a sort of play-by-play for the operating room.

But residents who aren’t so well-nurtured tend to slip into what Beane calls shadow learning. They go out of their way to load up on simulations, or binge on YouTube videos of procedures. Which seems useful until you consider that attendings notice they're improving and give them more time at the console at the expense of other residents.

The good news hidden in all of this? Maybe surgery training gaps won’t be a problem for long. “Many of the very advanced surgeons at top institutions that I talked to say surgery definitely has a half-life,” says Beane. “In 50 years we're going to look back and be like, What? You wounded someone to try to heal them? What?” He’s thinking about noninvasive solutions like nanobots.

The field of surgery was early to the robotics game, and although the da Vinci system comes with serious costs, robotic surgery also means less recovery time and therefore less hospitalization. (Malfunctioning surgery robots, though, have also been implicated in patient injuries.) But the future will see doctors ceding ever more control to the machines, and then the difficulties of training residents will be history. “Maybe this problem is just going to suck for a little while,” says Beane. “And then people won't do that anymore.”

More Medical Robotics

  • Robot surgeons need to train, just like humans do. So researchers at UC Berkeley have developed a shifting platform that simulates the heaving body of a living patient.

  • Implantable robots also hold great promise in medicine. Take, for instance, a robotic sleeve that fits over the heart to keep it pumping.

  • In less … invasive medical robotics, we'd like you to meet Tug, the charming robot that roams hospitals delivering drugs and food.


Self-driving cars have it rough. They have to detect the world around them in fine detail, learn to recognize signals, and avoid running over pets. But hey, at least they’ll spend most of their time dealing with other robot cars, not people.

A delivery robot, on the other hand, roams sidewalks. That means interacting with people—lots of people—and dogs and trash and pigeons. Unlike a road, a sidewalk is nearly devoid of structure. It’s chaos.

Block by block, a San Francisco startup called Marble has been trying to conquer that chaos with a self-driving delivery cart. Today the company is announcing a new, more powerful robot it hopes is up to the task—and that will prove to skeptical regulators that the machines are smart enough to operate safely on their own.

Marble’s previous robot is what we might call semi-autonomous. It can find its way around, but a human chaperone always follows to remote-control it out of trouble. But that’s a temporary measure—Marble wants to make these things proficient enough to find their own way around the people and the buskers and the intersections. One particularly important upgrade is extra cameras to fill in blind spots. “As you might imagine, one of the challenges that we have is seeing a small curb, understanding where it is, and driving around that in a sensible way, or telling the difference between a dog's tail and a stick,” says Kevin Peterson, Marble cofounder and software lead. “So the upgrades in cameras have improved that.”

The new robot also has three times the amount of computing power, meaning it can crunch more data coming in from the environment. That’ll be essential for getting the robot to go fully autonomous. The idea is that instead of following the robot around, a human chaperone could someday sit in a call center and monitor a fleet of robots from afar. (Babysitter for robots is actually a hot new job, by the way.)

To get to that point, though, Marble has to be sure its robot can follow the rules of the road. “We believe it's important to have a very polite robot that understands the kind of cues of walking through a crowd,” says Matt Delaney, CEO and cofounder of Marble. Self-driving cars have nice orderly lanes, but think about what happens when you’re walking right at another person on a sidewalk.

Human behavior on a sidewalk is weirdly complex. You know that thing where a group in front of you is walking just too slow for your liking, and you’ve got to turbo around them? Or if you’re feeling lazy, you just slow down a bit to match their speed. But you don’t follow too close, because you’re not a weirdo.

So Marble has been rating the robot’s interactions with people on the street. “When we see something really awkward happen in real life, we take that and reproduce it,” says Peterson. “There we come up with a scoring system and sort of evaluate how the system is doing.” Thus the team can objectively score the human notion of “awkwardness.”

Marble is learning that a robot has to nonverbally telegraph its intentions if it expects to get anywhere. Take a super-crowded intersection, for instance. Lots and lots of cars in a delicate ballet coming from all directions. The robot can’t just sit there and expect everyone to notice. “If you're too cautious then cars will just drive, if you're too bold then of course you get hit,” says Peterson. “So there's a sweet spot, where the vehicle has to wait an appropriate amount of time and indicate that it's going into the world.”

That means inching out a bit to nonverbally announce, Hey, I’m not just waiting on this street corner. I need to get across. Ideally at that point drivers let it pass like they would for a human pedestrian. The robot has analyzed the scenario and chosen the course of action that’s both efficient and safe.

On a more subtle level, the design of Marble’s robot also seems to telegraph information. For one, this new version is pared down (yet still carries the same amount of payload), perhaps giving it a friendlier vibe. “What we found is that people just enjoy the vehicle more if it's smaller,” says Delaney.

Though people shouldn’t enjoy it too much. Marble has found its robot to be very … approachable. Pedestrians will stop and stand in its way, apparently to test the soundness of its sensors. Which may be inevitable in these early days of street-roaming robots—humans still want to test the novel system. So Marble’s robot comes equipped with a microphone and speaker for the human chaperone who follows it to remind gawkers that the machine is on the job.

Marble’s robot has also attracted the attention of San Francisco regulators. Last December, the Board of Supervisors voted to severely restrict the machines to areas with low foot traffic. “The business model is basically get as many robots out there to do deliveries and somebody in some office will monitor all these robots,” San Francisco supervisor Norman Yee told WIRED at the time. “So at that point you're inviting potential collisions with people.”

Yes, whether or not we trust robots to not run down pedestrians is now a conversation we need to have. But Marble and other companies that have unleashed robots on cities are learning fascinating lessons in human-robot interaction, lessons that will shape a world we’ll be sharing with more and more machines. So if you see a robot waiting to cross the street, be patient. It’s working harder than you think.

More robots

  • We got a ride-along with Marble's robot last year to see how hard it is to deal with dogs and buskers.

  • Marble isn't the only delivery robot out there. This little machine delivers pizza.

  • Also in San Francisco, a security robot found itself in trouble after it allegedly disrupted a homeless camp.



Whether they believe robots are going to create or destroy jobs, most experts say that robots are particularly useful for handling “dirty, dangerous and dull” work. They point to jobs like shutting down a leaky nuclear reactor, cleaning sewers, or inspecting electronic components to really drive the point home. Robots don’t get offended, they are cheap to repair when they get “hurt,” and they don’t get bored. It’s hard to disagree: What could possibly be wrong about automating jobs that are disgusting, mangle people, or make them act like robots?

WIRED OPINION

ABOUT

Matt Beane (@mattbeane) received his PhD from MIT's Sloan School of Management and is now a faculty member in UC Santa Barbara's Technology Management program and a research affiliate with MIT’s Initiative on the Digital Economy.

The problem is that installing robots often makes the jobs around them worse. Use a robot for aerial reconnaissance, and remote pilots end up bored. Use a robot for surgery, and surgical trainees end up watching, not learning. Use a robot to transport materials, and workers that handle those materials can no longer interact with and learn from their customers. Use a robot to farm, and farmers end up barred from repairing their own tractors.

I know this firsthand: For most of the last seven years, I have been studying these dynamics in the field. I spent over two years looking at robotic surgery in top-tier hospitals around the US, and at every single one of them, most nurses and surgical assistants were bored out of their skulls.

In an open procedure—doing surgery with scalpels, retractors, sponges, and large incisions—nurses and scrubs are part of the action, with a regular and dynamic flow of critical work to do. They can learn a lot about surgery, trauma, anatomy, and organizational operations. It’s dirty, dangerous, and interesting work. People who study collaborative work agree: Often, dirt, danger and drudgery mean that you’ve got your hands on a satisfying job—it challenges you, you’re doing something meaningful for others, and you get respect.

For many support workers, robotic surgery is much less satisfying than open surgery. There’s a huge amount of solitary setup work to allow the robot to work, then there’s a big sprint to get the robot draped and docked to the patient. And then…everyone watches the procedure on TV. While the surgeon is operating via an immersive 3-D control console, the scrub folds his arms and waits. The nurse sits in the corner at a PC entering data, or sometimes checking email or Facebook. There’s not a lot to do, but you always have to be ready. Compared to open surgery, it's clean, safe, and dull work.

At most of these hospitals, robots have been in service for over a decade—and conditions haven't improved. Though workers and executives sensed deeper problems, they didn’t advocate strongly enough to make changes. On paper, things seemed to be working: the focal task had been “improved” via cutting-edge technology, patient results looked fine, and the hospital workers still had jobs (albeit duller ones).

Across my studies, the pattern is similar. The robot gets installed, handling a focused set of dirty, dangerous, or boring tasks. Efforts to redesign the work slow to a trickle: once results are the same as or slightly better than before, the redesign stops there. This means organizations miss innovative work designs and instead settle on ones that make the work worse: less challenging, with fewer opportunities to learn and relate to other people in the process.

There’s good evidence that this dynamic is hard to dodge; try this 1951 study of coal mining on for size. Without proof that the new robotic install could be better, no one is motivated enough to try out alternative approaches. As we automate work like trucking, people transport, or package delivery—things that touch hundreds of thousands or even millions of people—these effects will get worse.

From the front lines, it seems clear that organizations that take robots as an opportunity to learn will come out ahead. Surgery’s a good example: putting a robot in the operating room left many workers in the lurch, but it also revealed how hospitals might improve robotic surgical work. Nurses and surgical technicians might now help across simultaneous procedures, for example, or could even formally train surgical residents who are starved for attention and practice.

Getting these clues takes careful, boots-on-the-ground attention to the entire work system as it changes. Using them to guide a broader work redesign can cost more than a typical robotic install—and not all roboticization is worth equal attention. But not doing this work guarantees an outcome we can't afford: a future of degrading work.


Watch a Robot 'Hen' Adopt a Flock of Chicks


I don’t want to tell these baby chickens how to live, but they’re going about their business all wrong. The cylindrical robot in their pen looks nothing like a hen, and it makes decidedly un-hen-like beeps, yet the chicks trail it obsessively, as if it’s their mother. Where the PoulBot goes, so too go the little yellow fluffs. Beep beep beep, says the robot. Chirp chirp chirp, say the chicks.

The idea behind this pairing, developed by researchers from several European universities, isn’t to give the chicks a complex—I promise—but to parse the extreme complexities of animal behaviors, especially as those behaviors manifest in groups. The ultimate goal is to develop robots that behave with the complexity of living beings so they can interact more realistically with actual animals.

The secret is imprinting. Around 5 hours after they hatch, chicks begin to grow deeply attached to their mother. It’s such a strong instinct that if something, anything, moves, chances are a chick will form a bond with it. That’s why farmers—at least the small-scale ones—go out of their way to bond themselves to their birds. It makes the critters more manageable.

And researchers can use imprinting to trick chicks into falling in love with robots. First they put the chicks in little plexiglass boxes from which they watch PoulBot scoot back and forth. All the while the robot calls out, though not with pre-recorded hen sounds. “If you start to emit real sound, you have to understand what those real sounds mean, and you have to translate chicken language,” says Université Paris Diderot physicist José Halloy, co-author of a new paper detailing the process. So the robot makes sounds that are chicken-ish, which helps the creatures bond to it.

Now the chicks are ready to meet their adopted mother face-to-face in a little pen. PoulBot isn’t programmed to act like a classical chicken mom, though. Instead, it leads the chicks to a particular spot in the pen, constantly monitoring who’s following. “If someone is missing you have to go back and fetch them, stimulate the chicks to follow, and then go back to the target,” says Halloy.

An overhead camera tracks each chick, and PoulBot has a special covering around its base so the animals don’t get their toes squished in the tracks. (Tracks instead of wheels, by the way, so the works don’t get gunked up with chick crap. It’s a tank on a battlefield of excrement.) The researchers also programmed PoulBot with a behavior called avoid-running-over-chick. “If a chick has fallen asleep during the experiment and hence lies below the level of the sensors,” they write in their paper, they don't want it to be in danger. PoulBot must not kill its fuzzy babies! So it uses accelerometer readings to tell if it’s no longer on flat ground, and will back up accordingly. “The results are not very interesting if you destroy half of your animals during your experiments,” says computer engineer and study co-author Alexey Gribovskiy of the École Polytechnique Fédérale de Lausanne in Switzerland.

Now, while the majority of chicks imprint on the robot, they imprint on it to different degrees, which is important because that influences the dynamics of the group. “Obviously if you have only strongly imprinted chicks, you get the military march,” says Halloy. “Everybody follows the leader. If you have a bunch of mixed weakly imprinted and strongly imprinted and in-between chicks, you have some kind of organized chaos there.”

Some chicks follow the robot and some chicks follow other chicks, creating a dynamic mob that’s tracked by the overhead camera. Algorithms even calculate their speed and acceleration, classifying every chick by how it’s behaving. This tells the researchers not only how well the robot is indoctrinating the subjects, but how chicks can vary in their acceptance of a fake mother.
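The group’s tracking code isn’t public, but the bookkeeping described here—differentiating each chick’s position track to get speed and acceleration, then labeling the bird by how closely it trails the robot—might look something like this minimal sketch, with the thresholds and labels invented for illustration:

```python
import numpy as np

def classify_chick(chick_xy, robot_xy, dt=0.1,
                   follow_radius=0.25, moving_speed=0.05):
    """Toy classifier for one chick from overhead-camera tracks.

    chick_xy, robot_xy -- arrays of shape (T, 2), positions in meters
    dt                 -- time between camera frames, in seconds
    """
    velocity = np.diff(chick_xy, axis=0) / dt
    acceleration = np.diff(velocity, axis=0) / dt
    speed = np.linalg.norm(velocity, axis=1)
    dist_to_robot = np.linalg.norm(chick_xy - robot_xy, axis=1)

    if np.median(dist_to_robot) < follow_radius:
        label = "following robot"          # the "military march" chicks
    elif np.median(speed) > moving_speed:
        label = "following other chicks"   # moving, but not glued to PoulBot
    else:
        label = "resting"
    return label, speed.mean(), np.linalg.norm(acceleration, axis=1).mean()
```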

Now, developing animal behavior models to power robots is hard. I can’t do it, and you probably can’t do it. “It takes a PhD to build a model, which means four years of work,” says Halloy. The PoulBot speeds that process up. “The idea was to use robots and artificial intelligence to automate as much as possible to produce a model faster,” Halloy adds. That’s right—postdocs aren't safe from automation either.

Unravel the intricacies of flocking behavior and figure out what cues a robot needs to send to get an animal to accept it as a mother, and you can build robots that get animals to do certain tasks. “I could imagine scenarios where robots act to lead animals to a food source or a medical treatment area without stressing them,” says ecologist and biomimetic roboticist David Bierbach, who wasn’t involved in the research.

The shepherds on the farms of the future, then, may well be robots. Robots on tracks, not wheels, of course.

What do you do when you discover you’re wrong? That’s a conundrum Daniel Bolnick recently faced. He’s an evolutionary biologist, and in 2009 he published a paper with a cool finding: Fish with different diets have quite different body types. Biologists had suspected this for years, but Bolnick offered strong confirmation by collecting tons of data and plotting it on a chart for all to see. Science for the win!

The problem was, he’d made a huge blunder. When a colleague tried to replicate Bolnick’s analysis in 2016, he couldn’t. Bolnick investigated his original work and, in a horrified instant, recognized his mistake: a single miswritten line of computer code. “I’d totally messed up,” he realized.

But here’s the thing: Bolnick immediately owned up to it. He contacted the publisher, which on November 16, 2016, retracted the paper. Bolnick was mortified. But, he tells me, it was the right thing to do.

Why do I recount this story? Because I think society ought to give Bolnick some sort of a prize. We need moral examples of people who can admit when they’re wrong. We need more Heroes of Retraction.


Right now society has an epidemic of the opposite: too many people with a bulldog unwillingness to admit when they’re factually wrong. Politicians are shown evidence that climate change is caused by human activity but still deny our role. Trump fans are confronted with near-daily examples of his lies but continue to believe him. Minnesotans have plenty of proof that vaccines don’t cause autism but forgo shots and end up sparking a measles outbreak.

“Never underestimate the power of confirmation bias,” says Carol Tavris, a social psychologist and coauthor of Mistakes Were Made (but Not by Me). As Tavris notes, one reason we can’t admit we have the facts wrong is that it’s too painful to our self-conception as smart, right-thinking people—or to our political tribal identity. So when we get information that belies this image, we simply ignore it. It’s incredibly hard, she writes, to “break out of the cocoon of self-justification.”

That’s why we need moral exemplars. If we want to fight the power of self-delusion, we need tales of honesty. We should find and loudly laud the awesome folks who have done the painful work of admitting error. In other words, we need more Bolnicks.

Science, it turns out, is an excellent place to find such people. After all, the scientific method requires you to recognize when you’re wrong—to do so happily, in fact.

Granted, I don’t want to be too starry-eyed about science. The “replication crisis” still rages. There are plenty of academics who, when their experimental results are cast into doubt, dig in their heels and insist all is well. (And cases of outright fakery and fraud can make scholars less likely to admit their sin, as Ivan Oransky, the cofounder of the Retraction Watch blog, notes.) Professional vanity is powerful, and a hot paper gets a TED talk.

Still, the scientific lodestar still shines. Bolnick isn’t alone in his Boy Scout–like rectitude. In the past year alone, mathematicians have pulled papers when they’ve learned their proofs don’t hold and economists have retracted work after finding they’d misclassified their data. The Harvard stem-cell biologist Douglas Melton had a hit 2013 paper that got cited hundreds of times—but when colleagues couldn’t replicate the finding, he yanked it.

Fear of humiliation is a strong deterrent to facing error. But admitting you’re mistaken can actually bolster your cred. “I got such a positive response,” Bolnick told me. “On Twitter and on blog posts, people were saying, ‘Yeah, you outed yourself, and that’s fine!’” There’s a lesson there for all of us.


Maybe you are one of those humans that avoids all trailers because they spoil the movie too much. I am not one of those humans. Which is why I immediately watched a trailer that came out this week for the upcoming Marvel movie Ant-Man and the Wasp. Although I was a huge comic book fan growing up, I never really got into Ant-Man. But the first Ant-Man movie was better than expected—and now I'm looking forward to this sequel.

If you don't know about Ant-Man, I'll give you a quick overview. This superhero uses special technology that allows him to shrink to ant-size (or sometimes he can also get really big—as seen in Captain America: Civil War). He also has the ability to communicate with ants. Oh, and the technology used to change the size of Ant-Man can also be used to shrinkify or embigenate other objects.

In the trailer, we see Hank Pym (the creator of the size-changing technology) shrink a whole building and then roll it away on wheels. But what happens when you shrink a building? To answer that, we have to think about what shrinking actually does in the Marvel Universe. When an object shrinks, does its size get smaller but its mass stays constant? Perhaps the density of the object stays constant during the process—or maybe it does something weird like moving into other dimensions.

Really, the mechanics of shrinking is pretty tough to figure out. There's conflicting evidence from the first film: First, there is the case where Scott Lang (aka Paul Rudd aka Ant-Man) puts on the suit and shrinks. At one point, he falls onto the floor and cracks the tile, suggesting that he keeps the mass of a full-size human. Later, though, we see that Hank Pym has a tiny tank on his key chain—a real tank that was just reduced in size. But clearly, this tank couldn't have the same mass as a full size tank. Otherwise, how would he carry it around?

Whatever. I'm just going to go with the idea that the mass stays constant—and if I'm wrong, oh well. It's just a movie anyway.

Let's start with the full-sized building in this trailer. How big is it? What is the volume? What is the mass? Of course I am going to have to make some rough estimates, so I'll start with the size. Looking at the video, I can count 10 levels with windows. That makes it 10 stories, with each story roughly 4 meters tall. That would put the building at a height of 40 meters. When the building shrinks down, it looks fairly cubical in shape. This would put both the length and width at 40 meters. The volume would be (40 m)³ = 64,000 m³.

Why do I even need the volume? Because I'm going to use it to estimate the mass.

I'm sure some civil engineer somewhere has a formula to calculate building mass, but I don't want to search for that. Instead, I can find the mass by first estimating the density (where density is defined as the mass divided by the volume). For me, it is easier to imagine the density of a building by pretending it was floating in water. Suppose you took a building and put it in the ocean (and the building doesn't leak). Would it float? Probably. How much of it would stick out above the water? I'm going to guess that 75 percent is above water—sort of like a big boat. From that, I get a density of 0.25 times the density of water, or 250 kg/m³ (more details in this density example).

With the estimated volume and density, I get a building mass of 16 million kilograms. Again, this is just my guess.

Now let's shrink this building down to the size in the trailer. I'm going to assume it gets to a size that's just 0.5 meters on each side, putting the volume at 0.125 m³. If the mass is still 16 million kilograms, the tiny building would have a density of about 128 million kg/m³. Yes, that is huge. Just compare this to a high-density metal like tungsten (used in fishing weights). This has a listed density of 19,300 kg/m³. This building would have a density more than 6,000 times higher than tungsten.

But wait! There's more! What if you put this tiny and super massive building down on the ground with just two small rolling wheels, like Hank Pym does in the trailer? Let me calculate the pressure these wheels would exert on the road, where pressure is the force divided by the contact area. The size of the wheels is pretty tough to estimate—and it's even harder to get the contact area between the wheels and the ground. I'll just roughly estimate it (and guess on the large side). Let's say each wheel has a 1 cm² contact area for a total of 2 cm², or 0.0002 m².

I know the force on the ground will be the weight of the building. This can be calculated by taking the mass and multiplying by the local gravitational constant of 9.8 newtons per kilogram. Once I get this force, I just divide by the area to get a contact pressure of about 7.8 x 10¹¹ newtons per square meter, or roughly 780 gigapascals. Yes. That is huge. Let's compare this to the compressive strength of concrete at about 40 megapascals. The compressive strength is the pressure a material can withstand before breaking. Clearly 780 gigapascals is greater than 40 MPa. Heck, even granite has a compressive strength of only 130 MPa.
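If you want to check the arithmetic, the whole estimate fits in a few lines of Python; every number in it is one of the rough guesses above.

```python
# Back-of-the-envelope numbers for the shrunken building, using the guesses above.
side_full = 40.0                       # meters: 10 stories at ~4 m, roughly a cube
volume_full = side_full ** 3           # 64,000 m^3
density_building = 0.25 * 1000.0       # 25 percent the density of water -> 250 kg/m^3
mass = density_building * volume_full  # 1.6e7 kg, i.e. 16 million kilograms

side_small = 0.5                       # meters, the shrunken building
volume_small = side_small ** 3         # 0.125 m^3
density_small = mass / volume_small    # ~1.28e8 kg/m^3
print(density_small / 19_300)          # ~6,600 times the density of tungsten

weight = mass * 9.8                    # newtons
contact_area = 2 * 1e-4                # two wheels, ~1 cm^2 each, in m^2
pressure = weight / contact_area       # ~7.8e11 Pa, far beyond 40 MPa concrete
print(pressure / 1e9, "GPa")           # ~784 GPa
```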

If Hank wants to roll this building away so that no one will notice, he is going to have a problem. The wheels will leave behind a trail of destruction, breaking every surface they roll on. Or there is another option. Maybe the mass of the building gets smaller when it shrinks—but in that case, I don't have anything fun to write about.

More Marvel Physics

  • Superheroes are really big on this shape-shifting stuff—but is the Incredible Hulk really as hulky as he looks in Thor: Ragnarok?

  • You can also have shape-shifting planets, like the weird non-spherical planet Sovereign in Guardians of the Galaxy Vol. 2. Could that really work?

  • And for some super-nerdy density physics: Can you calculate the center of mass in Thor's hammer?


Does Your Doctor Need a Voice Assistant?


“Siri, where is the nearest Starbucks?”

“Alexa, order me an Uber.”

“Suki, let’s get Mr. Jones a two-week run of clarithromycin and schedule him back here for a follow-up in two weeks.”

Doesn’t sound that crazy, does it? For years, voice assistants have been changing the way people shop, get around, and manage their home entertainment systems. Now they’re starting to show up someplace even a little more personal: the doctor’s office. The goal isn’t to replace physicians with sentient speakers. Quite the opposite. Drowning in a sea of e-paperwork, docs are quitting, retiring, and scaling back hours in droves. By helping them spend more time listening to patients and less time typing into electronic health records, voice assistants aim to keep physicians from getting burned out.

It’s a problem that started when doctors switched from handwritten records to electronic ones. Health care organizations have tried more manual fixes—human scribes, either in the exam room or outsourced to Asia, and dictation tools that only transcribe speech verbatim. But these new assistants—you’ll meet Suki in a sec—go one step further. Equipped with advanced artificial intelligence and natural language processing algorithms, all a doc has to do is ask them to listen. From there they’ll parse the conversation, structure it into medical and billing lingo, and insert it cleanly into an EHR.

“We must reduce the burden on clinicians,” says John Halamka, chief information officer at Boston-based Beth Israel Deaconess Medical Center.1 He’s been conducting extensive early research around how Alexa might be used in a hospital, to help patients locate their care team or request additional services, for example. “Ambient listening—the notion that technologies like Alexa and Siri turn clinician speech and clinician-patient conversations into medical records—is a key strategy.”

Alexa and Siri might be the best known voice assistants, but they’re not the first ones doctors are trusting with their patients. While Amazon and Apple are rumored to be working on voice applications for health care, so far they’re still piloting potential use cases with hospitals and long-term care facilities. They don’t yet have any HIPAA-compliant products on the market.

Not so for Sopris Health, a Denver-based health intelligence company that launched today after starting to roll out its app at the beginning of the year. You don’t summon the app by name to turn it on; you just tap it when you want it to start listening. It automatically converts the audio to free text, then turns that speech into a doctor’s note, thanks to hours of training data from actual doctors’ visits. So “I think I’d like to see you again if things aren’t feeling better within a few days,” becomes “Schedule three-day follow-up.” Or, “We’re going to need to get an MRI of that left knee to figure out what’s going on in there” becomes “Order left knee MRI.”

Much in the same way that Google’s neural networks learned that cats and dogs are different animals that people like to keep as pets, Sopris’ algorithms learned to use context clues to pull out the medically actionable parts of a conversation. A cardinal number becomes an interesting feature—maybe it’s a calendar date or the dose of a medication. The words around it help the app decide to schedule a follow-up or order a prescription. And because it integrates directly with the EHR vendor, no separate orders or emails or phone calls are necessary: You just hit a button.
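Sopris hasn’t published how its models actually work, so the sketch below is only a deliberately crude, rule-based stand-in for the general idea: let the words around a number or a body part decide what kind of order it becomes. The patterns and order templates are invented, not the company’s pipeline.

```python
import re

# Crude illustration of turning exam-room speech into structured orders.
# A real system would use trained language models, not hand-written regexes.
RULES = [
    (re.compile(r"see you .*?(\d+|few)[ -]day", re.I), "Schedule {0}-day follow-up"),
    (re.compile(r"mri of (?:that|the) (\w+ \w+)", re.I), "Order {0} MRI"),
    (re.compile(r"(\d+)\s?mg of (\w+)", re.I), "Prescribe {1}, {0} mg"),
]

def extract_orders(utterance):
    """Return the structured orders a toy parser finds in one utterance."""
    orders = []
    for pattern, template in RULES:
        for match in pattern.finditer(utterance):
            orders.append(template.format(*match.groups()))
    return orders

print(extract_orders("We're going to need to get an MRI of that left knee."))
# ['Order left knee MRI']
```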

When physicians hit that button to sign off on the note, they assume responsibility (and liability) that everything in it is correct. Which might sound like a leap of faith, but Sopris CEO and co-founder Patrick Leonard says it is actually a positive feature. “What’s really cool is it’s changing physician behavior in a good way,” he says. “The app forces them to practice active listening, double-checking with patients that they got everything right. Which they actually have time for, now that they’re not sitting at a computer for six hours a day.” And if the assistant gets anything off, doctors can manually overwrite it.

Sopris plans to eventually move beyond orthopedics into other specialties; it’s currently in talks with a large children’s hospital about creating a pediatrics module. Another clinical voice company also launching today has even bigger plans. With $20 million in funding and stacked with engineers from Google and Apple, Redwood City-based Suki unveiled its AI-powered digital voice assistant this morning. Former Googler Punit Soni founded the company a year ago (it was originally called Robin), and has since launched a dozen pilots in internal medicine, ophthalmology, orthopedics, and plastic surgery practices in California and Georgia. Preliminary results from the company show Suki cuts physician paperwork by 60 percent.

For now, the app still needs some hand-holding. You have to say “Suki, this patient is 67 years old,” and “Suki, we need to order a blood test.” That’s because Soni’s team gave it just enough seed data to survive. But eventually, with enough data flowing through its neural nets, doctors will be able to say simply, “Suki, pay attention.” And then it’s on to tackling bigger problems.

“We’re starting with documentation, but then we can apply the same methods to billing and coding, and other higher order architectures,” says Soni. Things like prescription management, and maybe even decision support—an algorithm whispering hints in your doctor’s ear about a care plan. “I think it’s unreasonable to imagine that 10 years from now doctors will still be using clunky 1990s-style UI to take care of patients,” says Soni.

The health care system has long been impervious to this kind of disruption. But as deep learning gets even better, these kinds of assistants begin to look more plausible. The space is filling up rapidly; last year a third startup, SayKara, helmed by former Amazon engineers, announced it was developing its own Alexa for health care. Others are sure to follow. And that’s when lawyers focused on privacy and cybersecurity start to get concerned. “When you’re talking about AI in the health care space, the appetite to capture more and more data becomes insatiable,” says Aaron Tantleff, a partner at the Foley and Lardner law firm in Chicago. He points out that one of HIPAA’s key privacy protections is a rule that says businesses should only collect the minimal amount of information that is necessary. It’s a provision that is fundamentally at odds with data-hungry neural networks.

Voice assistants also raise questions about unauthorized disclosures in the exam room. “We already know these listening devices can get hacked and allow third parties to record conversations,” says Tantleff. “In a medical setting, there’s a very different level of risk. What are companies doing to prevent that from happening?”

Both Suki and Sopris recognize the significant privacy and security considerations involved with their products. The companies encrypt audio on the device and in transit to HIPAA-compliant clouds where their algorithms run. And both apps require a prompt from someone in the room to enable listening. Plus, patients have to opt in; docs can’t just record people who don’t consent. The potential benefit to physicians seems clear. The tradeoff for patients, less so. Then again, if you want to keep your doctor around for the long haul, maybe it’s worth asking, “Suki, can you keep my data safe?”
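Neither company has published its security architecture, so the snippet below is only a generic sketch of the “encrypt it before it leaves the device” pattern, using the open-source cryptography package; the key handling here is a placeholder, not either vendor’s design:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Stand-in for raw audio captured by the exam-room microphone.
raw_audio = b"...pcm samples from the visit..."

key = Fernet.generate_key()              # in practice, provisioned per device or clinic
encrypted_blob = Fernet(key).encrypt(raw_audio)

# `encrypted_blob` is what would travel (over TLS) to the HIPAA-compliant cloud;
# the plaintext audio never leaves the device, and the server decrypts it with
# the same key before running transcription.
assert Fernet(key).decrypt(encrypted_blob) == raw_audio
```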

1 Disclosure: Halamka was also formerly a member of Suki's advisory board.

The Algorithm Will See You Now

  • When time is brain, AI can help stroke patients get better care, faster.

  • Google has developed software that can detect early signs of diabetes-related eye problems, and is testing it in eye hospitals in India.

  • To keep pace with all the new ideas for using computers and machine learning in health care, the Food and Drug Administration had to create a new team of digital health experts.

Almost exactly a year ago, 23andMe earned the right to tell people what diseases might be lurking in their DNA. Since then, the consumer genetic testing company has turned tubes of spit into health reports for thousands of its customers. You can learn how your genes might predispose you to eight diseases with a well-known genetic component—things like Parkinson’s, Alzheimer’s, and most recently, breast and ovarian cancers.

But these limited genetic red flags are rare enough that for most people, there’s not much for 23andMe to report back.

Lots of people, though, get migraines. And allergies. And depression. 23andMe says it wants to help them, too—not by extracting insights from their DNA, but by harvesting the wisdom of the crowd. For the last few weeks, the company has been quietly rolling out a new health hub, where customers can share information about how they manage 18 common health conditions. They get to see which treatments work best, according to other users’ personal reports. And 23andMe gets a bunch of data it didn’t have before.

It’s not hard to see who’s getting the better side of the deal.

Each condition page provides some information unique to 23andMe, says product manager Jessie Inchauspe. She highlights how customers can look at the prevalence of a given condition among their spit kit sisters and brothers. Based on the millions of 23andMe customers who consented to participate in research, 27 percent have self-reported having depression. Most of them were diagnosed by their 30th birthday. And any kids they have will be 20 percent more likely to develop depression themselves.

Unlike the company’s health reports, though, the conditions pages won’t tell you how likely your genes are to give you depression, just how much of depression generally is attributable to DNA, according to the company’s data and its reading of the scientific literature. A disclaimer toward the top makes this plain: “This content is NOT based on your genetics. It may not be representative of the general population or of you as an individual.”

The same is true of the treatment ratings: Customers can sort them by reported efficacy and popularity, but not by their own genotype. 23andMe says it does have plans for adding an ethnicity filter at some point—certain drugs can be more or less effective depending on your heritage—but right now there’s nothing personalized about it.

It’s also misleading. “Normally I think 23andMe does a really nice job visually representing genetic risks, but this model brings up some real interpretation concerns,” says Kayte Spector-Bagdady, a bioethicist at the University of Michigan and a former associate director of the Presidential Commission for the Study of Bioethical Issues. The problem, she says, is that people are being asked what treatments they’ve tried and how effective each one is. But that’s not the same as a comparative effectiveness trial. “If I say I have depression and all I ever tried was Zoloft and I had moderate improvement, it doesn’t mean Zoloft was better for me than exercise or Wellbutrin,” says Spector-Bagdady. But on the new pages, colored bars that display reported efficacy of treatments side by side suggest otherwise. “It’s hard for any individual consumer to understand what this information means for them.”
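Spector-Bagdady’s objection is easy to reproduce with toy numbers: if sicker patients gravitate toward one treatment, its side-by-side average will look worse (or better) even when the treatments are equally effective. The simulation below uses entirely invented figures to show that confounding; it is not drawn from 23andMe’s data.

```python
import random
random.seed(0)

# Toy model: severity influences both which treatment a person tries and
# how much improvement they report. All numbers are invented.
def simulate_patient():
    severe = random.random() < 0.5
    treatment = "Zoloft" if severe else "exercise"
    true_benefit = 3.0                      # identical for both treatments
    reported = true_benefit - (2.0 if severe else 0.0) + random.gauss(0, 0.5)
    return treatment, reported

reports = [simulate_patient() for _ in range(10_000)]
for treatment in ("Zoloft", "exercise"):
    scores = [score for name, score in reports if name == treatment]
    print(treatment, round(sum(scores) / len(scores), 2))

# Prints roughly 1.0 for Zoloft and 3.0 for exercise: the side-by-side bars
# would make exercise look far better, even though the true benefit is
# identical, because the sicker patients all ended up in the Zoloft column.
```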

If 23andMe customers want to compare treatments, they don’t have to log on to the company’s health hub to do so. Iodine, a startup co-founded by former WIRED executive editor Thomas Goetz that merged with drug pricing transparency company GoodRx, crowdsources patient reviews and presents them alongside clinical trial data and input from pharmacists. HealthTap’s RateRx app lets doctors from all over the world rate the effectiveness of certain medications for certain ailments. Even Google has been working with the Mayo Clinic to create a database of commonly searched medical conditions and their most frequently used treatments.

So why should 23andMe’s customers turn to it rather than the wilds of the internet for health advice? “We have a nice, closed platform where people feel safe,” says Inchauspe, pointing out that about 80 percent of the company’s 5 million customers consent to participate in research. That means research that 23andMe’s 60 staff scientists do internally, as well as outside studies with data the company shares with academic institutions and sells to pharmaceutical firms. “That gives us an opportunity to crowdsource unique data that just doesn’t exist anywhere else.”

23andMe does have data that other treatment comparison companies don’t: DNA. Theoretically, pairing its massive genetic databases with reports of treatment efficacy could help the company take steps toward offering precision medicine solutions: treatments tailored to your DNA. But for now, that’s not information it can easily share with its customers, at least in the US, on account of federal regulations that treat pharmacogenetic testing—how genes influence someone’s sensitivity to different drugs—as a medical device.

When asked, the company said it has no immediate plans to turn the health hub data into genetic reports. “We view this as a separate product,” says Inchauspe. But 23andMe has already shown its interest in pharmacogenetic testing. In 2014, the company introduced 12 such tests to its customers in the UK—though it stopped offering them in 2017 to make its product uniform on both sides of the pond. But if 23andMe ever plans to bring them back, a little (or a lot) more data certainly won’t hurt.

More Personal Genomics

  • Direct-to-consumer genetic tests are more popular than ever. Last year, Ancestry DNA sold 1.5 million spit kits over a four-day period.

  • Over on Helix's DNA marketplace, you'll soon be able to request your own clinical tests for 59 disease-causing genetic mutations.

  • Upstart Genos takes a different tack, offering financial incentives to its customers for donating their exome sequence data to science.

04/20/18 3:50pm ET This story has been updated to reflect the most up-to-date numbers for how many of 23andMe's consented research participants experience depression; it is 27 percent, not 45 percent, as a previous version of this story stated.
