Stanford psychologist Russell Poldrack, shown here in his 105th MRI scanning session during an 18-month experiment, is one of a number of researchers who are enlisting as subjects in their own studies. TIM LAUMANN | Source: Science Magazine
By Esther Landhuis | Dec. 5, 2016, 12:30 PM | Source: Science Magazine
Some scientists analyze fruit flies. Others use zebrafish. Many conduct studies with mice. But occasionally, researchers choose to experiment on a different animal: themselves. Consider the medical officer who in the early 1800s fed himself spoiled sausage to determine the source of foodborne botulism. Or the physician who in 1929 performed the world’s first cardiac catheterization on himself, and the young doctor who in 1984 guzzled Helicobacter pylori broth to prove that the bacterium causes ulcers. The latter two went on to win Nobel Prizes, but others haven’t been as fortunate. In 1900, in the aftermath of the Spanish–American War, when yellow fever was killing thousands of U.S. soldiers, physician Jesse Lazear died after intentionally exposing himself to infected mosquitoes.
Medical martyrdom is rarer these days, in part because of increased regulation of human subjects research after World War II, and fewer researchers dying for their work can only be a good thing. Nonetheless, autoexperimentation continues. The access to the subject is matchless, and the allure of big data and personalized medicine seems to be nudging some self-experimenters toward new types of studies. The regulatory environment remains somewhat vague, however, leaving it up to researchers to weigh practicality against ethical considerations. But if care and diligence accompany the appetite for adventure, scientists can responsibly conduct self-experimentation studies that help advance science—and potentially offer some fun and personal benefit to boot.
Balancing ease with ethics
For scientists whose work isn’t particularly risky, it’s hard to beat a prime motivation for self-experimentation: convenience. “It’s easy to draw your own blood and analyze it,” says Laura Stark, a bioethics historian at Vanderbilt University in Nashville. “You don’t have to worry about someone suing you or deciding you can’t use their sample.”
That was a key factor when Lawrence David, then a Ph.D. student at the Massachusetts Institute of Technology, and his adviser, bioengineer Eric Alm, sought to monitor how daily activities influenced the human gut and oral microbiomes over the course of a year. They needed to determine feasibility limits—for example, how frequently samples could be collected and how many variables could be measured. When the researchers couldn’t immediately find participants, they decided to enroll themselves. “We thought that by participating, we’d gain firsthand understanding of those limits,” says David, now an assistant professor of molecular genetics and microbiology at Duke University in Durham, North Carolina.
Each day, the two researchers saved spit samples and pooped into sterile bags. They used an iPad app to log their weight and everything they did and ate. Several months into the study, David went to Bangkok for a few weeks but stuck with the regimen, shipping home 3 to 5 pounds of stool on dry ice. That commitment eventually paid off when the results were published.
Russell Poldrack, a psychologist at Stanford University in Palo Alto, California, also had an ambitious study plan that required more than what the average participant would tolerate. That’s what led him to climb into an MRI machine every Tuesday and Thursday morning for 18 months to get his brain scanned. The idea started simmering years before, when Poldrack’s studies to understand psychiatric disorders stalled because they lacked a good control for normal brain function variability over time. At some point, he recalls, while he was directing the Imaging Research Center at the University of Texas (UT) at Austin, artist-in-residence Laurie Frick “really started pushing me, saying, ‘You’ve got this MRI scanner. Why aren’t you getting in there and scanning yourself?’”
While Poldrack was mulling over this possibility, Stanford geneticist Michael Snyder published a 2012 paper describing an “integrative Personal Omics Profile” of a 54-year-old male volunteer—himself. Snyder’s genome was sequenced and analyzed, and over 14 months, the research team made more than 3 billion measurements of his blood, saliva, mucus, urine, and feces. During the study—conducted as a proof of principle and to learn what a baseline “healthy” state looks like—Snyder discovered that he was genetically at risk for type 2 diabetes. With that information and the accompanying data, he was able to investigate biological pathways that kicked in as he developed signs of disease, which could have implications beyond Snyder’s individual health. Seeing Snyder’s work made Poldrack think that his crazy brain study might “not just be a goofy boutique project; it could actually have some scientific impact.”
He was right: His 18-month ordeal produced the most detailed map of functional brain connectivity in a single person to date.
Despite the potential advantages of using oneself as a subject, scientists contemplating this approach should consider research ethics guidelines. In the United States, the National Institutes of Health enacted policies in 1954 that restrict the use of employees as research subjects. The National Research Act, passed by Congress in 1974, requires research involving human subjects to be vetted by an institutional review board (IRB). Current rules, which date from 1981, outline additional protections for vulnerable groups, including pregnant women, children, and prisoners. U.S. federal law does not, however, explicitly address self-experimentation by a scientist or physician, says Jonathan Moreno, a bioethicist at the University of Pennsylvania. As Stark explains, it is “a blind spot in the current human subjects regulations.” That means that, at least for now, it is up to researchers to decide whether they’re comfortable experimenting on themselves and whether they need to seek IRB approval.
Conducting research in this vague regulatory environment can create confusion, even when researchers do everything they can to make sure they’re proceeding according to regulations and requirements. Before Poldrack started his brain study, for example, he submitted a proposal to the IRB at UT Austin, where he worked at the time. The board said that it did not consider his project to be human subjects research and therefore it did not require approval, so Poldrack got started collecting his scans without worrying about any further paperwork.
About 6 months after Poldrack started collecting data, however, the situation became more complicated. Researchers at Washington University School of Medicine in St. Louis learned of the study and wanted to use some of Poldrack’s data. When they checked with their IRB to see whether a formal protocol was required, they hoped the IRB would say it was unnecessary. After all, it was data being collected at a different institution that hadn’t required IRB approval—“essentially just a data transfer from our point of view,” says M.D.-Ph.D. student Tim Laumann, one of the researchers interested in accessing the data. However, the Washington University IRB did require a protocol to be written and approved—a process that took about a month even when expedited, Laumann says.
Looking back, Poldrack suspects that things could have gone more smoothly if he had gotten IRB approval from his institution to begin with. “It would have made data sharing much easier because the data would not have been living in an ethical gray zone”—although, he adds, other aspects of the study, such as the fact that the data cannot be de-identified, “might also have raised issues even with IRB approval.” In the absence of hard-and-fast rules for self-experimentation, researchers wishing to study themselves should trust their best judgment while allowing for hiccups that could arise in this less-charted realm.
The power of doing it yourself
Beyond administrative challenges, self-experimentation studies can raise questions about whether analyses of just a few individuals are scientifically valid. Self-monitoring experiments are not randomized or blinded like traditional human studies, and the experimenter’s personal involvement and motivations could make the research seem less objective.
Despite these concerns and caveats, there are scenarios where self-experimentation may be not only acceptable but optimal. Studies such as Poldrack’s, which aim to correlate hard-to-describe personal experiences such as mood or emotion with concrete measurements, are among them because the researchers have particular expertise that makes them ideal subjects. Researchers “know the categories used to describe feelings and side effects and can articulate them in a way that translates easily into scientific language,” Stark says. Self-experimentation, therefore, can offer a way to calibrate tools and technologies that are otherwise hampered by relying on an individual’s subjective experience.
And for University of California, San Francisco, neuroscientist Adam Gazzaley, who develops video games to help improve brain function, the small sample size is exactly what he wants. The video games automatically adjust their difficulty based on the user’s performance, creating a personalized digital therapy, which is a key part of his lab’s effort to shift “away from just focusing on large populations and focusing more on the individual, the n of 1,” Gazzaley says. “We’re looking to understand more about how to make meaningful statements about data from a single person.”
Adam Gazzaley underwent various measurements, including EEGs, as a participant in his own studies. JO GAZZALEY | Source: Science Magazine
Every once in a while, when Gazzaley gave talks about the project, someone in the audience would ask whether he played the games himself. His answer was “no” until the summer of 2015, when Gazzaley decided to put his time where his mouth was and became a research participant. For 2 months he played an hour of Body Brain Trainer, a physical and cognitive fitness game, three mornings a week. He also did 30 minutes of a meditation game called Meditrain on weeknight evenings, and for 3 weeks he played the newest game, Neurodrummer, which aims to improve cognition through rhythm training. He also had numerous measurements taken via saliva and blood samples, MRIs, EEGs, sleep tracking, heart rate monitoring, and more.
“Playing games I helped invent and being in studies I helped design and validate, but doing it from the perspective of a participant, was really helpful,” Gazzaley says. Experiencing firsthand the challenges of compliance, especially for something “not as quick as a pill,” has inspired Gazzaley to develop ways to not only push people to work harder during the game but also to sustain motivation over the long haul.
As for whether he plans to publish the data collected on himself, he says he might play the games again, perhaps annually, “to get a more longitudinal view.” For now, though, the personal reasons for self-experimentation could be just as strong as the scientific motivation. Now in his late 40s, Gazzaley says he is “approaching the age range of the adults we treat in some of our older studies. We know middle-aged folks have declining cognitive control. This seemed a great way for me to try and get out in front of it.”
Regardless of why scientists engage in self-experimentation, they should be transparent, making a public statement—perhaps a paragraph in the manuscript—explaining why they’re doing a study on themselves and what they hope to learn by conducting the research this way, Moreno says. “It says the researcher isn’t just using patients as guinea pigs.” Time will tell whether these types of studies establish worth that goes beyond provocative one-offs. Then again, with certain research questions, he adds, “if you don’t give it a shot, you may never know.”
doi:10.1126/science.caredit.a1600160
Esther Landhuis is a freelance science journalist based in the San Francisco Bay area.
Why don’t more doctors and scientists use themselves as subjects, given their confidence in their own trials? Why don’t patients know about informed consent? Why don’t doctors and scientists get it? Why are clinical trials infamous for their fines? Are unethical medical experiments really a thing of the past? Why are there so many modern-day instances of unethical medical experiments?