Science-- there's something for everyone

Tuesday, April 30, 2013

A new way to assess pain

Have you ever been asked by a medical professional to rate your pain level on a scale of zero to ten? Needless to say, this ranking system is highly subjective. 



Even more problematically, it can’t be used for people who can’t speak (the very young or the cognitively impaired), nor can it be used to compare pain levels across different people. And yet, treatment for pain usually depends on understanding the intensity of that pain. Thanks to some new fMRI studies conducted by researchers led by Tor Wager of the University of Colorado, we may now have a neurological signature for physical pain. 


You know a pain study isn't going to be good news for the volunteers. Wager and his colleagues do not disappoint. In all, 114 participants were hooked up to an fMRI machine while heat was applied to their left forearms. The heat levels were calibrated for each individual to range from warm to scorching (subjective pain level 'seven'). Each trial consisted of a warning cue, 8 seconds of anticipation, 10 seconds of applied heat, and a 4-second evaluation period. Some people went through 75 of these trials. 

There were some interesting variations. In one study, all the subjects had recently experienced a romantic rejection. These people got to look at either a picture of their ex-partner or of a close friend while being seared. In another, people were given analgesics before the trials without being told.


So, what did the scientists learn from all this? The fMRI scans revealed a ‘signature response’ to pain that became clearer as pain levels increased. Not only that, but the same pattern appeared in different people, indicating that it could be a universal signal of pain at the neurological level. People suffering from emotional pain, such as the heartbroken subjects viewing pictures of their exes, did not show the pain response pattern. People treated with analgesics showed a dampened signature response.

All of this strongly suggests that doctors should be able to use fMRI scans to assess patient pain levels. Until that happens, here's another scale you might find useful:

 
Wager, T., Atlas, L., Lindquist, M., Roy, M., Woo, C., & Kross, E. (2013). An fMRI-Based Neurologic Signature of Physical Pain. New England Journal of Medicine, 368(15), 1388-1397. DOI: 10.1056/NEJMoa1204471.



Monday, April 29, 2013

Bad news about sugary soft drinks


You already know that soft drinks are bad for you. But a single daily indulgence can’t be that bad, can it? Sorry. According to a new study by scientists in the InterAct Consortium at Imperial College London, one sugar-sweetened soft drink per day can increase your risk of getting type 2 diabetes by 22%. 

From 1991 to 2007, the InterAct project collected data on hundreds of thousands of people from eight European countries. None of the participants had type 2 diabetes when first recruited. By the end, over 12,000 of them did. These people were compared with 16,000 randomly selected non-diabetics.

The researchers asked the subjects about their consumption of sweet beverages, which were divided into juices and nectars (fruits or vegetables plus up to 20% added sugar) and soft drinks (sugar-sweetened or artificially sweetened). Participants were asked how often they consumed these drinks, from less than one serving per month to at least one per day. A serving was defined as 336 ml, roughly twelve fluid ounces. Among other factors, the researchers adjusted for total calorie intake, body mass index (BMI), gender, educational level, physical activity, and use of alcohol and/or tobacco. 

The bad news is that drinking at least one sugar-sweetened soft drink per day increased a person’s risk of developing type 2 diabetes by 29%, compared with consuming fewer than one such beverage per month. The worse news is that the risk still jumped by 22% when going from one-to-six drinks per week to one or more drinks per day. Because these categories were so broad, this could mean increasing from one drink per week to one per day, or from six per week to several per day. Either way, your safest bet is to consume fewer than one sugary soft drink per week. 
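To put those relative numbers in perspective, here's a quick back-of-the-envelope sketch. The 8% baseline below is a hypothetical figure of my own choosing (the study reports relative, not absolute, risks); the point is only to show how a percentage increase in risk plays out.

```python
# Illustrative arithmetic only: the 8% baseline is hypothetical, not from the study.
baseline_risk = 0.08          # assumed risk for people drinking <1 sugary soft drink per month
relative_increase = 0.29      # ~29% higher risk reported for >=1 sugary soft drink per day

daily_drinker_risk = baseline_risk * (1 + relative_increase)
print(f"{baseline_risk:.1%} -> {daily_drinker_risk:.1%}")   # 8.0% -> 10.3%
```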

The good news is that after accounting for BMI, there was no such association with either artificially sweetened soft drinks or juices and nectars. So, if you really can’t stomach plain water, you do have some options, at least as far as type 2 diabetes is concerned. I’ve written before about another peril of sugary drinks.

Here’s one more interesting thing. From 1992 to 2000, Europeans got about 2.5% of their daily carbohydrates from sugary drinks. For people in the U.S., that figure was over 10%. I’m amazed, and not in a good way.


The InterAct Consortium (2013). Consumption of sweet beverages and type 2 diabetes incidence in European adults: results from EPIC-InterAct. Diabetologia. PMID: 23620057.




Friday, April 26, 2013

Calibrating the Maya Long Count calendar


Thanks to the media sensation of 2012, we’ve all heard of the Maya Long Count (MLC) calendar. Needless to say, that calendar did not augur the end of the world any more than our calendars do every December 31st. The more interesting question is how well we can correlate events noted on the Maya calendar with dates on our own calendar. Thanks to work led by Douglas Kennett of Pennsylvania State University, we can now be pretty sure we have correctly matched up the two counting methods.

Let’s begin by explaining the MLC calendar. Just as we divide our time into units (millennia, years, days, etc.), so did the Maya. Their system, though, used five units: Bak'tun (144,000 days), K'atun (7,200 days), Tun (360 days), Winal (20 days), and K'in (1 day). In our system, we might designate a date as 4/26/2013 (or 26/4/2013 for you Europeans) to show that it’s the twenty-sixth day of the fourth month in the two thousand thirteenth year, or about 735,360 days (if I put in the right number of leap years) since the starting point at 0/0/0. The Maya would have designated that same span as 5.2.2.12.0 (5 Bak’tuns, 2 K’atuns, 2 Tuns, 12 Winals and 0 K’ins). Of course, they didn’t use our numerals, so the date would have been written as a series of bars and dots.
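If you want to see that arithmetic spelled out, here is a minimal sketch (my own illustration, not anything from the post or the paper) that turns a running day count into Long Count notation using the five units above:

```python
# Minimal sketch of the unit arithmetic described above (illustration only).
MLC_UNITS = [
    ("Bak'tun", 144_000),
    ("K'atun", 7_200),
    ("Tun", 360),
    ("Winal", 20),
    ("K'in", 1),
]

def days_to_long_count(total_days):
    """Express a running day count in Bak'tun.K'atun.Tun.Winal.K'in notation."""
    digits = []
    for _, days_per_unit in MLC_UNITS:
        digits.append(total_days // days_per_unit)
        total_days %= days_per_unit
    return ".".join(str(d) for d in digits)

print(days_to_long_count(735_360))   # 5.2.2.12.0, matching the example above
```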


Caption: Elaborately carved wooden lintel or ceiling from a temple in the ancient Maya city of Tikal, Guatemala, that carries a carving and dedication date in the Maya calendar.
Credit: Courtesy of the Museum der Kulturen.

Okay, so we know how to translate the MLC calendar to tell us how many days have passed since they began that count. Unfortunately, that information alone doesn’t yield any insight into the equivalent date on our calendar, because they didn’t start their calendar at the same time as we started ours. Knowing that an event occurred 80,000 days into the MLC doesn’t tell you much if you don’t know when their day zero was. You need to know what correction factor to add to the Maya count to bring it into alignment with the European calendar.

There are two ways to find that correction factor. One is to find an event that was recorded in both calendars. The most commonly used correction, known as the Goodman-Martinez-Thompson (GMT) correlation, is largely based on this sort of historical evidence. The second is to physically date artifacts that carry a specific Maya date. Kennett and his colleagues took this second approach.
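As a rough illustration of what a correlation constant does (again my own sketch, not the paper's method): the GMT correlation is conventionally expressed as the Julian Day Number of the Long Count's day zero, commonly cited as 584,283, so converting a Maya date onto our calendar amounts to addition plus a calendar lookup.

```python
# Sketch assuming the commonly cited GMT constant of 584,283 days; other proposed
# correlations shift the result by anywhere from days to centuries.
from datetime import date, timedelta

GMT_CORRELATION_JDN = 584_283        # Julian Day Number of Long Count day zero under GMT
JDN_OF_1_JAN_1_CE = 1_721_426        # Julian Day Number of 1 January, 1 CE (proleptic Gregorian)

def long_count_to_gregorian(baktun, katun, tun, winal, kin):
    maya_days = baktun * 144_000 + katun * 7_200 + tun * 360 + winal * 20 + kin
    jdn = maya_days + GMT_CORRELATION_JDN
    return date(1, 1, 1) + timedelta(days=jdn - JDN_OF_1_JAN_1_CE)

# 9.13.3.7.18 is the Long Count date commonly linked to the event on the Tikal lintel;
# under the GMT constant it lands in 695 CE, the year mentioned below.
print(long_count_to_gregorian(9, 13, 3, 7, 18).year)   # 695
```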

The researchers took four samples from a wooden lintel in an ancient Maya temple (shown above) that included a date commemorating the defeat of one Maya king by his rival. They used radiocarbon dating of the lintel in conjunction with growth rate estimates for the tree from which it had come to calculate the actual age of the wood. From that, they figured out when the defeat had occurred.

Among the various correlations, the GMT, which places the event depicted on the lintel in the year 695 CE, proved to be the most accurate. Other methods of correlating Maya and European dates were off by as much as five hundred years in either direction. 

With this new information, we can now accurately match events that occurred on the MLC to dates that we can understand. Luckily, we all lived through 2012 to appreciate this.

Kennett, D., Hajdas, I., Culleton, B., Belmecheri, S., Martin, S., Neff, H., Awe, J., Graham, H., Freeman, K., Newsom, L., Lentz, D., Anselmetti, F., Robinson, M., Marwan, N., Southon, J., Hodell, D., & Haug, G. (2013). Correlating the Ancient Maya and Modern European Calendars with High-Precision AMS 14C Dating. Scientific Reports, 3. DOI: 10.1038/srep01597.


Thursday, April 25, 2013

The development of human language


Language is the hallmark characteristic that sets humans apart from other animals. More than tool use, empathy or morality, all of which are practiced by at least some non-humans, language makes us who we are. At some point, we evolved the ability to turn a few dozen sounds into a limitless number of expressions. Charles Yang of the University of Pennsylvania tried to figure out when that happened by comparing two seemingly similar language learners: very young children and chimpanzees. You won’t be surprised to learn that they aren’t so similar after all.

The big question in language acquisition is how young children get from speaking no words to complex sentences so quickly and accurately. There are two prevailing ideas. One is that toddlers begin their journey into language use with imitation. That is, they simply repeat the short phrases that they hear adults say. Only after mastering those sentences do they go on to improvise their own longer sentences. The second idea is that children combine language elements independently from the very beginning, based on the grammar of the speakers around them.

To evaluate these two possibilities, Yang noted how often young language learners, still speaking only two-word sentences, used ‘a’ or ‘the’ before particular nouns. He compared this ratio with those found in the Brown Corpus (a collection of English-language texts) and in over a million publicly available utterances directed at children. In adult speech, certain words tend to be paired almost exclusively with one or the other of these determiners (we almost always refer to ‘the kitchen’ rather than ‘a kitchen’), but young children used the two articles much more evenly. This strongly suggests that even at the very earliest stages of language acquisition, they are not simply parroting back phrases they’ve previously heard.
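A toy version of this kind of tabulation might look like the sketch below. To be clear, this is my own illustration of the idea, not Yang's actual statistic, which is considerably more sophisticated.

```python
# Toy illustration: for each noun, how evenly are "a" and "the" used before it?
# Values near 0.5 mean the articles are used interchangeably; values near 0 or 1
# mean one article dominates, as in adult speech.
from collections import Counter, defaultdict

def article_balance(utterances):
    """utterances: iterable of (article, noun) pairs, e.g. ("the", "kitchen")."""
    counts = defaultdict(Counter)
    for article, noun in utterances:
        counts[noun][article] += 1
    return {noun: c["a"] / (c["a"] + c["the"])
            for noun, c in counts.items() if c["a"] + c["the"] > 0}

child_speech = [("a", "dog"), ("the", "dog"), ("a", "ball"), ("the", "ball")]
adult_speech = [("the", "kitchen")] * 9 + [("a", "kitchen")]
print(article_balance(child_speech))   # {'dog': 0.5, 'ball': 0.5} -- evenly mixed
print(article_balance(adult_speech))   # {'kitchen': 0.1} -- one article dominates
```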

Obviously, chimpanzees don’t speak, but a few of them can sign. Do they also combine signs independently of ones they’ve seen? Here, Yang used a sample size of one: the American Sign Language-using chimp Nim Chimpsky (named after the noted linguist Noam Chomsky). Unlike the human kids, Nim seemed to rely purely on memorization. He could not improvise new combinations of signs.

This corroborates the idea that there really is something unique about human language. At some point after we diverged from chimpanzees, we evolved the ability to communicate in a fundamentally different way. How that happened is still a mystery, and may remain so, since we can look to neither the fossil record nor our living cousins for answers.



Yang, C. (2013). Ontogeny and phylogeny of language. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1216803110.





Wednesday, April 24, 2013

Just for fun: One minute medical school

Wish you knew more about medicine or the human body but only have a minute to spare? Have I got the YouTube channel for you. With 'One Minute Medical School', Dr. Rob Tarzwell, a Clinical Assistant Professor at the University of British Columbia, has created a series of videos that address everything from radiation to wrist bones in one-minute increments.

Here's a sample, about chicken pox and shingles:



By the way, Tarzwell's topics are often prompted by viewer request, so feel free to contact him on Facebook, Twitter or his website.

Hat tip: Skeptically Speaking.


Tuesday, April 23, 2013

In pandemics, screen passengers at exit points


No matter where in the world the next epidemic starts, it’s no more than a day’s plane ride away from anyone on the planet. Needless to say, doctors are keen to stop diseases from spreading around the globe. To that end, Kamran Khan of St. Michael’s Hospital, Toronto, and his colleagues have a proposal: screen airline passengers as they depart from risky areas.

Screening airline passengers for infectious diseases may be a tempting idea in principle. In practice, it’s more complicated. Leaving aside the issues of privacy and expense, it could add to already tedious boarding procedures and lead to huge disruptions in travel. However, if we do choose to implement screening practices, there are better and worse ways to do it.

The most inefficient health screening checks are done at the point of entry. Most passengers arriving at an airport were never exposed to any dangerous diseases. Even people arriving from an at-risk location might not have been there long enough to contract anything. For example, take air travelers leaving Mexico during the 2009 H1N1 influenza pandemic. The researchers estimated that catching every possible influenza carrier arriving from Mexico would have required screening stations at 82 international airports spread across 26 countries. At those entry points, 116 people would have been screened for every person with any possibility of being infected. And that’s just counting direct flights. Adding in all passengers traveling through Mexico on connecting flights would have meant screening 67 million people in over a thousand airports. Clearly, entry screening is not the way to go.

How about screening people as they leave a hazardous region? Again, referring to the H1N1 pandemic, the researchers concluded that exit screening at just eight Mexican airports would have caught 90% of the people who could have been carrying the disease. Clearly, if you can choose where to conduct your health screenings, exit points are the way to go.

There are a couple of problems with this strategy. The most glaring one is that the scientists don’t say how air passengers should be screened. How do you process that many people without unduly slowing down air traffic, and without missing anything? Also, screening only at exit points means that countries have to trust each other’s screening procedures. Plus, there’s always the risk that a person won’t begin to show symptoms until after departing from the exit airport. Finally, these checkpoints will most likely only be used once there is some indication that an epidemic may be brewing. By that time, at least a few contagious passengers will undoubtedly have already carried the disease to new regions.



Kamran Khan, Rose Eckhardt, John Brownstein, Raza Naqvi, Wei Hu, David Kossowsky, David Scales, Julien Arino, Michael MacDonald, Jun Wang, Jennifer Sears, & Martin Cetron (2013). Entry and exit screening of airline travellers during the A(H1N1) 2009 pandemic: a retrospective evaluation. Bulletin of the World Health Organization. BLT.12.114777.


Monday, April 22, 2013

This blog is going dark for 24 hours to protest CISPA, the Cyber Intelligence Sharing and Protection Act. 



You can read more about CISPA here.


Friday, April 19, 2013

What do adjuvants do anyway?


It’s been known for nearly a century that vaccines are more effective when they include adjuvants. In fact, vaccines that don’t contain entire live pathogens (which is most of them these days) work rather poorly without adjuvants. Luckily, alum is a very good and safe adjuvant that can be added to just about any vaccine. Unluckily, until now we had very little idea of how adjuvants work. That has changed, thanks to work by researchers from the University of Colorado and the Howard Hughes Medical Institute.

To understand adjuvants, you have to understand vaccines at the molecular level. What exactly is going on after the needle punctures your arm? The immune system is immensely complex, with myriad cellular and protein actors that I can't possibly untangle here. Suffice it to say that one of the first events is the arrival at the scene of a type of white blood cell called the neutrophil. These first responders release various chemical cues to encourage other cells to enter the fray. Among those recruits are dendritic cells, which engulf the antigens within the vaccine and display them on their surfaces to T-cells. The T-cells in turn initiate antibody production. 

Where does the adjuvant come into the picture? Neutrophils happen to be extremely short-lived cells. Very soon after encountering the intruding antigens, the neutrophils die, releasing streams of DNA. If a vaccine includes the adjuvant alum, that DNA coats the alum. Other dendritic cells then end up engulfing the entire host DNA-alum-antigen complex. It turns out that T-cells are much more interested in the DNA-associated antigens; they form longer and stronger interactions with dendritic cells that have ingested the DNA-alum morass along with the target antigens. The scientists confirmed the DNA's role by adding DNase (an enzyme that digests DNA) along with their vaccines, which undercut the adjuvant's effect.

Amy McKee of the University of Colorado, the lead author of the paper, explains:
The DNA makes the antigen-presenting cell stickier. We believe that extended engagement provides a stronger signal to the T-cell, which makes the immune response more robust.  
Why should this be so? We can't really answer that question yet. However, I find it intriguing that the adjuvant places host DNA in such close contact with the foreign antigen. Remember, it's the immune system's job to distinguish host from non-host. Perhaps this juxtaposition makes that contrast more stark.

McKee, A., Burchill, M., Munks, M., Jin, L., Kappler, J., Friedman, R., Jacobelli, J., & Marrack, P. (2013). Host DNA released in response to aluminum adjuvant enhances MHC class II-mediated antigen presentation and prolongs CD4 T-cell interactions with dendritic cells. Proceedings of the National Academy of Sciences, 110(12). DOI: 10.1073/pnas.1300392110.



Thursday, April 18, 2013

Cockatoos can delay gratification

I’ve written before about the ‘marshmallow test’ for self-control. If you haven’t seen that post, I recommend checking it out as the results may surprise you. What may be even more surprising is that humans aren’t the only creatures who can delay gratification. Alice Auersperg and her colleagues from the University of Vienna have subjected Goffin cockatoos to a similar test and found that they too are capable of restraining themselves. 

Delayed gratification tests usually involve the promise that if the participant can avoid eating the treat in front of them, they'll get something even better. They've been done on other kinds of birds, most notably corvids (crows and ravens). These birds have been known to wait for up to five minutes for a better offer. However, corvids have the habit of hoarding their food, which might make it easier for them to postpone eating their treats. Cockatoos have no such trait. 


Fourteen cockatoos were given the chance to rank three possible treats (pecan, cashew or fried meat). After that, they were taught to exchange less desirable items for more desirable items as shown in the video below. 





Notice that the experimenter keeps the more desirable tidbit visible but out of reach in her left hand. The cockatoo is only allowed to exchange for it when the person reopens her right hand after a predetermined delay (in this case, 40 seconds). 

All the birds could delay eating the first item for at least a couple of seconds. Bear in mind that they held that morsel in their beaks the whole time, unlike the corvids, which tended to set it aside temporarily. I'm not sure how many of us could wait even that long with a marshmallow in our mouths. Half the birds could wait forty seconds, and three waited nearly a minute and a half. When the choice was between one item now and either two or six of the same item later, fewer of the birds were interested in trading, but eight of them did hold out for twenty seconds. 

Interestingly, the birds tended either to wait the entire duration or to give up immediately. The authors speculate that the cockatoos were able to judge the duration and decide whether the expected benefit was worth the wait. This is reminiscent of the children judging the reliability of the experimenter in the post I mentioned earlier.


Auersperg, A., Laumer, I., & Bugnyar, T. (2013). Goffin cockatoos wait for qualitative and quantitative gains but prefer 'better' to 'more'. Biology Letters, 9(3), 20121092. DOI: 10.1098/rsbl.2012.1092.




Wednesday, April 17, 2013

Just for fun: Stop that car!


If you’re in law enforcement, how do you stop a speeding car without risking injury to yourself or to the occupants of that or any other car? The folks at Engineering Science Analysis Corporation used funding from Homeland Security's Science & Technology Directorate to develop the Safe, Quick, Undercarriage Immobilization Device (SQUID) program, which includes the following two technologies:

The Pit-Ballistic Undercarriage Lanyard (Pit-BUL™):


In case it wasn’t clear in the video, the net contains spikes that puncture the tires. That car is going nowhere.

The NightHawk™:


Again, the strip contains spikes. Either of these devices can be triggered remotely or activated by motion sensors.


Tuesday, April 16, 2013

You only think you have free will


The more we learn about the brain, the less control we seem to have over our own thoughts and actions. We feel fully conscious and in command of the decisions we make. We also feel as if all our thoughts come from a single entity, a unified, all-controlling mind. None of that is true. Need some evidence? Chun Siong Soon and his colleagues designed an experiment to illustrate the order of events between the conscious decision to do something and the brain activity that precedes it. Spoiler alert: the brain activity comes first.

The researchers used some rather nuanced experiments to suss out the timing between brain activity and conscious decision. Briefly, numbers were flashed on a screen in front of 18 volunteers while fMRI machines recorded their brain activity. At some point, and at their own volition, the participants decided to either add or subtract the next two numbers shown. They reported both the moment of that decision and the answer they got. You can read a more complete explanation at Why Evolution is True.

What were the results?

Four seconds before the person stopped the clock with the thought, ‘I’ve now decided that I will be adding the next two numbers I see’, his brain activity indicated that he would be performing that action. In other words, the researchers could see the forthcoming action take shape in the subject’s brain well before he himself was aware of having decided anything. Not only that, but the researchers were able to predict with 59% accuracy whether the forthcoming math operation would be addition or subtraction. Remember, the scientists were basing this prediction on brain activity occurring some time before the person himself knew when or what the next mathematical operation would be.

This phenomenon is related to the 'Bereitschaftspotential' (German for 'readiness potential'), or BP, and it suggests that we can at best veto an action the unconscious parts of our brains have already decided to take. Similar experiments have shown that the BP can occur up to ten seconds before the conscious part of our brains gets clued in. Needless to say, this is completely counterintuitive, and it could mean that many of our ‘decisions’ are actually rationalizations after the fact. For whatever reason, part of our brains wants to perform an action, and it convinces the aware part of us that that’s what we had planned all along.


To be clear, none of this changes the fact that we really feel as if we are the masters of our own behavior, nor does it alter our responsibility for our conduct. Our actions have consequences for us and others regardless of what part of our brains instigated them.


Soon, C., He, A., Bode, S., & Haynes, J. (2013). Predicting free choices for abstract intentions. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1212218110.




Monday, April 15, 2013

Yet another blood group


Blood typing (determining what kinds of antigens a person has on the surface of his or her red blood cells) is serious business. Giving a person the wrong type of blood can have dire consequences. Unfortunately, blood typing is also complicated by the fact that there are so many different classes of antigens to consider (for more background, see my post on the discovery of two new blood groups). Just when you think you’ve covered all your bases, someone succumbs to an acute hemolytic transfusion reaction because of a previously unidentified red blood cell antigen.

One such unhappy event happened in 1952, when a patient referred to as ‘Mrs. Vel’ nearly died after receiving what her doctors thought was a perfectly compatible blood transfusion. It turned out that Mrs. Vel was missing an antigen (later called the Vel antigen) that was present in the donated blood. The mismatch led her immune system to destroy the transfused red blood cells wholesale.

In an effort to avoid repeating this error, researchers screened tens of thousands of donated blood samples to see which, if any, would not cross-react with Mrs. Vel’s blood. They found that only about one person in two thousand was missing the Vel antigen. This effectively means that people like Mrs. Vel will have a hard time finding compatible blood if they need a transfusion. More importantly, it means that it’s critical to identify Vel minus people before giving them blood of any kind.

Needless to say, finding the Vel gene would make the screening process much easier. Now, many of the same researchers who brought you the Langereis and Junior blood groups have done just that. The scientists discovered that a gene known as SMIM1 was responsible for encoding the Vel antigen. Vel minus people are missing seventeen nucleotides from their copies of this gene, effectively nullifying the protein as a surface antigen.

This should make it much easier to rapidly identify Vel minus people before they receive a transfusion that could threaten their lives. It also brings the number of blood group systems up to 33. Luckily, blood typing tends to be automated these days, so most people have no need to remember all 33 factors. Personally, I only know my type in two of those systems: ABO and rhesus. Statistically speaking, I’m probably Vel plus, but I have no idea about the other thirty groups.

Ballif, B.A., Helias, V., Peyrard, T., Menanteau, C., Saison, C., Lucien, N., Bourgouin, S., Le Gall, M., Cartron, J.P., & Arnaud, L. (2013). Disruption of SMIM1 causes the Vel- blood type. EMBO Molecular Medicine. PMID: 23505126.




Friday, April 12, 2013

The amazing see-through brain


Stanford researchers, led by Kwanghun Chung, have developed an amazing tool for seeing the inner workings of the brain. In an obvious attempt to fit their chosen acronym of CLARITY (and an excellent acronym it is), they named their technique Clear Lipid-exchanged Anatomically Rigid Imaging/immunostaining-compatible Tissue hYdrogel. Nicely done.

CLARITY transformation of a mouse brain at left into a transparent but still intact brain at right. Shown superimposed over a quote from the great Spanish neuroanatomist Ramon y Cajal.
Credit: Kwanghun Chung and Karl Deisseroth, Howard Hughes Medical Institute/Stanford University.

So, how do you make a brain transparent? Here’s the extremely simplified recipe:

  • Step one: Infuse your brain with a mix of chemicals (acrylamide, bisacrylamide and formaldehyde) that bind to proteins, nucleic acids and small molecules but, critically, not to lipids (fats).
  • Step two: Allow the chemicals to solidify into a gel that permeates the entire brain. The molecules listed in step one are locked in place by this acrylamide matrix.
  • Step three: Run an electric current through the brain to eliminate everything not attached to the acrylamide gel (i.e. the light-reflecting lipids). The result is a totally transparent brain, as shown above.
  • Step four: Add your stain or stains of choice.

Three-dimensional view of a stained hippocampus showing: fluorescent-expressing neurons (green), connecting interneurons (red), and supporting glia (blue).
Credit: Courtesy of the Deisseroth lab.


What can be done with this technique? A better question might be 'what can't you do with it?' Not only could you directly compare the brain structures of people with and without conditions like Alzheimer’s, cancer or autism, but you could also see how different parts of the brain interact with each other. Unlike current brain staining technologies, CLARITY lets you look at entire brains rather than only thin slices of brain tissue.
 
CLARITY allows imaging through the entire intact brain without sectioning. Shown is yellow fluorescent protein labeling of chiefly projection (Thy1) neurons in an entire intact mouse brain.
Credit: Kwanghun Chung and Karl Deisseroth, Howard Hughes Medical Institute/Stanford University.

Not enough pretty pictures? How about a video:



In case you're wondering, this technique will work with any organ, not just brains. Brains are just the coolest thing to look at.

More details at Science Daily, Not Exactly Rocket Science and the L.A. Times.



Chung, K., Wallace, J., Kim, S., Kalyanasundaram, S., Andalman, A., Davidson, T., Mirzabekov, J., Zalocusky, K., Mattis, J., Denisin, A., Pak, S., Bernstein, H., Ramakrishnan, C., Grosenick, L., Gradinaru, V., & Deisseroth, K. (2013). Structural and molecular interrogation of intact biological systems. Nature. DOI: 10.1038/nature12107.


Thursday, April 11, 2013

Why is gastric bypass so effective?


Gastric bypass surgery is a highly effective treatment for obesity. It involves surgically sealing off or removing most of a person’s stomach and attaching the remaining small stomach pocket to the small intestine. Thus, a person is only able to eat a small amount before feeling full. You may think that this inability to eat large meals is the driving force behind the high success rate of gastric bypass. While that is important, it may not be the main factor. It may be more about the microbes.

Alice Liou of Massachusetts General Hospital and her colleagues used a mouse model of gastric bypass to compare the pre- and post-operative flora in the digestive tracts of mice with diet-induced obesity. One group (GB) received gastric bypass surgery similar to that used in humans. The second (SHAM) also underwent surgery, but that operation did not result in gastric bypass. All the mice were fed a liquid diet for two weeks and then were allowed to eat as much as they liked. 

Post-surgery, the GB mice maintained normal weights while the SHAM mice quickly regained their obese statures. So far, this is no surprise; after all, the SHAM mice had not had gastric bypass. However, there were significant differences in the composition of the gut microbes between the GB mice and the SHAM mice. In other words, having real gastric bypass surgery changed the make-up of the mice’s intestinal flora. This was true regardless of whether the mice were fed a normal or a high-fat diet.

Here’s the fascinating part: the scientists inoculated germ-free mice (mice with no gut bacteria of their own) with fecal samples from the GB or SHAM mice. In effect, the researchers were transplanting the intestinal environment of the GB or SHAM mice into the germ-free mice without any surgery. The germ-free mice that received gut bacteria from the GB mice lost weight and fat tissue, without any dietary restrictions. In contrast, mice receiving fecal matter from SHAM mice did not lose weight. This strongly suggests that it is the altered microbial environment of the gut, rather than meal size restriction, that drives weight loss after gastric bypass.

This is actually a hopeful sign. It means that we might be able to achieve the same results as gastric bypass surgery simply by manipulating a person’s gut microbes. The bad news is that we’re far from understanding how to do this safely. We don't know what it is about gastric bypass that causes the change in bacterial populations, and we don't know exactly which of those changes are critical for weight loss. Still, it's a start.


Liou, A., Paziuk, M., Luevano, J., Machineni, S., Turnbaugh, P., & Kaplan, L. (2013). Conserved Shifts in the Gut Microbiota Due to Gastric Bypass Reduce Host Weight and Adiposity. Science Translational Medicine, 5(178). DOI: 10.1126/scitranslmed.3005687.