Friday, March 27, 2015

Fixing 'leaky' blood vessels to combat severe respiratory ailments and, perhaps, Ebola

When you get an infection, your immune system responds with an influx of inflammatory cells that target the underlying bacteria or viruses. These immune cells migrate from your blood into the infected tissue in order to release a cocktail of pro-inflammatory proteins and help eliminate the infectious threat. During this inflammatory response, the blood vessel barrier becomes “leaky.” This allows for an even more rapid influx of additional immune cells. Once the infection resolves, the response cools off, the entry of immune cells gradually wanes and the integrity of the blood vessel barrier is restored.

But if the infection is so severe that it overwhelms the immune response, or if the patient is unable to restore the blood vessel barrier, fluid moves out of the blood vessels and begins pouring into the tissue. This “leakiness” is what can make pneumonia turn into acute respiratory distress syndrome (ARDS), which by my estimate affects hundreds of thousands of people each year worldwide. In the US alone, around 190,000 people develop ARDS each year, and it has a mortality rate of up to 40%. In people with Ebola, this leakiness is also often deadly, causing severe blood pressure drops and shock.
New therapies to fix the leakiness of blood vessels in patients suffering from life-threatening illnesses, such as acute respiratory distress syndrome and Ebola virus infections, have the potential to save many lives.

What is ARDS?

Severe pneumonia can lead to acute respiratory distress syndrome (ARDS), a complication in which massive leakiness of blood vessels in the lung leads to fluid build-up that covers the cells that exchange oxygen and carbon dioxide. Patients usually require mechanical ventilators to force oxygen into the lungs in order to survive.
Pneumonia is one of the most common causes of ARDS but any generalized infection and inflammation that is severe enough to cause massive leakiness of lung blood vessels can cause the syndrome.
For people with ARDS, treatment options other than ventilators and treating the underlying infection are limited. And suppressing the immune system to treat this leakiness can leave patients vulnerable to infection.

A new treatment option

But what if we specifically target the leakiness of the blood vessels? Our research has identified an oxygen-sensitive pathway in the endothelial cells which line the blood vessels of the lungs. The leakiness or tightness of the blood vessel barrier depends on the presence of junctions between these cells. These junctions need two particular proteins to work properly. One is called VE-cadherin and is a key building block of the junctions. The other is called VE-PTP and helps ensure that VE-cadherin stays at the cell surface where it can form the junctions with neighboring cells.
When the endothelial cells are inflamed, these junctions break down and the blood vessels become leaky. This prompts the cells to activate a pathway via Hypoxia Inducible Factors (HIFs), which are usually mobilized in response to low oxygen stress. In the heart, HIF pathways are activated during a heart attack or long-standing narrowing of the heart blood vessels to improve the survival of heart cells and initiate the growth of new blood vessels.

We found that a kind of HIF (called HIF2α) was protective in lung blood vessel cells. When it was activated, it increased levels of the proteins that support the junctions between these cells and strengthened the blood vessel barrier. But in many patients, this activation may not start soon enough to prevent ARDS.

The good news is that we can activate this factor before the lung fluid accumulates and before low oxygen levels set in. Using a drug, we activated HIF2α under normal oxygen conditions, which “tricked” cells into initiating their protective low-oxygen response and tightening the blood vessel barrier. Mice treated with a HIF2α activation drug had substantially higher survival rates when exposed to bacterial toxins or bacteria which cause ARDS.

Similar drugs have already been used in small clinical trials to increase the production of red blood cells in anemic patients. This means that activating HIF2α is probably safe for human use and may indeed become a viable strategy in ARDS. However, the efficacy and safety of drugs which activate HIF2α still have to be tested in humans with proper placebo control groups.

Could this treat Ebola?

The Ebola virus is a hemorrhagic virus and is also known to induce the breakdown of blood vessel barriers. In fact, it is these leaks in the blood vessels that make the disease so deadly. As fluid and blood leak out of the vessels into the surrounding tissue, the volume inside the blood vessels drops to critically low levels, causing blood pressure drops and ultimately shock. A group of researchers in Germany recently reported the use of an experimental drug (a peptide) developed for the treatment of vascular leakage in a 38-year-old doctor who had contracted Ebola in Sierra Leone and was airlifted to Germany. The researchers received a compassionate-use exemption for the drug and the patient recovered.

This is just a single case report and it is impossible to know whether the patient would have recovered similarly well without the experimental vascular leakage treatment, but it does highlight the potential role of drugs which treat blood vessel leakiness in Ebola patients.

The Conversation

This article was originally published on The Conversation. Read the original article. Gong, H., Rehman, J., Tang, H., Wary, K., Mittal, M., Chaturvedi, P., Zhao, Y., Komarova, Y., Vogel, S., & Malik, A. (2015). HIF2α signaling inhibits adherens junctional disruption in acute lung injury. Journal of Clinical Investigation, 125(2), 652-664. DOI: 10.1172/JCI77701

Thursday, March 5, 2015

Does Thinking About God Increase Our Willingness to Make Risky Decisions?

There are at least two ways in which the topic of trust in God is broached in the Friday sermons I have attended in the United States. Some imams lament the decline of trust in God in the age of modernity. Instead of trusting that God is looking out for the believers, modern-day Muslims believe that they can control their destiny on their own, without any Divine assistance. These imams see this lack of trust in God as a sign of weakening faith and an overall decline in piety. But in recent years, I have also heard an increasing number of sermons mentioning an important story from the Muslim tradition. In this story, the Prophet Muhammad asked a Bedouin why he was leaving his camel untied, thus taking the risk that this valuable animal might wander off and disappear. When the Bedouin responded that he placed his trust in God, who would ensure that the animal stayed put, the Prophet told him that he still needed to first tie up his camel and then place his trust in God. Sermons referring to this story admonish their audience to avoid the trap of fatalism: trusting God does not obviate the need for rational and responsible action by each individual.

It is much easier for me to identify with the camel-tying camp because I find it rather challenging to take risks based solely on trust in an inscrutable and minimally communicative entity. Both believers and non-believers take risks in personal matters such as finance or health. However, in my experience, many believers who make a risky financial decision or take a health risk by rejecting a medical treatment backed by strong scientific evidence tend to invoke the name of God when explaining why they took the risk. There is a sense that God is there to back them up and provide some security if the risky decision leads to a detrimental outcome. It would therefore not be far-fetched to conclude that invoking the name of God may increase risk-taking behavior, especially in people with firm religious beliefs. Nevertheless, psychological research in the past decades has suggested the opposite: religiosity and reminders of God seem to be associated with a reduction in risk-taking behavior.

Daniella Kupor and her colleagues at Stanford University recently published the paper "Anticipating Divine Protection? Reminders of God Can Increase Nonmoral Risk Taking," which takes a new look at the link between invoking the name of God and risky behaviors. The researchers hypothesized that reminders of God may have opposite effects on different types of risk-taking behavior. For example, risk-taking that is deemed 'immoral,' such as taking sexual risks or cheating, may be suppressed by invoking God, whereas non-moral risks, such as making risky investments or skydiving, might be increased because reminders of God provide a sense of security. According to Kupor and colleagues, it is important to classify the type of risky behavior in relation to how society perceives God's approval or disapproval of the behavior. The researchers conducted a variety of experiments to test this hypothesis using online study participants.

One of the experiments involved running ads on a social media network and measuring how often users clicked on slightly different wordings of the ad texts. The researchers ran the ads 452,051 times on accounts registered to users over the age of 18 residing in the United States. The participants saw ads for either a non-moral risk-taking behavior (skydiving), a moral risk-taking behavior (bribery) or a control behavior (playing video games), and each ad came in either a 'God version' or a standard version. Here are the two versions of the skydiving ad (both versions had a picture of a person skydiving):
Amazing Skydiving! God knows what you are missing! Find skydiving near you. Click here, feel the thrill!
Amazing Skydiving! You don't know what you are missing! Find skydiving near you. Click here, feel the thrill!
The percentage of users who clicked on the skydiving ad in the ‘God version' was twice as high as in the group which saw the standard "You don't know what you are missing" phrasing! One explanation for the significantly higher ad success rate is that "God knows…." might have struck the ad viewers as being rather unusual and piqued their curiosity. Instead of this being a reflection of increased propensity to take risks, perhaps the viewers just wanted to find out what was meant by "God knows…". However, the response to the bribery ad suggests that it isn't just mere curiosity. These are the two versions of the bribery ad (both versions had an image of two hands exchanging money):
Learn How to Bribe! God knows what you are missing! Learn how to bribe with little risk of getting caught!
Learn How to Bribe! You don't know what you are missing! Learn how to bribe with little risk of getting caught!
In this case, the ‘God version' cut down the percentage of clicks to less than half of the standard version. The researchers concluded that invoking the name of God prevented the users from wanting to find out more about bribery because they consciously or subconsciously associated bribery with being immoral and rejected by God. These findings are quite remarkable because they suggest that a single mention of the word ‘God' in an ad can have opposite effects on two different types of risk-taking, the non-moral thrill of skydiving versus the immoral risk of taking bribes.

Clicking on an ad for a potentially risky behavior is not quite the same as actually engaging in that behavior. This is why the researchers also conducted a separate study in which participants were asked to answer a set of questions after viewing certain colors. Participants could choose between Option 1 (a short two-minute survey plus an additional 25 cents as a reward) or Option 2 (a four-minute survey with no additional financial incentive). The participants were also informed that Option 1 was more risky with the following label:
WARNING Eye Hazard: Option 1 not for individuals under 18. The bright colors in this task may damage the retina and cornea in the eyes. In extreme cases it can also cause macular degeneration.
In reality, neither of the two options was damaging to the eyes of the participants but the participants did not know this. This set-up allowed the researchers to assess the likelihood of the participants taking the risk of potentially injurious light exposure to their eyes. To test the impact of God reminders, the researchers assigned the participants to read one of two texts, both of which were adapted from Wikipedia, before deciding on Option 1 or Option 2:

Text used for participants in the control group:
"In 2006, the International Astronomers' Union passed a resolution outlining three conditions for an object to be called a planet. First, the object must orbit the sun; second, the object must be a sphere; and third, it must have cleared the neighborhood around its orbit. Pluto does not meet the third condition, and is thus not a planet."
  Text used for the participants in the ‘God reminder' group:
"God is often thought of as a supreme being. Theologians have described God as having many attributes, including omniscience (infinite knowledge), omnipotence (unlimited power), omnipresence (present everywhere), and omnibenevolence (perfect goodness). God has also been conceived as being incorporeal (immaterial), a personal being, and the "greatest conceivable existent."
As hypothesized by the researchers, a significantly higher proportion of participants chose the supposedly harmful Option 1 in the ‘God reminder' group (96%) than in the control group (84%). Reading a single paragraph about God's attributes was apparently sufficient to lull more participants into the risk of exposing their eyes to potential harm. The overall high percentage of participants choosing Option 1 even in the control condition is probably because it offered a greater financial reward (although it seems a bit odd that participants were willing to sell out their retinas for a quarter, but maybe they did not really take the risk very seriously).

A limitation of the study is that it does not provide any information on whether the impact of mentioning God was dependent on the religious beliefs of the participants. Do ‘God reminders' affect believers as well as atheists and agnostics, or do they only work in people who clearly identify with a religious tradition? Another limitation is that even though many of the observed differences between the ‘God condition' and the control conditions were statistically significant, the actual differences in numbers were less impressive. For example, in the skydiving ad experiment, the click-through rate was about 0.03% in the standard ad and 0.06% in the ‘God condition'. This is a doubling, but how meaningful is this doubling when the overall click rates are so low? Even the difference between the two groups who read the Wikipedia texts and chose Option 1 (96% vs. 84%) does not seem very impressive. However, one has to bear in mind that all of these interventions were very subtle – inserting a single mention of God into a social media ad or asking participants to read a single paragraph about God.
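For readers who want to check the intuition that even tiny click-through rates can differ significantly at this sample size, a standard two-proportion z-test makes the point. The sketch below is purely illustrative: it assumes, hypothetically, that the roughly 452,051 impressions were split evenly between the two ad versions and that the rates were about 0.03% and 0.06% as quoted above; the paper's exact per-arm counts are not given here.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test: returns (z statistic, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    # Pooled proportion under the null hypothesis of equal rates
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical even split of the ~452,051 impressions,
# with click rates of roughly 0.03% vs 0.06% as reported.
n = 452_051 // 2
clicks_standard = round(n * 0.0003)
clicks_god = round(n * 0.0006)

z, p = two_proportion_z(clicks_standard, n, clicks_god, n)
print(f"z = {z:.2f}, p = {p:.2e}")
```

Under these assumed numbers the z statistic comes out well above the conventional significance threshold, which illustrates the point in the paragraph above: with hundreds of thousands of impressions, even a difference between two very small rates can be statistically robust while remaining small in absolute terms.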

People who live in societies suffused with religion, such as the United States or Pakistan, are continuously reminded of God, whether they glance at their banknotes, turn on the TV or recite a pledge of allegiance in school. If the mere mention of God in an ad can already sway some of us to increase our willingness to take risks, what impact does the continuous barrage of God mentions have on our overall risk-taking behavior? Despite its limitations, the work by Kupor and colleagues provides a fascinating new insight into the link between reminders of God and risk-taking behavior. By demonstrating the need to replace blanket statements regarding the relationship between God, religiosity and risk-taking with a more subtle distinction between moral and non-moral risky behaviors, the researchers are paving the way for future studies on how religion and mentions of God influence human behavior and decision-making.

Reference: Kupor DM, Laurin K, & Levav J (2015). Anticipating Divine Protection? Reminders of God Can Increase Nonmoral Risk Taking. Psychological Science. DOI: 10.1177/0956797614563108

Note: An earlier version of this article was first published on the 3Quarksdaily Blog.

Saturday, February 21, 2015

Physician-Scientists: An Endangered Species?

Can excellent scientists be excellent physicians at the same time?

“I would like to ask you about a trip to Thailand.”

This is not the kind of question I expected from a patient in my cardiology clinic at the Veterans Administration hospital in Indianapolis. Especially since this patient lived in rural Indiana and did not strike me as the adventurous type.

“A trip to Thailand?” I mumbled. “Well, ummm…I am sure……ummm…I guess the trip will be ok. Just take your heart medications regularly, avoid getting dehydrated and I hope you have a great vacation there. I am just a cardiologist; if you want to know more about the country, you ought to talk to a travel agent.”

I realized that I didn’t even know whether travel agents still existed in the interwebclickopedia world, so I hastily added “Or just use a travel website. With photos. Lots of photos. And videos. Lots of videos.”

Now it was the patient’s turn to look confused.

“Doctor, I didn’t want to ask you about the country. I wanted to know whether you thought it was a good idea for me to travel there to receive stem cell injections for my heart.”

I was thrilled because, for the first time in my work as a cardiologist, a patient had asked me a question which directly pertained to my research. My laboratory’s focus was studying the release of growth factors from stem cells and whether they could help improve cardiovascular function. But my excitement was short-lived and gradually gave way to horror when the patient explained the details of the plan. A private clinic in Thailand was marketing bone marrow cell injections to treat heart patients with advanced heart disease. The patient would have to use nearly all his life savings to travel to Thailand and stay at this clinic, have his bone marrow extracted and processed, and then re-injected back into his heart in order to cure his heart disease.

Much to the chagrin of the other patients in the waiting room, I spent the next half hour summarizing the current literature on cardiovascular cell therapies for the patient. I explained that most bone marrow cells were not stem cells and that there was no solid evidence that he would benefit from the injections. He was about to undergo a high-risk procedure with questionable benefits and lose a substantial amount of money. I pleaded with him to avoid such a procedure, and was finally able to convince him.

I remember this anecdote so well because in my career as a physician-scientist, the two worlds of science and clinical medicine rarely overlap and this was one of the few exceptions. Most of my time is spent in my stem cell biology laboratory, studying basic mechanisms of stem cell metabolism and molecular signaling pathways. Roughly twenty percent of my time is devoted to patient care, treating patients with known cardiovascular disease in clinics, inpatient wards and coronary care units.

“Portrait of Dr. Gachet” – Painting by Vincent van Gogh (Public Domain via Wikimedia)

As scientists, we want to move beyond the current boundaries of knowledge, explore creative ideas and test hypotheses. As physicians, we rely on empathy to communicate with the patient and his or her family, we apply established treatment guidelines, and our patient’s comfort takes precedence over satisfying our intellectual curiosity. The mystique of the physician-scientist suggests that those of us who actively work in both worlds are able to synergize our experiences from scientific work and clinical practice. Being a scientist indeed has some impact on my clinical work, because it makes me evaluate a patient’s clinical data and published papers more critically. My clinical work helps me identify areas of research which in the long run may be most relevant to patient care. But these rather broad forms of crosstalk have little bearing on my day-to-day work, which is characterized by mode-switching, vacillating back and forth between my two roles.

Dr. J. Michael Bishop, who received the Nobel Prize in 1989 with Dr. Harold Varmus for their work on retroviral cancer genes (oncogenes), spoke at a panel discussion at the 64th Lindau Nobel Laureate Meeting (2014) about the career paths of physician-scientists in the United States. Narrating his own background, he said that after he completed medical school, he began his clinical postgraduate training but then exclusively focused on his research. Dr. Bishop elaborated how physician-scientists in the United States are often given ample opportunities and support to train in both medicine and science, but many eventually drop out from the dual career path and decide to actively pursue only one or the other. The demands of both professions and the financial pressures of having to bring in clinical revenue as well as research grants are among the major reasons why it is so difficult to remain active as a scientist and a clinician.

To learn more about physician-scientist careers in Germany, I also spoke to Dr. Christiane Opitz who heads a cancer metabolism group at the German Cancer Research Center, DKFZ, in Heidelberg and is an active clinician. She was a Lindau attendee as a young scientist in 2011 and this year has returned as a discussant.

JR: You embody the physician-scientist role, by actively managing neuro-oncology patients at the university hospital in Heidelberg as well as heading your own tumor metabolism research group at the German Cancer Research Center (Deutsches Krebsforschungszentrum or DKFZ in Heidelberg). Is there a lot of crosstalk between these two roles? Does treating patients have a significant influence on your work as a scientist? Does your work as cancer cell biologist affect how you evaluate and treat patients?

CO: In my experience, being a physician influences me on a personal level and shapes my character, but not so much my work as a scientist. Of course I am more aware of patients’ needs when I design scientific experiments, but there is not a lot of crosstalk between me as a physician and me as a scientist. I treat patients with malignant brain tumors, a disease that remains fatal despite chemotherapy and radiation therapy. We unfortunately have very little to offer these patients. So as a physician, I see my role as being there for the patients, taking time to talk to them, providing comfort and counseling their families, because we do not have any definitive therapies. This is very different from my research, where my aim is to study basic mechanisms of tumor metabolism.

There are many days when I am forced to tell a patient that his or her tumor has relapsed and that we have no more treatments to offer. Of course these experiences do motivate me to study brain tumor metabolism with the hope that one day my work might help develop a new treatment. But I also know that even if we were lucky enough to uncover a new mechanism, it is very difficult to predict if and when it would contribute to a new treatment. This is why my scientific work is primarily driven by scientific curiosity and guided by the experimental results, whereas the long-term hope for new therapies is part of the bigger picture.

JR: Is it possible that medical thinking doesn’t only help science but can also be problematic for science?

CO: I think in general there is increasing focus on translational science from bench-to-bedside, the aim to develop new treatments. This application-oriented approach may bear the risk of not adequately valuing basic science. We definitely need translational science, because we want patients to benefit from our work in the basic sciences. On the other hand, it is very important to engage in basic science research because that is where – often by serendipity – the real breakthroughs occur. When we conduct basic science experiments, we do not think about applications. Instead, we primarily explore biological mechanisms.
Physicians and scientists have always conducted “translational research”, but it has now become a very popular buzzword. For that reason, I am a bit concerned when too much focus and funding is shifted towards application-oriented science at the expense of basic science, because then we might lose the basis for future scientific breakthroughs. We need a healthy balance of both.

JR: Does the medical training of a physician draw them towards application-oriented translational science and perhaps limit their ability to address the more fundamental mechanistic questions?

CO: In general, I would say it is true that people who were trained purely as scientists are more interested in addressing basic mechanisms and people who were trained as physicians are more interested in understanding applications such as therapies, therapeutic targets and resistance to therapies.
There are exceptions, of course, and it is ultimately dependent on the individual. I have met physicians who are very interested in basic sciences. I also know researchers who were trained in the basic sciences but have now become interested in therapeutic applications.

JR: When physicians decide to engage in basic science, do you think they have to perhaps partially “unlearn” their natural tendency of framing their scientific experiments in terms of therapeutic applications because of their exposure to clinical problems?

CO: We obviously need application-oriented science, too. It is important to encourage physicians who want to pursue translational research in the quest of new therapies, but we should not regard that as superior to basic science. As a physician who is primarily working in the basic sciences, I make a conscious effort to focus on mechanisms instead of pre-defined therapeutic goals.

Looking to the future

Dr. Opitz’s description of how challenging it is to navigate between her clinical work in neuro-oncology and her research mirrors my own experience. I have often heard that the physician-scientist is becoming an “endangered species”, implying that perhaps we used to roam the earth in large numbers and have now become rather rare. I am not sure this is an accurate portrayal. It is true that current financial pressures at research funding agencies and academic institutions are placing increased demands on physician-scientists, making it harder to actively pursue both lines of work. However, independent of these more recent financial pressures, it has always been extremely challenging to concomitantly work in two professions and be good at what you do. Dr. Bishop decided to forsake a clinical career and focus only on his molecular research because he was passionate about the research. His tremendous success as a scientist shows that this was probably a good decision.

As physician-scientists, we are plagued by gnawing self-doubts about the quality of our work. Can we be excellent scientists and excellent physicians at the same time? Even if, for example, the number of days we see patients is reduced to a minimum, can we stay up-to-date in two professions in which a huge amount of new knowledge is produced and published on a daily basis? And even though the reduction in clinical time allows us to develop great research programs, does it compromise our clinical skills to a point where we may not make the best decisions for our patients?

We are often forced to sacrifice our weekends, the hours we sleep and the time we spend with our families or loved ones so that we can cope with the demands of the two professions. This is probably also true for other dual professions. Physician-scientists are a rare breed, but so are physician-novelists, banker-poets and philosopher-scientists who try to remain actively engaged in both of their professions.

There will always be a rare population of physician-scientists who are willing to take on the challenge. They need all the available help from academic institutions and research organizations to ensure that they have the research funds, infrastructure and optimized work schedules which allow them to pursue this extremely demanding dual career path. It should not come as a surprise that, despite the best support structure, a substantial proportion of physician-scientists will at some point feel overwhelmed by the demands and personal sacrifices and opt for one or the other career. Even though they may choose to drop out, the small pool of physician-scientists will likely be replenished by a fresh batch of younger colleagues, attracted by the prospect of concomitantly working in and bridging these two worlds.

Instead of lamenting the purported demise of physician-scientists, we should also think about alternate ways to improve the dialogue and synergy between cutting-edge science and clinical medicine. A physician can practice science-based medicine without having to actively work as a scientist in a laboratory. A scientist can be inspired or informed by the clinical needs of patients without having to become a practicing physician. Creating routine, formalized exchange opportunities such as fellowships or sabbaticals, which allow scientists and clinicians to spend defined periods of time in each other’s work environments, may be a much more feasible approach to help bridge the gap and engender mutual understanding and respect.

Originally published as “Physician Scientists: An Endangered Species?“ in the Lindau Nobel Laureates Meeting blog.

Monday, February 9, 2015

Resisting Valentine's Day

To celebrate Valentine's Day (as a geeky scientist), I decided to search the "Web of Science" database for published articles with the phrase "Valentine's Day" in the title.

The article with the most citations was "Market-resistance and Valentine's Day events" published in the Journal of Business Research in 2009, by the authors Angeline Close and George Zinkhan. The title sounded rather interesting so I decided to read it. The authors reported the results of a survey of college students and consumers conducted in 2003-2005 regarding their thoughts about gift-giving on Valentine's Day:

1) Most males (63%) and some females (31%) feel obligated to give a gift to their partner for this holiday.

2) Males in a new relationship (i.e. less than six months) feel most obligated (81%), females in a new relationship are the second most obligated group (50%).

3) Less than half of males (44%) in a more established relationship feel obligated, and this number is even lower for females in more established relationships (13%).

The authors also conducted interviews using open-ended questions and reviewed diaries and E-diaries to investigate whether people indicated a "resistance" to giving gifts. They found that people expressed three different types of resistance, either opposing or severely limiting the giving of gifts (gift resistance), resisting the purchase of gifts (retail resistance) or broadly opposing the Valentine's Day business in general (market resistance). All of these forms of "resistance" appeared to be connected to an anti-consumption attitude, the desire to not be drawn into a culture of excessive consumerism.

Here are a couple of quotes from the participants:

Valentine's Day is a marketing strategy by the flower and candy companies. It's a cheesy, overblown, stupid “holiday” to force you to spend money on each other.

Valentine's Day is a way for retailers to get you to spend money in their stores. People get caught up in the B.S. and I should not have to spend extra to show I care, and my girlfriend agrees. But we both still spent plenty!

The survey results indicating differences between men and women are interesting, but the paper also shows that even though the majority of people in the US may feel obligated to give each other gifts on Valentine's Day, there is a strong anti-consumerism attitude. Many people are not willing to succumb to the pressure to spend a lot of money that ultimately benefits retailers; they instead express their affection for each other in ways that do not involve purchasing expensive gifts.

If you forgot to get a Valentine's Day gift for your partner or spouse, just print out a copy of this paper and give it to them instead, explaining that your lack of gift-giving is an expression of your anti-consumerist stance. If that person is just as geeky as you are, you might be able to pull it off. There is one caveat: The Journal of Business Research is not open access, so you may hit a paywall asking for $31.50 to read the article, which is more than the price of a typical box of chocolates.

Image credit: Early 20th century Valentine's Day card, showing woman holding heart shaped decoration and flowers, ca. 1910 - via Wikimedia Commons - Public Domain

Note: An earlier version of this article was first published on the Scilogs blogging network.

Close, A., & Zinkhan, G. (2009). Market-resistance and Valentine's Day events. Journal of Business Research, 62(2), 200-207. DOI: 10.1016/j.jbusres.2008.01.027

Thursday, February 5, 2015

Typical Dreams: A Comparison of Dreams Across Cultures

But I, being poor, have only my dreams;
I have spread my dreams under your feet;
Tread softly because you tread on my dreams.
                                    William Butler Yeats – from "Aedh Wishes for the Cloths of Heaven"   

Have you ever wondered how the content of your dreams differs from that of your friends? How about the dreams of people raised in different countries and cultures? It is not always easy to compare dreams of distinct individuals because the content of dreams depends on our personal experiences. This is why dream researchers have developed standardized dream questionnaires in which common thematic elements are grouped together. These questionnaires can be translated into various languages and used to survey and scientifically analyze the content of dreams. Open-ended questions about dreams might elicit free-form, subjective answers which are difficult to categorize and analyze. Therefore, standardized dream questionnaires ask study subjects "Have you ever dreamed of . . ." and provide research subjects with a list of defined dream themes such as being chased, flying or falling. Dream researchers can also modify the questionnaires to include additional questions about the frequency or intensity of each dream theme and specify the time frame that the study subjects should take into account. For example, instead of asking "Have you ever dreamed of…", one can prompt subjects to focus on the dreams of the last month or the first memory of ever dreaming about a certain theme. 

Any such subjective assessment of one's dreams with a questionnaire has its pitfalls. We routinely forget most of our dreams and we tend to remember the dreams that are either the most vivid or frequent, as well as the dreams which we may have discussed with friends or written down in a journal. The answers to dream questionnaires may therefore be a reflection of our dream memory and not necessarily the actual frequency or prevalence of certain dream themes. Furthermore, standardized dream questionnaires are ideal for research purposes but may not capture the complex and subjective nature of dreams. Despite these pitfalls, research studies using dream questionnaires provide a fascinating insight into the dream world of large groups of people and identify commonalities or differences in the thematic content of dreams across cultures.

The researcher Calvin Kai-Ching Yu from the Hong Kong Shue Yan University used a Chinese translation of a standardized dream questionnaire and surveyed 384 students at the University of Hong Kong (mostly psychology students; 69% female, 31% male; mean age 21). Here are the ten most prevalent dream themes in this sample of Chinese students, according to Yu (2008):
  1. Schools, teachers, studying (95%)
  2. Being chased or pursued (92%)
  3. Falling (87%)
  4. Arriving too late, e.g., missing a train (81%)
  5. Failing an examination (79%)
  6. A person now alive as dead (75%)
  7. Trying again and again to do something (74%)
  8. Flying or soaring through the air (74%)
  9. Being frozen with fright (71%)
  10. Sexual experiences (70%)
The most prevalent theme was "Schools, teachers, studying". This means that 95% of the study subjects recalled having had dreams related to studying, school or teachers at some point in their lives, whereas only 70% of the subjects recalled dreams about sexual experiences. The subjects were also asked to rank the frequency of the dreams on a 5-point scale (0 = never, 1 = seldom, 2 = sometimes, 3 = frequently, 4 = very frequently). For the most part, the most prevalent dreams were also the most frequent ones. Not only did nearly every subject recall dreams about schools, teachers or studying, but this theme also received an average frequency score of 2.3, indicating that for most individuals this was a recurrent dream theme – not a big surprise in university students. On the other hand, even though the majority of subjects (57%) recalled dreams of "being smothered, unable to breathe", its average frequency rating was low (0.9), indicating that this was a rare (but probably rather memorable) dream.
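The distinction between prevalence (whether a subject has ever had a dream theme) and mean frequency (the average 0–4 rating) can be made concrete with a quick calculation. The response values below are made up for illustration and are not data from the study:

```python
# Hypothetical responses of ten subjects for one dream theme,
# on the 0-4 scale (0 = never ... 4 = very frequently)
responses = [0, 1, 0, 2, 1, 0, 3, 1, 0, 1]

# Prevalence: fraction of subjects who ever had the dream (rating > 0)
prevalence = sum(r > 0 for r in responses) / len(responses)

# Mean frequency: average rating across all subjects
mean_frequency = sum(responses) / len(responses)

print(prevalence, mean_frequency)  # → 0.6 0.9
```

A theme can thus be recalled by a majority of subjects (prevalence 60%) while still being rare for each individual (mean frequency 0.9), exactly the pattern seen for the "being smothered" dream.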

How do the dreams of the Chinese students compare to those of their counterparts in other countries? Michael Schredl and his colleagues used a similar questionnaire to study the dreams of German university students (nearly all psychology students; 85% female, 15% male; mean age 24). Here are the ten most prevalent dream themes in this sample of German students, according to Schredl and colleagues (2004):
  1. Schools, teachers, studying (89%)
  2. Being chased or pursued (89%)
  3. Sexual experiences (87%)
  4. Falling (74%)
  5. Arriving too late, e.g., missing a train (69%)
  6. A person now alive as dead (68%)
  7. Flying or soaring through the air (64%)
  8. Failing an examination (61%)
  9. Being on the verge of falling (57%)
  10. Being frozen with fright (56%)
There is a remarkable overlap in the top ten list of dream themes among Chinese and German students. Dreams about school and about being chased are the two most prevalent themes for Chinese and German students. One key difference is that dreams about sexual experiences are recalled more commonly among German students. 

Tore Nielsen and his colleagues administered a dream questionnaire to students at three Canadian universities, thus obtaining data on an even larger study population (over 1,000 students). Here are the ten most prevalent dream themes in this sample of Canadian students, according to Nielsen and colleagues (2003):
  1. Being chased or pursued (82%)
  2. Sexual experiences (77%)
  3. Falling (74%)
  4. Schools, teachers, studying (67%)
  5. Arriving too late, e.g., missing a train (60%)
  6. Being on the verge of falling (58%)
  7. Trying again and again to do something (54%)
  8. A person now alive as dead (54%)
  9. Flying or soaring through the air (48%)
  10. Vividly sensing . . . a presence in the room (48%)
It is interesting that dreams about school or studying were the most common theme among Chinese and German students but do not even make the top-three list among Canadian students. This finding is perhaps also mirrored in the result that dreams about failing exams are comparatively common in Chinese and German students, but are not found in the top-ten list among Canadian students. At first glance, the dream content of German students seems to be a hybrid between those of Chinese and Canadian students. Chinese and German students share a higher prevalence of academia-related dreams, whereas sexual dreams are among the most prevalent dreams for both Canadians and Germans. However, I did notice an interesting anomaly. Chinese and Canadian students dream about "Trying again and again to do something" – a theme which is quite rare among German students. I have a simple explanation for this (possibly influenced by the fact that I am German): Germans get it right the first time, which is why they do not dream about repeatedly attempting the same task.

The strength of these three studies is that they used similar techniques to assess dream content and evaluated study subjects with very comparable backgrounds: psychology students in their early twenties. This approach provides us with the unique opportunity to directly compare and contrast the dreams of people who were raised on three continents and immersed in distinct cultures and languages. However, this approach also comes with a major limitation. We cannot easily extrapolate these results to the general population. Dreams about studying and school may be common among students but they are probably rare among subjects who are currently holding a full-time job or are retired. University students are an easily accessible study population but they are not necessarily representative of the society they grow up in. Future studies which want to establish a more comprehensive cross-cultural comparison of dream content should probably attempt to enroll study subjects of varying ages, professions, educational and socio-economic backgrounds. Despite this limitation, the currently available data on dream content comparisons across countries does suggest one important message: People all over the world have similar dreams.


 Yu, Calvin Kai-Ching. "Typical dreams experienced by Chinese people." Dreaming 18.1 (2008): 1-10. 

 Nielsen, Tore A., et al. "The Typical Dreams of Canadian University Students." Dreaming 13.4 (2003): 211-235. 

 Schredl, Michael, et al. "Typical dreams: stability and gender differences." The Journal of Psychology 138.6 (2004): 485-494.   

Note: An earlier version of this article was first published on 3Quarksdaily.

Wednesday, February 4, 2015

Moral Time: Does Our Internal Clock Influence Moral Judgments?

Does morality depend on the time of the day? The study "The Morning Morality Effect: The Influence of Time of Day on Unethical Behavior", published in October 2013 by Maryam Kouchaki and Isaac Smith, suggested that people are more honest in the mornings, and that their ability to resist the temptation of lying and cheating wears off as the day progresses. In a series of experiments, Kouchaki and Smith found that moral awareness and self-control in their study subjects decreased in the late afternoon or early evening. The researchers also assessed the degree of "moral disengagement", i.e. the willingness to lie or cheat without feeling much personal remorse or responsibility, by asking the study subjects to respond to questions such as "Considering the ways people grossly misrepresent themselves, it's hardly a sin to inflate your own credentials a bit" or "People shouldn't be held accountable for doing questionable things when they were just doing what an authority figure told them to do" on a scale from 1 (strongly disagree) to 7 (strongly agree). Interestingly, the subjects who strongly disagreed with such statements were the most susceptible to the morning morality effect. They were quite honest in the mornings but significantly more likely to cheat in the afternoons. On the other hand, moral disengagers, i.e. subjects who did not think that inflating credentials or following questionable orders was a big deal, were just as likely to cheat in the morning as they were in the afternoons.

Understandably, the study caused quite a bit of ruckus and became one of the most widely discussed psychology studies of 2013, covered by blogs and newspapers such as the Guardian ("Keep the mornings honest, the afternoons for lying and cheating") or the German Süddeutsche Zeitung ("Lügen erst nach 17 Uhr" – "Lying starts at 5 pm"). The findings of the study also raised important questions: Should organizations and businesses take the time of day into account when assigning tasks to employees which require high levels of moral awareness? How can one prevent the "moral exhaustion" in the late afternoon and the concomitant rise in the willingness to cheat? Should the time of the day be factored into punishments for unethical behavior?

One question not addressed by Kouchaki and Smith was whether the propensity to become dishonest in the afternoons or evenings could be generalized to all subjects or whether the internal time in the subjects was also a factor. All humans have an internal body clock – the circadian clock – which runs with a period of approximately 24 hours. The circadian clock controls a wide variety of physical and mental functions such as our body temperature, the release of hormones or our levels of alertness. The internal clock can vary between individuals, but external cues such as sunlight or the social constraints of our society force our internal clocks to be synchronized to a pre-defined external time which may be quite distinct from what our internal clock would choose if it were to "run free". Free-running internal clocks of individuals can differ in terms of their period (for example 23.5 hours versus 24.4 hours) as well as the phases of when individuals would preferably engage in certain behaviors.

Some people like to go to bed early, wake up at 5 am or 6 am on their own even without an alarm clock and they experience peak levels of alertness and energy before noon. In contrast to such "larks", there are "owls" among us who prefer to go to bed late at night, wake up at 11 am, experience their peak energy levels and alertness in the evening hours and like to stay up way past midnight. It is not always easy to determine our "chronotype" – whether we are "larks", "owls" or some intermediate thereof – because our work day often imposes its demands on our internal clocks. Schools and employers have set up the typical workday in a manner which favors "larks", with work days usually starting around 7am – 9am. In 1976, the researchers Horne and Östberg developed a Morningness-Eveningness Questionnaire to investigate what time of the day individuals would prefer to wake up, work or take a test if it was entirely up to them. They found that roughly 40% of the people they surveyed had an evening chronotype! If Kouchaki and Smith's findings that cheating and dishonesty increases in the late afternoons applies to both morning and evening chronotype folks, then the evening chronotypes ("owls") are in a bit of a pickle. Their peak performance and alertness times would overlap with their propensity to be dishonest. 

The researchers Brian Gunia, Christopher Barnes and Sunita Sah therefore decided to replicate the Kouchaki and Smith study with one major modification: They not only assessed the propensity to cheat at different times of the day, they also measured the chronotypes of the study participants. Their recent paper "The Morality of Larks and Owls: Unethical Behavior Depends on Chronotype as Well as Time of Day" confirms Kouchaki and Smith's finding that the time of the day influences honesty, but the observed effects differ among chronotypes. After assessing the chronotypes of 142 participants (72 women, 70 men; mean age 30 years), the researchers randomly assigned them to either a morning session (7:00 to 8:30 am) or an evening session (12:00 am to 1:30 am). The participants were asked to report the outcome of a die roll; the higher the reported number, the more raffle tickets they would receive for a large prize, which served as an incentive to inflate the outcome of the roll. Since a die roll is purely random, one would expect that the reported averages of the die-roll results would be similar across all groups if all participants were honest.
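The logic of the die-roll measure can be sketched with a small simulation: an honest group's reported rolls should average about 3.5 (the expected value of a fair die), while any systematic over-reporting pushes the group mean upward. The 20% cheating rate below is an arbitrary assumption for illustration, not a figure from the study:

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

def mean_reported_roll(n, inflate_prob=0.0):
    """Simulate n participants each reporting one die roll; with
    probability inflate_prob a participant dishonestly reports a 6."""
    reports = []
    for _ in range(n):
        roll = random.randint(1, 6)       # the actual, fair roll
        if random.random() < inflate_prob:
            roll = 6                      # inflated (dishonest) report
        reports.append(roll)
    return sum(reports) / len(reports)

honest_mean = mean_reported_roll(100_000)          # close to 3.5
inflated_mean = mean_reported_roll(100_000, 0.2)   # drifts toward 4.0
print(round(honest_mean, 2), round(inflated_mean, 2))
```

A group mean noticeably above 3.5 is therefore statistical evidence of cheating in that group, even though no individual report can be proven dishonest.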

Their findings: Morning people ("larks") tended to report higher die-roll numbers in the evening than in the morning – thus supporting the Kouchaki and Smith results – but evening people tended to report higher numbers in the morning than in the evening. This means that the morning morality effect and the idea of "moral exhaustion" towards the end of the day cannot be generalized to all. In fact, evening people ("owls") are more honest in the evenings.

Not so fast, say Kouchaki and Smith in a commentary published together with the new paper by Gunia and colleagues. They applaud the new study for taking the analysis of daytime effects on cheating one step further by considering the chronotypes of the participants, but they also point out some important limitations of the newer study. Gunia and colleagues only included morning and evening people in their analysis and excluded the participants who reported an intermediate chronotype, i.e. not quite early morning "larks" and not true "owls". This is a valid criticism because newer research on chronotypes by Till Roenneberg and his colleagues at the University of Munich has shown that there is a Gaussian distribution of chronotypes. Few of us are extreme larks or extreme owls; most of us lie on a continuum. Roenneberg's approach to measuring chronotypes looks at the actual hours of sleep we get and distinguishes between our behaviors on working days and weekends because the latter may provide a better insight into our endogenous clock, unencumbered by the demands of our work schedule. The second important limitation identified by Kouchaki and Smith is that Gunia and colleagues used 12 am to 1:30 am as the "evening condition". This may be the correct time to study the peak performance of extreme owls and selected night shift workers, but ascertaining cheating behavior at this hour is not necessarily relevant for the general workforce.

Neither the study by Kouchaki and Smith nor the new study by Gunia and colleagues provides us with a definitive answer as to how the external time of the day (the time according to the sun and our social environment) and the internal time (the time according to our internal circadian clock) affect moral decision-making. We need additional studies with larger sample sizes which include a broad range of participants with varying chronotypes, as well as studies which assess moral decision-making not just at two time points but across a range of time points (early morning, afternoon, late afternoon, evening, night, etc.). But the two studies have opened up a whole new area of research and their findings are quite relevant for the field of experimental philosophy, which uses psychological methods to study philosophical questions. If empirical studies are conducted with human subjects then researchers need to take into account the time of the day and the internal time and chronotype of the participants, as well as other physiological differences between individuals.

 The exchange between Kouchaki & Smith and Gunia & colleagues also demonstrates the strength of rigorous psychological studies. Researcher group 1 makes a highly provocative assertion based on their data, researcher group 2 partially replicates it and qualifies it by introducing one new variable (chronotypes) and researcher group 1 then analyzes strengths and weaknesses of the newer study. This type of constructive criticism and dialogue is essential for high-quality research. Hopefully, future studies will be conducted to provide more insights into this question. By using the Roenneberg approach to assess chronotypes, one could potentially assess a whole continuum of chronotypes – both on working days and weekends – and also relate moral reasoning to the amount of sleep we get. Measurements of body temperature, hormone levels, brain imaging and other biological variables may provide further insight into how the time of day affects our moral reasoning. 

 Why is this type of research important? I think that realizing how dynamic moral judgment can be is a humbling experience. It is easy to condemn the behavior of others as "immoral", "unethical" or "dishonest" as if these are absolute pronouncements. Realizing that our own judgment of what is considered ethical or acceptable can vary because of our internal clock or the external time of the day reminds us to be less judgmental and more appreciative of the complex neurobiology and physiology which influence moral decision-making. If future studies confirm that the internal time (and possibly sleep deprivation) influences moral decision-making, then we need to carefully rethink whether the status quo of forcing people with diverse chronotypes into a compulsory 9-to-5 workday is acceptable. Few, if any, employers and schools have adapted their work schedules to accommodate chronotype diversity in human society. Understanding that individualized work schedules for people with diverse chronotypes may not only increase their overall performance but also increase their honesty might serve as another incentive for employers and schools to recognize the importance of chronotype diversity among individuals. 


 Brian C. Gunia, Christopher M. Barnes and Sunita Sah (2014) "The Morality of Larks and Owls: Unethical Behavior Depends on Chronotype as Well as Time of Day", Psychological Science (published online ahead of print on Oct 6, 2014). 

 Maryam Kouchaki and Isaac H. Smith (2014) "The Morning Morality Effect: The Influence of Time of Day on Unethical Behavior", Psychological Science 25(1) 95–102. 

Till Roenneberg, Anna Wirz-Justice and Martha Merrow. (2003) "Life between clocks: daily temporal patterns of human chronotypes." Journal of Biological Rhythms 18:1: 80-90.   

Note: An earlier version of this article was first published on the 3Quarksdaily blog.

Tuesday, February 3, 2015

The Psychology of Procrastination: How We Create Categories of the Future

"Do not put your work off till tomorrow and the day after; for a sluggish worker does not fill his barn, nor one who puts off his work: industry makes work go well, but a man who puts off work is always at hand-grips with ruin."                          

                                                              Hesiod in "The Works and Days"

Paying bills, filling out forms, completing class assignments or submitting grant proposals – we all have the tendency to procrastinate. We may engage in trivial activities such as watching TV shows, playing video games or chatting for an hour and risk missing important deadlines by putting off tasks that are essential for our financial and professional security. Not all humans are equally prone to procrastination, and a recent study suggests that this may in part be due to the fact that the tendency to procrastinate has a genetic underpinning. Yet even an individual with a given genetic make-up can exhibit a significant variability in the extent of procrastination. A person may sometimes delay initiating and completing tasks, whereas at other times that same person will immediately tackle the same type of tasks even under the same constraints of time and resources. A fully rational approach to task completion would involve creating a priority list of tasks based on a composite score of task importance and the remaining time until the deadline. The most important task with the most proximate deadline would have to be tackled first, and the lowest priority task with the furthest deadline last. This sounds great in theory, but it is quite difficult to implement. A substantial amount of research has been conducted to understand how our moods, distractibility and impulsivity can undermine the best-laid plans for timely task initiation and completion. The recent research article "The Categorization of Time and Its Impact on Task Initiation" by the researchers Yanping Tu (University of Chicago) and Dilip Soman (University of Toronto) investigates a rather different and novel angle in the psychology of procrastination: our perception of the future.
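The "fully rational" scheduling strategy described above can be sketched as a toy priority score. The scoring formula and the task list below are hypothetical illustrations of the idea, not anything proposed by Tu and Soman:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    importance: float  # 0 (trivial) to 10 (critical) - arbitrary scale
    days_left: float   # time remaining until the deadline

def priority(task):
    # Hypothetical composite score: more important and more urgent
    # tasks rank higher; the +1 avoids division by zero on due days.
    return task.importance / (task.days_left + 1)

tasks = [
    Task("pay bills", importance=8, days_left=2),
    Task("grant proposal", importance=10, days_left=30),
    Task("class assignment", importance=6, days_left=5),
]

# Tackle the highest-scoring task first, the lowest-scoring last
for t in sorted(tasks, key=priority, reverse=True):
    print(t.name)
```

Under this scoring, an important task due soon ("pay bills") outranks an even more important task whose deadline is still a month away, which is exactly the ordering a rational planner would produce and procrastinators fail to follow.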

Tu and Soman hypothesized that one reason for why we procrastinate is that we do not envision time as a linear, continuous entity but instead categorize future deadlines into two categories, the imminent future and the distant future. A spatial analogy to this hypothesized construct is how we categorize distances. A city located at a 400 kilometer distance may be considered as being spatially closer to us if it is located within the same state than another city which may be physically closer (e.g. only 300 kilometers away) but located in a different state. The categories "in my state" and "outside of my state" therefore interfere with the perception of the actual physical distance.

 In an experiment to test their time category hypothesis, the researchers investigated the initiation of tasks by farmers in a rural community in India as part of a larger project aimed at helping farmers develop financial literacy and skills. The participants (n=295 male farmers) attended a financial literacy lecture. The farmers learned that they would receive a special financial incentive if they opened a bank account, completed the required paperwork and accumulated at least 5,000 rupees in the account within the next 6 months. The farmers were also told they could open an account with zero deposit and complete the paperwork immediately while a bank representative was present at the end of the lecture. Alternatively, they could open the bank account at any point in time later by going to the closest branch of the bank. These lectures were held in June 2010 as well as in July 2010. In both cases, the six-month deadline was explicitly stated as being in December 2010 (for the June lectures) and in January 2011 (for the July lectures). The researchers surmised that even though the farmers were given the same six-month period to open the account and save the money, the December 2010 deadline would be perceived as the imminent future or an extension of the present because it fell in the same calendar year (2010) as the lecture, whereas the January 2011 deadline would be perceived as a far-off date in the distant future because it would fall in the next calendar year.

The results of this experiment were quite astounding: 32% of the farmers with the December 2010 deadline immediately opened the bank account whereas only 8% of the farmers with the January 2011 deadline followed suit. The contrast was even starker when it came to actually completing the whole task and saving the required money: 28% of the farmers with the December 2010 deadline succeeded whereas only 4% of the farmers with the January 2011 deadline were successful. Even though both groups were given the same timeframe to complete the task (exactly six months), the same-year group was four times as likely to open an account immediately and seven times as likely to complete the whole task! To test whether their idea of time categorization into the "like-the-present" future and the distant future could be generalized, the researchers conducted additional studies with students at the University of Toronto and the University of Chicago. These experiments yielded similar results, but also revealed that the distinction between "like-the-present" and the distant future is not only tied to the end of the calendar year but can also occur at the end of the month. Participants who were asked in April to complete a task with a deadline on April 30th indicated a far greater willingness to initiate the task than those with a deadline of May 1st, presumably because the April group thought of the deadline as an extension of the present (the month of April).

One of the most interesting experiments in their set of studies was the investigation of whether one could tweak the temporal perception of a deadline by providing visual cues which link the future date to the present.  Tu and Soman conducted the study on March 9, 2011 (a Wednesday) and told participants that the study was about judging actions. The text provided to the participants read, "Any action can be described in many ways; however the appropriateness of these descriptions may largely depend on the occasion on which the action occurs. In today's study, we are interested in your judgment of the appropriateness of descriptions of several actions. Please pick the one that you think is most appropriate in the occasion that is given to you in this study."

The researchers then showed the participants a calendar of March 2011 and told them that all the given actions would occur on March 13, 2011 (a Sunday). But the participants were divided into two groups, half of whom received a calendar in which the whole week was highlighted in one color, thus emphasizing that the Sunday deadline belonged to the same week ("like-the-present group"). The control group received a standard calendar in which the weekends were colored differently from working days. The participants were provided with a list of 25 tasks and given two options for how they would describe each task. The two options reflected either a hands-on implementation approach or a more abstract approach. For example, for the task of "Caring for houseplants", they could choose between the hands-on option "Watering plants" or the more abstract option "Making the room look nice". Participants who saw the calendar in which the whole week (including Sunday) was depicted in the same color were significantly more likely to choose implementation options, suggesting that the visual cue was prepping their mind to think in terms of already implementing the tasks.

The work by Tu and Soman makes a strong case for the idea that we think of the future in categories and that this has a major impact on whether we procrastinate or expediently initiate and complete tasks. However, the work does have some limitations, such as the fact that the researchers did not investigate whether the initial categorization is modified over time and whether specific reminders can help change the categorization. For example, if the farmers with the January 2011 deadline were to be approached again at the beginning of January 2011, would they then re-evaluate the "remote future" deadline and now consider it to be a "like-the-present" deadline that needs to be addressed immediately? Another limitation of the research article is that it does not explicitly state the ethical review of the studies, such as whether the farmers in India knew that their data was being used for a behavioral research study and whether they provided informed consent.

This research provides fascinating insights into the science of procrastination and raises a number of important questions about how one should set deadlines. If the deadline is too far in the future, there is a much greater likelihood of thinking of it as a remote entity which may end up being ignored. If we want to ensure that tasks are initiated and completed in a timely manner, it may be important to emphasize the proximity of the deadline, either by using visual cues (such as the colors of calendars) or by explicitly emphasizing the "like-the-present" nature of the deadline, for example by stating "the deadline is in 30 days" instead of just mentioning a deadline date. The researchers did not study the impact of a countdown clock, but perhaps a countdown may be one way to help individuals build a cognitive bridge between the present and a looming deadline. Hopefully, government agencies, universities, corporations and other institutions which heavily rely on deadlines will pay attention to this research and re-evaluate how to convey deadlines in a manner which will reduce procrastination.

  Note: An earlier version of this article was first published on the 3Quarksdaily blog.

Tu, Y., & Soman, D. (2014). The Categorization of Time and Its Impact on Task Initiation. Journal of Consumer Research, 41(3), 810-822. DOI: 10.1086/677840

Monday, February 2, 2015

Climate Change: Heatwaves and Poverty in Pakistan

In the summer of 2010, over 20 million people were affected by the summer floods in Pakistan. Millions lost access to shelter and clean water, and became dependent on aid in the form of food, drinking water, tents, clothes and medical supplies in order to survive this humanitarian disaster. It is estimated that at least $1.5 billion to $2 billion in aid was provided by governments, NGOs, charity organizations and private individuals from all around the world, which helped contain the devastating impact on the people of Pakistan. These floods crippled a struggling country that continues to grapple with problems of widespread corruption, illiteracy and poverty.

 The 2011 World Disaster Report (PDF) states:
In the summer of 2010, giant floods devastated parts of Pakistan, affecting more than 20 million people. The flooding started on 22 July in the province of Balochistan, next reaching Khyber Pakhtunkhwa and then flowing down to Punjab, the Pakistan ‘breadbasket'. The floods eventually reached Sindh, where planned evacuations by the government of Pakistan saved millions of people. However, severe damage to habitat and infrastructure could not be avoided and, by 14 August, the World Bank estimated that crops worth US$ 1 billion had been destroyed, threatening to halve the country's growth (Batty and Shah, 2010). The floods submerged some 7 million hectares (17 million acres) of Pakistan's most fertile croplands – in a country where farming is key to the economy. The waters also killed more than 200,000 head of livestock and swept away large quantities of stored commodities that usually fed millions of people throughout the year.
The 2010 floods were among the worst that Pakistan has experienced in recent decades. Sadly, the country is prone to recurrent flooding, which means that in any given year, Pakistani farmers hope and pray that the floods will not be as bad as those in 2010. It would be natural to assume that recurring flood disasters force Pakistani farmers to give up farming and migrate to the cities in order to make ends meet. But a recent study published in the journal Nature Climate Change by Valerie Mueller at the International Food Policy Research Institute has identified the actual driver of migration among rural Pakistanis: heat. Mueller and colleagues analyzed migration and weather patterns in rural Pakistan from 1991 to 2012 and found that flooding had a modest to insignificant effect on migration, whereas extreme heat was clearly associated with migration.

The researchers found that bouts of heat wiped out a third of the income derived from farming. In Pakistan, the average monthly rural household income is 20,000 rupees (roughly $200), which is barely enough to feed a typical household of 6 or 7 people. It is no wonder that when heat stress reduces crop yields and this low income drops by a third, farming becomes untenable and rural Pakistanis are forced to migrate and find alternate means of feeding their families. Mueller and colleagues also identified the group that was most likely to migrate: rural farmers who did not own the land they were farming. Not owning the land makes them more mobile, but compared to the land-owners, these farmers are far more vulnerable in terms of economic stability and food security when a heat wave hits. Migration may be the last resort for their continued survival. It is predicted that the frequency and intensity of heat waves will increase during the next century.

Research studies have determined that global warming is the major cause of heat waves, and an important recent study by Diego Miralles and colleagues published in Nature Geoscience has identified a key mechanism that leads to the formation of "mega heat waves": dry soil and high temperatures reinforce each other in a vicious cycle. During the daytime, high temperatures dry out the soil. The dry soil traps the heat, maintaining layers of high temperatures even at night, when there is no sunlight. On the subsequent day, the new heat generated by sunlight is added to the heat already trapped by the dry soil, creating an escalating feedback loop in which progressively drier soil becomes ever more effective at trapping heat. The result is a massive heat wave which can wipe out crops, lead to water scarcity and cause thousands of deaths.

 The study by Mueller and colleagues provides important information on how climate change is having real-world effects on humans today. Climate change is a global problem, affecting humans all around the world, but its most severe and immediate impact will likely be borne by people in the developing world who are most vulnerable in terms of their food security. There is an obvious need to limit carbon emissions and thus curtail the progression of climate change. This necessary long-term approach has to be complemented by more immediate measures that help people cope with the detrimental effects of climate change, for example by exploring ways to grow crops that are more heat-resilient and by ensuring the food security of those who are acutely threatened by climate change.

 As Mueller and colleagues point out, the floods in Pakistan have attracted significant international relief efforts, whereas increasing temperatures and heat stress are not commonly perceived as existential threats, even though they can be just as devastating. Powerful images of floods destroying homes and personal narratives of flood survivors clearly identify floods as humanitarian disasters; gradual increases in temperatures and heat waves, by contrast, are more insidious and their impacts are not so easily conveyed. Climate change is a complex scientific issue, relying on mathematical models that carry intrinsic uncertainties. As climate change progresses, weather patterns will become even more erratic, making it even more challenging to offer specific predictions.

Climate change research and the translation of this research into pragmatic precautionary measures also face an uphill battle because of the powerful influence of the climate change denial lobby. Climate change deniers take advantage of the scientific complexity of climate change and attempt to paralyze climate action by exaggerating the scientific uncertainties. In fact, there is a clear consensus among climate scientists that human-caused climate change is very real and is already destroying lives and ecosystems around the world. Helping farmers adapt to climate change will require more than financial aid. It is important to communicate the impact of climate change and offer specific advice on how farmers may have to change their traditional agricultural practices. A recent commentary in Nature by Tom Macmillan and Tim Benton highlighted the importance of engaging farmers in agricultural and climate change research. Macmillan and Benton pointed out that at least 10 million farmers have taken part in farmer field schools across Asia, Africa and Latin America since 1989, which have helped them gain knowledge and adapt their practices accordingly.

 Pakistan will hopefully soon engage in much-needed land reform in order to address the social injustice and food insecurity that plague the country. The 5% of landholders with large holdings own 64% of the total farmland, whereas the 65% who are small farmers own only 15% of the land. About 67% of rural households own no land at all, and women own only 3% of the land despite carrying out 70% of agricultural activities. Land reform would be just a first step in rectifying social injustice in Pakistan. Involving Pakistani farmers – men and women alike – in research and education about innovative agricultural practices in the face of climate change will help ensure their long-term survival.

Note: An earlier version of this article was first published on the 3Quarksdaily blog.

Mueller, V., Gray, C., & Kosec, K. (2014). Heat Stress Increases Long-term Human Migration in Rural Pakistan. Nature Climate Change, 4, 182-185. PMID: 25132865