-by Waleed Bin Khalid

Remember that time in the exam hall when you asked yourself, “Do I really need to pass this exam?”, when you just wanted to call it quits? Or a random Monday morning when getting out of bed seemed like an impossible task? Something seems amiss; it is as if we sometimes lack the fuel to carry out our routine. In fact, we even begin to question the necessity of performing the simplest tasks, like changing out of a pair of comfortable pajamas.

The answer to all of that is a lack of motivation: motivation being, simply put, the desire or willingness to do something. So what induces this desire, and how strong is this phenomenon?

On April 26, 2003, Aron Ralston (the person on whom the movie 127 Hours is based) was hiking alone through Blue John Canyon. During his descent, a boulder dislodged and crushed his right arm against the canyon wall. Aron had not informed anyone about his adventure and had thus dismissed any possibility of being rescued. Over the course of five days he finished all his food, drank all his water, and finally had to drink his own urine to stay hydrated. When he had finally lost all hope, Aron inscribed his name, date of birth, and presumed date of death on the canyon wall and began recording his last goodbyes on his video camera.

(Clips of these recordings are available online.)

On what Aron assumed would be his last night on earth, he was struck with a dreadful but ultimately life-saving idea. The next morning, Aron broke the radius and ulna bones of his right arm. Using a dull pocket knife, he amputated the arm and escaped near-certain death. Aron Ralston was saved by the power of motivation: the sudden, overwhelming urge to survive.

Psychologists have long tried to explain the phenomenon of motivation. As of now, four major theories have been proposed. These include the “Evolutionary Perspective”, the “Drive Reduction Theory”, the “Optimal Arousal Theory”, and “Maslow’s Hierarchy of Needs”. Let us now examine each in detail.

Evolutionary theory

The earliest theories to spring up were based on the notion that everything we do is driven by instinct (hence this is also called the instinct theory). From an evolutionary point of view, behaviors are not chosen consciously: they are instinctive, based on whatever is most advantageous for passing one’s genes on to the next generation. A baby cries when taken away from its mother because signaling distress protects the infant from the predatory forces of the environment and thus helps it survive. Similarly, dogs instinctively know how to shake their bodies dry when wet. All these abilities are innate to the organism, not learned over time.

Evidently, there were major problems with this theory. The undeniable role of learning and nurture was not taken into consideration. A lot of what we do is based on learned experience, so simply attributing everything to evolution was not going to do the job. Moreover, just because something has a tendency to occur does not mean it always will: a cat has a tendency to chase a mouse, but that does not necessarily mean it will chase the mouse.

The drive reduction theory

The drive reduction theory is all about needs directing actions. Humans are motivated to satisfy their physiological needs in order to maintain homeostasis (the tendency to maintain a balance, or optimal level, within a biological system). The theory was proposed by Clark Hull in 1943. According to Hull, there are two kinds of drives: primary drives and secondary drives.

Primary drives consist of innate biological needs such as hunger, thirst, and sex, while secondary drives indirectly satisfy the primary ones, as when we earn money to pay for food.

So when you feel hungry, your blood sugar is low; that is the physiological need, and it corresponds to the drive state of hunger. Since eating will restore homeostasis, you get up from your comfortable couch and go to the fridge. What motivated you to overcome your lethargy? Your hunger did. In essence, this theory states that our drive, or motivation, springs from our desire as living beings to survive. We search for food because we need food to survive; we build houses because we need shelter to survive; and we find mates, fall in love, and have families because that is what is required to keep the human race going.

This theory also has its limitations. We do not always feel obligated to obey our needs and fulfill them. People often fast for religious or political causes, subduing their desire to eat. Many abandon shelter for a more nomadic life to satisfy spiritual beliefs. Similarly, certain religions require their believers to suppress their sexual urges entirely, showing how we can overcome both our primary and secondary drives.

“Pleasure-seeking” behaviors also place hurdles in front of the drive reduction theory. For example, people do not eat only when they are hungry. Why would someone seek out fulfillment of a primary drive that is already fulfilled? The optimal arousal theory explains this discrepancy.

The optimal arousal theory

This theory focuses on the neurotransmitter dopamine as the body’s motivator. The idea is that our brains reward us every time we do something they find pleasing, so a goal-driven body seeks out things that produce arousal (the state of being awake or reactive to stimuli). However, too much arousal is also bad for the body, so we seek the optimal level of arousal that lets us perform most efficiently.

To show how the reward system works, Peter Milner and James Olds conducted an experiment in the early 1950s in which a rat had an electrode implanted in its brain so that the brain could be locally stimulated at any time. The rat was put in a box that contained two levers: one lever released food and water, and the other delivered a brief stimulus to the reward center of the brain. At the beginning the rat wandered around the box and stepped on the levers by accident, but before long it was pressing the stimulus lever repeatedly. This behavior is called electrical self-stimulation. Sometimes, rats would become so involved in pressing the lever that they would forget about food and water, stopping only after collapsing from exhaustion. Electrical self-stimulation apparently provided a reward that reinforced the habit of pressing the lever. This study provided evidence that animals are motivated to perform behaviors that stimulate dopamine release in the reward center of the brain. Our brains function the very same way: they seek arousal in the form of stimuli.

It is important to note that everyone has a different optimal level of arousal. For example, adrenaline junkies jump off planes and cliffs to get their dopamine rewards, whereas others might be satiated by nothing more than a good movie or a book. This suggests that we all simply want to avoid both boredom and stress.

Now that we have established some of the concrete motivators, it should be evident that not all needs are of equal importance.

Maslow’s Hierarchy of Needs


Figure 2: Maslow’s pyramid of needs

The priority of motivators was proposed by Abraham Maslow, a professor of psychology at Alliant International University, Brandeis University, Brooklyn College, the New School for Social Research, and Columbia University. In his 1954 book Motivation and Personality, Maslow laid out his theory of human needs and summarized it as a pyramid. The idea was that to attain happiness, humans had to climb this pyramid, each level of which represents a motivator corresponding to a certain set of needs.

Woody Allen explains this climb beautifully in his film Stardust Memories. He says: “…obviously if you don’t have enough to eat that becomes your major problem, but what happens when you’re living in a situation where you don’t need to worry about that, then your problems become, how can I fall in love? Or why can’t I fall in love? Why do I age and die? And what meaning can my life possibly have? The issues become very complex for you.”

Basically, you must first satisfy the needs required for survival: eat food, drink water, sleep. Next you must achieve security in the world, where you can earn for yourself and live without the fear of death. After that, what you require are fulfilling relationships: relationships that help you prosper and make you feel loved, such as those with your parents, friends, or a lover. Once you have made it this far, you must achieve self-respect and respect for others, becoming a good, caring human being who is respected by society. Finally, we reach the pinnacle of the pyramid, which calls for self-actualization, i.e. achieving your truest and highest potential.
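The climb described above is, at heart, a priority ordering: a lower need must be met before a higher one starts motivating us. A minimal sketch of that idea in code; the five level names are Maslow’s, but the function and the example inputs are purely illustrative:

```python
# Maslow's five levels, ordered from the base of the pyramid to its peak.
HIERARCHY = [
    "physiological",      # food, water, sleep
    "safety",             # security, freedom from fear
    "love/belonging",     # family, friends, intimate relationships
    "esteem",             # self-respect and respect from others
    "self-actualization", # realizing one's fullest potential
]

def current_motivator(met_needs):
    """Return the lowest unmet need, i.e. the level that motivates us now."""
    for need in HIERARCHY:
        if need not in met_needs:
            return need
    return "self-actualization"  # every level satisfied: keep growing

# Someone who is fed and safe but lonely is motivated by belonging.
print(current_motivator({"physiological", "safety"}))  # love/belonging
```

This also hints at Woody Allen’s point: once the lower entries are checked off, the motivator returned comes from ever higher up the list.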

Maslow’s hierarchy of needs has also received its fair share of criticism. The hierarchy fails to take into account differences between people from different societies. The needs and drives of those in individualistic societies tend to be more self-centered than those in collectivist societies, focusing on improvement of the self, with self-actualization as the apex. In collectivist societies, the needs for acceptance and community outweigh the needs for freedom and individuality. Moreover, the pyramid varies from culture to culture, many of its terms have ambiguous definitions, and the theory generalizes across very different types of people. Even so, it remains one of the best portrayals of human motivators and drivers.

This illustrates the complicated nature of humans and also explains why so many of us find ourselves unhappy even though we may be blessed with many necessities that other individuals may only dream of. This also explains why different people are motivated by different things.

Our thirst for improvement is never quenched. Had we not had this drive to be better than we currently are, we would never have progressed. People say we should be content with what we have, and sure, we should be thankful for it, but we can hardly ever be satisfied with the status quo. This dissatisfaction stems naturally from the continuous drive that urges us to improve our condition. In the process, many of us become unhappy with our lives, but here John Stuart Mill comes to our rescue:

“It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied. And if the fool, or the pig, are of a different opinion, it is because they only know their own side of the question.”
John Stuart Mill, Utilitarianism (1863)

PS watch Stardust Memories and DFTBA.





Intellect: qualitative, quantitative or complex

-by Waleed Bin Khalid

1) Rearrange the following letters to make a word and choose the category in which it fits. RAPETEKA

A. city             B. fruit              C. bird              D. vegetable

2) Which word does not belong? Apple, marmalade, orange, cherry, grape

A. apple            B. marmalade                C. orange          D. cherry           E. grape

I am certain all of you have faced questions like these either in standardized tests or in random books and online quizzes that boast of measuring your intelligence. But my question is: what is intelligence in the first place?

Figure 1

I recently attended a seminar by Mr. Umair Jallianwala, a renowned motivational speaker and trainer, where he emphasized how subjective and qualitative intelligence is. He highlighted the work of Howard Gardner (Hobbs Professor of Cognition and Education at the Harvard Graduate School of Education), who categorizes intelligence into eight forms: musical, logical-mathematical, linguistic, naturalist, interpersonal, intrapersonal, spatial, and bodily-kinesthetic. This got me thinking: if intelligence really is such a complex notion, and if so many people claim it cannot be quantified, why do we try to explain it the way we do?

So I ask you to bear with me as we explore the nature of intelligence.

The English psychologist Charles Spearman was of the opinion that intelligence is quantitative and can be measured by statistical means. Spearman argued that we have one unified form of intelligence, which he called the G factor. The G factor was the manifestation of all of a person’s intelligence in a single score: one number reflecting all your capabilities. To Spearman this was the sole variable that determined how intelligent a person would be, and he believed that a person with a low G factor would perform poorly in all fields of expertise.

Before you disregard Spearman’s theory as too rigid, you might be surprised to learn that researchers did find that scores on different forms of intelligence often correlate. That is, a person who was good at linguistics had a good chance of being good at other things, like mathematics or emotional intelligence, thus backing the G factor theory to some extent.
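That pattern of correlated scores is exactly what Spearman quantified with statistics. Here is a minimal sketch using a hand-rolled Pearson correlation; the scores below are invented for illustration, not taken from any real study:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores of five people on two very different tests.
verbal = [55, 70, 62, 85, 90]
maths  = [50, 68, 60, 80, 88]

# A strong positive correlation across dissimilar tests is the pattern
# Spearman attributed to a single underlying G factor.
print(round(pearson(verbal, maths), 2))  # 0.99
```

Spearman went further than a single pair of tests, of course, applying factor analysis to whole batteries of them; the one common factor he extracted is the G factor.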

However, clearly the notion has major discrepancies.

Savant syndrome directly contradicts the G factor theory. Savant syndrome is when a person who is severely restricted in some mental abilities nonetheless excels at another. If you have seen the movie Rain Man (and I recommend that you do), Dustin Hoffman portrays Raymond, a man with autism who excels at mental arithmetic even though he is at a loss when it comes to things like emotional intelligence. Raymond is a classic savant.

Look around you: maybe your friend who aces every single test is socially awkward or less emotionally stable. He or she excels in many realms of intelligence but lags in the emotional arena. This is in direct contradiction with the G factor theory.

This ultimately led to the more modern theories of multiple intelligences, such as Howard Gardner’s theory mentioned earlier.

So if intelligence really is so complicated, are the tests that measure it accurate? Well, for an intelligence test to be applicable, it has to meet three basic criteria. The test should be:

  • Standardized: It is administered and scored in a uniform way, so that scores can be meaningfully compared across test-takers.
  • Reliable: It produces consistent, reproducible results when taken again under the same conditions.
  • Valid: It actually measures what it claims to measure.

Alfred Binet and Theodore Simon, two French psychologists, devised a way of measuring intelligence among children using a standard chronological age scale, which they called the Binet-Simon scale. Basically, if a child tests like an average 8-year-old, his mental age is 8. Binet and Simon, however, believed that this only measured a child’s current intelligence, and that attention, practice, self-discipline, and experience could help boost it.

Binet’s theory was further used by William Stern to create the famous intelligence quotient (IQ). The formula goes like this:

IQ = (mental age ÷ chronological age) × 100
So if I am 20 and my mental age measures out to be 20 (via IQ tests), then my IQ is 100. This formula worked brilliantly for children but fell apart for adults. Adults, it was realized, do not hit intelligence milestones the way children do: a person’s intelligence will not differ much between the ages of 34 and 35, whereas a child will see major changes from, say, age 5 to 6.
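Stern’s ratio, and the way it breaks down for adults, can be seen in a few lines of code (a sketch; the sample ages are made up):

```python
def ratio_iq(mental_age, chronological_age):
    """William Stern's ratio IQ: mental age over chronological age, times 100."""
    return 100 * mental_age / chronological_age

print(ratio_iq(8, 8))    # 100.0 — an average 8-year-old
print(ratio_iq(10, 8))   # 125.0 — a child two years "ahead"
print(ratio_iq(35, 35))  # 100.0 — so far so good for an adult...
print(ratio_iq(35, 70))  # 50.0  — but mental age plateaus, so the ratio
                         # would absurdly halve a 70-year-old's IQ
```

This is exactly why modern tests abandoned the ratio and instead report a deviation score: your standing relative to others of your own age group.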

Now there is a much darker side to these testing methods.

Lewis Terman, an American psychologist, started using this method to test soldiers during the First World War. The first mass administration of IQ testing covered 1.7 million recruits. They were given group intelligence tests; recruits who earned scores of “A” would be trained as officers, while those who earned scores of “D” and “E” would never receive officer training.

Terman was a eugenicist. That is to say, he supported the philosophy of improving the human race by promoting higher rates of reproduction among people with desired traits and reduced rates of reproduction, or outright sterilization, of people with less desired traits. He essentially supported a form of selective breeding among humans.

In his book The Measurement of Intelligence, Terman wrote:

“High-grade or border-line deficiency… is very, very common among Spanish-Indian and Mexican families of the Southwest and also among Negroes. Their dullness seems to be racial, or at least inherent in the family stocks from which they come… Children of this group should be segregated into separate classes… They cannot master abstractions but they can often be made into efficient workers… from a eugenic point of view they constitute a grave problem because of their unusually prolific breeding” (The Measurement of Intelligence, 1916, p. 91-92).

His propagation of the IQ method led to the mass sterilization of people, especially black immigrants, poor white women, and prostitutes, even into the 1970s. Immigrants were classified as feeble-minded if they could not answer questions like “Who was the first American president?” The Nazis went a step further and simply eliminated anyone who scored below par on their self-designed intelligence tests, tests that had more to do with acceptable social norms than with intelligence.

Before we simply dismiss Terman’s arguments and the philosophy of eugenics, let’s look at them objectively. Is genetics the only determinant of intelligence? Do upbringing and environment play any role?

To understand this, psychologists conducted studies to find correlations of intelligence in three kinds of pairs:

1) Children and their birth parents.

2) Adopted children and their birth parents.

3) Adopted children and their adoptive parents.

These studies clearly showed that the strongest correlation was with biology: children had intelligence levels that matched those of their biological parents and rarely mirrored the intelligence of their adoptive parents.

This all seems a bit scary. Can you really do nothing about your intelligence? Don’t worry. Genetics might be an important factor, but it is by no means the only factor that determines how smart you are. Environment, it was shown, matters almost as much as genetics in determining intelligence.

J. McVicker Hunt conducted a study in a destitute orphanage in the 1970s. The orphanage had extremely poor living conditions, and minimal care was given to the infants. The orphans’ cries were never responded to; instead, they were fed and taken care of by routine, in a mechanical fashion. As a result, the children of that orphanage never learned to communicate. Hunt trained the caregivers to talk to the infants and participate in their upbringing. Quite remarkably, the orphans started learning quickly and their communication skills developed brilliantly. The study highlighted how malleable early childhood is for intelligence.

So should you adopt a fixed mindset, believing we cannot change how intelligent we are, or a growth mindset, believing we grow as we progress? Carol Dweck of Stanford University puts the contrast between the two brilliantly in her book Mindset: The New Psychology of Success. She writes:

“Believing that your qualities are carved in stone — the fixed mindset — creates an urgency to prove yourself over and over.

[…]The growth mindset is based on the belief that your basic qualities are things you can cultivate through your efforts. Although people may differ in every which way — in their initial talents and aptitudes, interests, or temperaments — everyone can change and grow through application and experience.

Do people with this mindset believe that anyone can be anything, that anyone with proper motivation or education can become Einstein or Beethoven? No, but they believe that a person’s true potential is unknown (and unknowable); that it’s impossible to foresee what can be accomplished with years of passion, toil, and training.”

So, on a concluding note, it would appear that intelligence is a real, measurable phenomenon. But no one can claim to have disentangled all its intricacies; we have barely scratched the surface of this realm. The human mind is a mystical land where ideas unfurl, and those dismissed as mad, like Vincent van Gogh, can produce marvels like The Starry Night. You are much more complicated than any test score, so, as the saying goes, don’t let a test score define you.

All you have to do is realize your potential and bring it to reality, and as always, DFTBA.

PS: in case you were wondering, the answers to the questions above are C (the word is “parakeet”) and B.


The twins study:

The Measurement of Intelligence by Lewis Terman:

Carol Dweck’s book Mindset: The New Psychology of Success:


Halley’s Comet


-by Muhammad Sanan Khan

Since the beginning of time, outer space has been an object of immense curiosity for mankind and has captivated human minds for centuries. Comets have contributed to this fascination with space, as shown by the fact that comets sighted decades or even centuries ago are still studied and still hold us in awe.

A comet, basically, is an icy body that releases gas or dust. They are often compared to dirty snowballs, though recent research has led some scientists to call them snowy dirtballs. Comets contain dust, ice, carbon dioxide, ammonia, methane and more. Astronomers think comets are leftovers from the gas, dust, ice and rocks that initially formed the solar system about 4.6 billion years ago.

One such comet is Halley’s Comet, one of the most famous comets in our history. It is a periodic comet, returning to Earth’s vicinity about every 75 years, which makes it possible for a human to see it at most twice in a lifetime. The last time it was here was in 1986, and it is projected to return in 2061. Appearances of Halley’s Comet can be traced back to as early as 239 B.C., when it was sighted in China.

The comet is named after the English astronomer Edmond Halley who examined reports of a comet approaching Earth in 1531, 1607 and 1682. He concluded that these three comets were actually the same comet returning over and over again, and he predicted the comet would come again in 1758.

Human beings are a curious species, and comets such as Halley’s add to our curiosity about space. Anything mankind cannot understand, it will pursue for better understanding until it has been either understood or conquered. Until then, space will keep our attention.

The Psychology of Decision Making

-by Waleed Bin Khalid

A man is driving his son to school. An accident occurs and the man dies. The son is seriously injured and requires surgery. The ambulance makes it to the nearest hospital, and everyone waits for the surgeon. The surgeon walks in, sees the boy, and says: “I cannot perform this surgery because he is my son.”

What happened? Did the boy have two fathers? Or was the boy a doppelganger of the surgeon’s son?

The answer is that the surgeon is the boy’s mother. The above scenario is a classic psychological riddle that probes the inherent biases present in our minds and how they affect decision making. The topic is extensively discussed in Malcolm Gladwell’s book Blink: The Power of Thinking Without Thinking.

Our brains have been meticulously programmed. They take large chunks of data and process them in a negligible amount of time, behind closed doors, without us knowing. This often leads to what Gladwell calls snap judgments, or blink decisions: we draw conclusions within seconds, unconsciously, and in doing so we often rely on our passive knowledge. This passive knowledge can be skewed and may lead us to biased decisions. By now you must be feeling bad about yourself, but don’t be hasty. It turns out this quality of the human brain serves a great purpose, necessary for everyday interactions with the world. It has both pros and cons, and the onus falls on us to discern when relying on it is appropriate.

So let’s delve into our everyday lives. How many decisions do you have to make daily? Countless. You make decisions even when you are not consciously aware of making them. From an unconscious choice like picking up a certain brand of jam, to conscious decisions at your workplace or university, decision making is a vital part of our daily routine.

If we put the same amount of effort into every decision, we would probably have no time left to be productive. Here is where our swift brains come to the rescue: they use prior data to reach conclusions. This happens in every single decision we make, and it is what a layman would call a “gut feeling” or “hunch”. Basically, a decision is reached in a matter of milliseconds without us ever thinking, “Okay, what should I choose?” What is interesting is that we may choose to ignore this gut feeling and override our blink decision, and we do this by rationalizing.

Imagine a doctor treating a patient who has a polyp (a small growth) on his nose. The first thing the doctor suspects on examining the growth is malignancy (a cancerous growth). But the doctor might decide to carry out further tests, which may prove that the polyp is not cancerous but merely an abnormal nasal growth, easily removable by surgery. Here the doctor is giving precedence to his rational decision over his snap judgment. Doctors often face the dilemma of whether to trust their gut or decide rationally; after all, conducting medical tests can be unnecessarily costly. The answer might surprise you.

Let’s go to Cook County Hospital in Chicago. The hospital had a critical shortage of beds available to treat patients experiencing chest pains. Chest pains can be symptomatic of a heart attack, but not every patient suffering from chest pains is actually having one. The only way to be certain is to admit a patient to the cardiac unit and run a series of expensive tests. Due to the cost and the lack of available space, this was not possible. With that in mind, the hospital management turned to a decision tree developed by Lee Goldman, a U.S. Navy cardiologist. Goldman had spent years developing and testing a single model that would allow submarine doctors to quickly evaluate possible heart attack symptoms and determine whether the submarine had to resurface and evacuate the chest pain sufferer. The algorithm simply asked a few questions and, on the basis of the patient’s answers, gave a verdict on whether the patient was at risk of a heart attack. No complicated tests, no lengthy cultures, just a few simple questions.

Surprisingly, it was revealed that more information made doctors more confident about their treatment, but it did NOT help them make better diagnoses. The Goldman algorithm, on the other hand, proved to be 70 percent better than the old method at recognizing patients who weren’t having a heart attack. This makes it abundantly clear that more information does not necessarily lead to better decisions, and rationalizing everything is not always the best way to go about things.
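The shape of such a rule is easy to sketch as a tiny decision tree. To be clear, the factors and thresholds below are a loose paraphrase of the kinds of inputs Gladwell describes (an ECG finding plus a few risk signs), not the actual Goldman criteria; the point is structural: a few questions, an immediate verdict, no further tests.

```python
def chest_pain_triage(ecg_ischemia, unstable_angina, fluid_in_lungs, systolic_bp):
    """Illustrative decision tree: classify a chest-pain patient by urgency.

    The inputs loosely paraphrase the risk factors Gladwell describes;
    they are NOT the real Goldman criteria.
    """
    # Count how many of the secondary risk signs are present.
    risk_factors = sum([unstable_angina, fluid_in_lungs, systolic_bp < 100])
    if ecg_ischemia and risk_factors >= 2:
        return "cardiac unit"        # high risk: admit immediately
    if ecg_ischemia or risk_factors >= 1:
        return "observation bed"     # intermediate risk: monitor
    return "short-stay / discharge"  # low risk: no heart attack suspected

print(chest_pain_triage(True, True, False, 95))     # cardiac unit
print(chest_pain_triage(False, False, False, 130))  # short-stay / discharge
```

What made the real rule usable in a crowded emergency room is exactly what this sketch shows: every input is a quick bedside observation, so the verdict arrives in seconds rather than after a battery of tests.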

So if our blink decisions are so accurate, should we trust them in everything? Well, not always. In his book Pervasive Prejudice?: Non-Traditional Evidence of Race and Gender Discrimination, Ian Ayres highlights a study in which he sent white men, white women, and black men to different car dealers in Chicago. The groups were dressed in similar styles and had similar cover stories. The study revealed that the prices quoted to white men were the lowest, while the prices quoted to the other two groups were considerably higher; black men were quoted prices about $1,000 higher than white men, on average. Even after bargaining, white men reached a cheaper price than the other two groups, even though the groups were virtually identical apart from skin color or gender. This is an indicator of snap judgments made by the car salesmen: they were not consciously sexist or racist, but their snap judgments told them that white men would be more likely to buy, and hence the bias occurred.

Bob Golomb, a car salesman with sales numbers over twice those of the average car salesperson, said the following about his success:

“You cannot prejudge people in this business. Prejudging is the kiss of death. You have to give everyone your best shot. A green salesperson looks at a customer and says, ‘This person looks like he can’t afford a car,’ which is the worst thing you can do, because sometimes the most unlikely person is flush. I have a farmer I deal with, who I’ve sold all kinds of cars over the years. We seal our deal with a handshake, and he hands me a hundred-dollar bill and says, ‘Bring it out to my farm.’ We don’t even have to write the order up. Now, if you saw this man, with his coveralls and his cow dung, you’d figure he was not a worthy customer. But in fact, as we say in the trade, he’s all cashed up. Or sometimes people see a teenager and they blow him off. Well, then later that night, the teenager comes back with Mom and Dad, and they pick up a car, and it’s the other salesperson that writes them up.”

The above example indicates how often we need to subdue our rash decision-making instincts and make the rational decision. But when should we use which? Well, the answer is a bit nebulous. At the end of his book, Malcolm Gladwell offers a simple rule of thumb (spoiler alert): when making big decisions, like buying a car or a house, we should trust our snap judgments, but when making small decisions, like buying groceries, we should wait and ponder. This was based on a study in which people who deliberated over grocery purchases were more satisfied than customers who made snap decisions, whereas this rarely held for people buying expensive items like televisions and cars. But like many such findings, the study is in no way conclusive.

So do you think your decisions are all fair and rational? Do you think your brain would never make decisions along the lines of implicit sexism or racism? Then look up the Implicit Association Test (IAT) online and take it. It is a short test that highlights the implicit biases lurking in the depths of our gray matter. I am sure the answers will surprise you.

Good luck with your decisions and DFTBA.

Dr. Hafeez Hoorani – The One from CERN

Interior of Particle Accelerator at CERN

-by Muhammad Sanan Khan

In a country taking its nascent steps in the world of science, Dr. Hafeez Hoorani is one of the field’s pioneers for our country.

Hailing from Karachi, he is a particle physicist, with a specialization in accelerator physics, and a research scientist at CERN. Nowadays, Dr. Hoorani works at the National Center for Physics, with a research focus on elementary particle physics and high energy physics.

Dr. Hafeez Hoorani is certainly among the top scientists of our country for all his work and accomplishments. His efforts to make Pakistan a part of CERN are highly commendable and a big push forward for the country.

What is CERN? CERN, the European Organization for Nuclear Research, is bent on answering fundamental questions such as: What is the universe made of? How did it start? It pursues these questions using some of the world’s most powerful particle accelerators.

Dr. Hoorani joined CERN in 1989, and thanks to his efforts, in 2000 the Pakistan Atomic Energy Commission (led by the nuclear physicist Dr. Ishfaq Ahmad) signed an agreement with CERN. This agreement opened the door for Pakistani physicists to collaborate on CERN’s particle physics projects.

Our country needs more people like Dr. Hoorani so that, in the global race in science and technology, it can make a mark of its own. More power to you, Dr. Hoorani, and to PAKISTAN. Ciao till next time.