Category: Psychology


2nd LABEL – IEPR Conference

“Understanding cognition and decision making by children”

 May 4 – 5, 2017

 

 PROGRAM

  Thursday, May 4, 2017
 
 Radisson Los Angeles at USC
Victory AB Room
3540 South Figueroa Street
Los Angeles, California 90007
 
 

8:45 –  9:15   Continental Breakfast

9:15 –  9:30   Introductory remarks  

9:30 – 10:15   “The role of agency in regret and relief in 3- to 10-year-old children”. Giorgio Coricelli (USC)

10:15 – 11:00   “Young Children’s Exploration of Alternative Possibilities”. Henrike Moll (USC)

11:00 – 11:30   Coffee break

11:30 – 12:15   “Navigating Uncertainty: Neural Correlates of Decisions from Experience in Adolescence”. Wouter van den Bos (Max Planck – Berlin)

12:15 – 1:45   Lunch

1:45 – 2:30   “The Costs and Benefits of Growing Up”. Michael Norton (Harvard U.)

2:30 – 3:15   “Using economic experiments to detect drivers of behavior at early ages: Evidence from a large field experiment in Chicago”. Ragan Petrie (Texas A&M)                                        

3:15 – 3:45   Coffee break  

3:45 – 4:30   “Does the little mermaid compete? An experiment on competitiveness with children”. Marco Piovesan (U. Copenhagen)

6:30              Dinner (by invitation)

 


 

Friday, May 5, 2017
 
Radisson Los Angeles at USC
Victory AB Room
3540 South Figueroa Street
Los Angeles, CA 90007

 

9:00 – 9:30   Continental Breakfast

9:30 – 10:15   “Altruism and strategic giving in children and adolescents”. Isabelle Brocas (USC)

10:15 – 11:00   “Theory of Mind among Disadvantaged Children”. Aldo Rustichini (U. Minnesota)

11:00 – 11:30   Coffee break

11:30 – 12:15   “The development of reciprocal sharing in relation to intertemporal choice”. Felix Warneken (Harvard U.)

12:15 – 1:45   Lunch

1:45 – 2:30   “Group and Normative Influences on Children’s Exchange Behavior”. Yarrow Dunham (Yale U.)

2:30 – 3:15   “The role of social preferences and emotions in children’s decision making -a view from developmental psychology”. Michaela Gummerum (Plymouth U.)

3:15 – 3:45   Coffee Break

3:45 – 4:30   “Socio-cognitive mechanisms of prosocial behavior”. Nadia Chernyak (Boston U.)


Yuqing Hu, Juvenn Woo

GARP, the generalized axiom of revealed preference, is a dichotomous notion: a dataset either satisfies the theory of a rational consumer or violates it[1]. There are several challenges in testing GARP consistency experimentally. One comes from the nature of budget sets: there is typically more individual-level variation in expenditure than in prices, so budget sets overlap, which biases the test toward satisfying GARP; in the extreme case of nested budget sets, no violation can occur at all. Another challenge stems from the lack of consumer identities in some datasets, which forces distinct individuals to be treated as a single consumer for revealed preference tests (Chambers & Echenique, 2016).
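To make the test concrete, here is a minimal sketch of a GARP check on a finite dataset (our own illustration, not code from any of the cited papers): bundle x_i is directly revealed preferred to x_j when x_j was affordable at observation i's prices, and GARP fails when a revealed-preference chain runs one way while a strict direct preference runs back.

```python
import numpy as np

def violates_garp(prices, bundles):
    """Check GARP for T observations. `prices` and `bundles` are (T, n)
    arrays of observed prices and chosen bundles. Returns True if a
    GARP violation exists."""
    prices = np.asarray(prices, float)
    bundles = np.asarray(bundles, float)
    T = len(prices)
    # cost[i, j] = cost of bundle j at observation i's prices
    cost = prices @ bundles.T
    own = np.diag(cost)                   # expenditure at each observation
    R = own[:, None] >= cost              # x_i directly revealed preferred to x_j
    P = own[:, None] > cost               # ... strictly
    # Transitive closure of R (Warshall's algorithm)
    for k in range(T):
        R = R | (R[:, k:k+1] & R[k:k+1, :])
    # Violation: x_i revealed preferred to x_j while x_j strictly
    # directly revealed preferred to x_i
    return bool((R & P.T).any())
```

With two observations whose choices "cross" (each bundle affordable when the other was chosen, strictly so in at least one direction), the function flags a violation; nested budget sets can never produce one, illustrating the bias discussed above.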

 

Several indices have been proposed to measure the degree of GARP violations. One is Afriat’s efficiency index (AEI) (Afriat, 1967), which measures how much expenditure must be “deflated” to make the data GARP consistent. Variations on the AEI include Varian’s efficiency index (VEI) (Varian, 1983) and the money pump index (MPI) (Echenique, Lee, & Shum, 2011).
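As a sketch of the AEI idea (an illustrative implementation under our reading of Afriat's definition, not reference code from the cited papers), one can binary-search for the largest deflation factor e at which the deflated data still satisfy GARP:

```python
import numpy as np

def afriat_index(prices, bundles, tol=1e-6):
    """Afriat efficiency index: the largest e in [0, 1] such that the
    data satisfy GARP when each observation's expenditure is deflated
    by e. e = 1 means the data are exactly GARP consistent."""
    prices = np.asarray(prices, float)
    bundles = np.asarray(bundles, float)
    T = len(prices)
    cost = prices @ bundles.T
    own = np.diag(cost)

    def consistent(e):
        # x_i revealed preferred to x_j at efficiency e when
        # e * (expenditure at i) >= cost of x_j at i's prices
        R = e * own[:, None] >= cost
        P = e * own[:, None] > cost
        for k in range(T):
            R = R | (R[:, k:k+1] & R[k:k+1, :])
        return not (R & P.T).any()

    if consistent(1.0):
        return 1.0
    lo, hi = 0.0, 1.0          # consistency is monotone in e
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if consistent(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

Deflating expenditure shrinks the revealed-preference relation, so small violations translate into an index just below 1, while severe ones push it toward 0.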

 

A modest experimental economics literature tests GARP consistency; it is summarized below.

 

Battalio et al. (1973) pioneered the experimental testing of GARP consistency. They ran field experiments in which psychiatric patients exchanged tokens for different consumption goods. By changing the value of the tokens they induced a variety of budget sets, and they found that if small measurement errors were allowed, almost all patients’ behavior was consistent with GARP.

 

Sippel (1997) ran lab experiments in which 42 students repeatedly chose a bundle of priced entertainment items under different standard budget constraints. Subjects were paid in consumption goods and were required to actually consume the goods during the experiment. The experiment showed that 63% of subjects made choices violating GARP, though the median number of violations across subjects was only 1 out of 45 choices, and this number changed little when small perturbations around actual demand were allowed. Even though the remarkably low number of violations per individual suggests that subjects were highly motivated when making choices, a majority of them could not be classified as rational in the sense of utility maximization.

 

Harbaugh, Krause and Berry (2001) examined the development of choice behavior in children. The study ran simple choice experiments over cohorts of second graders, sixth graders, and undergraduates, and measured and compared the number of GARP violations. For this particular test, about 25% of 7-year-olds and 60% of 11-year-olds made choices consistent with GARP, with no further increase in consistency from age 11 to 21. They also found that GARP violations were not significantly correlated with the results of a test measuring mathematical ability in children.

 

Andreoni and Miller (2002) ran dictator-game experiments to test whether observed altruistic behavior is utility-maximizing. They found that 98% of subjects made choices consistent with utility maximization. Andreoni and Miller went further than most revealed-preference exercises by estimating parametric utility functions accounting for subjects’ choices (about half of the subjects could be classified as having linear, CES (constant elasticity of substitution), or Leontief utility).

 

Hammond and Traub (2012) designed a three-stage experiment in which a participant advances to the next stage only if his or her choices in the current stage are GARP consistent. It was conducted with 41 (non-economics) undergraduates. The study found that, at the 10% significance level, only 22% of subjects (compared to 6.7% of simulated random choosers) passed the second-stage test, and 19.5% passed all three stages. The results reinforce Sippel’s finding that humans are generally not perfectly rational. Note, though, that conditional on passing the second stage, 62.5% of subjects passed the third-stage test.

 

Choi, Kariv, Müller and Silverman (2014) conducted a comprehensive study of rationality using the CentERpanel survey. Measuring GARP consistency with Afriat’s Critical Cost Efficiency Index (CCEI) and correlating it with socio-economic factors, they found that, on average, younger subjects are more consistent than older ones, men more than women, the highly educated more than the less educated, and high-income subjects more than low-income ones.

 

Brocas, Carrillo, Combs and Kodaverdian (2016) studied choice behavior in cohorts of young and older adults, varying the choice tasks from a simple to a complex domain. They showed that while young and older adults are about equally consistent in the simple domain, older adults were significantly more inconsistent in the complex tasks. Moreover, by administering working memory and fluid intelligence (IQ) tests, they found that older adults’ inconsistency in the complex domain can be attributed to declines in working memory and fluid intelligence.

[1] One can, however, evaluate the degree of violations.

 

References

 

Afriat, S. N. (1967). The Construction of Utility Functions from Expenditure Data. International Economic Review, 8(1), 67–77.

Andreoni, J., & Miller, J. (2002). Giving According to GARP: An Experimental Test of the Consistency of Preferences for Altruism. Econometrica, 70(2), 737–753.

Battalio, R. C., Kagel, J. H., Winkler, R. C., Edwin B. Fisher, J., Basmann, R. L., & Krasner, L. (1973). A Test of Consumer Demand Theory Using Observations of Individual Consumer Purchases. Western Economic Journal, XI(4), 411–428.

Brocas, I., Carrillo, J. D., Combs, T. D., & Kodaverdian, N. (2016). Consistency in Simple vs. Complex Choices over the Life Cycle.

Chambers, C. P., & Echenique, F. (2016). Revealed Preference Theory. Cambridge University Press.

Choi, S., Kariv, S., Müller, W., & Silverman, D. (2014). Who Is (More) Rational? The American Economic Review, 104(6), 1518–1550.

Echenique, F., Lee, S., & Shum, M. (2011). The Money Pump as a Measure of Revealed Preference Violations. Journal of Political Economy, 119(6), 1201–1223.

Grether, D. M., & Plott, C. R. (1979). Economic Theory of Choice and the Preference Reversal Phenomenon. The American Economic Review, 69(4), 623–638.

Hammond, P., & Traub, S. (2012). A Three-Stage Experimental Test of Revealed Preference.

Harbaugh, W. T., Krause, K., & Berry, T. R. (2001). GARP for Kids: On the Development of Rational Choice Behavior. American Economic Review, 91(5), 1539–1545. http://doi.org/10.1257/aer.91.5.1539

Sippel, R. (1997). An Experiment on the Pure Theory of Consumer’s Behaviour. The Economic Journal, 107(444), 1431–1444. http://doi.org/10.1111/1468-0297.00231

Varian, H. R. (1983). Non-Parametric Tests of Consumer Behaviour. The Review of Economic Studies, 50(1), 99–110.

 

 

by Calvin Leather, Yuqing Hu

In response to this article: http://www.jneurosci.org/content/36/39/10016

Recent literature in reinforcement learning has demonstrated that the context in which a decision is made influences subjective reports and neural correlates of perceived reward. For example, consider visiting a restaurant where you have previously had many excellent meals. Expecting another excellent meal, you experience a merely satisfactory one negatively; had you received this objectively decent meal elsewhere, without the positive expectations, your experience would have been better. This intuition is captured in adaptive models of value, where a stimulus’s reward (i.e., Q-value) is expressed relative to the expected reward in a situation, and such models have been found to accurately capture activation in value regions (Palminteri et al., 2015). An adaptive model is also beneficial because it allows reinforcement learning models to learn to avoid punishment, since avoiding a contextually expected negative payoff yields a positive reward. This had previously been challenging to express within the same framework as reinforcement learning models (Kim et al., 2006).

Alongside these benefits, there has been concern that adaptive models might be confused by certain choice settings. In particular, an agent with a fully adaptive model of value would have an identical hedonic experience (i.e., identical Q-values) when receiving a reward of +10 units in a setting where it might receive either +10 or 0 units, and when receiving 0 units in a setting where it might receive either −10 or 0 units (we will refer to this below as the ‘confusing situation’). With this issue in mind, Burke et al. (2016) develop an extension of the adaptive model in which contextual information has only a partial influence on reward. Whereas the fully-adaptive model assigns a subjective reward (Q-value) of +5 units to receiving an objective reward of 0 in a context where the possibilities were 0 and −10, and an absolute model ignoring context would experience a reward of 0, the Burke model would experience a reward of +2.5. It takes the context into account, but only partially, and accordingly they call their model ‘partially-adaptive’. Burke et al. compare this partially-adaptive model with a fully-adaptive model and an absolute model (which ignores context). When subjects were given the same contexts and choices as in the confusing situation outlined above, Burke et al. found that the partially-adaptive model fits neural data in the vmPFC and striatum better than the fully-adaptive or absolute models.
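The three encodings can be illustrated with a toy subjective-value function in which a weight w interpolates between the absolute (w = 0) and fully-adaptive (w = 1) models; the 0.5 weight used for the partially-adaptive case here is our illustrative assumption, not a parameter estimated by Burke et al.:

```python
def subjective_value(reward, context, w):
    """Toy model: subjective value = objective reward minus a weighted
    context expectation (the mean payoff available in the context).
    w = 0 -> absolute, w = 1 -> fully adaptive, 0 < w < 1 -> partial."""
    expected = sum(context) / len(context)
    return reward - w * expected

# Receiving 0 in the loss context {0, -10}:
subjective_value(0, [0, -10], 0.0)   # absolute encoding: 0
subjective_value(0, [0, -10], 1.0)   # fully adaptive: +5
subjective_value(0, [0, -10], 0.5)   # partially adaptive: +2.5

# The 'confusing situation': a fully-adaptive agent values +10 in
# context {+10, 0} exactly like 0 in context {0, -10}:
subjective_value(10, [10, 0], 1.0)   # +5, identical to the case above
```

The last two lines make the confusion concrete: under full adaptation the two outcomes are hedonically indistinguishable, while any w < 1 separates them for these particular payoffs.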

The partially-adaptive model is interesting because it has the same advantages as the fully-adaptive model (reflecting subjective experience and neural data well, and allowing avoidance learning) while potentially avoiding the confusion outlined above. Here, we investigate the implications and benefits of Burke et al.’s partially-adaptive model more thoroughly. In particular, we consider the confusing situation’s ecological validity and potential resolution, whether it is reasonable that partially-adaptive representations might extend beyond decision (to learning and memory), and the implications of the theory for future work. Before doing so, we briefly present an alternative interpretation of their findings.

The finding that the fMRI signal is best classified by a partially-adaptive model does not necessarily entail that the brain utilizes a partially-adaptive encoding as the value over which decisions occur. All neurons within a voxel can influence the fMRI signal, so the signal may reflect a combination of multiple activity patterns present within a voxel. This mixing phenomenon has been used to explain the success of decoding in early visual cortex, where the overall fMRI signal in a voxel reflects the specific distribution of orientation-specific columns within it (Swisher et al., 2010). Similarly, the partially-adaptive model’s fit might be explained by the averaged contributions of some cells with a fully-adaptive encoding and other cells with an absolute encoding of value (within biological constraints). This concern is supported by the co-occurrence of adaptive and non-adaptive cells in macaque OFC (Kobayashi et al., 2010). More work is therefore needed to understand the local circuitry and encoding heterogeneity of regions supporting value-based decision making.

Returning to the theory presented by the authors, we would like to consider whether a fully-adaptive encoding of value is truly suboptimal. The type of confusing situation presented above was shown to be problematic for real decision makers in Pompilio and Kacelnik (2010), where starlings became indifferent between two options with different objective values, due to the contexts those options appeared in during training. However, this type of choice context might not be ecologically valid. If two stimuli are exclusively evaluated within different contexts, as in Pompilio and Kacelnik, it is not relevant whether they are confusable, as the decision maker would never need to compare them.

Separate from the confusion problem’s ecological validity is the question of its solution. Burke et al. suggest that partially-adaptive encoding avoids the confusion, and therefore should be preferred to a fully-adaptive encoding. However, this might only be true for the particular payoffs used in the experiment. Consider a decision maker who makes choices in two contexts. One, the loss context, has two outcomes, L0 (worth 0) and Lneg (worth less than 0), while the other, the gain context, has two outcomes, G0 (worth 0) and Gpos (worth more than 0). If L0 − Lneg = Gpos − G0, as in Burke et al., a fully-adaptive agent would be indifferent between G0 and Lneg (and between Gpos and L0). A partially-adaptive agent, however, would not be indifferent, as the value of G0 would be higher than that of Lneg. Now consider what happens if we raise the value of Gpos. By doing this, we can raise the average value of the gain context by any amount. As we increase the average reward of the context, G0 becomes a poorer option in terms of its Q-value. Since the only reward we are changing is Gpos, the Q-values for the loss context do not change. Therefore, we can decrease the Q-value of G0 until it equals that of Lneg. This is exactly the confusion we had hoped the partially-adaptive model would avoid. Furthermore, this argument works for any partially-adaptive model: we cannot defeat the concern by parameterizing the influence of context in the update equations and manipulating this parameter.
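The argument can be checked numerically. Using a toy subjective-value function (subjective value = reward minus w times the context mean) with an assumed partial weight w = 0.5 (illustrative, not a value fitted by Burke et al.), raising Gpos from 10 to 30 makes G0 and Lneg indistinguishable again:

```python
def subjective_value(reward, context, w):
    """Subjective value = objective reward minus w times the context mean."""
    return reward - w * (sum(context) / len(context))

w = 0.5
L0, Lneg = 0, -10          # loss-context payoffs, as in the text
G0, Gpos = 0, 30           # gain context, with Gpos raised from 10 to 30

q_G0 = subjective_value(G0, [G0, Gpos], w)      # 0 - 0.5 * 15  = -7.5
q_Lneg = subjective_value(Lneg, [L0, Lneg], w)  # -10 - 0.5 * (-5) = -7.5
# q_G0 == q_Lneg: the confusion reappears despite partial adaptation.
```

For any fixed w > 0 one can solve for a Gpos that equates the two Q-values, which is why the concern cannot be escaped by tuning the weight.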

As mentioned earlier, it is possible that some cells encode partially-adaptive value while others have a fully-adaptive or non-adaptive encoding. We should be open to the possibility that even if partially-adaptive value is used at decision time, non-adaptive encodings might be used for the storage of value information and transformed into the observed partially-adaptive signals at the time of decision. Why might this be reasonable? An agent who maintains partially-adaptive representations in memory faces several computational issues. One is efficiency: a partially-adaptive representation requires storing S*C quantities or distributions (one for each of the S stimuli in each of the C contexts). By contrast, an agent who stores non-adaptive stimulus values plus the average value of each context, and adjusts stimulus values by the context values at the time of decision, can utilize the same information while storing only S+C quantities. Another problem with storing value in an adaptive format is the transfer of learning across contexts. If I encounter a stimulus in context A, my experience should alter my evaluation of that stimulus in context B: getting sick after eating a food should reduce my preference for that food in every context. An agent who stores value adaptively would need to update one quantity for the encountered stimulus in each context, namely C quantities; an agent who stores value non-adaptively updates a single quantity. So even if decision utilizes a partially-adaptive encoding, a non-adaptive representation is more efficient for storage. Furthermore, non-adaptive information is present in the state of the world (e.g., the concentration of sucrose in a juice does not adapt to expectations), so this information is available to agents during learning. Accordingly, one must ask why agents would discard information that might ease learning.
While these differences do not necessarily affect the authors’ claims about value during decision, they should be considered when investigating the merits of different models of value.
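The bookkeeping behind this argument is simple arithmetic; with hypothetical sizes of S = 100 stimuli and C = 10 contexts (our own illustrative numbers), the two schemes compare as follows:

```python
# Hypothetical sizes: S stimuli, C contexts.
S, C = 100, 10

# Adaptive storage: one value per (stimulus, context) pair.
adaptive_entries = S * C          # 1000 stored quantities

# Non-adaptive storage: one value per stimulus plus one context average.
non_adaptive_entries = S + C      # 110 stored quantities

# Cross-context learning after one new experience with a single stimulus:
adaptive_updates = C              # one entry per context must change
non_adaptive_updates = 1          # only the stimulus value changes
```

The gap grows multiplicatively with the number of contexts, which is the core of the efficiency case for non-adaptive storage.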

In sum, while partial adaptation is an exciting theory that may provide novel motivations for empirical work, more effort is needed to understand when and where it is optimal. If these concerns can be overcome, the theory opens up new investigations into the nature of contextual influence: if we allow a range of contextual influence (via a parameter) in the partially-adaptive model, do certain individuals show more contextual influence, and does this heterogeneity correlate with learning performance? Do different environments (e.g., noise in the signals conveying the context) alter the parameter? Do different cells or regions respond with different amounts of contextual influence? As such, the theory opens up new experimental hypotheses that might allow us to better understand how the brain incorporates context into learning and decision making.

 

 

References

 

Burke, C. J., Baddeley, X. M., Tobler, X. P. N., & Schultz, X. W. (2016). Partial Adaptation of Obtained and Observed Value Signals Preserves Information about Gains and Losses. Journal of Neuroscience, 36(39), 10016–10025. doi:10.1523/JNEUROSCI.0487-16.2016

Kim, H., Shimojo, S., & O’Doherty, J. P. (2006). Is Avoiding an Aversive Outcome Rewarding? Neural Substrates of Avoidance Learning in the Human Brain. PLoS Biology, 4(8), 1453–1461. doi:10.1371/journal.pbio.0040233

Kobayashi, S., Pinto de Carvalho, O., & Schultz, W. (2010). Adaptation of Reward Sensitivity in Orbitofrontal Neurons. Journal of Neuroscience, 30(2), 534–544. doi:10.1523/JNEUROSCI.4009-09.2010

Palminteri, S., Khamassi, M., Joffily, M., & Coricelli, G. (2015). Contextual modulation of value signals in reward and punishment learning. Nature Communications, 6, 1–14. doi:10.1038/ncomms9096

Pompilio, L., & Kacelnik, A. (2010). Context-dependent utility overrides absolute memory as a determinant of choice. PNAS, 107(1), 508–512. doi:10.1073/pnas.0907250107

Swisher, J. D., Gatenby, J. C., Gore, J. C., Wolfe, B. A., Moon, H., Kim, S., & Tong, F. (2010). Multiscale pattern analysis of orientation-selective activity in the primary visual cortex. Journal of Neuroscience, 30(1), 325–330. doi:10.1523/JNEUROSCI.4811-09.2010

In this paper, I will describe the psychological continuity theory of personal identity and I will show how it is assumed in the film Being John Malkovich. Then I will explain the problems of the theory.

The problem of personal identity concerns what makes a person numerically identical from one time to the next. Before going further, I will first clarify the definition of “numerical identity”, which is useful in describing theories of personal identity. The sufficient and necessary condition for numerical identity is the sharing of all properties, including spatial-temporal ones. Therefore, if a is numerically identical to b, then there is only one thing, which can be called either “a” or “b”. For example, Clark Kent is numerically identical with Superman in the sense that there is only one person. By contrast, a and b are qualitatively identical if and only if a and b have all their properties in common except the spatial-temporal ones. For example, two qualitatively identical chairs made in the same furniture factory on the same production line look exactly the same, but they are two distinct chairs occupying different spaces at the same time.

The psychological continuity theory of personal identity holds that a person at one time is the same person at another time if and only if the person has continuous psychological states, including memory, consciousness and personality. In other words, the self is numerically identical to the psychological states. Therefore, by the definition of numerical identity, when we refer to the self of a person, we are actually referring to his or her psychological states. Since psychological states persist through time and have a causal influence on future psychological states, one’s psychological continuity is essential in identifying a person over different periods of time. One representative feature of psychological states is memory: if person A can remember the earlier thoughts and perceptions of person B, then person A is identical to person B. My following discussion is mainly focused on this memory aspect of psychological states.

The film Being John Malkovich assumes the psychological continuity theory of personal identity. When Craig Schwartz enters the portal, he experiences the sensory stream of John Malkovich and gradually learns how to puppeteer John Malkovich’s body, but several scenes suggest that Craig continues to exist as Craig after he enters the portal and never succeeds in being John Malkovich, even when he can completely control John’s body. For example, after Craig enters the portal, he has prior memories of being Craig, but no memories of being John Malkovich. Furthermore, the film shows clear discontinuities in the psychological characteristics separating the John Malkovich before Craig’s entry from the John Malkovich after it. In addition, after Craig exits the portal, he is the same person as before because he remembers being Craig. The film also shows that Dr. Lester and his friends can prolong their lives by staying in John Malkovich’s body; Dr. Lester continues to exist because he consciously remembers being Dr. Lester. This is consistent with the psychological continuity theory, in which the change or destruction of the body is assumed to be independent of personal identity.

However, there are several problems with the psychological continuity theory. One philosophical problem is the paradox that arises from breaks in psychological states, which can escalate into serious moral issues. For example, suppose that at time t person A commits a crime, and at a later time t′ he suffers amnesia, so that he cannot remember anything he did before and his personality is totally changed. Because there is an obvious discontinuity in his memory, the theory implies that the A at t′ and the A at t are not identical, which may contradict our intuitions. Furthermore, suppose he is later accused and has to go to court: it is unclear whether he should be held responsible and punished for the crime committed at time t, because it is controversial whether we still have the same person, given that A now behaves completely differently than he did at t and cannot remember the crime he committed.

The second problem is that memory is not perfectly reliable, as it can be altered over time, and we cannot distinguish between genuine and non-genuine memories. To correctly identify a person by continuity of memory, that person’s memory should be genuine, reflecting his true psychological states. The genuineness of memory requires a causal mechanism: the previous perception must have occurred and be causally responsible for the current memory, and the current memory must accurately represent the previous perception. However, this criterion is sometimes hard to satisfy, as perception and memory can easily be swayed by suggestion or other interventions. Moreover, since each person has privileged access only to his own mind, we cannot reliably tell whether someone’s memory is genuine. In this sense, memory-based psychological continuity cannot serve as a perfect link for personal identity. One possible solution might be to identify the causal links between different pieces of memory across time, but this is difficult to implement, as it requires dividing time into very small periods in order to closely track the causal events that change memory along the way.

Furthermore, fading memory creates a paradox for identity across different periods of time. We cannot remember everything we have done, and what we can remember changes over time, so using time-varying memory to identify a person contradicts the transitivity of identity. For example, suppose an old man has forgotten being spanked as a child for knocking over the milk, but he can remember being a college student awarded a scholarship for academic excellence; by the psychological continuity theory, the old man is the same person as the college student. Now suppose that when the man received the scholarship, he could remember being spanked for knocking over the milk as a boy. This means that the person who received the scholarship is the same person who was spanked. Transitivity of identity tells us that if A is identical to B and B is identical to C, then A is identical to C. Therefore, by transitivity, the old man is the same person who was spanked, contradicting the theory, since the old man does not remember being the boy who was spanked.

To conclude, the film Being John Malkovich assumes the psychological continuity theory of personal identity, which uses long-standing psychological characteristics to identify a person. However, this theory has several problems, which mainly result from the time-varying nature of psychological states and their discontinuities, as well as the lack of an effective method for detecting the genuineness of memory.

 

In this paper, I will explain why Turing proposes an “imitation game” as a reformulation of the question “Can computers think?” I will then argue that passing the “Turing test” constitutes neither a sufficient nor a necessary condition for answering this question. Based on Searle’s argument, I will explain that judging whether a computer has original intentional mental states is a better test. Finally, I will show that although Searle does not provide a “test” to measure intentionality, he in fact answers the question, denying that computers can think.

By asking “Can computers pass the Turing test?”, Turing innovatively converts a difficult theoretical question into an operational question open to experimental research. Directly answering “Can computers think?” is hard, given the ambiguous definitions of “thinking” and “computer”, yet clear definitions of both are the premise of any judgment about whether computers can think. According to Turing, the definitions might be framed to reflect the common understanding among people and the normal use of the words; but then the meaning of, and answer to, the question would have to be sought through a statistical survey, which is absurd, since it draws answers from imprecise public opinion rather than scientific study. Instead of attempting a definition that satisfies everyone, Turing replaces the question with another that is unambiguous and operationalizable.

Turing proposes an “imitation game”, which we now call the “Turing test”. According to Turing, passing the test can be considered equivalent to having the capacity to think. The test involves a human interrogator and two respondents, one a computer and the other a human. The interrogator uses a keyboard and screen to engage in a natural language conversation with the unseen respondents. After many trials, if the interrogator cannot reliably tell the computer from the human, the computer is said to have passed the Turing test.

However, I will argue that even if a computer passes the Turing test, this does not indicate that the computer can think. The Chinese room experiment proposed by John Searle forms a counterexample. The person in the experiment mimics a computer program, and the whole system, with Chinese symbols as both input and output, mimics a human being engaged in a Chinese conversation with a real human. In the experiment, a monolingual English speaker isolated in a room is given English instructions for manipulating Chinese symbols, even though he does not understand, or even recognize, Chinese. Someone outside the room hands in a set of Chinese symbols. The person applies the rules, writes down a different set of Chinese symbols as specified by the rules, and hands the result to a person outside the room. It is logically possible that the person outside the room is convinced that he or she is interacting with a real Chinese speaker. Therefore, if we replace the person with a computer that executes the same rules, it too can logically pass the Turing test. However, Searle argues that thinking requires understanding. Since manipulating Chinese symbols according to formal rules is not sufficient for the person to understand Chinese, it is not sufficient for a computer to understand Chinese, either. Therefore, a computer’s passing the Turing test does not suffice to show that it can think.

According to Searle, understanding is a criterion for thinking, and “understanding” implies the possession of original intentional mental states and the truth of those states. Searle holds that “intentional mental states have propositional content that are directed at or about objects and states of affairs in the world.” In other words, an intentional state is about something. For example, the belief that the dog is man’s best friend is about the dog, and the desire to have a cat is about the cat, so they are intentional states; sensations like pains and itches are not about anything and thus are not intentional states. Therefore, testing whether a computer has original intentional mental states is a better way to judge whether computers can think. However, how to conduct such a test is an open question, given that intentionality is introspective: one knows only privately that one has intentionality and cannot access others’ minds. If we evaluate others by interacting with them and observing their behavior, we again run into the Turing test.

Although Searle does not provide a test of whether computers can think, he does dispose of the question. He denies that computers can think, because he rejects the analogy that the mind is to the brain as a program or software is to hardware. First, he argues that programs can have realizations but no understanding, because programs have no intentionality. In the Chinese room, the English speaker can memorize the rules and Chinese symbols, but memorizing will not enhance his understanding of Chinese. Likewise, a computer can mimic human beings, but “mimicking” is not “duplicating”: the computer only mechanically executes programs that produce human-like behavior or language, without any understanding of the corresponding meanings. Second, Searle shows that passing the Turing test is neither a sufficient nor a necessary condition for understanding or thinking. The Turing test requires a computer program, and as shown above, programs do not have original intentionality, while understanding requires intentionality; so a program that lets a computer pass the Turing test does not show that the computer can think. On the other hand, it is possible that original intentionality arises through channels other than programs. Therefore, even if a computer could think, it would not necessarily have to pass the Turing test.

To conclude, because of the state of technology at his time and the ambiguity of the terms "thinking" and "computers," Turing reformulated the question "Can computers think?" as "Can computers pass the Turing test?" However, this reformulation still does not answer whether computers can think, since passing the Turing test is not necessarily related to understanding, which requires intentional mental states. A test designed to detect original intentionality could in principle answer the question, but such a test is empirically difficult, as intentional mental states can only be privately experienced and one cannot access others' mental states. Although Searle does not provide such a test, he answers the question directly by denying that computers can think.

In Blink (Malcolm Gladwell, 2005), the author argues that rapid cognition can lead to effective decisions. This rapid cognition, the so-called "snap judgment," is shown to be correct most of the time, though occasionally wrong. For example, at the Getty Museum in Los Angeles, detailed scientific tests incorrectly judged a statue to be genuine, while the snap judgments of art experts identified it as a fake (pp. 3-8). This example shows that unconscious judgment derived from limited information can be as accurate as, or even superior to, careful analysis, which contradicts conventional beliefs about decision making. Furthermore, Gladwell argues that judgment can be both consciously and unconsciously affected by preferences, prejudices, and stereotypes. For instance, Warren Harding, often ranked among America's worst presidents, was elected because his distinguished-looking appearance catered to most people's preferences and thereby distorted the public's judgment. Unconscious bias can also lead to poor judgment: the Implicit Association Test shows that unconscious stereotypes may be utterly incompatible with one's stated conscious values (pp. 77-88). Moreover, Gladwell points out that excess information can interfere with the accuracy of judgment. Collecting more information may only reinforce confidence without improving accuracy, since the information may be irrelevant and confusing to the decision maker. For example, acting on little information, General Paul Van Riper triumphed in a simulated war game over an opponent who had far more information at his disposal (pp. 72-75). In sum, Blink provides a new perspective on cognition that challenges conventional ways of thinking.

Blink is an insightful book about intuitive decision making and judgment. Its novel ideas and vivid accompanying examples easily keep readers reading. However, readers ultimately find no operational suggestions or methods. For instance, in Chapter 4, just after being amused by the power of "thin-slicing" in the preceding chapters, readers are abruptly told that thin-slicing is not reliable; then, after a few discouraging examples, the chapter ends, leaving readers lost amid the inconsistent opinions. Such a design can be either a merit or a limitation. From the point of view of a general reader, didactic instruction may be unnecessary in a best-selling book, since such books are written predominantly for entertainment rather than education. In this sense, Blink is successful. From an academic point of view, however, the book lacks rigor and guidance. In most cases, Gladwell simply proposes an idea, usually a new one that contradicts conventional wisdom, and then throws out a series of descriptive examples without validating them; this attracts attention but does little to establish the correctness of his arguments. Since the book draws on a number of cutting-edge studies, replacing the long descriptive passages with concise analysis and a stronger emphasis on logic would probably win Blink a decent place among popular science readings.

Coauthored with Xiaoyan Lei, James P. Smith and Yaohui Zhao. Full paper link here.

Abstract: 
Using the China Health and Retirement Longitudinal Study (CHARLS) 2008 pilot, the authors investigate the relationship between cognitive abilities and social activities for people aged 45 or older. They group the cognition measures into two dimensions: intact mental status and episodic memory. Social activities are defined as participation in certain activities common in China, such as playing chess, cards, or Mahjong, interacting with friends, and other social activities. OLS association results show that playing Mahjong, chess, or cards and interacting with friends are significantly related to episodic memory, both individually and taken as a whole (any of the three activities); individually they are not related to mental intactness, but taken as a whole they are. Because social activities may be endogenous, the authors further investigate, using reduced-form OLS models, whether having facilities at the community level that enable social activities is related to cognition. They find that having an activity center in the community is significantly related to higher episodic memory but not to mental intactness. These results point to a possible causal relationship between social activities and cognitive function, especially in strengthening short-term memory.
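The association design described above can be sketched in a few lines. This is only a minimal illustration on synthetic data, not the CHARLS data or the paper's actual specification; the variable names, effect sizes, and the single age control are all assumptions.

```python
import numpy as np

# Synthetic stand-in for the paper's OLS association design:
# regress an episodic-memory score on an indicator for playing
# Mahjong/chess/cards, controlling for age. All data are simulated.
rng = np.random.default_rng(0)
n = 2000
plays_games = rng.integers(0, 2, n)            # 1 = plays Mahjong/chess/cards
age = rng.uniform(45, 80, n)
# True (assumed) data-generating process for the simulation
memory = 6.0 + 0.5 * plays_games - 0.04 * (age - 45) + rng.normal(0, 1, n)

# OLS via least squares: memory ~ const + plays_games + age
X = np.column_stack([np.ones(n), plays_games, age])
beta, *_ = np.linalg.lstsq(X, memory, rcond=None)
print(f"estimated association with playing games: {beta[1]:.2f}")
```

In the actual study the endogeneity concern (people with better cognition may select into social activities) motivates the reduced-form step, which would replace `plays_games` with a community-level facility indicator.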

This is a first draft; criticism and suggestions are welcome 🙂 Thanks to Jim, Prof. Lei, and Prof. Zhao.

The full paper link: click here

Abstract:
In this paper, we model gender differences in cognitive ability in China using a new sample of middle-aged and older Chinese respondents. Modeled after the American Health and Retirement Study (HRS), the CHARLS pilot survey covers respondents aged 45 and older in two quite distinct provinces: Zhejiang, a high-growth industrialized province on the East Coast, and Gansu, a largely agricultural and poor province in the West. Our measures of cognition in CHARLS rely on two proxies for different dimensions of adult cognition: episodic memory and intact mental status. We relate both measures to health and SES outcomes during the adult years. We find large cognitive differences to the detriment of women, which are mitigated by the large gender differences in education among these generations of Chinese people. These gender differences in cognition are especially concentrated within poorer communities in China, with the gender gap being more sensitive to community-level attributes, such as economic resources, than to family-level attributes. In traditional poor Chinese communities, there are strong economic incentives to favor boys at the expense of girls, not only in their education but also in their nutrition and eventually their adult height. These gender differences in cognition have been steadily decreasing across birth cohorts as the Chinese economy has grown rapidly; among the youngest cohorts of adults in China, there is no longer any gender disparity in cognitive ability.