
2nd LABEL – IEPR Conference

“Understanding cognition and decision making by children”

 May 4 – 5, 2017

 

 PROGRAM

  Thursday, May 4, 2017
 
 Radisson Los Angeles at USC
Victory AB Room
3540 South Figueroa Street
Los Angeles, California 90007
 
 

8:45 –  9:15   Continental Breakfast

9:15 –  9:30   Introductory remarks  

9:30 – 10:15   “The role of agency in regret and relief in 3- to 10-year-old children”. Giorgio Coricelli (USC)

10:15 – 11:00  “Young Children’s Exploration of Alternative Possibilities”.  Henrike Moll (USC)

11:00 – 11:30   Coffee break

11:30 – 12:15   “Navigating Uncertainty: Neural Correlates of Decisions from Experience in Adolescence”. Wouter van den Bos (Max Planck – Berlin)

12:15 – 1:45   Lunch

1:45 – 2:30   “The Costs and Benefits of Growing Up”. Michael Norton (Harvard U.)

2:30 – 3:15   “Using economic experiments to detect drivers of behavior at early ages: Evidence from a large field experiment in Chicago”. Ragan Petrie (Texas A&M)                                        

3:15 – 3:45   Coffee break  

3:45 – 4:30   “Does the little mermaid compete? An experiment on competitiveness with children”. Marco Piovesan (U. Copenhagen)

6:30              Dinner (by invitation)

 


 

Friday, May 5, 2017
 
Radisson Los Angeles at USC
Victory AB Room
3540 South Figueroa Street
Los Angeles, CA 90007

 

9:00 – 9:30   Continental Breakfast

9:30 – 10:15   “Altruism and strategic giving in children and adolescents”. Isabelle Brocas (USC)

10:15 – 11:00   “Theory of Mind among Disadvantaged Children”. Aldo Rustichini (U. Minnesota)

11:00 – 11:30   Coffee break

11:30 – 12:15   “The development of reciprocal sharing in relation to intertemporal choice”. Felix Warneken (Harvard U.)

12:15 – 1:45   Lunch

1:45 – 2:30   “Group and Normative Influences on Children’s Exchange Behavior”. Yarrow Dunham (Yale U.)

2:30 – 3:15   “The role of social preferences and emotions in children’s decision making – a view from developmental psychology”. Michaela Gummerum (Plymouth U.)

3:15 – 3:45   Coffee Break

3:45 – 4:30   “Socio-cognitive mechanisms of prosocial behavior”. Nadia Chernyak (Boston U.)


Yuqing Hu, Juvenn Woo

GARP, the generalized axiom of revealed preference, is a dichotomous notion: a dataset either satisfies the theory of a rational consumer or violates it[1]. Testing GARP consistency in an experiment poses several challenges. One challenge comes from the nature of budget sets: there is more individual-level variation in expenditure than in prices, so budget sets tend to overlap, and a test based on overlapping budget sets is biased toward satisfying GARP; in the extreme case of nested budget sets, no violation can occur at all. Another challenge stems from the lack of consumer identities in some datasets, which forces revealed-preference tests to treat distinct individuals as a single consumer (Chambers & Echenique, 2016).
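
To make the test concrete, here is a minimal sketch of a GARP check in Python (our illustration, not code from any of the papers cited here). Bundle x_i is directly revealed preferred to x_j when p_i · x_i ≥ p_i · x_j; GARP is violated when the transitive closure of this relation ranks x_i over x_j while x_j is also strictly directly revealed preferred to x_i.

import numpy as np

def violates_garp(prices, quantities):
    """Return True if the observed choices violate GARP.

    prices, quantities: (T, K) arrays -- T observations of K goods.
    """
    p = np.asarray(prices, dtype=float)
    x = np.asarray(quantities, dtype=float)
    expend = p @ x.T                   # expend[i, j] = p_i . x_j
    own = expend.diagonal()[:, None]   # own[i] = p_i . x_i
    direct = own >= expend             # x_i directly revealed preferred to x_j
    R = direct.copy()                  # transitive closure (Warshall's algorithm)
    for k in range(len(R)):
        R |= R[:, [k]] & R[[k], :]
    strict = own > expend              # strict direct revealed preference
    return bool((R & strict.T).any())  # x_i R x_j, yet x_j strictly beats x_i

# Two observations forming a classic violation cycle:
print(violates_garp([[2, 1], [1, 2]], [[2, 1], [1, 2]]))  # True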

 

Several indices have been proposed to measure the degree of GARP violations. One is Afriat’s efficiency index (AEI) (Afriat, 1967), which measures how much expenditure must be “deflated” to make the data GARP-consistent. Variations on the AEI include Varian’s efficiency index (VEI) (Varian, 1983) and the money pump index (MPI) (Echenique, Lee, & Shum, 2011).
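
Under the standard definition, the AEI can be computed by binary search on the efficiency level e, relaxing the direct relation to e·(p_i · x_i) ≥ p_i · x_j. A sketch mirroring the check above (again our illustration):

import numpy as np

def afriat_index(prices, quantities, tol=1e-6):
    """Largest e in [0, 1] (up to tol) such that the data satisfy GARP(e),
    where bundle i is revealed preferred to j only if e*(p_i . x_i) >= p_i . x_j.
    e = 1 means the data are fully GARP-consistent."""
    p = np.asarray(prices, dtype=float)
    x = np.asarray(quantities, dtype=float)
    expend = p @ x.T
    own = expend.diagonal()[:, None]

    def violated(e):
        direct = e * own >= expend
        R = direct.copy()
        for k in range(len(R)):
            R |= R[:, [k]] & R[[k], :]
        return bool((R & (e * own > expend).T).any())

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if violated(mid):
            hi = mid   # violations remain: deflate budgets further
        else:
            lo = mid
    return lo

On the two-observation example above, the search converges to 0.8 (= 4/5), the deflation level at which the violation cycle just breaks.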

 

There is a modest experimental-economics literature testing GARP consistency, summarized below.

 

Battalio et al. (1973) pioneered the testing of GARP consistency in experiments. They ran field experiments in which psychiatric patients exchanged tokens for different consumption goods. By changing the value of tokens, they induced a variety of budget sets, and they found that if small measurement errors were allowed, almost all patients’ behavior was consistent with GARP.

 

Sippel (1997) ran lab experiments in which 42 students repeatedly chose a bundle of priced entertainment items under different standard budget constraints. Individuals were paid in consumption goods and were required to actually consume the goods at the end of the experiment. In the experiment, 63% of subjects made choices violating GARP, though the median number of violations across subjects was only 1 out of 45 choices, and the result barely changed when measured demand was perturbed slightly. Even though the remarkably low number of violations per individual indicates that subjects were highly motivated when making choices, a majority of them could not be classified as rational in the sense of utility maximization.

 

Harbaugh, Krause and Berry (2001) examined the development of choice behavior in children. The study ran simple choice experiments on cohorts of second graders, sixth graders, and undergraduates, and measured and compared the number of GARP violations. For this particular test, about 25% of 7-year-olds and 60% of 11-year-olds made choices consistent with GARP, with no further increase in consistency from age 11 to 21. They also found that GARP violations are not significantly correlated with the results of a test of mathematical ability in children.

 

Andreoni and Miller (2002) ran dictator-game experiments to test whether observed altruistic behavior is utility-maximizing. They found that 98% of subjects made choices consistent with utility maximization. Andreoni and Miller went further than most revealed-preference exercises by estimating parametric utility functions to account for subjects’ choices: about half the subjects can be classified as having linear, CES (constant elasticity of substitution), or Leontief utility.
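
For concreteness, the CES family estimated there can be written, in a standard parameterization with x_s the dictator’s own payoff and x_o the other’s payoff, as

u(x_s, x_o) = (\alpha x_s^{\rho} + (1-\alpha) x_o^{\rho})^{1/\rho},

where \rho = 1 gives linear (perfect-substitutes) preferences, \rho\to 0 gives Cobb–Douglas, and \rho\to-\infty gives Leontief preferences.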

 

Hammond and Traub (2012) designed a three-stage experiment in which a participant advances to the next stage only if his or her choices in the current one are GARP-consistent. The experiment was conducted with 41 (non-economics) undergraduates. The study found that, at the 10% significance level, only 22% of subjects (compared to 6.7% of simulated random choice-makers) passed the second-stage tests, and 19.5% passed all three stages. The result reinforces Sippel’s finding that humans are generally not perfectly rational. Note, though, that conditional on passing the second stage, 62.5% of subjects passed the third-stage tests.

 

Choi, Kariv, Müller and Silverman (2014) conducted a comprehensive study of rationality using the CentERpanel survey. Measuring GARP consistency with Afriat’s Critical Cost Efficiency Index (CCEI) and correlating it with socio-economic factors, they found that, on average, younger subjects are more consistent than older ones, men more than women, the highly educated more than the less educated, and high-income subjects more than low-income subjects.

 

Brocas, Carrillo, Combs and Kodaverdian (2016) studied choice behavior in cohorts of young and older adults, varying the choice tasks from a simple domain to a complex domain. They showed that while young and older adults are about equally consistent in the simple domain, older adults are significantly more inconsistent in complex tasks. Moreover, using working-memory and fluid-intelligence (IQ) tests, they found that older adults’ inconsistency in the complex domain can be attributed to declines in working memory and fluid intelligence.

[1] One can, however, evaluate the degree of violations.

 

References:

 

Afriat, S. N. (1967). The Construction of Utility Functions from Expenditure Data. International Economic Review, 8(1), 67–77.

Andreoni, J., & Miller, J. (2002). Giving According to GARP: An Experimental Test of the Consistency of Preferences for Altruism. Econometrica, 70(2), 737–753.

Battalio, R. C., Kagel, J. H., Winkler, R. C., Fisher, E. B., Jr., Basmann, R. L., & Krasner, L. (1973). A Test of Consumer Demand Theory Using Observations of Individual Consumer Purchases. Western Economic Journal, XI(4), 411–428.

Brocas, I., Carrillo, J. D., Combs, T. D., & Kodaverdian, N. (2016). Consistency in Simple vs. Complex Choices over the Life Cycle.

Chambers, C. P., & Echenique, F. (2016). Revealed Preference Theory. Cambridge University Press.

Choi, S., Kariv, S., Müller, W., & Silverman, D. (2014). Who Is (More) Rational? The American Economic Review, 104(6), 1518–1550.

Echenique, F., Lee, S., & Shum, M. (2011). The Money Pump as a Measure of Revealed Preference Violations. Journal of Political Economy, 119(6), 1201–1223.

Grether, D. M., & Plott, C. R. (1979). Economic Theory of Choice and the Preference Reversal Phenomenon. The American Economic Review, 69(4), 623–638.

Hammond, P., & Traub, S. (2012). A Three-Stage Experimental Test of Revealed Preference.

Harbaugh, W. T., Krause, K., & Berry, T. R. (2001). GARP for Kids: On the Development of Rational Choice Behavior. American Economic Review, 91(5), 1539–1545. http://doi.org/10.1257/aer.91.5.1539

Sippel, R. (1997). An Experiment on the Pure Theory of Consumer’s Behaviour. The Economic Journal, 107(444), 1431–1444. http://doi.org/10.1111/1468-0297.00231

Varian, H. R. (1983). Non-Parametric Tests of Consumer Behaviour. The Review of Economic Studies, 50(1), 99–110.

 

 

https://www.microsoft.com/en-us/research/event/match-up-2017/


DAY 1
8:00 A.M. Breakfast
8.45 A.M. Invited Talk 1 — Estelle Cantillon, Université libre de Bruxelles

The efficiency – stability tradeoff in school choice: Lessons for market design

Abstract: A well-known result for the school choice problem is that ex-post efficiency and stability may not be compatible. In the field, that trade-off is sometimes small, sometimes big.  This talk will summarize existing and new results on the drivers of this trade-off and derive the implications for the design of priorities and tie-breaking rules.

9.30 A.M. Session 1
A Simple Model for the Top Trading Cycles School Choice Mechanism Using Fluid Approximation

Speakers: Jacob Leshno and Irene Lo

Abstract: Many cities determine the assignment of students to schools through a school choice mechanism which calculates an assignment based on student preferences and school priorities. The prominent Top Trading Cycles (TTC) algorithm can be applied to give a strategy-proof and Pareto efficient assignment, but the combinatorial description of TTC makes it non-transparent to parents and difficult to analyze for designers. Using a fluid approximation model, we give a tractable characterization of the TTC mechanism for school choice: the TTC assignment can be simply described by n^{2} admission thresholds, where n is the number of schools, and these thresholds can be calculated by a differential equation.

The model also allows us to analyze general sequential trade procedures, and show that changes in trading priority can have non-trivial indirect effects on the allocation. We also apply this model to solve for optimal investment in school quality, and help explain empirical findings about the relation between the TTC assignment and the Deferred Acceptance assignment. To validate the fluid model we show that it gives a good approximation for strongly converging economies. Our analysis draws on an interesting connection between continuous trading procedures and continuous time Markov chains.
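
For readers less familiar with the procedure the abstract builds on, here is a minimal Python sketch of the classic top trading cycles algorithm in the Shapley-Scarf housing market (our illustration of the textbook algorithm, not of the paper’s fluid-approximation model):

def top_trading_cycles(prefs, endowment):
    """Classic Shapley-Scarf TTC: each agent owns one house.

    prefs[a]: agent a's ranking of all houses, most preferred first.
    endowment[a]: the house agent a initially owns.
    Returns a dict agent -> assigned house.
    """
    owner = {h: a for a, h in endowment.items()}
    remaining = set(prefs)
    assignment = {}
    while remaining:
        def favorite(a):
            # a's best house whose owner is still in the market
            return next(h for h in prefs[a] if owner[h] in remaining)
        # Follow pointers (agent -> owner of favorite house) until an
        # agent repeats; the repeated agent starts a trading cycle.
        path, seen = [], set()
        a = next(iter(remaining))
        while a not in seen:
            seen.add(a)
            path.append(a)
            a = owner[favorite(a)]
        cycle = path[path.index(a):]
        for b in cycle:               # everyone in the cycle trades
            assignment[b] = favorite(b)
        remaining -= set(cycle)       # cycle members leave the market
    return assignment

For example, with prefs = {'a': ['h2', 'h1'], 'b': ['h1', 'h2']} and endowment = {'a': 'h1', 'b': 'h2'}, the two agents simply swap, giving {'a': 'h2', 'b': 'h1'}.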

Pairwise Matching in Large Economies

Speakers: Michael Greinecker and Christopher Kah

Abstract: We formulate a general model of two-sided pairwise matching for continuum economies and prove the existence of stable matchings. The data of our model are not individual agents but the population distribution of their characteristics, which can be multidimensional and need not lie in a compact set. Our model allows for transfers as well as their absence, general widespread externalities, and match specific contracts and activities. Continuum versions of the assignment game or the marriage model can be obtained as special cases, as well as intermediate models with limited side payments. Our distributional model can be interpreted as both a limit of finite matching models and as a reduced form of a genuine continuum model in which individual agents are explicitly modeled.

Lone Wolves in Infinite, Discrete Matching Markets

Speaker: Ravi Jagadeesan

Abstract: In finite two-sided matching markets, the lone wolf theorem guarantees that the same agents remain unmatched in all stable outcomes. I show by example that this assertion is not true in infinite, discrete markets. However, despite the fact that the lone wolf theorem is often used to derive strategy-proofness, deferred acceptance remains (group) strategy-proof in many infinite markets.

10.30 A.M. Break
10.50 A.M. Session 2
Dynamic Reserves in Matching Markets: Theory and Applications

Speakers: Bertan Turhan and Orhan Aygun

Abstract: Indian engineering school admissions, which draw more than 500,000 applications per year, suffer from an important market failure: through their affirmative action program, a certain number of seats are reserved for different castes and tribes. However, when some of these seats are unfilled, they are not offered to other groups, and the system is vastly wasteful. Moreover, since students care not only about the school they are assigned to but also about whether they are assigned through reserves or not, they may manipulate the system both by not revealing their privilege type and by changing their preferences over programs. In this paper, we propose a new matching model with the ability to release vacant seats for the use of other students while respecting certain affirmative action objectives. We design a new choice function for schools that respects affirmative action objectives and increases efficiency. We propose a mechanism that is stable, strategy-proof, and respects test-score improvements with respect to these choice functions. Moreover, we show that some distributional objectives that can be achieved by capacity-transfers cannot be achieved by slot-specific priorities.

Strategyproof Pareto-Stable Mechanisms for Two-Sided Matching with Indifferences

Speakers: Nevzat Onur Domanic, Chi-Kit Lam and C. Gregory Plaxton

Abstract: We study variants of the stable marriage and college admissions models in which the agents are allowed to express weak preferences over the set of agents on the other side of the market and the option of remaining unmatched. For the problems that we address, previous authors have presented polynomial-time algorithms for computing a “Pareto-stable” matching. In the case of college admissions, these algorithms require the preferences of the colleges over groups of students to satisfy a technical condition related to responsiveness. We design new polynomial-time Pareto-stable algorithms for stable marriage and college admissions that correspond to strategyproof mechanisms. For stable marriage, it is known that no Pareto-stable mechanism is strategyproof for all of the agents; our algorithm provides a mechanism that is strategyproof for the agents on one side of the market. For college admissions, it is known that no Pareto-stable mechanism can be strategyproof for the colleges; our algorithm provides a mechanism that is strategyproof for the students.

First Choice-Maximizing School Choice Mechanisms

Speakers: Umut Dur, Timo Mennle and Sven Seuken

Abstract: We investigate first choice-maximality (FCM) (i.e., a maximal share of students is matched to their reported first choices), a common desideratum in many school districts. No FCM mechanism can be stable; however, we find first choice-stability (FCS) (i.e., no student forms a blocking pair with her first choice) to be the strongest rank-based relaxation of stability that is compatible with FCM. Focusing on the class of FCM and FCS mechanisms (which includes the widely used Boston mechanism and its variants), we show that the Pareto efficient members of this class form a minimally manipulable subset. Moreover, we identify the Nash equilibrium outcomes of any mechanism in this class when all students are strategic and when some students are sincere. Our analysis generalizes that of Ergin and Sönmez (2006) and Pathak and Sönmez (2008) from the Boston mechanism to the general class of FCM and FCS mechanisms. Our results suggest a novel approach to the design of school choice mechanisms where strategic students self-select into receiving their reported school in a first step, so that in a second step, the matching of sincere students can be made independently of the usual incentive constraints.

Endowments, Exclusion, and Exchange

Speakers: Ivan Balbuzanov and Maciej Kotowski

Abstract: We study a discrete exchange and assignment economy with a rich endowment structure. Identifying deficiencies with classic solutions, such as the core, we propose a new cooperative solution concept for resource allocation problems, the exclusion core. The exclusion core is neither weaker nor stronger than the (strong) core and it rests upon a foundational idea in the legal understanding of property, the right to exclude others. We extend the exclusion core to relational economies where ownership rights and endowments are qualified by social hierarchies or priorities. A generalization of the top trading cycles algorithm can identify exclusion core allocations.

Efficient and Essentially Stable Assignments

Speakers: Andrew Kloosterman and Peter Troyan

Abstract: An important desideratum in priority-based allocation is stability: no agent should claim an object because she has higher priority than the agent to whom it is assigned. However, stable matchings are in general Pareto inefficient, forcing upon designers a difficult trade-off. This paper alleviates this tension between efficiency and stability by defining an assignment to be essentially stable if any claim an agent has to an object she prefers is vacuous, in the sense that it initiates a chain of reassignments that ultimately results in the initial claimant losing the object to a third student with even higher priority; on the other hand, if a non-vacuous claim exists, the assignment is strongly unstable. A main practical advantage to our definition is its simplicity: under an essentially stable assignment, explaining to agents why their claims are vacuous becomes straightforward, even to non-experts who may not fully understand the inner workings of a given mechanism. We show that mechanisms based on Shapley and Scarf’s TTC algorithm, while efficient, are strongly unstable, while Kesten’s EADA mechanism is both Pareto efficient and essentially stable.

12.30 P.M. Lunch
1:00 P.M. Outlook Talk 1 – Al Roth, Stanford

Frontiers of Kidney Exchange

Abstract: Kidney exchange is different from many market design efforts I’ve been involved in, because it affects the everyday conduct of transplant centers, so we’re constantly adapting to their big strategy sets (in contrast to, e.g., annual labor markets or school choice, which don’t affect the daily conduct of residency programs and schools). The early design challenges in kidney exchange mostly involved dealing with congestion (and the solutions involved long chains, standard acquisition charges, and attempts to better elicit surgeons’ preferences over kidneys). The current challenges involve creating more thickness in the markets, and I’ll touch on several new initiatives:

1. Frequent flier programs to encourage transplant centers to enroll more of their easy-to-match pairs;

2. Global kidney exchange;

3. Information deserts: populations of Americans who don’t get transplants;

4. Deceased donor initiated chains;

a. Increasing deceased donation: military share, priority in Israel

2:00 P.M. Session 3
On likely solutions of the stable matching problem with unequal numbers of men and women

Speaker: Boris Pittel

Abstract: Consider a set of men and a set of women contemplating selection of a marriage partner. It is assumed that each man and each woman has his/her preferences for a marriage partner, with no ties allowed. A maximal matching is called stable if there is no unmarried pair who prefer each other to their respective partners in the matching. A classic theorem, due to Gale and Shapley, asserts that, given any system of preferences, there exists at least one stable matching. Most algorithmic/probabilistic research has focused on the case of two sides of the same size. Recently Ashlagi, Kanoria and Leshno discovered that even a relatively slight difference between the two sizes leads to a dramatic change in the likely properties of the stable matchings. As a follow-up, we extend our previous work on the expected number of stable matchings in the balanced case and derive an asymptotic formula for the expected number of stable matchings for the full range of unequal sizes. For a size difference of 1, the expected number is already smaller than its counterpart for the balanced case by a squared logarithmic factor, but remains quite sizable. As soon as the size difference is of the order of the smaller size, the expected, whence the likely, number of stable matchings becomes bounded as both sizes grow indefinitely.

Circulation under Responsive Preferences

Speakers: Peter Biro, Flip Klijn and Szilvia Papai

Abstract: We study markets in which each agent is endowed with multiple units of an indivisible and agent-specific good. Monetary compensations are not possible. An outcome of a market is given by a circulation which consists of a balanced exchange of goods. Agents only have (responsive) preferences over the bundles they receive. We prove that for general capacity configurations there is no circulation rule that satisfies individual rationality, Pareto-efficiency, and strategy-proofness. We characterize the capacity configurations for which the three properties are compatible, and show that in this case the Circulation Top Trading Cycle (cTTC) rule is the unique rule that satisfies all three properties. We explore the incentive and efficiency properties of the cTTC rule for general capacity configurations and provide a characterization of the rule for lexicographic preferences. Next, we introduce and study two families of individually rational serial rules in which agents sequentially choose single goods or bundles. We show that in the first (second) case the rules are Pareto-efficient for lexicographic (responsive) preferences. Finally, we consider the family of Segmented Trading Cycle (STC) rules where agents are required to exchange their goods in market segments. We show that STC rules are strategy-proof.

Strategy-Proof Tie-Breaking

Speakers: Lars Ehlers and Alexander Westkamp

Abstract: A set of indivisible objects is allocated among agents with strict preferences. Each object has a weak priority ranking of the agents. A collection of priority rankings, a priority structure, is solvable if there is a strategy-proof mechanism that is constrained efficient, i.e. that always produces an agent-optimal stable matching. We characterize all solvable priority structures satisfying the following two restrictions:

(A) Either there are no ties, or there is at least one four-way tie.

(B) For any two agents i and j, if there is an object that assigns higher priority to i than j, there is also an object that assigns higher priority to j than i.

We show that there are at most three types of solvable priority structures: the strict type; the house allocation with existing tenants (HET) type, where, for each object, there is at most one agent who has strictly higher priority than another agent; and the task allocation with unqualified agents (TAU) type, where, for each object, there is at most one agent who has strictly lower priority than another agent. Out of these three, only HET priority structures are shown to admit a strongly group strategy-proof and constrained efficient mechanism.

Strategy-proof Pareto Improvement

Speakers: Samson Alva and Vikram Manjunath

Abstract: We consider a general model of resource allocation with unit-demand agents, each of whom have an outside option of privately known value. The model encompasses (priority-based) object allocation, matching with contracts, provision of a public good when agents may choose not to participate, and more. If a mechanism designer’s choice of a strategy-proof allocation rule must weakly Pareto-dominate an individually rational and participation-maximal benchmark rule, we show there is a unique solution to his problem, if one exists. As a consequence, for many market design applications, many known rules are on the efficient frontier of strategy-proof rules.

We relate strategy-proof Pareto improvements of a rule with expansions in the set of participating agents. This implies a revenue equivalence result for dominant strategy incentive compatible and individually rational rules in settings with quasilinear preferences and transfers.

Effect of selfish choices in deferred acceptance with short lists

Speakers: Hedyeh Beyhaghi, Daniela Saban and Eva Tardos

Abstract: We study the outcome of deferred acceptance when prospective medical residents can only apply to a limited set of hospitals. This limitation requires residents to make a strategic choice about the quality of hospitals they apply to. Through a mix of theoretical and experimental results, we study the effect of this strategic choice on the preferences submitted by participants, as well as on the overall welfare. We find that residents’ choices in our model mimic the behavior observed in real systems, where individuals apply to a mix of positions consisting mostly of places where they are reasonably likely to get accepted, a few “reach” applications to hospitals of very high quality, and a few “safe” applications to hospitals of lower quality than their expected level. Surprisingly, the number of such “safe” applications is not monotone in the number of allowed applications. We also find that selfish behavior can hurt social welfare, but the deterioration of overall welfare is very minimal.

3.40 P.M. Break
4:00 P.M. Session 4
Assortment Planning in School Choice

Speaker: Peng Shi

Abstract: In many public school systems across the US, school choice has become the preferred alternative to the traditional method of assigning each student to a neighborhood school. In a typical school choice system, each student submits a preference ranking for a set of schools called the student’s \emph{menu}. The school board then assigns students to schools using the Gale-Shapley deferred acceptance (DA) algorithm, which takes into account \emph{priorities} that differentiate certain types of students, as well as \emph{quotas} for students at each school. The menus, priorities and quotas are policy levers with which the school board may induce socially desirable outcomes. This paper presents a systematic approach for optimizing these policy levers.

Our methodology is based on a novel connection between stable matching and assortment planning, which allows us to approximate school choice as a convex optimization problem. The key to solving this convex optimization is to find an assortment of schools for each student type that maximizes the sum of utilities for this type and externalities for others. We refer to this as the “socially-optimal assortment planning” problem, and show that it is a generalization of the revenue-maximizing assortment planning problem studied in the revenue management literature. We give efficient algorithms for the socially-optimal assortment planning problem for the multinomial logit (MNL), nested logit, and Markov chain based utility distributions.

We demonstrate the effectiveness of our methodology by applying it to actual data from Boston Public Schools. We construct optimized menus and priority distributions that outperform the status quo, improving the expected utilities of students and the predictability of the assignment outcome while maintaining the same amount of busing.

The Iterative Deferred Acceptance Mechanism

Speakers: Inacio Bo and Rustamdjan Hakimov

Abstract: We introduce a new mechanism for matching students to schools or universities, denoted the Iterative Deferred Acceptance Mechanism (IDAM), inspired by procedures currently being used to match millions of students to public universities in Brazil and China. Unlike most options available in the literature, IDAM is not a direct mechanism. Instead of requesting from each student a full preference order over all colleges, the student is repeatedly asked to choose one college among those which would accept her given the current set of students choosing that college. Although the induced sequential game has no dominant strategy, when students simply choose the most preferred college in each period (the straightforward strategy), the matching that is produced is the Student Optimal Stable Matching. Moreover, under imperfect information, all students following the straightforward strategy is an Ordinal Perfect Bayesian Equilibrium. Based on data from 2016, we also provide evidence that, due to shortcomings which are absent in the modified version that we propose, the mechanism currently used in Brazil fails to provide students with reliable information about the universities they are able to attend, and is subject to manipulation via cutoffs, a new type of strategic behavior introduced by this family of iterative mechanisms and observed in the field.

Dynamic Matching in School Choice: Efficient Seat Reallocation After Late Cancellations

Speakers: Itai Feigenbaum, Yash Kanoria, Irene Lo and Jay Sethuraman

Abstract: In many centralized school admission systems, a significant fraction of allocated seats are later vacated, often due to students obtaining better outside options. We consider the problem of reassigning these seats in a fair and efficient manner while also minimizing the movement of students between schools. Centralized admissions are typically conducted using the deferred acceptance (DA) algorithm, with a lottery used to break ties caused by indifferences in school priorities. The key idea we introduce is to reassign vacated seats using a suitable permutation of the first round lottery numbers. In particular, we show that a mechanism based on a simple reversal of the first round lottery order performs the best among all truthful mechanisms satisfying some natural efficiency and fairness properties. Empirical investigations based on data from NYC high school admissions support our theoretical findings.

5:00 P.M. Invited Talk 2 – Aaron Roth, UPenn

Approximately Stable, School Optimal, and Student-Truthful Many-to-One Matchings (via Differential Privacy)

Abstract: In this talk, we will walk through a case study of how techniques developed to design “stable” algorithms can be brought to bear to design asymptotically dominant strategy truthful mechanisms in large markets, without the need to make any assumptions about the structure of individual preferences. Specifically, we will consider the many-to-one matching problem, and see a mechanism for computing school optimal stable matchings, that makes truthful reporting an approximately dominant strategy for the student side of the market. The approximation parameter becomes perfect at a polynomial rate as the number of students grows large, and the analysis holds even for worst-case preferences for both students and schools.

Joint work with: Sampath Kannan, Jamie Morgenstern, and Zhiwei Steven Wu.

5.45 P.M. Break
6:00 P.M. Poster Lightning Talks
6.30 P.M. Reception and Poster Session
8:00 P.M. END
DAY 2
8:00 A.M. Breakfast
8.45 A.M. Invited Talk 3 — Michael Ostrovsky, Stanford

Matching under preferences: beyond the two-sided case

Abstract: I will present an overview of several recent papers showing that most of the key results of matching theory generalize naturally to a much richer setting: trading networks. These networks do not need to be two-sided, and agents do not have to be grouped into classes (“firms”, “workers”, and so on). What is essential for the generalization is that the bilateral contracts representing relationships in the network have a direction (e.g., one agent is the seller and the other is the buyer), and that agents’ preferences satisfy a suitably adapted substitutability notion. For this setting, for the cases of discrete and continuous sets of possible contracts, I will discuss the existence of stable outcomes, the lattice structure of the sets of stable outcomes, the relationship between various solution concepts (stability, core, competitive equilibrium, etc.), and other results familiar from the literature on two-sided markets.

9.30 A.M. Session 5
Trading Networks with Bilateral Contracts

Speakers: Alexander Teytelboym, Tamas Fleiner, Zsuzsanna Jankó and Akihisa Tamura

Abstract: We consider general networks of bilateral contracts that include supply chains. We define a new stability concept, called trail stability, and show that any network of bilateral contracts has a trail-stable outcome whenever agents’ choice functions satisfy full substitutability. Trail stability is a natural extension of chain stability, but is a stronger solution concept in general contract networks. Trail-stable outcomes are not immune to deviations of arbitrary sets of firms. In fact, we show that outcomes satisfying an even more demanding stability property – full trail stability – always exist. For fully trail-stable outcomes, we prove results on the lattice structure, the rural hospitals theorem, strategy-proofness and comparative statics of firm entry and exit. We pin down a condition under which trail-stable and fully trail-stable outcomes coincide. We then completely describe the relationships between various other concepts. When contracts specify trades and prices, we also show that competitive equilibrium exists in networked markets even in the absence of fully transferrable utility. The competitive equilibrium outcome is (fully) trail-stable.

Too Good to Fire: Non-Assortative Matching to Play a Dynamic Game

Speakers: Benjamin Sperisen and Thomas Wiseman

Abstract: We study stable outcomes in a one-to-one matching market with firms and workers. The model endogenizes how transferable utility is within a match: when a firm-worker pair is matched, they play an infinite-horizon discounted dynamic game with one-sided, observable effort. Partners’ types are complementary in determining the maximal feasible payoffs. In our setting, the classic result that with complementary types stable matchings are positively assortative does not hold. Increasing the quality of a match harms dynamic incentives, because a firm cannot credibly threaten to fire a worker who is productive even with low effort. Thus, firms may prefer lower-type workers who will exert more effort. Our analysis suggests a new interpretation of efficiency wages: committing to pay a high wage can improve effort incentives indirectly by making the firm more willing to walk away.

Matching Auctions

Speakers: Alessandro Pavan and Daniel Fershtman

Abstract: We study mediated many-to-many matching in markets in which valuations evolve over time as the result of shocks, learning through experimentation, or a preference for variety. The matching dynamics that maximize either the platform’s profits or welfare can be sustained through auctions implementing the matches with the highest bilateral score up to capacity. In equilibrium, bidding is straightforward and myopic. The analysis also sheds light on the merits of regulating such markets. When match values are positive, profit maximization involves fewer and shorter interactions than welfare maximization. This conclusion need not extend to markets where certain agents dislike certain interactions.

10.30 A.M. Break
10.50 A.M. Session 6
Communication Requirements and Informative Signaling in Matching Markets

Speakers: Itai Ashlagi, Mark Braverman, Yash Kanoria and Peng Shi

Abstract: We study the amount of communication needed to find a stable matching in a two-sided matching market with private preferences when agents have some knowledge of the preference distribution. We show that in a two-sided market with workers and firms, when the preferences of workers are arbitrary and private and the preferences of firms follow a multinomial-logit (MNL) choice model with commonly known and heterogeneous parameters, there exists an algorithm that finds a stable matching with high probability and requires at most $O^*(\sqrt{n})$ bits of communication per agent. (We show that this is the best possible under this setting.) The algorithm, which we call communication efficient deferred acceptance (CEDA), is a modification of the deferred acceptance algorithm with workers applying. In this algorithm, firms help workers better target applications by signaling workers that they idiosyncratically like and broadcasting to the market a qualification requirement to discourage those with no realistic chances from applying. In the special case in which preferences of both workers and firms follow a tiered structure, we show that the signaling can be done in a parallel and decentralized way. In terms of incentives, the protocols we propose inherit many properties of DA under full preference elicitation. In large markets with a small core, they are incentive compatible for everyone. These results yield insights on how matching markets can be better organized to reduce congestion.

A Generalized Polymatroid Approach to Stable Matchings with Lower Quotas

Speaker: Yu Yokoi

Abstract: Classified stable matching, proposed by Huang, describes a matching model between academic institutes and applicants, in which each institute has upper and lower quotas on classes, i.e., subsets of applicants. Huang showed that it is NP-hard to decide whether there exists a stable matching or not in general. On the other hand, he showed that the problem is solvable if classes form a laminar family. For this case, Fleiner and Kamiyama gave a concise interpretation in terms of matroids and showed the lattice structure of stable matchings. In this paper, we introduce stable matchings on generalized matroids, extending the model of Fleiner and Kamiyama. We design a polynomial-time algorithm which finds a stable matching or reports the nonexistence. We also show that the set of stable matchings, if nonempty, forms a lattice with several significant properties. Furthermore, we extend this structural result to the polyhedral framework, which we call stable allocations on generalized polymatroids.

The weighted stable matching problem

Speakers: Linda Farczadi and Natalia Guricanova

Abstract: We study the stable matching problem in non-bipartite graphs with incomplete but strict preference lists, where the edges have weights and the goal is to compute a stable matching of minimum or maximum weight. This problem is known to be NP-hard in general. Our contribution is twofold: a polyhedral characterization and an approximation algorithm. Previously Chen et al. have shown that the stable matching polytope is integral if and only if the subgraph obtained after running phase one of Irving’s algorithm is bipartite. We improve upon this result by showing that there are instances where this subgraph might not be bipartite, but one can further eliminate some edges and arrive at a bipartite subgraph. Our elimination procedure ensures that the set of stable matchings remains the same, and thus the stable matching polytope of the final subgraph contains the incidence vectors of all stable matchings of our original graph. This allows us to characterize a larger class of instances for which the weighted stable matching problem is polynomial-time solvable. We also show that our edge elimination procedure is best possible, meaning that if the subgraph we arrive at is not bipartite, then there is no bipartite subgraph that has the same set of stable matchings as the original graph. We complement these results with a 2-approximation algorithm for the minimum weight stable matching problem for instances where each agent has at most two possible partners in any stable matching. This is the first approximation result for any class of instances with general weights.

New and simple algorithms for stable flow problems

Speakers: Ágnes Cseh and Jannik Matuschke

Abstract: Stable flows generalize the well-known concept of stable matchings to markets in which transactions may involve several agents, forwarding flow from one to another. An instance of the problem consists of a capacitated directed network, in which the vertices express their preferences over their incident edges. A network flow is stable if there is no group of vertices that all could benefit from rerouting the flow along a walk. Fleiner established that a stable flow always exists by reducing the problem to the stable allocation problem. We present an augmenting-path algorithm for computing a stable flow, the first algorithm that achieves polynomial running time for this problem without using stable allocation as a black-box subroutine. We further consider the problem of finding a stable flow such that the flow value on every edge is within a given interval. For this problem, we devise a simple and fast algorithm, which can also be used to find a solution to the stable marriage problem with forced and forbidden edges in an elegant way. Finally, we study the stable multicommodity flow model of Király and Pap. We present several graph-based reductions showing that it is without loss of generality to assume that there are no commodity-specific preference lists at the vertices and no commodity-specific capacities on the edges. We further show that it is NP-complete to decide whether an integral solution exists.

Stable Matching with Uncertain Pairwise Preferences

Speakers: Haris Aziz, Peter Biro, Tamas Fleiner, Serge Gaspers, Ronald de Haan, Nicholas Mattei and Baharak Rastegari

Abstract: In this paper we study a two-sided matching problem under preferences, where the agents have independent pairwise comparisons on their possible partners and these preferences may be uncertain. In this case preferences may be intransitive and agents may even have certain cycles in their preferences; e.g. an agent $a$ may prefer $b$ to $c$, $c$ to $d$, and $d$ to $b$, all with probability one. If an instance has such a cycle, then there may not exist any matching that is stable with positive probability. We focus on the computational problems of checking the existence of possibly and certainly stable matchings, i.e., matchings whose probability of being stable is positive or one, respectively. We show that finding possibly stable matchings is NP-hard, even if only one side can have cyclic preferences.

On the other hand we show that the problem of finding a certainly stable matching is polynomial-time solvable if only one side can have cyclic preferences and the other side has transitive preferences, but that this problem becomes NP-hard when both sides can have cyclic preferences. The latter complexity result also implies the hardness of finding a kernel in a special class of directed graphs.

12.30 P.M. Lunch
1:00 P.M. Lunch w/Outlook Talk 2 — David Manlove, University of Glasgow

Selected Algorithmic Open Problems in Matching Under Preferences

Abstract: The research community working on matching problems involving preferences has grown in recent years, but even so, plenty of interesting open problems still exist, many with large-scale practical applications.  In this talk I will outline some of these open problems that are of an algorithmic flavour, thus giving an outlook on some of the research challenges in matching under preferences that the computer science community might seek to tackle over the next decade.

2:00 P.M. Session 7
“Strategic” Behavior in a Strategy-Proof Environment

Speakers: Avinatan Hassidim, Assaf Romm and Ran Shorrer

Abstract: We present direct field evidence of preference misrepresentation under deferred acceptance. A large fraction of highly educated participants, who had been informed about the strategy-proof nature of the mechanism in numerous ways, failed to play truthfully: they ranked a non-funded position above a funded position in the same program. This is despite being informed that rank-ordered lists are never made public, that funding is a positive signal of ability, and that funding comes with no strings attached. Preference misrepresentation is associated with weaker applicants. A laboratory experiment documents a strong negative causal relationship between applicants’ expected desirability and preference misrepresentation.

Stable matching mechanisms are not obviously strategy-proof

Speakers: Itai Ashlagi and Yannai A Gonczarowski

Abstract: Many two-sided matching markets, from labor markets to school choice programs, use a clearinghouse based on the applicant-proposing deferred acceptance algorithm, which is well known to be strategy-proof for the applicants. Nonetheless, a growing amount of empirical evidence reveals that applicants misrepresent their preferences when this mechanism is used. This paper shows that no mechanism that implements a stable matching is obviously strategy-proof for any side of the market, a stronger incentive property than strategy-proofness introduced by Li (2015). A stable mechanism that is obviously strategy-proof for applicants is introduced for the case in which agents on the other side have acyclical preferences. Our findings suggest that strategic reasoning in two-sided markets requires more cognitive effort by participants than in one-sided markets.

Strategic Stable Marriage

Speakers: James Bailey and Craig Tovey

Abstract: We study stable marriage where individuals strategically submit private preference information to a publicly known stable marriage algorithm. We prove that no stable marriage algorithm ensures actual stability at a Nash equilibrium when individuals are strategic. Thus the set of Nash equilibria provides no predictive value nor guidance for mechanism design. We propose the following new minimal dishonesty equilibrium refinement, supported by experimental economics results: an individual will not strategically submit preference list $L$ if there exists a more honest $L’$ that yields as preferred an outcome. Then for all marriage algorithms satisfying monotonicity and IIA, every minimally dishonest equilibrium yields a sincerely stable marriage. This result supports the use of algorithms less biased than the (Gale-Shapley) man-optimal, which we prove yields the woman-optimal marriage in every minimally dishonest equilibrium. However, bias cannot be totally eliminated, in the sense that no monotonic IIA stable marriage algorithm is certain to yield the egalitarian-optimal marriage in a minimally dishonest equilibrium — thus answering an open question of Gusfield and Irving’s in the negative. Finally, we show that these results extend to student placement problems, where women are polyandrous and honest, but not to admissions problems, where women are both polyandrous and strategic.

Making it Safe to Use Centralized Markets: Epsilon – Dominant Individual Rationality and Applications to Market Design

Speakers: Ben Roth and Ran Shorrer

Abstract: A critical, yet under-appreciated feature of market design is that centralized markets operate within a broader context; often market designers cannot force participants to join a centralized market. Well-designed centralized markets must induce participants to join voluntarily, in spite of pre-existing decentralized institutions they may already be using. We take the view that centralizing a market is akin to designing a mechanism to which people may voluntarily sign away their decision rights. We study the ways in which market designers can provide robust incentives that guarantee agents will participate in a centralized market. Our first result is negative and derives from adverse selection concerns. Near any game with at least one pure strategy equilibrium, we prove there is another game in which no mechanism can eliminate the equilibrium of the original game.

In light of this result we offer a new desideratum for mechanism and market design, which we term epsilon-dominant individual rationality. After noting its robustness, we establish two positive results about centralizing large markets. The first offers a novel justification for stable matching mechanisms and an insight to guide their design to achieve epsilon-dominant individual rationality. Our second result demonstrates that in large games, any mechanism with the property that every player wants to use it conditional on sufficiently many others using it as well can be modified to satisfy epsilon-dominant individual rationality while preserving its behavior conditional on sufficient participation. The modification relies on a class of mechanisms we refer to as random threshold mechanisms and resembles insights from the differential privacy literature.

Integrating Schools for Centralized Admissions

Speakers: Mehmet Ekmekci and M. Bumin Yenmez

Abstract: As school districts integrate charter schools into centralized admissions in Denver, New Orleans, Newark and Washington D.C., some schools have stayed out. Likewise, there is a recent proposal in Boston to unify public school admissions in a centralized clearinghouse. We provide a new framework to study the incentives of a school to join a clearinghouse, and we show that each school prefers to remain out of the system when others join it. Therefore, our analysis provides an explanation of why some charter schools have evaded (or may evade) the clearinghouse. To overcome this issue, we propose two schemes that policymakers can use to incentivize schools to join the system, achieving the desired integration of schools.

3.40 P.M. Break
4:00 P.M. Session 8
Competitive Equilibrium and a Dynamic Auction for Allocation with Priorities

Speakers: Isa Hafalir and Tayfun Sonmez

Abstract: We consider an assignment problem where a set of items must be allocated to a set of agents. Agents have different priorities for claiming different items, and personalized (discrete) prices or terms of the contracts can be used in the allocation process. We allow each item to have some copies for sale and some copies for free. In this model, we first define a competitive equilibrium with personalized prices and show that there exists a strategy-proof mechanism that results in the best competitive equilibrium. We then propose a Nash incentive-compatible dynamic auction that results in the same competitive equilibrium allocation.

Competitive Equilibria with Indivisible Goods and Generic Budgets

Speakers: Moshe Babaioff, Noam Nisan and Inbal Talgam-Cohen

Abstract: We study competitive equilibria in the basic Fisher market model, but with indivisible goods. Such equilibria fail to exist in the simplest possible market of two players with equal budgets and a single good, yet this is a knife’s edge instance as equilibria exist once budgets are not precisely equal. Is non-existence of equilibria also a knife-edge phenomenon in complex markets with multiple goods? Our computerized search has indicated that equilibria often exist when budgets are “generic”. We prove several existence results both for the case of general preferences and for the special case of additive preferences, and relate competitive equilibria to notions of fair allocation of indivisible items.

Object Allocation via Immediate-Acceptance: Characterizations and an Affirmative Action Application

Speakers: Battal Dogan and Bettina Klaus

Abstract: Which mechanism to use to allocate school seats to students remains a hotly debated question. Meanwhile, immediate acceptance mechanisms remain popular in many school districts. We formalize desirable properties of mechanisms when respecting the relative rank of a school among the students’ preferences is crucial. We show that those properties, together with well-known desirable resource allocation properties, characterize immediate acceptance mechanisms. Moreover, we show that replacing one of the properties, consistency, with a weaker property, non-bossiness, leads to a characterization of a much larger class of mechanisms, which we call choice-based immediate acceptance mechanisms. It turns out that certain objectives that are not achievable with immediate acceptance mechanisms, such as affirmative action, can be achieved with a choice-based immediate acceptance mechanism.

Fair Division of goods, bads, and satiable items

Speakers: Anna Bogomolnaia, Herve Moulin, Fedor Sandomirskiy and Elena Yanovskaya

Abstract: How should we divide items that can be desirable (goods) or not (bads), and can also allow satiation? When all items are goods and preferences are represented by homothetic and concave utility functions, the Competitive Equilibrium with Equal Incomes (CEEI) is famously compelling because it maximizes the Nash product of utilities, is single-valued and easy to compute. The CEEI for dividing only bads similarly captures all critical points of the Nash product on the efficient frontier. But it is far from resolute or easy to compute: the number of allocations distinct welfare-wise can be exponential in the number of agents and items. General problems behave as if we divide only goods, or as if we divide only bads. In the former case, everyone who can be is strictly better off than zero (the ex ante utility), the CEEI is unique and maximizes the Nash product of utilities. In the latter, everyone is strictly worse off than zero, and the CEEI collects all critical points of the Nash product of disutilities. Thus the task of dividing a mixed manna is either good news for everyone, or bad news for everyone. We refine our results in the practically important case of linear preferences, where the axiomatic comparison between the division of goods and that of bads is especially sharp. When we divide goods and the manna improves, everyone weakly benefits under the CEEI rule; but no reasonable rule for dividing bads can be similarly Resource Monotonic. Also, the much larger set of Non Envious and Efficient divisions of bads can be disconnected, so it admits no continuous selection.

5.20 P.M. Break
5.30 P.M. Invited Talk 4 — Marek Pycia, UCLA

Invariance and Matching Market Outcomes

Abstract: The empirical studies of school choice provide evidence that standard measures of admission outcomes are the same for many Pareto efficient mechanisms that determine the market allocation based on ordinal rankings of individual outcomes. The paper shows that two factors drive this intriguing puzzle: market size and the invariance properties of the measures for which the regularity has been documented. In addition, the talk will explore the consequences of these findings: the usefulness of non-invariant outcome measures and of mechanisms that elicit preference intensities.

6.15 P.M. Closing Remarks
6.30 P.M. END

Lemma: Let \mu be any individually rational matching with respect to strict preferences P and let M' be all men who prefer \mu to \mu_M. If M' is nonempty, there is a pair (m, w) that blocks \mu such that m is in M-M' and w is in \mu(M').

Proof. 

Case 1: \mu(M')\not=\mu_M(M'), i.e. the set of men who prefer \mu to \mu_M is matched to different sets of women under \mu and \mu_M.

Pick any w\in\mu(M')-\mu_M(M'). Such a w exists: both matchings are one-to-one, so \mu(M') and \mu_M(M') have the same size |M'|, and since they are distinct sets, \mu(M')\not\subset\mu_M(M').

Denote m'=\mu(w) and m=\mu_M(w); then m'\in M', and m\not\in M' (if m were in M', we would have w=\mu_M(m)\in\mu_M(M')). Since m\not\in M' and \mu_M(m)=w\not=\mu(m) (note \mu(w)=m'\not=m), it follows that w=\mu_M(m)\succ_m\mu(m).

Note that m\succ_w m'; otherwise (m', w) would block \mu_M (since m'\in M' means w=\mu(m')\succ_{m'}\mu_M(m')), contradicting the stability of \mu_M.

Therefore (m, w) blocks \mu, with w\in\mu(M') and m\in M-M'.

 

Case 2: \mu(M')=\mu_M(M'), i.e. the set of men who prefer \mu to \mu_M is matched to the same set of women under \mu and \mu_M. Write W'=\mu(M')=\mu_M(M') for this common set.

Consider the run of the men-proposing deferred acceptance algorithm that produces \mu_M, and let w be the last woman in W' to receive a proposal from a member of M'; denote this man by m'. There are two possibilities: 1) w had not received any proposal before, and \mu_M(m')=w; or 2) w was already engaged to some other man m, whom she rejects in favor of m'.

1) is impossible: since w\in W'=\mu(M'), there is some m''\in M' with \mu(m'')=w, and since m''\in M', w=\mu(m'')\succ_{m''}\mu_M(m''). Because men propose in order of preference, m'' must have proposed to w (and been rejected) before being matched to \mu_M(m''), contradicting the assumption that w had received no earlier proposal.

2) i. m\not\in M': otherwise, after being rejected by w, m would go on to propose to \mu_M(m)\in W', so some woman in W' would receive a proposal from a member of M' after w, contradicting the choice of w. Since m was rejected by w during the algorithm, w\succ_m\mu_M(m); and since m\not\in M', \mu_M(m)\succeq_m\mu(m). This implies w\succ_m\mu(m).

ii. Since \mu(w)\in M' prefers \mu to \mu_M, he proposes to w at some point and is eventually rejected; because m''s proposal is the last one that w receives from M', w must have rejected \mu(w), her mate under \mu, before she rejects m. Women only reject in favor of men they like better, so m\succ_w\mu(w).

Combining i and ii, we conclude that (m, w) blocks \mu, with m\in M-M' and w\in W'=\mu(M').

Q.E.D.
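Both cases of the proof lean on the men-proposing deferred acceptance algorithm, so a concrete implementation may be useful. Below is a minimal Python sketch (the 3x3 instance and all names are ours, purely for illustration) that computes the M-optimal stable matching \mu_M and searches a matching for blocking pairs:

# Men-proposing deferred acceptance (Gale-Shapley), a minimal sketch.
# Preferences are complete strict rankings, most preferred first.
# The 3x3 instance is hypothetical, chosen only for illustration.

men_prefs = {
    "m1": ["w1", "w2", "w3"],
    "m2": ["w1", "w3", "w2"],
    "m3": ["w2", "w1", "w3"],
}
women_prefs = {
    "w1": ["m2", "m1", "m3"],
    "w2": ["m1", "m3", "m2"],
    "w3": ["m3", "m1", "m2"],
}

def deferred_acceptance(men_prefs, women_prefs):
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}     # index of the next woman m proposes to
    engaged = {}                                # woman -> man she currently holds
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:                    # first proposal to w: she holds m
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:  # w prefers m: she dumps her fiance
            free.append(engaged[w])
            engaged[w] = m
        else:                                   # w rejects m; he proposes again later
            free.append(m)
    return {m: w for w, m in engaged.items()}   # man -> woman; this is mu_M

def blocking_pairs(matching, men_prefs, women_prefs):
    inv = {w: m for m, w in matching.items()}
    pairs = []
    for m, prefs in men_prefs.items():
        for w in prefs[: prefs.index(matching[m])]:       # women m strictly prefers
            if women_prefs[w].index(m) < women_prefs[w].index(inv[w]):
                pairs.append((m, w))                      # w also strictly prefers m
    return pairs

mu_M = deferred_acceptance(men_prefs, women_prefs)
print(mu_M)                                       # {'m1': 'w2', 'm2': 'w1', 'm3': 'w3'}
print(blocking_pairs(mu_M, men_prefs, women_prefs))  # [] -- mu_M is stable

Feeding blocking_pairs any individually rational \mu that some men prefer to \mu_M lets one verify the lemma's conclusion by hand on small instances.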

Alternative proof: 

Case 1: The same as above.

Case 2: \mu(M')=\mu_M(M')=W'.

Define the new market (M', W', P'), where P'(m) is P(m) restricted to W'\cup\{m\} and P'(w) is P(w) restricted to M'\cup\{w\}, for all m\in M', w\in W'.

Note that \forall m\in M', w\in W', we must have \mu(m)\succ_m\mu_M(m) (by the definition of M') and \mu_M(w)\succ_w\mu(w) (otherwise (\mu(w), w) would block \mu_M). We can write this as:

\mu_M\succ_{W'}\mu

and

\mu\succ_{M'}\mu_M.

In addition, truncate the preferences: under P'(w), w herself is now ranked just below \mu(w), and under P'(m), m himself is now ranked just below \mu_M(m).

In other words, the only men in M' who remain acceptable to w are those m such that m\succeq_w \mu(w); in particular, \mu_M(w) is acceptable to w under P' for all w\in W'.

Note that \mu_M restricted to M'\cup W' is still stable for (M', W', P'), because any pair that blocks \mu_M under P' would also block it under P.

Let \mu_{M'} be the M'-optimal stable matching for (M', W', P'). Since \mu_M restricted to M'\cup W' is stable in this market, the M'-optimality of \mu_{M'} and the weak Pareto-optimality theorem give

(*) \mu_{M'}\succeq_{M'}\mu_M, with \mu_{M'}\ne\mu_M on M'\cup W'.

Indeed, if \mu_{M'}=\mu_M on M'\cup W', then by \mu\succ_{M'}\mu_M we would have \mu\succ_{M'}\mu_{M'} for every man in M', contradicting the weak Pareto optimality of the M'-optimal stable matching (no individually rational matching is strictly preferred to it by all men).

Furthermore, \mu_{M'}\succeq_{W'}\mu by the construction of P': any man that w\in W' is matched to under an individually rational matching of (M', W', P') is weakly preferred by her to \mu(w).

Define \mu' on M\cup W by \mu'=\mu_{M'} on M'\cup W', and \mu'=\mu_M on (M-M')\cup(W-W').

Combining this with (*), we have \mu'\succeq_M\mu_M with \mu'\ne\mu_M. Since \mu_M is the M-optimal stable matching, no other matching that every man weakly prefers to \mu_M can be stable, so \mu' is not stable for (M, W, P); let \{m, w\} be a blocking pair.

i). If m\in M' and w\in W', then

w\succ_m\mu'(m)=\mu_{M'}(m)

and

m\succ_w\mu'(w)=\mu_{M'}(w).

By the construction of P', m and w are then mutually acceptable under P', so \{m, w\} would block \mu_{M'}, contradicting the stability of \mu_{M'} in (M', W', P'). So the blocking pair cannot lie entirely in the new market.

ii). If m\in M' and w\in W-W', then

w\succ_m\mu'(m)=\mu_{M'}(m)\succeq_m\mu_M(m)

and

m\succ_w\mu'(w)=\mu_M(w).

So \{m, w\} would block \mu_M, contradicting its stability; this case cannot occur either.

iii). If m\in M-M' and w\in W-W', then

w\succ_m\mu'(m)=\mu_M(m)

and

m\succ_w\mu'(w)=\mu_M(w),

so \{m, w\} would again block \mu_M, a contradiction.

iv). If m\in M-M' and w\in W', then

w\succ_m\mu'(m)=\mu_M(m)\succeq_m\mu(m)

and

m\succ_w\mu'(w)=\mu_{M'}(w)\succeq_w\mu(w).

So \{m, w\} blocks \mu, with m\in M-M' and w\in W'=\mu(M'): it is the desired blocking pair.

Q.E.D.

 

 

References:

Gale, D., & Sotomayor, M. (1985). Some Remarks on the Stable Matching Problem. Discrete Applied Mathematics, 11, 223–232.

Roth, A. E., & Sotomayor, M. (1990). Two-Sided Matching: A Study in Game-Theoretic Modeling and Analysis. Cambridge University Press.

Bounded: \exists M s.t. |f(x)|\le M for all x\in \mathbb E.

Uniformly bounded: \exists M s.t. |f_n(x)|<M for all x\in\mathbb E, n=1, 2, 3,...

Pointwise bounded: |f_n(x)|<\phi(x), where \phi is a finite-valued function and x\in\mathbb E, n=1, 2, 3,...

K is compact, \{f_n\} is pointwise bounded and equicontinuous \Rightarrow \{f_n\} is uniformly bounded, and \{f_n\} contains a uniformly convergent subsequence.
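A standard example showing that equicontinuity cannot be dropped here: f_n(x)=\sin(nx) on [0, 2\pi] is uniformly bounded, yet no subsequence converges uniformly (indeed none converges pointwise everywhere), because the family is not equicontinuous.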

 

Continuous: f: \mathbb E\subset X\to Y, p\in\mathbb E, \forall \varepsilon>0, \exists \delta>0 s.t. \forall x\in\mathbb E, d_X(x, p)<\delta\Rightarrow d_Y(f(x), f(p))<\varepsilon.

Uniformly continuous: \forall \varepsilon>0, \exists \delta>0 s.t. \forall p, q\in\mathbb E, d_X(p, q)<\delta\Rightarrow d_Y(f(p), f(q))<\varepsilon.

Equicontinuous (for a family \mathcal F of functions): \forall \varepsilon>0, \exists \delta>0 s.t. d(x, y)<\delta\Rightarrow |f(x)-f(y)|<\varepsilon for every f\in\mathcal F.

Equicontinuous \Rightarrow uniformly continuous.

K compact, \{f_n\}\subset\mathcal C(K) uniformly convergent \Rightarrow \{f_n\} equicontinuous.

 

Convergent: A sequence \{p_n\} in a metric space X, \exists p\in X s.t. \forall \varepsilon>0, \exists N s.t. n\ge N\Rightarrow d(p_n, p)<\varepsilon.

Uniformly convergent (definition 1): a sequence of functions \{f_n\}, n=1, 2, 3, ... converges uniformly on \mathbb E to a function f if \forall \varepsilon>0, \exists N s.t. n\ge N\Rightarrow |f_n(x)-f(x)|\le \varepsilon for all x\in\mathbb E.

Uniformly convergent (definition 2, Cauchy criterion): \forall \varepsilon>0, \exists N s.t. m\ge N, n\ge N, x\in\mathbb E\Rightarrow |f_n(x)-f_m(x)|\le \varepsilon.

Pointwise convergent: f(x)=\lim_{n\to\infty} f_n(x) (x\in\mathbb E).

If \{f_n\} converges uniformly on \mathbb E and \lim_{t\to x}f_n(t) exists for each n (x a limit point of \mathbb E), then \lim_{t\to x}\lim_{n\to\infty}f_n(t)=\lim_{n\to\infty}\lim_{t\to x}f_n(t).

K is compact, \{f_n\} is continuous and pointwise convergent to a continuous f, f_n\ge f_{n+1} \Rightarrow f_n\to f uniformly.
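A standard counterexample showing why the limit must be continuous: f_n(x)=x^n on [0, 1] is continuous, satisfies f_n\ge f_{n+1}, and converges pointwise, but the limit (0 for x<1 and 1 at x=1) is discontinuous, and the convergence is not uniform.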

f_n\to f uniformly on [a, b], where f_n\in\mathcal R(\alpha) \Rightarrow f\in\mathcal R(\alpha) and \int_a^b f\, d\alpha=\lim_{n\to\infty}\int_a^b f_n\, d\alpha.

\{f_n\} is differentiable on [a, b], \{f_n(x_0)\} converges for some x_0\in[a, b], and \{f_n'\} converges uniformly on [a, b] \Rightarrow \{f_n\} converges uniformly to some f, and f'(x)=\lim_{n\to\infty}f_n'(x) (a\le x\le b).

\{f_n\} is differentiable, \{f_n'\} is uniformly bounded \Rightarrow \{f_n\} is equicontinuous.
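The classical warning example for the differentiation statement: f_n(x)=\sin(nx)/\sqrt n converges uniformly to 0 on \mathbb R, yet f_n'(x)=\sqrt n\cos(nx) diverges (e.g., f_n'(0)=\sqrt n\to\infty). Uniform convergence of \{f_n\} alone says nothing about \{f_n'\}; the uniform convergence of the derivatives assumed above is essential.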

 

 

by Calvin Leather, Yuqing Hu

In response to this article: http://www.jneurosci.org/content/36/39/10016

Recent literature in reinforcement learning has demonstrated that the context in which a decision is made influences subjects' reports and the neural correlates of perceived reward. For example, consider visiting a restaurant where you have previously had many excellent meals. Expecting another excellent meal, you receive a merely satisfactory one, and your subjective experience is negative. Had you received this objectively decent meal elsewhere, without the positive expectations, your experience would have been better. This intuition is captured in adaptive models of value, where a stimulus's reward (i.e., Q-value) is expressed relative to the expected reward in a situation; such models have been found to accurately capture activation in value regions (Palminteri et al., 2015). An adaptive model is also beneficial because it allows reinforcement learning models to learn to avoid punishment: avoiding a contextually-expected negative payoff yields a positive reward. This had previously been challenging to express within the same framework as reinforcement learning models (Kim et al., 2006).

Alongside these benefits, there has been concern that adaptive models might be confused by certain choice settings. In particular, agents with an adaptive model of value would have an identical hedonic experience (i.e., identical Q-values in the model) when receiving a reward of +10 units in a setting where they might receive either +10 or 0 units, and when receiving a reward of 0 units in a setting where they might receive either -10 or 0 units (we will refer to this later as the 'confusing situation'). With this issue in mind, Burke et al. (2016) develop an extension of the adaptive model in which contextual information has only a partial influence on reward. So, whereas the fully-adaptive model assigns a subjective reward (Q-value) of +5 units to receiving an objective reward of 0 in the context where the possibilities were 0 and -10, and an absolute model ignoring context would experience a reward of 0, the Burke model would experience a reward of +2.5. It takes the context into account, but only partially, and accordingly they call their model 'partially-adaptive'. Burke et al. compare this partially-adaptive model with a fully-adaptive model and an absolute model (which ignores context). When subjects were given the same contexts and choices as in the confusing situation outlined above, Burke et al. found that the partially-adaptive model reflects neural data in the vmPFC and striatum better than the fully-adaptive or absolute models.
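To make the contrast between the three encodings concrete, here is a minimal Python sketch of the steady-state intuition (the function names and the weight of 0.5 are our own illustrative choices; Burke et al. fit a trial-by-trial learning model, so this is only a cartoon):

# Subjective (Q-value-like) reward for an outcome r in a context whose
# expected value is ev, under three candidate encodings.
def absolute(r, ev):
    return r                      # context is ignored

def fully_adaptive(r, ev):
    return r - ev                 # reward relative to the contextual expectation

def partially_adaptive(r, ev, w=0.5):
    return r - w * ev             # context has only partial influence

gain_ev = (10 + 0) / 2            # context with possible outcomes {+10, 0}
loss_ev = (-10 + 0) / 2           # context with possible outcomes {0, -10}

# The 'confusing situation': +10 in the gain context vs. 0 in the loss context.
for model in (absolute, fully_adaptive, partially_adaptive):
    print(model.__name__, model(10, gain_ev), model(0, loss_ev))
# absolute:           10  vs 0    -> distinguishable
# fully_adaptive:     5.0 vs 5.0  -> identical Q-values: the confusion
# partially_adaptive: 7.5 vs 2.5  -> distinguishable again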

The partially-adaptive model is interesting, as it has the same advantages as the fully-adaptive model (reflecting subjective experience and neural data well, allowing for avoidance learning) while potentially avoiding the confusion outlined above. Here, we seek to investigate the implications and benefits of Burke et al.'s partially-adaptive model more thoroughly. In particular, we will consider the confusing situation's ecological validity and potential resolution, whether it is reasonable that partially-adaptive representations might extend beyond decision (to learning and memory), and the implications of the theory for future work. Before we do this, we would like to briefly present an alternative interpretation of their findings.

The finding that the fMRI signal is best classified by a partially-adaptive model does not necessarily entail that the brain utilizes a partially-adaptive encoding as the value over which decisions occur. All neurons within a voxel can influence the fMRI signal, so it is possible that the signal reflects a combination of multiple activity patterns present within a voxel. This mixing phenomenon has been used to explain the success of decoding in early visual cortex, where the overall fMRI signal in a voxel reflects the specific distribution of orientation-specific columns within the voxel (Swisher et al., 2010). Similarly, the partially-adaptive model's fit might be explained by the averaged contribution of some cells with a fully-adaptive encoding and other cells with absolute encodings of value (within biological constraints). This concern is supported by the co-occurrence of adaptive and non-adaptive cells in macaque OFC (Kobayashi et al., 2010). Therefore, more work is needed to understand the local circuitry and encoding heterogeneity of regions supporting value-based decision making.

Returning to the theory presented by the authors, we would like to consider whether a fully-adaptive encoding of value is truly suboptimal. The type of confusing situation presented above was shown to be problematic for real decision makers in Pompilio and Kacelnik (2010), where starlings became indifferent between two options with different objective values, due to the contexts those options appeared in during training. However, this type of choice context might not be ecologically valid. If two stimuli are exclusively evaluated within different contexts, as in Pompilio and Kacelnik, it is not relevant whether they are confusable, as the decision maker would never need to compare them.

Separate from the confusion problem's ecological validity is the inquiry into its solution. Burke et al. suggest that partially-adaptive encoding avoids confusion, and therefore should be preferred to a fully-adaptive encoding. However, this might only be true for the particular payoffs used in the experiment. Consider a decision maker who makes choices in two contexts. One, the loss context, has two outcomes, L0 (worth 0) and Lneg (worth less than 0), while the other, the gain context, has two outcomes, G0 (worth 0) and Gpos (worth more than 0). If L0 - Lneg = Gpos - G0, as in Burke et al., a fully-adaptive agent would be indifferent between G0 and Lneg (and between Gpos and L0). A partially-adaptive agent, however, would not be indifferent, as the value of G0 would be higher than that of Lneg. Now consider what happens if we raise the value of Gpos. By doing this, we can raise the average value of the gain context by any amount. Consider what this does to the experienced value (Q-value) of G0: as we increase the average reward of the context, G0 becomes a poorer option in terms of its Q-value. Note that since the only reward we are changing is Gpos, the Q-values for the loss context do not change. Therefore, we can decrease the Q-value of G0 until it is equal to that of Lneg. This is exactly the confusion that we had hoped the partially-adaptive model would avoid. Furthermore, this argument works for any partially-adaptive model: we cannot defeat this concern by parameterizing the influence of context in the update equations and manipulating this parameter.
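Concretely, continuing the illustrative sketch above with weight 0.5: Q(G0) = 0 - 0.5·(Gpos/2) = -Gpos/4, while, with Lneg = -10 and L0 = 0, Q(Lneg) = -10 - 0.5·(-5) = -7.5. Setting Gpos = 30 makes the two Q-values exactly equal, reproducing the confusion; and because any other weight merely changes which value of Gpos does the trick, no choice of the contextual weight escapes the argument.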

As mentioned earlier, it is possible that some cells encode partially-adaptive value, while others have a fully- or non-adaptive encoding. We should be open to the possibility that even if partially-adaptive value occurs in decision, non-adaptive encodings might be used for the storage of value information and transformed at the time of decision into the observed partially-adaptive signals. Why might this be reasonable? An agent who maintains partially-adaptive representations in memory faces several computational issues. One is efficiency: a partially-adaptive representation requires S*C quantities or distributions to be stored (one for each of the S stimuli in each of the C contexts). On the other hand, consider an agent who stores non-adaptive stimulus values and the average value of each context, and then adjusts stimulus values using the context values at the time of decision. They could utilize the same information but store only S+C quantities. Another problem with storing value information in an adaptive format is the transfer of learning across contexts. If I encounter a stimulus in context A, my experiences should alter my evaluation of that stimulus in context B: getting sick after eating a food should reduce my preference for that food in every context. An agent who stores value adaptively would need to update one quantity for the encountered stimulus in each context, namely C quantities; an agent who stores value non-adaptively updates only a single quantity. So, even if decision utilizes a partially-adaptive encoding, a non-adaptive representation is more efficient for storage. Furthermore, non-adaptive information is present in the state of the world (e.g., the concentration of sucrose in a juice does not adapt to expectations), so this information is available to agents during learning. Accordingly, it must be asked why agents would discard information that might ease learning. While these differences do not necessarily affect the authors' claims about value during decision, they should be considered when investigating the merits of different models of value.
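For scale (the numbers are ours, purely illustrative): with S = 100 stimuli and C = 10 contexts, adaptive storage needs 100 × 10 = 1,000 stored values against 100 + 10 = 110 for the non-adaptive scheme, and a single new experience with one stimulus forces 10 updates instead of 1.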

In sum, while partial adaptation is an exciting theory that may provide novel motivations for empirical work, more effort is needed to understand when and where it is optimal. If we can overcome these concerns, the new theory opens up potential investigation into the nature of contextual influence: if we allow a range of contextual influence (via a parameter) in the partially-adaptive model, do certain individuals show more contextual influence, and does this heterogeneity correlate with learning performance? Do different environments (e.g., noise in signals conveying the context) alter the parameter? Do different cells or regions respond with different amounts of contextual influence? As such, the theory opens up new experimental hypotheses that might allow us to better understand how the brain incorporates context into learning and decision-making.

 

 

References

 

Burke, C. J., Baddeley, M., Tobler, P. N., & Schultz, W. (2016). Partial Adaptation of Obtained and Observed Value Signals Preserves Information about Gains and Losses. Journal of Neuroscience, 36(39), 10016–10025. doi:10.1523/JNEUROSCI.0487-16.2016

Kim, H., Shimojo, S., & O'Doherty, J. P. (2006). Is Avoiding an Aversive Outcome Rewarding? Neural Substrates of Avoidance Learning in the Human Brain. PLoS Biology, 4(8), 1453–1461. doi:10.1371/journal.pbio.0040233

Kobayashi, S., Pinto de Carvalho, O., & Schultz, W. (2010). Adaptation of Reward Sensitivity in Orbitofrontal Neurons. Journal of Neuroscience, 30(2), 534–544. doi:10.1523/JNEUROSCI.4009-09.2010

Palminteri, S., Khamassi, M., Joffily, M., & Coricelli, G. (2015). Contextual modulation of value signals in reward and punishment learning. Nature Communications, 6, 1–14. doi:10.1038/ncomms9096

Pompilio, L., & Kacelnik, A. (2010). Context-dependent utility overrides absolute memory as a determinant of choice. PNAS, 107(1), 508–512. doi:10.1073/pnas.0907250107

Swisher, J. D., Gatenby, J. C., Gore, J. C., Wolfe, B. A., Moon, H., Kim, S., & Tong, F. (2010). Multiscale pattern analysis of orientation-selective activity in the primary visual cortex. Journal of Neuroscience, 30(1), 325–330. doi:10.1523/JNEUROSCI.4811-09.2010

Enjoyed the beautiful Chicago summer!!

Frontiers of Economic Theory and Computer Science

August 12–13, 2016

Saieh Hall for Economics, Room 021
ORGANIZERS
BENJAMIN BROOKS, UNIVERSITY OF CHICAGO
MOHAMMAD AKBARPOUR, UNIVERSITY OF CHICAGO

With the advent of the internet economy, there has been a remarkable convergence in lines of inquiry between the fields of economic theory and computer science. The emergence of new markets mediated by information technology has spurred new research on how strategic agents interact with complex institutions, and also on how best to design institutions in the face of limited information and limited computational resources. The disciplines of game theory and complexity theory provide complementary approaches to understanding these issues. This conference brings together leading researchers from these fields to share and discuss exciting new developments. The broader goal is to facilitate a dialogue on how best to advance the common objective of understanding strategic behavior and institutional design in complex environments.

 

 

Program

Friday, August 12

Auctions and Pricing in Markets with Complex Constraints
Stanford University
Discussant: Tim Roughgarden
Combinatorial Auctions via Posted Prices
Blavatnik School of Computer Science at Tel-Aviv University
Perfect Competition in Markets with Adverse Selection
University of Pennsylvania
Games of Incomplete Information Played By Statisticians
Microsoft Research
Discussant: Vasilis Syrgkanis
Multi-dimensional Virtual Values and Second Degree Price Discrimination
Pennsylvania State University
Learning and Efficiency in Games with Dynamic Population
Cornell University

Saturday, August 13

Strong Duality for a Multiple-Good Monopolist
Massachusetts Institute of Technology
Break
Motivational Ratings
Stanford University
Game Abstractions for Counterfactual Predictions in Online Market
Facebook, Inc.
Information Design
Princeton University
Affirmative Action in Assignment Mechanisms
Massachusetts Institute of Technology

It’s really great to go back to Peking University 🙂

Here is the link to the news (updated on June 30): http://www.econ.pku.edu.cn/displaynews.php?id=101395

Conference Agenda

June 25 – 26, 2016

Room 307, School of Economics, Peking University

Organizers: Xuezheng Qin and Qingmin Liu

Sponsor: School of Economics, Peking University

June 25 (Saturday)

08:00 – 08:30 Registration

08:30 – 08:45 Welcome Speech

Qixiang Sun, Professor and Dean, School of Economics at PKU

08:45 – 09:45 Xi Weng (Peking University):

“A Theory of Organizational Dynamics: Internal Politics and Efficiency”

(joint with Hongbin Cai and Hong Feng)

09:45 – 10:45 Dilip Abreu (Princeton University):

“Bargaining with One‐sided Asymmetric Information under Ideal

Reputational Conditions” (joint work with David Pearce and Ennio

Stacchetti)

10:45 – 11:05 Photo Session

Venue: Outside the main entrance of SEPKU Building (Ground Floor)

11:05 – 11:25 Coffee Break outside Room 307, SEPKU Building

11:25 – 12:25 Ning Sun (Shanghai University of Economics and Finance)

“A Theory of Marriage with Mutually Consented Divorces”

12:25 – 14:00 Lunch in Room B103 (Underground level 1 of SEPKU Building)

14:00 – 15:00 Xianwen Shi (University of Toronto):

“The Dimensions of Consensus” (joint with Alex Gershkov and Benny

Moldovanu)

15:00 – 16:00 Wing Suen (Hong Kong University):

“Competition for Attention in the News Media Market” (joint with

Heng Chen)

16:00 – 16:20 Coffee Break outside Room 307, SEPKU Building

16:20 – 17:20 Felix Kübler (University of Zurich):

“The identification of beliefs from asset demand” (joint with Herakles

Polemarchakis)

18:30 – 20:30 Reception Dinner (by invitation only)

Venue: ZHILIHUIGUAN Restaurant

June 26 (Sunday)

09:00 – 10:00 Zenan Wu (Peking University):

“Managerial Turnover and Entrenchment” (joint with Xi Weng)

10:00 – 11:00 Sylvain Chassang (Princeton University):

TBA

11:00 – 11:20 Coffee Break outside Room 307, SEPKU Building

11:20 – 12:20 Tilman Börgers (University of Michigan):

“Revealed Relative Utilitarianism” (joint with Yan-Min Choo)

12:20 – 14:00 Lunch in Room B103 (Underground level 1 of SEPKU Building)

14:00 – 15:00 Qingmin Liu (Columbia University):

“Auctions with Limited Commitment” (joint with Konrad Mierendorff

and Xianwen Shi)

15:00 – 16:00 Jimmy Chan (Fudan University):

“Fiscal Receipt Lottery”

16:00 – 16:20 Coffee Break outside Room 307, SEPKU Building

16:20 – 17:20 Cheng Sun (Peking University):

Reputation and Social Media

Conference Photo - PKU Summer Conference on Microeconomic Theory


The original paper: download

Summary for each part:

I: The mathematical solution to constructing a rational economic order is to achieve the same marginal rates of substitution between any two goods. This is not realistic, because it relies on the assumption that complete knowledge is commanded by a single central mind, while in real society knowledge is dispersed among separate individuals. This causes a misconception about the nature of the economic problem, which is essentially how to allocate resources efficiently by utilizing the knowledge held by individuals, rather than by utilizing it in an integrated form.

 

II: Utilizing knowledge, which involves communicating it to the planner and among different individuals, is important in designing an efficient economic system, whether it takes the form of central planning, competition, or monopoly. Which kind of economic system is more efficient depends on which can put the existing knowledge to fuller use.

 

III: Different kinds of knowledge define different positions on economic systems. The prominent position of central planning in the public imagination is due to an exaggeration of the importance of scientific knowledge. Selecting a group of experts to command such knowledge addresses only a fraction of the wider problem. The knowledge of particular circumstances, which is not always publicly available, is equally socially useful, although gaining an advantage by using this knowledge is sometimes regarded as disreputable.

 

IV: Economic problems arise as a result of changes in circumstances, making the knowledge of the particular place and time important in making economic decisions. This sort of knowledge, however, cannot be captured in statistics, and therefore cannot be conveyed to a central authority that makes plans based on statistical information.

 

V: Decentralization is necessary for solving economic problems, because adaptation to change requires the knowledge of particular circumstances to be used promptly. A price system coordinates the separate actions of individuals whose visions of their own fields sufficiently overlap through intermediaries, thus bringing about an outcome that could otherwise have been achieved only by central planning with complete information.

 

VI: The price system acts as a mechanism that communicates only the most essential information needed for individuals to take the right action, and it extends the span of resource utilization beyond the control of any single mind. Like language, it is one of the formations upon which the foundation of civilization is built.

 

VII: The dispute about the indispensability of the price system is not purely political dissent; it also reflects intellectual and methodological differences. Schumpeter's argument that the valuation of factors of production is implied in the valuation of consumers' goods is untrue, because that valuation also depends on the supply of the factors. His argument disregards the essential fact of the imperfection of knowledge in the real world. Thus the solution to the economic problem has to be worked out through the interactions of people, each of whom possesses only partial knowledge.

 

In sum, the key take-away ideas are:

 

In the real world, knowledge is dispersed throughout society. The knowledge of particular circumstances of place and time is not always publicly available, but it is useful in making economic decisions. This is an essential feature of the real world's economic problem, and it makes central planning inefficient and infeasible, because central planning requires a single mind to process all the knowledge. Decentralization overcomes this problem via a price system, in which individuals with their own partial knowledge coordinate with one another and utilize resources beyond the control of any one person.

Definition. An outer measure on a nonempty set X is a function \mu^*: \mathcal P(X)\to[0, \infty] that satisfies

1) \mu^*(\emptyset)=0,

2) \mu^*(A)\le\mu^*(B) if A\subset B,

3) \mu^*(\cup_1^\infty A_j)\le \sum_1^\infty\mu^*(A_j).

 

The most common way to obtain an outer measure is to start with a family \mathcal E of “elementary sets” on which a notion of measure is defined (such as rectangles in the plane) and then to approximate arbitrary sets “from the outside” by countable unions of members of \mathcal E. The precise construction is as follows.

 

Proposition. Let \mathcal E\subset\mathcal P(X) and \rho: \mathcal E\to[0, \infty] be such that \emptyset\in\mathcal E, X\in\mathcal E and \rho(\emptyset)=0. For any A\subset X, define

\mu^*(A)=\inf\{\sum_1^\infty\rho(E_j): E_j\in\mathcal E and A\subset \cup_1^\infty E_j\}.

Then \mu^* is an outer measure.

Proof. For any A\subset X, \exists \{E_j\}_1^\infty\subset\mathcal E such that A\subset\cup_1^\infty E_j (take E_j=X for all j), so the definition of \mu^* makes sense. Obviously \mu^*(\emptyset)=0 (take E_j=\emptyset for all j), and \mu^*(A)\le\mu^*(B) for A\subset B because the set over which the infimum is taken in the definition of \mu^*(A) includes the corresponding set in the definition of \mu^*(B). To prove countable subadditivity, suppose \{A_j\}_1^\infty\subset\mathcal P(X) and \epsilon>0. For each j there exists \{E_j^k\}_{k=1}^\infty\subset\mathcal E such that A_j\subset\cup_{k=1}^\infty E_j^k and \sum_{k=1}^\infty\rho(E_j^k)\le\mu^*(A_j)+\epsilon 2^{-j}. But then if A=\cup_1^\infty A_j, we have A\subset\cup_{j, k=1}^\infty E_j^k and \sum_{j, k}\rho(E_j^k)\le\sum_j\mu^*(A_j)+\epsilon, whence \mu^*(A)\le\sum_1^\infty\mu^*(A_j)+\epsilon. Since \epsilon is arbitrary, we are done. \Box
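A standard special case: taking X=\mathbb R, letting \mathcal E consist of \emptyset, \mathbb R, and the open intervals, and setting \rho((a, b))=b-a, the resulting \mu^* is Lebesgue outer measure on \mathbb R.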

 

Definition. If \mu^* is an outer measure on X, a set A\subset X is called \mu^*-measurable if

\mu^*(E)=\mu^*(E\cap A)+\mu^*(E\cap A^c) for all E\subset X.

The inequality \mu^*(E)\le\mu^*(E\cap A)+\mu^*(E\cap A^c) holds for any A and E, so to prove that A is \mu^*-measurable, it suffices to prove the reverse inequality. The latter is trivial if \mu^*(E)=\infty, so we see that A is \mu^*-measurable iff

\mu^*(E)\ge\mu^*(E\cap A)+\mu^*(E\cap A^c) for all E\subset X such that \mu^*(E)<\infty.

If E is a “well-behaved” set such that E\supset A, the equation \mu^*(E)=\mu^*(E\cap A)+\mu^*(E\cap A^c) says that the outer measure of A, \mu^*(A) is equal to the “inner measure” of A, \mu^*(E)-\mu^*(E\cap A^c).

Caratheodory’s Theorem. If \mu^* is an outer measure on X, the collection \mathcal M of \mu^*-measurable sets is a \sigma-algebra, and the restriction of \mu^* to \mathcal M is a complete measure.

Proof. First, we observe that \mathcal M is closed under complements since the definition of \mu^*-measurability of A is symmetric in A and A^c. Next, if A, B\in\mathcal M and E\subset X,

\mu^*(E)=\mu^*(E\cap A)+\mu^*(E\cap A^c)=\mu^*(E\cap A\cap B)+\mu^*(E\cap A\cap B^c)+\mu^*(E\cap A^c\cap B)+\mu^*(E\cap A^c\cap B^c).

But (A\cup B)=(A\cap B)\cup(A\cap B^c)\cup(A^c\cap B), so by subadditivity,

\mu^*(E\cap A\cap B)+\mu^*(E\cap A\cap B^c)+\mu^*(E\cap A^c\cap B)\ge\mu^*(E\cap(A\cup B)),

and hence

\mu^*(E)\ge\mu^*(E\cap(A\cup B))+\mu^*(E\cap(A\cup B)^c).

It follows that A\cup B\in\mathcal M, so \mathcal M is an algebra. Moreover, if A, B\in\mathcal M and A\cap B=\emptyset,

\mu^*(A\cup B)=\mu^*((A\cup B)\cap A)+\mu^*((A\cup B)\cap A^c)=\mu^*(A)+\mu^*(B),

so \mu^* is finitely additive on \mathcal M.

To show that \mathcal M is a \sigma-algebra, it will suffice to show that \mathcal M is closed under countable disjoint unions. If \{A_j\}_1^\infty is a sequence of disjoint sets in \mathcal M, let B_n=\cup_1^n A_j and B=\cup_1^\infty A_j. Then for any E\subset X,

\mu^*(E\cap B_n)=\mu^*(E\cap B_n\cap A_n)+\mu^*(E\cap B_n\cap A_n^c)=\mu^*(E\cap A_n)+\mu^*(E\cap B_{n-1}),

so a simple induction shows that \mu^*(E\cap B_n)=\sum_1^n\mu^*(E\cap A_j). Therefore,

\mu^*(E)=\mu^*(E\cap B_n)+\mu^*(E\cap B_n^c)\ge\sum_1^n\mu^*(E\cap A_j)+\mu^*(E\cap B^c),

and letting n\to\infty we obtain

\mu^*(E)\ge\sum_1^\infty\mu^*(E\cap A_j)+\mu^*(E\cap B^c)\ge\mu^*(\cup_1^\infty(E\cap A_j))+\mu^*(E\cap B^c)=\mu^*(E\cap B)+\mu^*(E\cap B^c)\ge \mu^*(E).

All the inequalities in the last calculation are thus equalities. It follows that B\in\mathcal M and taking E=B that \mu^*(B)=\sum_1^\infty\mu^*(A_j), so \mu^* is countably additive on \mathcal M. Finally, if \mu^*(A)=0, for any E\subset X we have

\mu^*(E)\le\mu^*(E\cap A)+\mu^*(E\cap A^c)=\mu^*(E\cap A^c)\le\mu^*(E),

so that A\in\mathcal M. Therefore \mu^*|_{\mathcal M} is a complete measure. \Box
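For instance, applying Caratheodory's theorem to Lebesgue outer measure on \mathbb R produces the \sigma-algebra of Lebesgue measurable sets (which contains the Borel sets), and the restriction of \mu^* to it is Lebesgue measure.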

Let go of all thoughts of limitations about how you should or should not behave based on how old or young you are. Age is nothing but a number, “an issue of mind over matter. If you don’t mind, it doesn’t matter.” ~ Mark Twain

References:

[1] Gerald B. Folland, Real Analysis: Modern Techniques and Their Applications, 2nd ed., pp. 28–30.

[2] Purpose Fairy’s 21-Day Happiness Challenge, http://www.jrmstart.com/wordpress/wp-content/uploads/2014/10/Free+eBook+-+PurposeFairys+21-Day+Happiness+Challenge.pdf.