
Abstracts

Chapman University
Agglomeration and the Extent of the Market    [pdf]
Abstract: Cities and marketplaces are central to economic development, but we know little about why such agglomerations initially form. I argue that evolutionary forces cause agglomerations to emerge when individuals desire to spatially coordinate exchange in complex environments. To test this idea, I perform a laboratory experiment where geographically dispersed individuals bring different goods to a location for trade. I find that individuals spontaneously coalesce to reap the gains from exchange, re-agglomerate at the same locations after shocks, and make location choices that aggregate to create a Zipf population distribution. I also find that there is more agglomeration in economies with a larger variety of goods, that being land-tied reduces agglomeration, and that being land-tied magnifies the effect of variety.

UCLA
Dynamic Matching and Allocation of Tasks    [pdf] (joint work with Kartik Ahuja, Mihaela van der Schaar)
Abstract: In many two-sided markets, the parties to be matched have incomplete information about their characteristics. Each side has an opportunity to learn (some) relevant information about the other before final matches are made. For instance, clients seeking workers to perform tasks often conduct interviews that require the workers to perform some tasks and thereby provide information to both sides. The performance of a worker in such an interview (and hence the information revealed) depends both on the inherent characteristics of the worker and the task and also on the actions taken by the worker (e.g., the effort expended), which are not observed by the client; thus there is moral hazard. Our goal is to derive a dynamic matching mechanism that facilitates learning on both sides before final matches are achieved and ensures that the worker side does not have an incentive to obscure learning of their characteristics through their actions.
We derive such a mechanism that leads to final matchings that achieve optimal performance (revenue) in equilibrium. We show that the equilibrium strategy is long-run coalitionally stable, i.e., there is no subset of workers and clients that can gain by deviating from the equilibrium strategy.

Arizona State University
Many-to-One Dynamic Matching    [pdf]
Abstract: I study the stability of many-to-one matching markets in a dynamic framework with the following features: matching is irreversible, the market evolves over time, and each side of the market is restricted by a quota. I show that in a dynamic framework, pairwise stability is not sufficient for stability. A new strategic behavior arises in such markets: colleges can manipulate the ultimate matching via earlier matchings. Such incentives require cyclic preferences as well as restricted quotas. Absent either of these conditions, one can rely on the results of the related one-to-one dynamic market, a useful trick for computing stability in the static world. Moreover, when preferences are aligned, dynamically stable matchings are equivalent to the statically stable ones.

University of Michigan
Characterizing non-myopic information cascades in Bayesian learning    [pdf] (joint work with Ilai Bistritz, Nasimeh Heydaribeni, Achilleas Anastasopoulos)
Abstract: We consider an environment where a finite number of players need to decide whether or not to buy a certain product (or adopt a trend). The product is either good or bad, but its true value is not known to the players. Instead, each player has her own private information on the quality of the product. Each player can observe the previous actions of other players and estimate the quality of the product. A player can only buy the product once. In contrast to the existing literature on informational cascades, in this work players get more than one opportunity to act. In each turn, a player is chosen uniformly at random from all players and can decide to buy or not to buy.
Her utility is the total expected discounted reward, and thus myopic strategies may not constitute equilibria. We provide a characterization of structured perfect Bayesian equilibria (sPBE) with forward-looking strategies through a fixed-point equation of dimensionality that grows only quadratically with the number of players. In particular, a sufficient state for players' strategies at each time instance is a pair of integers, the first corresponding to the estimated quality of the good and the second indicating the number of players that cannot offer additional information about the good to the rest of the players. We show existence of such equilibria and characterize equilibria with threshold strategies w.r.t. the two aforementioned integers. Based on this characterization we study informational cascades and show that they happen with high probability for a large number of players. Furthermore, only a small portion of the total information in the system is revealed before a cascade occurs.

Universidad del Valle
Schumpeterian Behavior in a CPR Game: Experimental Evidence from Colombian Fisheries Under TURF’s Management    [pdf] (joint work with Daniel Guerrero)
Abstract: This paper studies the behavior of Pacific-Colombian fishermen in a Common-Pool Resource game. The results show that decision-making depends on fishermen's schooling, sex, and last-round payoffs. Focusing on individual information, we observe that human capital, measured in years of schooling, has a significant effect on decision-making. Specifically, players with more schooling adjust their decisions towards lower levels of harvest, moving closer to the cooperative solution. This behavior could be explained by the better-educated subjects’ improved understanding of the information available to them and possible coordination of efforts due to TURF-based management in the zone.
University of Rochester
Collusion under persistent shocks    [pdf] (joint work with Vyacheslav Arbuzov, Gustavo Gudino)
Abstract: We study a repeated Cournot competition model where prices are determined not only by firms' quantities but also by unobservable market shocks (Green and Porter, 1984). Unlike Green and Porter (1984), market shocks are persistent, and today’s market condition affects tomorrow’s market condition. With such persistence, a cheating firm can manipulate its rival’s belief about future market conditions. Such belief manipulation creates another channel for the firm to optimally cheat on the opponent. Despite this additional channel, we show that under certain conditions, firms can still collude. Moreover, persistence in fact makes it easier to collude.

The Ohio State University
Selling shares to many budget constrained bidders: Theory and Experiment    [pdf]
Abstract: Many auctions sell a divisible item that could be sold by shares: shares of a company, mineral rights, computer server capacity, and shares of facilities. If buyers are willing to buy the whole item and have the ability to pay, standard auctions where the highest bidder wins the whole item (e.g., the first price auction) are known to allocate the item efficiently and raise the highest revenue. However, when bidders have budget constraints, selling shares of the item to many bidders could be more reasonable than selling the whole item to the highest bidder. Our study aims to theoretically and experimentally investigate two formats of share auctions, the uniform price auction and the voucher auction (Krishna 2009), which have been suggested and used by practitioners. In particular, we study bidding behavior and revenue implications of the two auctions with budget constrained bidders, compared to the first price auction, which is the most frequently used standard auction.
Theoretical predictions show that under budget-constrained environments, both share auctions can raise more revenue than the first price auction if the number of bidders is high enough. However, the two share auctions have distinctively different patterns in their revenues as budget constraints change. The revenue of the voucher auction is robustly constant regardless of budget constraints, but the revenue of the uniform price auction decreases dramatically when the budget constraint becomes severe. We conducted several laboratory sessions, and the outcomes were qualitatively consistent with the theoretical predictions.

University of Zielona Góra
Interim Correlated Rationalizability in Large Games    [pdf] (joint work with Michael Greinecker, Kevin Reffett and Ł. Woźny)
Abstract: We provide general theoretical foundations for modeling strategic uncertainty in large distributional Bayesian games with general type spaces in terms of a version of interim correlated rationalizability. We then focus on the case in which payoff functions are supermodular in actions, as in much of the literature on global games. This allows us to identify extremal interim correlated rationalizable solutions with extremal interim Bayes-Nash equilibria. No order structure on types is used.

International Institute of Information Technology, Pune
Resolving Deadlocks using All-Pay Auctions    [pdf]
Abstract: This paper proposes a game-theoretic model for handling deadlocks in different systems. It describes the problem of deadlocks in different systems along with the prerequisite conditions a system needs to satisfy in order to be susceptible to deadlocks. The paper then surveys the many existing methods of dealing with deadlocks in such systems and briefly reviews the algorithms and techniques used in real-world systems.
The proposed model is defined, and an example is used to demonstrate its performance. The paper concludes by comparing the proposed model with existing models.

University of South Florida
Dynamic Contracts with Random Monitoring    [pdf]
Abstract: In contractual relationships where the agent executes numerous independent tasks over the lifetime of the contract, it is often infeasible to evaluate his performance on all tasks that he is assigned. Incentives under moral hazard are instead provided by randomly determining whether or not to monitor each of these tasks. We characterize optimal contracts implemented with such random monitoring in a stochastic dynamic environment where the agent's cost type varies over time. We show that the compensation terms the agent is promised for contingencies where monitoring reveals compliance are as good as those for when no monitoring takes place, and for some cost types are better; these latter types receive a monitoring reward. As time passes and the agent becomes richer, the size of the monitoring reward decreases. Compensation on the equilibrium path exhibits downward rigidity, a feature documented empirically in earlier literature.

Indian School of Business
Timely Persuasion    [pdf] (joint work with Zhen Zhou)
Abstract: We consider a regime change game but allow the agents to attack within a time window. Attack is irreversible, and delayed attack is costly. There could be panic-based attacks, i.e., the agents attack thinking others will attack, even though it is not warranted. We propose a simple dynamic information disclosure policy, called “disaster alert”, which at a given date publicly discloses whether the regime is doomed to fail. We show that a timely alert persuades the agents to wait for the alert and not attack if the alert is not triggered, regardless of their private signals, and thus eliminates panic.
Tepper School of Business, Carnegie Mellon University
Persuasion for the Long Run    [pdf] (joint work with Daniel Quigley)
Abstract: We examine a persuasion game where concerns about future credibility are the sole source of current credibility. A long-run sender plays a cheap talk game with a sequence of short-run receivers. We characterise optimal persuasion in this setting, relating it to canonical persuasion problems. We show that long-run incentives do not generally substitute for ex-ante commitment to reporting strategies. A patient sender can achieve the same average payoffs as a sender with ex-ante commitment if and only if (a) monitoring is perfect and (b) the optimal strategy under commitment induces a partitional information structure. We then show how a ‘review aggregator’ can implement average payoffs and information structures arbitrarily close to those available under ex-ante commitment. We examine such a review aggregator in the context of online markets. We also examine the connection between our ‘review aggregator’ and 2002 financial legislation on the release of aggregate statistics regarding financial advice.

City University of New York, Baruch College
Hardness of Learning in Rich Environments and Some Consequences for Financial Markets    [pdf]
Abstract: This paper examines the computational feasibility of the standard model of learning in finance theory. Surprisingly, I find that the Bayesian update formula at the heart of this model is impossible to compute in all but the simplest scenarios. Specifically, using tools from theoretical machine learning, I show that there is no polynomial implementation of the formula unless the independence structure of variables in the data is common knowledge. Next, I demonstrate that there cannot exist a polynomial algorithm to infer the independence structure of variables; consequently, the overall learning problem is intractable.
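As a rough illustration of the scale behind this intractability claim (a hypothetical toy computation, not the paper's construction): without a known independence structure, exact Bayesian updating must operate on the full joint distribution over the variables, whose size grows exponentially in their number.

```python
from itertools import product

def exact_bayes_update(prior, likelihood, evidence):
    """Exact Bayesian update over a joint table of binary features.

    prior: dict mapping each state (a tuple of 0/1s) to its probability.
    likelihood: function (state, evidence) -> P(evidence | state).
    Without a known independence structure, the table has 2**n entries.
    """
    unnorm = {s: prior[s] * likelihood(s, evidence) for s in prior}
    z = sum(unnorm.values())  # marginal probability of the evidence
    return {s: p / z for s, p in unnorm.items()}

n = 16  # number of binary features; the joint table has 2**16 = 65536 entries
states = list(product([0, 1], repeat=n))
prior = {s: 1.0 / len(states) for s in states}  # uniform prior

# Evidence bears only on the first feature; updating still touches every entry.
posterior = exact_bayes_update(prior, lambda s, e: 0.9 if s[0] == e else 0.1, 1)
print(len(posterior))  # table size doubles with each additional feature
```

Each extra variable doubles the table, which is why polynomial-time updating hinges on knowing the independence structure in advance.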
Using the Bayesian update formula when it is computationally infeasible carries risks, and some of these are explored in the latter part of the paper in the context of financial markets. Especially in rich, high-frequency environments, it implies discarding a lot of useful information, and this can lead to paradoxical outcomes. I illustrate this in a trading example where market prices can never reflect an informed trader's information, no matter how many rounds of trade occur. This paper thus provides new theoretical motivation for the use of bounded rationality models in the study of trading and market efficiency, the bound on rationality arising from the computational hardness of learning.

University of Texas, Austin
Strategic exit with information and payoff externalities    [pdf]
Abstract: I consider a stopping game between two players, where observations related to an unknown state of nature arrive at random. Players not only learn from observing each other, but their payoffs also depend on the presence of the counterpart. I derive a general characterization of an equilibrium in this game. As applications, I consider two stopping time games that can be viewed as models of sponsored research: in one, researchers are funded until (if ever) the research project experiences its first failure; in the other, researchers are rewarded if a success is achieved. In either case, the researchers start working on a project of unknown quality. The quality of the project is identified with its ability to generate failures or successes, in the first and second models respectively. The rate of arrival of successes, conditional on the quality of the project, is an increasing function of the total time spent on the sponsored research. Observations of failures or successes are public information.
I find subgame perfect equilibria in both models and show that in the case of two competing researchers, neither equilibrium outcomes nor cooperative solutions are efficient unless research creates no payoff externalities. In either model, at least one of the researchers experiments inefficiently long, so that a designer of a grant competition would like to stop sponsoring one of the players earlier than in equilibrium. Surprisingly, this result holds in the model where the first success is rewarded no matter whether the laggards are rewarded with a smaller prize or punished.

University of Texas, Austin
Competing for success? On dangers of product competition    [pdf]
Abstract: Boeing planned the glitzy unveiling of its new 777X jetliner for mid-March 2019. Tragic events three days earlier prompted it to cancel the event. Could the crashes in Ethiopia and Indonesia have been avoided had Boeing not been under competitive pressure from Airbus? The main question of this paper is how long a producer should experiment with a risky new product before introducing it to consumers. We also study how the length of experimentation is affected by competition in a duopoly, and how it depends on positive or negative correlation between the risks the duopolists face.

New York University
The Excess Method: A Multiwinner Approval Voting Procedure to Allocate Wasted Votes    [pdf] (joint work with Markus Brill)
Abstract: In using approval voting to elect multiple winners to a committee or council, it is desirable that excess votes—approvals beyond those that a candidate needs to win a seat—not be wasted. The excess method does this by sequentially allocating excess votes to a voter’s as-yet-unelected approved candidates, based on the Jefferson method of apportionment. It is monotonic—approving of a candidate never hurts and may help him or her get elected—computationally easy, and less manipulable than related methods.
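Since the excess method builds on the Jefferson method of apportionment (the D'Hondt rule in party-list systems), a minimal sketch of that underlying rule may help; the vote totals below are hypothetical.

```python
def jefferson_dhondt(votes, seats):
    """Jefferson (D'Hondt) apportionment: award each successive seat to the
    party with the highest quotient votes / (seats_won + 1).

    votes: dict mapping party -> vote count. Returns dict party -> seats won.
    """
    won = {party: 0 for party in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[best] += 1
    return won

# Hypothetical vote totals, 7 seats to allocate.
allocation = jefferson_dhondt({"A": 100000, "B": 80000, "C": 30000, "D": 20000}, 7)
print(allocation)  # -> {'A': 3, 'B': 3, 'C': 1, 'D': 0}
```

The divisor sequence 1, 2, 3, … is what makes Jefferson slightly favor larger parties, which is why the abstract describes the resulting representation as approximately proportional.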
In parliamentary systems with party lists, the excess method is equivalent to the Jefferson method and thus ensures the approximate proportional representation of political parties. As a method for achieving proportional representation (PR) on a committee or council, we compare it to other PR methods proposed by Hare, Andrae, and Droop for preferential voting systems, and by Phragmén for approval voting. Because voters can vote for multiple candidates or parties, the excess method is likely to abet coalitions that cross ideological and party lines and to foster greater consensus in voting bodies.

University of Pittsburgh
Communication with Partially Verifiable Information: An Experiment (joint work with Maria Montero, Martin Sefton)
Abstract: We use laboratory experiments to study communication games with partially verifiable information. In these games, based on Glazer and Rubinstein (2004, 2006), an informed sender sends a two-dimensional message to a receiver, but only one dimension of the message can be verified. We compare a treatment where the receiver chooses which dimension to verify with one where the sender has this verification control. We find significant differences in outcomes across treatments. Specifically, receivers are more likely to observe senders’ best evidence when senders have verification control. However, receivers’ payoffs do not differ significantly across treatments, suggesting they are not hurt by delegating verification control. We also show that in both treatments the receiver’s best reply to senders’ observed behavior is close to the optimal strategy identified by Glazer and Rubinstein.

Associate Professor
Word of Mouth Communication and Search    [pdf] (joint work with Matthew Leister, Yves Zenou)
Abstract: Often the most credible source of information about the quality of products is advice from friends.
We develop a word-of-mouth search model where information flows from the old to the new generation for an experience good with unknown quality. We study the features of the social network that determine product quality and welfare, and characterize the demand-side (under-provision of search effort) and supply-side (inefficient entry by firms) factors that result in inefficiencies. We extend our framework to encompass richer communication structures, correlation between individuals' links across the old and new generations, endogenous prices, as well as the possibility for a high-quality firm to seed information about its quality to a particular consumer.

The Ohio State University
Seller Curation in Platforms    [pdf]
Abstract: This article explores why market platforms do not screen out low-quality sellers in situations where screening costs are minimal. Consumers in a platform's market must search for a seller whose product is a good match. The presence of low-quality sellers reduces search intensity, softening competition between sellers and increasing the equilibrium price and hence the platform's revenue per sale. If the platform's market is sufficiently competitive, then it admits a positive proportion of low-quality sellers. Recommending a high-quality seller and search obfuscation are complementary strategies because the low-quality sellers enable the recommended seller to attract many consumers at a high price.

University of Warwick
When Does Information Determine Market Size? Search and Rational Inattention    [pdf]
Abstract: I develop a model in which optimal costly information acquisition by individual firms causes adverse selection in the market as a whole. Each firm's information acquisition policy determines which customers it serves, and that in turn affects the distribution of customers remaining in the market and hence other firms' incentives.
I show that if firms possess the ability to choose any signal of the customer's type, in equilibrium all firms in the market will profit. By contrast, with restricted signal choice, only a limited number of firms can be profitable. In such a setting, the maximum number of profitable firms fails to increase with the number of potential customers. Smooth information acquisition dampens the adverse selection externality due to each firm, while lumpy information acquisition does not. I establish that my results apply to a broad class of information acquisition processes.

Yeshiva University
Subversive Conversations    [pdf] (joint work with Nemanja Antic, Rick Harbaugh)
Abstract: We consider the problem of a two-person committee with common interests exchanging information in order to reach a decision. The committee faces a constraint in the form of a third player, the regulator, who is uninformed, has a conflict of interest with the committee, listens to the communication between the committee members, fully understands the intended meaning of all messages, and can overrule the committee’s decision. We identify conditions under which the committee can subvert the regulator’s agenda and implement the same committee-optimal decision rule that it would implement if it could communicate privately. Subversive communication typically takes the form of a back-and-forth conversation where committee members hide extreme information early in the conversation. Our results provide a theory of conversations based on plausible deniability in the face of possible public outrage.

Texas A&M University
Worst-Case Analysis for a Leader-Follower Partially Observable Stochastic Game    [pdf] (joint work with Yanling Chang)
Abstract: This paper studies a leader-follower partially observable stochastic game where (i) the two agents are non-cooperative, and (ii) the follower's objective is unknown to the leader and/or the follower is irrational.
We determine the leader's optimal value function assuming a worst-case scenario. Motivated by the structural properties of this value function and its computational complexity, we design a viable and computationally efficient solution procedure for computing a lower bound of the value function and an associated policy for the finite horizon case. We analyze the error bounds and show that the algorithm for computing the value function converges for the infinite horizon case. We illustrate the potential application of the proposed approach in a security context using a liquid egg production example.

University of Oxford
Price Competition in Buyer-Seller Networks    [pdf]
Abstract: Traditional economic models of competition between sellers assume that each seller has access to the entire set of buyers in the market. However, many economic interactions occur in settings where sellers are only able to sell to a subset of buyers. This paper models differentiated Bertrand competition in a network. A bipartite graph determines the relationship between buyers and sellers, with sellers competing on price for overlapping consumers. For sufficiently small substitutability between goods, there is a unique interior pure-strategy equilibrium where each seller's price is decreasing in their Bonacich centrality in a sellers-only graph that is strategically equivalent to the original bipartite graph. Using this model, it is possible to analyse the result of changes to competition in the network, for example seller entry or the introduction of a new buyer. The proposed framework can also be used to find the network structure that maximises consumer surplus and/or seller profit.

Singapore Management University
Probabilistic Generalized Median Voter Schemes: A Robust Characterization    [pdf] (joint work with Souvik Roy, Soumyarup Sadhukhan, Arunava Sen, Huaxia Zeng)
Abstract: We study Random Social Choice Functions (or RSCFs) in a standard ordinal mechanism design model.
We introduce a new preference restriction, eventual single-peakedness. We first show that a unanimous RSCF is strategy-proof on the eventually-single-peaked domain if and only if it is a Probabilistic Generalized Median Voter Scheme (or PGMVS) and satisfies a partial-random-dictatorship condition. Next, we show that a strategy-proof PGMVS defined on this domain is decomposable as a mixture of finitely many strategy-proof generalized median voter schemes only if it satisfies a scale-effect condition. We then construct a non-decomposable strategy-proof PGMVS for the case of more than two voters via the negation of the scale-effect condition, and prove the decomposability of all two-voter strategy-proof PGMVSs. Last, we illustrate the salience of our eventual-single-peakedness restriction and the robustness of our PGMVS characterization theorem by generalizing our analysis to the class of connected domains.

Columbia University
A model of rent seeking and inequality    [pdf]
Abstract: Social scientists have argued that inequality fosters rent seeking and that rent seeking is likely to reinforce existing inequalities. In this paper, I formalize these interactions by modeling rent seeking in an unequal-endowment economy where agents can choose whether or not to be rentiers. I find that when the cost of rent seeking is exogenous, more inequality fosters a greater proportion of rentiers, which in turn further skews the distribution of resources. I endogenize the cost of rent seeking by assuming that the rentiers pay the cost to a central institution, which chooses the cost per rentier to maximize its revenue. In this setting, the revenue-optimizing cost of rent seeking per rentier increases with more inequality, which results in a lower proportion of rentiers. However, ex-post inequality still increases. The results show how economies can end up with persistent inequality in the presence of rent seeking.
Northwestern University, Kellogg
Privacy in Bargaining: The Case of Endogenous Entry    [pdf]
Abstract: I study the role of privacy in bargaining. A seller makes offers every instant, without commitment, to a privately informed buyer. Potential competing buyers (entrants) pay attention to the negotiation and can choose to interrupt it by triggering a bidding war. When bargaining in public (in view of entrants), the seller can, through her choice of offers, manipulate entrants’ beliefs about the buyer. In equilibrium, the seller’s lack of commitment reverses the seemingly intuitive effects of publicity. When entrants prefer a bidding war against low types of the buyer, the seller typically prefers bargaining in private, even though public bargaining enables her to lure in competition against the incumbent buyer.

National Taiwan University
Correlation with Forwarding    [pdf]
Abstract: I consider three-player complete information games augmented with pre-play communication. Players can privately communicate with others, but not through a mediator. I implement correlated equilibria by allowing players to authenticate their messages and forward the authenticated messages during communication. Authenticated messages, such as letters with signatures, cannot be duplicated but can be sent or received by players. With authenticated messages, I show that if a game G has a worst Nash equilibrium α, then any correlated equilibrium distribution in G that has rational components and gives each player a higher payoff than α does can be implemented by pre-play communication. The proposed communication protocol does not require perfect public recording (Barany, 1987) and does not publicly expose players’ messages at any stage during communication.
University of Rochester
Middlemen and Reputation    [pdf] (joint work with Yu Awaya, Zirou Chen, Makoto Watanabe)
Abstract: We develop a model in which a reputation mechanism allows a middleman to mitigate information frictions. The middleman can play such a role even without having technologies superior to those of other agents for identifying product quality or issuing quality certificates. We establish an equilibrium where the market organized by the middleman can sometimes be viable and at other times collapse. Our theory provides a rationale for why, in some markets, specialist agents or brokers/dealers can use their reputation to guarantee asset quality but sometimes lose that reputation as a trustworthy investment channel, as in market crashes during financial crises.

Johns Hopkins University
Competitive Equilibrium Fraud in Markets for Credence-Goods    [pdf] (joint work with Edi Karni)
Abstract: This is a study of the nature and prevalence of persistent fraud in competitive markets for credence-quality goods. We model the market as a dynamic game of incomplete information in which the players are customers and suppliers, and analyze their equilibrium behavior. Customers' characteristics, search cost and discount rate, are private information. Customers do not possess the expertise necessary to assess the service they need either ex ante or ex post. We show that there exists no fraud-free equilibrium in markets for credence-quality goods and that fraud is a prevalent and persistent equilibrium phenomenon.
MIT
Robust Cooperation with First-Order Information    [pdf] (joint work with Daniel Clark, Drew Fudenberg, Alexander Wolitzky)
Abstract: We study when and how cooperation can be supported in the repeated prisoner’s dilemma in a large population with random matching and overlapping generations, when players have only first-order information about their current partners: a player’s record tracks information about her past actions only, and not her partners’ past actions (or her partners’ partners’ actions). We also restrict attention to strict equilibria that are coordination-proof, meaning that two matched players never play a Pareto-dominated Nash equilibrium in the stage game induced by their records and expected continuation payoffs. We find that simple strategies can support limit efficiency if the stage game is either “mild” or “strongly supermodular,” and that no cooperation can occur in equilibrium for a near-complementary parameter set. The presence of “supercooperator” records, where a player cooperates against any opponent, is crucial for supporting maximal cooperation when the stage game is “severe.”

Pontificia Universidad Católica del Perú
Interactive Epistemology Applied to Drafting Contracts: The Partial Death of Filling the Contractual Gaps    [pdf] (joint work with Alvaro Cuba Horna)
Abstract: This paper analyzes the problem of contractual gaps from the standpoint of interactive epistemology. Until now, it has been taken as axiomatic that every incomplete contract must be filled in by a judge or arbitrator. The present paper tries to demonstrate that this axiom is wrong. Drawing on interactive epistemology, I propose a model in which agents share common knowledge that is based on an incorrect state of the world. This model reflects that the agents, in fact, write contracts based on incorrect information. The true state of the world and the information partitions will only be obtained as different events happen over time.
Therefore, when the parties resort to a judge or arbitrator to fill a contractual gap, they would be creating a new contract that the parties never had an opportunity to write, because their agreement was always based on a false state of the world. In this sense, the model developed here proposes that a judge or arbitrator should fill a contractual gap only when the relevant event is common knowledge. If this were the case, the event would always have belonged to the true state of the world, so the parties would always have had knowledge of it. To this end, government measures will be essential to create common-knowledge information in contracts. Only with this mechanism will agents be able to write more complete contracts. Boston University Venture Capital Contracts under Disagreement    [pdf] Abstract I examine an optimal financial contract between an entrepreneur and a venture capitalist. The entrepreneur has an early-stage project that is not fully implemented. In particular, the direction that the project should follow has not yet been decided. The two players have different beliefs about the optimal direction. Given that decisions are not contractible, the venture capitalist demands a fraction of control rights and cash-flow rights to participate in the project. After the direction is selected, the venture capitalist can exert costly effort to increase the probability that the project succeeds; this increment can be small (the venture capitalist is not important) or big (the venture capitalist is important). I show that the amount of control rights relinquished by the entrepreneur decreases with disagreement unless the venture capitalist is not important in implementation. Harvard University Belief Polarization and News on Social Media    [pdf] Abstract Social media and other online interactions have recently become a major source of news and information about current events, bringing new social learning patterns and new questions about platform design. 
To study these questions, we develop a framework involving the co-evolution of beliefs and information-sharing behavior. When people do not fully account for others' sharing decisions when updating beliefs, echo chambers can produce belief polarization. In environments with fake news, introducing a technology that lets users fact-check stories at a cost can have paradoxical effects. Depending on cost structures, such technology may generate a form of social confirmation bias that actually increases polarization. A related challenge is maintaining faith in fact checkers: if users think there is a possibility of a biased fact checker, those with strong beliefs will come to believe fact checkers are biased against them. Finally, echo chambers and the associated polarization may arise endogenously due to platform incentives. Networks that expose most users to peers with diverse opinions, however, can be better for the platform and its users in the long run. Harvard University Social Learning and Innovation    [pdf] Abstract We study a model of innovation in which new technologies are formed by combining several atomic ideas. These ideas can be acquired through private investment or via social learning. A large number of firms face a trade-off between secrecy, which protects existing intellectual property, and openness, which facilitates social learning. This decision is modeled as a choice of an interaction rate, which determines an underlying learning network. Incentives and, more strikingly, payoffs can jump at phase transitions in this network. In particular, equilibrium welfare is low below a critical threshold in the network and much higher above the threshold. University of Bielefeld Non-cooperative Games with Shapley Utilities    [pdf] (joint work with Roland Pongou and Jean-Baptiste Tondji) Abstract We introduce a new class of strategic-form games called non-cooperative games with Shapley utilities. 
We show that any finite game in this class possesses a pure-strategy Nash equilibrium. We also provide a monotonicity condition under which any finite non-cooperative game with Shapley utilities admits a pure-strategy Nash equilibrium. University of Bonn Relational Contracts: Public versus Private Savings    [pdf] (joint work with Daniel Garrett) Abstract We study relational contracting with risk-averse agents, who therefore have a preference for smoothing consumption over time. Agents have the ability to save in order to defer consumption. We compare principal-optimal relational contracts in two settings: in the first, the agent's consumption and savings decisions are private; in the second, these decisions are publicly observed. In the first case, the agent smooths his consumption over time, the agent's effort and payments eventually decrease over time, and the balance on his savings account eventually increases. In essence, the relationship eventually deteriorates with time. In the second case, the relational contract can specify the agent's level of consumption. The optimal contract calls for the agent to consume earlier than he would like; consumption and the balance on the account fall over time, while effort and payments to the agent increase. We suggest that modeling informal/relational incentives on consumption/savings decisions is a pertinent alternative to the approach in the existing literature on dynamic moral hazard, in which consumption is often either formally specified by contract or privately saved by the agent. Saarland University Probabilistic Manipulation of Sequential Voting Procedures    [pdf] (joint work with Ritxar Arlegi) Abstract We consider sequential, binary voting procedures in societies where pairwise voting induces a single-peaked social preference. 
An agenda setter is uncertain about the social preference and has the power to fix the initial seeding in the voting tree, and thus possibly to manipulate the procedures' probabilistic outcomes. In such settings, our results identify the balanced voting procedure with four candidates as non-manipulable in polarized societies and least manipulable in biased societies. Voting procedures in weakly biased societies turn out to be non-manipulable if and only if the number of candidates is two. University of Montreal Competing Pre-match Investments Revisited: A Precise Characterization of the Limits of Bayes-Nash Equilibria in Large Markets    [pdf] Abstract We solve an open problem pertaining to the relationship between competitive and non-cooperative models of pre-match investment. We study an incomplete-information version of Peters and Siow's (2002) model of competing pre-marital investments and NTU matching, with finitely many agents and i.i.d. types. Our main results establish a precise characterization of side-symmetric Bayes-Nash equilibrium (BNE) behavior "in the limit," as the market grows large and the empirical type distributions converge to those of an unbalanced continuum economy à la Peters and Siow (2002). The limits of BNE strategies always differ from the (bilaterally efficient) hedonic equilibrium strategies, and we obtain a neat characterization of an equilibrium concept for the continuum model that has a clear strategic foundation. Our analysis relies on a novel way of using advanced results from the theory of approximate distributions of order statistics, which allows us to characterize equilibrium behavior even though the limit strategies must be discontinuous and the size of the discontinuities is determined by a complex, two-sided interaction. These techniques should prove useful for the study of other large Bayesian games with discontinuous payoffs and interacting distributions of outcomes. 
University of Rochester Imperfect Collusion in Repeated Bertrand Oligopoly: The Role of Transfers in Penalizing Actual Price-Cutters    [pdf] Abstract Cartels often operate under inter-firm transfer schemes to prevent cheating and penalize violators. Theories of collusion predict that a well-designed transfer scheme will keep firms in line with the collusive price level by eliminating (possibly secret) price-cutting incentives. In practice, however, cheating does often occur (Genesove and Mullin (2001)), and actual violations of the agreement are punished through inter-firm transfers. To fill this gap, I study the traditional repeated Bertrand oligopoly model augmented by a transfer sub-stage, focusing on the range of discount factors not sufficiently high to achieve the best collusive price. I show that the optimal stationary equilibrium has distinguishing features that explain several real-world cartel practices: (i) the collusive agreement divides the range of prices into an allowable level of cheating, punished through transfers, and an unallowable degree of cheating, leading to cartel breakdown. As a result, (ii) both occasional violation of and adherence to the agreement occur on-path, and (iii) violators are punished according to the agreed-upon penalty scheme. However, (iv) the amount of transfer is limited by a self-enforcement constraint, so that it is not sufficient to discipline price-cutting at the highest stakes: slight price-cutting at the monopoly price is regarded as unallowable cheating and must trigger permanent price competition. Finally, transfers play a role only in an intermediate range of discount factors, in that they are not needed at all (when the discount factor is high) or cannot be self-enforced (when the discount factor is low). As long as firms play symmetric pricing strategies, transfers and price-cutting are an essential part of any imperfect collusive equilibrium. 
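The role of the discount factor here can be illustrated with the textbook benchmark the abstract starts from: in a symmetric repeated Bertrand oligopoly with grim-trigger strategies (and no transfers), collusion at the monopoly price is self-enforcing exactly when the discount factor satisfies delta >= 1 - 1/n. A minimal sketch of this standard background result, with a hypothetical function name; it is not the paper's augmented model:

```python
def collusion_sustainable(n_firms: int, delta: float) -> bool:
    """Grim-trigger test in a symmetric repeated Bertrand oligopoly.

    Colluding forever yields (pi_m / n) / (1 - delta) per firm;
    undercutting yields the entire monopoly profit pi_m once,
    followed by zero profit in the ensuing price war.
    Collusion is self-enforcing iff delta >= 1 - 1/n
    (the monopoly profit level pi_m cancels out of the comparison).
    """
    return delta >= 1 - 1 / n_firms
```

With two firms, any discount factor of at least 0.5 sustains collusion; with five firms the threshold rises to 0.8, which is why transfers only matter in an intermediate range of discount factors.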
University of Rochester Incentives in the Equal-Pay-for-Equal-Work Principle    [pdf] (joint work with Yu Awaya) Abstract The equal-pay-for-equal-work principle makes unfair discrimination in workers' wages illegal. A public concern is that the practice may aggravate moral hazard problems, especially when the employer cannot observe effort levels. This paper addresses the issue when the employer can evaluate employees' performance only through their peers (subjective peer evaluation). More precisely, each employee privately chooses an effort level, which generates private signals to his peers. The employer solicits peer evaluations, but the evaluations are not verifiable. The equal-pay-for-equal-work principle forces the same wage across workers after any combination of evaluations. We show that the employer can still provide incentives to exert effort when signals are more highly correlated if all employees exert effort and less so if some shirk. George Mason University Mechanism Design with Memory and No Money    [pdf] Abstract The paper provides an automated approach to mechanism design problems without money for arbitrary discount factors, using dynamic programming and promised utility. We illustrate the approach with problems from the literature: chore allocation, and sharing an indivisible good or goods. Additionally, we discuss the relationships between different classes of mechanisms and show that promised-utility mechanisms are more general than mappings from histories of finite memory. Virginia Polytechnic Institute & State University Common Belief in Choquet Rationality with an Attitude    [pdf] (joint work with Burkhard Schipper) Abstract We consider finite games in strategic form with Choquet expected utility. Using the notion of unambiguous belief, we define Choquet rationalizability and characterize it by Choquet rationality and common belief in Choquet rationality in the universal capacity type space in a purely measurable setting. 
We also show that Choquet rationalizability is equivalent to iterative elimination of strictly dominated actions, not in the original game but in an extended game. This allows for the computation of Choquet-rationalizable actions without the need to first compute Choquet integrals. Choquet expected utility allows us to investigate common belief in ambiguity love/aversion. We show that Choquet rationality and common belief in Choquet rationality and ambiguity love/aversion leads to smaller/larger sets of action profiles. University of Dayton Tacit Collusion in Repeated Unit Commitment Auctions    [pdf] Abstract In an infinitely repeated game, I propose to study the level and conduct of collusion under two commonly used wholesale electricity market designs. Both market designs are uniform-price auctions run by an independent third-party market operator. In a centrally-committed market, generating firms compete by submitting complex offers that reflect the non-convexities of their operating costs. Under a self-committed market design, generating firms compete by submitting simple offers that represent the minimum price at which a firm is willing to produce all of its capacity. The centrally-committed market design also contains a provision by which firms are guaranteed to be made whole on the basis of their submitted offers, whereas no such guarantee exists under the self-committed market design. These two market designs are examined in the context of an infinitely repeated game to compare how they facilitate collusion. This is done by examining the optimal penal code and bidders' deviation incentives, generating potentially useful regulatory and public-policy insights. Preliminary work suggests that the centrally-committed market design common in the United States may be less prone to tacit collusion than the self-committed markets common to much of Western Europe and Australia. 
University of Chicago Communication with Detectable Deceit    [pdf] (joint work with Christian Salas) Abstract Lies are detectable. We investigate the implications of this fact in a communication game in which players have no common interests and messages are cheap, but deceit is detectable with positive probability. In any informative equilibrium, the lowest types lie, while some higher types tell the truth. Truth-telling arises because lie detection generates an endogenous cost of lying, consisting of being confused with the low types who lie. We show that lie detection is strategically different from state detection, in that the latter does not admit informative equilibria. We analyze three extensions. First, we show that more information may be revealed if the sender is given an opportunity to prepare a lie in advance and thereby decrease its detectability. We then allow the sender to make multiple attempts at convincing the receiver, and show that if lie detectability is high, the receiver may benefit from committing to listen to the sender only once. Finally, we analyze a two-sender version of the model, and show that senders will exaggerate their claims only if the state disadvantages them sufficiently. The University of Manchester Matching and Core Stability with General Demand Structures    [pdf] Abstract A "demand-type" is a class of markets for indivisible goods with a fixed Slutsky matrix, comprising the comparative statics of aggregate demand vectors. I consider extensions of the assignment model to include general demand types. I prove, essentially, that equilibrium exists and the core is nonempty in each market of a given "demand-type" if and only if the associated Slutsky matrix is unimodular. As unimodularity allows for preferences with complementarities, this extends previously known results beyond the case of substitutability. 
Strong Robustness to Incomplete Information and the Uniqueness of Correlated Equilibrium    [pdf] (joint work with Ori Haimanko, David Lagziel) Abstract We define and characterize the notion of strong robustness to incomplete information, whereby a Nash equilibrium in a game u is strongly robust if, given that each player knows that his payoffs are those in u with high probability, all Bayesian-Nash equilibria in the corresponding incomplete-information game are close, in terms of action distribution, to that equilibrium of u. We prove, under some continuity requirements on payoffs, that a Nash equilibrium is strongly robust if and only if it is the unique correlated equilibrium. We then review and extend the conditions that guarantee the existence of a unique correlated equilibrium in games with a continuum of actions. The existence of a strongly robust Nash equilibrium is thereby established for several domains of games, including those that arise in economic environments as diverse as Tullock contests, Cournot and Bertrand competition, network games, patent races, voting problems, and location games. Boston College Reputation and Screening in a Noisy Environment with Irreversible Actions    (joint work with Mehmet Ekmekci and Lucas Maestri) Abstract We introduce a class of two-player dynamic games to study the effectiveness of screening in a principal-agent problem. In every period, the principal chooses either to irreversibly stop the game or to continue. In every period until the game is stopped, the agent chooses an action. The agent's type is his private information, and his actions are imperfectly observed. Both players are long-lived and share a common discount factor. We study the limit of the equilibrium outcomes as both players become arbitrarily patient. We show that Nash equilibrium outcomes of the dynamic game converge to the unique Nash equilibrium outcome of an auxiliary two-stage game. 
Hence, dynamic screening eliminates noise in monitoring, but beyond that, it is ineffective. We calculate the probability that the principal eventually stops the game against each type of agent. The principal learns some, but not all, information about the agent's type. Applications include procurement, promotions and demotions within organizations, and venture-capital financing. Rice University Legislative Bargaining with Coalition- and Proposer-Dependent Surplus    [pdf] Abstract I study a distributive model of legislative bargaining in which players differ in how much they contribute to the coalitions led by others (i.e., their productivity) and in how much they amplify the contributions of others to their own coalition (i.e., their organizational skill). The resulting model is a q-quota legislative bargaining game in which different proposer-coalition pairs generate surpluses of different sizes. Given a parametric specification of surplus, I establish the existence and continuity of stationary subgame-perfect equilibria. I show that equilibrium payoffs are unique for any productivity vector when players are homogeneous in skill; when they are not, equilibria may feature delay and non-generic multiplicity. Payoffs and net contributions are monotonic in productivity and skill, and the most productive players are always recruited, while the most skillful are sometimes left out. I demonstrate that organizational skill has a stronger influence on outcomes, although it is sometimes desirable to trade skill for productivity. I also investigate the effect of patience and of the required majority on bargaining outcomes and their efficiency. JKU Linz The Norm of Reciprocity in Dynamic Employment Relationships    [pdf] Abstract This paper explores how a relational contract establishes a norm of reciprocity and how such a norm shapes the provision of informal incentives. 
Developing a model of a long-term employment relationship, I show that generous upfront wages that activate the norm of reciprocity are more important when an employee is close to retirement. In earlier stages, direct incentives promising a bonus in exchange for effort are more effective, since a longer remaining time horizon increases the employer's commitment. Generally, direct and reciprocity-based incentives reinforce each other and should thus optimally be used in combination. I also show that more competition can magnify the use of reciprocity-based incentives. Moreover, with asymmetric information about the employee's responsiveness to the norm of reciprocity, an early separation of types is generally optimal. This implies that pooling equilibria in which "selfish" types imitate "reciprocal" types might be less important for explaining increased cooperation under repeated interaction than is often claimed. Finally, the principal might even benefit from asymmetric information, because a firing threat for non-performance is credible only if the employee is potentially not reciprocal. Brown University Bargaining over Contingent Contracts under Incomplete Information    (joint work with Geoffroy de Clippel and Kareen Rozen) Abstract We provide a non-cooperative justification for the axiomatic bargaining solution under incomplete information developed by Myerson (1984) when there are verifiable types. We study a simple one-round simultaneous-offer game with a small bargaining friction, although the results also extend to infinite-horizon war-of-attrition games. We study equilibria in which offers are accepted, as the friction vanishes. We show that there are equilibria converging to an interim-efficient limit; that for many bargaining problems any interim-efficient limit must belong to the axiomatic solution; and that for such bargaining problems, imposing consistency on an agent's off-equilibrium-path beliefs can rule out non-interim-efficient limits. 
Stony Brook University Solutions for Zero-Sum Two-Player Games with Noncompact Decision Sets    [pdf] (joint work with Eugene A. Feinberg, Pavlo O. Kasyanov, and Michael Z. Zgurovsky) Abstract This paper provides sufficient conditions for the existence of solutions of two-person zero-sum games with possibly noncompact decision sets for both players. Payoff functions may be unbounded, and we do not assume any convexity/concavity-type conditions. For such games the expected payoff may not exist for some pairs of strategies. The results of the paper imply several classic results, and they are illustrated with the number-guessing game. The paper also provides sufficient conditions for the existence of a value and of solutions for each player. Universidad del Pacífico Game Theory and the Law: Legal Rationality (Legal Interpretation)    [pdf] (joint work with Guillermo Flores) Abstract The author proposes utilitarian and rationality principles through which an individual: (i) analyzes the content of a legal norm; (ii) having analyzed its content, decides whether to comply with or breach the legal norm; and (iii) having decided to comply with the legal norm, selects the strategy to be used to comply with it, maximizing his individual utility function as far as possible. Since the "utility" that the law has for the legislator in social terms may not equal the "utility" that the citizen assigns to it in personal terms, the legislator needs to know the expectations of utility that citizens have regarding a legal norm before issuing it, in order to make both concepts of "utility" compatible. Once the norm is issued, the legislator should focus not on communicating the "utility" that the legal norm has for himself, but on the compatibility between both concepts of "utility". The less evident the compatibility between both concepts of "utility" is to citizens, the greater the level of coercion that will be required. 
Since each citizen interprets the compatibility and utility of the legal norm differently from every other citizen, each citizen will exhibit a different level of compliance with the legal norm. The legislator therefore faces a reality in which there will be citizens who comply with the norm in exactly the way she wanted, citizens who comply with it in a way close to the one she wanted, and citizens who will not comply with it. The intention of this article is to propose the game-theoretic principles through which a citizen interprets a legal norm, decides whether or not to comply with it and at what level, and thus to obtain formal answers to the questions presented. University of Hamburg Shadow Links    [pdf] (joint work with Ana Mauleon and Vincent J. Vannetelbosch) Abstract We propose a framework of network formation in which players can form two types of links: public links, which are observed by everyone, and shadow links, which are observed only by some players, e.g., neighbors in the network. We introduce a novel solution concept called rationalizable conjectural pairwise stability, which generalizes Jackson and Wolinsky's (1996) pairwise stability notion to accommodate shadow links. We then study the case in which public links and shadow links are perfect substitutes and relate our concept to pairwise stability. Finally, we consider two specific models and show how false beliefs about others' behavior may lead to segregation in friendship networks with homophily, reducing social welfare. University of Hamburg Strategic Transmission of Imperfect Information: Why Revealing Evidence (Without Proof) Is Difficult    [pdf] Abstract We investigate cheap talk when an imperfectly informed expert knows multiple binary signals about a continuous state of the world. 
The expert may report either information on each signal separately (direct transmission) or a summary statistic of her signals (indirect transmission) to a decision-maker. We first establish that fully informative equilibria exist if the conflict of interest is small. Otherwise, direct-transmission equilibria are uninformative, as not revealing some of the signals tightens, rather than loosens, the expert's incentive-compatibility constraint. By contrast, indirect-transmission equilibria remain partially informative for intermediate conflicts of interest. Furthermore, comparative statics show that a better-informed expert may imply less informative equilibrium communication. Finally, we introduce the possibility for the expert to verify her signals. We show that, if the costs of verification are low, a fully informative direct-transmission equilibrium exists regardless of the conflict of interest. Massachusetts Institute of Technology An Evolutionary Justification for Overconfidence    [pdf] (joint work with Kim Gannon, Hanzhe Zhang) Abstract This paper provides an evolutionary justification for overconfidence. Players are pairwise matched to fight for a resource, and there is uncertainty about who wins the resource if they engage in the fight. Players have different confidence levels about their chance of winning, although in reality they all have the same chance. Each player may or may not know her opponent's confidence level. We characterize the evolutionarily stable equilibrium, represented by the players' strategies and the distribution of confidence levels. Under different informational environments, a majority of players are overconfident, i.e., they overestimate their chance of winning. We also characterize the evolutionary dynamics and the rate of convergence to the equilibrium. University of Mannheim Reputational Cheap Talk vs. 
Reputational Delegation    [pdf] Abstract I study whether a principal who is uncertain about an agent's motives should keep control and solicit information from the agent, or delegate the decision making to the agent, when the interactions between the two parties are repeated so that the agent has reputational concerns. I consider a two-period repeated game. In each period, the uninformed principal first decides whether to delegate the decision making to the informed agent, who is either good (not biased) or bad (biased). If she does, the agent takes an action himself. If she does not, the agent sends a cheap-talk message to the principal, who then takes an action. I find that in the second period, the principal is better off keeping control instead of delegating to the agent. The optimal authority allocation in the first period depends on a prior cut-off: if the prior probability that the agent is good is above this cut-off, the principal prefers delegation over communication; otherwise, communication dominates delegation. Dept. of Economics, University of Pennsylvania Informal Risk Sharing with Local Information    [pdf] (joint work with Attila Ambrus, Pau Milan) Toulouse School of Economics Robust Predictions in Dynamic Screening    [pdf] (joint work with Alessandro Pavan, Juuso Toikka) Abstract We characterize properties of optimal dynamic mechanisms using a variational approach that permits us to tackle the full program directly. This allows us to make predictions for a considerably broader class of stochastic processes than can be handled by the "first-order, Myersonian, approach", which focuses on local incentive-compatibility constraints and has become standard in the literature. Among other things, we characterize the dynamics of optimal allocations when the agent's type evolves according to a stationary Markov process, and show that, provided the players are sufficiently patient, optimal allocations converge to the efficient ones in the long run. 
California State University Fullerton Sequential Auctions with Ambiguity    [pdf] (joint work with Heng Liu) Abstract This paper studies sequential sealed-bid auctions with ambiguity about the distribution of valuations and maxmin bidders. We propose equilibrium notions based on the multiple-selves approach to deal with the possible time inconsistency that arises with dynamic bidding by maxmin bidders. We find that the equilibrium predictions are robust to different specifications of preferences, and we characterize the unique symmetric equilibrium. We show that prices form a supermartingale and that the seller's revenue from sequential auctions dominates that of static multi-unit auctions under general conditions. Ambiguity aversion thus provides a unified explanation for the "declining price anomaly" and the wide adoption of sequential auctions in the real world. Our model delivers rich testable implications. On the practical side, there is a strong link between the degree of ambiguity, measured by the distance between the true distribution of valuations and the bidders' worst-case belief, and the magnitude of price variations over time; on the technical side, dynamic inconsistency, which can arise for bidders with multiple priors, generates history dependence in bidding strategies. Princeton University Wars of Attrition with Evolving States Abstract I analyze a model of wars of attrition with evolving payoffs. Two players fight over a prize by paying state-dependent flow costs until one player surrenders. The state of the world is commonly observed and evolves over time. The equilibrium is unique and uses threshold strategies: each player surrenders when the state is unfavorable enough to her, while for intermediate states both players strictly prefer to fight on. Taken as a refinement of the model of wars of attrition with complete information, this model makes related but distinct predictions from the standard reputation-based refinements (Abreu and Gul, 2000). 
The model is versatile and can be tractably extended to study partial concessions, commitment devices, and deadline effects. University of Rochester Interbank Trading, Collusion, and Financial Regulation    [pdf] (joint work with Dean Corbae) Abstract We show theoretically and empirically that interbank markets provide a channel for banks to collude in the market for business loans. By lending funds to a competitor, a bank commits not to compete. Interbank interest rates allow banks to split the benefits of such collusion. Using global syndicated-loans data, we find that firms paid a 31bp higher spread on $239 billion of loans provided by banks that took an interbank loan from a competitor. We compare the decentralized solution with an interbank market to the planner's solution and to the decentralized equilibrium without an interbank market. The results suggest that restricting interbank trading may increase aggregate welfare. Harvard University Targeting Interventions in Networks    [pdf] (joint work with Andrea Galeotti, Sanjeev Goyal) Abstract We study the design of optimal interventions in network games, where individuals' incentives to act are affected by their network neighbors' actions. A planner shapes individuals' incentives, seeking to maximize the group's welfare. We characterize how the planner's intervention depends on the network structure. A key tool is the decomposition of any possible intervention into principal components, which are determined by diagonalizing the adjacency matrix of interactions. There is a close connection between the strategic structure of the game and the emphasis of the optimal intervention on various principal components: in games of strategic complements (substitutes), interventions place more weight on the top (bottom) principal components. For large budgets, optimal interventions are simple: they target a single principal component. 
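The principal-component decomposition described in the preceding abstract can be sketched numerically: diagonalize the symmetric adjacency matrix and read off the weight an intervention vector places on each component. The small example network and the intervention vector below are made up for illustration; this is only the linear algebra behind the decomposition, not the paper's optimal-intervention algorithm:

```python
import numpy as np

# Symmetric adjacency matrix of a small illustrative network:
# a path on four nodes.
G = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Eigendecomposition; eigh returns eigenvalues in ascending order,
# so the last column of `eigenvectors` is the top principal component.
eigenvalues, eigenvectors = np.linalg.eigh(G)
top_component = eigenvectors[:, -1]
bottom_component = eigenvectors[:, 0]

# Decompose a candidate intervention (a change to standalone
# incentives) into the network's principal components.
intervention = np.array([1.0, 0.0, 0.0, 0.0])
weights = eigenvectors.T @ intervention

# Under strategic complements, the weight on the top component matters
# most; under strategic substitutes, the weight on the bottom one does.
print("eigenvalues:", np.round(eigenvalues, 3))
print("weight on top component:", round(float(weights[-1]), 3))
```

Because the eigenvector basis is orthonormal, the decomposition preserves the size of the intervention, which is what makes comparing weights across components meaningful.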
Duke University Efficient and Envy Minimal Assignment [pdf] (joint work with Atila Abdulkadiroğlu) Abstract In priority-based allocation problems such as school choice, there is a trade-off between efficiency and the elimination of justified envy. We study the possibility of resolving this trade-off by finding a constrained optimal solution, i.e. an efficient matching with minimal envy. We establish a negative result: finding such a matching is an NP-hard problem and therefore computationally infeasible. The result is robust to various definitions of envy minimality, such as minimizing the number of justified envy instances or the indirect measure of maximizing the sum of match priorities. Despite the computational complexity result, we are able to provide a polynomial-time mechanism that is approximately constrained optimal (maximizes match priorities) in the class of sequential dictatorships in large markets. The large market model that we consider is representative of the school choice problem, and therefore the approximation is likely to be good in that setting. Texas A&M University Coalition-Proof Mechanisms Under Correlated Information [pdf] (joint work with Huiyi Guo) Abstract The paper considers two types of mechanisms that are immune to coalitional manipulations: standard mechanisms and ambiguous mechanisms. In finite-dimensional type spaces, I characterize the set of all information structures under which every efficient allocation rule is implementable via an interim coalitional incentive compatible, interim individually rational, and ex-post budget-balanced standard mechanism. The requirement of coalition-proofness reduces the scope of implementability under a non-negligible set of information structures. However, when ambiguous mechanisms are allowed and agents are maxmin expected utility maximizers, coalition-proof implementation can be obtained under almost all information structures. 
Thus, the paper sheds light on how coalition-proofness can be achieved by engineering ambiguity in mechanism rules. University of Warwick Authority and Information Acquisition in Cheap Talk with Informational Interdependence [pdf] Abstract I study the allocation of decision rights in a two-dimensional cheap talk game with informational interdependence and imperfectly informed senders. The Principal allocates decision rights among all players including herself. Delegation is optimal when the expected informational gains outweigh the loss of control due to biased decisions. Delegating one decision leads to informational gains for the Principal when there are negative informational externalities (Levy and Razin, 2007). Partial delegation (of a controversial decision) is thus optimal when externalities are sufficiently strong. I characterize the maximum bias the Principal is willing to tolerate as a function of informational gains. I also analyse agents' incentives for information acquisition. An agent invests in information when the expected utility gains from revealing it compensate for its costs. Truthful communication is a necessary condition for information acquisition, but its influence on beliefs must also be sufficiently large. This implies centralization is always optimal when information costs are high. Endogenous information acquisition allows agents to specialize, which enhances communication incentives because it rules out contradictory information. Finally, I show that delegation leads to ex-post specialization: decision-makers typically receive more information about the more relevant state as compared to centralization. UTS Interdistrict School Choice: A Theory of Student Assignment [pdf] (joint work with Fuhito Kojima, Bumin Yenmez) Abstract Interdistrict school choice programs—where a student can be assigned to a school outside of her district—are widespread in the US, yet the market-design literature has not considered such programs. 
We introduce a model of interdistrict school choice and present two mechanisms that produce stable or efficient assignments. We consider three categories of policy goals on assignments and identify when the mechanisms can achieve them. By introducing a novel framework of interdistrict school choice, we provide a new avenue of research in market design. Penn State University Sequential Mechanisms With ex post Participation Guarantees [pdf] (joint work with Itai Ashlagi and Constantinos Daskalakis) Abstract We study optimal screening mechanisms for selling multiple products to a buyer who learns her value for a different product at each period. A mechanism may screen types over time or be static (screen types only in the last period), but must assign the buyer a non-negative utility ex post. We observe that there exists an optimal mechanism that determines the allocation of a product as soon as the buyer learns her value for that product. This observation allows us to solve for optimal mechanisms recursively, and to provide several structural properties of optimal mechanisms. We show that static mechanisms are sub-optimal if the buyer first learns her values for products that are ex ante less valuable. Under this condition, the ability to bundle products is less profitable than the ability to screen types dynamically. Penn State University Consumer-Optimal Market Segmentation [pdf] (joint work with Nima Haghpanah and Ron Siegel) Abstract Consumer surplus in a market is affected by how the market is segmented. We study the maximum consumer surplus across all possible segmentations of a given market served by a multi-product monopolist. We characterize markets for which the maximum consumer surplus equals a first-best benchmark (i.e., maximum total surplus minus minimum profit). The first-best benchmark is achievable whenever the seller does not find it profitable to screen types by offering multiple bundles, highlighting a novel impact of screening. 
We also characterize markets for which consumer surplus can be increased compared to the unsegmented market, and show that these markets are generic. We construct a simple segmentation that improves consumer surplus in these markets. Ben-Gurion University of the Negev Generalized Coleman-Shapley Indices and Total-Power Monotonicity [pdf] Abstract I introduce a new axiom for power indices, which requires the total (additively aggregated) power of the voters to be nondecreasing in response to an expansion of the set of winning coalitions; the total power thereby reflects the increase in collective power that such an expansion creates. It is shown that total-power monotonic indices that satisfy the standard semivalue axioms are probabilistic mixtures of generalized Coleman-Shapley indices, where the latter concept extends, and is inspired by, the notion introduced in Casajus and Huettner (2018). Generalized Coleman-Shapley indices are based on a version of the random-order pivotality that is behind the Shapley-Shubik index, combined with an assumption of random participation by players. Wuhan University Truthful Intermediation with Monetary Punishment [pdf] (joint work with Ruben Juarez) Abstract A mechanism chooses an allocation of the resource to intermediaries based on their reported ability to transmit it. We discover and describe the set of incentive compatible mechanisms when a monetary punishment for intermediaries who misreport their ability is possible. This class depends on the punishment function and the probability of punishment. It expands previous characterizations of incentive compatible mechanisms, in which punishment was not available. Furthermore, when the planner has the ability to select the punishment, we provide the minimal punishment necessary to achieve incentive compatibility and the corresponding class of first-best mechanisms. For any punishment, we discover the optimal mechanism for the planner. 
The Ohio State University Epistemic Experiments: Utilities, Beliefs, and Irrational Play [pdf] Abstract Inspired by the epistemic game theory framework, I elicit subjects' preferences over outcomes, beliefs about strategies, and beliefs about beliefs in a variety of simple games. I find that the prisoners' dilemma and the traditional centipede game are both Bayesian games, with many non-selfish types. Many players choose strategies that are clearly inconsistent with their elicited beliefs and preferences. But these instances of "irrationality" disappear when the game is made sequential and the player moves second, suggesting that irrationality is driven by the presence of strategic uncertainty. University of Texas at Dallas Signaling through Bayesian persuasion [pdf] Abstract This paper considers a Bayesian persuasion model in which the sender has private information about the payoff-relevant state prior to choosing an experiment. The set of payoff-relevant states is finite and the sender's payoff is continuous and strictly increasing in the receiver's expectation of the state. It is shown that if full disclosure of the payoff-relevant state is weakly detrimental for the sender under any common prior between the sender and the receiver, then a single-crossing property of the sender's expected payoff across sender types and experiments arises. This single-crossing condition leads to the selection of separating equilibria by forward induction refinements, i.e., the sender's choice of experiment signals his type. The sender's payoff function being concave (and there being no value of persuasion) is a stronger condition than what is required for this outcome to occur. University of Bonn Large Elections with Endogenous Information [pdf] Abstract This paper studies majority elections with large electorates when each voter can acquire information about the election alternatives at a cost. I allow voters to have conflicting interests. 
I describe all equilibria for all cost regimes: in the polar case when costs are 'high', the equilibrium outcome is Downsian, meaning that the outcome that is preferred by the majority under the prior belief is elected. In the other extreme, when information is costless, the equilibrium outcome is full-information-equivalent, meaning that the outcome that is preferred by the majority under full information is elected. As a main result, I show that when costs are 'intermediate' there are equilibria where the majority group fails to coordinate and the minority-preferred outcome is elected with probability 1. More generally, I show that the equilibrium outcome in the non-Downsian equilibria maximizes a weighted utilitarian welfare function that satisfies the Pigou-Dalton principle of fairness. Connected with this fairness principle, I observe that the minority-preferred outcome is only elected in instances when this is utilitarian. This contrasts with the literature on special interest groups and lobbies (Olson, '65; Tullock, '83), which suggests unjust manipulability of political outcomes by small groups with large stakes. Bar Ilan University Valuing Information by Repeated Signals [pdf] (joint work with Ehud Lehrer) Abstract A decision maker who needs to choose an action for a state-dependent payoff but does not know the true state is offered information structures with noisy signals. Which should be preferred? The classical answer is to use the Blackwell informativeness ordering, which is only a very incomplete partial ordering. We permit the decision maker to conduct multiple sequential queries of an information structure, with each query reducing the expected error in distinguishing between the states, towards identifying the true state. 
Comparing information structures by the reduction in state-distinction error per query, utilising concepts from information theory such as Chernoff information and large deviations theory, we obtain a total ordering that monotonically extends the Blackwell ordering. Moreover, our ordering is 'objective' in the sense of being calculable from the information structures themselves, independently of priors or of specific decision problems, and yields a simple operational interpretation. Using the same underlying construction of repeated signals and large deviations theory, the analysis is extended to states changing under i.i.d. and Markov processes. Total orderings of information structures with decision-theoretic justification are again obtained, but these do depend on the initial hypotheses adopted by a decision maker. Bielefeld University The Transmission of Continuous Cultural Traits in Endogenous Social Networks (joint work with Fen Li) Abstract We study an OLG model of the transmission of continuous cultural traits across generations in an endogenous social network. Children learn their cultural trait from their parents and their social environment. Parents want their children to adopt a cultural trait that is similar to their own and engage in the socialization process of their children by forming new links or deleting connections. Changing links from the inherited network is costly, but having many links is beneficial. Studying the dynamics of cultural traits and networks, we find that polarization may obtain when extremist subgroups disconnect from the society. This is observed if the costs of network changes and the benefits from integration are low. For intermediate costs, convergence of all traits to one of the extremists' traits may occur. Large costs and/or large benefits from interactions always imply convergence to a moderate consensus. University of Auckland Strategic Games from an Observer's Perspective [pdf] (joint work with Elon Kohlberg and John W. 
Pratt) Abstract We investigate the implications of considering an outside observer of a game who believes that, no matter how many times she sees the outcome of similar games, she will not be able to give beneficial advice to any player. We argue for a particular specification of what the formal definition of this intuitive statement should be. We then show that such an observer should believe that the players are playing a correlated equilibrium, though she may be uncertain exactly which correlated equilibrium they are playing. Since the set of correlated equilibria is convex, her beliefs themselves actually constitute a correlated equilibrium. We further show that if the observer believes that there is nothing "connecting" the players in the game beyond what is explicitly described in the rules of the game, then the observer must believe that the players are playing a Nash equilibrium, though, again, she may be uncertain which Nash equilibrium they are playing. Collegio Carlo Alberto Price Setting on a Network [pdf] Abstract Most products are produced and sold by supply chain networks, where an interconnected network of producers and intermediaries set prices to maximize their profits. I show that there exists a unique equilibrium in a price-setting game on a network. The key distortion reducing both total profits and social welfare is multiple marginalization, which is magnified by strategic interactions. Individual profits are proportional to influentiality, a new measure of network centrality defined by the equilibrium characterization. The results emphasize the importance of the network structure when considering policy questions such as mergers or trade tariffs. Boston University Very Biased Political Experts: Cheap Talk, Persuasion and the Political Extremes [pdf] Abstract Many lobbying organisations use paid experts to try to sway policymakers. 
These experts typically share the organisation’s strong ideological preferences, partly due to self-selection on ideology, and partly due to incentives provided by their employers. If we consider the experts’ recommendations to take the form of cheap talk, then it is natural to think that their biases might undermine any informational content in their messages. Two questions then come to mind: first, can strongly biased experts credibly convey any information, and second, are they able to distort policymakers’ decisions in favour of their bias? Existing cheap talk models predict that communication can occur if and only if the sender’s bias is small. I present a cheap talk model where the two receivers engage in Downsian political competition. I show that, even if the sender (expert) is extremely biased, there can be partial revelation about the location of the median voter. Public messages convey no information, but the expert can privately recommend a policy platform to one politician – essentially acting as a political advisor. Partial revelation is made possible by the constraining force of political competition – policies that are too extreme simply cannot win. Furthermore, the expert is able to distort policy in their favoured direction, by recommending the most distorted policy platform that still guarantees a win in the election. I also compare this cheap talk model to a Bayesian persuasion game, where the expert designs a public experiment. I show that this is equivalent to a relaxed gerrymandering problem, and derive the lowest upper bound on the expert’s utility under persuasion. I discuss implications for campaigning strategies by ideologically biased organisations: in particular, fat-tailed (polarised or extreme) voter preferences lead to greater gains from Bayesian persuasion than small-tailed preferences. 
IMF College Ranking by Revealed Preference From Big Data: An Authority-Distribution Analysis [pdf] Abstract We apply authority distribution (Hu and Shapley, 2003) to sort out a linear ordering for hundreds of alternatives from preferences revealed by millions of consumers. The background context is ranking the US colleges. The revealed preference reflects much broader criteria than those set by popular college rankings, and our approach recognizes the heterogeneity in both the characteristics of colleges and the personal considerations of consumers. Also, we aggregate the spillover effects in the network of college interactions, and this leads to a robust steady-state solution to the counterbalance equilibrium of direct bilateral influence. The solution is likely the most comprehensive and the most objective college ranking, compared with the dozens of others in the market. The approach can be applied in many other areas, such as ranking sports teams and academic journals and calculating real effective exchange rates. Keywords: college ranking; revealed preference; authority distribution; endogenous weighting; big data; matching JEL Codes: C68, C71, C78, D57, D58, D74 Rutgers University Incentive Compatible Self-fulfilling Mechanisms and Rational Expectations [pdf] Abstract This paper extends the exact equivalence result between the allocations realized by self-fulfilling mechanisms and rational expectations equilibrium allocations in Forges and Minelli (1997) to a large finite-agent replica economy where different replicates of the same agent are allowed to receive different private information. The first result states that the allocation realized by any incentive compatible self-fulfilling mechanism is an approximate rational expectations equilibrium allocation. 
Conversely, the second result states that we can associate with any given rational expectations equilibrium an incentive compatible self-fulfilling mechanism whose equilibrium allocation approximately coincides with the rational expectations equilibrium allocation. ESMT Berlin Marginality, dividends, and the value in games with externalities [pdf] (joint work with André Casajus) Abstract We introduce a notion of marginality for games with externalities. It rests on the idea that a player's contribution in an embedded coalition is measured by the change that results when the player is removed from the game. To evaluate the latter, we use the concept of restriction operators introduced by Dutta et al. (J Econ Theor, 145, 2010, 2380-2411). We provide a characterization result using efficiency, anonymity, and restriction marginality, which generalizes Young's characterization of the Shapley value. An application of our result yields a new characterization of the solution put forth by Macho-Stadler et al. (J Econ Theor, 135, 2007, 339-356) without linearity. Bank of Canada Non-Competing Data Intermediaries [pdf] Abstract I study competition among data intermediaries—data brokers and technology companies that collect consumer data and sell them to downstream firms. Under the assumption that firms use consumer data to extract rents, intermediaries have to compensate consumers for their personal data. I show that competition among intermediaries fails: If they offer high compensation to obtain more consumer data, consumers share their data with multiple intermediaries. This lowers the price of the data in the downstream market and hurts intermediaries. I show that this leads to multiple equilibria with different allocations of data among intermediaries. For example, there is a monopoly equilibrium where a single intermediary extracts the maximum possible surplus, even though the model excludes network externalities or returns to scale. 
There is also a continuum of equilibria with different degrees of data concentration. I show that data concentration benefits intermediaries and hurts consumers. This has potential implications for the regulation of dominant online platforms. Universidad de Chile Opinion Polarization under Search for Peers [pdf] (joint work with Axel Böhm, Aris Daniillidis) Abstract We propose a model of discrete-time opinion dynamics where every agent searches randomly for a peer with whom to update his opinion. Agents' opinions are assumed to be positions on the [-1,1] interval and their initial distribution is represented by a symmetric probability density. Agents must update their respective opinions once in each period with another agent, whom we call a peer. The update is unilateral and takes the average of the two opinions. We assume that updating opinions is costly for the agents and the cost is an increasing function of the distance between the agent and his peer. An agent can refuse to update with the first peer he encounters, due to the cost of updating, and instead search for another peer. This search has a cost c>0. An agent can search as many times as he wishes, each time paying the cost c. A new search is independent of any previous searches, and the agent can update only with the last agent he finds in that period. Once all agents find their peers at period t and revise their opinions, we pass to period t + 1, where agents are distributed according to a new distribution. We are interested in how this distribution evolves and converges. If the search cost is sufficiently high, then agents update with the first peer they find. We show that in this case the density converges to the Dirac delta. If the search cost is sufficiently low (so that agents do not accept some peers), then the distribution converges to an atomic distribution where the number of atoms and the variance of the limit distribution increase as the cost of search decreases. 
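The high-search-cost benchmark in the opinion dynamics abstract above (everyone averages with the first peer drawn) is easy to simulate. The following is an illustrative sketch under made-up assumptions — a uniform initial density, 2000 agents, and simultaneous updating — not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Initial opinions: a symmetric density on [-1, 1] (uniform, for illustration).
n = 2000
opinions = rng.uniform(-1.0, 1.0, size=n)

def step(x, rng):
    # High-search-cost case: each agent draws one random peer and moves
    # to the average of the two opinions (unilateral update).
    peers = rng.integers(0, len(x), size=len(x))
    return (x + x[peers]) / 2.0

var0 = opinions.var()
for _ in range(50):
    opinions = step(opinions, rng)

# The cross-sectional variance roughly halves each period, so after 50
# periods the distribution has collapsed toward a point mass (Dirac delta).
assert opinions.var() < 0.01 * var0
```

With a low search cost one would instead let each agent reject distant peers and redraw, which is the regime where the abstract reports convergence to an atomic distribution with several atoms.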
Bowdoin College The Power of Context in Game-Theoretic Models of Networks: Ideal Point Models with Social Interactions [pdf] (joint work with Mohammad T. Irfan, Tucker Gordon) Abstract Game theory has been widely used for modeling strategic behaviors in networked multiagent systems. However, the context within which these strategic behaviors take place has received limited attention. We present a model of strategic behavior in networks that incorporates the behavioral context, focusing on the contextual aspects of congressional voting. One salient predictive model in political science is the ideal point model, which assigns each senator and each bill a number on the real line representing the political spectrum. We extend the classical ideal point model with network-structured interactions among senators. In contrast to the ideal point model's prediction of individual voting behavior, we predict joint voting behaviors in a game-theoretic fashion. The consideration of context allows our model to outperform previous models that solely focus on the networked interactions with no contextual parameters. We focus on two fundamental problems: learning the model using real-world data and computing stable outcomes of the model with a view to predicting joint voting behaviors and identifying the most influential senators. We demonstrate the effectiveness of our model through experiments using data from the 114th U.S. Congress. King's College London One for all, all for one—von Neumann, Wald, Rawls, and Pareto [pdf] (joint work with Mehmet S. Ismail) Abstract Applications of the maximin criterion extend beyond economics to statistics, politics, philosophy, operations research, and engineering. However, the maximin criterion—be it von Neumann's, Wald's, or Rawls'—draws fierce criticism, in part because of its extremely pessimistic stance. 
I address the criticisms of the maximin criterion and propose a novel approach, dubbed the optimin criterion, which suggests that we should (Pareto) optimize—rather than maximize—the minimum under a reasonable social contract: Do not harm yourself for the sake of harming others. The optimin criterion (i) addresses criticisms of the maximin criterion, including Harsanyi's and Arrow's; (ii) helps explain experimental deviations from utilitarian concepts such as the Nash equilibrium; and (iii) provides insights into sustaining cooperation in noncooperative games. The optimin criterion not only coincides with (1) Wald's statistical decision theory when Nature is the antagonist, but also generalizes (2) stable matchings in matching models such as college admission problems and the housing market, (3) Nash equilibrium in n-person constant-sum games, and (4) the competitive equilibrium in the Arrow-Debreu economy. Moreover, every Nash equilibrium satisfies the optimin criterion in a suitably defined game. University of Wisconsin-Madison Rational Bubbles and Middlemen [pdf] (joint work with Yu Awaya, Makoto Watanabe) Abstract This paper develops a finite-period model of rational bubbles where trade of an asset takes place through a chain of middlemen. We show that there exists a unique equilibrium, and a bubble can occur due to higher-order uncertainty. Under reasonable assumptions, the equilibrium price is increasing and accelerating during bubbles although the fundamental value is constant over time. Bubbles may be detrimental to the economy; however, bubble-bursting policies affect agents' beliefs and it turns out that they have no effect on welfare. We also demonstrate that the possibility that middlemen obtain more information leads to larger bubbles. 
Institute of Economics, Academia Sinica Virtual implementation by bounded mechanisms: Complete information [pdf] (joint work with Michele Lombardi) Abstract When there are at least three agents, any social choice rule F is virtually implementable both in Nash and in rationalizable strategies by a bounded mechanism. No "tail-chasing" constructions, common in the constructive proofs of the literature, are used to ensure that undesired strategy combinations do not form a Nash equilibrium. University of Texas at Austin Competing to persuade a rationally inattentive agent [pdf] (joint work with Mark Whitmeyer) Abstract The standard Bayesian persuasion literature allows senders to design arbitrarily informative signal structures, and assumes that receivers costlessly process all information made available to them. This is an unrealistic assumption in many natural contexts, where agents may rationally choose to stay partly ignorant. We study a model of competitive information disclosure by two senders, with the twist that the receiver is allowed to garble each sender's experiment. The more she garbles, the lower her learning costs are. Interestingly, we find that as long as learning costs are not too low, there is an interval of prior means over which it is an equilibrium for both senders to offer full information. Furthermore, the interval expands as learning costs grow. This result stands in sharp contrast to Wei (2018), who shows that in this framework, providing full information is never optimal when there is a single sender. Intuitively, when there are two senders, information on one of them substitutes for information on the other, and further, learning costs lead the receiver to ignore some information available on each sender. Then, starting from a situation of full disclosure, if a sender deviates and restricts the receiver's learning, the receiver can compensate by using some of the surplus information on the other sender. 
She thereby maintains the probability of making a correct decision and leaves the deviating sender's payoff unaffected. We thus provide a novel insight into why competition might encourage information disclosure, and apply our results to the disclosure of clinical research outcomes by pharmaceutical companies to prescribing doctors. University of Central Florida Risk Dominance, Beliefs, and Equilibrium [pdf] Abstract The term "risk-dominance" has been precisely defined in a very narrow context, but has been used much more broadly. I provide a brief survey of the literature on risk-dominance, and note that risk-dominance is related to the difficulty of coordination. That suggests that it is not just an equilibrium selection concept, but signifies something outside of equilibrium. I present the "maximum entropy" approach of Jaynes (1957) to forming beliefs given linear constraints, and apply it to first- and second-order beliefs in two-player games in which coordination and payoff-maximization constraints have been loosened; I find that this favors risk-dominant equilibria where those are well-defined, but favors risk-dominant nonequilibria where those are intuitive. By way of explication, the model is compared to other models of agents who don't perfectly maximize their payoffs conditional on other agents' actions. A conclusion includes some thoughts on modeling and the enterprise of equilibrium selection. The University of Texas at Austin Disclosure of Sequential Evidence [pdf] Abstract I study the disclosure of history, which is modeled as a sequence of hard evidence. A sender sees a history about an unknown state of the world and tries to influence an uninformed receiver's belief. The receiver is uncertain about the length of the history, and the sender can conceal dated signals and disclose only the more recent ones. In any equilibrium, a set of the most recent signals that yields the maximal difference between the number of favorable and unfavorable signals is always disclosed. 
In addition, the sender sometimes discloses earlier and seemingly less favorable signals, but the receiver's belief is not influenced by this excess evidence. University of Hawaii Incentive-Compatible Simple Mechanisms [pdf] (joint work with Jung S You) Abstract We consider mechanisms for allocating a fixed amount of divisible resources among multiple agents when they have quasilinear preferences and can only report messages in a finite-dimensional space. We show that, in contrast with infinite-dimensional message spaces, efficiency is not compatible with implementation in dominant strategies. However, for weaker notions of implementation, such as implementation in Nash equilibrium, we find that a class of 'VCG-like' mechanisms is the only efficient selection in one-dimensional message spaces. The trifecta in mechanism design, namely efficiency, fairness and simplicity of implementation, is achieved via a mechanism that we introduce and characterize in this paper. Norges Bank Dividend Payouts and Rollover Crises [pdf] (joint work with Plamen T Nenov) Abstract We study dividend payouts when banks face coordination-based rollover crises. Banks in the model can use dividends both to risk-shift and to signal their available liquidity to short-term lenders, thus influencing the lenders' actions. In the unique equilibrium, both channels induce banks to pay higher dividends than in the absence of a rollover crisis. In our model, banks exert an informational externality on other banks via the inferences and actions of lenders. Optimal dividend regulation that corrects this externality and promotes financial stability includes a binding cap on dividends. We also discuss testable implications of our theory. Rutgers University Approximating Nash Equilibrium Via Multilinear Minimax [pdf] Abstract On the one hand, we state Nash equilibrium (NE) as a formal theorem on multilinear forms and give a pedagogically simple proof, free of game theory terminology. 
On the other hand, inspired by this formalism, we prove a multilinear minimax theorem, a generalization of von Neumann's bilinear minimax theorem. Next, we relate the two theorems by proving that the solution of a multilinear minimax problem, computable via linear programming, serves as an approximation to a Nash equilibrium point, where its multilinear value provides an upper bound on a convex combination of expected payoffs. Furthermore, each positive probability vector, once assigned to the set of players, induces a diagonally-scaled multilinear minimax optimization with a corresponding approximation to NE. In summary, in this note we exhibit an infinity of multilinear minimax optimization problems, each of which provides a polynomial-time computable approximation to a Nash equilibrium point, which is known to be difficult to compute. The theoretical and practical qualities of these approximations are the subject of further investigation. University of Western Ontario Fairness versus Favoritism in Conflict Mediation [pdf] (joint work with Charles Zheng) Abstract A mediator proposes to two adversaries a peaceful split of their contested good to avoid a conflict, modeled as an all-pay auction. The proposed split can manipulate the outcome of the conflict by influencing the adversaries' posterior beliefs when they reject it. Despite the adversaries being ex ante identical and having equal welfare weights, the socially optimal proposal is either a biased split such that the favored adversary always accepts it, or the equal split. The former outperforms the latter if and only if an adversary's prior probability of being weak in conflict is below an exogenous threshold. Economics Department, UWO Knitting and Ironing: Reducing Inequalities via Auctions [pdf] (joint work with Charles Z. Zheng) Abstract This paper characterizes all the mechanisms that achieve ex ante Pareto optimality via voluntary wealth transfers induced by auctions.
Two items, one good, the other bad, are to be assigned to bidders who value money differently, and the taker of the bad is compensated with proceeds from the good. Pareto-improving transfers occur indirectly when bidders who value money less buy the good, and those who value money more are paid to take the bad. We introduce a new concept, the two-part operator, to integrate a bidder’s countervailing information rents, one in buying the good, the other in taking the bad. We bisect the optimal mechanism problem, the objective of which is nonlinear, into two linear programs, solve each via ironing, and knit the two into the solution of the original problem. We find that any Pareto optimum corresponds to the concatenation of two auctions, each determined by a two-part operator derived from such procedures. The optimal mechanism breaks the linkage between the hierarchy of types and the hierarchy of surpluses when the budget balance condition is binding. Stony Brook University Dynamic Tournament Model of Private Tutoring Expenditure Abstract How does the hierarchy of colleges affect households’ pre-tertiary private tutoring expenditure? While empirical evidence suggests that the main purpose of private tutoring expenditure is to win the college admission competition, it is observed that parents of students with higher school performance spend more on tutoring. At the same time, even parents of students with poor school performance spend on average 5% of their income on private tutoring. To answer this question and to capture the distribution of private tutoring expenditure, I specify an estimable dynamic tournament model which incorporates the college admission competition between households. The model allows for endogenous cutoffs which are determined by the private tutoring decisions of N households.
Using the Korean Education Longitudinal Study 2005, which has detailed information on households’ education expenditure, I estimate the dynamic tournament model using simulated maximum likelihood. Based on the structural estimates, I conduct counterfactual experiments related to the college hierarchy to examine the changes in the distribution of private tutoring expenditure. Massachusetts Institute of Technology Can Rescues by Banks Replace Costly Bail-Outs in Financial Networks? [pdf] Abstract I model rescue formation in financial networks, where interbank obligations create interdependencies in shareholders’ equity. I show that welfare-maximizing networks are symmetrically connected through intermediate levels of interbank liabilities. In a coalition formation framework, welfare-maximizing networks eliminate the well-known trade-off between risk sharing and systemic fragility in financial networks. Endogenously arising rescues show that potential contagiousness does not necessarily imply financial instability. Instead, financial stability is indicated by (i) potential bankruptcy costs internalized by banks, and (ii) the loss absorption capacity of the network (i.e., banks’ aggregate capital). The results provide general insights into coalition formation in networks facing systemic threats. Maastricht University The Midpoint Constrained Egalitarian Bargaining Solution [pdf] (joint work with Shiran Rachmilevitch) Abstract A payoff allocation in a bargaining problem is midpoint dominant if each player obtains at least one n-th of her ideal payoff. The egalitarian solution of a bargaining problem may select a payoff configuration which is not midpoint dominant. We propose and characterize the solution which selects, for each bargaining problem, the feasible allocation that is closest to the egalitarian allocation, subject to being midpoint dominant.
Our main axiom, midpoint monotonicity, is new to the literature; it imposes the standard monotonicity requirement whenever doing so does not result in selecting an allocation which is not midpoint dominant. In order to prove our main result we develop a general extension theorem for bargaining solutions that are order-preserving with respect to any order on the set of bargaining problems. The University of Massachusetts Games Where Players Offer Games to Play: A Foundation of Market Design [pdf] Abstract US federal agency rulemakings, such as the FCC Broadcast Incentive Auction, require public commenting as stipulated by the Administrative Procedure Act and often involve coalitional bargaining among stakeholders. The current noncooperative game theory doctrine, which assumes that the participants take the extensive form of the game as given and considers only individual behavior, does not provide an accurate description of real-world rulemaking processes. This paper develops a model where participants propose mechanisms before they agree to commit, and choose to play the core-selecting mechanism in equilibrium. This result provides a theoretical foundation for the finding of Roth (1991) that successful mechanisms in the real world are the ones that produce stable matchings, and also for the finding that Vickrey auctions are only rarely used. The University of Massachusetts On the Virtue of Being Regular and Predictable: A Structural Analysis of the Primary Dealer System in the United States Treasury Auctions [pdf] Abstract We analyze the policy question of whether the US Treasury should maintain the current security distribution mechanism of the primary dealer system in the Treasury market to achieve the debt management objective of the lowest funding cost over time. We study data on 3,790 auctions of Treasury securities issued between May 2003 and February 2018 (gross total issuance: $100.5 trillion).
We identify potential increases in auction high-rate volatility due to a decline in primary dealer activity as a policy concern. We then compare the effectiveness of the primary dealer system, the direct bidding system, and the syndicate bidding system in addressing this concern, using a novel asymptotic approximation method that does not depend on equilibrium selection or normality of the bidder value distribution. We find that the primary dealer system achieves significantly lower funding cost volatility while maintaining an equal level of costs, and thus contributes to the debt management objective. Maastricht University Persuading Voters With Private Communication Strategies [pdf] (joint work with P. Jean-Jacques Herings, Dominik Karos) Abstract We consider a multiple receiver Bayesian persuasion model, where a Sender wants to implement a new proposal and Receivers with homogeneous preferences vote either for or against the proposal. Prior to the vote, Sender chooses a communication strategy which sends private correlated signals to Receivers. First, we show that if Receivers vote sincerely, Sender can improve upon a public communication strategy in terms of expected utility by employing private signals. However, under the optimal communication strategy, sincere voting is not an equilibrium. In order to overcome this issue, we characterize the set of communication strategies under which sincere voting constitutes a Bayes Nash equilibrium and determine the optimal communication strategy. The University of Nebraska-Lincoln Efficient and Neutral Mechanisms in Almost Ex Ante Bargaining Problems [pdf] Abstract I consider two-person bargaining problems in which the mechanism is selected at the almost ex ante stage--when there is some positive probability that players may have learned their private types--and the chosen mechanism is implemented at the interim stage.
For these problems, I define almost ex ante incentive efficient mechanisms and apply the concept of neutral optima. I show that those mechanisms may not be ex ante incentive efficient. This paper suggests that ex ante incentive efficient mechanisms are not robust to a perturbation of the ex ante informational structure at the time of mechanism selection. The University of Nebraska-Lincoln A Noncooperative Foundation of the Neutral Bargaining Solution [pdf] Abstract This paper studies Myerson's neutral bargaining solution for a class of Bayesian bargaining problems in which the solution is unique. For this class of examples, I consider a noncooperative mechanism-selection game. I find that all of the interim incentive efficient mechanisms can be supported as sequential equilibria. Further, standard refinement concepts and selection criteria do not restrict the large set of interim Pareto-undominated sequential equilibria. I provide a noncooperative foundation for the neutral bargaining solution by characterizing the solution as a unique coherent equilibrium allocation. Virginia Tech Equilibrium configurations in the heterogeneous model of signed network formation [pdf] Abstract In a model of signed network formation as proposed by Hiller (2017), this paper studies the possible Nash equilibrium configurations. I characterize the conditions under which complete networks or segregation into two uneven groups can be sustained in equilibrium in the case of homogeneous agents. I also specify the Nash equilibria in the case of heterogeneous agents. In the model with four agents and two types, I find four categories of possible network configurations. A strong (weak) player is a player with greater (lower) exogenous intrinsic strength. The first Nash equilibrium configuration obtains when everyone is friends with everyone else. The second Nash equilibrium configuration is such that players of the same type coalesce.
In the third configuration, one of the players is bullied by the others. In the fourth configuration, there exist three groups consisting respectively of two strong players, one weak player, and one strong player. I further generalize the first and second Nash equilibrium configurations to the n-player case, and I derive the specific conditions under which they arise in a Nash equilibrium. University of Pittsburgh Characterization, Existence, and Pareto Optimality in Insurance Markets with Asymmetric Information with Endogenous and Asymmetric Disclosures: Revisiting Rothschild-Stiglitz [docx] (joint work with Joseph Stiglitz, Jungyoll Yun) Abstract We study the Rothschild-Stiglitz model of competitive insurance markets with endogenous information disclosure by both firms and consumers. We show that an equilibrium always exists (even without the single-crossing property) and characterize the unique equilibrium allocation. With two types of consumers the outcome is particularly simple, consisting of a pooling allocation which maximizes the well-being of the low-risk individual (along the zero-profit pooling line) plus a supplemental (undisclosed and nonexclusive) contract that brings the high-risk individual to full insurance (at his own odds). We show that this outcome is extremely robust and Pareto efficient. University of Pittsburgh Mediated Persuasion [pdf] Abstract We study a game of strategic information design between a sender, who chooses state-dependent information structures, a mediator who can then garble the signals generated from these structures, and a receiver who takes an action after observing the signal generated by the first two players. We characterize sufficient conditions for information revelation, compare outcomes with and without a mediator, and provide comparative statics with regard to the preferences of the sender and the mediator.
We also provide novel conceptual and computational insights about the set of feasible posterior beliefs that the sender can induce, and use these results to shed light on equilibrium outcomes. The sender never benefits from mediation, while the receiver might. The receiver benefits when the mediator’s preferences are not perfectly aligned with hers; rather, the mediator should prefer more information revelation than the sender, but less than perfect revelation. Princeton University Information Structures and Information Aggregation in Threshold Equilibria in Elections [pdf] Abstract I study a model of information aggregation in elections with multiple states of the world and multiple signals. I focus on a particularly simple class of equilibria - threshold equilibria - and completely characterize information aggregation within this class. In particular, I identify conditions on the distributions of the signals that are necessary and sufficient for information aggregation in every sequence of threshold equilibria, as well as simple conditions that are sufficient but not necessary for information aggregation in threshold equilibria. I also identify (generic) conditions that are necessary and sufficient for information not to be aggregated in any sequence of threshold equilibria. As a consequence, my analysis provides sufficient conditions for the existence of a sequence of equilibria that does not aggregate information. Harvard University A Perfectly Robust Approach to Multiperiod Matching Problems [pdf] Abstract Many two-sided matching situations involve multiperiod interaction. Traditional cooperative solutions, such as stability and the core, often identify unintuitive outcomes (or are empty) when applied to such markets. As an alternative, this study proposes the criterion of perfect alpha-stability.
An outcome is perfect alpha-stable if no coalition prefers an alternative assignment in any period that is superior for all plausible market continuations. Behaviorally, the solution combines foresight about the future with a robust evaluation of contemporaneous outcomes. A perfect alpha-stable matching exists, even when preferences exhibit inter-temporal complementarities. A stronger solution, the perfect alpha-core, is also investigated. Extensions to markets with arrivals and departures, transferable utility, and many-to-one assignments are proposed. Pennsylvania State University On Dynamic Pricing [pdf] (joint work with Ilia Krasikov, Rohit Lamba) Abstract This paper studies a canonical model of dynamic price discrimination, in which firms can endogenously discriminate amongst consumers based on the timing of information arrival and/or the timing of purchase. A seller and buyer trade repeatedly. The buyer's valuation for the trade is private information and evolves over time according to a renewal Markov process. The seller offers a dynamic pricing contract which options a sequence of forwards. As a first step, we characterize what this relatively simple dynamic pricing contract achieves. We then show that this contract is (a) the optimum when a single object is sold at a fixed time and (b) the optimum under strong monotonicity in the repeated sales model. The full optimum, however, may use buybacks, which our dynamic pricing instruments do not allow. Moreover, we show that the optimum is backloaded and provide a theoretical bound on the fraction of the optimal revenue that the seller can extract using our mechanism. The construction of the mechanism and bounds is then extended to multiple players to study repeated auctions. At every step of the analysis, a mapping is established between the pricing model (indirect mechanisms) and general direct mechanisms.
In this process, novel tools are developed to study dynamic mechanism design when global incentive constraints bind. Tsinghua University Hierarchical Bayesian Persuasion [pdf] (joint work with Zhonghong Kuang, Jaimie W. Lien, Jie Zheng) Abstract We study a hierarchical Bayesian persuasion game with a sender, a receiver, and several potential intermediaries, generalizing the framework of Kamenica and Gentzkow (2011, AER). The sender must be persuasive through the hierarchy of intermediaries in order to reach the final receiver, whose action affects all players’ payoffs. The intermediaries care not only about the true state of the world and the receiver’s action, but also about their reputations, measured by whether the receiver’s action is consistent with their recommendation. We characterize the perfect Bayesian equilibrium for the optimal persuasion strategy, and show that the persuasion game has multiple equilibria but a unique payoff outcome. Among the equilibria, two natural persuasion strategies in the hierarchy arise: persuading the intermediary who is immediately above one’s own position, and persuading the least persuadable individual in the hierarchy. As major extensions of the main model, we analyze scenarios in which intermediaries have private information, have endogenous reputations, or have an outside option. We also discuss, as minor extensions, the endogenous choice of persuasion path, parallel persuasion, and costly persuasion. The results provide insights for settings where persuasion is prominent in a hierarchical structure, such as corporate management, higher education admissions, job promotion, and legal proceedings. University of Economics, Prague Observing Actions in Bayesian Games [pdf] (joint work with Dominik Grafenhofer) Abstract We study Bayesian coordination games where agents receive noisy private information over the game's payoff structure, and over each other's actions.
If private information over actions is precise, we find that agents can coordinate on multiple equilibria. If private information over actions is of low quality, equilibrium uniqueness obtains, as in a standard global games setting. The current model, with its flexible information structure, can thus be used to study phenomena such as bank runs, currency crises, recessions, riots, and revolutions, where agents rely on information over each other's actions. Singapore Management University Maskin Meets Abreu and Matsushima [pdf] (joint work with Yi-Chun Chen, Yifei Sun, and Siyang Xiong) Abstract We study the classical Nash implementation problem due to Maskin (1999), but allow for the use of lotteries and monetary transfers as in Abreu and Matsushima (1992, 1994). We therefore unify two well-established but somewhat orthogonal approaches in implementation theory. We show that Maskin monotonicity is a necessary and sufficient condition for mixed-strategy Nash implementation by a finite (albeit indirect) mechanism. In contrast to previous papers, our approach possesses the following appealing features simultaneously: finite mechanisms (with no integer or modulo game) are used; mixed strategies are handled explicitly; neither transfers nor bad outcomes are used in equilibrium; our mechanism is robust to information perturbations; and the size of off-the-equilibrium transfers can be made arbitrarily small. Finally, our result can be extended to infinite/continuous settings and ordinal settings. Lehigh University Mediated Talk: An Experiment (joint work with Andreas Blume and Wooyoung Lim) Abstract Theory suggests that mediation has the potential to improve information sharing. This paper experimentally investigates whether and how this potential can be realized. It is the first such study in a cheap-talk environment. We find that mediation encourages players to use separating strategies.
Behavior gravitates toward pooling with direct talk and toward separation with mediated talk. This difference in behavior translates into a moderate payoff advantage of mediated over direct talk. There are systematic departures from the equilibrium prediction, characterized by over-communication by senders in the initial rounds of direct talk, stable under-communication by senders under mediated talk, and over-interpretation (attributing too much information to messages) by receivers under both direct and mediated talk. Rutgers University The Multilinear Minimax Relaxation of Bimatrix Games and Comparison with Nash Equilibria via Lemke-Howson [pdf] (joint work with Bahman Kalantari) Abstract It is known that Nash equilibrium computation is PPAD-complete, shown first by Daskalakis, Goldberg, and Papadimitriou for 4 or more players, then by the same authors for 3 players, and even for the bimatrix case by Chen and Deng. On the other hand, Dubey showed that Nash equilibria of games with smooth payoff functions are generally Pareto-inefficient. In particular, this means that a strategy profile, possibly mixed, that is not a Nash equilibrium may admit a higher payoff for both players than a Nash equilibrium. Kalantari has described a multilinear minimax relaxation (MMR) that provides an approximation to a convex combination of expected payoffs in any Nash equilibrium via linear programming. In this paper, we study this relaxation for the bimatrix game, with payoff matrices normalized to values between 0 and 1, solving its corresponding LP formulation and comparing its performance to the Lemke-Howson algorithm. We also give a game-theoretic interpretation of the MMR formulation for the bimatrix game which involves a meta-player.
Our relaxation has the following theoretical advantages: (1) it can be computed in polynomial time; (2) for at least one player, the computed MMR payoff is at least as good as any Nash equilibrium payoff; (3) there exists a computable convex scaling of the payoff matrices so that the corresponding expected payoffs are equal. Computationally, we have compared our approach with the state-of-the-art implementation of the Lemke-Howson algorithm. In problems with up to 150 actions, apparently the guaranteed computational limit of Lemke-Howson, we observe the following advantages: (i) MMR outperformed Lemke-Howson in time complexity; (ii) in about 80% of the cases the MMR payoffs for both players are better than any Nash equilibria; (iii) in the remaining 20%, while one player's payoff is better than any Nash equilibrium payoff, the other player's payoff is only within a relative error of 17%. In summary, MMR is a strong relaxation for Nash. Economics Institute of the Czech Academy of Sciences Preferences, Beliefs, and Strategic Plays in Games [pdf] (joint work with Rudolf Kerschbamer and Jianying Qiu) Abstract We examine strategic plays in games while controlling for distributional preferences and beliefs. We elicit players’ distributional preferences before they play a series of two-person strategic games. We also elicit players’ beliefs about their opponents’ strategies. Our control of distributional preferences does not rely on any particular parametric forms; it is rather based on revealed preferences. The payoff vectors in the strategic games are the same as the payoff vectors in the distributional preferences task. This allows us to examine whether preferences elicited in a static scenario - dictator-game-like situations - predict choices in strategic games. The first-order beliefs, combined with the payoff features of some of the normal-form games, allow us to examine how beliefs might enter preferences directly, as suggested by psychological game theories.
Finally, since players in strategic games know their opponent’s choices in the distributional preferences tests, our design allows us to examine whether this information is used in making one’s own choice. In particular, we explore indirect reciprocity, i.e., do players behave nicely toward people who are nice to others? Experimental results show that the rational equilibrium prediction performs no better than randomness, whereas choices in the distributional preferences task are strongly consistent with choices in strategic games, both at the population level and at the individual level. We also find supporting evidence that beliefs could enter preferences directly. Finally, there is some evidence that people are nice to people who are nice to others. Bielefeld University; University of Paris 1 Anti-conformism in the threshold model of collective behavior [pdf] (joint work with Michel Grabisch, Fen Li) Abstract We provide a first study of the threshold model where both conformist and anti-conformist agents coexist. The paper is in the line of previous work by the first author (Grabisch et al., 2018), whose results are used at some points in the present paper. Our study essentially addresses the following question: given a society of agents with a certain topology of the network linking them, and a mechanism of influence for each agent, how will the behavior/opinion of the agents evolve over time, and in particular, can it be expected to converge to some stable situation, and if so, which one? We are also interested in the existence of cascade effects, as these may constitute an undesirable phenomenon in collective behavior. We divide our study into two parts. In the first, we study the threshold model on a fixed complete network, where everyone is connected to everyone else, as in the work of Granovetter (1978).
We study the cases of a uniform and a Gaussian threshold distribution, and finally give a result for arbitrary distributions, supposing there is one type of anti-conformist. In the second part, the graph is no longer complete, and we suppose that the neighborhood of an agent is random, drawn at each time step from a distribution. We distinguish the case where the degree (number of links) of an agent is fixed from the case of an arbitrary degree distribution. Wuhan University The folk theorem for repeated games with time-dependent discounting [pdf] Abstract This paper defines a general framework to study infinitely repeated games with time-dependent discounting, in which we distinguish and discuss both time-consistent and time-inconsistent preferences. To study the long-term properties of repeated games, we introduce an asymptotic condition to characterize the fact that players become more and more patient, that is, the discount factors at all stages uniformly converge to 1. Two types of folk theorems are proven under perfect observation of past actions and without the public randomization assumption: the asymptotic one, i.e., the equilibrium payoff set converges to the individually rational set as players become patient, and the uniform one, i.e., any payoff in the individually rational set is sustained by a single strategy profile which is an approximate subgame perfect Nash equilibrium in all games with sufficiently patient discount factors. As corollaries, our time-inconsistency results imply the corresponding folk theorems with quasi-hyperbolic discounting. Yale University Logical Differencing in Network Formation Models under Non-Transferable Utilities [pdf] (joint work with Wayne Yuan Gao, Sheng Xu) Abstract This paper considers a semiparametric model of dyadic network formation under nontransferable utilities.
Such dyadic links arise frequently in real-world social interactions that require bilateral consent, which by its nature induces additive non-separability. In our model, we show how two-way fixed effects (corresponding to unobserved individual heterogeneity in sociability) can be canceled out without requiring additivity. The approach uses a new method we call logical differencing. The key idea is to construct an observable event involving the intersection of two mutually exclusive restrictions on the fixed effects, where these restrictions are obtained by taking the logical contraposition of multivariate monotonicity. Based on this identification strategy, we provide consistent estimates of the network formation model. Finite-sample performance is analyzed in a simulation study. An empirical illustration using the risk-sharing data of Nyakatoke is presented. Motivated by the empirical findings, we discuss how to differentiate homophily from assortativity. Cowles Foundation Dynamic Obstruction [pdf] (joint work with German Gieczewski, Christopher Li) Abstract We study a model of policy experimentation by an incumbent politician who seeks to be reelected, but faces the prospect of obstruction by the opposing party. In the main variant of the model, the incumbent initiates a policy reform as early as possible if initial support is moderate, delays its implementation if support is high, and does not attempt it at all if support is low. The prospect of obstruction may dissuade the incumbent from initiating a policy reform, but does not change the timing conditional on a reform being initiated, and the opposition party ramps up its obstruction as the next election approaches.
McGill University Comparative Statics of Product Disclosure Statements [pdf] (joint work with Anastasia Burkovskaya, Jian Li) Abstract Different ways of framing the same information affect consumers’ final decisions, implying that firms should pay close attention to how product information is presented to the consumer. This paper investigates the implications of a State Aggregation Subjective Expected Utility (SASEU) agent’s behavior for the Product Disclosure Statement (PDS) of an insurance company. An agent is SASEU if she is not neutral to information frames. We analyze the changes in the insurer’s profit from aggregating different sections of the current PDS together, and provide (1) quantitative results in the case when consumer preferences are known, and (2) monotone comparative statics characterized by simple properties of the agent’s event aggregation functional. The Chinese University of Hong Kong Strategic Post-exam Preference Submission in the School Choice Game [pdf] (joint work with Vladimir Mazalov, Artem Sedakov, Jaimie W. Lien, and Jie Zheng) Abstract We consider a college admissions problem in which students take an exam before submitting their college applications, under heterogeneous abilities and homogeneous preferences. Under an exam-based admissions procedure, students play an application game with each other with knowledge of their own scores while being uncertain about other students’ scores. We provide a framework and solve for the equilibria of this game, which are in the class of threshold strategies with respect to one’s own score. In some situations, a result can occur in equilibrium such that students with mediocre performance apply to and are accepted at better colleges than their higher-performing peers. This can be understood as a type of bluffing over one’s exam score, whereby other students avoid the better college out of fear of an undesirable admissions outcome.
Such strategies may result in socially inefficient matches between students and colleges. Colgate University Optimal Information Design for Reputation Building    [pdf] Abstract Conventional wisdom holds that ratings and review platforms serve consumers best when they reveal the maximum amount of information to consumers at all times. This paper shows, within a stylized model, that this need not be true. The channel is that partial information may incentivize reputation-minded firms to invest more in quality. Committing to publish all reviews can lead to a "cold start problem," where there is a failure to attract early adopters, thereby shutting down the source of information. To address this problem, I use a dynamic Bayesian persuasion model in which a long-run firm with a persistent type interacts with a sequence of short-run consumers. When the platform designs the public information policy to maximize total consumer welfare, there is a three-phase policy that converges to the optimum as reviews become frequent. In the first phase, the platform reveals reviews with an interior probability and consumers learn about the firm. In the second phase, the consumers observe all reviews, and the firm always produces high quality. Finally, in the third phase, new reviews are hidden entirely and the firm produces low quality without damage to its reputation. When the designer has weaker commitment power and may revise its policy at a small cost, a repeated three-phase policy is robust to revisions and remains optimal. Michigan State University Convention and Coalitions in Repeated Games    [pdf] (joint work with Nageeb Ali) Abstract We develop a theory of repeated interaction for coalitional behavior. We consider stage games in which both individuals and coalitions may jointly deviate. However, coalition members cannot commit to long-run behavior (on or off the path) and are farsighted, recognizing that today's actions influence tomorrow's behavior. 
We evaluate the degree to which history-dependence of this form can ward off coalitional deviations. If monitoring is perfect, every feasible and strictly individually rational payoff can be supported by history-dependent conventions. By contrast, if players can make secret side-payments to each other, every coalition achieves a coalitional minimax value. Southern University of Science and Technology Multipartite Games and Evolutionary Stable Matching    [pdf] Abstract In the matching theory of Gale and Shapley, every bipartite matching game has a stable matching, but a game beyond the bipartite case may not. In this paper, we recover the universality of stable matching for multipartite games by generalizing the matching game so that players may be either agents of one side or coalitions of such agents. A matching dynamics can be developed by introducing a series of matchings in successive generations. This dynamics yields a notion of evolutionarily stable matching, in which the matching is refined by stabilization in each generation. In the dynamic theory of matching, every matching game could have an evolutionarily stable matching. University of Michigan Robust Predictions in Bargaining with Incomplete Information    [pdf] Abstract This paper studies robust predictions when players may have additional private information that is unknown to an outside analyst, in an otherwise standard Coasian bargaining model between a seller and a buyer with private values. 
The robust predictions in the frequent-offer limit depend crucially on the type of information that players may have: (i) when the seller has additional information about the buyer's value and this fact is common knowledge between the players, the limiting equilibrium outcomes are always efficient and any surplus division between the seller and the buyer is possible; (ii) when the buyer is uncertain about the seller's information, any feasible and individually rational payoff vector can arise as a limiting equilibrium payoff. The results have direct policy implications regarding markets for information and privacy. Toulouse School of Economics Learning while Trading: Experimentation and Coasean Dynamics    [pdf] Abstract I study dynamic bilateral bargaining with one-sided incomplete information when superior outside opportunities may arrive during negotiations. Gains from trade are ex ante uncertain: in a good-match market environment, outside opportunities are not available; in a bad-match market environment, superior outside opportunities stochastically arrive for either or both parties. The two parties begin their negotiations with the same belief about the type of the market environment. Arrivals are public and learning about the market environment is common. One party, the seller, makes price offers at every instant to the other party, the buyer. The seller has no commitment power and the buyer is privately informed about his own valuation. This gives rise to rich bargaining dynamics. In equilibrium, there is either an initial period with no trade or trade starts with a burst. Afterward, the seller screens out buyer types one by one as uncertainty about the market environment is resolved. Delay is always present, but it is inefficient only if valuations are interdependent. Whether prices increase or decrease over time depends on which party has a higher option value of waiting to learn. 
When the seller can clear the market in finite time at a positive price, prices are higher than the competitive price. This, however, need not be at odds with efficiency. Applications include durable-good monopoly without commitment, wage bargaining in markets for skilled workers, and takeover negotiations. National University of Singapore Optimal Selling Mechanisms with Buyer Price Search    [pdf] (joint work with Jingfeng Lu, Zijia Wang) Abstract We study optimal dynamic selling mechanisms when the buyers are initially and privately endowed with their values of the object on sale, and they can conduct costless search for their second-stage outside prices. Buyers' outside prices are independent of their values. With private outside prices, second-stage incentive compatibility requires only semi-monotonicity of the allocation rule, yet it is violated by the optimal design under public outside prices; moreover, the off-equilibrium-path second-stage best strategy cannot be pinned down. Thus, the privacy of the second-stage information matters for optimal designs, and deriving the optimal mechanism with private buyer options requires an innovative method. The revenue-maximizing mechanism with private options is established by conducting a modified Myerson convexification procedure to regularize the buyers' virtual values in the dimension of the outside prices. The optimal mechanism requires a non-refundable deposit at the first stage and allocates the object to the buyer with the highest nonnegative regularized virtual value. The other buyers take their outside options if and only if the outside prices are lower than their values. When there is only one buyer, the seller merely offers a first-stage fixed price if the outside price is the buyer's private information; however, if the outside price is public, the seller offers a first-stage fixed price combined with a second-stage price that is matched to the buyer's outside option. 
École Polytechnique Reputation and Social Learning    [pdf] (joint work with Ekaterina Logina and Konstantin Shamruk) Abstract This paper focuses on the interplay between social learning and reputation dynamics. Taking the herding model of Bikhchandani et al. (1992), we endogenize the state of nature as a choice of quality made by the long-run player. We construct a Markov perfect equilibrium in which both cascade regions still exist, and once beliefs are stuck there, incentives to build reputation vanish. Investment in quality follows an inverse U-shaped pattern: depending on whether the public belief is tilted in favor of or against the long-run player, current trust may either destroy or boost subsequent reputation building. Randomization by the long-run player tends to slow down social learning, but the public belief eventually reaches one of the two cascade regions. Unlike in the canonical information-cascades setup, greater private signal precision may reduce the players' responsiveness to their information. University of Warwick Banking Competition and Stability: The Role of Leverage    [pdf] (joint work with Xavier Freixas) Abstract This paper re-examines the classical issue of the possible trade-offs between banking competition and financial stability, by highlighting the role of bank leverage. We show that when loan market competition reduces entrepreneurs' moral hazard and loan portfolio risks, a bank's insolvency risk increases only if its leverage is sufficiently high. When bank leverage is endogenous, the relationship between competition and stability crucially depends on the financial safety net subsidies that reduce the cost of banks' debt and increase their leverage. Our analysis helps to reconcile seemingly contradictory empirical results on the issue and generates new testable hypotheses. 
Interdisciplinary Center (IDC), Herzliya Attacking a nuclear facility with a noisy intelligence and Bayesian agents    [pdf] (joint work with Yair Tauman) Abstract We study the role of noisy intelligence in a game between two Bayesian rival countries. Player 1 (he) wishes to develop a nuclear bomb. Player 2 (she) aims to prevent him from building it by attacking his facilities. Player 1 is asked to open his facility for inspection. If he does not possess the bomb, he can avoid Player 2's potential attack by opening his facility to reveal this. Player 1 incurs a cost for allowing inspections. Player 1's strategies are: B (build the bomb), NBO (not build the bomb and open for inspection), or NB (not build and not open). If Player 1 refuses to open, Player 2 can either attack or not attack. If Player 1 opens, Player 2 will not attack. Player 2 operates an intelligence system (IS) to spy on Player 1. The IS sends either signal b or nb, meaning that Player 1 has the bomb or not, respectively. The precision of the IS is alpha, with 1/2 < alpha < 1. It is shown that there exists a unique perfect Bayesian equilibrium with the following characteristics: (i) there exists a threshold c0 such that a Player 1 with inspection cost below c0 chooses in equilibrium to open his facility for inspection; if the cost exceeds c0, he mixes between the strategies B and NB. (ii) There exists a threshold alpha0 such that Player 1 assigns higher probability to B if his estimate of the IS's precision is below alpha0. In this case, Player 2 ignores the signal and attacks Player 1 when the IS is not too accurate, and follows the signal only when the IS is relatively accurate. (iii) If Player 1's estimate of the IS's precision is above alpha0, Player 1 acts conservatively (assigns lower probability to B). 
In this case, Player 2 ignores the signal and does not attack Player 1 if the IS is not too precise, and follows the signal only if it is relatively accurate. HEC Paris Learning in Repeated Routing Games with Symmetric Incomplete Information    [pdf] (joint work with Marco Scarsini and Tristan Tomala) Abstract We consider a model of repeated routing games under symmetric incomplete information with dynamic populations. It consists of a routing game in which costs are determined by an unknown state of the world. At each stage, a demand of random size routes over the network and equilibrium costs are observed. Our objective is to study how information aggregates according to the equilibrium dynamics and to what extent agents can learn about the state of the world. We define several forms of learning (whether agents eventually learn the state of the world or act as in full information) and present a simple example which shows that in such a framework, with a non-atomic set of players, learning may fail and routing may be inefficient even in the weaker sense. This contrasts with the atomic case, in which a folk theorem ensures players can learn the game parameters. In a non-atomic setup, learning cannot be ensured unless there is an additional source of randomness to incentivize exploration of the network. We show that this role can be fulfilled by a variable and unbounded demand size. We prove that, under a condition on the network topology and unboundedness of costs, a variable and unbounded demand is sufficient to ensure learning. This result holds whether the state space is finite or not. We additionally provide examples to show that these conditions are tight. We finally connect our work with the social learning literature and show that if, instead of having random demand size, costs are observed with unbounded noise, then learning does not occur in the general case unless some limited recall assumption is made. 
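The role of demand size in a setting like the routing abstract above can be illustrated with a minimal textbook example; the two-link network, the linear cost functions, and the state entering one link's cost as an intercept are illustrative assumptions of this sketch, not the paper's model. A demand d splits between link 1 with cost c1(x) = x and link 2 with cost c2(x) = a + x, where a is the unknown state; at a Wardrop equilibrium all used links have equal cost.

```python
# Illustrative sketch (not from the paper): Wardrop equilibrium on a
# two-link network, where the unknown "state of the world" is the
# intercept a of link 2's cost function c2(x) = a + x; link 1 has c1(x) = x.
def wardrop_flows(d: float, a: float):
    """Return equilibrium flows (x1, x2) for demand d and state a."""
    if d <= a:
        return d, 0.0                      # link 2 too costly: all flow on link 1
    return (d + a) / 2, (d - a) / 2        # both links used: x1 = a + x2 equalizes costs

def equilibrium_cost(d: float, a: float) -> float:
    """Common cost of used links (link 1 always carries flow)."""
    x1, _ = wardrop_flows(d, a)
    return x1

# Small demand: equilibrium cost equals d for any state -> nothing is learned.
print(equilibrium_cost(0.5, a=1.0), equilibrium_cost(0.5, a=2.0))
# Large demand: equilibrium cost is (d + a) / 2 -> observing it reveals a.
print(equilibrium_cost(4.0, a=1.0), equilibrium_cost(4.0, a=2.0))
```

With small demand only link 1 is used and the observed cost is uninformative about a; only a large enough demand pushes flow onto the second link and makes equilibrium costs reveal the state, echoing the paper's point that variable and unbounded demand can supply the exploration needed for learning.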
PUC Chile Debt and information aggregation in financial markets    [pdf] (joint work with Ana Elisa Pereira) Abstract We analyze how the capital structure of a firm affects the information revealed by secondary financial markets. Firms use information contained in market activity to guide real investment decisions. We show that, if markets are sufficiently liquid, excessively high or low levels of debt hinder the informativeness of financial markets and reduce firm value through the feedback from prices to real decisions. In this case, an intermediate level of debt maximizes the value of the firm. When markets are illiquid, intermediate levels of debt dilute incentives for trading on information, and extreme levels of debt are often optimal. Maastricht University Strategy-proofness and perfect mechanisms    [pdf] (joint work with Yu Zhou) Abstract We introduce the notion of a perfect mechanism---a structured pair consisting of (i) a dynamic perfect information game form, and (ii) a convention specifying an honest strategy for each player given his type---to establish the existence of socially optimal ex-post perfect equilibria in a large class of dynamic games where the agents may have extremely limited information about the economy (Theorem 1). Applications include marriage markets with dating apps, labor markets with telephones, and online auctions. Stanford University Learning Through the Grapevine: The Impact of Message Mutation, Transmission Failure, and Deliberate Bias    [pdf] (joint work with Matthew Jackson, Suraj Malladi, David McAdams) Abstract We examine how well someone learns when information from an original source only reaches them after repeated person-to-person noisy relay (oral or written). We consider three distortions in communication: random mutation of message content, random failure of message transmission, and deliberate biasing of message content. 
We characterize how many independent chains a learner needs to access in order to learn accurately. With only mutations and transmission failures, there is a sharp threshold such that a receiver fully learns if they have access to more chains than the threshold number, and learns nothing if they have fewer. A receiver learns not only from the content, but also from the number of received messages---which is informative if agents' propensity to relay a message depends on its content. We bound the relative learning that is possible from these two different forms of information. Finally, we show that learning can be completely precluded by the presence of biased agents who deliberately relay their preferred message regardless of what they have heard. Thus, the type of communication distortion determines whether learning is simply difficult or impossible: random mutations and transmission failures can be overcome with sufficiently many sources and chains, while biased agents (unless they can be identified and ignored) cannot. We show that partial learning can be recovered by limiting the number of contacts to whom an agent can pass along a given message, a policy that some platforms are starting to use. Stony Brook University Resource Destruction in Optimal Mechanisms for Bilateral Trade (joint work with Eric Maskin) University of Wisconsin-Madison Bayesian Persuasion with Hidden Motives Abstract Digital media has given people access to vast amounts of information. Much of it is produced by sources whose motives are not clear to the consumer. This lack of transparency affects the way in which people draw inferences from the messages they receive, as well as the value of providing information. I model this as a game between a finite number of senders with a hidden move by nature. Every sender simultaneously chooses a signal and commits to disclosing its message. Then, nature privately chooses which of the signal realizations the receiver gets to observe. 
Whenever senders partially pool their signals, the receiver is uncertain about the informativeness of the message received. This uncertainty may incentivize a sender to provide more information or to "slack off," depending on the receiver's belief about their type. I characterize a sufficient condition for equilibrium to be (essentially) babbling despite the assumed commitment power, even if there are senders whose payoff function is not concave at the prior. Further, increasing the variety of senders with state-independent preferences always reduces the informativeness of the signals they choose. However, this uncertainty unravels whenever senders can verify their true motives. This has important implications for decentralized information platforms like social media: they can improve the quality of information by verifying self-reported biases, such as financial sponsorships or political endorsements, without taking a stance on the quality of information directly. Daito Bunka University The Shapley Value of the Lower Game for Partially Defined Cooperative Games    [pdf] (joint work with Jose M. Zarzuelo) Abstract The classical approach to cooperative games assumes that the worth of every coalition is known. However, in real-world problems there may be situations in which information is limited and, consequently, the worths of some coalitions are unknown. The games corresponding to those problems are called partially defined cooperative games and, surprisingly, have not yet received enough attention. Partially defined cooperative games were first studied by Willson (1993). However, that author restricted attention to partially defined games in which, if the worth of a particular coalition is known, then the worths of all coalitions of the same cardinality are also known. Moreover, Willson (1993) proposed and characterized an extension of the Shapley value for partially defined cooperative games. 
This extended Shapley value coincides with the ordinary Shapley value of a fully defined game in which the coalitions whose worths were known in the original game keep those worths, while all other coalitions are assigned a worth of zero, which seems not well justified. Masuya and Inuiguchi (2016) considered partially defined cooperative games that are assumed to be superadditive; further, at least the worths of the grand coalition and the singleton coalitions are assumed to be known. They then defined two fully defined games, called the lower game and the upper game, respectively. In this work we propose the Shapley value of the lower game for superadditive partially defined cooperative games. Moreover, we characterize the proposed value using five axioms. Three of them are the well-known axioms of efficiency, symmetry and covariance. The fourth one, called the axiom of fairness, was proposed by Myerson (1980). The fifth axiom is a variant of the axiom of coalitional strategic equivalence, which was first considered by Chun (1989) for fully defined games. University of South Carolina Fair and Square Contests    [pdf] Abstract What is a fair competition? As a rule, this means that all players operate on a level playing field. In this paper, we address this question for contests. What does it mean for a contest to be fair? Participants exert irreversible efforts to win a prize (sometimes prizes) in a contest. Is the contest fair if participants have the same equilibrium winning probabilities? This may be fair, but what if participants have different prize values? In this case, the same winning probabilities can mean that participants have different expected payoffs in the contest. Is that fair? It seems fair to focus on the expected equilibrium payoffs. We consider Tullock contests with reimbursements and find a special class of contests in which participants get the same expected equilibrium payoffs, even if their prize values are different. 
It turns out that the Sad-Loser contest is the only Tullock contest with reimbursements in which participants receive the same expected equilibrium payoffs. University of Bonn Bayesian Persuasion With Costly Information Acquisition    [pdf] Abstract A sender choosing a signal to be disclosed to a receiver can often influence the receiver's actions. Is persuasion harder when the receiver has additional information sources? Does the receiver benefit from having them? We extend Bayesian persuasion to allow the receiver to acquire costly information. The game can be solved as standard Bayesian persuasion under an additional constraint: the receiver never acquires costly information. The "threat" of learning hurts the sender. However, the outcome can also be worse for the receiver. We further propose a new solution method that does not rely directly on concavification and is also applicable to standard Bayesian persuasion. Université Paris Diderot, IRIF Incentives in Popularity-based Random Matching Markets    [pdf] (joint work with Hugo Gimbert and Claire Mathieu and Simon Mauras) Abstract Stable matching in a community consisting of N men and N women is a classical combinatorial problem that has been the subject of intense theoretical and empirical study since its introduction in 1962 in a seminal paper by Gale and Shapley [GS62]. In this paper, we use a probabilistic model, based on the popularity model of Immorlica and Mahdian [IM15], to generate the input preference lists. When popularities are uniform on one side and geometric on the other side (the i-th person has popularity λ^i), we prove that the expected fraction of participants who have more than one stable partner tends to 0. By [IM15], this implies that, in any stable matching mechanism, the best response of a participant is almost surely the truthful strategy; moreover, the induced game has a Nash equilibrium in which, in expectation, almost all strategies are truthful. 
When preference lists are arbitrary on the men's side and are generated from geometric popularities on the women's side, we prove that a woman using a non-truthful strategy can improve the rank of her partner in her preference list by at most a constant, in expectation over the preference lists of the women. The proof relies on a decomposition of the matching market into blocks of expected constant size, in which a block contains men of similar popularities. When preference lists are uniform, the expected number of stable pairs is asymptotically equivalent to N ln N [Pit92]; when they are arbitrary on one side and uniform on the other side, the expected number of stable pairs is asymptotically at most N ln N [KMP90]. When preference lists are arbitrary on the men's side and are generated from (general) popularities on the women's side, we prove that the expected number of stable pairs is asymptotically at most N ln N. Thus, adding correlations between preferences via popularities can only decrease the number of stable pairs, asymptotically. Higher School of Economics When Should We Care About Privacy? Information Collection in Games (joint work with Arina Nikandrova) Abstract The amount of information produced every day is staggering, and the Internet makes much of this information available almost for free. We argue that free access to information does not guarantee that it will be used for making decisions. More precisely, sufficient conditions for cheap payoff-relevant information not to be collected in a symmetric equilibrium are: (1) sufficiently many people have access to this information, and (2) the usefulness of the information to a person depends strongly on other people's actions. Primary examples are elections (where free-riding discourages information collection) and financial markets (where competition is too vigorous). 
Our conclusion alleviates concerns over making private information available in the public domain: publicity might render information useless, thus effectively protecting sensitive information from prying eyes. University of Chicago Booth School of Business Dynamic Project Standards with Adverse Selection Abstract We study a principal-agent relationship in which the agent has private information about the future profitability of the relationship or a currently operated project, but is biased in favor of continuing the project. The principal retains liquidation rights over the relationship or project, and must introduce distortions in the liquidation policy itself in order to elicit the agent's private information. The optimal policy consists of a threshold: if profitability falls below it, liquidation is triggered. When the agent reports a higher growth rate of the project's profitability, the optimal threshold will either be decreasing over time and approach the principal's first-best level (i.e., the distortions from eliciting the agent's information are temporary) or be increasing and divergent over time (i.e., liquidation at later times takes place at unboundedly inefficient levels). A simple condition on the relative profitability of the project across agent types tells us when the distortions are temporary or permanent. These results are robust to the use of transfers (e.g., wage payments) provided that a limited liability condition is respected for the agent. They are also robust to the use of direct auditing methods to assess profitability. The model provides a tractable way to analyze contractual distortions in the presence of private information and, in particular, shows that contracts can be simultaneously front- and back-loaded across a menu of options in the same principal-agent relationship. 
Brigham Young University Polarization and Pandering in Common Interest Elections    [pdf] Abstract This paper analyzes candidate positioning in common interest elections, meaning that voter differences reflect private estimates of what is best for society, not idiosyncratic tastes. Centrist candidates have a competitive advantage, but may be bad for welfare. An extreme candidate can still win if truth is on her side, though, so for a variety of model specifications, candidates polarize in equilibrium, even when each wants very badly to win. Indian Institute of Technology Kanpur Toward Controlling Discrimination in Online Ad Auctions    [pdf] (joint work with L. Elisa Celis, Nisheeth K. Vishnoi) Abstract Online advertising platforms are thriving due to the customizable audiences they offer advertisers. However, recent studies show that the audience an ad gets shown to can be discriminatory with respect to sensitive attributes such as gender or ethnicity, inadvertently crossing ethical and/or legal boundaries. To prevent this, we propose a constrained ad auction framework that allows the platform to control the fraction of each sensitive type an advertiser's ad gets shown to while maximizing its revenues. Building upon Myerson's classic work, we first present an optimal auction mechanism for a large class of fairness constraints. Finding the parameters of this optimal auction, however, turns out to be a non-convex problem. We show how this non-convex problem can be reformulated as a more structured non-convex problem with no saddle points or local maxima, allowing us to develop a gradient-descent-based algorithm to solve it. Our empirical results on the A1 Yahoo! dataset demonstrate that our algorithm can obtain uniform coverage across different user attributes for each advertiser at a minor loss to the revenue of the platform, and a small change in the total number of advertisements each advertiser shows on the platform. 
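The Myerson machinery that the constrained-auction abstract above builds on can be illustrated with the standard unconstrained computation; the formula psi(v) = v - (1 - F(v)) / f(v) and the uniform-distribution example below are textbook material, not taken from the paper's constrained mechanism.

```python
# Illustrative sketch (textbook Myerson, not the paper's constrained auction):
# for values drawn uniformly on [0, 1], F(v) = v and f(v) = 1, so the virtual
# value is psi(v) = v - (1 - v) = 2v - 1.  The revenue-optimal single-item
# auction allocates to the bidder with the highest nonnegative virtual value,
# which here implies a reserve price of 1/2.
def virtual_value_uniform(v: float) -> float:
    """psi(v) = v - (1 - F(v)) / f(v) with values uniform on [0, 1]."""
    return 2.0 * v - 1.0

def optimal_winner(values):
    """Return (index, psi) of the winner, or None if every psi is negative."""
    best = max(range(len(values)), key=lambda i: virtual_value_uniform(values[i]))
    psi = virtual_value_uniform(values[best])
    return (best, psi) if psi >= 0 else None

print(optimal_winner([0.9, 0.7, 0.3]))  # bidder 0 wins: psi = 2*0.9 - 1
print(optimal_winner([0.2, 0.4]))       # no sale: both values below the reserve 1/2
```

The paper's fairness constraints reshape this allocation rule (and make finding its parameters non-convex); the unconstrained case above is only the starting point.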
Australian National University On the existence of equilibrium in Bayesian games without complementarities    [pdf] (joint work with Rabee Tourky) Abstract In a recent paper, Reny (2011) generalized the results of Athey (2001) and McAdams (2003) on the existence of monotone strategy equilibrium in Bayesian games. Though the generalization is subtle, Reny introduces far-reaching new techniques, applying the fixed-point theorem of Eilenberg and Montgomery (1946, Theorem 5). This is done by showing that, with atomless type spaces, the set of monotone functions is an absolute retract and that, when the values of the best-response correspondence are non-empty sub-semilattices of monotone functions, they too are absolute retracts. In this paper we provide an extensive generalization of Reny (2011), McAdams (2003), and Athey (2001). We study the problem of existence of Bayesian equilibrium in pure strategies for a given partially ordered compact subset of strategies. The ordering need not be a semilattice and the strategies need not be monotone. The main innovation is the interplay between the homotopy structures of the order complexes that are the subject of the celebrated work of Quillen (1978) and the hulling of partially ordered sets, an innovation that extends the properties of Reny's semilattices to the non-lattice setting. We also describe some auctions that illustrate how this framework can be applied to generalize the existing results, and we extend the class of models for which we can establish existence of equilibrium. As with Reny (2011), our proof uses the fixed-point theorem of Eilenberg and Montgomery (1946). Hebrew University of Jerusalem Screening Inattentive Agents    [pdf] Abstract An important aspect of mechanism design problems is the information to which the agents involved have access. A potential complication is that this information may endogenously depend on which options they are offered. 
I model this by considering an optimal mechanism design problem in which a principal screens an agent with uncertain value. The agent is inattentive regarding their true value, and decides how to optimally acquire information about it in response to the offered mechanism. I show that the optimal mechanism is characterized by a non-participation belief, which in turn determines the contour of possible beliefs and transfers for every possible probability of allocation (including those not used in the mechanism). For every possible non-participation belief, the mechanism design problem then reduces to one of Bayesian persuasion. The optimal mechanism is then implicitly determined by choosing the optimal non-participation belief. Copenhagen Business School Robust Information Aggregation Through Voting    [pdf] (joint work with Tomás Rodríguez Barraquer and Justin Valasek) Abstract Numerous theoretical studies have shown that information aggregation through voting is often fragile: since the probability that any agent's vote influences the committee's decision becomes arbitrarily small in a large committee, voting behavior is very sensitive to the payoff structure. For example, when agents face payoffs that condition on their individual vote, these vote-contingent payoffs, no matter how small, can drive voting behavior in large committees. We consider a general model of voting in large committees with vote-contingent payoffs and characterize the set of payoff vectors k that support equilibria that aggregate information in a robust way, in the sense that all payoff vectors sufficiently close to k must also support equilibria that aggregate information. Furthermore, we characterize the payoff vectors under which robust information aggregation is the unique equilibrium outcome. 
We find that robust information aggregation only depends on the ratio of relative payoffs agents receive for voting for the ex-post correct option given that the committee also selects the correct option. However, the uniqueness of the equilibrium that aggregates information depends on payoffs when the committee selects the incorrect option; agents must be punished for voting with the majority side when the committee chooses the incorrect option. MIT Media Capture: A Bayesian Persuasion Perspective    [pdf] (joint work with Arda Gitmez and Pooya Molavi) Sao Paulo School of Economics - FGV Information Design with Recommender Systems    [pdf] (joint work with Caio Lorecchio) Abstract An uninformed long-run sender restricted to public communication rules, such as recommender systems and rating systems, faces a sequence of short-lived receivers. Each receiver must decide whether or not to invest in a project of fixed, but unknown, quality. The sender seeks to maximize investment, but is uninformed about the project's quality, and learning must be elicited through experimentation by the receivers. We show that the optimal rule is a simple recommendation (two ratings). In contrast, if learning is independent of the agents' actions, the designer's payoff increases with the number of ratings. We provide conditions under which simple rules approximate the Bayesian persuasion payoff. Universidad Pablo de Olavide Optimal Management of Evolving Hierarchies    [pdf] (joint work with Jens Leth Hougaard, Juan D. Moreno-Ternero, Lars Peter Østerdal) Abstract We study the optimal management of evolving hierarchies, which abound in real-life phenomena. An initiator invests in finding a subordinate, who will bring revenues to the joint venture and who will invest herself in finding another subordinate, and so on. The higher the individual investment (which is private information), the higher the probability of finding a subordinate. 
A transfer scheme specifies how revenues are reallocated, via upward transfers, as the hierarchy evolves. Each transfer scheme induces a game in which agents decide their investment choices. We consider two optimality notions for schemes: initiator-optimal and socially-optimal schemes. We show that the former are schemes requiring each member to make a full transfer to two recipients (the predecessor and the initiator), with a constant ratio between the transfers. We show that the latter are schemes requiring full transfers to the immediate predecessors. US Army Noisy and Silent Games of Timing with Detection Uncertainty and Numerical Estimates    [pdf] (joint work with David B. Bednarz, Nicholas A. Krupansky, Bernhard von Stengel) Abstract In previous work, Bednarz (2016), we described the interactions between a mobility player, who is trying to maximize the chances that he makes it from point A to point B with one chance to refuel, and a terrain player who is trying to minimize that probability by placing an obstacle somewhere along the path from A to B. This relates to the literature on games of timing. In this paper, we generalize the game of timing studied previously to include the possibility that the players' actions are known to their adversary. In other words, we examine both noisy and silent versions of the game. In addition, one player may have an imperfect ability to detect their adversary. This situation is known as detection uncertainty and was first studied in Sweat (1971). Here, we extend those results to compare noisy and silent versions of this game of timing with detection uncertainty and obtain numerical estimates of the optimal strategies using the sequence form of von Stengel (1996). Northwestern University Social Value of Information in Networked Economies    [pdf] Abstract This paper studies the social value of information in economies with heterogeneous interactions. 
Agents play a coordination game facing a trade-off between optimizing their actions against an unknown state and against other agents' actions. The benefits from coordination can vary across agents and are described by an interaction matrix whose (i, j)-th entry measures i's coordination motive with j. Agents receive a private and a public signal about the state. We characterize a unique equilibrium of this game via the Katz-Bonacich centrality defined on the interaction matrix, and show that an agent's relative weight on the public signal is strictly increasing in his centrality. Using this characterization, we provide two different insights on the value of information. First, we generalize the anti-transparency result of Morris and Shin (2002): in the beauty contest model, more public information can be detrimental to welfare if and only if the Katz-Bonacich centrality vector is sufficiently large. Second, we study the heterogeneity in the value of information, and show that more private (public) information can hurt agents who have small (large) Katz-Bonacich centralities but benefit others. Finally, we also extend our model to incorporate a semipublic signal and anti-coordination motives. Brown University Determinants of the College Early Admissions Market Configuration    [pdf] Abstract Most of the top private selective colleges in the US offer early admission programs. Two formats are predominant: Restrictive Early Action (REA) and Early Decision (ED). Both programs allow students to apply to only one college and receive an official admission decision before the regular admissions process. REA and ED differ in that the former doesn't convey a binding enrollment commitment from the student upon admission, allowing her to apply in the regular process to other colleges, while the latter forces the student to enroll if admitted early. We construct a college admissions model that allows for the endogenous decision of which type of early program to offer. 
The model can explain the relationship between some stylized facts about the market, taking some as assumptions and some as consequences: early applicants are wealthier, on average, and are more likely to be admitted than regular applicants. Also, the colleges that offer REA are disproportionately less budget and more capacity constrained, have traits that are more attractive for students, are more popular in the application process and are more selective. In the model, if early applicants are wealthier, the benefit of comparing financial offers is high, college A is likely to overbid college B in aid, and college A is capacity constrained while college B is budget constrained, then there exists an equilibrium where college A offers REA and college B offers ED. Under such an early market configuration, a profitable wealth screening device arises for both colleges: students with a high financial need benefit the most from comparing financial offers and thus, due to its commitment nature, they are unlikely to apply early to an ED program if there is a REA program available. Under this situation, college B benefits from attracting and capturing a relatively wealthier population, while college A benefits from attracting high quality, high financial need students that avoid applying early to B. ETH Zurich Feedback effects in the experimental double auction with private information    [pdf] (joint work with Nunez Duran, Pradelski) Abstract Controlled laboratory and online experiments in economics have confirmed that the continuous double auction for nondurables rapidly approximates competitive equilibrium under private information. Interestingly, this convergence regularly occurs asymmetrically through rising prices. 
Here, we stress-test this finding by varying fundamental constituents of the market institution (regarding price rule, market asymmetry, and equilibrium structure), with particular focus on the role of order-book feedback, that is, which parts of the order book (i.e., bids, asks, realized prices) are available to market participants. We provide an empirical foundation for convergence with asymmetries, even in markets that are markedly set up against it, that is, in terms of equilibrium structure and lack of feedback. Stanford University A Systematic Test of the Independence Axiom    [pdf] (joint work with Ritesh Jain) Abstract We investigate the Independence Axiom---a central tenet of expected utility theory. We design a lab experiment to test this axiom on the entire probability simplex. This method allows us to study both the certainty effect and the reverse certainty effect. Our results suggest that the Independence Axiom is violated systematically across the entire simplex, but violations are much more common in the direction opposite the conventional "certainty effect." The nature of violations is more consistent with the reverse certainty effect than with the accepted experimental knowledge of the certainty effect. Our experiment contributes to the existing literature by studying the Independence Axiom on the entire simplex and is one of the first to document the prevalence of the reverse certainty effect. We contribute to game theory by furthering the understanding of how individuals maximize their utility under risk. University of South Carolina Asymmetric Contests and the Effects of a Cap on Bids (joint work with Alexander Matros) Abstract We study asymmetric all-pay auctions where the prize has the same value for all players, but players might have different cost functions. We provide sufficient conditions for existence and uniqueness of the conventional mixed-strategy equilibrium when the cost functions are right-continuous. 
Applications to all-pay auctions with various caps placed on the bids are discussed. We also discuss how different types of caps placed on bids affect the revenue for the seller. Nazarbayev University Harmful Screening in Competitive Markets    [pdf] (joint work with Irina Kirysheva) Abstract We consider a model where competitive firms commit to prices and screen consumers. Surprisingly, we find that while screening allows firms to avoid inefficient trades, it results in excessive rejections and can reduce welfare. We characterize market equilibria and show that inefficiencies arise when there are few firms and the social value of screening is low. Yale Aiming for the goal: contribution dynamics of crowdfunding    [pdf] (joint work with Joyee Deb, Kevin R. Williams) University of the Basque Country Characterization of efficient networks in a connections model with decreasing returns technology    [pdf] (joint work with Federico Valenciano) Abstract We consider a network-formation model where the strength or quality of a link depends on the amount invested in it and is determined by a link-formation technology, i.e. an increasing, differentiable, and strictly concave function which is the only exogenous ingredient of the model. The revenue from investments in links is the information that the nodes receive through the network. The structures of the efficient networks are characterized. University Paris Dauphine A solution for stochastic games    [pdf] (joint work with Luc Attia and Miquel Oliu-Barton) Abstract "Stochastic games have a value" was the five-word abstract chosen by Mertens and Neyman (1981) to announce the existence of the uniform value for stochastic games, a model introduced by Shapley (1953) as an extension both of matrix games and of Markov decision processes. Their result was a major accomplishment, as it provided a very robust notion of solution for stochastic games. Since then, the problem of characterizing the value has remained unsolved. 
The main contribution of this paper is to settle this question, based on a reduction of stochastic games to matrix games depending on a real parameter. Northwestern University A result on convergence of sequences of iteration, with applications to best-response dynamics    [pdf] (joint work with Wojciech Olszewski) Abstract The result that the sequence of iterates x_{k+1} = f(x_k) converges if f : [0, 1] → [0, 1] is an increasing function has numerous applications in elementary economic analysis. I generalize this simple result to some mappings f : S ⊂ [0, 1]^n → S. The applications of this result include, but are not limited to, the convergence of best-response dynamics in a general version of the Crawford and Sobel (1982) model. Northwestern University Equilibrium Existence in Games with Ties    [pdf] (joint work with Wojciech Olszewski and Ron Siegel) Abstract We prove the existence of equilibria for a class of games with discontinuous payoffs. Our class of games includes: (a) a general version of all-pay contests, (b) first-price auctions with interdependent values, and (c) Hotelling models with incomplete information. University of California, Santa Barbara Computing Optimal Taxes in Atomic Congestion Games    [pdf] (joint work with Rahul Chandan, Dario Paccagnan, Bryce L. Ferguson, Jason R. Marden) Abstract While selfish behaviour often results in sub-optimal system operation, taxation mechanisms have been proposed to improve the overall efficiency of the system. In this work we focus on the class of atomic congestion games, and show how to compute taxation mechanisms that optimize the resulting worst-case efficiency while being robust against network modifications (network-agnostic). Specifically, we first show how to determine the price of anarchy of a given network-agnostic taxation mechanism through the solution of a tractable linear program. 
Second, we prove that optimal network-agnostic taxation mechanisms are linear maps from the set of latency functions to the set of tolls. Finally, we leverage these results to compute optimal network-agnostic taxation mechanisms and accompany them with a corresponding price-of-anarchy certificate. Our solution differs from those existing in the literature in that the optimal taxes are determined without any information about the specific game instance at hand, significantly reducing the computational burden. At the same time, their performance is almost identical to that of taxes derived with full information. City University of New York To What Extent is a Group an Individual?    [pdf] Abstract We consider the issue of regarding a group as an agent. In his book The Intentional Stance, Daniel Dennett considers the issue of entities which can be regarded as agents. These are entities whose behavior we are able to predict (somewhat) by asking, "What does it want? What does it know? What is it able to do?" Such questions are already difficult when we are dealing with other human beings. They become more tricky when we are dealing with a group as an agent and face difficult questions of defining its wishes and its possible actions. We start by pointing out that a set like Democrats, or Muslims, does not satisfy the requisite conditions, at least not fully, and point to some insights. MIT Graphon games: A statistical framework for network games and interventions    [pdf] (joint work with Francesca Parise, Asuman Ozdaglar) Abstract In this paper, we introduce the new class of graphon games to describe strategic behavior in heterogeneous populations of infinite size. As a first contribution, we investigate properties of the Nash equilibrium of this newly defined class of games, including existence, uniqueness and comparative statics. 
As a second contribution, we illustrate how graphon games can be used to approximate strategic behavior in sampled network games, which are games where players interact according to a network that is randomly sampled from the graphon, and we derive precise bounds for the distance between graphon and sampled network game equilibria in terms of the population size. As a third contribution, we show that it is possible to design almost optimal interventions for sampled network games by relying on the graphon model. This procedure results in simple intervention policies that are robust to stochastic variations and can be applied to multiple network realizations. EIEF Robust Predictions in Dynamic Policy Games    [pdf] (joint work with Juan Pablo Xandri) Abstract Dynamic policy games feature a wide range of equilibria. This paper provides a methodology for obtaining robust predictions. We begin by focusing on a model of sovereign debt, although our methodology applies to other settings, such as models of monetary policy or capital taxation. The main result of the paper is a characterization of outcomes that are consistent with a subgame perfect equilibrium conditional on the observed history. Our methodology provides observable implications common across all equilibria, which we illustrate by characterizing, conditional on an observed history, the set of all possible continuation prices of debt and comparative statics for this set; by computing bounds on the maximum probability of a crisis; and by obtaining bounds on means and variances. In addition, we propose a general dynamic policy game and show how our main result can be extended to this general environment. Tel Aviv University Bilateral Trade With a Benevolent Intermediary    [pdf] (joint work with Ran Eilat) Abstract We study intermediaries who seek to maximize gains from trade in bilateral negotiations. 
Intermediaries are players: they cannot commit to act against their objective function and deny some trades they believe to be beneficial -- a commitment that is used by mechanisms to achieve ex-ante optimality. The intermediation game is equivalent to a mechanism design problem with an additional "credibility" constraint, requiring that every outcome be interim-optimal, conditional on available information. Consequently, an interesting information trade-off arises, whereby acquiring fine information makes the trading decision more responsive to the parties' valuations, while coarse information allows more flexibility to credibly deny beneficial trades. We investigate how such intermediaries communicate with the parties and make decisions, and derive some properties of optimal intermediaries. Northwestern University Trust and Betrayals: Reputational Payoffs and Behaviors without Commitment    [pdf] Abstract I introduce a reputation model in which all types of the reputation-building player are rational and are facing lack-of-commitment problems. I study a repeated trust game in which a patient player (e.g., a seller) wishes to win the trust of some myopic opponents (e.g., buyers) but can strictly benefit from betraying them. Her benefit from betrayal is her persistent private information. I provide a tractable formula for the highest equilibrium payoff for every type of that patient player. Interestingly, incomplete information affects this payoff only through the lowest benefit in the support of the prior belief. In every equilibrium that attains this highest payoff, the patient player's behavior depends nontrivially on past play. I establish bounds on her long-run action frequencies that apply to all of her equilibrium best replies. These features of her behavior are essential for her to extract information rent while preserving her informational advantage. 
I construct a class of such high-payoff equilibria in which the patient player's reputation depends only on the number of times she has betrayed as well as the number of times she has been trustworthy in the past. This captures some realistic features of online rating systems. Stanford University Revenue maximization with heterogeneous discounting: Auctions and pricing (joint work with Jose Correa, Juan Escobar) Abstract We characterize the revenue maximizing mechanism in an environment with private valuations and asymmetric discount factors. The optimal mechanism combines auctions to encourage competition and dynamic pricing to screen buyers’ valuations. When buyers are ex-ante symmetric and the seller is more patient than the buyers, the optimal mechanism takes a remarkably simple form. The seller runs a modified second price auction and allocates the item to the highest bidder if and only if the second highest bid exceeds the reserve price. The winning buyer pays the second highest bid. If the item is not sold in the auction, the seller posts a price path that depends on the second highest bid. The item is then allocated to the highest bidder at a strictly positive time. Our results imply that, for a patient seller, auctions and pricing schemes are complements, and caution against the presumption that it is ex-ante optimal to commit not to trade when an auction fails. BME Objective ambiguity    [pdf] Abstract The possibility of gaining an advantage in strategic situations by using ambiguity is well documented in the literature. However, so far no method or procedure is known for generating objective ambiguity; that is, no "coin toss" is known that produces ambiguous outcomes. In this paper we introduce a procedure which -- like coin tossing in the case of probability distributions -- can generate objective ambiguity. The procedure is based on the random set approach to ambiguity. 
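The auction stage of the Stanford revenue-maximization abstract above is concrete enough to sketch in code: the highest bidder wins if and only if the second-highest bid exceeds the reserve, and pays the second-highest bid. The sketch below is a minimal illustration of that allocation and payment rule only; the subsequent price-path stage and all dynamic elements of the mechanism are omitted, and the function name is my own.

```python
def modified_second_price(bids, reserve):
    """Static stage of the mechanism described in the abstract above:
    the highest bidder wins iff the SECOND-highest bid exceeds the
    reserve price, and pays the second-highest bid. Illustrative only;
    the dynamic pricing stage after an unsold auction is not modeled.
    """
    # Rank bidder indices from highest to lowest bid.
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    if bids[runner_up] > reserve:
        return winner, bids[runner_up]   # (winning bidder index, payment)
    return None, 0.0                     # item unsold; pricing stage would follow
```

For example, with bids [10, 7, 3] and reserve 5, bidder 0 wins and pays 7; with reserve 8 the item goes unsold in the auction stage, even though the highest bid exceeds the reserve.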
Universidad de Chile Bounding the Value of Observability in a Dynamic Pricing Problem    [pdf] (joint work with Jose Correa, Gustavo Vulcano) Abstract Research on dynamic pricing has been growing during the last four decades, largely due to its use in practice by a variety of companies as well as the several model variants that can be considered. In this work, we consider the particular pricing problem where a firm wants to sell one item to a single buyer in order to maximize her expected revenue. The firm pre-commits to the price function over an infinite horizon. The buyer has a private value for the item and purchases at the time when his utility is maximized. In our model, the buyer is more impatient than the seller, and we study how important it is, in terms of the seller’s expected revenue, to observe the buyer’s arrival time. We prove that in a very general setting, the expected revenue when the seller observes the buyer’s arrival is at most roughly 3.6 times the expected revenue when the seller does not know the time when the buyer arrives. Argyros School of Business and Economics, Chapman University Innovation, Diffusion and Shelving    [pdf] (joint work with Swapnendu Banerjee and Monalisa Ghosh) Abstract In an oligopoly model with an outside innovator and two asymmetric licensees, we consider the transfer of a cost-reducing innovation. While the innovation reduces the cost of the inefficient firm only, we explore the strategic incentives of the efficient firm to acquire the technology. We find situations where the efficient firm acquires the technology but shelves it, and situations where it does not shelve it and instead licenses it further to the inefficient firm. We examine the impact of technological diffusion (or its absence) on consumer welfare and industry profits. We also find the optimal mode of technology transfer for the innovator in this environment. 
Carnegie Mellon University Accommodating Cardinal, Ordinal and Mixed Preferences: An Extended Preference Domain for the Assignment Problem    [pdf] Abstract We extend the preference domain of the assignment problem to accommodate ordinal, cardinal and mixed preferences together, thereby allowing the mechanism designer to elicit different levels of information about individuals' preferences. Our domain contains preferences over lotteries which are monotonic, continuous and satisfy the independence axiom. The stochastic dominance order is the coarsest element of this domain, while a vNM preference order is a finest element according to a natural coarseness relation on preferences over lotteries. We characterize this domain in terms of consistent vNM preferences and propose a preference reporting language that enables agents to report preferences from this domain. We then extend the pseudo-market mechanism of Hylland and Zeckhauser (1979) to this domain and show that the resulting family of pseudo-market mechanisms is efficient and weakly envy-free, though it fails strategy-proofness. We show that the impossibility results of Zhou (1990) and Bogomolnaia and Moulin (2001) concerning the incompatibility of incentive compatibility and efficiency, for cardinal and ordinal preferences respectively, apply to our domain as well. brpowers@asu.edu An Analysis of Dual Issue Final-Offer Arbitration    [pdf] Abstract We discuss final-offer arbitration where two quantitative issues are in dispute and model it as a zero-sum game. Under reasonable assumptions we derive a pure strategy pair and show that it is not only a local equilibrium but in fact the unique global equilibrium. brpowers@asu.edu N-Player Final-Offer Arbitration: Harmonic Numbers in Equilibrium    [pdf] Abstract We consider how a mechanism of final-offer arbitration may be applied to a negotiation between N players attempting to split a unit of wealth. 
The game model is defined where the judge chooses a fair split from a Dirichlet distribution. For the case of a uniform probability distribution, the equilibrium strategy is found as a function of the harmonic numbers. Yale University Persuasion through a strategic moderator    [pdf] Abstract We study how intermediation affects information disclosure in a strategic information design problem. A sender publicly commits to an editorial policy to persuade a receiver to take a particular action. The communication channel between the sender and the receiver, however, is controlled by a moderator who verifies and chooses whether to faithfully deliver the realized message. We solve the sender's optimal persuasion problem and examine how it varies with the characteristics of the moderator: (1) the receiver strictly prefers to have a moderator who is biased against the action favored by the sender, (2) but not necessarily a more informed moderator. University of Oxford Contradiction-Proof Information Design    [pdf] (joint work with Ansgar Walther) Abstract We study the role of information design in settings where privately informed parties can additionally make strategic disclosures. A committed persuader and an uncommitted, privately informed sender can disclose hard evidence to a decision-maker. Treating the problem as one of design, we fully characterize the range of informational outcomes that can be obtained in equilibrium by means of a general opacity principle. Using the opacity principle, we establish a solution method for a class of optimal design problems with endogenous disclosures, and compare our solutions to the Bayesian Persuasion benchmark without disclosures. For a range of disclosure costs, the presence of voluntary disclosures forces the persuader to provide no less information than the benchmark if the benchmark setting gives high types the greatest incentive to disclose. 
When intermediate types most want to disclose, optimal persuasion can become less informative than the commitment benchmark. Finally, we apply our results to study optimal financial stress tests, performance reviews and investment advice. University of Illinois at Chicago Measuring the power of the dominant partner among married couples (joint work with John Hardwick) Abstract Suppose N men and N women, based on personal preferences, select subsets of acceptable partners. We can associate a zero-one matrix where a one in row i, column j means the ith woman and jth man are mutually acceptable to each other. Suppose they are paired and we have a complete matching. Then, after the matching, we want to quantify the dominant person's relative power. We suggest the associated assignment game as a natural model, develop an algorithm to compute the nucleolus, and propose the nucleolus as the measure of relative power. The approach can also be extended to maximal matchings that allow same-sex partners. The algorithm to evaluate the nucleolus exploits many combinatorial theorems of Edmonds, Berge, and others. University of Minnesota Contractual Pricing with Incentive Constraints    [pdf] Indian Institute of Management Bodh Gaya, India A New Algorithm for Student-Optimal Matching    [pdf] (joint work with Prabhat Ranjan and Sanjeet Singh) Abstract The College Admission Problem is a matching problem for colleges offering seats for admission to students. Each college has its own preference (merit) list of students, and each student also has his/her list of preferred colleges. Many stable matchings can be achieved. One of these matchings is college-optimal, and one is student-optimal. These two matchings can be obtained by using two variants of the deferred-acceptance (DA) algorithm: college-proposing and student-proposing, respectively. 
Since the student-proposing DA algorithm obtains the student-optimal matching, it is used by most matching clearing houses. However, the student-proposing DA algorithm has the limitation that it cannot be applied in a scenario where colleges have different timelines for merit list announcement and students can opt out of the market. In the context of the proposed application, post-graduate admission at management institutions in India, institutions announce merit lists at different time points, and students may opt out of the market until the start of the course. This paper proposes an algorithm which can convert any stable non-student-optimal matching into the student-optimal matching. The combination of the college-proposing DA algorithm and the proposed algorithm can be applied in a continuous manner. It makes way for institutions to announce results at different time points and for students to opt out of the market. A framework can be developed which allows the clearing house to obtain the student-optimal matching after any institution announces its merit list or any student opts out of the market. The proposed framework has dual benefits: it can be applied in a continuous manner, an advantage of the college-proposing DA algorithm combined with the proposed algorithm, and it obtains the student-optimal solution that is favored by the markets. Princeton University Sequencing Naive Social Learning    [pdf] Abstract I extend the DeGroot model to allow for sequential information arrival and show that the sequencing of information affects the final consensus. I identify the optimal and pessimal information release sequences that ensure the highest and lowest attainable consensus, respectively, and in doing so, I reveal that there is room for manipulation of the final consensus. I show that a type of endogenous social forgetting of earlier information arises, wherein in the final consensus the relative weights of signals released earlier are lower than those of more recent signals. 
I further show that in a large society, where the number of agents goes to infinity, the optimal information release sequence remains to a large extent unchanged: the lowest signal released in round K is higher than the highest signal released in round K-1. Finally, in a large society where the influence of the most influential agent goes to zero, I analyze the robustness of the wisdom of crowds with respect to sequential information release and find that to a large extent wisdom fails when information is released sequentially. Rochester Institute of Technology Selling Reputational Information    [pdf] Abstract This paper studies information provision by a third party in a dynamic model of reputation. An intermediary has monopoly access to information about the past behavior of a long-lived firm and commits to a disclosure policy mapping the firm's histories to distributions over a set of signals. The intermediary then sells signals conveying information about the firm to a sequence of one-period-lived agents. The paper characterizes the optimal disclosure policy from the point of view of the intermediary and shows that the intermediary can always attain the payoff from this disclosure policy provided the costs of gathering information are sufficiently small. The policy chosen in equilibrium is always inefficient; that is, there are alternative policies that generate higher social welfare. School of Business, Stevens Institute of Technology Learning from Failures: Optimal Contract for Experimentation and Production    [pdf] (joint work with Fahad Khalil, Jacques Lawarree) Abstract Before embarking on a project, a principal must often rely on an agent to learn about its profitability. We model this learning as a two-armed bandit problem and highlight the interaction between learning (experimentation) and production. 
We derive the optimal contract for both experimentation and production when the agent has private information about his efficiency in experimentation. This private information in the experimentation stage generates asymmetric information in the production stage even though there was no disagreement about the profitability of the project at the outset. The degree of asymmetric information is endogenously determined by the length of the experimentation stage. An optimal contract uses the length of experimentation, the production scale, and the timing of payments to screen the agents. Due to the presence of an optimal production decision after experimentation, we find over-experimentation to be optimal. The asymmetric information generated during experimentation makes over-production optimal. An efficient type is rewarded early since he is more likely to succeed in experimenting, while an inefficient type is rewarded at the very end of the experimentation stage. This result is robust to the introduction of ex post moral hazard. National Taiwan University Measuring Freedom in Games    [pdf] Abstract The paper axiomatizes a measure of freedom for game-theoretic settings. The central idea of the measure is that freedom is increasing in the degree to which an agent's outcomes are determined by the agent's preferences. The measure is characterized by rational, non-consequentialist preferences of an impartial observer over games endowed with the observer's beliefs over actions. The measure generalizes several measures from the opportunity-set-based freedom literature to situations where agents interact. This allows freedom to be measured in general economic models and thus policy recommendations to be derived based on the freedom rather than the welfare of agents. To illustrate this, optimal libertarian income tax progression policies are derived in a production economy with heterogeneous agents.
National Taiwan University Procedural Mixture Spaces    [pdf] Abstract This paper introduces procedural mixture spaces as mixture spaces in which it is not necessarily true that a mixture of two identical elements yields the same element. The following representation theorem is proven: a rational, independent, and continuous preference relation over procedural mixture spaces can be represented either by expected utility plus the Shannon entropy or by expected utility under probability distortions plus the Rényi entropy. The entropy components can be interpreted as the utility or disutility from resolving the mixture and therefore as a procedural rather than consequentialist value. University of Arizona A Revealed Preference Approach to Multidimensional Screening Abstract This paper develops a data-driven approach to multidimensional screening. The principal observes a population of decision makers each choose from a finite number of exogenously specified sets of allocations, and her beliefs about the agent's preferences are informed by this data. In my model, there is a multiplicity of preference distributions that are consistent with the principal's observations. Rather than privilege any one distribution, she evaluates mechanisms by computing their worst-case payoff against the set of distributions that are compatible with the choice data. I show that there are circumstances in which the principal can do better than using a mechanism that recreates one of the choice environments in her data set, even when she knows nothing about the agent's preferences beyond what is implied by the data. More broadly, I allow for arbitrary domains of preferences and identify conditions under which mechanisms that use only allocations that are vertically differentiated from the allocations in the data are optimal.
National Defense University Future Combat Air System Pricing    [pdf] (Brief abstract for the Stony Brook Game Theory Conference; Dr. Tim Russo, April 30, 2019) Abstract The proposed Future Combat Air System (FCAS) will be an integrated network with individual parts that enable the system to conduct operations and accomplish the mission. The value provided will no longer be so easily attributed to each piece operating independently in support of the larger mission. All the pieces will operate together, share information, and enhance each other's capabilities. Therefore, pricing is no longer straightforward. To find the optimal price-quantity pair, we employ a combination of the economic theories of bilateral monopoly, network pricing, and two-part tariffs, with modifications as necessary. University of Kansas Monotone Global Games    [pdf] (joint work with Eric Hoffmann and Tarun Sabarwal) Abstract We extend the global games method to finite-player, finite-action, monotone games. These games include games with strategic complements, games with strategic substitutes, and arbitrary combinations of the two. Our result is based on common order properties present in both strategic complements and substitutes, the notion of p-dominance, and the use of dominance solvability as the solution concept. In addition to being closer to the original arguments in Carlsson and van Damme (1993), our approach requires fewer additional assumptions. In particular, we require only one dominance region and no assumptions on state monotonicity, aggregative structure, or overlapping dominance regions. As expected, the p-dominance condition becomes more restrictive as the number of players increases. In cases where the probabilistic burden in belief formation may be reduced, the p-dominance condition may be relaxed as well. We present some examples that are not covered by existing results.
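The p-dominance notion invoked in the Monotone Global Games abstract can be made concrete in the two-player symmetric 2x2 case. The sketch below (with illustrative stag-hunt payoffs of our own choosing, not taken from the paper) computes the smallest belief on the opponent playing an action that makes that action a best response:

```python
def p_dominance(a, b, c, d):
    """Smallest belief p on the opponent playing A that makes A a best
    response, given row payoffs u(A,A)=a, u(A,B)=b, u(B,A)=c, u(B,B)=d.
    Assumes a > c and d > b, so (A,A) and (B,B) are both strict equilibria."""
    return (d - b) / ((a - c) + (d - b))

# Hypothetical stag hunt: a=4, b=0, c=3, d=2.
p_A = p_dominance(4, 0, 3, 2)  # A is a best response only if belief on A >= 2/3
p_B = 1 - p_A                  # by symmetry, B needs belief on B of only 1/3
# B is risk dominant here (p_B < 1/2); in 2x2 coordination games the
# global-games selection picks exactly the risk-dominant action.
```

A smaller threshold means an action survives under weaker beliefs, which is why the condition becomes harder to satisfy as the number of players grows.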
University of Guelph Selten's Horse: an Experiment on Sequential Rationality    [pdf] (joint work with Nikolaos Tsakas) Abstract In a seminal paper, Selten (1975) developed the game known as Selten's Horse to illustrate some aspects of rationality. In our study, we test the equilibrium predictions of Selten's Horse through a laboratory experiment and find that most behaviour moves toward an outcome that is in stark contrast to the predictions of all existing refinements that adhere to sequential rationality. Some, though not all, behaviour is better explained by the notion of Ideal Reactive Equilibrium (2018), according to which the players behave as if they could remove existing information sets and observe their opponents' actions. In the presence of multiple equilibria, sequentiality of moves is often considered to give the first mover an advantage in moving towards her most preferred equilibrium. However, we also find strong evidence that players who move last can anticipate such behavior and exploit it by systematically reaching off-equilibrium outcomes that are more favorable to them. Western University Persuading part of an audience    [pdf] Abstract I propose a cheap-talk model in which the sender can use private messages and only cares about persuading a subset of her audience. For example, a candidate only needs to persuade a majority of the electorate in order to win an election. I find that senders can gain credibility by speaking truthfully to some receivers while lying to others. In general settings, the model admits information transmission in equilibrium for some prior beliefs. The sender can approximate her preferred outcome when the fraction of the audience she needs to persuade is sufficiently small. I characterize the sender-optimal equilibrium and the benefit of not having to persuade the whole audience in separable environments.
I also analyze different applications and verify that the results are robust to some perturbations of the model, including non-transparent motives as in Crawford and Sobel (1982) and full commitment as in Kamenica and Gentzkow (2011). Urmia University Locating the Sale Agents in Spoke Model through Uniform Distribution of Consumers    [pdf] (joint work with Salah Salimian, Kiumars Shahbazi, Naeimeh Hozouri) Abstract Most manufacturers sell their products through sale agents and are not directly engaged with consumers. Therefore, determining the optimal location and optimal number of sale agents is highly significant in their planning. The main objective of this paper is to model sale agents theoretically and to extend location models so that the assumptions are closer to reality and provide the conditions required for selecting the optimal location and optimal number of sale agents. To this end, the spoke model of Chen & Riordan (2007) and Lijesen & Reggiani (2013) is used. On each street, n consumers are uniformly located. The results show under what conditions the city center is the optimal location for sale agents and when the city margin is, and indicate that the cost of launching sale agents is the main factor in this decision. Moreover, the results show that the optimal number of sale agents is a function of the number of streets, the customers' valuation of each unit of product, the price of sale agents, the number of consumers on each street, the profit earned by the sale agents, and the cost of launching sale agents.
Urmia University Location Choice of Firms in an Unequal Length Streets Model: Game Theory Approach (Extension of the Spoke Model)    [pdf] (joint work with Kiumars Shahbazi, Salah Salimian, Naeimeh Hozouri) Abstract Location choice is one of the key elements in the success and survival of industrial centers and has a great impact on reducing the costs of establishing and launching various economic activities. This study uses a streets-of-unequal-length model, a classic extension of the spoke model, but with an unlimited number of streets of uneven lengths. The results show that the spoke model is a special case of the streets-of-unequal-length model. According to the results of this study, if the strategy of enterprises and firms is to select both price and location, there is no equilibrium in the game. Furthermore, increased street length leads to increased profit for enterprises, and as the number of streets increases, the enterprises choose locations that are far from the center (maximum differentiation) and the enterprises' output decreases. Moreover, each enterprise's production rate tends toward zero as the number of streets goes to infinity, and the perfectly competitive outcome is achieved. Urmia University The Expansion of Hotelling Location Model using Triangular Distribution Approach and Types of Consumer (Experienced and Inexperienced)    [pdf] (joint work with Salah Salimian, Kiumars Shahbazi, Jalil Badpeyma, Naeimeh Hozouri) Abstract This study analyzes optimal location assuming two types of consumers, experienced and inexperienced, distributed with a triangular density function. The results indicate that the demand functions of the two firms depend on the acquired desirability of a certain type of good and the number of experienced consumers, and that the unit Nash equilibrium costs are increasing in transportation costs.
In addition, as transportation costs increase, firm 1 approaches the center and firm 2 moves away from it. Furthermore, if the two firms are located at the same point, they do not demand uniform equilibrium prices, and each firm's price is more sensitive to the other firm's location than to its own. University of Valencia Security in digital markets    [pdf] (joint work with Amparo Urbano) Abstract This paper contributes to the literature on security in digital markets. We analyze a two-period monopoly market in which consumers have privacy concerns. We make three assumptions about privacy: first, that it evolves over time; second, that it has a value that is unknown to all market participants in the first period; and third, that it may affect market participants' willingness to pay for products. The monopolist receives a noisy signal about consumers' average privacy. This signal allows the monopolist to adjust the price in the second period and engage in price discrimination. The monopolist's price in period two acts as a signal to consumers about privacy. This signal, together with consumers' purchase experiences from the first period, determines demand. We address two scenarios: direct investment in security to improve consumers' experiences and investment in market signal precision. IGIDR Corruption in Multidimensional Procurement Auctions under Asymmetry    [pdf] (joint work with Shivangi Chandel and Shubhro Sarkar) Abstract We examine corruption in first- and second-score procurement auctions in an asymmetric bidder setting. We assume that the auction is delegated to an agent who possesses more information about quality than the procurer and is known to be corrupt with some probability. Using this information asymmetry, the corrupt agent asks for a bribe from one of two bidders and promises to manipulate bids in return. We show that the agent approaches the weaker firm for higher levels of bidder asymmetry in both auction formats.
Using a symmetric quasi-linear scoring rule, we show that neither the first- nor the second-score auction implements the optimal mechanism, with or without corruption. Our numerical simulations suggest that the buyer prefers the first-score auction when the stronger firm is approached by the agent in the second-score auction. If the weaker firm is favored, on the other hand, the buyer switches to the second-score auction if the probability of corruption is high. Finally, our paper highlights the limited manipulation power of the agent in the second-score auction. LUISS The buck-passing game    [pdf] (joint work with Roberto Cominetti, Matteo Quattropani) Abstract We consider a model where agents want to transfer the responsibility of doing a job to one of their neighbors in a social network. This can be considered a network variation of the public good model. The goal of the agents is to see the buck coming back to them as rarely as possible. We frame this situation as a game, called the buck-passing game, where players are the vertices of a directed graph and the strategy space of each player is the set of her out-neighbors. The cost that a player incurs is the expected long-run frequency of times she gets the buck. We consider two versions of the game. In the deterministic one, the players choose one of their out-neighbors. In the stochastic version, they choose a probability vector that determines which of their out-neighbors is chosen. We use the finite improvement property to show that the deterministic buck-passing game admits a pure equilibrium. Under some conditions on the strategy space, this is true also for the stochastic version. This is proved by showing the existence of an ordinal potential function. These equilibria are prior-free, that is, they do not depend on the initial distribution according to which the first player having the buck is chosen.
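The deterministic buck-passing game can be simulated directly: under pure strategies the buck follows a deterministic trajectory that eventually enters a cycle, so each cycle member holds it 1/|cycle| of the time. The sketch below (a toy graph and a uniform-prior-averaged cost of our own choosing, not code from the paper) iterates unilateral improving deviations, which terminate by the finite improvement property the authors establish:

```python
# Hypothetical 4-player directed graph: out_neighbors[i] lists who i may pass to.
out_neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

def cycle_of(strategy, start):
    """The deterministic trajectory from `start` eventually enters a cycle;
    return that cycle as a set of players."""
    seen = {}
    node, t = start, 0
    while node not in seen:
        seen[node] = t
        node = strategy[node]
        t += 1
    # players first visited at or after the revisited node lie on the cycle
    return {v for v, s in seen.items() if s >= seen[node]}

def cost(strategy, player):
    """Long-run frequency of holding the buck, averaged over a uniform
    initial placement (equilibria are prior-free, per the abstract)."""
    total = 0.0
    for start in strategy:
        cyc = cycle_of(strategy, start)
        if player in cyc:
            total += 1.0 / len(cyc)
    return total / len(strategy)

def best_response_dynamics(strategy):
    """Apply unilateral improving deviations until none exists; the finite
    improvement property guarantees termination at a pure equilibrium."""
    improved = True
    while improved:
        improved = False
        for i, opts in out_neighbors.items():
            best = min(opts, key=lambda j: cost({**strategy, i: j}, i))
            if cost({**strategy, i: best}, i) < cost(strategy, i):
                strategy[i] = best
                improved = True
    return strategy
```

On this toy graph, starting from everyone passing to their first listed neighbor, the dynamics settle at a profile where no player can lower her frequency by redirecting the buck; the per-start frequencies always sum to one across players.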
Bar Ilan University, Israel Voluntary Disclosure of Bad News in a Dynamic Model    [pdf] (joint work with Ilan Kremer, Andrzej Skrzypacz and Amnon Schreiber) Abstract We examine a dynamic model of voluntary disclosure of private information. In our model, a manager of a firm who may learn the value of the firm interacts with a competitive capital market and maximizes a weighted sum of all period prices. The value of the firm changes over time. In such a model, the expectation of prices does not depend on the disclosure policy of the firm. Our main result shows that there is a unique equilibrium disclosure policy with the following property: in each period before the last, there is a range of values that the manager discloses, and the disclosure of a value in this range results in a price that is lower than the non-disclosure price. Humboldt University Berlin Uncertain Commitment Power in a Durable Good Monopoly    [pdf] Abstract This paper considers dynamic pricing strategies in a durable good monopoly model with uncertain commitment power to set price paths. The type of the monopolist is private information of the firm and not observable to consumers. If commitment to future prices is not possible, the initial price is high in equilibrium, but the firm falls prey to the Coase conjecture later to capture the residual demand. The relative price cut is increasing in the probability of commitment, as buyers anticipate that a steady price is likely and purchase early. Pooling in prices may occur in perpetuity if commitment is sufficiently weak. Pooling in perpetuity is also preserved if committing to a high price is endogenously chosen by the firm. Columbia University A Dynamic Model of Reputation-Driven Media Bias    [pdf] Abstract I study how media bias, specifically that which is driven by reputational concerns, changes over time. To this end, I present a dynamic model of reputation-driven media bias.
A firm privately learns about an issue in increments and reports to a consumer with each new piece of information. With each new report, the consumer updates her beliefs about the firm's information quality, i.e., the firm's reputation. Firms are forward-looking and thus take into account both their immediate and future reputations when reporting. Nonetheless, I establish that equilibrium reporting behavior is identical for myopic and forward-looking firms. In equilibrium, firms bias their reports, and this bias is shown to be driven by two separate factors. First, firms can appear more reputable by appealing to a consumer's prior bias (the prior effect). Separately, firms with reports that are more consistent across time are viewed more favorably (the consistency effect). Though static models highlight the prior effect, they do not account for the consistency effect, which changes with time. Furthermore, the relative importance of the consistency effect grows over time as the firm accumulates a richer history of reports. ITAM Sequential Expert Advice: Superiority of Closed Door Meetings    [pdf] (joint work with Parimal Bag) Abstract Two career-concerned experts sequentially give advice to a Bayesian decision maker (D). We find that secrecy dominates transparency, yielding superior decisions for D. Secrecy empowers the expert moving late to be pivotal more often. Further, (i) only secrecy enables the second expert to partially communicate her information and its high precision to D and swing the decision away from the first expert's recommendation; (ii) if experts have high average precision, then the second expert is effective only under secrecy. These results are obtained when experts only recommend decisions. If they also report the quality of advice, a fully revealing equilibrium may exist. Lehigh University Shuffling as a Sales Tactic: An Experimental Study of Selling Expert Advice    [pdf] (joint work with James Dearden, Ernest K.
Lai) Abstract This study explores the interaction between a product expert, who offers to sell a product ranking, and an incompletely informed consumer. The consumer considers acquiring the expert's product ranking not only because the expert has superior information about the quality of the products the consumer is considering and knows the consumer's utility function, but also because the expert can directly influence the consumer's utility from a product through the product's rank. There are multiple equilibria in this setting with strategic information transmission: ones in which the expert ranks products in a manner consistent with the consumer's pre-ranking utilities, which depend exclusively on the products themselves, and ones in which the expert does not. We design a laboratory experiment to investigate which equilibrium an expert and consumer play. Across the three treatments we examine, which vary by the consumer's possible pre-ranking utilities, we find evidence that product experts are likely to select a ranking methodology that involves considerable uncertainty about the final product ranking, even though doing so involves ranking products in a manner that is inconsistent with consumer pre-ranking utilities. Peking University Screening with Network Externalities    [pdf] (joint work with Yiqing Xing) Abstract Increasingly many products feature "network externalities": the utility of one's consumption increases in his neighbors' consumption. Although information about the network structure is important to the seller, it is often privately known to the buyers. We model a monopoly's (constrained) optimal pricing strategy to "screen" buyers' network information: their susceptibility (out-degree) and influence (in-degree). We characterize the optimal allocation both for the case of directed networks, where each buyer's influence and susceptibility are independent, and for the case of undirected networks, where the two are identical.
For directed networks, we show that the optimal allocation can depend only on a buyer's susceptibility and, with quadratic intrinsic value, is linear in the virtual type (of susceptibility). For undirected networks, we disentangle the different effects of influence and susceptibility on the optimal allocation and show that, with quadratic intrinsic value, the allocation is a linear combination of a buyer's type and virtual type. We contrast the analysis with two benchmarks, complete-information pricing and uniform pricing, to shed light on the value of network information. We also extend the model to accommodate weak affiliation between a buyer's influence and susceptibility, and the situation where influence and susceptibility are endogenous to the optimal allocation. Princeton University Persuasion via Weak Institutions    [pdf] (joint work with Elliot Lipnowski, Doron Ravid) Abstract A sender (S) publicly commissions a study by an institution to persuade a receiver (R). The study consists of a research plan and an official reporting rule. S privately learns the research's outcome, and also whether she can influence the report. Under influenced reporting, S can privately change the report to a message of her choice. Otherwise, the official reporting rule applies. We geometrically characterize S's highest equilibrium value and examine how optimal persuasion varies with the probability that reporting is uninfluenced, S's "credibility." We identify two phenomena: (1) R can strictly benefit from a reduction in S's credibility; and (2) small decreases in credibility often lead to large payoff losses for S, but this typically will not happen when S is almost fully credible. Federal University of Juiz de Fora, Department of Economics Investment Decision Under Inflation Targeting in Emerging Market Economies    [pdf] (joint work with Silvinha Vasconcelos, Claudio R. F. Vasconcelos, Ricardo B. L. M.
Oscar) Abstract This article aims to understand under which conditions emerging market economies (EMEs) can attain a high level of investment under inflation targeting regimes. We extend the game proposed by Asako (2017) and introduce a stochastic learning rule through an Agent-based Computational Economics (ACE) model. Entrepreneurs and workers iteratively play an evolutionary game to make investment decisions. Investments are assumed to be complementary. The conditions for successfully guiding the EMEs toward the long-run equilibrium in which all players invest at the target inflation rate are: (i) investment must be demand-creating innovation; (ii) the Central Bank must have credibility regarding the announced inflation target. Our contributions are twofold: (a) a refinement of the dynamic equilibrium that determines the level of investment in the economy for a given inflation target; (b) greater accuracy in the proportion of agents willing to invest in both physical and human capital, optimizing the implementation of economic policies. University of Zurich, Department of Economics Experts, Quacks and Fortune-Tellers: Dynamic Cheap Talk with Career Concerns    [pdf] (joint work with Egor Starkov) Abstract The paper studies a dynamic communication game in the presence of adverse selection and career concerns. An expert of privately known competence, who cares about his reputation, chooses the timing of his forecast regarding the outcome of some future event. We find that in all equilibria in a sufficiently general class, earlier reports are more credible. Further, any report hurts the expert's reputation in the short run, with later reports incurring larger penalties. The reputation of a silent expert, on the other hand, gradually improves over time.
Humboldt University Berlin Efficient Design With Small Informational Size and Maxmin Agents    [pdf] Abstract We study efficient implementation in general mechanism design settings where the incremental impact of any single agent's information, given the information of others, is small. If agents are Bayesian, McLean and Postlewaite (2015) show that a generalized Vickrey-Clarke-Groves (VCG) mechanism is approximately incentive compatible. We show that if each agent perceives a nontrivial amount of ambiguity, there exist modifications to the generalized VCG transfers that restore incentive compatibility whenever agents are sufficiently informationally small. More generally, we show that if there exists a mechanism that is either (i) approximately efficient and fully incentive compatible or (ii) fully efficient and approximately incentive compatible in a Bayesian environment, then we can construct a mechanism that is both efficient and incentive compatible in an environment with a small amount of ambiguity. Finally, we apply the results to the study of large double auctions. Penn State University Ethics and Talent in Banking    [pdf] (joint work with Anjan Thakor) Abstract This paper develops a theory of optimal ethical standards, capital requirements and talent allocation in banking wherein two types of banks, one protected by regulatory safety nets ("depositories") and the other not so protected ("shadow banks"), innovate financial products and compete for managerial talent. Ethical violations consist of "mis-selling" products to customers who would not benefit from them, and they entail financial losses and regulatory penalties for the miscreant bank. Bank capital is shown to be more efficient than a penalty for implementing ethical standards. For any capital level, banks choose higher ethical standards and experience fewer ethical violations when bank managers are more talented.
However, banks adopting higher ethical standards experience managerial talent migration to banks with lower standards. In equilibrium, endogenously determined regulatory capital and ethical standards are higher in depositories than in shadow banks, and this difference is bigger with talent competition than without. Consequently, depositories hire less talented managers and innovate less, implying that prudential bank regulation has unavoidable labor market consequences in financial services. These effects arise despite customers being sophisticated enough to recognize that mis-selling may occur and, hence, not overpaying on average. If customers are naive and do not recognize potential mis-selling, and the regulator perceives a cost associated with customer overpayment, socially optimal capital requirements and ethical standards are higher. Glasgow University Adverse implementation    [pdf] (joint work with Alexei Savvateev) Abstract We consider a situation in which a social planner tries to implement a project that the agents do not like. Immediate examples of such situations involve market collusion, tax evasion, pollution control, certification exam cheating, etc. We take a double implementation approach to take into account that the agents have every incentive for collective action in this setup. Our sufficient conditions involve a single crossing property of the agents' preferences as well as order semi-invariance and payoff complementarity of the proposed incentive schemes. In this way a strong Nash equilibrium exists and, moreover, any other possible Nash equilibrium is even better for the social planner. Other applications of our model include serial cost sharing of non-convex public goods, as well as scenarios of collective moral hazard under perfect monitoring and costly enforcement.
EPGE-FGV & USP-SP, Brazil Conflict-free and Pareto-optimal allocations in matching markets: A solution concept weaker than the core    [pdf] (joint work with David Castrillo and Marilda Sotomayor) Abstract In the one-sided assignment game any two agents can form a partnership and decide how to share the surplus created. Thus, an outcome involves a matching and a vector of payoffs. In this market, stable outcomes often fail to exist. We introduce the idea of conflict-free outcomes: they are individually rational outcomes where no matched agent can form a blocking pair with any other agent, whether matched or unmatched. We propose the set of Pareto-optimal (PO) conflict-free outcomes, which is the set of maximal elements of the set of conflict-free outcomes, as a natural solution concept for this game. We prove several properties of conflict-free outcomes and PO conflict-free outcomes. In particular, we show that each element in the set of PO conflict-free payoffs provides the maximum surplus out of the set of conflict-free payoffs, and that the set is always non-empty and coincides with the core when the core is non-empty. We further support the set of PO conflict-free outcomes as a natural solution concept by suggesting an idealized partnership formation process that leads to these outcomes. In this process, partnerships are formed sequentially under the premise of optimal behavior, and two agents only reach an agreement if both believe that more favorable terms will not be obtained in any future negotiations. Munich Center for Mathematical Philosophy Lying and Lie-Detection in Bayesian Persuasion Games with Costs and Punishments    [pdf] (joint work with Mantas Radzvilas, Todd Stambaugh) Abstract If the aim of pharmaceutical regulators is to prevent dangerous and ineffective drugs from entering the market, the procedures they implement for drug approval ought to incentivize the acquisition and accurate reporting of research on the questions of safety and effectiveness.
These interactions take the form of sender-receiver games, in which pharmaceutical companies seeking approval of a drug conduct research themselves and report the results to the regulators. Of course, the companies may be inclined to falsify these reports, even in light of the costs and possible penalties for doing so. The main aim of this work is to give a formal model of this kind of interaction and to identify the mechanism that is optimal for the regulatory body, and by proxy the public, when the costs of information, lying, and the detection of lies are nontrivial. In this model, the Sender incurs costs via noisy acquisition of information by sequential testing, falsification of reports of individual tests, and punitive measures upon detection by the Receiver of falsified reports. Further, the model has an epistemic dimension, in which the Sender believes that the likelihood of being caught lying is increasing in the number of falsified reports. The Receiver is cautious in the sense that she does not rule out the possibility that falsification is a viable strategy for the Sender's type, and she makes probabilistic inferences about the Sender's type and strategy from the messages she receives. The ability of the Receiver to detect lies is limited by the costs of her verification procedure. We identify sequential equilibria of the game under multiple constraints on the payoffs, costs, and type structures of the players. Additionally, we identify the report verification strategy that is optimal: if known to the Sender, this strategy minimizes the incentives to falsify reports. Stanford University Likely Existence of Pairwise Stable Networks in Large Network Formation Games    [pdf] Abstract Since its introduction in Jackson and Wolinsky (1996), pairwise stability has been the preponderant equilibrium notion for network formation games in both the theoretical and applied networks literatures.
Yet pairwise stable networks do not exist in some network formation settings, and relatively little work has been done to explore how general the problem of nonexistence may be. This paper demonstrates that the known sufficient conditions for existence are both restrictive, in that they rule out features of preferences present in most real-world settings, and fragile, in that even minor violations of the conditions can lead to nonexistence. We then show that, nonetheless, pairwise stable networks exist with high probability in large network formation games in which agents' preferences are sufficiently uncorrelated and noisy. Finally, we show how this result can be used to derive likely existence even in an information sharing network formation game for which explicit representations of agents' preferences are computationally intractable. Massachusetts Institute of Technology Reputation Concerns Under At-Will Employment    [pdf] (joint work with Dong Wei) Abstract We study a continuous-time model of a long-run employment relationship with a fixed wage and at-will firing; that is, termination of the relationship is non-contractible. Depending on his type, the worker either always works hard or can freely choose his effort level. The firm does not know the worker’s type, and monitoring is imperfect. We show that, in the unique Markov equilibrium, as the worker’s reputation worsens, his job becomes more insecure and the strategic worker works harder. We further demonstrate that the relationship between average productivity and job insecurity is U-shaped, which is consistent with typical findings in the organizational psychology literature. Wuhan University Perfect and proper equilibria in large games    [pdf] (joint work with Yishu Zeng) Abstract This paper studies pure strategy perfect and proper equilibria for games with non-atomic measure spaces of players and infinitely many actions. 
A richness condition (nowhere equivalence) on the measure space of players is shown to be both necessary and sufficient for the existence of such equilibria. The limit admissibility of perfect and proper equilibria is also proved. University of Minnesota Reputation for Persuasion    [pdf] Abstract I study the optimal disclosure of information of uncertain quality. Each period, a firm wishing to issue debt hires a credit rating agency to investigate its ability to repay and reveal the results to investors. The credit rating agency obtains a signal about the firm and chooses to reveal some or all of that information to investors. The accuracy of the signal is unknown. A reputation for the quality of the credit rating agency arises as investors learn about the firm through a rating made noisy both by the credit rating agency’s mistakes and by withheld information. A rating methodology that is more revealing about the firm is also more revealing about the quality of the credit rating agency after firm uncertainty is resolved. The credit rating agency must balance the incentives of the firm employing it with incentives coming from its own reputation. Contrary to commonly held intuition, I find that reputational concerns typically do not lead the credit rating agency to reveal more information. In some cases they can even lead to less information revelation in equilibrium. University of Minnesota Monitor Reputation and Transparency    [pdf] (joint work with Ivan Marinovic) Abstract We study the disclosure policy of a regulator overseeing a monitor with reputation concerns, such as a bank or an auditor. The monitor faces a manager, who chooses how much to manipulate given the monitor’s reputation. Reputational incentives are strongest for intermediate reputations, and uncertainty about the monitor is valuable. Instead of providing transparency, the regulator’s disclosure keeps the monitor’s reputation intermediate, even at the cost of diminished incentives. 
Beneficial schemes feature random delay. Commonly used ones, which feature immediate disclosure or fixed time delay, destroy reputational incentives. Surprisingly, the regulator discloses more aggressively when she has better enforcement tools. Maastricht University Stronger bonds with less connected agents in stable resource sharing networks    [pdf] Abstract This is a model of network formation in which agents create links following a simple heuristic: they invest their limited resources proportionally more in neighbours who have fewer links. This decision rule captures the notion that, in terms of social value, more connected agents are on average less beneficial as neighbours, and it is a useful proxy when the payoffs are difficult to compute. The decision rule also illustrates an externalities effect, whereby an agent's actions also influence his neighbours' neighbours. Besides complete networks and fragmented networks with complete components, the pairwise stable networks produced by this model include many non-standard ones with characteristics observed in real life. Multiple stable states can develop from the same initial structure: the stable networks may have cliques linked by intermediary agents, while at other times they have a core-periphery structure. Standard networks commonly seen in the literature, such as the star, circle, line, wheel, biregular graphs and incomplete regular graphs, are not stable. Even though complete networks are the most efficient, the observed pairwise stable networks have close to optimal welfare. This limited loss of welfare is due to the fact that when a link is established, it benefits the linking agents but makes them less attractive as neighbours for others, thereby partially internalising the externalities the new connection has generated. 
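The investment heuristic in the Maastricht abstract above can be sketched in a few lines. This is a toy illustration, not code from the paper: it assumes each agent has one unit of resource and splits it across neighbours with weight inversely proportional to each neighbour's degree, which matches the stated rule "invest proportionally more in neighbours who have fewer links" but fixes a functional form the abstract does not specify.

```python
# Toy sketch of the "invest more in less-connected neighbours" heuristic.
# Assumed functional form (not from the paper): each agent has one unit of
# resource and weights neighbour j by 1 / degree(j), then normalizes.

def investments(adjacency):
    """adjacency: dict mapping agent -> set of neighbours (undirected graph)."""
    degree = {i: len(nbrs) for i, nbrs in adjacency.items()}
    alloc = {}
    for i, nbrs in adjacency.items():
        weights = {j: 1.0 / degree[j] for j in nbrs}
        total = sum(weights.values())
        alloc[i] = {j: w / total for j, w in weights.items()}
    return alloc

# Line network 0 - 1 - 2: agent 1 splits equally (both neighbours have
# degree 1), while agents 0 and 2 put everything on agent 1.
line = {0: {1}, 1: {0, 2}, 2: {1}}
alloc = investments(line)
```

In a less symmetric graph the bias toward low-degree neighbours shows up directly: an agent whose neighbours have degrees 2 and 3 invests 0.6 in the former and 0.4 in the latter under this assumed rule.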
PSU Preferences for Power    [pdf] (joint work with Elena Pikulina) Abstract Power---the ability to determine the outcomes of others---usually comes with various benefits: higher compensation, public recognition, etc. We develop a new game, the Power Game, and use it to demonstrate that a substantial fraction of individuals enjoy the intrinsic value of power: they accept a lower payoff in exchange for power over others, without any additional benefits to themselves. We show that preferences for power exist independently of other components of decision rights. Further, these preferences cannot be explained by social preferences, are stable over time, and are not driven by mistakes, confusion or signaling intentions. Using a series of additional experiments, we show that power (i) is related to determining the outcomes of others directly, as opposed to simply influencing them; (ii) depends on how much freedom the decision-maker has over deciding those outcomes; (iii) is tied to relationships between individuals and not necessarily organizations; and (iv) likely depends on the domain: power is salient in workplace settings but not necessarily in others. We establish that ignoring preferences for power may have large welfare implications. Consequently, our findings provide strong reasons for incorporating power preferences in the study and design of political systems and labor contracts. PSU Correlation Neglect in Student-to-School Matching    [pdf] (joint work with Alex Rees-Jones, Ran Shorrer) Abstract A growing body of evidence suggests that decision-makers fail to account for correlation in signals that they receive. We study the consequences of this behavior for application strategies to schools. In a lab experiment presenting subjects with incentivized school-choice scenarios, we find that subjects generally follow optimal application strategies when schools' admissions decisions are determined independently. 
However, when schools rely on a common priority---inducing correlation in their decisions---decision making suffers, and students often fail to apply to attractive "safety" options. We document that this pattern holds even within-subject, with significant fractions of participants pursuing different strategies in mathematically equivalent situations that differ only by the presence of correlation. We provide a battery of tests supporting the possibility that this phenomenon is at least partially driven by correlation neglect, and we discuss implications that arise for the design and deployment of student-to-school matching mechanisms. University of Technology Sydney Sophistication and Cautiousness in College Applications    [pdf] (joint work with Yan Song, Xiaoyu Xia) Abstract As in many places around the world, Chinese provinces reformed their college admission mechanisms from the Immediate Acceptance mechanism to new ones that share features of the Deferred Acceptance mechanism. In this article, we propose a novel approach to evaluate these reforms in terms of student welfare by estimating the fractions of three major behavioral types as well as the student preferences. We first show that the reforms would not affect the equilibrium outcome played by rational students, but our data do not support this hypothesis. Motivated by this observation, we extend the model to include the following types of students, classified by their strategic sophistication and beliefs: the rational type, the naive type and the cautious type. We identify and estimate the fractions of these types from the assignment data before and after the policy change alone, which allows us to analyze the welfare effect of policy changes separately by behavioral type. 
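The Deferred Acceptance mechanism that these reforms move toward can be sketched in its standard student-proposing (Gale-Shapley) form. This is a textbook implementation for illustration, not code from either paper; the function name and the example preferences are invented for the sketch.

```python
# Minimal student-proposing Deferred Acceptance (Gale-Shapley).
# Students propose down their preference lists; schools provisionally hold
# their best applicants up to capacity and reject the rest.

def deferred_acceptance(student_prefs, school_prefs, capacity):
    """student_prefs: dict student -> list of schools, best first.
    school_prefs: dict school -> list of students, best first.
    capacity: dict school -> number of seats."""
    rank = {s: {stu: r for r, stu in enumerate(prefs)}
            for s, prefs in school_prefs.items()}
    next_choice = {stu: 0 for stu in student_prefs}
    held = {s: [] for s in school_prefs}      # provisional admits per school
    free = list(student_prefs)                 # students still proposing
    while free:
        stu = free.pop()
        prefs = student_prefs[stu]
        if next_choice[stu] >= len(prefs):
            continue                           # list exhausted: unmatched
        school = prefs[next_choice[stu]]
        next_choice[stu] += 1
        held[school].append(stu)
        held[school].sort(key=lambda x: rank[school][x])
        if len(held[school]) > capacity[school]:
            rejected = held[school].pop()      # drop worst-ranked applicant
            free.append(rejected)
    return {s: set(stus) for s, stus in held.items()}
```

For example, with students a, b, c, school X (one seat, ranking b > a > c) and school Y (two seats, ranking a > b > c), where a and b both prefer X, the algorithm assigns X = {b} and Y = {a, c}: a is rejected by X and ends up at Y.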
University of Texas Rio Grande Valley A Political Reciprocity Mechanism    [pdf] (joint work with Roland Pongou (University of Ottawa), Jean-Baptiste Tondji (University of Texas Rio Grande Valley)) Abstract This paper considers the problem faced by a political authority that has to design a legislative mechanism that guarantees the selection of policies that are stable, efficient, and inclusive in the sense of strategically protecting minority interests. Experimental studies suggest that some of these desirable properties can be achieved if decision-makers (e.g., legislators) are induced to display reciprocal and pro-social behavior. However, the question of how a voting mechanism can be designed to incentivize "selfish" individuals to display such behavior remains unresolved. We propose such a mechanism and find that it is a simplification of legislative procedures used in some democratic societies. Our mechanism satisfies all of the aforementioned properties under mild conditions, and it is easily implementable. In addition, it encourages positive reciprocity and generally protects minorities without having to make use of a supermajority rule, as many real-world political institutions do. Finally, a comparative analysis shows that this mechanism has other desirable features and properties that distinguish it from other well-known political procedures. Indian Institute of Management Bangalore Fair Pricing in a Two-sided Market Game    [pdf] Abstract Pricing is one of the important strategic decisions in two-sided markets. A key finding of prior research is that the pricing structure necessitates endogenizing network externalities and adopting a pricing strategy where one side of the market often subsidizes the other side. In effect, the prices charged on one side do not usually reflect the costs incurred to serve that side. 
This gives rise to a popular opinion that many two-sided market platforms adopt a pricing structure that is biased against one side and favors the other. A fundamental question that arises in such a setting is: how much subsidization is fair in two-sided markets? We analyze the question of a fair pricing structure in two-sided markets from the point of view of coalitional game theory. Given a two-sided market, we define a related coalitional game which we call a two-sided market game. We analyze the two-sided market game using various fairness-based solution concepts in coalitional game theory. This study has implications for how competition policy can be applied in two-sided markets. University of Virginia Obvious Manipulations    [pdf] (joint work with Peter Troyan and Thayer Morrill) Abstract A mechanism is strategy-proof if agents can never profitably manipulate, in any state of the world; however, not all non-strategy-proof mechanisms are equally easy to manipulate - some are more “obviously” manipulable than others. We propose a formal definition of an obvious manipulation and argue that it may be advantageous for designers to tolerate some manipulations, so long as they are non-obvious. By doing so, improvements can be achieved on other key dimensions, such as efficiency and fairness, without significantly compromising incentives. We classify common non-strategy-proof mechanisms as either obviously manipulable (OM) or not obviously manipulable (NOM), and show that this distinction is both tractable and in line with empirical realities regarding the success of manipulable mechanisms in practical market design settings. Hitotsubashi University LQG Information Design    [pdf] Abstract A linear-quadratic-Gaussian (LQG) game is an incomplete information game with quadratic payoff functions and Gaussian information structures. It has many applications, such as Cournot games, Bertrand games, beauty contest games, and network games, among others. 
LQG information design is the problem of finding, from a given collection of feasible information structures, the Gaussian information structure that maximizes the expected value of a quadratic function of actions and payoff states when players follow a Bayesian Nash equilibrium. Because the LQG model is tractable but not too specific, it can be a good starting point for exploring a general relationship between optimal information structures and economic environments. In this problem, the variable to be determined is the covariance matrix of actions and payoff states; the objective function is a Frobenius inner product of a constant symmetric matrix and the covariance matrix; and the constraints are linear equalities on the covariance matrix, which must be positive semidefinite. This implies that we can formulate LQG information design as semidefinite programming. Thus, we can numerically obtain the optimal information structures by using semidefinite programming solvers, and in some cases we can analytically characterize them. As an immediate consequence of the formulation, we provide sufficient conditions for optimality and suboptimality of full and no information disclosure. Moreover, we identify the optimal information structures in a couple of special cases. In the case of symmetric LQG games, we characterize the optimal symmetric information structure in closed form. In the case of asymmetric LQG games, we characterize the optimal public information structure in closed form. In both cases, we discuss what properties of the constant matrix in the objective function determine the optimal information structures. Virginia Polytechnic Institute To Join or not to Join: Coalition Formation in Public Good Games    [pdf] (Sakshi Upadhyay) Abstract Commitment devices such as coalitions can increase outcome efficiency in public goods provision. 
This research studies the role of social preferences in a two-stage public good game where, in the first stage, heterogeneous agents choose whether or not to join a coalition and, in the second stage, the coalition votes on whether its members will contribute. We find that individuals with stronger social preferences are more likely to join the coalition and to vote for the coalition to contribute to the public good. We further show that higher marginal benefits of contribution lead to more people joining the coalition and contributing to the public good. These results hold whether the coalition’s decision is determined by a majority or a unanimous voting rule. The results are also robust to different model specifications. University of Valencia Demand for Privacy, selling consumer information, and consumer hiding vs. opt-out.    [pdf] (joint work with S. Anderson, N. Larson and M. Sanchez) Abstract We consider consumers choosing whether to buy a good, when they know that information about them can be sold to another firm selling another good they might also buy. This causes some consumers to hide their types by not buying the first good, which delivers an endogenous demand for privacy and renders the demand for the second good more inelastic. But it can also give the firm in the first market a greater incentive to harvest consumers to sell to the second firm; therefore, the upstream price can go down while the downstream price increases. We determine whether information selling improves upstream profits, consumer surplus, and total welfare, and we find the consequences of allowing consumers to opt out of having their information sold by the upstream firm. United States Naval Academy What's Love Got To Do With It? Random Search, Optimal Stopping, and Stable Marriage    [pdf] Abstract I study a decentralized large marriage market with incomplete information and heterogeneous preferences. 
Agents play a matching game in which each agent learns about his/her preferences over possible marriage partners via a sequence of random matches. Provisionally matched agents who find each other mutually acceptable marry and drop out of the search process. I introduce a stability notion which requires that the members of a pair become mutually aware of each other at some point in the search process, prior to their respective marriages, in order to be considered a blocking pair. I obtain results, analogous to those of Lauermann and Nöldeke and of Burdett and Coles, that there is a perfect Bayesian equilibrium in which an agent accepts a match if the private surplus from doing so equals or exceeds an equilibrium threshold; otherwise s/he rejects the match and continues to search for an acceptable spouse. The perfect Bayesian equilibrium has a set of limiting assignments, all of which satisfy the awareness-constrained pairwise stability condition. University of Pittsburgh Delegation in Veto Bargaining (joint work with Navin Kartik and Andreas Kleiner) Abstract We study a canonical problem: a Romer-Rosenthal bargaining game in which the veto player’s preferences (specifically, his ideal point) are private information. Our innovation is to not restrict the proposer to a single policy: instead, the proposer can offer a set of policies—a delegation set—from which the veto player can select any one (or choose the status quo). This can also be viewed as screening/mechanism design with a status-quo constraint. We identify conditions under which the optimal delegation set takes certain forms, in particular when it is an interval. We show that it is generically not a single policy, but it can be the entire policy set between the status quo and the proposer’s ideal point, meaning that the proposer can do no better than giving the veto player complete autonomy to choose his ideal policy. 
We show that as the proposer and veto player become more aligned (in a stochastic sense), the veto player is given less discretion (a smaller delegation set), but the probability of vetoes nevertheless goes down. Santiago de Cali University Robust Equilibria in Tournaments with Externalities (joint work with Ruben Juarez) Abstract Agents form coalitions among themselves, and every agent has preferences over the feasible coalitions to which he belongs. Each formed coalition has power, and the coalition with the largest power wins the tournament. The partition of these agents is a no threat equilibrium (NTE) if, whenever a group of agents gains by forming their own coalition, there exists another group of agents that gains by forming their own coalition and harms at least one agent who initially deviated from the partition. We characterize the class of feasible coalitions that guarantees the existence of an NTE partition for all preferences and all powers. Indeed, these sets of feasible coalitions arise from sets of connected coalitions in networks without cycles. Moreover, we prove that an NTE always exists in the matching problem when couples (or singles) have power in the process. Finally, we show that the characterized class expands more limited versions in which traditional equilibria such as the core do not exist. University of Helsinki Mechanism without Commitment - General Solution and Application to Bargaining Abstract This paper identifies mechanisms that are implementable even when the planner cannot commit to the rules of the mechanism. The standard approach is to require the mechanism to be robust against redesign, which often leads to existence problems. The novelty of this paper is to require robustness against redesigns that are themselves robust against redesigns that are themselves robust against... 
That is, we allow the planner to costlessly redesign the mechanism any number of times, and identify redesign strategies that are both optimal and dynamically consistent. A mechanism design strategy that credibly implements a direct mechanism after all histories is shown to exist. The framework is applied to bilateral bargaining situations. We demonstrate that a welfare-maximizing second-best mechanism can be implemented even without commitment. University of Texas, Austin Signaling in mean-field games    [pdf] Abstract In this paper, we consider an infinite-horizon discounted dynamic mean-field game where there is a large population of homogeneous players sequentially making strategic decisions, and each player is affected by other players through an aggregated population state. Each player has a private type that only she observes. Such games have been studied in the literature under the simplifying assumption that population state dynamics are stationary. In this paper, we consider non-stationary population state dynamics and present a novel backward recursive algorithm to compute Markov perfect equilibria (MPE) that depend on both a player’s private type and the current (dynamic) population state. Ecole Polytechnique Complexity of Strategic Thinking and Robustness of Interim Rationalizability    [pdf] (joint work with Olivier Gossner) Abstract In games of incomplete information, interim rationalizability is the solution concept that stems from the iterative deletion of dominated strategies. It is known that for a fixed game this solution concept can be sensitive to misspecifications of the beliefs and higher order beliefs of a player's type. For a fixed game, we identify all types whose finite order interim rationalizable strategies coincide as one equivalence class. Then we define the complexity of a type as the cardinality of the smallest type space which contains a type in its equivalence class. 
We interpret this measure as the type's complexity of strategic thinking in a game and show that interim rationalizable strategies are robust to perturbations of players' higher order beliefs as long as the perturbations preserve the order of complexity of strategic thinking. Stanford University Unraveling in the Presence of a Secondary Market    [pdf] Abstract Matching markets often unravel, with matches between agents occurring inefficiently early and based on little information. Extensive unraveling has occurred in a variety of industries, ranging from the hiring of investment bankers to the funding of startups by venture capital firms to the drafting of athletes in professional sports. However, initial matchings are not permanent: employees can move to different firms after being hired, startups can partner with different VC firms, and athletes can change teams. This feature has not been incorporated in prior research. I develop a tractable model of screening and matching with a secondary market that allows some firms to hire laterally instead of at the entry level. A novel phenomenon emerges where early matching arises as a strategic decision by firms to prevent poaching, independent of the talent levels in the labor pool. Although rematching is traditionally thought to increase efficiency, the presence of a transparent secondary market has the opposite effect: it leads to a decrease in welfare as a consequence of the adverse signaling incentive it creates. Nanjing Audit University Strong Stochastic Dominance    [pdf] (joint work with Ehud Lehrer) Abstract We generalize the monotone likelihood ratio property of univariate random variables. We say that one distribution strongly stochastically dominates another if the former is a convex transformation of the latter. The main contribution of this paper is to phrase the equivalence condition in utility-theoretic terms. Several economic applications of this equivalence condition are given. 
These applications include Bayesian learning and dynamic decision-making under uncertainty, auctions with independent private values, pricing of risky assets, and implications for a portfolio's value-at-risk and for production expansions. Duke University, University of Sydney Self-Similar Beliefs in Games with Strategic Substitutes    [pdf] (joint work with Mengke Wang) Abstract This paper studies strategic situations where a population of heterogeneous players are randomly matched with each other to play games with strategic substitutes and players have incomplete information about their opponents' private types. If players hold type-independent beliefs about their opponents' types, then in equilibrium players' actions are monotonic with respect to their types. Since players' private types are often not observable to the analyst, to understand what kind of observable behavior can be explained by this model, a representation result is established for this model when the analyst observes how the population behaves at an aggregate level. Of course, a model with type-independent beliefs may not be justified, since types could be correlated in many applications. Moreover, in experiments where individuals are randomly matched to play games with strategic substitutes, they report systematically heterogeneous conjectures about their opponents' actions: players who act more aggressively also conjecture that their opponents will act more aggressively. This not only contradicts the type-independent belief model but is also counterintuitive, because in games with strategic substitutes, opponents' aggressive behavior discourages players from playing aggressively. A model is then proposed where players have self-similar beliefs. It captures the intuition that higher types believe that their opponents are also of higher types and fits the experimental observations. 
One important and surprising result is that models with type-independent beliefs and self-similar beliefs are observationally equivalent for many payoff parameters; that is, they have identical behavioral implications. University of California San Diego Cheaper Talk    [pdf] (joint work with Peicong Hu) Abstract As the new media gain popularity, the cost for an average person to voice his or her opinion drops substantially. Why would a decision-maker seek information from social media knowing that the information there is largely inaccurate? Does the "age of information" necessarily imply better-informed decision making? Despite the lower trust enjoyed by new media compared to traditional news outlets, why are new media used more in disseminating news while traditional newsroom employment has been declining? In this paper, we introduce a fixed cost of "talking" into the canonical cheap talk model and allow the sender to be potentially imperfectly informed. The main results are as follows: (1) While a better-informed sender can provide information of higher quality, a less-informed sender can have the advantage in quantity, in the sense that the latter can be more likely to supply information. (2) The effectiveness of communication is not monotonic in the talking cost. This is because a moderate cost can serve to align players’ preferences with respect to ideal actions, but too high a cost disincentivizes talking. The receiver’s payoff drops as the talking cost approaches zero. (3) The sender’s favorite cost can be lower than the receiver’s, in which case the sender would opt for a less costly communication technology even if doing so sacrifices his credibility and hence the communication effectiveness. A slightly more severe bias may align the players’ preferences with respect to the communication technology. 
Peking University Strategy Space Collapse: Experiment and Theory    [pdf] (joint work with Zhijian Wang) Abstract To detect the strategy space collapse during the successive elimination of dominated strategies (SEDS), we conduct experiments on three matrix forms of the von Neumann 3-card Poker game. In addition to the Nash distribution and social cycling in the long run, we observe pulse signals from dominated strategies before their extinction. The results show that all of these observations, including the Nash distribution, social cycling and the pulse signals, can be explained simultaneously and quantitatively by evolutionary game dynamics. This suggests that SEDS, or the strategy space collapse process, is an area worthy of further exploration. Stony Brook University How does the market structure affect the R&D decision when acquisition is possible? (joint work with Sandro Brusco) Abstract Gordon M. Phillips and Alexei Zhdanov (2012) initiate a new point of view: an active acquisition market encourages firms, particularly small firms in an industry, to conduct research and development (R&D). In their paper, they build a model and provide empirical tests showing that small firms may optimally decide to innovate more when they can be sold to larger firms, and that larger firms may find it disadvantageous to engage in an R&D race with small firms, as they can obtain access to innovation through acquisition. In this paper, we examine how firms' R&D behavior will respond if the demand side of the acquisition market becomes more competitive. We find that as the number of big firms increases, big firms invest more in R&D, since they are less likely to be able to outsource innovation due to the increasing demand in the acquisition market. On the other hand, small firms are even more engaged in R&D, since they are more likely to be sold to larger firms and also because they are likely to gain more bargaining power once they innovate. 
University of California, San Diego Plain Consistency and Perfect Bayesian Equilibrium    [pdf] Wuhan University The reform of China’s college admission mechanisms: An empirical (and experimental) study    [pdf] (joint work with Lijia Wei) Abstract China’s college admission is the largest matching activity held by a central matching system every year. The reform can be used to test matching theory about school admission because the State Council of China suddenly required all regions in China to improve their admission policies in 2014. Although some regions had already started their reforms before 2014, Beijing, Shanghai and some other regions refused to change their admission policies until 2014. This paper empirically analyzes China’s college admission results from 2010 to 2015, which covers the second and third stages of the reform. Our main findings are: (1) The reform does not increase the total admission rates of candidates. Instead, it increases the admission rates of candidates who do well in exams but not of candidates who perform poorly, which makes the college admission results more stable. (2) The reform decreases the quit rates in college admission, which makes the college admission results more efficient. (3) The very top universities receive many more applications than before, which implies that the reform induces more truthful reporting when candidates submit their preferences. (4) The standard deviation of the exam scores of students admitted to the very top universities became much smaller than before, which implies that the reform makes college admission results less manipulable. University of Texas at Austin Bayesian Elicitation    [pdf] Abstract How can a receiver design an information structure in order to elicit information from a sender? Prior to participating in a standard sender-receiver game (in which messages are possibly costly a la Spence), the receiver may commit to any information structure, that is, any degree of transparency. 
Committing to a less informative signal about the sender's choice affects the endogenous information generation process in such a way that the receiver may thereby secure himself more information. We establish broad conditions under which the problem of designing a receiver-optimal information structure can be reduced to a much simpler one: committing optimally to a distribution of actions as a function of the sender's message. Moreover, we relate the choice of information structure to inattention and establish conditions under which the optimal degree of inattention is equivalent to the optimal degree of transparency. We apply these results to various situations, including those in which the sender has an incentive to feint, as well as a political scenario. University of Vienna Echo Chambers: Social Learning under Unobserved Heterogeneity    [pdf] (joint work with Cole Randall Williams) Abstract In a society of homogeneous individuals who differ only in private information, rational social learning requires individuals who are confronted with disagreement to learn to agree. In this article, I show that in a society with unobserved heterogeneity in preferences or priors, individuals instead respond to disagreement with a rational form of confirmation bias I call local learning: individuals place greater weight on opinions or behavior closer to their own. When individuals choose whom to learn from, local learning leads to the development of echo chambers. Columbia University Strategic Exploration    [pdf] (joint work with Qingmin Liu and Yu Fu Wong) Abstract This paper provides a tractable model of strategic exploration in which competing agents search for viable candidates from a large set of alternatives. The model features continuous time and continuous space. We show that the model has an essentially unique equilibrium with a simple and intuitive characterization. 
We define distributional strategies for continuous-time games with unobservable actions and prove a representation result for mixed strategies. The model is flexible, and we provide several variants that may prove useful in studying search, learning, and experimentation. Warsaw School of Economics, Poland Distributional equilibria in dynamic supermodular games with a measure space of players and no aggregate risk    [pdf] (joint work with Lukasz Balbus, Pawel Dziewulski, Kevin Reffett) Abstract We study a class of discounted infinite-horizon stochastic games with strategic complementarities and a continuum of players. We first define our concept of Markov stationary distributional equilibrium, which involves an equilibrium action-state distribution and a law of motion of aggregate distributions. We then prove existence of such an equilibrium, via constructive methods, under a different set of assumptions than Jovanovic and Rosenthal (1988) or Bergin and Bernhardt (1992). Importantly, the dynamic law of large numbers we develop to study the transition of private signals implies no aggregate uncertainty. Our construction (the distributional game specification, the equilibrium concept, and the exact law of large numbers) is critical to avoiding the problems in characterizing dynamic complementarities in actions between periods and beliefs reported recently by Mensch (2018). As a result, we are able to dispense with some continuity assumptions necessary to obtain existence. In addition, we provide computable monotone comparative dynamics results for ordered perturbations of the space of stochastic games (see Acemoglu and Jensen (2015)). Finally, we discuss the relation of our result to recent work on mean-field equilibria in oblivious strategies by Adlakha, Johari, and Weintraub (2015) and Weintraub, Benkard, and Van Roy (2008), and to recent work on large but finite dynamic games (Kalai and Shmaya, 2018) and imagined-continuum equilibrium. 
We provide numerous examples, including social dissonance models, dynamic global games, and keeping-up-with-the-Joneses economies. Keywords: large games, distributional equilibria, supermodular games, games with strategic complementarities, computation of equilibria, non-aggregative games, law of large numbers JEL classification: C62, C72, C73 Paris School of Economics Managing relational contracts    [pdf] (joint work with Marta Troya Martinez) Abstract Relational contracts are typically modeled as being between a principal and an agent, such as a firm owner and a supplier. Yet in a variety of organizations, relationships are overseen by an intermediary such as a manager. Such arrangements open the door to collusion between the manager and the agent. This paper develops a theory of such managed relational contracts. We show that managed relational contracts differ from principal-agent ones in important ways. First, kickbacks from the agent can help solve the manager's commitment problem. When commitment is difficult, this can result in higher agent effort than the principal could incentivize directly. Second, making relationships more valuable enables more collusion and hence can reduce effort. We also analyze the principal's delegation problem and show that she may or may not benefit from entrusting the relationship to a manager. Chinese University of Hong Kong Getting Information from Your Enemies    [pdf] (joint work with Tangren Feng) Abstract A decision maker (DM) needs to choose between two options. The DM does not know which option is better, whereas a group of experts does. However, the experts prefer that the DM choose the wrong option. We find that it is possible for the DM to extract information from the experts using a mechanism without transfers and make an informed choice that benefits himself and hence harms them. 
We further analyze the possibility and effectiveness of such information extraction under a variety of incentive compatibility constraints, including dominant strategy IC, ex post IC, and Bayesian IC. We also discuss two extensions. In the first, we show that the DM can extract information even if his commitment to a mechanism is limited. In the second, we show that if the DM can Blackwell-garble the experts' information source, then information extraction becomes more effective. University of Oregon Intergenerational Transmission of Preferences and the Marriage Market    [pdf] (joint work with Hanzhe Zhang) Abstract We examine the intergenerational transmission of preferences under different organizations of the marriage market. We demonstrate that the number and properties of equilibria depend on the underlying two-sided matching technology: the equilibria resemble those of a coordination game under random matching and those of an anti-coordination game under assortative matching. The matching technology influences not only who matches with whom but also, more importantly, the individual choices that shape future generations' preferences and choices. We discuss the model's implications for the evolution of female labor force participation and for the effectiveness of government campaigns to alter preferences. University of Arizona Persuasive Disclosure    [pdf] Abstract This paper studies the general information disclosure model (Grossman, 1981; Milgrom, 1981), relaxing the assumption of monotonicity in preferences. I apply the belief-based approach, developed in Bayesian persuasion (Kamenica and Gentzkow, 2011) and applied to cheap talk (Lipnowski and Ravid, 2018), to solve for Perfect Bayesian Equilibrium (PBE) outcomes. I find that under full verifiability and rich language, the PBE outcomes take the form of a combination of those in separated auxiliary cheap talk games with lower bounds on the sender's payoff. 
Also, I provide a sufficient condition for the original unraveling result to hold in the general case. Finally, I compare information disclosure with cheap talk and Bayesian persuasion. School of Computer Science and Software, Zhaoqing University, China A Game Theory Approach for Evaluating and Assigning Suppliers in Supply Chain Management    [pdf] (joint work with Dachrahn Wu, Yi-Ming Chen, Yu-Min Chuang) Abstract In this study, we propose a framework for supplier evaluation that incorporates two game theory models designed to advise a manufacturer in choosing suppliers when the available budget is limited. In the first step, the interactions between the manufacturer and the supplier are modeled and analyzed as a two-player zero-sum game, after which the supplier power value is derived from the mixed-strategy Nash equilibrium. The second model uses twelve supplier power values to compute the Shapley value for each supplier, in terms of the thresholds of the majority levels in three manufacturing processes. The Shapley values are then applied to create an allocated set of limited supplier orders. Simulation results show that the manufacturer can use this framework to quantitatively evaluate the suppliers and easily allocate them across the three manufacturing processes. Tsinghua University College Admission with Flexible Major Quotas    [pdf] (joint work with Dalin Sheng, Xiaohan Zhong) Abstract In this paper, we develop a college admission mechanism in which the number of seats allocated to each major in a college can adjust in response to students' demand and each major may have its own priority order over students. We show that the mechanism always results in the student-optimal matching among those that satisfy individual rationality, non-wastefulness, and no justified envy. 
In addition, the mechanism is group strategy-proof for students, respects unambiguous improvements in student standing in the priority orders, and is unanimously preferred by students to a standard deferred acceptance mechanism in which each major has a fixed number of seats. Johns Hopkins University A Theory of Multiplexity: Sustaining Cooperation with Multiple Relationships (joint work with Chen Cheng, Wei Huang, Yiqing Xing) Abstract People are embedded in multiple social relations. These relationships are not isolated from each other: the network pattern of existing relationships is likely to affect the formation of new ones. This paper provides a framework for analyzing the multiplexity of networks. We present a model in which each pair of agents may form more than one relationship. Each relationship is captured by an infinitely repeated prisoner's dilemma with an endogenous stake of cooperation. We show that multiplexity, i.e., having more than one relationship on a link, boosts incentives, as different relationships serve as social collateral for each other. We then endogenize network formation and ask: when an agent has a new link to add, will she multiplex with a current neighbor or link with a stranger? We find the following: (1) There is a strong tendency to multiplex, and a “multiplexity trap” can occur; that is, agents may keep adding relationships with current neighbors even when a stranger would be a more compatible partner for cooperation. (2) Individuals tend to multiplex when the current network (a) has low degree dispersion (i.e., all individuals have similar numbers of friends) or (b) is positively assortative. We provide empirical evidence consistent with our theoretical findings. Johns Hopkins University Communication with Informal Funding (joint work with Chen Cheng, Jin Li, Yiqing Xing) Abstract We present a model of communication with informal funding. 
Specifically, on top of the classical Crawford-Sobel (1982) cheap-talk model, in which only the principal (she) can take an action, we allow the agent (he) to take a costly action that is additive to the principal's, e.g., financing a project with his informal funding. We show that if the principal can choose the cost of informal funding to the agent, there is an optimal cost level, neither too high nor too low, that implements the principal-preferred state. We then study the case in which the agent's cost is exogenous. When it is below the principal's optimal level, the best equilibrium involves no communication, and the project is mostly financed by informal funding. When the agent's cost is above the principal's optimal level, there is a dichotomy between communication and informal funding: up to a threshold of the underlying state, there is Crawford-Sobel-style communication and no informal funding is used; beyond that threshold, informal funding is used but there is no further communication. When the principal also pays a cost for the informal funding, communication improves in the cost to the principal. When the cost to the principal is high enough, informal funding serves as a credible threat to the principal and leads to better communication than the best equilibrium in Crawford-Sobel. Singapore University of Technology and Design Convergence of the Best-response Dynamic in Potential Games    [pdf] Abstract We prove that the continuous-time best-response dynamic from a generic initial point converges to a pure-strategy Nash equilibrium in an ordinal potential game under a minor condition on the payoff matrix. We then study the best-response dynamic defined in a consideration-set game, where players face random strategy constraints with a small probability when playing the underlying game. 
In the case that the underlying game is a two-player common-payoff game with cheap talk, we show that if one player is under a strategy constraint slightly biased towards the efficient outcome, then the best-response dynamic from a generic initial point must approach the efficient outcome, regardless of the constraint on the other player. The University of Chicago Social Learning under Information Control    [pdf] Abstract We study to what extent information aggregation in social learning environments is resilient to centralized information control by a principal with a state-independent preference. We consider a population of agents who arrive sequentially and obtain information about the state of the world both from their private signals and by observing information about other agents' actions, either exogenously or through the principal. We show that information aggregation can be both fragile and resilient, depending on the degree of centralized information control. If the principal has full control over how information is disseminated, information aggregation fails regardless of the social learning environment. However, information aggregation can be achieved if the agents have access to some exogenous observations of others' actions that are outside the principal's control. In this case, the learning dynamics become more interesting: whether information aggregation can be achieved depends on whether and how the beliefs generated by the private signals are bounded, as well as on the type of exogenous observations that agents have. The Ohio State University (Cost-of-) Information Design    [pdf] (joint work with Yaron Azrieli and Shuo Xu) Abstract We introduce the cost-of-information design problem, in which a decision maker acquires information subject to a cost selected by a designer. 
We show that when restricted to the family of posterior-separable cost functions, the designer achieves the same level of utility as in the case where she herself chooses the information for the decision maker. The designer fails to achieve the first best if the family of cost functions is further restricted to be invariant to the labeling of the states. We show in an example that the designer induces full information at zero cost when the cost functions are multiples of the average reduction of Shannon entropy. We also introduce competition into the cost-of-information design problem, where two designers simultaneously select the cost of information and the decision maker acquires information from only one of them. We show in an example that the designers exhibit Bertrand-like behavior, and the unique equilibrium is for the designers to induce full information at zero cost. Ohio State University Termination fee as a sequential screening device    [pdf] Abstract We consider an intertemporal monopolistic selling environment where buyers arrive sequentially and value uncertainty is resolved over time. We show that a contract involving a floor price and a termination fee between the seller and the early-arriving buyer can serve as a sequential screening device: an optimistic buyer accepts an offer with a high price and a high termination fee to avoid fierce future competition; a pessimistic buyer, however, will promise a low price and accept a low break-up fee to avoid “over-purchase,” since he expects a high probability that the realized valuation is low. We confirm analytically and numerically that the seller can raise more revenue from this selling mechanism than from an optimal static mechanism. This result provides a potential rationale for the use of go-shop negotiation in the M&A market, among other selling procedures with break-up terms. 
We further demonstrate that the go-shop negotiation, although not fully optimal among all dynamic mechanisms, can come close in revenue to the optimal dynamic mechanism for a large class of parametric distributions. University of Chicago Implications of Consumer Data Monopoly    [pdf] Abstract This paper explores the implications of an informational monopoly. An informational monopolist is able to provide consumer data to producers and facilitate price discrimination. By examining the revenue-maximizing mechanisms for the informational monopolist, this paper shows that consumer surplus is always entirely extracted. Furthermore, it characterizes the optimal mechanisms for the informational monopolist, which feature an upper isolation segmentation. The characterization of optimal mechanisms further leads to an equivalence result: in terms of the extracted revenue, the producer profit, the consumer surplus, and the volume of trade, an informational monopoly is equivalent to a conglomerate monopoly that controls both the production technology and the consumer data. University of Georgia Matching with Complementary Contracts    [pdf] (joint work with Marzena Rostek) Abstract In this paper, we provide existence results for matching environments with complementarities, such as markets for patent licenses, differentiated products, or multi-sided platforms. Our results apply to both nontransferable and transferable utility settings, and allow for multilateral agreements and agreements with externalities. Additionally, we give comparative statics regarding the way primitive characteristics are combined to form the set of available contracts. These show the impact of various contract design decisions, such as the application of antitrust law to disallow patent pools, on stable outcomes. 
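Several of the matching abstracts in this collection take the Gale-Shapley deferred acceptance algorithm as their baseline (the Tsinghua college admission mechanism, for instance, is compared against a standard deferred acceptance mechanism with fixed major quotas). A minimal sketch of student-proposing deferred acceptance with quotas, using hypothetical preference data, is:

```python
def deferred_acceptance(student_prefs, college_prefs, quotas):
    """Student-proposing deferred acceptance with fixed quotas.
    student_prefs: dict student -> list of colleges, best first.
    college_prefs: dict college -> list of students, best first.
    quotas: dict college -> number of seats."""
    # Precompute each college's ranking of students for fast comparison.
    rank = {c: {s: i for i, s in enumerate(prefs)}
            for c, prefs in college_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}
    held = {c: [] for c in college_prefs}   # tentatively held students
    free = list(student_prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue  # student exhausted the list and stays unmatched
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda x: rank[c][x])
        if len(held[c]) > quotas[c]:
            rejected = held[c].pop()  # worst held student is rejected
            free.append(rejected)
    return held

# Hypothetical toy market: three students, college A with one seat,
# college B with two.
student_prefs = {'s1': ['A', 'B'], 's2': ['A', 'B'], 's3': ['B', 'A']}
college_prefs = {'A': ['s2', 's1', 's3'], 'B': ['s1', 's2', 's3']}
quotas = {'A': 1, 'B': 2}
matching = deferred_acceptance(student_prefs, college_prefs, quotas)
print(matching)  # {'A': ['s2'], 'B': ['s1', 's3']}
```

The resulting matching is stable with fixed quotas; the flexible-quota and complementary-contract mechanisms discussed above depart from this baseline in how seats and agreements are determined.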
Boston University Preference, Rationalizability and Robustness in Games with Incomplete Information    [pdf] Abstract This paper defines the notion of interim correlated rationalizability for a very general class of games with incomplete information. Working with the Epstein-Wang universal type space, our framework is not restricted to the conventional subjective expected utility model and is general enough to accommodate non-expected-utility cases and ambiguity-averse players. Interim correlated rationalizability is a natural generalization of the rationalizability concept from the expected utility case, and we study its properties. In particular, our rationalizability concept characterizes rationality and common knowledge of rationality. Furthermore, we investigate robustness to higher-order uncertainty. Interim correlated rationalizability is the strongest solution concept satisfying upper hemicontinuity in the universal type space. Moreover, any rationalizable action can be made uniquely rationalizable by perturbing higher-order uncertainty. Finally, we characterize the structure of rationalizable sets. As in the expected utility model, the rationalizable action profile is generically unique, even though ambiguity aversion must weakly enlarge the rationalizable set. Korea University Membership Mechanism Abstract This paper studies an environment in which a seller seeks to sell two different items to buyers. The seller designs a “membership mechanism” that assigns positive allocations to members only. Exploiting the restricted member set, the seller finds a revenue-maximizing incentive compatible mechanism. We first establish the optimal allocation rule for this membership mechanism given a regularity condition on a modified valuation distribution reflecting the member set, which yields the existence of a member set and the optimal payment rule. 
The optimal allocation enables us to compare the membership mechanism with separate selling of the two items, suggesting conditions under which membership dominates separate selling; these depend on the interplay between the number of bidders and the degree of stochastic dominance among the valuation distributions. Stony Brook University Firm Entry Decline, Market Structure and Dominant Firm's Productivity    [pdf] Abstract Firm entry decline in the US has created concerns regarding job creation, firm churning, resource reallocation, and aggregate productivity. Based on empirical facts regarding concentration and markup trends, this research tries to understand whether increasing market concentration (through the productivity increase of large, dominant firms) may cause the entry decline. To quantitatively evaluate the effect, I use a firm dynamics model that introduces a “dominant firm vs. competitive fringe” framework into the general equilibrium version of Hopenhayn (1992). I find that an increase in the dominant firm's productivity can explain the entry decline of fringe firms. Center for the Study of Rationality, The Hebrew University of Jerusalem Strategic use of seller information in private-value first-price auctions    [pdf] (joint work with Todd R. Kaplan) The Basque Country University Two solutions for bargaining problems with claims    [pdf] (joint work with M. J. Albizuri, B. Dietzenbacher and J. M. Zarzuelo) Abstract A bankruptcy problem is an elementary allocation problem in which claimants have individual claims on a deficient estate. In a bankruptcy problem with transferable utility (O'Neill, 1982), the estate and claims are of a monetary nature. These problems are well studied, both from an axiomatic perspective and from a game-theoretic perspective. On the other hand, Chun and Thomson (1992) considered bankruptcy problems with nontransferable utility, where the estate can take a more general shape and corresponds to a set of utility allocations. 
Thus NTU-bankruptcy problems form a natural generalization of the traditional bankruptcy problems. These authors proposed the "proportional solution" for NTU-bankruptcy problems using an axiomatic approach. In this paper, we propose and characterize two solutions for NTU-bankruptcy problems that are closely related to the Nash and Kalai-Smorodinsky bargaining solutions. The two characterizations consist of the traditional axioms used by Nash and by Kalai and Smorodinsky, together with a new axiom called Independence of Higher Claims (IHC). This axiom requires that if an agent receives his or her claim in a problem, then the agent would not receive less if some other agents increased their claims while the estate stayed the same. New York University On Incentive Compatibility in Dynamic Mechanism Design with Exit Option in a Markovian Environment    [pdf] (joint work with Tao Zhang, Quanyan Zhu) Abstract In this work, we consider a class of dynamic mechanism design frameworks in a Markovian environment as described in Pavan et al. (2014) and analyze a direct mechanism model of a principal-agent problem in which the agent is allowed to exit at any period of time. The agent privately observes time-varying information, reports the information to the principal using a reporting strategy, and chooses a stopping time at which to exit the mechanism. The principal, on the other hand, chooses decision rules consisting of an allocation rule and a payment rule. In order to influence the agent's stopping decision, the principal designs a termination transfer rule that is delivered only at the stopping time realized by the agent's stopping rule. We focus on the one-period look-ahead (O-LA) stopping rule and construct the payment rule and termination transfer rule by fixing an allocation rule that satisfies a first-order condition of incentive compatibility. 
We obtain necessary and sufficient conditions for the implementability of the allocation rule by characterizing the one-shot deviation principle and strong monotonicity conditions derived from cyclical monotonicity (Rochet, 1987). Michigan State University Bargaining and Reputation with Ultimatums    [pdf] (joint work with Mehmet Ekmekci) Abstract Two players divide a unit pie. Each player is either justified to demand a fixed share, never accepting any offer below it, or unjustified to demand any share but nonetheless wanting as big a share of the pie as possible. Each player can give in to the other player's demand at any time, or can costly challenge the other player with an ultimatum to let the court settle the conflict. We solve for the equilibrium strategies and reputation dynamics of the game when there is no ultimatum (Abreu and Gul, 2000), when the ultimatum is available to one player, and when the ultimatum is available to both players. Several interesting results follow from the analysis. First, equilibrium dynamics involve non-monotonic probabilities of sending ultimatums when challenge opportunities do not arise frequently: at first both players mix between challenging and not challenging when a challenge opportunity arrives, then one player challenges for sure and the other does not challenge at all, and finally neither player challenges and they resort to a war of attrition. Second, when challenge opportunities arise sufficiently frequently for both players and the prior probabilities of being justified are sufficiently small, neither player can build up a reputation, and inefficient, infinite delay in bargaining occurs. Third, an unjustified player does not want to have the challenge opportunity, because it destroys his or her possibility of pretending to be justified and weakens his or her commitment power; a justified player, on the other hand, strictly prefers to have the challenge opportunity. 
Finally, the implications overturn classic results on one-sided reputation in Myerson (1991). Michigan State University Pre-Matching Gambles    [pdf] Michigan State University Overcoming Borrowing Stigma: The Design of Lending-of-Last-Resort Policies    [pdf] (joint work with Yunzhi Hu) Abstract How should the government effectively provide liquidity to banks during periods of financial distress? During the most recent financial crisis, banks avoided borrowing from the Fed's Discount Window (DW) but bid more in its Term Auction Facility (TAF), although both programs share similar participation requirements. Moreover, some banks paid higher interest rates in the auction than the concurrent discount rate. Using a model with endogenous borrowing stigma, we explain how the combination of the DW and the TAF increased banks' borrowing and willingness to pay for loans from the Fed. Using micro-level data on DW borrowing and TAF bidding from 2007 to 2010, we confirm our theoretical predictions about the pre-borrowing and post-borrowing conditions of banks in the different facilities. Finally, we discuss the design of lending-of-last-resort policies. Tel Aviv University Information as Regulation    [pdf] (joint work with Eilon Solan) Abstract We study dynamic inspection problems in which the regulator faces several agents. Each agent may benefit from violating certain legal rules, yet by doing so faces a penalty if the violation is detected by the regulator. The regulator's inspection resources are constrained, and he cannot inspect all agents simultaneously. The regulator's goal is to minimize the (discounted) number of violations, and he has commitment power. We compare two monitoring structures. Under public monitoring, the inspector publicly announces his observations after each period (i.e., the identity of the inspected agent, if any, and the inspection result); under private monitoring, the inspector conceals his observations. 
We show that announcing his observations may, in fact, hurt the regulator, and we identify conditions under which this occurs. The University of Chicago Perception Bias in Tullock Contest (joint work with Jaimie Lien, Hangcheng Zhao, Jie Zheng) Abstract Players in a contest setting sometimes hold misperceptions about their winning chances. To understand the effects of such psychological biases on competitive behavior and outcomes, we analyze a two-player Tullock contest in which contestants may have perception biases about the effectiveness of their efforts. In the benchmark model, in which only one player has a perception bias, we characterize the unique equilibrium: the other player benefits at the biased player's expense, and both individual effort and total effort are decreasing in the severity of the perception bias, in the direction of either underconfidence or overconfidence. If both players have perception biases, multiple equilibria may exist for underconfident contestants, and the monotonic relationship between bias and effort no longer holds. We additionally depart from the benchmark case by allowing players' valuations of the prize to differ. Our results show a surprising non-monotonic relationship between total effort and the players' valuations. The results contribute to the behavioral contest literature by offering a better understanding of how individuals behave under a psychological bias. Boston University Optimal Contracts with Learning from Bad News    [pdf] Abstract I study a continuous-time principal-agent model in which the agent's success is not directly observable and can only be learned from bad news. A public breakdown arrives at some Poisson rate as long as the agent has not achieved a success. Once a success has been achieved, no breakdowns ever occur. In the optimal contract where the agent observes his own success, the agent exerts full effort until a success or a breakdown. 
The principal makes a bonus payment after the report of success, with some delay, which can be implemented as a stock option. Before the report, the principal initially makes no payments and then offers a constant wage starting from some point. In the optimal contract where the agent does not observe his own success, effort is frontloaded but inefficient. The reward scheme can take three different forms depending on parameter values. Boston University Dynamic Delegation with Adverse Selection    [pdf] Abstract I study a dynamic model of delegated decision making with adverse selection and imperfect monitoring. Each period, a principal may delegate to an agent who has better information. The agent's information is also imperfect, and its accuracy depends on the agent's ability. In the optimal mechanism where the agent's ability is publicly observable, the principal delegates at the beginning and the agent behaves optimally for the principal. Eventually the principal either promises to delegate forever or stops delegating, depending on the history. When the agent's ability is private information, I characterize the optimal mechanism in a two-type model. The principal offers a pooling mechanism if both types are relatively high. If both types are relatively low, the principal optimally separates the different types of the agent by offering different mechanisms. George Mason University Competition with Indivisibilities and Few Traders    [pdf] (joint work with Cesar Martinelli, Jianxin Wang) Abstract We study minimal conditions for competitive behavior with few agents. We adapt the strategic market game of Dubey (1982), Simon (1984), and Benassy (1986) to an indivisible-good environment. We show that the Dubey-Simon-Benassy equivalence of Nash equilibrium outcomes and competitive outcomes holds in this setting. 
Furthermore, we give necessary and sufficient conditions for all Nash equilibrium outcomes with active trading to be competitive, conditions that can be checked directly by observing the set of competitive equilibria. We test our strategic market game in laboratory experiments under minimal environments that do and do not guarantee competitive outcomes of Nash equilibria with active trading, and compare the performance of a static and a dynamic institution. We find that the dynamic institution achieves higher efficiency than the static one and leads to competitive results when all Nash equilibrium outcomes of the market game are competitive. The dynamic institution also allows a monopolist, if present, to extract more surplus than the static institution. Tsinghua University Information Design in Simultaneous All-pay Auction Contests    [pdf] (joint work with Zhonghong Kuang, Hangcheng Zhao, Jie Zheng) Abstract We study the information design problem of the contest organizer in a simultaneous 2-player, 2-type all-pay auction contest environment, where players have limited information about their own and others' types, or valuations of the prize. The contest organizer can send a public message to the contestants about the type distribution to persuade them to exert higher effort. We allow the players' ex-ante symmetric type distributions to be correlated, and the information disclosure policy to take the stochastic approach of Bayesian persuasion, which generalizes the traditional information disclosure policy. The optimal design, whose structure depends on the degree of correlation of players' types, is completely characterized and shown to achieve higher effort than the type-dependent information disclosure policy.
When players' types are private information, if there is a strong positive correlation, the optimal design consists of two posteriors, one representing a perfect positive correlation and the other a positive correlation identified by a cutoff condition; if there is a weak positive correlation or a negative correlation between types, the optimal design consists of two posteriors, one under which both players being high types is impossible and the other representing a positive correlation identified by the cutoff condition. We also consider the case in which only the designer knows players' types and the case in which the type information is asymmetric between the two players. Welfare comparisons are conducted across the different informational setups. Our work is the first full characterization of information design for games with two-sided asymmetric information and an infinite action space. Stanford University Time preference and information acquisition    [pdf] Abstract In this paper, we study how temporal discounting shapes sequential decision making. We analyze the decision time distributions induced by all sequential information acquisition strategies that (1) implement a target information structure and (2) satisfy a constraint on the flow informativeness of signals. The main result is that a decisive Poisson signal creates the most dispersed decision time distribution (in the mean-preserving-spread order), while pure accumulation of information creates the least dispersed. This implies that for a decision maker with a convex discount function, the decisive Poisson signal is the optimal learning strategy. Yale University Information Structure and Price Competition (joint work with Mark Armstrong and Jidong Zhou) Abstract This paper studies how product information possessed by consumers (e.g. from product reviews, platform recommendations, etc.) affects competition between firms.
We consider symmetric firms that supply differentiated products and compete in prices. Before purchase, consumers observe a private signal of their valuations for the various products. For example, the signal might reveal their valuations perfectly or not at all, or only inform them of the ranking of products. We consider a fairly general class of signal structures that induce a symmetric pure-strategy pricing equilibrium, and derive the signal structure within this class that is optimal for firms or for consumers. The key trade-off is that with more detailed product information consumers are better able to buy their preferred products, but at the same time firms have more market power and charge higher prices. The firm-optimal signal structure induces consumers to view the products as sufficiently differentiated, while the consumer-optimal information structure induces choosy consumers to buy their preferred product but pools other consumers by providing little information to them in order to intensify price competition. The firm-optimal information structure often causes no mismatch between consumers and products and so maximizes total welfare, while the consumer-optimal information structure often causes mismatch and does not maximize total welfare. We also derive an upper bound for consumer surplus across all symmetric information structures, which shows that allowing for mixed-strategy pricing equilibria could increase consumer surplus only slightly. New York University Early Selections and Affirmative Actions in the High School Admission Reform in China    [pdf] (joint work with Tong Wang) Abstract In the past decade, high school admissions in China have experienced dramatic changes. One of these changes is the adoption of a Chinese version of affirmative action in the admission procedure.
The Chinese affirmative action does not involve a fixed type-specific quota system, but rather a flexible priority-adjusting school choice method that has gained popularity over time. Specifically, in the admission procedure, certain designated students receive a privilege (a lump-sum score bonus) in addition to their exam scores. Two popular procedures are used to determine who receives this privilege: one involves an early selection before the normal admission procedure, and the other adjusts priorities based on the rank-ordered lists submitted by schools in the normal admission procedure. In this project, we show that both mechanisms have flaws and generate undesirable results. We also propose a strategy-proof and stable mechanism that eliminates the flaws in real-life admission procedures and preserves their flexibility without imposing hard type-specific quotas. Moreover, we combine new administrative data with a preference survey from China to test the existing matching mechanisms. Considerable evidence confirms that several schools take advantage of the existing mechanism and cause significant welfare losses for their students. Boston University Dynamic Coordination with Informational Externalities    [pdf] Abstract I study observational learning in a two-player investment timing game with coordination. Each player is endowed with one opportunity to make a reversible investment, whose value depends on an ex ante unknown state. Each player learns about the return of the investment project by observing a private signal and the actions of the other player. The return of the project is realized at the time when the two players coordinate on joint investment. I characterize the unique symmetric equilibrium of this game. The equilibrium exhibits waves of investment in the initial stage, and delayed investment and disinvestment in the continuation play.
As the precision of the signal distributions increases, the equilibrium distributions of players' posterior beliefs about the state when they invest or disinvest are ranked by first-order stochastic dominance, and the speed of learning increases. NUS Sign equivalent transformation and network games    [pdf] (joint work with Yves Zenou, Junjie Zhou) Abstract Many equilibrium models in economics and operations research can be formulated as Variational Inequalities (VI). In this paper, we introduce an operation on VI called sign equivalent transformation (SET), which preserves the set of solutions on any rectangular domain. As applications, we revisit many classical network games in the economics literature, including games with uni-dimensional or multi-dimensional strategies, games with strategic complements or substitutes, and games with linear or nonlinear best-reply functions. For each of these games, by identifying certain sign equivalent transformations (SET), we are able to transform the original VI problem into a much simpler one. The new VI problem (though not the original one) satisfies an integrability condition, which enables us to reformulate it as a minimization program. As a by-product, we explicitly construct a best-response potential function of the original game, from which various properties of Nash equilibrium, such as existence, uniqueness and stability, can be easily derived. Moreover, we develop and analyze new classes of games played on networks using SET. Lastly, we discuss several applications of SET beyond network games. The Pennsylvania State University Creative Contests --- Theory and Experiment    [pdf] Abstract This paper introduces “creative contests”, in which the criterion for ranking contestants is not fully specified in advance. Examples include architecture contests and logo design contests.
Both pure-strategy and mixed-strategy equilibria can emerge, and they are characterized by solutions to a system of non-linear equations and differential equations. I then consider a case where the organizer has private information about his preference and makes strategic decisions about information disclosure. I find that it is beneficial for him to disclose information when the bidding cost is low and to conceal it when the bidding cost is high. Lastly, I conduct a lab experiment. The results are largely consistent with the model's predictions. Universitat Pompeu Fabra Rationalizability, Observability and Common Knowledge    [pdf] (joint work with Antonio Penta) Abstract We study the strategic impact of players' higher order uncertainty over the observability of actions in general two-player games. More specifically, we consider the space of all belief hierarchies generated by the uncertainty over whether the game will be played as a static game or with perfect information. Over this space, we characterize the correspondence of a solution concept which represents the behavioral implications of Rationality and Common Belief in Rationality (RCBR), where 'rationality' is understood as sequential whenever a player moves second. We show that such a correspondence is generically single-valued, and that its structure supports a robust refinement of rationalizability, which often has very sharp implications.
For instance: (i) in a class of games which includes both zero-sum games with a pure equilibrium and coordination games with a unique efficient equilibrium, RCBR generically ensures efficient equilibrium outcomes; (ii) in a class of games which also includes other well-known families of coordination games, RCBR generically selects components of the Stackelberg profiles; (iii) if common knowledge is maintained that player 2's action is not observable (e.g., because 1 is commonly known to move earlier, etc.), then in a class of games which includes all of the above, RCBR generically selects the equilibrium of the static game most favorable to player 1.
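The University of Chicago abstract above studies perception bias in a two-player Tullock contest. As illustrative background only, the following is a minimal numerical sketch of the standard lottery contest with a one-sided effectiveness bias; the success function x_i/(x_i + x_j), the `bias` parameter, the grid-search approach, and all function names are assumptions of this sketch, not taken from the paper.

```python
# Minimal sketch (not the paper's model): two-player Tullock contest with
# win probability x_i / (x_i + x_j), common prize v, linear effort cost.
# `bias` scales how effective player 1 *believes* her effort to be
# (bias > 1: overconfidence, bias < 1: underconfidence).

def perceived_payoff(x_own, x_other, v, bias):
    """Payoff player 1 believes she earns from effort x_own."""
    total = bias * x_own + x_other
    win_prob = (bias * x_own / total) if total > 0 else 0.5
    return v * win_prob - x_own

def best_response(x_other, v, bias, grid=4000):
    """Grid-search the perceived-payoff-maximizing effort on [0, v]."""
    candidates = (v * i / grid for i in range(grid + 1))
    return max(candidates, key=lambda x: perceived_payoff(x, x_other, v, bias))

def equilibrium(v=1.0, bias=1.0, iters=50):
    """Iterate best responses; player 1 is (possibly) biased, player 2 is not."""
    x1 = x2 = v / 4  # start at the classical symmetric equilibrium effort v/4
    for _ in range(iters):
        x1 = best_response(x2, v, bias)   # biased player's best response
        x2 = best_response(x1, v, 1.0)    # unbiased player's best response
    return x1, x2

x1, x2 = equilibrium(v=1.0, bias=1.0)  # unbiased case: both efforts near v/4
```

With `bias = 1` the iteration recovers the classical symmetric equilibrium effort v/4; moving `bias` away from 1 lowers total effort in this sketch, in line with the comparative static the abstract describes for the one-sided benchmark.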
