# Directory

#### A Brouwer-Tarski fixed-point theorem with applications to game theory

There exist two main approaches to proving the existence of a Nash equilibrium in normal-form games: the first, initiated by Nash's and Glicksberg's theorems, is primarily topological; the second, initiated by Topkis's theorem, relies on order-theoretic methods. The first approach is related to Brouwer's fixed-point theorem, the second to Tarski's fixed-point theorem. In this paper, we propose a new approach that unifies many interesting cases of both theorems. To that end, we establish new n-dimensional fixed-point theorems for possibly discontinuous functions, extending Brouwer's fixed-point theorem and encompassing interesting cases of Tarski's fixed-point theorem. The main idea is to require that at any discontinuity point where the graph of the function "crosses the diagonal", the function satisfies a specific assumption (similar to upward jumps). For this purpose, we classify several types of discontinuities and use them to prove our new fixed-point theorems, with applications in game theory.

#### A CONTINUUM MODEL OF STABLE MATCHING WITH FINITE CAPACITIES

We introduce a unified framework for stable matching, which nests the traditional definition of stable matching in finite markets and the continuum definition of stable matching from Azevedo and Leshno (2016) as special cases. Within this framework, we introduce a novel continuum model, which makes individual-level probabilistic predictions.

Our model always has a unique stable outcome, which can be found using an analog of the Deferred Acceptance algorithm. The crucial difference between our model and that of Azevedo and Leshno (2016) is that they assume that the amount of student interest at each school is deterministic, whereas we assume that it follows a Poisson distribution. As a result, our model accurately predicts the simulated distribution of cutoffs, even for markets with only ten schools and twenty students.

We use our model to generate new insights about the number and quality of matches. When schools are homogeneous, we provide upper and lower bounds on students’ average rank, which match results from Ashlagi et al. (2017) but apply to more general settings. Our model also provides clean analytical expressions for the number of matches in a platform pricing setting considered by Marx and Schummer (2019).
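The key modeling difference above, Poisson rather than deterministic student interest, can be illustrated with a toy simulation (the parameters, function names, and uniform-score setup here are illustrative assumptions, not the paper's model): a school with fixed capacity faces a Poisson number of interested students, so its admission cutoff is a random variable rather than a deterministic point.

```python
import math
import random

def poisson(lam, rng):
    """Sample a Poisson(lam) variate via Knuth's multiplication method."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= threshold:
            return k - 1

def simulated_cutoff(mean_interest, capacity, rng):
    """Admission cutoff when student interest is Poisson, not deterministic."""
    n = poisson(mean_interest, rng)            # random number of applicants
    scores = sorted((rng.random() for _ in range(n)), reverse=True)
    # Cutoff = score of the last admitted student; 0 if undersubscribed.
    return scores[capacity - 1] if n >= capacity else 0.0

rng = random.Random(1)
cutoffs = [simulated_cutoff(mean_interest=30, capacity=20, rng=rng)
           for _ in range(2000)]
mean = sum(cutoffs) / len(cutoffs)
spread = max(cutoffs) - min(cutoffs)
print(round(mean, 2), round(spread, 2))  # cutoffs form a distribution, not a point
```

Under a deterministic-interest model the simulated cutoff would be (essentially) a single number; the positive spread here is what lets a Poisson-based model predict a whole distribution of cutoffs even in small markets.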

#### A core-selecting auction for portfolio’s packages

We introduce the "local-global" approach for a divisible portfolio and perform an equilibrium analysis for two variants of core-selecting auctions. Our main novelty is extending the Nearest-VCG pricing rule to a dynamic two-round setup, mitigating bidders' free-riding incentives and further reducing the sellers' costs. The two-round setup admits an information-revelation mechanism that may offset the "winner's curse", and it is in accord with the existing iterative procedure of combinatorial auctions. With portfolio trading becoming an increasingly important part of investment strategies, our mechanism contributes to the increasing interest in portfolio auction protocols.

#### A Dynamic Theory of Regulatory Capture

Firms often try to influence individuals who, like regulators, are tasked with advising or deciding on behalf of a third party. In a dynamic regulatory setting, we show that a firm may prefer to capture regulators through the promise of a lucrative future job opportunity (the revolving-door channel) rather than through a hidden payment (a bribe). This is because the revolving door publicly indicates the firm's eagerness and commitment to rewarding lenient regulators, which facilitates collusive equilibria. We find that opening the revolving door conditional on the regulator's report is usually more efficient than a blanket ban on post-agency employment.

#### A game theoretic decision forest for feature selection and classification

Classification and feature selection are two of the most intertwined problems in machine learning.

Decision trees are straightforward models that address these problems while also offering the advantage of explainability. However, solutions based on them tend to be tailored to the problem they solve, and their performance depends on the split criterion used.

A game-theoretic decision forest model is proposed to approach both issues. Decision trees in the forest use a splitting mechanism based on the Nash equilibrium concept. A feature importance measure is computed after each tree is built. The selection of features for the next trees is based on the information provided by this measure. To make predictions, training data is aggregated from all leaves that contain the data tested, and logistic regression is further used. Numerical experiments illustrate the efficiency of the approach. A real data example that studies country income groups and world development indicators using the proposed approach is presented.

#### A Generalized Implementation Problem

In this paper, we generalise the classical implementation problem by introducing an exogenous set of social choice functions whose realisations determine the set of feasible outcomes in every state. In Remarks 1 to 3, we provide a set of simple yet dire conclusions regarding the (weak) implementability of rules by means of feasible (and exhaustive) mechanisms. We then introduce the notion of support, and show in Theorems 1 & 2 that a rule is (weakly) supportable if and only if there exists an ‘equivalent’ problem whose set of feasible outcomes is the original exogenous set of social choice functions.

#### A Mediator Approach to Mechanism Design with Limited Commitment

We study information structures in mechanism design problems with limited commitment. In each period, a principal offers a “spot” contract to a privately informed agent without committing to future contracts; the agent responds. In contrast to the classical model with a fixed information structure, we allow for all admissible information structures. We represent the information structure as a fictitious mediator and re-interpret the model as mechanism design by the committed mediator. The mediator collects the agent’s private information and then, in each period, privately recommends players’ actions in an incentive-compatible manner. First, we construct examples to explain why new equilibrium outcomes can arise when considering all information structures. Next, we develop a durable-good monopoly application. In the seller-optimal mechanism, trade dynamics and welfare substantially differ from those in the classical model: the seller offers a discount to the high-valuation buyer in the initial period, followed by the high surplus-extracting price until an endogenous deadline, when the buyer’s information is revealed and extracted. Thus, the Coase conjecture fails. We also discuss the unmediated implementation of the seller-optimal outcome.

#### A Minimum-Offer Contribution Mechanism for the Provision of Public Goods

#### A new Issue for Social Choice

Much of the discussion about social choice tends to focus on the process of the election. We enter the voting booth and choose among the candidates. But is this process good enough? We know from Arrow that when there are three or more candidates, no system of election can be fair or rational. The US system is particularly troubling: the electoral system overweights the smaller states.

Still, almost all the existing work on social choice focuses on comparisons between candidates and not on the *quality* of the candidates. In particular, in November, American voters will be faced with a choice between two candidates whose quality is being doubted. This is a defect of the *process of nomination* and not of the electoral process proper.

Is there a way to ensure that the people who are nominated by the parties are already of high quality, so that the public chooses between two good candidates rather than two poor ones? This is an issue involving social choice and mechanism design: can we design a system whereby nominating candidates of high quality is a dominant game-theoretic strategy for the parties?

We suggest a positive answer to this question.

#### A note on linear complementarity via zero sum two person matrix games

The matrix M of a linear complementarity problem can be viewed as the payoff matrix of a zero-sum two-person game. In this paper, we use a game-theoretic approach to obtain a new sufficient condition on the matrix M under which Lemke's algorithm can be successfully applied to reach a complementary solution or detect infeasibility.

#### A Population’s Feasible Posterior Beliefs

We consider a population of Bayesian agents who share a common prior over some finite state space, where each agent is exposed to some information about the state. We ask which distributions over empirical distributions of posterior beliefs in the population are feasible, and we provide a necessary and sufficient condition for feasibility. We apply this result in several domains. First, we study the problem of maximizing the polarization of beliefs in a population. Second, we provide a characterization of the feasible agent-symmetric product distributions of posteriors. Finally, we study an instance of a private Bayesian persuasion problem and provide a clean formula for the sender's optimal value.
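For a single Bayesian agent, the basic feasibility requirement is the classic martingale (Bayes-plausibility) condition: the posteriors must average back to the prior. The sketch below checks that necessary condition (the function name and example numbers are illustrative; the paper's population-level characterization is considerably richer than this single-agent check):

```python
def bayes_plausible(prior, posteriors, weights, tol=1e-9):
    """Check the martingale condition: the weighted average of the
    posteriors must equal the common prior, state by state."""
    avg = [sum(w * p[i] for w, p in zip(weights, posteriors))
           for i in range(len(prior))]
    return all(abs(a - q) <= tol for a, q in zip(avg, prior))

# Binary-state example: prior 1/2, two equally likely posteriors 0.2 and 0.8.
feasible = bayes_plausible([0.5, 0.5],
                           [[0.2, 0.8], [0.8, 0.2]],
                           [0.5, 0.5])
print(feasible)  # True: 0.5*0.2 + 0.5*0.8 = 0.5 in each state
```

Tilting the weights to, say, (0.9, 0.1) breaks the condition, since the posteriors then average to 0.26 rather than 0.5 in the first state.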

#### A Robust Efficient Dynamic Mechanism

Athey and Segal introduced an efficient budget-balanced mechanism for a dynamic stochastic model with quasilinear payoffs and private values, using the solution concept of perfect Bayesian equilibrium (PBE). However, this implementation is not robust in multiple senses. For example, we will show a generic setup where the efficient strategy profiles can be eliminated by iterative elimination of weakly dominated strategies. Furthermore, this model used strong assumptions about the information of the agents, and the mechanism was not robust to the relaxation of these assumptions. In this paper, we will show a different mechanism that implements efficiency under weaker assumptions and using the stronger solution concept of “efficient Nash equilibrium with guaranteed expected payoffs”.

#### A Signaling Approach to Reputation

We study reputation dynamics in continuous-time signaling games with one-sided incomplete information. Public signals about the informed player’s type and his action are distorted by Brownian noise. The framework encompasses both private and interdependent values. For example, we show that it is suited to study predatory behavior in an oligopoly in which a leader facing a competitive fringe has private information about the state of the demand or about its production cost. We characterize equilibria for any fixed discount rate as a solution to a system of ordinary differential equations and show that all public perfect equilibrium payoffs can be achieved in Markov equilibrium. In contrast to a setup with commitment types, reputational incentives depend on the equilibrium behavior of each type of the informed player. In the duopoly application, predation has longer-lasting effects in the signaling model as learning is slower. Moreover, in contrast to the signaling literature, we do not focus on a specific class of Markov equilibria, such as linear Markov.

#### A Simple Higher-Order Rational Greater-Fool Bubble Model

We propose a simple higher-order rational greater-fool bubble model where the motive for trade is intertemporal consumption smoothing, as in Liu et al. (2023). Our nth-order bubble lasts for n + 2 periods, but only requires n + 2 states of the world. Thus, it is simpler than the model constructed in Conlon (2004), which required n² + 5n + 5 states. Also, the information structures in our model are reminiscent of those in Rubinstein’s (1989) email game.

#### A Theory of Anti-Pandering

We consider a two-period game between an incumbent politician and a voter. In each period, the incumbent faces a choice between a status quo and a risky reform policy. The incumbent can be either competent or incompetent, and the competent incumbent receives a private signal about the reform policy’s outcome. The voter can observe the incumbent’s action but not its outcome. We show that the equilibria exhibit “anti-pandering” behavior: to signal that she is competent, the incumbent chooses the reform action even when its outlook is not promising. The voter’s welfare is nonmonotonic in the amount of electoral benefit: while such anti-pandering behavior prevents an efficient policy choice, it also helps the voter select the competent one. We also analyze the effect of the incumbent’s ideological bias and extend the model to a cheap-talk setting.

#### A Theory of Authoritarian Propaganda

#### A Theory of Charge Bargaining

To secure a guilty plea ahead of a criminal trial, a prosecutor has three discretionary tools to make a plea deal attractive to a defendant: sentence, charge, or fact bargaining. Economic models of plea bargaining reduce each tool to its certainty equivalent on the ultimate sentence from trial. Thus, the models fail to account for the common reality of charge bargains over sentence bargains. This article develops a model of plea bargaining to identify when a prosecutor and defendant would choose to bargain over charges instead of sentences.

#### A Theory of Ghosting

Ghosting is a phenomenon where communication between two parties abruptly stops after one side becomes deliberately unresponsive. This occurs in a variety of settings, but the term entered the mainstream after its usage to describe an important aspect of the online dating experience that, surprisingly, has been sparsely studied. Both online dating platforms and their users report that ghosting is one of the primary drivers hurting user experience and preventing good outcomes. We develop a model of ghosting and study some communication policies that platforms have implemented to deal with this problem. Interestingly, we show that not only are some of these policies ineffective, but that they can lead to worse outcomes than if no structure on communication was put in place to begin with. In a similar vein, we show that platforms that improve their technology in order to help users find better matches can paradoxically make users worse off. We conclude by characterizing the first-best outcome and showing that while this outcome is unattainable, a simple policy can lead to strictly higher welfare than unstructured communication.

#### A theory of media bias and disinformation

We build a model of media bias in which consumers with heterogeneous beliefs do not know whether the media firm (i) is biased and (ii) has received an informative signal (“news”). The firm submits a report to consumers, who may in turn invest in verification and leave if supposed news is false. We distinguish two types of disinformation: distortion and fabrication of news. We first show that there is fabricated news in any equilibrium, while distorted news is only present when future customer revenue is not important relative to the biased firm’s agenda. Second, comparative statics reveal that, contrary to common perception, increasing the polarization of beliefs may decrease the prevalence of fabricated news.

#### A Theory of Stable Market Segmentations

A strategic tension between consumers in a monopolistic market arises when many high-value consumers want to pool with a few lower-value consumers in order to obtain low prices from the seller. We study the interaction between consumers and the resulting market segmentation into consumer groups as the outcome of a cooperative game between the consumers. We introduce two new solution concepts, the weakened core and stability, which coincide with the core whenever it is nonempty. We show that these concepts are in fact equivalent and non-empty, and are characterized by efficiency and saturation. A segmentation is saturated if shifting consumers from a segment with a higher price to a segment with a lower price leads the seller to optimally increase the lower price. We show that stable segmentations that maximize average consumer surplus (across all segmentations) always exist.

#### A Three-State Rational Greater-Fool Bubble Model With Intertemporal Consumption Smoothing

We construct a simple greater-fool bubble model with rational agents, where the motive for trade is intertemporal consumption smoothing. This yields an easy-to-understand bubble model with only three states of the world, instead of the five required in previous research, and may therefore provide a convenient point of departure for future work on greater-fool bubbles. Our model suggests that such bubbles are more likely when there are asset sellers (e.g., innovators) with profitable investment opportunities, but little wealth. We show that an anti-bubble policy can reduce the welfare of even the greater fools it’s supposed to protect, if it interferes with consumption smoothing by those agents in earlier periods of their lives.

#### Adverse Selection and Information Acquisition

We study screening with endogenous information acquisition. A monopolist offers a menu of quality-differentiated products. After observing the offer, a consumer can, at a cost, learn which product is right for them. We characterize the optimal menu and how it is distorted: typically, all types receive lower-than-efficient quality, and distortions are more intense than under standard screening, even when no information is acquired in equilibrium. The additional distortion is due to the *threat* from the buyer to obtain information that is not optimal for the seller. This threat interacts with the division of surplus: profits are U-shaped in the level of information costs and, when such costs are low, the consumer may be better off than when information is free. Among several applications, we show (i) transparency policies may harm consumers by lowering their strategic advantage, and (ii) an analyst who empirically measures distortions ignoring information acquisition could severely underestimate the level of inefficiency in the market.

#### Affirmative Action in India: Restricted Strategy Space, Complex Constraints, and Direct Mechanism Design

We document evidence of the largest-scale strategy-space restriction in the market design literature. Each year, tens of millions of people in India apply to government jobs and public schools subject to a comprehensive affirmative action sanctioned by federal and state laws. Individuals eligible for affirmative action (AA) positions care about the type of position they are assigned (affirmative action or open-category) for several reasons, such as the stigma associated with receiving AA positions. However, preferences are only elicited over institutions and completely ignore position types. We unfold the severe implementation issues caused by the strategy-space restrictions. Legally mandated vertical and horizontal reservations and de-reservations present design challenges for their joint implementation. To offer satisfactory solutions, we introduce an extensive many-to-one matching framework with generalized lexicographic choices that nests prominent matching models in the literature. We formulate legal requirements and key policy goals as formal axioms and propose priority designs for institutions that satisfy them. We propose several direct mechanisms for centralized clearinghouses and show that they satisfy desirable axioms.

#### Agenda Control in Real Time

This paper models legislative decision-making when an agenda setter proposes amendments in real time. In our setting, voters are sophisticated and the agenda setter cannot commit to her future proposals. Nevertheless, the agenda setter obtains her favorite policy in every equilibrium regardless of the initial status quo. Central to our results is a new condition on preferences, *manipulability*, that holds in many rich policy spaces, including spatial settings and distribution problems. Our results overturn the conventional wisdom that voter sophistication alone constrains an agenda setter’s power.

#### Aggregating Strategic Information

#### Aggregative Efficiency of Bayesian Learning in Networks

When individuals in a social network learn about an unknown state from private signals and neighbors’ actions, the network structure often causes information loss. We consider rational agents and Gaussian signals in the canonical sequential social learning problem and ask how the network changes the efficiency of signal aggregation. Rational actions in our model are a log-linear function of observations and admit a signal-counting interpretation of accuracy. This generates a fine-grained ranking of networks based on their aggregative efficiency index. Networks where agents observe multiple neighbors but not their common predecessors confound information, and we show confounding can make learning very inefficient. In a class of networks where agents move in generations and observe the previous generation, aggregative efficiency is a simple function of network parameters: increasing in observations and decreasing in confounding. Generations after the first contribute very little additional information due to confounding, even when generations are arbitrarily large.

#### AI in Journalism: An Agency-Theoretic Analysis

Do accountability measures always ensure desired outcomes? We consider this question in light of journalists’ use of AI tools. News outlets’ editorial oversight and AI usage transparency are appealing accountability measures, but their efficacy in promoting desired (i.e., assistive) AI use is unclear. As news outlets increasingly screen journalists for AI-tool abilities and produce AI-generated content for consumers, AI use in journalism presents adverse selection and moral hazard concerns. Therefore, we propose a novel principal-agent framework considering news outlets’ relationships with consumers and journalists and analyze incentivizing assistive AI use through oversight and transparency. With a pure adverse selection base model, we show that editorial oversight dominates AI ability screening when deciding on effort-based incentives for journalists. Conversely, news outlets’ AI use transparency with readers is ineffective in encouraging consumption from news outlets using AI assistively. Our base model insights hold even under costly oversight, realizable property rights, occasional shirking, and responsible AI use. Then, using our framework’s pure moral hazard version with imperfect effort observability, we show that incentivizing assistive AI usage aligns with our base model insights. Finally, our study presents a trust vs. accountability dilemma that has significant implications for journalism’s AI regulation discussions.

#### Algebraic Properties of Blackwell’s Order and A Cardinal Measure of Informativeness

I establish a translation invariance property of the Blackwell order over experiments for dichotomies, show that garbling experiments brings them closer together, and use these facts to define a cardinal measure of informativeness. Experiment A is inf-norm more informative (INMI) than experiment B if the infinity norm of the difference between a perfectly informative structure and A is less than the corresponding difference for B. The better experiment is “closer” to the fully revealing experiment; distance from the identity matrix is interpreted as a measure of informativeness. This measure coincides with Blackwell’s order whenever possible, is complete, order invariant, and prior-independent, making it an attractive and computationally simple extension of the Blackwell order to economic contexts.

#### Algorithmic Collusion and Price Discrimination: The Over-Usage of Data

As firms’ pricing strategies increasingly rely on algorithms, two concerns have received much attention: algorithmic tacit collusion and price discrimination. This paper investigates the interaction between these two issues through simulations. In each period, a new buyer arrives with independently and identically distributed willingness to pay (WTP), and each firm, observing private signals about WTP, adopts Q-learning algorithms to set prices. We document two novel mechanisms that lead to collusive outcomes. Under asymmetric information, the algorithm with information advantage adopts a *Bait-and-Restrained-Exploit* strategy, surrendering profits on some signals by setting higher prices, while exploiting limited profits on the remaining signals by setting much lower prices. Under a symmetric information structure, competition on some signals facilitates convergence to supra-competitive prices on the remaining signals. Algorithms tend to collude more on signals with higher expected WTP. Both uncertainty and the lack of correlated signals exacerbate the degree of collusion, thereby reducing both consumer surplus and social welfare. A key implication is that the over-usage of data, both payoff-relevant and non-relevant, by AIs in competitive contexts will reduce the degree of collusion and consequently lead to a decline in industry profits.
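A heavily stripped-down sketch of the kind of simulation described above is given below. Everything in it is a toy assumption rather than the paper's setup: a fixed price grid, a single buyer with known willingness to pay, Bertrand-style demand splitting, and myopic (discount factor zero) stateless Q-learners with epsilon-greedy exploration. The paper's collusive mechanisms rely on private signals and forward-looking algorithms, which this sketch deliberately omits.

```python
import random

PRICES = [1, 2, 3, 4, 5]  # discrete price grid (toy assumption)
WTP = 6                   # buyer's willingness to pay (toy assumption)

def profit(p_own, p_rival):
    """Bertrand-style stage profit: the buyer purchases from the cheaper
    firm (splitting ties) whenever the price does not exceed WTP."""
    if p_own > WTP:
        return 0.0
    if p_own < p_rival:
        return float(p_own)
    if p_own == p_rival:
        return p_own / 2
    return 0.0

def run(episodes=20000, alpha=0.1, eps=0.1, rng=None):
    """Two stateless epsilon-greedy Q-learners repeatedly set prices."""
    rng = rng or random.Random(0)
    q = [[0.0] * len(PRICES) for _ in range(2)]  # one Q-vector per firm
    for _ in range(episodes):
        acts = []
        for f in range(2):
            if rng.random() < eps:               # explore
                acts.append(rng.randrange(len(PRICES)))
            else:                                # exploit current estimate
                acts.append(max(range(len(PRICES)), key=lambda a: q[f][a]))
        for f in range(2):
            r = profit(PRICES[acts[f]], PRICES[acts[1 - f]])
            # Myopic update (gamma = 0): no continuation-value term.
            q[f][acts[f]] += alpha * (r - q[f][acts[f]])
    return [PRICES[max(range(len(PRICES)), key=lambda a: qf[a])] for qf in q]

print(run())  # greedy prices of both firms after learning
```

With myopic learners the usual undercutting logic tends to push greedy prices toward the competitive end of the grid; sustaining supra-competitive prices, as in the paper, requires patience (a positive discount factor) and richer, signal-contingent strategies.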

#### Algorithmic Robust Forecast Aggregation

Forecast aggregation combines the predictions of multiple forecasters to improve accuracy. However, the lack of knowledge about forecasters’ information structure hinders optimal aggregation. Given a family of information structures, robust forecast aggregation aims to find the aggregator with minimal worst-case regret compared to the omniscient aggregator. Previous approaches for robust forecast aggregation rely on heuristic observations and parameter tuning. We propose an algorithmic framework for robust forecast aggregation. Our framework provides efficient approximation schemes for general information aggregation with a finite family of possible information structures. In the setting considered by Arieli et al. (2018) where two agents receive independent signals conditioned on a binary state, our framework also provides efficient approximation schemes by imposing Lipschitz conditions on the aggregator or discrete conditions on agents’ reports. Numerical experiments demonstrate the effectiveness of our method by providing a nearly optimal aggregator in the setting considered by Arieli et al. (2018).

#### ALL-WAY STOPS

We show that tweaking a familiar crossroad-traffic-control practice in the United States has the potential to significantly reduce energy consumption and carbon emissions (about 0.5 months of gas consumption every year) while NOT jeopardizing road safety: instead of erecting one stop sign in each direction (e.g., four stop signs at a four-way crossroads), erect one fewer sign at each crossroads (e.g., only three stop signs at a four-way crossroads). The simpler mechanism preserves road safety because it permits all drivers’ compliance with traffic rules as a Nash equilibrium, while the existing mechanism does not. Hence, the new mechanism is also self-enforcing, leading to savings in police expenditures in addition to those in carbon/pollutant emissions, drivers’ time, and infrastructure costs.

#### Alpha-tolerant Nash equilibria in network games

#### An Axiomatic Model of Cognitive Dissonance

This paper proposes an intrapersonal game to model decision-makers (DMs) who distort beliefs to alleviate cognitive dissonance stemming from past choices. The DM makes observable decisions in periods 1 and 2, while optimally manipulating beliefs during an interim stage between them. The subgame perfect Nash equilibria of this game are characterized by tractable closed-form solutions. By characterizing the equilibria, I identify axioms under which the DM behaves “as if” she optimally distorts her belief in the interim stage. The key axiom relaxes the standard dynamic consistency condition in a way that allows behaviors akin to classic works on cognitive dissonance, including Festinger (1957) and Akerlof and Dickens (1982).

#### An Economic Approach to Prior Free Spatial Search

We propose a model of sequential spatial search with learning. There is a mapping from a space of technologies (or products) to qualities that is unknown to the searcher. The searcher can learn various points on this mapping through costly experimentation. She cares both about the technology that she adopts as well as the best one available, as would a firm in an innovation race or an online shopper concerned with missing a good deal. She does not have a prior over mappings but knows only that neighboring technologies in attribute space are similar in quality. We characterize optimal search strategies when the searcher worries about worst-case mappings at every step of the way. These are mappings that trigger wild-goose chases: excessive search with relatively poor discoveries to show for it. We derive comparative statics that match patterns observed in empirical studies on spatial search. Finally, we apply the results to a problem of search space design faced by online platforms: building a network of related product recommendations to facilitate search.

#### An Economic Model of Consensus on Distributed Ledgers

In recent years, the designs of many new blockchain applications have been inspired by the Byzantine fault tolerance (BFT) problem. While traditional BFT protocols assume that most system nodes are honest (in that they follow the protocol), we recognize that blockchains are deployed in environments where nodes are subject to strategic incentives. This paper develops an economic framework for analyzing such cases. Specifically, we assume that 1) non-Byzantine nodes are rational, so we explicitly study their incentives when participating in a BFT consensus process; 2) non-Byzantine nodes are ambiguity averse, and specifically, Knightian uncertain about Byzantine actions; and 3) decisions/inferences are all based on local information. The consensus game then resembles one with preplay “cheap talk” communications. We characterize all equilibria, some of which feature rational leaders withholding messages from some nodes in order to achieve consensus. These findings enrich those from traditional BFT algorithms, where an honest leader always sends messages to all nodes. We study how the progress of communication technology (i.e., potential message losses) affects the equilibrium consensus outcome.

#### An Equilibrium Model of the First-Price Auction with Strategic Uncertainty: Theory and Empirics

#### An Evaluation of Agency in Game-Theoretic Models of Integrative Divorce-Bargaining

Many divorce clients hire legal representation, and because divorce bargaining typically presents integrative potential, lawyers may navigate the integrative process on their clients’ behalf. However, familiar agency problems suggest that lawyers may not always act in their clients’ best interests. Accordingly, this paper investigates the effects of agency on the likelihood of divorce clients attaining integrative outcomes. A novel model is posited, the IDBG (Integrative Divorce-Bargaining Game), a game played by ‘lawyers’ on behalf of ‘clients’. Comparative statics are utilised to analyse how varying divorce parameters affect the lawyers’ ‘timing’ of settlements: specifically, how parameters including the lawyer’s reputational capital, fee structures, and the prospect of trial promote over- or under-investment relative to the integrative, client-welfare-maximising outcome. The model predicts overinvestment and corroborates the systematic settlement delays observed in practice. The model’s main implications concern the limitations of a lawyer’s reputational capital in facilitating client welfare and signalling interest alignment. The evolutionary ramifications for the market for divorce law are discussed.

#### An Evolutionary Approach to Feature Selection and Classification based on an Equilibrium Decision Tree

The feature selection problem has become a key undertaking within machine learning. For classification problems, it is known to reduce the computational complexity of parameter estimation, but it also adds an important contribution to the explainability aspects of the results. An evolution strategy for feature selection is proposed in this paper. Feature weights are evolved with decision trees that use the Nash equilibrium concept to split node data. Trees are maintained until the variation in probabilities induced by feature weights stagnates. Predictions are made based on the information provided by all the trees. Numerical experiments illustrate the performance of the approach compared to other classification methods.
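As a rough illustration of the evolutionary loop described above, the sketch below runs a (1+1) evolution strategy over feature weights. For simplicity the fitness function is the training accuracy of a nearest-centroid classifier on the weighted features — a deliberately simple stand-in for the paper’s Nash-equilibrium decision trees — and all names and parameters are illustrative assumptions, not the authors’ algorithm.

```python
import random

def evolve_feature_weights(X, y, generations=200, sigma=0.3, seed=0):
    """(1+1) evolution strategy over feature weights.  Fitness is the
    training accuracy of a nearest-centroid classifier on weighted
    features (a simple stand-in for the paper's equilibrium decision
    trees).  Returns (weights, best_accuracy)."""
    rng = random.Random(seed)
    n_feat = len(X[0])

    # per-class centroids, computed once in the unweighted space
    groups = {}
    for xi, yi in zip(X, y):
        groups.setdefault(yi, []).append(xi)
    cents = {c: [sum(r[j] for r in rows) / len(rows) for j in range(n_feat)]
             for c, rows in groups.items()}

    def accuracy(w):
        def predict(xi):
            # nearest centroid under the weighted squared distance
            return min(cents, key=lambda c: sum(
                (w[j] * (xi[j] - cents[c][j])) ** 2 for j in range(n_feat)))
        return sum(predict(xi) == yi for xi, yi in zip(X, y)) / len(y)

    w, best = [1.0] * n_feat, accuracy([1.0] * n_feat)
    for _ in range(generations):
        # mutate every weight, clipping at zero
        child = [max(0.0, wj + rng.gauss(0.0, sigma)) for wj in w]
        fit = accuracy(child)
        if fit >= best:  # accept ties so weights can drift along plateaus
            w, best = child, fit
    return w, best
```

On data where one feature separates the classes and another is noise, the accepted mutations tend to shrink the noise feature’s weight, mirroring the stagnation-based stopping idea in the abstract.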

#### An Experimental Investigation of Global Games with Strategic Substitutes

We experimentally investigate behavior in a global game where actions are strategic substitutes. Following the theoretical foundations of Harrison and Jara-Moroni (2021), we focus on a 3-agent, binary-action game where payoffs depend on some underlying value of a state fundamental. For some values of the state, the game predicts multiple equilibria. Furthermore, payoffs are heterogeneous across agents, which results in an ordering of agent “types.” The global game equilibrium selection results in a unique equilibrium in which agents adopt threshold strategies, with thresholds following the order of types. Our experiment provides some support for the theory: 2/3 of the subjects adopt threshold strategies with few mistakes. While the estimated thresholds deviate from point predictions, the comparative statics still hold. Finally, a majority of outcomes correspond to the global games equilibrium even in regions of multiplicity.

#### An Extension of the Shapley Value for Partially Defined Cooperative Games in Partition Function Form

In this paper we consider partition function form games when not all the coalitions are feasible. In addition, we extend and characterize some solution concepts proposed for partition function form games in this more general setting. Among these solutions, we consider the value proposed by Myerson (1977).

#### An Iterative Approach to Rationalizable Implementation

The paper characterizes the class of two-player social choice functions implementable in rationalizable strategies under complete information.

#### An LQG Game on Networks

We study games of incomplete information characterized by two structural assumptions: (i) linear best responses stemming from quadratic payoff functions, and (ii) Gaussian uncertainty. This class of games—termed LQG games—is useful for studying various economic situations where both fundamental and strategic uncertainty are present. We provide a general framework of LQG games that can encode arbitrary payoff and information networks over agents. Novel proof techniques are developed to establish the generic existence, uniqueness, and continuity of equilibrium by utilizing Fredholm’s theory of integral operators. In an application, we study the social value of public information while allowing for correlation among private signals, and show that the denseness of an information network determines how effectively public information can contribute to social welfare.

#### Approximating an Absorbing Game Using Collections of Games

#### Arena Games

In an arena game, two teams strategize over the order in which their members participate in pairwise matches. An underlying bitournament determines the corresponding winners, with the first match being between the two players on top of the two orders. After each match, the loser is removed and the winner stays in the arena to play against the successor of the loser. The procedure continues until one of the teams has no player left. Local pure Nash equilibria are shown to always exist, and a full characterization of the set of pure Nash equilibria is provided when each team consists of at most three players. In general, the absence of a Hamiltonian cycle in the bitournament is necessary, and its acyclicity sufficient, for the existence of pure Nash equilibria.

#### Arrow’s Theorem, May’s Axioms, and Borda’s Rule

#### Asymmetric Equilibria in Symmetric Multiplayer Prisoners’ Dilemma Supergames

We propose a finite automaton-style solution concept for supergames. In our model, we define an equilibrium to be a cycle of state switches and a supergame to be an infinite walk on states of a finite stage game. We show that if the stage game is locally non-cooperative, and the utility function is monotonically decreasing as the number of defecting agents increases, the symmetric multiagent prisoners’ dilemma supergame must contain one symmetric equilibrium and can contain asymmetric equilibria.

#### Asymptotic Learning with Ambiguous Information

We study asymptotic learning when the decision maker is ambiguous about the precision of her information sources. She aims to estimate a state and evaluates outcomes according to the worst case scenario. Under prior-by-prior updating, ambiguity regarding information sources induces ambiguity about the state. We show that this induced ambiguity does not vanish even as the number of information sources grows indefinitely, and characterize the limit set of posteriors. The decision maker’s asymptotic estimate of the state is generically incorrect. We consider several applications. Among them, (i) we provide a foundation for disagreement among agents with access to the same large dataset; (ii) we show that a small amount of ambiguity can exacerbate the effect of model misspecification on learning; and (iii) we analyze a setting in which the decision maker learns from observing others’ actions.

#### Asynchronous DeGroot Dynamics

We analyze an asynchronous variation of the DeGroot dynamics. In this model, we study the convergence of opinions, the consensus of the limiting opinions, and the ability to aggregate information (“wisdom of the crowd”). The results are obtained for both finite and infinite networks. We provide estimates of the speed of convergence in finite networks. A fragmentation process of independent interest is found to be closely related to the asynchronous DeGroot dynamics, and this relation is the basis of our analysis.
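The asynchronous update itself is easy to sketch. In the minimal version below — an illustrative assumption rather than the paper’s general model — a single uniformly chosen agent wakes up at each step and replaces her opinion with the weighted average prescribed by her row of a trust matrix `W`:

```python
import random

def asynchronous_degroot(W, x0, steps, seed=0):
    """One sketch of asynchronous DeGroot dynamics: at each step a single,
    uniformly chosen agent replaces her opinion with the weighted average
    of current opinions, using her row of the trust matrix W (assumed
    row-stochastic, given as a list of lists)."""
    rng = random.Random(seed)
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        i = rng.randrange(n)  # the agent who wakes up this step
        x[i] = sum(W[i][j] * x[j] for j in range(n))
    return x
```

On a strongly connected finite network this process drives opinions toward a (random) consensus inside the convex hull of the initial opinions; the uniform wake-up scheme is just one natural asynchronous schedule.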

#### Auctions between Regret-Minimizing Agents

We analyze a scenario in which software agents implemented as regret-minimizing algorithms engage in a repeated auction on behalf of their users. We study first-price and second-price auctions, as well as their generalized versions (e.g., as those used for ad auctions). Using both theoretical analysis and simulations, we show that, surprisingly, in second-price auctions the players have incentives to misreport their true valuations to their own learning agents, while in first-price auctions it is a dominant strategy for all players to truthfully report their valuations to their agents.

The talk will also include results from a companion paper published at NeurIPS 2022 on the incentives of users of learning algorithms in general game-theoretic contexts, beyond auctions (joint work with Noam Nisan, https://arxiv.org/pdf/2112.07640).
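For intuition, here is a minimal sketch of one standard regret-minimizing algorithm (Hedge, i.e., multiplicative weights) that such a software agent might run in a repeated second-price auction. The bid grid, the fixed opponent bid, and the full-feedback assumption are simplifications for illustration, and the sketch does not model the paper’s central object — the user’s report of her value to her own agent:

```python
import math
import random

def hedge_bidder(bids, utility, rounds, eta=0.1, seed=0):
    """A Hedge (multiplicative-weights) learner over a finite grid of bids,
    a textbook regret-minimizing algorithm.  With full feedback it
    reweights every bid by the exponential of its per-round utility.
    Returns the final probability distribution over bids."""
    rng = random.Random(seed)
    weights = [1.0] * len(bids)
    for _ in range(rounds):
        total = sum(weights)
        probs = [w / total for w in weights]
        _played = rng.choices(bids, probs)[0]  # the bid actually submitted
        for k, b in enumerate(bids):
            weights[k] *= math.exp(eta * utility(b))
    total = sum(weights)
    return [w / total for w in weights]

def second_price_utility(value, opponent_bid):
    """Per-round utility against a fixed opponent in a second-price
    auction: winning costs the opponent's bid, losing pays nothing."""
    return lambda b: (value - opponent_bid) if b > opponent_bid else 0.0
```

Against a fixed opponent bid below the agent’s value, the weights concentrate on the winning bids, which all earn the same second-price payoff — one way to see why the learner itself has no pull toward any particular winning bid.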

#### Auctions for a regulated monopolist

A monopolist with a convex production technology must elicit consumer demand in order to determine price and quantity, and a regulator requires that efficient and fair allocations be implemented in dominant strategies. We characterize the class of mechanisms available to the regulated monopolist. Among those that moreover guarantee ex-post voluntary participation on both sides of the market, we find that (i) in line with previous results from the literature, the VCG mechanisms are optimal for the producer, and (ii) a novel class of mechanisms is optimal for consumers. This latter class involves sometimes distributing part of the production profit to consumers regardless of whether they purchase anything.

#### Audience Design

Does a sender benefit from communicating with an audience in groups rather than publicly or privately? In a cheap-talk game, I show that the sender can gain credibility by ensuring diversity of opinions in a group so that her incentive to lie to a subset of the group is offset by her incentive to be truthful to the rest. The sender’s optimal grouping, or partition, of the audience maximises her benefit from gaining credibility from each group. Public or private communications are not necessarily optimal when the sender can benefit from differently diverse groups of receivers. When the sender values each receiver equally and can gain credibility only by ensuring diversity of opinions in her audience, I show that it is optimal for the sender to separate those who need to be persuaded from those who do not. I also derive further properties of optimal communication when receivers are “single-minded,” and demonstrate the role of diversity in shaping optimal communication.

#### Axiomatic measures of assortative matching

An active debate concerns the direction of change in assortative matching on education in the US, because different measures yield different conclusions. To identify appropriate measures of assortative matching, I adopt an axiomatic approach: Start with the properties a measure should satisfy and identify the measures that satisfy them. I find that normalized trace (the proportion of pairs of like types) and the aggregate likelihood ratio (Eika, Mogstad and Zafar, 2019) satisfy my basic equivalence and monotonicity axioms and are uniquely characterized by axioms with cardinal interpretations. They also naturally extend to markets with singles and one-sided markets. The relation induced by the odds ratio is the unique total order on two-type markets that satisfies marginal independence, but it yields ordinal interpretation only and does not have a multi-type extension (Chiappori, Costa-Dias, and Meghir, 2024). For multi-type markets, I show the impossibility result whereby there is no total order that satisfies monotonicity and the additional requirement of robustness to categorization (i.e., the assortativity order holds regardless of the categorization of types). I apply the measures to shed light on the evolution of educational assortativity in the US and other countries.
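The two measures singled out by the axioms can be computed directly from a contingency table of matched pairs. The functions below follow the verbal definitions in the abstract — the normalized trace is the proportion of like-type pairs, and the aggregate likelihood ratio is that proportion relative to what random matching with the same marginals would produce; the exact definition in Eika, Mogstad and Zafar (2019) may differ in detail:

```python
def normalized_trace(counts):
    """Proportion of matched pairs whose partners share the same type.
    counts[i][j] is the number of couples with one-side type i and
    other-side type j."""
    total = sum(sum(row) for row in counts)
    return sum(counts[i][i] for i in range(len(counts))) / total

def aggregate_likelihood_ratio(counts):
    """Share of like-type pairs divided by the share expected under
    random matching with the same marginals (one common reading of the
    Eika, Mogstad and Zafar (2019) aggregate measure)."""
    total = sum(sum(row) for row in counts)
    n = len(counts)
    row_m = [sum(row) / total for row in counts]
    col_m = [sum(counts[i][j] for i in range(n)) / total
             for j in range(len(counts[0]))]
    observed = sum(counts[i][i] for i in range(n)) / total
    expected = sum(row_m[i] * col_m[i] for i in range(n))
    return observed / expected
```

Perfectly assortative matching on two equally sized types gives a normalized trace of 1 and an aggregate likelihood ratio of 2, while matching at random gives 0.5 and 1 respectively.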

#### Axioms for optimal rules and fair division rules in the multiple-partners job market

We consider a job market where multiple partnerships are allowed (Sotomayor, 1992); that is, each firm may hire several workers and each worker may be matched to several firms, up to a given quota. We show that the firms-optimal stable rules, which given a valuation profile select an optimal matching and salaries according to the firms-optimal stable payoffs, are neither valuation monotonic nor pairwise monotonic, in contrast to the simple assignment game, where each agent establishes at most one partnership. However, if a firm decreases all its valuations by a constant amount up to a given threshold, then this firm decreases its payoff by the same amount in the firms-optimal stable rule. This firm-covariance characterizes the firms-optimal stable rules among all stable rules. Moreover, although the firms-optimal stable rules are not strategy-proof, they cannot be manipulated by a constant over-report of a firm’s valuations. Parallel results are obtained for the workers-optimal stable rules. Finally, we show that a known characterization of the fair-division rules on the domain of simple assignment games, by means of a grand-valuation fairness property and a consistency property, also characterizes these rules on the domain of multiple-partners job markets.

#### Bargaining and the timing of information acquisition

We consider an ultimatum game in which the value of the object being sold to the buyer can be either high or low. The seller knows what the value is but the buyer does not. The value of the object to the seller is zero. We introduce the option for the buyer to acquire information before or after the offer, at a low cost. This information either reveals the value is high or provides no information. As the cost of information vanishes, in all Pareto-undominated equilibria, the buyer gets all the surplus although the option is never used.

#### Bargaining as a Coordination Problem

Two players bargain over division of a pie. As in Rubinstein (1982), the players take turns making offers to each other. The novelty is that the players negotiate through non-strategic bargaining agents. At the beginning of each negotiation round, the players privately instruct their agents about what to do at the negotiation table: The proposing player tells his agent which offer to make; the responding player submits to her agent a limit order – the minimal share of the pie that the agent should accept.

For this game, I study outcomes that correspond to two different solution concepts: iterative admissibility and sequential equilibrium. The main result is that in the patient limit, the set of payoffs that correspond to outcomes that survive an arbitrarily large but finite number of rounds of iterative admissibility approaches a straight segment between the disagreement point and the Nash solution. Thus, under iterative admissibility, the bargaining problem resembles a problem of pure coordination (cf. Schelling (1960)). In contrast, the game has a plethora of sequential equilibria. In particular, payoffs that can be obtained in equilibria in insistent strategies (cf. Myerson (1991)) comprise the Pareto frontier of the set of feasible payoffs. Thus, when the players focus on insistent strategies, the bargaining resembles a problem of pure conflict. This example suggests that in bargaining, the particular logic that players employ may determine whether the problem is one of coordination or of conflict.

#### Bargaining Failures Due to Network Effects

Complete information bilateral bargaining games predict agreement. The present paper demonstrates that, despite complete information, bilateral bargaining may fail due to network effects. The framework is a continuous-time, infinite-horizon, complete and perfect information game on a network, where connected players have regular opportunities to bargain. The solution concept used throughout most of the paper is that of stationary acceptance-based equilibrium, a subset of subgame-perfect equilibria but a superset of Markov perfect equilibria. The two-player game has a unique stationary acceptance-based equilibrium, in which all meetings end in agreement. However, in connected networks with at least three players, multiple equilibria arise, provided that opportunities arrive sufficiently frequently. In particular, for any pair of connected players, there exist stationary acceptance-based equilibria in which they fail to reach agreement.

#### Bargaining Foundations for the Outside Option Principle

#### Bargaining in Non-Stationary Networks

Dealers in over-the-counter markets bargain over the profit from executing an investor’s order with other dealers to whom they are connected via the inter-dealer network. Investors’ orders arrive randomly. I study a model of bargaining in continuous time in networks, where profitable opportunities arise randomly to agents, who then contact a neighbor to split the surplus. If a pair of agents fail to reach an agreement, their link is eliminated from the network. This leads to non-stationarity of the network. I prove the existence of a Markov perfect equilibrium using an inductive argument. Players’ bargaining power in an equilibrium depends on their continuation values in the sub-networks reached when some of their links are eliminated. In particular, the relative bargaining power between a pair of connected agents depends on the difference in the change in their continuation values between the current network and the sub-network without their link.

Under certain conditions, agreement in all bargaining meetings is an equilibrium. These cases are important because the network remains unchanged despite the threat of severance. I prove that agreement in all meetings is an equilibrium if and only if the cost of maintaining a connection is lower than a network-specific threshold. Comparison of thresholds across different networks provides insight into their relative stability. I show that star networks are more stable than lines and polygons. Inter-dealer networks in OTC markets exhibit a core-periphery structure, which includes star networks.

#### Bargaining with Learning of a Varying Type

A firm tries to sell a good to potential consumers. It may take time to learn about a consumer’s type through tracking on third-party platforms. An important concern for firms and platforms is that consumers’ preferences may change over time, so the value of acquired information may depreciate. How do varying preferences affect the firm’s dynamic pricing behavior and consumers’ acceptance of price offers? I build a continuous-time bargaining model with one-sided incomplete information in which the buyer’s private type is publicly revealed through a Brownian motion and the binary type changes via a Poisson process. In equilibrium, the firm starts with high prices that are accepted only by the high type with positive probability, and once the belief drifts down past a certain threshold, the firm offers the lowest price, which is immediately accepted by both types. The possibility of a type change has two effects: a level effect that leaves low-value consumers less likely to accept the same offer, and a slope effect whereby the firm screens high-value consumers faster. Hence type change benefits both types of consumers at the cost of the firm. If the firm is restricted to posting the same price and can use the acquired information to select consumers, it is better off than under a flexible price scheme. The continuation bargaining process is resolved more slowly under a fixed price than under a flexible price, which makes consumers more willing to accept the same offer and avoid longer delay. This expectation about future bargaining benefits the firm at the cost of the low-type consumer. Lastly, the firm benefits from knowing when the type changes, even if the new type is an independent new draw and the firm does not observe the realization.

#### Bargaining with Partial Commitment

This paper discusses how the power of the commitment device influences the interaction between a long-lived principal and an agent. The principal wants to sell a durable product to a buyer with a private value. The principal can propose a price plan for all future periods at the beginning of the game, but in each subsequent period there is an exogenous probability that the contract is voided. We find that, no matter how weak the commitment power is, the principal always proposes a non-decreasing price plan such that no trade happens in the succeeding periods within a contract; trade only happens when a new contract is proposed. We find a non-monotonic effect of commitment power on social welfare, suggesting that partial commitment can be worse than no long-term commitment at all.

#### Beckmann’s approach to multi-item multi-bidder auctions

We consider the problem of revenue-maximizing Bayesian auction design with several i.i.d. bidders and several items. We show that the auction-design problem can be reduced to the problem of continuous optimal transportation introduced by Beckmann. We establish the strong duality between the two problems and demonstrate the existence of solutions. We then develop a new numerical approximation scheme that combines multi-to-single-agent reduction and the majorization theory insights to characterize the solution.

#### Behavioral Real Options, Financial Literacy and Investor’s Inertia

I study a real-options model with a biased investor that faces uncertainty regarding the value of a project (VOP). I show that such a problem presents significant technical difficulties in using dynamic programming. By allowing the investor to compute the VOP from data, I transform the problem into one where dynamic programming is feasible. As she is biased, the investor’s estimation process is subject to computation mistakes. I show that the biases lead to a wait-and-see approach: at the VOP at which a rational investor optimally exercises the option, the biased one is still unconvinced and waits for a more extreme valuation. Finally, I show that the wait-and-see approach explains the documented relationship between financial literacy and investors’ inertia (investors with a poor understanding of financial concepts exhibit long inactivity spells).

#### Beliefs in Repeated Games

This paper uses a laboratory experiment to study beliefs and their relationship to action and strategy choices in finitely and indefinitely repeated prisoners’ dilemma games. We find subjects’ beliefs about the other player’s action are generally accurate, despite some small systematic deviations corresponding to early pessimism in the indefinitely repeated game and late optimism in the finitely repeated game. The data reveal a close link between beliefs and actions that differs between the two games. In particular, the same history of play leads to different beliefs, and the same belief leads to different action choices, in each game. Moreover, we find beliefs anticipate the evolution of behavior within a supergame, changing in response to the history of play (in both games) and the number of rounds played (in the finitely repeated game).

We then use the subjects’ beliefs over actions in each round to identify their beliefs over the supergame strategies played by the other player. We find these beliefs correctly capture the different classes of strategies used in each game. Importantly, subjects using different strategies have different beliefs, and for the most part, strategies are subjectively rational given beliefs. The results also suggest subjects tend to underestimate the likelihood that others use less cooperative strategies. In the finitely repeated game, this helps explain the slow unravelling of cooperation. In the indefinitely repeated game, the persistence of heterogeneity in beliefs underpins the difficulty of resolving equilibrium selection.

#### Best-response reasoning leads to critical-mass equilibria

Best-response reasoning leads the players of an n-person strategic game to one of n possible critical-mass equilibrium concepts, C1, …, Cn, with C1 = dominant strategy equilibrium and Cn = Nash equilibrium. These concepts were used earlier by Eliaz (2002) and others in studies of robust implementation, large games, and equilibrium viability. For increasing m, the justification of Cm equilibrium requires a larger critical mass of players that adhere to their equilibrium strategies. The use of stag hunt games in the proof of the main theorem provides new insights into the age-old topic of stability of social contracts.

#### Beyond Your Município: An Empirical Test of Inequality Preferences in Brazil

Why do many Latin American countries with high income inequality see only moderate levels of redistribution? This paper focuses on the demand side of redistributive policies by addressing both information-driven and value-driven aspects of public opinion in forming demand for redistributive policies in Brazil. Utilizing the variation in local inequality conditions in Brazil from both observational and survey data, I show that Brazilians tend to overestimate both local and national levels of inequality. Local perceptions of inequality are more closely aligned with reality and respond to factual information, but national inequality is consistently overestimated, even when respondents are provided with factual information. I then quantify the extent to which inequality aversion and perceptions of inequality matter, respectively, in forming redistribution preferences. Using a micro-founded choice model of redistribution preferences, I experimentally test how information about local and/or national inequality affects redistribution preferences. I find that willingness for local redistribution is driven by intrinsic inequality aversion, whereas willingness to redistribute nationally is driven by overestimation of national inequality.

#### Bi-dimensional screening with substitutable attributes

A principal (he) decides whether to reward an agent with a bi-dimensional type consisting of her evidence and talent, both valuable to the principal. He does so by asking the agent to present evidence and send a cheap talk message about her talent and then possibly testing her. The test score is increasing in both evidence and talent. When testing is free, the principal achieves (resp. does not achieve) the first-best if the test is more (resp. less) sensitive to talent than talent is valuable to him. When the test is under-sensitive to talent, then the optimal mechanism makes two types of errors: it (i) rewards some unworthy agents with high evidence—which they can hide to imitate agents with low evidence and high talent—and (ii) does not reward some worthy agents with low evidence. When testing is costly, the optimal mechanism even rewards agents with high evidence (including unworthy ones) without testing them.

#### Blackwell-Monotone Information Costs

A Blackwell-monotone information cost function assigns higher costs to Blackwell more informative experiments. This paper provides simple necessary and sufficient conditions for Blackwell monotonicity over finite experiments.

The key condition is a system of linear differential inequalities that are convenient to check given an arbitrary cost function. When the cost function is additively separable across signals, our characterization implies that Blackwell monotonicity is equivalent to sublinearity. This identifies a wide range of practical information cost functions.

Finally, we apply our results to bargaining and persuasion problems with costly information.

#### Boards and Executive Compensation: Another Look

We analyze the optimal contracts offered to an empire-building CEO and a reputation-concerned board when the CEO persuades the board to approve an investment project. We show that lack of flexibility in the fixed part of the board’s or the executive’s compensation generates a shareholder tradeoff between size and share of profits. The shareholders choose between contracts for which profits are large but the CEO’s and the board’s compensations are also large, and contracts for which profits and compensations are low. Tolerance of excessive investments with low profits is optimal if the ex ante expected value of the project is large, the CEO’s outside option on the labor market is not very attractive, the CEO’s empire-building benefit is large, and the board’s reputational concerns are moderate. We show that the optimal contracts involve stocks but not options and that the variable parts of the CEO’s and the board’s compensations are substitutes. Boards’ reputational concerns affect information quality and company profits in a non-monotonic manner.

#### Bridging Bargaining Theory with the Regulation of a Natural Monopoly

In this paper, we integrate bargaining theory with the problem of regulating a natural monopoly under symmetric information or asymmetric information with complete ignorance. We prove that the unregulated payoffs under symmetric information and the optimally regulated payoffs under asymmetric information define a pair of bargaining sets which are dual to (reflections of) each other. Thanks to this duality, the bargaining solution under asymmetric information can be obtained from the solution under symmetric information by permuting the implied payoffs of the monopolist and consumer, provided that the bargaining rule satisfies anonymity and homogeneity. We also show that under symmetric (asymmetric) information the bargaining payoffs (permuted payoffs) obtained under the Egalitarian, Nash, and Kalai-Smorodinsky rules are equivalent to the Cournot-Nash payoffs of unregulated symmetric oligopolies, involving two, three, and four firms, respectively. Moreover, we characterize two bargaining rules using, in addition to (weak or strong) Pareto optimality, several new axioms that depend only on the essentials of the regulation problem.

#### Büchi Objectives in Countable MDPs

We study countably infinite Markov decision processes with Büchi objectives, which ask to visit a given subset of states infinitely often. A question left open by T.P. Hill in 1979 is whether there always exist epsilon-optimal Markov strategies, i.e., strategies that base decisions only on the current state and the number of steps taken so far. We provide a negative answer to this question by constructing a non-trivial counterexample. On the other hand, we show that Markov strategies with only 1 bit of extra memory are sufficient.

#### Buying In and Selling Out: The Dynamic Returns to Investing in Expertise

We study dynamic incentives for an agent with expertise that is hidden, durable, and inalienable, and who contracts with a sequence of principals as principal-agent match quality changes. Contracts induce the agent to make costly, hidden investments in expertise and to reveal those investments. Our model has two layers of adverse selection: each principal must elicit an agent’s expertise when a contract is initiated, and contracts must ensure ongoing truthful reporting by the agent. There is a double externality between current and future principals: current and anticipated contracts both influence investment in expertise, and the benefits from today’s investment extend to future projects. The optimal contract features two regimes. In the serial training regime, the agent begins each relationship with few incentives and accumulates claims on cash flows (e.g., buys in) only after some specified time. In the serial founder regime, the agent begins with a full claim on cash flows but transfers some of those claims to the principal (e.g., sells out) after positive performance. The training regime generates little expertise and arises when the agent has little bargaining power, while the founder regime generates high expertise and arises when the agent has high bargaining power.

#### Buying Opinions

A principal hires an agent to acquire a distribution over *unverifiable* posteriors before reporting to the principal, who can contract on the realized state. An agent’s optimal learning and truthful disclosure completely specify the relative incentives the principal must provide, which simplifies the principal’s problem. When the agent 1) is risk neutral, and 2a) has a sufficiently high outside option, or 2b) can face sufficiently large penalties, the principal can attain the first-best outcome. We also explore in detail the general problem of cheaply implementing distributions over posteriors with limited liability constraints and a risk-averse agent.

#### Causality and Causal Misperception in Dynamic Games

I study causality and causal misperception in game theory. My agents have observation-consistent expectations (OCE): they sustain incorrect beliefs about Nature or other players’ actions insofar as those beliefs are observationally equivalent to the actual probabilities. The OCE that maximizes the Shannon entropy (MaxEnt OCE) exhibits correlation neglect, capturing common causal misperceptions such as omitted-variable and simultaneity biases. Every finite extensive-form game with perfect recall has an equilibrium in which every player best responds to their MaxEnt OCE. A probabilistic definition of causality and causal misperception is derived from logical axioms.

#### Certification in Search Markets

We consider a firm seeking to fill a single vacancy by searching over a sequence of workers who are ex-post differentiated in their productivity but are ex-ante identical. Prior to its hiring decision, the firm may acquire information about a worker’s productivity by paying an intermediary to certify the worker. The intermediary, through the certification tests and fees it offers, affects how much surplus is generated in each period as well as how long the firm searches. We characterize the intermediary’s profit-maximizing spot-contract and show that the contract (i) induces efficient hiring standards, (ii) extracts the full-surplus, and (iii) strings along the firm, i.e., keeps the firm searching for longer than the firm would like. We also consider the case in which the worker pays the certification fees and show that (iv) the intermediary may be able to create a demand for certification, even when the certificate conveys little to no information, and (v) the worker benefits when disclosure of test results is mandatory.

#### Characterization of Priority-Neutral Matching Lattices

Priority-neutral matchings are an important new class of matchings, introduced by Reny (2022). Priority-neutrality generalizes stability to allow for certain priority violations, while also allowing for Pareto optimal matchings. The set of priority-neutral matchings also maintains some of the important theoretical structure of stable matchings which underpins much of market design theory. Namely, the set of priority-neutral matchings forms a lattice.

In this paper, we characterize the structure of the priority-neutral lattice. We show that this structure is considerably more intricate than that of the stable lattice, which is always distributive and can always be represented by a partial order on some set of rotations. Building on recent work showing that the priority-neutral lattice need *not* be distributive (Thomas, 2024), we show instead that priority-neutral lattices are characterized by an involved property we term being a “movement lattice,” which (while much more general than distributivity) does enforce some structure on what can arise as a priority-neutral lattice. Our results also give the first known polynomial-time algorithm for checking whether a given matching is priority-neutral.

#### Cheap Talk with Transparent Motives in Broker Games: An Experimental Analysis

We conducted experiments on Lipnowski and Ravid (2020)’s model of cheap talk with transparent motives using a broker-investor game with belief elicitation. We find that senders largely adopted the equilibrium strategy of a median cutoff policy, and variations in brokerage proportions did not affect the information conveyed. However, we find evidence of inconsistent belief updating; some receivers did not update. Most receivers failed to trade off more accurate beliefs against higher brokerage fees. Subjects showed limited learning, primarily identifying whether others were lying or trusting, rather than learning the partner’s strategy. Some pairs exhibited “grim-trigger-strategy” behaviors to sustain cooperation through sender truth-telling and receiver trusting. Other pairs showed distrust from the outset through sender babbling or lying and receiver not-acting or counteracting.

#### Cheap Talk with Unbounded State Spaces

We extend Crawford and Sobel’s (1982) seminal one-sender, one-receiver cheap talk model to situations in which the state space is unbounded in at least one direction, so arbitrarily extreme states are possible. We show that all equilibria continue to have a partition structure. Furthermore, under a mild integrability condition, informative equilibria always exist. For thin-tailed distributions, under a regularity condition on the sender’s payoff function, there exist equilibria with an arbitrarily high number of partition cells, including infinitely many. Nevertheless, information transmission is bounded away from being perfect, as most of the information cells in such equilibria are tail events and occur with very low probability. How much information can be transmitted in the central part of the distribution depends on the magnitude of the sender’s bias, as in Crawford and Sobel. The qualitative implications of the model change when the prior has a sufficiently fat tail: information transmission can be very coarse no matter how small the sender’s bias is, as strategic information transmission at the tails of the distribution interferes with how much information can be credibly transmitted at more likely states.

#### Choosing Sides in a Two-sided Matching Market

I model a competitive labor market in which agents of different skill levels decide whether to enter the market as a manager or as a worker. After roles are chosen, a two-sided matching market is realized and a cooperative assignment game occurs. There exists a unique rational expectations equilibrium that induces a stable many-to-one matching and wage structure. Positive assortative matching occurs if and only if the production function exhibits a condition that I call *role supermodularity*, which is stronger than the strict supermodularity condition commonly used in the matching literature because the role(s) that high-skilled agents are willing to enter the market as and the degree of complementarity between roles together determine the equilibrium matching pattern. The wage structure in equilibrium is consistent with empirical evidence that the wage gap is driven both by increased within-firm positive sorting as well as between-firm segregation.

#### Coalition Formation in Public Goods Games: Experimental Evidence

An individual’s social preferences can help explain their cooperation in a public goods game. In this experiment we use an incentivized modified dictator game to estimate individuals’ social preferences. We find that subjects who give money to others in the modified dictator game have a higher probability of joining the coalition and contributing to the public good. Joining the coalition translates into contributing to the public good. In addition, a higher MPCR (marginal per capita return from the public good) not only leads to an increase in coalition size, but also enhances the chance of more subjects joining the coalition and contributing to the public good. Further, we find that joining and contributing to the public good depend positively on coalition size.

#### Coarse Learning Equilibrium

This paper studies a 2-stage game in which participants in a mechanism have no priors about a relevant distribution. They can, however, learn coarse information about it and, in doing so, restrict their space of possible beliefs. A coarse learning equilibrium is one in which every type has an action that is a best response at all feasible beliefs. This equilibrium notion extends the idea of ex post equilibrium to settings where the latter does not exist. I provide a characterization of coarse learning equilibria in general settings and then study winner-pays-bid procurement auctions as a particular application. I find that while none of these auctions admit ex post equilibria, some of them do admit coarse learning equilibria. I then find a simple, primitive condition on the auction’s rules to determine when this is the case.

#### Coasian Dynamics Under Informational Robustness

This paper studies durable good monopoly without commitment under an informationally robust objective. A seller cannot commit to future prices and does not know the information arrival process available to a representative buyer. We introduce a formulation whereby the seller chooses prices to maximize profit against a dynamically-consistent worst-case information structure. In the gap case, the solution to this model is payoff-equivalent to a particular known-values environment, immediately delivering a sharp characterization of the equilibrium price paths. Furthermore, for a large class of environments, allowing for arbitrary (possibly dynamically-inconsistent) worst-case information arrival processes would not lower the seller’s profit as long as these prices are chosen. We call a price path with this property a reinforcing solution. As other formulations of our problem introduce dynamic inconsistency, the notion of a reinforcing solution may be useful for researchers seeking to tractably relax the commitment assumption while maintaining a robust objective. To highlight the non-triviality of these observations, we show that while the analogy to known values can hold under an equilibrium selection in the no-gap case, it does not hold more generally.

#### Cognitive Forensics: Using Information Search to Model Strategic Thinking in Two-Person Guessing Game Experiments

Theories of behavior in games rest at least implicitly on assumptions about strategic thinking, which has been studied in experiments that elicit subjects’ initial responses to games. Costa-Gomes and Crawford (2006; “CGC”) continued a line of previous work with experiments in which subjects played 16 different two-person guessing games with other subjects, in a design that very strongly separates decision rules and which revealed many subjects’ rules with great clarity. CGC supplemented their Baseline treatment with six Robot/Trained Subjects (“R/TS”) treatments, in which each subject, instead of playing the games with other subjects, was assigned and trained to follow one of six leading decision rules and played with “the computer”, whose rule justified his assigned rule’s beliefs. CGC also monitored subjects’ searches for hidden payoff parameters and used the search data as an additional lens through which to study subjects’ thinking. CGC made limited use of their search data and R/TS decision data. This paper uses those data in a more detailed analysis, which addresses several questions raised by CGC’s analysis.

#### Collusion Stability and the Number of Firms, Revisited

ordination cost and monitoring cost associated with an agreement. Second, the agreement must be economically stable; it must generate inter-temporal incentives to comply with the agreement instead of breaching the implicit deal. Most of the literature has concluded that as the number of firms increases these two issues intensify, so it becomes tougher to sustain any implicit collusive agreement. In this paper we show that under some simple conditions, economic stability may have a non-monotonic relationship with the number of firms. In particular, the profitability of collusion first increases with the number of firms and, after a threshold, decreases. That is, a collusive agreement may be easier to sustain with four rather than with two firms. The reason is that the profit in a collusive agreement decreases mainly with the number of firms; however, the profit in the competitive Nash equilibrium decreases for two reasons: for the competition the aggregated profits. Consequently, if the competitive profit is monotonically decreasing in the number of firms, the additional profit due to a collusive agreement may increase with the number of firms. The literature has focused on a Bertrand model to explain the economic stability of collusion. The Bertrand model may be quite pedagogical, yet it may be an extreme (and sometimes unrealistic) case when analyzing the effect of the number of firms on economic stability. A symmetric Cournot model provides a more convenient environment where we show that our results hold.

#### Combining Combined Forecasts: A Network Approach

This study explores theoretically the practice of combining forecasts to improve accuracy when experts collaborate to pool forecasts before presenting them to the decision-maker. The significance of this subject extends to various contexts where experts contribute their assessments to decision-makers following discussions with peers. Preliminary findings show that, irrespective of the information structure, pooling rules and mixtures introduce no bias to decision-making in expected terms. Nevertheless, the concern revolves around variance. In situations where experts are equally precise, and pair-wise correlation of forecasts is the same across all pairs of experts, the network structure plays a pivotal role in decision-making variance. Star networks exhibit the highest variance, contrasting with d-regular networks that achieve zero variance, emphasizing their efficiency. These insights contribute to a nuanced understanding of decision-making dynamics under varying information and network structures, and pooling methodologies.
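The equal-precision, equal-correlation benchmark in this abstract has a standard closed form: a weighted average of forecasts with common variance σ² and common pairwise correlation ρ has variance w′Σw, which for equal weights equals σ²(1/n + (n−1)ρ/n). The sketch below (the function name and the equal-weight pooling rule are illustrative assumptions, not the paper's model) verifies this numerically:

```python
def pooled_variance(weights, sigma2, rho):
    """Variance of sum_i w_i * X_i when each forecast X_i has variance
    sigma2 and every distinct pair has correlation rho."""
    n = len(weights)
    var = 0.0
    for i in range(n):
        for j in range(n):
            cov_ij = sigma2 if i == j else rho * sigma2
            var += weights[i] * weights[j] * cov_ij
    return var

# Equal-weight pool of n equally precise forecasts.
# Closed form: sigma2 * (1/n + (n - 1) * rho / n); for n=4, rho=0.3 this is 0.475.
n, sigma2, rho = 4, 1.0, 0.3
v = pooled_variance([1 / n] * n, sigma2, rho)
```

As n grows, the variance tends to ρσ² rather than zero, which is why the correlation structure — and hence the network through which forecasts are pooled — matters for decision-making variance.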

#### Commitment and Randomization in Communication

When does a Sender, in a Sender-Receiver game, strictly value commitment? In a setting with finite actions and finite states, we establish that, generically, Sender values commitment if and only if he values randomization. In other words, commitment has no value if and only if a partitional experiment is optimal. How often (i.e., for what share of preference profiles) does this happen? For any number of states, if there are |A| actions, the likelihood that commitment has no value is bounded below by 1/|A|^|A|. As the number of states grows large, this likelihood converges precisely to 1/|A|^|A|.

#### Commitment, Firm and Industry Effects in Strategic Divisionalization

We modify the canonical two-stage game of strategic divisionalization by adding an initial stage that allows firms to credibly commit to whether or not they will create additional divisions. This generates a unique equilibrium prediction consistent with the key stylised fact that often only one of the mother firms creates independent divisions. Examples include GM versus Ford for national markets and many cases of franchising in local markets (e.g., Walmart vs Target, McDonald’s vs Burger King). A key implication for organization theory is that the adoption of the M-form versus the U-form is part of a strategic whole necessarily involving all competitors, rather than just intra-firm managerial and informational considerations as in the classical theory. The differences between the predictions of the latter and of the present approach are highlighted.

#### Common ownership in production networks

We characterize the firm-level welfare effects of a small change in ownership overlap and how they depend on the position in the production network. In our model, firms compete in prices, internalizing how their decisions affect the firms lying downstream as well as those that have common shareholders. While in a horizontal economy the common-ownership effects on equilibrium prices depend on firm markups alone, in the more general case with vertical inter-firm relationships full knowledge of the production network is typically required. Turning then to the normative question of the welfare implications of changing the ownership structure, we show that, if the costs of adjusting it are large, the optimal intervention is proportional to the Bonacich centrality of each firm in the weighted network quantifying inter-firm price-mediated externalities. Finally, we explain that the parameters of the model can be identified from typically available data, rendering our model amenable to empirical analysis.

#### Commonality of Information and Commonality of Beliefs

A group of agents with a common prior receive informative signals about an unknown state repeatedly over time. If these signals were public, agents’ beliefs would be identical and commonly known. This suggests that if signals were private, then the more correlated these are, the greater the commonality of beliefs. We show that, in fact, the opposite is true. In the long run, conditionally independent signals achieve greater commonality of beliefs than correlated ones. We then apply this result to binary-action, supermodular games.

#### Communicating with Anecdotes

We study a communication game between a sender and receiver where the sender has access to a set of informative signals about a state of the world. The sender chooses one of her signals, called an “anecdote,” and communicates it to the receiver. The receiver takes an action, yielding a utility for both players. Sender and receiver both care about the state of the world but are also influenced by a personal preference so that their ideal actions differ. We characterize perfect Bayesian equilibria when the sender cannot commit to a particular communication scheme. In this setting the sender faces “persuasion temptation”: she is tempted to select a more biased anecdote to influence the receiver’s action. Anecdotes are still informative to the receiver but persuasion comes at the cost of precision. This gives rise to “informational homophily” where the receiver prefers to listen to like-minded senders because they provide higher-precision signals. In particular, we show that a sender with access to many anecdotes will essentially send the minimum or maximum anecdote even though with high probability she has access to an anecdote close to the state of the world that would almost perfectly reveal it to the receiver. In contrast to the classic Crawford-Sobel model, full revelation is a knife-edge equilibrium and even small differences in personal preferences will induce highly polarized communication and a loss in utility in any equilibrium. We show that for fat-tailed anecdote distributions the receiver might even prefer to talk to poorly informed senders with aligned preferences rather than a knowledgeable expert whose preferences may differ from her own. We also show that under commitment, differences in personal preferences no longer affect communication and the sender will generally report the most representative anecdote closest to the posterior mean for common distributions.

#### Communication of Expertise before Disclosure

We consider communication of expertise in a disclosure game. A sender wants to persuade a receiver to take a high action. The sender has private information about how much evidence she can acquire. After the sender communicates her private expertise to the receiver through a cheap talk message, she obtains some evidence that she can choose to disclose or conceal. The receiver partially attributes any incomplete disclosure to the sender concealing unfavorable evidence and wants to take an action based on the true information. We show that, ex ante, the sender never wants the receiver to believe she has the lowest expertise. Moreover, the sender’s expertise can be communicated through pure cheap talk before the disclosure game, and the expertise information can be partially transmitted in a partition equilibrium with finitely or infinitely many intervals under certain conditions.

#### Communication Technology Advance and Consequences: Using Two-sided Search Model

Do advances in communication technology, such as online dating sites and social networking services, really make us happier? In this paper, I construct a non-stationary two-sided search market equilibrium model to analyze the quantitative effects of communication technology advances on individuals’ marital behavior and social welfare. In the model, I include cohabitation as well as marriage as an individual choice and provide a new identification argument for separately identifying parameters that have been considered important but difficult to identify, together with a new proof of the existence of the non-stationary market equilibrium. Using the National Longitudinal Study of the High School Class of 1972 and the National Longitudinal Survey of Youth 1997, I quantify the effects of communication technology advances on society and reveal which types of individuals benefit from them.

#### Community Costs in Neighborhood Help Problems

We define neighborhood help problems, where agents may seek and/or provide various kinds of help, as one-sided matching markets with incompatibilities. To obtain a Pareto efficient outcome, the top trading cycles mechanism (TTC) (Shapley and Scarf, 1974) may be used. However, a short supply of compatible helpers may result in many agents being unmatched, forcing them to rely on costly outside options. These agents leave the market without helping, and a lot of potential is lost. To overcome this issue we introduce the so-called pool option. This pool gives agents an incentive to provide help when being helped outside of the market. We propose the neighborhood top trading cycles and chains mechanism that incorporates the pool option and is based on the TTCC by Roth et al. (2004). The mechanism is Pareto efficient and strategy-proof. Additionally, it (weakly) reduces overall costs compared to the TTC.
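For readers unfamiliar with the building block this abstract extends, the classic Shapley and Scarf (1974) TTC mechanism can be sketched as follows. This is the textbook algorithm only — not the paper's neighborhood TTCC variant with the pool option — and the dictionary encoding of preferences is an illustrative choice:

```python
def top_trading_cycles(prefs):
    """Classic Shapley-Scarf TTC. Agent i initially owns object i;
    prefs[i] is i's complete strict ranking of objects (owner indices,
    most preferred first). Returns a dict mapping agent -> assigned object."""
    remaining = set(prefs)
    assignment = {}
    while remaining:
        # Each remaining agent points to the owner of their top remaining object.
        point = {i: next(j for j in prefs[i] if j in remaining) for i in remaining}
        # Follow pointers until a node repeats: the repeated suffix is a cycle.
        path, seen = [], set()
        node = next(iter(remaining))
        while node not in seen:
            seen.add(node)
            path.append(node)
            node = point[node]
        cycle = path[path.index(node):]
        # Agents in the cycle trade along it and leave the market.
        for i in cycle:
            assignment[i] = point[i]
            remaining.discard(i)
    return assignment
```

For example, if agents 0 and 1 each prefer the other's object while agent 2 ranks last, agents 0 and 1 swap in the first round and agent 2 keeps its own object. The resulting assignment is Pareto efficient and the mechanism is strategy-proof, the two properties the abstract's extended mechanism preserves.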

#### Comparing Search and Intermediation Frictions Across Markets

In intermediated markets, trading takes time and intermediaries extract rents. We estimate a structural search-and-bargaining model to quantify these trading delays, intermediaries’ ability to extract rents, and the resulting welfare losses in government and corporate bond markets. Using transaction-level data from the UK, we identify a set of clients who are active in both markets. We exploit the cross-market variation in the distributions of these clients’ trading frequency, prices, and trade sizes to estimate our structural model. We find that trading delays and dealers’ market power both play a crucial role in explaining the differences in liquidity across the two markets. Dealers’ market power is more severe in the government bond market, while trading delays are more severe in the corporate bond market. We find that the welfare losses from frictions in the government and corporate bond markets are 7.8% and 12.2%, respectively, and our decomposition implies that this loss is almost exclusively caused by trading delays in the corporate bond market, while trading delays and dealers’ market power split the welfare loss equally in the government bond market. Using data from the COVID-19 crisis period, we also find that these welfare losses might more than triple during turbulent times, revealing the fragility of the OTC market structure.

#### Competing Narratives in Action: Belief Cycles throughout the Pandemic

We study belief dynamics during the COVID-19 pandemic and propose an equilibrium model to explain the data. Using a representative panel of US households we uncover the presence of cycles in beliefs about the effectiveness of risk prevention measures such as wearing a mask or avoiding restaurants. We show that, for behaviors that are effective in mitigating infection risks, the fraction of the population that believes in their effectiveness closely follows fluctuations in infection risk. No such variation is found for behaviors that are not effective in reducing risk. We present a model in which agents choose between competing narratives about behavior effectiveness to maximize their expected utility. We prove that under natural assumptions the equilibrium share of agents adopting the ‘effective behavior’ narrative is increasing in risk, leading to belief cycles. We estimate that infection rates would have been between 3% and 8% lower in the counterfactual scenario of all agents adopting the effective behavior narrative.

#### Competing to Commit: Markets with Rational Inattention

Two homogeneous-good firms compete for a consumer’s unitary demand. The consumer is rationally inattentive and pays entropy-based information processing costs to learn about the firms’ offers. While trade is always inefficient if firms collude, we obtain efficiency under competition for a range of information costs. Competition puts downward pressure on prices. Additionally, competition increases the demand pointwise, since the consumer’s information processing decision depends on the level of competition. For high enough information costs, this effect dominates: firms’ total surplus is larger under competition than under collusion. Our model reveals why markets with common ownership may remain competitive.

#### Competition and Errors in Breaking News

Reporting errors are endemic to breaking news, even though accuracy is prized by consumers. I present a continuous-time model to understand the strategic forces behind such reporting errors. News firms are rewarded for reporting before their competitors, but also for making reports that are credible in the eyes of consumers. Errors occur when firms *fake*, reporting a story despite lacking evidence. I establish existence and uniqueness of an equilibrium, which is characterized by a system of ordinary differential equations. Errors are driven both by a lack of commitment power and by competition. A lack of commitment power gives rise to errors even in the absence of competition: firms are tempted to fake after their credibility has been established, capitalizing on the inability of consumers to detect fake reports. Competition exacerbates faking by engendering a preemptive motive. In addition, competition introduces observational learning, which causes errors to propagate through the market. The equilibrium features rich dynamics. Firms become gradually more credible over time whenever there is a preemptive motive. The increase in credibility rewards firms for taking their time, and thus endogenously mitigates the haste-inducing effects of preemption. A firm’s behavior will also change in response to a rival report. This can take the form of a *copycat effect*, in which one firm’s report triggers an immediate surge in faking by others.

#### Competition, Persuasion, and Search

We consider sequential search by an agent who cannot directly observe the quality of goods he samples but can purchase signals designed by profit-maximizing principal(s). By formulating the contracting problem as a repeated game between the agent and the principal(s), we characterize the set of equilibrium payoffs. We show that the agent’s equilibrium payoff set under monopoly strictly dominates the agent’s equilibrium payoff set under competition in the strong set order, whereas the converse holds for the set of total surplus. Our results show that while competition among principals may benefit the agent, it may also reduce overall efficiency.

#### Computing Perfect Bayesian Equilibria: From a Characterization with Self-Strategy-Independent Consistency to a Differentiable Path-Following Method

As a relaxation of the requirements of sequential equilibrium, the concept of perfect Bayesian equilibrium was established by demanding Kreps and Wilson’s sequential rationality, belief consistency, and subgame perfection. Nonetheless, the concept has lacked a practicable and effective formulation for computing a perfect Bayesian equilibrium. To address this concern, this paper develops a mathematical characterization of perfect Bayesian equilibrium built on self-strategy-independent consistency and local sequential rationality. With this characterization, a perfect Bayesian equilibrium can be explicitly specified by a polynomial system, which leads to a differentiable path-following method for computing such an equilibrium. In developing the method, we construct a barrier extensive-form game in which each player at each information set solves a strictly convex optimization problem. Using the barrier game, we establish the existence of a smooth path that starts from an arbitrary behavioral strategy profile and ends at a perfect Bayesian equilibrium. Numerical results further confirm the effectiveness and efficiency of the method.
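Path-following methods of the kind this abstract describes share a common skeleton: deform an easy problem whose solution is known into the target problem, and track the solution along the homotopy. The toy sketch below illustrates that skeleton on a single equation; it is a generic predictor-corrector illustration under simplifying assumptions of my own, not the paper's barrier-game construction:

```python
def follow_path(f, df, x0, steps=100, newton_iters=20, tol=1e-12):
    """Track the root of H(x, t) = t*f(x) + (1 - t)*(x - x0) as t goes
    from 0 (trivial equation, root x0) to 1 (target equation f(x) = 0)."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        # Corrector: Newton's method on H(., t), warm-started at the
        # previous root (the "predictor" is simply the last iterate).
        for _ in range(newton_iters):
            h = t * f(x) + (1 - t) * (x - x0)
            dh = t * df(x) + (1 - t)
            step = h / dh
            x -= step
            if abs(step) < tol:
                break
    return x

f = lambda x: x**3 + x - 2          # strictly increasing; unique root at x = 1
df = lambda x: 3 * x**2 + 1
root = follow_path(f, df, x0=5.0)
```

In the paper's setting the single equation is replaced by a polynomial system over behavioral strategies, but the structure — a smooth path from an arbitrary starting profile to an equilibrium — is the same.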

#### Computing Subgame Perfect Equilibrium Payoffs

#### Concavification Bounds and Mechanism Simplicity

I develop a new type of bound, relating to mechanism simplicity, in a class of economic models: the linear functional optimization models. In these models, which are frequently used in economic design contexts, the optimal function is piecewise constant with at most n jumps, where n is a finite number related to the number of constraints in the problem. My bound quantifies, over all model primitives, the maximum shortfall a designer could have from using the optimal piecewise constant function with at most m jumps, with m<n. I illustrate my results in two economic examples: the capacity-constrained selling problem of Loertscher and Muir, and the affirmative action via randomized admissions problem of Fryer and Loury.

#### Conformity Concerns: A Dynamic Perspective

In many settings, individuals imitate their peers’ decisions for two distinct reasons: they infer that such decisions adapt to an unknown and common fundamental state of the world and they infer that their peers approve of such decisions. While initially uncertain, both inferences become more precise as players observe more peers. These improved inferences endogenously increase conformity concerns and result in a permanent switch from a separating to a pooling equilibrium in finite time. This switch implies the players imperfectly learn both the fundamental state of the world and the preferences of their peers (i.e., social learning fails), resulting in inefficient decisions. Finally, this paper shows that, of these two possible misperceptions, correcting misperceptions about the preferences of one’s peers may be the preferred target, in line with findings from social psychology.

#### Connections models with diminishing marginal rate of substitution link-formation technologies

We study a connections model where the strength of a link depends on the amounts invested in it by the two nodes and is determined by an increasing strictly concave function. The revenue from investments in links is the value (information, contacts, friendship) that the nodes receive through the network. First, efficient networks are characterized. Second, assuming that links are the result of investments by the node-players involved, there is the question of stability. We characterize a notion of marginal equilibrium, where all nodes play locally best responses, and identify different marginally stable structures. This notion is based on weak assumptions about node-players’ information, and is necessary for Nash equilibrium and for pairwise stability. Efficiency and stability are shown to be incompatible.

#### Consumer Search with Rational Inattention

We study a consumer search model with rationally inattentive consumers. Even after paying search costs to find a product, consumers cannot directly observe its quality and price, but they can investigate the product by additionally incurring attention costs. We show that consumers search if and only if the search cost is not too large and the attention cost is neither too large nor too small. As an application, we examine a return policy that prevents sellers from charging excessively high prices. Our numerical example shows that the policy may harm consumers because reduced consumer attention leads to more aggressive pricing.

#### Content-hosting platforms: discovery, membership, or both?

We develop a model that classifies platforms in the so-called “creator economy”, such as YouTube, Patreon, TikTok, and Twitch, into three broad business models: pure discovery mode (provides recommendations to help viewers discover creators); pure membership mode (enables individual creators to monetize their viewers directly through transactions); and hybrid mode, which combines both. Creators respond to platforms’ decisions by individually choosing to supply content designed along a niche-broad spectrum, which involves a tradeoff between viewership size and per-viewer revenue. Such endogenous responses create a link between two sources of platform revenue (advertising and transaction commission). Compared to the pure modes, the hybrid mode affects creators’ design decisions and leads to negative spillovers across the two sources of platform revenue. Thus, it is not necessarily more profitable. In the case of competing platforms, incentives to avoid the negative spillovers from competition in transaction commissions to advertising revenue result in platforms choosing different equilibrium business models.

#### Contest design with a finite type-space

We study the classical contest design problem in an incomplete information environment with linear costs and a finite type-space. For any contest with an arbitrary finite type-space and distribution over this type-space, we characterize the unique symmetric Bayes-Nash equilibrium of the contest game. We find that the equilibrium is in mixed strategies, where agents of different types mix over disjoint but connected intervals, so that more efficient agents always exert greater effort than less efficient agents. Using this characterization, we solve for the expected equilibrium effort under any arbitrary contest, and find that a winner-takes-all contest maximizes expected effort among all contests feasible for a budget-constrained designer. Our results extend the optimality of the winner-takes-all contest under a continuum type-space (Moldovanu and Sela, 2001) to the finite type-space environment, and our analysis introduces new techniques for the study of contest design problems in such environments.

#### Context and Communication

This article develops a formal theory of the relationship between context and communication. We define the *context* of an interaction to be the sum total of common knowledge shared by the parties involved. Our main theorem shows how context can be used as a coordinating device to reduce communication costs below the limit described by Shannon’s source coding theorem, albeit at the risk of making communication more ambiguous when taken “out of context.” The theory provides a novel foundation for the inherent difficulty of writing complete contracts and endogenizes communication costs as a function of how existing knowledge is arranged in firms and other organizations.

#### Contracting with Heterogeneous Researchers

We study the design of contracts that incentivize a researcher to conduct a costly experiment, extending the work of Yoder (2022) from binary states to a general state space. The cost is the researcher’s private information. When the experiment is observable, we find the optimal contract and show that higher types choose more costly experiments, but not necessarily more Blackwell-informative ones. When only the experiment result is observable, the principal can still achieve the same optimal outcome if and only if a certain monotonicity condition with respect to types holds. Our analysis demonstrates that the general case is qualitatively different from the binary one, but that the contracting problem remains tractable.

#### Contracts with Interdependent Preferences

A principal contracts with multiple agents, as in Lazear and Rosen (1981) and Green and Stokey (1983). The setup is classical except for the assumption that agents have interdependent preferences. We characterize cost effective contracts, and relate the direction of co-movement in rewards — “joint liability” (positive) or “tournaments” (negative) — to the assumed structure of preference interdependence. We also study the implications of preference interdependence for the principal’s payoffs. We identify two asymmetries. First, the optimal contract leans towards joint liability rather than tournaments, especially in larger teams, in a sense made precise in the paper. Second, when the mechanism-design problem is augmented by robustness constraints designed to eliminate multiple equilibria, the principal may prefer teams linked via adversarial rather than altruistic preferences.

#### Control and spread of contagion in networks with global effects

We study the proliferation of an action in binary-action network coordination games that are generalized to include global effects. This captures important aspects of the proliferation of a particular action or narrative in online social networks, providing a basis to understand their impact on societal outcomes. Our model naturally captures complementarities among starting sets, network resilience, and global effects, and highlights interdependence in the channels through which contagion spreads. We present new, natural, and computationally tractable algorithms to define and compute equilibrium objects that facilitate the general study of contagion in networks, and we prove their theoretical properties. Our algorithms are easy to implement and help to quantify relationships previously inaccessible due to computational intractability. Using these algorithms, we study the spread of contagion in scale-free networks with 1,000 players using millions of Monte Carlo simulations. Our analysis provides quantitative and qualitative insight into the design of policies to control or spread contagion in networks. The scope of application is enlarged given the many other situations across different fields that may be modeled using this framework.

#### Convergence of Discrete-Time Models with Small Period Lengths

#### Coordination and Sophistication

How coordination can be achieved in isolated, one-shot interactions is a long-standing question in game theory. Without communication and in the absence of focal points, whether coordination can be reached at all is unclear. We show, however, that in a non-equilibrium model in which the level of reasoning responds to incentives, high stakes may induce coordination when the cognitive sophistication of the players is heterogeneous and when this is agreed upon. The equilibrium on which coordination is expected to occur, according to our model, depends on the payoff structure of the game in ways that differ from those implicit in standard solution concepts, or from the implications that one could draw applying exogenous criteria for the attribution of the strategic advantage. Our model therefore provides a novel mechanism for endogenous coordination and one in which it is differences between players, rather than their similarities, that lead to increased coordination. Using the model as a framework, we conduct an experiment to examine coordination in such a setting.

#### Coordination with Uncertainty

An experiment is conducted in which subjects play simple stochastic games, facing choices between stage payoffs and random continuation values. In the two-player game, both players receive the same payoff, and thus find it in their interest to coordinate to optimize their payoff, but may face ex-ante conflict if their risk and time preferences differ. It is observed that, while even after many periods in the one-player game agents only weakly settle on pure Markov strategies, in the two-player game agents are more reluctant to switch strategies after achieving coordination, so that two-player games do generally converge on equilibria in stationary Markov strategies. Subjects have more trouble coordinating in a state of potential loss than in a state of potential gain, especially when they have different time/risk preferences.

#### Corporate Financing and Investment Decisions When Equity Issuance Reveals Firms’ Information to Investors

I study a two-stage infinite signaling game, in which firms can issue debt or equity to finance sequentially arriving investment projects. When management’s first-stage decision can change investors’ beliefs and consequently impact the second-stage security issuance, its optimal choice differs significantly from the strict debt-equity preference in a comparable one-stage model. I discuss a refinement concept that restricts the set of separating equilibria by requiring that the low-type firm have no incentive to mimic the high-type firm’s actions. In equilibrium, a dynamic pecking order arises, suggesting that information frictions alone can explain various observed corporate financing behaviors.

#### Correlation-Robust Optimal Auctions

I study the design of auctions in which the auctioneer is assumed to have information only about the marginal distribution of a generic bidder’s valuation, but does not know the correlation structure of the joint distribution of bidders’ valuations. I assume that a generic bidder’s valuation is bounded and that $\bar{v}$ is the maximum valuation of a generic bidder. The performance of a mechanism is evaluated in the worst case over the uncertainty of joint distributions that are consistent with the marginal distribution. For the two-bidder case, *the second-price auction with a uniformly distributed random reserve* maximizes the worst-case expected revenue across *all* dominant-strategy mechanisms under certain regularity conditions. For the $N$-bidder ($N\ge 3$) case, *the second-price auction with the $\bar{v}$-scaled $Beta(\frac{1}{N-1},1)$ distributed random reserve* maximizes the worst-case expected revenue across *standard* dominant-strategy mechanisms (in which a bidder whose bid is not the highest is never allocated the good) under certain regularity conditions. When the probability mass condition (part of the regularity conditions) does not hold, *the second-price auction with the $s^*$-scaled $Beta(\frac{1}{N-1},1)$ distributed random reserve* maximizes the worst-case expected revenue across standard dominant-strategy mechanisms, where $s^*\in (0,\bar{v})$.

#### Corruption Networks

This paper studies how an agent’s propensity to accept bribes depends on the organizational structure, modeled as a monitoring network. In hierarchies, bribe taking is riskier if others accept more bribes, for it is then easier for a corruption investigation to trace through bribe transactions to locate bribe takers. The opposite happens in flat, two-layer networks, as the subordinates who offer bribes are then better protected from being caught. In equilibrium, a denser monitoring network always deters agents from accepting bribes. I use this model to point out a corruption identification problem and propose a remedy for it.

#### Cost Sharing in Public Goods Game in Networks

The existing literature studying the effect of network structure on public good provision reports a negative relationship between the number of neighbors an individual has and their likelihood of investing. The evidence points to the lack of incentives that individuals in central network positions have to invest in the local public good. This paper uses a laboratory experiment to test the relative efficacy of two cost sharing rules in raising efficiency across three network structures in a best shot public goods game. Across the three network structures, I vary the asymmetry in the number of neighbors each position has in the network. The two cost sharing rules are designed to incentivize individuals with more neighbors to invest. The first is a local cost sharing rule, where individuals who invest receive transfers from each of their neighbors who do not invest. The second is a global cost sharing rule, where the total cost of investment is equally divided among individuals who benefit from the public good. The efficiency of provision is the lowest in the absence of cost sharing rules. The low efficiency is driven by under-provision of the public good. Introducing the two cost sharing rules increases the provision of the public good. The local cost sharing rule increases efficiency across all three network structures. The effectiveness of the global cost sharing rule in raising efficiency decreases as the asymmetry of the network structure increases.

#### Costly monitoring in signaling games

#### Costly Multidimensional Screening

A screening instrument is *costly* if it is socially wasteful and *productive* otherwise. A principal screens an agent with multidimensional private information and quasilinear preferences that are additively separable across two components: a one-dimensional productive component and a multidimensional costly component. Can the principal improve upon simple one-dimensional mechanisms by also using the costly instruments? We show that if the agent has preferences between the two components that are positively correlated in a suitably defined sense, then simply screening the productive component is optimal. The result holds for general type and allocation spaces, and allows for nonlinear and interdependent valuations. We discuss applications to optimal regulation, labor market screening, and pricing and bundling by a multiple-good monopolist.

#### Costly Persuasion by a Partially Informed Sender

I study a model of costly Bayesian persuasion by a privately and partially informed sender who conducts a public experiment. I microfound the cost of an experiment via a Wald sequential sampling problem and show that it equals the expected reduction in a weighted log-likelihood ratio function evaluated at the sender’s belief. I focus on equilibria satisfying the D1 criterion. The equilibrium outcome depends on the relative costs of drawing good and bad news in the experiment. If bad news is more costly, there exists a unique separating equilibrium outcome, and the receiver unambiguously benefits from the sender’s private information. If good news is sufficiently more costly, the single-crossing property fails. There exists a continuum of pooling equilibria, and the receiver strictly suffers from the sender’s private information in some equilibria.

#### Costly Verification and Money Burning

Consider a principal designing a mechanism to allocate an indivisible good, e.g., a promotion, to one of many agents. The mechanism does not allow for monetary transfers. Instead, we consider the interplay between two instruments that have been studied in isolation: external audits, i.e., “costly verification of the agent’s type,” and internal bureaucracy or influence activities that waste time, i.e., “money burning.” We utilize a graph-theoretic approach to tackle incentive constraints with two instruments. We show that the optimal mechanism features pooling at the bottom and that the instruments are complements rather than imperfect substitutes.

#### Credible Persuasion

We propose a new notion of credibility for information design. A disclosure policy is credible if the sender cannot profit from tampering with her messages while keeping the message distribution unchanged. We show that the credibility of a disclosure policy is equivalent to a cyclical monotonicity condition on its induced distribution over states and actions. We characterize when credibility considerations completely shut down informative communication, as well as settings where the sender is guaranteed to benefit from credible persuasion. We apply our results to the market for lemons and bank runs. In the market for lemons, we show that no useful information can be credibly disclosed by the seller, even though a seller who can commit to her disclosure policy would perfectly reveal her private information to maximize profit. In the context of bank runs, whether the regulator can credibly perform a stress test to forestall a bank run depends on the welfare cost of a liquidity crisis.

#### Critical Mass Reasoning and Equilibrium

**Question**: Is an equilibrium assumed in game theoretic analysis viable? Would players play it, and would it deter defections?

**Main Theorem**: The von Neumann-Nash framework of $n$-person optimization admits exactly $n$ distinct equilibrium concepts $C_1,\dots,C_n$. The concepts $C_m$, called equilibria of critical mass $m$, are defined and ranked by a critical mass index in decreasing order of viability.

The longer lecture by Ehud, Tuesday 9:45-10:30, presents the definitions and properties of critical mass index and equilibrium, and their importance for assessing equilibrium viability. The main theorem and its proof will be presented by Adam immediately after, Tuesday 11:00-11:20, in the session on solution concepts.

#### Cross-Examination

Two opposed parties seek to influence a decision maker. They invest in acquiring information and select what to disclose. The decision maker then adjudicates. We compare this setting with one allowing cross-examination. A cross-examiner tests the opponent in order to persuade the decision maker that the opponent did not disclose the whole truth. We show that the quality of decision-making deteriorates because both the threat and the potential benefits from cross-examination reduce incentives to investigate and because cross-examination too often makes the truth appear as falsehood.

#### Crowdsourced Appraisals and Connectedness Bias

Consider a principal who wants to give an award to a high-value agent from a group of agents whose values are private but follow the same known distribution. Each agent may hold private conclusive evidence about his own value and about the values of agents he knows (his *neighbors*). Agents compete for the award and each agent strategically decides which pieces of his private evidence to reveal to the principal. After evaluating the transmitted evidence, the principal assigns the award. We identify two equilibria: 1) In the adversarial disclosure equilibrium, agents disclose positive evidence about themselves, negative evidence about neighbors, and nothing else. Here, despite agents having the same ex-ante expected value, their ex-ante expectation of receiving the prize varies with the number of their neighbors; this number is informative for the principal given the disclosure strategies. 2) In the no-snitching equilibrium, agents only reveal positive evidence about themselves and nothing else. Here, all agents’ ex-ante expectations of receiving the prize are the same.

We show that these two equilibria (or combinations thereof) are especially robust, and no other outcomes are. With commitment, the principal cannot achieve the first best, but she can improve over any robust equilibrium outcome.

#### (Doubly) Irreversible Disclosure

I study a dynamic disclosure game between an agent and a decision maker where the agent’s decisions to start and stop disclosing are both irreversible. Over time, the agent privately receives conclusive bad signals that arrive according to a Poisson process. The agent chooses a time to start and stop disclosing this information process to the decision maker whose action affects the payoffs of both players. In the unique Markov perfect Divine equilibrium, the later the agent starts disclosing, the longer he discloses. While disclosure is in progress, the agent faces a tradeoff between a more favorable action and higher risks, which leads to delayed stopping by a more optimistic agent.

#### DanceSport and Power Values

DanceSport is a competitive form of ballroom dancing. At a DanceSport event, couples perform multiple dances in front of judges. This paper shows how a goal for a couple and the judges’ evaluations of the couple’s dance performances can be used to formulate a weighted simple game. We explain why couples and their coaches may consider a variety of goals. We also show how prominent power values can be used to measure the contributions of dance performances to achieving certain goals. As part of our analysis, we develop novel visual representations of the Banzhaf and Shapley-Shubik index profiles for different thresholds. In addition, we show that the “quota paradox” is relevant for DanceSport events.

#### Decentralized Foundation for Stability of Supply Chain Networks

We propose simple dynamics that generate a stable supply chain network. We prove that any unstable network can reach a stable network through decentralized interactions where randomly-selected blocking chains form successively. Our proof suggests an algorithm for finding a stable network that generalizes the classical Gale and Shapley (1962) deferred acceptance algorithm.

#### Delegated Experimentation and Reputation for Learning

This paper builds a principal-agent model of experimentation: the expert receives a signal and reports an action recommendation. The agent then updates her belief about the expert’s ability to give the correct recommendation and chooses how much effort to exert. Failure may be triggered by the agent’s insufficient effort, and the agent’s incentives to provide it depend on her assessment of the expert’s competence. We characterise the equilibrium of this game and show that concern for his reputation makes the expert overly conservative in his advice.

#### Delegated Recruitment and Statistical Discrimination

We study how delegated recruitment shapes talent selection. Firms typically pay recruiters via refund contracts, which specify a payment upon the hire of a suggested candidate and a refund if a candidate is hired but terminated during an initial period of employment. We develop a model where refund contracts naturally arise and show that delegation leads to statistical discrimination, where the recruiter favors candidates with more precise information about their productivity. This is misaligned with direct hiring, where the firm has option value from uncertain candidates. Under tractable parametric assumptions, we characterize the unique equilibrium in which candidates with lower expected productivity but more informative signals (“safe bets”) are hired over candidates with higher expected productivity but less informative signals (“diamonds in the rough”).

#### Dependency Equilibria: Extending Nash Equilibria to Entangled Belief Systems

This paper gives a detailed formal characterization of dependency equilibria—a novel solution concept for games that provides a natural extension of Nash equilibria to strategic interactions where the standard non-cooperative game-theoretic assumption of the causal independence of the players’ choices is retained, but the assumption of their probabilistic independence is forgone. Hence, players’ beliefs may be entangled, i.e., permit probabilistic dependencies between their choices, in which case they maximize conditional expected utility (in contrast to correlated equilibria, where players maximize posterior unconditional expected utility). We demonstrate how this novel equilibrium concept can account for seeming out-of-equilibrium behavior in a variety of experimentally and socially relevant games. We further provide lower and upper bounds for the existence of dependency equilibria, determine epistemic conditions under which they obtain, and demonstrate how certain simple iterative belief revision algorithms can lead players into a common dependency equilibrium state.

#### Determinants of Agricultural Fires: An Aggregative Games Approach

The effects of deforestation through land fires used by farmers (especially smallholders) are twofold. From an individual perspective, these burnings contribute to land preparation, improving its fertility. On the other hand, the aggregate decision harms air and water quality, degrading the environment, and this feeds back as a negative impact on the productivity of the land. In this work, we present an aggregative game framework that incorporates these effects, enabling us to analyze the impact of variations in fire cost and the number of farmers. Furthermore, using data from Brazilian research institutes, we test the sign and magnitude of the impacts of those determinants on aggregate deforestation in Brazil for the period 2009 to 2018. The theoretical model allows us to establish the negative impact of fines for burning on deforestation and the positive impact of the number of farmers. We also demonstrate the pervasive cross-effect of fines charged to a farmer (or group of farmers) on the burning decisions of others. The empirical analysis verifies some of the theoretical results and also highlights the insufficient or ineffective activities of environmental authorities in reducing land fires through infraction notices.

#### Deterrence Game with private signals and updated beliefs

#### Differential games of public investment: Markovian best responses in the general case

We define a differential game of public investment with a discontinuous Markovian strategy space. The best response correspondence for the game is well-behaved: a best response exists and maps a profile of opponents’ strategies back to the strategy space. Our chosen strategy space thus makes the differential game well-formed as a normal form game, resolving a long-standing open problem in the literature. We provide a user-friendly necessary and sufficient condition for constructing the best response. Our methods do not require recourse to specific functional forms. Our theory has general applications, including to problems of noncooperative control of stock pollutants, harvesting of natural resources, and joint investment problems.

#### Digital Tokens and Platform Building

We present a dynamic game rationalizing the economic value of digital tokens for launching peer-to-peer platforms: By using the blockchain to transparently distribute tokens before the platform begins operation, a token sale overcomes later coordination failures between transaction counterparties during the platform operation. This result stems from the forward induction equilibrium refinement, under which the costly and observable action of token acquisition credibly communicates the intent to participate on the platform. Our theoretical framework demonstrates the applications of digital tokens to entrepreneurship, including initial coin offerings (ICOs), and offers guidance for both practitioners and regulators.

#### Disclosing Technological Breakthroughs in Innovation Races

We study a model of multiple firms engaged in an innovation race. Firms allocate their resources in continuous time either to (i) “develop” (with a slower incumbent technology) or (ii) conduct “research” (for finding a faster new technology). Under the assumption that firms cannot observe the opponent’s technology level, we uniquely characterize a symmetric equilibrium. There are three types of equilibria: (i) indefinitely developing with an incumbent technology; (ii) indefinitely doing research; (iii) doing research up to a certain date, then mixing research and development. We show that the unobservability may lead to an inefficient social outcome. We also explore whether the firms would voluntarily disclose their technology breakthroughs under various patentable settings.

#### Distributed Asynchronous Stochastic Games

#### Diversity and Empowerment in Organizations

We study how the interplay between empowerment and diversity affects organizational performance. In our model, an organization consists of two members: one can acquire costly information, while the other can exert effort to execute the decision. Members of the organization are in charge of different interrelated tasks and have different (prior) beliefs about the best course of action. We show that when the information technology is sufficiently informative and the member executing the task is sufficiently empowered, a bounded increase in diversity improves the quality of decisions and the motivation to exert costly effort in implementation. Furthermore, empowerment enhances the effect of an increase in diversity on organizational performance.

#### Diversity Preferences, Affirmative Action, and Choice Rules

I study the relationship between diversity preferences and the choice rules implemented by institutions, with a particular focus on affirmative action policies. I characterize the choice rules that can be rationalized by diversity preferences and demonstrate that the recently rescinded affirmative action mechanism used to allocate government positions in India cannot be rationalized. I show that if institutions evaluate diversity without considering the intersectionality of identities, their choices cannot satisfy the crucial substitutes condition. I characterize choice rules that satisfy the substitutes condition and are rationalizable by preferences that are separable in the diversity and match-quality domains.

#### Do Peer Preferences Matter in School Choice Market Design?

Can a school-choice clearinghouse generate a stable matching if it does not allow students to express preferences over peers? Theoretically, we show stable matchings exist with peer preferences under mild conditions but finding one via canonical mechanisms is unlikely. Increasing transparency about the previous cohort’s matching induces a tâtonnement process wherein prior matchings function as prices. We develop a test for stability and implement it empirically in the college admissions market in New South Wales, Australia. We find evidence of preferences over relative peer ability, but no convergence to stability. We propose a mechanism improving upon the current assignment process.

#### Do Protests Induce Accountability? Evidence and theory from Brazil’s 2013 Mass Protests

The effectiveness of mass street protests as an accountability mechanism remains uncertain, particularly concerning the quality of the messages produced and their subsequent impact. This paper empirically analyzes the effects of the large street protests in Brazil in 2013 on both voter and federal legislator behavior. Leveraging geolocated Twitter data, we construct two distinct measures: protest intensity and the quality of protesters’ demands at the municipal level, where quality refers to protesters articulating few and clear demands. Our findings provide causal evidence that more intense protests lead to higher levels of pork barrel spending, while protests with less focused demands negatively affect legislators’ responsiveness. Additionally, we find a negative causal impact of less focused protest demands on legislators’ vote share. However, legislators who responded to protest demands are less negatively impacted by protests. These results support the idea that protests can be effective political accountability mechanisms, as long as they have clear demands. Finally, our results can be interpreted within the framework of a noisy persuasion game between protesters and the government. Our model demonstrates that protests lacking clear demands and encountering a noisy communication channel not only achieve diminished success but can also be ex-ante inefficient as mechanisms of persuasion. Intriguingly, noisy protests help differentiate politicians who address protesters’ demands from those who do not, thereby enhancing electoral accountability.

#### Double-Sided Moral Hazard and the Innovator’s Dilemma

I explain why current success might undermine an organization’s ability to innovate. I study a principal-agent relationship deciding whether to adopt an innovation. For the innovation to deliver profits, the agent must develop new capabilities, and the principal must divert resources from a profitable status quo. When contracts are incomplete and learning the innovation’s profitability takes time, a double-sided moral hazard problem arises. Its severity increases with the status quo’s profits, so more successful firms have higher innovation costs after accounting for profit cannibalization. The model provides predictions about which innovations will be more difficult for successful firms to adopt.

#### Downside of Transparency in Delegated Experimentation with Costly Switching

Consider a principal-agent scenario where both players benefit from infinite-armed bandit experimentation by the agent. The agent has to pay a cost every time he switches arms, which makes him prone to sticking with the same arm for several periods. To incentivize more frequent switching, the principal can design a reward scheme that assigns a (potentially random) payoff to each arm. The main limitation faced by the principal is that her reward scheme is inflexible: it either cannot be changed during experimentation, or can be changed only at a cost. A central preliminary result shows that under these conditions it is optimal for the principal to maximize the non-transparency of the reward scheme, as measured by the uncertainty of the reward distribution. The optimal design is characterized by lottery-like extremity: the agent receives either a high reward with low probability or nothing. This emphasizes the role of non-transparency in delegated experimentation with asymmetric incentives, and how it can be used in a simple direct reward scheme to achieve more efficient experimentation by the agent. This also sheds light on why contemporary two-sided market platforms often rely on non-transparent incentive schemes for producers, as opposed to more classic screening contracts.

#### Dynamic Concern for Misspecification

We consider an agent who posits a set of structural probabilistic models for the payoff-relevant states. The agent has a probabilistic belief over this set but still fears that the actual model is not in the support and uses a generalization of the multiplier preferences introduced by Hansen and Sargent (2001) to hedge against this possibility. The beliefs over structural models are adjusted using Bayesian updating given the endogenously generated evidence. The concern for misspecification is also endogenous: if a model explains the previous observations well, the agent attenuates their concern. We show how different existing or novel equilibrium concepts arise as the limit behavior, depending on the preferences of the agent and on whether they are misspecified. Finally, we axiomatize this decision criterion, including how quickly the agent adjusts their misspecification concern.

#### Dynamic Investigations

Investigations are an important feature of our legal system, used to uncover wrongdoing and hold people accountable for their actions. We study an environment where an investigator uses a Poisson process to investigate a potentially guilty party who, in turn, can invest resources into obstructing the course of the investigation, slowing down the arrival of damning evidence. In equilibrium, obstruction occurs with positive probability and increases as time progresses. We consider the model in the context of investigating a political candidate seeking office. Guilty candidates have incentives to obstruct the investigation, thereby reducing voter information during the election. This problem intensifies when legal penalties for wrongdoing increase, but the effect of increases in voters' distaste for wrongdoing is ambiguous. When voters display stronger distaste for wrongdoing, relatively secure (but guilty) candidates obstruct more in an attempt to avoid confirmation of wrongdoing. On the other hand, less secure candidates may be so *tainted* by the accusation that they are unlikely to win the election *even if* they successfully obstruct the investigation, and therefore obstruct less in equilibrium. We augment the model in two directions. First, we consider a legal environment that separately punishes wrongdoing *and* obstruction, and give voters a corresponding distaste for wrongdoing and dishonesty. We show that punishing obstruction can increase voter welfare in certain ranges of the parameter space, depending crucially on whether or not the election constraint binds for the investigator. Second, we consider an opposition candidate with *verifiable information* who can choose when to level an accusation against their opponent. We show that in close elections, credible information is released as an 'October surprise', whereas non-credible information is released early in the hopes that something comes of it, in spite of the accusation being ex-ante unlikely.
This result differs from recent literature on the timing of accusations by focusing on uncertainty surrounding the median voter’s preferences.

#### Dynamic Monitoring Design

This paper studies a dynamic principal-agent model in which the principal designs both monitoring structure and compensation scheme. The model predicts simple effort choice and coarse evaluation. In the optimal contract, the agent exerts effort until termination or tenure, and Poisson processes emerge as the optimal monitoring structure. In the trial period, the principal monitors with inconclusive Poisson bad news, arrival of which leads to termination. The non-stationary Poisson monitoring becomes more informative and less frequent over time. After the trial period, the principal switches to a stationary Poisson monitoring structure with two-sided experiments. Bad news leads to termination and good news leads to tenure.

#### Dynamic Opinion Aggregation: Long-run Stability and Disagreement

This paper proposes a model of non-Bayesian social learning in networks that accounts for heuristics and biases in opinion aggregation. The updating rules are represented by nonlinear opinion aggregators from which we extract two extreme networks capturing strong and weak links. We provide graph-theoretic conditions on these networks that characterize opinions’ convergence, consensus formation, and efficient or biased information aggregation. Under these updating rules, agents may ignore some of their neighbors’ opinions, reducing the number of effective connections and inducing long-run disagreement for finite populations. For the wisdom of the crowd in large populations, we highlight a trade-off between how connected the society is and the nonlinearity of the opinion aggregator. Our framework bridges several scattered models and phenomena in the non-Bayesian social learning literature, thereby providing a unifying approach to the field.

#### Dynamic Political Investigations: Obstruction and the Optimal Timing of Accusations

This paper explores how an opposition party strategically times evidence-backed accusations against a political candidate, knowing that their accusation will trigger a formal investigation which the candidate may obstruct. Obstruction is costly but slows down the arrival rate of incriminating information. The candidate’s probability of election is decreasing in voters’ belief that the candidate is guilty, and if the investigation uncovers evidence of wrongdoing, he may face legal penalties. We characterize how the optimal obstruction strategy changes over the course of an investigation and determine when the opposition releases evidence to trigger an investigation. When the election is close and evidence is credible or the opposition is the clear front-runner, they wait until right before the election to release evidence—an October Surprise—leaving the investigation no time to search for additional evidence before voting occurs. In contrast, when the election is close and evidence is weak or the candidate is the clear front-runner, the opposition releases evidence immediately to allow time for a full investigation. Obstruction interacts with this timing decision by making investigations less informative and inducing more October Surprises, which reduce voter information and welfare.

#### Dynamic Price Competition: Theory and Evidence from Airline Markets

We study dynamic price competition where sellers are endowed with finite capacities and face uncertain demands toward a sales deadline. With perfect information, price dynamics are determined not only by changing own-sale opportunity costs and demand, but also by strategic incentives to soften future price competition. We study equilibrium properties and apply our framework to the airline industry using daily pricing and bookings data for competing airlines. We show that the use of pricing algorithms—similar to those implemented in practice—softens price competition but does not create additional dead-weight loss compared to the perfect information benchmark.

#### Dynamic Pricing with Limited Commitment

A monopolist wants to sell one item per period to a consumer with evolving and persistent private information. The seller sets a price each period depending on the history so far, but cannot commit to future prices. We show that, regardless of the degree of persistence, any equilibrium under a D1-style refinement gives the seller revenue no higher than what she would get from posting all prices in advance.

#### Dynamic Reward Design

#### Dynamics of Risky Agreements

We study the formation, dissolution and duration of risky agreements. Two agents decide whether to participate in an agreement at each instant over an infinite horizon. The agreement may favor one agent, but which agent might be favored is, ex-ante, unknown to both agents. A favorable agreement is beneficial compared to no agreement, but an unfavorable agreement is costly. Either agent can kill the agreement at any time. Each agent chooses to participate in the agreement if they are sufficiently optimistic that it is favorable to them. In equilibrium agents behave as if myopic, and agreement duration is generically inefficient.

#### Economic Harmony—A Rational Theory of Fairness and Cooperation in Strategic Interactions

Experimental studies show that the Nash equilibrium and its refinements are poor predictors of behavior in non-cooperative strategic games. Cooperation models, such as ERC and inequality aversion, yield superior predictions compared to the standard game theory predictions. However, those models fall short of providing a general theory of behavior in economic interactions. In two previous articles, we proposed a rational theory of behavior in non-cooperative games, termed Economic Harmony theory (EH). In EH, we retained the rationality principle but modified the players’ utilities by defining them as functions of the ratios between their actual and aspired payoffs. We also abandoned the equilibrium concept in favor of the concept of “harmony,” defined as the intersection of strategies at which all players are equally satisfied. We derived and tested the theory’s predictions of behavior in the ultimatum game, the bargaining game with alternating offers, and the sequential common-pool resource dilemma game. In this article, we summarize the main tenets of EH and its previous predictions and test its predictions for behavior in the public goods game and the trust game. We demonstrate that the harmony solutions account well for the observed fairness and cooperation in all the tested games. The impressive predictions of the theory, without violating the rationality principle or adding free parameters, indicate that the role of benevolent sentiments in promoting fairness and cooperation in the discussed games is only marginal. Strikingly, the Golden Ratio, known for its aesthetically pleasing properties, emerged as the point of fair demands in the ultimatum game, the sequential bargaining game with alternating offers, and the sequential CPR dilemma game. The emergence of the golden ratio as the fairness solution in these games suggests that our perceptions of fairness and beauty are correlated.
Because the harmony predictions were tested ex post, future experiments are needed to conduct ex ante tests of the theory in the discussed games and in other non-cooperative games. Given the good performance of economic harmony where game theory fails, we hope that experimental economists and other behavioral scientists will undertake such a task.
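One way the golden ratio can emerge as the harmony point of the ultimatum game is worth sketching; this is an illustrative reconstruction under assumed aspiration levels, not necessarily the theory's exact derivation. Normalize the pie to 1 and let the proposer demand x; if the proposer aspires to the whole pie while the responder's aspiration is taken to equal the proposer's demand, equating satisfaction ratios gives

```latex
\frac{x}{1} \;=\; \frac{1 - x}{x}
\quad\Longrightarrow\quad
x^{2} + x - 1 = 0
\quad\Longrightarrow\quad
x \;=\; \frac{\sqrt{5} - 1}{2} \;=\; \varphi^{-1} \;\approx\; 0.618,
```

so the proposer keeps roughly 61.8% of the pie and offers roughly 38.2%, the golden-ratio fair demand described above.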

#### Efficient Cheap Talk in Complex Environments

Decision making in practice is often difficult, with many actions to choose from and much that is unknown. Experts play a particularly important role in such complex environments. We study the strategic provision of expert advice in the classic sender-receiver game when the environment is complex. We identify an efficient cheap talk equilibrium for (potentially) bounded action spaces that is sender-optimal. In fact, the equilibrium action is exactly what the sender would choose were she to hold full decision making power. This contrasts with the inefficient equilibria of the canonical model, in which decision making environments are simpler. Thus, strategic communication is not only more favorable to the expert when the environment is complex, it is also more effective.

#### Efficient Entry in Cournot (Global) Games

We present a two-stage entry game in which a large number of firms simultaneously choose whether to enter a market. Firms that enter produce a homogeneous good and face Cournot competition under a parametrized demand. Using a global game approach, we show that a unique equilibrium is selected in the first-stage entry game, in which entry is efficient, i.e., the firms that enter are those with the lowest entry costs, providing a theoretical foundation for the equilibrium selection assumption used in the empirical entry literature. We also explore efficiency properties of the selected equilibrium and provide examples that do not fit our general framework but where similar results may be obtained.

#### Efficient networks in connections models with heterogeneous nodes and links

We culminate the extension of the results on efficiency in the seminal connections model of Jackson and Wolinsky (1996), partially addressed in previous papers. The structure of efficient networks is characterized in a model where both nodes and links are heterogeneous, i.e. nodes have different values and the strength of a link depends on the amounts invested in it by the two nodes that it connects.

#### Egalitarian Resource Sharing Over Multiple Rounds

#### Equal Pay for Similar Work

Equal pay laws increasingly require that workers doing “similar” work are paid equal wages within a firm. We study such “equal pay for similar work” (EPSW) policies theoretically and test our model’s predictions empirically using evidence from a 2009 Chilean EPSW. When EPSW binds only across protected classes (e.g., no woman can be paid less than any similar man, and vice versa), firms segregate their workforce by gender. When there are more men than women in a labor market, EPSW increases the gender wage gap. By contrast, EPSW that is not based on protected class can decrease the gender wage gap.

#### Equivalent Mechanisms for Information Intermediation

An intermediary serves as a platform through which a producer with a private cost sells a product to a unit mass of consumers. The intermediary has information about the match values between consumers and the product, which can be used to inform both the consumers and the producer. This paper studies the revenue-maximizing mechanisms for the intermediary under two critical business models: the **retail** model and the **marketplace** model. We show that the market outcomes are equivalent across the two business models. Furthermore, it is optimal for the intermediary to either (i) provide upper-censored partial information to consumers and no information to the producer, or (ii) fully inform consumers about their values and partially disclose that information to the producer and induce **quasi-perfect** price discrimination. This result suggests that product recommendation and price discrimination are outcome-equivalent mechanisms for an intermediary.

#### Evidence Games: Lying Aversion and Commitment

The voluntary disclosure literature suggests that in evidence games, where the informed sender chooses which pieces of evidence to disclose to the uninformed receiver who determines his payoff, commitment has no value, as there is a theoretical equivalence of the optimal mechanism and the game equilibrium outcomes. In this paper, we experimentally investigate whether the optimal mechanism and the game equilibrium outcomes coincide in a simple evidence game. Contrary to the theoretical equivalence, our results indicate that outcomes diverge and that commitment has value. We also show theoretically that our experimental results are explained by accounting for lying-averse agents.

#### Evident Competition

United States civil courts rely on an adversarial system where two parties in a lawsuit obtain and present evidence according to a process known as discovery. Two discovery regimes are predominantly used: voluntary disclosure, which does not require parties to reveal all evidence in their possession, and formal discovery, which does. How do these regimes influence the extent of the parties’ search for evidence and the information available to the judge? We find that each regime has its advantages: voluntary disclosure tends to provide a stronger incentive to search relative to formal discovery, but formal discovery ensures the judge is better informed conditional on the evidence found. Furthermore, the quality of evidence plays an important role: when evidence is decisive for the judge, parties are encouraged to search more and present more evidence. Our results can help explain the legal literature’s inconclusive findings on the relationship between disclosure and settlement.

#### Evolutionarily Stable (Mis)specifications: Theory and Applications

We introduce an evolutionary framework to evaluate competing (mis)specifications in strategic situations, focusing on which misspecifications can persist over correct specifications. Agents with heterogeneous specifications coexist in a society and repeatedly play a stage game against random opponents, drawing Bayesian inferences about the environment based on personal experience. One specification is *evolutionarily stable* against another if, whenever sufficiently prevalent, its adherents obtain higher average payoffs than their counterparts. Agents’ equilibrium beliefs are constrained but not wholly determined by specifications. Endogenous belief formation through the learning channel generates novel stability phenomena compared to frameworks where single beliefs are the heritable units of cultural transmission. In linear-quadratic-normal games where players receive correlated signals but possibly misperceive the information structure, the correct specification is evolutionarily unstable against a correlational error whose direction depends on social interaction structure. We also endogenize coarse thinking in games and show how its prevalence varies with game parameters.

#### Ex-Ante Design of Persuasion Games

How does receiver commitment affect incentives for information revelation in Bayesian persuasion? We study many-sender persuasion games where a single receiver commits to a posterior-dependent action profile, or *allocation*, before senders design the informational environment. We develop a novel revelation-like principle for *ex-ante* mechanism design settings where sender reports are Blackwell experiments and use it to characterize the set of implementable allocations in our model. We show global incentive constraints are pinned down by “worst-case” punishments at finitely many posterior beliefs, whose values are independent of the allocation. Moreover, the receiver will generically benefit from the ability to randomize over deterministic outcomes when solving for the constrained optimal allocation, in contrast to standard mechanism design models. Finally, we apply our results to analyze efficiency in multi-good allocation problems, full surplus extraction in auctions with allocation externalities, and optimal audit design, highlighting the role that *monotone* mechanisms play in these settings.

#### Experimental cost of information

We relate two main representations of the cost of acquiring information: a cost that depends on the experiment performed, as in statistical decision theory, and a cost that depends on the distribution of posterior beliefs, as in applications of rational inattention. We show that in many cases of interest, posterior-based costs are inconsistent with a primitive model of costly experimentation. The inconsistency is at the core of known limits to the application of rational inattention in games and, more broadly, equilibrium analyses where beliefs are endogenous; we show that an experiment-based approach helps to understand and overcome these difficulties.

#### Experimental Persuasion

We introduce experimental persuasion between Sender and Receiver. Sender chooses an experiment to perform from a feasible set of experiments. Receiver observes the realization of this experiment and chooses an action. We characterize optimal persuasion in this baseline regime and in an alternative regime in which Sender can commit to garble the outcome of the experiment. Our model includes Bayesian persuasion as the special case in which every experiment is feasible; however, our analysis does not require concavification. Since we focus on experiments rather than beliefs, we can accommodate general preferences including costly experiments and non-Bayesian inference.

#### Experts & Experiments

We develop a two-period model of decision making under uncertainty. The key novelty is that the decision maker can both consult an expert for advice and experiment, learning from his experience. We characterize a family of equilibria in which expert advice and experimentation coexist on the equilibrium path. We show the decision maker’s ability to experiment shapes the advice he receives from the expert and, in turn, that the expert’s advice shapes the experiments the decision maker undertakes. In equilibrium, expert advice and experimentation are complements. The more precisely the expert communicates, the greater the decision maker’s incentive to experiment. However, there exists an upper bound on the quality of advice that the expert can provide in equilibrium, and this bound is lower than when the decision maker cannot experiment. The ability to experiment empowers the decision maker but, in so doing, makes communication with the expert more difficult, so much so that both players can be left worse off.

#### Exploding Offers, Risk Aversion and Welfare

We study exploding offers focusing on the strategic interaction between a low-tier firm with one open position and a pool of workers. Each worker has a private value for the firm and receives offers from preferred top-tier firms according to an exogenous stochastic process. First, we show that workers’ risk aversion makes exploding offers prevalent in equilibrium independently of the top-offer arrival processes. Second, we show that when exploding offers are prevalent, workers can fall through the cracks: a worker may receive no top offer and be willing to accept an offer from the low-tier firm, which instead proposes to a less preferred candidate. Finally, we show that, although exploding offers have a welfare-augmenting screening effect, the utilitarian welfare of sufficiently risk-averse workers is maximized in all equilibria if and only if exploding offers are banned.

#### Exploration and Exploitation in R&D Competition

This paper considers a dynamic model of R&D in which firms navigate a trade-off between exploration (i.e., staying in the patent race) and exploitation (i.e., competing in the market). In the model, greater rivalry in the patent race has an ambiguous effect on equilibrium R&D incentives. On the one hand, there is a higher chance of rival success, which raises R&D incentives through a *racing effect*. On the other hand, there is less rivalry in the product market, which lowers R&D incentives through a novel economic force: the *competition effect*. Once considered, the competition effect has significant implications for the dynamics of firm investment; thus, it provides a richer description of real-world behavior than models that consider only racing effects. In terms of expected welfare, the total amount of R&D performed in equilibrium is socially insufficient. A change in market structure, specifically a merger to monopoly, may increase R&D incentives through enhanced appropriability. However, if the trade-off between exploration and exploitation is large, then a merger *always reduces* R&D incentives, regardless of its effect on appropriability.

#### Extreme Equilibria

We study the structure of mixed Nash equilibria of general normal-form games. We provide a simple characterization of Nash equilibria that are extreme points. Our characterization implies that any mixed Nash equilibrium that is sufficiently random can either be improved by correlating agents’ actions or by switching to a less random equilibrium, regardless of the designer’s objective. Notably, symmetric equilibria across a range of symmetric games are shown to be suboptimal, irrespective of the goal. Furthermore, our analysis extends to various applications such as voting, auctions, and potential games, where commonly studied equilibria are found to be susceptible to improvement.

#### Fair Division of Indivisible Items: The Incompatibility of Envy-Freeness (Both EF and EFX) and Pareto-Optimality

In the 2-person fair division of indivisible items, there may be no envy-free (EF) allocation with additive utilities, whereby each player thinks it received at least as valuable a bundle of items as the other player. But if there is an EF allocation, there is also one that is both Pareto-optimal (PO) and maximin (MX).

For *n* > 2 players, this link breaks down: An EF allocation may be neither PO nor MX. We illustrate this clash between EF and PO-MX allocations with 3-person and 4-person examples.

In the *n*-person case, a weakened form of EF, called EFX–or EF with the removal of any item X from an envied player’s bundle–also clashes with PO-MX and another welfare property, maximum Nash welfare (MNW), based on the product of player utilities. In short, because of the fundamental conflict between EF-EFX allocations, on the one hand, and PO-MX-MNW allocations on the other, one may need to make a difficult choice.

A long-standing conjecture is that if there is no EF allocation, there is at least an EFX allocation, with envy once removed. We do not prove or disprove this conjecture. We show, instead, that if some players consider some items worthless (0-valued), an EFX allocation may require that the players receive some of these worthless items, even in the 2-person case. This makes EFX allocations Pareto-inferior to non-EFX allocations, casting doubt on their desirability as a second-best alternative to EF allocations.
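The 2-person non-existence claim above can be checked by brute force over all allocations; a minimal Python sketch with additive utilities (the function name and toy valuations are illustrative, not from the paper):

```python
from itertools import product

def ef_allocations(valuations):
    """Enumerate all allocations of indivisible items and return the
    envy-free ones, given additive utilities (one row per player)."""
    n_players, n_items = len(valuations), len(valuations[0])
    result = []
    for assign in product(range(n_players), repeat=n_items):
        bundles = [[j for j in range(n_items) if assign[j] == i]
                   for i in range(n_players)]
        # player i envies k if i values k's bundle strictly more than its own
        envy = any(
            sum(valuations[i][j] for j in bundles[k]) >
            sum(valuations[i][j] for j in bundles[i])
            for i in range(n_players) for k in range(n_players)
        )
        if not envy:
            result.append(bundles)
    return result

# A single item both players value: someone is left empty-handed and envies.
print(ef_allocations([[1], [1]]))                 # [] -- no EF allocation
# Two identical items: giving each player one item is envy-free.
print(len(ef_allocations([[1, 1], [1, 1]])) > 0)  # True
```

The one-item, two-player market is the smallest instance of the clash: whichever player receives nothing values the other's bundle strictly more than their own empty bundle.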

#### Fair pricing on a platform with heterogeneous sellers: A cooperative game approach

A two-sided market platform that facilitates trade between sellers and buyers enters into the sellers’ space with its own product or services offerings. This creates heterogeneity among the sellers in terms of their competitive position on the platform. The sellers face positive *cross-side externalities* from a higher participation level of buyers (and vice versa), and negative *same-side externalities* from a higher participation level of sellers. Under this modeling choice, we develop a cooperative game-based framework to study the fairness issues in the pricing decision of the platform. The framework proposes converting the pricing decision problem of the platform into a cooperative game-based payoff allocation problem, and then characterizing a fair pricing structure using a fairness-based solution concept from cooperative game theory. This paper also contributes to the methodological literature of analyzing market platforms as cooperative games, an alternative to the traditional method of equilibrium points.

#### Fairer Chess: A Reversal of Two Opening Moves in Chess Creates Balance Between White and Black

Unlike tic-tac-toe or checkers, in which optimal play leads to a draw, it is not known whether optimal play in chess ends in a win for White, a win for Black, or a draw. But after White moves first in chess, if Black has a double move followed by a double move of White and then alternating play, play is more balanced because White does not always tie or lead in moves. Symbolically, Balanced Alternation gives the following move sequence: After White’s (W) initial move, first Black (B) and then White each have two moves in a row (BBWW), followed by the alternating sequence, beginning with W, which altogether can be written as WB/*BW*/WB/WB/WB… (the slashes separate alternating pairs of moves). Except for reversal of the 3rd and 4th moves from WB to BW (italicized), this is the standard chess sequence. Because Balanced Alternation lies between the standard sequence, which favors White, and a comparable sequence that favors Black, it is highly likely to produce a draw with optimal play, rendering chess fairer. This conclusion is supported by a computer analysis of chess openings and how they would play out under Balanced Alternation.
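The balance claim can be checked by tallying White's cumulative move lead under each sequence; a minimal Python sketch (the `lead` helper is illustrative, not from the paper):

```python
def lead(sequence):
    """White's cumulative move lead (White moves minus Black moves)
    after each half-move of a sequence like "WBWB..."."""
    leads, diff = [], 0
    for mover in sequence:
        diff += 1 if mover == "W" else -1
        leads.append(diff)
    return leads

standard = "WB" * 5                 # WBWBWBWBWB: strict alternation
balanced = "WB" + "BW" + "WB" * 3   # WBBWWBWBWB: 3rd and 4th moves reversed

print(lead(standard))  # White always ties or leads: no entry is negative
print(lead(balanced))  # Black briefly leads after the 3rd half-move
```

Under strict alternation White's lead is always 1 or 0, whereas Balanced Alternation dips to -1 after Black's double move, which is the sense in which White does not always tie or lead in moves.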

#### Fake news in social media: A supply and demand approach

We introduce a model of a platform in which users encounter news of unknown veracity. Users vary in their propensity to share news and can learn the veracity of news at a cost. In turn, the production of fake news is both more sensitive to sharing rates and cheaper than its truthful counterpart. As in traditional markets, the equilibrium prevalence of fake news is determined by a demand and supply of misinformation. However, unlike in traditional markets, the exercise of market power is generally limited unless segmentation methods are employed. Combating fake news by lowering verification costs can be ineffective because it only weakly reduces the demand for misinformation. Likewise, the use of algorithms that imperfectly filter news for users can lead to greater prevalence and diffusion of misinformation. Our findings highlight the important role of natural elasticity measures for policy evaluation.

#### Favoritism and Social Stratification

Motivated by both contemporary and historical evidence, we develop a model for studying optimal taxation, ruler selection, and the impact of fractionalization in (ethnically-)divided societies. We show that the political environment generates social stratification, reinforces inequality, and fuels internal tensions. First, we show that the ruler optimally creates a ranking among social groups and demands lower taxes from higher ranks. This divide-and-conquer strategy (political favoritism) creates social stratification even among identical social groups and reinforces inequality by assigning higher ranks (thus lower taxes) to wealthier/stronger groups.

Second, we show that the ruler’s extractive capacity increases in society’s fractionalization, providing a normative explanation of why we should expect worse economic outcomes in these settings.

Finally, we show that, even though the ruler’s extractive capacity increases in the ruler’s own power, social groups select the strongest group as the ruler to minimize their tax burden, unless they are shielded by a representative institution.

#### Fees, Incentives, and Efficiency in Large Double Auctions

Fees are omnipresent in markets but, with few exceptions, are omitted in economic models—such as Double Auctions—of these markets. Allowing for general fee structures, we show that their impact on incentives and efficiency in large Double Auctions hinges on whether the fees are homogeneous (as, e.g., fixed fees and price fees) or heterogeneous (as, e.g., bid-ask spread fees). Double Auctions with homogeneous fees share the key advantages of Double Auctions without fees: markets with homogeneous fees are asymptotically strategyproof and efficient. We further show that these advantages are preserved even if traders have misspecified beliefs. In contrast, heterogeneous fees lead to complex strategic behavior (price guessing) and may result in severe market failures. Allowing for aggregate uncertainty, we extend these insights to market organizations other than the Double Auction.

Keywords: Double Auction, Fees, Transaction Costs, Incentives, Strategyproofness, Efficiency, Robustness.

#### Finding out who you are: a self-exploration view of education

I study the role of education as self-exploration. Students in my model have different priors about their talents and update their beliefs after receiving noisy signals about themselves. I characterize a socially optimal design of the signal structure. An optimal structure encourages a career in which participating students are on average more confident. I apply the model to students in the United States and estimate the parameters from data. Advanced science classes in high school tend to encourage science majors. Their estimated self-exploration value is a four-percent increase in earnings after graduation.

#### Fine-Grained Buy-Many Mechanisms are Not Much Better Than Bundling

Multi-item optimal mechanisms are known to be extremely complex, often offering buyers randomized lotteries of goods. In the standard buy-one model it is known that optimal mechanisms can yield revenue infinitely higher than that of any “simple” mechanism, even for the case of just two items and a single buyer. One possible explanation for this bizarre property is that the seller is unrestricted in their choice of mechanisms.

We introduce a new class of mechanisms, buy-k mechanisms, which smoothly interpolate between the classical buy-one mechanisms and buy-many mechanisms. Buy-k mechanisms allow the buyer to (non-adaptively) buy up to k menu options, progressively shrinking the seller’s feasible set of mechanisms. We show that restricting the seller to the class of buy-n mechanisms suffices to overcome the bizarre, infinite-revenue properties of the buy-one model for the case of a single, additive buyer. The revenue gap with respect to bundling, an extremely simple mechanism, is bounded by O(n^3) for any arbitrarily correlated distribution D over n items. For the special case of n = 2, we show that the revenue-optimal buy-2 mechanism obtains no more than 40 times the revenue from bundling. Our upper bounds also hold for the case of adaptive buyers.

Finally, we show that allowing the buyer to purchase a small number of menu options does not suffice to guarantee sub-exponential approximations. If the buyer is only allowed to buy k = Θ(n^(1/2−ε)) many menu options, the gap between the revenue-optimal buy-k mechanism and bundling may be exponential in n. This implies that no “simple” mechanism can get a sub-exponential approximation in this regime. Moreover, our lower bound instance, based on combinatorial designs and cover-free sets, uses a buy-k deterministic mechanism. This allows us to extend our lower bound to the case of adaptive buyers.

#### Fragile Stable Matchings

We study decentralized one-to-one matching markets. Roth and Vande Vate (1990) showed that, for any unstable matching, there are simple dynamics generating a stable one. Nonetheless, stable outcomes are fragile. First, we prove that under mild conditions any stable matching can be attained using these dynamics from any unstable one. Next, we quantify fragility. We exhibit markets in which (i) some stable matchings are more robust than others; (ii) extremal stable matchings are most fragile; (iii) all stable matchings are fragile. Finally, even in markets with unique stable matchings, re-equilibration usually takes a long time and leaves many market participants unmatched and rematched for a substantial number of periods. We prove that the addition of a small fraction of market participants can make stabilization dynamics in the new market take an exponentially long time for almost any perturbation.
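The Roth and Vande Vate dynamics referenced above, in which a randomly chosen blocking pair is satisfied each period, can be sketched with a small simulation (a minimal illustration assuming complete preference lists; the function names and example profile are illustrative, not the paper's model):

```python
import random

def blocking_pairs(matching, m_prefs, w_prefs):
    """Pairs (m, w) who both prefer each other to their current partners
    (being unmatched is worse than any listed partner)."""
    def prefers(prefs, new, cur):
        return cur is None or prefs.index(new) < prefs.index(cur)
    return [(m, w) for m in m_prefs for w in w_prefs
            if matching.get(m) != w
            and prefers(m_prefs[m], w, matching.get(m))
            and prefers(w_prefs[w], m, matching.get(w))]

def satisfy(matching, m, w):
    """Match m with w, divorcing their current partners."""
    for old in (matching.pop(m, None), matching.pop(w, None)):
        if old is not None:
            matching.pop(old, None)
    matching[m], matching[w] = w, m

def random_path_to_stability(m_prefs, w_prefs, matching=None, seed=0):
    """Iteratively satisfy a random blocking pair until none remains."""
    rng = random.Random(seed)
    matching = dict(matching or {})
    while True:
        bp = blocking_pairs(matching, m_prefs, w_prefs)
        if not bp:
            return matching
        satisfy(matching, *rng.choice(bp))
```

Starting from the empty (fully unmatched) matching, the process converges with probability one to a stable matching; the paper's fragility questions concern which stable matching is reached and how long re-equilibration takes.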

#### Fragility of Confounded Learning

We consider an observational learning model with an exogenous public payoff shock. We show that confounded learning does not arise for almost all private signals and almost all shocks, even if players have sufficiently divergent preferences.

#### From Prejudice to Racial Profiling and Back

A designer conducts random searches to detect criminals and may condition the search probability on individuals’ appearance. She updates her belief about the distribution of criminals across appearances using her search results, but mistakes her sample distribution for the population distribution.

In equilibrium, she employs optimal search probabilities given her belief, and her belief is consistent with her findings. We show that she discriminates against an appearance if and only if she overestimates the probability that this appearance is criminal. Moreover, in a linear model, tightening her budget worsens the situation of those most discriminated against.

#### Full Surplus Extraction and Consideration Sets

We examine the surplus extraction problem in a novel mechanism design setting with consideration sets. In our model, consideration sets are defined as the sets of types a particular type can imitate. We characterize sufficient conditions that guarantee full surplus extraction in a finite version of the reduced form environment of McAfee and Reny (1992). While the standard convex independence condition identified in Crémer and McLean (1988) is still sufficient, it can be partially relaxed in this context. We also discuss two simple environments in which the characterization can be easily interpreted: a separable environment and an environment with honest types.

#### Gacha Game: When Prospect Theory Meets Optimal Pricing

This paper studies the pricing problem of selling a unit good to a prospect theory buyer. With non-negativity constraints on the price, the optimal profit is always bounded. This suggests a fundamental distinction between random selling mechanisms and gambling, where the principal can extract infinite profit.

If the buyer is naive about her dynamic inconsistency, the uniquely optimal dynamic mechanism is to sell a “lucky chest” that delivers the good with some constant probability in each period. Until she finally gets the good, the consumer always naively believes she will try her luck just one last time.

In contrast, if the buyer is sophisticated, the uniquely optimal dynamic mechanism includes a “pity system”: after successive failures to obtain the good from all previous lucky chests, the buyer can purchase the good at full price.

#### Games under the Tiered Deferred Acceptance Mechanism

We study a multi-stage admission system, known as the Tiered Deferred Acceptance mechanism, designed to benefit some schools over others. The current New York City public school and Chinese college admission systems are two examples. In this system, schools are partitioned into tiers, and the Deferred Acceptance algorithm is applied within each tier. Once assigned, students cannot apply to schools in subsequent tiers. This mechanism is not strategyproof. Therefore, we study the Nash equilibria of the induced preference revelation game. We show that Nash equilibrium outcomes are nested in the sense that merging tiers preserves all equilibrium outcomes. We also identify within-tier acyclicity as a necessary and sufficient condition for the mechanism to implement stable matchings in equilibrium. Our findings suggest that transitioning from the Deferred Acceptance mechanism to the Tiered Deferred Acceptance mechanism may not improve student quality at top-tier schools as intended.
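The tiered procedure described above can be sketched as follows: run the Deferred Acceptance algorithm within each tier in order, removing assigned students before moving to the next tier (a minimal illustration under the stated assumptions; the data structures and example are hypothetical, not the paper's formal model):

```python
def deferred_acceptance(student_prefs, school_prefs, capacity):
    """Student-proposing DA: students propose down their lists;
    schools tentatively hold their highest-ranked proposers."""
    rank = {c: {s: r for r, s in enumerate(p)} for c, p in school_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}
    held = {c: [] for c in school_prefs}
    free = list(student_prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue  # s has exhausted her list and stays unmatched
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda x: rank[c][x])
        if len(held[c]) > capacity[c]:
            free.append(held[c].pop())  # school rejects its worst proposer
    return {s: c for c, lst in held.items() for s in lst}

def tiered_da(tiers, student_prefs, school_prefs, capacity):
    """Run DA tier by tier (highest tier first); students assigned in an
    earlier tier cannot apply to schools in subsequent tiers."""
    matching, unassigned = {}, set(student_prefs)
    for tier in tiers:
        prefs_t = {s: [c for c in student_prefs[s] if c in tier]
                   for s in unassigned}
        m = deferred_acceptance(prefs_t,
                                {c: school_prefs[c] for c in tier},
                                capacity)
        matching.update(m)
        unassigned -= set(m)
    return matching
```

Because a student locked into an early-tier assignment cannot later propose to a school she prefers in a lower tier, truthful reporting is no longer a dominant strategy, which is what motivates the equilibrium analysis in the paper.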

#### General Forms of Berge’s Maximum Theorem and their Applications to Games with Perfect Information

#### General theory of equilibrium in models with complementarities

We develop the general theory of equilibrium in models with complementarities used widely in economics and other disciplines. We isolate weak complementarity conditions that unify the study of equilibrium in these models. Using these weaker conditions, we strictly generalize structure theorems due to Tarski (1955) and Zhou (1994) and use our results to prove that the equilibrium set in all the standard models is a nonempty complete lattice without any new assumptions on primitives, solving a long-standing problem in the literature. Combined with two new and natural set comparison relations (star complete and star lattice relations), we provide new and widely applicable theories of order-nearest approximation of equilibria and of monotone comparative statics (MCS) of equilibrium correspondences. Our meta-theorems apply regardless of the manner in which individual choices are made as long as they satisfy our weak conditions, which are proved to hold in standard models prevalent in the literature.

#### Global Manipulation by Local Obfuscation

We study adversarial information design in a regime-change context. A continuum of agents simultaneously chooses whether to attack the current regime. The attack succeeds if and only if the mass of attackers outweighs the regime’s strength. A designer manipulates information about the regime’s strength to maintain the status quo. Our optimal information structure exhibits local obfuscation: some agents receive a signal matching the regime’s true strength, and others receive an elevated signal professing slightly higher strength. This policy is the unique limit of finite-signal problems. Public signals are strictly suboptimal, and in some cases where public signals become futile, local obfuscation guarantees the collapse of agents’ coordination, making the designer’s information disclosure time consistent and relieving the usual commitment concern. The model is applied to understand the transition of the dominant form of autocrats in the 21st century.

#### Heads in the sand: Information Aversion in a Market Context

In this paper, we consider information avoidance in product markets. We show that misinformation can be an equilibrium outcome if consumers receive disutility when proven wrong in their product quality assessment. Consumers, however, are assumed to respond to market and other incentives. The incentive to learn contradicting information increases in the price of the product. The possibility of false information in equilibrium provides a rationale for regulation or establishing tort liability. However, even though regulation dampens the effects of information aversion, laissez-faire might still be better for consumers even when regulation is highly effective.

#### Hierarchical Bayesian Persuasion: Importance of Vice Presidents

We study strategic information transmission in a hierarchical setting where information is transmitted through a chain of agents up to a decision maker whose action matters to every agent. This situation arises whenever an agent can communicate with the decision maker only through a chain of intermediaries, for example, an entry-level worker and the CEO in a firm, or an official at the bottom of the chain of command and the president in a government. Each agent can decide to conceal part or all of the information she receives. After proving that we can focus on simple equilibria, in which the only player who conceals information is the first one, we provide a tractable recursive characterization of the equilibrium outcome and show that it can be inefficient. Interestingly, in the binary-action case, regardless of the number of intermediaries, there are a few pivotal ones who determine the amount of information communicated to the decision maker. In this case, our results underscore the importance of choosing a pivotal vice president for maximizing the payoff of the CEO or president.

#### How do you know you won’t like it if you’ve never tried it? Preference discovery and strategic bundling

We model the interaction between a provider of composite bundles and a consumer who learns about her own preferences and the quality of single goods through consumption. The consumer can be interpreted as representative of many consumers. We analyze how the provider can strategically manipulate bundles to bias the consumer’s learning process, favoring or disfavoring specific goods. Strategic bundling can also delay learning and increase profits by leveraging temporary biases. We combine results from ordinary least squares estimation and decision theory, adopting concepts from network theory. Finally, we provide intuition for the mechanism of the model with a toy example from the movie industry.

#### How to De-reserve Reserves: Admissions to Technical Colleges in India

We study the joint implementation of reservation and de-reservation policies in India, which has enforced comprehensive affirmative action since 1950. A landmark judgment of the Supreme Court of India in 2008 mandated that whenever the OBC category (with 27 percent reservation) has unfilled positions, they must be reverted to general-category applicants in admissions to public schools, without specifying how to implement this. We expose the drawbacks of the recently reformed allocation procedure in admissions to technical colleges and offer a solution through “de-reservation via choice rules.” We propose a novel priority design, the Backward Transfers (BT) choice rule, for institutions, and the deferred acceptance mechanism under these choice rules (DA-BT) for centralized clearinghouses. We show that DA-BT corrects the shortcomings of existing mechanisms. By formulating India’s legal requirements and policy goals as formal axioms, we show that the DA-BT mechanism is unique for the concurrent implementation of reservation and de-reservation policies.

#### Hurwicz meets Veatch: Rationing deceased-donor transplants under dynamic asymmetric information

Since the late 1980s, there has been a heated debate on the principles of distributive justice for rationing transplants. At the same time, it is well known that the U.S. transplantation authority has recurrently faced a pervasive problem of asymmetric information about transplant candidates’ medical urgency. I investigate the optimal design of prioritization rules under different social welfare functions, taking patients’ incentives to misrepresent medical needs into account, and analyze their long-run stability. While the history of reports of medical urgency could always be used to incentivize truth-telling, it is not necessarily optimal to do so. When the social objective is to minimize the mass of unserved sick patients, the optimality of screening is ambiguous and depends on the parameter region. In sharp contrast, when the objective is utilitarian, screening is not optimal in general. Moreover, while the prescribed optimal policies for the two objectives are in general different, there is a region of parameters where they coincide, in which case, once the incentive problem is taken into account, the two principles of distributive justice are no longer in conflict.

#### Identification and Estimation in Search Models with Social Information

We propose a theoretical analysis of the conditions under which estimates of search cost distributions are biased when Bayes rational agents search in the presence of social information. We extend the canonical empirical sequential and simultaneous search models by allowing a share of the agents in the population to observe the choice of one of their social connections. We find that social information changes agents’ optimal search decisions. We compute the estimator of search cost distributions under various standard datasets. We find that neglecting social information typically leads to biased and inconsistent estimates of search cost distributions, with the bias sign and magnitude depending on the dataset’s content. The bias magnitude is increasing in the share of agents in the population with social information. We also discuss offline estimation techniques, exogenous variations in the data, and partial identification approaches that are useful to recover correct estimates of search cost distributions.

#### Identity-based Elections

We study the electoral implications of motivated media choice by Bayesian citizens aiming to preserve their political identity. In addition to their chosen media, citizens are somewhat exposed to outside information, which they try to counteract. When the outside information is unbiased, substantial political advantage may accrue to the side whose base is less exposed to it, or whose base incorrectly believes that it is imprecise or biased. Biased outside information works against the side that propagandizes. Finally, propaganda is beneficial only if citizens are unaware of its bias or in the case of a regime with censorship.

#### Impacts of Public Information on Flexible Information Acquisition

Interacting agents receive public information at no cost and flexibly acquire private information at a cost proportional to entropy reduction. When a policymaker provides more public information, agents acquire less private information, thus lowering information costs. Does more public information raise or reduce uncertainty faced by agents? Is it beneficial or detrimental to welfare? To address these questions, we examine the impacts of public information on flexible information acquisition in a linear-quadratic-Gaussian game with arbitrary quadratic material welfare. More public information raises uncertainty if and only if the game exhibits strategic complementarity, which can be harmful to welfare. However, when agents acquire a large amount of information, more provision of public information increases welfare through a substantial reduction in the cost of information. We give a necessary and sufficient condition for welfare to increase with public information and identify optimal public information disclosure, which is either full or partial disclosure depending upon the welfare function and the slope of the best response.

#### Implementation in vNM stable set

We fully identify the class of social choice functions that are implementable in von Neumann–Morgenstern (vNM) stable sets (von Neumann and Morgenstern, 1944) by a rights structure. A rights structure formalizes the idea of power distribution in a society. Following the so-called Harsanyi’s critique (Harsanyi, 1974), we also study the implementation of social choice correspondences in strict vNM stable sets.

#### Implementation with Statistics

A method of implementation is introduced for collective decision problems when only some statistics about the type space Ω are known: first, use those statistics to whittle Ω down to a high-probability event Ω*; then, design a mechanism M* to ex-post implement the desired outcome treating Ω* as the type space. Viewed as a mechanism over the true type space Ω, M* typically fails ex-post implementation. However, under a weaker solution concept I call ε-ex-post equilibrium, M* implements the desired outcome in a high-probability subevent of Ω*. An application to a repeated allocation problem shows how implementation with statistics can yield significantly better results than ex-post implementation.

#### In Search of a Unicorn

The search for valuable investment opportunities is one of the fundamental responsibilities of corporate managers. Existing studies of this search process usually model the investment opportunity as a binary signal, and the role of the manager ends when such a signal arrives. This paper studies a dynamic agency model in which investors delegate to a manager the search for valuable investment opportunities arriving stochastically, with two novel features. First, investment targets arrive at different levels of quality that are observable only to the manager. Second, once the investment target is chosen, the same manager is also in charge of the ensuing production process and can continue to exploit his superior information about the target to extract rents from the investors. These features imply an adverse selection problem interacting with a moral hazard problem. The optimal contract features a progressively lower threshold for investment if a suitable target is not found in time. The investment threshold is always lower than the first-best along the equilibrium path, consistent with the over-investment behavior observed in practice. The theoretical predictions of the model offer empirically relevant hypotheses regarding the strategies and returns of mergers and acquisitions, hedge fund activism, and special purpose acquisition companies.

#### Incentive Compatibility, Condorcet, and Borda

Two important voting systems are the Condorcet method(s) and the Borda count. In this paper it is shown that these are two endpoints of a continuum in which the Condorcet method corresponds to higher levels of information (and associated incentive compatibility constraints) and the Borda count corresponds to lower levels of information.
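To make the two endpoints concrete, here is a small sketch computing both rules for a hypothetical preference profile in which the Condorcet winner and the Borda winner differ (the profile and function names are illustrative, not from the paper):

```python
def borda(profile, alternatives):
    """Borda count: with m alternatives, a voter gives m-1 points to her
    top choice, m-2 to the next, and so on down to 0."""
    scores = {a: 0 for a in alternatives}
    for ranking in profile:
        for points, a in enumerate(reversed(ranking)):
            scores[a] += points
    return scores

def condorcet_winner(profile, alternatives):
    """The alternative beating every other in pairwise majority votes,
    or None if no such alternative exists."""
    def beats(a, b):
        return 2 * sum(r.index(a) < r.index(b) for r in profile) > len(profile)
    for a in alternatives:
        if all(beats(a, b) for b in alternatives if b != a):
            return a
    return None
```

For example, with three voters ranking a > b > c and two ranking b > c > a, alternative a wins every pairwise contest (the Condorcet winner), while b collects the highest Borda score, illustrating how the two rules aggregate different amounts of ranking information.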

#### Incentive Design for Talent Discovery

We study how career concerns within an organization distort employee risk-taking. When employees act to maximize their chances of promotion, aggregate risk-taking can be either too high or too low. Their choices can be influenced through incentive schemes which pay bonuses and/or reallocate promotions between groups of employees. We show that the optimal incentive tool depends on the desired power of incentives, with low-powered incentives optimally provisioned through bonuses while high-powered incentives are achieved by reallocating promotions. When asymmetric schemes are possible, the organization may further benefit from dividing employees into multiple groups and incentivizing different rates of risk-taking in each group.

#### Incentive Design with Spillovers

A principal uses bonuses conditioned on stochastic outcomes of a team project to elicit costly private efforts from the team members. We characterize the optimal allocation of incentive pay across agents and outcomes under arbitrary smooth team production functions. It is optimal to make the strength of an agent’s incentives proportional both to marginal productivity and to a measure of organizational centrality that reflects the strength of complementarities with productive colleagues.

Insights from the theory of network games play a crucial role in analyzing how incentives given to one agent spill over to others and shape the optimal contract. The results generalize Holmstrom’s characterization of optimal single-agent contracts under uncertainty to the multi-agent case.

#### Incentives and Peer Effects in the Workplace: On the Impact of Inferiority Aversion on Organizational Design

The article is concerned with the impact of social preferences on the optimal organizational design of firms. Based on stylized facts, we focus on two dimensions of that design. First, we document that organizations differ in the extent to which workers are integrated into single units with open internal communication or separated among many decentralized units. Second, we consider wage secrecy clauses and provide evidence on the positive effect wage transparency has on output and effort.

To theoretically analyze the two organizational aspects, we consider a stylized moral-hazard environment with other-regarding workers. In our setup, an employer generates output by engaging two workers. Output depends on each of the workers’ effort. When working jointly, output is further enhanced through a positive externality the workers generate on one another. Since effort is not contractible, the employer uses bonus contracts to align incentives. The workers are assumed to be inferiority averse. Accordingly, while their utility increases with their own income it decreases with that of their co-worker, provided the latter is observed and is higher. Under this setup, the employer first needs to decide whether to choose a joint (integrated) production setup, where both workers interact, or to separate them. Second, if integration is chosen, the employer decides whether to impose a wage secrecy rule or, on the contrary, make payments public.

We find that the optimal design depends on whether the employer faces a limited-liability constraint on the workers’ side that forces wages to be non-negative in all states. When such a constraint is not imposed, the presence of inferiority aversion increases employment costs, thereby making productive synergies and inferiority aversion strategic substitutes. As a result, if payments are common knowledge, integration will be chosen only if workers are not too inferiority averse and the production externality is sufficiently high. Imposing wage secrecy removes the cost associated with social comparisons, making worker integration the optimal choice. In stark contrast, in the presence of limited-liability constraints, productive synergies and inferiority aversion may become strategic complements. The constraint on wages implies that rents are paid as long as workers are not too inferiority averse. When working jointly and under common knowledge of pay outcomes, workers increase effort to reduce the likelihood of falling behind their co-worker’s wage. This comes at the expense of the rents, thereby providing a “free lunch” to the employer. Accordingly, beyond the productive externality, under wage transparency joint production enables the employer to exploit the incentive effect of pay inequality and raise productive efforts and profits. It is only when inferiority aversion is sufficiently high that it becomes optimal to impose wage secrecy, if possible, or separate the workers if not. In the same vein, employers may deliberately establish pay inequality by opting for individual performance pay rather than group bonuses. On the normative side, we conclude that popular pressures for transparency and “sunshine laws” may not be in the best interest of employees.

#### Incentives for Contract Designers and Contractual Design

This paper examines the optimal provision of incentives for contract designers and the implications for contractual design. A buyer hires an agent to draft a contract for the seller that is incomplete because the ex-ante specified design might not be appropriate ex-post. The degree of contract incompleteness is endogenously determined by the effort exerted by the agent, who can manipulate the buyer’s beliefs because his effort is not observable (moral hazard), and he is better informed at the outset (adverse selection). We discuss how the asymmetric information generated during the contract drafting stage explains some empirical observations and contracting phenomena.

#### Incentives for Research Effort: An Evolutionary Model of Publication Markets with Double-Blind and Open Review

Contemporary debates about scientific institutions and practice feature many proposed reforms. Most of these require increased efforts from scientists. But how do scientists’ incentives for effort interact? How can scientific institutions encourage scientists to invest effort in research? We explore these questions using a game-theoretic model of publication markets. We employ a base game between authors and reviewers, before assessing some of its tendencies by means of analysis and simulations. We compare how the effort expenditures of these groups interact in our model under a variety of settings, such as double-blind and open review systems. We make a number of findings, including that open review can increase the effort of authors in a range of circumstances and that these effects can manifest in a policy-relevant period of time. However, we find that open review’s impact on authors’ efforts is sensitive to the strength of several other influences.

#### Incentivizing Agents through Ratings

I study the optimal design of performance or product ratings to motivate agents’ performance or investment in product quality. The principal designs a rating that maps their quality (performance) to possibly stochastic scores. Agents have private information about their abilities (cost of effort/quality) and choose their quality. The market observes the scores and offers a wage equal to the agent’s expected quality [resp. ability]. For example, a school incentivizes learning through a grading policy that reveals students’ quality to the job market.

I first show that an incentive-compatible interim wage function can be induced by a rating (i.e., feasible) if and only if it is a mean-preserving spread of quality [resp. ability]. Thus, I reduce the principal’s rating design problem to the design of a feasible interim wage. When restricted to deterministic ratings, the rating design problem is equivalent to delegation with participation constraints (Amador and Bagwell, 2022). Using optimal control theory, I provide necessary and sufficient conditions under which lower censorship is optimal within deterministic ratings and solve for the optimal deterministic ratings in general. In particular, when the principal elicits maximal effort (quality), lower censorship [resp. pass/fail] is optimal if the density is unimodal [resp. increasing]. For general ratings, I provide sufficient conditions under which lower censorship remains optimal. In the effort-maximizing case, a pass/fail test remains optimal if the density is increasing.

#### Independence of Existence of Measurable Equilibrium Selections

We study the problem of selecting, in a measurable manner, equilibria

from each of a family of games. Typical equilibrium selection theorems

assume continuity in, and compactness of, actions, while we merely assume

equilibria exist for all games in the family, and payoffs are jointly

Borel in parameter and actions. The existence of Lebesgue- or universally

measurable selectors turns out to be independent of ZFC; the result

is robust to restriction to zero-sum games, as well as to allowing mixed

strategies. We show, however, that the existence of analytically measurable

selections, well-known to exist for single decision makers, fails for

families of two-player zero-sum games.

#### Individual delays, learning, and aggregate coordination with payoff complementarities

In a coordination game with incomplete information, where a positive payoff to individual investment requires a sufficiently large fraction of agents to invest (or an attacked status quo to be abandoned), does the option to delay facilitate coordination? In this paper, delaying agents observe a binary signal indicating whether the fraction of non-delaying agents surpasses a threshold. The answer to the question depends on the discount rate and on the observation threshold. If the discount rate (or the period length) is small, there is less coordination (and the status quo is more stable) than in the static one-period case. Successful coordination is especially unlikely when the observed event coincides with the fall of the status quo. The result is reversed when the discount rate is large or the observation threshold is small. In this case, however, when the heterogeneity of the agents (i.e., the variance of their private information) is sufficiently small, the unique equilibrium in monotone strategies is unstable. This property is indicative of the difficulty that agents may have in coordinating actions with strategic complementarities in a multi-period context. The model is analyzed in a two-period framework, which is extended to multiple periods. We discuss implications for macroeconomics, finance, and political stability.

#### Inductive Shapley values in cooperative transportation games

The Shapley value (Sh) gives each player in a cooperative game his marginal contribution to the Hart and Mas-Colell (HM) potential of the grand coalition. For cooperative transportation games, computing the worth of a coalition may involve solving a traveling salesman or a vehicle routing problem, which are known to be NP-hard. Moreover, the computation of Sh is NP-hard itself. We take into account that in applications one may face a time restriction, implying that a solution, i.e., an allocation of savings or profits, may be required before all worths of the coalitions, necessary for the computation of Sh, have been established.

An inductive Shapley value (ISh) gives each player in a cooperative game his marginal contribution to an inductive HM potential of the grand coalition. First, the worth of the grand coalition is determined, which (by assumption) is always possible. As long as the time constraint is not met, the worths of all coalitions with cardinality 1 are computed, then those with cardinality 2, and so on. Simultaneously, the HM potential of the game restricted to the coalition at hand is determined, as well as an auxiliary function based on all HM potentials found until then. If the computation time reaches the constraint while establishing worths for coalitions with cardinality U + 1, the auxiliary function determines the inductive HM potential for each coalition with more than U members based on the completed calculations up to U. If the constraint is not binding, ISh coincides with Sh.
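The potential formulation can be made concrete. Below is a minimal sketch of the exact (time-unconstrained) computation via the recursion P(S) = (v(S) + Σ_{i∈S} P(S∖{i})) / |S| with P(∅) = 0; the worth functions used are toy examples, not transportation games, and the paper's inductive cutoff and auxiliary function are not reproduced here:

```python
from itertools import combinations

def hm_potential(players, v):
    """Hart & Mas-Colell potential, computed bottom-up over coalition size:
    P(S) = (v(S) + sum over i in S of P(S minus {i})) / |S|, with P(empty) = 0."""
    P = {frozenset(): 0.0}
    for size in range(1, len(players) + 1):
        for coalition in combinations(players, size):
            S = frozenset(coalition)
            P[S] = (v(S) + sum(P[S - {i}] for i in S)) / size
    return P

def shapley(players, v):
    """Sh_i = P(N) - P(N minus {i}): each player's marginal contribution
    to the potential of the grand coalition N."""
    N = frozenset(players)
    P = hm_potential(players, v)
    return {i: P[N] - P[N - {i}] for i in players}
```

The inductive variant would cut this recursion off once coalitions of cardinality U + 1 can no longer be evaluated in time, substituting the paper's auxiliary function for the missing potentials.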

We introduce two axioms: sensitivity up to cardinality U, i.e., a value depends on the worths of all coalitions with cardinality at most U, and insensitivity beyond cardinality U, i.e., a value is independent of all worths of coalitions with cardinality larger than U, the grand coalition excluded. We show that ISh satisfies both axioms for any U; moreover, the axioms incorporate important aspects of fairness regarding the impact of computations being based on restricted information.

JEL-codes: C71

Keywords: Inductive Hart & Mas-Colell potentials, inductive Shapley values, cooperative transportation games

#### Inference with Selectively Disclosed Data

This paper considers the disclosure problem of a sender who wants to use hard evidence to persuade a receiver towards higher actions. When the receiver hopes to make inferences based on the distribution of the data, the sender has an incentive to drop observations to mimic the distributions observed under better states. We find that, in the limit when datasets are large, it is optimal for senders to play an imitation strategy, under which they submit evidence imitating the natural distribution under some desirable target state. The volume of data that the sender can submit must meet a certain standard, a “burden of proof”, before the receiver can be persuaded to take a high action. The outcome exhibits partial pooling: senders are honest when either they have little data or the state is good, but they try to deceive the receiver when they have access to a lot of data and the state is bad.

#### Information Acquisition with Uncertain Signal Structure

An agent repeatedly chooses between a familiar source and an unfamiliar source to learn a persistent fundamental. When the signal structure of the unfamiliar source is also uncertain, the learning problem is two-dimensional. Moreover, the two dimensions become naturally correlated as the agent chooses the unfamiliar source over time. This paper makes the novel observation that it is the correlation, rather than the marginal belief over the uncertain signal structure, that determines the informativeness of the unfamiliar source. Based on this observation, this paper characterizes the agent’s asymptotic choice of source. Under appropriate conditions, the agent settles on the familiar source with probability one for any joint prior, even if the true signal structure of the unfamiliar source Blackwell dominates that of the familiar source. As an implication, this result explains the well-documented preference towards familiarity from a rational perspective.

#### Information Cascades in Strategic Environments

This paper studies information aggregation in sequential learning environments in which agents face strategic incentives. A seminal result in the social learning literature establishes that information aggregation fails when rational agents choose actions sequentially unless signals meet strong informational requirements; namely, that posteriors remain unbounded whatever the prior (Smith and Sørensen, 2000). In the absence of arbitrarily strong signals, individual actions lead to information cascades—also known as rational herds—in which the information inferred from previous actions overwhelms the private information available to agents. Once this occurs, agents act as they would if they had no private information, and learning stops since their actions do not contain any new information.

The goal of this paper is to investigate the existence and character of information cascades in strategic environments. In other words, can there be “strategic herds”? Do strategic incentives change the conditions under which sequential choices lead to learning? Our main finding is that the conditions under which learning occurs in strategic environments are intrinsically distinct from the individual setting, and depend intimately on the specifics of the strategic incentives at play. Signals that are as informative as those needed to obtain learning in individual settings may lead to a failure of learning. Likewise, with the right strategic incentives, minimally informative signals—ones that would create information cascades with probability one in individual settings—can result in learning. Hence, whether information is aggregated successfully or not in the long-run depends on the environment in which individual agents take actions, and, thus, can be seen as a goal of policy or market design.

#### Information Design Beyond Prior and State

Classical information design models (in particular, Bayesian persuasion) require the knowledge of the prior belief and the state of the world, which is often unrealistic. This paper studies repeated persuasion problems in which the information designer (and the receiver) has no access to the prior or the state. The information designer has to learn the prior while persuading the receiver. We design learning algorithms for the information designer to design signaling schemes that achieve no regret compared to using the optimal signaling scheme with known prior, under three models of the receiver’s decision-making: (1) if the receiver knows the prior and is Bayesian-rational, then there exists a learning algorithm for the information designer with $O(\log^2 T)$ regret; (2) if the receiver does not know the prior and the signaling schemes have to be persuasive, then there exists a learning algorithm with $O(\sqrt{T \log T})$ regret; (3) if the receiver does not know the prior and employs a no-regret learning algorithm to take actions, then there exists a learning algorithm for the information designer that achieves regret $O(\sqrt{\mathrm{Reg}(T)\, T})$, where $\mathrm{Reg}(T) = o(T)$ is an upper bound on the receiver’s regret due to taking sub-optimal actions. Our work provides a comprehensive solution to the problem of repeated persuasion without access to the prior or states.

#### Information Design for Credence Goods

#### Information Design versus Auditing in Mitigating Hold-Up Risks

The asymmetric information that creates an ex-ante investment incentive can limit the principal’s ability to expropriate the gains from the investment at the contracting stage. In this article, we compare the effectiveness of ex-ante signal design and after-production auditing in an adverse selection model with effort and hold-up risk. Our results show that an ex-ante signal design is preferred if and only if the investment cost is low. The difference in payoffs generated by these two methods depends on their ability to induce investment and the relative efficiency of the agent’s effort levels in the respective contracts. When investment costs are low, a signal design contract, which involves less downward distortion of effort, results in a higher payoff for the principal. As investment cost rises, auditing becomes a better information instrument when the effect of a decreasing investment probability outweighs the impact of increased effort efficiency. When inducing investment becomes prohibitively costly, the principal may either deter investment through perfect signal design or impose a large punishment, leading to a complete hold-up.

#### Information exchange through secret vertical contracts

This paper studies a stylized common agency problem in which two downstream firms, who operate in separated markets and receive private signals about a common demand state, simultaneously offer a secret menu of two-part tariff contracts to their common supplier. While direct communication is not possible, they may still exchange their information through signal-contingent menus of vertical contracts. We show that a perfect Bayesian equilibrium exists in which information is fully transmitted, and the downstream firms obtain nearly the first best industry surplus. Our result suggests that efficient collusion with market allocation may not necessitate direct communication even when vertical contracts remain secret.

#### Information Payoffs: An Interim Perspective

We study the payoffs that can arise under some information structure from an interim perspective. There is a set of types distributed according to some prior distribution and a payoff function that assigns a value to each pair of a type and a belief over the types. Any information structure induces an interim payoff profile which describes, for each type, the expected payoff under the information structure conditional on the type. We characterize the set of all interim payoff profiles consistent with some information structure. We illustrate our results through applications.

#### Information requirement for efficient decentralized screening

We establish new efficiency results for decentralized markets with quality uncertainty. Buyers encounter a succession of passing trade opportunities and related asset information, allowing them to screen the quality of assets by conditioning pricing on informative signals. We link key equilibrium properties with the intensity of screening. This innovative approach delivers conditions under which efficient equilibria exist, characterizes efficient and inefficient equilibria in terms of asset screening and trade dynamics, and presents a new measure for the information required for efficient trade and asset screening. Trade dynamics may manifest as either *standard* or *reversed*.

#### Information Selling under Prior Disagreement

This paper studies monopolistic information selling in environments in which (1) the seller has limited commitment power, and (2) the buyer and the seller hold different beliefs about the state of the world. We show that in environments with a common prior, there is no advantage to selling information sequentially; the seller cannot achieve higher revenue than by offering an experiment that fully reveals the state in one period. We find that if, on the other hand, the agents agree to disagree about their prior beliefs, the seller achieves a strictly higher revenue by gradually selling information over multiple periods. Moreover, increasing the number of periods of the protocol strictly increases the seller’s expected revenue. In addition, in some environments, it is optimal for the seller to first offer a free sample test, i.e., an experiment that partially reveals information, at no charge.

#### Informative Simplicity in Matching

This paper introduces a measurement of informational size to matching problems, which inherits its ideas from Mount and Reiter (1974). This concept measures a matching mechanism’s simplicity based on the minimum size of information for effective operation. Our results reveal that the non-strategy-proof Immediate Acceptance (IA) mechanism is informatively simpler than three other commonly used strategy-proof matching mechanisms: the Deferred Acceptance (DA) mechanism, the Top Trading Cycles (TTC) mechanism, and the Serial Dictatorship (SD) mechanism. When the matching market contains at least four participants, the TTC demands less information than the DA to implement a desired allocation. We further apply this measurement to compare the information demands of static and dynamic auction mechanisms. This study provides a microfoundation for understanding the differences in credibility, auditability, and privacy protection among matching and auction mechanisms. It also offers a new perspective on evaluating mechanism complexity, steering economic design towards more transparent and efficient systems.

#### Informed Principal and Screening Problem

This paper studies an informed mechanism designer problem in which the principal’s private information is the number of agents. We define mechanical equivalence such that it holds if each agent’s and the principal’s perspectives are consistent in the sense that a conversion problem for a grand mechanism is resolved – each agent’s expected payment taking into account the principal’s private information can be incorporated into the principal’s revenue. With mechanical equivalence and, additionally, the principal’s expected payoff linearity, there is a single threshold for the optimal grand mechanism if a sub-mechanism cannot depend on the principal’s private information. Interestingly, the main result shows that if a sub-mechanism can also depend on his private information, the optimal grand mechanism is characterized by double thresholds such that the principal does not announce the number of agents if it is in the middle range. We further extend the signal structure to include rich signal sets.

#### Integrative Negotiation: An Economic Perspective

We expand the canonical bargaining framework used in the economics literature to incorporate the process of “integrative negotiation,” a phase in negotiation that has received significant attention outside of economics. Integrative negotiation consists of collaborative attempts of the negotiators to express their priorities and interests, and to jointly acquire information to increase the size of the pie; this is the first stage of our model. The second stage consists of a classic Nash bargaining model. We show that the type of information acquired in integrative negotiation is affected by changes in distributive negotiation. We identify non-monotonic relationships between bargaining power and players’ payoffs. Finally, we investigate to what extent side payments prior to the integrative phase can improve players’ payoffs.

#### Interim Strategy-Proof Mechanisms

We study a new robustness concept in mechanism design with interdependent values: Interim Strategy-Proofness (ISP). It requires that truth-telling is an interim dominant strategy for each agent, i.e., conditional on an agent’s own private information, truth-telling maximizes her interim expected payoff for all possible strategies the other agents could use. We first show that ISP mechanisms are higher-order belief-free: an agent’s first-order belief is sufficient to determine whether a strategy is interim dominant, whereas higher-order beliefs do not matter. We then provide full characterizations of ISP mechanisms in two classical settings: single-unit auctions and binary collective decision-making.

#### Inventory, Market Making, and Liquidity: Theory and Application to the Corporate Bond Market

We develop a search-theoretic model of a dealer-intermediated over-the-counter market. Our key departure from the literature is to assume that when a customer meets a dealer, the dealer can only sell assets that it already owns. Hence, in equilibrium, dealers choose to hold inventory. We derive the equilibrium relationship between dealers’ cost of holding assets on their balance sheets, their optimal inventory holdings, and various measures of liquidity, including bid-ask spreads, trade size, volume, and turnover. Using transaction-level data from the corporate bond market, we calibrate the model to quantitatively assess the impact of post-crisis regulations on dealers’ inventory costs, liquidity, and welfare.

#### Investment Timing and Reputation

An agent learns dynamically about the profitability of a project and decides when to make an irreversible investment. The agent seeks to maximize his reputation for learning. Equilibrium strategies are dictated by the prior belief that the project is profitable: a high-ability agent plays a cutoff strategy in every period, where the cutoffs are bounded below by the prior. Agents are reputationally rewarded for both speed and accuracy, but accuracy becomes gradually less consequential for reputation over time. Compared to a benchmark where the agent has no reputational motive, investment timing may be either premature or delayed. For projects with a substantial downside potential, reputation induces premature investment. Meanwhile, when projects have a positive net present value ex-ante, reputation induces delayed investment.

#### Issue Selection in Constrained Communication

#### k-level Forward-Looking Dynamics in Monotone Games

The limiting behavior of adaptive learning dynamics in monotone games has been widely studied. As players eventually choose undominated strategies as a response to past play, such learning processes are intrinsically backwards-looking. However, it is reasonable to assume that players anticipate and incorporate the backwards-looking behavior of their opponents into their beliefs about the future. This results in forward-looking dynamics, a topic which has been largely neglected in the monotone games literature. Using a cognitive hierarchy framework, this paper establishes bounds on the limits of all such learning processes which allow for k levels of anticipation in each period of play, where k may vary both between players and between rounds. Our main result shows that in the context of a monotone game, a unique serially undominated strategy profile exists if, and only if, all such k-level adaptive dynamics converge to it. We then show that experimental data are better explained by k-level dynamics compared to their backwards-looking counterpart, which suggests that players are in fact forward-looking.

#### Knowledge Transfers

We study the design of knowledge transfers from a principal to multiple agents who compete in a contest, in the absence of monetary transfers. Increasing an agent’s productivity takes time, akin to onboarding employees in an organization, and each agent’s incentives to exert effort depend both on their current and expected future productivity and on the competitive environment. On the one hand, to minimize procrastination, the principal benefits from committing to curtailing the future productivity of an agent. On the other hand, faced with multiple ex-ante identical agents, the principal may find it optimal to disproportionately increase the productivity of one agent while capping the productivity of another. We characterize the optimal policy using optimal control tools and show that its asymmetry increases with the discount rate.

#### Knowledge: gift or burden of innovation?

Is knowledge a gift or a burden to the novelty of a scientific finding? The paper tries to answer this question using a novel approach that separates general field knowledge from the references relied on in a research project. Using a theoretical model, the paper shows that knowledge is a gift, cutting the costs of both direct innovation and innovation based on references. References, however, become a burden and hinder innovation. While knowledge makes both innovating and referring easier, the overall effect of knowledge accumulation on innovation is negative, because the negative effect of references is larger. The paper also finds empirical evidence that supports the main conclusions developed by the model.

#### Labor Supply in Pandemic Environments: An Aggregative Games Approach

We analyze the effects that pandemic processes have on labor supply decisions using an aggregative game framework. A worker’s payoff depends on her labor supply and the probability of being infected which, in turn, depends on the aggregate labor supply, an indicator of the economic activity. We illustrate the effects of pandemic containment and salary compensation public policies on the Nash equilibrium and analyze its expectational stability. The results indicate that these policies can stabilize expectations regarding the aggregate labor supply decisions. There is a set of parameter values for individuals’ preferences and public policies where two-period cycles for the expectations revision map may arise, implying that labor supply and the probability of contagion exhibit a pattern of persistent oscillation.

#### Learning and evidence in insurance markets

I consider a model of insurance contracting where the buyer has access to endogenous, costly evidence of his risk type (such as a test result). I characterize parameter values for which the buyer is worse off when the insurer is allowed to take evidence into account when contracting. I also show that allowing contracting on evidence can increase or decrease aggregate welfare, depending on parameter values. I fully characterize the optimal mechanism, which features ‘low powered’ contracts, in contrast to some models of contracting with endogenous unverifiable information. The results are relevant to policy debates over the use of genetic information in health and life insurance.

#### Learning by Lobbying

We study how lobbying relationships between interest groups and politicians can evolve over time. We model a policy-motivated interest group that can repeatedly lobby a policymaker. We highlight a fundamental connection between influence and learning, since the fruits of today’s lobbying efforts can inform tomorrow’s. Thus, early lobbying efforts are shaped by the interest group’s incentives to influence and learn, along with politician incentives to thwart learning. The value of learning depends on the group’s familiarity with the policymaker, as well as the chances and significance of future lobbying opportunities. These factors shape the screening and signaling forces to produce different lobbying dynamics. We find that: (i) early career policymakers are more expensive to screen, (ii) weak policymakers are less likely to be screened, (iii) policymakers can be influenced by the mere prospect of later lobbying. Additionally, revolving-door incentives may reduce the costs of screening and lobbying influence.

#### Learning Efficiency of Multi-Agent Information Structures

We study which multi-agent information structures are more effective at eliminating both first-order and higher-order uncertainty, and hence at facilitating efficient play in incomplete-information coordination games. We consider a learning setting à la Cripps, Ely, Mailath, and Samuelson (2008) where players have access to many private signal draws from an information structure. First, we characterize the rate at which players achieve approximate common knowledge of the state, based on a simple learning efficiency index. Notably, this coincides with the rate at which players’ first-order uncertainty vanishes, as higher-order uncertainty becomes negligible relative to first-order uncertainty after enough signal draws. Based on this, we show that information structures with higher learning efficiency induce more efficient equilibrium outcomes in coordination games that are played after sufficiently many signal draws. We highlight some robust implications for information design in games played in data-rich environments.

#### Learning from Manipulable Signals

We study a dynamic stopping game between a principal and an agent. The agent is privately informed about his type. The principal learns about the agent’s type from a noisy performance measure, which can be manipulated by the agent via a costly and hidden action. We fully characterize the unique Markov equilibrium of this game. We find that terminations/market crashes are often preceded by a spike in (expected) performance. Our model also predicts that, due to endogenous signal manipulation, too much transparency can inhibit learning. As the players get arbitrarily patient, the principal elicits no useful information from the observed signal.

#### Learning from Shared News: When Abundant Information Leads to Belief Polarization

We study learning via shared news. Each period agents receive the same quantity and quality of first-hand information and can share it with friends. Some friends (possibly few) share selectively, generating heterogeneous news diets across agents akin to echo chambers. Agents are aware of selective sharing and update beliefs by Bayes’ rule. Contrary to standard learning results, we show that beliefs can diverge in this environment leading to polarization. This requires that (i) agents hold misperceptions (even minor) about friends’ sharing and (ii) information quality is sufficiently low. Polarization can worsen when agents’ social connections expand. When the quantity of first-hand information becomes large, agents can hold opposite extreme beliefs resulting in severe polarization. We find that news aggregators can curb polarization caused by news sharing. Our results hold without media bias or fake news, so eliminating these is not sufficient to reduce polarization. When fake news is included, it can lead to polarization but only through misperceived selective sharing. We apply our theory to shed light on the evolution of public opinions about climate change in the US.

#### Learning in Large Network Games with Dynamic Populations

In this talk, we investigate whether strategic agents can adapt to non-stationary environments. Specifically, we focus on learning dynamics over large network games in which the population of players may be time-varying and players’ interactions may change according to independent stochastic network realizations. Within this framework we show that, under suitable gradient dynamics, agents almost surely converge to a strategy that is approximately optimal. Specifically, we show that the learned strategy is, with high probability, an epsilon-Nash equilibrium of each stage game, where epsilon decreases with increasing population size. We apply these results to study dynamic pricing in online markets with time-varying demand.

#### Learning Through Transient Matching

I study a model of dynamic matching with overlapping generations, in which workers are born with incomplete preference information. To learn their preferences, workers must temporarily match with firms. Workers freely choose a firm to apply to each period, and firms hire their top applicants, up to a capacity constraint. I develop an algorithm extending techniques from the bandit literature to characterize the unique matching equilibrium. In general, equilibrium outcomes fail to satisfy standard notions of stability; furthermore, equilibrium search patterns differ from results in the directed search literature.

#### Learning Underspecified Models

This paper considers optimal pricing with a seller who is endowed with an underspecified model, in that he does not possess a complete description of how actions translate into payoffs. To save computational cost, a monopolist designs an algorithm delegating the decision to determine a product’s price in each period. Not knowing the true demand curve, the algorithm is tasked with ensuring that the optimal price emerges in the long run with sufficiently high probability, uniformly over the set of possible demand curves. The monopolist has a lexicographic preference over the payoff and the complexity cost of the algorithm, seeking an algorithm with a minimum number of parameters subject to achieving the same level of long-run average payoff. We show that for a large class of possible demand curves with strictly decreasing continuous marginal revenue curves, the monopolist selects an algorithm that assumes demand is linear even if it is not. The monopolist chooses a misspecified model to save computational cost, while learning the true optimal decision uniformly.

#### Life cycle of startup financing

I characterize an optimal, incentive-compatible, and renegotiation-proof contract for venture capital (VC) financing of a startup (which may be successful or not) whose rate of arrival of success is a function of the accumulated investment stock. The contract depends on the startup valuation, the prior probability of success, and initial capital. Sufficient conditions for the existence of such a contract are specified.

The paper explains why the startup has to rely on different ways of financing in different stages of its life, and why VC financing is not feasible in early stages of development of the startup.

#### Limiting the Communication to Deter Collusion: A Model of Endogenous Equilibrium Selection

Can you make people work without directly supervising them? The answer is yes if you have multiple agents, by creating an architecture in which they supervise one another. A complication arises from the fact that the agents may collude and jointly deviate to no effort and no peer supervision. This paper models the collusion formation process and characterizes the conditions under which collusion may or may not occur. The central insight is: if the principal can limit the communication among the agents, it is much easier to deter collusion.

I study two ways of joint deviation: voting and commitment. When all agents are directly connected in a communication network, deviation by voting can be stopped if and only if the threshold for passing the vote is sufficiently high. Deviation by commitment, however, cannot be stopped. If the principal instead limits the initial communication network to a “ring”, deviation by voting can be deterred whenever the passing threshold is at least three, no matter the total number of players, and deviation by commitment can be stopped when there are at least six agents. The negotiation power of an arbitrary individual in an arbitrary network can also be calculated by an algorithm. The findings give insights into firm management, political control, and the design of mechanisms for controlling corruption.

#### Lobbying for Trade Liberalization and its Policy Influence

Lobbying activities are important to the promotion of Free Trade Agreements (FTAs). I quantify the influence of lobbying on the ratification probability of FTAs by constructing a novel dataset containing all lobbying activities about FTAs in the United States. I set up a contest model of lobbying where heterogeneous players choose lobbying expenditures to affect the ratification probability of FTAs. I use structural gravity estimation to predict the trade profit gains from FTAs and use Maximum Likelihood estimation to back out the ratification probabilities. Results show that lobbying expenditures in the manufacturing sector increase ratification probability by 21 percentage points on average, and the expected gains from lobbying are five times the lobbying expenditures on average. Additionally, free riding lowers lobbying expenditures by 40%. These findings highlight the effects of lobbying on the formation of international agreements.

#### Locally non-bossiness and preferences over colleagues

The student-proposing deferred acceptance (DA) mechanism is the most used mechanism in school choice, the only one that is stable and strategy-proof. However, when DA is implemented, a student can change the schools of others without changing her own. We show that this drawback is limited: a student cannot change her classmates without changing her own school. We refer to this new property as *locally non-bossiness*. Along with strategy-proofness, it is equivalent to the local notion of group strategy-proofness in which manipulating coalitions are restricted to students in the same school. Furthermore, locally non-bossiness plays a crucial role in incentives when students have preferences over colleagues. As long as students look first at the school to which they are assigned and then at their classmates, DA induces the only stable and strategy-proof mechanism in this preference domain. To some extent, this is the maximal domain in which a stable and strategy-proof mechanism exists for any school choice context.
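For reference, the student-proposing DA mechanism at the center of this entry can be sketched in a few lines. The instance below (student and school names, preferences, and priorities are all hypothetical) only illustrates the mechanics: students apply down their preference lists, and each school tentatively holds its highest-priority applicants up to capacity.

```python
def deferred_acceptance(student_prefs, school_priorities, capacities):
    """Student-proposing DA: students apply down their lists; schools
    tentatively hold the best applicants up to capacity, rejecting the rest."""
    rank = {c: {s: i for i, s in enumerate(order)}
            for c, order in school_priorities.items()}
    next_choice = {s: 0 for s in student_prefs}   # index of next school to try
    held = {c: [] for c in school_priorities}     # tentatively held students
    free = list(student_prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue                              # list exhausted: s stays unmatched
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda x: rank[c][x])    # best priority first
        if len(held[c]) > capacities[c]:
            free.append(held[c].pop())            # reject lowest-priority applicant
    return {s: c for c, students in held.items() for s in students}

matching = deferred_acceptance(
    student_prefs={"s1": ["A", "B", "C"], "s2": ["A", "B", "C"], "s3": ["B", "A", "C"]},
    school_priorities={"A": ["s2", "s1", "s3"], "B": ["s1", "s3", "s2"],
                       "C": ["s1", "s2", "s3"]},
    capacities={"A": 1, "B": 1, "C": 1},
)
print(matching)  # s2 gets A, s1 gets B, s3 gets C
```

In this instance s1 is rejected by A (which prefers s2) and displaces s3 at B, who in turn ends up at C; yet no student could have changed a classmate's assignment without changing her own school, which is the locally non-bossy behavior the paper isolates.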

#### Make It Til You Fake It

We study a dynamic principal agent model of fraud and trust. The principal has limited power of commitment and wishes to accept a real project and reject a fake. The agent is either an ethical type that produces only a real project, or a strategic type that also has the ability to produce a fake. Producing a real project takes a positive and uncertain amount of time, while a fake project can be created instantaneously at some cost. We characterize the equilibrium, and explore two institutional remedies that improve the principal’s welfare: opaque standards, and impediments in the approval process.

#### Managing Information Production in Teams

A principal has a stream of related decisions to make under imperfect information. He employs a finite group of agents to acquire information at a cost. The principal designs task allocation and payment schemes to robustly implement all agents engaging in information acquisition and truthful reporting. We characterize the optimal joint design of task allocation and payment scheme, which highlights a trade-off between task assignment diversification and peer monitoring efficiency. The optimal deterministic design features a chain structure of peer monitoring. Stochastic task allocation and payment schemes ease the tension between diversification and monitoring efficiency.

#### Managing Persuasion Robustly: The Optimality of Quota Rules

We study a sender-receiver model where the receiver can commit to a decision rule before the sender determines the information policy. The decision rule can depend on the signal structure and the signal realization that the sender adopts. This framework captures applications where a decision-maker (the receiver) solicits advice from an interested party (sender). In these applications, the receiver faces uncertainty regarding the sender’s preferences and the set of feasible signal structures. Consequently, we adopt a unified robust analysis framework that includes max-min utility, min-max regret, and min-max approximation ratio as special cases. We show that it is optimal for the receiver to sacrifice ex-post optimality to perfectly align the sender’s incentive. The optimal decision rule is a quota rule, i.e., the decision rule maximizes the receiver’s ex-ante payoff subject to the constraint that the marginal distribution over actions adheres to a consistent quota, regardless of the sender’s chosen signal structure.

#### Mandatory disclosure of conflicts of interest: Good news or bad news?

We investigate the welfare effect of disclosing conflicts of interest when an expert advises a decision maker. In a model with verifiable information and uncertainty about both the expert’s conflict of interest and the expert’s informedness, we show that disclosure of the expert’s bias is counterproductive when the magnitude of the expert’s bias is not too large and the likelihood of the expert being informed is low. Moreover, the harm of disclosing the expert’s conflict of interest is more significant when there is greater uncertainty about the nature of that conflict of interest.

#### Many-to-one assignment markets: extreme core allocations

This paper studies many-to-one assignment markets, or matching markets with wages. Although it is well-known that the core of this model is non-empty, the structure of the core has not been fully investigated. To the known dissimilarities with the one-to-one assignment game, we add that the bargaining set does not coincide with the core, the kernel may not be included in the core, and the tau-value may also lie outside the core. Moreover, not all extreme core allocations can be obtained by a procedure of lexicographic maximization, as is the case in the one-to-one assignment game. Our main results are on the extreme core allocations. First, we characterize the set of extreme core allocations in terms of a directed graph defined on the set of workers and also provide a necessary condition for each side-optimal allocation. Finally, we prove that each extreme core allocation is the result of sequentially maximizing or minimizing the core payoffs according to a given order on the set of workers.

#### Market Design for Distributional Objectives in Allocation Problems: An Axiomatic Approach

Many priority-based allocation problems feature distributional objectives. We introduce a unified theory for meeting these objectives with reserves and quotas. We undertake an axiomatic approach, where we first formulate potential policy objectives as axioms, and study the logical possibility of achieving them. We characterize a class of reserves- and quotas-rules in the school choice problem, and uniquely characterize a Deferred Acceptance algorithm coupled with a particular rule in this class as priority-violations minimal. Stability plays no role in our theory. Despite our focus on school choice, our axiomatic approach is applicable broadly in other practices of market design.

#### Market segmentation through information

An information designer has precise information about consumers’ preferences over products sold by oligopolists. The designer chooses what information to reveal to differentiated firms who, then, compete on price by making personalized offers. We ask what market outcomes the designer can achieve. The information designer is a metaphor for an internet platform who collects data about users and sells it to firms who can, in turn, target discounts and promotions towards different consumers. Our analysis provides new benchmarks demonstrating the power that users’ data can endow internet platforms with. These benchmarks speak directly to current regulatory debates.

#### Market Structure and Adverse Selection

We consider an insurance economy plagued by adverse selection where a planner pre-assigns roles to prospective sellers. This choice determines which sellers a buyer can jointly trade with. To date, only two polar market structures have been explored. Under exclusive competition, as in Rothschild and Stiglitz (1976), each buyer can trade with at most one seller. Under nonexclusive competition, as in Attar, Mariotti and Salanié (2011, 2014, 2021, 2022), buyers can trade with arbitrarily many sellers. While the choice of market structure matters, the welfare comparison is ambiguous: exclusive competition gives rise to separation and low prices for low-risk types, yet frequently involves rationing. Nonexclusive competition forces low-risk types to pool with high-risk types and thereby pay higher prices, but does not involve rationing. In this paper we propose an intermediate market structure—partial exclusive competition—whereby each seller belongs to one of two subgroups; buyers can trade with at most one seller from each subgroup. We show that in every equilibrium one subgroup of sellers proposes pooling contracts, and there always exist equilibria under which separation arises for the other subgroup. This ensures that the low-risk agent’s welfare is greater than under nonexclusive competition.

#### Market Structure of Intermediation

#### Master equation for discrete time Stackelberg mean field games

#### Matching and Prices

Indivisibilities and budget constraints are pervasive features of many matching markets. But when taken together, these features typically cause failures of gross substitutability—a standard condition on preferences imposed in most matching models. To accommodate budget constraints and other income effects, we analyze matching markets under a weaker condition: net substitutability. Although competitive equilibria do not generally exist in our setting, we show that stable outcomes always exist and are efficient. However, standard auctions and matching procedures, such as the Deferred Acceptance algorithm and the Cumulative Offer process, do not generally yield stable outcomes. We illustrate how the flexibility of prices is critical for our results. We also discuss how budget constraints and other income effects affect classic properties of stable outcomes.

#### Matching Costs in Centralized And Decentralized Markets

#### Matching with Multilateral Contracts

In many environments, agents form agreements which are multilateral and/or have externalities. We show that stable outcomes exist in these environments when the *irrelevance of rejected contracts* condition survives aggregation, either across all agents or within two *implicit* sides of the market for whom contracts are substitutes. In settings where agents are strategically sophisticated, in the sense that they make correct conjectures about how other agents will choose from each set of contracts, we show this is ensured by a mild criterion on those conjectures. When each agent is strategically sophisticated about the behavior of all other agents, stable outcomes always exist: No conditions on preferences or market structure are necessary. Our characterization of these outcomes allows the application of matching theory to new settings, such as legislative bargaining or free trade agreement formation.

#### Maximal Blackwell Theorem

Blackwell’s theorem—relating second-order stochastic dominance to the existence of mean-preserving spreads—has numerous applications in economics. We give a new, simple proof of this theorem via an explicit construction. The constructed mean-preserving spread turns out to be optimal in a number of optimization problems. Economic applications include gradual persuasion, bounds on belief deviation from truth in dynamic learning, and reduced-form mechanism design. From a mathematical perspective, we generalize and simplify the results on the existence of maximal maximum martingales and provide a new “persuasion” perspective on Hardy’s and Doob’s maximal inequalities.
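
The equivalence in Blackwell's theorem can be illustrated numerically. A minimal sketch, with toy lotteries of my own choosing: `q` is riskier than `p` in the Rothschild–Stiglitz sense, which the integrated-CDF comparison certifies, so by the theorem a mean-preserving spread carrying `p` to `q` exists.

```python
import numpy as np

# Two discrete lotteries on the grid {0, 1, 2, 3, 4} (illustrative only).
grid = np.arange(5)
p = np.array([0.0, 0.5, 0.0, 0.5, 0.0])      # the "less spread" lottery
q = np.array([0.25, 0.25, 0.0, 0.25, 0.25])  # candidate mean-preserving spread

same_mean = np.isclose(p @ grid, q @ grid)   # both have mean 2
# Second-order stochastic dominance: the integrated CDF of q lies weakly
# above that of p everywhere (equivalently, q is riskier with equal mean).
I_p = np.cumsum(np.cumsum(p))
I_q = np.cumsum(np.cumsum(q))
is_mps = bool(same_mean and np.all(I_q >= I_p - 1e-12))
```

With these numbers the check passes, so a mean-preserving spread exists; the paper's contribution is an explicit, optimal construction of one.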

#### Mechanism Design with Sequential-Move Games: Revelation Principle

Traditionally, mechanism design focuses on simultaneous-move games (e.g., Myerson (1981)). In this paper, we study mechanism design with sequential-move games, and provide two results on revelation principles for general solution concepts (e.g., perfect Bayesian equilibrium, obvious dominance, strong-obvious dominance). First, if a solution concept is additive, implementation in sequential-move games is equivalent to implementation in simultaneous-move games. Second, for any solution concept ρ and any social choice function f, we identify a canonical operator γ^{(ρ,f)}, which is defined on primitives. We prove that, if ρ is monotonic, f can be implemented by a sequential-move game if and only if γ^{(ρ,f)} is achievable, which translates a complicated mechanism design problem into checking some conditions defined on primitives. Most of the existing solution concepts are either additive or monotonic.

#### Mechanisms without transfers for fully biased agents

A principal must decide between two options. Which one she prefers depends on the private information of two agents. One agent always prefers the first option; the other always prefers the second. Transfers are infeasible. One application of this setting is the efficient division of a fixed budget between two competing departments. We first characterize all implementable mechanisms under arbitrary correlation. Second, we study when there exists a mechanism that yields the principal a higher payoff than she could receive by choosing the ex-ante optimal decision without consulting the agents. In the budget example, a profitable mechanism exists if and only if the information of one department is also relevant for the expected returns of the other department.

When types are independent, this result generalizes to a setting with n agents. We apply this insight to derive necessary and sufficient conditions for the existence of a profitable mechanism in the n-agent allocation problem.

#### Mediated Bayesian Persuasion

Many settings possess information channels that are subject to some sort of influence by a third party. Understanding the role of mediation in such settings can have significant implications for the design of truthful and transparent services and platforms. We introduce a model of mediated Bayesian persuasion in which a self-interested mediator publicly commits to a mediation strategy. The induced persuasion game between the sender and the receiver possesses an information channel in which the sender’s messages are subject to modification by the mediation strategy before reaching the receiver. We analytically characterize the restriction imposed by the mediator. Finally, we show that mediation never helps the sender.

#### Mentors and Recombinators: Multi-Dimensional Social Learning

We study imitative population games in which the set of strategies is multi-dimensional, and new agents might learn from multiple mentors. We introduce a new family of dynamics, the recombinator dynamics, which is characterised by a single parameter, the recombination rate r ∈ [0,1]. The case of r = 0 coincides with the standard replicator dynamics. The opposite case of r = 1 corresponds to a setup in which each new agent learns each new strategic dimension from a different mentor, and combines these dimensions into her adopted strategy. We present two complete characterisations of the stationary states under these dynamics, and we demonstrate the applicability of the new dynamics to the study of strategic interactions.
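
A minimal simulation sketch of such dynamics, under my own assumptions: strategies have two binary dimensions, `x[a,b]` is the share of strategy (a,b), and each step moves the population toward a mixture of fitness-weighted imitation (share 1−r) and dimension-wise recombination from independent mentors (share r). The payoffs and step size are illustrative, not from the paper.

```python
import numpy as np

def recombinator_step(x, f, r, dt=0.01):
    """One Euler step of a recombinator-style dynamic on a 2x2 strategy grid."""
    phi = (x * f).sum()                      # average fitness in the population
    w = x * f / phi                          # fitness-weighted mentor shares
    w_a = w.sum(axis=1)                      # marginal over strategic dimension 1
    w_b = w.sum(axis=0)                      # marginal over strategic dimension 2
    # With prob. 1-r a newcomer copies one mentor's full strategy; with
    # prob. r she draws each dimension from a different, independent mentor.
    target = (1 - r) * w + r * np.outer(w_a, w_b)
    return x + dt * (target - x)

x = np.array([[0.4, 0.1],
              [0.1, 0.4]])                   # initial shares of strategy (a, b)
f = np.array([[1.0, 2.0],
              [2.0, 1.0]])                   # illustrative fitness of each strategy
for _ in range(5000):
    x = recombinator_step(x, f, r=1.0)       # r = 0 would recover replicator-type flow
```

At r = 1 the dynamic pushes the population toward product distributions over the two dimensions, which is the qualitative feature the abstract highlights.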

#### Misspecification Averse Preferences

We study a decision maker who approaches an uncertain decision problem by formulating a set of plausible probabilistic models of the environment but is aware that these models are only stylized and incomplete approximations. We introduce the concept of a best-fit map that identifies the most suitable model within this potentially misspecified set based on observable data. Building on this, we develop an axiomatic foundation for preferences that are averse to misspecification. In particular, we introduce a novel criterion that discriminates between aversion to misspecification and attitudes toward model ambiguity. First, conditional on a model having the best fit, the decision maker forms a misspecification-robust evaluation by considering a range of models in proximity to the best-fitted one. Then, she aggregates these robust evaluations via a monotone and quasiconcave aggregator incorporating uncertainty about what model is the best approximation of the environment.

#### Mixed-Price Auctions for Divisible Goods

In a mixed-price auction, bidders’ payments are convex combinations of price discrimination and the market-clearing price. In a symmetric divisible-good model, I prove that all pure-strategy equilibria in mixed-price auctions are symmetric, and give a closed-form expression for equilibrium bids. I show that the set of feasible equilibrium bids shrinks as the auction becomes discriminatory, as aggregate supply becomes deterministic, and as the market becomes large. When bidders have linear marginal values the unique equilibrium of the discriminatory auction raises more revenue than any equilibrium of the uniform-price auction, but an additional bidder may be more valuable than proper selection of auction format. On the whole, sellers implementing uniform-price auctions may reap substantial gains by introducing mild price discrimination.

#### Monopoly, Product Quality, and Flexible Learning

#### Monotone Equilibrium Design for Matching Markets with Signaling

We study monotone equilibrium design by a planner who chooses an interval of reactions that receivers can take, before senders and receivers move in matching markets with senders’ costly signaling. We provide a method for monotone equilibrium design that uncovers novel insights into a planner’s optimal equilibrium choice. In our nonlinear settings with monotone, supermodular, concave utility functions, the surplus possibility frontier is convex and hence indicates decreasing-returns-to-scale information technology. This frontier expands as senders become less risk averse or the mean and variance of the receiver type distribution increase.

#### Motivating Effort with Information about Future Rewards

This paper studies the optimal mechanism to motivate effort in a dynamic principal-agent model without transfers. An agent is engaged in a task with uncertain future rewards and can choose to shirk at any time. The principal knows the reward of the task and provides information to the agent over time. The optimal information policy can be characterized in closed form, revealing two key conditions that make dynamic disclosure valuable: one is that the principal is impatient compared with the agent; the other is that the environment makes the agent become pessimistic over time without any information disclosure. In a stationary environment, the principal benefits from dynamic disclosure if and only if she is less patient than the agent. Maximum delayed disclosure is optimal for an impatient principal: the principal delays all disclosures up to the maximum time threshold and then fully discloses. By contrast, in a pessimistic environment, the principal always benefits from dynamic disclosure, but the level of patience is still a crucial determinant of the structure of the optimal policy.

#### Multi-point solution concepts of incomplete games

The model of incomplete cooperative games incorporates uncertainty into the classical model of (complete) cooperative games by considering a partial characteristic function. This leaves values of some of the coalitions unknown. The main focus of this paper is to initiate the study of multi-point solution concepts of incomplete cooperative games.

We generalise the standard solution concepts into the incomplete setting in the following manner. For an incomplete game, we determine the set of all complete games which coincide with the incomplete game on the known values of coalitions and satisfy further properties (e.g. being a member of a subset of cooperative games). Such games are called extensions. Now, we compute a standard solution concept for every such extension and take its union. Similarly, an intersection is considered.

A systematic analysis is performed for a variety of standard solution concepts, such as the core, the Weber set, or the (pre-)kernel. Different sets of extensions (namely 1-convex and positive) are considered. Surprisingly, many such generalisations yield the imputation set when we restrict to minimal incomplete games.
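
The "union over extensions" construction can be sketched concretely. In the hedged toy example below (a 3-player game; the known values, the sampling range for the unknown pairwise values, and the choice of the Shapley value as the point solution are all my own illustrations), we sample complete games consistent with the known coalition values and collect the Shapley value of each extension, giving an inner approximation of the union-type multi-point solution.

```python
import random
from itertools import permutations

players = (1, 2, 3)
# Known coalition values of the incomplete game; pairwise values are unknown.
known = {(): 0.0, (1,): 0.0, (2,): 0.0, (3,): 0.0, (1, 2, 3): 6.0}

def shapley(v):
    """Shapley value via average marginal contributions over player orders."""
    phi = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        coal = ()
        for i in order:
            prev = v[coal]
            coal = tuple(sorted(coal + (i,)))
            phi[i] += (v[coal] - prev) / len(orders)
    return phi

random.seed(1)
solutions = []
for _ in range(200):                         # sample complete extensions
    v = dict(known)
    for pair in [(1, 2), (1, 3), (2, 3)]:
        v[pair] = random.uniform(0.0, 6.0)   # consistent with the known values
    solutions.append(shapley(v))             # collect: union over extensions
```

Each sampled extension yields an efficient payoff vector (summing to v(N) = 6); the spread of `solutions` approximates the multi-point Shapley solution of the incomplete game.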

#### Multilateral Bargaining with Information Spillovers

We study dynamic bargaining with information asymmetries and spillovers. A monopolist faces potential buyers with private, yet correlated, valuations for a good. As such, while trying to extract surplus from early buyers, the monopolist must consider that their trading behavior reveals information to delaying buyers. The monopolist’s inability to commit to prices limits surplus extraction, but the commitment gains are non-monotonic in correlation: at low levels, lack of commitment power is more detrimental with more information; at high levels, commitment power increases with more information. Additionally, we examine the impacts of correlation on the speed of trades and welfare, and contrast results with equilibrium outcomes under exogenous learning and socially valuable information.

#### Multilateral War of Attrition with Majority Rule

We analyze a multilateral war of attrition game with majority rule. A chair and two competing players decide how to split one unit of surplus over continuous time. Each player has an exogenously given demand, and these demands are incompatible. In each period, the players simultaneously choose whether to concede or continue. The chair can concede to either of the two competing players, but the competing players can concede only to the chair. An agreement is reached when at least one player concedes. We characterize the equilibria of this game and establish the necessary and sufficient conditions under which equilibria with delay exist.

#### (Near) Substitute Preferences and Equilibria with Indivisibilities

An obstacle to using market mechanisms to allocate indivisible goods is the non-existence of competitive equilibria (CE). To surmount this, Arrow and Hahn proposed the notion of social-approximate equilibria: a price vector and corresponding excess demands that are ‘small’. We identify social approximate equilibria where the excess demand, good-by-good, is bounded by a parameter that depends on preferences only and not the size of the economy. This parameter measures the degree of departure from substitute preferences. As a special case, we identify a class called geometric substitutes that guarantees the existence of competitive equilibria in non-quasi-linear settings. It strictly generalizes prior conditions such as single improvement, no complementarities, gross substitutes, and net substitutes.

#### N-agent and mean field games for optimal investment with HARA utility function and the presence of risk-seeking agents

This study extends the work of Lacker and Zariphopoulou (2019) by considering a financial market in which both risk-averse and risk-seeking agents coexist, instead of only risk-averse agents. Moreover, this study considers realistic situations in which the expected return can be positive, negative, or zero, rather than just the ideal case where the expected return is positive. Specifically, the n-agent and mean field games for optimal investment with the family of hyperbolic absolute risk aversion (HARA) utility functions under relative performance are studied. Several specific forms of the HARA family, including the exponential, power, and logarithmic forms, are investigated to study the games in the presence of both risk-averse and risk-seeking agents. With these specific forms, the results show that there exists a unique constant Nash equilibrium for the n-agent games and a unique constant mean field equilibrium for the mean field games. Furthermore, the qualitative effects of personal risk preferences and market parameters on the optimal investment strategy are discussed in depth.

#### Naive Social Learning with Heterogeneous Model Perceptions

This paper studies a social learning problem where individuals observe a sequence of signals and repeatedly communicate their beliefs with neighbors. Individuals follow a naïve rule when learning from others and may incorrectly interpret their own information. This paper provides a set of characterizations for limit beliefs in this learning problem. One key feature of the characterizations is that the society has a tendency to settle on a state that minimizes the weighted relative entropy between the true and the perceived data-generating processes, and the weight describes the network’s centrality. This paper further notes that it is possible that beliefs fail to converge or converge to multiple limits, which can be characterized by a variant of the weighted relative entropy. One implication is that group irrationality can arise. The society may settle on a state that is against every member’s private information. Even if every individual is able to identify the true state independently, the society may end up learning incorrectly after communications.
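
A hedged simulation sketch of this kind of naive learning (the network, signal distributions, and update rule below are my own toy choices, not the paper's exact model): agents accumulate log-likelihoods of private signals and linearly average neighbors' log-beliefs through a row-stochastic listening matrix, a DeGroot-style naive rule. When all agents correctly perceive the signal distribution, beliefs concentrate on the true state.

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])             # row-stochastic listening matrix
# Perceived signal distributions p[i, theta, signal]; here all three agents
# share the correct perception (heterogeneity would vary p across agents).
p = np.array([[[0.7, 0.3], [0.3, 0.7]]] * 3)
log_b = np.log(np.full((3, 2), 0.5))        # uniform priors over two states

for _ in range(2000):
    s = (rng.random(3) < 0.3).astype(int)   # signals under true state 0
    ll = np.log(p[np.arange(3), :, s])      # each agent's log-likelihoods
    log_b = W @ log_b + ll                  # naive averaging + own signal
    log_b -= log_b.max(axis=1, keepdims=True)  # renormalize for stability

beliefs = np.exp(log_b) / np.exp(log_b).sum(axis=1, keepdims=True)
```

With correct perceptions the weighted relative entropy is minimized at the true state, so all agents end up nearly certain of state 0; distorting `p` for central agents is the kind of misperception that can tilt the society toward a wrong state.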

#### Naivete and Sophistication in Initial and Repeated Play in Games

Compared to more sophisticated equilibrium theory, naive, non-equilibrium behavioral rules often better describe individuals’ initial play in games. Additionally, in repeated play in games, when individuals have the opportunity to learn about their opponents’ past behavior, learning models of different sophistication levels are successful in explaining how individuals modify their behavior in response to the provided information. How do subjects following different behavioral rules in initial play modify their behavior after learning about past behavior? This study links both initial and repeated play in games by analyzing elicited behavior in 3×3 normal-form games using a within-subject laboratory design. We classify individuals into different behavioral rules in both initial and repeated play and test whether and/or how strategic naivete and sophistication in initial play correlate with naivete and sophistication in repeated play. We find no evidence of a positive correlation between naivete and sophistication in initial and repeated play.

#### Narrow Framing in Games

We study finite normal form games under a narrow framing assumption, which implies that when players play many games simultaneously they consider each one separately. We show that under mild rationality assumptions players must play either Nash equilibria or logit quantal response equilibria. When observed payoffs are monetary (but utilities are not observed) we show that players’ behavior is described by a natural generalization of logit quantal response equilibria, in which players respond not just to the expected payoff, but also to the maximum possible payoff and minimum possible payoff.
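
Logit quantal response equilibrium, one of the two predictions named above, can be computed by fixed-point iteration. A minimal sketch under my own assumptions: a symmetric 2×2 game with illustrative payoffs and precision parameter, solved by damped iteration of the logit best response.

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [5.0, 1.0]])                   # row player's payoff matrix (toy)
lam = 2.0                                    # logit precision parameter

def logit_br(q):
    """Logit response to the opponent's mixed strategy q."""
    u = A @ q                                # expected payoff of each action
    e = np.exp(lam * (u - u.max()))          # subtract max for stability
    return e / e.sum()

p = np.array([0.5, 0.5])
for _ in range(500):                         # damped fixed-point iteration
    p = 0.5 * p + 0.5 * logit_br(p)          # symmetric game: opponent plays p
resid = float(np.abs(p - logit_br(p)).max())
```

Since the second action strictly dominates here, the logit QRE puts more weight on it without playing it with probability one, the characteristic smoothing of quantal response.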

#### Necessarily Unfair Matching with Concave Costs

This project investigates the optimal matching between two sets of points on a line with concave costs, along with two existing approximations to the optimum (the greedy and Dyck matchings). In contrast to the environment with convex costs, the optimal matching with concave costs will match some points that are further apart in order to take advantage of the closeness of other points. We give sharp estimates for the number of matched pairs that are a given distance apart, thus characterizing the unfairness of the optimal matching. Similar estimates are also proven for the greedy and Dyck algorithms. In addition, we introduce a new family of algorithms which nests the greedy and Dyck algorithms as extreme cases. We also describe more efficient algorithms for computing these matchings for random sets in the unit interval that remain practical even when the sets have millions of points. These insights are used to compare the costs of the matching algorithms for large random sets.
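
The greedy matching named in the abstract can be sketched directly; this naive O(n² log n) implementation and the concave cost exponent are my own illustration (the paper's efficient algorithms would avoid enumerating all pairs).

```python
def greedy_match(xs, ys):
    """Repeatedly match the closest remaining (x, y) pair between two sets."""
    pairs = sorted((abs(x - y), i, j)
                   for i, x in enumerate(xs)
                   for j, y in enumerate(ys))
    used_x, used_y, match = set(), set(), []
    for _, i, j in pairs:                    # scan pairs by increasing distance
        if i not in used_x and j not in used_y:
            used_x.add(i)
            used_y.add(j)
            match.append((xs[i], ys[j]))
    return match

def cost(match, p=0.5):
    """Total matching cost with concave per-pair cost |x - y|**p, p in (0, 1)."""
    return sum(abs(x - y) ** p for x, y in match)

m = greedy_match([0.0, 10.0], [1.0, 11.0])
```

With concave p the cost of one long pair can be cheaper than two medium ones, which is why the optimal matching sometimes deliberately pairs distant points, the "unfairness" the paper quantifies.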

#### Negotiated Binding Agreements

I introduce the notion of a *negotiated binding agreement*. I provide easy-to-check necessary and sufficient conditions for the outcomes that can be supported by negotiated binding agreements and explore a number of applications. I show that these conditions are robust to perturbations in the negotiation procedure, including the timing of proposals, proposing action profiles, and variation in the payoff of perpetual disagreement. I show that the necessary and sufficient conditions generalise when coalitions may jointly deviate in a cooperative way and show that these are consistent with a perturbed version of the beta-core.

#### Network and Timing Effects in Social Learning

We consider a group of agents who each can take an irreversible costly action whose payoff depends on an unknown state. Agents learn about the state from private signals, as well as from past actions of their social network neighbors, which creates an incentive to postpone taking the action. We show that on networks with a linear structure, patient agents do not converge to the first-best action, while on tree networks they do.

#### Network Hazard

This paper introduces a novel form of moral hazard specific to networks and illustrates this concept using simple models from coordination games, epidemics, supply chains, and financial networks. In these models, agents form beneficial links that also propagate costly contagion. Endogenously, second-order contagion risk constrains the concentration of connections around central agents. Protective measures against contagion, such as vaccines, subsidies, or bailouts, mitigate contagion risk, subsequently increasing concentration. However, if these protective measures are imperfect or costly, shocks to central agents can result in greater harm and increased welfare variance, as evidenced in disease outbreaks, aggregate volatility, or financial crises.

#### Non-Bayesian updating and value of information

We investigate how non-Bayesian updating affects robust evaluation of information under expected utility maximization, and whether Blackwell informativeness remains a desirable feature. We restrict attention to a class of updating rules called systematic distortions (de Clippel & Zhang 2022), which we characterize through two axioms on the updating procedure. We propose two measures of value: a prospective view, based on anticipatory utility, and a consequentialist view, based on realized utility. We show that, within anticipatory utility, non-instrumental value of information plays a key role. Under both measures, overreaction is incompatible with robust desirability of information, while underreaction accommodates it.

#### Non-Common Priors, Incentives, and Promotions: The Role of Learning

This paper explores profit-maximizing incentive schemes for overconfident workers. We show that a firm’s exploitation of a worker’s overconfidence may intensify over time, even though workers incorporate informative signals and update beliefs using Bayes’ rule. This result implies that employing a worker might only be profitable if he is believed to be sufficiently unproductive. Based on this, we also derive an implication for a firm’s optimal promotion policy. It can be optimal to base a promotion decision on success in the current job, even if the task requirements in the current and the new job are entirely unrelated. Thereby, we provide a microfoundation for the so-called Peter Principle, that past successes are a bigger driver of promotion decisions than what appears to be optimal (see Benson et al., 2019 for recent evidence), and show that the resulting pattern can actually be optimal for firms.

#### Non-cooperative Bargaining and Collusion Formation Through Communication Networks

Many real-world organizations face the threat of internal collusion, where a fraction of members conspire to exploit regulatory loopholes or abuse their power for personal gain. In contrast to the existing literature, this paper considers the case in which colluding members may provide cover for each other, evading punishment even if non-colluding members report their activities.

The collusion formation process is modeled as a bargaining process through a personal connection or friendship network, as corruption attempts are not made public. The analysis reveals that collusion is less likely to occur in networks with sparser connections. In particular, star and ring networks present the greatest challenges for collusion. For arbitrary communication networks, an algorithm is developed to identify the potential for collusion among individual players, enabling policymakers to enhance detection and control of corruption.

This research contributes valuable insights into the fields of anti-corruption, anti-trust, firm management, political bargaining, social movements, and revolutions. It is particularly relevant in cases where principals struggle to impose punishment following successful collusion.

#### Nonlinear Fixed Points and Stationarity: Economic Applications

We consider maps T:ℝ^{k}→ℝ^{k} which are normalized, monotone, and translation invariant. Given x∈ℝ^{k}, β∈(0,1), and a map T with these properties, there exist two points x̄_{β} and x̲_{β} which are the unique solutions of the fixed point equations

T((1-β)x+βx̄_{β})=x̄_{β} and (1-β)x+βT(x̲_{β})=x̲_{β}.

The purpose of this work is to study lim_{β↑1}x̄_{β} and lim_{β↑1}x̲_{β}. We provide different conditions that guarantee the existence of these limits which always coincide when they exist. We also provide conditions which allow us to characterize these limits and comment on the rate of such convergence. In the second part of the paper we provide economic applications for these results. First, we study the classic issue of existence and characterization of the asymptotic value for zero-sum stochastic games (Sorin 2003). Second, we consider an equilibrium model of interconnected financial institutions that evaluate their losses with respect to coherent risk measures.
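
A hedged numerical sketch of the first fixed-point equation, with a toy example of my own: for a stochastic matrix P, the map T(v) = Pv is normalized, monotone, and translation invariant, and v ↦ T((1-β)x + βv) is a β-contraction, so the fixed point can be found by iteration. Letting β approach 1 then illustrates the limit studied in the paper.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.1, 0.9]])                  # T(v) = P v (toy Markov chain)
x = np.array([1.0, 0.0])                    # the vector x in the equation

def xbar(beta, iters=20000):
    """Fixed point of v = T((1-beta) x + beta v), found by iteration."""
    v = np.zeros(2)
    for _ in range(iters):                  # the map is a beta-contraction
        v = P @ ((1 - beta) * x + beta * v)
    return v

# As beta approaches 1, both coordinates of the fixed point approach the
# chain's long-run average of x (here 0.5, by symmetry of P).
v = xbar(0.999)
```

For zero-sum stochastic games the analogous operator replaces Pv by the value of a one-shot game, and the same limit corresponds to the asymptotic value the abstract mentions.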

#### On Blockchain We Cooperate: An Evolutionary Game Approach

Blockchain is the trust machine of cyberspace that supports cooperation through consensus protocols. However, studies of consensus protocols in computer science ignore the incentives that could affect agent behavior. An emerging literature in game theory introduces rational agents and solution concepts to study the equilibrium outcomes of various consensus protocols. However, the existing studies with rational agents are limited in generalizability and are far from providing guidance for future designs of consensus protocols. We abstract a general Byzantine consensus protocol as a general game environment in extensive form, apply bounded rationality to model agent behavior, and solve for the initial conditions of three different stable equilibria. Our research contributes to the literature across disciplines, including Byzantine consensus protocols in computer science, game theory in economics on blockchain consensus, evolutionary game theory at the intersection of biology and economics, and bounded rationality at the interplay between psychology and economics. Finally, our research guides future designs of consensus protocols toward desirable outcomes by evaluating incentive choices.

**keywords**: cooperation, Byzantine fault tolerance, bounded rationality, evolutionary game theory, evolutionary stable strategy, blockchain consensus

*We have no eternal allies, and we have no perpetual enemies. Our interests are eternal and perpetual, and those interests are our duty to follow.— Lord Palmerston, the mid-19th century British Prime Minister*

#### On manipulability in financial systems

We investigate manipulability in the setting of financial systems by considering two weak forms of immunity: non-manipulability via merging and non-manipulability via splitting. Not surprisingly, non-manipulability via splitting is incompatible with some basic requirements, such as the priority of debt claims and the limited liability of equity, since financial institutions in default surely have incentives to divide into two, the first inheriting assets and rights and the second receiving only obligations, regardless of the clearing mechanism. Notably, we introduce a large class of financial rules that are immune to manipulations via merging. This class includes not only financial rules in accordance with bankruptcy rules fulfilling non-manipulability via merging, such as the proportional rule, but also some financial rules induced by division schemes, a novel approach that allows the obligations of the members of the system to be cleared taking into consideration the whole set of interconnections in the financial network.

#### On market prices in double auctions

We address some open issues regarding the characterization of double auctions. Our model is a two-sided commodity market with either finitely or infinitely many traders. We first unify existing formulations for both finite and infinite markets and generalize the characterization of market clearing in the presence of ties. Second, we define a mechanism that achieves market clearing in any, finite or infinite, market instance and show that it coincides with the k-double auction by Rustichini et al. (1994) in the former case. In particular, it clarifies the consequences of ties in submissions and makes common regularity assumptions obsolete. Finally, we show that the resulting generalized mechanism implements Walrasian competitive equilibrium.

#### On Necessary Conditions for Implementation of Functions, without Rational Expectations

The Bayesian implementation literature has identified in Bayesian Incentive Compatibility (BIC) and Bayesian Monotonicity (BM) two key conditions that a social choice function has to satisfy to be fully implemented by a social planner. I characterize the class of solution concepts such that BIC is necessary for full implementation of functions, and I find that we cannot expect significantly more permissive results by dropping the rational expectations assumption and moving to non-equilibrium models. Preliminary results suggest the same may be true for a BM-like condition as well.

#### On the equivalence of information design by uninformed and informed principals

We compare information design, or Bayesian persuasion, by uninformed and informed principals. We show that, under the assumptions of state-independent ordinal preferences of the principal and nondegenerate information structures, a Pareto undominated outcome is implementable by the uninformed principal if and only if it is implementable by the informed principal.

#### On the Power of Reaction Time in Deterring Collective Actions

We study a setting in which multiple agents (e.g. policymakers) play a dynamic coordination game, each having an opportunity to take a mutually beneficial action at a separate random time. A principal (e.g. a lobby or NGO), whose interests are diverging from those of the policymakers and who is endowed with a limited budget, is given opportunities, at random times, to dissuade the agents from coordinating by taking costly punitive actions against them. We show that when the principal reacts sufficiently quickly to agents’ actions, there is a unique equilibrium in which agents give up on building a collective action, even when the budget of the principal is limited. We also show that if the principal can ex ante commit to a punishment strategy, deterrence is ensured even if the agents can act more quickly than the Principal. This brings a new perspective on how to regulate lobbies’ activities.

#### On the Relationship between Damage and Deception

#### On the Virtue of Being Regular and Predictable: A Structural Model of United States Treasury Auctions

We analyze the policy question of whether the US Treasury should maintain the current security distribution mechanism of the primary dealer system in the Treasury market to achieve the debt management objective of the lowest funding cost over time. We study the data of 5369 auctions of Treasury securities issued between May 2003 and March 2022 (gross total issuance: $168.9 trillion). The crucial novelties of this paper over the previous literature are that (1) we consider the stability of the Treasury auction market (measured by the volatility of auction prices) as a key metric of market performance, (2) we develop a model of Treasury auctions that does not depend on the Gaussian distribution and is consistent with the behavior of primary dealers reported in the US Treasury Office of Debt Management (2012), (3) we introduce a clustered bootstrap for structural estimation, and (4) we develop a novel asymptotic approximation method to conduct counterfactual analysis. The novel findings of this paper are that (1) we identify potential increases in auction high-rate volatilities due to a decline in primary dealer activities as a potential policy concern, and (2) we compare the effectiveness of the primary dealer system, the direct bidding system, and the syndicate bidding system, and find that the primary dealer system achieves significantly lower funding cost volatilities while maintaining an equal level of costs, thus contributing to the stability of the Treasury auction markets and the status of US Treasury securities as the “global safe assets.”

#### Optimal Civil Justice Design

We study the optimal civil justice design in a two-sided private information environment. The fundamental goals of the civil justice system and the optimal production of evidence are considered in our design. We characterize the optimal civil justice mechanism, and demonstrate that this mechanism maximizes social welfare by providing access to justice to the victims and maximal compensation to the victims confronting liable injurers at the minimum expected cost of producing evidence. We show that full revelation of private information requires the production of evidence only in a subset of legal cases. We provide conditions for an optimal cost-allocation rule.

#### Optimal Contests with Incomplete Information and Convex Effort Costs

I investigate the design of effort-maximizing mechanisms when agents have both private information and convex effort costs, and the designer has a fixed prize budget. I first demonstrate that it is always optimal for the designer to utilize a contest with as many participants as possible. Further, I identify a necessary and sufficient condition for the winner-takes-all prize structure to be optimal. When this condition fails, the designer may prefer to award multiple prizes of descending sizes. I also provide a characterization of the optimal prize allocation rule for this case. Lastly, I illustrate how the optimal prize distribution evolves as contest size grows.

#### Optimal Disclosure in All-pay Auctions with Interdependent Valuations

We study all-pay auctions with one-sided private information and interdependent valuations. To sharpen the competition and maximize revenue, the auction organizer can design an information disclosure policy through Bayesian persuasion about the bidder with private information. Depending on the bidders’ relative strengths and the degree of valuation dependence, the revenue-maximizing disclosure policies can take the form of partial disclosure, full disclosure, or no disclosure. We also show that relative to the no-disclosure benchmark, optimal information disclosure can sometimes improve allocative efficiency, but will always hurt the bidders’ total welfare in the resulting all-pay auction.

#### Optimal Disclosure Windows

#### Optimal Feedback in Contests

#### Optimal Forbearance of Bank Resolution

This paper analyzes a regulator’s optimal strategic delay of resolving banks when the regulator’s announcement of the intervention delay endogenously affects the depositors’ run propensity. Given intervention, the regulator either liquidates the remaining illiquid assets or continues managing the assets (suspension intervention) at a reduced skill level. In either case, I show the depositors may react to a more conservative policy by preempting the regulator: the depositors run on the bank more often ex ante if the regulator tolerates fewer withdrawals until intervention. A policy of never intervening can leave the bank more stable than a conservative intervention policy.

#### Optimal Hotelling Auctions

We derive the optimal mechanism for a designer with products at each end of the Hotelling line for sale. Buyers have linear transportation costs and private information about their locations. These are independent draws from a commonly known distribution. Two independent auctions are optimal if and only if two independent auctions are efficient. Otherwise, the problem exhibits countervailing incentives and worst-off types that are endogenous to the allocation rule. Combining a saddle point property with an appropriate ironing procedure allows us to characterize the optimal selling mechanism as a function of a single parameter and to derive associated comparative statics. The optimal mechanism is always ex post inefficient; it involves entering a set of types into a lottery with positive probability. This set and the associated ex post lotteries vary non-trivially with the problem parameters. A two-stage clock auction involving coarse bidding implements the optimal selling mechanism in dominant strategies.

#### Optimal Incentive for Innovation Adoption

We study innovation adoption as social learning featuring payoff externalities, i.e., adoption behavior not only provides information, but also directly affects non-adopters’ payoffs. With positive payoff externality, the equilibrium is unique and both the equilibrium and the social optimum are gradual. Moreover, due to information and payoff free-riding, equilibrium adoption is too slow. With negative payoff externality, there may be discrete jumps in the equilibrium and socially optimal adoption. There is now a family of equilibria, each characterized by a jumping point, with the slowest one being the most efficient. However, even the slowest equilibrium is too slow. Our results suggest that policies to incentivize innovation adoption when payoff externality is positive and negative are distinct in nature: with positive externality, optimal policy seeks to offset externality by subsidies or taxes; with negative externality, optimal policy seeks to coordinate.

#### Optimal mechanism for the sale of a durable good

A buyer wishes to purchase a durable good from a seller who in each period chooses a mechanism under limited commitment. The buyer’s valuation is binary and fully persistent. We show that posted prices implement all equilibrium outcomes of an infinite-horizon, mechanism selection game. Despite being able to choose mechanisms, the seller can do no better and no worse than if he chose prices in each period, so that he is subject to Coase’s conjecture. Our analysis marries insights from information and mechanism design with those from the literature on durable goods. We do so by relying on the revelation principle in Doval and Skreta (2020).

#### Optimal menu of tests

I study the optimal design of menus of tests. Prior to taking a binary decision, accept or reject a privately informed agent, a decision-maker (DM) can perform one test from a restricted set. For example, the restriction can come from information processing or technological constraints. The DM wants to accept a subset of types whereas the agent always wants to be accepted. Instead of choosing the test himself, the DM lets the agent choose a test from a menu. The choice itself then serves as an additional dimension for information revelation. I characterise when a menu is optimal and show that the DM does not benefit from committing to an action. Using these results, I show conditions under which the DM wants or does not want to include strictly less informative tests in the menu. I also define an order on tests that characterises which tests are part of an optimal menu.

#### Optimal Multi-Dimensional Mechanisms

We characterize the properties of optimal selling mechanisms for the multi-dimensional, multi-good auction and monopolistic screening problems. In particular, for auction settings, we prove that the participation region in the optimal allocation does not depend on the number of bidders. For monopolistic screening settings, we compute the optimal selling mechanisms for several novel settings.

#### Optimal Review of Conduct with Informative Prior Audits

We consider a principal-agent setting wherein the principal may reward (or punish) agents upon completing a review of noisy signals pertaining to the agent’s behavior. Only agents who are audited are reviewed. The audit process itself is (weakly) informative of agents’ behavior, because agents who act prosocially are (weakly) more likely to be audited. A reward generates reputational benefits in addition to its monetary value, and audits may trigger similar reputational impacts. We characterize the optimal review process, and identify the factors that affect how conservative or liberal the review process ought to be, including how visible the audit process is to third parties. We explain how our results pertain to important two-step review processes, including: arrests followed by trials; nominations followed by reviews; tax audits followed by compliance reviews; and suits followed by judicial review.

#### Optimal Scoring Rules for Multi-dimensional Effort

This paper develops a framework for the design of scoring rules to optimally incentivize an agent to exert a multi-dimensional effort. This framework is a generalization to strategic agents of the classical knapsack problem (cf. Briest et al., 2005; Singer, 2010) and it is foundational to applying algorithmic mechanism design to the classroom. The paper identifies two simple families of scoring rules that guarantee constant approximations to the optimal scoring rule. The truncated separate scoring rule is the sum of single dimensional scoring rules that is truncated to the bounded range of feasible scores. The threshold scoring rule gives the maximum score if reports exceed a threshold and zero otherwise. Approximate optimality of one or the other of these rules is similar to the bundling or selling separately result of Babaioff et al. (2014).

#### Optimal Self-Screening and the Persistence of Identity-Driven Choices

I analyze a model in which agents choose whether to undertake a task with an individual-specific probability of success of which they only have a noisy perception. I show how, when agents do not have the tools to correct for noise as a Bayesian would, they can use statistics about the prevalence of their social group among the successful individuals in the task to bias their noisy perception in a direction contingent on their social type and limit the adverse effects of the noise on decision making. This optimal self-screening can improve decision making on average, even when the statistics are irrelevant in a Bayesian sense. Differences in representation across social groups induce differential influences on choice across social types. I show the existence of a stable population equilibrium, not driven by ability differences, in which a priori identical social groups make different choices, fuelling the asymmetries in the representation of social groups among those successful.

#### Optimal Sequential Experimentation

A decision-maker (DM) acquires payoff-relevant information before making an irreversible decision, but said information is neither public nor for sale. Thus, the DM must experiment, i.e., manage a noisy, continuous-time signal process. This restriction is salient in business and policy applications, e.g., conducting R&D, investing in a startup, or proposing or changing a law. Zhong (2022) finds that the DM only acquires infrequently-arriving, precise (Poisson) information if she could flexibly purchase it. In contrast, a decision-maker required to experiment finds that frequently-arriving, imprecise (Gaussian) and Poisson information are perfect substitutes even though the experiment set is large. This observation allows me to always construct an optimal experiment only generating Gaussian information. I further provide a numerical example where optimal experiments *never* generate Poisson information.

#### Optimal Sharing in Social Dilemmas

Public goods games are frequently used to model strategic aspects of social dilemmas and to understand the evolution of cooperative behavior among members of a group. While providing a baseline case, a (local) public goods model implies an equal sharing of returns. This appears to be an unsatisfying modelling choice in contexts where contributors are heterogeneous and returns can be divided freely. Furthermore, it is intrinsically linked to the negative effect of inequality on cooperation, which is observed both theoretically and experimentally. To better understand the link between inequality and cooperation when returns can be shared flexibly, we characterize sharing behavior that maximizes contributions in an infinitely repeated voluntary contribution game, where players differ in both their endowments as well as the productivities of their contributions. In sharp contrast to egalitarian sharing, we find that endowment inequality makes cooperation easier to sustain when returns can be shared unequally. Perhaps surprisingly, this qualitative relation between endowment inequality and cooperation is independent of players’ productivities. We derive a unique sharing rule as a function of productivities and endowments that is weakly superior to all other sharing rules. This rule generically departs from both equal as well as proportional sharing. If inequality is high, for example, individuals with the highest endowment need to be compensated more in absolute terms, but their relative share may be significantly less than their proportional contribution. Our analytical findings are qualitatively supported by numerical simulations of simple evolutionary learning dynamics.

#### Optimal Task Assignments

A principal wields task assignment power over her team of agents working on a project comprising multiple tasks. Working on a task generates positive spillovers and substitutes for efforts at similar tasks while incurring a convex cost. An agent assigned to a task with many links is prone to free-riding. Assigning him to extra neighboring tasks will internalize his externalities, but effort costs will surge due to convexity. I focus my analysis on the class of key-peripheral networks. They are composed of peripheral tasks, which are linked to one other task, and key tasks, which are linked to at least two peripheral tasks. I show that the modular assignment, which assigns each agent to a module consisting of a key task and its set of peripheral tasks, resolves the free-rider problem. Moreover, the modular assignment implements a unique equilibrium, which is a minimum dominating set. Modularization is optimal if the network structure is sufficiently specialized. Specialization captures the trade-off between mitigating free-riding and convexity on key-peripheral networks. Projects exhibit high specialization if each key task is linked to many peripheral tasks, but not linked to a lot of other key tasks.

#### Optimal Verification of Rumors in Networks

We study the diffusion of a true and a false message when agents are biased and able to verify messages. As a recipient of a rumor who verifies it becomes informed of the truth, a higher rumor prevalence can increase the prevalence of the truth. We uncover conditions such that this happens and discuss policy implications. Specifically, a planner aiming to maximize the prevalence of the truth should allow rumors to circulate if: verification overcomes ignorance of messages, transmission of information is relatively low, and the planner’s budget to induce verification is neither too low nor too high.

#### Optimality of weighted contracts for multi-agent contract design with a budget

We study a contract design problem between a principal and multiple agents. Each agent participates in an independent task with binary outcomes (success or failure), in which it may exert costly effort towards improving its probability of success, and the principal has a fixed budget which it can use to provide outcome-dependent rewards to the agents. Crucially, each agent’s reward may depend not only on whether she succeeds or fails, but also on whether other agents succeed or fail, and we assume the principal cares only about maximizing the agents’ probabilities of success, not how much of the budget it expends.

We first show that a contract is optimal for some objective if and only if it gives no reward to unsuccessful agents and always splits the entire budget among the successful agents. An immediate consequence of this result is that piece-rate contracts and bonus-pool contracts, two types of contracts which are well-studied and motivated in the literature on multi-agent contract design, are never optimal in this setting. We then show that for any objective, there is an optimal priority-based weighted contract, which assigns positive weights and priority levels to the agents, and splits the budget among the highest-priority successful agents, with each such agent receiving a fraction of the budget proportional to her weight. This result provides a significant reduction in the dimensionality of the principal’s optimal contract design problem and gives an interpretable and easily implementable optimal contract.

Finally, we discuss an application of our results to the design of optimal contracts with two agents and quadratic costs. In this context, we find that the optimal contract assigns a higher weight to the agent whose success it values more, irrespective of the heterogeneity in the agents’ cost parameters. This suggests that the structure of the optimal contract depends primarily on the bias in the principal’s objective and is, to some extent, robust to the heterogeneity in the agents’ cost functions.

#### Orchestrating Organizational Politics: Baron and Ferejohn meet Tullock

This paper examines the optimal design of the rules that govern the process of dividing a fixed surplus and recognizing a proposer inside an organization. The process is modeled as a multiplayer sequential bargaining game with costly recognition. The designer is endowed with two instruments: first, she sets the voting rule, i.e., the minimum number of favourable votes required to approve a proposal; second, she can manipulate the mechanism of proposer recognition, which is modeled as a contest. That is, she can bias the contest for recognition in favor of certain players by imposing multiplicative weights and additive headstarts on their outputs. The design objective accommodates diverse preferences. We show that when the designer can flexibly alter both the voting rule and the recognition mechanism, the optimum can be achieved by a dictator voting rule and a biased recognition mechanism without headstarts, which reduces to a standard biased rent-seeking model. When the voting rule is kept fixed, the optimum may involve headstarts. When the recognition mechanism is kept fixed, a k-majority rule with k>1 can be optimal.

#### Outside Options and Optimal Bargaining Dynamics

#### Outside options, reputations, and the partial success of the Coase conjecture

A buyer and seller bargain over a good’s price in continuous time; the buyer has a private value $v\in [\underline v,\overline v]$ and a positive outside option $w\in [\underline w,\overline w]$. Additionally, bargainers can either be rational or committed to some fixed price. When the sets of commitment types and buyer values are rich and the probability of commitment vanishes, outcomes are approximately equivalent to the seller choosing a take-it-or-leave-it offer below $\max\{\underline w,\underline v/2\}$. Although there is minimal delay, outcomes need not be efficient as the buyer sometimes chooses her outside option. Seller payoffs may increase in the buyer’s outside option.

#### #Protest

#### Pareto Gains of Pre-Donation in Monopoly Regulation

The Revelation Principle implies that given any admissible social welfare function, the outcome of Baron and Myerson’s (1982) (BM) optimal direct-revelation mechanism under incentive constraints cannot be dominated by any other mechanism in expected utilities. However, since the expected total surplus varies with a change in the social welfare function, Pareto improvements should be possible if the monopolist and consumers can agree, by means of side payments that reveal no additional information to the regulator, on the use of an alternative social welfare function which would generate a lower expected deadweight loss. We check the validity of this intuition by integrating the BM mechanism with an induced cooperative bargaining model where unilateral pre-donation by consumers or the monopolist is allowed. Under this new mechanism monopolist’s pre-donation in the *ex-ante* stage always leads to *ex-ante* Pareto improvement while a certain amount of it eliminates the expected deadweight loss. Moreover, if optimally designed in the *interim* stage, the monopolist’s pre-donation may also lead under some cost parameters to *interim* (and also *ex-post*) Pareto improvement. Consumers, on the other hand, have no incentive to make a unilateral pre-donation, nor to reverse the optimal pre-donation of the monopolist.

#### Pareto Inefficient Unraveling in a Matching Market

#### Percolation Games: A bridge between Game Theory and Analysis

#### Perfect Bayesian Persuasion

A sender commits to an experiment to persuade a receiver. We study attainable sender payoffs, accounting for sender incentives for experiment choice, and not presupposing a receiver tie-breaking rule when indifferent. We characterize when the sender’s equilibrium payoff is unique and so coincides with her value in Kamenica and Gentzkow (2011). A sufficient condition is that every action which is receiver-optimal at some belief over a set of states is uniquely optimal at some other such belief—a generic property for finite models. In an extension, this uniqueness generates robustness to imperfect sender commitment.

#### Perfect Robust Implementation by Private Information Design

This paper studies the principal-agent framework in which the principal wants to implement his first-best action. The principal privately selects a signal structure about the unknown state of the agent whose preferences depend on the principal’s action, the state and a privately known agent’s type. The agent privately observes the generated signal and reports it to the principal. We show that by randomizing between two perfectly informative signal structures, the principal can elicit perfect information from the agent about the state and implement his first-best action regardless of the agent’s type. The key idea is that signal structures form posterior beliefs, which induce actions with opposite reactions to agent’s messages. This sustains agent’s truthtelling and allows the principal to implement his first-best action upon learning the state. As to economic applications, we consider the bilateral-trade model and show that the seller can extract the full surplus from the privately informed buyer with non-quasilinear preferences and multi-dimensional information.

#### Persistent Private Information Revisited

This paper revisits Williams’ (2011) continuous-time model of optimal dynamic insurance with persistent private information and corrects several errors in that paper’s analysis. We introduce and study the class of self-insurance contracts that are implementable as consumption-saving problems for the agent with constant taxes on savings chosen by the principal. We show that the contract asserted to be optimal in Williams (2011) is the special self-insurance contract with zero taxes. When the agent’s private endowment is mean-reverting, that contract is strictly dominated by the optimal self-insurance contract, which imposes a strictly positive tax, induces immiseration when the rate of mean-reversion is high, and sends the agent to bliss when the rate of mean-reversion is low. When the agent’s endowment is not mean-reverting, the contract derived in that paper is, in fact, optimal among all incentive compatible contracts; we provide a new explanation for its properties in terms of the agent’s indifference among all reporting strategies. These results extend to the natural discrete-time analogue of the model. Separately, Williams’ (2011) first-order approach to incentive compatibility relies on an erroneous and unjustified assumption on the space of feasible reporting strategies; our analysis does not.

#### Persuading a Manipulative Agent

In a dynamic Bayesian persuasion game, a sender is seeking approval from a series of receivers before a deadline. I assume that only the receiver can verifiably disclose the current experiment to the next receiver, while the sender cannot. In this case, by deciding to hide the information or not, a receiver manipulates the information used by the following receiver. This manipulation power makes delay possible in equilibrium when receivers are naïve. In this naïve case, if actions are binary, manipulation power can benefit a receiver while weakly hurting the sender. But with sophisticated receivers, there is no incentive to delay.

#### Persuading an Informed Committee

A biased sender seeks to persuade a committee to vote for a proposal by providing public information about its quality. Each voter has some private information about the proposal’s quality. We characterize the sender-optimal disclosure policy under unanimity rule when the sender can versus cannot ask voters for a report about their private information. The sender can only profit from asking agents about their private signals when the private information is sufficiently accurate. For all smaller accuracy levels, a sender who cannot elicit the private information is equally well off.

#### Persuasion in Random Networks

This paper studies a Bayesian persuasion problem in a connected world. A sender wants to induce receivers to take some action by committing to a signal structure about a payoff-relevant state. I ask how information provision is affected by a random network when signals are shared among neighbors. Receivers differ in their prior beliefs; the sender wants to persuade some receivers without dissuading the others. I present and characterize novel strategies through which the network is exploited. These strategies can prove useful if the network is sufficiently segregated. In such a case, connectivity can be beneficial to the sender. When some receivers are especially hard to persuade, exploiting the network becomes more attractive. A less informative signal structure which does not exploit the network is, however, preferred when the other receivers are especially hard to dissuade. Therefore, polarization has an ambiguous effect on the informativeness of the optimal signal structure.

#### Persuasion with Coarse Communication

We study games of Bayesian persuasion where communication is coarse. This model captures interactions between a sender and a receiver, where the sender is unable to fully describe the state or recommend all possible actions. The sender always weakly benefits from more signals, as it increases their ability to persuade. However, more signals do not always lead to more information being sent, and the receiver might prefer outcomes with coarse communication. As a motivating example, we study advertising where a larger signal space corresponds to better targeting ability for the advertiser, and characterize the conditions under which customers prefer less targeting. In a class of games where the sender’s utility is independent from the state, we show that an additional signal is more valuable to the sender when the receiver is more difficult to persuade. More generally, we characterize optimal ways to send information using limited signals, show that the sender’s optimization problem can be solved by searching within a finite set, and prove an upper bound on the marginal value of a signal. Finally, we show how our approach can be applied to settings with cheap talk and heterogeneous priors.

#### Persuasion with Hard and Soft Information

A privately informed sender with state-independent preferences communicates with an uninformed receiver about a two-dimensional state. The sender can verifiably disclose the state’s first dimension with some probability, and can communicate about both dimensions via cheap talk. When the two dimensions are positively dependent, unravelling occurs – i.e. the sender fully reveals evidence whenever he has it – if and only if the sender has evidence with probability one. When unravelling does not occur, the model features multiple equilibria. Across equilibria, I show that those featuring more disclosure are worse for the sender, with the disclosure-minimizing equilibrium being sender-best. Comparative statics results indicate a substitution effect between communication via cheap talk and disclosure. I fully characterize the sender-optimal equilibrium for a few applications, and provide an extension to multiple unverifiable dimensions and non-monotonic sender utility under certain equilibrium selection rules.

#### Piecemeal: A Step-by-Step Algorithm for the Two-Person Allocation of Indivisible Items

Assume that two players, A and B, strictly rank *n* indivisible items from best to worst. Piecemeal starts with A’s and B’s top-ranked items. If they are different, each player receives that item, which is an envy-free assignment; if they are the same, this item goes into a *contested pile.*

Assume *x* is the players’ top-ranked item in the contested pile. Let *b _{A}* and *b _{B}* be the bundles of A and B—comprising at least two lower-ranked items—that each player *minimally* prefers to *x*. If *b _{A}* and *b _{B}* are different, and either A indicates it prefers *x* to *b _{B}*, or B indicates it prefers *x* to *b _{A}*, there is an envy-free assignment of these items; if *b _{A}* and *b _{B}* are the same, there is none. If there is no such assignment, skip *x* and repeat for the next top-ranked item. The resulting allocation of contested items, which will be partial if there are skipped items, may miss a complete envy-free allocation, but it is much easier to apply than other 2-person fair-division algorithms for indivisible items.
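The opening phase of the procedure (allocating differing top choices and building the contested pile) can be sketched in code. This is an illustrative sketch only, not the paper’s specification; the function name `first_phase` is invented, and the resolution of the contested pile via minimally preferred bundles is not modeled.

```python
# Illustrative sketch of Piecemeal's opening phase, assuming each
# player's ranking is a list of the same n items from best to worst.

def first_phase(rank_a, rank_b):
    """Allocate items on which the players' top choices differ;
    identical top choices go into the contested pile."""
    a_items, b_items, contested = [], [], []
    remaining = set(rank_a)
    while remaining:
        top_a = next(x for x in rank_a if x in remaining)
        top_b = next(x for x in rank_b if x in remaining)
        if top_a != top_b:
            a_items.append(top_a)      # envy-free: each gets own favorite
            b_items.append(top_b)
            remaining -= {top_a, top_b}
        else:
            contested.append(top_a)    # both want the same item
            remaining.remove(top_a)
    return a_items, b_items, contested
```

For rankings `[1, 2, 3, 4]` and `[2, 1, 3, 4]`, A receives item 1, B receives item 2, and items 3 and 4 are contested.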

#### Pioneers and Followers: Innovation with Heterogeneous Beliefs

#### Polarization and Media Bias

This paper presents a model of partisan media trying to persuade a sophisticated and heterogeneous audience. We base our analysis on a Bayesian persuasion framework where receivers have heterogeneous preferences and beliefs. We identify an intensive-versus-extensive margin trade-off that drives the media’s choice of slant: Biasing the news garners more support from the audience who follows the media but reduces the size of the audience. The media’s slant and target audience are qualitatively different in polarized and unimodal (or non-polarized) societies. When the media’s agenda becomes more popular, the media become more biased. When society becomes more polarized, the media become less biased. Thus, polarization may have an unexpected consequence: It may compel partisan media to be less biased and more informative.

#### Policy Compliance and Polarization during the Pandemic

We construct a Bayesian network game to study individuals’ compliance (or lack thereof) with public health mandates, such as social-distancing measures against Covid.

Agents form their networks to minimize the cognitive dissonance that arises from the mismatch between the compliance behaviors implied by their own ideologies and circumstances and those of peers in their networks. When agents’ ideologies are immune to outside influence, whether from interaction with neighbors in their social networks or from exogenous shocks such as political polarization, a single giant connected component emerges, which keeps communication open and behaviors stable at the initial distribution. However, when we introduce exogenous shocks to the ideologies of a select few agents, referred to here as the “political elites”, we find that, provided individuals place sufficient weight on the actions of their peers when choosing their own behaviors, two disparate network communities emerge. These communities partition the network and the action space in two, which reinforces the polarizing force of the exogenous shock and further alienates the two communities from each other. We arrive at the same conclusion if we allow individuals to adjust their ideologies over time to maximize their utility.

#### Policy Experiences

#### Policy Improvement for Additive Reward, Additive Transition Stochastic Games with Discounted and Average Payoffs

We give a policy improvement algorithm for two-person zero-sum stochastic games with additive reward and additive transition, under both discounted and Cesàro average payoffs.
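For intuition, the single-agent (MDP) analogue of policy improvement with discounted payoff can be sketched as follows. This is a standard textbook sketch, not the paper’s two-person ARAT algorithm (which alternates improvement steps between the two players); all names are illustrative.

```python
import numpy as np

# Textbook policy iteration for a discounted Markov decision process:
# evaluate the current policy exactly, then improve it greedily.

def policy_iteration(P, r, beta=0.9):
    """P[a] is an (n x n) transition matrix, r[a] an n-vector of
    rewards for action a; beta is the discount factor."""
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # evaluation: solve (I - beta * P_pi) v = r_pi
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        r_pi = np.array([r[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - beta * P_pi, r_pi)
        # improvement: greedy one-step lookahead
        Q = np.array([r[a] + beta * P[a] @ v for a in range(n_actions)])
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy
```

Each iteration weakly improves the value vector, so the loop terminates after finitely many distinct stationary policies.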

#### Policy-advising competition and endogenous lobbies

We investigate competition between experts with different motives. A policy-maker has to implement a policy and can either acquire information herself or hire a biased but well-informed expert. We show that the policy-maker delegates the decision to the expert if the latter cares sufficiently about the policy. In particular, the expert acts as an advisor (positive price) if her bias is small and as a lobbyist (negative price) otherwise. We then introduce an unbiased expert who cares about her reputation. We show that competition may force the biased expert to turn lobbyist. Finally, the effect of competition on social welfare depends on whether the policy is more important for society than for the policy-maker. In particular, if society deems the policy not important, welfare improvements from hiring the unbiased expert may arise when her expertise is low.

#### Policymaking in Times of Crisis

How do crises influence an executive’s willingness to implement policy reforms? While existing work focuses on how crises impact voters’ demand for reform, we instead investigate how they alter politicians’ incentives to supply policy experimentation, even if the crisis does not shift voters’ policy preferences. To study this problem, we develop a model of elections and policy experimentation. In our setting, voters face uncertainty about their optimal policy and politicians’ ability to manage a crisis.

We show that extreme reforms generate more information for voters about their optimal platform. Consequently, the incumbent has electoral incentives to engage in information control. At the same time, a crisis represents an exogenous test for the incumbent, who must prove competent enough to successfully manage the country during turbulent times. Therefore, a crisis has an (independent) impact on the incumbent’s electoral prospects, and this may influence his incentives to engage in risky policy experiments. We find that, in contrast to the conventional wisdom, a crisis induces bolder policy reforms only when the incumbent is sufficiently likely to be competent. If the incumbent is relatively unlikely to be competent, then the crisis instead results in policies that are closer to the status quo. As such, our model qualifies the standard intuition on this matter, and potentially allows us to make sense of the mixed results emerging in the empirical literature.

#### Political Bargaining under Incomplete Information about Public Reaction

#### Posturing and Bluffing in Bargaining

We study the initial posture decision in bargaining and how it affects the outcome when, if bargaining fails, a resolution stage assigns payoffs that depend on the posture. We consider bargaining situations where the proposer claims a payment from the responder. The proposer chooses the initial posture (a claim) before the bargaining starts, and the resolution stage assigns payoffs if bargaining fails. Before the resolution stage starts, the proposer can revise his position, up to a level given by his private commitment level. We show that posturing and bluffing are substitutes: the proposer chooses a high posture to signal his commitment level and a lower posture to hide his type. The proposer maximizes the initial posture for intermediate commitment levels. Applications include overcharging in the plea-bargaining process, with the trial as the resolution stage, and troop mobilization in an international conflict, where war is the resolution stage.

#### Power Consolidation in Groups

I develop a model of how a society’s distribution of political power and economic resources evolves over time. Multiple lineages of players compete by accumulating power, which is modeled as an asset that increases the probability of winning conflicts over resources. This model provides sharp equilibrium predictions for how a society’s distribution of power evolves and whether it approaches a dictatorial, oligarchic, or inclusive regime in the long run. My main result shows that power and resources *inevitably* fall into the hands of a few when political competition is left unchecked in large societies. In addition to addressing a longstanding empirical puzzle, this result also suggests that persistently rising inequality observed in large countries such as the United States will not self‐correct; in light of this, I also provide insights into the policy interventions that can be used to counteract rising inequality in large societies.

#### Predicting Choice from Information Costs

An agent acquires a costly flexible signal before making a decision. We explore the degree to which knowledge of the agent’s information costs help predict her behavior. We establish an impossibility result: learning costs alone generate no testable restrictions on choice without also imposing constraints on actions’ state-dependent utilities. By contrast, for most utility functions, knowing both the utility and information costs enables a unique behavioral prediction. When the utility function is known to belong to a given set, we provide an exact characterization of rationalizable behavior. Finally, we show that for smooth costs, most choices from a menu uniquely pin down the agent’s decisions in all submenus.

#### Preventive-Service Fraud in Credence Good Markets

Fraud in markets for preventive services is persistent and pervasive. Examples include preventive dental care and automotive maintenance intended to prevent problems that, if they materialized, would require costly treatment or repair. The market is modeled as a stochastic dynamic game of incomplete information in which the players are customers and service providers, and it is analyzed using the notion of weak perfect Bayesian equilibrium. The services provided are credence goods because customers lack the expertise necessary to assess the need for the recommended service both ex ante and ex post. In such markets, fraud is a prevalent equilibrium phenomenon that is somewhat mitigated by customers’ loyalty and providers’ reputation.

#### Price of information in the participation game

We propose a variant of the participation game studied by Palfrey and Rosenthal. In our model, agents (voters) do not know their preferred candidates unless they pay a cost (time/money) to study that private information. We analyze different types of voters and characterize the symmetric Nash equilibria for each type of voter. We also discuss how public information and the price of (private) information affect the existence, number, and location of the symmetric equilibria. Our results suggest that it is possible for candidates to increase their chance of winning by giving untruthful public information. And counter-intuitively, we show that in some cases decreasing the price of information will result in a lower probability that agents will pay for that information in symmetric equilibria.

#### Price Steering in Two Sided Markets

We study the incentives of a two-sided platform to segment the market by providing personalized search results. In our environment, a monopolistic platform is in charge of matching sellers to buyers. Upon being matched, each buyer-seller pair negotiates prices. If they choose to transact, the platform receives a commission fee that is proportional to the value of the transaction. The platform is assumed to have full information about customers’ and sellers’ outside options. We show that in this environment the platform may have incentives to prioritize finding feasible matches for more expensive products, so as to inflate market prices and, thus, the commissions it receives from transactions. By doing this, the platform maximizes the number of transactions, which can generate excess liquidity.

#### Price System versus Rationing: Inequality-aware Market Design

#### Pricing and Perpetual Royalties with Repeated Resale

This paper considers a durable object that is repeatedly resold bilaterally. The results highlight differences between contracting environments which have become practical as record keeping technologies improve. When each owner sets a price unilaterally, trade is reduced relative to one time sales. Fixed royalties to the prior owner, mandated in some countries, are counterproductive: they lower the prior owner’s value. A dynamic contract that maximizes profits for the first owner achieves efficiency in all but the first sale, without achieving full surplus extraction. It can be interpreted as nonlinear perpetual royalties, which have been discussed especially in digital art markets.

#### Private Disclosures in Competing Mechanisms

#### Private Private Information

In a *private* private information structure, agents’ signals contain no information about the signals of their peers. We study how informative such structures can be, and characterize those that are on the Pareto frontier, in the sense that it is impossible to give more information to any agent without violating privacy. In our main application, we show how to optimally disclose information about an unknown state under the constraint of not revealing anything about a correlated variable that contains sensitive information.

#### Probabilistic spatial power indices

In this work we study probabilistic Owen-Shapley spatial power indices, which are generalizations of the Owen-Shapley spatial power index introduced by Shapley (1977). We provide an explicit formula for calculating these spatial indices for unanimity games and give an axiomatic characterization of the family of probabilistic Owen-Shapley spatial power indices. We show that this family of spatial power indices can be obtained by means of the axioms employed by Peters and Zarzuelo (2017) to characterize the Owen-Shapley spatial power index, dropping an invariance axiom and adding continuity. Specifically, we employ an equal power change property, a spatial dummy property, anonymity, a positional invariance property, and positional continuity.

We also consider the model in which there is a finite number of issues R. In this case continuity is no longer satisfied, and only three axioms characterize the family of probabilistic Owen-Shapley spatial power indices associated with R: an equal power change property, a spatial dummy property, and anonymity.

Some examples are also given.

#### Progress, Delays, and the Timeliness of Reporting

This paper studies the optimal contractual arrangement when managers have private information about the progress and delays during firm operations or product development. The optimal contract may grant the manager a discretionary period during which he has full autonomy on whether and when to report any progress or delays. If success is not achieved by the end of the discretionary period, the manager is given the incentives to report his private information as soon as it becomes available via a soft deadline that comprises random extensions and termination. The optimal contracts display commonly observed features in practice and generate novel implications for the design of accounting standards and regulations for the timeliness of managerial disclosures.

#### Providing Incentives with Private Contracts

Agents working together to produce a joint output care about each other’s incentives. Because real world contracts are typically private information, observed only by their direct signatories, agents are vulnerable to the principal opportunistically reducing the power of other agents’ incentives. When agents are sufficiently skilled, the principal can mitigate this commitment problem by making the most skilled one “team-leader,” with authority to write other agents’ contracts. This endogenous hierarchy, never optimal with public contracts, raises effort, output, and compensation, but distorts effort allocation due to rent extraction. Our model applies to bank syndicates, venture capital, organizational design, and outsourcing.

#### Public Good Provision Re-Examined

I write down the government’s public good provision problem from first principles and, contrary to popular wisdom, find a solution. I call it the cost-sharing pivotal mechanism. Both the statement of the problem and the solution are new. The cost-sharing pivotal mechanism is strategy-proof, employs a utilitarian decision rule—generalizing the efficient decision rule by scaling each individual’s monetary value by an arbitrary welfare weight—satisfies a new participation constraint, satisfies a new fairness principle, and is ex-post budget-balanced asymptotically in large populations. Moreover, I show that the common methodological simplification of taking values to be net of one’s cost share is not without loss of generality, standard participation constraints are not well-suited for the government’s public good provision problem, and the most well-known mechanism for public good provision, the Clarke mechanism, violates a basic fairness constraint: if nothing is produced, no one should pay.

#### Public Persuasion in Elections: Single-Crossing Property and the Optimality of Censorship

We study public persuasion in elections, in which a monopoly designer or multiple competing designers attempt to influence the election outcome by manipulating public information about a payoff-relevant state. We allow for a wide class of designer preferences, ranging from pursuing pure self-interest to maximizing any social welfare function expressed as a weighted sum of voter payoffs (e.g., utilitarian). Our main result identifies a novel single-crossing property and shows that it guarantees the optimality of censorship policies – which reveal intermediate states while censoring extreme states – in large elections under both monopolistic and competitive persuasion. The single-crossing property is (i) generically satisfied when designers are self-interested, or (ii) satisfied for generic designer preferences under a mild assumption on the distribution of voters’ preferences. We also analyze how the structure of the equilibrium censorship policy varies with the designer’s preference and voting rules. Finally, we apply our results to study the welfare impacts of media bias and competition and show that, contrary to common wisdom, increased media competition may in fact harm voter welfare by inducing excessive information disclosure.

**Keywords:** voting, single-crossing property, censorship, Bayesian persuasion, competition in persuasion, information design

**JEL Classification:** D72, D82, D83

#### Quality is in the Eye of the Beholder: Taste Projection in Markets with Observational Learning

We study how misperceptions of others’ tastes influence beliefs, demand, and prices in a market with observational learning. Consumers infer the commonly-valued quality of a good based on the quantity demanded and price paid by other consumers. When consumers exaggerate the degree to which others’ tastes resemble their own, such “taste projection” leads to erroneous and disparate quality perceptions across consumers (i.e., “quality is in the eye of the beholder”). In particular, a consumer’s biased estimate of the good’s quality is negatively related to her own taste. Moreover, consumers’ quality estimates are increasing in the observed price, even when the price would have no influence on the beliefs of rational consumers. These biased beliefs result in perceived valuations that exhibit too little dispersion relative to rational learning and a demand function that is excessively price sensitive. We then analyze how a sophisticated monopolist optimally sets prices when facing short-lived taste-projecting consumers. Projection leads to a declining price path: the seller uses an excessively high price early on to inflate future buyers’ perceptions (e.g., creating “hype”), and then lowers the price to induce a larger-than-rational share to buy. When consumers can instead time their purchase, projection causes late buyers to under-appreciate selection effects, thereby exposing them to systematic disappointment. A final application examines how projection of risk preferences distorts portfolio choice when learning from asset prices.

#### Quality over Quantity

We derive the seller’s utility maximizing selling mechanism in bilateral trade with interdependent values. Due to the interdependencies in valuations, finding the optimal mechanism is an informed seller problem. It turns out that the optimal mechanism is no longer a take-it-or-leave-it offer for the whole capacity; the seller finds it optimal to decrease the quantity of allocation (or the probability of trade) in order to credibly signal her private information to the buyer.

#### Randomly Selected Representative Committees

There are many real-world examples where decisions are made by a committee rather than by all of the members of a court, legislature, or other body. The members of such committees are usually chosen either by random selection or by direct selection by a designated authority. However, neither of these methods satisfies both of the lodestar principles that each member of the body has an equal opportunity of being selected to serve on each committee and that the collective view of each committee is representative of the collective view of all of the members of the body. We present a new committee selection method that has the core benefits of random selection (equal opportunity) and direct selection (the possibility of representativeness) while avoiding their pitfalls. This new method consists of creating a pool of “average” committees in which each member of the body serves on the same number of committees included in the pool, and then randomly selecting a committee from the pool.
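The equal-service side of the pool construction can be illustrated with a small sketch. This models only the equal-opportunity requirement; the paper’s screening of the pool for “average”, representative committees is not reproduced, and all function names are invented for the example.

```python
import random
from collections import Counter

def build_pool(members, k, rounds=3, seed=0):
    """Pool of size-k committees in which every member serves exactly
    `rounds` times; requires k to divide len(members)."""
    assert len(members) % k == 0
    rng = random.Random(seed)
    pool = []
    for _ in range(rounds):
        shuffled = list(members)
        rng.shuffle(shuffled)
        # one shuffled arrangement partitions the body into disjoint
        # committees, so each member serves exactly once per round
        pool += [shuffled[i:i + k] for i in range(0, len(shuffled), k)]
    return pool

def select_committee(pool, seed=None):
    """Uniform random draw from the pool: equal opportunity ex ante."""
    return random.Random(seed).choice(pool)
```

Because every member appears in exactly `rounds` committees in the pool, a uniform draw gives each member the same ex ante chance of serving.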

#### Rank Guarantees in Matching Markets

Centralized matching markets involve many participants operating under limited information about others’ preferences and priority rankings. This paper investigates worst-case outcomes for applicants in a deferred acceptance mechanism. Applicants know their preferences but lack information about others’ preferences and their priority rankings. I show that applicants rarely have good outcome guarantees, even in large markets with IID uniform preferences. These guarantees deteriorate as market size increases or preferences become more positively correlated. Applicants fail to achieve a guarantee of an outcome above any percentile in large markets. The rarity of good rank guarantees highlights the variability of outcomes under deferred acceptance and may imply limits to applicants’ incentives to understand their preferences or report truthfully to begin with. Technically, I demonstrate the connection between poor guarantees and the prevalence of perfect matchings, which makes explicit why outcomes in deferred acceptance are variable.

#### Recursive Rational Inattention Is Entropic

We study a rationally inattentive agent who, each period, acquires costly information about an evolving state and chooses an action. We say that her valuation of dynamic decision problems is recursive if it satisfies the Bellman equation for each problem. The main result is that if her valuation is recursive, then the corresponding cost of information is entropic—that is, linear in the reduction in entropy of beliefs. This result is a converse to Steiner, Stewart, and Matějka (2017), who showed that if the cost is entropic, then her valuation is recursive for each dynamic decision problem.
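For reference, the entropic cost named in the result is the familiar mutual-information form; a sketch in standard notation, with $\lambda$ a cost parameter, $\mu$ the prior, $\sigma$ the chosen signal, and $\mu_s$ the posterior after realization $s$ (the notation here is ours, not the paper’s):

```latex
C(\mu,\sigma) \;=\; \lambda\Big(H(\mu) - \mathbb{E}_{s\sim\sigma}\big[H(\mu_s)\big]\Big),
\qquad
H(\mu) \;=\; -\sum_{\omega}\mu(\omega)\log\mu(\omega).
```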

#### Redistributive Bargaining under the Shadow of Protests

Two agents bargain over how to redistribute resources between themselves and a third party. The third party is not included in the negotiations but can interfere by protesting against proposals under review. Protesting is costly and stochastically successful. When successful, protesting secures the status quo. Stationary equilibria feature either inefficient protests or excessive accommodation. Contingent agreement delay is necessary and sufficient to curb this issue: By enabling immediate punishment for protests, it allows the bargainers to extract the full surplus in equilibrium. Competition is key for this result. Due to impatience, contingent delay is impossible if the bargainers collude.

#### Redistricting with Endogenous Policies

I study the interaction between partisan gerrymandering and the policy positions of candidates at the district level. In current redistricting models, policies are exogenously determined and the behavior of voters is independent of the district they are assigned to. I depart from this assumption and allow for candidates’ positions to depend on the distribution of voters within a district. Given this, a gerrymanderer partitions voters into equi-populous districts in order to maximize the (expected) number of districts won by one party. I find that optimal districts create a wedge between moderate and extreme opponents, encouraging the emergence of extreme candidates. I use my results to investigate the impact of partisan gerrymandering on the level of policy polarization. By diluting the power of moderate voters, optimal redistricting generates a posterior distribution of district representatives that has at least two modes.

#### Regret in Durable-Good Monopoly

I study a dynamic model of durable-good monopoly where the seller cannot commit to future prices and is uncertain about the buyer’s value. I adopt a prior-free approach where the seller minimises lifetime regret against the worst-case type of the buyer. In the unique equilibrium the seller’s worst-case regret against types who purchase at any given time equals the worst-case regret against types who purchase at any other time. The seller cannot profitably deviate even if he could commit to his deviation. Despite this, the equilibrium does not match the commitment outcome. This is because the seller’s objective is endogenously determined by his optimal counterfactual behaviour against each type, which is time-inconsistent. The Coase conjecture holds: in the frequent-offer limit the good is sold immediately at a price equal to the lowest value.