Publications

Micro-level dynamics in hidden action situations with limited information  

Stephan Leitner, Friederike Wall
Available on: arXiv

Abstract: The hidden-action model provides an optimal sharing rule for situations in which a principal assigns a task to an agent who exerts effort to carry it out, while the principal can observe only the task outcome, not the agent's actual action. The hidden-action model builds on somewhat idealized assumptions about the principal's and the agent's capabilities related to information access. We propose an agent-based model that relaxes some of these assumptions. Our analysis focuses in particular on the micro-level dynamics triggered by limited information access. For the principal's sphere, we identify the so-called Sisyphus effect, which explains why the optimal sharing rule generally cannot be achieved when information is limited, and we identify factors that moderate this effect. In addition, we analyze the behavioral dynamics in the agent's sphere. We show that the agent might exert even more effort than would be optimal under unlimited information, which we refer to as excess effort. Interestingly, the principal can control the probability of excess effort via the incentive mechanism. However, how much excess effort the agent ultimately exerts lies outside the principal's direct control.
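
For context, the benchmark that this and the following papers relax is the textbook hidden-action model. A minimal sketch of its LEN variant (linear sharing rule, exponential utility, normally distributed noise), with generic notation that is assumed here and not taken from the paper, reads:

x = a + \theta, \quad \theta \sim N(0, \sigma^2), \qquad s(x) = f + p\,x

CE_{agent} = f + p\,a - \tfrac{r}{2}\,p^2\sigma^2 - \tfrac{1}{2}a^2 \;\Rightarrow\; a^*(p) = p

\max_{p,\,f}\; (1-p)\,a^*(p) - f \quad \text{s.t.}\quad CE_{agent} \ge \overline{CE} \;\Rightarrow\; p^* = \frac{1}{1 + r\,\sigma^2}

Here a is the agent's unobservable effort, \sigma^2 the variance of the environmental noise, r the agent's risk aversion, f the fixed payment, and p^* the second-best premium rate; the specific functional forms are illustrative and not necessarily those used in the papers below.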

Limited intelligence and performance-based compensation: An agent-based model of the hidden action problem

Patrick Reinwald, Stephan Leitner, Friederike Wall
Available on: arXiv

Abstract: Models of economic decision makers often include idealized assumptions, such as rationality, perfect foresight, and access to all relevant pieces of information. These assumptions often ensure the models' internal validity but, at the same time, might limit the models' power to explain empirical phenomena. This paper is particularly concerned with the model of the hidden-action problem, which proposes an optimal performance-based sharing rule for situations in which a principal assigns a task to an agent, and the action taken to carry out this task is not observable by the principal. We follow the agentization approach and introduce an agent-based version of the hidden-action problem in which some of the idealized assumptions about the principal and the agent are relaxed: they have only limited access to information, but are endowed with the ability to gain information and to store it in and retrieve it from their (limited) memory. We follow an evolutionary approach and analyze how the principal's and the agent's decisions affect the sharing rule, task performance, and their utilities over time. The results indicate that the optimal sharing rule does not emerge. The principal's utility is relatively robust to variations in intelligence, while the agent's utility is highly sensitive to limitations in intelligence. The principal's behavior appears to be driven by opportunism, as she withholds a premium from the agent to secure the optimal utility for herself.
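
To make the agentization idea concrete, the following is a minimal, hypothetical Python sketch of how such a simulation might be set up: a principal and an agent interact repeatedly under a linear sharing rule, each keeping only a short memory of past observations, so that the premium rate emerges from search rather than from a closed-form derivation. All parameter names, values, and the search heuristic are illustrative assumptions, not the authors' implementation; the fixed payment f of the sharing rule is omitted for brevity.

import random
from collections import deque

SIGMA = 1.0          # std. dev. of environmental noise (assumed)
RISK_AVERSION = 0.5  # agent's risk aversion (assumed)
MEMORY = 5           # limited memory of both parties (assumed)
PERIODS = 200

def agent_ce(p: float, effort: float) -> float:
    """Agent's certainty equivalent under a linear contract (LEN-style assumption)."""
    return p * effort - 0.5 * RISK_AVERSION * p**2 * SIGMA**2 - 0.5 * effort**2

def principal_payoff(p: float, outcome: float) -> float:
    """Risk-neutral principal keeps the residual share of the outcome."""
    return (1.0 - p) * outcome

random.seed(42)
p, effort = 0.5, 0.5
p_memory = deque(maxlen=MEMORY)  # principal's remembered (premium, payoff) pairs
e_memory = deque(maxlen=MEMORY)  # agent's remembered (effort, CE) pairs

for t in range(PERIODS):
    # Agent: explore a nearby effort level, remember its certainty equivalent
    # under the current contract, and adopt the effort with the highest
    # remembered CE (older entries may stem from earlier contracts, which is
    # one consequence of limited memory).
    candidate_e = max(0.0, effort + random.uniform(-0.05, 0.05))
    e_memory.append((candidate_e, agent_ce(p, candidate_e)))
    effort = max(e_memory, key=lambda pair: pair[1])[0]

    # Environment: the outcome is effort plus noise; the principal observes
    # only this outcome, not the effort itself.
    outcome = effort + random.gauss(0.0, SIGMA)

    # Principal: remember the payoff realized under the current premium rate,
    # then either explore a nearby rate or exploit the best-remembered one.
    p_memory.append((p, principal_payoff(p, outcome)))
    if random.random() < 0.5:
        p = min(1.0, max(0.0, p + random.uniform(-0.05, 0.05)))
    else:
        p = max(p_memory, key=lambda pair: pair[1])[0]

benchmark = 1.0 / (1.0 + RISK_AVERSION * SIGMA**2)
print(f"emergent premium rate p = {p:.2f}, closed-form benchmark p* = {benchmark:.2f}")

The closed-form second-best premium p* = 1/(1 + r·σ²) from the sketch above serves only as the reference point to which an emergent premium rate could be compared.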

Effects of limited and heterogeneous memory in hidden-action situations

Patrick Reinwald, Stephan Leitner, Friederike Wall
Available on: arXiv

Abstract: Limited memory of decision-makers is often neglected in economic models, although it is reasonable to assume that it significantly influences the models' outcomes. The hidden-action model introduced by Holmström also builds on the assumption of unlimited memory. For delegation relationships between a principal and an agent, this model provides the optimal rule for sharing the outcome so that both parties' utilities are maximized. This paper introduces an agent-based model of the hidden-action problem that includes limitations in the cognitive capacity of the contracting parties. Our analysis mainly focuses on the sensitivity of the principal's and the agent's utilities to the relaxed assumptions. The results indicate that the agent's utility drops with limitations in the principal's cognitive capacity. We also find that limitations in the agent's cognitive capacity affect neither his nor the principal's utility. Thus, the agent bears all adverse effects resulting from limitations in cognitive capacity.

Decision-facilitating information in hidden-action setups: An agent-based approach 

Stephan Leitner, Friederike Wall
Published in: Journal of Economic Interaction and Coordination

Abstract: The hidden-action model captures a fundamental problem of principal-agent theory and provides an optimal sharing rule when only the outcome, but not the effort, can be observed. However, the hidden-action model builds on various explicit and also implicit assumptions about the information available to the contracting parties. This paper relaxes key assumptions regarding the availability of information included in the hidden-action model in order to study whether, and if so how fast, the optimal sharing rule is achieved, and how this is affected by the various types of information employed in the principal-agent relation. Our analysis focuses in particular on information about the environment and about feasible actions for the agent to carry out the task. For this, we follow an approach that transfers closed-form mathematical models into agent-based computational models. The results show that the extent of information about feasible options to carry out a task has an impact on performance only if decision-makers are well informed about the environment, and that the decision whether to perform exploration or exploitation when searching for new feasible options affects performance only in specific situations. Having good information about the environment, by contrast, appears to be crucial in almost all situations.


On Heterogeneous Memory in Hidden-Action Setups: An Agent-Based Approach

Patrick Reinwald, Stephan Leitner, Friederike Wall
Published in: SIMUL 2020: The Twelfth International Conference on Advances in System Simulation
Available on: arXiv

Abstract: We follow the agentization approach and transform the standard hidden-action model introduced by Holmström into an agent-based model. Doing so allows us to relax some of the rather “heroic” assumptions incorporated in the model, related to (i) the availability of information about the environment and (ii) the principal’s and the agent’s cognitive capabilities (with a particular focus on their memory). In contrast to the standard hidden-action model, the principal and the agent are modeled to learn about the environment over time, with varying capabilities to process the learned pieces of information. Moreover, we consider different characteristics of the environment. Our analysis focuses on how closely and how fast the incentive scheme that endogenously emerges from the agent-based model converges to the second-best solution proposed by the standard hidden-action model. We also investigate whether a stable solution can emerge from the agent-based model variant. The results show that in stable environments the emergent result can nearly reach the solution proposed by the standard hidden-action model. Surprisingly, the results also indicate that turbulence in the environment leads to stability in earlier time periods.