Publications

Decision-facilitating information in hidden-action setups: An agent-based approach 

Stephan Leitner, Friederike Wall
Published in: Journal of Economic Interaction and Coordination

Abstract: The hidden-action model captures a fundamental problem of principal-agent theory and provides an optimal sharing rule when only the outcome, but not the effort, can be observed. However, the hidden-action model builds on various explicit and implicit assumptions about the information available to the contracting parties. This paper relaxes key assumptions regarding the availability of information included in the hidden-action model in order to study whether and, if so, how fast the optimal sharing rule is achieved, and how this is affected by the various types of information employed in the principal-agent relation. Our analysis particularly focuses on information about the environment and about feasible actions for the agent to carry out the task. For this, we follow an approach that transfers closed-form mathematical models into agent-based computational models. The results show that the extent of information about feasible options to carry out a task only affects performance if decision-makers are well informed about the environment, and that the choice between exploration and exploitation when searching for new feasible options only affects performance in specific situations. By contrast, having good information about the environment appears to be crucial in almost all situations.


On Heterogeneous Memory in Hidden-Action Setups: An Agent-Based Approach

Patrick Reinwald, Stephan Leitner and Friederike Wall
Published in: SIMUL 2020: The Twelfth International Conference on Advances in System Simulation
Available on: arXiv

Abstract: We follow the agentization approach and transform the standard hidden-action model introduced by Holmström into an agent-based model. Doing so allows us to relax some of the rather “heroic” assumptions it incorporates, related to (i) the availability of information about the environment and (ii) the principal’s and agent’s cognitive capabilities (with a particular focus on their memory). In contrast to the standard hidden-action model, the principal and the agent are modeled to learn about the environment over time, with varying capabilities to process the learned pieces of information. Moreover, we consider different characteristics of the environment. Our analysis focuses on how close and how fast the incentive scheme that endogenously emerges from the agent-based model converges to the second-best solution proposed by the standard hidden-action model. We also investigate whether a stable solution can emerge from the agent-based model variant. The results show that in stable environments the emergent result can nearly reach the solution proposed by the standard hidden-action model. Surprisingly, the results indicate that turbulence in the environment leads to stability in earlier time periods.