Theoretical models are simplified approximations of the world: more specifically, thought experiments predicated on assumptions that try to capture first-order aspects of behavior. As an empiricist I cannot knock theorists for using approximations; heck, I do it all the time too (see: asymptotics, or the delta method). But we should ask whether the approximations are reasonable against what we see in the real world. That is,
1. If verisimilitude is not a criterion for assumptions, any result can be reverse-engineered by picking the assumptions that deliver it.
2. If any result can be engineered, then results themselves have no special ontological status.
Exploring the implications of assumptions for its own sake can be technically demanding, but technical demandingness does not make it a more credible way to map reality. This practice generates "bookshelf" models whose practical utility depends on filtering both the assumptions and the implications against data and our beliefs about the real world. Without that filtering we are building a Tower of Babel (or maybe an art museum). (Note that this goes beyond Friedman's famous argument for predictive validity without regard for assumptions: we need to filter assumptions too, because the implications of a model are not unique to one set of assumptions, by 1 and 2.)
How complicated can the problems be that we allow our agents to solve in a model? Is a dynamic program ever admissible as a reasonable assumption on the objective function of an agent? That depends on the situation. If the goal the agent is seeking is sufficiently clear (albeit complicated to achieve), and the agent has ample opportunity to experiment and stumble upon something that works well, it may be reasonable to assume that the agent's actions will converge so that it appears as if the agent were solving such a program. But the validity of the "as if" assumption should be vetted against those conditions, not taken for granted.
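The "as if" point can be made concrete with a toy illustration (my own sketch, not from the essay): in a tiny Markov decision problem with a clear goal and unlimited opportunity to experiment, a trial-and-error learner that never writes down the Bellman equation nonetheless converges to the same policy the dynamic program prescribes. All names and parameters here are hypothetical choices for the sketch.

```python
import random

# A tiny deterministic MDP: states 0..4, actions 0 (left) / 1 (right).
# Reaching state 4 (terminal) pays reward 1; everything else pays 0.
N_STATES, ACTIONS, GAMMA = 5, (0, 1), 0.9
TERMINAL = N_STATES - 1

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(TERMINAL, s + 1)
    return s2, (1.0 if s2 == TERMINAL else 0.0)

def value_iteration(tol=1e-8):
    # The "dynamic program": solve the Bellman optimality equation directly.
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    while True:
        delta = 0.0
        for s in range(TERMINAL):          # terminal state stays at 0
            for a in ACTIONS:
                s2, r = step(s, a)
                cont = 0.0 if s2 == TERMINAL else GAMMA * max(Q[s2])
                delta = max(delta, abs(r + cont - Q[s][a]))
                Q[s][a] = r + cont
        if delta < tol:
            return Q

def q_learning(episodes=2000, alpha=0.2, eps=0.2, seed=0):
    # Trial-and-error learner: never "solves" the program, just experiments
    # (epsilon-greedy exploration) and nudges its estimates toward outcomes.
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        while s != TERMINAL:
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: Q[s][a])
            s2, r = step(s, a)
            cont = 0.0 if s2 == TERMINAL else GAMMA * max(Q[s2])
            Q[s][a] += alpha * (r + cont - Q[s][a])
            s = s2
    return Q

def greedy(Q):
    # The policy implied by a Q-table: best action in each non-terminal state.
    return [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(TERMINAL)]

print("DP policy:        ", greedy(value_iteration()))
print("Q-learning policy:", greedy(q_learning()))
```

After enough experimentation the learner's greedy policy matches the dynamic-programming solution, which is the sense in which it behaves "as if" it solved the program. The vetting question is whether the real-world agent actually faces this kind of setting: a stable goal and many repeated chances to experiment.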
All of this is from an essay by Paul Pfleiderer on "chameleon" models and the misuse of theory in economics: link. (HT @noahpinion)