Abstract (David C. Rose, The Moral Foundation of Economic Behavior): This book explains why moral beliefs can and likely do play an important role in the development and operation of market economies. It then identifies specific characteristics that moral beliefs must have for the people who possess them to be regarded as trustworthy. When such moral beliefs are held with sufficient conviction by a sufficiently high proportion of the population, a high-trust society emerges that supports maximum cooperation and creativity while permitting honest competition at the same time.
Such moral beliefs are not tied to any particular religion and have nothing to do with moral earnestness or the set of moral values; what matters is how they affect the way people think about morality. Such moral beliefs are based on abstract ideas that must be learned, so they are matters of culture, not genes, and are therefore able to explain differences in economic performance across societies.
What kind of characteristics would they have?
The reason for doing that is I had become disenchanted with the progress that we have been making as a profession on what's commonly now known as the Development Puzzle. Basically, economics did really well through the 19th century, the beginning of the 20th century, working out the essential logic of the price system.
And that was a huge triumph, a great gift to mankind. And I think we basically got that right. But as Thomas Kuhn has pointed out, when you have a new paradigm, you always say that things are great; you start to answer a lot of questions; but over time you start to peter out. The usefulness of the paradigm starts to peter out. And that happened with the neoclassical paradigm. So, what then happened? Well, in the 20th century, institutionalism was re-resurrected, I should say--it was already there to some extent--to fill in the gaps.
The basic insight there was that, while there's nothing wrong with neoclassical economics and our understanding of markets per se, we have to recognize that they exist in a context; that they rest on an institutional foundation, as it were. And once we did that, then a whole bunch of puzzles became solvable. We were able to make some real progress, including but not limited to Development Economics.
We certainly made a lot of theoretical progress. I would argue, though, that that has begun to lose steam. We have found that when you drop institutions into less developed countries very often they either do nothing or they are subverted and co-opted and become vehicles of opportunism themselves.
So, something else must be missing from the story. Barry Weingast, who is a political scientist at Stanford, has a great way of putting the problem. He said if you needed a copy of the U.S. Constitution, you could always go to South America because there's a ton of photocopies of it floating around in the form of their Constitutions.
Yet you don't get a United States down there. And you can't make the standard argument that they don't have the requisite conditions, because as recently as the late 19th century, Argentina had higher per capita income than we did. So, they have all the stuff that they need, and they even have, superficially, a Constitution, and so on and so forth.
So, much of the institutional apparatus is there. And yet they don't get what we get. Because apparently--they have a court, but maybe it's not quite like ours. They have laws, but legislation doesn't quite work in terms of how it's enforced. So, there is a puzzle, still, which is fundamentally we don't fully understand why some countries do much better than others.
And you are trying to fill that gap. That's what got me interested in this area in a broad kind of way. So, this thought experiment is to think about what role moral values might play in helping to create prosperity, and you focus on the issue of trust in dealing with strangers in large group situations because that's necessary for specialization.
The way I approach the whole thing is to say: Look, if we are trying to figure out what kind of moral beliefs would do the best job supporting the development and operation of a market system, the first thing we have to do is figure out what exactly needs to be going on to have a well-functioning market system. That stuff's all well known. Basically, Adam Smith is right about this. The issue of distribution is important but not nearly as important as the issue of having enough stuff to divide up in the first place. Really comes down to specialization.
Societies that are able to effectuate dramatic specialization through very large scale production are those that are going to have levels of productivity that are many orders of magnitude greater than other societies. And we've known this for a long time, although it's surprising how few younger economists are really aware of how dramatically different the level of productivity is when you allow specialization. Well, I shouldn't say almost nobody, but many economists don't have the pin factory example memorized, for example.
Which I require my Principles students to do because it is such a shocking increase in productivity. But be that as it may, the question then is: that's what it takes for it to work, but does specialization present any kind of problems? Obviously if it were really easy to effectuate tremendous gains from specialization, everybody would do it. But not everybody does do it. Well, when you have dramatic specialization to increase productivity like that, you are going to invite a problem of localized knowledge that is quite similar to the local knowledge problem Hayek addressed across the whole of society.
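To make that magnitude concrete, here is a back-of-the-envelope sketch using Smith's own rough figures for the pin factory (about 48,000 pins a day from ten specialized workers, versus his guess that a lone, untrained worker could not make even twenty); the numbers are Smith's illustrative estimates, not data from this book.

```python
# Back-of-the-envelope illustration of the pin factory numbers Adam Smith
# reports (Wealth of Nations, Book I, Ch. 1): ten specialized workers making
# upwards of 48,000 pins a day, versus a lone generalist who, Smith guesses,
# could not make even 20. Figures are Smith's rough estimates, not data.

pins_per_day_team = 48_000      # ten workers dividing the task into many operations
workers = 10
pins_per_specialist = pins_per_day_team / workers   # ~4,800 per worker

pins_per_generalist = 20        # Smith's upper bound for an untrained individual

multiple = pins_per_specialist / pins_per_generalist
print(f"Output per specialized worker: {pins_per_specialist:,.0f} pins/day")
print(f"Output per unspecialized worker: {pins_per_generalist} pins/day (at most)")
print(f"Productivity multiple from specialization: at least {multiple:,.0f}x")
```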
As you know, Hayek argued that the price system solves a problem and the problem that is solved is reconciling the localization of knowledge. Because we have a price system, we don't have to know what each other is doing or why. All we have to do is pay the market price, and as a result, we'll pay the full social opportunity cost of using that resource. And that effectuates efficient coordination across the whole of society, even though we don't have to know that much about each other--because everything we need to know is already embodied in that price. That was a fabulous argument. But I would argue that when you look inside firms, which is where all the stuff gets created in the first place, we have a similar kind of local knowledge problem.
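As a minimal sketch of that Hayekian point, with made-up valuations and costs (nothing here is from the book), the following shows how a single market-clearing price lets traders who see only their own private value or cost reproduce the allocation a fully informed planner would choose:

```python
# A minimal sketch of the price-coordination argument: each trader sees only
# their own private value or cost plus the market price, yet the resulting
# trades match the surplus-maximizing allocation a full-information planner
# would pick. All numbers are hypothetical.

buyer_values = [9, 7, 6, 4, 2]    # private willingness to pay
seller_costs = [1, 3, 5, 8, 10]   # private opportunity costs

# A planner with full information matches the highest values with the lowest costs.
planned_trades = sum(1 for v, c in zip(sorted(buyer_values, reverse=True),
                                       sorted(seller_costs)) if v >= c)

# Decentralized version: any price that clears the market will do.
price = 5.5
buys = sum(1 for v in buyer_values if v >= price)   # buyers act on own value only
sells = sum(1 for c in seller_costs if c <= price)  # sellers act on own cost only
assert buys == sells == planned_trades              # same allocation, no shared knowledge

print(f"Market-clearing price ~{price}: {buys} trades, "
      f"same as the full-information benchmark ({planned_trades}).")
```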
The larger a firm is and the more complex its production is, the more likely it is that there are people who know things that nobody else knows. Or even can know. And as a result, if people in that situation are not able to take full advantage of that knowledge, we are just throwing away a tremendous amount of efficiency, much like we would be if we didn't have market prices across all society. The problem within the firm: I don't know if that fad is still going in the business schools any more but there was a big fad in business schools and management and the business literature about the capital stock of knowledge within a firm--that there was a lot of specialized knowledge and localized knowledge that you are talking about embodied in the individual workers.
But they would come and go, so how do you preserve the knowledge that the firm has at any point in time, and use it more efficiently, despite the reality of turnover? I don't know if they made much progress with that; obviously there is one move toward using prices within a firm; I don't think that's been terribly successful. But it's certainly true that at any point in time within any large organization, whether it's a business or a non-profit, there is an immense amount of specialized, sometimes localized, but specialized knowledge that isn't written down anywhere.
It's just embodied in the heads of people who happen to be employees at the time. And how you get that to be used effectively is a major problem for any successful organization. And that's kind of a stock concept of it, and that's certainly correct. But the problem is every bit as daunting in a flow sense, which is what Hayek would have emphasized. That is, things are changing constantly; the problems of today are different from yesterday's.
And they just come at you constantly. And the person who is in the best position to answer those questions is the one who has a great deal of localized knowledge regarding how a particular area of the firm works. What I introduce in the book is that there is a form of opportunism that has never really been codified in the past. It's what I call third-degree opportunism. And that's opportunism of the form that there's an action set, and other people in the firm, or the firm owners or CEOs or whatever, may know a proper subset of that action set; but a person who is on the ground, as it were--I like to say a middle-level manager--knows a much larger set of actions than that.
And if there is an action that is profitable but not the most profitable, known to the person with local knowledge but not known to the others, and the person who possesses the local knowledge knows this, he might pick an action that is good enough but is not the best. And that would not be consistent with maximizing profits and not be consistent with efficiency. And this is a very daunting problem.
I call it third-degree opportunism. It's a very daunting problem because it's a problem that gets worse the bigger firms are. Because the bigger firms are, the more specialized the knowledge, and by definition the more likely you've got a situation in which an individual has an informational advantage over those he would have to answer to or coordinate with.
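A minimal sketch of that structure, with hypothetical payoffs: the owner's known action set is a proper subset of the manager's, so a "good enough" choice is indistinguishable from the best choice as far as anyone else can tell.

```python
# A sketch of "third-degree opportunism" as described above, with hypothetical
# payoffs. The owner knows only a proper subset of the feasible actions; the
# middle manager knows them all. If the manager picks the best action the owner
# knows about, the outcome looks perfectly acceptable, and the forgone profit
# from the truly best action is unobservable.

profits = {                       # profit to the firm from each action
    "A": 100, "B": 120,           # actions the owner also knows about
    "C": 180,                     # action only the manager knows about
}
owner_known_actions = {"A", "B"}  # proper subset of the manager's action set
manager_known_actions = set(profits)

best_owner_can_imagine = max(owner_known_actions, key=profits.get)   # "B"
truly_best = max(manager_known_actions, key=profits.get)             # "C"

# An opportunistic manager can choose "B" (good enough, perhaps less effort or
# more private benefit for him) and never be "caught": the owner's best
# benchmark is also "B", so the 60 units of forgone profit leave no trace.
forgone = profits[truly_best] - profits[best_owner_can_imagine]
print(f"Chosen: {best_owner_can_imagine}, best possible: {truly_best}, "
      f"invisible loss: {forgone}")
```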
You are not talking here about these other phenomena, which I'm going to mention, like shirking, where obviously sometimes an employee can work less hard than his boss might know about and enjoy some leisure on the job. That's not what you are talking about. You are talking about a very specific kind of opportunism, right? I'm talking about a form of opportunism that cuts to the very heart of whether a firm is run in entrepreneurial fashion or in bureaucratic fashion. This is a fundamental tradeoff, because if you aren't able to delegate managerial responsibilities via what we call relational contracts--in other words, contracts that are flexible enough to give discretion to those who possess local knowledge--then we are throwing away all of the efficiency gains, what I would call Hayekian gains, that would come from fully exploiting localized knowledge.
However, relational contracts that would confer that kind of discretion would by definition open themselves up to opportunistic exploitation that constitutes what Bob Frank would call a golden opportunity. The reason why is by definition if nobody else can know what the optimal action is then there is no way you can be in a sense caught cheating because no one else knows what the counterfactual could have been.
Only you know that. So, this is kind of an inescapable problem associated with the efficient use of local knowledge within firms. And I think it's a very deep problem; it's a very fundamental problem; and it cuts right to the heart of production and to the heart of the difference between a bureaucratically run firm and an entrepreneurial firm.
But it goes beyond that. There are so many transactions that--and you talk about this in the book--you and I are going to make a deal; there's going to be a contract between us.
Not a handshake deal; there is a contract. But it's impossible for the contract to specify all the possible conditions, including conditions where I might do something on your behalf without your knowing it's even possible. Because you don't have that localized knowledge. And I always think, when I think about these kinds of problems, about selling a house or buying a house, where we have this unbelievably important asset being exchanged for money and we have this unbelievable set of paraphernalia and bells and whistles surrounding it--title and page after page of contractual agreement on all sides, about what we are going to do on each other's behalf.
But despite all of that, we leave much of it unspecified because it's too costly to specify everything; and more importantly we can't anticipate everything that could happen. And so inherently at some point there is either trust or there is this random legal action; and of course legal action is really unpleasant. So, obviously the more trust that's involved the better it is because we avoid the complexities of legal action and all of its costs.
But--you have to trust the other person to a certain extent. And how do you generate that trust? Especially in this situation, which is the one I want to focus on because it's the center of the book for the rest of the way out, which is: One of the parties knows something nobody else knows, and knows that by either taking an action or not taking an action, good or bad will occur. How do you get that person to do the right thing? And if you can do that, if you can get a world where people do the right thing even when they are not observed or monitored, you can really exploit these potentials for specialization and trade, exchange; and you won't be able to exploit them if that trust isn't there.
Is that a good summary? I agree with that totally. I think that's basically correct. My particular approach doesn't really view it as what do we do to make that happen, although I have ideas, of course. I basically am working backwards: what would have to be true? Let's turn to that. Obviously there are many other ways you can check opportunism generally. There's repeated dealings, there's reputation, there are police and rules, monitoring of various kinds. But we're going to focus on the problem that is most difficult to monitor or observe; that's really important to keep in the back of your mind as you are listening to this.
Because obviously markets and societies find ways to deal with many of the problems associated with opportunism. This particular kind is special. It's special but it is more frequent and it is more fundamentally important than one might first suspect when one first thinks about these things. Part of the reason why is most of the cost is unobserved. Most of the cost takes the form of economic organizations that don't exist, or institutions that don't exist. So, I would argue that the preoccupation with incentive-compatibility mechanisms is the result of kind of a survival bias.
In other words, you study what's there to see, and for most of human history what we have observed are institutions that exist to solve the kinds of problems, like shirking and so forth, that are pretty frequent, precisely because being able to trust people you don't know is something that has been extremely rare throughout human history.
It's even rare today but if you go back years or so, I would say it was completely rare. Nobody had the kind of moral beliefs that would be required to get you to a condition of genuine and generalized trust at the same time. So, something has changed and part of your argument is going to be, although you don't deal with this in depth in the book: That's right, and actually, I'm writing another book now that deals exactly with that issue. That's a huge issue all by itself.
Let's go back to the moral issue now, which is: What's necessary to create behavior on the part of individuals basically to turn down, reject, and resist the chances to be opportunistic when nobody is watching? What do we need? There are a couple of things that you need. Number one, the person's predilection to be trustworthy cannot be merely an exercise in incentive compatibility.
Which is what most economists want to do. They want to model trust behavior and trustworthiness as an exercise in incentive compatibility. Explain what you mean by incentive compatibility. It's the idea that it's an exercise in enlightened self-interest because it's in your own best interest to behave in a trustworthy manner. The most common example is to say: Markets breed honesty and honesty breeds markets. Suppose you've got a guy and he's a car mechanic. If he behaves in an untrustworthy way it gets back to the customers; he has less business.
If he behaves in a trustworthy way, he gets rewarded for that by virtue of having more business. And so on and so forth. So, that's an example of the kind of argument that most economists like to make about trust. It's no big deal, it's easy to explain. It's in your own best interest to be trustworthy anyway. That's all well and good but the problem is if that's all there is to trust then trust is going to fall down exactly where the word is most meaningful.
This is such an empty approach that Toshio Yamagishi, who is a pretty famous social capital theorist, sociologist in Japan, says this isn't even trust at all. We should call it assurance; that's all it is. I don't trust you. I just know you are going to act as if you were trustworthy.
Not the same thing. And Oliver Williamson is very dismissive of a great deal of the trust literature; he would say that this is what he would call calculative trust, which is a contradiction in terms anyway. So, consider a situation in which a genuine golden opportunity is possible. A golden opportunity is a situation in which the person who may or may not behave in an opportunistic way believes there is zero probability of being caught. In any way, shape, or form. They can do it and they can get away with it, perfectly. And this terminology comes from Robert Frank. That's the first place I ever saw it.
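A minimal sketch, with hypothetical numbers, of why calculative trust fails at precisely that point: a purely self-interested agent weighs the one-time gain from cheating against the expected cost of being caught, and once the probability of detection is zero, any positive gain makes cheating pay.

```python
# Hypothetical numbers only: if trustworthiness is nothing but enlightened
# self-interest, the restraint disappears exactly when the probability of
# being caught is zero (the "golden opportunity").

def expected_gain_from_cheating(gain, p_caught, penalty, lost_future_business):
    """Net expected payoff of cheating for a purely calculative agent."""
    return gain - p_caught * (penalty + lost_future_business)

gain = 1_000                 # one-time payoff from the opportunistic act
penalty = 5_000              # fine or firing if caught
lost_future_business = 20_000

for p in (0.25, 0.05, 0.0):  # 0.0 is the golden opportunity
    net = expected_gain_from_cheating(gain, p, penalty, lost_future_business)
    verdict = "cheating pays" if net > 0 else "honesty pays"
    print(f"p(caught) = {p:<5} -> expected net gain {net:>8,.0f}: {verdict}")

# At p = 0 the calculation favors cheating for any positive gain, which is why
# the argument above says restraint must rest on something other than incentives.
```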
You've got to be able to deal with that. And so, Frank's argument, and I think he was absolutely right although he was kind of dismissed at the time, was that the only way to bust out of that is for trustworthiness to be based on moral taste. If it's in any way an exercise in rational behavior, it's not going to work for a golden opportunity. So, the thing that's producing the trustworthiness has to be in a sense pre-rational, antecedent to the rational calculation problem. So, he said it had to be moral taste.
It was a heretical thing to say when he said it and people have largely dismissed it. And I think that's been a huge mistake. They dismissed it because economists generally don't like arguments based on taste. They prefer to use arguments based on prices, incentives, etc. But this is basically saying you'd better have a taste for being good. Or not doing a bad thing. It had better be part of your makeup, to solve that.
And that is an unappealing argument methodologically. It could be true--which is the problem--but it's unappealing methodologically partly because you don't want to be in a position to say: Well, the way we'll make the world a better place is we'll get people to be better people.
That obviously--most economists are uncomfortable with that kind of logic. But that doesn't mean it's not true. This one's also uncomfortable with it. I don't like arguments that are grounded in taste, but nature doesn't care what we like. The explanation just is what it is. If it is indeed the case that tastes carry the day, then it's incumbent upon us to move forward with that as our working theory. Turns out things are not quite as bad as people think, and we can circle back to this later when we talk about culture.
But anyway, you were asking what we need: Well, first of all it needs to be taste. That's where Bob left it. He just said it's got to be taste. I pushed the ball down the field by saying: if it's got to be taste, then what kind of moral taste? And then I worked through the thought experiment and discovered that, first and foremost, for most of us the reason we think something is wrong is the harm it does to other people, which is, by the way, what I would call harm-based moral restraint; and that is kind of the foundation for why most of us are reluctant to be opportunists.
But if the only reason you won't behave opportunistically is the harm that's done, then the problem is that if you are in a situation where you think nobody is going to be harmed by your opportunism, you'll still be opportunistic. And just think about it for a minute.
That is not a big problem in a very small-group society, where you live in hunter-gatherer bands or small tribes. The number of people involved is fairly small, so even if we don't get caught, we do know that our actions might measurably harm someone we care about; or maybe we don't care about him, but we still don't want to feel like we hurt somebody.
By the way, we should mention guilt here. Talk about that for a second. Guilt is the mechanism through which all of this works; and the question is how do you put guilt to work? You put guilt to work by having moral values that actuate it. The point of my book is that moral values are important also, but even more important is how they are structured, because otherwise you are not going to get guilt triggered in the right sort of way. And this point about small versus large I found very interesting, because basically what you are saying is that guilt is going to be triggered by empathy.
When I realize that I'm harming someone, I'm going to feel bad about that, which is, I think, a universal truth. We may differ in how bad we feel about harming others and differ dramatically in how we emotionally react knowing we've hurt someone; but the insight you have, which I really like, is that as the group gets larger, the harm gets spread so thin that the empathy never kicks in. And you give the example, which I thought was very good, of a false insurance claim. Explain how that would work. The basic idea is usually when we do something in a small group to behave opportunistically, somebody gets hurt and we feel guilty about it.
But the greater the number of people in the group over which the cost of that harm is divided, the more likely it is that there will not be a single human being who is harmed and who we can therefore empathize with and therefore sympathize with and therefore feel guilty about having harmed. Or, if they are harmed, it's by such a small amount they might not even perceive it. At some point we don't even have to make that qualification. We don't even need to quibble. We are talking about way less than a penny per person in the United States.
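To put rough, purely hypothetical numbers on that dilution:

```python
# A quick, hypothetical calculation of the dilution point made above: a padded
# insurance claim or a bit of tax cheating, spread over a large enough group,
# costs each person far less than a penny, so there is no identifiable victim
# for empathy (and hence harm-based guilt) to latch onto.

inflated_claim = 2_000            # dollars of padding (hypothetical)
policyholders = 10_000_000        # people who share the cost via premiums

per_person = inflated_claim / policyholders
print(f"Cost per policyholder: ${per_person:.6f}  (~{per_person * 100:.4f} cents)")

unpaid_tax = 5_000                # dollars of tax cheating (hypothetical)
us_population = 330_000_000
print(f"Cost per U.S. resident: {unpaid_tax / us_population * 100:.5f} cents")
```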
People can't even perceive that. It's not even there. Noise swamps it by orders of magnitude. So, no one is harmed. And that's why many people who seem to be nice guys and seem like they would never do anything to hurt you or your family or anybody, very generous, good people, might cheat on their taxes. Or inflate their expense account at work. And that's a fundamental problem. It's a problem everywhere, but it's an especially big problem in countries outside the West.
Outside the West, if people feel like they are not hurting anybody, they really feel like they can just do whatever they want as long as they don't get caught.
So, you are only left with incentives to combat opportunistic behavior. So, the point of that is that harm-based moral restraint is not enough to deal with the empathy problem; and the empathy problem is fundamental because it's a problem that gets worse the larger the group size is. And you are going to be an impoverished society if you can't sustain very large institutions, large markets, large firms. Bigness is the key. Smith is right, and getting big means that our hardwired sense of moral restraint is going to fall down on the job.
Because that's a small-group thing. Because we are a small-group species. Let me raise Immanuel Kant here for a second. The only thing I understand about Kant--which is, I think, an important thing--is the categorical imperative. In the categorical imperative he says that when you are trying to decide whether an action is the wrong thing to do, you should imagine what would happen if everyone did it, if it were common practice, rather than just you doing it.
And that's his way to solve this problem, right? I always use the example: Sampling all the fruit everywhere in the grocery, or reading all the books in the bookstore while drinking coffee, which most people say: Well, that doesn't hurt anybody; it's no big deal. And to some extent that's true; but if everyone did that instead of buying the fruit or the books and just ate while they were there, there wouldn't be grocery stores or bookstores; and I consider those immoral acts. When I tell people that, they get mad at me. But I think that's correct. And that's one way to solve the problem.
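A toy version of that universalization test, with hypothetical numbers for the bookstore example:

```python
# A toy version (hypothetical numbers) of the universalization test applied to
# the bookstore example above: reading the books over coffee without buying.

customers = 1_000
price_per_book = 20
operating_cost = 15_000           # what the store needs to stay open

def store_revenue(free_riders):
    """Revenue if `free_riders` customers read without buying."""
    return (customers - free_riders) * price_per_book

print("One free rider:   revenue =", store_revenue(1),
      "-> store survives:", store_revenue(1) >= operating_cost)
print("Everyone does it: revenue =", store_revenue(customers),
      "-> store survives:", store_revenue(customers) >= operating_cost)
# The act seems harmless in isolation but cannot be universalized: if everyone
# did it, the bookstore (and the opportunity to do it at all) would not exist.
```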
But you don't deal with that. In the book I compare the moral foundation after I completely work the whole position out to what other philosophers have had to say, and one of them is Kant. I think what Kant was doing is he was giving a rigorous voice to changes in moral beliefs that were already underway. So, in other words, I don't think he's somebody who brought about these changes. I think he's somebody who is simply echoing them. They are already in the culture and he is codifying it and making it rigorous. I think that people who like Kant or know Kant are going to say: Principled moral restraint is the idea that I'm not going to do this particular negative moral action not because of the harm that it does but because I believe it's wrong in and of itself.
Even though it would benefit me. Even though it's in my own self-interest. Aside from the guilt. With the guilt aside: even if it's in my financial interest to do this and I'm not going to get caught, it's morally wrong, so I'm not going to do it. And many economists balk at this.
Not to pick on Oliver Williamson, but he and I have argued about this over the years a great deal. And I would say to Oliver: Suppose you are at a convenience store, and there's only one person working and he has to duck into the bathroom. And suppose you knew the security camera wasn't working, so you knew with certainty you could steal a candy bar and get away with it. You are not going to steal that candy bar. I know you are not. You know you are not. And you know that I know that you are not. And it's not because: Well, maybe it really is working.
That's not the reason.
It's just you don't think it's right. I think that's Oliver exercising principled moral restraint without realizing it. Well, I think a lot of economists are uncomfortable--it comes back to my methodological point. I think everyone accepts that as true. I think there are economic ways of looking at that. I think if the candy bar would save your child's life, even though it might be wrong you might be more likely to steal it rather than just to appease your sugar demands for a few minutes.
I'm willing to accept the idea. Sure, but there's a qualitative difference between stealing the candy bar to save your child's life and saying I know that stealing it was wrong but I don't care; I'm going to save my child's life versus not believing it's even wrong in and of itself. I agree with you. There's an example that can tease that out. I'm just agreeing with you that people do act that way, they feel that way, they refuse opportunities because they think they are wrong; but that economists may be uneasy about invoking that for methodological reasons.
So, principled moral restraint is obviously, undeniably a way to solve the opportunism problem; but you have more to say about it than that. Well, it's a necessary but not sufficient condition for solving the opportunism problem. It solves the empathy problem, but there's another problem. The empathy problem meaning that you might have trouble feeling that there are actual people being hurt. Even if you solve the empathy problem, you have another problem, and that is someone could feel guilty about undertaking a negative moral act, let's say an opportunistic act--they feel extremely guilty about it because they possess principled moral restraint; maybe somebody's hurt, maybe somebody's not, but that's beside the point in this case; they feel guilty about it, so they have principled moral restraint.
There's no issue there. But they may also feel guilty about forgoing a positive moral act that they could only take if the negative moral act were undertaken. So, this is what I call the greater-good rationalization problem. And it is really a huge problem, because this is a device that many advocates use to rationalize their actions in ways that, after a while, we come to take as reasonable, but that not so long ago we would have viewed as patently wrong.
So, give an example. In the United States today the conversation begins far downstream of whether it's legitimate to take money from other people to solve some kind of social justice problem. If you go back to say and you say: