Wednesday, May 4, 2011

On the Death of Osama bin Ladin

Every victory should be regarded with sadness, like a funeral. The only thing worse is a defeat. Having no need for victory is the real victory. Victory is merely avoidance of the appalling in favor of the barely tolerable. Thus it is with the recent slaying of Osama bin Ladin. It's better that it succeeded than if it had failed. Victory is better than defeat. But only just barely. It cannot make up for the great tragedy that either one, victory OR defeat, had to happen. We should not have had to do it. And some thought should be given to how to avoid having to do it in the future.

I use the word "tragedy" here in the classic Greek sense: misfortune arising from one's own flaws. Osama bin Ladin was our nemesis, the fruit of our hubris as a nation. He should never have existed, and if we had not betrayed our own values and our own identity as a nation he never would have. Or if he had, he would have been someone else's nemesis, not ours.

There are so many intertwined threads of truth to the death of Osama bin Ladin.

Start with the surface. He was a violent, evil man who was responsible for the slaughter of thousands. His death is no great loss to the world. On this, most everyone agrees. The only exceptions would be those who share his particularly twisted brand of violent Islamic ideology. (It's "Islamic" in the same sense as the Christian Identity movement is "Christian." I call it that because I don't know what else to call it. Normal Muslims may take exception, just as normal Christians may take exception to having racist neo-Nazi monstrosities lumped in with them. I shall here merely note the likely unhappiness, acknowledge that very few Muslims bear much resemblance in their beliefs to bin Ladin, and move on.)

But that's just the surface, taking bin Ladin's death as if it existed in isolation. It doesn't, of course. It's the most recent significant development in a saga that has included a lot of stupid, inept, opportunistic, and downright wrong-headed moves on the part of the United States. Starting from the World Trade Center attack on September 11, 2001, our first wrong move was President Bush's choice of words to describe what we were engaged in: "War on Terrorism." (Or "War on Terror." It depended on his mood on any given day, or perhaps on how much he'd had to drink recently.) We cannot, of course, wage war on a military tactic, nor on a loose-knit criminal organization. We can fight them, as we speak of fighting crime. But this fight cannot be a "war." War is inevitably fought by armies and navies in service to nations, one national government against another. So that was our first mistake. We took a criminal act and improperly dignified it by calling it an act of war, as if al-Qaeda were a nation and Osama bin Ladin its government. This error of terminology -- if it was an error and not a brilliant and wicked deception -- led us to the invasion of Afghanistan and later of Iraq, to the slaughter of tens of thousands of innocent people and the deaths and maiming of thousands of our own citizens. None of these actions did anything much against the organization that attacked us in 2001. But because we were "at war," we sought enemies that could be vaguely connected with al-Qaeda (and whose conquest could prove advantageous either in geopolitical terms or in service to corporate bottom lines) on which to spend the might of our vast military machine, so much of which was useless against al-Qaeda itself.

At the same time as we emphasized the wrong targets, due to the illusion cast by that word "war," we downplayed the right targets and the right tactics. At one point, after bin Ladin escaped at Tora Bora, President Bush actually stated that he considered capturing or killing the al-Qaeda leader a low priority. What we saw recently was the success of an approach that should have been emphasized from the beginning: an approach that avoids being mystified by (or seizing the opportunity presented by) that misleading term "war."

So much for the immediate layer below the surface. But now let's dig a little deeper still. Why did Osama bin Ladin choose the United States as his primary target?

There was a certain amount of calculation in his doing so. Osama bin Ladin's long-term goal was to create a new Caliphate, uniting all Muslims under a single rule, a return to the medieval greatness of Islam. (Preferably with himself as Caliph, one imagines.) The Muslim world is, of course, far from united. But one classic, time-honored way to unite squabbling peoples is to present them with a common enemy. By provoking the United States into taking ill-considered aggressive action in the Middle East, he hoped to enrage Muslims enough to have them set aside their differences in order to fight us. That didn't work as well as he'd hoped, but it explains why he launched the attack.

What it doesn't explain, however, is why he launched it at us. Why were we the right choice, the obvious choice, as the common foe of Islam? Why not attack some target in London, or in Tokyo, or in Brussels, or in Moscow? It doesn't take a whole lot of thought to arrive at the answer. America -- not Britain, Japan, the European Union, or the Russian Republic -- is the greatest of superpowers, the world's hegemon, the great power that must be defeated if Islam is to achieve greatness. America is the backer of Israel, the supporter of tyrants throughout the Muslim world, the new Rome.

And at root, that is where we went wrong, before bin Ladin was even born, and long before he launched his attacks in New York and Washington. That is why Osama bin Ladin exists, and why we had to kill him. Because we are not, in the national vision of our founders, supposed to be an empire, a superpower. We are supposed to be a land of liberty. We are supposed to be a democracy. And there is no such thing as a democratic empire. The two are incompatible, and one or the other must in the end be lost.

It's difficult for Americans nowadays to understand, because throughout my lifetime and for some years earlier we have had the world's most powerful military, so that it has come to seem normal. In reality, it is an anomaly of American history. Until the end of World War II, our nation always maintained a distrust of standing armies and a parsimony about military expenditure. We kept a small professional force, a cadre of officers, and when war loomed we would recruit or conscript an army around that tiny core and march off to face the enemy. During the major wars of our history -- the War of 1812, the Mexican War, the Civil War, the Spanish-American War, World War I -- we built powerful but temporary armies. When the war ended, the citizens who had rallied to the flag to meet the emergency laid down their arms and returned gratefully and happily to their civilian pursuits. The military budget shrank to nearly nothing, and so it remained during the years of peace, until the next war threatened. On the day the Japanese attacked Pearl Harbor, the United States had one of the weakest armies in the world.

As part and parcel of this, we went to war only rarely. Of the major wars the U.S. has engaged in, all but World War II were at the instigation of Americans themselves (the Civil War included, because the Confederates who started the war were Americans, too). Without a powerful standing army, we were seldom tempted to do this. Wars meant raising taxes, taking an economic hit, and of course sending young men off to die; they were not popular and we lacked the standing force to make the decision easier.

At the end of World War II, we had, once again, an enormous military force. It had been necessary to build this force in order to defeat the Axis, of course. But the common expectation was that, once again, as before, we would send all the boys home, and go back to our peaceful pursuits, retaining only that tiny cadre of trained military experts around which to build an army the next time war threatened. But for some reason things were done differently this time.

The Soviet Union presented a permanent enemy, a way to justify keeping a powerful military in times of peace. Why did we do this? It's a mystery to which there may be no one right answer. Maybe people in government genuinely believed in the Communist threat. Maybe it was the arms industry and others who profited off this massive government largesse. Maybe it was something hidden in the halls of power in Washington, desirous of empire and national power. Maybe it was a combination of all three. Whatever the motives, though, the actions in service to them are plain enough. We retained a huge military force. We built a chain of military bases all over the world. We supported puppet governments either to have allies in the Cold War or for economic reasons. We found ourselves continuously at war somewhere in the world. We were never, or almost never, wholly at peace.

We built a national-security apparatus, a government within a government, operating in secrecy, unaccountable to the voters, barely controlled by the President and not at all by Congress -- a clear violation of all the principles on which America is supposedly based. This is not new in the world, although it was new for us, and wrong for us. It's the way every empire in history has always operated. It's the way empires have to operate. Empire and democracy are incompatible. We cannot have both. That means that empire and America are incompatible. We cannot have both. We have become something other than America, something that our ancestors would look upon in horror.

In 1991, we were presented with a golden opportunity to set all this aside, bring the empire to an end, and become once more America. The Soviet Union, our opponent in the Cold War and the justification for empire from 1945 until then, ceased to exist. We could have shut down the bases, dismantled most of our armed forces, declared victory and gone home. We didn't. And that surely proves that by that time the empire was pursued for its own sake and the Cold War had become merely an excuse -- if it had ever been otherwise.

Today, we have a military force that costs nearly as much as those of the entire rest of the world combined. We have hundreds of military bases in every corner of the world. We have the ability to invade any country on earth that we choose to invade, and we have arrogated to ourselves the willingness to use that ability whenever we choose, on whatever pretext we like, or on none. We have a government unaccountable to its people, that claims the authority to detain without trial, without rights, anyone -- citizen or foreigner -- that it labels as an "enemy combatant."

That is not America. It is the American Empire. And it was the American Empire, not America the land of liberty, that Osama bin Ladin attacked on 9/11/01. He was our nemesis, attacking in response to our hubris. The entire affair of the last ten years has been our tragedy.

Now he is dead. But the tragedy goes on, and will until the American Empire, too, is laid to rest.

Monday, September 27, 2010

Quote from a rich guy: "Tax me more."

Before presenting what is to follow, I have to apologize for neglecting this journal. My bad. I do have an excuse, mainly that I've been devoting my writing energy to fiction. I've finished the second novel in the Star Mages series and am getting it together pre-publication at this point. So that's good, but my feeling is that although it might seem like a decent excuse, I made a commitment here, I failed to keep it, and there is no excuse apart from physical or mental incapacity, neither of which applies. (Yet. Knock wood.) :)

So I'll try to make up for that failure. To start with, I ran across an editorial in the LA Times by venture capitalist Garrett Gruener, who said some important things in it that people need to understand and, thanks to trickle-down propaganda, often don't. Here's the link to his article:

tax me more

Some excerpts that are particularly important:

"I'm a venture capitalist and an entrepreneur. Over the past three decades, I've made both good and bad investments. I've created successful companies and ones that didn't do so well. Overall, I'm proud that my investments have created jobs and led to some interesting innovations. And I've done well financially; I'm one of the fortunate few who are in the top echelon of American earners.

"For nearly the last decade, I've paid income taxes at the lowest rates of my professional career. Before that, I paid at higher rates. And if you want the simple, honest truth, from my perspective as an entrepreneur, the fluctuation didn't affect what I did with my money. None of my investments has ever been motivated by the rate at which I would have to pay personal income tax. . . .

"When inequality gets too far out of balance, as it did over the course of the last decade, the wealthy end up saving too much while members of the middle class can't afford to spend much unless they borrow excessively. Eventually, the economy stalls for lack of demand, and we see the kind of deflationary spiral we find ourselves in now. I believe it is no coincidence that the two highest peaks in American income inequality came in 1929 and 2008, and that the following years were marked by low economic activity and significant unemployment.

"What American businesspeople know, and have known since Henry Ford insisted that his employees be able to afford to buy the cars they made, is that a thriving economy doesn't just need investors; it needs people who can buy the goods and services businesses create. For the overall economy to do well, everyday Americans have to do well. . . .

"Remember, paying slightly more in personal income taxes won't change my investment choices at all, and I don't think a higher tax rate will change the investment decisions of most other high earners.

"What will change my investment decisions is if I see an economy doing better, one in which there is demand for the goods and services my investments produce. I am far more likely to invest if I see a country laying the foundation for future growth. In order to get there, we first need to let the Bush-era tax cuts for the upper 2% lapse. It is time to tax me more."

It's not surprising that a venture capitalist "gets it" about what limits investment in job-creating ventures: not availability of capital (i.e., not how much money rich investors have lying around), but expected return. What's more, the main thing that drives expected return is not how much the investor can expect to keep after taxes, but rather how much demand exists for the goods and services the investment is supposed to produce. As I said, it's not surprising a venture capitalist gets this; if he didn't know why he invests in one area rather than another, say in business rather than in financial instruments, he would not likely be successful at what he does. We ought to listen to him when he says things like this. I mean, when someone says, "Raise MY taxes," we can be pretty certain he's not speaking out of duplicitous self-interest. Unless he's a masochist or something.

But I'm going to take this argument one step further. Mr. Gruener says that small changes in his tax rate have no effect on his investment decisions. But what about big ones? What about the effect of the original Reagan tax cuts that dropped top marginal rates from the 60-70 percent range down into the 30s? On the other hand, what would be the effect of creating new tax brackets with very high taxes applied to very high incomes? What about a 95% tax on personal income over a million dollars a year?

Before continuing with this, maybe an explanation is in order about how "marginal" tax rates work. Right now, the top tax rate is 35% on incomes above $373,650. Does that mean that if someone makes $400k a year, he'll pay 35% of his income in federal income tax? No, it's a bit more complex than that. He'll pay that 35% only on taxable income above $373,650, which is to say, on $400,000 - $373,650, or $26,350. (That's if the $400k represents taxable income, not total income, of course.) He pays at a lower rate on all the rest of his income. On the first part of what he earns for the year, he pays no taxes, just like everyone else.
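The bracket arithmetic above can be sketched in a few lines of code. This is a minimal illustration, not tax software: the thresholds below are the published 2010 single-filer figures (the same table that $373,650 top bracket comes from), and the function assumes its input is already taxable income, ignoring deductions, exemptions, and credits.

```python
# Illustrative sketch of marginal (bracket) tax rates, using the
# 2010 single-filer brackets. Each rate applies only to the slice
# of income that falls inside its bracket, not to the whole income.

BRACKETS_2010 = [   # (bracket floor, rate applied above that floor)
    (0,       0.10),
    (8_375,   0.15),
    (34_000,  0.25),
    (82_400,  0.28),
    (171_850, 0.33),
    (373_650, 0.35),
]

def income_tax(taxable_income, brackets=BRACKETS_2010):
    """Tax owed: sum the tax on each bracket's slice of the income."""
    tax = 0.0
    for i, (floor, rate) in enumerate(brackets):
        # The bracket's ceiling is the next bracket's floor (the top
        # bracket has no ceiling).
        ceiling = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if taxable_income > floor:
            tax += (min(taxable_income, ceiling) - floor) * rate
    return tax
```

Run on $400,000 of taxable income, this charges 35% only on the $26,350 above $373,650; the overall (effective) rate works out to about 29%, well below the headline 35%.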

So a 95% tax on income above a million dollars doesn't work to impoverish millionaires. What it does is to impose a personal income ceiling. Nobody is going to bother making over a million dollars in taxable income when Uncle Sweetie is going to make off with almost all of it. It won't hurt you to make more than a million (remember, all the income below a million is still taxed at the lower rates), but it won't help you much, either. So investors will stop investing in anything that would push their income above that point, and that will hurt the economy, right?

Well, not so fast. To begin with, most investments, and all of the ones we really want, are tax-deductible and so don't count as taxable income. If you start a business, most of the start-up costs are not taxed. (There are some exceptions involving heavy-equipment purchases, where the tax deduction is split over a number of years.) Wages you pay to employees are never taxed as your income. (As the employee's income, yes.) So what a confiscatory tax on really high income actually does is to give the person making that kind of scratch a really strong incentive to find places to invest that money where it will eventually pay off, but won't be taxable in the meantime. So -- provided we choose which investments to encourage through tax write-offs wisely -- this could actually spur investment rather than discouraging it.

Another consideration besides tax deduction is how quickly an investment pays off. The thing about investing in real business (that is, making stuff or providing services) is that it's a long-term project. You don't expect a quick payoff in the first year. Ask anyone who's ever started a business. You expect to lose money the first year, maybe the second year, maybe even longer depending on exactly what business you're in. Down the road, though, you do expect things to pick up to the point where you've recouped all those losses and made a profit. (It doesn't always happen that way, but you do expect it or you wouldn't have made the investment to begin with.) There are other kinds of investments, though, that can pay off very quickly. A good example is short-term trading on the stock market, where you're not trying to acquire stock for the long haul but rather to buy low and sell high, conceivably in a single day. Even better examples are the kinds of financial trading that resulted recently in the near-collapse of our financial system. To be sure, those particular investments went bad, but the point is that when they pay off they pay off quickly. That makes them preferable to investments in real business if you want a quick gain that can be reinvested for a multiplier effect.

What a confiscatory tax rate on very high income would discourage is this sort of investment. Why seek a quick payoff -- that is, a payoff this year -- if 95% of it goes to the federal government? Under that regimen, it makes a lot more sense to defer financial gains.

To illustrate, consider this. Let's say someone is making half a million normally. The person has another half million to invest. For the sake of simplicity, he has two choices, either of which will return that half million and another million dollars on top of it. He can invest it in short-term financial manipulation that will give him the whole million and a half by year's end. Or he can invest it instead in a start-up company making widgets, take a loss the first year, and recoup his investment plus another million over the next ten years.

If he chooses the latter route, he gets back an average of $150k a year, and in no year does the net return exceed $300k (let's say). Now: if he's going to see that investment return taxed at 35% no matter which way he goes, then he's better off investing in the short-term instrument. His net profit after taxes is $650k, and if he does the quick-return bit, he'll have all that to reinvest next year for more return still. But with the confiscatory tax in place, he'll be much better off taking the slow road. The short-term route piles the entire million-dollar gain into a single year: the first $500,000 of it (the part that lifts his income from $500,000 up to the million-dollar line) nets him $325,000 after the 35% tax, while the second $500,000 is taxed at 95% and nets him a mere $25,000, for a total of $350,000. The long-term route spreads the gain over ten years, never crosses the million-dollar line, and so nets him the full $650,000 at the ordinary 35% rate.
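The comparison can be worked out mechanically. This is a rough sketch under the scenario's stated assumptions: a hypothetical investor with $500k of ordinary income, a $1M profit taken either as one lump sum (short-term trading) or spread evenly over ten years (the widget start-up), a flat 35% ordinary rate, and a hypothetical 95% rate on income above $1,000,000.

```python
# Rough model of the two investment strategies under a confiscatory
# top bracket. Rates are simplified: a flat 35% ordinary rate, and
# 95% on any income above the $1M line. All figures are hypothetical.

ORDINARY_RATE = 0.35
CONFISCATORY_RATE = 0.95
CEILING = 1_000_000  # income above this line is taxed at 95%

def after_tax_profit(base_income, yearly_gains):
    """Total kept from the gains, stacked on top of base income each year."""
    kept = 0.0
    for gain in yearly_gains:
        # Split each year's gain into the slice under the ceiling and
        # the slice over it, and tax each slice at its own rate.
        below = max(0.0, min(gain, CEILING - base_income))
        above = gain - below
        kept += below * (1 - ORDINARY_RATE) + above * (1 - CONFISCATORY_RATE)
    return kept

short_term = after_tax_profit(500_000, [1_000_000])     # lump sum in one year
long_term  = after_tax_profit(500_000, [100_000] * 10)  # spread over ten years
```

The lump-sum route keeps $350,000 and the ten-year route keeps $650,000: the slow road wins by roughly $300k, which is exactly the incentive the confiscatory bracket is meant to create.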

Bear in mind he's going to invest the money anyway. The only question is in what. Since investment in making things and providing services is what we want (that's what creates jobs), we want to encourage that and discourage the kind of investment that just plays with money.

Saturday, July 3, 2010

Prosperity: What's An Economy For?

I'm going to be writing a series on what I have come to call "Money-Free Economics." By this I don't mean economics of a barter system or of an economy without money; rather, I mean economics that ignores money and goes to the underlying real-wealth economy that money facilitates. I acknowledge up front that this creates a certain amount of distortion. There are features and processes of a modern economy that can't be understood without addressing money, among them interest rates, the effects of government fiscal policies, and speculative investment -- to name but three of many. But money also creates distortions. In particular, schools of economics that address money without touching on the underlying economy of goods and services often create severe distortions by treating money as if it existed and operated independently of the goods and services for which it is a token of exchange -- as if only money, not stuff, mattered. Moreover, those features of an economy that require addressing money to understand are already covered well by professional economists in their various schools. On these matters they don't require any help from me (often it's the other way around). But when economists present something as stupid as, for example, the laissez-faire interpretations of the Laffer Curve, or explanations for recession that rely entirely on monetary factors and ignore the distribution of wealth, I know that they have focused on money to the point where they have forgotten that it is just a token of exchange and not real wealth, because when you put those in money-free terms their nonsensical nature becomes obvious. So, to address the follies of economists and the politicians who quote them, I shall engage in an exercise, presenting economic concepts in ways that don't use money at all.

I'll begin today with an examination of what an economy is and what it's for in money-free terms.

An economy is, to begin with, a social arrangement. It involves the assignment of ownership, the division of labor, and rules of exchange and trade. In a modern society it is always a function of law. That wasn't always so, because human beings have not always lived under the rule of law, but even in pre-civilized times when there was no law as such and no formal government, there were still rules about who owned what, who was supposed to do what, and who got what in the end.

What this social arrangement is meant to do is to regulate and facilitate the production and distribution of wealth. Wealth, as I pointed out in the last entry, consists of goods and services. Going into a bit more detail, wealth consists of eight things: food, clothing, shelter, tools, toys, entertainment, advice, and assistance. Everything you or anyone else ever buys or sells falls into one or more of those categories. The economy is a social arrangement whereby these eight things are produced and gotten to the people who want and can use them. Those are the two criteria of economic success. As long as those eight things can be produced in enough quality and quantity and distributed to everyone who needs and wants them, the economy is a success. When either of these functions fails, the economy fails. If not enough food can be grown, or if the food that is grown can't be gotten to the people who need to eat it, there is famine. If not enough housing can be built, or if housing is built but sits vacant while people are homeless, there is a housing crisis. And so on.

Every failure of the economy, every depression, every recession, every instance of runaway inflation, every bubble collapse, even the economic failure that occurs after a military defeat, manifests ultimately in a failure either of production or of distribution or both. Even when the cause (or at least the trigger) of the economic problems is fiscal or monetary, such as a stock-market crash or the collapse of a housing or real estate or some other bubble, it always comes down in the end to a failure to produce or a failure to distribute. If it does not, then it is a nonexistent problem as far as the overall economy is concerned.

Problems can occur on either the production or the distribution side. An example of a production-side problem is a severe drought that results in crop failure. This creates a shortage of food and starvation. Another example is the devastation created by war, as for example in Germany during and after World War II, when Allied bombing and Allied and Soviet invasion destroyed German factories and industrial capacity, as well as German roads and railroads. A third example, more subtle, is the impact on the U.S. economy of the OPEC oil embargo of 1973-74 and the high oil prices that persisted for a decade afterward, which caused shortages of a crucial raw material. An economy that is in a pre-industrial state and is trying to industrialize also faces production challenges, not in the sense of losing production but in the sense of wanting to increase it. In general, production of wealth requires raw materials, labor, knowledge, and organization, and a shortage of any of these (for whatever reason) results in a deficit of production.

Problems of production are severe, but problems of distribution can be equally severe. The Irish potato famine was, at root, a distribution problem. It had a proximate cause on the production side, a potato disease that caused crop failures, but this would not have resulted in famine except that the Irish wheat lands were all in the control of aristocratic landholders who were entitled to the wheat crops for export purposes. That's the reason why ordinary Irish people were dependent on a potato diet in the first place. A more nearly equal distribution of Ireland's food crops would have meant that when the potato harvest failed, the people could eat other foods. Severe maldistribution of the nation's agricultural wealth meant that the potato blight became the potato famine.

The Great Depression and similar breakdowns in the years before it (for example the Long Depression that began in 1873 and lasted longer than the Great Depression itself, although it was not quite as severe) were also breakdowns of distribution. The economies of the advanced nations, such as the United States, suffered no shortages of raw materials, labor, knowledge, or organization, and there were initially no problems of production. But the goods produced were not distributed to the people who would use them. Because of the system of private capital property ownership, the goods produced in a factory (say) belonged to the factory's owner, and anyone who wanted those goods had to exchange items of value for them (by way of money, of course). Since not enough of the people who wanted the goods had the value to exchange for them, they could not be sold and so sat in warehouses being of no use to anyone.

The distorting effect of money can be easily seen in this entire sequence of events, which were caused by a desire on the part of capital property owners to keep to themselves as much of the wealth produced as they could. As long as we think in terms of money, this is perfectly understandable: the rich wanted to become richer. But if we think in money-free terms, the silliness of it becomes clearer. How much in the way of food, clothing, shelter, tools, toys, entertainment, advice, and assistance does even the richest person need? How much of these things does he even want? How much can he use? After a certain point, all that stuff is wanted not for use but for sale, and if a relatively few rich people own almost everything of value, for what can it be sold?

Here is the fundamental flaw of capitalism. It is predicated and focused on the accumulation of individual fortunes, which means that ultimately it undercuts its own basis, resulting in economic breakdowns due to maldistribution of wealth and consequent depressed demand. Economists have gone to great lengths to refuse to acknowledge this. There is, or used to be, a concept in economics called "overproduction" or "surplus production" which meant that the economy was producing more stuff than people could use, so that in order to maintain full employment and productivity it needed to be sold abroad. But the economy has never historically produced more stuff than people could use (although that's theoretically possible). It has just produced more stuff than the people who wanted to use it could buy. That's a very different thing. The demand for goods and services depends not only on people's desire for things, but also on what they have to trade for them, and for most people the latter is exhausted long before the former. (Those for whom it is not, exhaust their desire to buy instead. Either way, stuff remains unsold.)

One of the things about economics today, even more than its disconnect from the economy of stuff and its focus on the arcane economy of money, is the refusal of many of its practitioners to think about the elephant in the room: the distribution of wealth. Even when an economist (by this stage of the game usually one long dead) takes a money-free approach, it often suffers from this flaw. A good example is Say's Law.

Say's Law is an economic principle attributed (somewhat incorrectly, but that's by the way) to the French economist Jean-Baptiste Say, who lived and worked in the late 18th and early 19th century. Say argued that there could never be a general glut of goods -- too much on the market to be sold -- because all goods produced created value with which to buy other goods, and goods are exchanged only for goods even when they are exchanged by way of money. As far as it goes, that's true -- but it also very much matters whether the goods produced are owned, and so exchangeable, by those who desire the other goods produced. Or in other words, it matters how widely wealth is shared. The fact that wealth exists to exchange for all products produced in the form of other products does no good on a practical basis unless those goods are in possession of those who wish to make the purchase.

One finds many critiques of Say's Law among economists, but rarely will one find this fundamental flaw recognized. John Maynard Keynes, for example, identified three assumptions underlying Say's Law: a barter model of money (goods are exchanged for goods), flexible prices (that can rapidly adjust upwards or downwards with little or no "stickiness"), and no government intervention. Keynes himself disputed the second assumption, arguing that prices are not necessarily flexible. Others have disputed the first or the third. (And here one does run into the distorting effect that arises from money-free economics, because there are aspects of a money economy which do not perfectly mirror a barter economy. However, that is not the real problem with Say's Law.) It's true that the idea does rest on at least the first two of those assumptions, but it also rests on another which is self-evidently false: the equal or near-equal distribution of wealth.

It's a curious thing, this refusal even of a supposedly "progressive" economist such as Keynes to address the central problem of inequality even though his own work naturally lends itself to doing so. Those who do address it usually seem to confine themselves to the moral aspects of it without considering the economic aspects. But the economic aspects are also real and also important.

Returning to the two functions of an economy, production and distribution of wealth, we may consider the template to be the economy of a pre-civilized community, in which a small band of human beings own all capital property in common and share tasks and wealth more or less equally. Production-side problems arose often enough in the form of shortages, but distribution-side problems did not. Even when production problems happened, they were never due to failures of organization, but only to want of natural resources, knowledge, or labor. The economy functioned in the manner Marx described as "communism," the end-state of his theoretical economic progression: from each according to his ability, to each according to his needs. Now, my personal opinion is that Marx had to have been smoking something to believe that an advanced economy, whose essence is impersonality, could ever operate communistically in this fashion. But we may nonetheless take that ancient pattern as, in terms of distribution and of the organization of labor and natural resources, the ideal, and evaluate our modern substitutes in terms of how closely they approximate this ideal. The truth is, of course, that they fall far short -- but in fairness, they have a much more complicated problem to solve.

In future posts, I'll consider historical economies that worked better than the one we have now, along with some spectacular historical failures. Finally, I'll speculate about alternatives to capitalism as it currently exists. In all cases, I'll approach the questions through money-free economics, in order to keep it as simple and non-arcane as possible.

Wednesday, June 16, 2010

"Making Money"

Our language has many peculiarities that shape thought in hidden ways. One example is the phrase "make money."

Strictly speaking, nobody "makes money" in this country except the mint. Money is legal tender, and neither private individuals nor corporations are authorized to "make" it. To do so is a felony. When we say that someone "makes money," what we really mean is that the person takes money: he persuades other people to give him money in exchange for something else, be it goods, services, promises, or deception. No money is actually made in these transactions, by which I mean that the overall money supply does not increase; what money the person who is "making" it gains, his customers lose in an exact one-for-one correspondence. Of course, that's not necessarily a bad thing for the customers, since money also has no intrinsic value whatsoever; it gains value only in exchange for other things that DO have intrinsic value, and the only reason anyone is willing to take intrinsically worthless money in exchange for intrinsically valuable things is because the money so acquired can then be given away to someone else in exchange for other things of value. Money is at root a confidence game in the literal sense: it requires faith in the system of government that backs it, and confidence that it can be exchanged for items of value even though it has no value of its own. And because of this disconnect between the medium of exchange and the items of actual value, it can also become a confidence game in the figurative sense.

Really it all comes down, not to money, but to stuff: goods, services, promises, or deception. Money is not wealth. Goods and services are wealth; money is only a token exchangeable for wealth. One cannot "make money," but one can make wealth, by making goods or performing services. Ideally, that is how a person or a corporation "makes money" -- by making wealth, and exchanging the wealth for money, which can then be re-exchanged for more wealth. The amount of money doesn't increase, but the amount of wealth does. As a straightforward exchange, there is nothing objectionable about this. But the fact that we employ money rather than barter -- the fact that we exchange wealth not for wealth but for tokens exchangeable for wealth -- means that the potential for abuse, and for confidence games in the figurative sense, creeps in.
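The point that no money is "made" in a sale can be put in a few lines of Python (a deliberately trivial sketch; the names and figures are invented): a sale moves money between parties without changing the money supply, even as the stock of wealth -- the bread, in this case -- changes hands and grows.

```python
# A sale conserves the money supply: what the seller gains, the buyer
# loses, in exact one-for-one correspondence.

def trade(balances, buyer, seller, price):
    """Exchange money for goods; returns the new balances."""
    balances = dict(balances)
    balances[buyer] -= price
    balances[seller] += price
    return balances

balances = {"baker": 20, "customer": 50}
money_before = sum(balances.values())

# The customer buys bread for 5. The baker "makes money" -- that is,
# takes it -- but the total amount of money is exactly what it was.
balances = trade(balances, buyer="customer", seller="baker", price=5)
money_after = sum(balances.values())

print(money_before == money_after)  # True: no money was "made"
```

What did increase is wealth: the baker made bread, and the customer now has it. The money merely changed pockets.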

Start with the fact that goods and services are, almost without exception, produced collectively, not individually. That is, their creation requires the cooperative effort of more than one person. Most of the people who work to create the wealth have no ownership interest in it (as I explored in an earlier post) and must accept (or reject) a payment in money for helping to create it according to the terms that the owner (usually a corporation) is willing to offer. The potential for abuse in that transaction is of course well known to anyone who has studied the history of the labor movement.

Then there's the fact that money can be exchanged not just for real wealth, but for potential wealth. This is called "investing." Money is paid not for goods or services, but for the potential of being repaid, in the future, more money than one paid out, which can then be re-exchanged for real wealth. Investments, however, don't always pay off. Sometimes an investor loses money instead of gaining it. This means that a person or a corporation can "make" (or take) money by attracting investors rather than by offering wealth in exchange. To make things more wonderfully and woefully complex still, the person "selling" the investment can then turn around and himself re-invest the money so gained, in the hopes that it will pay off more than he ends up paying back to the original investor. And so on, in a tangle of investment and reinvestment. There are whole industries built around this sort of thing, producing no wealth whatsoever but "making" lots of money.

Now the justification for this sort of financial goings-on is that at least some of the money is ultimately used to fund the production of wealth, which, under the rules of our economic game, requires money in order to be done. But it doesn't have to be done that way. All that's really necessary in order for an investment scheme to "make money" is that people who have money be convinced to invest it. A financier can "make money" all day long without producing a damned thing, merely by moving around intrinsically worthless tokens, taking money from others in exchange for promises or, in some cases, for deception.

Even when the money that is being "made" is acquired in the more straightforward fashion, by producing actual wealth and selling it, there is still plenty of room for practices that are anything but straightforward. British Petroleum, for example, is certainly producing wealth (or it intended to anyway) from its deep-water oil well in the Gulf of Mexico. But it acquired ownership of the oil it hoped to pump through a process of leasing the mineral rights from the government that involves a highly questionable exchange of value. Arguably, since the land in question is government property, it belongs to the people of the United States, yet the people get precious little return for it; if BP had to buy the rights for something approximating their real value, that could fund a lot in the way of public services, tax cuts, and/or deficit reduction. On the other end, as what actually happened with that well demonstrates, the law requires the people to pay to clean up any messes that result, after the corporation pays out an amount of money limited by law and, in the instant case, only a tiny fraction of the actual damages. In this particular case, due to the publicity involved and the magnitude of the disaster, BP may find itself unable to make use of that sweetheart deal, but the Gulf oil leak is only a larger-scale version of similar environmental accidents that happen all the time, and other damage that isn't accidental at all.

Running through our economy are rules and practices that twist and warp what should be a straightforward process of producing wealth and distributing it to people into one sort or another of theft. Theft of people's earnings, their savings, their livelihoods, their hopes and dreams, their health, and their lives. And yet, because of the peculiarities of the language we speak, we call all of that "making money."

A curious thing, I say.

Tuesday, June 1, 2010

The Value of Labor

There are two ways to establish a monetary value for labor. Both of those ways are economically sound, depending on the purpose for which labor is being evaluated. For purposes of this writing, both are equally important, as what I wish to discuss is the difference between the two.

The first (and simplest) way of determining the value of labor is through the labor market. This follows the tautology that everything is “worth” what its customer will pay for it. A merchant (in this case a worker) will seek the highest price (wage) possible, while a buyer (employer) will seek the lowest price (wage) possible, and the balance in bargaining power between the two determines the outcome. In the labor market, that balance is affected by the number of workers available to do a particular type of work (supply), the number of such jobs open (demand), the ability of workers to bargain collectively (organization), and the parameters set by law and regulation (rules of the game). Supply, demand, organization, and rules of the game are what determine the “value” of labor – in this sense. Let us call this the market value of labor.

The second way of determining labor’s value is in terms of the value of what it produces. All labor produces goods or services which are then offered for sale (or at least could be), and these goods and services have a market value of their own. In the context of any business, the “value” of labor in this sense is equal to the market value of all goods and services produced by it, net of any non-labor costs of production and marketing. This we may call the productive value of labor.

It should be self-evident that the market value of labor is always less than its productive value. In a capitalist economy this is entirely unavoidable, and as a practical matter it may be unavoidable in any economy, since some portion of the wealth produced must be set aside as capital to be reinvested. But in a capitalist economy, the entire point is to maximize, as much as practical, the difference between labor’s productive value and its market value, because the difference between these two is the margin of profit, and the purpose of a capitalist economy is to maximize profit.
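For those who like to see the arithmetic, here's the labor value gap computed directly from the definitions above (a toy calculation; the firm and all its figures are invented):

```python
# The two values of labor, per the definitions in this post:
# productive value = market value of output, net of non-labor costs;
# market value    = what labor is actually paid (wages).
# The gap between them is the margin of profit.

def productive_value(sales_revenue, non_labor_costs):
    """Market value of what labor produces, net of non-labor costs."""
    return sales_revenue - non_labor_costs

def labor_value_gap(sales_revenue, non_labor_costs, wages_paid):
    """Profit: labor's productive value minus its market value."""
    return productive_value(sales_revenue, non_labor_costs) - wages_paid

# A firm sells $1,000,000 of goods, spends $400,000 on materials, rent,
# and machinery, and pays $450,000 in wages.
print(productive_value(1_000_000, 400_000))          # 600000
print(labor_value_gap(1_000_000, 400_000, 450_000))  # 150000
```

Labor here produced $600,000 of net value and was paid $450,000 for doing so; the $150,000 gap is the profit, and widening it -- by raising the first number or lowering the second -- is the point of the exercise.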

Putting it another way, the purpose of a capitalist economy is to maximize the gap between the market value of what is produced and the share of that wealth which goes to those who do the work of producing it. Or, more simply, a capitalist economy has, as a condition of its defining purpose, the secondary purpose of keeping labor down.

METHODS OF INCREASING THE LABOR VALUE GAP

The goal of capitalism, in service to its ultimate goal of maximizing profit, is to increase as much as possible, and to maintain at as high a level as possible, the labor value gap – that is, the gap between labor’s productive value and its market value. How is this done?

Let us first recognize that capital cannot arbitrarily set wages anywhere it wants. If it could, it would get all labor for free. The market value of labor is determined by the factors of supply, demand, organization, and rules of the game. If capital is to influence the price of labor, therefore, it must influence these four factors. As a practical matter, though, there is limited influence that can be brought to bear by an individual business on any of them. An employer may certainly use organization and machinery to improve efficiency of production and so reduce its demand for labor; it may also use techniques of intimidation and propaganda to prevent the formation of labor unions and so reduce organization; in the modern world, it may in some cases outsource production to foreign countries and in this way increase the supply of labor. But to truly keep the cost of labor down and so maximize the labor value gap and hence maximize profit, business must exert influence over the government, which controls the rules of the game absolutely, and the other three factors to a very large degree.

The manner in which capital influences government is outside the scope of this article, but well known enough that it should need little elaboration; suffice it to say that bribery, either directly through payments to legislators or somewhat less blatantly through campaign contributions, buys access and influence, and turns the government to the service of capital much more than it would turn if it were truly answering the will of the people in democratic fashion. The results may be seen throughout history. It’s also interesting to see how the methods change from time to time depending on circumstances, but always work towards the same outcome.

At one time, the government influenced the supply of labor by encouraging high immigration rates, especially of refugees faced with even more brutal treatment in their home countries. During the 19th and early 20th centuries, this flood of immigrants almost by itself kept wages suppressed in basic industries such as mining, railroads, agriculture, and manufacturing. Today, immigration is still a factor in government policy to increase labor supply, but a less important one. The government encourages high rates of legal immigration of skilled labor today, at the request of the computer industry and others needing technical expertise. With respect to unskilled labor, the rules of the game have changed enough since the 1930s (for reasons I’ll go into in the next section) that legal immigration no longer suffices. A combination of illegal immigration and outsourcing has replaced it as the desired source of labor, since neither illegal immigrants nor foreign workers in their own countries benefit from U.S. labor laws and regulations.

Decreasing the market value of labor is only one side of the process. It’s also been the historical desire of capital to increase labor’s productive value. If this is done without increasing, or better still while decreasing, the market value of labor, then the labor value gap is increased that way as well. In the past, prior to globalization, capital has often sought high tariffs for this reason. Tariffs reduced foreign competition, and allowed higher prices to be set on goods, thus increasing the productive value of the labor that produced them. Of course, improving the productivity of labor through organization and mechanization also increases the productive value of labor. There have been times, however, when capital has been willing to see labor’s productive value actually decreased, as long as its market value was decreased more. A perfect example is the outsourcing of manufacturing that occurs today in response to the historically enlightened rules of the game governing American labor at this time. By moving manufacturing operations to countries where labor is paid only a small fraction of what American workers would have to be paid, manufacturers have been able to substantially reduce prices, and they have done so – not nearly as much as their labor costs have declined, but considerably. In this way, they have been able to continue selling higher quantities of goods to American consumers, whose paychecks have declined because of the loss of labor demand. So it isn’t about either maximizing price or minimizing wages by themselves. Rather, it’s about maximizing the gap between the two.

LIMITS ON THE LABOR VALUE GAP - POLITICAL

There are limits on how wide the labor value gap can become. These fall into two categories, the political and the economic. The political limits arise from the fact that everyone wants what they perceive as a fair shake. For workers, that means payment for their work that constitutes a living wage, and that they perceive as being a fair share of the wealth that their labor produces. A capitalist economy, by systematically increasing the labor value gap, incurs opposition and incites rebellion. The wider this gap becomes, and especially the worse off in real material terms the working class is compared to its expectations, the more opposition and rebellion will occur.

This rebellion may take the form of union organizing and strikes, of sabotage and assault, or, at its greatest extreme, of actual armed rebellion. The response to it, both by private capital and by capital-influenced government, tends initially to be repressive, but over time incorporates elements of reform and compromise. We may see this in the history of all capitalist economies to date, with the oldest such economies (those of the U.S. and western Europe) today exhibiting rules of the game that favor labor much more than was the case in the past. Repeatedly, the level of political unrest has reached a point where the more enlightened capitalists saw a need to offer reform of the system in order to allow it to continue functioning at all. Over time, this has resulted in a more humane and less brutal form of capitalism, incorporating many socialist features.

It’s important to recognize, though, that these reforms do not represent a defeat of the capitalists; they are not a revolution. Rather, they represent evidence that capitalist control of the state and of the economy is not and never has been absolute; it has always been possible to resist. If pushed hard enough, the government will institute reforms, and when the pressure becomes sufficiently strong, capitalists themselves will acquiesce in this reform, since it is preferable to revolution. The truly revolutionary change would be at root political, depriving capital of its monetary influence over government. Until that occurs, it’s questionable just how far the process of reform can go, and certain that any set of reforms will at times be undercut and reversed, or ways found around them, as has happened today with globalization and outsourcing.

LIMITS ON THE LABOR VALUE GAP - ECONOMIC

The other limit on how wide the labor value gap can become is economic. It arises because wages for work serve a dual function. On the one hand, they are a necessary cost of doing business, which capital seeks to minimize. On the other, they are what create a market for the goods and services offered. The market value of labor, therefore, is a limiting factor on the productive value of labor, and this means that in the long run too great a gap between the two will be unsustainable.

Economic history shows this clearly. The U.S. economy in its early phase was one of periodic crises (1837, 1857, 1873, 1893, 1907, 1919, 1929) that wiped out small businesses and brought great suffering to working people. These were more than just "recessions." No recession in the period from the end of World War II until the election of Ronald Reagan ever reached the horrid depths of the financial panics that struck about every twenty years, almost like clockwork, in the pre-Depression economy, when double-digit unemployment was the norm in such downturns and the economy spent nearly as many years in depression as out of it. Many people think of the slump that followed the panic of 1929 -- the "Great Depression" -- as somehow extraordinary. It was not really that extraordinary. It was the longest of the depressions of that era by a couple of years, but otherwise not the worst; that dubious laurel goes to the depression of 1893. What distinguishes the Great Depression from its predecessors is not its severity, nor even its length, but the reforms that arose from it.

Why did these panics occur? Why was the economy as often depressed as otherwise? Because a high labor value gap means a depressed consumer market, sustainable only by a combination of speculative investment and credit. This is unavoidable as long as a significant labor value gap exists: the productive value of labor is the net market value of the goods produced, and in order to buy the goods produced the market value of labor must equal its productive value, or nearly so (we may allow a small gap, representing capital accumulated for reinvestment, with labor accordingly diverted from producing goods for consumption to producing capital goods).
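Here's a rough simulation of that dynamic (a sketch under my own assumptions; every number is invented): when wages fall short of the net value of output, credit can absorb the difference for a while, but household debt ratchets upward each period, and the glut appears the moment the borrowing limit is reached.

```python
# Workers produce 'output_value' of goods each period but are paid only
# 'wages'. The shortfall can only be bought on credit; once debt hits
# the credit limit, goods go unsold and the crisis arrives.

def simulate(periods, output_value, wages, credit_limit):
    debt = 0.0
    for t in range(1, periods + 1):
        shortfall = output_value - wages           # goods wages can't buy
        borrowing = min(shortfall, credit_limit - debt)
        debt += borrowing
        unsold = shortfall - borrowing
        if unsold > 0:
            return t, debt  # credit exhausted: the glut appears
    return None, debt

period, debt = simulate(periods=50, output_value=100, wages=80, credit_limit=300)
print(period, debt)  # 16 300.0 -- crisis once debt hits the limit
```

A wider labor value gap (lower wages relative to output) or a tighter credit limit simply brings the crisis forward; only closing the gap prevents it.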

All of these panics, like the severe recession we are experiencing as of this writing (2010), were deflationary, that is, they drove prices down. As such, they reduced the productive value of labor, which is partly dependent on the prices of the goods produced. Unfortunately, at the same time they also reduced the demand for labor and so reduced the market value of labor as well, and this preserved the imbalance and prevented quick economic recovery.

After the end of the Second World War, the U.S. economy, and also those of capitalist Europe and Japan, entered a uniquely enlightened period. The labor value gap was lower during this period than before the Depression, and also lower than it is today. Just the same, the gap never completely disappeared, and in a capitalist economy my belief is that it can’t. A capitalist economy is defined as one that exists to pursue profit, and profit is found only through a labor value gap.

Just the same, the depression of the consumer market caused by the labor value gap represents a limit to how wide that gap can become. Along with the political restraints produced by rebellion, it tends over time to move a capitalist economy, in fits and starts, along the socialist road.

LOGICAL OUTCOME

I am unsure of the answer here. Much depends on whether the process of reform described above can reach a stage in which capital loses its excessive influence on the government (by any means other than violent revolution). If so, then a full transition to some sort of socialist economy will occur. If not, then we will reach an equilibrium in which we see-saw, as we have in the period since World War II, between wider and narrower labor value gaps.

But such prognosis is beyond the scope of this article, as would be a prescription for what sort of socialist structure would best describe the post-capitalist economy, should attaining that prove possible.

Thursday, May 13, 2010

The First Noble Falsehood

All of the religions of the Classical Civilized Paradigm have in common a core belief about the relationship between the soul and physical existence. This belief is expressed in different ways in different faiths, but the most elegant expression in my opinion is the Four Noble Truths of Buddhism, first of which is that all life contains the element of suffering, or, more simply and as believed in practice, that all life is suffering. For Buddhists, the joys and pleasures of life (while acknowledged to exist) are, in essence, the bait for a trap. They’re here to bind us to the world so we can suffer more. The point is to get out.

No other religion puts it quite that way (or, in my opinion, quite that well), but all of the Great Religions share that conclusion: the point is to get out. We don’t belong here. We belong somewhere else or in some other conditions: in Heaven, in Paradise, reunited with God from Whom we have become separated, or restored to the bliss of non-manifestation. The details vary widely, of course. Religions of the Hindu/Buddhist complex believe in reincarnation or soul transmigration, and so teach that we may go through many incarnations before finally being freed to go where we belong. Religions of the Abrahamic lineage (Judaism, Christianity, and Islam) don’t have this belief, and so teach that there is a single lifetime after which comes God’s judgment and (hopefully) a passage to the place where we belong. But in none of these faiths is incarnate, manifest existence on the physical plane seen as anything but a mistake.

This idea – that we are here by mistake, and our focus should be to remove ourselves somewhere else – is what I call, in a play on the Buddha’s teaching which I’m sure he will have enough enlightenment to forgive, the First Noble Falsehood.

In fairness to the Buddha, we can’t be sure that this is what he actually taught. No writings by him have survived, and that’s rather a mystery. The Buddha, who was a prince, was certainly literate. Did he really not write down any of his teachings, or were his writings lost – or suppressed – after his death? We may wonder the same thing about Jesus and Mohammed, who were also literate and who have also left us no writings. Be that as it may, an inevitable disconnect occurs in communication between the teachings of an enlightened spiritual leader such as the Buddha or Jesus, and the form those teachings take when embodied in an organized religion, especially after the old boy isn’t around any longer to interfere with the process.

Part of that disconnect arises simply because of the difficulty of communicating the deep truths of the spirit in words. Language isn't designed for that purpose. Its vocabulary communicates things from one person to another that both people are already familiar with. In giving you directions to the post office, I know that you know what a street is, what a post office is, what it means to turn left or turn right, and how to identify various landmarks that I may give you to show the way. If someone wants to communicate something that is a bit outside his listeners' experience, then he starts with the things the listener knows and builds on them. But to communicate spiritual reality is very, very difficult, because it is outside of the experienced world of most people. It can usually only be done in metaphors and parables, and even then most people will attach meanings to the parables that aren't correct. "He who has an ear, let him hear."

But beyond this, further problems came in for all Classical Paradigm religions as they became established and involved in politics. Ideas and teachings that did not serve the political purposes of the faith were suppressed, and ideas that were necessary for that purpose were introduced. And it is at this juncture, I believe, that the First Noble Falsehood arose.

Politics is in large measure about privilege. It’s a battleground between those who enjoy an elite, entitled, privileged position, and those who would like to see them lose it. Sometimes those who would like to see them lose it simply want to replace them as the privileged elite. Sometimes the goal is to eliminate privilege altogether or at least reduce its prerogatives. In modern times that latter manifestation has become more acute, but in the ancient times when the Buddha lived, the elite (to which class he himself was born) had things pretty much any way they wanted. Politics – a game the Buddha’s social class played exclusively, commoners need not apply – was thus all about upholding and supporting their status and power, except of course when it involved infighting among them for a larger share of power than one’s fellows.

Religion was, under the Classical Paradigm, always a partner with the state. Often, it was an actual arm of the state. As such, it always served the purposes of the state, which included public order, but also included the upholding of privilege. Now privilege, to the enlightened eye, is unjustifiable. It is simply wrong, and needs to be abolished. And so, if religion is to serve the purposes of the state, a filter must exist to allow the passion which an enlightened teacher’s teachings inspire to be twisted into the service of privilege, something he would in all cases have abhorred. And the First Noble Falsehood serves that purpose admirably, by taking that passion and turning its focus away from this world altogether and towards another reality where there is no social privilege to be threatened.

The challenge to privilege and power represented by spirituality when focused on this world where we actually live can be seen in modern times. Look what Gandhi accomplished for Indian independence or Martin Luther King for racial equality in America. There is magic in genuine spirituality, a power of the heart that moves the hearts of others. To the holders of privilege, spirituality is dangerous, and must be channeled into non-threatening, or ideally even privilege-supporting paths.

Jesus was not put to death on a whim.

The story of Christianity's co-option by the Roman state in the 4th century CE is a good illustration of how the process works, and where the First Noble Falsehood comes from. I said above that under the Classical Paradigm religion was always intertwined with the state, but actually early Christianity represents an exception. Under Roman law, Christianity was an illegal religion, but the laws were seldom enforced. Christians could be arrested at any time, given a chance to recant their faith, and executed in brutal ways if they refused, but most of the time they weren't. The net effect was that Christians were free to practice their religion (except when some Emperor or other got a bug up his sphincter and started a persecution), but Christianity was not involved with the state at all. It couldn't be, because it was illegal. Nor could intra-faith politicians (you know the type, all religions have them) call on the state to enforce their authority. For that reason, during the time between the mission of Paul and the Council of Nicaea, Christianity was one of the freest and most diverse religions of all antiquity.

It was also frequently troublesome. Christians refused to pay token worship to the official state religion, and so in the view of the state endangered Roman society’s relationship with the Gods, on whose favor it depended. Christians were also frequently pacifists and anti-slavery advocates, thus endangering the Roman state’s ability to defend itself against foreign enemies and the foundation of the Roman society’s economy. Here was spirituality acting as a threat to privilege, which it so often does.

Three Emperors attempted to stamp out the religion through persecution. None succeeded. In the early 4th century, the emperor Constantine tried a different approach: co-option. By repealing the laws against Christianity, by calling a council of “Bishops” (note that this in itself favored the more authoritarian forms of Christianity over the less so, since only the authoritarian Christian sects recognized “Bishops” to start with) from all over the Empire to iron out exactly what the faith stood for and taught, and finally by making Christianity itself the state religion of the Roman Empire, Constantine and his successors transformed it from a rebellious and dangerous spirituality into a useful tool of politics. One of the primary levers of this transition was to bring to the fore, and interpret in certain pro-privilege ways, the otherworldliness that had always been an element in the religion. Rather than oppose war and slavery in this world, believers were taught to focus on the next, where there was no war and where everyone was free. By the time of the Roman Empire’s fall, Christianity had become wholly a tool of civic authority and the defense of privilege. The First Noble Falsehood was an important part of that transition.

The transition to that state for other religions is less visible, and in some cases I suppose it’s possible that the First Noble Falsehood was enshrined from the beginning so that no actual transition occurred. What is certain, however, is that so long as religion and the state remained partners in politics, any religion allowed to survive would of necessity transfer any passions for reform from this world to another.

With the separation of church and state which has become the modern norm, spirituality is now free to manifest itself in worldly ways and has begun to do so once more. It is, I think, time to abandon the First Noble Falsehood. We are not here, embodied in physical existence, as a mistake, and our goal is not to leave this life for another, but to make of it the best and holiest thing we can, in love of those around us and in homage to the principles we cherish.

Sunday, May 2, 2010

Personal Power and Political Power

“Money is power” is a cliché. That wealth leads to power, and vice-versa, is so well-known that an important underlying question is often missed. One finds arguments about which of the two is the more important for the commercial elite in our society – again missing that underlying question.

The important underlying question that’s often missed is this: what KIND of power? Are we discussing political power or personal power?

Political power is the ability to influence governing institutions. It’s held (obviously) by elected officials. Barack Obama, at present, has a great deal of political power. He can issue orders and have them carried out by the agencies of the U.S. government, by the United States military forces, and by the Democratic Party which he heads. He can use the persuasive power of his office to influence votes in Congress. To a lesser degree, all elected members of Congress also hold political power, as do Cabinet members and other important unelected government officials and those holding office in state and local governments, or in foreign governments throughout the world. Political power is also wielded by those who don’t hold government offices but who, through campaign contributions and lobbying, or through the ability to persuade a following among the citizens, can influence the actions of the government. Most of the time, when people speak of “power” being held or valued by the commercial elite, this is what they mean: the ability to influence government actions by means of persuasion and bribery.

If that sort of power, political power, is what we’re talking about, then I’d have to say on the whole – with a few exceptions – that money is more important to the commercial elite, and political power is only a means to the end of amassing more money. But there’s another sort of power that lies, I believe, at the heart of all desires to become mega-rich in the first place. Sometimes, for some people, it lies at the heart of a desire for political power as well. Both money and political power can be means to the end of amassing personal power: the ability to make other people, as individuals, into servants of one’s own will. Political power, in extreme or archetypal form, is exemplified by the dictator. Personal power, in extreme or archetypal form, is exemplified not by the dictator but by the slave owner.

Personal power manifests in the powerful as a sense of superiority, and in the powerless as a sense of inferiority. Personal power lets a powerful person look at someone over whom he has power and say to himself – and, through various gestures and subtle means of communication, to his inferior as well – “I am better than you,” and makes the inferior say in the same ways, “You are better than I.” Unlike political power, it’s a very primal sort of power, with roots going back to the origins of our species. It pumps the body full of adrenaline and testosterone, or churns the guts with loathing, fear, and self-hatred.

Personal power is a man’s ability to seduce another man’s wife right in front of him, and have her be afraid to say no and him be afraid to do anything about it. Personal power allows a person to demand that others bow and scrape and show their submission. Personal power allows cruelty to others without penalty, and enables retaliation for even the most minimal slights. Personal power is what the power-hungry desire on a visceral level, and freedom from anyone having personal power over us is what we mean in our hearts by the word “liberty.”

Personal power is a face-to-face thing. Unlike political power, it isn’t impersonal power over the masses, but one-on-one power over an individual. It’s the ability of one individual to make another grovel, serve, and obey.

Government officials seldom hold personal power over ordinary citizens – we seldom interact with government officials in any direct way. They have personal power only over their employees, interns, and so on, and those who come within the purview of their immediate jurisdiction under the law.

Employers, on the other hand, always have personal power over their employees. We have laws protecting the rights of workers for that very reason, to limit the consequences of personal power. Landlords have personal power over renters, and we have laws protecting tenants’ rights for that reason. All such laws were fought tooth and nail by employers and landlords when they were proposed, partly because obeying them is often an expense, but in large part because they remove some of the payoff of personal power.

Every time an employee successfully starts a small business, or becomes self-employed, he gains freedom. The commercial elite may still have a lot more money than he does, but he is no longer dependent on any of them. No employer holds personal power over him. Every time a person buys his own home, he gains freedom. No landlord holds personal power over him.

That’s the underlying, unspoken reason why the pressure is on to keep wages suppressed in America. It’s not the only reason, of course; it’s reflexive for business owners for whom wages are a cost to be kept down, and who seldom consider the larger picture. But as long as wages are kept low, the number of people who will be able to escape from wage work and become free is limited, and so is the number of people who will be able to afford their own homes. With more and more money funneled to the very rich at the top of the ladder, they have more money to play with and gamble with, but at least as important is that the majority of the people are kept on the treadmill, where they can be controlled. Where they can be told what to do, and made to serve.

Personal power needs to be recognized and understood. We need to stop thinking “government” reflexively when we use the word “power.” Sure, government power is important and potentially dangerous. We need to make sure it is restrained by the three safety controls we put on it: separation of powers, public accountability, and explicit limits of government action such as the Bill of Rights. When these become frayed, as they have in recent years, we need to restore them.

But on a visceral level, the government is not what most people think of when they imagine freedom. They think of their boss, or their landlord, and being able to tell them to shove it. They think of being in a situation where no one can tell them what to do. The real enemy of freedom in a democracy is not the government, but rich and powerful individuals able to exercise personal power. To judge whether a government is a tyranny, a good rule of thumb is to ask to what extent it serves the interest of rich and powerful individuals – helping them to exercise personal power over others. To say that government secures and protects people’s rights is another way of saying that it protects the weak from the strong. A tyranny instead aids and abets the strong in dominating the weak.

Of course, the rich and powerful often try to confuse the issue by saying that a government interfering with their freedom to tyrannize others is a tyranny, and to them, it is – as it has to be; if it weren’t, it would be a tyranny to the rest of us. In just that way the slave owners of the antebellum South complained of the tyranny of Washington. We need have no more sympathy for our capitalist masters today than we do in hindsight for the plantation masters of yesterday.