Monday, September 27, 2010

Quote from a rich guy: "Tax me more."

Before presenting what is to follow, I have to apologize for neglecting this journal. My bad. I do have an excuse, mainly that I've been devoting my writing energy to fiction. I've finished the second novel in the Star Mages series and am getting it ready for publication at this point. So that's good, but my feeling is that although it might seem like a decent excuse, I made a commitment here and failed to keep it, and there is no excuse apart from physical or mental incapacity, neither of which applies. (Yet. Knock wood.) :)

So I'll try to make up for that failure. To start with, I ran across an editorial in the LA Times by venture capitalist Garrett Gruener, who said some important things in it that people need to understand and, thanks to trickle-down propaganda, often don't. Here's the link to his article:

tax me more

Some excerpts that are particularly important:

"I'm a venture capitalist and an entrepreneur. Over the past three decades, I've made both good and bad investments. I've created successful companies and ones that didn't do so well. Overall, I'm proud that my investments have created jobs and led to some interesting innovations. And I've done well financially; I'm one of the fortunate few who are in the top echelon of American earners.

"For nearly the last decade, I've paid income taxes at the lowest rates of my professional career. Before that, I paid at higher rates. And if you want the simple, honest truth, from my perspective as an entrepreneur, the fluctuation didn't affect what I did with my money. None of my investments has ever been motivated by the rate at which I would have to pay personal income tax. . . .

"When inequality gets too far out of balance, as it did over the course of the last decade, the wealthy end up saving too much while members of the middle class can't afford to spend much unless they borrow excessively. Eventually, the economy stalls for lack of demand, and we see the kind of deflationary spiral we find ourselves in now. I believe it is no coincidence that the two highest peaks in American income inequality came in 1929 and 2008, and that the following years were marked by low economic activity and significant unemployment.

"What American businesspeople know, and have known since Henry Ford insisted that his employees be able to afford to buy the cars they made, is that a thriving economy doesn't just need investors; it needs people who can buy the goods and services businesses create. For the overall economy to do well, everyday Americans have to do well. . . .

"Remember, paying slightly more in personal income taxes won't change my investment choices at all, and I don't think a higher tax rate will change the investment decisions of most other high earners.

"What will change my investment decisions is if I see an economy doing better, one in which there is demand for the goods and services my investments produce. I am far more likely to invest if I see a country laying the foundation for future growth. In order to get there, we first need to let the Bush-era tax cuts for the upper 2% lapse. It is time to tax me more."

It's not surprising that a venture capitalist "gets it" about what limits investment in job-creating ventures: not availability of capital (i.e., not how much money rich investors have lying around), but expected return. What's more, the main thing that drives expected return is not how much the investor can expect to keep after taxes, but rather how much demand exists for the goods and services the investment is supposed to produce. As I said, it's not surprising a venture capitalist gets this; if he didn't know why he invests in one area rather than another, say in business rather than in financial instruments, he would not likely be successful at what he does. We ought to listen to him when he says things like this. I mean, when someone says, "Raise MY taxes," we can be pretty certain he's not speaking out of duplicitous self-interest. Unless he's a masochist or something.

But I'm going to take this argument one step further. Mr. Gruener says that small changes in his tax rate have no effect on his investment decisions. But what about big ones? What about the effect of the original Reagan tax cuts that dropped top marginal rates from the 60-70 percent range down into the 30s? On the other hand, what would be the effect of creating new tax brackets with very high taxes applied to very high incomes? What about a 95% tax on personal income over a million dollars a year?

Before continuing with this, maybe an explanation is in order about how "marginal" tax rates work. Right now, the top tax rate is 35% on incomes above $373,650. Does that mean that if someone makes $400k a year, he'll pay 35% of his income in federal income tax? No, it's a bit more complex than that. He'll pay that 35% only on taxable income above $373,650, which is to say, on $400,000 - $373,650, or $26,350. (That's if the $400k represents taxable income, not total income, of course.) He pays at a lower rate on all the rest of his income. On the first part of what he earns for the year, he pays no taxes, just like everyone else.
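
For anyone who wants the bracket arithmetic spelled out, here is a minimal sketch in Python. The $373,650 threshold and the 35% rate are the ones cited above; the lower brackets are approximate 2010 single-filer figures from memory and are there only to make the point, not to reproduce the tax code.

    # A minimal sketch (not the real tax code) of how marginal brackets work.
    # Top threshold and rate are the ones cited above; the lower brackets are
    # approximate 2010 single-filer figures, included only for illustration.

    BRACKETS = [        # (income above this amount, taxed at this rate)
        (0,       0.10),
        (8_375,   0.15),
        (34_000,  0.25),
        (82_400,  0.28),
        (171_850, 0.33),
        (373_650, 0.35),
    ]

    def income_tax(taxable_income):
        total = 0.0
        for i, (lower, rate) in enumerate(BRACKETS):
            upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
            if taxable_income > lower:
                total += (min(taxable_income, upper) - lower) * rate
        return total

    income = 400_000
    print(income - 373_650)                       # 26350: the only slice taxed at 35%
    print(round(income_tax(income) / income, 3))  # effective rate, well below 0.35

The point of the last line is that even someone comfortably inside the top bracket pays an effective rate well below the top rate, because only the last slice of income is taxed at 35%.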

So a 95% tax on income above a million dollars doesn't work to impoverish millionaires. What it does is to impose a personal income ceiling. Nobody is going to bother making over a million dollars in taxable income when Uncle Sweetie is going to make off with almost all of it. It won't hurt you to make more than a million (remember, all the income below a million is still taxed at the lower rates), but it won't help you much, either. So investors will stop investing in anything that would push their income above that point, and that will hurt the economy, right?

Well, not so fast. To begin with, most investments, and all of the ones we really want, are tax-deductible and so don't count as taxable income. If you start a business, most of the start-up costs are not taxed. (There are some exceptions involving heavy-equipment purchases, where the tax deduction is split over a number of years.) Wages you pay to employees are never taxed as your income. (As the employee's income, yes.) So what a confiscatory tax on really high income actually does is to give the person making that kind of scratch a really strong incentive to find places to invest that money where it will eventually pay off, but won't be taxable in the meantime. So -- provided we choose which investments to encourage through tax write-offs wisely -- this could actually spur investment rather than discouraging it.

Another consideration besides tax deduction is how quickly an investment pays off. The thing about investing in real business (that is, making stuff or providing services) is that it's a long-term project. You don't expect a quick payoff in the first year. Ask anyone who's ever started a business. You expect to lose money the first year, maybe the second year, maybe even longer depending on exactly what business you're in. Down the road, though, you do expect things to pick up to the point where you've recouped all those losses and made a profit. (It doesn't always happen that way, but you do expect it or you wouldn't have made the investment to begin with.) There are other kinds of investments, though, that can pay off very quickly. A good example is short-term trading on the stock market, where you're not trying to acquire stock for the long haul but rather to buy low and sell high, conceivably in a single day. Even better examples are the kinds of financial trading that resulted recently in the near-collapse of our financial system. To be sure, those particular investments went bad, but the point is that when they pay off they pay off quickly. That makes them preferable to investments in real business if you want a quick gain that can be reinvested for a multiplier effect.

What a confiscatory tax rate on very high income would discourage is this sort of investment. Why seek a quick payoff -- that is, a payoff this year -- if 95% of it goes to the federal government? Under that regimen, it makes a lot more sense to defer financial gains.

To illustrate, consider this. Let's say someone is making half a million normally. The person has another half million to invest. For the sake of simplicity, he has two choices, either of which will return that half million and another million dollars on top of it. He can invest it in short-term financial manipulation that will give him the whole million and a half by year's end. Or he can invest it instead in a start-up company making widgets, take a loss the first year, and recoup his investment plus another million over the next ten years.

If he chooses the latter route, he gets back an average of $150k a year, and in no year does the net return exceed $300k (let's say). Now: if he's going to see that investment return taxed at 35% no matter which way he goes, then he's better off investing in the short-term instrument. His net profit after taxes is $650k either way, but if he does the quick-return bit, he'll have all that to reinvest next year for more return still. With the confiscatory tax in place, though, he'll be much better off taking the slow road. The quick million lands on top of his ordinary half-million in a single year, so half of that gain falls above the million-dollar line and gets taxed at 95%, and his net return on the short-term financial play shrinks to roughly $350,000. The return on the long-term investment stays at $650k, since no year's income crosses that million-dollar line and so all of it avoids the confiscatory tax.
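
Here is the same back-of-the-envelope arithmetic as a short sketch, keeping the simplifying assumptions of the example: a flat 35% rate below the line and a hypothetical 95% rate on income over a million dollars.

    # Back-of-the-envelope version of the example above.  Assumes, as in the
    # text, a flat 35% rate below the line and a hypothetical 95% rate on
    # income over $1,000,000; real brackets are more complicated.

    base_income = 500_000      # what he makes anyway
    gain        = 1_000_000    # profit on the $500k invested, however earned

    # Quick route: the whole gain lands in one year, pushing income to $1.5 million.
    below_line = max(0, 1_000_000 - base_income)   # $500k of the gain stays under the line
    above_line = gain - below_line                 # $500k of the gain lands over the line
    quick_keep = below_line * (1 - 0.35) + above_line * (1 - 0.95)
    print(quick_keep)   # 350000.0

    # Slow route: the gain arrives over ten years, never topping $300k in any
    # single year, so none of it crosses the million-dollar line.
    slow_keep = gain * (1 - 0.35)
    print(slow_keep)    # 650000.0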

Bear in mind he's going to invest the money anyway. The only question is in what. Since investment in making things and providing services is what we want (that's what creates jobs), we want to encourage that and discourage the kind of investment that just plays with money.

Saturday, July 3, 2010

Prosperity: What's An Economy For?

I'm going to be writing a series on what I have come to call "Money-Free Economics." By this I don't mean economics of a barter system or of an economy without money; rather, I mean economics that ignores money and goes to the underlying real-wealth economy that money facilitates. I acknowledge up front that this creates a certain amount of distortion. There are features and processes of a modern economy that can't be understood without addressing money, among them interest rates, the effects of government fiscal policies, and speculative investment -- to name but three of many. But money also creates distortions. In particular, schools of economics that address money without touching on the underlying economy of goods and services often create severe distortions by treating money as if it existed and operated independently of the goods and services for which it is a token of exchange -- as if only money, not stuff, mattered. Moreover, those features of an economy that require addressing money to understand are already covered well by professional economists in their various schools. On these matters they don't require any help from me (often it's the other way around). But when economists present something as stupid as, for example, the laissez-faire interpretations of the Laffer Curve, or explanations for recession that rely entirely on monetary factors and ignore the distribution of wealth, I know that they have focused on money to the point where they have forgotten that it is just a token of exchange and not real wealth, because when you put those in money-free terms their nonsensical nature becomes obvious. So, to address the follies of economists and the politicians who quote them, I shall engage in an exercise, presenting economic concepts in ways that don't use money at all.

I'll begin today with an examination of what an economy is and what it's for in money-free terms.

An economy is, to begin with, a social arrangement. It involves the assignment of ownership, the division of labor, and rules of exchange and trade. In a modern society it is always a function of law. That wasn't always so, because human beings have not always lived under the rule of law, but even in pre-civilized times when there was no law as such and no formal government, there were still rules about who owned what, who was supposed to do what, and who got what in the end.

What this social arrangement is meant to do is to regulate and facilitate the production and distribution of wealth. Wealth, as I pointed out in the last entry, consists of goods and services. Going into a bit more detail, wealth consists of eight things: food, clothing, shelter, tools, toys, entertainment, advice, and assistance. Everything you or anyone else ever buys or sells falls into one or more of those categories. The economy is a social arrangement whereby these eight things are produced and gotten to the people who want and can use them. Those are the two criteria of economic success. As long as those eight things can be produced in enough quality and quantity and distributed to everyone who needs and wants them, the economy is a success. When either of these functions fails, the economy fails. If not enough food can be grown, or if the food that is grown can't be gotten to the people who need to eat it, there is famine. If not enough housing can be built, or if housing is built but sits vacant while people are homeless, there is a housing crisis. And so on.

Every failure of the economy, every depression, every recession, every instance of runaway inflation, every bubble collapse, even the economic failure that occurs after a military defeat, manifests ultimately in a failure either of production or of distribution or both. Even when the cause (or at least the trigger) of the economic problems is fiscal or monetary, such as a stock-market crash or the collapse of a housing or real estate or some other bubble, it always comes down in the end to a failure to produce or a failure to distribute. If it does not, then it is a nonexistent problem as far as the overall economy is concerned.

Problems can occur on either the production or the distribution side. An example of a production-side problem is a severe drought that results in crop failure. This creates a shortage of food and, with it, starvation. Another example is the devastation created by war, as for example in Germany during and after World War II, when Allied bombing and Allied and Soviet invasion destroyed German factories and industrial capacity, as well as German roads and railroads. A third example, more subtle, is the impact on the U.S. economy of the oil shocks of the 1970s and early 1980s, beginning with the OPEC embargo of 1973, which caused shortages of a crucial raw material. An economy that is in a pre-industrial state and is trying to industrialize also faces production challenges, not in the sense of losing production but in the sense of wanting to increase it. In general, production of wealth requires raw materials, labor, knowledge, and organization, and a shortage of any of these (for whatever reason) results in a deficit of production.

Problems of production are severe, but problems of distribution can be equally severe. The Irish potato famine was, at root, a distribution problem. It had a proximate cause on the production side, a potato disease that caused crop failures, but this would not have resulted in famine except that the Irish wheat lands were all in the control of aristocratic landholders who were entitled to the wheat crops for export purposes. That's the reason why ordinary Irish people were dependent on a potato diet in the first place. A more nearly equal distribution of Ireland's food crops would have meant that when the potato harvest failed, the people could eat other foods. Severe maldistribution of the nation's agricultural wealth meant that the potato blight became the potato famine.

The Great Depression and similar breakdowns in the years before it (for example the Long Depression that began in 1873 and lasted longer than the Great Depression itself, although it was not quite as severe) were also breakdowns of distribution. The economies of the advanced nations, such as the United States, suffered no shortages of raw materials, labor, knowledge, or organization, and there were initially no problems of production. But the goods produced were not distributed to the people who would use them. Because of the system of private capital property ownership, the goods produced in a factory (say) belonged to the factory's owner, and anyone who wanted those goods had to exchange items of value for them (by way of money, of course). Since not enough of the people who wanted the goods had the value to exchange for them, they could not be sold and so sat in warehouses being of no use to anyone.

The distorting effect of money can be easily seen in this entire sequence of events, which were caused by a desire on the part of capital property owners to keep to themselves as much of the wealth produced as they could. As long as we think in terms of money, this is perfectly understandable: the rich wanted to become richer. But if we think in money-free terms, the silliness of it becomes clearer. How much in the way of food, clothing, shelter, tools, toys, entertainment, advice, and assistance does even the richest person need? How much of these things does he even want? How much can he use? After a certain point, all that stuff is wanted not for use but for sale, and if a relatively few rich people own almost everything of value, for what can it be sold?

Here is the fundamental flaw of capitalism. It is predicated on, and focused on, the accumulation of individual fortunes, which means that ultimately it undercuts its own basis, resulting in economic breakdowns due to maldistribution of wealth and consequent depressed demand. Economists have gone to great lengths to refuse to acknowledge this. There is, or used to be, a concept in economics called "overproduction" or "surplus production," which meant that the economy was producing more stuff than people could use, so that in order to maintain full employment and productivity the surplus needed to be sold abroad. But the economy has not historically ever actually produced more stuff than people could use (although that's theoretically possible). It has just produced more stuff than the people who wanted to use it could buy. That's a very different thing. The demand for goods and services depends not only on people's desire for things, but also on what they have to trade for them, and for most people the latter is exhausted long before the former. (Those for whom it is not, exhaust their desire to buy instead. Either way, stuff remains unsold.)

One of the most telling things about economics today, even more than its disconnect from the economy of stuff and its focus on the arcane economy of money, is the refusal of many of its practitioners to think about the elephant in the room: the distribution of wealth. Even when an economist (by this stage of the game usually one long dead) takes a money-free approach, it often suffers from this flaw. A good example is Say's Law.

Say's Law is an economic principle attributed (somewhat incorrectly, but that's by-the-way) to the French economist Jean-Baptiste Say, who lived and worked in the late 18th and early 19th century. Say argued that there could never be a general glut of goods -- too much on the market to be sold -- because all goods produced created value with which to buy other goods, and goods are exchanged only for goods even when they are exchanged by way of money. As far as it goes, that's true -- but it also very much matters whether the goods produced are owned, and so exchangeable, by those who desire the other goods produced. Or in other words, it matters how widely wealth is shared. The fact that wealth exists to exchange for all products produced in the form of other products does no good on a practical basis unless those goods are in possession of those who wish to make the purchase.

One finds many critiques of Say's Law among economists, but rarely will one find this fundamental flaw recognized. John Maynard Keynes, for example, identified three assumptions underlying Say's Law: a barter model of money (goods are exchanged for goods), flexible prices (that can rapidly adjust upwards or downwards with little or no "stickiness"), and no government intervention. Keynes himself disputed the second assumption, arguing that prices are not necessarily flexible. Others have disputed the first or the third. (And here one does run into the distorting effect that arises from money-free economics, because there are aspects of a money economy which do not perfectly mirror a barter economy. However, that is not the real problem with Say's Law.) It's true that the idea does rest on at least the first two of those assumptions, but it also rests on another which is self-evidently false: the equal or near-equal distribution of wealth.

It's a curious thing, this refusal even of a supposedly "progressive" economist such as Keynes to address the central problem of inequality even though his own work naturally lends itself to doing so. Those who do address it usually seem to confine themselves to the moral aspects of it without considering the economic aspects. But the economic aspects are also real and also important.

Returning to the two functions of an economy, production and distribution of wealth, we may consider the template to be the economy of a pre-civilized community, in which a small band of human beings own all capital property in common and share tasks and wealth more or less equally. Production-side problems arose often enough in the form of shortages, but distribution-side problems did not. Even when production problems happened, it was never due to failures of organization, but only of natural resources, knowledge, or labor. The economy functioned in the manner Marx described as "communism," the end-state of his theoretical economic progression: from each according to his ability, to each according to his needs. Now, my personal opinion is that Marx had to have been smoking something to believe that an advanced economy, whose essence is impersonality, could ever operate communistically in this fashion. But we may nonetheless take that ancient pattern as, in terms of distribution and of the organization of labor and natural resources, the ideal, and evaluate our modern substitutes in terms of how closely they approximate this ideal. The truth is, of course, that they fall far short -- but in fairness, they have a much more complicated problem to solve.

In future posts, I'll consider historical economies that worked better than the one we have now, along with some spectacular historical failures. Finally, I'll speculate about alternatives to capitalism as it currently exists. In all cases, I'll approach the questions through money-free economics, in order to keep it as simple and non-arcane as possible.

Wednesday, June 16, 2010

"Making Money"

Our language has many peculiarities that shape thought in hidden ways. One example is the phrase "make money."

Strictly speaking, nobody "makes money" in this country except the mint. Money is legal tender, and neither private individuals nor corporations are authorized to "make" it. To do so is a felony. When we say that someone "makes money," what we really mean is that the person takes money: he persuades other people to give him money in exchange for something else, be it goods, services, promises, or deception. No money is actually made in these transactions, by which I mean that the overall money supply does not increase; what money the person who is "making" it gains, his customers lose in an exact one-for-one correspondence. Of course, that's not necessarily a bad thing for the customers, since money also has no intrinsic value whatsoever; it gains value only in exchange for other things that DO have intrinsic value, and the only reason anyone is willing to take intrinsically worthless money in exchange for intrinsically valuable things is that the money so acquired can then be passed along to someone else in exchange for other things of value. Money is at root a confidence game in the literal sense: it requires faith in the system of government that backs it, and confidence that it can be exchanged for items of value even though it has no value of its own. And because of this disconnect, this remove between the medium of exchange and the items of actual value, it can also be a confidence game in the figurative sense.

Really it all comes down, not to money, but to stuff: goods, services, promises, or deception. Money is not wealth. Goods and services are wealth; money is only a token exchangeable for wealth. One cannot "make money," but one can make wealth, by making goods or performing services. Ideally, that is how a person or a corporation "makes money" -- by making wealth, and exchanging the wealth for money, which can then be re-exchanged for more wealth. The amount of money doesn't increase, but the amount of wealth does. As a straightforward exchange, there is nothing objectionable about this. But the fact that we employ money rather than barter -- the fact that we exchange wealth not for wealth but for tokens exchangeable for wealth -- means that the potential for abuse, and for confidence games in the figurative sense, creeps in.

Start with the fact that goods and services are, almost without exception, produced collectively, not individually. That is, their creation requires the cooperative effort of more than one person. Most of the people who work to create the wealth have no ownership interest in it (as I explored in an earlier post) and must accept (or reject) a payment in money for helping to create it according to the terms that the owner (usually a corporation) is willing to offer. The potential for abuse in that transaction is of course well known to anyone who has studied the history of the labor movement.

Then there's the fact that money can be exchanged not just for real wealth, but for potential wealth. This is called "investing." Money is paid not for goods or services, but for the potential of being repaid, at some point in the future, more money than one paid out, which can then be re-exchanged for real wealth. Investments, however, don't always pay off. Sometimes an investor loses money instead of gaining it. This means that a person or a corporation can "make" (or take) money by attracting investors rather than by offering wealth in exchange. To make things more wonderfully and woefully complex still, the person "selling" the investment can then turn around and re-invest the money so gained himself in the hopes that it will pay off more than he ends up paying back to the original investor. And so on, in a tangle of investment and reinvestment. There are whole industries built around this sort of thing, producing no wealth whatsoever but "making" lots of money.

Now the justification for this sort of financial goings-on is that at least some of the money is ultimately used to fund the production of wealth, which, under the rules of our economic game, requires money in order to be done. But it doesn't have to be done that way. All that's really necessary in order for an investment scheme to "make money" is that people who have money be convinced to invest it. A financier can "make money" all day long without producing a damned thing, merely by moving around intrinsically worthless tokens, taking money from others in exchange for promises or, in some cases, for deception.

Even when the money that is being "made" is acquired in the more straightforward fashion, by producing actual wealth and selling it, there is still plenty of room for practices that are anything but straightforward. British Petroleum, for example, is certainly producing wealth (or intended to, anyway) from its deep-water oil well in the Gulf of Mexico. But it acquired ownership of the oil it hoped to pump through a process of leasing the mineral rights from the government that involves a highly questionable exchange of value. Arguably, since the land in question is government property, it belongs to the people of the United States, yet the people get precious little return for it; if BP had to buy the rights for something approximating their real value, that could fund a lot in the way of public services, tax cuts, and/or deficit reduction. On the other end, as what actually happened with that well demonstrates, the law requires the people to pay to clean up any messes that result, after the corporation pays out an amount of money limited by law and, in the instant case, only a tiny fraction of the actual damages. In this particular case, due to the publicity involved and the magnitude of the disaster, BP may find itself unable to make use of that sweetheart deal, but the Gulf oil leak is only a larger-scale version of similar environmental accidents that happen all the time, and of other damage that isn't accidental at all.

Running through our economy are rules and practices that twist and warp what should be a straightforward process of producing wealth and distributing it to people into one sort or another of theft. Theft of people's earnings, their savings, their livelihoods, their hopes and dreams, their health, and their lives. And yet, because of the peculiarities of the language we speak, we call all of that "making money."

A curious thing, I say.

Tuesday, June 1, 2010

The Value of Labor

There are two ways to establish a monetary value for labor. Both are economically sound; which one applies depends on the purpose for which labor is being valued. For purposes of this writing, both are equally important, as what I wish to discuss is the difference between the two.

The first (and simplest) way of determining the value of labor is through the labor market. This follows the tautology that everything is “worth” what its customer will pay for it. A merchant (in this case a worker) will seek the highest price (wage) possible, while a buyer (employer) will seek the lowest price (wage) possible, and the balance in bargaining power between the two determines the outcome. In the labor market, that balance is affected by the number of workers available to do a particular type of work (supply), the number of such jobs open (demand), the ability of workers to bargain collectively (organization), and the parameters set by law and regulation (rules of the game). Supply, demand, organization, and rules of the game are what determine the “value” of labor – in this sense. Let us call this the market value of labor.

The second way of determining labor’s value is in terms of the value of what it produces. All labor produces goods or services which are then offered for sale (or at least could be), and these goods and services have a market value of their own. In the context of any business, the “value” of labor in this sense is equal to the market value of all goods and services produced by it, net of any non-labor costs of production and marketing. This we may call the productive value of labor.

It should be self-evident that the market value of labor is always less than its productive value. In a capitalist economy this is entirely unavoidable, and as a practical matter it may be unavoidable in any economy, since some portion of the wealth produced must be set aside as capital to be reinvested. But in a capitalist economy, the entire point is to maximize, as much as practical, the difference between labor’s productive value and its market value, because the difference between these two is the margin of profit, and the purpose of a capitalist economy is to maximize profit.
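
To put numbers on the two values and the gap between them, here is a tiny sketch; the figures are invented purely to illustrate the definitions above.

    # Invented numbers, purely to illustrate the two values of labor defined above.

    goods_sold      = 1_000_000   # market value of everything the workforce produced
    non_labor_costs =   400_000   # materials, equipment, rent, marketing, and so on
    wages_paid      =   350_000   # the market value of the labor

    productive_value = goods_sold - non_labor_costs    # 600,000
    labor_value_gap  = productive_value - wages_paid   # 250,000: the margin of profit

    print(productive_value, labor_value_gap)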

Putting it another way, the purpose of a capitalist economy is to maximize the gap between the market value of what is produced, and the share of that wealth which goes to those who do the work of producing it. Or, more simply, a capitalist economy has as a condition of its defining purpose, the secondary purpose of keeping labor down.

METHODS OF INCREASING THE LABOR VALUE GAP

The goal of capitalism, in service to its ultimate goal of maximizing profit, is to increase as much as possible, and to maintain at as high a level as possible, the labor value gap – that is, the gap between labor’s productive value and its market value. How is this done?

Let us first recognize that capital cannot arbitrarily set wages anywhere it wants. If it could, it would get all labor for free. The market value of labor is determined by the factors of supply, demand, organization, and rules of the game. If capital is to influence the price of labor, therefore, it must influence these four factors. As a practical matter, though, there is limited influence that can be brought to bear by an individual business on any of them. An employer may certainly use organization and machinery to improve efficiency of production and so reduce its demand for labor; it may also use techniques of intimidation and propaganda to prevent the formation of labor unions and so reduce organization; in the modern world, it may in some cases outsource production to foreign countries and in this way increase the supply of labor. But to truly keep the cost of labor down and so maximize the labor value gap and hence maximize profit, business must exert influence over the government, which controls the rules of the game absolutely, and the other three factors to a very large degree.

The manner in which capital influences government is outside the scope of this article, but well known enough that it should need little elaboration; suffice it to say that bribery, either directly through payments to legislators or somewhat less blatantly through campaign contributions, buys access and influence, and turns the government to the service of capital far more than would be the case if it were truly answering the will of the people in democratic fashion. The results may be seen throughout history. It's also interesting to see how the methods change from time to time depending on circumstances, but always work towards the same outcome.

At one time, the government influenced the supply of labor by encouraging high immigration rates, especially of refugees faced with even more brutal treatment in their home countries. During the 19th and early 20th centuries, this flood of immigrants almost by itself kept wages suppressed in basic industries such as mining, railroads, agriculture, and manufacturing. Today, immigration is still a factor in government policy to increase labor supply, but a less important one. The government encourages high rates of legal immigration of skilled labor today, at the request of the computer industry and others needing technical expertise. With respect to unskilled labor, the rules of the game have changed enough since the 1930s (for reasons I’ll go into in the next section) that legal immigration no longer suffices. A combination of illegal immigration and outsourcing has replaced it as the desired source of labor, since neither illegal immigrants nor foreign workers in their own countries benefit from U.S. labor laws and regulations.

Decreasing the market value of labor is only one side of the process. It’s also been the historical desire of capital to increase labor’s productive value. If this is done without increasing, or better still while decreasing, the market value of labor, then the labor value gap is increased that way as well. In the past, prior to globalization, capital has often sought high tariffs for this reason. Tariffs reduced foreign competition, and allowed higher prices to be set on goods, thus increasing the productive value of the labor that produced them. Of course, improving the productivity of labor through organization and mechanization also increases the productive value of labor. There have been times, however, when capital has been willing to see labor’s productive value actually decreased, as long as its market value was decreased more. A perfect example is the outsourcing of manufacturing that occurs today in response to the historically enlightened rules of the game governing American labor at this time. By moving manufacturing operations to countries where labor is paid only a small fraction of what American workers would have to be paid, manufacturers have been able to substantially reduce prices, and they have done so – not nearly as much as their labor costs have declined, but considerably. In this way, they have been able to continue selling higher quantities of goods to American consumers, whose paychecks have declined because of the loss of labor demand. So it isn’t about either maximizing price or minimizing wages by themselves. Rather, it’s about maximizing the gap between the two.

LIMITS ON THE LABOR VALUE GAP - POLITICAL

There are limits on how wide the labor value gap can become. These fall into two categories, the political and the economic. The political limits arise from the fact that everyone wants what they perceive as a fair shake. For workers, that means payment for their work that constitutes a living wage, and that they perceive as being a fair share of the wealth that their labor produces. A capitalist economy, by systematically increasing the labor value gap, incurs opposition and incites rebellion. The wider this gap becomes, and especially the worse off in real material terms the working class is compared to its expectations, the more opposition and rebellion will occur.

This rebellion may take the form of union organizing and strikes, of sabotage and assault, or, at its greatest extreme, of actual armed rebellion. The response to it, both by private capital and by capital-influenced government, tends initially to be repressive, but over time incorporates elements of reform and compromise. We may see this in the history of all capitalist economies to date, with the oldest such economies (those of the U.S. and western Europe) today exhibiting rules of the game that favor labor much more than was the case in the past. Repeatedly, the level of political unrest has reached a point where the more enlightened capitalists saw a need to offer reform of the system in order to allow it to continue functioning at all. Over time, this has resulted in a more humane and less brutal form of capitalism, incorporating many socialist features.

It’s important to recognize, though, that these reforms do not represent a defeat of the capitalists; they are not a revolution. Rather, they represent evidence that capitalist control of the state and of the economy is not and never has been absolute; it has always been possible to resist. If pushed hard enough, the government will institute reforms, and when the pressure becomes sufficiently strong, capitalists themselves will acquiesce in this reform, since it is preferable to revolution. The truly revolutionary change would be at root political, depriving capital of its monetary influence over government. Until that occurs, it’s questionable just how far the process of reform can go, and certain that any set of reforms will at times be undercut and reversed, or ways found around them, as has happened today with globalization and outsourcing.

LIMITS ON THE LABOR VALUE GAP - ECONOMIC

The other limit on how wide the labor value gap can become is economic. It arises because wages for work serve a dual function. On the one hand, they are a necessary cost of doing business, which capital seeks to minimize. On the other, they are what create a market for the goods and services offered. The market value of labor, therefore, is a limiting factor on the productive value of labor, and this means that in the long run too great a gap between the two will be unsustainable.

Economic history shows this clearly. The U.S. economy in its early phase was one of periodic crisis (1837, 1857, 1873, 1893, 1907, 1919, 1929) that wiped out small businesses and brought great suffering to working people. These were more than just “recessions.” No recession during the period from the end of World War II until the election of Ronald Reagan ever reached the horrid depths of the financial panics that occurred about every 20 years, almost like clockwork, in the pre-Depression economy, when double-digit unemployment was the norm in such downturns, and the economy experienced nearly as many years in depression as it did out of it. Many people think of the panic of 1929 – called the “Great Depression” – as somehow extraordinary. It was not really that extraordinary. It was the longest of the depressions of that time by a couple of years, but otherwise not the worst; that dubious laurel goes to the depression of 1893. What distinguishes the Great Depression from its predecessors is not its severity, nor even its length, but the reforms that arose from it.

Why did these panics occur? Why was the economy as often depressed as otherwise? Because a high labor value gap means a depressed consumer market, sustainable only by a combination of speculative investment and credit. This is unavoidable as long as a significant labor value gap exists: the productive value of labor is the net market value of the goods produced, and in order to buy the goods produced the market value of labor must equal its productive value, or nearly so (we may allow a small gap, representing capital accumulated for reinvestment, with labor accordingly diverted from producing goods for consumption to producing capital goods).

All of these panics, like the severe recession we are experiencing as of this writing (2010), were deflationary, that is, they drove prices down. As such, they reduced the productive value of labor, which is partly dependent on the prices of the goods produced. Unfortunately, at the same time they also reduced the demand for labor and so reduced the market value of labor as well, and this preserved the imbalance and prevented quick economic recovery.

After the end of the Second World War, the U.S. economy, and also those of capitalist Europe and Japan, entered a uniquely enlightened period. The labor value gap was lower during this period than before the Depression, and also lower than it is today. Just the same, the gap never completely disappeared, and in a capitalist economy my belief is that it can’t. A capitalist economy is defined as one that exists to pursue profit, and profit is found only through a labor value gap.

Just the same, the depression of the consumer market caused by the labor value gap represents a limit to how wide that gap can become. Along with the political restraints produced by rebellion, it tends over time to move a capitalist economy, in fits and starts, along the socialist road.

LOGICAL OUTCOME

I am unsure of the answer here. Much depends on whether the process of reform described above can reach a stage in which capital loses its excessive influence on the government (by any means other than violent revolution). If so, then a full transition to some sort of socialist economy will occur. If not, then we will reach an equilibrium in which we see-saw, as we have in the period since World War II, between wider and more narrow labor value gaps.

But such prognosis is beyond the scope of this article, as would be a prescription for what sort of socialist structure would best describe the post-capitalist economy, should attaining that prove possible.

Thursday, May 13, 2010

The First Noble Falsehood

All of the religions of the Classical Civilized Paradigm have in common a core belief about the relationship between the soul and physical existence. This belief is expressed in different ways in different faiths, but the most elegant expression in my opinion is the Four Noble Truths of Buddhism, first of which is that all life contains the element of suffering, or, more simply and as believed in practice, that all life is suffering. For Buddhists, the joys and pleasures of life (while acknowledged to exist) are, in essence, the bait for a trap. They’re here to bind us to the world so we can suffer more. The point is to get out.

No other religion puts it quite that way (or, in my opinion, quite that well), but all of the Great Religions share that conclusion: the point is to get out. We don’t belong here. We belong somewhere else or in some other conditions: in Heaven, in Paradise, reunited with God from Whom we have become separated, or restored to the bliss of non-manifestation. The details vary widely, of course. Religions of the Hindu/Buddhist complex believe in reincarnation or soul transmigration, and so teach that we may go through many incarnations before finally being freed to go where we belong. Religions of the Abrahamic lineage (Judaism, Christianity, and Islam) don’t have this belief, and so teach that there is a single lifetime after which comes God’s judgment and (hopefully) a passage to the place where we belong. But in none of these faiths is incarnate, manifest existence on the physical plane seen as anything but a mistake.

This idea – that we are here by mistake, and our focus should be to remove ourselves somewhere else – is what I call, in a play on the Buddha’s teaching which I’m sure he will have enough enlightenment to forgive, the First Noble Falsehood.

In fairness to the Buddha, we can’t be sure that this is what he actually taught. No writings by him have survived, and that’s rather a mystery. The Buddha, who was a prince, was certainly literate. Did he really not write down any of his teachings, or were his writings lost – or suppressed – after his death? We may wonder the same thing about Jesus and Mohammed, who were also literate and who have also left us no writings. Be that as it may, an inevitable disconnect occurs in communication between the teachings of an enlightened spiritual leader such as the Buddha or Jesus, and the form those teachings take when embodied in an organized religion, especially after the old boy isn’t around any longer to interfere with the process.

Part of that disconnect arises simply because of the difficulty of communicating the deep truths of the spirit in words. Language isn't designed for that purpose. Its vocabulary communicates things from one person to another that both people are already familiar with. In giving you directions to the post office, I know that you know what a street is, what a post office is, what it means to turn left or turn right, and how to identify various landmarks that I may give you to show the way. If someone wants to communicate something that is a bit outside his listeners' experience, he starts with the things they already know and builds on those. But to communicate spiritual reality is very, very difficult, because it is outside of the experienced world of most people. It can usually only be done in metaphors and parables, and even then most people will attach meanings to the parables that aren't correct. "He who has an ear, let him hear."

But beyond this, further problems came in for all Classical Paradigm religions as they became established and involved in politics. Ideas and teachings that did not serve the political purposes of the faith were suppressed, and ideas that were necessary for that purpose were introduced. And it is at this juncture, I believe, that the First Noble Falsehood arose.

Politics is in large measure about privilege. It’s a battleground between those who enjoy an elite, entitled, privileged position, and those who would like to see them lose it. Sometimes those who would like to see them lose it simply want to replace them as the privileged elite. Sometimes the goal is to eliminate privilege altogether or at least reduce its prerogatives. In modern times that latter manifestation has become more acute, but in the ancient times when the Buddha lived, the elite (to which class he himself was born) had things pretty much any way they wanted. Politics – a game the Buddha’s social class played exclusively, commoners need not apply – was thus all about upholding and supporting their status and power, except of course when it involved infighting among them for a larger share of power than one’s fellows.

Religion was, under the Classical Paradigm, always a partner with the state. Often, it was an actual arm of the state. As such, it always served the purposes of the state, which included public order, but also included the upholding of privilege. Now privilege, to the enlightened eye, is unjustifiable. It is simply wrong, and needs to be abolished. And so, if religion is to serve the purposes of the state, a filter must exist to allow the passion which an enlightened teacher’s teachings inspire to be twisted into the service of privilege, something he would in all cases have abhorred. And the First Noble Falsehood serves that purpose admirably, by taking that passion and turning its focus away from this world altogether and towards another reality where there is no social privilege to be threatened.

The challenge to privilege and power represented by spirituality when focused on this world where we actually live can be seen in modern times. Look what Gandhi accomplished for Indian independence or Martin Luther King for racial equality in America. There is magic in genuine spirituality, a power of the heart that moves the hearts of others. To the holders of privilege, spirituality is dangerous, and must be channeled into non-threatening, or ideally even privilege-supporting paths.

Jesus was not put to death on a whim.

The story of Christianity's co-option by the Roman state in the 4th century CE is a good illustration of how the process works, and where the First Noble Falsehood comes from. I said above that under the Classical Paradigm religion was always intertwined with the state, but actually early Christianity represents an exception. Under Roman law, Christianity was an illegal religion, but the laws were seldom enforced. Christians could be arrested any time, given a chance to recant their faith, and executed in brutal ways if they refused, but most of the time they weren't. The net effect was that Christians were free to practice their religion (except when some Emperor or other got a bug up his sphincter and started a persecution), but the religion itself was not involved with the state at all. It couldn't be, because it was illegal. Nor could intra-faith politicians (you know the type, all religions have them) call on the state to enforce their authority. For that reason, during the time between the mission of Paul and the Council of Nicaea, Christianity was one of the freest and most diverse religions of all antiquity.

It was also frequently troublesome. Christians refused to pay token worship to the official state religion, and so in the view of the state endangered Roman society's relationship with the Gods, on whose favor it depended. Christians were also frequently pacifists and anti-slavery advocates, thus endangering the Roman state's ability to defend itself against foreign enemies and the foundation of the Roman society's economy. Here was spirituality acting as a threat to privilege, as it so often does.

Three Emperors attempted to stamp out the religion through persecution. None succeeded. In the early 4th century, the emperor Constantine tried a different approach: co-option. By repealing the laws against Christianity, by calling a council of “Bishops” (note that this in itself favored the more authoritarian forms of Christianity over the less so, since only the authoritarian Christian sects recognized “Bishops” to start with) from all over the Empire to iron out exactly what the faith stood for and taught, and finally by making Christianity itself the state religion of the Roman Empire, Constantine and his successors transformed it from a rebellious and dangerous spirituality into a useful tool of politics. One of the primary levers of this transition was to bring to the fore, and interpret in certain pro-privilege ways, the otherworldliness that had always been an element in the religion. Rather than oppose war and slavery in this world, believers were taught to focus on the next, where there was no war and where everyone was free. By the time of the Roman Empire’s fall, Christianity had become wholly a tool of civic authority and the defense of privilege. The First Noble Falsehood was an important part of that transition.

The transition to that state for other religions is less visible, and in some cases I suppose it’s possible that the First Noble Falsehood was enshrined from the beginning so that no actual transition occurred. What is certain, however, is that so long as religion and the state remained partners in politics, any religion allowed to survive would of necessity transfer any passions for reform from this world to another.

With the separation of church and state which has become the modern norm, spirituality is now free to manifest itself in worldly ways and has begun to do so once more. It is, I think, time to abandon the First Noble Falsehood. We are not here, embodied in physical existence, as a mistake, and our goal is not to leave this life for another, but to make of it the best and holiest thing we can, in love of those around us and in homage to the principles we cherish.

Sunday, May 2, 2010

Personal Power and Political Power

“Money is power” is a cliché. That wealth leads to power, and vice-versa, is so well-known that an important underlying question is often missed. One finds arguments about which of the two is the more important for the commercial elite in our society – again missing that underlying question.

The important underlying question that’s often missed is this: what KIND of power? Are we discussing political power or personal power?

Political power is the ability to influence governing institutions. It’s held (obviously) by elected officials. Barack Obama, at present, has a great deal of political power. He can issue orders and have them carried out by the agencies of the U.S. government, by the United States military forces, and by the Democratic Party which he heads. He can use the persuasive power of his office to influence votes in Congress. To a lesser degree, all elected members of Congress also hold political power, as do Cabinet members and other important unelected government officials and those holding office in state and local governments, or in foreign governments throughout the world. Political power is also wielded by those who don’t hold government offices but who, through campaign contributions and lobbying, or through the ability to persuade a following among the citizens, can influence the actions of the government. Most of the time, when people speak of “power” being held or valued by the commercial elite, this is what they mean: the ability to influence government actions by means of persuasion and bribery.

If that sort of power, political power, is what we're talking about, then I'd have to say on the whole – with a few exceptions – that money is more important to the commercial elite and power is only a means to the end of amassing more money. But there's another sort of power that lies, I believe, at the heart of all desires to become mega-rich in the first place. Sometimes, for some people, it lies at the heart of a desire for political power as well. Both money and political power can be means to the end of amassing personal power: the ability to make other people, as individuals, into servants of one's own will. Political power, in extreme or archetypal form, is exemplified by the dictator. Personal power, in extreme or archetypal form, is exemplified not by the dictator but by the slave owner.

Personal power shows itself in the powerful as a sense of superiority, and in the powerless as a sense of inferiority. Personal power lets a powerful person look at someone over whom he has power and say to himself – and, through various gestures and subtle means of communication, to his inferior as well – "I am better than you," and makes the inferior say in the same ways, "You are better than I." Unlike political power, it's a very primal sort of power, with roots going back to the origins of our species. It pumps the body full of adrenaline and testosterone, or churns the guts with loathing, fear, and self-hatred.

Personal power is a man’s ability to seduce another man’s wife right in front of him, and have her be afraid to say no and him be afraid to do anything about it. Personal power allows a person to demand that others bow and scrape and show their submission. Personal power allows cruelty to others without penalty, and enables retaliation for even the most minimal slights. Personal power is what the power-hungry desire on a visceral level, and freedom from anyone having personal power over us is what we mean in our hearts by the word “liberty.”

Personal power is a face-to-face thing. Unlike political power, it isn’t impersonal power over the masses, but one-on-one power over an individual. It’s the ability of one individual to make another grovel, serve, and obey.

Government officials seldom hold personal power over ordinary citizens – we seldom interact with government officials in any direct way. They have personal power only over their employees, interns, and so on, and those who come within the purview of their immediate jurisdiction under the law.

Employers, on the other hand, always have personal power over their employees. We have laws protecting the rights of workers for that very reason, to limit the consequences of personal power. Landlords have personal power over renters, and we have laws protecting tenants’ rights for that reason. All such laws were fought tooth and nail by employers and landlords when they were proposed, partly because obeying them is often an expense, but in large part because such laws remove some of the payoff of personal power.

Every time an employee successfully starts a small business, or becomes self-employed, he gains freedom. The commercial elite may still have a lot more money than he does, but he is no longer dependent on any of them. No employer holds personal power over him. Every time a person buys his own home, he gains freedom. No landlord holds personal power over him.

That’s the underlying, unspoken reason why the pressure is on to keep wages suppressed in America. It’s not the only reason, of course; it’s reflexive for business owners for whom wages are a cost to be kept down, and who seldom consider the larger picture. But as long as wages are kept low, the number of people who will be able to escape from wage work and become free is limited, and so is the number of people who will be able to afford their own homes. With more and more money funneled to the very rich at the top of the ladder, they have more money to play with and gamble with, but at least as important is that the majority of the people are kept on the treadmill, where they can be controlled. Where they can be told what to do, and made to serve.

Personal power needs to be recognized and understood. We need to stop thinking “government” reflexively when we use the word “power.” Sure, government power is important and potentially dangerous. We need to make sure it is restrained by the three safety controls we put on it: separation of powers, public accountability, and explicit limits of government action such as the Bill of Rights. When these become frayed, as they have in recent years, we need to restore them.

But on a visceral level, the government is not what most people think of when they imagine freedom. They think of their boss, or their landlord, and being able to tell them to shove it. They think of being in a situation where no one can tell them what to do. The real enemy of freedom in a democracy is not the government, but rich and powerful individuals able to exercise personal power. To judge whether a government is a tyranny, a good rule of thumb is to ask to what extent it serves the interest of rich and powerful individuals – helping them to exercise personal power over others. To say that government secures and protects people’s rights is another way of saying that it protects the weak from the strong. A tyranny instead aids and abets the strong in dominating the weak.

Of course, the rich and powerful often try to confuse the issue by saying that a government interfering with their freedom to tyrannize others is a tyranny, and to them, it is – as it has to be; if it weren’t, it would be a tyranny to the rest of us. In just that way the slave owners of the antebellum South complained of the tyranny of Washington. We need have no more sympathy for our capitalist masters today than we do in hindsight for the plantation masters of yesterday.

Wednesday, April 21, 2010

Real Conservatism: Bring Back the Federalist Party!

Conservative, adj.:

1. disposed to preserve existing conditions, institutions, etc., or to restore traditional ones, and to limit change.
2. cautiously moderate or purposefully low: a conservative estimate.
3. traditional in style or manner; avoiding novelty or showiness: a conservative suit.

None of those three definitions describes people who self-identify as "conservatives" in American politics today. And therein lies the problem.

A healthy political dialogue in a progressive society would occur between progressives on one side and conservatives on the other. Progressives would push for change, identifying problems that need fixing or opportunities to achieve something, to make society more egalitarian, wealthier, healthier, better-educated, more enlightened, more peaceful, fairer, more just, freer, etc., etc. Conservatives would object with "yes, but" arguments. But do we really need to make this change at this time? But look at the cost! But consider the unforeseen consequences. The subtext of all of which is: We agree with the overall goal. But let's not be hasty. Maybe this isn't the best way to do it, or the best time.

It's a useful -- in fact, necessary -- function in the dialogue, conservatism. It's necessary because (let's face it) progressives aren't always totally smart. On occasion, we can be profoundly stupid. Half-baked. Overly zealous. Insufficiently mindful of costs, social and political realities, and unintended consequences. So it pays to have a conservative side of the dispute, frustrating though we may sometimes find it, to insist that progressive ideas prove themselves in imagination and accounting before they're actually implemented.

But that function can only be served by real conservatives. Wingnuts need not apply. Those who reject, not only the half-baked hasty ideas sometimes generated by progressives, but the very idea of progress, are not conservatives, because one of the cardinal principles of conservatism is to support the traditional values of one's society, and the traditional values of the United States of America are progressive. You know, things like "We hold these truths to be self-evident, that all men are created equal," or "government of the people, by the people, and for the people." Conservatives -- real conservatives, that is -- hold to the same progressive values as progressives do, they're just more cautious about implementing them and less convinced at any point about the size of the step we're ready, as a society, to take.

The problem with conservatism in today's American politics is that the term has been hijacked. It no longer applies to real conservatives, it applies today to wingnuts who reject progressive ideals altogether. It applies to people who don't believe in the secular, Enlightenment-based democracy that America traditionally seeks to build, but would instead create a theocracy. It applies to people who don't recognize the value of a multiracial, tolerant society, but would have a white people's country. It applies to those who advocate, not a cautious approach to change, but a radical one -- in anti-progressive directions. To re-criminalize abortion is not conservative, it's a radical change. To abolish such long-standing government functions as Social Security, Medicare, aid for the poor, regulation of the economy, even public education, is radical. To end the separation of church and state and create a Christian government and legal base is radical. To bring effective democracy to an end and hand all political power over to a corporate plutocracy is radical. Conservatives do not advocate radical change. And so the people who advocate these radical changes are not conservatives.

Over the course of elections since 1980, I have watched the wingnuts take over more and more of the Republican Party from the true conservatives that used to dominate it. I kept hoping that the process would have a natural limit, that the GOP would come to its senses at some point and return to traditional American values and its own previously-solid conservative function in the dialogue. It's still conceivable that they may, but given the depths to which the party has sunk at this point, I think we need to entertain and plan for the contingent possibility that they also may not. What happens then?

There are still a few conservatives in the Republican Party, and also quite a considerable number of them in the Democratic Party, but Republican conservatives have become an endangered species. (Due to the wingnuts having hijacked the term "conservative," these Republican conservatives are nowadays known as "moderates." I refuse to cooperate with that theft of a perfectly good term by those to whom it does not properly belong, and so insist on calling these politicians conservatives, which they are.) We hear today that Florida Governor Crist, a conservative who will almost certainly lose the GOP Senate primary this year to a wingnut, will probably ditch the GOP and run as an independent. A few conservative Republicans have already left the party and either become independents or joined the Democrats. John McCain of Arizona is another conservative who faces a primary challenge from a wingnut, and although he has not indicated any inclination to jump party, none of us should rule out the possibility at this point. The drive by wingnut Republicans to rid the party of conservatism has become endemic.

At the same time, as we progressives are painfully aware, the conservatives within the Democratic Party are making it harder for progressives to achieve what they should be doing. Well, of course that's what conservatives are supposed to do, but the problem is that the progressive-conservative dialogue, which has mostly become intra-Democratic, is in turn hampered by the howling wingnuts on the other side of the aisle. It's very difficult for Democrats to manifest both sides of a healthy political dialogue (progressive and conservative), and at the same time present a united front against wingnuttery. There's a strong tendency for people on our side of the discussion, that is to say, progressives, to turn upon conservative Democrats in wrath and insist that they be replaced by progressives, a desire that is amplified by fear and loathing of the wingnuts. Our political landscape is rapidly changing from one of progressives and conservatives to one of progressives and wingnuts, with conservatives squeezed out of the picture altogether.

Folks, that is not a good prognosis! We NEED conservatives, and we certainly do NOT need wingnuts! So I think it may be time to consider some practical contingency plans for bringing conservatives, real conservatives, non-crazy conservatives, conservatives-not-wingnuts, back into politics with a home of their own.

The simplest and best solution would of course be for the Grand Old Party to recover from its thirty-year binge and return to sobriety. Let the "big tent" goppers win the intra-party argument. Let the wingnuts be consigned to the wings and fringes where they belong. Let genuine conservatives again take their proper places as the loyal elected opposition. A nice dream. Maybe it will become real. But I'm no longer willing to hold my breath waiting.

Failing that, what we may need is a new political party. Governor Crist, rather than running as an independent, should found this party. I don't have any idea what to call it -- well, sure I do; it could be called the Conservative Party. But maybe that has too much potential confusion with the British party of the same name. The Party of Sanity is too flippant, as is the Party of Non-Wingnuts. Ah! I have it! We can bring back the oldest, most original name for an American party of conservatives there is, and call them the Federalist Party. Or maybe they can come up with a better name themselves, but I'll use that tag provisionally here.

The Federalist Party would include such Republicans as Crist, Olympia Snowe, Tom Campbell, John McCain, Arnold Schwarzenegger, and similar targets of wingnut loathing. It would also find room within its ranks for Democrats (and ex-Dems) such as Blanche Lincoln, Joseph Lieberman, Ben Nelson, and so on. Since these people would no longer be competing in Republican (or Democratic) primaries, there would be no pressure on them to adopt wingnut positions; they could remain true to their conservative beliefs and let those beliefs compete honestly and fairly in general elections against both progressives and wingnuts.

After all, there's really only one reason why the wingnuts are getting anywhere at all: they are big fish in an increasingly small pond, as the number of voters willing to call themselves Republicans declines, and the smaller and smaller numbers that remain are increasingly dominated by wingnuts. This means that on election day, it becomes increasingly likely that one of the candidates in every election will be a wingnut. So I say, let that process reach its logical conclusion, let the Republicans become purely a wingnut party, and let those Republican conservatives who remain have somewhere else to go besides the Democrats. Since under those conditions wingnuts would win very few elections indeed, the Republicans would, over a few election cycles, quickly go the way of the Whigs, and future elections would be mainly between the progressive Democrats and the conservative Federalists. (At least until we adopt proportional representation so that we can have more than two active serious political parties. But that's a change of subject.)

Would this be better or worse in terms of elections for progressives? I'm going to have to be honest here: it would be worse. There's no question that, most of the time, a progressive can beat a wingnut in an election more easily than he or she can beat a sane conservative. So if all we care about is the short-term goal of electing progressives, a return of genuine conservatism isn't a good thing. But I don't think that is all we should care about. It also must be recognized that wingnuttery does not deserve to be represented in Congress, yet in many districts, replacing wingnuts with progressives is simply not feasible; the people might be uneasy with their wingnut reps but they don't want any dad-gum lib'ruls neither. So a real, true conservative Congresscritter would be the realistic alternative, better than a wingnut because, well, anything is, and better than a progressive because he or she would represent the people of the district, which a progressive (in all honesty) would not.

Let me repeat the first sentence above: A healthy political dialogue in a progressive society would occur between progressives on one side and conservatives on the other. That's something we don't have any more. It would be good if we did. We might not elect as many progressives that way as when the only alternative to progressives consists of clowns and zanies, but on the other hand we would elect no clowns and zanies. And that would be better for America.

Sunday, April 18, 2010

There's Racism, And Then There's Racism

Is the Tea Party movement racist? Seems to me it is or it isn't depending on what kind of racism one means.

There's no question that opposition to President Obama from the right is vehement to a degree not really explained by his policies. This is not unlike the wild opposition to President Clinton, who was less progressive than Obama but also incurred loathing and fear on the right. Because Obama is black, the idea has arisen (and a certain amount of polling data in support of it has been presented) that this vitriol is based in racism. The fact that something similar was encountered by President Clinton, who is white, would seem at first glance to argue to the contrary. In fact, I contend that it supports the idea, if one examines the likely explanation for what DID generate that opposition.

Some racism is overt, crude, unsubtle, and blatant. Some racism is covert (or even unconscious), subtle, internalized, and unacknowledged even to oneself. Very little of the opposition to Obama is the result of overt racism. But a great deal of it is at least in part the result of covert racism.

An overtly racist objection to Obama would exist when a person feels, and admits to himself or herself (if not always to others), that a black person should not be president. Evidence of overt racism would be found when a person actually says something like this, or when a person is affiliated with a racist or white nationalist organization (e.g. when the person is a regular poster at Stormfront). Some of this does exist of course, but I am prepared to accept that the overwhelming majority of the Tea Party movement isn't part of it.

A covertly racist objection to Obama would exist when a person has no problem with a black person being president, but does have a problem with the idea that a black person could be elected president. That is to say, the person holding this attitude doesn't think black people are inferior to white people or inherently unqualified to be president, and may be willing to acknowledge that Barack Obama is a sharp guy who is just as capable at the job as a lot of white guys who have held it before him. It's not him. It's what his being elected says about what has happened to America.

I had a similar impression about the vitriolic opposition to Bill Clinton. Clinton's a white southern boy, of course, but he's also a notorious womanizer who evaded the Vietnam draft, smoked dope, grew his hair long, and married a tough feminist b**ch. For those who are inclined to freak out about the cultural changes that occurred over the 1960s and 1970s, he was a walking red flag, not because of his politics (which are pretty far right as Democrats go), but because of his cultural trappings and who he is as a person. In their America, the America they fondly remember from their childhood and would like to believe still exists -- in the REAL America, as they imagine to themselves -- someone like that would provoke revulsion and could NEVER win the nomination of a major party, let alone actually be elected. The vitriolic opposition wasn't really about him. It was about what his electability said about how America had changed and in what directions.

This impression was reinforced during the impeachment fiasco, which led Paul Weyrich to say, "I no longer believe that there is a moral majority. I do not believe that a majority of Americans actually share our values. If there really were a moral majority, Bill Clinton would have been driven out of office months ago. It is not only the lack of political will on the part of Republicans, although that is part of the problem. More powerful is the fact that what Americans would have found absolutely intolerable only a few years ago, a majority now not only tolerates but celebrates."

In a similar way, Barack Obama's election says something about what America has become and is becoming that some people don't want to accept. And that change is not so much cultural as racial. Although there are cultural overtones, too.

White people are in the process of becoming a minority in this country. It hasn't happened yet, but it's in train. When it does happen, whites will be the largest minority, but still will represent less than 50% of the population. In the sepia-toned memory photographs of Obama's detractors, real America is a land predominantly of white people. Sure, it has nonwhites in it, and if you ask these guys they'd happily tell you that racial discrimination and Jim Crow and segregation and all that nasty stuff from our past had to go and they're glad it's gone. At least most of them will, and most will even mean it and believe it. But what they envision is an America of white people who are magnanimously, righteously non-racist and willing to generously tolerate and accept minorities in our midst on a (somewhat) equal basis, 'cause that's what great and wonderful people white Americans are. The idea of white people no longer being a majority, and thus no longer able to call the shots and be magnanimous and generous and so on, that doesn't sit well. But that's what the future holds.

As we approach that future, it becomes increasingly probable that someone non-white will gain the White House, and now it's happened. Obama was elected because America has a whole lot of black and Hispanic citizens who voted for him in lopsided majorities. Obama was also elected because a whole lot of young people -- including young white people -- don't care that the country is heading for a white-minority future. Obama being elected president says that the uncomfortable future is closer than they thought, and his dusky face on the television above the presidential seal is a harsh reminder that the world of those sepia-toned memory photographs no longer exists. It makes them feel out of place in the world that surrounds them now.

And that feeling infects everything else, and magnifies small political objections into big ones, and causes irrational and unbelievable accusations to be believed without serious critique.

It isn't racist in the sense of being bigoted and thinking no black guy should be allowed to be president or can possibly have the smarts for it. But it is racist in the sense of being based in a lament for the fact that America is rapidly ceasing to be a white people's country.

Saturday, April 10, 2010

Slavery, Serfdom and Wage Work: The Forms of Coercion

I continue this week to encourage radical thinking, and to build on the post from last week. Last week, I explored the origins of capital property ownership, how it separates the right to own wealth from the work to create it, and the consequent nature of profit as a form of theft.

This week, I want to explore the lynchpin of all class privilege throughout the history of civilization: the ability to coerce the labor of others for the elite’s profit and the elite’s ends. Historically, there have been three broad methods by which the labor of the many has been channeled to the ends of the few, declining in brutality and increasing in subtlety from one to another, but all of them coercive in one way or another. These three methods are slavery, serfdom (and variants), and wage-work.

I don’t mean to suggest that there is perfect moral equivalence among the three. To be a wage worker is immeasurably better than to be a slave. The abandonment of slavery, and the near-abandonment of serfdom, really does represent progress in human rights and the human condition. But while working for wages is certainly not slavery, it is no more accurate to call it freedom. The only people who are free are those without masters, without bosses – those who work for themselves.

There was a time, early in the history of civilization, when that was pretty much the case for most people. The normal condition for a person in ancient times was not that of a hireling but that of a small farmer or craftsman, an owner of one’s own business. Working for someone else for pay was thought of as a transitional phase, something one did in order to learn a craft or to acquire the necessary capital to buy one’s own land. And of course, working for someone else was completely unknown in precivilized times. The transition to the current situation, in which the overwhelming majority of people work at jobs serving the profits of others, with no entitlement to the fruits of their own labor, did not develop overnight. The circumstances of servitude have grown less severe with the passage of time, but at the same time the condition of actual freedom has grown rarer and rarer.

One of the earliest forms of working for another, and the first to be employed on a large scale, was slavery. We may consider this the template. Initially, slavery probably arose as a consequence of war. When the victors in a war conquered an enemy, they gained more than the land that the enemy had occupied. They also gained the surviving enemy citizens as captives. Even when the conquest was less complete than that, captives were often taken in the course of the fighting and could be brought to the homeland and forced under threat of punishment to work for the victors. Of course, just as with the enemy’s land, the enemy people became disproportionately the property of the elite, who found themselves the owners of large tracts of land worked by slaves and generating a lot of money without the owner having to work on it at all. (Profit being theft, as noted last week.)

Over time, slaves became property to buy and sell just like land itself, and the pattern emerged of a class of warrior-aristocrats living off the labor of people who had no rights under the law (or few, depending on the society) and whose only purpose in life was to serve the interests of their masters. This became the template for all elite classes from that time forth. Like most prototypes, it was crude and unsubtle compared with the more sophisticated ways of compelling labor that followed. It suffered from numerous disadvantages, including slave revolts and a lack of motivation on the part of the workers. Nevertheless, it sufficed to keep the aristocratic class in wealth and power for thousands of years and in many different civilizations. Even more importantly, the underlying idea that the elite deserved to be served by a class of workers and to become rich from their labor became so entrenched that it survives to this day, many years after slavery itself has been outlawed.

One problem with slavery is that it was universally unappealing to the slave. (Or nearly so. There are instances in ancient times of highly skilled persons selling themselves into slavery, knowing that their skills would earn them favored treatment and a better circumstance than they could achieve in freedom. However, that’s the exception; very few slaves ever became slaves by choice.) People resisted becoming slaves and had to be forced into it when captured in battle or condemned for debt or for some other legal offense for which slavery was the penalty. There was really no way to reduce most of the people to a state of slavery, because so large a number of slaves would have been impossible for the remaining free people to control. In order to increase the number of people who could be reduced to servitude, it was necessary to make the conditions of servitude less drastic than was usually the case with slavery.

Some examples may be found prior to the industrial revolution of a form of coercion gentler than slavery, but still more direct and brutal than wage work. This consisted of a defined set of obligations on the part of a worker, who was forbidden under most conditions to leave his employment, but who also had more rights under the law than a slave. I’m going to call this sort of arrangement “serfdom,” but I should explain that I’m talking about a broader category of social arrangement than serfdom proper. The peasantry of medieval China or Japan, or the sharecropping and tenant farming arrangements in the post-emancipation American South, fit into this general category, as well as the condition of the medieval European serf. Because serfdom was less onerous than slavery, because it entailed some rights on the part of the serf and some obligations to the serf on the part of the master, it was possible to have a larger population of serfs than could be maintained as slaves. Even so, it turned out not to be as perfect a solution as wage work: the industrial-era answer that has turned nearly everyone into a tool of the elite.

Anyone can see how slavery and serfdom are coercive arrangements, because the victim is punished for refusing to work. But in the case of wage work, the coercive nature of the institution is less evident, because a wage worker is not directly punished for refusing to work. The only punishment is to withhold a reward: failure to work means the worker will not be paid. But it is still coercive, and the coercion still takes the form of punishment or threat of punishment. It’s just not applied by his immediate employer, nor directly for refusing to work. The coercion applied to a wage worker is applied before he ever accepts a job. It is built into the system of ownership that concentrates possession of capital property into a few privileged hands. It punishes the wage worker, not for refusing to work, but for attempting to work using capital property that belongs to the elite. Since he cannot obtain capital property of his own, he is unable to produce wealth on his own for his own use or for sale to others. As such, he has no independent way of supporting himself. He must work for the profit of another, in return for the means to support himself and his family. Rewards are sufficient motivators only to the extent that the person receiving the reward suffers from deprivation. If the wage worker can support himself through his own labor on his own behalf, rather than in service to another, then his desire for monetary reward is satiated, and he will have no reason to surrender his liberty. The rat will run the maze in return for food pellets, but only if it is kept hungry.

Because the ability of an employer to apply direct coercion is limited, and because the wage worker is allowed by law to voluntarily leave his employment, refusing to work but giving up his wages, it carries a greater semblance of freedom than either slavery or serfdom. It has been possible to argue that a wage worker “voluntarily” enters into an employment agreement, and so is actually free. The argument is specious, of course, because the only way the agreement could genuinely be voluntary is if the worker had the right and opportunity to support himself without a master. When the alternative is starvation, no real choice exists. It has also been possible to argue, with equal speciousness, that the worker rather than his master owns the fruits of his labors, by confusing the real fruits of his labors – the goods or services that his labor creates – with the reward his employer offers for surrendering them. Let there be no confusion on this point. A wage worker is not a slave, nor is he a serf. But he is most certainly not free.

In addition to keeping capital property concentrated in few hands – actually, in service to that necessity of universal coerced labor – it has also been desirable from the standpoint of the elite to keep the rewards paid for wage labor as low as practical. This was partly to maximize the share of wealth held by the elite, of course, but also to reduce the chance of a wage worker freeing himself by saving sufficient money to go into business or, through investments, to support himself without working. Even if a worker is unable to completely free himself from servitude, if he is well paid and lives within his means, his options become wider and he is much harder to manipulate. If asked to do something unacceptable, an employee who can survive without work for a year or more is much more likely to quit than one who lives paycheck to paycheck.

In the end, it’s all about power, even more than about money.

And that will be the subject of next week’s post.

http://www.smashwords.com/books/view/8357

Sunday, April 4, 2010

Profit Is Theft

One of my purposes in writing this blog is to encourage radical thinking. Not necessarily radical action (although radical thinking does radicalize action to a degree), but thinking that cuts through the false assumptions and intellectual ruts at the roots of a lot of habitual thought in politics, economics, religion, and art. If we can think radically, possibilities open to our consideration that we would never even imagine otherwise.

This week, I want to discuss two concepts that are crucial to any capitalist economy, and that are older than civilization, but much younger than the human race: the private ownership of capital property, and the related concept of profit. These were, for their times, radical ideas. Today, pointing out that they are not inevitable or natural ideas has itself become radical, and so doing that has become necessary.

Property ownership in some forms is as old as the human race, or somewhat older. But the property that our precivilized ancestors owned was all personal property, not capital property. Individuals owned things that they planned to use and enjoy themselves: clothing, tools, weapons, food stores, maybe a tent or a place in the communal dwelling. But no individual owned the land from which all these things came. An individual hunter could own the meat from his own kill, but not the hunting ground. The same hunter could own the spear he used to kill his prey, but not the flint quarry that its spearhead came from. Land was different from other types of property in that it was used to make wealth, rather than being wealth itself. In precivilized society, it was the property of the band or the tribe, not of any individual. Any property that a person owned, he owned because his own work had made it, or because he had traded something produced by his own work for the product of someone else’s work.

Let’s look a bit more closely at that paradigm of property, because it contrasts greatly with what obtains today.

The source of wealth (the land) is owned communally.

The land is available to anyone in the band or tribe that is capable of making wealth from it.

If a person makes something, then (subject to tribal rules about sharing food and other necessities to make sure no one goes hungry or otherwise suffers unnecessarily) that person owns it. Labor defines ownership.

Private ownership of capital property was introduced with civilization. It created a very different paradigm of property ownership that worked like this.

The source of wealth (the land, and later on industrial plant and sometimes intellectual property) is owned by individuals.

The land and other capital property are only available to make wealth from with the permission of its owner.

If a person makes something, then (subject to laws which take a portion in taxes to cover public expense) it belongs to the owner of the capital property from which it is made. Labor does not define ownership. Ownership of capital property, and nothing else, defines ownership of the wealth produced from it.

Note the difference? When capital property was communally owned, it was labor that defined the ownership of wealth. Each person owned what he worked to produce. But since capital property has become privately owned, that ownership is now what defines ownership of the wealth produced from it. Today, no one owns what he works to produce, at least not because he works to produce it. Ownership is defined by ownership itself. To own capital property is to own what is produced from it, whether you do the work to produce it or someone else does. If you own capital property, that entitles you not only to the fruits of your own labor applied to that property, but also to the fruits of other people’s labor applied to the same. If you do not own capital property, then you are not entitled even to the fruits of your own labor.

This may be counter-intuitive, so let me go into a little more detail. Some may respond: aren’t people paid for their work? Don’t they own the fruits of their labor in the form of their wages or salaries?

No. They do own their wages or salaries of course, but that is NOT the product of their labor. That is the fee paid them for doing the work even though someone else owns the product of their labor. The product of a person’s labor is the goods or services produced by it, and that belongs not to the worker, but to the owner of the capital property the worker used to produce it. What’s more, it is always worth more in sale value than the wages paid those who produce it. As an employee, you are paid only a portion of the value of what your work produces – as small a portion as your employer can pay and still get you to do the job, and certainly never equal to the full value.

This brings us to the related concept of profit. What is profit? It’s defined as the revenues generated by a business minus its expenses. It may also be regarded as the net share of wealth going to the owner of capital property. Or, less even-handedly, it is that portion of the total wealth of an enterprise that the owner skims from the labor of others.

To make this clear, I’m going to exercise a bit of author privilege, or linguistic irresponsibility, and slightly redefine the word. (I have no shame. It’s true. Ask anyone.) For purposes of this writing, “profit” applies only to that portion of a business’ net revenue that is not produced with the owner’s own labor. This means that if you are the sole proprietor of a business with no employees, your business makes no “profits” in this sense, because your labor and no one else’s has generated the goods or services which have been sold to generate revenue. I’m doing this because I want to illustrate something about the great majority of business profits in our economy, which is however not true of situations such as I just described.

Profit, then, as I am using the word, is wealth amassed through other people’s work.

It is in this sense of the word “profit” – although I must emphasize that the vast majority of what accountants call “profit” does meet this definition – that profit is theft. It is the producing of wealth through the labor of other people, who are paid less than the value of the goods and services their labor produces. The owners of capital property – property which, in the natural state that our ancestors occupied for over a hundred thousand years, many times the duration of civilized life so far, was owned communally and not the property of any one individual – are taking wealth that other people have produced, and that in a natural society would belong to the people who produce it. And that is stealing.

But this act of theft is perpetrated by almost all owners of capital property without a shred of guilt, with even less shame than I feel in redefining a word here and there, because it has become endemic in our society and perceived as the natural order of things, no matter how unnatural it actually is. And it is completely unnatural, in two ways. Not only have we redistributed capital property, which in our original, natural societies was held in common, into private ownership, but we have also changed the rules about who owns what is produced from it, so that ownership rather than labor determines ownership. In natural, precivilized societies, capital property was owned by the society, but the society did not own the wealth that was produced from it. The individual that did the work owned the product of his work. (Subject, of course, to rules distributing food to the hungry and such, but that’s functionally equivalent to taxes today, and is a footnote to the process not the main description.) So not only have we gone from an arrangement in which capital property is publicly owned to one in which it’s privately owned, but at the same time we’ve gone from a system in which labor defines ownership to one in which ownership defines ownership. We have done this, obviously, to benefit the owners of capital property, who enjoy enormous privileges both economic and political in a modern society.

As noted above in the first paragraph, I’m not proposing any particular action here. We are long past the time when we could restore communal ownership of capital property, or at least I can’t think of any way to make that work in a modern industrial economy. Then again, perhaps there is a way and I simply haven’t thought of it. Certainly it’s a millennia-old Gordian knot of privilege and power, not easily undone. But the mind is as sharp an implement as Alexander’s sword, and merely to recognize the reality of what is and why serves by itself to put things into a new perspective. Also, there are consequences of recognizing that all for-hire workers are being systematically plundered by a system designed to create and reward privilege, consequences which will be explored in future posts. At the very least, this perspective will hopefully give many people the idea that things which have been taken for granted should be changed, which is a prerequisite to the consideration of exactly what they should be changed into.

Next week: slavery, serfdom, and wage work, or, the forms of coercion.

http://www.smashwords.com/books/view/8357

Sunday, March 28, 2010

This Is No Time For Compromise

Can we now dispense with the word “bipartisanship”?

We are in a Crisis era, a Fourth Turning. Roughly once a lifetime, we go through a period of civic upheaval in which our national institutions (political and economic) have, for one reason or another, become dysfunctional. The last time this happened was in the 1930s-40s with the Great Depression followed by World War II. The time before that was in the 1860s-70s with secession, the Civil War, and Reconstruction. The time before that was in the 1770s-80s with the American Revolutionary War and the framing of the Constitution. You can find out more about the concept at this web site: http://www.fourthturning.com/. But what I want to write about today is not the overall concept of the generational cycle and the Fourth Turning. I want to talk about a specific characteristic that all Fourth Turnings have, this one (so far) included. That characteristic is divisiveness. It’s something that is often decried, but it is in fact a good thing – indeed, an absolutely necessary thing.

A Crisis era (such as this one) is a decisive time. It’s a time when much-needed reforms are put in place, reforms that have been neglected for decades. It is not a time for compromise or soft talk or middle courses. It’s a time when consensus cannot be achieved, when conflict arises between those who see a need for the new and those who would preserve the old, however dysfunctional it may be. It is the nature of such a conflict that it cannot be resolved through agreement. There must be a victory, and there must be a defeat. Consider the three Crisis eras from our nation’s past, beginning with the American Revolution.

In 1773, tensions had been rising between England and the American colonies for decades. The expensive conclusion of the Seven Years War (or French and Indian War to Americans) moved the British government to try to get the colonies to contribute financially to their own defense. A reasonable request, of course, but it ran head-first into the colonists’ conviction that they had come to America in the first place in search of self-rule, and that Crown and Parliament had no proper sovereign authority over America. London’s position was diametrically opposed: the British government insisted on its right to govern all British territory, including the colonies in America.

This impasse had grown over time. Prior to the French and Indian War, the British government didn’t really make any attempt to govern the colonies. Britain used America as a convenient dumping ground for condemned criminals (as she would later use Australia), a source of raw materials, and a market for manufactured goods, but otherwise left the colonists to their own devices. Thought in Britain had always held that the Crown and Parliament held sovereignty and the right to govern, but why bother? As American society became more developed and sophisticated, though, as population grew, and as the war with France forced Great Britain to take an interest in (and spend more money on) America’s defense, the attitude of the British government and that of the Americans approached collision.

A number of taxes were imposed on the colonists in the years following the end of the war, provoking a storm of protest. The government backed down and repealed most of these taxes by the early 1770s, retaining only a token duty on imported tea.

Was the tea tax onerous, an unconscionable burden threatening to reduce Americans to abject poverty? Certainly not. It was barely a tax at all. It would fall short of paying for the French and Indian War by millions of pounds. Most Americans would likely shrug their shoulders, pay the duty, and hardly notice. But the Tea Act, if allowed to stand, set the precedent that Parliament had the authority to tax the colonists and to legislate in other ways. Rather than accept this, a radical group led by Samuel Adams engaged in a bit of guerrilla theater, nonviolent civil disobedience, and applied vandalism, and destroyed a cargo of tea in Boston harbor.

This was not a move intended or calculated to provoke compromise. In response, the British government didn’t compromise, either. It imposed a series of Coercive Acts (or “Intolerable Acts” as the Americans called them) which further roused the Americans’ ire. Americans began forming militias and stockpiling arms and ammunition. The Crown dispatched reinforcements to America and negotiated with the German principality of Hesse for mercenary troops. The Americans formed a provisional government and appointed George Washington commander of its newly created army, which set about besieging the British forces in Boston. Battles were fought. Washington’s forces outmaneuvered the British at Boston and forced them to withdraw. The British thereafter returned the favor at New York City and nearly (but not quite) destroyed the Continental Army. The Congress passed a motion to declare independence from Great Britain. From that point on, the lines were drawn and no compromise was possible. Either America would become fully independent of Great Britain, or the colonies would submit to British rule, but the prior condition of loyal but self-governing colonies would cease to exist, one way or another.

Does this begin to sound familiar in terms of our current situation?

We can also compare it to what happened in the 1860s. Tensions had been building over issues related to industrialization of the country, particularly slavery, for many years. The territories acquired during the U.S.-Mexican War were a focus for much of the argument, since they would eventually become states and their representatives in Congress would weigh in on one side of the divide or the other. The newly-formed Republican Party represented the interests of the northern capitalists and of the abolitionists (who were in agreement over the specific issue of slavery; both opposed it although for different reasons). A moderate Republican, Abraham Lincoln, was nominated for president in 1860. Lincoln was not proposing to outlaw slavery, but did propose to keep it out of the new states formed from the western territories. This would, over time, result in an anti-slavery majority in Congress, and the planter interests saw the writing on the wall.

A true compromise on the issue of slavery would have resulted in gradual emancipation with compensation paid to the slave owners for loss of their property, but the hard-liners were not interested in that on either side. Southern fire-eaters saw an opportunity to provoke secession from the U.S. by states that permitted slavery. The strategy for this was to ensure a hard-line pro-slavery Democratic candidate in the election. Moderate Democrats held their own convention, with the result that the party split and nominated two competing presidential tickets, both of which lost (predictably enough) and Lincoln won with a plurality of the popular vote, exactly as the fire-eaters had intended. Seven states promptly seceded. Lincoln initially attempted a compromise solution and peaceful rejoining of the Union. The seceding states were having none of it. They formed a new central government with a Constitution modeled on the one they had abrogated (with a few appropriate changes) and, in a dispute over a federal fort within the borders of one of the seceding states, went to war.

Once again, an irreconcilable conflict existed. The southern planters wanted to preserve an antique way of life based on wealth generated by growing cash crops with slave labor. The northern commercial and industrial interests wanted to pursue an increasingly mechanized and industrialized future in which slaves would be replaced by machines and finance capital would dominate the entire economy, and the emancipationists, their temporary and ad-hoc allies, wished to free the slaves for moral reasons. A solution might have been found short of war, but it would have required the planters to accept defeat and seek the best compromise deal they could get. They were unwilling to do that. And so the lines were drawn once again, and the conflict fought to the finish.

The Great Depression was less violent, but no less uncompromising. A breakdown of the capitalist economic system with its governing philosophy of laissez-faire left some 25% of the workforce unemployed. Neither the breakdown nor dispute over that philosophy was new; the industrial economy put in place after the Civil War suffered periodic financial panics and depressions roughly every 20 years. The philosophy itself was opposed by labor union activists, anarchists, socialists, and Communists. Class conflict had been intensifying for decades. The Depression brought it all to a head. Herbert Hoover, the president when the economy tanked, was no laissez-faire purist, or so one would judge from his past. But he moved in that direction in the face of disaster, perhaps out of genuine conviction or perhaps because the Republican Party demanded it of him. The conflict this time was political and electoral and did not involve guns (which we may take as a sign of progress), but it was no less decisive. Over the years of Franklin Roosevelt’s presidency, laissez-faire was abandoned. The workplace was unionized, the government regulated the banks and other industries, and the first social welfare programs (Social Security and unemployment insurance) were put in place. By the time World War II was over, a new economy had been crafted, a mix of capitalist and socialist elements. This was not accomplished through bipartisan compromise any more than the changes of the American Revolution or the Civil War were. The divide was sharp and partisan, with the Democrats on one side of it and the Republicans on the other. The Democrats won, and the Republicans lost.

In the present time, we again face a situation similar to those three. The economy has again broken down, although not as severely as during the Great Depression. In addition, we face shortages of key raw materials and severe environmental dangers. The problems this time are global in scope. The global economy is beyond the power of any one national government to regulate – an international means of regulating it is required. One economic problem that was not present in the 1930s was a shortage of fuel; the U.S. was still a net exporter of oil then. Today, we are faced with the need to transform our energy economy away from its dependence on oil – no easy task. We cannot simply apply the same methods that worked in the Depression, despite a superficial similarity.

On all of these points, we do not find national unity. There are voices on the other side, claiming that the problems don’t exist, or that we can solve them without changing the way we do business. In many cases, these voices are cynical and insincere, acting not with genuine public concern but out of a desire to protect private profits. We saw how fiercely the lines were drawn over the health-care reform debate. This is the template for the next few elections. A compromise, “bipartisan” solution will, almost by definition, be an unworkable one. We must accept that the conflict exists. It’s too soon to broker a negotiated settlement. First, we must win. Then we can make peace.

I hope – and given the example of the Great Depression, I cautiously believe – that I speak of “winning” and of “peace” only in metaphor. Some violence, however, has already occurred. It remains to be seen whether those who are defeated at the polls (or rather, some of their crazier supporters) will resort to the cartridge box instead of the ballot box. Let us pray not. Such efforts would of course be defeated, but in the process lives would be lost for the most futile of causes. In that sense, I hope that we have peace now, not after victory. But at the same time, we cannot let the danger of violence deter us from doing what must be done.

In any case, it’s time to jettison the search for “bipartisanship.” There will come a time later on, after the necessary reforms are in place and their opponents have accepted reality, when consensus may be sought once more. But that time is not now.