Re-imagining Democracy
MMolvray
2010
(CC) 2010 MMolvray
Licensed under Creative Commons 3, BY-NC-ND: May be freely copied a) if
author is cited and link is included (MMolvray,
http://molvray.com/govforum) b) if use is non-commercial, and c) if
content is copied without modification. (ND stands for "no
derivatives.") All other use, please contact MMolvray
.
For most recent versions, and information about downloads and other
formats, please visit the home web site
for /Re-imagining Democracy/.
ISBN: 978-0-9829518-9-7 (.prc) and 978-0-9829518-5-9 (.epub)
Table of Contents
Control
Rights
Defining Terms
Missing Rights
Privacy
Right to Live
Rights in Conflict
Free speech vs. noise
Summary
Environment
Defining the Problem
Facts (Not Decisions)
Costs and Choices
Decisions (Not Facts)
Problems and Penalties
Costs of Transition
Sex and Children
Relationships
Parents
Children
Government 1: War and Politics
Introduction
Force
Between Groups
Against Criminals
Decision-making
Voting
Minority Protections
Officeholders
Precognitive Decisions
Laws
Administration and Taxes
Government 2: Oversight
Oversight
Regulation
Public Works
Money and Work
The Nature of Money
Capital
Finance
Scale of Business
Corporations as Bodies
Advertising
Prices and Incomes
Pricing of Basics
Labor
Care
Medical Care
Retirement
Disability
Child Care
Care of Infants
Care Facilities
Anti-Poverty Measures
Medical Research
Education
Social Implications
Teaching and Learning
Formal Schooling
Advanced Education and Research
Diffuse Learning
Creativity
Afterword
Control
Government is about control: who gets it, how much they get, and what
they can do with it. The labels vary — totalitarian, democratic,
communist, monarchist, anarchist — but in practice it comes down to
power. The labels only tell us how to think about it.
But how we think about it can also be important, for either good or ill.
Theories of government and the pull of power are two unrelated forces at
work, like thought and gravity. Gravity, by itself, is boring. It
doesn’t result in anything particularly interesting and dealing with it
is a lot of work. Thought, by itself, also doesn’t get anywhere. Worse,
when applied, theories can be deadly. A theory about how to fly that
ignores gravity will have regrettable results. Unsatisfactory
governments based on untenable theories have killed millions of people.
But when the two work together, when thought invents airplanes and space
craft, then whole new worlds can open up and everyone can benefit.
I’d like to discuss and, if I can, contribute to two things in this
work. The first is the importance of limiting the control people
exercise over each other, consistent with the same desirable situation
for everyone else. The second is the implementation of rights. The
rights themselves are well articulated by now, but we seem to keep
forgetting how to apply them every time a new situation comes up.
My focus is more or less on how to ensure sustainable equity and
liberty. That implies some degree of both is there to be sustained, or at
least that the evolution required doesn’t involve a change into an
unrelated organism. So, even though this is a work about rights, it’s
not about how to acquire them. It’s only about what’s needed not to lose
them. I don’t have a new or improved way to change bad governments into
good ones. I wish I did.
- + -
Placing consistent limits on people’s ability to take resources and
control is much harder than logic suggests it ought to be. After all,
there are always more people getting the bad end of the deal, so
majority rule ought to be enough to put an end to inequity for all time.
Except it doesn’t. After just a few years, one morning it turns out that
the rich have gotten richer, civil liberties have been quietly
legislated away, and massive global problems with obvious solutions stay
unsolved no matter what the majority wants.
There are often legislative loopholes that facilitate the process, but
the real problem is more basic than that. We’re fighting human nature.
Until we recognize that and compensate appropriately, I don’t think we
can escape the cycle of increasing inequality and subsequent nasty
corrections.
Consider a basic premise of democracies. Safeguarding freedom and our
rights takes vigilance. We have to inform ourselves about the issues.
The system depends on an informed electorate.
That has not been working out well for us. One can point to deficiencies
in education and the media, but I think even if they were perfect, we’d
still have the problem. Less of it, perhaps, but always enough to lead
to the inevitable consequences. The reason is that people have weddings
and funerals and jobs to attend to. They have neither the time nor the
interest to stay current on the minutiae of government. And it is in the
minutiae that the trouble starts. How many voters will notice changes in
liability law that make vaccine development too risky? When an epidemic
is sweeping the land it’s too late for a do-over. How many will notice
that a lethally boring redistricting law is being passed? How many will
realize it disenfranchises some voters? How many will care, if it’s not
their vote?
Take an easy example. In the US, five states (South Dakota, Alaska,
North Dakota, Vermont, and Wyoming) with just over three million people
have the same Senate voting power as the five states (California, Texas,
New York, Florida, Illinois) with 110 million (2007 numbers). The Senate
can effectively derail legislation. So a Californian has less than 3% of
the political clout of a South Dakotan. Three percent.
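For anyone who wants to check that arithmetic, here is a minimal sketch
in Python using the essay's own round figures; exact census counts would
shift the result only slightly.

    # Rough arithmetic behind the "less than 3%" claim, using the
    # essay's round 2007 figures rather than exact census counts.
    small_five = 3_100_000    # SD + AK + ND + VT + WY: "just over three million"
    large_five = 110_000_000  # CA + TX + NY + FL + IL

    # Both groups elect ten senators, so per-capita Senate clout is
    # inversely proportional to population.
    relative_clout = small_five / large_five
    print(f"{relative_clout:.1%}")  # 2.8% -- less than three percent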
(That disenfranchisement has been accomplished without protecting
minority rights. Minority rights do need protection when the majority
rules, and that was once the intent of the skewed representation of one
particular group, but in our case one third of the population has lost
much of its voice while minorities still have no protection.)
Giving some people three percent representation compared to others is so
anti-democratic it’s not even funny, and yet nobody can find a practical
solution within the current system. That’s because there is none at this
point. It would involve dozens of politicians casting votes to throw
themselves out of a job. The solution was to pay attention when the
problem was tiny and boring, when nobody except professionals or
interested parties was ever going to notice it. So we’re headed into one
of those nasty corrections. Not any time soon, but the longer it waits,
the bigger it’ll be. All of history shows that it’s not a question of
“if,” but of “when.”
The problem is that nothing on earth is going to make people sit up and
take notice of tiny boring issues. Building a system dependent on that
is no smarter than building one dependent on people feeling altruistic
enough to deprive themselves according to their means in order to
provide for others according to their needs. It’s as doomed as an
economic system that thinks it can rely on rational decision-making. It
just isn’t going to happen. Instead, some people will use the loopholes
left by unrealistic ideas to turn the system to their own ends.
Communism becomes totalitarianism, and dies. Capitalism becomes a set of
interlocking oligopolies that privatize profit and socialize risk.
Democracies become . . . well, it varies. Some places are more
successful than others at slowing the slide into media circuses funded
by the usual suspects.
- + -
There’s another aspect of human nature in conflict with the way
democracies are supposed to work. It’s the idea that freedom is not
free, that we the people must defend it. There’s nothing wrong with the
idea, all things being equal. The problem is that all things are not equal.
Abuses of power start small. They start when someone who can pushes the
envelope just that one little bit. In other words, they’re started by
people already some way up the social tree, which means that there’s
strong resistance to stopping them. The resistance comes not only from
the few who’d lose by being stopped, but also from the many who
wouldn’t. That may seem counterintuitive, but remember that the
situation calls for challenging a social superior over something
minor. That doesn’t feel like a noble defense of freedom. It feels like
making a fuss over nothing. It feels like a waste of time and an
embarrassment. So even at those early, small stages, there’s a cost
associated with doing something. People tend to look at the cost, weigh
it against the smallness of the issue, and decide it’s not worth doing
anything just yet. The next time the envelope is pushed, the issue is a
bit bigger, but the cost is also higher because the perpetrator now has
more power. Pretty soon, there may be personal consequences, small ones
at first. By the time people wake up to the fact that freedom itself is
being lost, the price of resistance has grown high enough so that it’s
still not worth fighting for /at the time/. It’s only in hindsight that
the mind reels at people’s lack of action.
Another complication is that the process does not take place in the full
glare of consciousness. The original issue may evade attention simply by
being too small to notice. Once there’s a cost associated with noticing,
the tendency is not to go there. We’d rather forget about it, and we’re
very good at finding reasons why we can.
The resistance to examining disparities of power is one example. In the
good old days, nothing needed to be examined because inequality was
natural or ordained. Generations of goofy, inbred aristocrats made no
difference to the conviction. Now nothing needs to be examined because
we’re all equal. In Anatole France’s immortal words, “The law forbids rich and
poor alike to sleep under bridges.” The idea that we’re all social
equals is patently absurd, and yet it’s so pervasive that a President
can compare the homeless to campers and it is considered evidence only
of his heartlessness, not of his lack of mental capacity. We’re
convinced, so the evidence makes no impression.
Not only are the issues hard to follow. We don’t want to follow them.
Does that mean it’s hopeless to aim for an equitable self-regulating
system such as democracies are supposed to be? I don’t think so, or I
wouldn’t be writing this tract. Exhortations to be better citizens might
not work, but that’s not the only choice.
The solution is to match the ease and effectiveness of action with the
level of motivation. That’s not a startling insight, but it does suggest
that the wrong question is being asked in the search for a solution. The
question is not why people don’t make the necessary effort. The question
is what prevents people from using or defending their rights. How long
does it take to find information once someone starts looking for it? How
many steps are involved in acting on it? How obvious are the steps? How
much time, knowhow, or money do they take? Is there any personal
vulnerability at any point? You see where this is headed. The solution
is to remove the obstacles. It is not a luxury to make it easy to fight
back. It is a critical, perhaps the critical lever for sustaining a free
society.
Another part of the solution is to minimize the number of problems
needing action to begin with. Again, that’s not a startling insight, but
it does run counter to the prevailing model. Now we have laws to stop
abuses. That requires the abuse to be committed before it can be dealt
with. In the best case scenario that means dealing with issue after
issue, as each one comes up. A better model is one of prevention. The
desire to commit abuses has to be reduced, and that’s not as impossible
as one might think. The trick is to shorten the feedback loop caused by
abuse until it affects the person committing it.
Imagine a simple world in which a person owned a factory. Then imagine
the law said they and their family had to live downwind of it. In a
world without loopholes, pure air would be coming out of the “smoke” stack.
In a world where it was effortless to take action against evasion of the
law, there’d be no loopholes. Both pollution and ownership are too
complicated for obvious solutions, but that doesn’t mean there are none.
A salary feedback loop, to give another easy example, would require
management decisions about workers’ pay to be applied in equal
proportion to management’s own net compensation. One real world example
was a law requiring state legislators in Hawaii to enroll their children
in public schools. It was one of the few US public school systems that
was not underfunded. (I don’t know if the law still exists or if they’ve
found a way to evade the intent by now.)
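As a minimal sketch of what the proportional rule could look like,
assuming the simplest reading in which any percentage change management
applies to workers' pay applies identically to its own compensation
(the function and figures below are hypothetical illustrations, not a
prescription from this essay):

    # Hypothetical "salary feedback loop": whatever percentage change
    # management applies to workers' pay is applied to management's
    # own net pay as well.
    def apply_pay_change(worker_pay, exec_pay, pct):
        """Apply the same percentage change to both pay levels."""
        factor = 1 + pct / 100.0
        return worker_pay * factor, exec_pay * factor

    # Cutting workers' pay 10% cuts management's pay 10% as well.
    print(apply_pay_change(40_000, 400_000, -10))  # (36000.0, 360000.0)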
Effective feedback loops aren’t always simple, and finding a short term
loop that accurately reflects a long term situation is much harder than
rocket science. However, the point of the feedback loops is the same: to
bring home the consequences of actions to those people who make the
relevant decisions. Initial solutions that are less than perfect could
evolve toward improved effectiveness if the system was set up to make
the process easy. The degree to which feedback loops sound nice but
impossible to implement is a measure of how far we are from actual equality.
- + -
If I’m right that people ignore the small stuff, and that what matters
all starts as small stuff, why would things ever change?
First, I’d like to mention a couple of reasons why enlightened
self-interest says they should change. The state of enlightenment being
what it is, these factors probably won’t make much difference, but you
never know, so they’re worth pointing out.
Accumulations of power actually benefit nobody, not even the supposed
elite beneficiaries. The effect of power on society is like that of
gravity on matter. Eventually the mass becomes so huge that movement on
the surface is impossible. Economic mobility, social mobility,
inventiveness, and in the end even physical mobility, all become
limited. It becomes a very boring world which even the elites seek to
escape. Take an archetypal concentration of power, that found in Saudi
Arabia. The process has gone so far there that half the population can’t
go for a walk. And yet, if being at the top of the tree is so nice, why
is the proportion of wealthy Saudi men vacationing outside the country
so much higher than the percentage of, say, rich European men relaxing
in Saudi Arabia? Just because many people lose by the system in Saudi
Arabia does not mean it’s actually doing anyone any /good/. For some
reason, it’s counterintuitive that when everyone wins, there’s more
winning to be had, but it is a matter of observable fact that life is
richer and more secure for everyone in equitable societies. Logically,
that would imply the elites ought to be in the forefront of limiting
themselves, since it’s easy for them and they stand to gain by it in
every run except the shortest one. They are, after all, supposed to be
the ones who take the long view, unlike the working classes who live
from day to day. (Which just goes to show how much of a role logic plays
in this.)
Accumulations of power do worse than no good. Eventually, they bring the
whole society down. That, too, is a matter of observable fact. So far,
no society with a significant level of technology has managed to limit
power sustainably and ensure sufficient adaptability and innovation to
survive. The only societies that have made it longer than a few thousand
years are hunter-gatherers. Living at that technological level seems
like a high price to pay for stability. We have over 1300 cc of brain.
We can do better than cycling through periods of destruction in order to
achieve renewal.
Although history doesn’t give much ground for hope based on enlightened
self-interest, the track record for plain old self-interest is better.
Not perfect, but better. When survival is at stake, people are capable
of great things, even changing a few habits and working together.
Ostrom’s work (e.g. her 1990 /Governing the Commons/, or a very potted
summary on Wikipedia) examines avoidance of ecosystem collapse in
detail. What I
find most interesting is the extent to which widespread responsiveness
to social members and some degree of distribution of power play a role.
These are the attitudes that seem impossible to achieve, and yet some
societies manage it for the sake of survival. The type of danger
involved, which is a force of nature and doesn’t lend itself to
unproductive resentment the way enemies do, and the type of situation,
which benefits from the application of foresight and coordinated action,
can facilitate beneficial social structures that rein in self-interest.
I’m putting it backwards in that paragraph. To be more precise, humans
started out facing ever-present “ecological collapse” because of their
inability to control any but the tiniest factors of their environments.
Hunter-gatherer societies are also remarkable for their egalitarianism,
compared to the states that succeeded them. So it's probably not so much
that later groups achieved power-sharing as that they retained or
regained it.
Social tools that work are very relevant to us since we all live in a
disaster-prone world now. For all of us the only solution is to work
together and expend the resources necessary to control the situation.
It’s not utopian to hope we’ll do it. People have, in fact, shown that
they’re capable of it.
The truly utopian assumption is that there can be a technological
solution to the problems facing us. That’s a logical impossibility not
because there is no fix, but because somebody has to apply it.
Technology hugely increases the available physical power in a society,
and that also increases social power. Holding all that at our
fingertips, as it were, means that every action is also hugely
magnified. Expecting technology to fix anything is the same as expecting
people to always and everywhere apply it for the greatest good of the
greatest number. That’s why hoping for a technological fix is like
hoping that a change in gun design will lead to the end of murder.
Amplified power has another critical consequence. Enough of it, used
badly, can destroy the society that couldn’t figure out how to control
the power of the people using it.
That is not hyperbole. It might seem like it because modern
technological societies are only at the very earliest stages of being
able to destroy the planet. At this point, we’d probably only be able to
destroy it for us. The proverbial cockroaches would move in. But it’s no
longer hard to imagine a scenario in which we destroy it for life. A big
nuclear war could have done it. Global warming could do it. In a far
future when everyone has personal spaceships, an evil mastermind could
orbit a light-bending device between us and the Sun which would shade
the whole Earth to death before the machine could be found and
destroyed. There isn’t just one way to destroy a highly technological
society, and the more advanced it is, the more ways there are. Bad
governments can do it. All the people together can do it with tiny
actions that add up. Mad individuals can do it with sabotage. There are
so many ways that it is literally only a matter of time. The more
technologically advanced the society, the more essential limits to power
are for its very survival.
So, to return to wondering why things would change, it looks like that
may be the wrong question again. It’s trite, but nonetheless true that
things always change. The status quo can never be maintained. The only
choice is to follow the path of least resistance or to expend more
energy and take action. The path of least social resistance is to let
the powerful do their thing. But, sooner or later, that’ll be fatal.
Planetary changes are just as threatening now as any flood, so much so
that plenty of people see the need for action. However, “plenty” hasn’t
become “enough” yet because of the usual impediments. Those who don’t
know, don’t want to know. Many of them — those with little control over
the situation — would do something about it if they could, but it’s just
too difficult. The remaining few do control the situation, and there’s
currently no way to bring the consequences of their actions back to
them. The reason there’s no way is because they’ve made sure of it, and
the reason they can make it stick is because of the power imbalance
that’s been allowed to develop. It’s the same story over and over again.
And the reason it persists is also the same. Too many people feel the
effort of taking action costs too much for the perceived benefit.
It all comes down to limiting power. The pattern is so obvious and
repeated so many times in so many different spheres that there has to be
a common element preventing effective action. There’s at least one thing
pervasive enough to qualify. Everybody wants to limit the ability of
others to hurt them, but not as many want to limit their own ability to
take advantage of others. That, by itself, may be enough to explain why
we don’t get effective limits. The problem is more than global threats
and too much power in too few hands. It’s also that we have to put
limits /on ourselves/ to get to the good life . . . and to avoid the
opposite. That’s hard to take on board. It just feels so right that if
somebody else loses, I must be winning. But no matter how self-evident
it feels, that’s not actually a logical necessity. We can all lose.
We’re doing it right now.
+ + +
Rights
Defining Terms
Limits to power aren't just for the powerful. They have to start with
the individual, with each and every individual. Rights are supposed to
provide those limits, to demarcate the borders so that everyone has the
most freedom compatible with an equal level for others. However,
equality is so far from realized that the word has become Orwellian:
some are always being more equal than others.
Given how well-articulated and well-known the basic rights are by now,
the pervasive difficulty of actually implementing them has to be due to
an equally pervasive factor. External factors, such as apathy or
ignorance, should vary independently of rights. They shouldn't be able
to result consistently in the same losses and the same pattern of losses
that we see repeated over and over again. Rights are gained or regained
in a social convulsion, and then the slow erosion repeats. The
uniformity of failure suggests the fault has to be endogenous, a problem
with the understanding of the rights themselves.
However, at the most fundamental level there doesn't seem to be much to
misunderstand. All rights share one quintessential feature: they apply
equally to everyone. When that's not true, they're not rights. They
become privileges. So far, so clear, but then it rapidly becomes less so.
The rights themselves are not all created equal. Some depend on others.
For instance, freedom of assembly is meaningless without freedom of
movement. Their relative importance can also vary. Perfect guarantees of
security of life and limb become useless without any freedoms to make
life worth living. On the other hand, freedom doesn't mean much when
keeping body and soul together takes all one's time. Last, some rights
are just plain missing. The right to make a living is included in the
Universal Declaration of Human Rights, but it's really more of a hope
everywhere in the world. And yet it's obvious to the meanest
intelligence that without a livelihood no other freedom means a thing.
So rights are not equal. They can depend on other rights, vary in
importance, or be entirely absent. However, those factors don't receive
the same attention as the equal application of rights to all. The latter
is critical, of course, but it can also be meaningless if one of the
other factors is in play. If people are to use, in fact, the rights they
have in theory, then all the factors are critical. Obscuring that point
by focusing only on the equal application of rights does nothing but
render the right useless. Sometimes it seems that the focus is not an
accident, since it allows weightier parties to say they're for equal
rights while pulling the actual situation their way. Then conflicts
aren't resolved on the merits, which leads to the usual descending
spiral of encroaching power. It is not simply a philosophical exercise
to figure out how to resolve the inequality between rights. It is essential.
The first step is to enumerate how the inequality among rights plays
out. Which ones, exactly, take precedence? This is a ticklish issue,
since nobody wants the rights dear to them declared insignificant. The
usual way of handling it is to gloss over it. The Universal Declaration
of Human Rights, for instance, has a couple of articles at the end
pointing out that no right may be construed so as to abridge other
rights. That's a rare acknowledgment that rights can conflict, but it's
also ironic because a little higher on the same page are two rights, one
of which has to abridge the other. It states that everyone has a right
to education that promotes personal fulfillment, peace and tolerance,
and then it says that parents have a "prior right" to determine their
children's education. But, despite the injunction against abridging,
there's no word on how to handle a parental choice of militant patriotic
or religious indoctrination for their children.
Examples of direct conflicts between rights are many. Is vaccination
more important than respect for the right to refuse medical treatment?
Is it never, always, or sometimes more important? Does freedom of speech
trump freedom from persecution? Can speech insult religious icons? Can
it threaten death by mail? Or on the web? What is the difference between
speech and persecution when it's a telemarketer's call?
Rights can also be related in other ways besides a straightforward
conflict. Depending on point of view or context, the same right can be
vital or irrelevant. Someone trying to pass a final exam isn't hoping to
exercise the right to free speech. A person facing a murderer's gun may
not share Patrick Henry's feelings on the relative worth of liberty and
life. The dependencies of different rights can change, too. Freedom of
religion for a mystic does not require the right to assemble in the same
way as it does for, say, a Catholic. A whole series of interlocking and
changing issues all need to work together to resolve rights into their
correct relationships, and even then they're only valid in a given
situation. The complexity is more reminiscent of biology than logic.
Rights may be universal, but how people feel about them isn't.
Given the complexity, and especially given its mutable nature, looking
for static answers seems foolish. And yet the concept of adapting to
reality is sometimes rejected out of hand because it would lead to a
legal morass. That is a strange approach. Saying "away with complexity!"
doesn't make it so. The Dutch might as well decide that this business of
having a fluid ocean makes life too difficult, so dikes will be built as
if the sea was solid. Writing laws that ignore reality is equally
senseless. It's the legal system that must adapt, not the other way
around. Expecting laws to create their own reality is just magical thinking.
The complexity of the situation doesn't make it hopeless, however.
Simple rules can often deal with an infinitely complex reality, and the
same is true here. Specific static answers may be impossible, but the
rule of equality can find a way through much of the complexity even when
the answers themselves depend on each given situation. No situation has
exactly the same blend of important factors, and equality just needs to
be balanced among all of them, not limited to one.
Another objection made sometimes is that striving for equity is naive.
It requires a definition of fairness, which is a subjective matter and
thus doomed to failure. It's said to be better to rely on something
quantifiable, such as a cost-benefit analysis. I'm not sure whether that
view is based on willful ignorance or the plain variety. Surely, it's
obvious that cost-benefit analysis can only begin /after/ deciding who
pays the cost and who gets the benefit. That can't be decided without
rules about what's fair, or, alternatively, about what some people can
get away with. Cost-benefit analysis can't take away the need for the
step it rests on. It can, however, muddy it. Possibly, that comfortable
obscurity is the attraction of pretending the first step can be ignored.
Dealing with that first step of defining fairness is not impossible.
Fairness is not any specific or static situation. It's that everyone has equal
rights. Each individual enjoys the widest possible range of rights that
is compatible with the same range for others. The essential step in
applying the idea is that some rights depend on others, and that the
/totality/ must be preserved. No single right takes precedence over all
others.
The principle that rights must apply equally, and that the priority of
rights may change in order to preserve that equality, can provide a
resolution of some conflicts on that basis alone. For instance, it's not
hard to see that allowing a religion, any religion, to dictate speech
will make religious freedom meaningless. If everyone is to have both
freedom of speech and of religion, speech has to have priority. Giving
religion priority would paradoxically end freedom of religion, since any
offensive expression associated with another set of beliefs could be
prohibited.
Missing Rights
So far, so good, but even that blindingly obvious example gets into
trouble rather fast. Freedom of religion is meaningless without freedom
from persecution, and some speech can certainly feel like harassment to
the recipient. The current solution is to try to put conflicting demands
on the same right: to promote free expression and to throttle it back;
to promote freedom of religion and to refuse it a congenial environment.
That's doomed to failure.
However, logical absurdity is not the only choice. The irresolvable
nature of the conflict is a symptom not of inevitability but of a deeper
problem. The borders between the two rights aren't good enough. They
leak over onto each other's territories. The solution is not to let one
or the other leak preferentially, but to plug up the gap. The solution
is to find what's missing, and that actually doesn't seem too difficult
in this case. What's missing is the right not to hear.
The lack of an explicit right to control our own sensory inputs is at
the root of a number of modern frictions. The technology to amplify our
ability to bother each other, for good or ill, simply wasn't there
before. The lack of a right not to hear is doubly odd because it would
seem to be an obvious parallel to other well-established rights. For
instance, control over our own bodies means more than freedom of
movement. There's also the right to refuse medical procedures, to be
free of assault, and generally to limit other people's ability to
impinge on us. We control not only what we do, but also what other
people can do to us. Yet when it comes to sensory matters, we have
freedom of expression, but no equivalent right to refuse inputs from
other people. That's a recipe for disaster, one already in the making
all around us.
Aggressive marketing, whether commercial, charitable, or political, is
currently considered a minor annoyance worth suffering for the sake of
free speech. The advertisers aren't expected to control themselves. The
targets are expected to "tune it out." Interestingly, research
shows that the
effectiveness of advertising depends on /the extent to which it is tuned
out/. That makes the injunction to ignore it downright sinister.
(Repetitive messaging is discussed at greater length under Advertising
and in Education.) We've become
so habituated to advertising that it takes a new angle to remind us just
how intrusive it is. There is now a technique to use focused sound that
reflects off the inside of the cranium (1, 2).
At the focus of the speakers, you hear voices inside your head.
Marketers placed these things on buildings to advertise a movie about
ghosts by making passersby hear what seemed to be ghosts. It came as a
surprise to them that people were outraged. (The marketers were
surprised. Not the ghosts. They have more sense.) There is no real
difference between reflecting sound off a wall or a skull to get to the
auditory nerve. The only difference is that we're not used to tuning out
the latter. But once we're used to it, the tendency is to accept that
it's somehow up to us to deal with it. It takes a new intrusion to
remind people that, no, it shouldn't be up to us. We have a right to
silence.
Although the right has yet to be articulated, it's already starting to
be applied. It's so obvious to so many people that even in the U. S. of
A., where the dollar is king, a Do Not Call Registry limiting marketing
by phone is one of the most popular laws of the last decade. Stephen
Fry, champion and master of communication that he is, has made the same point
about avoiding unsought words: “No matter who you are no one has … a
right to address you if you don’t want to be addressed.” However, the
lack of articulation about a right to silence causes helplessness in the
face of all kinds of violations, not all of them commercial. Cell phone
conversations in confined spaces are another example. The level of
discourse about that sometimes echoes the old one about smoking. The
"right" to smoke was important and the right to breathe was not.
Soft porn is one area where the right not to see is explicitly discussed
even if not always acknowledged. (There is general agreement that we
have a right not to see the hard core stuff.) As with telemarketers,
sometimes the need for a missing right is so clear, it's demanded even
if we don't know what to call it. But because we don't have a name for
it the argument is ostensibly about whether parents should decide what's
acceptable for their children. So a parent's right to control what a
fifteen-year-old sees is more important than anybody's own right to
control what they themselves see. It's another farcical contortion
symptomatic of unresolved conflicts between rights.
Now that everybody has the ability to din at everybody, 24/7, it's going
to become increasingly obvious that an explicit right not to hear, see,
smell, taste, or touch is an inalienable right, and that it has to stop
being alienated. As a shorthand, I'll call it a right to silence in the
same way as free speech is used to mean all forms of expression.
The big sticking point with any "new" right is how to implement it. It
always seems laughably absurd at the time. In the high and far off
times, when there was talk of broadening voting rights beyond the landed
gentry, the gentry sputtered about the impossibility of determining the
addresses of fly-by-night renters. We've managed to solve that problem.
The same process is repeated with each newly recognized right. Creating
a quiet environment is no different. It only seems difficult because
it's a measure of how much intrusion has become commonplace.
The basic rule is that any given expression should be sought out, and if
it's not, it shouldn't intrude. The old clumsy brown paper covers on sex
magazines were a step in this direction. Those who wanted that content
could get it, but it wasn't up to the general public to avert its eyes.
Updating that attitude to the electronic age, the idea is that all
content has to be opt-in rather than opt-out. If a web site wants to
carry ads, a button click could lead to them, but there can be nothing
"in your face." Billboards would be banned since there is no way for
them not to be in your face. Magazine articles and advertising would
have to be distinguishable and separate. Television advertising would be
grouped together in its own time slot, as it is in Europe. Product
placement in entertainment would be a no-no.
Obviously, respecting the right to silence would affect current business
models dependent on ad-supported-everything. However, simply because a
whole industry has fattened on the violation of rights doesn't mean they
must continue to be violated. (For ideas on how to pay for content
without ads, see an earlier piece of mine, or the chapter on Creativity,
which reflects an idea gaining currency in many places; e.g. 1, 2, 3.)
Once again, the distance between respect for the right to silence and
our current world is a measure of how far technology has enabled some
people to push others and how far we have to go to get back.
Not all implementations of the right to silence require readjustments to
the whole social order. The issue of bellowing into cell phones could be
solved rather simply, for instance. Cell phones don't have a feedback
circuit to let callers hear their own voice at the earpiece. No doubt
it was cheaper to make them that way. Mandating a feedback circuit,
possibly an amplified one to make people talk very, very quietly (plus
usable volume controls to boost the sound from muttering callers), would
go a long way toward returning cell phone bellowing to normal speaking
tones. Sometimes there are technological fixes.
The thorniest balancing issue raised by a right to silence is the one
brought up at the beginning of this discussion: how can the greatest
scope be preserved both for offensive opinions and for the right to an
unintrusive environment? Bottling advertising or soft porn back up may
be very difficult in practice, but there aren't too many conceptual gray
areas. How to draw the lines between conflicting opinions, however, is a
much tougher question and it starts with the fundamental one of whether
any lines should be drawn at all. Serrano desecrated a crucifix for the
sake of art. Hoyer drew Mohammed in a cartoon for the sake of free
speech. Should they have been silenced? Absolutely not.
Should the people offended by it have had to notice it? That's a much
harder question. At least in those cases, the desecration happened in
venues not likely to be frequented by those faithful who would take
offense. (A modern art show, and a general circulation Danish
newspaper.) The issue had to be brought to their attention; a fervor had
to be whipped up. In that case, if people go out of their way to notice
things they don't like, there is no issue of persecution and freedom of
speech takes precedence. Persecution seeks out its victims, not vice versa.
In a different situation, where the wider culture presents a rain of
unavoidable and offensive messages, such as an insistence that
homosexuality is a perversion, justice seems better served by limiting
that expression to venues where people would have to actively seek it
out. The principle is always the same, which means the resolution may or
may not be. Conflicts should be resolved to preserve the most rights for
everybody, and to preserve the most equality among the conflicted
parties. Precisely because equality is the goal, the resolution of any
specific conflict has to be decided on its own merits.
Privacy
While I'm on the subject of missing rights, privacy is another essential
one. Unlike the right to silence, at least we know its name and there's
growing acknowledgment that it should be on the books. However, as a
recently understood right, it's relegated to the status of the new kid
on the block: tolerated if it stays on the fringes, but not welcome to
interfere with the old big ones like free speech. I'm going to argue
that this is backwards. Respect for privacy is fundamental to all the
other freedoms and should take precedence as a rule.
To begin at the beginning, what is privacy? What is it that we have a
right to? Thoughtful and thorough discussions of the definition of
privacy (such as Solove's "Understanding Privacy") explore
the concept in all its permutations. However, in one important respect
there's a missing element. Just as with all the other rights, it's not
the exact definition or its place in the hierarchy that matters because
/those change depending on the situation/.
Trying to make a fluid issue static is part of the reason it's generally
easy to subvert the intent. It would be better to go to the shared
characteristic of all the concerns about privacy. Every aspect of
privacy involves the ability to control the flow of information about
oneself. Exactly which bits of information need to be controlled varies
according to personal feelings. It's not the information that defines
privacy. It is, once again, about control.
Control implies that the individual decides which information about her
or him can be stored, where it can be stored, and for how long. It
implies that the permission to store it can be revoked at any time. And
it implies that potentially private information also follows that rule.
It has to be "opt-in" and never "opt-out," and the permission is revocable.
As with all rights, if they are to be meaningful, the burden of
implementing respect for privacy falls on the potential violators, not
the victims. A so-called right might as well not exist if it's nothing
but the suggestion that people can spend their time trying to defend
against violations. The equivalent in a more familiar situation would be
to make robbery illegal but to do nothing to stop it except give the
victims, without any police assistance, the "right" to try to find the
perps and to try to get their property back. That's not the way rights work.
Violating a right is a crime and it's up to the violator not to do it.
That is equally true of privacy as it is of personal safety. It is not
up to the individual to try to object to violations after they've
happened. It's up to the people who want that information to first get
permission to have it. So, no, Google can't photograph recognizable
residences. Marketers can't track your every web click. Credit reporting
agencies can't hold data against people's wishes. Before anyone objects
that then life would cease, remember that loans were made and insurance
risks evaluated before the days of centralized databases. There's
nothing impossible about respecting privacy rights. It's just expensive
because a whole industry has been allowed to feed on violating them.
That's only made us used to it. It doesn't make it good. Neither does it
stop the growing resentment that always accompanies violated rights. The
anger will burst out eventually, and no doubt stupidly, since we haven't
put a name on the real problem.
One curious aspect of privacy is that as a rather newly recognized right
which is not well articulated, it has become an umbrella for several
unarticulated rights whose lack is becoming obvious. There's a sense
that privacy is about control, so sometimes what I've called the right
to silence is conflated with it. This is especially so when the unwanted
noise involves sex. Sex is a private matter, one wants enough control
not to hear about somebody else's work on it, so it must be a privacy
issue. Because the terms haven't been correctly defined, it's relatively
simple to argue that sexual content, unless it's about the viewer, has
nothing to do with his or her privacy. The argument then spins off into
being about the children. In reality, it's about different areas of
control. The right to silence is control over the extent to which one
can be forced to notice the activities of others. Privacy is the right
to control one's own information.
Another issue conflated with privacy is abortion rights. The main thing
the two seem to have in common is that both started to be discussed in
public at about the same time. Possibly it has to do with the fact that
medical procedures are normally private. However, whether or not medical
procedures are performed has nothing to do with privacy. That has to do
with the right to control one's own person, one of the most fundamental
rights of all. Even the right not to be murdered is secondary, since
killing is allowed in self-defense. Similarly, if there was no right to
control over one's own body, patients dying of, say, kidney disease
could requisition a kidney from healthy people walking around with a spare.
Abortion muddies the argument only because some people believe the fetus
is a person with legal rights greater than those of the mother since it
can require her life support. There is nothing to stop women from
believing this and living accordingly because there is a right to
control one’s own body.
Everyone has the right to live according to their own beliefs. The
relevance to abortion is that personhood is necessarily a belief, a
social or religious category. It is not a matter of objective fact.
Biology can only determine who belongs in the species /Homo sapiens/. No cellular
marker lights up when someone is due to get legal rights.
It bears repeating: personhood is necessarily a matter of belief,
whether that's based on religion or social consensus. Therefore those
who oppose abortion because they believe the fetus is a person with
special status have to hope they are never successful in legislating how
others handle their pregnancies. If they were, it would mean that
exceptions could be made to the right to control one's own person. And
once that principle is admitted, then there is nothing to stop a
majority with different beliefs from legislating forced abortions.
The fight over abortion is a good example of just how bad unintended
consequences can be if there is enough confusion over concepts. Control
over one's own person is different from a right to privacy. So is the
freedom to live according to one's own beliefs. When the issues involved
in abortion are correctly categorized as ones of control and beliefs about
personhood, then the appropriate social policies are not hard to identify.
Individual decisions are not the point here. They may be complex,
depending on beliefs. Fair social policies are obvious: everyone has the
same right to control their own bodies. Nor can any religion take
precedence over that control without, ironically, destroying freedom of
religion as well as other basic rights. Privacy is not the issue at any point.
Having discussed what privacy is, and what it's not, let's go on to why
it is a fundamental right. That seems counterintuitive at first because
privacy, in and of itself, is not very interesting. Like the right not
to be murdered, it becomes critical only when it's violated. But, like
control over one's body, control over one's own information is necessary
if other rights are to have any meaning. The only reason that hasn't
always been obvious is that we haven't had the technical capability to
spy on each other 24/7, or to retain every whisper forever. When anyone
on the internet — including, for instance, your boss — can look over
your shoulder and examine where you live, which plants grow in your
window boxes, which gym you visit, who you have sex with, and how you
looked in your baby pictures, there will effectively be no freedom left.
Everything will have to be hidden if everyone can see it. What you can
say will depend on what others approve of being said. Where you can go
will depend on where others approve of you going. Old-fashioned police
states, which depended on limited little human informants to keep people
in line, will come to seem like desirable places with a few minor
constraints. The logical conclusion of no privacy rights is no freedom
of speech, movement, or assembly.
A common objection to drawing logical conclusions is that the situation
would never really get that bad. There's no need to take the trouble to
prevent a situation that's not going to arise. That kind of thinking is
wrong on two counts. One is that it's symptomatic of evaluating the cost
of preserving rights against losing bits of them, and of the tendency to
opt for the path of least resistance. It's too much trouble to fight so
we put up with the loss. Then we get used to it. Then there's a further
loss … and so on. The evidence so far does not provide grounds for
optimism that things will never get too bad because people won't stand
for it.
But even if there is no problem at all, even if an invasion of privacy
never happens, that is not the point. The thinking is wrong on a second,
and even more important, count. Rights aren't abandoned just because
nobody happens to be using them. A nation of bores with nothing to say
still has to preserve the right to free speech, or stop being a free
country. A nation of atheists has to preserve freedom of religion. A
nation where nobody has ever been murdered still has to consider murder
a crime. And a nation where nobody cares about privacy has to enforce
the right to it. It's not good enough to say that the explicit right is
unnecessary because nobody needs it. Having a right to privacy is
different from waiting for someone to take it away. We find that out
every time a new invasion occurs.
Privacy is a linchpin right that needs to be explicitly included in its
real place in the hierarchy.
Right to Live
There's another vital but missing right, one that's been identified for
decades, one that was noted by statesmen like Franklin Roosevelt, and
one that requires plenty of social change in some parts of the world.
It's the right to keep body and soul together under all circumstances,
not only the special case of avoiding violent death. There's a right to
make a living, a right to medical care, and a right to care in old age.
Obviously, the right to live has to be understood within its social
context. In the Paleolithic, people didn't have the same capacity to
provide for each other that we do now. There are limits to how much we
can guarantee each other's lives, but within those limits, there is the
obligation to respect everyone's right to live. Not recognizing that
right leads directly to the eventual loss of all other rights as people
give them up to make a living. Just the threat of being deprived of a
living leads to the loss of rights. The right to live isn't merely a
nice, pie-in-the-sky privilege for when we're all rich enough to eat off
gold spoons. It's critical to any sustainable free society.
The right to live, perhaps more than any other, makes people fear how
the inevitable conflict between rights can be resolved. A right to live
costs money, and many people want to be sure it's not their own money.
However, conflict with other rights is not some special defect that
makes this one uniquely impractical. All rights have conflicts and need
to be balanced. The solution isn't to rigidly prefer one. It's to
evaluate the situation on its merits and see which solution preserves
the most rights for everyone.
The sticking point is how it might work in practice because the greatest
good of the greatest number is not the same as a guarantee that
wealthier citizens won't lose by it. The only way to achieve no-cost
economic justice is sufficient growth, equitably distributed, which
brings everyone up to an adequate level. That rosy scenario is so
comforting that it's now the dominant model for how economic justice
will happen. Sadly, it's doomed. That's not because growth could never
be sufficiently vast. It could be and has been at several points in
recent history when technological advances led to huge wealth creation.
But spontaneous equitable distribution of that wealth has never
happened. If enough people spontaneously limited themselves to ensure
justice for others, there wouldn't be a problem to begin with. There's a
problem because people do want justice, but for themselves, not as
limits on their own activities for the sake of others. Economic growth
doesn't change that.
The truth of the matter is that a right to live will inevitably be
costly to some people in the short term. Wealth is currently distributed
without regard to economic justice, so the people whose money comes at
the expense of the lives of others would not get that money under a just
system. Again, if nobody made money unjustly, there wouldn't be a
problem to begin with. Given that there /is/ a problem, there is no way
to make it cost-free to everyone.
In some ways, it's ironic that there should be resistance to
implementing economic rights because there's really nothing
objectionable about them. Economic justice, which strives to balance all
rights equally, would respect property rights and wealth accumulation
that didn't deprive others of a living. Most people, honest people, are
disturbed at the mere thought of making money at the expense of others'
lives. In a just system, they could make money without that disturbance,
nor would anyone fear grinding poverty. It's a win-win situation.
And yet, the harder it is to make a living, the less anyone cares how
it's made. The less economic justice, the greater the fear of poverty
and the more a right to live sounds like a right to impoverish others.
It's a downward spiral just as the previous situation is an upward one.
Taking advantage of others is a very difficult pattern to stop unless
there are rules in place preventing everyone equally from doing it. The
truth of that is evident from the worst economic crimes, like slavery,
right down to accepting minor bribes. Before the laws are in place, the
haves fight them because of the spectre of poverty. After laws promoting
economic justice are passed, lacking those laws becomes a mark of
barbarism. For instance, turning a blind eye to slavery is now
considered despicable everywhere. On the other hand, an effective social
safety net is still inconceivable to many US citizens, while Europeans
can't understand how any society can be said to function without one.
The point of this digression on the costs of a right to live is that I
want to be clear on the fact that there are inevitable short term costs
for some people. It's also inevitable that at the level of whole
societies, judging by the weight of the evidence so far, everyone is
better off in the medium term and even more so in the long term.
Economic justice does not have to be the apocalyptic catastrophe that
communism made of it, /so long as rights are balanced rather than given
rigid precedence/. On the contrary, equitable economic rules lead to
increased, not decreased, wealth. They're also fundamental to a
generally equitable and therefore sustainable society. In the chapter on
Money and Work,
I'll give some examples of how a right to live might be applied in ways
that balance available resources, property rights, employers' rights,
the right to an equitable share of created wealth, and the right not to
die of poverty.
Of all the rights, the one to live suffers most from a breakdown of
imagination, so I want to digress a bit on that, too. With the others,
such as the right to privacy (unless you're Google), there's not a
widespread sense that it's silly to even try implementing them. But it's
inconceivable to many people that a universal right to live could exist
outside a fairy tale. "Inconceivable" isn't just a figure of speech in
this case. It is, literally, inconceivable. For instance, no matter how
many times accountants do the math and show that everyone in the US
would benefit if we had universal health care, many US citizens simply
cannot take on board the fact that they're losing by not spending money
on medical care for others. No doubt, if you had never seen the color
red, having physicists tell you its wavelength wouldn't help. But once
you had seen it, anybody who told you it couldn't exist would seem to
lack vision.
Slavery provides one example of that change in vision. At one time, some
people in the US thought slavery was essential to the economy. Now,
nobody can see how anyone could have thought slavery brought any
benefits. The fact that the slaveholders made some money is seen as an
infinitesimally small quantity compared to the astronomical social
costs. It's become inconceivable that anyone could have missed that.
The concept that increased justice leads to increased benefits for
everyone — not just nebulous moral benefits but plain old quality of
day-to-day life benefits — is something that's much easier to see when
it's been experienced. So easy, in fact, that for people in that
position it must seem too obvious to need as much repetition as I'm
giving it.
Although the right to live suffers the most from seeming impossible to
implement, all missing rights seem fanciful. After they've been
articulated, but before they're applied, the usual excuse is that
they're impractical. The examples are legion. Government of the people
couldn't possibly work because the people are simpletons. Free speech
will lead to a breakdown of order. Universal health care will bankrupt
the country. Respect for privacy will make it impossible to do business.
And so on. Yet, oddly enough, whenever the fight to gain respect for
rights is successful, the opposite happens. Democracies work rather
well; countries with free speech have more order, not less; and
industrialized countries with universal health care have lower medical
costs than the USA, which is the only one doing without. None of the
missing rights would be impractical to apply. What feels impractical is
that they involve limiting the power of those currently benefiting from
their abuse. That's different. It's difficult, not impractical.
Rights in Conflict
Most of this piece on conflicting rights has been devoted to errors in
the framework which cause conflicts even when none is necessary, such as
missing or badly defined rights that blur the necessary boundaries among
people. But even when all those errors are solved, rights can and will
conflict precisely because there is no single right answer to their
order of precedence.
However, although there cannot be a rigid priority among rights, there
is a clear goal. The viability of all rights should be preserved. The
best outcome preserves the most rights and the most freedom for the most
people. In consequence, conflicts between rights need to be resolved in
favor of the ones whose loss /in that situation/ would otherwise cause
the most damage.
As an example of how this might play out, consider a recent conflict.
Muslim workers at a slaughterhouse needed to pray five times a day.
Other workers were reluctant to fill in to the extent required. Filling
in is a non-trivial task in that environment. The butchering process
moves at blinding speed, so workers who step away briefly create a real
burden for those who have to make up the difference. The risk of serious
injury is increased and, if there's any faltering, the whole line can be
halted, carcasses pile up, there's an incredible mess, workers get in
trouble, and there may be consumer health issues or financial losses if
some of the product winds up being mishandled.
The conflict was between religious rights and workers' rights. The
Muslims shouldn't have to scrimp on their beliefs just to keep their
jobs. The non-Muslims shouldn't have to work faster and increase risk of
injury just so somebody else could have extra breaks. Oddly enough, a
third alternative was not mentioned. The processing lines could be
slowed down enough at the relevant times of day so that the missing
workers didn't cause a problem. In fact, it was a three-way conflict and
the balance lies between all three factors, not just two. If the owner
of the plant had been a struggling start-up operating on razor-thin
margins, then any loss of profit could have meant closure of the plant.
That would make both workers' and religious rights moot and would be the
wrong place to look for a solution. In this particular case, the owner
was a Fortune 500 company for which the very limited slowdown would have
had a correspondingly limited impact on the bottom line. That
property right needs to be balanced against equality among workers and
the freedom to practice one's religion. It's not too hard to see which
right would suffer the least damage in this case. Aiming for maximum
equality among rights, the obvious alternative is to slow down the
production line. It's so obvious that its absence from the discussion
can only be one more example of the lengths to which people will go to
avoid inconveniencing the powerful party.
Of course, the more evenly balanced the conflict, the harder it is to
see a clear resolution. Consider, for instance, the issue of someone who
objects to vaccination versus the public health need for most people to
be vaccinated. On the one side is the bedrock right to control one's own
person, and on the other side is … the bedrock right not to be killed by
a disease carrier. If an epidemic is actually in progress, the public
health considerations take precedence because the threat is real and
immediate. But if there is no immediate threat, and the level of
vaccination in the population is sufficient that there is unlikely to be
one, then the situation is different. Then the individual is being asked
to give up a fundamental right without medical justification. On the
other hand, if there is widespread sentiment against vaccination,
consistency may be essential for the sake of fairness. (Information
about the real pros and cons of vaccination would also be indicated in
that case, but that's a separate issue.) Assuming that it doesn't start
an anti-vaccination movement (which would damage public health), my
preference would be to decide in favor of the less powerful party, in
this case the individual rather than the public as a whole. But I could
as easily see an argument in favor of maintaining consistent treatment
of all citizens. The specific decision in a specific case would depend
on the attitude of the community. That's messy, but messiness is
unavoidable when there is no clear path to a decision. It's better to be
clear on the real factors in a decision than to create false neatness by
pretending some of them don't exist.
Free speech vs. noise
We have gone off the rails as regards freedom of speech. The freedom
part is all-important and the speech part is forgotten. It's important
to remember what freedom of speech is for: to ensure a hearing for all
voices so that information or truths aren't stifled.
In the 1600s and 1700s when the concept was being developed and applied,
the signal to noise ratio was very different from what it is now. Few
people had the means to disseminate their ideas to begin with, so there
weren't many voices. Advertising barely existed. (Evidence of people
hawking things probably goes right back to the cave dwellers, but the
volume of advertising, its pervasiveness, and its ability to distract
were orders of magnitude lower than they are now.) Nor was there the
technology to din at people 24/7/365. So noise was not a large concern
of the main early thinkers on the topic of freedom of speech. Their big
concern was silencing.
Silencing was and remains something that must be prevented. The dreadful
irony, though, is that a fixation on allowing all speech as the
definition of freedom facilitates the loss of the whole point of freedom
of speech.
Drowning out voices kills their message at least as well as silencing.
Insisting that everyone, everywhere, for any purpose, has an equal right
to speak hasn't preserved freedom of speech. That's killing it. When
everybody can shout as loud as they can about whatever they want, the
biggest voices will dominate.
Nor is it possible to take comfort in the fact that the little voices
are still there if needed, that no information or truth will be lost.
That only holds in theory. In practice — and we live in practice, not
theory — there are a number of considerations that mean the drowned
voices are gone.
Time and attention are finite. There are a limited number of items we
can notice, and of those an even more limited number we can fully
process. That limited amount of information will inform action. In terms
of practical consequences, any further information, even if it came from
an omniscient god, might as well not exist. Freedom of speech is
supposed to prevent precisely that loss of useful information, but when
it's drowned out, it's gone.
It gets much worse, however. Repetition is well known to lead to a sense
of familiarity, and from there to the sense that the brand is known and
good, for some value of the word "good." (Just a sample reference:
"Unconscious processing of Web advertising," 2008.) There
is accumulating evidence from those who study the cognitive effect of
advertising that the feeling of comfort is independent of conscious
thought or attention on the part of the target. Even when people /try/
to be sure they don't react favorably to advertised objects, they wind
up choosing them more often. Tuning it
out, far from making it powerless, gives it maximum effect.
The way our brains work, repetition is apparently assumed at some very
basic neural level to
be indicative of something real, something on which we can base
projections and expectations without having to go through the work of
reprocessing all the inputs as if they were new every time. The need for
rapid decision-making based on insufficient information is a fact of
life, sometimes a matter of life or death, so it's hardly
surprising that our brains would be primed to take all the shortcuts
they can get. Repetition, whether in advertising, dogma, propaganda,
opinions, news items, or catchy tunes, will lead to the same result in
some large proportion of people. Science can't say that any given
individual will be susceptible, but it can say with high statistical
certainty that groups of individuals will be affected.
The implications of the power of repetition for freedom of speech are
huge. It means that the loudest voices drown out others not just because
they're loud. They also seem more persuasive. And the human mind
being what it is, once persuaded, won't admit the possibility of
manipulation. Who wants to admit to that? Even to themselves? Instead,
people generally defend their current point of view by any means
available, always convinced that they thought the whole thing through
carefully. There is no other way to maintain the sense of being in
control of one's own thoughts.
So, freedom of speech interpreted as a free-for-all of shouting does the
opposite of its intentions. It does not preserve diversity and the
richness of public discourse. It does not preserve truth wherever it
might appear. It drowns truth. It returns us to dependence on the
ideas of the few. One can argue about the wisdom of crowds, but there's
no doubt about the foolishness of elites. None of them has ever been
right often enough to avert disaster. Not a single nation traces its
roots to the Paleolithic. Judging by that record, reverting to
dependence on an elite is guaranteed to end in somebody's wrongheaded
ideas taking over the public sphere, and leading to the usual consequence.
To preserve freedom of speech it is critical to do more than prevent
silencing. The noise must be dialed back, too. Of course, that requires
making distinctions between signal and noise, which is the prickly task
we could avoid by defining freedom of speech as a free-for-all.
Making any distinctions is supposed to put us on a slippery slope headed
straight down to censorship and thought control. As I've said many
times, the existence of a slippery slope is not a good enough excuse to
head over a cliff, no matter how clear cut it is. Right now, we're
heading into thought control by allowing too much noise. That is no
better than thought control caused by too little. Either way, we lose a
freedom essential to quality of life and sustainable government. We have
no choice but to do a better job of distinguishing signal from noise.
It's not optional if we want freedom of speech.
The slipperiness derives from the difficulty of distinguishing what is
noise in all cases. The simple solution in the murky middle zone is to
always err on the side of allowing speech rather than suppressing it.
The harder part is to make that zone as narrow as possible.
Let's start with the easy cases. It's been clear for a long time that
there is speech which has no right to expression. Free speech doesn't
confer a right to perjury, to wrong answers on exams, to falsely yelling
"fire" in crowded theaters, or to incitement to riot. The
unacceptability of lying in order to extract money is the basis for
truth in advertising laws. None of these limits has led to thought
control. It is possible to apply limits on speech without losing freedom.
The task now is to update those limits to account for new technology.
Flimflam used to be hard to repeat often enough to create plausibility
on that basis alone. Now it can be repeated endlessly, which makes
identifying untruth an urgent task.
Identifying falsehood leads straight to the even thornier issue of
deciding where truth begins. There's an allergy to that in current
thinking for two good reasons. The received wisdom has proved very wrong
on many occasions over the centuries, and some of the worst excesses by
authorities have been committed in the name of doing the right thing.
That's led — rightly — to an absolute commitment to protect expression
of religious and political beliefs.
But the combination of a healthy uncertainty about truth together with
the commitment to protect all religious and political speech has
resulted in a curious chimera. Now any statement /defined by the
speaker/ as a belief is automatically protected. The consequence is
absurdity, which can be lethal. For instance, when some parents hear
about somebody’s beliefs on the evils of vaccination, they decide to
keep their children safe from it. Once vaccination levels are low
enough, group immunity is lost, the disease itself comes back and causes
deaths in children. Sometimes the children kept "safe" from vaccination
and the children who die of the disease are the same children.
Suppressing noise without falling into the error of suppressing thought
requires objective methods of telling them apart. So the core questions
are whether truth can be distinguished from lies, and if so, in which cases.
The general question has been addressed by a considerable body of
philosophy, including most recently deconstructionism. Various schools
have made the case that the truth may be unknowable. Generalizing from
that sense of inscrutability has led to the feeling that nobody can
dictate the "right" way of thinking.
However, generalizing from abstractions to the fundamentally different
class of things represented by tangible facts is lumping apples with
pineapples. The knowability of truth has little direct relevance to the
physical world in which we have to deal with stubborn facts. Those who
have people to do their laundry for them can write screeds about whether
the clothes really exist or are truly soiled and criticize the internal
contradictions in each other's texts. The rest of us just have to try to
deal with the things at minimum cost and maximum benefit.
Thus, in the general case there may or may not be any philosophical
truths, but in the specific case of fact-based issues the answer is
different. Even if the truth (possibly with a capital "T") is
unknowable, fact-based issues can have statistically valid answers, and
we have the lightbulbs, computers, and airplanes to prove it. That means
counterfactual assertions exist. They are not just some other equally
valid point of view. The yardstick of truth-in-speech can be applied to
matters of fact, those things which can be measured and studied using
the scientific method. Nobody is entitled to their own facts, and
labeling them "beliefs" doesn't make it so.
A couple of caveats follow, of course. Reasonably accurate discovery of
the facts and their meaning may be non-obvious. That doesn't mean it's
impossible. Nor is there anything wrong with thinking carefully about
assumptions, methods, or conclusions. It's essential. Re-examination of
data in the light of new knowledge is equally essential. But revisiting
the same issue without new supporting data, after repeatedly reaching
the same conclusion with a high confidence level, such as 95%, is a
waste of time. It's noise.
At this point, then, the murky part of the slippery slope no longer
includes demonstrably counterfactual assertions. Whether they appear on
the news,
talk shows, printed matter, or any other disseminated medium, repeating
untruths is not protected as free speech. That means the end of (legal)
manufactured controversy about many current topics, such as evolution,
vaccination, or the monetary cost-benefit ratio of illegal aliens to the
rest of US society. Which would be quite
a change. (I'll discuss in a moment how one might give such laws
practical effect.)
On the other hand, as an example of a factual controversy that has not
yet been decided, it would still be possible to worry about the effects
of cell phone transmitters on nerve tissue. Studies of harm have come up
negative, but there haven't yet been enough long-duration studies for
the required level of statistical certainty. (As if to make the point, a
study came out a few months after that was written: trees are affected
by electromagnetic radiation. The original report is in Dutch.)
The point I'm trying to make with these examples is that the standard of
what constitutes certainty should be high, and anything which a
reasonable person could question based on the facts should continue to
have protected status.
Another murky area of the slippery slope is bad beliefs and opinions.
(Good ones everybody wants to hear, so they aren't generally targets for
suppression.) There is not, in any system of thought I'm aware of, an
objective way to deal with subjective matters. It's a logical
impossibility. The most one can say is that some beliefs can have very
harmful practical effects, but the laws are already in place to prevent
or punish those effects. They can be objectively evaluated. The beliefs
themselves cannot be.
Therefore everybody /is/ entitled to their own opinions. Limitations on
beliefs or opinions really can lead straight into censorship. That
includes wildly unpopular opinions such as justifications for terrorism.
To illustrate my interpretation of these distinctions, I'll take an
example from Glenn Greenwald, a
stalwart proponent of the freest speech, discussing the University of
Ottawa's censorship of a talk by Ann Coulter. I would describe her as
favoring a vicious branch of right wing ideology with not a few
overtones of racism, sexism, and any other bigotry you care to mention.
In short, just the kind of person any right-thinking citizen would want
to censor.
In his view the University was wrong in placing any limits on her
speech, no matter how repulsive it was. He supports that view because
"as long as the State is absolutely barred from criminalizing political
views, then any change remains possible because citizens are free to
communicate with and persuade one another and express their political
opinions without being threatened by the Government with criminal
sanctions ...."
I've agreed above that political views can't be suppressed, and I've
already disagreed that merely allowing everyone to talk guarantees the
viability of even minor voices. But I think he misses another vital
point in the mix. Whether in support of good or revolting opinions,
allowing anyone to propagate lies does not serve the public good.
There's a difference between government restricting speech it dislikes —
which is very bad — and restricting speech that is objectively untrue —
which is essential.
It seems to me that the Canadians and Europeans are attempting that
distinction in their laws against hate speech. They've labeled it "hate
speech," but I've seen it applied against groups that spread lies to
foment hatred, such as the Shoah-deniers in Germany. It was careless to
label it according to the goal, suppressing hatred (which can't be
objectively identified and therefore can't be legislated away), instead
of the method, stopping lies, which sometimes can be objectively
evaluated. It's understandable that the Europeans would be particularly
aware of the need to short-circuit hateful falsehoods. They've certainly
suffered from seductive lies (seductive at the time) which caused
enormous real harm. But it is important to give such laws their right
name because that helps to delimit them objectively and thus makes them
consistently enforceable.
In the specific example of Coulter's talk in Canada, the authorities
based their objection on the hatefulness in question, which led to the
charge of preferring one set of subjective attitudes over another. They
could have more validly told her she would be expected to stick strictly
to the facts. For instance, according to the rules suggested here,
nobody could stop Coulter from giving a speech about, for instance, how
much she disliked blacks. That's her opinion and, although it may not be
doing her cardiovascular health any good, she's entitled to it. As soon
as she says she has nothing against blacks, it's just that they're
good-for-nothing freeloaders who are draining the system of resources
put there by hardworking people, then she has to be able to prove it or
she's indeed liable for spreading falsehood. Those are facts. One can
count up the money and see whether they're true.
(And, indeed, people have tallied those particular facts, and race is
not associated with being on welfare. Poverty is associated with being
on welfare. For instance, in 2004 numbers, over 30% of blacks were poor,
and over 30% of welfare recipients were black. Over 10% of non-Hispanic
whites were poor, and they comprised over 10%
of welfare recipients. Whether blacks themselves or discrimination is at
fault for the higher poverty rates is another factual point which is not
the issue here. The point here is, if they were better freeloaders,
they'd have to be using welfare in higher proportion compared to their
poverty rate.)
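To make the proportionality test concrete, here is a minimal sketch in
Python using the rounded figures just cited. The variable names are
mine, and a real check would of course use the full dataset:

    # Rounded 2004 figures from the text above. The claim to test:
    # welfare use tracks poverty, not race. A group of "better
    # freeloaders" would show a welfare share well above its poverty figure.
    figures = {
        "black": {"poverty": 0.30, "welfare_share": 0.30},
        "non-Hispanic white": {"poverty": 0.10, "welfare_share": 0.10},
    }

    for group, f in figures.items():
        ratio = f["welfare_share"] / f["poverty"]
        # A ratio near 1.0 means welfare use simply mirrors poverty.
        print(f"{group}: welfare share / poverty figure = {ratio:.2f}")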
The point to the example of Greenwald's objections to Coulter's talk is
to show the distinction I'm trying to draw between suppressing untruths
and allowing the free expression of opinions. I'd like to give a feel
for how it would change the discourse if free speech did not mean a
free-for-all, if truth-in-speech laws applied to everyone, not just
advertisers. It's become clear that spreading demonstrable falsehood
generates what people already call a "noise machine." The wisdom of the
crowd is way ahead of the deep thinkers on that.
The idea is that presenting counterfactual points as plausible in any
form, even as mere implications or dog whistles, fails the
truth-in-speech test. Which brings me to the hard part: how can the
prohibition be put into practice?
I'm a believer in first figuring out where to go, then seeing how to go
there. So, even though I'm not at all sure how to implement these ideas,
that doesn't change the point that they do need implementing. What
follows are suggestions. There are bound to be better ways to do it, but
I feel I should moor the ideas in some kind of action since I've been
discussing them.
People opposed to a given viewpoint will be the quickest to find errors
of fact in it. They'll also generally be the quickest to try to suppress
the viewpoint itself. Those two opposing forces must be balanced, but
the opposition can serve a useful function by identifying errors
clearly, logically, and with substantiation. Substantiated objections
would require a response within a specific time, such as a week. If
neither party then agreed to modify their stance, the process would
ratchet up. It would be submitted for comment to the community of those
trained in the relevant field. (I describe the feedback process as an
integral part of government generally in the section on Oversight
in the second Government chapter.) If an overwhelming majority, such as
90% or more, of the experts agree with one side, then the other side
would be expected to withdraw its statements. The losing side could
appeal through the legal system (in a process described elsewhere)
to confirm or deny the earlier consensus. To reduce frivolous attempts
to use the process to suppress opposing views, people or groups who
charge errors of fact, but turn out to be in error themselves more than,
say, three times in a row would be barred from raising fact-based
objections for some length of time, such as five or ten years.
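To moor the mechanics a little further, here is a minimal sketch of that
escalation ladder in Python. Every name and number in it (the one-week
response window, the 90% threshold, the three-strike bar, the five-year
penalty) is a placeholder drawn from the suggestions above, not a
worked-out design:

    from dataclasses import dataclass

    RESPONSE_DAYS = 7        # time allowed to answer a substantiated objection
    EXPERT_THRESHOLD = 0.90  # supermajority of the relevant trained community
    STRIKE_LIMIT = 3         # consecutive failed objections before a bar
    BAR_YEARS = 5            # could equally be ten, as suggested above

    @dataclass
    class Objector:
        name: str
        consecutive_failures: int = 0
        barred_until_year: int = 0

    def resolve(objector: Objector, year: int, expert_agreement: float,
                objection_upheld: bool) -> str:
        """Walk one objection through the ratchet described above."""
        if year < objector.barred_until_year:
            return "refused: objector is barred from fact-based objections"
        if expert_agreement < EXPERT_THRESHOLD:
            # No overwhelming consensus either way: the statement stands.
            return "unresolved: no expert supermajority"
        if objection_upheld:
            objector.consecutive_failures = 0
            return "statement must be withdrawn (court appeal possible)"
        # The objector was the one in error.
        objector.consecutive_failures += 1
        if objector.consecutive_failures >= STRIKE_LIMIT:
            objector.barred_until_year = year + BAR_YEARS
            return "objection failed: objector barred"
        return "objection failed"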
The rules about truth-in-speech would apply to anyone disseminating a
message widely: people speaking to audiences in person or through some
medium, whether physical or not. It would not apply to interactions
among individuals, whether at the dinner table or digital. Messages
going out to groups, on the other hand, are going to an audience. So,
yes, it would include bloggers. It would include social networking,
since messages are generally visible to a number of people at once. The
heaviest responsibility would fall on those with the largest audiences,
and the enforcement should be strictest there as well. But everybody,
down to the smallest disseminator of information, would be held to the
same standard. Being small does not confer a right to disregard facts.
Everybody is subject to the same truth-in-speech laws, and it behooves
everybody to make sure they're not disseminating false rumors. The
excuse of not having done one's homework can only be valid a couple of
times, otherwise viral lies by little people would become the new
loophole for big ones.
I know that a general responsibility to check the facts would be a huge
change, and I know it would be hard to enforce at the smallest scale.
But I'm not sure how socially acceptable rumors with no basis in fact
would be if that practice were labeled reprehensible, stupid, and
illegal at the highest levels. Currently, the only value at those levels
is being first with anything hot at any price. It's not surprising to
see it spread downward. If there's an understanding that lies are a
social toxin and social opprobrium attaches to being first and wrong, I
think there may be less pressure toward rumormongering even at smaller
levels. Respect for truth can spread just as disrespect can. Otherwise
there'd be no cultural differences in that regard, and there are.
The effect on news outlets, as currently understood, would be severe. It
would not be enough to attribute a statement to someone else to avoid
responsibility. It still comes under the heading of disseminating
untruth. The fact-checking departments of news organizations would
become sizable. Being first would share importance with being fact-based.
The sanctions against those who persist in lying could be multilayered.
After some number of violations, say five, that fall into a pattern of
practice showing a carelessness about facts, a news outlet could be
required to show, simultaneously with their assertions, what their
opposition has shown really are the facts in the matter. If the problem
persists, they could lose status as a news organization and be required
to show a label of "for entertainment only" at all times. If they cannot
meet even that standard, they could be shut down.
I realize I just said that entertainment is included in truth-in-speech
standards. It's very different, applied to that industry, but it still
applies. Stories are the most effective way to spread messages, more
effective than straight information (e.g., Butler et al., 2009). It is
no more acceptable to spread dangerous falsehoods by narratives than any
other way. However, the distinctions have to be drawn in rather
different ways.
Stories become very boring if they can't embellish. As both a scientist
and a science fiction writer (who sometimes even gets paid for the
occasional story) I'm very aware of that. So I'm also aware that there
are important differences between different types of counterfactual
elements. Some are completely outside of both experience and
possibility, such as faster-than-light travel. Others are outside of
experience but seem possible, such as a bus jumping a sudden chasm in a
road.
And then there are those which are part of people's experience and
contradict it, such as jumping from the top of a six story building,
landing on one's feet and sprinting away.
The last category is not a big problem, because people generally know
from their own lives what the facts of the matter are. The first
category is not a big problem because it has no practical application.
It's the middle category which can cause real difficulties. In that case
it is /not/ simple to tell fact apart from fiction. The common
assumption that people know to suspend all belief in anything labeled
fiction is obvious nonsense. Nobody would have any interest in fiction
if they were sure it had no application to their lives whatsoever.
People pay attention to stories, in the broadest sense, because they
give them frameworks to integrate their own experiences. For the most
part, that's on the level of intangibles and is not a concern here, but
the same attitude applies to the whole of the story. If the story
implies that a car going very fast can escape the shock wave of an
explosion, most people have no personal experience telling them to doubt
that. In the case of the jumping bus mentioned earlier, who knows to
what extent movies showing similar stunts contribute to drivers trying
to jump potholes and crashing their cars?
The stories certainly don't help.
The thing that is so frustrating about those lapses in storytelling is
that they are completely unnecessary. They are based on nothing more
than laziness. One can write the story and include the real physics,
biology, or chemistry where it matters without changing the degree of
suspense or interest by one jot. All that's needed is a bit of research
into the facts. In the bus story, the first part of the road just needed
to be higher than the second. Then the front of the bus could start
falling, the way it would at real-world speeds, and (given a good stunt
driver) the chasm could have still been almost as wide as the bus'
wheelbase. The larger the media company, the less excuse they have. A
thousand biologists would have been glad to volunteer the information to
Paramount that the sentence "Oh my God! The DNA! It's degrading into
amino acids!" needed a minor change: "Oh my God! The DNA! It's degrading
into nucleic acids!" That error never killed anyone, but it adds to the
confusion around science /for nothing/.
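For what it's worth, the physics of that bus scene is a one-line
free-fall calculation. The speed and gap width below are invented movie
numbers, purely to show the scale involved:

    # How far does the front of a bus drop while crossing a gap?
    # Free fall: drop = 0.5 * g * t**2, with t = gap_width / speed.
    g = 9.8           # m/s^2
    speed = 30.0      # m/s (about 67 mph) -- assumed movie speed
    gap_width = 15.0  # m -- assumed chasm width

    t = gap_width / speed
    drop = 0.5 * g * t ** 2
    print(f"airborne {t:.2f} s; the front drops {drop:.2f} m")
    # Roughly 1.2 m: make the near side of the road that much higher
    # than the far side, and the bus lands level with real physics intact.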
From a writer's perspective, it's ironic that taking care about facts is
part of awareness and respect for the power of storytelling. Pretending
it doesn't matter is saying that nobody listens to that stuff. The
social good, however, is that stories are an important and effortless
way for people to learn. Entertainment can assist in that vital social
function in ways that don't interfere with its primary mission at all.
Consider another example. Medical emergencies in entertainment generally
leave the audience with the impression that the way to get help is to
push a few buttons on a phone and shout, "Ambulance! Now!" (Yes, yes, I
know. I embellish a bit for the sake of my story ….) That's also the
natural thing to do when desperate, so there's nothing in an
inexperienced audience's mind to suggest otherwise. The result is that
in real life valuable time is wasted while dispatchers get the
information they need. That really can kill people. Following the real
protocol would necessarily change the pace of the story at that point,
but it's a truly vital part of social education, and it doesn't actually
need to change the tension. It's the type of fact-based truth that needs
to be required so that storytellers without enough respect for their
craft don't spread toxic misinformation.
Being unreal is an important part of entertainment, and the line between
that and dangerous, useless untruth is hard to draw. The truth about the
facts of war, for instance, would make all but the grimmest war stories
impossible. The truth about the actual practice of science would make
"War and Peace" seem like a fast-paced action flick. The truth about
politics would be unwatchable. As before, where there's doubt, let
people speak out.
A very difficult case is unreality that shades into such a one-sided
presentation that it amounts to a propaganda piece for very destructive
attitudes. What should one do, for instance, about D. W. Griffith's
classic, "Birth of a Nation?" I really don't know. So if I were
deciding, I would have to allow it. Another example is entertainment
whose whole point seems to be violence. One such game or movie is just
stupid. A whole decades-long drumbeat of them is going to shape
attitudes. But at what point does one draw the line? Possibly a
requirement to depict the action and physical effects in accord with
real physics and biology would be enough to reduce the propaganda value.
The sanctions for dangerous lies in entertainment need to be different
from those in, for instance, news. Probably the most damning thing of all
would be a requirement to intercut a clip showing what was very wrong
about a scene. The story would come to a screeching halt, the audience
would be exposed to the issue, and there's probably every chance they'd
flip to something else entirely, which is the worst thing you can do to
a storyteller. The people behind the offending entertainment would be
required to fund production of the clip for those who successfully
objected to its stupidity. That would mean a pre-release
would have to be available to those who wanted to examine it for errors.
There's a remaining gray area I wouldn't know how to handle at all, and
yet which is in many ways the worst problem of all. (I discussed this in
an earlier piece, Weaponized Free Speech, from
which the following is taken.) I've been discussing the pernicious
effect of lies, but the bigger issue is not lies. The bigger issue is
what to do when free speech itself is the problem.
In a 2006 article by George Packer, Kilcullen, a counterinsurgency
expert, makes the point that
"when insurgents ambush an American convoy in Iraq, 'they’re not doing
that because they want to reduce the number of Humvees we have in Iraq
by one. They’re doing it because they want spectacular media footage of
a burning Humvee.'" The US government has also used events to shape
rather than inform thought. For one example, one need only count the
incidence of terror alerts in the year leading up to and the year
following the 2004 presidential election. Like mangled bodies around a
burning Humvee, these things aren’t lies. The pictures aren’t doctored;
the information leading to the alert may be genuine. And yet, their
purpose is not to tell the truth.
There is something deeply sinister about using freedom of speech to
cloud thinking instead of to clarify it. There’s a lethal virus in
there, somewhere, when free speech is used to steer people’s feelings in
ways that bypass their higher brain functions. And that's especially
true when those brain functions are bypassed to make it easier to kill
people.
What's the cure? Publicity is the point of weaponized speech, and yet
there is no way to say, “You can report on these stories, but not those
stories” without striking at the heart of free speech.
If censorship couldn’t work, it might seem that an alternative is to
make the voice of reason louder until it overmatches violence, but I
don’t see how. There is no symmetrical fix. There is no way for reason
to deliver a message that has the same punch as dead bodies. If it
tried, it would cease to be a voice of reason.
I don't see a solution so long as there are people willing to commit
violent crimes, violent acts, or wars, so long as there are people who
broadcast them, and so long as there are people who want to hear that
message. The “marketplace” of ideas only functions when nobody brings a
machine gun into it.
There are several main points to keep in mind on the subject of limiting
freedom of speech. It relies just as much on the presence of silence as
it does on avoiding silencing. When distinguishing signal from noise, in
order to know what to silence, any distinctions that can't be made
objectively must default to favoring the freedom to speak. There are,
however, many more distinctions that can be made than we make now,
primarily those which relate to matters of fact. Implementing
factual-truth-in-speech practices does not lead to censorship any more
than does the prohibition against yelling "Fire!" as a prank in crowded
theaters. Freedom of speech understood in a way that improves the signal
to noise ratio would make it easier to develop that informed citizenry
on which sustainable societies depend.
Summary
What are the conclusions from this discussion of rights? One is that
rights, to be rights at all and not privileges, must apply to everyone
equally. Two is that the rights themselves are not equal. To preserve
the maximum equality among people, it's essential to take the inequality
among rights into account when they conflict. It's also essential to
recognize that they will sometimes conflict, even in a theoretically
perfect system, because their relative importance can vary in different
circumstances and because people place different priorities on them.
The conflicts among rights need to be resolved in ways that are
explicitly directed toward preserving the maximum equality among people.
That requires two important balancing acts. One is explicit recognition
of which rights depend on others. The current implicit assumption that
rights are equal serves only those people who are more equal than
others. If there is explicit recognition of inequality, then primary
rights, such as freedom of movement or speech, can be given the
necessary precedence over dependent ones, such as freedom of religion.
Two is that conflicts need to be resolved in ways that do the least
overall damage to any of the rights. All of them need to be viable,
because they're all essential to some degree. Allowing any right to
become meaningless, which is what happens when one is automatically
demoted, opens the door to the erosion of all rights.
I want to stress that when I say "necessary precedence" for some rights
that does not mean exclusive or automatic precedence. It means that we
have to be very careful about preserving the linchpin rights, but it
does not mean that they always “win.” The essential point is the
/balance/, and that the different kinds of balance all have to happen at
the same time. It's rather like surfing, in which balancing for forward
motion and for not falling down both have to be done at once. On a
different level, it's the same problem we solve in order to walk. Both
require practice, but they're not impossible.
The point is that the balance in any given case depends on the situation
so it has to be decided on its merits. The goal is the same: not to fall
in the physical world, and to preserve maximum equality in the social
world. The balance that achieves the goal is always different. I'm not
trying to say that it would be easy, but I am saying it's possible. It's
also necessary. Pretending that a simple, rigid system can handle the
complex and fluid dilemmas of people's rights may be decisive, but it
achieves the wrong goal. The Queen in Alice in Wonderland
was wonderfully decisive about demanding people's heads, but it didn't
do much for running the country. In Alice's Wonderland that didn't
matter. In the real world, we wind up having to deal with the inevitable
and nasty consequences.
+ + +
Environment
Defining the Problem
The government's role in environmental issues is currently seen as
something like neighborhood code compliance. It should stop people from
littering and make sure the place looks nice. It's handled very much as
an afterthought, as something far removed from the core functions of
defense, law, and order. Government is supposed to regulate the
interactions of people. The environment is just where we live. But with
technology comes increased power. People can change the environment to
the extent of destroying livelihoods and lives. Suddenly, the
environment is a massive way people affect people. Environmental
regulation has gone from being an afterthought against littering to
being a core function of government.
If rights are the essential limits that enable free societies, and if
they apply equally to everyone, preserving the environment is the first,
not the last, function of government. After all, given that people's
activities have effects on the physical world, then equality among
people means that everyone has the same right to affect the environment.
Imagine if every man, woman, and child on the planet polluted as much as
the worst offenders. What we've done is give a few people the privilege
to pollute and use resources out of all proportion by virtue of having
been first or having more money. There is not one shred of fairness in
that, and the consequences for the livability of the whole Earth are
just what you would expect from gross injustice. Environmental issues
are a fundamental component of social justice. One could say they are
the fundamental component since without a viable physical environment
none of the other rights have any meaning.
It's intimidating, or discouraging, or both, that the very first field
of activity under consideration requires a realignment of everything we
do if the principle of equality is actually applied. The assumption of
equality has become so ingrained that evidence to the contrary can be
hard to grasp. The truth is that we have miles to go.
If everyone can pollute and use resources equally, then it follows that
social justice requires completely clean, sustainable, and renewable
agriculture, technology, and industry. The same holds for individual
activities. There's no question that we're so far away from justice that
it would take time to reach it. The transition would have to be measured
and planned if it weren't to create more suffering than it solved. And
there's no question on a practical level that it's a real challenge,
considering our current technologies, to move from burning through our
planet toward living on it.
Facts (Not Decisions)
One problem continually crops up in this technological age: the
distinction between facts and decisions is not clear enough. They are
not the same thing. Treating them as interchangeable causes problems
instead of solving them. A decision can't generate a fact, or vice
versa, and yet people seem to expect discussions to generate evidence
and more study to indicate a course of action. Looking for things in the
wrong place is a sure way not to find them.
There's also a lack of clarity about the distinction between facts and
ideas. In the wonderful words of Moynihan, you are entitled to your own
opinions, but you're not entitled to your own facts. Yet it's common to
hear people say they don't "believe in" a given problem, and then to
expect their belief to be accommodated, as if it really was a religion
that deserved protection. I discussed issues involved in delimiting
objectively verifiable facts (as opposed to truth) at greater length in
the Free Speech vs. Noise
section of the Rights chapter.
The main point here is that we do have an effective way of finding
objectively verifiable facts. It's called the scientific method. Not
using science, and acting as if the facts can emerge from a political
discussion, is a waste of time. It's as silly as arguing about whether
the planet is round. It is not a mark of balance, seriousness, or
complexity to do that. It's a way of showing one doesn't understand the
topic. A discussion of environmental issues has to encompass the
complexity of competing interests, but that does not include ignoring
the available evidence.
It would, of course, be a departure to require a government to pay
attention to the facts, but it's necessary for real equality. Without an
explicit requirement to get and use the evidence, we /are/ entitling
some people to their own facts. Lack of access to information, whether
because it's actively hidden or poorly presented, deprives people of
their single best tool to make good decisions. Then they can plausibly
be told to leave it to the experts. But worst of all, obscuring the
facts is the least visible and thus most effective weapon of those with
a hidden agenda. It's the primary tool of those who want to take more
than their fair share. Having the facts and giving them their proper
role is not a minor issue. It's central to a fair government.
I have to go off on a tangent here, since I've come down solidly on the
boring side of the debate about whether facts are out there or something
we shape inside our heads. In any practical sense, facts are objective …
/facts/ and we have to adapt to them. But there are a number of gray
areas on the path toward figuring out what those facts are.
First, if we rely on science, it's essential that scientists ask the
questions that need answers. The very questions being asked can steer
thought on a subject. That's a huge, almost invisible issue precisely
because it's hard even to see the questions that aren't being asked. (I
touch upon it elsewhere.)
It's beyond the scope of a discussion on the environment to explore that
particular philosophical point, but it does have to be addressed
effectively in any system of government that would like to be
reality-based. And that won't be easy because, among other things, it
implies changes in the reward structure of academe. (Discussed briefly
in the Education chapter.)
Further, even though they may know which questions need asking,
scientists can easily be herded into avoiding topics by depriving them
of funding. Awareness of these background issues is essential in order
to neutralize them, not as a reason to deny the usefulness of science.
It's a reason to make sure there are no pressures channeling the
questions scientists may ask.
Once science has spoken on an issue, the only problem for non-scientists
is figuring out which experts to believe. It's not simple for
non-specialists to evaluate technical evidence, nor should they have to.
That's what the specialists are for. As a scientist myself, I think
there's a rather easy answer to this dilemma. It may not be the best
one, but it's a start.
What we should do is measure the degree of agreement among the
specialists, who /can/ evaluate the evidence. And we should measure that
agreement not by a poll, because scientists are notorious for waffling,
but by tracking the conclusions in peer-reviewed journals. Sample size
might be a problem for insufficiently studied questions, but the burning
issues of the day generally receive a good bit of attention. Then all
one has to do is decide which level of agreement among scientists is
significant. Since we're dealing with human beings, perhaps it would be
appropriate to take the level social scientists use: 90%.
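As a minimal sketch of what such a tally might look like, in Python (the
paper records are invented placeholders, and the small-sample cutoff is
my own guess):

    # Tally the conclusions of peer-reviewed papers on a single question
    # and compare the level of agreement against the 90% standard above.
    papers = [
        {"id": "paper-001", "conclusion": "supports"},
        {"id": "paper-002", "conclusion": "supports"},
        {"id": "paper-003", "conclusion": "contradicts"},
        # ...one record per peer-reviewed paper on the question
    ]

    MIN_PAPERS = 30   # below this, call the question insufficiently studied
    THRESHOLD = 0.90  # the social-science significance level suggested above

    supporting = sum(1 for p in papers if p["conclusion"] == "supports")
    agreement = supporting / len(papers)

    if len(papers) < MIN_PAPERS:
        print("sample too small: more study needed before calling it settled")
    elif agreement >= THRESHOLD:
        print(f"{agreement:.0%} agreement: treat the issue as settled")
    else:
        print(f"{agreement:.0%} agreement: a genuinely open question")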
It's not unknown for a scientific consensus to be overturned, and it's
great fun to point out the few examples (such as phlogiston and land
bridges), but it
is very uncommon. New discoveries happen all the time, and as consensus
develops it's refined by more and better data, but those are both
different from being flat wrong. Conclusions that stand the test of time
while subjected to the scientific method are rarely flat-out wrong.
Following better-than-90% agreement among scientists is a much safer bet
than most of the bets we take in life. When the vast majority of
scientists agree, we can take it that the evidence is solid and the
issue in question is settled, even if we don't understand how they got
there. It's the same as accepting that the law of gravity is not in
serious dispute, even though most of us don't understand the physics
involved. (Up on their own high level, that includes the physicists
looking for the Higgs boson, come to think of it.) Just because, before
Newton, people had a different explanation for the lack of levitation
doesn't mean that different "beliefs" about gravity must be entertained.
Presenting the right information is only half the battle. Not presenting
the wrong information is perhaps a bigger and harder task. I'm not sure
that it's even theoretically possible when news is for-profit. Sensation
will always make a quicker buck than education. And yet, when there's
something as sensationally dangerous as global warming, the "news" media
can't be bothered to discuss context or consequences. The structural
fault may be that entertainment ceases to be entertaining when it
reminds viewers of real problems, and people generally would rather be
entertained than bothered. In a for-profit environment, "news" isn't
just a minor irritant obstructing knowledge. It may well inevitably
become the antithesis of understanding.
That is a non-trivial problem when fairness requires accessible
information. It's another case of conflicting rights, one where it's
foolish to hope free speech can solve the problem of costly speech. I've
tried to come up with some ideas about how to approach a fact-checking,
truth-in-speech goal, which I mentioned in the Free Speech vs. Noise
section. Whatever methods prove most effective, the important point is
that there do need to be standards for what constitutes actual news.
People have no way of knowing what's true in unfamiliar topics, and it's
essential to make it easy for everyone to distinguish fantasy from
reality. It's also essential to prevent silencing of any opinions and
alternate interpretations. It's a difficult balancing act. The rules are
bound to be imperfect and in constant need of improvement, but there do
have to be truth-in-speech rules.
So the first step in dealing with environmental issues is to get the
evidence, the actual evidence, not the evidence according to anyone who
stands to make money by the result. None of that can be a political
process and give reality-based results. Then one tallies the weight of
the evidence. It's not hard to do and specialists in government and
watchdog groups could provide that information. I'll give a few examples
of a very informal tally of the evidence in the controversies about
genetically engineered food, nanoparticles, and global warming.
The popular concern about engineered food, embodied in the colorful term
"frankenfoods," is of some kind of genetic damage from these mutated
organisms. That would require genes in frankenfoods to infect humans
through a process called lateral gene transfer. That's never been
observed between large organisms like corn or cows and humans. Transfer
of genetic material from bacteria to humans has occurred and it's been
studied (e.g., Science, 2001). It's a
rarer event than, say, a large meteor strike. A search of the literature
allows one to say there is well over 99% agreement that genetic damage
to humans is not the problem.
On the other hand, if you search for ecological damage from lateral gene
transfer, allergic reactions to modified proteins, or nutritional damage
to humans due to the poor farming practices facilitated by the
genetically modified organism, then it's a different story. (The
nutritional damage is due, I suspect, to the fact that about 75% of
genetically engineered crops involve genes for RoundUp, or glyphosate,
resistance. At least it did the last time I looked, around 2006. In that
case, crops are engineered to resist large doses of herbicide so that
very high doses can be applied. That reduces weed competition until
resistance ratchets up. For a while, yields are higher. Obviously, using
vast quantities of RoundUp is not conducive to either nutritional
quality or ecological health.) I haven't tallied the papers to know what
the level of consensus is, but even a cursory glance shows that there's
evidence of a problem, enough to make it worth looking into.
A more difficult example is nanoparticles. There's plenty of research
showing that wonderful and amazing things are just around the corner, or
even already here, because of nanotechnology. That's great, and we're
gearing up to pursue it in the usual way. No holds barred. Meanwhile,
there are disquieting rumbles in the biomedical research literature that
the particles are the right size to interact with cells, and that some
of them may even be carcinogenic in much the same way as
asbestos fibers. It's a new field, and without a review of the
literature I couldn't tell you whether there's an emerging consensus on
this. What is clear is that there is some danger here. The question then
becomes what to do about it. More study? Mandating better containment of
nanoparticles preventively (and expensively)? Halting all use of
nanotechnology? Decisions about future action are choices, not facts,
and have to be approached differently, as I'll discuss in a moment.
Or take yet one more topical example: global warming. Far over 90% of
climatologists say the data show our activities are warming the planet
far beyond the averages seen in hundreds of thousands of years (e.g.,
Nature). For those who want more, there are published summaries of the
data and a recent article showing how feedback loops and thermal storage
take the situation beyond
our ability to influence. With that kind of consensus, the issue is
settled (and has been for years). Global warming is right up there with
other facts established well beyond the 90% consensus standard, such as
the harmfulness of smog, the carrying capacity of given ecologies,
evolution, and reduced lifespan from smoking.
Once the fact-based step is done, the next one is deciding what to do
about it. Science may be the best tool for figuring out if there's a
problem and for finding possible solutions. But it can /not/ make the
decision on which solution to use. That involves making choices, not
discovering facts. It's foolish to think a friendly discussion could
find a fact, and it's equally foolish to think science can find a
decision. A decision will never emerge from more study. That will only
produce more facts. (At best.) Facts inform a decision, but they can't
make it, and pretending otherwise is just a way of taking decisions out
of our hands and giving them to interested parties or the inexorable
march of events.
Costs and Choices
Before going into what equitable decision-making might look like, it's
vital to recognize the role of costs in making a decision.
There's a funny assumption that runs through the thinking about
environmental costs. It's embodied, for instance, in the polls asking
people what they're willing to spend on, say, air quality. The answers
are generally on the order of fifty cents a day, and this is then taken
in all seriousness by economists as a starting point for discussing
realistic tax costs of environmental policies. It's as if they truly
believed in a free lunch.
There is no free lunch. The choice is not between paying fifty cents or
nothing. The no-cost scenario does not exist. The money will always be
spent. It may be spent directly on clean air or indirectly on disease,
reduced food production, or a hundred other things. In all cases,
there's a cost.
When it's stated plainly like that, nobody denies it, and yet it's
always waved away as being irrelevant in something called the real
world. That makes no sense. The real world is the one without a free
lunch. Perpetual motion machines are the fantasy. It doesn't matter how
abstruse the downstream math is, the machine will not work. Everybody
knows that, too. Pretending otherwise, however, lets the unspoken gamble
continue. It's the bet that someone else will pay the price while we get
the goods. That's what it's really all about.
Without digging very far, people will admit that's what they're doing.
Sometimes they even put it on bumper stickers. "Nuke their ass. Steal
their gas" for example. Or talk to someone about downstream costs, and
the response is liable to be, "In the long run, we're all dead." No,
actually, we aren't, unless the whole human race goes extinct. What
they're really saying is that other people's grandchildren don't matter.
(Their own will presumably somehow be immune.)
If there were some way to ensure that all the costs could be passed to
powerless people, then at least from a biological standpoint the damage
would be limited and the system wouldn't necessarily collapse. But the
most voiceless group of all is future people, and the big problem with
the future is that it doesn't stay put. It becomes now. We're becoming
the people who'll pay the price even though we won't get the benefits
because those were already taken. At that point, it ceases to look like
such a bright idea.
So, since what I'm trying to do here is delineate a fair system, the
first point to make about costs is that they are never zero. We don't
get to decide whether to spend money or not. We only get to decide when
to spend it and how high a price to pay. Paying early for intelligent
intervention is always the cheapest solution. Letting events happen and
then trying to recover from disaster is always the most expensive. In
between, there's a range of relationships between prevention and price.
Somebody always pays and, in fairness, that should be the people who
incur the costs.
To go back to the clean air example, the real question is not "Do you
want to pay for healthy air?" It's "Would you rather pay for healthy air
or for dirty air?" Nor is it, "How much do you want to pay?" It's "Which
choice, with its associated price tag, do you take?" In a fair system,
the costs of any choice have to be attached to that choice. If all the
pluses and minuses are explicit up front, the optimum choice is often
easy to see.
That obvious method of pricing isn't already applied for the simple
reason that industries are not currently regulated that way. Governments
allow them to pass costs on to people and to future generations.
Accountants then label those costs "external" to the companies' own
balance sheets and they don't book them. In many ways it's astounding
that, when you come right down to it, an accounting rule is the proximate
cause of our environmental disasters. However, unlike the environment
itself, regulations can be given a do-over.
I'd like to stress that it is possible to make estimates of costs,
including downstream medical and ecological costs. Even if it's nothing
but a minimum and maximum boundary, it gives a pointer to the order of
magnitude involved. The newer and less known the technology, the more
the estimates will need refinement as data become available. But that
does not affect the vast majority of estimates for the simple reason
that the bulk of technology is necessarily not new. There are always
data available to estimate most risks, costs, and benefits. When there
is over 90% agreement on what the data show, then that belongs in
government, voter, and news information. There's also nothing except
special interests to stop those estimates from being clearly presented
in ways that can be grasped at a glance, and that can link to layers of
medium and complete details for interested voters and specialists.
Decisions (Not Facts)
Finding and presenting the facts may not be simple, but making the right
decisions is even harder. The pressure from interested parties ratchets
up, which makes the essential task of ignoring them that much tougher.
The general steps to a decision would be similar to the official process
now. Start with gathering and providing information, follow that with a
call for comments, a ruling, and possibly appeals. However, fairness
requires much higher transparency and accountability. It also requires
that the fair and open process results in the actual rulings instead of
being a way to keep people busy while the real decisions are made elsewhere.
Environmental decisions are a special case of decision-making generally,
which I'll discuss in the first Government
chapter. Decisions can be grouped into three quite different types.
The most far-reaching kind should perhaps be called unexamined
assumptions rather than decisions. They're orientations more than
conscious thoughts and people can dislike examining them, but that
doesn't stop them from influencing choices and outcomes. For instance,
the issue of when to take action is probably most acute for
environmental issues. There's a big difference in costs and benefits
depending on whether one chooses prevention or cure, and since it deals
with the physical world, it's not usually possible to start over after
making a mistake. Absolute prevention, that is, doing nothing unless
it's first shown to be harmless, would mean real safety. It would also
mean that nobody could ever make a move, because everything, including
sitting in a room and growing fat, has some level of harm or risk
associated with it. The opposite approach, waiting for problems to
happen and then curing them, allows a great deal of room for initiative,
but it also puts us in our current interesting situation where we get to
find out if any of the disasters actually have a cure. The balance of
risk-taking — and the associated danger of disaster and benefits of
innovation — is a community decision. One could almost say a cultural
decision.
On a less lofty level are broad issues of policy. For instance, which
mix of power sources to use in a given area if wind and tidal are
feasible as well as solar. Broad policies are more a matter of
priorities than facts, except perhaps at the extremes, and are
appropriately decided by consensus processes, such as voting.
The vast majority of decisions are specific: what to do in a given
situation to get a given outcome. These require knowledge and constant
attention to detail; in other words, they require professionals. Making
voters decide specific questions results in voter fatigue and, at best,
near-random choices based on anything but the boring facts. Responsible
officials with the necessary background need to be appointed to make
these decisions. I mean that adjective "responsible" literally.
Officials who make decisions against the evidence that lead to
predictable bad outcomes should be held personally responsible for them.
All types of decisions, if they're to be rational and equitable, depend
on transparency in the whole process, explicit criteria, and the right
to appeal.
Transparency means more than making details available. The experts of
obfuscation have figured out how to deal with the requirement to provide
details. They drown people in data. Providing all the data is essential,
but transparency also means well-summarized data, graphically presented
in the clearest way possible. It means /cognitively/, not just
physically, accessible information. It also means that every step of the
process, the facts, deliberations, rationales, and outcomes, should be
clear.
In one respect, though, there could be less transparency than we have
now without losing accountability. So far, the rules about recording
every word and note don't seem to serve much purpose. Some quite
spectacular corruption has festered right next to those rules, flouting
them effortlessly. When recordkeeping does prevent corruption, then it
must be required. But theatre should be reserved for the arts.
Explicit criteria in a fair system would be simple. Double standards
should not be in evidence at any point. Given that criterion, least
/total/ cost is the yardstick to differentiate among choices. That's not
to say that least total cost has some sort of moral superiority in
itself or that it's always the best yardstick. It applies in this case
because a government is, in effect, a way of deciding how to spend other
people's money, and the only right answer after the demands of fairness
are satisfied is that as little as possible should be wasted.
Total cost is only easy to discuss, not easy to determine. It's really
short for "the sum of tangible and intangible costs plus the community's
priorites." Some of the costs and benefits will have an obvious price
tag. Others, like how much the community cares about visual pollution
versus product price may be very hard to pin down. Total cost is
necessarily nebulous. However, nebulosity in and of itself doesn't stop
businesses from coming up with an estimated price. Few things could be
more intangible than good will, yet it's regularly estimated in
accounting. Costing such things out doesn't require any conceptual
breakthroughs. It's only some of them, the ones which reduce profit,
that are supposedly too hard to handle. And that, as I've said several
times, is just an excuse to pass the hidden costs on to someone else
with no voice in the matter.
While I'm on the subject of intangibles, I'd like to make the point
again that there is no way to avoid dealing with them. Estimates are
necessarily not perfect, and they always need refining as more data come
in. People also disagree about the very premises of the estimates since
priorities vary. However, none of that is a sufficient excuse for
pretending that things like future costs or quality of life don't exist.
Ignoring them doesn't make them go away. All it's done is take away our
control over the process.
No matter how nebulous important intangibles are, some estimate of them
is better than none. The way to mitigate the associated uncertainties is
not ignorance but clarity about the assumptions and data behind the
estimates. The probable range of estimates, not just the best guess,
must be explicit. When the assumptions and their implications are
clearly stated, people can make their own adjustments based on their own
priorities. But once all that has been done, once we've given the
intangibles our best guess and made clear what that guess is based on,
then we have to go with it. As I said earlier, refusing
to consider significant complicating factors because they make the
question messy is a sign of avoiding real answers.
The right of appeal, which provides a check on arbitrary or corrupt
decisions, is important to keep feedback loops short. Different kinds of
appeal are appropriate for different deficiencies. If a decision ignored
vital facts, it needs review by experts and a legal process to sustain
or overturn it. That would be true whether the disputed decision was
made by officials or voters. Majority vote can't decide facts. A
discussion of the legal process is in the first Government chapter.
If the deficiency was in not holding a vote when it would have been
appropriate, or in the voting process itself — for instance, not making
all relevant facts available — then the petition would be for a vote on
the topic. That process is discussed in the section on voting.
Since the idea is for it to be a real check, not just a means of
harassment for people with vested interests, some level of general
support needs to be required as discussed in that section.
The argument against a decision has to be based on the same goal,
fairness and least total cost, and it would have to show where and why
the decision taken, whether by officials or voters, missed the optimum.
The suit wouldn't necessarily have to be heard if careful consideration
indicated that the people complaining did not have a reasonable
interpretation of the facts on their side. That decision could itself be
appealed some limited number of times. If they did have a case, however,
and if they prevailed, then the decision would be nullified and the
official who made it, especially if it wasn't the first such error,
would be that much closer to losing their job.
That brings me to the officials, the actual people whose job it is to
implement regulation when it will do the most good for the least cost.
It takes a great deal of expertise to recognize those points. When the
process is done right, it will look to the general public like nothing
has happened. There won't be any disasters that need fixing. They will
have all been prevented. And yet innovation will never be hampered.
Back in the real world, there will always be officials who are less
omniscient and who need to be prodded in the right direction. Methods to
that end are discussed under Oversight.
The officials who need so much prodding that they're more trouble than
they're worth need to find other lines of work. The definition of that
point at which officials need to be fired might vary a bit across
communities, but there needs to be an explicit point. Otherwise the
tendency is always to excuse bad behavior from those in charge.
Last, there needs to be recourse against those who've been negligent,
incompetent, or criminal. Which brings me to the next section on what to
do about problems.
Problems and Penalties
Human factors cause two very different obstacles to progress on
environmental issues.
The first is that pursuing perfect environmental policies too
singlemindedly would cause more hardship than it prevents. There's a gap
between what's right and what's attainable because we've let the
distance grow too big. It's one more case where rights have to be
balanced to achieve fairness. With time, if we moved toward
sustainability, this obstacle would evaporate.
The other one, however, would not. We'll always have people who try to
trash the environment for personal gain. Selfishness doesn't disappear
in the face of fairness. It's merely contained.
Each of these obstacles requires its own approach. Going slower than
one would like out of consideration for the people involved may be
necessary in the first case, but is toxic in the second one. I'll talk
about mitigating some of the transition problems first, and then discuss
the so far unlabelled class of crimes against the environment.
A big issue whenever there's talk of balancing competing interests is
that before you know it, all the balance winds up on the side of all the
money. It's too easy to use the gap between what's right and what's
possible as an excuse to slide into an ever-diminishing definition of
"possible." As in other situations when the easy way out is to avoid
conflict with vested interests, it's essential to explicitly and
consciously compensate for the effect of power on decision making. The
only valid reason to go slow is to spread the hardship of transition for
the majority of people over enough time so that there's an optimum
between achieving a sustainable system and the cost and effort required.
It's about the greatest good of the greatest number, not the greatest
good of the biggest pile of money.
Even when the ideal is not yet attainable, some ways of getting there
may be better than others. I'll state the obvious: for any damaging
industry, the solution in a rational world is to reduce the damage to
the minimum, to make sure that no people are hurt by it, and to reduce
need for non-renewables by every possible recycling method.
One way of reducing damage is to switch from methods that create
hard-to-handle diffuse pollution, and instead use processes that cause
more easily controlled point source pollution. For instance, pollution
from isolated large power plants is easier to contain than that from
millions of vehicles. Least total cost solutions would concentrate
pollution to its most controllable minimum and then make sure it does
not escape into the environment at large.
Materials research that helps find renewable substitutes would be
promoted. That research is also part of the price of using
non-renewables. In a far future time, space-based industry might make
finite resources functionally infinite. (That's a boring and practical
reason to continue exploring space, besides the much more fun one of
boldly going where no one has gone before.)
The cost of doing what is needed to cancel out the damage from damaging
industries is simply the cost of doing business. Putting those costs on
the books where they belong would mean that prices contained real cost
information, which is always said to be desirable for markets. Fair
pricing would also make it much easier to evaluate alternatives as
they're developed.
If some damage is allowed in the interests of making transition costs
less painful, then there has to be an explicit floor below which more
damage is unacceptable. That goes back to the point that paying nothing
is not a real option. That floor has to be compatible with improving
sustainability and needs to meet standards of scientific validity.
Communities could move toward sustainability faster, if they felt able
to handle the cost, but they couldn't move slower. It's eminently clear
by now that hyperslow motion toward a goal is just another way of
avoiding it.
A minimum rate of progress toward sustainability means doing better than
running in place, but determining that rate is the same type of problem
as pinning down other environmental issues. The minimum rate has to be
estimated because we can't have perfect knowledge that tells us exactly
what the limits are. But the one thing a minimum rate of progress does
not mean is pretending there's no answer merely because there's no
precise one.
An example of one possible way of determining limits is to commission
studies, say three of them, from specialists who represent the range of
ideas in that particular field. Those three studies, the "no-limits,"
"strict limits," and "middle-of-the-road" ones, could then be put out
for widespread, double-blind peer review and comment. The study (or
studies) judged to have the most robust methods and conclusions would
then be used to set policy. If two of them are in a tie, a midpoint
between them could be used. And then, since our knowledge is necessarily
approximate, the decisions should be reviewed by the same process every
so often, maybe every ten years, like the Census. It probably goes
without saying that such a method is not perfect and should be improved,
but I hope it also goes without saying that it would be a lot better
than what we have now. Almost anything would be.
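For concreteness, here is a minimal Python sketch of that tie rule. The
study names, recommended limits, and review scores are all hypothetical,
and a real review would produce something richer than a single score.

    # A toy sketch of the selection rule described above: adopt the
    # limit from the study judged most robust in double-blind review;
    # if two studies tie, use the midpoint of their recommended limits.
    # All names, limits, and scores below are hypothetical.

    studies = {
        "no-limits": (9.0, 6.1),          # (recommended limit, score)
        "middle-of-the-road": (5.0, 8.4),
        "strict-limits": (2.0, 8.4),
    }

    best_score = max(score for _, score in studies.values())
    winners = [limit for limit, score in studies.values()
               if score == best_score]

    # A single winner contributes its own limit; a tie averages them.
    policy_limit = sum(winners) / len(winners)
    print(policy_limit)  # 3.5 with these made-up scores

The point is only that the rule itself is mechanical once the reviews
are in; all the hard work is in the reviewing.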
Limits mean different things in different industries because there's
varying potential to approach the ideal. There's no reason why,
eventually, agriculture couldn't be a non-polluting, sustainable
industry. There's no reason why energy production couldn't come very
close to that ideal. At the other end of the scale, there is no way for
an extractive industry to be theoretically sustainable (barring
faster-than-light travel, I suppose) or completely non-polluting. And,
although it's possible that in some far future time civilization will
cease to need metals, I don't see that happening soon no matter how fair
a society is.
On various levels geared to their own requirements, industries will
always need practicality adjustments to the ideal of purity and
sustainability. That departure from the ideal incurs costs to compensate
for the damage. The important point is that the expense of those
adjustments should be paid when they're made, and not dumped on a future
that has no vote in the matter. If all costs are included, the tendency
to use up the planet will be de-incentivized, to use business's own
doublespeak.
In a bit of a class by itself is individual pollution, things like
backyard barbecues, household chemicals, and batteries thrown into the
trash. This is where people's desire for a healthy environment meets not
only the cost but also the inconvenience. It's not just some faceless
fat cat who has to shape up. There tends to be less action on this front
than one might expect from the widespread support for "green" issues.
The reason for the recalcitrance comes from the other sticking point
with regard to individual pollution. It's a similar problem to mass
transit. There is no way for an individual to have mass transit, and
there is no way for an individual to clean the whole environment. Only a
social solution is possible to a social problem. That makes it very
discouraging when there's no social direction behind those goals.
However, when there is, then individual action fits that paradigm. In
places with excellent mass transit, such as Holland, it's a significant
way of moving people, with all the attendant social benefits. Likewise,
when the whole society makes it simple to recycle, to keep houses clean
without violent toxins, and so on, individual pollution might well cease
to exist. People don't contribute to pollution and resource depletion
because it makes them feel good. We do it because we have no other
sufficiently easy choice.
Individuals do have a massive effect on the environment in another way,
one that isn't usually classed with environmental issues. Without
effective and universal policies of sustainability, the number of people
on the planet is the single biggest determinant of how much damage we
do. Influencing the number of children people have cuts across so many
rights, freedoms, privileges, and issues that it needs a chapter of its
own. I'd just
like to stress here that population is, obviously, an environmental
issue. If one admits the principle that we don't have the right to ruin
the environment for each other, then it applies to individuals just as
it does to industry.
Those were some of the human problems associated with environmental
issues. The last issue is human crimes and penalties.
Punishments are supposed to be proportional to the crime committed, so
the first question is where does environmental crime fit in the scheme
of things? Is it a property crime? A crime against humanity? Something
else entirely? Dumping a load of packing crates in an empty field is an
environmental crime, but it's pretty much down there with littering. It
would hardly qualify as the crime of the century. Closer to the other
end of the scale is something like the Chernobyl nuclear reactor
accident. Sickening thousands of people, killing some of them, causing
birth defects, and making a whole section of our planet a no-man's land
does qualify as a new and awful kind of crime. It's hard to even imagine
what would be the appropriate penalties for something like that. There
is no way to recover what was lost, and any punishment is insignificant
in the face of the suffering caused.
We're pretty clear on the minor nature of the first type of
environmental crime. But the kind which causes physical harm to people
doesn't seem to be on the legal radar yet. The kind that murders a whole
species doesn't even have a name. Environmental crimes are all punished
as if they're a harmless white-collar infraction of some arcane legalism
about property. Not all of them are. Some are wholesale robbery,
assault, poisoning and murder. The fact that they're bigger than
ordinary crimes does not make them better. It makes them worse. They
need to be punished accordingly both for the sake of justice, and to
help those who are slow to understand what's at stake.
One problem peculiar to environmental crime is figuring out who to
punish. In the Chernobyl example, is it the people who decided to build
a reactor? The ones who decided to build that type of reactor? The ones
who decided that running without maximum safeguards was worth the risk?
And who are all these people? It's one of the hallmarks of modern
industrial society. No one is responsible.
That's the primary change needed in law and custom. The people holding
the final word on decisions affecting the environment have to be held
personally responsible for those decisions. Right now, there's so little
sense of the seriousness of the crime that the same people who committed
it can generally retain their jobs as if it was just some accident that
happened on their watch. In fairness, responsibility can't be palmed off
on a legal fiction of corporate entities without any actual /corpus/.
Actual human beings with the actual power have to hold the actual
responsibility. Those are the same rules that apply to everyone else. If
people were less deafened by the call to power, it would be considered
absurd to jail shoplifters while putting decision-makers in an
accountability-free zone.
Environmental issues are perhaps the most important area where the
feedback loop between those making the decisions and those dealing with
their consequences has to be so short that there's no difference between
the two sets of people. A system that makes it simple for small voices
to call big ones to account should help to reach that goal.
Restitution is always a factor in environmental cases. Recovery always
costs money, and that money should come first from the pockets of those
responsible for the problem. It doesn't matter if the scale of the costs
makes any individual fortune a drop in the proverbial bucket. No assets
controlled by a responsible party should be off-limits; there can be no
sheltering of funds from the consequences of environmental misdeeds.
Fairness requires that.
The bottom line is that the environment is the precondition for every
aspect of quality of life, and for life itself. Crimes against life have
to be treated with the same seriousness whether they're small- or
large-scale.
Costs of Transition
How to get from where we are, here on the brink of ecological disaster,
to the promised land of sustainability is, of course, the burning
question. As a matter of method rather than principle, it belongs to the
purview of economists and specialists in the relevant sciences, but it's
such a barrier to action in many people's minds that it needs to be
discussed if the principles are to have any plausibility.
Money is the only stumbling block, but it's enough. Everybody would like
to live in a non-polluting, sustainable society. We just don't want it
to cost anything. Since that's what we're afraid of, let's face it
squarely. How much would the transition cost?
There are numbers available on the cost of switching to renewable energy
economies. For instance, the IPCC 2007 Report on climate change (p. 21)
points out, "In 2050, global average macro-economic costs for
mitigation towards stabilisation between 710 and 445ppm CO_2 eq are
between a 1% gain and 5.5% decrease of global GDP. This corresponds to
slowing average annual global GDP growth by less than 0.12 percentage
points." After a thorough summary
with links to dozens of original sources, another estimate is a cost of
0.11% of GDP per year to stay below 450ppm.
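Those two framings of the cost are consistent with each other, as a
quick compound-growth check shows. In the Python sketch below, the 3%
baseline growth rate and the 43-year horizon are my own assumptions for
illustration; only the 0.12 percentage point figure comes from the report.

    # A rough consistency check of the figures quoted above: what does
    # slowing annual GDP growth by 0.12 percentage points amount to by
    # 2050? The 3% baseline growth rate and the 43-year horizon
    # (2007 through 2050) are assumptions for illustration only.

    baseline_growth = 0.03
    slowdown = 0.0012        # 0.12 percentage points
    years = 43

    baseline = (1 + baseline_growth) ** years
    mitigated = (1 + baseline_growth - slowdown) ** years
    shortfall = 1 - mitigated / baseline
    print(f"GDP in 2050 about {shortfall:.1%} lower")
    # about 4.9% lower, in line with the "5.5% decrease" upper bound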
Higher estimates come from Stern's landmark 2006 review of the economics
of climate change. (Wikipedia has a summary.)
His estimate is that 1% of global GDP is the cost of averting "the worst
effects" of climate change, usually pegged at 450ppm CO_2 eq, although
recent data indicates a number closer to 400ppm or even lower. Note that
his target is at the lowest end of the range studied by IPCC 2007, so
implementing it is bound to be more expensive.
That brings up the question of what the scientifically valid target
should be. There are plenty of indications that feedback loops have been
insufficiently included in climate change models, and that therefore the
situation is much more dire even than most predictions so far (as of
2009). According to the grand old scientist of climate change, James
Hansen, we'd have to return to a level below 350 ppm to avert
unacceptable (read impoverishing and lethal) warming. (An explanation of
the relationships between CO_2 ppm and temperature increases is here.)
The take-home message is that aiming for less than 450ppm CO_2 eq is not
excessive or idealistic. It's on the optimistic side of what it will
take to avert disaster. And since averting disaster is the minimum
acceptable course, whatever it takes to achieve that is the lowest cost
fairness allows, even if it's more expensive than anyone likes.
Even more interesting, in some ways, is Stern's estimate of what it will
cost to do nothing. Twenty percent of GDP. Again: /20%/ of GDP. That
kind of reduction in GDP was last seen in the US in the Great
Depression, when there was over 25% unemployment, tent cities, hunger,
and beggared rich people committing suicide. The comfortable people
became uncomfortable, and found themselves suddenly among those who
don't matter. That is the cost of doing nothing.
Before continuing, I'd like to mention briefly a couple of the
criticisms from some economists leveled at any report that implies the
cost of transition is feasible. One is that they review the work in
question and point out that it underestimates costs. When there's
validity to the objection, it's generally because the review takes time.
When about a year has passed between the review and the original study,
that year of doing nothing has already raised the price of mitigating
the increasingly bad situation. It's not so much that the estimate was
wrong as that such reviews prove how quickly inaction raises the price.
The other objection tends to be that the price of doing something is
totally unrealistic. The economists' explanation assumes that the amount
of money people save in current economies is indicative of what they're
able to spend on the costs of transition. Since the percentage saved is
less than the estimated costs of transition, the transition is supposed
to be unaffordable. They dress it up by using mathematical models and
their own definitions of common and uncommon words, but the whole effort
fails because garbage in always means garbage out. This is the old
notion that we can choose to pay nothing. We can't. Saying, "But I don't
want to pay anything!" makes no difference, even when you say it in
mathematics.
The choice is paying 20% (or more) by doing nothing or paying less to
prevent disaster. Whether the "less" is 1%, 2%, or even 5% is not the
point. The point is that it is /much less/. Even the critics can see that.
There is also some evidence that costs may actually be lower than
expected. As I discussed in an earlier post,
the cost estimate does not include the benefits to GDP from local
business opportunities, improved property values, reduced need for
environmental mitigation, and so on. That’s been estimated to increase
GDP
by as much as several percent in developing countries, which need it the
most.
So, let's take the 1% of GDP number as a reasonable middle-of-the-road
estimate of what it would take to move to clean, sustainable energy
within 40 years. Given an estimated current global GDP of around $43
trillion, that's around $430 billion per year. It's easier to understand
those numbers if they're brought down to earth. For someone who's well
off and making $50,000 per year, 1% of annual income equals $500. That
would buy meals at a good restaurant once every couple of months. Or a
very good wide-screen TV. Or a few days' vacation. The amount is tiny.
[Update 2009-03-23: Well, when it's not being poured down a rathole,
it's small. When spent to bail out banks with no oversight, it doesn't
feel so good.]
On the other hand, the amount is huge. A recession is any drop in GDP. (
A depression is a big drop: 10% or more.) Some people suffer during even
the smallest drop, usually the most vulnerable. A family of four living
on $11,000, the poverty level in the US, might become homeless in a
month if their pay suddenly dropped by $110. Someone living on one
dollar a day (over a billion people in 2005) could die if it turned into
90¢.
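Since both sides of that comparison turn on the same percentage, here is
a minimal Python sketch that reproduces the figures above; all the
baselines are the ones used in the text.

    # Reproducing the 1% comparisons above. All baseline figures are
    # the ones used in the text; amounts are per year.

    cases = {
        "global GDP": 43_000_000_000_000,     # ~$43 trillion
        "well-off household": 50_000,
        "poverty-line family of four": 11_000,
    }
    for label, amount in cases.items():
        print(f"{label}: 1% = ${amount * 0.01:,.0f}")
    # global GDP: 1% = $430,000,000,000
    # well-off household: 1% = $500
    # poverty-line family of four: 1% = $110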
Also, it's not only energy that needs realigning. Waste treatment,
recycling, industrial practices all need huge changes that will
initially cost more than current practices. My guess is that those would
be less costly than the enormous, system-wide undertaking of moving to
different energy sources, but let's say it's about the same. So, say 1%
for energy transition, 1% for other industry transitions, and another
percent for agriculture transition. A three percent reduction in growth
of GDP happens regularly, and the world does not end, although it can be
very hard on the vulnerable. The solution is to spread costs so that
they're borne by those best able to do so. That requires planning,
coordination, and cooperation. That requires the most powerful — wealthy
people and large businesses — to pay much more of the costs than the
powerless. That's the problem, but that's different from unaffordable or
impossible.
Those estimates are for a transition within 40 years. Whatever the
actual estimate, it could be halved by spreading it over 80 years,
assuming the extension didn't mean progress toward sustainability
stopped. Those estimates, or something close to them, are the costs if
we started now. Every year that passes while business as usual makes the
problem worse reduces the amount of time we have and raises the price of
doing nothing. Never forget that doing nothing is the most expensive
option of all.
I can't begin to say how much it stuns me that for fear we may miss out
on the national equivalent of a widescreen TV, we're gambling the fate
of the only planet we have. It's insanity.
It does not have to be that way. (I need to repeat that.) It does not
have to be that way. Yes, it would take global cooperation and
regulation. Yes, it would take concerted action, such as people do in
time of war. As in war, our survival is at stake. Unfortunately, though,
we have only ourselves to fight against. Instead of the satisfying work
of whacking enemies, the problem requires global cooperation and
international institutions capable of planet-wide enforcement on the
same scale as the planet-wide effects of environmental damage. It's all
very unheroic.
But it is affordable. Environmental fairness is just as practical as all
the other kinds. It's environmental unfairness that's impractical. If we
don't spend what is necessary to move to a clean, sustainable future,
then we won't have one. It's that simple.
+ + +
Sex and Children
Relationships
This may be the chapter people turn to first because of the title, but
the only thing to say about sex is that there is nothing to say. The
role of government is to enforce rights, to stop people from stepping on
each other. Nothing about consensual sex is the business of government.
On the other hand, marriage has to do with sex, or is supposed to, and
the powers-that-be have always had much to say about it. That relates
partly to the joy of messing about in other people's business, but
partly it also has to do with something real. Families are the
irreducible unit in the social mix, and all societies recognize that by
giving them legal rights. Hence the role of government in what could be
construed as a private matter. The few societies that have tried to
ignore those bonds, such as the early Communists, have failed. Families
aren't called "nuclear" for nothing.
In fairness, though, the recognition of the strength of the bond between
people who love each other has to apply to everyone equally. It
shouldn't be limited to biological or sexual relationships. The germ of
that idea lies behind the right to adopt, but there's no reason why
voluntary lifelong commitment should be limited to the parent-child
relationship. (Well, it's voluntary for the parent(s). Children's rights
are discussed later.) Any pair or group of people who have a lifelong
commitment to support each other should have the legal rights of
families. And the same legal obligations if they decide to part ways.
Last, there is the nasty topic of sex crimes. Let's be clear from the
start: these are not an unfortunate form of sex. They are crimes that
use sex. Like any other crime, they are completely the business of
government. I'd maintain that since they affect people at a point of
maximum vulnerability and can poison much of life's joy, they should be
dealt with much more severely than nonsexual crimes. Other crimes that
attack vulnerable people are penalized more severely, or at least are
supposed to be, such as profiteering in time of disaster or violence
against toddlers.
The hallmark of a sex crime is the lack of consent, which of course
implies that if consent can't be given, a crime has been perpetrated
regardless of any other factor. So, for mentally competent adults to
have sex with the mentally incompetent or with children is, /ipso
facto/, criminal. Sex among young but sexually mature adolescents is a
much grayer area. If no abuse of any kind of power is involved, then
it's hard to see how it could be called criminal, although each case
would need to be considered on its own merits. As with gray areas in
other aspects of life, the fact that they exist does not change the
clarity of the obvious cases. It's simply another situation where the
rules are the same — in this case that sex must be consensual — which
means that the remedies may vary.
Another favorite bugbear of the restrictive sexuality proponents is that
any freedom will lead to no limits at all. What, they worry, will stop
people from marrying dogs? The answer is so blazingly obvious, the
question wouldn't even occur to a normal human being, but since there
seem to be some very odd people around, I'll be explicit. An animal
can't consent, so bestiality comes under the category of cruelty to animals.
Parents
The current concept of parenting goes right back to the one provided by
Nature: it's something for everyone to do to the maximum extent
compatible with survival. This works well in Nature's merciless
biological context. It produces lots of extra offspring, who are
selected against if they're less adaptable than their peers, and over
time the species gets better and better. To put it in plain English, the
system works if enough children die young.
As human beings, we're having none of that, and we've been quite
successful at the not-dying-young part. But we're still determined to
follow Nature's program as far as maximum parenting goes. This has not
been working out well for us. We're an ecological disaster, all by
ourselves.
It's important to fully absorb the fact that Nature always wins. There
is no way to continue overpopulating the planet and to also get off
scot-free. It isn't going to happen. We may be able to put many more
people on the planet than we once thought possible, but space on the
earth is finite and reproduction is not. There is no way, no way at all,
to avoid Nature's death control if we humans don't practice birth
control first. One way or the other, our numbers will get controlled. We
don't get to choose to reproduce without limits. We only get to choose
whether the limits are easy or lethal.
The limits themselves are a matter of biological fact within the bounds
set by what one sees as an acceptable lifestyle. The earth can hold more
people who have one mat and one meal a day than people who want two cars
and a summer house. That's why estimates of the Earth's carrying
capacity for human beings can vary all the way from 40 billion down to
two billion.
However, it's academic at this point what a desirable carrying capacity
might be because under any comfortable lifestyle scenario we're already
either above that level or very close to it (AAAS, 2000). It's becoming more
critical by the day to at least stabilize population, and preferably to
reduce it to a less borderline level. On the other hand, without
totalitarian controls and a good deal of human suffering, it seems
impractical to try to limit people to fewer than two children per
couple. So the answer to our current reproductive limits is rather
clear, even without extensive studies.
The good news is that two children per couple is actually slightly below
replacement levels because not everyone reproduces, and of those who do,
some have only one child. That limit would allow for a very gradual
reduction of our over-population until some previously agreed,
sustainable level was achieved. At that point, an egalitarian system — a
raffle of sorts, perhaps — could be applied to fairly distribute the
occasional third child permit.
In the real world, it's not always couples who raise children, although
I've been discussing it that way because it takes a pair to produce
them. (Yes, I'm aware that with new genetic engineering methods it can
get more complicated than that. However, it's still about people
producing children, and the number of children per person doesn't change
based on the means of production. Only the method of doing the math
changes.) The right to have children should, obviously, apply to
everyone equally, but there's the usual stumbling block that although
the number of children women have is not subject to doubt, the number of
children men have can be much harder to guess. A fair method requires
some way to determine who is the male parent, and in this technological
age that's easy. All that remains is to apply the methods we already have.
And, no, that would not be a DNA test one could refuse. The socially
essential function of fairly distributing the right to have children
takes precedence over the right to refuse to pass a toothpick over the
inner cheek cells to provide a sample. It's a parallel case to the
requirement for vaccinations in the face of an epidemic.
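To show how simple the bookkeeping could be once parentage is known,
here is a hedged Python sketch. The half-credit-per-genetic-parent rule
is my own illustrative assumption, not something the argument requires;
it's just one way to make "two per couple" add up per person.

    # One possible way to do the per-person math: credit each child
    # half to each genetic parent, so "two per couple" becomes a
    # budget of one per person. The half-credit rule is my own
    # illustrative assumption.

    from collections import Counter

    PER_PERSON_LIMIT = 1.0   # equivalent to two children per couple

    def reproduction_tally(children):
        """children is a list of (parent_a, parent_b) pairs, one per
        child; returns each parent's running total."""
        tally = Counter()
        for parent_a, parent_b in children:
            tally[parent_a] += 0.5
            tally[parent_b] += 0.5
        return tally

    tally = reproduction_tally(
        [("ann", "bob"), ("ann", "bob"), ("bob", "eve")])
    over = {p: n for p, n in tally.items() if n > PER_PERSON_LIMIT}
    print(dict(tally))  # {'ann': 1.0, 'bob': 1.5, 'eve': 0.5}
    print(over)         # {'bob': 1.5} exceeds the budget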
I know that I've used the dreaded word "permit." It leads to the
difficult question: How are these desirable limits to be achieved?
There are a number of non-punitive measures that limit the number of
children in families to smallish numbers, and all those measures would
already operate in a fair society. First, people must have enough
security in old age that they don't need a small army of offspring as
their own personal social security program. Second, women must have
their rights to self-determination, education, and other occupations
besides motherhood. Judging by the experience of countries where women
have acquired some rights and where there is retirement security, those
two things by themselves bring the rate of reproduction close to
replacement level. Maybe with the addition of strong cultural support
for families with no more than one or two children, other measures might
only rarely need to be applied. Advertising, mass media, and social
networks would all have to send the same message, in a similar way as
they do now, for instance, about the undesirability of smokers.
However, limits laid down in law would still be necessary if for no
other reason than to stress their importance and to provide recourse
against the occasional social irresponsibles whom we will always have
with us. The punishments for exceeding those limits would have to be
rather different than for other anti-environmental excesses. After all,
nobody on earth would want to (how should I say this?) "remove" the
children. Incarcerating the parents also wouldn't be helpful. It would
create problems and solve none.
What's really at stake is a special case of environmental mitigation.
Just as an industrialist would be required to pay for cleanup, likewise
parents would be required to pay the excess costs brought on by the
overpopulation they're facilitating. But that doesn't really help to put
a price on it. The immediate actual cost of just one extra child in a
whole country is nothing. If the actual cost of everyone having extra
children were visited on just one pair of parents, the richest people on
the planet would be ruined.
So actual costs unfortunately don't provide actionable information. One
would have to fall back on the merely practical and set the level high
enough to be a deterrent, but not so high as to be ruinous. Just as with
incarceration, ruining the parents creates more problems than it solves.
A deterrent is necessarily a proportion of income and assets, not an
absolute amount, and the higher the level of discretionary funds
available, the higher that proportion needs to be. For the poorest, one
percent of income might be enough of a motivation, whereas for the
richest the level might have to be fifty percent. Time and experience
would tell. Furthermore, like unpaid taxes, this overpopulation
mitigation cost could not be avoided by bankruptcy, and if it were not
paid, it could be taken straight from the individual's pay or assets.
And the amount would be owed for the rest of one's life. There's nothing
naturally self-limiting about the effects of overpopulation, and neither
should there be for the deterrent implemented to prevent it.
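To make the sliding scale concrete, here is a minimal Python sketch. The
text fixes only the endpoints of one percent and fifty percent, so the
anchor incomes and the logarithmic interpolation between them are my
assumptions.

    # A hypothetical sliding-scale deterrent. The 1% and 50% endpoints
    # come from the text; the anchor incomes and the log-scale
    # interpolation between them are assumptions for illustration.

    import math

    LOW_INCOME, LOW_RATE = 10_000, 0.01
    HIGH_INCOME, HIGH_RATE = 10_000_000, 0.50

    def deterrent_rate(income):
        """Fraction of income owed, rising with ability to pay."""
        if income <= LOW_INCOME:
            return LOW_RATE
        if income >= HIGH_INCOME:
            return HIGH_RATE
        # On a log scale, each tenfold step in income raises the
        # rate by the same amount.
        t = (math.log10(income / LOW_INCOME)
             / math.log10(HIGH_INCOME / LOW_INCOME))
        return LOW_RATE + t * (HIGH_RATE - LOW_RATE)

    for income in (10_000, 100_000, 1_000_000, 10_000_000):
        print(f"${income:>12,}: {deterrent_rate(income):.0%}")
    # 1%, 17%, 34%, 50%

A log scale rather than a linear one keeps the rate meaningful across
several orders of magnitude of income, but that choice, like the
anchors, is the kind of thing time and experience would have to settle.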
There's one technical aspect to replacement levels that has a big effect
but that people sometimes ignore. Aside from absolute number of
children, the age at which those children themselves reproduce changes
population numbers. Just to make that clear, let me give an example.
Consider a sixty year span of time. If a couple produces two children at
thirty, and those children each have a child apiece when they are
thirty, then the total number of people after sixty years, including all
three generations, is six. If, instead, everybody reproduces at fifteen,
there are five generations after sixty years and the total number is
ten. Everybody has only one child each, but age at reproduction has a
big effect on total numbers. The longer lifespan becomes, the more
critical this effect is. If lifespan increases very gradually, that
might not matter, but should there suddenly be a medical advance that
has us all living to 400, it would again become important to adjust
reproductive rates downward or suffer the consequences of reduced
quality of life. I wouldn't be hugely surprised — maybe just a bit
surprised — to see noticeably lengthened life spans, say to 120 or so,
as a consequence of stem cell therapies, so even in the near future this
discussion is not completely theoretical.
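For anyone who wants to play with those numbers, here is a minimal
Python sketch of the example; it follows the text's simplified count,
assuming exact replacement and a uniform age at reproduction.

    # The generation-time effect from the example above: total people
    # across all generations over a span, following the text's
    # simplified count (each generation the same size, everyone
    # reproducing at the same age).

    def people_over_span(span_years, age_at_reproduction,
                         generation_size=2):
        generations = span_years // age_at_reproduction + 1
        return generation_size * generations

    print(people_over_span(60, 30))  # 6: three generations of two
    print(people_over_span(60, 15))  # 10: five generations of two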
Producing children has always been socially as well as personally
important. But fairness requires changes to its significance. Who has
sex with whom has zero public significance, and the sooner that's
universally recognized, the sooner we can stop generating massive
suffering by unfairly, unethically, and immorally interfering in each
others' lives. On the other hand, parenting has huge environmental
consequences which are ignored at this point. We're used to that
privilege, but that doesn't make it right.
Children
[This section benefited a great deal from comments by Caroljean Rodesch,
MSW and child therapist. Errors and bizarre ideas are, of course, mine.]
The first thing that has to be said about children is that they have
rights. They have none now, at least not in law. Where children are
involved, the discussion is about how much damage parents can do before
their rights are revoked in favor of the state. It's the same level of
ignorance men once had (and in some places still have) about women, when
they argued about how much husbands could beat their wives.
Any mention of actual children's rights is written off as impractical,
just as every other expansion of rights to former chattel has been, and
with about as much actual merit. I'll discuss below (and in the chapter
on Care ) some
examples of how children's rights could be implemented.
Another class of objections is variations on the theme that any rights
will result in a dictatorship of rude, free range children. That's
nothing but a false dichotomy between absolute parental rights and
absolute children's rights. Those are not our only choices. Children's
rights are yet one more case, perhaps the most difficult case of all,
where many different rights have to be /balanced/. An issue that depends
on balance cannot be solved by saying balance is too difficult. With
echoes of other similar excuses, it's amazing how consistently the
corollary is that all power has to stay /in statu quo/.
All that said, there are major differences between children's rights and
those of any other class of people because of children's lack of
experience. Rights for children do not imply that they should suddenly
dictate how schools or families are run. Their usual ideas on those
topics would prove neither sustainable nor effective. Under normal
circumstances, parents need to be the final authorities on what children
do /for the sake of the children themselves/. That is the criterion: the
good of the children. The parents' needs or wants are secondary. Under
normal circumstances, parents are famous for applying that principle,
sometimes at the cost of their own lives. But that's as it should be.
It's when things are not as they should be that the problems arise, and
it's those situations that children's rights must help rectify.
There are other differences based on the lack of experience. It's plain
that children would not do well as voters, soldiers, drug takers, or
drivers. It's plain that they have much to learn and need teachers.
However, the fact of their different needs is a reason to satisfy those
needs, not to deprive them of all rights. Parents don't own their children.
Parental rights, as distinct from children's rights, have next to no
limits at this point. We're only one step away from those ancient Roman
fathers who were at least consistent and claimed the right to kill any
of their household chattel, including children. The lack of boundaries
to parental rights leads to the usual damaging consequences. Children
are astonishingly resilient, so the human race hasn't died out (yet),
but a minority become dysfunctionally damaged and grow up to impose an
incalculable social price. Every adult needs more limits than we have
now with respect to children, not fewer. Limits are never a welcome
message, not for polluters, and not for you and me. But it's true
nonetheless. In a fair world, children have rights.
The primary and most important right is the fundamental one to be safe
from harm. The exercise of all other rights depends on that one in
adults, and the same is true of children. Therefore, when parental
rights are balanced against physical harm, the child's rights take
precedence.
That has consequences for how abuse cases are handled. The well-being of
the child is a higher priority than preventing family break-up. The
current assumption is that it's always best for the child to stay with
their family, even when they've been harmed by it. It takes huge damage
to the child, such as imminent danger of death or permanent disability,
before the legal system moves to revoke parental rights. (Disrespected
minority parents may lose their rights more easily, but that's a symptom
of discrimination against minorities, not of respect for the rights of
the child.) The adult right to freedom from harm would be meaningless if
it wasn't enforced until death was imminent. That's not an acceptable
standard for adults, and it is not acceptable for children.
Parenthetically, I want to comment up front on the cost issue. I get the
sense that the primacy currently given to parental rights is at least
partly due to a rather broad-based horror at otherwise having to care
for strange kids. It's true and it's unavoidable that the right to
escape an abusive situation is going to be more expensive for society
than ignoring the problem. But the fact that it's cheaper is not an
adequate reason for allowing some people to suffer at the hands of
others. It's not adequate when everyone is the same age, and it's not
adequate among different ages. Furthermore, just as a practical
footnote, I'm not at all sure that the overall cost of enabling children
to escape abuse really would be more expensive. Abused children go on to
become some of the most expensive adults in society, and this may be yet
one more instance where prevention turns out to be cheaper than cure.
Given that children have the right to be free of abuse, it's important
to define what abuse is. The definition is a prickly issue because
socialization sometimes requires physical measures. However, the issue
of occasional physical restraint is not unique to children. Adults
sometimes have to be physically restrained as well. We have models for
what is appropriate corrective action, and it does not involve beating
people up. Also, in a child's case, the motivation has to be teaching,
not simply a removal from society of an offending element. In other
words, the good of the child is the priority, and not the convenience of
adults. So there are two criteria. One is that physical corrective
measures should be of the same general type for children as for adults.
Two is that for children the teaching or rehabilitative component is of
primary importance. Using those two criteria, it's possible to delimit
physical restraint as opposed to abuse, and to distinguish a wholesome
upbringing that creates options as opposed to one that shuts them down.
Medical treatments are one area where the rights of children have to
differ from those of adults. Given children's lack of experience, that
probably goes without saying. Again, the criterion is the benefit to the
child and keeping their future options more open, not less. Making sure
a child gets necessary medical treatment for health is an essential
parental function. That doesn't mean all medical treatments are at the
discretion of adults. When benefit to the child is the criterion, then,
just as one example, it is arguably child abuse to subject him or her to
body-altering treatments for the sake of a beauty pageant.
There are many gaps in the current systems of child abuse prevention.
Each new case of abuse underscores that. Some of it is not due to faulty
rules, but to understaffing due to meager resources. In all the cases
I've heard about, that's due to the low priority of children and not to
the literal lack of money in a country. Protecting children is not
wildly expensive, but governments would rather spend money on other
things. Those priorities reflect power, not justice, and wouldn't be
legal in a fair society. Enforcement of rights is not optional, for
children no less than for others, so an adequate system of protection
for them must be funded through taxes just as the other essential
functions of government are.
Deficient resources are not the only problem, unfortunately. The legal
system is geared to adults, who can deal with small problems on their
own and need help only in exceptional cases. That model does not work
for children at all, and never could work no matter how well funded the
system was. The younger and therefore the more vulnerable the child, the
less likely they are to know they're being abused or that there's
anything they could do about it.
There are two points that need realignment. The first is that for
children, the most meaningful and practical means of exercising the
basic right to live free of abuse would be the right to leave it. They
should be able to get a divorce, as it were, from a bad situation. The
second is that the adults in the community need a great deal more
encouragement to interfere when they see a bad situation developing.
Both of those require good judgment on the part of many different
people, which makes them both very hard to implement.
I see the child's right to leave as critical to the prevention of abuse.
And prevention, after all, is the essential point. Punishment after the
fact doesn't undo any of the damage. But the right to leave is
complicated not only by children's own lack of perspective, but also by
the existence of stupid, selfish, or even criminal adults. Parents
shouldn't have to contend with abuse allegations because they made their
kids eat peas for dinner. A vengeful ex-partner shouldn't be able to
waste everyone's time by bribing Junior away from the custodial parent
with promises of never having to do homework. Worst of all would be if
children's rights didn't help children but hurt them by giving the
world's vilest criminals more opportunity.
Given those issues, children's right to leave has to involve responsible
adults. I can't see any other way to implement the right without
inviting more trouble than it solves. That doesn't mean there
necessarily is none. The more control over that decision that can be put
in the hands of the child concerned, and that is consistent with
beneficial outcomes, the better. Any adult who comes in contact with the
child is in that position, and should fill it. (More on the duty of
reporting in a moment.) But in order for children to have consistently
effective advocates there also need to be people whose job is nothing else.
In the chapter on Care, I discuss
the role of child advocates at greater length. The point I want to
stress here is that professionals who advocate for children are an
essential component of enforceable children's rights. Another essential
component is the existence of places where the child can actually go.
I've called these facilities "children's villages" (after the orphanages
set up by the Tibetans in exile), and they're
also discussed in Care. Additionally, the process of adoption needs to
be facilitated so that it happens quickly enough to be relevant.
As with divorce between adults, separation of children from their
families involves a host of financial and practical issues that have to
be resolved individually. I see that as an important function of child
advocates. The general principles to go by are that the child has
rights, so their preferences — not just needs, preferences — are an
important and valid factor. And that the biological parents, by creating
the situation in question, are responsible for it and will be assessed
child support to be paid to the new caregivers. (Again, more in the Care
chapter.)
That brings me to the second point: reporting abuse or neglect. The
older the child, the likelier that a separation initiated by them would
be feasible. The younger, the more important it is for other adults to
be willing to interfere in bad situations. That issue, like all the
others in this field, is a delicate balancing act. Creating a situation
in which neighbors spy on each other and everyone lives in fear would be
atrocious, but letting children suffer because adults would rather be
inoffensive is appalling.
Two lines of approach should mitigate that dilemma. One is that all
adults who come in contact with children professionally (day care center
workers, teachers, pediatricians, nurses, journalists, clergy, and so
on) would be trained to recognize signs of maltreatment as a condition
of their license and would be obligated to report it to a child
advocate. The advocate would be legally obliged to look into the
situation within days, not months. This is enforcement of a basic right.
It's not optional.
Mandated reporting is a feature of child abuse prevention laws in some
places now, but it suffers from insufficient compensation for the
asymmetry of power between children and adults. Professionals fear
repercussions from parents, their employers, or just from general
unpleasantness, and the reporting too often doesn't happen as it should.
Even full confidentiality doesn't always help matters.
So, as with power asymmetries generally, the solution is to strengthen
the feedback loop. Retention of the license to practice one's profession
needs to depend on fulfilling the duty to report just as much as it does
on adequately performing the other duties. Further, the child needs to
have the right to take any professional who should have reported, but
didn't, to court for dereliction of duty. The need to do that may only
be evident in hindsight, so the person involved could bring that suit as
an adult.
If those measures were still not sufficient to enforce mandated
reporting, additional ways of making it easier to fulfill that duty have
to be implemented. The result has to be that all professionals in a
community put children ahead of their own desire for a quiet life.
The same should be true of all adults in the community. Relatives and
neighbors are generally the first to know when there's a problem. But
they're not necessarily trained, and they're not necessarily even trying
to be objective. Therefore, for the general population, unlike for
professionals, reporting can't be mandated without creating more
problems than it solves. There should be "encouraged" reporting instead.
The signs of abuse or neglect need to be part of the curriculum in
school, and public health messages need to encourage people to report
when they're sure there's a problem.
That would create an atmosphere where children know they can ask any
adult for help. That, and the specific professionals whom they should
approach, ought to be a clear and present part of the curriculum from
the earliest days of day care up through grade school.
Funding will obviously be necessary to maintain adequate numbers of
workers specialized in advocating for children. That is not a
sufficient reason to give up on children's rights. Nobody complains that
police are a luxury and that we should just learn to deal with crime on
our own because it's cheaper. I've said it before and I'll say it again:
the same is true of children. Their inalienable rights must be enforced,
and the expense of doing so is simply part of being a fair society.
Besides, the fewer abusive adults there are, the smaller the child
advocacy offices can be. People who want to reduce the money spent on
abused children need to figure out how to reduce the number of adults
causing the problem, not how to ignore the children suffering it.
In addition to freedom from harm, children have other basic rights, too.
They have a right to a living and the right to basic freedoms as they
grow into the ability to use them. These, too, need adaptations if
they're to be useful to children.
The right to a living, in particular, has more facets, and different
ones, than it does for adults. In the case of children, it means a right to support
rather than the right to earn a living. The right to support also
includes the mental and emotional support necessary for the child's
development. And it means acquiring the tools to earn a living
eventually, in other words, a right to education.
That, of course, has been more or less standard practice since before
the dawn of humankind. The difference, though, is that having an
explicit right means there can be no "less" about it. A defined level of
care and education has to be provided, and if it's lacking the child has
the right to get it elsewhere and the state has the obligation to make
that financially possible by exacting support from the parents and
supplying any shortfall. That is not standard practice, but it should be.
Given that a child's main task is acquiring the ability and tools to be
a competent adult, which is a full time occupation, the corollary is
that children have the right /not/ to work. Therefore, if their parents
cannot support them, it becomes the responsibility of society at large,
i.e., of the state, to do so. That's also always been generally
understood. In the past, extended families more or less fulfilled the
function of distributing care of children among many people, but
extended families are rarely extended enough these days. I discuss
possible methods of state assistance in the Care chapter. Whichever
solutions a society uses, the point remains the same. Children have a
right to have their physical needs met in an emotionally kindly
environment that gives them enough knowledge to earn a living when the
time comes.
Children also have the same rights as others, to the extent they can
use them. So the right not to work is not the same as a prohibition
against working. If children are citizens with rights, within the limits
of their lack of experience, it makes no sense to deny them the choice
to work if they want it. I realize that at the current levels of
accepted exploitativeness — /accepted/, not even egregious
exploitativeness — any suggestion that children could work if they
wanted to is likely to be met with scorn. However, as a matter of simple
fairness, /if/ children were reliably free agents in the decision,
there's nothing bad about children working, in and of itself. Such work
has two limiting characteristics: it is not physically taxing and allows
enough time for the child to stay healthy, to learn, and to play.
There's a difference between occasional babysitting or gardening for the
neighbors, and stunting growth by working twelve hours a day in a
factory. Or (now that I live near Los Angeles and have heard of the
situation) having proud parents push children into being full time
actors at the expense of the rest of their lives.
The right to education gives children the skills needed to live on their
own and to become the well-informed voters on whom the whole system
depends. The only universally and equally accessible way to fulfill that
right is with free public education. It should enable the graduate to
enter work that pays a modal income. That was once the idea behind free
schooling in the US. At the time, a high school diploma was enough to
get good work. In Europe that principle carried forward into modern
times when university education became freely available. (At least until
the Eurozone started following the US example and trying to water that
down.) In the US, sadly, the original idea has been almost lost, but it
was a good one. It is a hallmark of a fair society that it gives all its
citizens equal tools when starting the business of life.
Children also have rights to the freedoms of speech, thought, and
religion as they grow to feel the need for them. They're all rather
meaningless to a toddler, but they acquire meaning gradually, without
any strict on-off switch to make things easy. It makes sense to have
target dates for the application of various rights, adjusted for
individual children if they exercise a right earlier than most. The
validity of doing so should be decided on a case by case basis. For
instance, there have been several high school free speech cases recently
which have involved political speech issues, such as criticism of school
policy. (One example comes from the Student Press Law Center site, where
there are dozens of others.) Sometimes the "free speech" of high
schoolers is regrettable
rather than an expression of ideas, but that wasn't the situation in
these cases. These were about the schools shutting down dissent, and
using the children's status as non-persons to get away with it. The
cases were decided in favor of the powers-that-be simply, as far as I
can tell, because to decide otherwise would be to admit that minors have
rights. That's the kind of thing explicit children's rights should prevent.
The whole issue of applying children's rights is a moving target because
the children themselves are growing. That means their rights and
responsibilities should be proportional. It's silly to insist a person
is old enough to support her- or himself, old enough to enlist in the
military and kill people, but not old enough to vote or to decide which
drugs to take. It also means the acquisition of rights and
responsibilities should be gradual. For instance, the right to vote
could start with municipal elections. (In the US situation, that
includes school boards, which could turn into a true exercise in
democracy.) A couple of years later, county, province or state elections
would be included, and finally national and then, when the world has
reached that stage, international ones. The right to support shouldn't
go from 100% to 0% at the stroke of midnight on a given birthday. It
could decrease by 20% a year between 16 and 21. Legal responsibility for
criminal activity, likewise, doesn't have to go from nothing to total.
The requirement to get schooling could be transformed into the
availability of free schooling and ultimately, depending on the
country's resources, the need to pay some fees. Drug-taking could be
allowed first for "low-voltage" substances like beer, marijuana, or coca
leaves. As the age of greater and greater discretion is achieved, the
limits would be fewer.
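For readers who like to see the arithmetic, here is a minimal sketch in
Python (code purely for illustration; the 20%-a-year schedule and the
ages 16 and 21 are just the example figures above, not a firm proposal):

    def support_fraction(age):
        # Illustrative schedule only: full support through age 16,
        # decreasing by 20 percentage points a year, reaching zero at 21.
        if age <= 16:
            return 1.0
        if age >= 21:
            return 0.0
        return (21 - age) / 5

    for age in range(16, 22):
        print(age, support_fraction(age))
    # 16 -> 1.0, 17 -> 0.8, 18 -> 0.6, 19 -> 0.4, 20 -> 0.2, 21 -> 0.0

The same stepwise shape could apply to criminal responsibility or school
fees; only the endpoints would differ.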
I've stressed repeatedly that children's lack of experience affects
which rights are useful to them. It also affects whether they can
exercise them. Saying that children have rights, but then not providing
any means to compensate for their huge inequality in any struggle with
adults is to render those rights meaningless. So the question is really
a specific and acute instance of the more general problem of how to give
weaker parties the tools to deal with stronger ones.
In many ways, the rights stipulated here would change nothing about
normal family life and upbringing. What they (try to) do is codify that
situation so that it can be made to apply to all. Normal family and
school life aren't nearly as far off the mark as many of our other
deviations from fairness and justice. That may not be too surprising
given that most normal people care about children, certainly more than
they do about any other class of human beings. There aren't that many
changes needed in normal situations. Where society fails badly is in the
abnormal cases, the ones where children are neglected, abused, or preyed
upon. And the core reason for that failure is that children have no
rights. The optimum balance will be enough rights for the child to
avoid, escape, or stop bad situations without interfering in benign
families that require no interference.
+ + +
Government 1: War and Politics
Introduction
There's irony to the fact that in a work on government, the topic itself
waits to appear till somewhere in the middle. But government is a
dependent variable without any chance of being functional, to say
nothing of fair, if the physical and biological preconditions aren't
met. Those preconditions can be satisfied with less effort when people
are savvy enough to understand where they fit in the scheme of things,
but they can't be ignored.
It's customary, or it is now at any rate, to think of government in
terms of its three branches, executive, legislative and judicial. But
those are really branchlets all in one part of the tree and don't
include important aspects of the whole thing. They are all concerned
with the machinery, with how to get things done, and they don't address
the why, what, or who of government.
At its root, the function of government is to coordinate essential
common action for the common good. When resources are at a minimum, the
function is limited to defense, which is one type of common action. In
richer situations, other functions are taken on, generally at the whim
of the rulers and without a coherent framework for what actually serves
the common good. The priorities are less haphazard in democracies, but
still don't necessarily serve the common good. Wars of aggression come
to mind as an example.
The defining characteristic of a government is said to be a monopoly on
the legitimate use of force. (Parents are allowed to use force on
children in all societies so far — an exception I consider wrong as I
discussed in the fourth chapter — but that's a side issue in the context
of this discussion.) That force is used not only against outside
threats, but also internal ones, such as criminals. An essential
function of government is to protect its citizens from each other. It's
a short step from that to regulatory functions generally, to all the
ways in which a government can serve to maintain the people's rights.
Economic, environmental, and safety regulations are all included here.
Preventing bad actions is one type of function carried out by government
for the common good.
The positive functions of government are as important: the achievement
of desirable goals, not merely the prevention of disaster. Some of those
coordinated actions for the common good are now understood to be just as
obvious a function of government as defense. Essential infrastructure is
in this category, although people don't always agree on what is
essential. Ensuring basic standards of living, basic levels of health,
and security in old age are also coming to be understood as something
government is uniquely qualified to do. And then there are the things
which fire the imagination and are a quest and an adventure rather than
a necessity. These are big or not obviously profitable projects beyond
the scope of smaller entities, projects such as support for basic
research, for space programs, or for the arts.
Given that government has a great deal to do, it may be worth pointing
out which functions it does not have. Its purpose is not to bring
happiness or wealth to its citizens, only to bring the /preconditions/
for those desirable things. And it's certainly not to make a subset of
citizens wealthy. Nor is it to make citizens healthy, only to provide
the preconditions to make it easy for people to take care of their
health. That's actually a more difficult issue than the others because
if health care is government-funded, then a line has to be drawn about
how much self-destructive behavior to tolerate. And that runs smack into
the right to do anything that doesn't harm others. It's a conflict I'll
discuss in the chapter on Care, near the end of the section on medicine.
So, having discussed the preconditions on which government rests, the
next step is examining the machinery by which it can implement its
functions. What we want from the machinery is for it to interfere with
those functions as little as possible and for it to be as effortless and
invisible as possible. The business of government is essentially a
boring job of solving other people's problems — or that's what it is
when abuse of power and corruption are removed from the equation. Quite
clearly, we have some distance to go before achieving a silent,
effortless and effective machine. Below, I'll offer my ideas on how to
get closer to that goal.
Important aspects of the machinery side of the tree which fall outside
the traditional executive, legislative, and judicial zones are funding
the government and overseeing it. Oversight is a particularly vital
issue because the thing being overseen also has a monopoly on force.
Given the willingness of enough people to accommodate anything the
powerful want to do, it's essential to counteract that pull of power and
to make sure that oversight retains the ability to rein in all abuses.
Instead of discussing these topics in their conceptual order above, I'll
take them in the order of their capacity to interfere with people's
lives. That will, I hope, make it easier to imagine how it all might
work in practice.
Force
Between Groups
The bloody-minded we will always have with us, so it's to be expected
that even the most advanced human societies will always need to have
some sort of defense and police forces. But their function has to be
limited to small-scale threats. If an entire population flouts the law,
a police force is powerless. It can only stop rare actions, not common
ones. Likewise, the defense force of an equitable society would have to
be targeted toward stopping occasional attacks by kooks of various
kinds, not full-scale wars with equals or, worse yet, more powerful
countries.
The reason for that is simple. Fair societies are not compatible with
war. War is the ultimate unfairness. Its authority is might, not right.
Even self-defense, in other words even a just war, is not compatible
with retaining a nice sense of fair play. Wars of self-defense may be
forced on a country, but that doesn't change the facts. Justice and war
are simply mutually exclusive.
No, that does not mean that an equitable society is impossible. It's
only impossible if wars are inevitable, and the evidence suggests
otherwise. Looking at it from a national level, how many countries have
internal wars? Some, but not all. Not even most. Somehow, this so-called
inevitable state of war has been eliminated even among large groups
within countries. If it can be done for some groups, it can be done for
all groups.
It should go without saying, but probably doesn't, that regressions are
always possible. We could regress right back to stone-throwing cave
dwellers under the right conditions. That's not the point. The point is
that war has been eliminated between some groups. How that's done, and
the huge wealth effect of doing it, are obvious by now. To repeat, if it
can be eliminated between some groups, it can be eliminated between all
groups. It is not inevitable. Aggression isn't about to go away, but
that particular form of it known as war can and has disappeared in some
cases.
Wars happen internationally because on some level enough people think
they're a better idea than the alternative. They want to retain the
ability to force agreements rather than come to them. But there's a
price to pay for the method used, and it only makes practical sense if
the price is not greater than the prize. Wars make no sense by any
objective measure on those grounds. Their only real benefit is on a
chest-thumping level, and even that works only at the very beginning,
before the price starts to be paid.
For anyone who doubts my statement about price, I should mention that
I'm thinking both large scale and long term. In a very narrow view,
ignoring most of the costs, one could claim that it's possible to win
something in a war. Take the view down a notch from international to
national, and it becomes clear how flawed it is. It's as if one were
thinking on the same level as a Somali warlord. If Somalia had not had a
generation with no government, and hadn't spent the time in the grip of
warring gangs, anybody on earth (except maybe a Somali warlord) would
agree they'd be much richer and better off in every way. Ironically,
even the warlords themselves would be better off. They wouldn't have to
spend a fortune on armed guards, for starters. And yet, unable to think
of a way of life different from the hellhole they're in now, they keep
shooting in the hope of winning something in a wasteland they're
reducing to nothing.
That, on an international scale, is where our national minds are. That,
on an international scale, is also the price we pay. What we're losing
for the planet is the difference between Sweden and Somalia. When the
vision of what we're losing becomes obvious to enough people, and when
the pain of not having it becomes big enough, that's when we'll decide
that an international body to resolve disputes is a better idea than the
idiotic notion of trial by ordeal. Then it'll be obvious which is the
better choice, even if trial by law does mean losing the occasional
case. After all, that can happen in trial by ordeal too, but with much
more devastation.
So, no, I don't think that the incompatibility between fairness and war
is the end for fair societies. I think it's ultimately the end for war.
The beginnings of that evolution seem to be happening already. War is
becoming less and less of an option between some countries. Even the
concept of war between Austria and Italy, or Japan and France, is just
laughable. They wouldn't do it. They'd negotiate each other's ears off,
but they'd come to some resolution without killing each other.
That's the future. An extrapolation of the past is not.
The fact that the US is closer to the past than the future doesn't
change the trajectory. Likewise with the other large powers, China,
Russia, India, Brazil. They all have further to go than anyone would
like, but that only means regressions are possible. It still doesn't
change the direction of the path. And even the US seems to feel the need
for UN permission before marching into other countries and blowing them
up. That's something I don't remember seeing ever before. It's a sign of
a potential sea change, if they don't zig backwards again. The next step
would be to actually stop blowing people up. We'll see if they zag
forward and take it.
If violent methods are not an option as a final solution to disputes,
then the only alternatives when ordinary channels aren't working are
nonviolent ones. That is not as utopian as the people who fancy
themselves hard-headed realists assume. A tally by Karatnycky and
Ackerman (2005) of the 67 conflicts that took place from 1973 to 2002
indicates that mass
nonmilitary action was likelier to overturn a government and — even more
interesting — much likelier to lead to lasting and democratic
governments afterward. (69% vs 8%, and 23% with some intermediate level
of civic involvement.) Armed conflicts, in contrast, tended to result
simply in new dictatorships, when they succeeded at all. It is easy to
argue with their exact definitions of what is free and how the struggles
took place. But short of coming up with an /ad hoc/ argument for every
instance, the broader pattern of the effectiveness of nonviolence in
both getting results and in getting desirable results is clear.
As the article points out, some of the initially freedom-enhancing
outcomes were later subverted. Zunes et al. (1999)
cite one notable example in Iran:
Once in power, the Islamic regime proved to abandon its nonviolent
methodology, particularly in the period after its dramatic shift to
the right in the spring of 1981. However, there was clear
recognition of the utilitarian advantages of nonviolent methods by
the Islamic opposition while out of power which made their victory
possible.
The interesting point here is not that regression is to be expected when
nonviolence is used as a tool by those with no commitment to its
principles. The interesting thing is that its effectiveness is
sufficiently superior that it's used by such people.
That effectiveness makes little sense intuitively, but that's because
intuition tells us we're more afraid of being killed than having someone
wave a placard at us. Fear tells us that placard-waving does nothing,
but violence will get results. But that intuition is wrong. Fear isn't
actually what lends strength to an uprising. What counts is how many
people are in it and how much they're willing to lose. The larger that
side is, the more brutal the repression has to be to succeed, until it
passes a line where too few soldiers are able to stomach their orders.
That pattern is repeated over and over again, but each time it's treated
as a joke before it succeeds, and if it succeeds it's assumed to be an
exception. The facts become eclipsed by the inability to understand
them. The hardheaded realists can go on ignoring reality.
The faulty emotional intuition on nonviolence leads to other mistakes
besides incorrect assessments of the chances of future success. It
results in blindness to its effects in the present and the past as well.
As they say in the news business, if it bleeds, it leads. The more
violence, the more "story." We hear much more about armed struggle, even
if it kills people in ones and twos, than we do about movements by
thousands that kill nobody. How many people are aware of nonviolent
Druze opposition to Israeli occupation in the Golan Heights versus the
awareness of armed Palestinian opposition? How many even know that there
is a strong nonviolent Palestinian opposition? The same emotional
orientation toward threats means that nonviolence is edited out of
history and flushed down the memory hole. History, at least the kind
non-historians get at school, winds up being about battles. The result
is to confirm our faulty intuition, to forget the real forces that
caused change, and to fix the impression that violence is decisive when
in fact it is merely spectacular.
Admittedly, the more brutal the starting point, the less likely
nonviolence is to succeed. The people whose struggle awes me the most,
the Tibetans, are up against one of the largest and most ruthless
opponents on the planet. Even though their nonviolence is right, it's no
guarantee of a happy ending. But — and this is the important point —
the more brutal the basic conditions, the less chance that /violence/
can succeed. The less likely it is that /anything/ can succeed in
pulling people back from a tightening spiral of self-destruction. Places
like the Congo come to mind. The more equitable a society is to start
with, the better nonviolent methods will work.
"Common sense" may make it hard to see the facts of the effectiveness of
nonviolence, but even common sense knows that ethical treatment is
likelier to lead to ethical outcomes than brutality. The fact that armed
conflict is incompatible with justice is not a disadvantage to a fair
society, so long as it remembers that violence is the hallmark of
illegitimacy. The state itself must view the application of violence as
fundamentally illegitimate and therefore use its own monopoly on force
only in highly codified ways. An analogy is the blanket prohibition
against knifing other people, except in the specific set of
circumstances that require surgery. I'll discuss the distinction a bit
more below. A refusal to use violence is a clear standard to apply, and
it gives the state that applies it an inoculation against one whole
route to subversion. When violence is understood to be a hallmark of
illegitimacy, then it can't be used as a tool to take power no matter
how good the cause supposedly is, and no matter whether the abuse comes
from the state or smaller groups.
Another objection runs along the lines of "What are you going to do
about Hitlers in this world of yours?" "What about the communists?" Or
"the Americans" if you're a communist. "What about invasions?" What, in
short, about force in the hands of bad guys? If one takes the question
down a level, it's easier to understand what's being asked. In a
neighborhood, for instance, a police force is a more effective response
to criminals than citizens all individually arming themselves. The real
answer to force in the hands of bad guys is to take it out of their
hands in the first place. If a gang is forming, the effective response
is not to wait for attack but to prevent it.
On a local level, the effectiveness of police over individual action is
obvious. On an international level, it seems laughable because the thugs
have power. But that's common sense playing us false again. Brute force
threats are not inevitable. They're human actions, not natural forces.
They're symptoms of a breakdown in the larger body politic. The best way
to deal with them is to avoid the breakdown to begin with, which is not
as impossible as it might seem after the fact. The currently peaceful
wealthy nations show that dispute resolution by international agreement
is possible and they show how much there is to gain by it.
That indicates that objections about the impossibility of using
international norms to handle "bad guys" are really something else. An
answer that actually solves the problem is to strengthen international
institutions until the thugs can't operate, just as good policing
prevents neighborhood crime. However, when people make that objection,
international cooperation is the one solution they won't entertain. That
suggests they're not looking for a solution. Otherwise they'd be willing
to consider anything that works. They're really insisting on continued
thuggery, using the excuse that so-and-so did it first.
At its most basic level, the argument about the need to wage war on "bad
guys" is built on the flawed logic of assuming the threat, and then
insisting there's only one way to deal with it. The Somali warlord model
of international relations is not our only choice.
Against Criminals
Just as war is incompatible with fairness, so is any other application
of force where might makes right. Therefore force is justifiable only to
the extent that it serves what's right. It's similar to the paradox of
requiring equal tolerance for everyone: the only view that can't be
tolerated is intolerance. Force can't legitimately be initiated by
anyone: it can only be used against someone already using it.
Force can be applied only to protect everyone's rights, /including those
of the person causing the problem/, to the extent compatible with the
primary purpose of stopping the violence. The state can use only as much
force as is needed to stop criminal behavior and no more.
Because the state does have the monopoly on force, it is more important
to guard against abuse in state hands than in any others. Crimes of
state are worse than crimes of individuals. Not only do people take
their tone from the powerful, but they're also too ready to overlook bad
actions by them. So the corrupting influence of abuse is much greater
when done by those in power, and it is much more necessary to be
vigilant against people's willingness to ignore it. An event such as the
one where an unarmed, wheelchair-bound man was tasered
would be considered a national disgrace in a society that cared about
rights. It would be considered a national emergency in a society that
cared about rights if people saw tasering as the expected result of
insufficient deference to a uniform. That fascist attitude has become
surprisingly common in the U.S.
The exercise of state power carries responsibilities as well as limits.
When criminals are imprisoned, the state becomes responsible for them.
That's generally recognized, but somehow the obvious implications can be
missed. Not only must the state not hurt them, it has taken on the
responsibility for preventing others from hurting them as well. The
jailers can't mistreat prisoners. That much is obvious. But neither can
people condone the torture of prisoners by other prisoners. The dominant
US view that rape in prisons is nobody's concern is another national
emergency. Proper care of prisoners is expensive. People who don't want
the expense can't, in justice, hold prisoners.
The death penalty is another practice that cannot be defended in a just
context. Its justification is supposed to be that it is the ultimate
deterrent, but a whole mountain of research has shown by now that, if
anything, crimes punishable by death are /more/ common in countries with
death penalties than without. If the irrevocable act of killing someone
serves no legitimate goal, then it can have no place in a just society.
Another problem is the final nature of the punishment. It presumes
infallibility in the judicial system, which is ludicrous on the face of
it. Fallibility has been shown time and again, and tallied in yet
another mountain of research. Executing the innocent is something that
simply cannot happen in a society with ambitions of justice.
But perhaps the most insidious effect of having a death penalty is what
it says, even more than what it does. The most powerful entity is saying
that killing those who defy it is a valid response. And it's saying that
killing them solves something. The state, of course, makes a major point
of following due procedure before the end, but that's not the impressive
part of the lesson. People take their tone from those in power, and for
those already inclined to step on others' rights, the due process part
isn't important. What they hear is that even the highest, the mightiest,
and the most respectable think that killing is all right. They think it
solves something. If it's true for them, it becomes true for everyone.
No amount of lecturing that the "right" to kill works only for judges
will convince criminals who want to believe otherwise. By any reasoning,
ethical, practical, or psychological, the death penalty is lethal to a
fair society.
Decision-making
Voting
[Some material also appears in an earlier post, Democracy Doesn't Work.]
The bad news is that democracy does not seem to work as a way of making
sustainable decisions. The evidence is everywhere. The planet is
demonstrably spiraling toward disaster through pollution,
overpopulation, and tinpot warring dictators, and no democracy anywhere
has been able to muster a proportional response. The most advanced have
mustered some response, but not enough by any measure. A high failing
grade is still a failure.
Given that democracy is the least-bad system out there, that is grim
news indeed. But the good news is that we haven't really tried democracy
yet.
As currently practiced, it's defined as universal suffrage. (Except
where it isn't. Saudi Arabia is not, for some reason, subject to massive
boycotts for being anti-democratic even though it enforces a huge system
of apartheid.) But defining democracy down to mere voting is a sign of
demoralization about achieving the real thing. Democracy is a great deal
more than voting. It's supposed to be a way of expressing the will of
the people, and voting is just a means to that end. For that matter, the
will of the people is considered a good thing because it's assumed to be
the straightest route to the greatest good of the greatest number.
There's a mass of assumptions to unpack.
Assumption 1: the greatest good of the greatest number is a desirable
goal. So far, so good. I think everybody agrees on that (but maybe only
because we're assuming it's bound to include us).
Assumption 2: Most people will vote in their own self-interest and in
the aggregate that will produce the desired result. The greatest good of
the greatest number is the outcome of majority rule with no further
effort needed. Assumption 2 has not been working so well for us.
Possibly that has to do with the further assumption, Assumption 2a, that
most people will apply enlightened self-interest. Enlightenment of any
kind is in short supply.
The other problem is Assumption 2b, which is that majority rule will
produce the best results for everyone, not just for the majority. There
is nothing to protect the interests of minorities with whom the majority
is not in sympathy.
Assumption 3: Tallying the votes will show what the majority wants. This
is such a simple idea it seems more like a statement of fact, and yet
reality indicates that it's the easiest part of the process to rig. Most
of the time, people don't know what they want. Or, to be more precise,
they know what they want — a peaceful, happy, interesting life for
themselves and their children — but they're not so sure how to get
there. So a few ads can often be enough to tell them, and get them
voting for something which may be a route to happiness, but only for the
person who funded the ads. Besides the existential difficulty of
figuring out what one wants, there's also a morass of technical
pitfalls. Redrawing voting districts can find a majority for almost any
point of view desired. Different ways of counting the votes can hand
elections to small minorities. Expressing the will of the majority, even
if that is a good idea, is far from simple.
However, although democracy's failed assumptions have become clearer,
one of the more optimistic ones is unexpectedly successful. (Unexpected
to me, at any rate.) Evidence shows that a large group is surprisingly
better at guessing right than most of the individuals within it. This
has the catchy name of the "wisdom of crowds," although
wisdom is perhaps overstating the case. It implies that majority rule
ought to work a lot better than it does.
I think the reason it doesn't is not far to seek. The experiments
demonstrating crowd smarts rely on a couple of preconditions. Everybody
in the group must have much the same information. And everybody in the
group has to come to their decision /independently/. The implications
for how elections must be run are obvious and rather diametrically
opposed to how they're run now. Just for a start, it implies no campaign
advertising and no opinion polling.
I've argued repeatedly that people won't put effort into issues of
little direct interest to them. Governing the country is one of those
distant activities. It's a background noise to their primary concerns,
and the best thing it can do is be quiet. Achieving a condition of
effective yet quiet government takes a lot of knowledge, attention to
detail, consideration of implications, and preventive action. In short,
it takes professionals. Nobody is going to do the work required in half
an hour on Friday night after the barbecue. Even if they had that half
hour, they'd find more fun things to do during it.
The fact that people will not spend time on what feels like boring
homework means we're applying democracy at the wrong end of the process.
Voters aren't much good at governing — anyone who lives in California
needs no further proof of that. They're not even good at finding other
people who can govern — anyone who lives on this planet can see that.
It isn't that voters aren't smart enough. They are. The problem is that
hiring is a boring job. People get paid to do that. If they're not paid,
and are not responsible for the performance of the employee, there's no
way enough people will spend time carefully considering a candidate's
resume, studying their past performance, digging up clues about what the
candidate really did, and trying to form an accurate assessment of
probable future performance. Voters want someone who doesn't put them to
sleep, at least long enough to do their civic duty and decide for whom
to vote.
That's one of those things everybody knows and nobody mentions unless
they're being funny or cynical. As in, for instance, the following
ironic comment about a British candidate's campaign strategy:
David Cameron is direct about how the next election is not one of
ideological revolution (which would only flame existing suspicions
that the British have about the extent of the modernization of the
Tories) but rather who would better manage the economy and government.
Because electing the best technocrat is really inspirational to
casual voters.
So voters get what they want, more or less inspirational campaigns. When
it turns out that doing the job requires competence, it's too late.
Voters are no good at electing leaders. They should be unelecting them.
(I'll discuss below alternate ways of finding officeholders.) Figuring
out what's right is hard, but noticing something wrong is easy. We
shouldn't be looking to voters to craft solutions. They'd be much better
at smashing messes into small enough pieces to cart away.
Of course, nobody could govern if they had to face recall elections
every morning. There needs to be a threshold. The discontent needs to be
widespread enough to be more than individual dudgeon or the machinations
of interested parties. Ways to measure discontent need study because
there's a fine line between capturing every free-floating anxiety out
there and making the process too difficult to be effective at policing
officials. Petitions are one traditional way to measure discontent.
Whether they're the best way, I don't know. (An example of possible
implementation is given below.)
If 5% of the voters in the area concerned — city, state, nation, or
world — or 500, whichever was larger, signed a petition for recall, and
the whole recall effort was volunteer and uncompensated, then the recall
election would be held soon after, say six weeks. The concerned parties
could present their arguments in voter information booklets. They could
have real debates, and answer real unscripted questions from real
unscripted people, but they could not advertise during that time (or any
other time).
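For concreteness, the threshold rule in that example can be written out
as a short Python sketch (the 5% and 500 figures come straight from the
example above; the electorate size is made up):

    def recall_triggers(signatures, registered_voters):
        # Example rule: 5% of the voters in the area concerned,
        # or 500 signatures, whichever is larger.
        threshold = max(0.05 * registered_voters, 500)
        return signatures >= threshold

    print(recall_triggers(600, 8000))   # True: threshold is max(400, 500) = 500
    print(recall_triggers(450, 8000))   # False: 450 falls short of 500

The 500-signature floor matters in small jurisdictions, where 5% could
otherwise be a handful of people.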
The example is just to give some indication of what I mean. However
dissatisfaction is measured, the essential element is that the process
must be immunized against influence either by members of the public or
by the officials, which is also why the time frame involved should be
short. Those wise crowds only appear in the absence of slant.
One more safeguard is needed. Consciousness of scrutiny can produce good
behavior, but if the scrutiny only appears on a schedule, the same
becomes true of the good behavior. Officials should be subject to random
checks for which they can't prepare. Effective methodology needs study
and trial in practice, because until it's been tried there is no fund of
experience on what works. As an example of what I mean, something like
random "snap unelections" might be useful. These might list those in the
top five percent for complaints, even without formal petitions against them.
I am not trying to argue in favor of a particular method. Effective
methods can best be found by means of longitudinal studies. The point is
that the most effective methods known at the time are the ones to apply,
and they should be able to make officials do their jobs in the interests
of the greatest good all the time, not once every few years.
This is also the place to discuss some of the minutiae of the voting
process: delimitation of districts and methods of counting votes.
The whole process of drawing districts has reached ludicrous
nonfunctionality in the US. I'm not familiar with the situation
elsewhere, but it's such an obvious way to enable politicians to choose
their voters, rather than the other way around, that it probably crops
up everywhere when it's not actively stopped. And stopped it must be. It
makes a mockery of the whole idea behind democracy, whether the voting
is used for elections or unelections.
Political districts are simply a matter of population and ease of access
to the seat of government. (And polling places, if the technology is at
a level where neither mail nor electronics is reliable enough.) That is
a problem in geography, not politics. Districts need to be drawn by
technicians and scientists with degrees in geographical information
systems, not by politicians or judges or other amateurs. In very spiky
situations with concerns of bias, three separate groups of technicians
could draw the lines, one group from each side and another one from
halfway around the world, and the results agreed upon, or averaged, or
whatever other solution works to provide an objective outcome. But
however it's done, this critical step in government can't itself be part
of the political process.
One objection voiced against purely population-based districts in the US
is that some have been drawn to protect minorities, and drawing them
otherwise will reduce those protections. This is an attempt to solve a
problem by not addressing it. It's like the old joke about dropping a
coin around the corner, but looking for it under the streetlight because
it's easier to see there. Minority rights need protection. There's no
question about that (and I'll discuss some methods in a moment). So they
should be directly protected. It makes no sense to hope that some kind
of protection might emerge from an irrelevant kabuki dance on another
topic. The fact that the dance is feasible under the current political
system is as relevant to the desired outcome as the streetlight is to
finding the coin.
The method of tallying votes is another bit of arcana that has come to
wider attention in the US after the problems with the 2000, 2002, 2004,
2006, and 2008 elections. (It's getting hard to pretend that these are
all unfortunate exceptions with no pattern.)
The bottom line is that the method of adding up the votes can produce a
variety of results, as the following example shows. The example is taken
from an article in Science News, Nov. 2, 2002, by Erica Klarreich.
Fifteen people have
beverage preferences. Six prefer milk first, wine second, and beer
third; five prefer beer first, wine second, and milk third; and four
prefer wine first, beer second, and milk third. (Example from work she
cites by Saari.)
Plurality voting is conceptually easy, it's very commonly used, and it's
the worst at expressing the will of the voters. (In the example, 6 first
place votes for milk is greater than 5 for beer, which is greater than 4
for wine.) It differs from majority voting in that the latter requires
the winning choice to have more first place votes than the other choices
combined, in other words to have more than half the total votes. There
is no majority winner in the example, and a second election between the
two top candidates would be needed in a majority vote system.
In instant runoff voting, the choice with fewest first place votes is
eliminated, and those voters' second choice is used in the tally. (In
that case beer would win because 6 votes for milk is less than 5 plus 4
for beer.)
Intuitively, it seems that the third system, a Borda count in which
voters rank their choices and each rank earns points, ought to capture
people's intentions, but the actual result is to favor the
least-disliked alternative. (In the example, wine is the first choice of
only 4, but it is disliked by nobody. So if first place choices earn 2
points, second place 1, and third 0, then the tally is: 12 for milk,
which is less than 14 for beer, which is less than 19 for wine.) I've
seen this method at work in an academic department where it was mainly
notable for favoring the blandest candidate, since that one had the
fewest enemies.
A peculiar paradox can occur in multi-round voting that wouldn't occur
in a single round, according to Saari. This is true whether the second
round happens on another date or immediately after the first tally, as
with instant runoff. Increased popularity can actually cause a
frontrunner to go down a notch in the tally and lose the election. The
process is described in Klarreich's article. Paradoxical outcomes, where
the most popular choice doesn't win, have been studied mathematically by
Saari, who's pointed out that plurality voting leads to the most
paradoxes by far.
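Because the divergence among these methods is easy to lose track of in
prose, here is a minimal sketch in Python (illustration only; the
ballots are exactly the fifteen from the Saari example) showing
plurality, the 2-1-0 point ranking, and instant runoff each producing a
different winner from the same ballots:

    # The fifteen ballots from the example: six milk-wine-beer voters,
    # five beer-wine-milk voters, and four wine-beer-milk voters.
    ballots = ([("milk", "wine", "beer")] * 6
               + [("beer", "wine", "milk")] * 5
               + [("wine", "beer", "milk")] * 4)

    def plurality(ballots):
        # Count first-place votes only.
        tally = {}
        for b in ballots:
            tally[b[0]] = tally.get(b[0], 0) + 1
        return tally

    def ranked_points(ballots):
        # Borda count: first place earns 2 points, second 1, third 0.
        tally = {}
        for b in ballots:
            for points, choice in enumerate(reversed(b)):
                tally[choice] = tally.get(choice, 0) + points
        return tally

    def instant_runoff(ballots):
        # Repeatedly drop the choice with the fewest first-place votes
        # until some choice holds more than half the ballots.
        remaining = {c for b in ballots for c in b}
        while True:
            tally = {c: 0 for c in remaining}
            for b in ballots:
                top = next(c for c in b if c in remaining)
                tally[top] += 1
            leader = max(tally, key=tally.get)
            if tally[leader] * 2 > len(ballots):
                return leader
            remaining.discard(min(tally, key=tally.get))

    print(plurality(ballots))       # milk 6, beer 5, wine 4: milk wins
    print(ranked_points(ballots))   # wine 19, beer 14, milk 12: wine wins
    print(instant_runoff(ballots))  # wine is eliminated first, so beer wins

Three counting rules, three different winners, all from the same fifteen
ballots.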
There is also a method called cumulative voting. In the easiest-to-use
variant, each voter gets as many votes as there are alternatives. If
it's a simple up or down issue, like a recall, the number of votes per
person might be one or some agreed upon number for the sake of minority
protection (see below). The voter can then distribute those votes
however they like. Cumulative voting was studied extensively by Lani
Guinier, who published The Tyranny of the Majority (1994), among many
other writings. More recently, it has also been studied by Brams (2001,
2003), among other publications. As Brams notes in 2003, "The chief
reason for its nonadoption in public elections, and by some societies,
seems to be a lack of key 'insider' support."
In the beverage example above, cumulative voting would give three votes
to each voter. A beer fanatic could put all three votes on beer.
Somebody who likes all three drinks equally could give one to each. The
outcome of cumulative voting is not possible to predict mathematically,
since the distribution of votes depends on the voters. And that is as it
should be.
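How cumulative voting shelters a focused minority is easiest to see in a
multi-seat race, so here is a hedged sketch in Python with entirely
made-up numbers (a hypothetical at-large council election, not data from
Guinier or Brams): a cohesive 20% of the electorate that concentrates,
or "plumps," all its votes on one candidate guarantees itself one of
five seats.

    # Hypothetical at-large election: five seats, each voter gets five
    # votes to distribute however they like.
    SEATS = 5
    MINORITY_VOTERS = 20   # a cohesive 20% of a 100-voter electorate
    MAJORITY_VOTERS = 80

    tallies = {}
    # The minority plumps: each of its voters puts all five votes
    # on the same candidate.
    tallies["minority candidate"] = MINORITY_VOTERS * SEATS    # 100 votes
    # The majority spreads its 400 votes evenly over six candidates.
    for name in ["A", "B", "C", "D", "E", "F"]:
        tallies[name] = MAJORITY_VOTERS * SEATS / 6            # about 67 each

    winners = sorted(tallies, key=tallies.get, reverse=True)[:SEATS]
    print(winners)   # the minority candidate tops the poll and takes a seat

No districts had to be drawn and no minority had to be defined in
advance; the concentration of votes does the work by itself.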
Cumulative voting has been applied in practice most notably, as far as
I'm concerned, in the Australian Territory of Norfolk Island.
That's the closest it comes to being used in an autonomous
administrative unit. In general, cumulative voting is applied where
minorities are powerful enough to insist on being heard, such as
corporate boards, or, in the US, where it's mandated by the federal
government to rectify injustices under the Voting Rights Act. That
indicates its effectiveness at thwarting majority dictatorship. However,
there are also potential disadvantages in that the system can be
vulnerable to insincere tactical voting. Given the degree of
coordination required for tactical voting, cumulative methods could be
more subject to gaming in the smallest elections. In large-scale ones,
it seems to me that the benefit of mitigating majority dictatorship far
outweighs the small likelihood of successfully coordinating insincere votes.
Scholars of voting methods could, no doubt, come up with further
improvements that would make the likelihood of tactical voting even
smaller. Furthermore, to be successful, insincere voters would need to
have good advance information about likely election results. But in the
system described here, opinion polling is not allowed because it
interferes with the independence of decision-making.
When voters need to decide among an array of possible policies rather
than candidates, it is especially important to use voting methods that
reflect voters' wishes as closely as possible. Since determining policy
is one of the major uses of voting in the system I'm envisioning, it's
critical to use something more accurate than pluralities.
Another advantage of cumulative voting is the relative simplicity of
designing understandable ballots. Picture a ballot with three bubbles to
fill in next to each choice. It's simple to point out that putting all
one's votes on beer, say, would use them all up and would weight beer
the most heavily.
One big problem with majority rule, even assuming that large groups have
a way of making the right decision, is that "majority," /by itself/ does
not mean "large group." Even a mere unelection with only two choices
could deliver a bogus result if only a tiny minority of voters bother to
cast a ballot. Then an even tinier minority decides the issue, and
there's no crowd to generate wisdom. A quorum is an essential
requirement of any election that's supposed to be fair.
The quorum, I think, should be set quite high, such as 75% or 80% of the
population voting in the particular election. In a fair society,
registering to vote and voting must be as effortless as is consistent
with honesty. If most of the population can't be bothered to express an
opinion when doing so is as simple as returning a postage-paid envelope,
then the issue in question didn't need a vote. It stays as it was, and
the election officials try to figure out how to prevent frivolous
elections from being called.
Elections could be called by petition or by some other indication of
popular will or by the relevant officials, and could be held at any
time, although not oftener than once per quarter. Government should be
unobtrusive to avoid citizen fatigue, and having elections every few
days would not satisfy that requirement. Since a primary function of
elections is oversight, not leadership, the ad hoc timing is
intentional. It would reduce effectiveness if they were regularly scheduled.
I'm assuming it goes without saying, but perhaps I should be explicit
about the remaining aspects of voting. The process must be as easy as is
compatible with 1) secrecy of the ballot, 2) fair voting, 3)
transparency of the process, and 4) maintaining tamper-proof records
that can be recounted later.
Minority Protections
The issue of minorities is a structural weakness in democracies.
Majority rule is a defining feature of democracies, so they’ll
necessarily be less than ideal for minorities. Saying “Get used to it”
is not a solution. Majority rule is not some desirable goal in itself.
Its function is to serve the greatest good of the greatest number.
However, people tend to ignore any problems that aren't their own, and
so a minority can get stepped on for no reason at all. Minority
protections counteract obliviousness in majorities, and they enable
minorities to require consideration of their legitimate needs. Those
protections are vital because without them fairness is lost for some,
and since it only exists when shared by all, then it's lost for all.
"Fairness" limited to a few is just privilege, and a dictatorship of the
majority is still a dictatorship.
Various methods of protecting minority rights have generally been
designed with a view to a specific minority. For instance, part of the
purpose behind the number and distribution of Senators in the US was to
make sure the less-populous states had louder voices in the Senate. At
the turn of the 1800s when the system took shape, the important minority
was farmers. Farmers remain a minority and they remain important, but
they were and are far from the only minority group whose rights need
protection.
What's needed is a method to give any minority a voice. As is generally
the case, what's right is also what's necessary for long term survival.
Ignored minorities are endangering democracies, or preventing their
birth in a pattern that's far too common. Sectarian, tribal, ethnic, and
racial strife are endemic at this point. Often it's fomented by
politicians with ulterior motives, but they wouldn't be successful
without a fertile substrate for their nasty work. By facilitating
gradual change, democracies are supposed to prevent the need for violent
overthrow, but their inability to accommodate minorities is leading to
predictable results.
The most promising idea I've heard for giving minorities a voice is
cumulative voting, discussed by Lani Guinier (1994), which I described
earlier. Although it would not help a minority against an equally
committed majority, which except in unusual circumstances is as it
should be, it would help a focused minority against a diffuse majority.
People who felt strongly about an issue could concentrate all their
votes on one choice and potentially prevail. An advantage of building
minority protections into the vote tallying method is that minorities
don't have to be defined either beforehand or for the purposes of the
vote. Any group that feels very strongly about an issue has a way to
make its voice heard.
(There is a point of view that democracies are defined by majority rule,
and any dilution of it is somehow wrong. Apparently, there's something
to this effect in Robert's Rules of Order. That idea could only
be valid if there's no difference between democracy and majority
dictatorship. If the idea behind democracy is fairness, which is the
only meaning that makes it a desirable form of government to most
people, then one has to do what it takes to prevent majority
dictatorship. That necessarily involves curbing the power of the majority.)
Sometimes, a minority might need greater protection than that afforded
by cumulative voting. For instance, consider the case of a small group
speaking a rare language who wanted schools to be taught in their mother
tongue as well as in the dominant language. A country's official
language is hardly a matter of principle. It's something that could be
appropriately decided by vote. And yet, if it is, all minor languages
will die out. Whether or not one sees the value of diversity (I happen
to think it's huge), the fact is that it doesn't hurt anyone for the
smaller groups to preserve their languages. In order to do it, though,
they may need actual preference, not just improved equality, to ensure
their ability to live as they like.
A similar situation is an ethnic group which is outnumbered in its home
area by immigrants. (For a while that was true of Fijians and
Fijian-Indians, for instance. Or, as another example, the American
Indians and all the later arrivals.) Although the new place is now home
to the immigrants, and it's not as if they necessarily have a country to
go back to, it's just not right for a people and a culture to be
obliterated at home by the weight of a majority. In that case, too, I
could see a weighted cumulative voting system that protects the cultural
viability of the original inhabitants.
It's possible that majority rule would be less of a problem in a fair
society, since all matters of rights are decided by law, not by vote.
Only matters of opinion and preference could be subject to votes, so
minority protections in voting serve as an added safeguard, not as the
only one.
Officeholders
We're used to the concept that positions requiring special skills are
filled by appointees selected on that basis. The heads of a nation's
financial oversight, environmental oversight, or air traffic control are
not elected. Judges are appointed, sometimes for life in the hope of
ensuring their independence as well as their knowledge. Those
jurisdictions where judges are elected (there are some, at least in the
US) tend to show up in the news as examples of appalling ignorance and
bad judgment. It would be self-evidently foolish to expect voters to
evaluate someone's qualifications to run, say, the food safety division.
Yet when it comes to running the whole state, we revert to a Romantic
notion of the purity of the amateur. A heart full of good intentions is
supposed to work better than a skill set. For all I know, that might
even be true. We'll never know because there is no way to reliably find
these pure amateurs. Set aside worries about purity for the moment, and
consider elections. There is no universe in which amateurs emerge
victorious from a system that requires a small or large fortune and a
life remade to fit it. We're getting professionals. They're just
professionals at something other than governing.
I realize there's nothing terribly new about those insights, and that
the objection to trying anything else is that it will be worse. It's
back to saying that "this is the least-bad system out there." That's
fine as far as it goes, but it's no reason to stop trying to do better.
I'm suggesting that professionals are needed if we want
professional-level workmanship. There are indeed two big pitfalls which
stop people from trying that obvious solution. The first is the
selection process. Judging skills is very difficult. A thorough
evaluation takes much more time than people are generally willing to
give, so the tendency is to go to a magic number system. Get a degree
from the right school, and you're in. Take a test that generates a
grade, and we're all set. The Imperial Chinese used elements of a
test-based bureaucracy for hundreds of years before they fell, and all
it led to was the most ossified system on the planet.
The selectors themselves are another weak link in the selection process.
It takes one to know one, and the selectors would have to be at least as
adept at governing as the pool of candidates. If they were, though, why
would they be hanging around, reading applications? And that doesn't
even touch on the problems of bias or corruptibility. At least with
hordes of voters, the thinking goes, it's not possible to buy all of
them and any bias cancels out. (It doesn't necessarily, but that's a
whole different topic.)
The second pitfall is oversight. Once a person has the power granted by
a government position, how do you make sure they're doing their jobs? If
they aren't, how do you prevent the elite cliques from covering for each
other? This factor has the steepest uphill climb against the tendency to
forgive all things to the powerful until it's too late.
It may, however, be possible to preserve enough randomness to prevent
fossilization, enough expertise to prevent incompetence, and large
enough numbers among the selectors both to preserve any wisdom the
crowd might have and to prevent corruption.
The thing to keep firmly in mind is that randomness is acceptable. It's
even good. We like to tell ourselves that our selection processes,
whatever they happen to be, are set up to find the best. And then
naturally we declare the person we have found to be the best, which
proves that the process works. On the evidence, this is a circular fairy
tale. A comforting one, but nonetheless nonfunctional.
There is an element of randomness in any selection process now
operating, whether it's hiring employees, ranking songs on the web, or
picking the market's stock favorites. Consider elections. Many factors
aren't controlled, and although that's not the same as random, it might
be better if it was. For instance, who runs for election to begin with
has little to do with job qualifications. So we already have randomness.
It's merely unplanned.
I'll give an example of a selection process just to illustrate what I
mean. Obviously it hasn't been tested, and it's only by a process of
trial, error, and study of the evidence that a really workable solution
can be found. For low level offices such as city councils or county
positions, people who felt they were qualified could put their names up
for consideration. They'd be expected to show experience relevant to the
position. If it was managerial, they'd be expected to show some talent
at managing, even if the evidence came from volunteer work managing a
kindergarten. If it required specialized knowledge, they'd be expected
to show training and some experience in that field.
The selectors would not choose the actual officials. They would choose
who among those who volunteered their names actually had
plausible-sounding qualifications. This part of the process would work
best in a computer-saturated society. Maybe it would only work in a
computer-saturated society, because the pool of selectors should be as
large as possible and their ranking of the candidates should be highly
redundant. In other words, many selectors should each look at candidates
independently and rank them as plausible or implausible. They shouldn't
know what ranks have been assigned by other selectors. Not all selectors
should have to look at all candidates, but only at as many as they had
the energy to read up on properly. Candidates with multiple "passes" and
few or no "fails" would advance to the next stage.
The selectors themselves would be chosen from among those in the
population who had the qualifications to evaluate the backgrounds of
candidates. It's the same general idea as the selection of the pool of
reviewers for academic journal articles. There's a large population with
known qualifications. They can opt out of the task, but they don't have
to do anything special to opt in. Here again computers are probably
essential. The database of people with the background to select
candidates for given offices would be huge. The database could be
compiled from an amalgam of job descriptions, both employed and
self-employed, and of school records showing who received which degrees.
If each application is evaluated by a few hundred people, then one
could hope that the crowd of independent and less-than-totally-ignorant
people would show a bit of wisdom.
The idea is that by having as large a pool of selectors as is consistent
with the basic knowledge needed to be one, the pitfall of ignorant
selectors is avoided. Possibly people would be willing to participate in
the process out of a sense of civic duty and volunteerism, much as they
do now on computer help forums or in the search for stardust in
aerogels. If that's not so,
then there could be a formal request system, rather like jury duty now.
Qualified people would receive notification to evaluate, say, three
applications, with a minimal penalty if they ignore it.
Once the selectors had narrowed down the pool of candidates to those
plausibly capable of doing the job —and that's all they would be doing,
they wouldn't be ranking them — then the actual officeholder should be
selected by lottery.
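Since this is essentially an algorithm, a small sketch may make it
concrete. Every number in it (reviews per candidate, what counts as
"few fails") is an arbitrary placeholder, and the names are made up:

    import random

    # Sketch of the two-stage process described above: many selectors
    # independently mark each candidate plausible or implausible, those
    # with multiple passes and few fails advance, and the office is then
    # filled by lottery. Thresholds are placeholders, not recommendations.

    MIN_REVIEWS = 200  # each application read by a few hundred selectors
    PASS_RATE = 0.80   # "multiple passes and few or no fails"

    def shortlist(reviews):
        """reviews: {candidate: list of booleans, True = plausible}."""
        return [c for c, marks in reviews.items()
                if len(marks) >= MIN_REVIEWS
                and sum(marks) / len(marks) >= PASS_RATE]

    def fill_office(reviews, rng):
        pool = shortlist(reviews)
        if not pool:
            raise RuntimeError("no plausibly qualified candidates")
        return rng.choice(pool)  # the lottery: no ranking within the pool

    rng = random.Random(0)
    reviews = {name: [rng.random() < p for _ in range(250)]
               for name, p in [("A", 0.95), ("B", 0.90), ("C", 0.40)]}
    print(shortlist(reviews))         # A and B advance; C does not
    print(fill_office(reviews, rng))  # drawn at random from A and B

Note that the sketch deliberately stops at "plausible or not": the
selectors never rank the survivors against each other, which is the
whole point.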
I know that selection by lottery sounds odd, but it's not really that
different from current systems except for two things. One is that the
pool of candidates would be narrowed to those who might be able to do
the job. Two is that the process would not select according to criteria
that have no relevance to the job. Both of these would be improvements.
Furthermore, a lottery system would prevent at least some selector bias,
and it would prevent the formation of impervious elite cliques. A plain
old lottery would have as good a chance of finding the best person for
the job as most of our current methods, and a much better chance if the
pool was narrowed down at the beginning to those who had a chance of
competence at the job. There would also be the added advantage that we
couldn't fool ourselves into thinking we were so bright that the person
selected necessarily had to be the best.
(This system differs in a couple of important ways from random
appointments in, for instance, the ancient Athenian concept of democracy
or from sortition. (Sortition? Who comes up with these terms? "Random
selection" would be too easy to understand, I suppose.) The random
selection operates on a non-random pool of people with some expertise.
The offices in question are those requiring expertise to administer
specific functions, not seats for representatives who form a
legislative body. Elections and voting are still part of the process,
but they function to provide oversight instead of to select government
officials.)
In a society of the far future, when fairness has been the standard for
so long that all people really are equal, then the composition of the
population of officeholders may cease to matter. We're far from there
yet. So one other use to which a lottery system could be put is to
ensure that those in office reflect the population they're supposed to
govern. The pool from which the officials are chosen could be limited to
those coming from the necessary groups. Half the positions could be
filled from the pool of males, half from females, and those in whatever
mix of races, ethnicities, castes, or whatever, is relevant. I'm not
suggesting that people from different groups can't feel for each other.
But it's a fact in this stupidly stratified world of ours that they
often don't, and that having people in government from otherwise
marginalized groups makes a beneficial difference.
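In code, that reflection amounts to running the lottery within
demographic pools rather than across one big one. A minimal sketch,
with the groups and shares invented for illustration:

    import random

    # Sketch: fill the available seats so officeholders mirror the
    # population, by drawing each group's quota only from that group's
    # pool of shortlisted candidates. Pools and shares are illustrative.

    def stratified_lottery(pools, quotas, seats, rng):
        """pools: {group: [candidates]}; quotas: {group: share of seats}."""
        chosen = []
        for group, share in quotas.items():
            chosen += rng.sample(pools[group], round(seats * share))
        return chosen

    pools = {"women": ["W1", "W2", "W3", "W4"],
             "men": ["M1", "M2", "M3", "M4"]}
    print(stratified_lottery(pools, {"women": 0.5, "men": 0.5},
                             seats=4, rng=random.Random(0)))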
Another advantage to this system is that the candidates would not have
to be good at anything except the jobs they're doing and hope to do (and
at expressing that in their background materials). They wouldn't have to
be able to stomach the campaign process in order to govern. They
wouldn't have to know the right people in order to run. They wouldn't
have to figure out which parts of themselves to sell to raise the money
campaigning takes.
Higher offices, those at the level of province, state, nation,
continent, or world, would be selected in much the same way with the
difference that the pool would only be open to those who had proven
themselves by, for instance, at least five successful years on a lower
rung. The time should be long enough to make sure that an official
couldn't create problems and then escape the consequences by moving up.
The track record needs to be complete enough to form a basis for judgment.
Success would be defined as complaints not exceeding some low percentage
(to be discussed in the Oversight section), and, certainly, a lack of
successful recall elections. It would also mean effective exercise of
foresight and prevention of larger consequences in problems that arose,
and it would mean a well-managed department and high morale among the
personnel who worked for that official in their lower rung capacity.
This would tend to make high officials relatively elderly, since some
time has to elapse before they can put their names in for national or
international jobs. I would see that as a good thing, since it
introduces a natural element of term limits. Ideally, the system would
not depend on rapid advances in anti-aging technology to fill the
highest slots.
The purpose of selectors at the higher rungs would be, again, checking
the backgrounds of the officials who had expressed an interest in a
higher position and removing the names of those who clearly did not
qualify. The selectors shouldn't, in this case, select who is in the
pool but rather who is out of it. The idea behind that is to reduce the
effects of selector bias.
An official's life under this system would not be a sinecure. The
following section on Oversight will show that it's even less of a walk
in the park than it seems so far. When the perils of a position are
great, then the compensation has to be proportionate. Miserliness about
the salaries of government officials would be a big mistake. They're
being asked to perform a difficult, demanding, and generally thankless
job, and asking them to do it for peanuts is counterproductive. Their
jobs, since they hold the well-being of the population in trust, are
more important than private sector jobs. As such, they ought to be
compensated proportionately more than private sector executives in
comparable positions of responsibility. As I'll discuss in the chapter
on Money and Work, I'm of the opinion that pay in the business world
should have some limits. With that caveat, the compensation of
government officials should exceed their business counterparts by some
percentage, like 10% or 20%, over the typical (modal) pay in the private
sector. The tendency of too many voters to resent any rewards for their
public servants has to be curbed.
One potential problem, even in the ideal situation of completely
competent technocrats, is that nobody would think creatively. There'd be
no great leaders, just good ones. Although that would be an improvement,
it still feels like an unnecessary compromise to renounce the
outstanding just to avoid the awful.
Maybe the continual and inevitable influx of outsiders in a
lottery-based system would be enough for new ideas to percolate in. If
that looked to be insufficient, then I would imagine that the two most
straightforward points to change would be the candidate pool and the
stringency of the selection process. Whether it's best to loosen
requirements or tighten them depends on what longitudinal studies of the
results of different methods show in practice. Possibly, evidence might
show other factors or other methods entirely are most associated with a
selection process that facilitates outstanding governing at any given level.
Whatever the method, it's important to guard against being too clever,
primarily because that's the likeliest excuse for trying to shift the
balance of power, but also for its own sake. We, meaning we humans,
really aren't that good at looking into each others' souls and finding
the best person for, well, anything. It's better to trust to the luck of
the draw. That is, after all, how we've selected our leaders for all of
human history, but without the benefit of requiring any evidence of
qualification at all.
Precognitive Decisions
There's also a cognitive aspect to decision making that any society
aspiring to sustainability, and therefore rationality, must consider.
I've stressed the need for informed citizenry and education, but all of
that is layered on top of much quicker and less conscious modes of
thinking. Rational decision-making requires awareness of that level,
where it's pushing, and whether that's the desired direction. I'll go
over some of the symptoms and factors involved.
Decisions require commitment to a given path and therefore a loss of
other options. Without a sense that the decision is right, the process
is uncertain and unpleasant. In the interests of avoiding that, we do
what we can to feel sure, and that's where the trouble starts.
That conviction of being right can come from ignorance, as in the
Dunning-Kruger effect (which should be a joke, but isn't). That
research showed something most people
have observed: the incompetent rate their own abilities more highly than
the skilled, precisely because they don't know enough to realize how
little they know. Education may not fix that, because the ability to
absorb the information may be lacking.
Or the conviction can come from knowing it all. Confidence in being
highly trained can lead to ignorance of bias. Education cannot fix that.
Examples can be found in any field requiring a high level of skill
where, for instance, experts hold on to pet theories against the
evidence or where there is discrimination against women or minorities.
In other words, all of them. I'll mention two examples.
Orchestras hired men because they played better. Once hiring committees
started having auditions behind screens, where the actual music was the
only information available to them, they hired more women (Goldin &
Rouse, 2000).
More recently, women have actually done better than men in blind
selection processes, implying that not only were the experts wrong in
their assumptions about men, but also that women as a whole are more
than equally good. They're better. (A not unexpected
outcome when mediocre women are less likely to attempt careers than
mediocre men.) But it came as a big shock to many of those doing the
hiring that something besides their superior knowledge of music was
influencing their decisions. (I'd actually be surprised if quite a
number didn't think there was something wrong with blind selection
rather than with their decision-making.) Unawareness is just that.
Unawareness. Saying "I know I would never do that" only confirms the
lack of awareness if one has just finished doing precisely that.
Another example is the prevalence of tall male CEOs. The hiring
committees spend months on these decisions, examining all sorts of
minutia. Yet somehow, the end result of all the conscious thought is
that tall is better. That's true for a chimpanzee trying to lead a
troop. There's not a shred of evidence to say it's true for the skills a
CEO needs.
That is an important point. The emotional reaction, the "first
impression," the judgment made in those first few minutes before there's
any chance that evidence could come into the result, that emotional
reaction is not merely a confounding factor that takes effort to
discount. It takes /precedence/ over reason. Reason then serves to
justify the decision already taken. The results tell the story, not the
convictions.
And that relates directly to the "intuitive betrayal hypothesis"
(Simmons & Nelson, 2006), which
refers to acting against intuition. The "hypothesis predicts that people
will be more confident in their final decisions when they choose the
intuitive option than when they choose a nonintuitive alternative." Not
only is that first impression obtained without /relevant/ evidence, but
subsequent evidence that contradicts it is discounted because of the
confidence conveyed by gut feelings. The emotional confidence doesn't
arise when unsupported by intuition, so counterintuitive decisions never
feel "right," no matter how much evidence supports them.
Gut feelings operate with the same level of analysis available a million
years ago. That doesn't make them useless. They may work better than
expected for avoiding danger or finding a sex partner. But they lack the
ability to parse modern situations that didn't exist then. And yet it
doesn't feel right to go against them. In short, we have a problem.
(I know this runs contrary to much of the recent motivational press
about trusting gut feelings. That's a welcome message because it
confirms what we want to do anyway. On the evidence, however, it does
not lead to reality based decisions. Don't be surprised if your
intuition says otherwise.)
Either way, whether from ignorance or smugness, it's important not to
assume that people will make decisions based on the best available
information. They will do that when it's the only available information.
Not being omniscient, we don't necessarily know what the best is, but
the totally extraneous can be easier to determine. Thus, for instance,
politicians' hair styles can be confidently stated to have no predictive
value for indicating their legislative abilities. Everybody, including
all voters, knows that. And yet, when voters have that information, it
becomes a big enough factor to predict the winner.
We need to consciously deprive ourselves of the information used in
"first impressions" and force ourselves to consider evidence first, and
then force ourselves to reconcile any large incongruity that crops up
between intuition and evidence-based choices. We need to force
ourselves to be aware of the information coming from intuition and to
consciously examine whether it's pushing in a fair and sensible direction.
There are some people who can examine their assumptions and force
themselves to ignore intuition and go purely by evidence. That's the
essence of scholarly work, and it takes most people years of training.
Even with all that training, scholars are notorious for holding to
erroneous pet theories. Without all that training, the odds are much
worse. So, although everyone who reads this will know they, personally,
have no trouble discounting emotion, in the aggregate there are too few
who can do that. Even with all the education in the world, rationality
must be actively and consciously given priority and extra weight if it's
to have a chance of taking its optimum place in decision making.
Laws
The biggest breakthrough in creating an egalitarian legal system came in
the days of the ancient Romans. The laws, they said, should be written.
That way they were accessible to everyone. They applied equally to
everyone (officially), and all people had the same information as to
what they were.
The only part of this noble ideal that has been preserved, now that most
people can read, is that the laws are written down. To make up for that,
they're incomprehensible, inaccessible to anyone outside a select
priesthood, and applied unequally based on access to that priesthood.
It's an egregious example of how to deprive people of their rights
without raising any alarm until it's too late. That's not knowledge
that's been lost since the time of the ancient Romans. More recently,
Thomas Jefferson made the same point.
I know that the official justification for legal language is precision.
If the laws were written in English then, God help us, we'd have to
figure out exactly which implication out of many was intended. Yet,
oddly enough, nobody seems to know what the stuff written in legalese
means until it has been, as they say, "tested in the courts." And then the
decision handed down becomes a precedent, that is, an example of what
that law means in practice. In order not to get lost in a forest of
possible interpretations, it's customary to follow these precedents.
Yet, officially, the reason we need legalese is because otherwise there
could be different possible interpretations.
Furthermore, tell me which has the more unequivocal meaning: George
Orwell's essay "Politics and the English Language" or any legal
document you care to dig up. The whole argument that legalese is somehow
more precise is nothing but a smokescreen. Its precision is an illusion
generated by the common consent of an ingroup to exclude outsiders. The
meaning of words is always a matter of common consent. When the meaning
pertains to the laws of the land, that consent truly needs to be common,
and not limited to a priesthood.
Laws in a fair society would have to meet a plain language standard.
There are various ways to measure adequate clarity. One, for instance,
would be to make available Orwell's essay above, or one of Krugman's
newspaper columns, or any other clear piece of writing about difficult
concepts, and see whether people who understood them correctly also
understood what a new law said. If people who have no trouble
understanding plain language misunderstand the new law, it needs to be
rewritten until it says what it's trying to say. Areas of the law that
apply only to a specialized subset of people, maritime law for instance,
would need to be comprehensible to the target group, not necessarily to
those who aren't relevant. Plain language laws are one more thing that
would be much easier to do in a computer-saturated society. A few
hundred independent readers would be plenty to get an accurate
assessment of comprehensibility.
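To put the benchmark idea in concrete terms, here is one way the test
might be scored: count only readers who demonstrably understood the
control text, then ask what fraction of them also understood the draft
law. All the cutoffs below are placeholders.

    # Sketch of the comprehensibility test described above. Each reader
    # answers questions on a control text (a known-clear essay) and on
    # the draft law. The law counts as plain enough only if readers who
    # understood the control also understood the law.

    CONTROL_PASS = 0.8  # understood the known-clear control text
    LAW_PASS = 0.8      # understood the draft law
    MIN_READERS = 200   # "a few hundred independent readers"

    def law_is_plain(readers):
        """readers: list of (control_score, law_score), each 0.0 to 1.0."""
        competent = [law for ctrl, law in readers if ctrl >= CONTROL_PASS]
        if len(competent) < MIN_READERS:
            raise ValueError("too few qualified readers for a verdict")
        passed = sum(1 for s in competent if s >= LAW_PASS)
        return passed / len(competent) >= 0.9  # placeholder threshold

If people who had no trouble with the control text stumble on the law,
the law goes back for rewriting, not the readers.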
Another aspect of clarity besides plain speech is plain organization. At
this point new laws which are revisions of older laws which expand on
yet older laws which rest on seriously old laws written in an idiom so
out of date they need to be translated are all tucked away in separate
books of law and referenced in a Gordian knot of precedents. People get
degrees for their ability to track laws through thickets much worse than
that sentence. That situation, too, is rife with opportunities for
conferring advantage on those in the know. It's not acceptable in a
fair society, whose legal system cannot be allowed to work better for
some than for others.
The situation is analogous to the issues faced by programmers trying to
track the changing code of a program. They've come up with version
control systems to handle it. A very small subset of what they do is
also in document management options that can track changes to text. As
Karl Fogel
notes,
version control has application to a wide range of situations in a world
where many people make changes to the work of other people. Staying for
now with legal codes, a version control system identifies
* who introduced or changed which item, whether line, paragraph, or bill,
* which other items it relates to, influences, requires changes in, or
negates,
* and which items are currently proposed, being reviewed, or accepted
into the main body of code.
Just the first item, showing who authored which bits of proposed
legislation, would be a large change from the current opaque system. In
the software world, one way to check authorship is the command "git
blame." Computer geeks have a sense of humor, but it's nonetheless true.
A further improvement needs to be made to an eventual legal version
control system. The branches and webs of related laws need to be easy to
find, search, access, and understand by ordinary users with no previous
training. That will take invention of new methods by experts in the
presentation of information, because as far as I know, no such thing is
available in any field yet.
I've been discussing laws as if they arrive by immaculate conception,
which was unavoidable since I've eliminated the elected class of
legislators. Who, in this system, are the lawmakers?
There are different aspects to the process of creating laws. Proposing a
law to address an overlooked or new problem is just one step. Then it
must be checked for redundancy. If not redundant, it needs to be written
clearly, then debated, and finally passed. The checking and writing
parts are specialist functions that require skilled officials as well as
general input from the public.
The best originators of laws are hard to pin down. Philosopher kings,
plain old kings, and priests have always made a mess of it, so far.
Democracies generally have settled on the idea that there's really no
way to know. Greengrocers or dancers or doctors of law can be elected to
office and their interest in the job is supposed to be a hopeful sign
that they'll write laws worth having. Their aides help them with the
legalistic bits. That system has worked better than the wise men system,
so why not go with it all the way? Anyone can propose a law if they see
the need for it.
Again, I'm serious. Allowing anyone to propose laws is no different from
allowing legislators to be elected regardless of background for the
purpose of making laws. The only difference is that by making the
process open there's the chance that the Queen and King Solomons among
us may contribute. There's the chance, of course, that the really stupid
will also contribute, but we seem to be surviving their help in the
national assemblies of the world. Furthermore, under an open version
control system, proposing the law is just the first step. The checking
and debating (in papers, or internet fora, or real ones) will have the
attention of professionals as well as the general public, and that would
likely be a more stringent selective process than what we have now. The
professionals would also have the responsibility of formally writing it
down and checking the text for clarity. Redundant or superfluous laws
could be weeded out at this stage, as could ones that contradict rights
or accepted laws.
There are some areas of law which are necessarily specialized, but the
modifications required in this system are minor. Plain language in that
case means plain to the people who will be using the law. The reviewers
will be those affected by it. The professionals checking its wording and
validity would have the relevant specialized background. The principle,
however, remains the same.
Then, last, comes the decision whether to make the new law part of the
active code. I see that as another matter best left to the wisdom of a
large but qualified crowd. The selectors of laws would be all those with
the professional qualifications to know whether the new law agreed with
guaranteed rights. A quorum would be a few hundred, and the majority
favorable should be much larger than half, such as three quarters or
even more. Their function is to determine constitutionality, and if
they're rather evenly split, that's not a good sign. As with my other
examples of methods, there may be better ways to achieve the goal of a
streamlined, functional, adaptable, and equitable legal code. The point
isn't the method but the goal.
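Stated as code, the decision rule is almost trivially short, which is
part of its appeal. The quorum and supermajority below come straight
from the paragraph above:

    # Sketch of the ratification rule: a quorum of a few hundred
    # qualified reviewers and a supermajority well above half.

    QUORUM = 300          # "a few hundred"
    SUPERMAJORITY = 0.75  # "three quarters or even more"

    def ratified(yes, no):
        votes = yes + no
        if votes < QUORUM:
            return False  # no quorum, no decision
        return yes / votes >= SUPERMAJORITY

    print(ratified(yes=260, no=60))   # 81% of 320: passes
    print(ratified(yes=170, no=150))  # a near-even split: fails, as it should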
Moving on from the making of law to its practice, there is at least one
systemic lapse that increasingly prevents justice instead of ensuring
it. The time required to reach a verdict is appropriate only for
bristlecone pine trees. Justice delayed is justice denied, and by that
measure alone there's very little justice to be had. This fact has not
been lost on corporations who have been using lawsuits as a weapon. The
Intel-AMD lawsuit comes to mind as an example. The few decisions actually handed down
have said that Intel did abuse its market position. But then Intel
appeals again, or they delay trial by finding new emails, or some other
thing. Even when it's the US Supreme Court, Intel disagrees with the
decision and is appealing. I wasn't aware that it was possible to appeal
a decision by the Supremes, but one thing is eminently clear. Intel
continues the disputed practices and AMD loses by it. Meanwhile, the
courts labor over stalling tactics as if they meant something. (Update
2009-11-11: the endless suit was finally settled out of court. AMD
settled for much less than their real damages, no doubt because they
gave up hope of justice instead of delays.) By such means a good thing,
due process, is turned into its opposite. The situation is so egregious
that, on occasion, the lawyers themselves point it out.
Simple cases should be decided in days or weeks, medium ones in months,
and complicated ones in less than a year. The legal system needs
deadlines. It will improve, not hurt, the cause of justice if courts
and lawyers have to focus on the merits of cases instead of every
possible minutia that
can be tacked onto the billable hours. There is no human dispute so
complicated it can't be resolved in that time unless there's determined
obfuscation. Even the US Supreme Court feels that a lawyer can
adequately summarize a case in half an hour. All aspects of hearing a
case can be given short and reasonable time limits.
The way cases are tried in our current system, with judge(s), lawyers,
juries and the lot does not necessarily have to work against fairness.
But from what I've seen myself, the jury system seems rather hit or
miss. The deliberations for one case I was on were surprisingly short
because one of the men had booked a fishing trip for the afternoon on
the assumption he wouldn't actually be called for duty. I happened to
feel we came to the right decision as it was, but those who didn't were
definitely quashed by an agenda that had nothing to do with the law.
There are thousands of stories along those lines and they are
disturbing, but possibly they could be fixed. Maybe the system would
also work better in a plain language context with short trials that
didn't waste everyone's time. Below is an example of an alternative way
to structure a judicial system.
Legal professionals are selected by the methods already outlined or by
earning a degree. In other words, selectors can decide that some
combination of legal training and experience has provided knowledge of
the law and of rights equivalent to a degree.
Amateur juries are replaced by panels of legally trained people that are
chosen with input from the two sides. Each side chooses from one to
three of these professionals, depending on the importance of the case or
the point in the appeals process. Those professionals, call them
"judges" perhaps, then agree on another one to three, for a total of
from three to nine on a panel.
The panel familiarize themselves with the case, hear the arguments of
the advocates or the two opposing sides if they haven't hired lawyers,
ask the two sides questions, and ultimately hand down a decision and a
sentence, if any. The voices of the "middle" members of the panel could
be weighted more heavily if necessary through a cumulative system, as in
voting. Two further appeals with newly selected, and maybe larger and
more stringently qualified, groups would be allowed, but if they all
decided the same way, then the decision would stand. If they did not,
the majority of decisions after a further two appeals, i.e. three out of
five, would stand.
Even though the current system could, theoretically, work fairly, in
practice it has enough problems to make alternate methods worth testing.
My notion of one such alternative is in the example above. It
retains the somewhat random element seen in juries, and the training
requirement for professional members.
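The appeal rule in the example reduces to a small decision procedure,
assuming each panel returns a single verdict:

    # Sketch of the appeal logic from the example: the original panel
    # and up to two appeals; if all three agree, the matter ends there;
    # otherwise two further panels are convened and the majority of all
    # five verdicts stands.

    def final_verdict(first_three, convene_two_more):
        if len(set(first_three)) == 1:
            return first_three[0]  # unanimous: decision stands
        verdicts = first_three + convene_two_more()
        return max(set(verdicts), key=verdicts.count)  # three out of five

    print(final_verdict(["for AMD"] * 3, lambda: []))
    print(final_verdict(["for AMD", "for Intel", "for AMD"],
                        lambda: ["for Intel", "for Intel"]))

With two-sided verdicts and five panels a tie is impossible, so the
process always terminates within a bounded number of hearings, which
matters for the deadlines discussed above.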
All aspects of the system need to be tax-funded. Nobody can be
financially beholden to either of the two sides. That's essential to
reduce the influence of wealth on the whole legal system. The court
apparatus and all the legal professionals, including the lawyers for
each side, must be publicly funded. Acquiring extra legal help based on
ability to pay doesn't fit with a fair system.
I realize that the first thing wealthier people would do is hire
somebody who calls themselves anything but a lawyer and proceed from
there. That has to be illegal and punishable because it subverts
fairness for everyone. The model to follow might be medical licensing. A
license would attach to being any part of the legal profession, and one
of its requirements would be to receive no outside funds of any kind.
Those found offering any kind of legal advice without a license would be
fined some multiple of whatever they received from their clients, with
increasing penalties if they kept doing it. It really is that important
to keep money out of the courts. That should be obvious from the current
state of justice.
My thinking is that a system as in the example, with ad hoc collections
of professionals, would cut out much of the chaff from the legal process
by obviating the charade supposedly needed to keep legally inexperienced
jurors unbiased. Yet it would preserve the presence of expert assistance
for each of the two sides, and even add voices for them when the final
deliberations were in progress. Because the legal personnel are
assembled partly by the parties involved, a class of judges with undue
personal power should not develop.
I know that such a minimally stodgy system seems very odd, very unlegal.
Stodginess is such a constant feature in courts that its absence feels
like the absence of law itself, or at least the absence of due process.
And yet, what we see is the slow, deliberate pace being used to deny due
process, not assist it.
The real issue, however, is that whatever form the legal resolution of
disputes takes, whether slow or not, it should have the hallmarks of
fairness. It should be equally accessible to all, it should treat
everyone equally, and it should accomplish that with a minimum of
effort, anguish, and expense for all equally.
Administration and Taxes
These are the practical, boring parts of government. Nameless flunkies
are supposed to just do them somewhere out of sight.
In many ways it's true. They are practical and boring … in the same way
as bricks. If they're done wrong, the whole building comes down.
In a sustainable system, they need as much attention as principles. It's
quite possible they need /more/ attention because they form the actual
structure. They not only keep the whole edifice standing, they're also
most people's actual point of contact with the government. Last, and far
from least, precisely because they're boring they're an easy place to
start the creeping abuse of power without causing enough resistance. So,
mundane or not, it is very important for the administrative aspects to
facilitate fairness instead of counteracting it.
I've already touched on the relation of politics to voting districts,
but there is a more general sense in which the organization of a
country's — or the world's — subunits can foster equal treatment for
all. Currently, bigger is better because it gives the central government
control over more resources, taxes, and people to conscript into armies.
That is really the only reason to deny self-determination to smaller
groups. The Turks don't want control over some of the Kurds because they
value their culture so much. The Chinese haven't colonized Tibet or
Uighur because they admire the scenery.
Administrative borders delimited according to varying criteria would be
more conducive to self-determination and peace. There is no necessary
minimum size for a group based on language, culture, or a shared
philosophy. A viable economic unit, on the other hand, requires several
million people to have enough complexity to be stable. But there is no
requirement that those two have to be the same unit. Small groups could
run their own schools and video stations in their own languages plus the
main language of the larger group. Small units could together form one
viable economic unit. Those could work together for their common benefit
vis-a-vis other similar large groups.
In terms of borders, this is not that different from the nested sets of
units within and between nations. The difference is that
self-determination is not limited to the biggest units but is spread to
the smallest units compatible with a system that doesn't waste people's
taxes. Things that need to stay coordinated over large areas, such as
transportation or roads, are wastefully inefficient when handled by many
smaller units, as, for instance, railroads in the US. Things that are
best handled on a local level, such as membership in a time zone, cause
inefficiency when imposed from far away, as, for instance, the single
time zone in all of China. The bigger the unit, the less its function is
actual administration and the more it is arbitration between its members
and with other similar groups. People could identify with nations or any
group that suited them, but administration should devolve to the lowest
level where it can be accomplished efficiently.
Taxation is a supremely prickly subject, perhaps even more so than
national power. The couple of points I'd like to make are that fairness
requires an equal burden, and that any burden must have a limit or it
becomes unfair no matter how equal it is.
An equal burden is not measured by the amount of tax. It is measured by
the payment capacity of the taxpayer. That should go without saying, but
if it did we wouldn't have people suggesting that flat taxes are the
"fair" solution. Paying a 50% tax for someone who makes $10,000,000 per
year will have no effect on their life. It'll mean they put $5,000,000
less in the bank. But someone who makes $10,830 (the 2009 poverty-level
wage in the US) would die of starvation if they paid $5,415 in taxes.
The burden is just as
disproportionate if the amount is 1%, although the consequences for the
poor might be less vicious. I say "might" because at a 1% tax rate
government services would be near-nonexistent and there would be no
safety net at all. An equal burden means that the proportion of tax paid
relative to income should have an approximately equivalent felt impact
across income levels.
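The arithmetic is worth spelling out, using exactly the figures above:

    # The flat-tax arithmetic from the paragraph above: the same 50%
    # rate is a rounding error for one taxpayer and starvation for the
    # other.

    POVERTY_LINE = 10_830  # 2009 US poverty-level income

    for income in (10_000_000, 10_830):
        tax = income * 0.50
        left = income - tax
        flag = "  <- far below subsistence" if left < POVERTY_LINE else ""
        print(f"income ${income:>10,}: pays ${tax:>12,.0f}, "
              f"lives on ${left:>12,.0f}{flag}")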
How high a burden is too high is a matter of opinion. As a matter of
data, the countries that provide significant services for the taxes they
collect — medical care, education, retirement benefits, and the like —
take around 40% of their GDP on average in all taxes. (That and much
other tax data at the OECD.)
Some of the countries in that situation, such as Australia, achieve 30%
levels. Japan reaches 27%.
(Just for comparison, the US takes about 28%, but only the elderly
receive funded medical care and only school children may get free
education. If the money US citizens have to spend on the services
provided elsewhere was added in, they'd have the highest or near-highest
tax rates of any developed country. The per capita medical spending here
is $7200, college spending is around $20,000 per student, which works
out to around $1300 per capita. Adding in only those two items puts the
US at a 46% rate, but with much worse outcomes than, say, Norway, at 44%.)
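For anyone who wants to check that 46% figure, the arithmetic runs
roughly as follows. The per-capita GDP is my assumption (about right
for 2009); the other numbers are from the paragraph above:

    # Reconstructing the effective US rate. GDP per capita is assumed
    # (roughly the 2009 figure); the rest comes from the text.

    GDP_PER_CAPITA = 47_000  # assumed, approximate 2009 US value
    base_rate = 0.28         # taxes actually collected
    medical = 7_200          # per-capita medical spending
    college = 1_300          # per-capita college spending

    effective = base_rate + (medical + college) / GDP_PER_CAPITA
    print(f"effective US rate: {effective:.0%}")  # about 46%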
So, speaking in round numbers, the data suggest that a government can
fulfill even very advanced functions on about one third of the GDP.
Given also that large numbers of people from many different cultures
receive services and pay that amount without feeling wronged, an amount
in that range could be considered a rule of thumb. In other words,
taxes, as a matter of law and rights, could not go higher than a fixed
percentage, like 35%, and for that they would have to provide advanced
benefits. Better service for less cost is, of course, always an
improvement. For each reduction in benefits, taxes would be reduced by
law to a lower percentage. The government would be limited in how much
money it could take in, at any given level of service. How that tax
burden is distributed among businesses, asset owners, and income has to
follow the principle that the burden is shared equally among /people/.
In other words, a business is not a person and its taxes would be
evaluated in light of how they affect the owners, workers, and
customers. In the interests of transparency, the taxes should be
evident, not hidden in other costs.
Last, I wanted to discuss some aspects of bureaucracy. The system I'm
suggesting would depend heavily on technocrats, bureaucrats,
professionals, or whatever you'd like to call them. Controlling
bureaucracies is therefore essential. Unfortunately, there's no clear
path to follow. If there's an example of a bureaucracy, whether public,
private, military, educational, or artistic, that didn't grow until it
became an end in itself, I haven't heard of it.
I'd be willing to bet there's a very simple reason for that. It's human
nature to admire status, and that accrues with numbers of people and
size of budget. It does not accrue from doing an excellent job with the
fewest resources. And the size and budgets of departments depend partly
/on the departments themselves/. The need for new hires is determined
within the bureaucracy, not outside of it. The very people high enough
up the scale to control hiring are the ones who gain respect,
promotions, and better pay based on the importance of their empire.
Bureaucracies can't be controlled until that factor is consciously
identified and counteracted.
If that reward structure was changed, there might be a chance of
controlling bureaucratic growth. What that new structure should be is
something management experts have been studying and obviously need to
continue studying. There's still a shortage of effective methods in the
wild. Of course, the first hurdle is that the people who know how to
work the current system don't want effective methods, but that's the
usual problem of unseating vested interests. That part is not specific
to bureaucracies as such. If it were to turn out that no matter what,
bureaucracies can always pervert any rules and keep growing, then my
whole concept of a government of specialists would be unworkable. Some
other way than what I'm envisioning would have to be found to keep
concentrations of power to their optimal minimum.
I'll give an example of how I imagine limits to bureaucracy might work.
Again, not because this is necessarily the best way but to illustrate my
meaning.
First, there's data on what proportion of expenses are spent on
administrative and infrastructure costs in various types of
organizations. For instance, Medicare has been much discussed lately.
Krugman
notes that it has 2% administrative costs against the private insurers'
11%. Others give the figure as 3% or 6%. Among non-profits, food banks
have near-zero administrative costs, whereas museums, given the exacting
nature of their requirements, tend to be around 15%. Research
institutions, whether university, federal, industrial, or military, have
administrative and infrastructure costs in the low 30% range. As with
so many things discussed so far, there is no one right answer. But, for
a given situation, it is possible to find out what people spend on
administrative personnel and infrastructure, to compare that to what the
more effective institutions in that class spend, and to set realistic
limits. Those limits should probably be set somewhat higher than the
best-in-class. We're not all geniuses, and the first priority is good
service, not a government that is the low-price leader.
The limits as a percent of the budget would be set outside the
bureaucracy in question, and based on data collected across the whole
functionally comparable sector. The percentage amount would not be
subject to appeal. The bureaucracy would have to account for its
spending, but it wouldn't be submitting budget requests. They'd do their
jobs with a fixed amount of money and there'd be nothing to request.
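A sketch of how such a ceiling could be computed from sector data, with
the margin over best-in-class chosen arbitrarily:

    # Sketch: set a department's administrative-cost ceiling from data
    # on functionally comparable institutions, somewhat above
    # best-in-class because we're not all geniuses. The 1.25 margin is
    # a placeholder.

    def admin_ceiling(sector_admin_shares, margin=1.25):
        """sector_admin_shares: admin cost as a fraction of total budget,
        one entry per comparable institution."""
        return min(sector_admin_shares) * margin

    # e.g. the health-insurance figures cited above: Medicare ~2%,
    # other estimates 3% and 6%, private insurers ~11%
    print(f"ceiling: {admin_ceiling([0.02, 0.03, 0.06, 0.11]):.1%}")  # 2.5%

The point is that the number arrives from outside the department,
computed the same way for everyone in the sector, with nothing to
negotiate.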
If a department's functions changed — if, for instance, a breakthrough
like the invention of PCR meant the Patent Office would be processing
vastly more inventions based on that — then the department would submit
evidence of the change to the equivalent of a General Accounting Office
which would decide whether or how their budget needed to be adjusted.
From an outsider's perspective, one big source of bloat in bureaucracies
is management layers. Ranks, like budgets and pay scales, should also be
limited externally to the flattest structure compatible with effective
management. Ranks should not be at the discretion of the department head
any more than staffing.
Forms are an ever-present aspect of bureaucracies and probably the main
avenue of interaction with them for the general public. Commenting on
such a picayune thing as forms may seem odd, but abuses start in the
small stuff. Just as more bodies means more status for the very person
doing the hiring, similarly paperwork is a way for bureaucrats to act
like they're doing a job. What makes it bad is that they are the ones
with sole discretion about the forms they generate. In a system that
expects bureaucrats to run things, forms will likely metastasize unless
their growth and proliferation are stopped.
Paperwork is a symptom of a much larger issue. It indicates whether
government serves the citizens or the continued employment of its
officials. If bureaucrats are really to be public servants, then an
important factor in their jobs is to consult the convenience of their
masters. That means forms must be as few in number and as simple and
non-repetitive to fill out as is compatible with getting the job done
optimally.
There should be explicit limits on the number and length of forms in any
given department. If a new form is needed, the department then has to
figure out which old one is obsolete. There should also be explicit
limits on how long it takes a naive user to complete all the paperwork
necessary to achieve a given result. (The US government prints estimates
of how long it will take to complete its forms. No limits, just
estimates. It's the beginning of the right idea, but the estimates are
so low they can only be explained by time travel.) What the limits
should be on the time it takes to fill out forms is a matter of opinion.
However, if it was decided by vote, I and everyone else would always
vote for zero as the appropriate amount of time. I'm not sure what the
fairest and most effective consensus method should be to decide that
question, but whatever it turns out to be, setting limits on forms is
essential.
Last, one of the things bureaucracies are famous for is ducking
responsibility. As I noted in earlier chapters, leaving that loophole
open is not a minor lapse. It goes right to the heart of a fair system
that requires accountability. Government by committee is not compatible
with fairness. Government by passing responsibility up to higher levels
is also not compatible. Responsibility /and the control relevant to it/
needs to rest with the lowest level official who can effectively carry
it. Responsibilities and control must be clearly demarcated to prevent
officials from passing it up the line … or down to someone without
enough control over the decision. The person who controls the
decision-making is also the one who must sign off on it, and who will be
held responsible if fault becomes an issue.
- + -
In summary, decision-making in an equitable and sustainable system would
differ from our current one in fundamental ways. Changes range from the
far-reaching to the mundane, from the end of war and opaque laws to
disciplined bureaucracies.
/[Continued in Government 2: Oversight]/
+ + +
Government 2: Oversight
Oversight
The need to guard the guardians has been a slow concept to develop in
human history. Rulers had God on their side, or were gods, and it's only
in the last few centuries that there's been even a concept of checks and
balances. But, as usual, the powerful find workarounds and we're back to
the point where "you can't fight City Hall" is considered a cliché.
In a sustainable system, that is, in a fair one, that feeling couldn't
exist. Without effective oversight of the government, a sustainable
system won't happen.
Real oversight depends on the independence and intelligence of the
overseers, with the first factor apparently much more important than the
second. The financial meltdown, the false evidence for the US to start a
war in Iraq, the astonishing loss of consumer control over purchased
digital products: these all depend(ed) on cozy relationships among the
powers-that-be. Those powerful people are, if anything, smarter than
average. So the first requirement of oversight is that it has to be
distributed, not concentrated among an elite. It should, furthermore,
stay distributed and avoid creeping concentration from re-emerging. This
is where current methods fall down. Elections happen too rarely and
voters can be manipulated for the short periods necessary. Lawsuits take
too much time and money to be more than a rearguard action. Our current
feedback loops are far too long to work consistently.
There are a few pointers showing what will work. Although no current
government has effective oversight unless officials cooperate (which
suggests the system relies more on the hope of honesty than a
requirement for it), many governments do employ elements of oversight.
Some things clearly work better than others.
First is that transparency prevents the worst behavior, or at least
enables citizens to mobilize against it. Full scale looting of public
money and dreadful decisions taken for self-serving ends require secrecy.
Second is that elections do cause politicians to sit up and take notice.
Unfortunately, when the elections occur on a schedule, the politicians
only pay attention on that schedule. One essential change is a framework
to call elections — in the system described here that would be recall
elections — at any time.
Third, and now we're already in largely untested territory, is the need
for finer-grained feedback. It's essential to have ways of communicating
dissatisfaction and implementing the necessary corrections which are
both effortless and scalable. An escalating system of complaints might
be a solution.
Fourth, in the case of plain old criminal wrongdoing, there need to be
criminal proceedings, as there officially are now.
Fifth is the need for oversight to have its own checks and balances.
Government is a tensegrity structure, not a pyramid. Officials need
to have ways of responding to complaints. And the system of oversight
itself must be overseen.
Who does the overseeing is as important as how. Distributed — in other
words, citizen — oversight is the essential part, but citizens have
other things to do besides track officials. There also need to be
professional auditors whose task is the day-to-day minutia of oversight.
They keep tabs on all the data provided by transparency, take
appropriate actions as needed, and alert the public to major developing
problems that are overlooked. This is another area where professionals
are needed for the boring or technical details and the general public is
needed to ensure independence and functional feedback.
So, starting with transparency, the first thing to note is what it
isn't. It is not a data dump. It is not an exercise in hiding some data
under a mountain of other data. It does not mean keeping every record
for the sake of theater. It doesn't mean stifling interactions by
perching the Recording Angel on everyone's shoulders.
Transparency means the /optimum/ data for the purpose of promoting
honesty in government. Information not relevant to that purpose is just
noise that masks the signal. However, as always when an optimum is
required, the difficulty is finding the right balance. If the right to
know is defined too narrowly, there's the risk of oversight failure. If
it's defined too broadly, official interactions may be stifled for
nothing or privacy may be recklessly invaded. Organizations such as
Transparency International have
considerable data on how to approach the optimal level. The more
governments that have a commitment to providing clear and honest data,
the more we'd learn about how to achieve the most useful transparency
with the least intrusion and effort.
What we know so far is that financial data and records of contacts and
meetings provide two of the clearest windows on what officials (also
non-governmental ones) are up to. Financial data in the hands of
amateurs, however, can be hard to understand, or boring, or vulnerable
to attitudes that have nothing to do with the matter at hand. For
instance, accountants have the concept that amounts smaller than what
it costs to track them are not worth following. Amateurs, on
the other hand, will notice a trail of gratuitous donuts but feel too
bored to follow statements of current income as present value of net
future cash flows. The fact
that the statement is Enron's means nothing beforehand. It is important
to ignore the easy distractions of small stuff and to notice the big
stuff, especially when it's so large it seems to be nothing but the
background.
There were plenty of auditors involved in the Enron fiasco who knew
better, of course. That wasn't a simple matter of amateurs distracted by
ignorance. However, the auditors received vast sums of money from the
company, which simply underlines the need for true independence in
oversight, including independence from membership in similar networks.
Real transparency would have made the necessary information available to
anyone, and would have prevented both the disaster and the pre-disaster
profits, some of which flowed to the very people supposedly in oversight
... which is why oversight wasn't applied in time.
Records of meetings and contacts are another important source of data
about what officials are doing, but that raises other problematic
issues. People need to feel able to speak candidly. After all, the whole
point is to promote honesty, not to generate ever newer layers of
speaking in code. A soft focus is needed initially, followed by full
disclosure at some point. Maybe that could be achieved by a combination
of contemporary summaries of salient points and some years' delay on the
full record. The interval should be short enough to ensure that if it
turns out the truth was shaded in the summaries, corrective action and
eventual retribution will still be relevant. The interval would probably
need to vary for different types of activity — very short for the Office
of Elections, longer perhaps for sensitive diplomatic negotiations
between countries.
Presentation of information is a very important aspect of transparency,
although often overlooked. This may be due to the difficulty of getting
any accountability at all from current governments. We're so grateful to
get anything, we don't protest the indigestible lumps. Governments that
actually served the people would present information in a way that
allows easy comprehension of the overall picture. Data provided to the
public need to be organized, easily searchable, and easy to use for rank
amateurs as well as providing deeper layers with denser data for
professionals. If these ideas were being applied in a society without
many computers, public libraries and educational institutions would be
repositories for copies of the materials.
A related point is that useful transparency has a great deal to do with
plain old simplicity. The recordkeeping burden on officials should be
made as simple and automatic as possible. In an electronic world, for
instance, any financial transactions could be routed to the audit arm of
government automatically and receive a first pass computer audit. Some
simplicity should flow naturally from requiring only information
relevant to doing the job honestly, rather than all information.
Information relevant to other purposes, such as satisfying curiosity,
has nothing to do with transparency. Or, to put it another way, the
public's right to know has limits. An official's sex life, for instance,
unless it affects his or her job, is not something that needs to be
detailed for the sake of transparency, no matter how much fun it is.
Officials, like everyone else, do have a right to privacy on any matter
that is not public business.
The professional audit arm of the government would have the job of
examining all departments. The Government Accountability Office fulfills
some of that function in the US now, but it serves Congress primarily. In the
system I'm thinking about, it would serve the citizenry with publication
of data and its own summaries and reviews. It would also have the power
to initiate recalls or criminal proceedings, as needed. (More on
enforcement in a moment.)
A professional audit agency doesn't provide enough oversight, by itself,
because of people's tendency to grow too comfortable with colleagues.
Citizens' ability to take action would counteract the professionals'
desire to avoid friction and ignore problems. But citizen oversight, by
itself, is also insufficient. Its main function is as a backstop in case
the professionals aren't doing their jobs. The day to day aspects, the
"boring" aspects of oversight need to be in the hands of people paid to
pay attention.
Feedback is the next big component of oversight, after transparency. The
sternest form of feedback is recall elections, but in the interests of
preventing problems while they are still small, a system that aims for
stability should have many and very easy ways to provide feedback.
The easiest way to give feedback is to complain. That route should be
developed into an effective instrument regarding any aspect of
government, whether it's specific people, paperwork, or practices.
Complaints could address any annoyance, whether it's a form asking for
redundant information or it's the head of the global Department of
Transportation not doing his or her job coordinating intercontinental
flights. Anyone who worked for a tax-funded salary, from janitors to
judges, could be the subject of a complaint.
It's simple enough to set up a complaints box, whether physical or
electronic, but effectiveness depends on three more factors. The first
two are responsibility and an absence of repercussions. People have to
be able to complain without fear of retribution. On the other hand,
complaints have to be genuine in all senses of the word. They have to
come from a specific individual, refer to a specific situation, and be
the only reference by that person to that problem. Anonymity is a good
way to prevent fear of retribution. Responsibility is necessary to make
sure complaints are genuine. Somehow, those two conflicting factors must
be balanced at an optimum that allows the most genuine complaints to get
through. Possibly a two-tiered system could offer a solution: one tier
would be anonymous, with some safeguards against frivolous action, and
the other not anonymous (but with bulletproof confidentiality) and more
heavily weighted.
The third factor is that the complaints must actually be acted upon.
That means sufficient staffing and funding at the audit office to
process them and enforce changes. Since effective feedback is crucial to
good government, staffing of that office is no more optional than an
adequate police force or court system. It needs to be very near the
front of the government funding line to get the money determined by a
budget office as its due. (The budget office itself would have its
funding level determined by another entity, of course.)
It's to be expected that the thankless jobs of officials will generate
some background level of complaints. Complaints below that level, which
can be estimated from studies of client satisfaction across comparable
institutions, wouldn't lead to action by others, such as officials in
the audit agency. An intelligent official would note any patterns in
those complaints and make adjustments as needed. If they felt the
complaints were off the mark, they might even file a note to that effect
with the audit agency. The fact that they were paying attention would
count in their favor if the level of complaints rose. But until they
rose, the complaints wouldn't receive outside action.
In setting that baseline level, the idea would be to err on the low
side, since expressed complaints are likely indicative of a larger
number of people who were bothered but said nothing. The level also
should not be the same for all grades. People are likelier to complain
about low level workers than about managers, whom they don't see.
Once complaints rise above what might be called the standard background,
action should be automatically triggered. Some form of automatic trigger
is an important safeguard. One of the weakest points in all feedback
systems is that those with the power to ignore them do so. That
weakness needs to be prevented before it can happen. The actions
triggered could start with a public warning, thus alerting watchdog
groups to the problem as well. The next step could be a deadline for
investigation by the audit office (or the auditors of the audit office,
when needed). If the volume of complaints is enormous, then there's a
systemic failure somewhere. Therefore, as a last resort, some very high
number of genuine complaints should automatically trigger dismissal,
unelection, or repeal of a procedure. Transparency means that the volume
of complaints and their subjects are known to everyone, so the moment
that point is reached would be no secret.
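To make the escalation concrete, here is a sketch in code. The baseline
and the multipliers are invented; in practice they'd come from studies
of client satisfaction across comparable institutions, and would differ
by grade of official.

    def complaint_action(genuine_complaints, baseline):
        """Map a count of genuine complaints to an oversight response."""
        if genuine_complaints <= baseline:
            return "no outside action; official watches for patterns"
        if genuine_complaints <= 3 * baseline:    # invented multiplier
            return "public warning; watchdog groups alerted"
        if genuine_complaints <= 10 * baseline:   # invented multiplier
            return "deadline set for audit office investigation"
        return "automatic dismissal, unelection, or repeal of procedure"

    for count in (40, 120, 300, 800):
        print(count, "->", complaint_action(count, baseline=50))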
I know that in the current environment of protected officialdom, the
feedback system outlined sounds draconian. The fear might be that nobody
would survive. That's not the intent. If that's the effect, then some
other, better tuned system of feedback is needed. But whatever its
form, an effective system that's actually capable of changing official
behavior and altering policies is essential. In a system that worked,
there would be few complaints, so they'd rarely reach the point of
triggering automatic action. In the unattainable ideal, the selection
process for officials would be so good that the complaints office would
have nothing to do.
I've already touched on recall elections, but I'll repeat briefly. The
recall process can be initiated either by some form of populist action,
such as a petition, or by the government's audit arm. After a successful
recall, an official should be ineligible for public service for some
period of time long enough to demonstrate that they've learned from
their mistakes. They'd be starting at the lowest rung, since the
assumption has to be that they need to prove themselves again.
The fourth aspect of oversight, punishment for crimes, corruption, or
gross mismanagement by officials, is currently handled correctly in
theory, but it needs much stricter application in practice. The greater
the power of the people involved, the more it's become customary, at
least in the U.S., to let bygones be bygones after a mere resignation.
It's as if one could rob a grocery store and avoid prosecution by
saying, "Oh, I'm sorry. I didn't mean it." Worse yet, the response to
criminal officials is actually /less/ adequate than it would be to treat
shoplifters that way. The state sets the tone for all of society, so
crimes committed by representatives of the state do more damage. So
there must be real penalties for any kind of malfeasance in office. If
there has been financial damage, it should be reimbursed first out of
the official's assets. Nor should there be any statute of limitations on
negligence or problems generated by officials. They are hired and paid
the big money to pay attention. If they don't, they must not escape the
consequences merely because people are so used to letting government
officials get away with everything.
So far, my focus has been on ways of controlling officials, and that is
the larger side of the problem. But, that said, officials do need ways
of contesting overactive oversight, downright harassment, or unjustified
recalls or prosecutions. The point of draconian oversight — draconian
from our perspective, in which teflon officials seem normal — is to have
public servants who live up to their name. They must serve the public.
On the other hand, they're an odd sort of servant because a significant
part of their job is doing things for the public good which the public
usually doesn't like very much at the time. (If that weren't true,
there'd be no need for officials to do them. They'd get done without
help.) So, there are two reasons why officials must be protected from
capricious oversight. One is that servants have rights. Two is that they
have to be able to take necessary unpopular steps without fear of
reprisals. The public, after all, is as capable of abusing power as the
individuals it's composed of.
Complaints are the earliest point where an official could disagree with
the public's assessment. If the complaints are caused by a necessary but
unpopular policy, the official needs to make clear why it's necessary,
and why something easier won't do. Complaints happen over time, so if
they're a rising tide, that's not something that can come as a surprise
to the responsible party. The official's justifications would become
part of the record. When complaints reach a high enough level to be
grounds for unelection or dismissal, and if the official is convinced
that a decisive proportion of the complaints is unwarranted, they could
ask for independent review. (An example of possible specifics is given
below.)
The official and the complainant(s) could agree on one or more audit
experts, depending on complexity, to review the case. If that didn't
lead to a resolution, the next step would be the legal process, with
appeals if needed. Only decisions that went against an official
should count as bad marks on their record; simply feeling the need
to defend oneself should not.
However, after repeated cases that go against a complainant, those
individuals should no longer have standing to bring future cases of
that type. A case that went against an official would result in
recall or firing, so repeated cases wouldn't arise.
The audit arm of the government is another source of oversight that an
official could disagree with. The ability to respond to its charges
would be built into the system. If the responses passed muster, the case
would be closed. If not, there could be appeals to a review board, and
eventually the legal system. As in the example above, I'd see this as a
process with limits.
Recalls are the most far-reaching form of oversight, and the ability to
contest them the most dangerous to the health of the system. Contesting
a recall should be a last resort and something that's applied only in
exceptional cases. Normally, if an official felt a recall was
unjustified, they should make that case during the unelection and leave
the decision to voters. However, for the exceptional case, officials
could contest a recall if they had a clear, fact-based justification for
why it had no valid basis. It would then go to reviewers or a legal
panel with the necessary specialized expertise, chosen by the opposing
sides as discussed earlier.
The biggest difference between those reviewers and others is that
potential conflicts of interest would have to be minutely scrutinized.
More nebulous loyalties should be considered as well in this case. The
tendency to excuse the behavior of colleagues is very strong and it's
one of the biggest problems with oversight of government by government.
As with any process involving a balance of rights, ensuring the
independence of the arbiters is essential.
To provide an additional dose of independence, a second review track
could be composed of a more diffuse pool of people with less but still
adequate expertise, along the same lines as selectors who determine the
eligibility of candidates for office. A matter as serious as contesting
a recall should use both tracks at once, and be resolved in favor of the
official only if both agreed.
As I've done before, I've provided some specific ideas about
implementation not necessarily because those are good ways to do it, but
mainly to illustrate my meaning. Whatever method is applied, the goal is
to ensure the right of public servants to defend themselves against
injustice, just like anyone else, and to prevent that from becoming an
injustice itself.
Because of the potentially far-reaching consequences of contesting
recalls, overuse absolutely must be prevented. If it was the considered
opinion of most reviewers involved that an official who lost a bid to
prevent a recall did not have grounds to contest in the first place,
then that attempt should be considered frivolous. One such mistake seems
like an ample allowance.
A structural weakness of oversight by the public is that officials have
training, expertise, and familiarity with the system. That gives them
the advantage in a contest with a diffuse group of citizens. An
official's ingroup advantage may be smaller against an arm of the
government tasked with oversight, but that's not the issue. The
important point is that citizen action is the oversight of last resort
so it sets the boundary conditions which must not fail. Since officials
have the upper hand in the last-resort situation, it is right to err on
the side of protecting oversight against the official's ability to
contest it. Officials must have that option, but it can't be allowed to
develop into something that stifles oversight. It's another balance
between conflicting rights. It may be the most critical balance to the
sustainability of a fair system, since that can't survive without honest
and effective government.
Regulation
The government's regulatory functions flow directly from its monopoly on
force. What those functions should be depends on the government's purpose,
which, in the case being discussed here, is to ensure fairness. People
will always strive to gain advantage, and the regulatory aspects of
government must channel that into fair competition and away from
immunity to the rules that apply to others.
In an ideal world, the regulatory function would scarcely be felt. The
rules would be so well tuned that their effects would be
self-regulating, just as a heater thermostat is a set-it-and-forget-it
device. More complex systems than furnaces aren't likely to achieve this
ideal any time soon, but I mention it to show what I see as the goal of
regulation. Like the rest of government, when it's working properly, one
would hardly know it was there.
However, it is regulation that can achieve this zen-like state, not
deregulation. This is true even at the simplest levels. A furnace with
no thermostat cannot provide a comfortable temperature. In more complex
human systems, deregulation is advocated by those powerful enough to
take advantage of weaker actors. (Or those who'd like to think they're
that powerful.) It can be an even bigger tool for exploitation than bad
regulations. The vital point is that results expose motives regardless
of rationalizations. If regulation, or deregulation, promotes
concentrations of power or takes control away from individuals, then
it's being used as a tool for special interests and not in the service
of a level playing field.
Current discussions of regulations tend to center on environmental or
protectionist issues, but that's because those are rather new, at least
as global phenomena. In other respects, regulations are so familiar and
obviously essential that most people don't think of them as regulations
at all. Remembering some of them points up how useful this function of
government is. Weights and measures have been regulated by governments
for thousands of years, and that has always been critical for commerce.
Coins and money are a special instance of uniform measures — of value in
this case — and even more obviously essential to commerce. The validity
of any measure would be lost if it could be diddled to favor an
interested party. Impartiality is essential for the wealth-producing
effects to operate. That is so clear to everyone at this point that
cheating on weights and measures is considered pathetic as well as
criminal. And yet, the equally great value of fairness-preserving
regulation in other areas generally needs to be explained.
Maintaining a level field is a primary government function, so undue
market power is one area that needs attention. The topic overlaps with
the discussion about competition and monopolies under Capital
in the Money
and Work chapter, but it is also an important example of regulation that
can be applied only by a government, so I'll discuss it briefly here as
well.
Logically, monopolies shouldn't exist because there ought to be a
natural counterweight. They're famous for introducing pricing and
management inefficiencies that ought to leave plenty of room for
competitors. But, people being what they are, market power is used to
choke off any competition which could reduce it. The natural
counterweight can't operate, and only forces outside the system, such as
government regulations, can halt that process.
One point I've tried to make repeatedly is that fairness is not only a
nice principle. Its /practical/ effects are to provide stability and
wealth. That is no less true in the case of preventing unbalanced market
power. Imbalance leads to chaotic consequences during the inevitable
readjustment. Having enough market power to tilt the field is not only
unfair and not only expensive. It's also unstable, with all the social
disruption that implies. Recognition of that fact has led to the
establishment of antitrust laws, but they tend to be applied long after
economic power is already concentrated. They assume that the problem is
one hundred percent control.
The real evidence of an imbalance is price-setting power. When a
business or an industry can charge wildly in excess of their costs,
that's evidence of a market failure and the need for some form of
regulation. I'll call entities who are able to dictate prices
"monopolies" for the purposes of discussion, even though they have only
a controlling market share, not necessarily one hundred percent of it.
Some monopolies are well understood by now. The classical ones involve
massive costs for plants or equipment and very small costs to serve
additional customers. Railroads are one example. The initial costs are
high enough to prevent new startups from providing competition. Equally
classic are natural monopolies where one supplier can do a better job
than multiple competing ones. Utilities such as water and power are the
usual examples. There'd be no money saved if householders had water
mains from three separate companies who competed to supply cheap water.
The duplicated infrastructure costs would swamp any theoretical savings.
So far, so good. The need for regulated utilities is well understood.
Regulated transport is a bit iffier. The need for a state to handle the
roads is accepted, even among the free market religionists in the US,
but the need for state regulation and coordination of transport in
general is less widely recognized. The free marketeers here can be
puzzled about why it takes a week to move things from Coast A to Coast B
using the crazy uncoordinated patchwork of private railroads. But, on
the whole, the need to enforce limits on pricing power, the need for
standards and for coordination is known, if not always well implemented.
The same need for standards, coordination and limits on pricing holds
with all other monopolies, too, but this isn't always grasped,
especially in newer technologies. Money is not the only startup cost
that can form a barrier to entry. An information industry is a recent
phenomenon that started with the printing press and began
exponential growth with the computer. The main "capital cost" is not
equipment but the willingness of people to learn new habits.
Imagine, for instance, that Gutenberg had a lock on printing from left
to right. All later presses would have had to convince people to read
right to left just so they could get the same book but not printed by
Gutenberg. Good luck with that. Similarly with the qwerty keyboard.
People learned to use it, it turned out to be a bad layout from the
standpoint of fingers, and we're still using it over a hundred years
later. Only one cohort of people would have to take the time to learn a
new layout, and even that just doesn't happen. If there had been a modern
DMCA-style copyright on that layout, there'd still be nobody making
keyboards but Remington.
History shows that people simply won't repeat a learning curve without
huge inducements. And so it also has dozens of examples of companies
which had the advantage of being over the learning curve and thus had a
natural monopoly as solid as owning the only gold mine. Any monopoly
needs regulation. If it works best as a monopoly, then it has to become
a regulated utility. If it's important enough, it may need to be
government run to ensure adequate transparency and responsiveness to the
users. If it doesn't need to be a monopoly, then it's up to regulation
to ensure that the learning curve is not a barrier to entry.
Regulation that was doing its job would make sure that no single company
could control, for instance, a software interface. As discussed in the
chapter on Creativity,
an inventor of a quantum improvement in usability should get the benefit
of that invention just as with any other. But it cannot become a
platform for gouging. Compulsory licensing would be the obvious solution.
Another new technology that's rapidly getting out of hand is search.
Methods of information access aren't just nice things to have. Without
them, the information might as well not be there. A shredded book
contains the same information as the regular variety; there's just no
way to get at it. Organizing access to information is the function of
nerves and brain. Without any way to make sense of sight or sound or to
remember it, an individual would be brainless. Nor would having two or
three brains improve matters. If they're separate, any given bit of
information could be in the "wrong" brain unless they all worked
together. They have to work as a unit to be useful. Information access
is a natural monopoly if there ever was one.
The fact that search is a natural monopoly seems to have escaped too
many people. Like a nervous system, without it all the information in
cyberspace might just as well not be there. The usual answer is to
pooh-pooh the problem, and to say anyone can come up with their own
search engine. But having multiple indexers makes as much sense as
having multiple water mains. It's a duplication of effort that actually
reduces value at the end.
That's not to say that different methods can't exist side by side.
Libraries, for instance, organize information in a fundamentally
different, subject-oriented way than do search engines, and that
facilitates deeper approaches to the material. Left to itself, over
time, the need for that approach would lead to the development of the
more laborious subject-based indexes in parallel with tag-based ones.
But that's another danger with having a monopoly in the latter kind.
Undue influence might not leave any parallel system to itself. To put it
as a very loose analogy, word-based methods could become so dominant
that mathematics hardly existed. Whole sets of essential mental tools to
handle information could fall into disuse.
On the user's side, there's a further factor that makes search a natural
monopoly. Users don't change their habits if they can help it. (That's
why there are such heated battles among businesses over getting their
icons on computer desktops. If it's what people are used to and it's
right in front of them, the controlling market share is safe.) Instead
of pretending there is no monopoly because people have the physical
option to make an initially more effortful choice — something people in
the aggregate have shown repeatedly they won't do — the sustainable
solution comes from facing facts. More than one indexer is wasteful, and
people won't use more than one in sufficient numbers to avoid a
controlling market share. It's a natural monopoly. So it needs to be
regulated as a public utility or even be part of the government itself.
There are further considerations specifically related to search. Any
inequality in access to knowledge leads to an imbalance in a wide array
of benefits and disadvantages. Further, an indexer is the gateway to
stored knowledge and therefore potentially exercises control over what
people can learn. In other words, a search engine can suppress or
promote ideas.
Consider just one small example of this disappearance at work. I wanted
to remove intercalated video ads from clips on Google's subsidiary,
YouTube. But a Google search on the topic brought up nothing useful in
the first three screens' worth of results. That despite the fact that
there's a Greasemonkey script to do the job. Searches for scripts that
do other things do bring up Greasemonkey on Google.
Whether or not an adblocker is disappeared down a memory hole may not be
important. But the capability to make knowledge disappear is vastly
important. Did Google "disappear" it on purpose? Maybe not. Maybe yes.
Without access to Google's inner workings, it's impossible to know, /and
that's the problem/.
We're in the position of not realizing what exists and what doesn't.
Something like that is far more pernicious than straight suppression.
That kind of power simply cannot rest with an entity that doesn't have
to answer for its actions. Anything like that has to be under government
regulation that enforces full transparency and equality of access.
(None of that even touches on the privacy concerns related to search
tools, which, in private hands and without thorough legal safeguards,
are another locus of power that can be abused.)
There are two points to this digression into the monopolies of the
information age. One is that they should and can be prevented. They have
the same anti-competitive, anti-innovative, expensive, and ultimately
destabilizing effects as all monopolies. And a good regulator would be
breaking them up or controlling them for the public good /before/ they
grew so powerful that nothing short of social destruction would ever
dislodge them.
My second point is that monopolies may rest on other things besides
capital or control of natural resources. All imbalances of market power,
whether they're due to scarce resources, scarce time, ingrained habits,
or any other cause, need to be recognized for what they are and regulated
appropriately.
Regulations as the term is commonly used — environmental, economic,
financial, health, and safety — are generally viewed as the government
guaranteeing the safety, not the rights, of citizens to the extent
possible. But that leads straight into a tangle of competing interests.
How safe is safe? Who should be safe? Just citizens? What about
foreigners who may get as much or more of the downstream consequences?
How high a price are people willing to pay for how much safety?
If the justification is safety, there's no sense that low-priced goods
from countries with lax labor or environmental laws are a problem.
Tariffs aren't applied to bring them into line with sustainable
practices. The regulatory climate for someone else's economy is not seen
as a safety issue. Egregious cases of slavery or child labor are frowned
on, but are treated as purely ethical issues. Since immediate physical
harm to consumers is not seen as part of the package, regulation takes a
back seat to other concerns. That lasts until it turns out that the
pollution or CO₂ or contaminants do cause a clear and present danger,
and then people are angry that the situation was allowed to get so far
out of hand that they were harmed.
Approaching regulations as safety issues starts at the wrong end.
Sometimes safety can be defined by science, but often there are plenty
of gray areas. Then regulations seem like a matter of competing
interests. Safety can only be an accidental outcome of that competition
unless the people at the likely receiving end of the harm have the most
social power.
However, when the goal is fairness, a consistent regulatory framework is
much easier to achieve. The greatest good of the greatest number, the
least significant overall loss of rights, and the least harm to the
greatest number is not that hard to determine in any specific instance.
In a transparent system with recall elections, officials who made their
decisions on a narrower basis would find themselves out of a job. And
safety is a direct result, not an accidental byproduct, of seeking the
least harm to the greatest number.
Consider an example. The cotton farmers of Burkina Faso cannot afford
mechanization, so their crop cannot compete with US cotton. Is that a
safety issue? Of course not. Cotton is their single most important
export crop. If the farmers switch to opium, is that a safety issue?
Sort of. If the local economy collapses, and the country becomes a haven
for terrorists, that's easy to see as a safety issue. A much worse
threat, however, would be the uncontrolled spread of disease. Collapse
is not just some academic talking point. The country is near the bottom
of poverty lists, and who knows which crisis will be the proverbial last
straw. So, once again, is bankrupting their main agricultural export a
safety issue? Not right now. So nothing is done.
Is it, however, a fairness issue? Well, /obviously/. People should not
be deprived of a livelihood because they are too poor to buy machines.
There are simple things the US could do. Regulate so that US cotton
production stops polluting and mining its land and water. The
remaining difference in price could be mitigated with price supports for
Burkinabe cotton. The only loss is to short term profits of US cotton
farmers, and where that is a livelihood issue for smallholders,
assistance could be provided during the readjustment.
There may be better ways of improving justice in the situation, but my
point is that even an amateur like me can see some solutions. The
consequences of being fair actually benefit the US at least as much as
Burkina Faso. (Soaking land in herbicide is not a Good Thing for
anyone.) And twenty or thirty years down the road, people both in the US
and in Burkina Faso will be safer. The things they'll be safer from may
be different — pollution in one case, social disruption in the other —
but that doesn't change the fact that safety oriented regulation has no
grounds for action, but fairness oriented regulation can prevent problems.
Globalization is yet another example where regulation based on fairness
facilitates sustainability. Economists like to point out that
globalization can have economic benefits, and that's true, all other
things being equal. The problem is other things aren't equal, and
globalization exposes the complications created by double standards. If
it applied equally to all, labor would have to be as free to move to
better conditions as capital. (See this post
for a longer discussion.) Given equal application of the laws,
globalization couldn't be used as a way of avoiding regulations. Tariffs
would have to factor in the cost of sustainable production, and then be
spent in a way that mitigated the lack of sustainability or fairness in
the country (or countries) of origin. There'd be less reward for bad
behavior and hence fewer expensive consequences, whether those are
contaminants causing disease or social dislocation caused by low wages
and lost work. Those things are only free to the companies causing the
problem.
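As a sketch of the tariff arithmetic, suppose the duty simply covers the
gap between the import's actual price and its estimated price under
sustainable labor and environmental practices, with the proceeds
earmarked for mitigation at the origin. Every number here is invented:

    def sustainability_tariff(import_price, sustainable_price):
        """Charge the shortfall between actual and sustainable cost."""
        return max(0.0, sustainable_price - import_price)

    price_paid = 2.00  # invented per-unit import price
    fair_cost = 2.75   # invented per-unit cost under sustainable practices
    duty = sustainability_tariff(price_paid, fair_cost)
    print(f"duty per unit: {duty:.2f}, earmarked for mitigation at origin")

Estimating the sustainable price is the hard part, of course. The point
is only that once it's estimated, the rest is subtraction.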
Enforcement must be just as good as the regulations or else they're not
worth the paper they're printed on. The financial meltdown is a stark
recent example showing that good regulations are not enough. Maintaining
the motivation for enforcement is at least as important. In the US and
in many other countries, the regulations existed to preserve the
finances of the many rather than the narrow profit potential of the few.
In hindsight, the element missing was any desire to enforce those
regulations. (E.g., see Krugman's blog and links therein.)
Enforcement is necessarily a task performed by officials, who are hired
precisely to know enough to use foresight so that hindsight is
unnecessary. That's understood everywhere, and yet enforcement is
repeatedly the weak link in the chain. That's because the responsible
officials are not really responsible. Rarely do any of them have any
personal consequences for falling down on the job. That is the part that
has to change.
Officials who enable catastrophe by not doing their jobs commit
dereliction of duty, something that has consequences in a system
emphasizing feedback and accountability. There has to be personal
liability for that. People are quick to understand something when their
living depends on it. (With apologies to Upton Sinclair.) The right
level of personal consequences for responsible officials is not only
fair, it would also provide the motivation to do the job right to begin
with and avoid the problem.
For personal responsibility to be justified, the right path to take does
need to be well known. I'm not suggesting officials be punished for
lacking clairvoyance. I'm saying that when the conclusions of experts
exceed that 90% agreement level mentioned earlier, and yet the official
ignores the evidence, then they're criminally culpable. Then a lack of
action in the face of that consensus needs to be on the books as a
crime, punishable by loss of assets and even jail time, depending on the
magnitude of the lapse. Nor should there be any statute of limitations
on the crime, because there is none on its effects. The penalties need
to be heavy enough to scare officials straight, but not so heavy they're
afraid to do their jobs. Finding the right balance is another area where
study is needed to find optimal answers. But regulation is a vital
function of government, so it is equally vital to make sure that it's
carried out effectively. Dereliction of duty is no less a crime of state
than abuse of power.
Officials who are actually motivated to do their jobs are important in a
system which works for all instead of for a few. The point of fair
regulation, as of fairness generally, is that avoiding problems is less
traumatic than recovering from them. /Prevention/ of problems is an
essential function of regulation. Prevention takes effort ahead of time,
though, when there's still the option to be lazy, so it only happens
when there's pressure on the relevant officials. That means there have
to be penalties if they don't take necessary steps. It's unreasonable to
punish a lack of clairvoyance, and yet it's necessary to punish
officials who let a bad situation get worse because it was easier to do
nothing.
In short, the government's regulatory functions should serve to maintain
a fair, sustainable environment where double standards can't develop.
Safety or low prices might be a side effect, but the purpose has to be
enforcing rights.
Public Works
This is one of the few areas where the highest expression of the art of
government is more than invisibility. Some of the greatest achievements
of humankind come from the type of concerted action only governments can
coordinate. The great pyramids of Egypt, medieval European cathedrals,
and space flight were all enterprises so huge and so profitless, at
least in the beginning, that they couldn't have been carried out by any
smaller entity.
People could — and have — questioned whether those projects were the
best use of resources, or whether some of them were worth the awful
price in suffering. The answer to the latter is, obviously, no. Where
citizens have a voice, they're not likely to agree to poverty for the
sake of a big idea, so that problem won't (?) arise. But that does leave
the other questions: What is worth doing? And who decides?
There's an irreducible minimum of things worth doing, without which a
government is generally understood to be useless. The common example is
defense, but on a daily level other functions essential to survival are
more important. A police force and court system to resolve disputes is
one such function. Commerce requires a safe and usable transportation
network and reliable monetary system. In the system described here,
running the selection and recall processes would be essential functions.
And in an egalitarian system, both basic medical care and basic
education have to be rights. It is fundamentally unfair to put a price
on life and health, and equally unfair to use poverty to deprive people
of the education needed to make a living.
As discussed in the chapter on Care, the definition of "basic" medical
care depends on the wealth of the country. (Or of the planet, in a
unified world.) In the current rich countries, such as the G20, it
simply means medical care. It means everything except beauty treatments.
And even those enter a gray area when deviations from normal are severe
enough to cause significant social problems. On the other hand, in a
country where the majority is living on a dollar a day and tax receipts
are minimal, basic medical care may not be able to cover much except
programs that cost a tiny fraction of the money they eventually save.
That's things like vaccinations, maternal and neonatal care, simple
emergency care, pain relief, treatments against parasites, and
prevention programs like providing mosquito netting, clean water, and
vitamin A.
The definition of basic education likewise depends on the social
context. In a pre-technological agrarian society it might be no more
than six grades of school. In an industrial society, it has to include
university for those who have the desire and the capacity to make use of
it. The principle is that the level of education provided has to equal
the level required to be eligible for good jobs, not just bottom tier
jobs, in that society.
I've gone rather quickly from the irreducible minimum to a level of
service that some in the US would consider absurdly high. Who decides
what's appropriate? I see that question as having an objective answer.
We are, after all, talking about things that cost money. How much they
cost and how much a government receives in taxes are pretty well-known
quantities. From that it's possible to deduce which level of taxes calls
for which level of service.
The concept that a government can be held to account for a given level
of service, depending on what proportion of GDP it uses in taxes, is not
a common one. And the application of the concept is inevitably
empirical. It is nonetheless interesting to think about the question in
those terms.
As I've pointed out,
there are several governments who have total tax receipts of about a
third of GDP and provide all the basic services plus full medical,
educational, retirement, and cultural benefits, as well as providing
funds for research. Hence, if a country has somewhere between 30% and 40%
of GDP in total tax receipts, it should be providing that level of
service. If it isn't, the citizenry is being cheated somewhere.
On a local note, yes, this does mean the US is not doing well. Total
taxes here are around 28%. Japan
provides all the services and benefits listed above while taking in 27%
of total GDP. The low US level of services implies taxes should be
lower, or services should be higher. The US does, of course, maintain a
huge military which, among other things, redistributes federal tax
dollars and provides some services such as education and medical care to
its members. If one adds what people in the US pay privately for
services that are tax-funded elsewhere, the total effective rate would
be over 50%. Given the gap
between the US and other rich countries, the US is wasting money
somewhere ….
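The arithmetic behind these comparisons is simple enough to write down.
The tier boundaries in this sketch are invented landmarks following the
discussion here (roughly 30% and up for full services, a few percent for
the bare minimum), and the private-spending figure for the US is an
invented round number standing in for out-of-pocket payments for
services that are tax-funded elsewhere:

    def expected_services(tax_share_of_gdp):
        """Invented tiers following the text's rough landmarks."""
        if tax_share_of_gdp >= 0.30:
            return "full services: basics plus medical, education, retirement"
        if tax_share_of_gdp >= 0.05:
            return "the basics: roads, police, courts, simple medicine, schools"
        return "below the minimum any government needs to be useful"

    print(expected_services(0.33))  # a typical full-service rich country
    print(expected_services(0.05))  # a poor country

    us_taxes = 0.28
    private_payments = 0.25  # invented share of GDP paid privately
    print(f"effective US rate: {us_taxes + private_payments:.0%}")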
On the other hand a poor country, which can't fairly have much of a tax
rate, might have only 5% of an already low GDP to work with. Services
then would be limited to the minimums discussed earlier. Even poor
governments can provide roads, police, fire protection, commercial
regulation to make trade simpler, basic medicine and education. (It
should go without saying, but may not, that a poor country may not have
the resources to deal with disasters, and may need outside help in those
cases.) The only things that prevent some of the world's countries from
providing the basics now are corruption, incompetence, and/or the
insistence of elites on burning up all available money in wars. None of
those are essential or unavoidable. Fairness isn't ruinously expensive,
but its opposite is.
At any level of wealth, the receipts from fair levels of taxes dictate
given levels of service. Citizens who expect more without paying for it
are being unfair, and governments who provide less are no better.
There is now no widespread concept that a government is obligated to
provide a given level of service for a given amount of money. Even less
is there a concept that taking too much money for a given level of
service is a type of crime of state. The government's monopoly on force
in a might-makes-right world means that there aren't even theoretical
limits on stealing by the state. However, when it's the other way around
and right makes might, then those limits flow logically from what is
considered a fair tax level.
So far, I've been discussing taxes and what they buy on a per-country
basis which feels artificial in a modern world where money and
information flow without borders, and where we all breathe the same air.
A view that considers all people equal necessarily should take a global
view of what's affordable. I haven't discussed it that way because it
seems easier to envision in the familiar categories of nations as we
have them now, but there's nothing in the principles of fair taxation
and corresponding services that wouldn't scale up to a global level.
Finally, there's the most interesting question. In a country — or a
world like ours — which is doing much better than scraping by, what
awe-inspiring projects should the government undertake? How much money
should be devoted to that? Who decides what's interesting? Who decides
what to spend?
Personally, I'm of the opinion that dreams are essential, and so is
following them to the extent possible. I subscribe to Muslihuddin Sadi's
creed:
If of mortal goods you are bereft,
and from your slender store two loaves
alone to you are left,
sell one, and from the dole,
buy hyacinths to feed the soul.
Or, as another wise man once said, "The poor you shall always have with
you." Spend some money on the extraordinary.
So I would answer the question of whether to spend on visionary projects
with a resounding "yes." But I'm not sure whether that's really
necessary for everyone's soul or just for mine. I'm not sure whether
it's a matter of what people need, and hence whether it should be baked
into all state budgets, or whether it's a matter of opinion that should
be decided by vote. If different countries followed different paths in
this regard, time would tell how critical a role this really plays.
It seems to me inescapable that countries which fund interesting
projects would wind up being more innovative and interesting places.
Miserly countries would then either benefit from that innovation without
paying for it, which wouldn't be fair, or become boring or backward
places where fewer and fewer people wanted to live. Imbalances with
unfortunate social consequences would then have to be rectified.
As for the ancillary questions of what to spend it on, and how much to
spend, those are matters of opinion. Despite that, it's not clear to me
that the best way to decide them is by vote. People never would have
voted to fund a space program, yet fifty years later they'll pay any
price for cellphones and they take accurate weather forecasting for
granted. I and probably most people have no use for opera, and yet to
others it's one of the highest art forms.
A further problem is that budget instability leads to waste.
Nobody would vote for waste, but voter anger can have exactly that
effect. Jerking budgets around from year to year wastes astronomical
sums. A good example of just how much that can cost is the tripling,
quadrupling, and more of space program costs after the US Congress is
done changing them every year. Voters, on the whole, are not good at the
long range planning and broad perspective that large social projects
require. I discuss a possible method of selection under Advanced
Education.
Briefly, I envision a similar system to the one suggested for other
fields. A pool of proposals is vetted for validity, not value, and
selection is random from among those proposals that make methodological
sense. However it's done, decisions on innovation and creativity have to
be informed by innovative and creative visionaries without
simultaneously allowing corrupt fools to rush in.
- + -
A fair government, like any other, has a monopoly on force, but that is
not the source of its strength. The equality of all its citizens, the
explicit avoidance of double standards in any sphere, the renunciation
of force except to prevent abuse by force, short and direct feedback
loops to keep all participants honest, and all of these things supported
in a framework of laws — those are the source of its strength. The
strength comes from respecting rights, and is destroyed in the moment
rights get less respect than might. The biggest obstacle to reaching a
true rule of law is that intuition has to be trained to understand that
strength is not muscle.
+ + +
Money and Work
The Nature of Money
Commerce is not a function of government, and if money had nothing to do
with power, there'd be no need for much government involvement in
commercial affairs. But money and status turn into each other so fast
that regulation is essential to keep it in its place. That place,
furthermore, is limited to things that can be sold. It does not extend
to selling everything that could be turned into money. A few steps will show why this
is so.
Money properly plays a role only to enable the transfer of measurable
goods or services. Vegetables harvested, hours worked, court cases
argued, or websites designed: these are things that can be measured and
priced. Each of them also has intangible qualities, which can be what
gives them their value as opposed to their price, but they do have
significant measurable components.
In contrast, one has to struggle to find any way to measure those things
which are mainly intangible, and any measurement never captures what's
important about them. It's the usual list of all the things that really
matter: love, truth, beauty, wisdom, friendship, happiness, sorrow,
justice. There is no way to squeeze money out of the priceless. That's a
binary system. It's either priceless or worthless, and monetizing it
only turns it into trash. Keeping money out of where it does not belong
is more than a nice idea. It's essential for a sustainable society
because nothing lasts long if it's worthless.
A second vital aspect concerns what money actually is. When you come
right down to it, money is a measure. It measures goods to make it
easier to exchange things. That's all. It might as well be a ruler,
except that nobody has ever died for a lack of inches. But that says
more about what people do with it than what money is. The fact remains
that money is simply a measure.
As such, money ought to share the characteristics of other measuring
devices. One centimeter is always the same, for rich and poor, now and a
thousand years from now, and whether it's used in London or Lagos.
However, unlike inches or minutes, the stuff money measures grows and
shrinks from year to year. Wealth can fluctuate, so measuring it is much
more of an art than a science. Economists have developed deep and
sophisticated ideas on how to practice that art, so there are ways to
approximate the ideal of a consistent measurement even if it can't be
perfect.
Besides fluctuation, there are other reasons why one might not want to
keep money perfectly consistent. Another way to describe consistency is
that there should be neither inflation nor deflation. A dozen eggs now
should cost the same as they did in 1750. The very idea sounds funny
because the situation is so far from actual events it's
near-inconceivable. But only something real can grow (or shrink) like
that. It's quite possible to start with one tomato and end with dozens a
while later. But money isn't like that. A bigger pile of money does not
change the pile of tomatoes in any way. Or, to put it more generally, a
bigger pile of money doesn't change actual wealth. Neither gold nor
paper nor conch shells will keep you alive. Only what money buys, that
is, what it measures, can actually increase wealth. However, because of
the confusion of ideas, an increase in inflated money can feel like an
increase in wealth unless its inability to buy more is particularly
stark, as in hyperinflation.
Inflation is therefore very handy as a feel-good mechanism. It's not
just politicians who use that fact. Almost everybody who can will charge
more for what they've got if they can get away with it. There are ways
to counteract that tendency, such as optimal competition, cost
information, overall money supply, and other factors that economists can
tell us a great deal about. The main point is not that there are things
we can do, but that it's essential to recognize how strong and how
universal is the desire not to do them. That desire needs to be
explicitly and constantly counteracted. Otherwise money ceases to be a
measure, which is the only function it can perform to good ends, and
becomes a tool of control.
Tangentially, I have to discuss the concept in economics that a basic
level of unemployment is necessary to keep inflation in check. There is
even an acronym for it, I gather from reading one of my favorite
economists: the NAIRU, the non-accelerating-inflation rate of
unemployment. It's axiomatic that
without enough unemployment (how much is subject to debate) the
wage-price spiral starts. In other words, unless enough people are flat
broke, other people will raise prices. The fact that the system
/depends/ on a class of people with less access to life, liberty, and
the pursuit of happiness doesn't get much discussion.
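For the record, the mechanism being invoked is simple to state. In the
textbook accelerationist version, inflation drifts upward whenever
unemployment sits below the NAIRU and downward when it sits above. A
numerical sketch, with every parameter invented:

    def next_inflation(inflation, unemployment, u_star=0.05, slope=0.5):
        """Textbook accelerationist relation: pi' = pi - slope*(u - u*)."""
        return inflation - slope * (unemployment - u_star)

    pi = 0.02
    for year in (1, 2, 3):
        pi = next_inflation(pi, unemployment=0.03)  # held below the NAIRU
        print(f"year {year}: inflation {pi:.1%}")
    # 3.0%, then 4.0%, then 5.0%: accelerating, hence the name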
That is not acceptable if all people are equal. A fair system simply
cannot depend on some people being less equal than others. The demand
side of inflation can be influenced by competition, by price
information, and by any number of other measures that can be applied
equally to everyone, but it cannot be influenced by the involuntary
poverty of a few. Personally, I don't believe that poverty is the only
way to control inflation, or that it's impossible to find a balance between
equal competing interests, and I do believe that it's up to economists
to work out new methods compatible with consistent rules.
However, if my intuition is wrong, and poverty really were to prove
essential, then it has to be voluntary poverty. If people making less
than X are essential to the system, then it's up to society as a whole
to facilitate their function. The government provides the meager annual
income to the proportion of people required, and in return for having
only the bare minimum to survive, those people don't need to work. It
may be that faced with the alternative of a whole cohort of people who
get a "free ride," (the quotes are there because poverty is never a free
ride) there will be more motivation to work out a system that doesn't
require poverty.
There are also real, i.e. non-inflationary, reasons why prices change.
We have much better industrial and agricultural processes now than in
1750, which makes the price of eggs on an inflation-adjusted scale much
lower now than then. (We've also gone overboard with factory farming,
but even with good practices, food is now proportionally cheaper than it
was then.) Further, there may be completely new things that need
measuring. In 1750, you couldn't buy a computer at any price.
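Separating the real change from the inflationary one is a matter of
deflating by a general price index, in other words restating everything
in constant money. A sketch with invented numbers:

    def real_price(nominal, price_index, base_index=100.0):
        """Restate a nominal price in the constant money of the base year."""
        return nominal * base_index / price_index

    # invented eggs: the sticker price triples while the index quadruples
    then = real_price(nominal=1.00, price_index=100.0)
    now = real_price(nominal=3.00, price_index=400.0)
    print(f"in constant money: then {then:.2f}, now {now:.2f}")

The eggs got cheaper in real terms even though the sticker price rose;
the rest of the change was the measuring stick shrinking.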
Given that money is a measure, it's idiotic to run out of inches, as it
were, when the underlying wealth is there. And yet that was the trap
countries had to climb out of in the 1930s when they were on the gold
standard, but didn't have enough actual gold to measure their wealth. In
what was recent history at the time, major countries such as Germany and
Russia had struggled with hyperinflation, and a strict gold standard was
intended as a "never again" law. It worked as far as it went. But
without an explicit understanding that the problem was not inflation
itself but unreliable measurement of goods, they fell into the opposite
error and allowed deflation to develop.
In the details, using money as a measure is hugely complex. "Running out
of inches" can happen when there aren't enough "rulers," and also when
the existing money is hoarded instead of used. Psychological factors —
which are the worst kind — can be at work, or it can be a simple matter
of the reward structure. For instance, in our current economic crisis,
governments narrowly averted deflation by pouring money on the world's
financial systems. (They didn't do it fairly, sufficiently, or well, but
they did do it.) Logically, that money should be bubbling up through the
system at this point (late 2009). Instead it's sitting in banks because
the executives' jobs depend on their banks staying in business, and
their primary concern right now is surviving an audit by having adequate
capital reserves. So instead of lending, they complain about bad credit
risks and sit on the money. Had a given level of lending been made a
condition of aid, in other words if the reward structure were different,
then bankers would have had to continue improving their balance sheets
through their business rather than through taxpayer funds.
The complexity is at least as intricate when merely trying to define
inflation. It's easy to say eggs should cost the same everywhere and
everywhen, but to someone who doesn't eat eggs, their price doesn't
matter. This is nontrivial. One of the big complaints about measures of
inflation like the Consumer Price Index is that they're out of touch with
reality. College tuition costs skyrocket, medical costs skyrocket, and
the CPI barely moves. That's because those costs are mostly not part of
the CPI, which assumes, for instance, that college is optional. That's
true, on one level, but if you're paying for it, it's not true on
another level. True measures of inflation — and their corresponding
corrective measures — would have to take the diversity of needs and
goals into account. One number can't encompass the different reality
felt by people in different walks of life. The point is that none of
them should have to contend with money changing in its ability to
measure the goods and services they need. Fairness, after all, is for
everyone. The calculations must reflect reality. Reality won't dumb
itself down for convenience in calculation.
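The mechanism behind any such index is an ordinary weighted basket, and
a sketch shows why one aggregate number can hide very different
realities. The weights and prices below are invented:

    def basket_inflation(weights, old_prices, new_prices):
        """Weighted price change across one household's basket of goods."""
        old = sum(weights[g] * old_prices[g] for g in weights)
        new = sum(weights[g] * new_prices[g] for g in weights)
        return new / old - 1

    old = {"food": 100, "rent": 800, "tuition": 1000, "medical": 300}
    new = {"food": 104, "rent": 824, "tuition": 1150, "medical": 360}
    student = {"food": 0.2, "rent": 0.4, "tuition": 0.4, "medical": 0.0}
    retiree = {"food": 0.3, "rent": 0.4, "tuition": 0.0, "medical": 0.3}

    print(f"student: {basket_inflation(student, old, new):.1%}")  # ~9.5%
    print(f"retiree: {basket_inflation(retiree, old, new):.1%}")  # ~6.5%

Same prices, same year, and noticeably different inflation, depending on
who is doing the buying.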
The complexity can be addressed if economic policy is explicitly geared
to making sure that everyone's money buys the same today as it did
yesterday. Economists are very good at math. If the goal is clear, the
complicating factors can be identified, and the calculations performed.
Economists are not so good at answering the philosophical,
psychological, and sociological issues that have to be resolved
correctly first, before the complex calculations can serve a good purpose.
Getting back to some of those philosophical issues, the unexamined
assumptions about money have enormous impact on daily life. Money is
familiar stuff and we take it for granted. People don't worry about
whether their attitude toward it makes sense in light of what it
actually is. Judging by results, that's no less true of treasury
secretaries, economists, and giants of finance than it is of Jane and
Joe Schmoe. It may not matter whether ordinary citizens understand what
money is, but the unexamined assumptions of policy makers affect the
whole world.
The fundamental error is thinking that inflation or deflation are only
economic issues. They aren't. Money is a measure, and that makes its
constancy a matter of fairness. Economics comes into it to figure out
the numbers conducive to that goal, but economics, like science in the
parallel situation, can't determine purpose. Because it's convenient to
forget that, or because it simply isn't thought through, there's a sense
that money follows its own rules and all we can hope to do is steer it.
That's absurd. Money is not a force of nature. It is a human construct,
and its effects flow from human actions and regulations. They need to
flow fairly, not as methods of covert control or as chaotic corrections
against past errors.
Money as measurement has another far-reaching implication. Paying
interest on money /in and of itself/ makes no sense. It would be like
adding inches to a ruler merely because it had been used to measure
something. The inches would change size and the ruler would become useless.
People can be paid some proportion of the new wealth created by the use
of their money, and they can be paid for the risk of losing it, but it
makes no sense to pay for the money itself. Those aren't new thoughts;
paying for risk or for new wealth created are the accepted
justifications for charging interest. And yet it's so easy to become accustomed to paying
tribute to power that nobody measures interest rates by objective
standards. How much wealth was created can be approximated, and the same
for how much risk was incurred. And yet the only accepted controlling
factors on the price of money are supply and demand. Which ends only in
those with the supply making all the demands. There's nothing "free
market" about that.
Before anyone objects that supply and demand are the only controlling
factors that work, I'd like to add that supply and demand are steerable.
Governments currently control interest rates through money supply and
the interest they charge on funds available to banks. I'm not sure why
the government's function as a flywheel is only legitimate when applied
to the biggest players. It could apply to everyone. An intractable
problem of overcharging that isn't solved by active promotion of
competition would definitely melt away if the government provided a
fairly priced alternative. Anyone who doubts the effectiveness of such
actions need only watch the panic in the US health insurance industry
at the thought that the government might provide a fairly priced
insurance option. Politically motivated or unrealistic price controls
are not effective for long, but that doesn't mean prices can't be
steered within fair and realistic limits.
At the extremes of high interest, there is a sense that gougers are
taking advantage of the situation. That's led to usury laws and, in the
extreme case, the Koranic prohibition against interest generally.
Although it's the right idea, what both approaches miss is that the
problem isn't interest. The issue is that money is a measure and that
interest works only inside that paradigm. Allowing interest charges
outside of those limits isn't just a problem of some people cheating
other people, it isn't merely an aberration with no significance for the
integrity of the system. It subverts the real purpose of money for
everyone and therefore results in unsustainability.
Capital
At the heart of every endeavor involving money lies the exciting skill
of accounting. Seriously. Without the invention of double entry
bookkeeping in the Middle Ages, the scope of capitalism would have been
limited to what you could keep track of in a checkbook. I'm joking only
a little bit. By and large, that's true. How money is accounted for is
central to using it, so if we can get the accounting right, misuse of
money can be much less of a problem.
On a simple level of personal fraud, the need to prevent misuse led to
the invention of accounting. But correct accounting can also prevent
much more generalized fraud. That's important because activities related
to making money inherently run on self-interest, and self-interest has
the property of maximizing gain by any means available. Capitalism is
the economic system that, so far, lets the most people work according to
the natural template, and so capitalism works. All that remains is to
make it work well.
Accounting is the tool that can do that. Consider, for instance, one of
the worst side effects of an economic system founded upon the pursuit of
self-interest: the tragedy of the commons. The term comes from the
common land in villages where everyone could let their livestock graze.
That way even landless peasants didn't have to live solely on cabbage
and gruel. Since it belonged to nobody, nobody took care of it, and it
became overgrazed and useless, especially after infant mortality
decreased and the number of peasants grew. The same process can be seen
everywhere that "free" and common resources are used, whether it's air,
oceans, or geostationary orbits. The economists call these things
"externalities" because they're external to the transaction. They affect
someone other than those buying and selling, and they don't need to be
booked on the balance sheet.
The thing is, what's on the balance sheet is /a consequence of the
established rules/. There was a heartwarming, can-do article about
northern Ivory Coast in early 2010. Rebels held sway there, and they
kept the cooperation of the population by charging no taxes. So, in
northern Ivory Coast, taxes (and
everything they buy) were an externality, and people worried about how
to get the traders to accept a more "normal" system after reunification.
In the same way, people everywhere else worry about how to get
corporations to pay for the downstream costs of their business. The only
difference is we're used to taxes being on balance sheets, and we're not
used to putting social costs there.
Neither is impossible. It just gets done. Governments pass laws that
people must not hold slaves, and legal businesses stop holding slaves.
If taxes have to be paid, taxpayers do so. If corporations are required
to contribute to retirement or unemployment insurance, they do so. If
they're not required, those things turn into externalities overnight.
All of these things are not physical laws. They're rules made by people,
and they can be changed by people with the stroke of a pen.
To avoid the tragedy of the commons, the only change needed is to move
to accounting that books the total cost of ownership. /All/ the costs of
a product, including the downstream costs usually passed on to others,
have to be included in the balance sheet. There can be no externalities.
It's the "you broke it, you pay for it" principle. That's only fair.
Money to deal with downstream costs has to be paid into a fund that will
deal with those costs when they come up.
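In sketch form — the product, the cost categories, and the figures below
are all invented, and real accounting would be far more detailed — the
rule is simply that the booked cost of a product includes its downstream
costs, paid into the fund that will cover them:

    # Total-cost-of-ownership accounting: no externalities allowed.
    # Product, cost categories, and figures are invented for illustration.
    def booked_cost(production, downstream_estimates):
        """Booked cost = production cost plus remediation fund payments."""
        fund_contribution = sum(downstream_estimates.values())
        return production + fund_contribution, fund_contribution

    production = 10.00  # per-unit cost of actually making the widget
    downstream = {      # per-unit costs usually passed on to others
        "disposal": 1.50,
        "air pollution": 0.80,
        "water treatment": 0.40,
    }

    total, into_fund = booked_cost(production, downstream)
    print(f"booked cost per unit: ${total:.2f} "
          f"(${into_fund:.2f} of it paid into the remediation fund)")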
Implementation of that idea has already begun, but a lack of
transparency and accountability makes it very weak. Nuclear
power plants, for instance, pay into a fund that is supposed to cover
decommissioning costs. The amount has been estimated based on industry
information rather than actual data from the few small or partial
decommissionings there have been at this point, so the costs are
underestimated by an order of magnitude or so. Including total cost of
ownership is meaningless if it is allowed to turn into a charade. If
transparency and accountability are not sufficient to force companies
into honesty, then additional methods must be found and applied. The
point is that there must be a total cost system, and that it must be
fact- and reality-based.
As with all things, the details are complicated. Perhaps the most
complicated of all is how to put an estimate on future costs, and how to
delimit where a business's responsibility ends. But those are not new
issues. They're solved daily by companies and governments everywhere.
Something like goodwill is a more nebulous entity than anything
discussed here, and yet that's regularly priced when companies are for
sale. The only thing that's new is that costs which are currently not on
the books have to be put there.
Downstream benefits can be an externality just as costs are, but it's a
much less common problem. Most such activities are already public or
non-profit with government support, so the beneficiaries have already
contributed payment for their future benefits. Vaccine makers, though,
are an example of companies that don't see rewards commensurate with
the benefit they provide. Whether that recognition is best given in
the form of favorable tax treatment or by some other method, there needs
to be some way of returning some of the benefit to those creating it.
External benefits are not a parallel situation to external costs. There
is nothing to be fixed, so there is no price tag on the future. The
benefits often come in intangible form — good health, for instance, is
priceless — and there would be no way to return the full benefit to the
company. Furthermore, the people in the company already benefit from
living in a society where such intangibles are common. So, in important
ways, they're already repaid. That's why external benefits can only
receive token recognition, but insofar as they exist, that recognition
should exist as well.
The other major flaw endemic to a system based on self-interest is that
there is almost no long-term planning or overall coordination. Those
things are supposed to emerge organically from the properties of the
market, just as the resilience of ecosystems emerges from individuals'
self-interested effort to survive. When the desirable emergent
properties don't appear, economists speak of market failure.
By now, the market is failing the whole planet. The assumption has been
that market failures are exceptions or something that happens at the
limits of the system where some deficiencies are unavoidable. But the
failures happen so consistently and with such vast consequences that
it's necessary to consider whether there's a fundamental
misunderstanding of what markets can do.
The analogy between markets and natural systems isn't simply imperfect,
as all analogies must be. It is downright wrong. Markets have a
structural feature not seen in natural systems: their components are
self-aware, able to envision goals, and able to alter their behavior
based on those goals. That is a game-changing difference, and means that
none of the models useful in natural systems, not even the most complex,
can be applied to economics without drastic revision and the addition of
several new variables.
Economists think they are modeling a croquet competition, as it were,
and have complex calculations of the force of the mallet hitting the
ball, trajectories, wind speeds, athlete fitness, and on and on and on.
But the game they're really modeling is the one the Queen ordered Alice
to play in Wonderland where the ball was a hedgehog and the mallet a
flamingo that had to be tucked just-so into the crook of her arm. By the
time she had the flamingo adjusted, the hedgehog uncurled and ambled
away. When she recovered the hedgehog, the flamingo twisted up and fixed
her with a beady glare. The pieces don't stand still in economics, and
they all have a mind of their own.
That's an old idea, by the way, even though much ignored. For instance:
"In his 1974 Nobel Prize lecture, Friedrich Hayek, known for his
close association to the heterodox school of Austrian economics,
attributed policy failures in economic advising to an uncritical and
unscientific propensity to imitate mathematical procedures used in
the physical sciences. He argued that even much-studied economic
phenomena, such as labor-market unemployment, are inherently more
complex than their counterparts in the physical sciences where such
methods were earlier formed. Similarly, theory and data are often
very imprecise and lend themselves only to the direction of a change
needed, not its size." (/Hayek, Friedrich A., "The Pretence of
Knowledge," Lecture to the Memory of Alfred Nobel, 1974./)
Expecting emergent properties to work for the best in a system whose
participants are self-aware and able to use those properties for their
own purposes is a fallacy. It serves the interests of those who'd like
to take advantage of the system, but that doesn't change the fact that
it doesn't work.
What does work for self-aware participants who can modify their behavior
is rules that apply to everyone, with rewards for abiding by them and
punishments for transgressions. That type of regulation is the domain of
law, not commerce. The /rules/ need to favor long range planning. Using
total cost is one such rule. Transparency and the ability of affected
people to alter outcomes are another two. Accountable regulators who can
be unelected are yet another.
The forces for beneficial outcomes have to come from /outside/ the
market. Those outcomes cannot be an emergent property of markets. They
may happen occasionally by accident because markets don't care either
way, but they're not going to happen regularly or even often. Applying
the right laws is what will lead to the most vigorous markets with the
most generally beneficial effects.
Again, these are things everyone knows even if they're rarely
articulated. The proof is in how hard people with market power work to
make sure none of those rules can take hold. The problem always comes
back to undue concentrations of power and the difficulty of doing
anything about it for the perceived immediate payoff. As I've said
before, I don't have new solutions for how to wrest control back after
it's been lost, but I do think it points up how vital it is to prevent
such concentrations to begin with.
Preventing concentrations of market power is as important a function of
government as the maintenance of a fair monetary system. I've touched on
it in the sixth chapter, Government 2: Oversight, under Regulation.
Current practice already recognizes that extreme concentrations benefit
only a few at the expense of everyone else. Economists even view
unbalanced market power as a symptom of market failure. Given that they
know it's a malfunction, you'd think they'd be more concerned about why
it's so pervasive.
And pervasive it is. For a quick selection of anti-competitive
practices, here's a list from Wikipedia: monopolization; collusion,
including formation of cartels,
price fixing, bid rigging; product bundling and tying; refusal to deal,
including group boycott; exclusive dealing; dividing territories;
conscious parallelism; predatory pricing; misuse of patents and copyrights.
It reads like a summary of the high tech business model. There's a
reason for that. In the interests of giving US businesses a leg up
against their foreign competitors, the US government ignored most of
its own antitrust law. Biotech, electronics, and software, all the new
industries of the last few decades, have been allowed to operate in a
free-for-all with the usual results. There are now a few behemoths who
spend most of their time squelching innovation, unless they own it, and
overcharging for their products. The easiest example is the telcos,
which were more regulated in Europe and Japan than in the US. In the
US, average broadband download speed in 2009 is said to be around 3.8
Mbps and falling, whereas in Scandinavia it's around 5 Mbps. (I say
"said to be" because mine is less than 1 Mbps.) Cost of broadband,
however, is similar or cheaper in the countries with faster speeds.
Average phone service cost in the US is around 75c per minute. In
contrast, the Scandinavian rates are less than *one-fifth* that amount.
If the more-regulated European telcos could compete in the US, how long
do you think the coddled oligopoly here would last? A relatively new
method of squelching competitors is using the law, such as patent law,
to subvert the laws on competition. There is nothing in antitrust laws
preventing much larger companies from tying up their competitors in
lawsuits, which, because money is a factor, hurt small players much more
than large ones. Instead of competing on price and performance,
companies compete on lawyers, defensively filing for as many patents as
possible. Then when the suits start, one can countersue with one's own
arsenal. There's even a new term to describe companies whose entire
business model is extortion using the law: patent trolls. The market no
longer works to provide good products at the best price. The law no
longer works to provide equal justice for all. And the situation
continues to devolve because doing anything about it is an uphill
struggle against some of the largest corporations in the world.
As I'll argue in a moment, I see strict enforcement of competition as
the primary tool to prevent excessive market share. However, the
evidence shows that significantly larger size than competitors, by
itself, is enough to tilt the playing field. So fallback rules to
restrain size are also necessary in case competition can't manage the
task alone. Whenever there's talk of using more than competition to
restrain business, a typical set of objections emerges.
One of the commonest arguments against down-regulating a company rapidly
acquiring majority market share — at least in the US — is that the
others are losing out because they just don't compete as well.
Suppressing the winner only enables inefficient producers. Free
marketeers make the same argument in each new situation no matter how
often concentrations of market power lead to inefficiency. They pretend
that a snapshot in time of a vigorous growing company is indicative of
how that company will behave once it has dominance. It doesn't seem to
matter that there's a 100% consistent pattern of using dominance to
throttle competition instead of increasing innovation. Promoting market
power in the face of that evidence is neither rational nor sustainable.
It is competition that leads to vigorous competitors, not the lack of it.
Another argument is that the cost of intervention exceeds the benefit
to consumers, so that, by the very total cost approach I favor so much,
intervention does more harm than good. However, I've never seen this
approach applied
to the real total cost. It's applied to the snapshot in time where,
right this instant, Widget A would be X amount cheaper for consumers,
but the cost to the government and industry in implementing the change
would be more-than-X. It's irrelevant to them, for some reason, that in
a few years, when the company in question is too big to control, the
price will go up and stay there. Plus it leads to an even bigger loss of value
as innovation suffers. (Just as one small example, consider cell phones
in the US, yet again. We could, at this point, all have phones that move
seamlessly between voip, wifi, and cell, that use voice-to-text and
visual voicemail, and we could have that at a small fraction of the
costs we pay. We don't because the four large companies that own the US
market don't have to compete against anyone who would provide it, and
this way they can feed in useful features as slowly as possible.) A real
total cost approach would take into account the vast downstream effects,
and thus would never provide a justification for unlimited, cancerous
market growths.
Free market proponents also object to "restraint of trade," as if that's
a bad thing. On some level, it's essential because market motivations
don't relate to larger or long term social benefits. However, even free
marketeers see that. All except the wild-eyed extremists understand the
need for some antitrust, pro-competition regulation. The very heart of
capitalism, the stock and commodity markets, are the most stringently
and minutely regulated businesses on the planet. (That doesn't stop
anyone from trying to work each new loophole as it comes up, such as
high frequency trading, which proves how essential unflagging
regulation is.)
But there's also another, and larger, point about restraint of trade.
Markets are just as capable of it as governments. Restraint by market
participants isn't somehow better than government restraint, even though
it has no softhearted good intentions. It's precisely to prevent market
restraint that government restraint is necessary. Interestingly, the
loudest objections to government involvement generally come from those
who hope to take advantage of their market position.
All that said, the opponents of market restraint do have a point.
Restraint as an attempt to manipulate the market is not valid no matter
who does it. It can only be used to keep the field level.
That implies protectionism, as one type of market restraint, should
never be applied. By and large, that's true, but only when protectionism
is correctly delimited. Equalizing prices to compensate for bad
practices is not protectionist even though, for instance, the World
Trade Organization currently assumes it is. WTO's thinking is based on a
fallacy. Equalizing prices does not protect industries that can't
compete in terms of their actual business. Instead, industries that try
to compete by sloughing off real costs are penalized. Nullifying
cheating is not the same as protectionism.
There is one instance when true protectionism is justified: in the case
of a much weaker party who would otherwise suffer significant social
disruption. The situation can arise because of past imbalances, such as
former colonial powers and the former colonies, or because of small size
and intrinsic resource poverty. In the former case the adjustment would
be temporary, in the latter possibly permanent. Ideally, of course,
every region could sooner or later find its niche and not require that
kind of assistance. But it's not hard to imagine, for instance, a
hunter-gatherer culture which could never compete economically with a
factory-based one. Yet it's also not hard to imagine that preserving
diversity could one day mean the survival of our species in the face of
overwhelming disaster.
Getting back to competition itself, it's interesting how hard it is to
maintain in spite of all the rules supporting it. If current rules were
applied, market distortion would be much less of a problem than it is.
It's not rules we lack. It's enforcement.
The first question therefore becomes which factors work against
enforcement and how to counteract them. The best rules imaginable won't
help if they're not enforced. Any number of factors working against
enforcement can be identified, but there is one which, by itself, is
always sufficient to cause the problem. People will always try to gain
what advantage they can. And while they're in the process of gaining it,
whoever tries to halt the fun is reviled.
Consider one current example. Now that the housing market has done what
was once unthinkable and caused losses, there's fury that bankers were
so greedy, that financiers invented instruments whose risks they could
pass on to others, and that regulators didn't stop the excesses.
And yet, I don't remember anyone even joking about what a wild ride we
were having at the height of the fever. I lived in one of the centers of
it, in southern California. Everywhere you went, in supermarket aisles,
at the next table in restaurants, in the dentist's waiting room, you
heard people discussing the latest house they bought, their low monthly
payments, the huge profit they were going to make, the good deal they
got on a second mortgage and how they were going to put their money to
work by trading up. The return on investment was enormous. It was simply
a sober business decision not to leave money on the table.
Now let's say the regulators had shut down the frothy housing price
appreciation. It would have been simple. All they needed to do was
enforce the income and asset rules required to get loans, and enforce
accurate risk assessments on new financial instruments. Those two
things, by themselves, would have made the whole repackaging and resale
of mortgages a much more staid business. With known and sober risk
assessments, there would have been far fewer investors able to buy. The
mortgage derivatives would have been too risky for pension funds and
municipalities and the trillions of dollars they control.
With fewer buyers, the sellers who thought they were going to make a
profit of $300,000 on their house were now going to make, maybe,
$25,000. Can you see them being happy about it? Not easily. Any
regulator who did them out of $275,000 would have been skewered. And so
would any politician who supported him or her. There is little
protection in the current system for making unpopular but necessary
decisions.
It was only after the bubble burst that people would have been glad of a
gain, any gain, instead of the losses they got. It was only after the
bubble burst that it was all the bankers' fault. People transformed
almost overnight from savvy financial mavens into innocent victims.
That isn't to say it was not the bankers' fault. It was. And the
regulators. But the point I'm trying to make is that it was also
everyone else's fault. There were /no/ loud voices — either in the media
or in the neighborhood — who wanted the party to stop.
So the problem has two sides. One is that people always take what
advantage they can. That's hard enough to stop when it's a matter of
power and people are trying to ignore that they're losing by it even as
it happens. But the other side is that when it comes to money, people
don't feel they're losing by it. The accumulative phases feel like gains
to most people, not losses. Party poopers are not wanted.
It's vital to recognize how unpopular it is to down-regulate gains and
how inevitable that unpopularity is always going to be. A disaster
before it happens is nothing, and it is the nature of prevention that
disasters do not happen. So how it feels /at the time/ is that money is
taken off the table for nothing. That always breeds resentment. That
always breeds a thousand arguments why the best idea is to go for broke.
I think the force of these gut-level convictions is so strong that
brain-level reasoning will be bent to it sooner or later. Very rational
governments may be able to avoid some disasters, but convincing excuses
will overwhelm reason often enough that disasters will still happen.
Recognizing that emotional dynamic, any government which is sincere
about regulation for the greater long term good will need to employ
automatic triggers for the most unpopular and necessary regulations.
There is just no way we can rely on our rationality to see us through
when there's money to be made.
Automatic triggers mean that when a company grows beyond a certain
market share, say 50%, /in any part of its business/ then
down-regulation takes effect. No excuses. I'm thinking, for instance, of
the national interest argument used in the US to "help" the nascent high
tech industry. It seemed so rational, but in the long run (which turned
out to be a mere decade or two) it actually turned out to be against the
interests of the nation and its citizens. The reason for the caveat
about "any part of the business" is so that vertical integration can't
be used to acquire a chokehold on a critical process while arguing that
overall market share is still under 50%.
It's important to stress that automatic triggers are the last resort, a
backstop. Regulatory scrutiny and downsizing would generally be
appropriate before that level is hit. Prevention of problems is the
goal, and that means preventing the development of market-altering
market share to begin with. The purpose of an automatic trigger is to
compensate for the inevitable lack of regulatory perfection.
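A minimal sketch of the trigger logic — the threshold comes from the
text, but the company and its market data are invented — shows why the
per-segment test matters: a business can look modest on average while
holding a chokehold in one segment.

    # The automatic trigger: down-regulation fires if share exceeds the
    # limit in ANY part of the business. Market data are invented.
    TRIGGER_SHARE = 0.50

    def tripped_segments(segment_shares):
        """Return every segment where the company's share exceeds the limit."""
        return [seg for seg, share in segment_shares.items()
                if share > TRIGGER_SHARE]

    company = {"handsets": 0.20,
               "operating systems": 0.65,
               "app distribution": 0.55}

    # A naive overall figure (equal-weight average, purely illustrative)
    # looks harmless even though two segments are chokeholds.
    overall = sum(company.values()) / len(company)

    print(f"overall share: {overall:.0%}")            # under 50%
    print(f"trigger fires in: {tripped_segments(company)}")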
If, on the other hand, the business is one where monopoly makes sense,
such as water supply or internet search, then the business needs to be
transformed into a regulated utility. Another instance of justified
monopoly is intellectual rights. (I'll discuss them in more detail in
the last chapter.) The whole point to copyrights and patents is to give
creators the benefits of their inventions. However, I think we've
slipped a vital cog in the understanding of which rights are involved.
The right to benefit is not the same as a "right" to manipulate markets.
A creator has the inalienable right to receive payment for their
invention for the term of the copyright or patent, but not the right to
absolute market power. The rules of competition still hold. In other
words, the creator must be paid, but they must also license their
creation if requested. Compulsory licensing should include the retention
of what's called "moral rights" in the fiction industry — control over
usage the creator considers inappropriate — but compulsory licensing has
to be a component of a system dependent on competition.
Whether a company needs utility status or competition, the point is that
a business answerable only to private interests cannot be allowed to
acquire game-changing levels of market power.
Moving on from the failsafes guaranteeing competition to the day-to-day
rules promoting it, the ideal would be for them to work so well that the
failsafes are never actually needed.
Transparency and short feedback loops should go a long way toward
achieving sustainable competition. By transparency I mean easily
accessible, true cost and reliability information. A Consumer
Reports-style listing and explanation needs to be one component. That
type of review is not costless, and it should be funded from a tax on
all businesses. It is, after all, the extreme likelihood of biased and
deficient information coming from vendors that creates the need for
independent review in the first place. Ease of access also means that,
when possible, information shouldn't require separate access at all. For
instance, cost of production could be a subscript to the listed price.
If there are other downstream payments, the total cost would also have
to be listed.
Picture that for something like the iPhone in the US. Cost of
production for this coolest-gadget-on-the-planet when introduced in
2007 was around $220; minimum cost with texting over the two-year
contract when introduced was around $2445. The price listing required
in all ads and other information would be: $599, $220, $2445. (Sale
price, cost of production, total price.) I
think the information factor would generate pressure for fair pricing.
Coupled with compulsory licensing, the pressure would also have a way to
take effect.
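In sketch form, using the 2007 figures cited above, the required
listing is nothing more than the three numbers side by side:

    # The three-number price listing, with the 2007 iPhone figures above.
    def price_listing(sale, production, total):
        """Format: sale price, cost of production, total cost of ownership."""
        return f"${sale:,}, ${production:,}, ${total:,}"

    print(price_listing(599, 220, 2445))  # -> $599, $220, $2,445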
When they can, vendors have done their best to suppress that type of
information, probably because it works. There was a very useful service
that popped up a few years back which would tell you how many minutes
were left on your cell phone plan(s). Once it caught on, the telcos had
it shut down by refusing to provide the necessary data, even though the
users had explicitly told the service to access their own data. (Similar
services have since popped back up — it is, after all, a rather basic
idea that should be part of the phone plan to begin with — but the US
telcos seem to successfully keep them in the fragmented startup stage.)
Given that unused minutes are the biggest reason why US users are paying
around 75c per minute without realizing it, it becomes very clear why
the service had to be quashed, even though it did nothing but put
information in users' hands. Information, by itself, can be a powerful
force, and depriving people of it is another example of using small,
unnoticed actions to grab anti-competitive advantage.
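The arithmetic behind that 75c figure is worth a moment, because it
shows exactly why minute-tracking information is so dangerous to the
telcos. The plan numbers below are invented, but the mechanism is
general: you pay for the plan, not for the minutes you actually use.

    # Why unused minutes make the effective price so high.
    # Plan figures are invented; the mechanism is general.
    monthly_fee = 60.00   # what the plan costs
    plan_minutes = 900    # minutes included
    used_minutes = 80     # minutes actually talked

    print(f"advertised rate: ${monthly_fee / plan_minutes:.2f} per minute")
    print(f"effective rate:  ${monthly_fee / used_minutes:.2f} per minute")
    # Advertised: about $0.07 per minute. Effective: $0.75 per minute.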
In a computer-saturated society, there are further ways to provide
customers with information across all vendors that make buying decisions
even more transparent. Air travel has several sites like that, but also
points up the shortcomings of a haphazard, private facility. Some
carriers aren't included, so the comparative power falls short, and
there are no guarantees of truth or objectivity.
The solution to those problems in this subset of search functions is the
same as in the larger set: searching either has to be a government
function or a regulated public utility subject to rules that require
transparency, objectivity, and ease of use.
The other major tool to promote competition is to give all competitors
an equally effective voice. The most alert enforcers against
anti-competitive practices will always be the smaller competitors who
suffer the most from them. Short and effective feedback loops promote
economic fairness just as much as any other kind.
Feedback should consist of a series of measures that escalate in the
level of compulsion. The first tier could be a public call for comment,
referenced where the product or company involved is advertised or
listed. The call would have to have a documented basis and be considered
realistic either by an independent ad hoc panel of knowledgeable people
or by a regulator. Depending on which side independent comments
accumulated on, either the call should be withdrawn or the offending
company should modify its practices. If the outcome of comments was
insufficient to
cause action, there could be appeal for review by regulators whose
decision would be binding. If either party disagreed with it, then the
last stage could be the legal process. Especially in fast moving tech
markets, which side gets the benefit of injunctions pending a decision
can amount to a /de facto/ decision in itself. As in other areas, when
the merits of the case are less than obvious, the decision should favor
the weaker party. Since in the system envisioned here money is not a
major factor at any point, larger companies should not have an advantage
by their size alone. Furthermore, there are time limits both on
regulatory and legal processes. Those have to be set low enough so that
decisions are still relevant by the time they're made.
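For clarity, here is the ladder in schematic form. The stages follow
the description above; the deadlines are invented placeholders for the
"low enough" time limits.

    # The escalating feedback ladder, schematically. Stages follow the
    # text; deadlines are invented placeholders for the time limits.
    from enum import Enum

    class Stage(Enum):
        PUBLIC_COMMENT = 1    # documented call, vetted for plausibility
        REGULATOR_REVIEW = 2  # binding decision if comments don't settle it
        LEGAL_PROCESS = 3     # last resort, also time-limited

    DEADLINE_DAYS = {Stage.PUBLIC_COMMENT: 60,
                     Stage.REGULATOR_REVIEW: 90,
                     Stage.LEGAL_PROCESS: 180}

    def escalate(stage):
        """Move an unresolved complaint up one rung; none past the courts."""
        return None if stage is Stage.LEGAL_PROCESS else Stage(stage.value + 1)

    stage = Stage.PUBLIC_COMMENT
    while stage:
        print(f"{stage.name}: decide within {DEADLINE_DAYS[stage]} days")
        stage = escalate(stage)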
One inevitable question in any system of checks is what provides the
balance. What is to prevent competitors from using the law as
harassment? There need to be safeguards at every step. From the first
step, the call for comment, the accused company has the right to present
its side. If the call really is harassment, it shouldn't get past the
first step of vetting for plausibility. If it does, the public would
have no sense of injury and there would be few comments. If the accuser
escalates to regulatory or even legal review over something that has
already appeared baseless at earlier stages, then there needs
to be some form of sanction. The most direct might be to rescind the
right to complain about anti-competitive practices for some period of
years. Monetary damages, however, need to be applied sparingly if at
all. It is more important to protect the right to contest
anti-competitive behavior than it is to punish all abuses.
As always, the regulators play a crucial role. Their function is to
prevent abuses by either side and to keep things running so smoothly the
legal system has nothing to do. Equally obviously, they will almost
never be able to perform to that standard because where money is to be
made, nobody will be happy. There are many safeguards in the system to
keep the regulators honest: transparency, complaints, recalls, and legal
action. In this case particularly, though, it's to be expected that
people will make an extra effort to game the system using complaints.
Protection of regulators from frivolous or vengeful complaints should
always be robust, but that protection may need an extra layer or two for
financial regulators.
If I'm correctly imagining some of the comments on these ideas, they'll
have started with skepticism that market power can be consistently
contained. After seeing the sections on promoting competition, the
objections change to, "Well, of course, if you do it /that/ way, but
people will never go for it." I (obviously) disagree. People do whatever
they have to do. There was a time when it was equally inconceivable that
governments would consist of elected officials. As I keep saying, I have
no good ideas on how to get control that's been lost, but people have
figured out how to do that over and over again. They'll do it eventually
in the economic sphere as they've done in the political one. Once that's
happened, then what I'm hoping to do is help in the quest for
sustainability so that we avoid the heartbreaking regressions we keep
going through.
Finance
The world of finance — stock markets, banks, commodity markets, and the
like — has few regulatory problems, contrary to intuition. Those are
some of the most tightly regulated activities in commerce, which is
ironic given the public philosophy of some of the players. At the heart
of the free market is an intensely leveled playing field. The necessary
regulatory environment for finance is well understood, and insofar as
additional measures are needed they're not a fundamental rethinking of
the principles involved. The principles are fine: market information
must be equally available to all, all transactions happen in an open
environment where all participants are equal, and the only variables are
supposed to be price, performance, and quality.
The deficiency in high finance is enforcement. After financial
catastrophes, enforcement ratchets up a bit, followed by new methods of
avoiding it that then inevitably lead to the next catastrophe. The
primary mission in the world of big money has to be finding sufficient
tools to make enforcement consistent. I would expect that transparency,
the rules for competition, and explicit limits on size and market share
would be sufficient to prevent backsliding on enforcement. However, if
it proves otherwise in reality, then means have to be found to /prevent/
unsustainable financial situations, not merely to fix them after the fact.
Enforcement of sustainable and equitable rules in the financial world is
always likely to be difficult. The very close relation of money and
power is one big reason. The overwhelming desire to believe in
unbelievable deals is another. It's not that people don't know those
deals are too good to be true. It's that they don't want to know.
I'll give an example of how complexity is used to achieve ignorance
since it's a method seen most commonly in finance. It was particularly
in evidence in our last go-round of inventing new financial instruments
and a whole new shadow banking system that was free of regulation.
The idea was that these new instruments were totally different from the
old ones, and hence couldn't be regulated by the old methods. They also
weren't regulated by any new methods, but that bothered few people while
the things were ostensibly making money. On the rare occasions when
there was talk of regulation, the objection was that these things were
so new nobody knew how to regulate them. However, that was okay because
markets worked in the interests of their participants, not against them.
When it turned out not to be okay, even someone with the financial
credentials of Alan Greenspan said he was in "a state of shocked
disbelief."
That's nonsense. If I, a financial amateur, could see the outlines of
what was happening, then there is no chance the high financiers didn't.
They knew enough to know perfectly well that they didn't know how the
risks were being evaluated. That, by itself, is normally a red flag in
financial circles.
Just to underline how easy it was to understand the essentials, I'll
give a quick summary here. Mortgages were divided into packages of good
ones and not-so-good ones and sold as packages. Some of them carried
very high yields and yet were rated to have low risk. That's a classic
too-good-to-be-true scenario.
The packages were a mix so it was hard to tell how risky they really
were. The hugely complicated calculations to assess the risk of the mix
were passed to computer programs whose workings very few people
understood. That's a classic Garbage In, Garbage Out scenario. You don't
need to understand the programs or the instruments to know that.
The potentially GIGO risk was insured by companies that had only a few
percent of the capital needed to cover any eventual failures. Logically,
they should have been terrified of losses, since they had so little
capital to cover them, and should have erred on the side of setting risk
estimates too high. Instead the risks came in so low even pension funds
could buy the packages. They were sold by the trillion, and everybody
collected fees on the sales all up and down the line.
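The back-of-the-envelope version — every figure below is invented, but
the proportions are the point — makes the house of cards obvious:

    # Capital vs. insured exposure: "a few percent" cannot absorb a bad
    # year. All figures are invented; only the proportions matter.
    insured_exposure = 500e9            # dollars of derivatives insured
    capital = 0.03 * insured_exposure   # a few percent of capital on hand
    bad_year_losses = 0.08 * insured_exposure  # a modest default wave

    shortfall = bad_year_losses - capital
    print(f"capital on hand: ${capital / 1e9:.0f} billion")
    print(f"losses to cover: ${bad_year_losses / 1e9:.0f} billion")
    print(f"shortfall:       ${shortfall / 1e9:.0f} billion")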
If that doesn't smell of a scam to you, then you have an enviably
trusting mind. Everybody in finance knew it was a house of cards. There
is no need to understand the intricacies of risk assessment to know
that. People didn't care because they were making money on it, but that
is not the same as not knowing. Pleading ignorance is just an excuse.
It's been said that the only people who can be conned are the ones who
want to be, and that's the problem with financial enforcement. Now that
the party is over, the best minds
in finance are pointing out that the excess complexity led to lack of
transparency, and that it's important to reduce the complexity if it
isn't to happen again. But even the complexity is only a symptom. The
cause of the disease is the desire to believe in the scam.
Somehow, enforcement has to be immune to the desire for the latest get
rich quick scheme, and immune to everyone's, including the experts',
willingness to believe they're on to a winning super-deal. It's my hope
that a clear-eyed awareness of the root cause and a matching legal
framework to stop flimflam before it starts will be enough to free
people from the predictable cycle of asymmetrical advantages and
subsequent crashes.
Scale of Business
Whether an enterprise is global or local, distributed or concentrated,
can have social consequences. Military procurement is perhaps the one
area where the broader implications of one aspect of sourcing are
already recognized, but that's only one specialized domain. The scale of
enterprises has many general social implications that markets won't take
into account without government attention.
Theoretically, trade allows everyone to benefit from different natural
advantages. But situations where everyone benefits from resources beyond
human control are not the same as situations engineered by humans to
benefit some at the expense of others.
The clearest example is labor. Money can scour the world for the
cheapest workers, but workers can't search the planet for the best
money. That sets up an imbalance which has nothing natural about it.
It's based on government regulations about nationality and immigration
that are outside the markets. It's simply dishonest to pretend that
globalization (in the current common meaning of the word) is successful
because it is superior on economic grounds when the whole thing would
collapse tomorrow without external regulations that give one side
asymmetric power.
(I'm not arguing, as a practical matter, that infinitely mobile labor is
a good idea or even compatible with maintaining social cohesion. The
point is that capital and labor must have similar bargaining power with
respect to each other in a sustainable system. If labor can't be
infinitely mobile, neither can money.)
Globalization, for all the rhetoric about spreading wealth, has
manifested the usual consequences of concentrated power. As is often the
case with money, that feels good to some people to begin with. For
instance, price dumping can be part of establishing monopoly and can
seem like a good deal to consumers for a while. Once market power is
established, the good deal evaporates. But that's not the end of the
costs to the one-time winners. Nobody is immune to exploitation.
Expensive US labor loses to Mexican labor, which loses to cheaper
Philippine labor, which loses to Lesotho labor, which loses to
Guangdong, which loses to Western China. There are any number of
specific paths, but the ability of capital to move much faster than
workers leaves the same trail of job loss and social disruption
everywhere. That's an absurd price to pay for a few decades' worth of
cheap T-shirts.
The solution I see to this problem — and reality-based economists may
know of better and more elegant ones — is pricing in line with the cost
of production under equitable constraints. That seems as if it would
largely solve the whole issue. There is no point chasing cheap labor or
lax environmental regulation if doing so only triggers compensatory fees
and doesn't give any advantage against the competition.
Inclusion of real costs would cancel out more than inequities in the
"colonies." It would also make transport play its proper role. Now the
enormous explicit and hidden fossil fuel subsidies make it economically
feasible to transport pint size bottles of drinking water half way
around the globe. It seems quite mad, but our crazy quilt of subsidies
and inequities has made it a rational business decision. If realistic
pricing were applied, the unsustainability of such practices would be
reflected in their cost.
Obviously, no single nation could make others act fairly by itself in
this fragmented world of ours. As with all rules that promote fairness,
they can only work if everyone abides by them. On a planetary scale,
we've refused to understand the huge benefit of doing so, although on a
national level some people have. It's worth noting that the more
commitment a nation has to equitability, the richer they are, even when
resource poor. They are not losing by forging ahead with fairness,
counterintuitive as that might be. Nor is it a matter that countries are
rich first and therefore have the luxury of fairness. Saudi Arabia is
rich and continues to be grossly unfair. Germany after World War II
had next to nothing. It was not the Marshall Plan by itself that saved
them. It was how they used it. Sooner or later (later, I would guess,
based on current trajectories) the screamingly obvious benefits of
fairness will start to be understood across national boundaries as well
as within (some of) them.
One of the early arguments in favor of globalization was that it would
spread the wealth. By selling goods in more expensive markets, companies
could pay higher wages to laborers who were used to working cheap. That
would lift the standard of living everywhere. The only problem with this
rosy scenario is that no specific company feels under an obligation to
spread wealth, nor are there any rules to make them do so. So once the
wealth actually spreads a bit and the cheaper location becomes less
cheap, the company moves to the next place with better pickings.
There are people who argue that chasing the cheapest buck doesn't matter
because another company will be along in a minute. But continuous job
loss and job search dependent on large corporations over which there is
no local control is not a recipe for security, independence, or
sustainable development. This has been evident, for instance, in
Africa. (Also Phalatse, 2000, paywall. Other instances: Jordan, 2002,
Washington Post; Silver, 1993.)
The fact that exploitation coexists with wealth for a few does not prove
the success of globalization. It proves that those who start with
comparatively better advantages are better able to profit from the
situation in which they find themselves. Or, once again, "thems that
has, gets." That is not equitable and, as we're finding while the world
spirals toward disaster, not sustainable either.
Another social reason to limit trade is preservation of a way of life.
Sometimes, as in the Swiss support for their dairy farmers, the
motivation is to save another industry and to present tourists with the
views of alpine cows that they came to see. Sometimes, as in the
Japanese support for local small-scale rice farming or in Inuit whaling,
the motivation is to preserve an activity seen as culturally vital. Of
course, it's easy for arguments about a way of life to shade into
protectionism with no redeeming features, the kind protecting a local
business when there are better /mutually/ beneficial alternatives and
when the real obstacle is the cost of the transition for a few people.
However, just because bad protectionism exists, and just because it can
be hard to distinguish from the necessary kind when they overlap, that
doesn't mean it's a good idea to give up and avoid making the
distinction. The alternative is to lose whole chunks of human diversity
in exchange for not much. The cheap stuff coming in from overseas
doesn't stay cheap forever, but lost ways of life never come back.
Many examples of the benefits of local production come from food, which
is easily the clearest case of diseconomies of scale. Food which has to
be shipped long distances loses taste (if it had any to begin with after
being bred to ship well), is stored for much longer, loses nutritional
value, and often is subjected to processing to cover up for those
defects. It's another of the many instances where cheap is not a bargain
if the product has no value. And that's the best case scenario.
The worst case scenario develops the situation much further. Huge
agribusinesses process food into a near-addictive non-nutritive
substance that creates obesity and disease. At least in the US, the
production end is hugely subsidized through various farm bills. With the
2010 health insurance reform bill, as Pollan
says, "the government is putting itself in the uncomfortable position of
subsidizing both the costs of treating Type 2 diabetes and the
consumption of high-fructose corn syrup." That way lies madness. And
bankruptcy.
In the same NY Times article, Pollan notes that experts at MIT and
Columbia designed a food program to counteract obesity and were
surprised to discover "that promoting the concept of a 'foodshed' — a
diversified, regional food economy — could be the key to improving the
American diet." Had they done their background reading, they would have
known that they'd re-discovered the wheel. The value of locally produced
fresh food has been discussed for decades. Perhaps the best known in the
US is Frances Moore Lappé's book, /Diet for a Small Planet/, published
in 1971,
but the tradition goes back to the biodynamic agriculture movement of
the early 1900s. At least in agriculture, even considered only from the
narrow standpoint of the final product, leaving aside the social
consequences, it's not hard to see that small scale works better than large.
The smallest, most distributed ways of doing business generally provide
the most benefit compared to larger ones. They spread the wealth without
the application of brute force, which is always desirable in an
equitable society, and they prevent many of the problems of undue market
power from ever developing. Each of those is an enormous social good.
The only thing small scale enterprises can't do is provide the lowest
possible price if bigger businesses are given a free ride for the social
costs they generate.
Efficiency is often given as the counter-argument to the desirability of
small-scale enterprises. As far as I can see, the argument only works
when no downstream or social costs are included, and when efficiency is
held to be the highest good. That's a laughable assumption to everyone
except the shareholders making money on it.
All that said, though, I want to stress that although efficiency isn't
the only factor, it /is/ a factor. In some cases, size really does bring
enough benefit to justify making exceptions. Ore smelters, for instance,
would not work best as Mom and Pop operations. Other examples of
relatively large optimum size are heavy industries, industries that
require very expensive equipment, and industries with pollution problems
that are better mitigated as point sources. The optimum, as always,
balances many factors. The goal is the smallest size consistent with
optimum efficiency that takes into account all social factors. Maximum
efficiency at any cost is only another name for getting someone else to
pay the price.
Corporations as Bodies
Confusion of concepts and reality can lead to bizarre actions like
jumping from a third floor window on the idea that one can fly. When
many people share the confusion, whole societies can fall. A real world
example is the strange conflation of corporations (a word based on the
Latin /corpus/, or body) and real biological bodies.
That point has been brought home recently by the US Supreme Court's
weird decision
in the 2010 Citizens United case. Corporations, said some of the men
of the Court, have the same rights as real people even though they don't
share any of the characteristics that rights are supposed to protect.
Corporations can't be stabbed, they don't bleed, they don't starve, they
can't be jailed. They have none of the responsibilities and
vulnerabilities that go with rights.
Except for a few /rarae aves/ in the legal system, that's been obvious
to everyone, including me. The following repeats a post I wrote on the
topic
several years ago.
The history of the delusion shows that, as usual, people had some help
before they started believing in it. What helped was money. Wikipedia
provides a potted
summary. In the mid-1800s,
“Corporate law at the time was focused on protection of the public
interest, and not on the interests of corporate shareholders.
Corporate charters were closely regulated by the states. Forming a
corporation usually required an act of legislature. Investors
generally had to be given an equal say in corporate governance, and
corporations were required to comply with the purposes expressed in
their charters. … Eventually, state governments began to realize the
greater corporate registration revenues available by providing more
permissive corporate laws.”
So the corporation as we know it was born. They were group entities —
body business rather than body politic, so to speak — but it didn’t take
long for bright wits to realize that real bodies had more rights than
business bodies did. They wanted some of that. So they spread the notion
that this was unfair. All people are created equal, aren’t they?
Astonishingly enough, they found judges who fell for it.
In 1922, the Supreme Court ruled that the Pennsylvania Coal Co. was
entitled to “just compensation” under the Fifth Amendment because a
state law, designed to keep houses from collapsing as mining
companies tunneled under them, limited how much coal it could
extract. … [In the mid-1990s a] federal appellate court struck down
a Vermont law requiring that milk from cows treated with bovine
growth hormone be so labeled. Dairy producers had a First Amendment
right “not to speak,” the court said.
However, these odd “rights” don’t extend to the responsibilities that
real people have. Sludge dumpers argue that their rights to due process
and equal protection under the law are violated when towns prevent them
from dumping, but when real bodies are harmed by the toxic waste somehow
that’s nobody’s fault and somebody else’s problem. This is not the way
it works for real people with real rights. To begin with, I can’t dump
toxic waste in your garden. To go on with, if I did, I’d be liable for
the ensuing damages.
If corporations are persons, then why aren’t they persons all the way?
They get the rights, but they don’t get the consequences of their
wrongs. But then again, how could they? Only people can pay fines, even
when they’re shareholder people. Only people can go to jail. And the
individuals in question can always argue that the crime wasn’t theirs
because it was committed by a much larger group of people. So the
individual’s right to equal protection under the law means that nobody
is punished and the crimes continue.
Giving special privileges to wealth and escaping accountability are both
the antithesis of justice. There can be no corporate personhood in a
fair or sustainable system. Corporations are business entities, not
people. Trying to fool people into believing otherwise is just a way to
give a few rich people an excuse to trample everyone else. Nice for
them; bad for everyone else; and the reason why it’s such a durable
piece of flimflam. Anything that helps the rich and powerful has huge
staying power.
However, just because corporations aren't people doesn't mean they can't
be legal entities who exist under given rules. And, since the people who
run corporations really are people, they have the responsibility of
making sure their corporations follow those rules. Responsibility must
be matched to control, so it cannot rest on shareholders. It’s another
ridiculous legal fiction that they control anything. The people actually
making the decisions must be the ones facing personal consequences if
they break the law, the same way any other criminal does. If that
principle were actually applied, corporate executives would either
straighten out or (and I'm being only slightly facetious) we’d have to
fine them enough to pay for all the new jails we’d be building.
Advertising
I need to spend some time on a tangent about advertising. The issues
surrounding it are largely ignored at this point, but they have huge
effects that make it the business of government. The right to freedom
from intrusion by others has been discussed, and there is also the
information function
which relates to education. In this chapter, the relevant problem is
that free markets are based on the premise of rational choices, but
advertising works largely by trying to short circuit rationality.
In effect, advertising is a way of not playing by the rules and of not
appealing to reason. The fact that people say they don't care doesn't
change the problematic nature. The difficulty with any form of cheating,
the reason /why/ it's cheating and not just another way to play the
game, is that it can't be applied by everyone equally without implosion.
It depends on there being a few manipulators, and many who are manipulated.
There is also the problem that people don't consent to manipulation.
When there was first talk of using subliminal advertising — such as
messages to drink soda flashed on a movie screen too fast to see
consciously — people were incensed at the thought of having their
strings pulled. Such advertising was quickly abandoned partly because of
the backlash and no doubt also because research showed it wasn't
terribly effective. But when it comes to ordinary advertising, the kind
that's not trying to hide, people are convinced that it has no effect on
them. It's their choice whether to pay attention to it or not, and if
advertisers want to throw billions of dollars at the hope that they
might make an impression, let them. It doesn't matter.
That is manifest nonsense. Armies of professionals don't throw billions
at the same thing for decades if it doesn't provide a good return on
investment. There's a body of research accumulating that shows why.
Advertising, it turns out, is effective /because/ people tune it out.
From a 1999 paper on "When an Ad's Influence Is Beyond Our Conscious
Control" :
Further, all four studies provide strong evidence that the response
bias caused by incidental ad exposure is due to unconscious
influences—advertised products were more likely to be included in a
consideration set even when subjects were explicitly trying to avoid
choosing products that were depicted in the ads.
Note that. /"More likely [to be considered] ... even when subjects were
explicitly trying to avoid choosing products that were depicted in the
ads."/
That is not an isolated finding. Parents are unaware of how much their
children steer their buying choices. (And you can bet your last dollar
that the children aren't spending time paying attention to the effect of
ads on them.) People who were deciding between buying French or German
wine preferred one type over the other by three to one depending on
which nation's music was piped over the sound system. On being
questioned later, only one in forty had even noticed the music, to say
nothing of noticing its effect on them. A 2008 study of web advertising found
that "upon exposure to Web ads, consumers experience priming caused by
implicit memory and build a more favorable attitude toward the
advertised brand regardless of the levels of attention they paid to the
advertisements."
Advertising for financial services companies boosts consumer confidence
in them even though all the ads have done is make people feel the name is
familiar. That's enough for a sufficient number of people to place money
with the company, which more than pays for the advertising. Think about
that. People are willing to hand over /money/ based on an artificially
induced sense of familiarity. Yet if asked, I'd bet all those people say
they tune the ads out. (The comments to the article, for instance,
express contempt.) They certainly wouldn't admit to basing investment
decisions on them. But in the aggregate, regardless of what any given
individual thinks, that's exactly what they're doing.
So far, from the standpoint of government, the examples indicate the
need to rethink what constitutes permissible advertising. It gets worse,
though. Advertising is as successful at steering political choices
as it is at steering consumption. An interested party, Americans for
Campaign Reform, has actually done the hard labor of plowing through
years' worth of data on spending for and winning US Representatives'
seats between 1992 and 2006. What they don't make explicit in "Does
Money Buy Elections?"
is that most of the money is spent on advertising, aka "wholesale mass
media communication," and that "name recognition" means only the same
fuzzy feeling of familiarity noted above. It does not mean awareness of
a candidate's past actions or their implications for the future.
For the typical non-incumbent candidate, pursuing a combination of
retail grassroots campaigning and wholesale mass media communication
is the only viable means of obtaining the level of name recognition
that is required for voters to take note. ... But few non-incumbent
candidates ever reach the competitive threshold [of spending].
[Pegged at $700,000]. Incumbents, by contrast, enjoy a range of
institutional advantages ... [and] require relatively less campaign
spending than non-incumbents to mount a credible campaign, even as
their demonstrated ability to raise funds far exceeds that of the
average challenger.
Likewise and even more starkly, they look at all New York State races in
2000.
Interestingly, ACR comes to the conclusion that campaigns should receive
funding so that everyone can reach the competitive threshold. The
obvious implication that advertising influences voting is too
distasteful to be addressed.
However, Da Silveira and De Mello (2008) look at the relationship
directly. They use Brazilian data which allows
easier comparison of amounts and effects due to their campaign funding
laws. The correlation is so consistent they were able to put numbers on
it: "We find that a one percentage point increase in TV time causes a
0.247 percentage point increase in votes."
Da Silveira and De Mello also destroy the comforting thought that
correlation does not demonstrate causation in this case. It is the only
variable that changes between general and runoff elections that involve
the same candidates and the same voters a few weeks later.
Contrary to that finding, Meg Whitman's high-profile attempt to buy the
California governorship in 2010 through advertising certainly failed.
Despite spending over one hundred sixty million
dollars of her own fortune, she lost against Jerry Brown's thirty
million or so. As a California voter myself, I feel that's down to our
general intelligence. More seriously, the real message is that the
occasional contrasting data point doesn't change the overwhelming
direction of the evidence.
The first step to dealing with the problem of advertising is
acknowledging that there is one, no matter how much that hurts the need
to feel we're in control. The fact is that plain old brainless
repetition, especially if there's also a resonant feeling, will generate
enough votes to swing enough districts to make democracy meaningless.
Advertising simply cannot be any part of the political process. Voter
information: yes. Debates (real ones): yes. Advertising: no. This is
true in a system with elections, but it's no less true for a system with
unelections. In politics, negative advertising works better than
positive, so ads could have an even more pernicious effect where the
voters' function is essentially negative. A system of government
premised on rational choice has to run on appeals to reason, not on fake
familiarity or manufactured adrenalin spikes.
But even outside of government, advertising works, which means it's a
problem. It's assumed to do nothing but draw attention to choices people
would have made anyway, maybe more slowly. It is assumed, in effect,
that it does /not/ work. But in a market system based on rational
choice, rationality is the very thing ads undercut. That makes advertising a form of
market manipulation. It is not playing by the rules. And that makes even
non-political advertising very much the business of government.
That said, and although market manipulation is categorically off-limits,
it's hard to say what exactly is the appropriate place for advertising
in commerce. There's nothing wrong with drawing attention to products.
But there is something wrong with stepping over that line and pulling
people's strings. Finding that line requires much more research than we
currently have on advertising and degrees of subrational manipulation.
Although there are mountains of research on advertising, most of it
studies effectiveness from the standpoint of advertisers rather than
fairness. Standards for what it means to be non-manipulative would have
to be identified. They'd have to include all the various quasi-ads such
as sponsorship, product placement, trial products, as well as ordinary
paid ads.
Even without research, some of the more egregious practices are rather
obviously unacceptable. One example is a business model pushing "free"
products in return for ad-induced mind share. That model simply wouldn't
pay off without manipulating consumers. If it did, there would be no
need for companies like Google to expend the energy they do tracking
every single user click forever, or, worse yet, invading privacy to have
an even larger pool of data for targeted ads. Information is the same
regardless who's looking at it, but suggestion only works on the
suggestible. A model that depends on manipulation is not legitimate in a
fair society.
Prices and Incomes
At the extremes of low and high, the amount paid for both goods and work
can reflect market power rather than worth. I'll discuss minimum and
living wages in the last section. The government also has a role in
counteracting pricing or incomes that are based on taking whatever
someone will pay for it. Contrary to popular belief, that is not a
simple matter of supply and demand. Except in the case of some
entertainment celebrities (which includes sports), stratospheric
compensation almost always depends on the ability to set — or at least
heavily influence — one's own pay. Disproportionate prices of consumer
goods almost always depend on excessive market share or some
monopoly-related practice. Both of these are a departure from the level
playing field markets are supposed to depend on, and both therefore need
government regulation.
Paying ridiculously high compensation can't be justified on either
philosophical or practical grounds, even though it's true that some
people are worth vastly more than others to the rest of humanity. People
like Mother Teresa, Bishop Tutu, the Dalai Lama, Shirin Ebadi, Einstein,
Rosalind Franklin, P. G. Wodehouse, Mother Jones, Wisława Szymborska,
in different ways and for different reasons really do have more to offer
than the rest of us. But that has almost nothing to do with how much
money they make. /In terms of the sorts of things that can be measured
and paid for/, nobody is worth hundreds of times more than anyone else.
The skills of an outstanding manager are worth more to a company than
those of a mail room clerk, but only somewhat more.
Severe financial inequality is accompanied by corrosive social consequences.
Large income inequality is associated with decreased social cohesion,
increased corruption, and poorly run governments. The evidence is
everywhere, whether the comparison is between countries or within them.
(Just one source: /The Conscience of a Liberal/, Krugman, 2007, with
reference to a library's worth of others.)
At this point it's customary to get into an argument about whether
income inequality causes bad things or bad things cause inequality. That
shows a fundamental misunderstanding of the processes at work. Societies
are organic structures, not linear and mechanical. Everything in them
works together and simultaneously. Whether corruption causes inequality
or vice versa is immaterial. Once started (and they can start by any
route), they'll reinforce each other, and the cure can come from any
side. Ideally, it comes from all sides at once. Arguing that the process
should consider equalization of incomes last, if at all, is almost
always symptomatic of wanting wealth more than justice.
Economically, vast compensation is associated with /reduced/
performance. Interestingly, those results are evident
even when the comparison includes only other executives, i.e. "industry
standard" peer groups, which ignores the already inflated industry
averages in the US. A more useful comparison is to workers' wages. A
2002 NPR report showed
that executive salaries 11 to 20 times larger than workers' wages are
considered ample in most of the developed world. Then there's the US,
which approaches 500 times. That multiple did nothing but grow after
2002 until these worth-any-price executives had to be bailed out by the
government.
Incomes need to be capped for the greater good just as company size
needs to be limited for the same reason. Excessive incomes are a sign of
structural imbalance. What, exactly, is "excessive" can be approximated
based on research and practice. If upper echelon total compensation is
tied to worker pay then the incentives are in place for sustained
equitability.
The incentive structure is vitally important and must be continually and
critically reevaluated. For instance, an obscure change to the US tax
code in 1993 limited to $1 million the
deductibility of executive pay as a cost to the corporation, unless the
compensation was performance-related. A sensible rule, since otherwise
all profits could simply be "disappeared" into executive pay. And yet,
it resulted in an avalanche of payment methods defined as
performance-related non-salary when neither description was true. These forms of
compensation had a good deal to do with some of the financial exotica
that led to the crash. In a system with enough transparency to make such
practices publicly visible and actionable, and with alert regulators who
could be recalled if they're asleep on the job, it's to be hoped that
issues like this would be handled within a year or two of appearing. If
those factors aren't enough, further measures need to be put into place
to /prevent/ problems from growing.
The final line of defense against excessive income inequality is a
progressive tax system. At the upper reaches, taxes at 95% will prevent
the most acquisitive from making much headway. Yes, that will reduce the
desire to "work hard" to make more money. That's the whole point. That
desire at that level has corrosive consequences for other people and, it
could be argued, also for the individuals themselves.
However, don't get me wrong. I'm a great believer in the value of having
plenty of people of independent means in a society. Most of them may be
Bertie Woosters, but a few of them are Charles Darwins, Lady Ada
Lovelaces, or George Hales. Their contributions
are immense and likely wouldn't have happened if they'd needed somebody
else's approval for funds. So I emphatically do not think that taxation
should make it impossible to live on unearned income. I do think that it
should be high enough to prevent unearned income from growing without
limits or from exceeding the top acceptable compensation for high
earners such as executives. Having independent means is fine; being a
billionaire is socially corrosive. The idea throughout is that some
income inequality is okay, but only within limits, and that those limits
should be enforced by making it structurally difficult to exceed them,
by taxation and/or by whatever works most effectively with the least
friction.
I see one exception to the rule that incomes should not grow without
limits. People whose contributions to others are incalculable should
benefit by their deeds. In other words, inventors, artists,
entertainers, and similar people whom the public pays as it enjoys the
result of their work aren't in the same class as the rest of us.
Progressive taxation should still apply, but the top rate would not be
used to promote an equitable ceiling. It would be based only on carrying
a fair share of government costs, something that would be in the
neighborhood of 50-60% rather than 95%. (If the cost of government and
its programs is about 33% overall, and if one agrees that poorer people
pay a proportionately smaller share, then it follows that the top rate
for businesses or people would be around 60%, if the cost of government
was the only factor.) Thus, the one way to be ridiculously rich is to
create something that many people want or need. Possibly, that means
there would be lots of would-be artists and inventors in the fair world
I'm imagining. Some of them might even produce something worthwhile.
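Since that parenthetical packs a lot of arithmetic into one breath, here
is a minimal sketch of it in Python. The income spread and the linear
rate schedule are illustrative assumptions of mine, not data; the point
is only that a top rate near 60% can produce the one-third overall take.

    # Toy check of the parenthetical above. Assumes a hypothetical
    # spread of incomes and a tax rate that rises linearly from zero
    # at the bottom to the top rate at the highest income, so poorer
    # people pay a proportionately smaller share.
    incomes = [12_000, 15_000, 18_000, 22_000, 28_000, 35_000,
               45_000, 60_000, 90_000, 150_000]   # illustrative only
    top_rate = 0.60
    top_income = max(incomes)

    def rate(income):
        return top_rate * income / top_income

    collected = sum(rate(i) * i for i in incomes)
    print(f"{collected / sum(incomes):.0%}")      # prints 33%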
Estate taxes are a significant way of spreading the wealth in some
countries, but they are hard to justify. Taxes have already been paid on
the accumulated wealth, so further taxes are double taxation of the same
person, even if they are now dead. Instead, taxes should be calculated
from the standpoint of recipients, that is, based on what their new
financial standing is. A very large sum left to one person would be
subject to a high tax because the person would suddenly have a huge
income. (Whether earned or unearned, the amount is what matters.)
However, a large sum divided among many people would be taxed based on
what they owe, which would vary based on their wealth. Again, trying to
deal with this issue fairly leads to incentives that would tend to
spread the wealth, which is a good thing.
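A short sketch may make the recipient-based approach concrete. The
brackets and rates below are placeholders invented for illustration;
what matters is that the tax follows each heir's windfall, so dividing
an estate among many recipients lowers the total tax paid, which is
exactly the incentive to spread the wealth.

    # Hypothetical progressive schedule applied to each recipient's
    # inheritance, treated as income to that person.
    def recipient_tax(bequest):
        brackets = [(50_000, 0.10), (200_000, 0.30), (float("inf"), 0.60)]
        tax, lower = 0.0, 0.0
        for upper, tax_rate in brackets:
            portion = min(bequest, upper) - lower
            if portion <= 0:
                break
            tax += portion * tax_rate
            lower = upper
        return tax

    print(recipient_tax(3_000_000))      # one heir: 1,730,000 in tax
    print(10 * recipient_tax(300_000))   # ten heirs: 1,100,000 in total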
The pricing of goods, as opposed to people's work, is another area where the
amount paid can be far over or under what's dictated purely by market
forces.
Subsidies are one distorting influence. They get built into the price of
the product and further skew the price of competing products until all
purchase decisions have to be irrational to make money. Decisions based
on factors other than reality are not sustainable. Take energy, for
example. Nuclear energy has many subsidies, explicit and implicit.
Without just one of them, the government guarantee and limitation of
their insurance liabilities, the companies making the decisions would be
exposed to the inherent risks. Decision-making about whether to use
nuclear power would suddenly become much more reality based. Fossil
fuels get a huge implicit subsidy from the military expense of acquiring
and defending access to the raw materials. Housing subsidies in the US
benefit owners but not renters. The mortgage interest deduction is so
thoroughly baked into house prices at this point that cancelling it
would lead to (another) implosion in house prices. Meanwhile, the
inflated price makes it that much more difficult for renters should they
want to become buyers. A fair government can certainly help people, but
it can't help some people at the expense of others. Nor should it or
anyone else distort the market away from level.
Pricing can be drastically irrational in some very limited areas that
don't concern most people. Artwork, custom made goods, ultra-luxury
goods, all sorts of things that most people never buy, are outside the
scope of this discussion. Paying thousands of dollars for
diamond-studded phones and the like may be foolish, but the things are
more akin to jewelry than to consumer goods in the ordinary sense. Their
prices affect only a few volunteers, so to speak, and don't have much to
do with fairness.
I see information as a major tool toward achieving fair pricing.
(Together with the fact that advertising has to have a circumscribed
role as discussed in that section, and could not be used to gin up
irrational demand.) I've mentioned that the cost of production has to be
listed with price at all times. I've also mentioned that finding
comparative information should be as simple as the process of buying a
product. Armed with both information and the rules promoting
competition, unjustifiably high prices should not be a problem.
Unjustifiably low prices, however, might be. If too many people enter
what seems to be a lucrative opportunity at once and competition becomes
so strong that everyone is operating on paper-thin margins, then nobody
can make a reasonable living. To prevent that, just as the practice of
"dumping" by underpricing goods is illegal now, likewise selling goods
or services for less than will yield a living in the long term should be
illegal also. That's necessarily another approximated number, but as
with all of them, the best estimate is better than no estimate. If
sellers can show a regulator that their prices are lower than that
because they've found better ways of operating, that is they can make a
living at the lower prices, then the estimate gets lowered instead of
the price being raised.
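Spelled out as a decision rule, the logic would look something like this
sketch; the function name and the numbers are hypothetical, and only the
shape of the rule matters.

    # A below-floor price triggers review. If the seller demonstrates a
    # genuinely sustainable cost basis, the floor estimate is lowered;
    # otherwise the price is raised to the floor.
    def review_price(price, floor_estimate, demonstrated_viable):
        if price >= floor_estimate:
            return floor_estimate, price        # nothing to do
        if demonstrated_viable:
            return price, price                 # lower the estimate
        return floor_estimate, floor_estimate   # raise the price

    print(review_price(8.00, 10.00, True))    # (8.0, 8.0): estimate drops
    print(review_price(8.00, 10.00, False))   # (10.0, 10.0): price comes up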
Pricing of basics
Necessities are in a class by themselves, more or less the converse of
ultra-luxury goods. Everybody depends on them and access to them is very
much a matter of fairness. Some of them, like light and air, we don't
think of in terms of money (yet). They're part of the commons, but as an
unexamined assumption rather than as policy. If someone figures out how
to monetize them, there's nothing to stop them, no matter how wrong it
feels. And it is wrong. Demanding money in exchange for not choking off
life can never be right.
First, what are the necessities? Light, air, and water are the easy
ones. In the natural state, they're usable without any processing, so
they must be free. Nobody can rightfully ruin them or limit them and
then charge others for access to the usable form. In some far future
world, nobody could put up solar screens in space and then charge
regions for sunlight. In a not-so-future world, breathing stations
necessitated by bad air pollution would have to be provided free, paid
for by the people ruining the air.
It can never be acceptable to allow a resource to be ruined, and then to
charge for not ruining it, whether or not the underlying thing is
essential to life. For example, airlines make seat sizes too
small to allow normal movement, and then charge for enough space to
accommodate the passenger's knees. Or the similarly odious, but as-yet
unrealized, concept of allowing cellphones and then charging people to
avoid the noise pollution of those creating it. (Even worse: charging
for the privilege to use cellphones /and/ charging others to sit as far
away as possible.) That sort of thing is nothing but a variant on the
gangster practice of demanding money not to break someone's bones. It's
criminal for them, and it's criminal for everyone else.
Water is a vital resource that holds a position halfway between the free
essentials and the ones we're used to paying for because it generally
needs to be delivered to the point of use. A delivery price can be
charged /at cost/, but a substance which is a daily necessity for
everyone and which cannot be stored in significant quantities by most
people cannot become a source of profit. Nobody's life can be bought and
sold, either directly or indirectly by acquiring a chokehold on an
essential. Chokeholds cannot be applied without double standards.
Nor is there any way to have a sustainable competitive free market in
absolute necessities like air and water. Controlling that kind of
necessity confers a completely unbalanced amount of power, and sooner or
later somebody will take advantage of it. The only solution is to keep
these things out of markets altogether. They cannot be bought and sold.
They can be municipal utilities if they need to be delivered, such as
air on a space station, but they cannot be traded.
Which brings the discussion to food and housing. At the basic level,
they're absolute necessities. But they also have to be produced by
someone as well as distributed. And it's not self-evident where to draw
the line for less-than-basic levels of these things. The situation is
unavoidably self-contradictory: food and housing have to cost something
in money or labor, and yet it is the deepest kind of unfairness to let
anyone die for lack of them.
Where and how to draw the lines with respect to both food and housing is
something that requires input from experts and social decisions about
how far above bare survival the basic acceptable minimum lies. The
important point here is that the line exists, which means the prices of
some types of housing and food are not just a concern of markets. They
have more to do with fairness than markets at the very basic level of
the right to live.
The fairness component of access to basic necessities has several
implications for government action. The price ceiling for basic food and
housing would be subject to special scrutiny by regulators that other
goods wouldn't have. That ceiling would have to be identified and, if
need be, enforced. Profiteering would be a crime at all times, not just
during war or disaster.
It also means the government, that is, the regulators in it, has the
responsibility of ensuring an economy where nobody suffers hunger or
homelessness (barring natural disasters). Regulators who egregiously
allowed structural imbalances to develop that condemned some people to
the edge of survival would be fired. The clause stipulating that the
guilty party's assets are first in line for any recovery of damages
should be even more motivating toward acting for the greatest good of
the greatest number. Any citizen could bring a suit to that end, since
it affects a basic right. They would only have to show that the
regulator acted contrary to what the knowledgeable consensus of the time
held to be the beneficial course of action.
The biggest changes would be in real estate and housing. Those are now
entirely market commodities, but that framework makes sense only at the
upper end of the market. Land, for instance, is a resource more like air
and sunlight than food or housing. Human labor doesn't produce it.
Except in terms of improving top soil, we don't create it. It differs
only in its scarcity, but that, unfortunately, is enough.
There is a finite amount of land to go around, so there has to be a way
to distribute it. For thousands of years that's been done by buying,
selling, and holding title. There's nothing wrong with that /so long as
the first consideration is fair access/ rather than market forces, as is
appropriate for an essential and pre-existing resource. At the
smallholder end of the scale, the distribution model needs to be more
like a public water utility than a stock market. The underlying
substance doesn't have a price. The charges don't include profits. The
cost is the living wages for the people administering the equitable
distribution of the resource.
The basis of a right to land is not, emphatically not, that everyone
gets their slice of the Earth's acreage. It's that land is primarily an
unownable resource, and that people have a right to enough of it to
live. They can't be charged for something that nobody had a right to put
fences on to begin with. In case my meaning isn't clear, imagine an
analogous case with air. If someone sucked all the air off the planet,
and then charged each of us to feed it back, we'd feel they had no right
to do that. And they don't. It's the same with land. Using it to live
doesn't give anyone the right to more than they need, or to sell their
"share" to somebody who needs more, or to ruin it. It means only that in
the same way as we take air for granted and expect to have enough to
breathe, we should be able to take our right to land for granted and
expect to have enough to live.
The clause about having no right to ruin the land is important. Also,
since the right derives from the right to live, it has to actually be
used for that purpose in order to have a right to it. So, one has a
right to the plot of land around one's primary house, or to an allotment
(community garden in the US), or to a subsistence farm. One doesn't have
a right to a farm if one isn't a farmer. One doesn't have a right to an
allotment if one doesn't use it. And people who do use land, whether as
suburbanites or growers, must keep it in at least the same condition as
when they started. Those are basically the rules that govern allotments,
at least in Great Britain, and something similar to that should be
generalized.
The point about using land well needs to be stressed. Zimbabwe has
recently taught everyone what happens when it isn't. They redistributed
land from large farmers to landless citizens. In principle, it needed to
be done (although they didn't ease the transition for the large farmers
nearly enough, but that's another topic). Then it turned out that many
of the newly minted farmers didn't want to be farmers. They just wanted
ownership and/or money. Some did want to farm, but no longer knew how,
or knew but had no tools. Available help, such as tractors or
fertilizer, sat in warehouses. Meanwhile, ministers snapped up land at bargain
basement prices from the new owners who wanted money, not farms. The
result was that in the span of a few years a very fertile country didn't
produce enough food to feed itself. Obviously, that is not the idea
behind having a right to land. The point is to have something like the
allotment system, writ large.
If smallholder land can only be transferred at the price of the
administrative costs, then the biggest threat to the system is likely to
be cheating. People will start demanding undercover payments. The
penalties for cheating have to be draconian enough to overmatch the
desire for shortcuts. Markets in organs for transplant, for instance,
are suppressed with serious and enforced penalties. The same would have
to be true of smallholder land. An example of a possible preventive
measure might be to allow surreptitious recording of land transfer
negotiations. The negotiators wouldn't know whether their words were
being preserved for posterity or not. If there was even an allusion to
any kind of under the table compensation, then the bribed party would be
awarded triple that amount out of the briber's assets.
The fact that smallholder land is handled like a public utility instead
of a commodity would have the most far-reaching consequences in
countries where subsistence or small-scale agriculture was important. A
right to enough land to live on would help to make life more secure
there. In areas where most people are not farmers, the right to land has
less direct application. It would mean the land under someone's primary
house couldn't be priced higher based on its location. The primary
determinant of the price of housing becomes the building, which is not
unlike the current situation in many parts of developed countries.
Unlike land, labor is by far the largest factor in producing housing. On
the one hand that makes it tradeable, like other measurable goods
produced by people. On the other hand shelter is an absolute necessity,
like food, so even though it's tradeable, it has to have strict ceilings
at the low end. At the level of basic needs it's a right, not a market
commodity. At the opposite end of the scale, it can fall into the same
class as diamond-studded watches. Where to draw the line for basic
housing, "normal" housing, and luxury is, as I mentioned earlier,
something that needs expert study and social decisions appropriate to
local levels of technology and custom.
In one respect, housing is unlike food. It regularly goes through boom
and bust cycles, whereas food tends to do that only during famine or
war. We're so used to seeing speculation on real estate, it feels
normal. But it shouldn't. Speculation in a life necessity is
profiteering, and it's not right at any time. Thus, even though luxury
housing, like museum pieces, would fall outside the usual scrutiny of
price, all other classes would not. Housing is a right for all people.
Even though society is not under an obligation to provide
more-than-adequate shelter, it is under an obligation to prevent
structural changes which could threaten basic rights. Speculation in
housing sets up precisely that kind of instability and cannot be allowed.
Preventing speculation is not difficult. For instance, tax treatment of
gains or losses on second houses, or on those sold within a few years of
purchase, is enough to tamp down much of the excitement by itself.
Commercial real estate is, by definition, not a luxury since it's
involved in people's livelihoods, so it would be bound by the same rules.
Markets cannot be laws unto themselves in an equitable society. They
must participate in the same framework of rights that delimit acceptable
behavior in all fields. That implies perhaps the largest departure from
current practice when it comes to the pricing and distribution of goods
necessary for basic needs.
Labor
Pay and hours worked are vital to everyone, even when the rest of
economics or finance seems remote and theoretical. It's another area
where regulation (or the lack of it) and market forces interact. Pay
depends on supply and demand, but it also depends on regulation. If
there are minimum wage laws, businesses pay at least the minimum wage
(unless they can skirt the law). If there are no labor laws, people are
quite capable of making others work constantly from the earliest age. If
there are laws, businesses find themselves able to make a profit with
less exploitation. They say they can't — all the way back to Dickens'
Bounderby they've been saying they can't — and then after the new
regulation is passed, they make more money than ever. All that twaddle
needs to be ignored. A more useful procedure is to figure out what's
right, then what's possible, and to see where they overlap.
What's right isn't difficult. It's in Roosevelt's Second Bill of Rights,
and it's in the UN's Universal Declaration of Human Rights.
Everybody has the right to freedom from want, and sufficient means to
develop and fulfill their own potential to the extent they desire. The
first takes a basic level of money, the second also takes time.
Calculations of living wages are easy to find for many cities, mainly
because cities have realized that the costs fall on their budget when
employers can pay workers too little to live. That means cities have to
estimate living wages so they can pass ordinances requiring them. Thus,
for example, the current federal US minimum wage is $15,080 per year
($7.25 per hour), but a living wage established by Los Angeles for
airport workers was approximately $23,400 in 2009 ($11.25 per hour),
raised to $29,640 ($14.25) in 2010.
However, if you add up the expected expenses for a person in the area,
taken, for instance, from the amounts the IRS will allow a delinquent
taxpayer and the fair market rent tables available from HUD, even
$29,000 a year is low. The costs are
approximately $14,000 for rent for a one-bedroom or studio apartment;
$2400 for utilities; $6300 for food, personal care, and clothing;
transportation is $6000 (which assumes paying off or saving for a new
car on an ongoing basis since there is no public transportation worthy
of the name here); medical, dental, and vision expenses are $720 a year.
That's $29,420 so far, for one person. That person has to have this
amount after taxes, which means making around $35,000 before taxes.
That sum doesn't include any expenses for a dependent child, such as
food, medical costs, school fees, clothing, nor does it include phone
costs or internet access, if any. Allowances for a child, for instance
according to USDA calculations for the LA area, are near
$12,000 per year for one child. To have $42,000 after taxes requires a
wage of around $50,000 before taxes.
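For anyone who wants to check the addition, here is the same budget as a
few lines of Python. The figures are the ones quoted above; the gross-up
for taxes is my own rough approximation.

    # Annual costs for one adult in urban southern California,
    # per the IRS allowances and HUD fair market rents cited above.
    rent      = 14_000   # one-bedroom or studio apartment
    utilities =  2_400
    food_etc  =  6_300   # food, personal care, and clothing
    transport =  6_000
    medical   =    720
    adult = rent + utilities + food_etc + transport + medical
    print(adult)                 # 29,420 after taxes, one person

    child = 12_000               # USDA allowance for one child, LA area
    print(adult + child)         # 41,420: the rough $42,000 used above
    # grossing up for taxes gives roughly $35,000 and $50,000 before
    # taxes; $50,000 over a 40-hour year is 50_000 / 2080, about $24/hr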
The Universal Living Wage
organization uses slightly less round numbers than I do and estimates
the local living wage at $46,488. The example is from the area where I
live, and I can vouch for the fact that it would require careful
attention to the pennies not to go over the amounts listed. You'll
notice that the estimates don't include money for toys, books, movies,
or the occasional holiday. $50,000 ($24/hr) is what it takes for one
parent and one child to live without extravagance and without money
worries in urban southern California. If you're shocked at how far away
we are from a living wage, you should be.
In the low-cost rural parts of the country, where a minority lives, a
living wage can be as low as $8 per hour. Remember that the federal
minimum wage is supposed to be enough to support a family of four, not
one. Even in rural areas, except for singles, it doesn't approach a
living wage. The US national average living wage is said to be around
$12/hr for one person working 40 hours per week.
For those wondering about the effect on business of paying a lot more
than minimum wage, actual studies don't find damage, let alone threats
to survival. For instance, a study at San Francisco's airport, where
wages jumped to $10/hr, an increase of 30% or
more, found that turnover was much reduced and improved worker morale
led to measurably greater work effort. The result was that the cost to
employers of the much higher wages was 0.7% of revenue.
The next question is how much time it should take to make a living.
What's customary is not necessarily the best yardstick. In 1850, it was
normal to work about 11 hours per day, 6 days per week.
In the US, improvement stopped in the 1950s and the work week has been
40 hours per week ever since. In France, it is currently 35 hours per
week. Hunter-gatherer tribes have been estimated to spend about 24 hours
per week keeping body and soul together. There is much dispute about that, since
it's hard to define "work" in a comparable way for hunter-gatherers.
However, the quibbles are on the order of a few hours here or there.
There seems to be broad agreement that they work much less than more
developed societies.
Since longer work weeks are already well known, the 24 hour work week is
interesting to think about. Speaking somewhat facetiously (but only
somewhat!) we should be able to do better than hunter-gatherers or else
how can we claim progress?
Twenty-four hours, specifically, not 25 or 26, has advantages because
the number is divisible so many ways. That could allow for great
flexibility in scheduling. Four, six, eight, twelve hour days are all
possible in work weeks of six, four, three, or two days. That would
allow individuals to have many options, always desirable in a society
trying to preserve the maximum freedom compatible with the same degree
in others.
It would allow people to continue their educations and have other
interests besides work. For many people, that might result in nothing
more than an expanded social life — in itself a good thing — but a few
would do interesting and other socially enriching things with their
time. Having time is at least as important as money for the flowering of
creativity.
Perhaps most important, assuming there are regulations to stagger
parents' hours on request, a 24-hour work week would solve many child
care issues in two parent families without either parent being run
ragged. That is not a minor matter. Parental involvement is the single
biggest factor in a child's eventual growth into a contented and
productive adult. It has huge social implications, and it's main
facilitating factor is /time/. Any society that cares about
sustainability will make sure that the parents among them have that
time. It also solves some child care issues without assuming the
presence of a third class of people, such as grandmothers or nannies. In
a country with true living wages, nannies would be far too expensive for
almost everybody in any case.
The flexibility of hours per work day could also be useful in jobs that
require constant and perfect focus, which is impossible to maintain for
eight hours. For jobs such as air traffic controllers, it would be
better for everyone if shifts were four hours long (with the requisite
break(s) within the shift).
One likely consequence of a 24-hour work week is that plenty of people
would try to have two or even three jobs. However, a system of shorter
hours and higher pay means that incomes would be more equal than they
are now. That would mean only a few people could have double jobs
without depriving others of work. So, while there's nothing
intrinsically wrong with people working themselves silly, assuming they
don't have children, it can't come at the expense of a livelihood for
others. Acquiring double or even triple jobs should only be permitted
when there's enough of a labor shortage that there are no other people
to fill the positions.
Having envisioned the levels of wages and work time that are consistent
with the implementation of the right to a living and the pursuit of
happiness, the next step is to see how close we can come to that in the
realm of the possible. One thought experiment is to see what the hourly
wage would be if income were distributed completely equally. Perfect
equality is never going to happen and is not even desirable because
people have different goals and abilities, but it does show the highest
income level an economy could support for everyone equally under current
conditions. If the living wage is higher than that, it would bankrupt
the economy. If lower, it might redistribute income, but it is not
impractical.
I'll use the US as an example again. Figures are readily available. In
2007, there were 139.3 million tax returns filed and the reported amount
earned by all was around $7.9 trillion. That includes
government payments like unemployment, as well as rental income and
unearned income from interest, dividends, and capital gains. That works
out to $56,712 per taxpayer if the income of the country were
spread evenly among all earners. That would be a national average of
$45.44/hr for a 24 hour work week.
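The division is simple enough to verify directly, using the 2007 figures
just cited:

    # 2007 US reported income spread evenly across all filers,
    # expressed as an hourly wage over a 24-hour, 52-week year.
    per_taxpayer = 7.9e12 / 139.3e6
    print(round(per_taxpayer))                 # 56,712
    print(round(per_taxpayer / (24 * 52), 2))  # 45.44 per hour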
A national average living wage of $20/hr for a 24 hour week, or $25,000
per person per year, is less than half of the theoretical equal
distribution. It would come nowhere near bankrupting the economy. It
would even allow plenty of room for some people to earn far more than
others. There's no question that a living wage earned over 24 hours per
week would require different priorities than our current ones and that
it would redistribute income, but that's different from saying it's
impossible. In that, it's just like so many other characteristics of a
fair society: It's so far from our current situation, it looks unreal.
It goes without saying, but I'll say it anyway, that the amount of money
needs to be transposed to a lower or higher key depending on local
economic conditions — at least until the whole planet has one smoothly
running economy. Poorer countries have lower costs of living and lower
pay, but if income is not concentrated in a tiny elite, not-rich does
not equal poverty-stricken. Take Botswana as an interesting example. They
have some mineral wealth (e.g. diamonds), but then, so does Congo
(DRC), whose GDP (using purchasing power parity per capita) is about 2% of
Botswana's. About three quarters of Botswana is the Kalahari desert.
It's a small country with some resources, not a lot, that doesn't trade
oil or drugs. What they do have going for them is low levels of
corruption.
The country was one of the poorest at independence nearly 50 years ago.
Now they have a GDP in purchasing power equivalents of $25 billion and a
population of two million. Assuming the same proportion of earners and
income as a proportion of GDP as the US, equally distributed income
would be $14,000 per year. The minimum wage is $0.58 per hour
(approx. $1200 per year). If there's a similar relationship between minimum and living
wages there as in the US, the amount needs to be quadrupled or
quintupled. So, a living wage might be around $6000, or $5/hr in a
24-hour workweek. That's much less than $14,000. There is plenty left
over to allow some people to be richer than others.
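To retrace that estimate, here is the arithmetic spelled out. The earner
share and the income-to-GDP ratio are assumptions carried over from the
US example (taking 2007 US GDP at roughly $14 trillion and population at
roughly 300 million), so this is order-of-magnitude only.

    # Botswana's income, equally distributed, assuming the same earner
    # share and income-to-GDP ratio as the US figures used earlier.
    gdp_ppp    = 25e9                # purchasing power parity
    population = 2e6
    income_to_gdp = 7.9e12 / 14e12   # assumed ~56%, as in the US
    earner_share  = 139.3e6 / 300e6  # assumed ~46%, as in the US

    per_earner = gdp_ppp * income_to_gdp / (population * earner_share)
    print(round(per_earner))         # about 15,000: near the $14,000 above

    print(round(0.58 * 40 * 52))     # minimum wage: about $1,200 a year
    print(round(6_000 / (24 * 52), 2))  # living wage: about $4.81/hr,
                                        # the rough $5/hr in the text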
They've managed that despite being hard hit by the AIDS epidemic. The
government, not just charities or the medical profession, has tried to
take action against it. Given
the scale of the disaster, it shows that coordinated and reasonable
government action makes a big difference.
Although I'm using Botswana as an example, I'm not saying the people
there are saints. There's plenty of sexism, up to and including ignored
levels of gender violence, and other problems. They are not totally
unlike their neighbors. What they show is the level of dividends paid by
any progress toward justice, even when it's not perfect.
The biggest problems in very poor countries, those where ensuring
better-than-starvation incomes for all people is literally beyond the
scope of the economy, are corruption and fighting. Consider Kenya as a not-too-extreme
example. They have resources, fertile land, and a large and generally
educated population. But the national income, evenly spread, could not
provide even $3 per day per person ($1095 per year). And yet, forty
years ago they were one of the richer countries in Africa. Kenya is far
from the worst country in the world when it comes to corruption and war,
and yet it's been enough to beggar them. I know colonialism didn't help.
But oppression or the extraction of resources is not the hardest thing
to recover from. The hardest thing is catching the corrupting illusion
that raw power, whether military or financial, brings benefits and that
justice is a loser's game. Whereas in reality it's the other way around.
The point I'd like to stress is that even in countries without great
wealth, the resources of the people can give all citizens a life free
from abject poverty, preventable disease, and job-disqualifying
ignorance. It just requires a lot less greed and corruption on the part
of the elites. In other words, the problems in poor countries are not
different from the ones in rich countries. The consequences of the same
basic cause are just more stark.
Last, there's the issue of people who aren't working. They fall into
three categories that need different approaches. The temporarily
unemployed are easily addressed using much the same methods as are
already familiar. A tax (or insurance, if you want to call it that) is
subtracted from all earnings, and paid out as the need arises. Likewise,
we already know how to address the inability to work due to physical or
mental disability. The disabled are paid the same way as retirees, which I'll
discuss in the chapter on Care. The last group is the
problem. There are people, and they're common enough that almost
everybody knows someone in the category, who are normal in every way
except the ability to support themselves. The shiftless are not
unemployed. They're unemployable. There's a difference, and attempting
to pretend their situation is temporary wastes everyone's money and energy.
However, no matter how true that is, it goes against the grain for
almost everybody to pay the shiftless a guaranteed annual income and not
worry about it. Possibly, if there's a real need for a non-zero
proportion of people with bare survival incomes to keep inflation down,
then the non-workers could be that group. That's also similar to the
current system, except that it would be explicit: the members of that
group take on the poverty because they prefer it to working.
However, assuming true full employment is not in fact incompatible with
a non-inflationary economy, then the problem of non-workers who don't
want employment remains.
Religious orders and the military have traditionally been the social
service agencies for people who need outside discipline to manage their
lives. The military would no longer be large enough, unfocused enough,
or unskilled enough in the world I'm envisioning to serve that purpose.
Religious orders are outside the scope of government and couldn't be
relied on to solve social problems. But an institution whose explicit
purpose is to provide a regulated environment in exchange for work could
well be a facet of government.
A government always needs a work corps, whether they use the military
for that purpose, or a national guard, or something like the Civilian
Conservation Corps. Most of the people working in it would be ordinary
workers, but because it is a government agency, it could also serve as
the place for people who were unemployable on the open market. (It's
much the same /de facto/ pattern as the military has now.)
A civilian work corps could take on the tasks perennially in need of
labor: planting trees, delivering meals-on-wheels, maintaining city
parks, and so on. On the principle that anything which can be voluntary
should be, people in the program could volunteer, to the extent
possible, for the jobs congenial to them. The work week would be the
same length as for everyone, which would leave time for voluntary
training programs. Such programs would allow people who might have
simply not found their niche in life to do so. The civilian work corps
could have a range of options for housing, from the ordinary situation
of people living on their own and coming in to work, to halfway houses,
to dormitories and meals in cafeterias. People wouldn't have to have
trouble organizing their lives to choose the latter options, but they'd
be there for those who did.
There would inevitably be some people whose labor was so worthless it
didn't even cover the cost of their room and board, or who were more
trouble to oversee than they were worth. That's why this has to be a
government function: to support those who can't hold up their own end.
As to how people take part in this program, I think most of it would be
self-sorting. People who need extra structure in their lives tend to
gravitate to the situations that provide it. Those who didn't, those who
kept being thrown out for nonpayment of rent, who were picked up
repeatedly for being intoxicated in public places, who one way or
another made themselves a burden on others with no record of
subsequently making amends, they could be "drafted," as it were, and
prevented from being as much of a burden in the future.
- + -
The role of government in the economy is the same as everywhere else: to
ensure there are no abuses of power and therefore no concentrations of
power, and to ensure there are no double standards. The consequences of
consistently applying those principles to money and work require big
changes compared to all current systems, from much stricter regulation
of inflation, deflation, interest, and competition, to a living wage
that requires a more equitable distribution of income. The ability to
make vast fortunes would be limited, if not lost. The rewards for human
happiness would be incalculable. Applying fairness to economies depends
largely on whether people want money or goods, in the literal meaning of
the word.
+ + +
Care
There are many questions about the extent of people's duty to care for
each other, but the existence of the duty is a foregone conclusion.
Almost nobody, now or in the past, abandons ill or disabled members of
their group. If they do, they're viewed as despicable subhumans.
Scientists, with their intense belief in the absence of a free lunch,
explain that behavior by noting that overall group survival must be
improved. That's a bit like saying water is wet. If a costly trait, like
helping others, does not help survival, those creatures who have it die
out. If it does help, well, then they survive long enough to be around
when scientists are studying the issue.
The other evidence that it's good for the group is that countries with
the most solid safety nets, the OECD ex-US, are also the wealthiest and
best-run. Far from impoverishing the people because they're wasting
money on non-producers, it somehow enriches them. The clearest example
is perhaps Botswana, discussed in the previous chapter, which went from
poor with no safety net to richer with a better safety net. In the
relationship between wealth and social safety nets, the point is not
which comes first. If care comes first, apparently that increases
wealth. If wealth comes first, providing care certainly doesn't reduce
it and does make life richer.
I'd argue, together with major moral philosophers, that we also have a
moral duty to each other. It's not only that our souls are hardwired or
that there's utility to security for all. We're social animals. Our very
lives depend on the functioning of our group, which makes it more than
churlish to refuse assistance when a unit of that group is in need. It's
cheating, a form of stealing, to refuse to reciprocate for benefits
already received.
The sticking point is the delimitation of the group we're willing to
care for. Everybody includes their families. Most people include their
immediate circle of friends and neighbors to some extent. Large numbers
of people extend it to all citizens of their countries, but even larger
numbers don't. And some feel that way about all of humanity.
As a matter of fairness, care needs to extend to the whole group. I'd
argue that means all humanity, but we don't have coordination on a
planetary scale yet. The largest group currently able to distribute
benefits and costs consistently is the nation. It's simple fairness to
extend care to the whole group because anything else requires a double
standard: one rule for me and another rule for others. Everybody in a
position to do so takes whatever help they need if they're facing
disease or death. Everybody, at the time, feels help is a human right.
If it's a human right for one, it's a human right for all.
There's also a practical reason why care should be a function of
government. The larger the group over which the burden of care can be
distributed, the smaller the cost borne by any one individual. The
proportion of GDP paid in taxes is actually lower in some countries with
medical care for their citizens (e.g. Australia, UK)
than is the equivalent per capita expense in, for instance, the US,
where citizens pay more per capita for those services and yet have
poorer outcomes. (A reference
summarizing the previous link.) The government is a more efficient
distributor of social insurance than, in effect, requiring each family
or small town to be its own insurer. It's an analogous case to mass
transit. A village would be crushed by the expense of building a
complete modern mass transit system for themselves alone, but when
everyone pays into it the costs per individual are small and the
benefits are much greater.
Providing care works best as a coordinated, distributed, non-profit
system, which is exactly the type of endeavor government is designed to
undertake. (Unlike defense, however, government doesn't have to have a
monopoly on care.)
I'll spend a moment on the concept of moral hazard (as I have in an
earlier post)
since it has some followers at least in the US. The idea is that if
someone else is paying, the normal limits on overspending are lifted and
much money will be wasted. A recent example occurred in the financial
industry. Institutions peddled much riskier instruments than they would
have on their own account because they assumed that taxpayers would
carry major losses should they occur. Much money was wasted. So moral
hazard is a real problem. It's just not a real problem in the realm of
social insurance.
Social insurance is for things people would rather avoid, even if
someone else pays for them. Nobody gets old for fun. Very few people go
to doctors by choice (unless it's for elective cosmetic treatments, and
those aren't covered in any system). Medical visits are a chore at best,
and one most of us avoid no matter who's paying for it. Nobody says,
"Gee, I think I'll check into the hospital instead of going to the
beach." So the motivation to spend other people's money is simply not
there on the part of the patients. The doctors can be another story,
but that problem is created largely by faulty reward systems. At this
point, ways of fixing those are known if people actually want to end the
problem rather than profit from it.
It's also worth pointing out that the consumer model of medicine is as
much of a fantasy as the freeloader. When people need treatment it's not
usually a planned and researched event. Sometimes it's even a desperate
event. Very few patients know which treatments are available for them or
which are best for their condition. There is no way, in that case, to
"shop" for the best option. It's a complete misapplication of a
marketplace model, which presupposes equal information among rational
and independent actors. Patients are not in a position to be choosy and
are in a dependent relationship to vastly more knowledgeable experts.
Basing actions on fantasy is, to use the metaphor one more time, like
jumping from a third floor window on the assumption one can fly. It does
not end well.
Two classes of patients who do cost more than they should are
hypochondriacs and malingerers. Doctors are quite good at spotting the
latter, and the former are a microscopic expense compared to the costs
of trying to stop them from using the system.
There is a simple reason for that, and it's not just because of all the
expensive bureaucratic gatekeepers. The main reason is people don't
always know when they're ill or how serious it is. That's why they go to
the doctor. Because they /don't/ know. That means any attempt to make it
difficult or costly to get medical attention results in a significant
number of people who don't get it early. Untreated medical problems,
except when they lead to immediate death, are always more expensive to
treat later. Thus, it is /more/ expensive to discourage people from
spending other people's money to go to doctors, counterintuitive as that
might seem.
Common sense dictates the opposite because it seems obvious that paying
for medical care will cost more than not paying for it. So any care has
to cost more than no care. And that is indeed true. But it misses a
crucial point: it's very hard to watch people die in the street.
Refusing to spend any money on care for others is only cheaper if one is
willing to follow it through all the way to the logical conclusion.
Without care, some people will die in inconvenient places. If you can't
stand the sight and send the dying to hospital, somebody will wind up
paying. The amount will far exceed what it would have cost to prevent
the problem in the first place.
The common sense intuition that it's cheaper not to pay for care depends
on being willing to live in a world where people suffer and die around
you. Such places require psychological coping processes, mainly the
invention of reasons why the victims deserved their fate so that one can
feel safer in a terrifying world. The process necessarily feeds on
itself and further blunts understanding — whether of people, of
situations, or of effective solutions — until there's none left. The
real moral hazard of social insurance is not spending money and losing
some of it. The real hazard is withholding it and losing everything else.
Medical Care
Given that there's a duty to care for others, how big is it? Does it
have priority over all other needs? Would that serve any purpose?
The answer is, of course not. The duty extends to what can feasibly be
provided without destroying other essential aspects of life, given local
technology and funds. If funds are limited, and they always are, some
types of care have to be given priority over others.
Since social insurance involves spending other people's money, the
first criterion should be using it where it brings the greatest
downstream benefits. That one rule indicates a well-known series of
actions that all return hundreds of times their initial cost in
subsequent benefits, both to the economy and to citizens' quality of
life. Those include
maternal and neonatal care, clean water, safe toilet facilities, bednets
in malarial regions, vaccinations, prevention and treatment of
parasitic diseases, and prevention of the diseases caused by vitamin A,
vitamin B, and protein deficiencies. Using public funds to avoid much
larger expenditures later
is a clear case for investment, in the literal meaning of that word.
The next tier of public health is provision of emergency services, then
basic hospital care, and then full-blown medical care. Spending money on
all of these saves money down the road, but the more initially expensive
services may nonetheless be out of reach for the poorest countries, even
with intelligent priorities. That is a clear signal for international
aid, I would argue, and not only for moral reasons but also for the
purely practical one that health and disease don't stop at borders.
Effective treatment of TB anywhere, for instance, is prevention of TB
everywhere.
Palliative care for the terminally ill has no financial downstream
benefits, but a system without double standards will always provide it,
regardless of the nation's poverty level. An absence of double standards
dictates that available treatments to alleviate suffering must be provided.
So far, I've been discussing government involvement that people welcome,
even when they don't welcome paying for it. But when public health
measures are compulsory, that's much less popular. A compulsory
component is, however, essential. The right to be free from harm comes
very near the top in the hierarchy of rights, and a careless disease
carrier can certainly spread harm.
It's become acceptable to respect people's rights to their own beliefs
over the right to be free from harm, which shows confusion over the
relative rank of the two. Ranking a right to belief, even when it's not
counterfactual, above the right to be secure in one's person will end in
all rights being meaningless … including the right to one's own beliefs.
Freedom isn't possible if freedom from harm is not assured. That's
generally obvious to people when the threat is immediate, and the
confusion arises only because so many people no longer feel that lethal
communicable diseases are their problem. So, it has to be understood
that the right not to catch diseases from others is of the same order
as the right not to be harmed by others generally. The powers-that-be
have the right
to enforce whatever measures are needed to safeguard the public health.
That said, although the mandate to safeguard the public health is an
ethical issue, /how/ it's done is a scientific and medical one.
Effectiveness must be the primary determinant of action. Even though the
government has the right to compel adherence to public health measures
like vaccination or quarantine, if compulsion is not effective, the very
mandate to safeguard the public means that compulsion should not be applied.
The fact is that medical compulsion makes sense only under rare and
unusual circumstances, generally when there's imminent danger of
infection for others from specific individuals who are uninterested in
or incapable of taking the necessary steps to prevent transmission. That
situation can arise in any potentially epidemic disease, such as
tuberculosis or AIDS. As Bayer & Dupuis note in their article on TB,
there's a range of effective measures from
directly observed ingestion of medication all the way up to involuntary
detention. The deprivation of rights must be proportional to the degree
of threat.
Public health is best served when people are willing participants in the
process of preventing disease, and the best way to enlist cooperation is
to give them what they want. At the simplest level, that's information,
and at the most complex, it's treatment for the disease and assistance
with any tangential consequences. The importance of helping rather than
forcing patients is evident in any public health document. For
instance, the CDC sheet on tuberculosis treatment mentions coercion
only in the context of increased treatment failure due to
noncompliance. Also, as should be obvious, high voluntary treatment
adherence leads to lower costs.
Common sense plays us false again by insisting that it must be more
expensive to persuade people than to dispense with all the extras and
just force them to shut up and take their medicine. The fallacy in that
is clear with a moment's thought. A person facing what amounts to
imprisonment, or any other negative consequences, will hide their
disease — and spread it — as long as possible. It's far more expensive
to treat an epidemic in a whole population than it is to give even the
most expensive treatment to the few people who start it. A person who
can expect every possible assistance will go to be tested as quickly as
possible, which costs less by orders of magnitude than dealing with an
epidemic. (There's also, of course, the benefit that far fewer people
take time off work, suffer, or die.)
Vaccination has different obstacles, but ones which it's equally
important for concerted government action to overcome in the interests
of public health. For vaccination to be effective in preventing
epidemics of highly infectious diseases (as opposed to conferring
individual immunity), a high proportion of the population must be
immunized. The number varies based on infectivity and mode of
transmission, but it's on the order of 95%. Then, if the disease infects
a susceptible person, the chances are it will find only immune people
around that individual and be unable to spread.
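To put a number on that threshold (the formula is standard
epidemiology rather than anything argued for here): if one case
infects an average of R_0 others in a fully susceptible population, an
epidemic cannot sustain itself once the immunized fraction reaches

    p_c = 1 - 1/R_0

For a highly contagious disease such as measles, R_0 is commonly
estimated at around 12 to 18, which puts p_c in the neighborhood of
92-94% and explains figures on the order of 95%.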
That sets up an interesting dichotomy. If vaccinations are compulsory
for the purpose of public health, fairness requires the rule to apply to
everyone equally. However, as a matter of practical fact, it doesn't
matter if a few people avoid it. How communities handle this depends to
some extent on how angry it makes them if some people can be exceptions.
The most important factor, however, is the practical concern of
encouraging maximum cooperation. The sight of people being dragged off
to be vaccinated does nothing to educate people that immunization is
actually a good thing to do. I would argue that the practical
considerations indicate approaching the issue of exceptions as
tolerantly as possible, and only employing compulsion when the public
health is endangered. In other words, education about the benefits is
more important than enforcement against people who don't get it, unless
the latter group is actually endangering others.
Education about vaccination — or any other fact-based issue of public
significance — has to be understood in its broadest sense. It's the sum
of inputs people get on the topic, whether from news reports, ads,
stories, school, or, last and least, government-produced information
materials. /All/ of these avenues need to respect the truth.
Requiring respect for the truth may sound like interference with the
right to free speech, but I strongly disagree. On the contrary,
necessary restrictions actually support free speech by improving the
signal-to-noise ratio. My argument is in the chapter on Rights, but
the short form is that people are not entitled to their own facts,
and that facts differ from beliefs and opinions because they're
objectively verifiable to an acceptable standard of certainty. Respect
for the facts and its corollary, a relative absence of misinformation,
together with useful information in school that is later reinforced by
wider social messages, clears the air enough to enable people to make
reality-based decisions when the need arises. Those who persist in
counterfactual beliefs become a small enough population that workarounds
are possible.
Getting back to vaccination, specifically, the question was how to
handle the small minority of people for whom immunity is not required on
grounds of public health. Exemptions could be given to those who want
them, up to the maximum that is safe for epidemiological purposes. If
demand is larger than that, the exemptions could be distributed randomly
within that pool. (And it would indicate that education efforts need to
be stepped up.)
Other public health measures need to follow the same principle.
Compliance can be compulsory when that is essential to prevent harm to
others, but given the nature of people and diseases, the system will
work better and more cheaply if most compliance is willing. That means
the vast majority of efforts need to be channeled toward ensuring that
people understand the benefits of public health measures, even when they
involve personal inconvenience or hardship. Compulsion needs to be
reserved for cases of criminal negligence and endangerment. It needs to
be the last resort, not the first, and it needs to be limited to the
very few actually abusing the system, not the larger number who are only
afraid of it.
Although not usually understood as medicine, there are many functions of
government with direct effects on health. Agricultural subsidies, urban
planning, and mass transit all come to mind. Urban planning may sound
odd, but the layout of neighborhoods, the proximity of stores, the ease
of walking to mass transit, the availability of car-free bike routes,
and the presence of parks, all have a large effect on how much walking
or other exercise people do just in the course of everyday life.
That's starting to look like an underappreciated factor in maintaining
public health. Direct government support of exercise
facilities is another example of a public health measure that's not
usually included in that category. Those facilities could be public
playgrounds, swimming pools, playing fields, gyms, dance schools, or
just about anything that facilitates movement rather than sitting.
A problem with the current implementation of government policies that
affect public health is a narrow definition of the field. As a result,
the relevance of health to policies is overlooked. An obvious example in
the US is corn-related subsidies. They were started to get corn-state
votes (not their official reason, of course), and had the effect of
making high-calorie corn-based ingredients cheap, which has contributed
to a rise in calories consumed, obesity, and associated diseases.
Separate from the issue of whether
buying corn state votes with taxpayer funds is a good idea, it's
definitely a bad idea to make people sick in the process. Any aspect of
government with a public health component needs to be examined for
public health implications and implemented accordingly.
Moving from matters too mundane to be considered medical to those too
new for their implications to be generally appreciated, the increasing
availability of genomic information raises some fairness issues.
(Discussed earlier.)
The vision is that as we learn more, we can take preventive steps and
avoid the ills we're heir to. In the ads for the first crop of genetic
testing companies, the knowledge about disease susceptibility is used to
apply treatment and lifestyle choices that avert the worst. That's not a
hard sell.
But there are many other aspects to testing that are less benign. Some
are personal, and hence not a direct concern here. For instance, what
about diseases for which there is no treatment? In a system with true
individual control over their own data (discussed under privacy in the
second chapter), concerns about information falling into unwanted
hands should
be a thing of the past. The decision whether or not to be tested is
still a difficult one, but also a purely personal one. The government's
responsibility ends at making sure that all tests have the patient, not
profit, as their focus. An adequate medical system would require access
to useful follow-up counseling for all tests. Allowing testing without
comprehension is allowing some companies to use people's fears to part
them from their money. It's hard to see how it differs from less
technical scams.
The most problematic implication of genetic testing is that improved
information about risk undermines the premise behind private insurance.
The general idea behind insurance is that bad things don't happen to
most people most of the time. By taking a little bit of money from
everybody, there's enough money to tide over a few specific people who
have problems. The bigger the group, the better this works.
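To see why size matters, consider a stylized example (the numbers are
purely illustrative, not drawn from any actual insurer). If each
member independently faces a probability p of suffering a loss L in a
given year, a pool of n members must collect about p × L per member to
cover expected losses, while the year-to-year fluctuation in the
per-member cost shrinks as

    sqrt( p(1-p) / n )

so quadrupling the pool halves the uncertainty, and a pool of 10,000
is ten times more predictable, per member, than a pool of 100.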
However, private insurance companies necessarily insure only a subset of
the population. If the risk pool is smaller than everybody, then the
best thing a company can do to improve profits is to get rid of bad
risks. Hence, the better the tools and the more accurate the risk
assessment, the less private insurance will actually function as
insurance, i.e. as a way of diluting risk. We can try to patch that with
regulations and fixes, but the underlying gravity will always work in
the same direction. Insurance companies will use testing to slough off
bad risks. They have so much to gain from a more accurate assessment of
risk, that I'd be willing to bet they'll be among the earliest adopters
of diagnostic genome scans. In places with private medical insurance, it
won't just be former cancer patients who are uninsurable.
The inescapable implication is that genetic knowledge works to the
individual's benefit only in a national or supranational health care
system. Anything less, any ability to exclude some people from the pool
will, with improved knowledge, end in so many exclusions that there is
no pool and hence no real insurance. Thus, there's yet another practical
reason why a national medical system is not only a good idea, but a
necessary one.
The most difficult question is testing for diseases which do have
treatments. They force a choice about who controls treatment decisions.
The obvious answer — the patient decides — is easy when all parties
agree. But in many cases they don't. Assuming a national medical system
funded by taxpayers, there's a mandate not to waste money. Then assume
genetic testing that can accurately predict, say, heart disease risk.
(We're not there yet, but we'll inevitably get there some day, barring a
collapse of civilization.) Exercise is a demonstrated way to improve
heart health. So, should everybody exercise to save taxpayers (i.e.
everybody) money? Should exercise be compulsory only for those in the
riskiest quintile? Or the riskiest half? How much exercise? Which
exercise? Will those who refuse be given lesser benefits? Or what of
drugs, such as statins, that reduce some cardiovascular risks? If people
are forced to take them, what does that do to the right to control one's
own body? And that doesn't even touch on who's liable if there are
eventual side effects. More broadly, do we have the right to tell some
people to lead more restrictive, healthier lives because their genes
aren't as good? What level of genetic load has to be present before
prevention becomes an obligation? What happens when the medical wisdom
changes about what constitutes prevention?
The questions, all by themselves, show how impossible the choices are.
Add to that how counterproductive compulsion is in medicine, and it
becomes clear that the idea of vesting treatment control in anyone but
the individual is absurd.
That does mean taxpayers have to foot the bill for the stupid behaviors
of others, but there are two reasons why that's a better choice than the
alternative. The first is that everyone, for themselves, wants to be
free to live life as they see fit. Fairness demands that others be
treated as we ourselves want to be treated, so control needs to rest
with the individual on that basis alone. The second is that the choice
is not between wasting some money on the unwise or saving it for the
more deserving. The choice is between wasting some money on the unwise
or wasting vastly more on an unwieldy system of oversight. The
ridiculous endpoint is a webcam in every refrigerator and cars that
won't start if the seat senses the driver has gained too much weight.
It's important to remember the nature of the real choice because any
system of public universal health care will naturally tend toward
preventive measures. In a private system, the incentives promote the
exclusion of riskier members and a focus on diseases because that's
where the profits are. (Discussed at more length earlier.) In a public
system, exclusion is not an option and prevention is cheaper than
cure, so the focus is on, or will tend to, preventing disease in the
first place. That's a very good thing, except if it's allowed to run
amok and trample crucial rights to self-determination in the process.
So, again, it's important not to force preventive measures on people
even though the system is — rightly — focused on prevention.
However, just because preventive measures for non-infectious diseases
can't be forced, that doesn't mean prevention can't be promoted. As I
discussed under vaccination, fact-based information can certainly be
presented. Education and social planning that facilitate healthy living
have to be the tools to promote prevention.
Retirement
Care of the elderly is another major area of social responsibility. From
one perspective, it's the disabilities of age that require help, which
would make retirement a subset of assistance to the disabled generally.
In other ways, however, retirement is supposed to be a deserved rest
after many decades working. Those are two separate issues.
The main difference between them is that helping the disabled is a
universal social obligation, but providing a reward for a life well
spent really isn't. That kind of reward also raises some fairness
issues. If the reward is a right, and rights apply to everyone equally,
then everyone should get that reward. But life span is not a known
quantity. There's no way to calculate a universally applicable number
of years' rest per number of years lived. A method that relies on the
luck of the
draw to get one's rights may work after a fashion, but can hardly be
called fair.
Retirement based on the proceeds from investments is not the topic here.
Obviously, anyone can invest money and try to acquire independent means,
but that's not an option for everyone. Considerable starting capital is
needed, or the ability to save large sums of money over a period of
decades. That's especially true in a system where interest rates are
limited by degree of risk and wealth creation. For instance, assuming a
3% rate, one would need around $800,000 to receive $25,000 per year, if
that was the living wage. A person making $25,000 who had no starting
capital and received 3% interest
would have to save somewhat over $10,000 per year to have a large enough
nest egg after some 40 years. Although that's perhaps not physically
impossible, it would preclude a family or any other interests. It is, in
any case, not the sort of sacrifice that people would generally choose
for themselves. On the other hand, if a person has somewhat over
$200,000 to start with, then a rather high savings level of $1800 per
year (for retirement alone) over 40 years will also yield $800,000. Not
everyone has $200,000 to start with, and fewer still are free of other
pressing needs for that money before reaching retirement.
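For anyone who wants to check those figures, the standard future-value
formula for a starting sum P and yearly savings s at interest rate r
over t years is

    FV = P(1+r)^t + s × ((1+r)^t - 1) / r

At r = 3% and t = 40 years, (1.03)^40 ≈ 3.26 and the savings factor is
about 75. Starting from nothing, roughly $10,600 a year reaches
$800,000; starting from $200,000, the capital alone grows to about
$650,000, and $1,800 a year supplies the remaining $135,000 or so.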
Self-funded, investment-based retirement is available only to people
with money, who can certainly try to follow that course. From a
government perspective though, when rights are the concern, retirement
based on individual investment is irrelevant because it can't be universal.
Given that there's no social obligation to pay a guaranteed annual
income to able younger people, it's hard to see why there should be one
to able older people. However, a number of factors muddy the issue.
Older workers are less desirable in work requiring speed or stamina.
That can lead to real issues in keeping or finding work as people get
older, since any job, physical or not, requires some stamina. At a
certain point, and the point varies with the individual, it's better for
everyone if retirement is an option.
Recognition of the reduced employability of the elderly could be
expressed in much looser "disability" standards as people get older.
Given that people differ, it should be a gradual scale, with a reduction
in standards beginning, say, at forty and reaching completely
self-defined disability at five years less than average life expectancy.
In other words, the older one gets, the more weight is given to one's
own assessment of ability to work and the less to medical input.
Retirement in that system would be very different from what it is today,
except for the most elderly. Much younger people in chronic poor health
could elect to retire early with medical approval. Much older people
could keep working if they wanted to. Mandatory retirement would not be
a factor, which is as it should be if individual self-determination is
important.
Retirement also does not have to be an all or nothing system. Hours
worked could be reduced gradually. People who no longer have the
capability to work full time could reduce their hours, and pension
payments could make up the difference to a living wage. Where possible,
there should also be rules to enforce accommodation for workers who need
it. That applies not only to the elderly, but also to the disabled and
to parents who need alternating shifts.
The proportion of people drawing pensions in a flexible system oriented
to individual health might increase or decrease compared to our current
situation. Without access to the necessary detailed information, it's
hard to tell. My guess is that it would shrink quite a bit.
Furthermore, unlike a 40-hour work week, a 24-hour week would not
exceed the stamina of a much larger number of people. So the proportion
of retirees supported by the working population might be smaller than
under the current system.
I suspect that a big part of eventual resistance to retirement benefits
based on reduced stamina would come from the feeling that after decades
as wage slaves, people deserve some time of their own. In other words,
it's based on the retirement-as-reward model. However, I also suspect
that the shorter work week would reduce the power of that model
considerably. A 24-hour week leaves enough time for interests besides
work and means that unsatisfying jobs occupy less of one's life. That
may mean less need to retire just to escape a bad job or to "have a life."
Mandatory age-specific retirement does have one useful social function.
It forces change. It prevents ossified codgers from taking up valuable
space people require for other purposes. And, if done right, it's also
good for the codgers by shaking them out of ruts. I'm not aware of
studies proving that the leavening effect of retirement is necessary for
social health, but my guess is it's good for us.
I suspect that would be even truer if and when we have longer life
spans. Nobody could stay active, creative, interested, or even polite,
in the same job for a hundred years. Something like the academic
tradition of sabbaticals would be necessary for everyone. In the popular
misconception, those are long vacations, but in reality they involve
different duties rather than time off. In this case it would be a time
to reorient and to look for work in a new field, or for a different job
in one's old field. With a 24-hour work week, there is enough time to
take a few years for retraining ahead of the required transition. If the
rule was to take a year off after every thirty, in a type of socially
funded temporary "retirement," and to return to a new situation, there's
no question it would have a leavening effect.
There is a question about how it could work in practice. People starting
a new career after leaving another at the top would have lower salaries.
There's generally huge resistance to any change involving less money.
Those at the top of the tree could easily game the system. For instance,
the Russian limit on Presidents is two terms, so Putin came back as
Prime Minister instead, at the behest of his cronies. The U.S. military,
to give one example, approaches both problems — people serving too long
or gaming the system — by having the option of retirement after 20 years
of service and no option to return to similar rank. However, I doubt
very much that any society could afford fair pensions, i.e. ones that
equalled a living wage, for everyone after 20 years of work. The
leavening effect would have to come from changes to other jobs.
I see the financial aspect of the requirement to switch jobs as follows,
although what actually works would have to be determined as the ideas
were applied. People who haven't had a major job change in 30 years
(i.e. a change other than promotions or transfers) would prepare for it
much as they do now with retirement. They'd retrain ahead of time, if
they wanted, and they'd save some money over the years to cushion the
transition. For average earners, interest payments wouldn't equal a
living wage, but they'd help ease a lower starting salary in a new job.
Further, given flatter salary differences, a lower starting income
would likely still fall within the range of middle class
pay. Those for whom the difference is large would be the high earners,
and it doesn't seem unreasonable to expect them to save more and cushion
their own transition. The process would be much the same as the way
people plan now for lower incomes in retirement.
In summary, retirement under a system that tries to apply equally to all
would look a bit different. Disability would be the primary determinant
of age at retirement, but with rules for determining disability adapted
to the realities of old age. Mandatory retirement at a specific age
can't be applied equally, but having some mechanism that requires job
changes is probably necessary for social health. A year "sabbatical"
every thirty years or so to facilitate and reinforce the shift to new
work should be funded as a form of retirement in the broad sense.
Disability
The obligation to care for people with disabilities covers a wide
spectrum. At one end it means nondiscrimination and the requirement to
provide simple accommodations as needed. At the other end are those who
require 24-hour nursing care. That spans a range of services, starting
with simple information, and continuing through increasing levels of
assistance as needed. From a government perspective the issue is
coordinating very different functions so that they're matched to
variable individual needs without waste. The social and medical science
of delivering care is, of course, far more complex, and is beyond the
focus here.
Waste can arise because people overuse the system, and that problem gets
popular attention. It can also arise from management inefficiencies that
don't properly match services to needs, which is a much more boring
topic. The money wasted is, as usual, orders of magnitude greater in the
boring zone. The good news, however, is that since much of the waste is
caused by the government's lack of responsiveness to the needs of the
disabled, it would be amenable to solution in a system with
transparency, effective feedback methods, and administrative
accountability. Medical and social science, as well as the disabled
themselves, can determine which aid is needed and when. The government's
job is to streamline distribution and delivery of services so that
they're matched to the individual's current needs rather than the
government's administrative ones.
As with all other aspects of social care, the expense of the assistance
that can be provided depends on a country's wealth. Accommodation, home
help to enable the disabled to live outside institutions, and any
devices to assist independent living, are not expensive and return a
great deal of value to the disabled themselves and to everyone else by
retaining more productive members of society. Expensive medical
treatments might be out of reach in some situations, but if government
were doing everything possible to coordinate an easier life for the
disabled in other respects it would be a big step forward from where we
are now.
Child Care
[This section benefited a great deal from comments by Caroljean Rodesch,
MSW, and child therapist. Errors and bizarre ideas are, of course, mine.]
Facilitating child care is directly relevant to social survival, and
it's not hard to make the case that it's society's most important
function. Logically, that should mean children (and their parents) get
all the social support they need. In practice, the amount of support
follows the usual rule of thumb in the absence of explicit rules: those
with enough power get support, and those without, don't.
Defense, the other social survival function, can provide useful insights
into how to think about child care. Placing the burden of child care
solely on the family and, more often than not, on the women in the
family, is equivalent to making defense devolve to tiny groups. It's
equivalent to a quasi-gang warfare model that can't begin to compete
with more equitably distributed forms. When it comes to defense, it's
clear that effectiveness is best served by distributing the burden
equally in everyone's taxes. That is no less true of rearing the next
generation, which is even less optional for social survival.
Just as having a police and defense force doesn't mean that people can't
resolve disputes among themselves, likewise having social support where
needed for children doesn't mean that the state somehow takes over child
care. It means what the word "support" always means: help is provided
when and where parents and children can use it. It's another task
involving long term coordination among all citizens for a goal without
immediate profit, precisely the type of task for which government is
designed.
The state's responsibility to children extends from supporting orphans
and protecting children from abuse up to more general provisions for the
growth and development of its youngest members.
Barring overwhelming natural disasters, there are no situations where
it's impossible for adults to organize care for children. There is no
country in the world, no matter how poor, that doesn't have sufficient
resources to care for orphans. That's a question of will and allocating
resources. For instance, there is no country with orphans that has no
standing army. There is no country with orphans without any wealthy
people. People make decisions about what is important, and spend money
on it. But that's not the same as being unable to afford care for orphans.
Judging by people's actions when directly confronted with suffering
children, few people would disagree about the duty to care for them. But
on a less immediate and more chronic level, distribution of the actual
money doesn't match either instinct or good intentions. As with
everything else, meeting obligations depends on rules that require them
to be met. Otherwise the powerless have no recourse, and there are few
groups more powerless than children.
Children's right to care parallels the adult right to a living, but it
also has important additional aspects beyond the satisfaction of the
physical needs. Children have a right to normal development, which means
a right to those things on which it depends. Furthermore, individuals
vary, so the needs for nutrition, exercise, medicine, and education need
to be adjusted for individual children. Specifically with respect to
education, children have the right to enough, tailored to their talents
and predilections, to make a median middle class living a likely option.
Provision of care is a big enough task that the discussion often stops
there. I'll address care itself in a moment, but first there's a vital
practical factor. Effective child advocates are essential to ensure that
the right to good care is more than words. Children with parents
presumably have backers to make sure they get what they need. For
orphans, or those whose parents don't fulfill their duties, there need
to be child advocates, people with the power to insist on better
treatment when there are inadequacies.
Advocacy needs to be the only task of the advocates. If they are also
paid care providers, administrators, or have other responsibilities in
the state's care of children then there is an inevitable conflict of
interest. If a child's interests are being shortchanged, the people
doing it are necessarily adults, possibly even the advocate him- or
herself in their other role. Given the social weight of adults versus
children, the child's needs are the likeliest to be ignored if there's
any divergence of focus. In order for the advocates to truly represent
the children under their oversight, they cannot have conflicting priorities.
Another essential element for good advocacy is a light enough case load
that real understanding of a child's situation and real advocacy is
possible. Given a 24-hour work week, that might be some ten children per
advocate.
The advocates are public servants and, as such, would be answerable to
their clients just like any other public servants. Other adults, or even
the child as she or he gets older and becomes capable of it, can call
the child's representative on shoddy work. However, since the clients
are children, contemporaneous feedback is unlikely to ensure that the
advocates always do their jobs properly. An added incentive should be
that children can use the benefit of hindsight if the advocate has been
irresponsible. In other words, children who are not adequately cared for
by the state can take the responsible representatives to court,
including the advocates who neglected them, and call for the appropriate
punishment. Thus, being a child advocate would carry much
responsibility, as it should, in spite of the light workload.
Given the potential for legal retribution, the expectations for what
constitutes good care need to be stipulated clearly for all paid carers.
If the state provides the types of care shown to have similar outcomes
to families, and if the carers meet their obligations, that would be an
adequate defense in eventual suits.
I don't see these same constraints — outside oversight of and by
advocates and the eventual possibility of legal retribution — as
necessary for parents, whether birth or adoptive. In the case of
criminal negligence or abuse, the usual laws would apply and parents
could expect the punishment for criminal behavior. But there isn't the
need for the same level of feedback about subtler neglect because
parents aren't likely to decide that it's five o'clock on a Friday, so
they'll forget about their kids until some more convenient time. Paid
government functionaries, on the other hand, could be expected to often
consider their own convenience ahead of the child's unless given real
motivation to do otherwise. The mixture of rewards and penalties
suggested here is just that, a suggestion. Research might show that
other inducements or penalties were best at motivating good care for
children. However it's accomplished, the point is to ensure good care.
There's nothing new in the idea of child advocates. They're part of the
legal system now. But they're woefully overworked and under-resourced,
which limits their effectiveness at their job. In a system with
realistic case loads and built-in feedback, implementation should be
more effective. It's very important to get the advocacy component right,
because even the best laws are useless if they're not applied. Children
are not in a position to insist on their rights.
Child advocates have another important function. A point I made earlier
in Chapter 4 on Sex and Children is that one of the most important
rights for children, practically
speaking, is the right to leave a damaging family situation. Application
of that right is likely to be problematic whether because children don't
leave situations when they should, or because they want to leave when
they shouldn't. Adult input is going to be necessary for the optimum
application of children's rights. The child advocates would be in the
front lines of performing this function. Any adult in contact with the
child could do it, but it would be part of the official duties of the
advocates. They would assess the situation, call in second opinions as
needed, and support the child's request, or not, depending on the situation.
The definition of adequate care meets with a lot of argument. For some,
only family placement is adequate, even when there are no families
available. The consequence, at least as I've seen it in the U.S., is to
cycle children in and out of foster homes like rental furniture.
Somehow, that's deemed better than institutional care.
The definition of adequate care by the state has to take into account
the reality of what the state can buy with well-spent money. Nobody, not
even a government, can buy love. So it is pointless to insist that
adequate care must include a loving environment, no matter how desirable
such an environment is. That simply can't be legislated.
What can be legislated is a system of orphanages and assistance for
families who want to adopt. The word "orphanage" tends to raise visions
of Dickensian horrors crippling children for life. Orphanages don't have
to be that bad.
The studies I've seen showing the damaging long term neurological and
behavioral effects of foster and institutional care don't separate the
effects of non-parental care from the effects of unpredictable
caregivers, anxiety-inducing uncertainty about one's future, neglect,
bad care, and downright abuse. Bad factors are more common in
non-parental care, and bad factors are, well, bad. It's not surprising
studies would show such care is not good for children.
An adequate standard of care would meet a child's basic needs, and those
are not totally different from an adult's. Children need a sense of
safety, of comfort in their situation. Stability, which families
generally provide, is an important component, and that's one reason
family situations are supposed to be better than institutions. But
stability, by itself, is only slightly preferable to chaos in the same
way as prison is preferable to a war zone.
At least as important as stability — possibly the main reason stability
feels comfortable — is the feeling of control it provides. Having no
idea what will happen next, which is a very fundamental feeling of
powerlessness, causes fear, frustration, and anger in everyone, at any age.
That is where birth families have a natural advantage. Since the child
was born into that situation, it feels normal and isn't questioned. So
long as things continue as they were, the world seems to operate on
consistent rules, one knows what to expect, one can behave in
predictable ways to achieve predictable results, and there's some sense
of control over one's situation. This is why even bad situations can
seem preferable to a change. The change involves a switch to a new and
unfamiliar set of conditions, the same behavior no longer leads to the
expected consequences, and there's a terrifying sense of loss of control
and of predictability. That's bad enough for an adult with experience
and perspective. For a child with no context into which to place events,
it's literally the end of the world.
I haven't found sociological studies that place the question of foster
care in the context of a child's sense of loss of control because it's
axiomatic that children are powerless. I'm saying that they shouldn't be.
Given that children, like adults, need stability and a sense of control
in their lives, there are implications for what it means to have
child-centric laws. Children have rights, and rights are fundamentally
rules that give one control over one's own situation compatible with the
same degree of control for others. There have to be some differences in
implementation because of children's lack of experience, but the
fundamental idea that children have the right to control some
fundamental aspects of their own lives must be respected. That means
children should not be separated from people they care about, and that
they should be able to separate from those who are causing serious
problems for them.
With the very basic level of control of being able to leave damaging
situations, children's main need is for stability in a benign
environment with someone they care about. They don't actually need their
birth parents. They only seem to because birth parents are usually the
best at providing that environment. If they don't, and somebody else
does, children are far better off with the somebody else rather than dad
or mom, no matter what the birth parents want. Child-centric laws would
give more weight to the child's needs in that situation than the
parent's. The current focus on keeping children with their birth
families under any and all circumstances is misguided. The focus should
be on keeping children with the people they, the /children/, care for
the most.
Specific people may be beyond the state's power to provide due to death,
disaster or abandonment, but a stable benign environment can be
achieved. One way is by facilitating adoption. The other is by providing
a consistent, stable, and benign institutional environment.
If there are relatives or another family where the child wants to stay
and where the adults are happy to care for the child, then that
transition should be facilitated. The child should be able to live there
without delays. The child advocate would have a few days, by law, to
make an initial check of the new carers, and during that time the child
could stay in the facilities for orphans. The longer official process of
vetting the new family and making the adoption official would then, by
law, have to take place within weeks, not years.
There also needs to be a fallback solution if no family situation is
available. Likewise, there needs to be a place for children who have to
leave their families and have no other adults who can care for them. In
my very limited personal experience, systems that mimic family
situations seem to work about as well as ordinary families do. I've seen
that in the Tibetan Children's Villages. They're set up as units with a
consistent caregiver for a small group of children (about five or six).
By providing decent working conditions and good salaries, turnover
among caregivers is reduced and reasonably consistent care becomes
achievable.
It's not the cheapest way to provide care, but saving money is a lower
priority than enabling children to grow into healthy adults.
Longitudinal studies of outcomes may show that other arrangements work
even better. My point is that there are situations with paid caregivers
and consistent environments that seem to work well. The solutions that
are shown to work, those with a similar level of success as families at
allowing children to grow into stable and competent adults, are the
standard of care the state can provide, and therefore should provide.
The problem of dysfunctional parents leads to the difficult question of
revoking parental rights. In a system where children have rights and can
initiate separation from dysfunctional families, borderline cases
become easier to decide. If it's not a good situation, and the child
wants to leave, the child can leave. The situation does not have to
reach the more appalling level seen when the impetus to remove the child
comes from the outside, as it always does now.
If the child is not the one triggering a review of the parents' rights,
then the decision on whether to revoke becomes as difficult as it is
now, if not more so.
The child's needs, including what the child her- or himself actually
wants, have to be the primary deciding factor. Even dubious situations,
barring actual mental or physical abuse, have to be decided according to
what the child genuinely wants. It may go without saying that
discovering the child's wishes requires competent counselors to spend
time with the child. Also, since the child has rights, she or he could
ask for another counselor if they didn't do well with the first one.
Only in cases of actual abuse could the child be taken out of a
situation even without their request. The reason for that is probably
obvious: abuse, whether of adults or children, tends to make the victim
feel too helpless and depressed to leave. Physical condition of the home
would not be a reason to place children elsewhere. It would be a reason
to send in a professional to help with home care, but not one to
separate families.
Care of Infants
There's a systemic problem when the input of children is part of the
process of enforcing their rights. Those who can't articulate any input,
such as infants, require special rules. I'll discuss that, which in some
respects includes disabled children with similar mental capacity, and
then return to the care of children generally.
Infants have a number of special needs. They can't stand up for
themselves in any way, so they must have others who stand up for them.
Their physical development is such that incorrect handling, even when it
would be physically harmless for an older child, can lead to permanent
disability. And they're at such an early stage of development that bad
care causes increasing downstream harm. So they need skill in their
carers. The harm caused is permanent, so preventing it has to be the
first priority, one that trumps parental rights.
That last may seem particularly harsh in the current view where children
have no rights. However, as discussed in the second chapter, rights have
to be balanced according to the total harm and benefit to all parties.
The right to be free from interference ends at the point where harm
begins. Harm to infants and children is a /more/ serious matter than
harm to adults — more serious because children are more powerless and
because the downstream consequences are likelier to keep increasing —
and the adult's rights end where their concept of care or discipline
harms a child. The child doesn't have more rights than an adult in this
respect, only the same rights. The extent to which the reduction in
parental rights seems harsh is a measure only of how much children are
truly viewed as chattel.
Prevention is always better than cure, but in the case of infants it's
also the only real alternative. There is no cure for a ruined future.
The good news is that there are a number of proven measures that reduce
the incidence of neglected or damaged children. Almost nobody sets out
to have a child in order to abuse him or her, so the human
desire to do the right thing is already there. It's only necessary to
make it easy to act on it. For that, people need knowledge, time, and
help in case of extreme stress. All of these can be provided.
A semester's worth of parenting class must be an absolute requirement in
high school. Students who miss it could take a similar class in adult
education. It should teach the basic practical skills, such as
diapering, feeding, clothing, correct handling, and anger management.
There should be brief and to the point introductions about what can be
expected of infants cognitively and emotionally, and when. There should
be programs that allow students who want more extensive hands-on
experience to volunteer to work with children. And those who come to
this required class with plenty of life experience on the subject —
through having been the eldest in an extended family, for instance —
could act as student-teaching assistants in those areas where they
were proficient. Nobody, however, should be excused or everybody would
claim adequate experience. It's an odd thing about people, how
near-universal the drive is toward parenthood and yet how common the
desire is to avoid actual parenting.
Since taking the class is a prerequisite to having a child, there need
to be sanctions for those who insist on ignoring it. On average the
likeliest consequence of skipping the class will be nothing. Offspring
have been born to parents with no previous training for over five
hundred million years. However, in a small minority of human parents,
the lack of knowledge and practice will lead to children who become a
greater social burden. That costs money, so it's only fair to make those
who cause the problem share in its consequences. Yearly fines should
be levied on those who have children without passing the parenting
class, until they do so. The assessment should be heavy enough, and
proportional to the ability to pay, that it forms a real inducement to
just take the class. The amount, like other top priority fines, would be
garnished from wages, government payments, or assets, and would apply to
both biological parents. If one parent makes the effort, but the other
doesn't, the fines would still apply to the recalcitrant one even if
they're an absent parent.
Not everyone learns much in classes, even when they pass them, so there
would no doubt be a residual group of parents who find themselves
dealing badly with infants. For that case, there needs to be an added
layer of help to protect infants. One program that has been shown to
work is pairing a mentor with parent(s) who need one. Those programs
come in different forms and go by different names, such as Shared
Family Care in Scandinavia, Nurse-Family Partnership in the US, and
similar programs elsewhere. Parents could ask
for that help, which would obviously be the preferred situation, or any
concerned person, including older children, could anonymously alert the
local child advocates to look into a specific situation. Professionals
who notice a problem would be under an obligation to report, as
discussed below and in Chapter 4.
Time is a big factor in successful parenting, in many ways even bigger
than knowledge. A fair society with an equitable distribution of money,
work, and leisure that results in a parent-friendly work week, such as
one of 24 hours, would presumably go a long way toward enabling people to
actually care for their children. In two-parent or extended families,
alternating shifts would make it easier for at least one of the parents
to be there for the children at all times.
Caring for infants, however, is full time work in the literal meaning of
the term. There is no time for anything else. So parental leave needs to
be considered a given. The amount of time best allotted could still bear
some study, but judging by the experience of countries with leave
policies, somewhere between one and two years is appropriate. The time
would be included in planning by employers, as they do now in a number
of OECD countries, or the same way they accommodate reservists' time in
the military. Those without children who want to use the benefit could
take leave to work with children or on projects directly for children.
Statistics from countries with good parental leave policies, with
economic and medical systems that reduce fear of poverty and disease,
and with some concept of women's rights, show that infant abuse or
neglect is a much smaller problem than in countries without those
benefits. However, even in the best situations, there will still be
individuals who find themselves stressed beyond their ability to cope.
The system of care needs to include emergency assistance along the lines
of suicide hot lines — call them desperation hot lines — where immediate
and effective help can be forthcoming on demand.
Which brings me to the horrible question of what to do when all
prevention fails and abuse or neglect of infants occurs. The early
warning system of concerned neighbors, relatives, or friends needs to be
encouraged with appropriate public health education campaigns, so that
people know what the signs of actual abuse are and so that the social
discomfort of "meddling" is known to be less important than protecting
infants. All professionals in contact with the child, whether daycare
workers, pediatricians, nurses, clergy, or others, would have a legal
obligation to alert child advocates to potential problems. Then the
advocates could look into it, and take steps to remove the infant to one
of the children's villages before permanent damage occurred. If the
parent(s) wanted to retain custody, they would need to demonstrate a
willingness to learn what the problem was, to correct it, and to retain
a mentor until the infant was out of danger.
Revocation of parental rights is a very serious step which might be
necessary after review by child advocates and other involved
professionals. To ensure that the parent(s)' side was represented, each
of the two parents concerned could appoint one member of the eventual
panel, and, as with other legal proceedings, decisions could be
appealed. One difference is that quick resolution is
even more essential when infants are involved than in other legal cases.
Finally, who decides on the fate of an infant so badly damaged by abuse
that they've been severely disabled? This is not as uncommon a situation
as one might think because it doesn't necessarily require much malicious
intent. An angry parent who shakes a baby can cause permanent and severe
brain damage. Parents without a grasp of nutrition can cause
malnourishment that leads to retardation. If the worst has happened,
then should those same parents be the ones to decide on the child's
future? The answer is obviously not, at least not in any system where
children aren't chattel. The parents would have their parental rights
revoked (besides being subject to legal proceedings for criminal abuse),
and decisions about the child's care would be given to a court-appointed
child advocate. If the infant is so damaged they've lost higher brain
functions and would actually suffer less if treatment were withdrawn,
then a panel of child advocates should make the decision.
- + -
When a child can leave a family and move either to another one or to
state care facilities, a gray area develops. Since it will inevitably
take some time for the official situation to catch up to the real one,
responsibilities for care decisions and financial support need to be
made explicit.
There is a spectrum of parental rights and different degrees of
separation. The situation is more analogous to divorce than to death,
and needs similar official nuances to match it. In the earliest stages
of separation, day-to-day care decisions would have to rest with the
actual caregivers, but longer-term decisions could be put off until the
situation clarified. If the relationship between child and parents
deteriorated rather than improved, the child advocate(s) would find it
necessary to transfer more decision-making to the new caregivers until
in the end full rights would be transferred. None of it would be easy,
and the specific decisions would always depend on the individual
situations, but that doesn't mean it's impossible. These things can be
worked out.
They can be arranged between divorcing adults, and the situation with
children who need a separation is not fundamentally different. The main
point is that /the child should have some input into and sense of
control over their own fate/.
In the case where a child has left a problematic home and the parent(s)
are still living, the question of child support arises. In the very
short term, the parents would be liable for reasonable payments for
food, in the longer term for a realistic amount that covers all
expenses. Once a child had left home, that amount would be paid to the
government's children's bureau, which would then disburse it to the child's
actual caregivers. An arrangement via a responsible and powerful
intermediary is necessary since the child is unable to insist on payment
and since it isn't right to burden the caregivers with financial or
legal battles with the parents in addition to their care of the child.
Since payment goes to a government agency, non-payment would have much
the same consequences as non-payment of taxes. The amounts would be
deducted from wages or other assets.
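To make the flow concrete, here is a minimal sketch in Python of the
payment chain just described. The class and its methods are hypothetical
illustrations; the text specifies only the roles: parents pay the
children's bureau, the bureau disburses to the actual caregivers, and
arrears are collected like unpaid taxes.

    from dataclasses import dataclass

    @dataclass
    class ChildrensBureau:
        """Hypothetical intermediary between parents and caregivers."""
        held: float = 0.0      # collected, not yet disbursed
        arrears: float = 0.0   # unpaid support owed by the parents

        def collect(self, paid: float, due: float) -> None:
            # Record the payment; any shortfall becomes arrears.
            self.held += paid
            self.arrears += max(0.0, due - paid)

        def garnish_wages(self, wages: float) -> float:
            # Non-payment is treated like unpaid taxes: deducted at source.
            taken = min(self.arrears, wages)
            self.arrears -= taken
            self.held += taken
            return wages - taken   # what the parent actually receives

        def disburse(self) -> float:
            # Pass everything collected through to the caregivers.
            amount, self.held = self.held, 0.0
            return amount

The point of the intermediary shows up in the code: caregivers only ever
see disburse(); collection and garnishment never touch them.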
An issue that arises repeatedly whenever money is involved is that
money, rather than the child, becomes the focus. In current custody
battles, it's not unknown for one parent to demand some level of custody
mainly so as to save or get money. And yet, some form of support
payments need to be made. It's hardly fair for parents to get away
scot-free, so to speak, when they're the ones who created the problem
that drove the child to another home. I'm not sure how that
contradiction can best be resolved. It's easy enough, when caregivers
are personal acquaintances, to see which of them are concerned about the
children and which of them are mainly interested in the check. Possibly,
the difference would be evident to child advocates or other
professionals who could include it in the record. More important,
children have rights in this system, they can contest poor treatment,
and they have accountable advocates whose only job is to make sure they
get good treatment. Possibly, that would keep the brakes on the more
mercenary caregivers. If it doesn't, better methods have to be found.
The point is to approach as closely as possible the goal of a stable
benign environment for the child.
State care of and for children runs into money. Child advocates,
orphanages, and the bureaucracy to handle payments and disputes all need
to be staffed, or built and maintained. The right of a child to a fair
start in life is one of the obligations that must be met, like having a
system of law or funding elections. It has to come near the top of the
list of priorities, not the bottom. In terms of actual money, taking
care of children without a home is not nearly as expensive as, say,
maintaining a standing army. No country seems to have conceptual
difficulties paying for the latter, so with the right priorities, none
would have difficulty with the former either.
General social support for parents is a less acute subject than care of
orphans, but it has an effect on almost everyone's quality of life.
Social support implies that when nothing more than some flexibility and
accommodation is needed, there's an obligation to provide it. Parents
who need flexible hours or alternating shifts should be accommodated to
the extent compatible with doing their jobs. Different employers might
need to coordinate with each other in some cases. If the systems are in
place to do it, that wouldn't be difficult. Another example is
breastfeeding in public. Breastfeeding can be done unobtrusively, and it
is simple facilitation of basic parenting to avoid unnecessary
restrictions on it.
When parents need time off to deal with child-related emergencies,
rather more effort is involved for those who have to pick up the slack.
Unlike parental leave for newborns, but like any family emergency, this
is not a predictable block of time. Family leave for emergencies needs
to be built into the work calendar for everyone, plus recognition that
those with children or elderly dependent relatives are going to need
more time. As with parental leave, those without a personal need for it
who would nonetheless like to take it can use the same amount of time to
work directly with children or the elderly.
Parental and family leave are social obligations borne by everyone, so
the funding should reflect that. Employers, including the self-employed,
would be reimbursed out of taxes for the necessary wages, at a living
wage level. Employers could, of course, elect to add to that sum, but
the taxpayers' responsibility doesn't extend to replacing high wages,
only to providing the same adequate standard of support to all.
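As a toy illustration of that split, here is a minimal Python sketch;
the function and its figures are my own, assumed for the example, since
the text stipulates only reimbursement at a living-wage level with any
top-up at the employer's expense.

    def leave_reimbursement(hours: float, living_wage: float,
                            actual_wage: float) -> tuple[float, float]:
        # Taxes cover leave pay up to the living wage; anything the
        # employer chooses to pay above that is the employer's expense.
        public_share = hours * living_wage
        employer_topup = hours * max(0.0, actual_wage - living_wage)
        return public_share, employer_topup

    # 80 hours of leave at a 15/hour living wage, for a 25/hour employee:
    print(leave_reimbursement(80, 15.0, 25.0))   # (1200.0, 800.0)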
The need for child care in the sense of day care may well be alleviated
by a parent-friendly 24-hour work week. However, there is still bound to
be some need for it since children require constant watching, and since
some parents are single. Day care service could run the gamut from
government-provided crèches where parents can drop off infants or
toddlers on an as-needed basis (a service available to some in
Australia, for instance) to nothing more than a government role in
providing coordination for parents to form their own child care groups.
(I know that in the US the immediate objection to the latter will be how
to determine liability. The US has gone so liability-mad that ordinary
life is becoming impossible. There always has to be someone to sue and
there always have to be bulletproof defenses against suits. I see
nothing fair in that and don't see it as part of a fair society.
Criminal negligence — an example in this context might be undertaking
child care and then gabbling on a phone while the children choke on
small objects — would be pursued like any other crime. Vetting carers
ahead of time, and the knowledge that crimes will be met with criminal
penalties, are the tools to prevent criminal behavior, just as similar
constraints operate in other work. Like other crimes, damages are not
part of the picture. Punishment is. If harm requiring medical treatment
happened, that would be covered by general medical care.)
The government's responsibility to inspect and certify increases with
the formality of the arrangement. Who and what is certified, and which
aspects individuals must vet for themselves, need to be explicitly
stated. Costs
to the taxpayers vary with degree of government involvement. At the low
end, the government acts as a coordinator of volunteers, and has minimal
responsibility, something even very poor countries can afford. At the
high end are government-run day care and preschool centers that are
designed to contribute to the child's healthy growth and education.
Care Facilities
I've mentioned what facilities for children could look like. In the
Money and Work chapter I also said that unemployable people should be
able to live in the equivalent of "on base," as in the military, except
that in the world I'm envisioning that would be a social and public work
corps, not an army corps.
There's a common theme, which is that the government needs living
facilities for a number of people with different needs. The obligations
to care for the elderly, the disabled, children, the ill, and the
mentally incapable all mean that there needs to be an array of
state-funded facilities to provide that care. There's also the
aforementioned work corps, some of whom will need living quarters.
These facilities are probably best not viewed in isolation. Some of them
would probably even be improved if compatible functions were combined in
a mixed use environment. For instance, child care, the elderly, the
disabled, and the work corps can all benefit from contact among the
groups, even if that contact is only for some hours of the day. It's
known, for instance, that children and the elderly both benefit from
each other's society (in reasonable quantities). A "campus," as it were,
that included the many different facilities might be the best way to
deliver services efficiently with the greatest convenience to the users.
Increasing diversity and choice is always a goal in a system that prizes
individual rights.
Anti-Poverty Measures
I haven't discussed transition methods much in this work since the idea
is to point toward a goal rather than accomplish the more difficult task
of getting there. Outright poverty would presumably not be an issue in a
society with a fair distribution of resources. However, outright poverty
is a big issue in almost every society now, and there are some simple
measures that can be taken against it which are worth mentioning.
The simplest way to alleviate poverty is to give the poor money. I know
that P. J. O'Rourke said the opposite, but that was a witticism based on
nothing firmer than his own insight and wit. Actual study and experience
prove otherwise.
The concept of alleviating poverty by giving the poor money seems silly
only because psychological coping mechanisms among the non-poor require
blaming the poor for their own fate. Their poverty must be due to
something they're doing, with the corollary that therefore it's not
something that can happen to me. And what they must be doing is wasting
money. Otherwise why would they lack it? And if they waste money, it's
crazy to give them more without very stringent controls.
The fallacy lies in the assumption that the poor cause their own
condition. That is not always true. Some of the poor are miracles of
money management, squeezing more out of a penny than more liberally endowed
people can even conceive. Nor do they lack energy. It's just that, as de
Kooning put it, being poor takes all one's time. The result is that
without any resources, time, or money, the poor have no way to work
themselves out of the trough they're in.
Now comes the part that's really hard to conceive. Evidence is
accumulating from several national programs in Brazil, in Mexico, and in
India that there's a large gender disparity in who wastes money and who
doesn't. Women spend it on their children, on feeding them and on
sending them to school. Men are less reliable. National programs (e.g.
Brazil's) which have given money to poor women have been very
successful. (Just to
be clear, I doubt this is genetic. Women, as a disadvantaged class,
don't see themselves as being worth more than others, whereas men see
their own needs first. Whether women would still be good guardians of
cash if they had a higher opinion of themselves remains to be seen. For
now and the foreseeable future, poor women are good guardians.)
Also, as the linked sources and an accumulating number of others point
out, reducing the middlemen who handle aid money is a critical component
of success. One reason giving the poor money may not seem to work is
that the stringent controls people like to implement require plenty of
overseers. These middlemen, like people everywhere, tend to freeze onto
any money near them, and so less gets to its final goal. This can be due
to outright corruption, but the bigger expenses are more subtle than
that. Either way, the fewer intermediaries, the better.
The take-home message for a country trying to move toward justice and
alleviate poverty is to provide an allowance to mothers with children,
contingent on staying current with vaccinations and school work if that
seems necessary.
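For concreteness, a minimal sketch of such a conditional allowance, in
Python. The condition names, the attendance floor, and the per-child
amount are all my own illustrative assumptions; the text stipulates only
an allowance to mothers, conditional on vaccinations and schooling where
necessary.

    from dataclasses import dataclass

    @dataclass
    class Household:
        children: int
        vaccinations_current: bool
        school_attendance: float   # fraction of school days attended

    def monthly_allowance(h: Household, per_child: float = 30.0,
                          attendance_floor: float = 0.85) -> float:
        # Pay the mother a per-child allowance while the conditions hold.
        if h.children == 0:
            return 0.0
        if not h.vaccinations_current or h.school_attendance < attendance_floor:
            return 0.0
        return per_child * h.children

    print(monthly_allowance(Household(3, True, 0.92)))   # 90.0
    print(monthly_allowance(Household(3, False, 0.92)))  # 0.0 until vaccinations catch up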
Medical Research
Support for basic research relates to academics and is discussed in the
next chapter, but there are other research-related issues with direct
impact on care.
Applied research, especially on locally important conditions, can
improve care or reduce costs. Research programs also function to retain
creative members of the medical profession, precisely those people who
are likeliest to come up with solutions to problems. That is no less
true in poor countries, which tend to consider all research a luxury. Some
of that feeling may be due to the very expensive programs in rich
countries, but research does not always have to be costly. A few
thousand dollars applied intelligently to local problems can return the
investment hundreds of times over. As for expensive research, a
country without the infrastructure could still make a conscious decision
to provide some support. It could participate by supporting field work
in partnership with other institutions, or by supporting travel for
medical scientists working on specific projects. I would argue that some
support of research is always appropriate because of its stimulating
effect on the rest of the system.
Another aspect of research is its context, which tends to promote some
questions and therefore to preclude whole sets of answers to issues that
are not addressed. Consider the current state of medical research as an
example. Finding cures is much more spectacular than, for instance,
preventing disease by laying down sewer pipes. Nobody ever got a Nobel
Prize for sewer pipes. Yet they've saved more lives than any medical
cure so far. So, despite the utility of prevention, there's a bias
toward cures.
Add to that a profit motive, where far more money can be made from cures
than prevention, and research questions regarding prevention won't even
be asked, let alone answered.
A recent development in medicine has opened another field in which
profit motives hold back rather than promote progress. Genomics has made
explicit something that has always been obvious to practicing medical
personnel: people vary in their reactions to treatment. Sometimes, as
with some recent cancer drugs, the variation is so great that the
difference is between a cure and complete uselessness. But profits are
made by selling to millions, not to a few dozen folk scattered across
the globe. So, despite the fact that some intractable diseases require
individualized treatments, pharmaceutical companies aren't very
interested in bringing the fruits of that academic research to market.
They keep searching for one-size-fits-all blockbuster drugs instead of
what works. In effect, profits are costing us cures.
The government can put its finger on the scale here. It can provide
grants for the questions of greater benefit to society. It can provide
lucrative prizes for the best work. (It's amazing how fast status
follows lucrative prizes in science.) It can endow a few strategic
professorships. The cost of all these things is minute compared to the
social and eventual financial benefits.
A function of government in the area of medical and social research
should be to keep an eye on the context and to counteract tendencies to
skew research toward glamorous or profitable questions. The structural
problems now are that preventive and individual medicine are
underserved, but in another time the biases might be precluding other
questions. Government should act, as always, to counteract tendencies
that don't serve the public good.
- + -
The concept of care follows the same principles as all the other aspects
of society: spread the benefits as widely as possible and the burdens as
thinly as possible without shortchanging some at the expense of others.
Do all this within evenly applied limits imposed by technology and
wealth. It's the same "one for all, and all for one" idea that's
supposed to guide the internal relations of human groups in general.
When it doesn't, it's because people fear they'll lose by it … or hope
they, personally, will gain by depriving someone else. The experience of
countries with good social care systems shows that in fact the opposite
is true. Either everyone wins or /everyone/ loses.
+ + +
Education
Social Implications
I don't remember the actual details of a question Ray Bradbury was once
asked. It was something like what would be the most significant aspect
of the future. Faster than light travel? Immortality? Brain-computer
interfaces? Limitless power from quantum foam? Those were mere details,
he said. The gist of his answer is what I remember. Pay attention to the
first grade teachers. They hold the keys to real progress.
There really is no way to overstate the significance of education to the
survival of any advanced society. More precisely, it's critical to the
survival of any advanced society worth living in. Survival of equitable
and sustainable countries -- or worlds -- depends on education. It is
not a luxury that's interesting only when there is nothing else to worry
about. Without it, crises are guaranteed and the wisdom to avoid them or
solve them will be gone.
Further, important as survival is, it's not the only reason a fair
society has to make education a priority. Given that there's a right to
a living, education is also a bedrock necessity toward putting that
right into practice. No human being can make a living without learning
how to do so.
Last, there's the plain political fact that an informed citizenry is an
essential component of democracy. Without it, there can be no democracy
because the whole system depends on informed decisions, and without the
tools to understand the issues, it's a logical impossibility for voters
to make fact-based decisions.
Having said that education is the most important factor for democracy, a
reminder of the caveat. As discussed in decision making
in the fifth chapter, there are very strong intuitive influences on
decisions that operate well before the slow conscious mind catches up.
They bias the results. No amount of education will prevent those
influences unless decision making situations are explicitly set up to
promote rational elements ahead of intuitive ones.
The tools provided by education have to be much more than merely making
some information available before an election. They include
/accessibility/, in all senses of the word, of information at all times.
They include enough education to understand the information. They
include a sufficiently quiet environment, in a cognitive sense, so that
the information can be heard.
Education has to be defined correctly if it's to fulfill its vital
functions in society. In a narrow sense, education is about learning a
specific body of subjects, such as math or history, in a formal setting.
In the broad sense, and in the one relevant to a discussion of social
implications, education is any avenue by which people absorb
information. That includes formal institutions, but it also includes
media, advertising, stories, games, and news.
All of those things inform people one way or another, so all of them can
play a role in making a society more or less useful to all its citizens.
Because these tools are so powerful in their cumulative effect, it's
important for them to adhere, without exceptions, to the principles of a
free and fair society. If they do, they're likely to facilitate that
society. If they don't, they can destroy it.
It's because information and education in the broadest sense are so
powerful that they have the potential for such a striking
Jekyll-and-Hyde effect. An informed and knowledgeable citizenry is the
vital element of a free, fair, and sustainable society. A misinformed
one is toxic. Unless that is recognized, and unless misinformation is
actively fought, a well-informed electorate is not possible and neither
is democracy. There will always be people who stand to gain by shading
the truth, and it will always be necessary to counteract them.
That means limits on noise are critical to prevent information from
being drowned out. The distinction between the two is covered in free
speech vs. noise
in the chapter on Rights. I'll recap it briefly in the next few paragraphs.
First, the right to free speech has to protect all matters of belief and
opinion, as it does now. Statements about politics, religion, or any
other subjective matter, are protected.
Factual matters, on the other hand, can be objectively evaluated.
There's an effective tool for that called the scientific method, and
once there is a high degree of certainty, it's just plain silly to make
up one's own "facts." It's even worse than silly to push untruth to
advance an agenda.
The rules can err on the side of free speech rather than suppressing
misinformation because protection against counterfactual babble is not
quite as critical as protection for opinions and beliefs. An occasional
untruth has little impact, just as tiny doses of arsenic don't do much.
Repeat doses and large doses, however, damage the body politic. The
brain is wired to accept repetition as evidence of what to expect in the
future, in other words as evidence of something real. Nor is that wrong
when dealing only with Mother Nature. (References for that effect are
legion. It was part of the textbooks all the way back in 1920. It's also
evident in the close correlation between numbers of ad buys and numbers
of votes seen in Da Silveira and De Mello, 2008.)
Furthermore, and this is the most damaging aspect, it doesn't matter
whether one is paying attention or not. There's even evidence that the
effect of repeated messages is stronger if they are "tuned out." That
way they avoid conscious processing altogether, which
is a good thing for those trying to manipulate attitudes.
Discussion of the extent of manipulation by repeated messages tends to
meet with disbelief. The reaction is some variant of "Well, it doesn't
work on /me/." Partly, this is because the whole point of manipulative
messaging is to fly under the radar of conscious thought, so it's hardly
surprising that people don't notice it in action. (And after the fact,
the tendency is to insist that preferences are based on pure reason.)
Partly, it's because it indeed does not work on all people all of the
time. It's designed to work on /enough/ people, and that's all you need
to manipulate whole societies.
Manipulation is a very bad thing in a society predicated on rational
choices and therefore on freedom from manipulation. Nor is it tolerable
in a system without double standards. It requires a perpetrator and
subject(s). It would be ineffective without two classes of people,
unlike reasoned arguments which can be applied equally by everyone.
Because untruths are toxic to the rationality on which a fair and
sustainable society is based, and untruths repeated 24/7/365 are
especially toxic, it is essential to dial back the noise. It is not
enough to provide analysis or responses after the fact. The model needs
to be truth-in-advertising laws. They don't suggest education campaigns
after someone has promised immortality in a bottle and made millions. By
then, the damage has been done. It doesn't matter how soberly anyone
refutes bogus claims. Too few will listen and the scam will work. The
lies have to be prevented, not bleated at after the fact. It has to be
illegal to push falsehoods, and even more illegal to push them
repeatedly. (In free speech vs. noise I discuss how this might be done.)
The idea of limiting free speech in any way is generally anathema
because the dominant philosophy holds that silencing is the only
problem. So long as nobody at all is ever suppressed, then ultimately
the truth will out and everything will be fine.
I agree that silencing is bad and cannot be tolerated. /But drowning out
free speech is no less effective at stopping it from being heard/. Both
ways of destroying free speech are toxic. A drowned voice is lost just
as surely as a silenced one. The truth will indeed eventually prevail,
but unless our actions are aligned with it, its inevitable triumph only
means that we'll be dead. It's a complete fantasy to imagine that people
live in some universe where they have all the information and all the
time to study it, and that they'll carefully check through all the
inputs available regardless of cost or time or status and identify the
best. Call that the myth of the Rational Information Consumer to match
its equally preposterous twin, the Rational Economic Actor.
I've stressed the importance of reducing counterfactual noise before
discussing education itself because no matter how good or well-funded
learning is, it'll fail in the face of a miasma of misinformation.
Seductive lies are just that, seductive. Telling people they "should" be
critical and they "shouldn't" be swayed is a lazy refusal to deal with
the problem posed by objective untruth. It's like telling people they
"shouldn't" catch colds on an airplane. The real solution is to filter
the air well enough to prevent infection, not to put the burden on
individuals to spend the flight in surgical masks and rubber gloves.
Likewise when it comes to education, it's not up to individuals to take
out the garbage before they can begin to find the information in the
pile. It's up to the people fouling the place to stop it.
So, an important part of the function of the government in education is
a negative one. It's administering the methods used to claim falsehood,
administering the evaluation of those claims, and applying the
appropriate corrective action in the case of repeat offenders (as per
the discussion in the Rights chapter).
Effectively administering dispute resolution takes money. All of
education takes money and, as I'll try to show in a moment, cheap
education is the most expensive option of all. Who is responsible for
providing that money? Since education is a right and a social good that
requires considerable coordinated action for no immediate return, it
falls squarely into the functions of government.
Citizens have a right to enough know-how to make a living. Schools,
obviously, should be publicly funded. Any advanced society will need
more skills than a general education, so taxpayers should also fund
students obtaining that certification, which takes place in technical
schools and universities. The research activities of universities also
provide social benefits, sometimes incalculable social benefits, almost
always without immediate payoff.
That leaves only one relatively small component unfunded: learning
purely for fun. My guess is that it would be foolish to leave it out. I
don't know of research proving it (and certainly haven't seen any
disproving it), but I'm sure a mentally active citizenry will not only
pay for itself but will also add those priceless intangibles that make a
society worth living in. Furthermore, the marginal cost of letting the
whole adult population learn at will is bound to be smaller than setting
up an elaborate -- and doomed -- scheme to verify that the student's
goal is certification. Or, worse yet, setting up a whole parallel system
of private "adult education" that will never be able to provide access
to most subjects. A rocket scientist or a scholar specializing in
tercets of the 14th century would be unlikely to attract enough students
at a small private school. At a university, students without a goal
would add only a tiny marginal cost to the total. The social good of
increased choices for all far outweighs the small extra outlay.
Anyone who might be thinking that a nation couldn't possibly afford all
that, on top of the other government obligations, should remember that
the better-run countries do afford all that, and they afford it at
around a third of their GDP taken in taxes. These better-run countries
don't have large armies. It's a question of priorities.
Moving on to the positive aspects of education, and starting with the
most specific case, that of informing citizens, there are a number of
government functions to cover. The most basic of all is
election-specific voter information, although even that has its own
complexities.
Consider California voter information booklets as an example. They're
reasonably good. For each issue they include the arguments of
supporters, opponents, lists of supporting organizations, and a summary
by a legislative analyst. Unfortunately, the only simple and effective
way to fight through the verbiage is to note which are the sponsoring
organizations, and even that takes some inside knowledge. Without
any truth-in-labeling laws for politics, who's to know whether "Citizens
for Tax Reform" promote flat taxes or corporate taxes or taxes on the
importation of exotic parrots?
Further, in a system where objective facts must be supported, the voters
can be shown the extent of that support. I've discussed objective
statements as if they're either factual or not, but since the support is
statistical it's never a fully binary system. In issues subject to vote,
there's likely to be quite a bit of room for interpretation as to what
the facts are actually telling us. In those cases, a simplified variant
of the scholarly procedure is indicated. Proponents can explain how the
facts support their view and provide the confidence levels for that
support. Understanding the meaning of confidence levels would be part of
basic education. (There's nothing about a 95% probability of being right
versus a 75% probability that would be hard to grasp even for a fifth
grader. All it needs is to be part of the curriculum, which I'll discuss
shortly.)
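Since confidence levels do the heavy lifting in that scheme, here is a
minimal sketch, in Python, of the standard normal-approximation interval
such a booklet entry could rest on. The survey numbers are invented for
illustration; only the 95%-versus-75% contrast comes from the text.

    import math

    def confidence_interval(successes: int, trials: int, z: float = 1.96):
        # Normal-approximation interval for a proportion.
        # z = 1.96 gives ~95% confidence; z = 1.15 gives ~75%.
        p = successes / trials
        half_width = z * math.sqrt(p * (1 - p) / trials)
        return p - half_width, p + half_width

    # Say 620 of 1,000 sampled households reported the claimed benefit:
    print(confidence_interval(620, 1000))          # ~ (0.59, 0.65) at 95%
    print(confidence_interval(620, 1000, z=1.15))  # ~ (0.60, 0.64) at 75%

The narrower 75% interval shows the tradeoff a fifth grader can grasp:
the less certainty you demand, the tighter the claim you can make.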
Voter information booklets that go the extra mile to assist voters would
be laid out with the same attention to readable typography as a magazine
article or a web site that wins prizes for presentation of information.
Statements of fact by either side or by the analyst would be accompanied
by the associated confidence level in the ten independent and most
widely cited studies on the subject (or fewer, if that's all there are).
If relevant points that weaken their argument were missing from either
side's summaries, the analyst would point them out. The analyst would
also add a one sentence description to each sponsoring organization,
making clear what their mission is. For instance, "Citizens for Tax
Reform: promote the use of plain language in all tax documents."
Good voter information is harder, but entirely possible, to provide
before a country achieves 100% literacy. Even after that's achieved,
basic information can always be provided via audio and graphic channels.
During transition periods, it's essential that illiterate groups in
societies be informed to the extent possible by whatever means works best.
Information about elections and the government generally are only part
of what citizens need. Facets of private enterprise that have social
consequences must also be equally accessible to public scrutiny. Ease of
oversight is one of the basic concepts, so any aspect of an enterprise
subject to regulation must also be transparent and subject to citizen as
well as regulatory oversight. It's an essential element of making sure
there are many lines of defense against abuse, not just one.
Confidentiality of trade secrets can be an excuse against transparency
in some cases, but the larger the company, the lower the priority of
trade secrets. When not much damage can be done before the problem is
evident, then one can give things like confidentiality more leeway. The
larger the company or the bigger its clout, the more weight needs to be
given to oversight and regulation and the less to the company's
concerns. They can do too much damage too fast to cut them any slack.
Information must also flow easily the other way, from citizens to higher
authorities. Citizen feedback and reports of potential problems are an
important component of ensuring effective regulation. That requires more
than sufficient education to know that one can make a complaint or a
report. It requires personnel at the other end to process it, as
discussed in the chapter on Oversight.
Distributed and active information flow can happen only when the
facilities for it exist. Ben Franklin's concept of public libraries in
every community was an expression of that ideal, and he was right that
they're a vital part of the information system in a free society. Some
people think they've been superseded by the net, but I disagree. The two
complement rather than replace each other. Both are needed. In the
interests of the greatest ease of access, everything that can be
available on the net should be. But to facilitate deeper study,
libraries will always have a function. Not necessarily because they'll
have books in them, but because they have different ways of organizing
information and, mainly, because they'll have librarians. Finding good
information in the glut now available is actually more difficult, not
less, than it was in the days of scarcity. Librarians are information
specialists who have a vital teaching role to play in helping people
find and use information, so they are an essential part of the education
system in the broad sense.
Tangentially, regarding my statement about the complementarity of
different kinds of information delivery, there's growing support for
that view. Gladwell's recent discussion of the US civil rights protests
and "strong" commitment vs. "weak" in the real vs. virtual worlds refers
to some interesting studies. The same difference is very evident in
online teaching, although I hadn't put my finger on it quite so clearly.
Dissertations are appearing (for instance, from 2006 and 2005) pointing
out qualitative differences as online methods become more
widespread and their limitations more obvious. The 2005 reference is
particularly interesting as it studied software developers who, of all
people, are least likely to need face-to-face interaction. Yet even for
them it was essential. Real public libraries with real librarians are
important even in a world where the sum of human knowledge is also
available on the net.
The flow of information requires physical infrastructure. Since the
Enlightenment in the early 1700s it's been understood that free speech
is only possible when narrow interests cannot control the flow. More
generally, the vital resource of communication, the nervous system of
the body politic as it were, has to be free of any manipulation by
special interests. Properly understood, the post office, communications,
and the net are all part of a society's system of information, a system
without which everything breaks down. Like other vital systems, such as
defense or medical care, providing for it and sustaining its
infrastructure has to be a government function. No other entity has the
accountability to society that fulfilling a vital social function requires.
Furthermore, in a fair system everyone must have the same access to
information and the same ability to communicate. Like a postal system,
it must either be cheap enough so that price is not a consideration for
individuals, or it must be free at the point of use, like the internet
before commercialization. That doesn't preclude payment for commercial
services that are not socially vital, but it does mean there must be an
accessible layer for the part that is. A commercial service can never
fulfill that function since, if nothing else, it has to discriminate at
least on the ability to pay.
Thus, communications and information are a function of government and
inextricably tied in with education in the broad sense. The government
needs to provide and regulate the physical infrastructure, whether
that's mail coaches, fiberoptic cables or satellites, and provide a
basic tier of service funded from taxes. That basic tier should include
text and images for information, education, political, legal, and
feedback purposes. It would also include some quota of emails, phone
calls, or letters per person. Organizations or businesses with
requirements exceeding the quota would need to use commercial services.
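One way to picture the basic tier is as a published service definition.
The sketch below, in Python, is purely illustrative; the quota numbers
are invented, since the text deliberately leaves them open.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class BasicTier:
        # Taxpayer-funded layer: text and images for information,
        # education, political, legal, and feedback purposes.
        text_and_images: bool = True
        monthly_messages: int = 100      # emails/letters equivalent (assumed)
        monthly_call_minutes: int = 300  # assumed figure

    def needs_commercial_service(messages_sent: int,
                                 tier: BasicTier = BasicTier()) -> bool:
        # Organizations exceeding the personal quota buy commercial service.
        return messages_sent > tier.monthly_messages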
Another area that shades from public good to purely private is
entertainment. Whether Shakespeare is purely fun or purely education or
something in between is an opinion. Whether Starcraft is good for
teaching strategic skills and hand-eye coordination or is a silly game
is likewise an opinion. As in other matters of opinion, those are best
decided at the community level.
Since there must be complete separation between the state and matters of
belief, taxpayers couldn't be asked to fund religious teaching,
organizations or communications. The one exception is that individuals
can do what they like with their personal quota of free communication.
The idea to be realized is that everyone has access to information and
communication so that they can take full part in the political and legal
life of the country. Communication without much social significance,
such as status updates, does not need to be taxpayer-funded, nor does
commercial communication, which should be paying for itself. But it can
be very hard to draw the exact line between optional messages and
important ones, and it is always more important to facilitate free and
open communication than to scrimp on the marginal cost of a few more bits.
Teaching and Learning
Education in the narrow sense of formal training is necessary to be able
to use the information and communication tools the government should be
providing. Before discussing that, however, it seems necessary to
discuss the nature of teaching and learning, because it's essential to
get these right for a fair or a sustainable society. Misconceptions
about both teaching and learning lead to little education and lots of
wasted money. Even worse, the resulting ignorance leads to ignorant
votes, which includes, naturally enough, ignorant votes on schooling,
and so on in a descending vortex. (This section discusses the same ideas
as in the "War on Teachers" series, I
, II
,
III
.)
Going through the motions of sitting in school doesn't always teach
anything, so the first question is, "What is effective education?" It's
one of those things whose definition changes depending on the goal, but
there is a common thread. We've effectively learned something when,
faced with a quandary, we know how to approach an answer, which facts we
need and where to look them up if we don't know them, and once we have
them we know how to integrate them into a solution. That's true of a
sixth grader solving a word problem in math class, of a business
executive figuring out how to sell a new computer model, or of a
philosopher ruminating on the relationship between different classes of
thought. I'll use the word "competence" to summarize that result,
although it's really much more than that.
There are many active components in learning. Remembering the relevant
facts or sources takes effort, making the relevant connections to find
solutions to problems takes even more effort, and figuring out where the
gaps are in one's knowledge and filling them takes perhaps the most
active commitment to effectiveness of all.
Anything that requires active participation requires willingness, and
willingness cannot be forced. That is a hugely important point that much
of the current popular discussion about education overlooks. So let me
repeat it: learning can only be done willingly. It cannot be forced.
Punitive measures have never worked, don't work, and will never work
because they /can't/ work.
Effective teaching is whatever enables effective learning. That requires
knowledge of the subject matter, certainly, and of how to present it so
that students can learn most easily. But that's just the first step.
After that, the teacher has to pay attention to each student, to try to
get a feel for what they've understood, what they're still missing, and
how best to fill the gaps given the /student's/ way of organizing
information. It feels a bit like an exercise in mindreading, and the
teacher has to care about the student to be able to do it. Understanding
someone else simply doesn't happen without caring. The teacher may not
even care about the student, strictly speaking. They may only care about
doing their work to a professional standard. But whatever the origin,
they have to care. The final and most important step is for the teacher
to take all their own knowledge and understanding and apply it to lead
the student to greater knowledge in a way that gives her or him a sense
of improvement and mastery. It's that "light bulb going on" moment that
fires the desire to continue learning, which is the best gift of education.
One tangential note: the most demanding teaching is for students who
know the least. That takes the greatest skill and commitment. Advanced
students have enough understanding of the field to need less help
integrating new information. The kindergarten teachers are at the
cutting edge of the field. Teaching microbiology to medical school
students is, by comparison, a simple job. (I speak from experience,
although I've never taught kindergarten. I have taught biology to
university students, and it is very clear to me who has the harder
teaching job.)
Another tangential note: knowing the subject matter becomes increasingly
important the more advanced the student. In the US, at least, people
complain more and more about teachers too ignorant to teach. There are
several sides to that: the teachers need to have mastered their
subjects, the administrators need to assign the teachers correctly, and
the conditions of teaching need to be good enough to retain people with
a choice of jobs. Only one of those depends on the teachers themselves.
I'll discuss conditions of teaching below.
One objection to the discussion of what's involved in teaching is
probably that it's irrelevant to the real world because only the very
best teachers do more than know the subject matter. Everyone who teaches
knows that isn't so since everyone who teaches does it to some degree.
Parents showing their children how to button a shirt are doing it.
Anyone who doubts it should pay close attention to themselves when
they're trying to help someone they care about learn something. There'll
be that distinct feeling of trying to get inside their minds in order to
figure out how best to explain it. The main difference is that
non-teachers do that briefly, generally only with one or a few people at
a time, and usually only for relatively simple concepts. Multiply the
effort and attention involved times the number of students in a class,
the amount of the subject that needs explaining, and the number of hours
teaching, and one can start to have some concept of what it is that
teachers do.
To get a more complete concept, think about doing a teacher's job. The
closest thing to it that most people have done at some point is public
speaking. That addresses the first step, presenting information to a
group of people. The act of doing it requires intense concentration
because one's mind has to be doing several things at once: keeping the
overall presentation in mind, speaking coherently about the current
point, and simultaneously preparing the next few intelligent sentences
while the current ones are spoken. A practised teacher will also, at the
same time, be gauging the level of understanding in the audience,
modifying the presentation on the fly as needed, and answering questions
as they come up. When the students being taught are young, the teacher
will also be keeping order and checking for untoward activities, all
while not losing track of any of the above six simultaneous aspects of
the act of teaching. That level of concentration and engagement is
standard during all of in-class time. It's not special or something only
the best do. Some do it better than others, but every teacher does it.
That is fundamentally different from many other jobs. The level of
preparation, concentration, and energy required, all at once, is very
unusual. Aspects of it occur elsewhere, whether in insurance adjusting,
software coding, or bricklaying, but the need to have all those factors,
all functioning at a high level, all at once is uncommon. The closest
parallel to in-class time is performance art. As with performance art,
the great majority of time spent on teaching happens outside of class.
And, also as with performance art, it won't amount to anything unless
the individual involved puts a great deal of her- or himself into it.
All of the above should make clear that teaching requires even greater
active involvement than learning. Teaching, like learning, also cannot
be forced. Punitive measures have never worked, don't work, and will
never work because they /can't/ work.
Trying to force good teaching would be like taking someone by the scruff
of the neck and banging their head down on the table and yelling,
"THINK, dammit!" It is not going to happen no matter how much it seems
like the simplest solution.
Good teaching cannot be forced. It can only be enabled. Luckily, that's
not difficult.
Humans actually enjoy learning and teaching. It’s probably got something
to do with our 1400 cc or so of brain. It's evident even in small
things, like preferring new places for vacations, and watching new
movies rather than re-runs. It’s more fun. And teaching is fun, too.
There’s nothing quite like watching that “Aha!” moment light up
someone’s face. To the extent that there's tedium involved, and there
definitely is, a proper start on the road to learning puts that in the
context of the skills gained and it seems like quite a small price to
pay for happiness. In the same sense as the difference between eating
and force-feeding, discipline is needed to learn but nobody has to be
forced to learn or to teach. On the contrary, force makes people /not/
learn (which currently happens too much). All that’s needed for
education is an unstressed environment with enough time and resources,
and it happens.
What does "enough time and resources" actually mean? First and foremost,
it means small class sizes. It's inevitable that the amount of attention
a teacher can give any individual student is inversely proportional to
the number of students requiring attention. In a class of thirty-five or
forty children, a teacher's time will be taken up with keeping order.
Education will be a distant second. The fact that many children do
nonetheless learn in such environments shows how strong the tendency
toward learning is. Small tutoring groups of fewer than five or six
students measurably result in the greatest amount of learning. That's
why students having trouble with a subject get tutoring. It's not
because all tutors are brilliant teachers. It's because the format
allows for a great deal of individual attention. Class sizes much bigger
than twenty or so are getting so far from the optimum as to be problematic.
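The inverse relationship is easy to put into numbers. In the toy model
below (Python; the overhead fractions are my own illustrative guesses,
not measured data), per-student attention collapses as class size grows
because keeping order eats a growing share of the period.

    def minutes_per_student(period_minutes: float, n_students: int,
                            overhead_fraction: float) -> float:
        # Attention left after order-keeping and logistics, shared evenly.
        return period_minutes * (1 - overhead_fraction) / n_students

    # A 50-minute period:
    print(minutes_per_student(50, 15, overhead_fraction=0.2))  # ~2.7 min each
    print(minutes_per_student(50, 40, overhead_fraction=0.5))  # ~0.6 min each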
Another important facet is time. I've mentioned the time needed to give
individual attention, but there's also the time needed simply to
effectively present and absorb information. A student can't study
constantly. The neurological processes need time to assimilate and
organize the information on a biochemical level. There's an optimum
beyond which more courses or more studying means less useful knowledge
in the end. That's why longer school years don't automatically translate
into much of anything. Education is not the same as manufacturing poppet
valves.
Similarly for teaching. As a high performance activity that nonetheless
requires a lack of stress, there's an optimum to how much actual
teaching can be done in any given day before quality starts to suffer.
It's not a desk job. The optimum is nowhere near eight hours a day.
Three hours every other day, up to a maximum of five hours on occasional
days, is closer to the standard that needs to be applied. (That's
actual, in-the-class teaching, not supervising study halls or the like.)
To those who want to insist that's preposterous, I suggest they spend a
semester or two teaching.
Basic resources are essential. Teachers and students lacking comfortable
buildings, pencils, paper, and basic texts are being asked to do the
work of learning in a vacuum. Given human ingenuity, sometimes education
happens even in that environment, but it is never an optimum use of
either teacher time or student brain power.
Advanced resources are nice, but the amount of improvement they confer
approaches zero for increasingly large sums of money. They may even
cross over into negative territory if the bells and whistles serve to
distract the student from the primary task of learning and integrating
knowledge.
The teachers themselves are the most critical resource, and people are
aware of that. There's much talk about how to be sure that schools hire
only "the best." Unfortunately, although the intentions are good, that
goal is a logical impossibility. Most people will have average teachers.
That's the definition of "average." /Most/ people can't have "the best."
That is why it's crucially important to make sure that "average" is also
adequate. Small class sizes, for instance, are necessary for average
teachers to be good teachers.
The other critical flaw in the concept of hiring "the best" is that we
(and I do mean "we," all of us) aren't omniscient enough to discern the
best for a hugely complex task that requires commitment and caring as
well as above-average intelligence. A system built on hiring "the best"
will excel mainly at justifying hires after the fact,
not discerning good ones beforehand. The way to find good teachers is to
require good training beforehand and then judge by actual performance on
the job. A probationary period, such as three months, would wash out the
least capable, including those who looked good only on paper. (I'll
discuss evaluation of teachers' skills in a moment.) Once again, if we
try to avert the worst and judge by actual performance, rather than
delude ourselves that we can discern the best, there's a fighting chance
of success.
Salaries for teachers and administrators are a major expense in any
educational system. Administrators tend to be background noise in the
mix and the public often hardly notices them. Like administrators
generally, they are good at growing their empires and their salaries far
beyond any optimum. In a sustainable system, they would be subject to
the same limitations and controls
as all administrators. One of those limitations is that their salaries
are directly proportional to those of the teachers they work for, so
runaway pay should be less of a problem.
Teachers' remuneration comes in two forms due to the unusual nature of
the job: salary and security. The actual money tends to be low compared
to other work requiring equivalent training and commitment.
Before continuing, I'd like to dispel the illusion that extra time off
is part of the compensation for teachers. Teachers always and everywhere
spend more time on their work than the hours that appear on paper.
Writing exams, grading, helping students, preparing class work, setting
up labs, and all the hundreds of other outside-of-class aspects of the
job cannot be dropped simply because enough hours have been worked. The
jobs have to be completed no matter how long they take. Just as one
example, under current conditions, it's a rare teacher at any level,
school or college, who doesn't do job-related work every evening
and weekend while school is in session. That amounts to around a 70-hour
work week for those 36 weeks. That's 2520 hours per year. That doesn't
count work over the summer, which is also standard for nearly all
teachers. For comparison, a 40-hour-per-week job with two weeks of
vacation and ten paid holidays comes to 1920 hours. It's because of the
unpaid overtime that long vacations are not only justified, they're
essential. Work that
takes away one's evenings and weekends rightly compensates by providing
a much longer block of time off.
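The arithmetic is worth making explicit. Here is a quick check in
Python, using only the assumptions stated above (70-hour weeks over a
36-week school year, versus a 40-hour job with two weeks of vacation and
ten paid holidays):

    teacher_hours = 70 * 36                    # 2520 hours while school is in session
    weeks_worked = 52 - 2                      # two weeks of vacation
    office_hours = 40 * weeks_worked - 10 * 8  # minus ten 8-hour paid holidays
    print(teacher_hours, office_hours)         # 2520 vs 1920

And that still leaves the teacher's summer work uncounted.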
In a more balanced educational system, unpaid overtime would not be
expected, and teachers really should get more free time if their
salaries are below average for their level of training.
The second aspect of compensation is tenure. The word tends to be
misunderstood to mean sinecure, whereas its real meaning is closer to
"due process rights." A teacher with tenure can still be fired, but only
for very obviously not doing their job. Tenure means that evaluations
have to be done strictly on the merits, and that any personal biases
have no place. Anybody who wants to apply a bias will have to find some
objective excuse for it and be ready to support it during outside
review. That's not nearly as simple as making one's own decisions
without much chance of challenges. Tenure forces supervisory personnel
in academe to restrain their personal biases. It's not some ivory tower
luxury that insulates eggheads from the "real world." It's an essential
component to make sure that the lineworkers, as it were, can do their
jobs instead of worrying about their bosses. (Would that be a good idea
everywhere? Absolutely. What we need is more tenure, not less. But this
is a discussion about teaching, so I'll limit it to that.)
The security conferred by due process rights is a big part of an
unstressed environment, which is essential to good teaching. Academe
differs from other work in one important respect. Its "product" is
education, and if it's not done right the damage lasts forever. But the
damage is very slow to appear -- decades, usually -- so there are few
personal consequences for getting it wrong. In this it's unlike other
critical jobs, such as nuclear reactor control room technician. An
incompetent technician will get washed out even if they're good friends
with the boss because the stakes are too high. But a competent teacher
who is not friends with the principal is in deep trouble.
Without tenure, it's too easy to follow personal biases. Skill becomes a
secondary consideration. (I know that nobody will admit to being less
than objective. The evidence tells us that people are, no matter what
they admit.) Yet skill in teaching is the critical factor in effective
teaching, not relationships with important people. Without tenure, only
exceptional supervisors will disregard networking when handing out
perqs, promotions, and even pink slips. Exceptions aren't enough to keep
the system working, and skill becomes an accidental feature. With
tenure, teachers can focus on teaching instead of devoting their best
energies to office politics.
So far, the simple matter of enabling education involves small class
sizes, secure, not-overloaded teachers, and good basic facilities. Those
few factors by themselves may explain why people are so dissatisfied
with the quality of schooling. None of them are the cheapest option.
In education, you don't exactly get what you pay for. The optimum amount
intelligently applied returns many times the investment. Too much
spending generates decreasing marginal returns. Too little spending
yields less than nothing because ignorance is expensive. It's a
nonlinear relationship, but people who want education on the cheap can
be sure of landing in the third category.
The final issue is how to see whether students have learned and teachers
have taught: in other words, how to evaluate results. This is another
area of great
current ferment because the public at large wonders uneasily, as a
President famously put it, "Is our kids learning?"
As far as students go, we know how to do evaluations. Grades based on
demonstration of active knowledge are what work. If the education system
isn't dysfunctional to begin with, in other words if teachers can do
their jobs with the required autonomy, they're the ones with the skill,
training, and familiarity with the students to best evaluate them.
In contrast, a note on what doesn't work: mass tests of passive
knowledge. The US school and university systems are increasingly
engulfed in national multiple choice testing schemes that are incapable
of measuring competence. The top third of students who pass through the
system go on to college. Those are identified by the tests as the best.
So many of them have such poor ability to integrate knowledge that
colleges throughout the country have to keep expanding their remedial
courses or raising their entrance requirements. People sense that the
tests we have aren't doing the job, so the obvious solution is … to
require more of them. It's getting to the point where tests aren't
measuring learning. They're replacing it. That definitely does not work.
As to evaluating teachers, I hope the foregoing discussion has made
clear that teachers are not the sole ingredient determining student
success. The conditions under which they teach are more important than
the skill of the individual teacher. I mean that literally. An average
teacher with a class of fifteen will be able to transmit a good bit of
knowledge to the students. A brilliant teacher with a class of forty
will only be able to affect a very talented few, if that.
However, even under ideal conditions teachers need oversight and
feedback like everyone else. There are three different approaches that
don't work. The traditional one in the US is evaluation by the teacher's
principal, and the new ones either incorporate student feedback or use
tests of student knowledge to judge the teacher's merit.
The traditional principal-centered methods of evaluation aren't weeding
out bad teachers or rewarding good ones. Principals have too many
chances for intuitive and personal bias. Without counterweight from
other authorities of at least equal status, the system will suffer from
too many teachers retained on factors other than their skill. There's a
reason why unions insist on seniority as the sole criterion. At least
that's objective and not dependent on the principal. The current focus
on tests is also groping toward objectivity. However, selecting an
outcome by throwing darts at a board is also objective. It's just not
very meaningful.
Student evaluations of teachers were all the rage for a while but are
starting to recede as people grasp the fundamental problems with that
approach. Teachers can affect the results by giving students what they
want, which is fun and easy classes. Learning becomes secondary. By far
the simplest, least-work method for teachers to reduce the time they
have to spend on students and the flak they catch from them is to give
them higher grades. Grade inflation has become so widespread and severe
that everybody is noticing. And it's gradually dawning on people that
it's not a good thing.
There's also a fundamental problem with student evaluations which would
exist even in a perfect world. Students don't know enough to know what
they need to learn. That's why they're taking classes. It's only after
they've learned and used the knowledge that they can even begin to
evaluate the teaching. Student evaluations after ten or twenty years
might give very useful insights, but they could never provide more than
partial feedback on the current situation.
Student evaluations do provide valuable and necessary feedback, but it's
essential to keep them in perspective. They are useful as a tool for
teachers to modify their own teaching style or methodology, but they can
never be a factor in job evaluation without undermining the mission to
educate students.
The newest hot topic is testing students to see how much they've
learned. If not very much, then the thinking goes that the teacher must
not be very good. The assumptions behind the approach are ludicrous:
that multiple choice tests indicate effective learning or competence,
that the teacher rather than the conditions of teaching is more
important for student learning, that the student is not the primary
determinant of student learning, and the list could go on. (There is an
introduction to the topic and some further links in War on Teachers I.)
Most fatal of all, though, is the flaw in the very concept of measuring
teaching. Only tangibles are quantifiable, and only quantities can be
measured. But teaching, especially teaching at the school level, has
essential components that involve attention, understanding and caring.
The hard truth is that there is no way to measure those. They can be
evaluated to some extent by other human beings with similar capacities,
but they cannot be measured. If we insist on measuring how "good" a
teacher is, all we'll succeed in doing is throwing out the very thing
we're trying to measure: effective teaching.
All that said, it does not mean there is no way to evaluate teachers. It
means there's no simple objective method of doing so. I think most
people actually know that, but they keep hoping a few multiple choice
tests will do the trick because that would be so nice and cheap. That
shortcut doesn't exist. What can be done is to try to ensure the
objectivity of the evaluators, and to have clearly stipulated methods
that sample the totality of a teacher's work.
The method doesn't even need to be invented. It's been used in similar
forms in many places. The important factors are inspectors, a pair or
more who can act as a check on each other, who come from outside the
district and are not any part of the teacher's chain of command, who
evaluate the administration as well as the teachers, who make class
visits, and who, when evaluating teachers, look at all of their work,
such as syllabuses or lesson plans, exams, grading, student work and
in-class interest, and so on. The inspectors must themselves be former
or current teachers. There's no need for the visits to be a surprise.
Teaching ability improves with practice, but it can't be faked. Even
months of warning won't turn a bad teacher into a good one. Still, a
few days' warning is plenty; any more than that creates too much
motivation to put on a show.
Assuming the inspectors are professional and honest, which could happen
in a system with transparency, easily accessible results, and feedback,
then such complete evaluations will actually provide usable information
to the whole system, rather than mere numbers which could just as well
have come off a dartboard.
A real method of evaluation is not cheap, any more than education itself
is. Like education, it returns many times the investment by improving
the larger system. And, also like education, the methods that seem cheap
are actually the most expensive. They can, by themselves, kill off good
teaching.
Formal Schooling
Education has many purposes: ensuring general literacy, providing
training and certification, enabling the advancement of knowledge, and
providing opportunities to broaden one's mind or learn new skills. Some
learning is best done full time in a school. Some is most useful as an
occasional class tucked in around the other demands of life. Educational
institutions need to have the flexibility to match diverse purposes and
interests to provide maximum functionality and choice.
First and foremost is basic education. Democracy itself is hardly
possible without literacy. Increasingly in the technological age it's
not possible without numeracy either. Democracy could flourish in a
world with nothing but clay tablets and wooden wagons, but not without a
basic level of knowledge and information.
At its best, schooling can accomplish what's needed these days. How and
what to teach is known and many students around the world graduate with
a good education, which proves that it can be done. There's a common
thread running through what works and what doesn't.
Studying a vocabulary list for a multiple choice test teaches nothing.
The words themselves are soon forgotten, and the ability to use them in
context never appears. Following a set of steps for a specific set of
math problems teaches nothing. The steps are forgotten right after the
exam, and the notion that it ever had any application to the real world
never arises. The examples could be multiplied, but the gist is always
the same: getting the right answers from students does not, by itself,
prove that they learned anything. Knowledge must be actively used and
integrated for a person to "own" it and to be able to use it in life.
Thus, for instance, writing and revising essays teaches an enormous
amount in any number of subjects. Working through mathematical word
problems or using math elsewhere, such as shop classes or in a model
rocketry project, teaches analytical skills as well as integrating
arithmetic, algebra, and geometry. Labs are essential in the sciences,
in language classes, and wherever the ability to turn knowledge into
actions is required. Passive knowledge never turns into active knowledge
without practice.
The shared characteristic of everything that works is that it takes a
great deal of student and teacher effort and time. There is no way to do
it effortlessly. There is no way to do it quickly. There is no way to do
it in large classes. There is no way to do it with overworked teachers.
There is no way to do it cheaply. In the longer run, it pays for itself
many times over. If you look no further than this year's budget, there
seem to be plenty of corners to cut. But if the things that actually
make education work are cut, then children learn much less than they
could and have a harder time of it than they need to. And some learn so
little they go on to become society's most expensive adults.
It's important to remember that education, real education, can't be done
on the cheap. Like other vital services, from police to medical care, a
sustainable society, if it wants to continue sustaining itself, has no
choice but to provide at least the bare necessities of all of them.
Microscopes may not be necessary for every student in biology classes,
but small class sizes and well-trained teachers are an absolute
requirement in every class. The intangible is almost always more
important than the bricks and mortar.
Getting back to the large amount of required knowledge, two factors work
together to alleviate the difficulty of learning. A child's earliest
teachers have to know how to transmit the excitement of mastery so that
the tedium of learning is in its proper context. And the earliest phases
of instruction have to be much more individually tailored than they are
now. Day care workers are highly specialized teachers, not babysitters.
Their training needs to include skill in providing more advanced
learning opportunities, seeing which children are eager to use them, and
helping them do so to the extent of their interest.
The idea isn't to generate a raft of precocious little geniuses for
parents to boast about. It's for children to learn what they're
interested in when they're interested. Learning to read at the right
time, for instance, is downright fun. Learning as an adult is a real
slog. That's an extreme example, but there is also variation in
readiness at young ages. Some children have an easier time learning to
read at three and by the time they're six it's actually more difficult
for them, not less. Others might be better off waiting till later than
six. There does need to be a maximum age by which a child must start
school because there are parents who will keep them out forever to save
money or to prevent cultural taint. At some point, all children are
ready to begin formal learning. The consensus among educators seems to
be that that age falls somewhere between five and seven. The same
variation in development exists for numeracy, for hand-eye coordination
and physical skills. All of them happen on their own schedule in each
child, and learning is easiest when fitted to the child and not the
other way around. It's an extension of the principles of flexibility and
choice adapted to the youngest citizens.
The idea of instruction based on individual needs should then be
continued right through the whole system. Children will reach the first
grade level at different ages. That doesn't have to be a problem if they
can also enter first grade at different ages. The grades need to be
defined by subject matter, not the age of the children. A six-year-old math
genius could be in the same tenth grade math class as a twenty-year-old
with a different level of aptitude. That six-year-old might be just
starting to read, and during recess be part of a playgroup that's mainly
seven-year-olds. There is no child-centered reason to segregate children
rigidly by age into one regiment for all activities.
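To make the bookkeeping concrete, here is a minimal sketch, in Python
and with every name and number invented, of what subject-keyed placement
might look like in a school's records. The point is only that enrollment
checks the student's level in the subject, never the birth date:

    from dataclasses import dataclass, field

    @dataclass
    class Student:
        name: str
        age: int   # kept for playgroups and the like, not for placement
        levels: dict = field(default_factory=dict)  # subject -> grade level reached

    def can_enroll(student, subject, class_level):
        # Placement depends only on the level reached in this subject.
        return student.levels.get(subject, 0) >= class_level - 1

    prodigy = Student("A.", age=6, levels={"math": 9, "reading": 0})
    print(can_enroll(prodigy, "math", 10))     # True: ready for tenth grade math
    print(can_enroll(prodigy, "reading", 2))   # False: still starting to read

A real registrar would need far more than this, of course. The sketch
only shows that age and academic standing are separate pieces of data.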
If we don't waste the brain power of children at the earliest stages of
learning, there's a chance they'll be able to absorb what they need to
become effective citizens by the time they're adults. Because that's
another area where the system now falls short. Children don't learn nearly
enough. Too many graduate with so little literacy that they associate
having to think only with failure. The same goes double for numeracy.
Practical skills are hardly part of the curriculum, except for some
rudimentary sex education, or shop classes for students on a technical
track. Basic life skills such as managing finances, nutrition, fitness,
how to interview for a job, how to fix small appliances or simple
plumbing or how to know when the mechanic is making sense are too
lowbrow to even mention. The idea that skills are low class dates back
to the days when education was only for people who had personnel for
that sort of thing. An egalitarian society should be providing an
education that helps everyone through life, including all aspects of it.
Parenthetically, it should go without saying that technology changes and
specific skills learned in school might become obsolete. One could learn
how to replace hard drives in computers only to graduate into a world
where memory storage has shifted to photonic crystals. But the type of
analytical thinking applied to solving practical problems does not
change, does take practice, and can be taught. It's also different from
the analytical thinking needed in math or science or philosophy. It
needs its own training.
Numeracy is another big deficiency in current education. Students
achieve better math skills in some countries than in others, but from a
citizen's perspective it's not adequate anywhere. Math skills are taught
in isolation and the usual refrain later in life is "I never used that
stuff even once." A great deal of time and mental effort is wasted for
nothing. Actually, for less than nothing because the process makes
people allergic to numeracy, which is a dangerous loss.
The problem in math as in other academic subjects is that the curricula
are established by experts, and they know what is needed to eventually
be successful in their field. But children don't need the foundational
elements for an advanced degree. They need a sampling that confers
useful skills. In math, they don't need to factor polynomials.
They need the fundamentals of logic, they need arithmetic, basic methods
of how to solve for an unknown, basic statistics and how to evaluate
numbers presented to support an argument, and where to look when they
need methods they haven't learned. None of these are impossible to teach
to children. The curriculum just needs to be tailored to them instead of
to a theoretical requirement for an advanced university degree. Schools
need to prepare children for the skills they'll need in life. University
prep courses can prepare them for the knowledge they'll need if they go
on for advanced degrees.
Besides literacy, numeracy, and practical skills, children need enough
introduction to the larger picture of human life on earth so they at
least know how much they don't know. They need enough introduction to
history, other cultures, art, music, chemistry, biology, and geology to
have some idea of what the discussion is about when these subjects come
up in the news or in entertainment. And also so that they have some idea
whether that's where their talents lie and they want to learn more about
them.
In short, there's a great deal for children to learn. It should be
obvious why it's essential not to waste their time and to tailor the
curriculum so they learn what they need rather than what they don't. I
suspect there'd be a great deal more willingness among children to learn
things whose practical application is made clear to them, and they'd
absorb more with less effort.
There's so much to learn that school is bound to be a full time
occupation in the early years. But just as the transition into academic
learning should be gradual and individual, the transition to work should
be the same. Furthermore, there's no good reason why one ever has to end
completely and the other to take over all one's time. Some people might
stop all formal learning. Others might continue taking classes their
whole lives, whether for fun or profit. Work and education could fit
together in larger or smaller proportions throughout life. Equality does
not mean that everyone does the same thing. It means that everyone can
do what works for them. The educational system can be an important
facilitator of that goal.
A flexible educational system would not have the same rigid barriers
between stages and institutions that there are now. Day care would turn
into school, which would shade into college, and some of those would
also be loci of advanced research. All of them would be open to anyone
who fulfilled the prerequisites. The more experienced the students, the
more use could be made of distance methods, where they're effective.
Obviously, some age stratification is inevitable and good, since
children at different ages learn differently. But that's the point. The
stratification should be based on what's best for effective learning
with the least effort, not on a chronological age.
Certification is a small part of education, but it's nonetheless
important. The general idea is to prove that the bearer completed a
required body of courses and practice within an appropriate period of
time. Whether that was done as a full time student or part time is not
the issue. All that matters is how well it was done, something that
should be recognized at the school level as well as at the university.
Just as there's no "Bachelor's Equivalency Degree," why should there be
a high school equivalency degree? These things mark off levels of
education, and that's all they need to do. One level might be required
as a prerequisite to further study, but that doesn't mean it somehow
makes sense to put people in a straitjacket as to the time required to
reach it.
Another way to facilitate flexibility in the certification process is to
separate it from education in the broad sense. The basic level of
knowledge a citizen needs should come with a high school degree. Call it
a General Education degree to make the learning involved explicit rather
than the manner of acquiring it. On top of that could come certification
for specific lines of work, be it plumbing or medicine or accounting,
and those would include courses directly relevant to that work.
Further general education, however, would be a matter of individual
predilection. It's a waste of everyone's time, and in a taxpayer-funded
system also of everyone's money, to expect adults to absorb information
for which they see no use. I've taught college long enough to know that
for a fact. Children are capable of absorbing everything, when they're
taught well, but by the time they're 18 most people are more interested
in getting on with the business of life. General education is something
they come back to, and when they do, then they can really make something
of it. A system that gives people the time and infrastructure to do that
is the one which will see the flowering of intelligence and creativity
people hope for from general education.
One aspect of school life could be worse in a flexible system than it is
now. Bullying is a consideration whenever children of different physical
abilities are thrown together, so it has the potential to be worse if
children of different ages are together. It's probably clear by now that
attacking children, including when it's other children doing it, and
including humiliation or other non-physical attacks, is just as illegal
as similar behavior among adults. It's not a minor matter just because
adults aren't the victims. It has to be suppressed as vigorously as any
other criminal behavior. Prevention should be the first line of defense, using
proven methods of education on the issues, reporting, and enforcement.
It should also be appropriately punished if and when it does occur. This
is not something to handle leniently because "they're just kids."
Bullying is the beginning of adult patterns of behavior that are against
everything a fair society stands for.
Flexibility in education and its adaptability to different schedules
could also be promoted by the academic calendar itself. Scheduling may
seem like a pathetically pedestrian concern, but just as with other
administrative trivia, the effect in reality can be greater than that of
high-minded principles. Academic schedules now hark back to medieval
times, when the students needed to attend to their "day jobs" of helping
with farming during summer and early fall. Thinking purely in terms of
academics, though, another arrangement would work better. The school
year could be divided into three 3-month semesters with one month
between each one. Short or intensive courses could be taught during the
one month intersessions. Three-month classes packed into one month would
count the same as the longer versions for both students and teachers. A
school year would be eight months (two three-month semesters plus two
intersessions, for instance), and any given semester or any of the
one-month periods could be used for vacations in any combination. This
would allow school facilities to be used on a continuous basis without
pretending individual students can be constant learning robots. Major
holidays would be treated as they are in other jobs, with one or two-day
breaks. It would allow academic scheduling to fit more different kinds
of work schedules, which would be desirable in a system aiming to
promote the widest latitude of individual choices.
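The arithmetic of that calendar is easier to see laid out. Here is a
minimal sketch, with the month counts taken from the proposal above; the
sample student plan is only one of many ways to fill an eight-month year:

    # The facilities run year round: three 3-month semesters, each
    # followed by a 1-month intersession, fill the 12-month calendar.
    year = [("semester 1", 3), ("intersession", 1),
            ("semester 2", 3), ("intersession", 1),
            ("semester 3", 3), ("intersession", 1)]
    assert sum(months for _, months in year) == 12

    # One possible eight-month school year: two semesters plus two
    # intersessions, leaving a semester and an intersession free for
    # vacation or work in whatever combination suits the student.
    plan = [("semester 1", 3), ("intersession", 1),
            ("semester 2", 3), ("intersession", 1)]
    assert sum(months for _, months in plan) == 8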
Whenever the government has an essential function there's the question
whether it can also be appropriately carried out by individuals or
commercial interests. For defense, the legal system, elections, and
regulation the answer is "no." For care of the vulnerable the answer is
"yes." For education, I think the answer is "maybe."
Having parallel systems of education results in two separate problems.
One is financial. When the rich can buy themselves a completely
different education from the one available to people with median incomes, that promotes
artificial divisions in society based on money alone and it fosters
inequality. That is unequivocally a bad thing. At the very least, where
private education is available, there needs to be a rule that all people
on taxpayer-funded salaries must send their children to the public
schools. Possibly, the same rule should extend to everyone whose wealth
or income falls into the top ten percent.
The other problem is cultural. If a given body of knowledge is needed
for an informed citizenry, and that citizenry is essential to an
equitable and sustainable government, it makes no sense to say that some
people can pull their children out of the community. Because children
are the group affected. Alternate schooling for cultural reasons is not
about universities. It's about parents who want to set their children on
a different path than the larger society. I would argue that parents
don't have the right to make that choice for their children.
Unfortunately, the children don't have the knowledge to make an informed
choice for themselves. It's a very difficult question.
On the other side, private schooling can provide a valuable check on
public education. It can experiment with new methods, like the
Montessori and Waldorf schools. If the public schools start slipping, it
can provide a vivid counterpoint showing how far they've gone.
One way of making sure that the same body of knowledge was available to
all schoolchildren would be to have the same requirements for all types
of schools, whether public or private. More could be taught in private,
but not less. They would also all be subject to the same evaluations and
inspections.
However, that requirement would be costly to implement for home
schoolers. Home schooling has unique issues of teacher and student
competence. The latter could be assessed by requiring home schoolers to
take the same exams as public school students, graded by teachers for
extra pay.
Teacher evaluations, however, become a real quandary. Holding home
schools to the same standard is fine in theory, but any system of valid
teacher evaluation will be resource-intensive. When applied to a
micro-scale situation like home schooling, the cost per pupil will be
huge. Should taxpayers have to pay that bill? If they don't, and the
costs have to be borne by the home schoolers, none but the richest would
ever have that option. I don't see an equitable solution to the dilemma.
Maybe there wouldn't be very many home schoolers if they were held to
the same standards as public schools? Maybe they could all be convinced
to use the distance learning component of public education?
In most countries, there would be a distance learning option to
accommodate children who couldn't reach schools. At the school level,
I'd envision distance learning as a supplement to, not a replacement
for, in-class learning because of the differences in commitment that I
discussed earlier. Some part of the child's school year, such as at the
beginnings and ends of semesters, would be in regular classes while
living in boarding facilities.
Turning now to what basic education entails on the teacher's side, it
clearly would need competent teachers if children's capacity to learn is
to be fully utilized. In the US, there might be the feeling that, right
there, is a fatal flaw in the whole idea. The current feeling is that
far too many teachers are incompetent and need to be kicked. Sometimes
that's to make them improve, sometimes it's to kick them out, but either
way, they need "better standards."
People have it backwards. Work that requires a high level of caring and
personal commitment cannot be improved by kicking the person doing it.
The time to set standards is /before/ they're hired, not after.
Afterwards, you can only make sure they keep meeting them. And if the
objection is that you can't get good teachers nowadays, well, that's the
problem. If the conditions of work make too many competent people go
elsewhere, then the thing to change is the conditions of work, not to
try to force competence where none exists.
To get the good teachers the system I'm discussing requires, they need
to have relevant certification requirements and real evaluation on the
job. They need adequate salaries. They need control over their own jobs
so that they can do what they're expert at, teaching, without
interference. They cannot be construed as the cheapest solution to a
shortage of other staff or as a supply cabinet to make up for budget
shortages. And they need small class sizes in schools and in other
classes where the students are beginners. Ensuring good teaching is
simple. After hiring good people, after having real evaluations to make
sure they stay good, and after providing them with good working
conditions, the only thing anyone has to do to ensure good teaching is
to get out of the teachers' way.
That brings me to the administrators in the system. Teachers are not
necessarily good administrators, or vice versa, so, except for the
hybrid position of principal, there doesn't need to be a requirement for
educational administrators to have been teachers. There do need to be
the usual limits on administrators. In keeping with the principle that
power devolves to those actually doing the job, administrators
administer. They submit accounting (based on teacher data when
appropriate), settle disputes, hire janitors, order supplies, make sure
the school is prepared for emergencies, that there's a nurse available,
and so on. They're appointed by the usual process
for administrators: either random selection from a qualified pool for
the higher offices or simple hiring for lower offices. They're subject
to the same oversight and recall potential. As with other
administrators, their salary is tied to that of the people they work
for, in this case teachers. They don't tell teachers how to do their
jobs, hire them or promote them. That needs to be done by experts in the
field, who are other teachers. In the interests of objectivity, teachers
from outside the district should have the main voice in those decisions,
and they shouldn't include the same ones involved in evaluations of the
teacher concerned.
Advanced Education and Research
In terms of teaching students, universities are not as far away from
where they need to be as (most) schools. Knowledge of subject matter
plays a greater role than teaching skill, although the latter is never
completely irrelevant. The importance of subject matter and the ability
of students to learn on their own means that large classes are less of a
problem (although large labs don't work at any level) and that distance
learning can be more extensively utilized. However, just because they
work a bit better than schools doesn't mean universities are actually
where they need to be. Even in as simple a matter as teaching (it is
simple at that level) there's still much that could be improved.
In other respects than teaching, universities have drifted far off
course, at least in the US. (The indications are that the situation is
not materially different in other countries, but I'm intimately familiar
with the US system, so this discussion reflects that.) The main problems
are in the treatment of faculty and in the focus on research for
university gain. Both of these negatively affect students and the
advancement of knowledge. Those are not minor functions. So, by slowly
destroying the whole point of universities, those two trends also waste
a great deal of taxpayer money. How well universities function has major
social implications, so, although it's tangentially related to
government, I'll give some suggestions on how the situation could be
realigned.
First, a micro-summary of how the current situation developed to put my
suggestions in context. Research received government funds because of
the incalculable social benefits of the occasional breakthrough.
Research requires infrastructure, so the government included money for
"overhead" and didn't limit it to the actual direct costs. It was
intended as support for the critical social function of universities
generally. So far, so good.
University administrators saw money, which, with a bit of accounting to
beef up infrastructure costs, could be turned into larger amounts that
they could do something with. Government grants became what would be
called profit centers in industry. Administrators allocate funds, so
they never have a hard time convincing faculty of the importance of
their priorities. The push was on to get research funding. As time went
by, administrators grew increasingly dependent on that overhead. The
reward structure for faculty became increasingly intertwined with
getting grants.
Grant proposals take time to write. (Imagine doing about 100 complex tax
returns in a row, to get a sense of the tedium and attention to detail
involved.) The chances for funding of any given proposal are small, so
several are done by many faculty every year. Most of that work is
inevitably wasted. Further, publications that prove one did something
with earlier grants are essential to getting future grants. So
publications proliferate. Writing them, submitting them, and revising
them also takes time.
All of that is time that cannot be spent on teaching. Status naturally
accrues to research as a less humdrum activity than teaching. When in
addition to that, promotions and pay raises depend almost entirely on
quantity of research output and grants received, teaching falls only
slightly above emptying the wastebaskets. A necessary job, but one tries
to get someone else to do it.
Much of the teaching is passed down to those who have little other
choice. That also fits well with the administration agenda of reducing
spending. Teaching assistants, temporary, and part-time faculty are all
many times cheaper than tenure-track faculty. This serves to lower the
status of teaching still further until academe has become more or less a
caste system. Poorly paid people teach far too many classes and are
hired only for three months, sometimes a year, at a time. Looking for
work every few months takes time. Both the overload and the job hustling
reduce the time they can spend on individual students. The result is
that nobody can really afford to make individual students a priority.
Teaching faculty are just trying to cover their classes when there are
only 24 hours in a day, and research faculty are mainly concerned about
funding. It's not that any of these faculty are bad teachers, but either
it's not their real job or they're overloaded.
Teaching and learning suffer under the current reward system, but so
does research even though it's supposed to be the whole point. The
problem is not just that people generate chaff to get the money. It's
more systemic than that. Government-funded research is for those
projects without obvious payoff. Studies with a clear potential for
profit are rarely basic research and should be funded by the industries
that benefit from them. But potentially breakthrough research has a
problem, and that is its position out on the cutting edge. Nobody who
isn't themselves on that edge is likely to understand it, and the few
who are out there may well be professional rivals. They may be friendly
rivals, but they are still less than objective reviewers. So funds are
doled out either based on the opinions of those who don't understand the
research or those who could have a stake in undermining it. None of that
promotes brilliance, even though the system has sacrificed everything
else for excellence in research.
That summary should show how far-reaching the consequences of minor and
well-intentioned changes can be. Provide money for unspecified indirect
costs as a proportion of the direct research funds, and before you know
it, overhead is at 50% and proposals are porked up to cost the taxpayers
as much as possible. It is essential to get the reward system right, and
to re-evaluate it periodically to see where it's slipping and to make
the necessary changes.
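To make that arithmetic concrete, with invented figures: at a 50%
overhead rate, a grant carrying $200,000 in direct research costs
delivers another $100,000 to the university simply for hosting the work,
which is exactly why proposals get porked up rather than pared down.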
The solution is to realign rewards with desired outcomes, and at least
one component of doing that is always to pay for things directly. If you
want well-taught students, make teaching the actual job of professors.
Make it impossible to hire or promote them on any other basis.
Furthermore, the teaching should actually and realistically fit into a
24-hour work week, the same as everyone else's. Faculty members wouldn't
have to fear turning into teaching drones who are bored to death. They
would have as much time as everyone else to pursue interests for which
they're not paid. Professors who are hired partly or entirely to do
research should likewise have to meet realistic and explicit
expectations. They should be paid to do research. They shouldn't also be
funding building upkeep. Buildings should be paid for by funding for
buildings. Money for administration would be assigned using the same
independent methods of assessing prices as other government functions.
In all aspects, the point is to pay directly for the thing being bought,
and not to pay for something else and hope some of it ends up
facilitating the actual goal.
Basic research, however, still presents a problem because in important
ways the goal isn't what it seems to be. It's to study a given topic, of
course, but the goal is generally understood as more than that. It's
presumably all about the results. Yet the results of real research are
unknown at the beginning. That's why it's being done. So trying to
pre-judge results by funding interesting conclusions is only about
playing it safe. It's not about new discoveries that, by their very
nature, never have foregone conclusions. Trying to fund "promising"
research is actually a waste of money rather than a prudent use of it.
When it comes to basic research, the government is in the same position
as the scholars doing it: they have to give it their all without any
preconceived notions, and then when it doesn't work, they just do it
again for a different question. That's hard enough for scholars to do
after years of training. It's not something governments have ever done,
but that's how it must be done if they actually want to get what they're
paying for.
Another reason to give up on trying to evaluate the research itself is,
as I've mentioned, that very few people besides the researcher actually
know enough about the topic to do that. Trying to limit funds to those
proposals the reviewers understand is another way of trying to play it
safe, instead of taking the risks that a step into the unknown requires.
Potential payoff might seem like a good way of identifying promising
research, but, except for applied research taking small incremental
steps, it is not. Once again, this is because there is no way of knowing
the outcome of basic research. Sputnik was launched in 1957, but the
telcos heavily dependent on satellite technology only made their
fortunes decades later. In medicine, potential payoff can be a bad goal for
a different reason. What we want in medicine are the cheapest solutions,
not the expensive ones. We want a measles vaccine, not top-notch eye
transplant surgery after a bad case of the disease. In many important
ways, profits can actually cost us cures. The likely payoff is only
worth considering in the broadest social terms and
with intangibles weighted far more than money. That's not what people
usually mean by the word "payoff."
A rather different way of funding research is needed: one that
recognizes the inevitable ignorance of both the research and its
results. I could see a system working as follows. Some proportion of
taxes is dedicated to research based on what is deemed an optimum
balance between affordability and fostering social advancement. That pot
is divided into a few large sums, more medium ones, and many small ones
of a few thousand dollars each.
Short proposals can be sent in by anyone with the requisite academic
credentials, whether they work in education or not. Universities would
be obligated to provide facilities for funded research as part of their
charter. The proposals indicate which funding size class they need. If
the money is awarded and not used up, the remainder could be used on
further work to develop the idea. The goal is to reward intelligent
frugality in research. The proposals outline the idea, but they're
evaluated purely on the soundness of their methodology. Can the proposed
methods produce the data the researcher needs? Scholars are good at
evaluating methods and what they say about a researcher's grasp of their
field. Methodology review would also serve to filter out proposals for
perpetual motion machines and other silliness, if somebody with academic
credentials were foolish enough to submit such a thing. To promote
objectivity, the reviews need to be double blind, not single blind as
they are now. In other words, identifiers on the proposals need to be
removed so that both reviewers and proposers are anonymous (or as much
so as possible). Part of the evaluation of methods would be whether the
proposal asks for a realistic amount of money given the methods
involved. The reviewers would be drawn from a pool of qualified people,
using procedures similar to those for other matters requiring review
(Government II, Oversight).
If the proposal passes the review for methodological coherence, then it
goes into the pool for funding. Funds would be allocated randomly to
proposals in the pool until the money ran out. If not funded, the same
proposal could be resubmitted without being reviewed again for some
period of years, after which it would be checked to see whether it was
still relevant. If the proposal is funded, the researcher could be asked
to show daily or weekly log books describing work done, but there would
be no requirement for publication because there might not be any results
worth publishing. Negative results would be posted or publicly logged at
a central library so that others would know of them and the same
research wasn't needlessly repeated. Receipts for funds spent would
be mirrored to the government auditing arm which would be expected to
pick up on any irregularities.
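To flesh out the mechanics, here is a minimal sketch of the allocation
step, in Python, with all amounts, counts, and identifiers invented.
Proposals that survived the double-blind methodology review sit in a
pool, and the money in each size class is handed out at random until it
runs out:

    import random

    # The pot, divided into size classes: a few large sums, more
    # medium ones, many small ones. Amounts and counts are invented.
    pot = {"large": [500_000] * 3,
           "medium": [50_000] * 30,
           "small": [5_000] * 300}

    def allocate(pool, pot):
        # pool holds (proposal_id, size_class) pairs that passed
        # methodology review. Draw at random until each size class
        # is empty; the rest wait in the pool for resubmission.
        funded, unfunded = [], []
        random.shuffle(pool)
        for proposal_id, size in pool:
            if pot[size]:
                funded.append((proposal_id, pot[size].pop()))
            else:
                unfunded.append((proposal_id, size))
        return funded, unfunded

    funded, waiting = allocate([("P-17", "small"), ("P-3", "large")], pot)

Note what is absent: no scoring of expected results, no ranking by
reviewer enthusiasm. Randomness among methodologically sound proposals
is the whole point.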
As always when I outline specific approaches, I'm only trying to flesh
out my meaning. Experience would show whether there were better ways to
spread research money as widely, as effectively, and as frugally as
possible. Incentives should be aligned with enabling scholars to
concentrate on their work instead of extraneous distractions.
Direct payments, whether for teaching, research, buildings, maintenance
or administration, are simpler under a single payer plan, when the
government funds all of education. Then one source of funds doesn't need
to try financial acrobatics to influence a sector outside its purview.
Diffuse Learning
Humans can keep learning throughout life, but the obvious implication is
generally avoided. It's not just children who are at risk from their
impressionable minds. Any suggestion that adults might uncritically
absorb messages tends to be met with offended objections. And yet the
evidence that adults can learn without wanting to or trying is
unequivocal. The entire advertising industry is based on it. I would
think that the prospect of messages attaching themselves to one's brain
would worry people. The rational reaction would be to avoid unwanted
messages before they could take hold. Denial does nothing but allow the
involuntary learning to proceed.
There's resistance to the rational approach mainly, I think, because of
the fear that others might start to take away one's favorite
entertainment. The general idea is, more or less, "It's none of your
business. Don't tell me what to do."
In a free society, one where everyone can do anything that doesn't
interfere with the same rights in others, that is indeed true. What you
do to your brain is your business. The same rule applies to more
tangible drugs.
But that right does end where it affects others, and if enough members
of a community learn lies from pervasive background messages, they'll
start to act on them. Lies don't reflect reality, so actions based on
them cause damage, and that /will/ affect everyone. The fact that it's
impossible to draw a line between what's damaging and what is not, and
the fact that narratives are a culture's soul whether they reflect
reality or not, don't change the fact that sustainability depends
heavily on reality-based actions.
So, even though there's no question that it's a can of worms, societies
do have to think about what the prevailing diffuse education is
teaching. They need to be at least aware of what's going on. Living in
the comfortable fantasy that it doesn't matter which stories we tell
ourselves hasn't worked and won't work. Ironically enough, that attitude
is often justified by the equally comfortable fantasy that we know
reality … when our grasp of it is mediated by those selfsame stories
that supposedly don't matter.
There are currently three main ways of floating messages to people: ads,
entertainment, and news. That includes aspects of new media and social
media, which fall into one or more of those categories. How messaging
should or can work along those different paths is an open question. In
some cases, the answer, or at least a good starting point, seems clear.
Consider ads, for instance. Truth in advertising laws and limits on
repetitiveness of commercial speech would change ads into something
barely recognizable by current standards. Both that and some
entertainment-related issues were discussed in Free Speech vs. Noise.
However, the point being made there related to preventing a
counterfactual din. The point here is the much murkier one of examining
what the various messages actually teach.
There's a justifiable horror of establishing limits measured by what
fits in the narrowest minds or by what serves the interests of the
powerful few. And yet it beggars belief that any constant message can
have no influence. Censorship does nobody any good. But just because
thought control by silencing is bad doesn't make thought control by
repetition good. The issue has to be addressed, although I'm not sure how.
I'll discuss a few examples of diffuse but highly consistent messages
that have a pernicious social effect, and how it may be possible to
counterbalance those situations without censorship that silences ideas.
Some of the most obvious one-sided messages permeating media are the
usefulness of violence, the sexiness of women, and the admiration of
tall lean body types.
The narrow range of body types defined as attractive is an interesting
example of how the stories we tell ourselves influence the subsequent
stories in an increasingly tight spiral. Preference for long, thin women
has grown so extreme that for the vast majority of women the only way to
meet the ideal is anorexia. Some countries have actually noticed the
health costs of anorexic girls, and started making motions toward a
better balance. Spain, for instance, began requiring fashion shows to
employ less etiolated models.
All that remains is to take that intelligent approach all the way. The
images of us that we present to ourselves should reflect us. The
cultural expectation needs to be that the people elevated as
representative of humanity actually are representative. Across the
totality of media, they should reflect the mix in the population. Ads,
fashion magazines, videos, and all the rest should all reflect the
actual demographics of the populations by gender, race, age, and body
type. This doesn't place an undue burden on casting directors or their
equivalent. They're already incredibly precise and careful in their
selections. They just need to reprioritize their criteria. The penalties
for not doing so could be to hand over casting to groups that have
proven themselves capable of selecting diversity as well as talent. The
grounds for "eminent domain" in casting could be a new class of
misdemeanors called "bad judgment."
The one-sided messages about sexiness are a problem /because/ they're
one-sided. By faithfully reflecting the sexism of the wider society in
which men have sex and women are sex, the needs of an entire half of the
population become invisible. That damages the half who are denied,
obviously, and it damages the other half by putting them in an
adversarial situation for an activity that's fundamentally cooperative.
If the heart and soul of the activity are avoided, what's left is
obviously going to be a pale shadow of the real thing. Men, too, get far
less than they could. And yet, even though the attitudes damage
everyone, they strengthen over time because that's how stories work. The
new one has to top the old one or there's a flat feeling of "been there,
done that." If they're headed in a good direction, they can get better.
If they're headed in a bad one, they get worse.
The situation up to that point is already damaging, and has already
passed beyond the point of being a private matter. But it doesn't stop
there. Sex becomes conflated with rape, as might be expected when it's
construed as a fight. By dint of repetition, rape then becomes a joke,
and currently we're in the realm where anyone, victim or bystander, who
doesn't get the joke is a prude or a spoilsport or both. And at least
one source of this brutalization of everyone's humanity is the
inequality at the beginning of the road.
That also points to a possible solution. There's a way to distinguish
sex from dehumanization. Sexual messages, whether they're relatively
subtle in ads or not so subtle in pornography are not damaging when both
points of view are equally evident and equally considered. In ads, for
instance, men would sometimes be desirable and not necessarily behind
the camera whenever sexuality is involved. Women would generally be just
humans, as they are in life, and not always sexy. Pornography could
actually serve a useful purpose if it taught people what works for
women, since that's clearly far from self-evident to many men. But
something like that would have an uphill struggle to even be classed as
porn at this point. A rejection of harm would cut out much of what's now
misunderstood as porn, and the requirement to represent women's own
point of view would dump most of the rest. But there might be ways to
foster socially beneficial attitudes without instituting a toxic
morality police. More on that in a moment.
Pervasive violence in media is generally considered irrelevant to the
real world because most people don't become measurably more violent
after partaking. But the most pernicious effect of a constant drumbeat
of violence in entertainment is defining it as fun and turning it into
something normal. We're to the point now where reacting to a
decapitation (in entertainment) with "Cool!" is cool. The fact that such
a thing is supposed to be fun, for some value of the word "fun," isn't
even considered a symptom of anything, let alone madness. The mentality
leaks out of fantasy, which people like to pretend is separated from
reality by an impenetrable wall of reason, and is already evident in
ordinary life at the extremes. How else to explain the widespread easy
acceptance of torture in a country which once wrote the Bill of Rights?
Assuming violence in media is harmless because it doesn't lead to
observable viciousness in most people suffers from another fallacy too.
Violence has its social effect not by the actions of most people but by
those of a few. It may affect only a tiny minority, but depending on
what they do, the human damage could be immense. The point with violence
in media is not whether it affects most people, but whether and how it
affects anyone.
If violent entertainment leads to, say, increased low-level bullying by
a few people it will pass under the scientific radar unless the
experimental design is explicitly looking for that. (I'm saying "people"
rather than "children" because workplace bullying is no less of a
problem than the schoolyard kind. It's merely different in execution.)
Yet the bullying will damage much more than the victims, criminal as
that is. It will have a chilling effect on the freedom of all people,
who'll be constrained not to behave in ways that attract bullies and,
even worse, not to help the victims out of fear for themselves. That
last is a lethal corrosive at any age. I don't know of any studies that
have even tried to measure subtle long term effects. I can't imagine how
you could, given the sea of variables involved.
Repeated messages of violence in entertainment, as entertainment, run
counter to everything that makes either a fair society or a sustainable
one work. It seems pretty obvious that such messages would need to be
counteracted.
I can think of two approaches that might help reduce the lessons
violence teaches for real life. One is that the only acceptable victims
of it -- this is supposed to be /entertainment/, after all -- are
machines, such as obvious non-humanoid robots. It shouldn't be
acceptable to define the suffering of any sentient creature, let alone
humans, as entertainment. That way, one could still shoot at things, but
they would be things. The taboo against harming living, breathing
creatures would be part of the message.
The other approach could be an ethos that requires an allegiance to
underlying reality. The assumption behind violent entertainment is that
violence solves problems. The enemy is gone after being smashed. The
question that should be asked is: does violence really solve the conflict
in question? If not -- and there are very few situations where violence
works the way intuition expects -- then the resolution in the story
needs to reflect what happens in the real world. (Yes, those stories
will be harder to write.)
All of the remedies to the specific examples discussed involve cultural
shifts. It becomes unacceptable to have cookie cutter models or women
who could be replaced by rubber dolls with no loss of function. The
creative classes can work with those paradigms just as they now manage
to make movies about detectives who don't smoke or blacks who don't play
a banjo. The social function comes in where damaging repetition is
identified, brought forward, and by social consensus is taken out of the
repertoire. It's been done before. There is nothing impossible about it.
There is one other lever besides good will and intelligence that can be
brought to bear. Whenever there is talk of limiting expression, the
objection arises that interfering with artistic expression is a bad
thing. There's a good deal of truth to that. On the other hand, the
pattern to most of the objectionable repetition is that it is designed
to extract money from people by stimulating their adrenal or other
glands. The artistic expression only becomes an issue when the revenue
stream is threatened, for instance by a requirement to soften the high
their users can expect. It's never brought out as a reason for lower
profits because the creators' artistic integrity required them to make a
less salable product. Real artistic expression seems far less dependent
on money than the kind whose main function is pushing a high.
That should point the way to a possible method of suppressing
objectionable repetition. When a community decides it's had enough of
violent "fun," something they could do by vote, or when the courts
decide certain kinds of expression run counter to the founding
principles of a fair society, it could be illegal to produce it
commercially. The whole production and distribution would have to be a
volunteer effort. People with a real message will continue trying to get
it out under those circumstances. People who are trying to make a quick
buck off a rush will quit.
News forms a gray area when considering repeated messages. On one hand,
reality may be what's providing the messaging, in which case a news
organization does right to report it. On the other hand, news which is
trying to draw in viewers is evidently quite as capable of pandering to
the lowest common denominators as any other medium.
I've already discussed
the likelihood that for-profit news is a logical impossibility. The
mission of news organizations has to be reporting truth without fear or
favor to the best of their ability. The mission of a for-profit is to do
whatever it takes to get the most profit. Those two missions might
overlap on rare occasions, but they have nothing to do with each other.
Maybe the use of news as an avenue of entertaining excitement will be a
thing of the past in sustainable societies with only high-minded
not-for-profit news organizations.
In case that assumption turns out to be overly optimistic, I want to
stress that news programs are more responsible than other outlets, not
less, for ensuring that they inform rather than manipulate. They need to
guard against not only misinformation on their own side, but also the
willingness on the part of their listeners to believe convenient
statements. News organizations would be subject to the fact-checking
standards in any case. Their role in fostering an informed citizenry is
so critical, however, that they should be held to a very high standard
in the more nebulous area of general messaging as well. They should not,
to take an example at random, spend more time on sports than educating
the public about global warming. The elevated standard could be
expressed in actual punishments, such as revocation of license, for a
persistent pattern of diffuse misinformation.
Not all diffuse education is bad. Useful information can also permeate
many channels. Social support for repeated messages should be limited to
those which are unequivocally valid statements of fact, such as public
health information.
Libraries, physical or virtual, also provide diffuse education. It's
available in whatever quantity or time the user wants it. That would be
important for all the information on which a fair, and therefore
necessarily transparent, society depends. Transparency provided by
diffuse availability of information isn't reserved only for issues of
obvious social significance. There are also, for instance, such
apparently mundane matters as transparent and easily available price
information which, in the aggregate, is essential for the economic
system to work.
I've spent some time on diffuse, repeated messages because they're the
dominant avenue for misinformation, and misinformation is not something
a sustainable society can afford. However, it's only dangerous when
repeated. If it's not continual, there's not much of a problem, so it's
important to keep the response to it proportional. It's better to err on
the side of carelessness than to be so vigilant against bad messages
that all speech starts to suffer. It's only the egregious cases that
need action, and then they need carefully considered action that targets
them specifically and leaves the rest of the life of the community
alone. I've tried to give some examples primarily to illustrate what I
mean by egregious and what some possible actions against them could be.
There may be far more effective methods than those I've imagined.
- + -
To sum up, education is a vital function. Without good information,
equally available to all, and without enough education to understand
that information, democracy itself is impossible. A fair system of
government is ensuring its own survival when it fulfills its obligations
to support education.
+ + +
Creativity
If we'd had the same rules millions of years ago as we do now, we'd
still be living in trees. Some bright wit would have locked down the
concept of spending the night on the ground and set up a toll booth.
Ideas as property are based on a fundamental fallacy, and as with all
fallacies, acting on them does not work. The flaw is that ideas in the
broad sense, encompassing concepts, inventions, and creativity, do not
share the main characteristic of property. They are not decreased by
use. It doesn't matter if the whole world sings the same song. That
doesn't change it and it still provides each individual with the same
enjoyment. Arguably, it provides more enjoyment because sharing ideas
and feelings is more fun than having them alone.
Property, on the other hand, is a way of distributing limited resources
that can be used up. Food, clothes, land, or phones are all things that
can only be shared a little bit, if at all, without becoming useless to
the sharer.
Ideas don't need to be apportioned among too many users any more than
sunlight does. In fact, the only way to force them into that mold is to
artificially limit their availability, just as the only way to make
people pay for sunlight would be to set up a space shield and extract a
ransom. Setting up artificial barriers and waylaying people trying to
get past them offends against an intuitive sense of justice. It breeds
resentment followed by workarounds. Which is what's happened with the
counterproductive attempt to lock down creations, whether artistic or
medical or technical.
Confusion arises because there's also a sense that creators have a right
to benefit from their good ideas. The sense is justified, just as
anybody has a right to be paid for work useful to others. That's
different from pretending an idea can be transformed into disappearing
ink if too many people look at it. Paying the creator doesn't magically
change a limitless resource into something else.
Thus, ideas are not property and creators have a right to be paid
proportionally to the benefit they bring. With those two concepts in
mind the rational way to align creativity and its benefits becomes
clearer. Don't make futile attempts to limit the spread of ideas. Try to
see how widespread they are. Don't try to extract a toll. Try to make
sure the creator gets paid.
Census methods are available to count how widespread something is. There
are many complications associated with counting the results of
creativity, and I'll get to a few of those in a moment, but for now
let's stay with the general idea. A census of usage can tally the
distribution of a given creation. The creators are then paid based on
that tally. The money to pay them comes from a tax on the physical goods
needed to use their creations, in other words from a tax on the paper,
storage media, phones, screens, pills, or other substrates that carry
the benefit to the user.
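To make the arithmetic concrete, here is a toy sketch in Python. Every
name and number in it is invented for illustration; the point is only
that once a usage census exists, dividing a substrate-tax fund among
creators is simple bookkeeping.

    # Toy illustration (all figures invented): distribute a royalty fund
    # collected from taxes on substrates (paper, storage media, screens)
    # to creators in proportion to a census of how often each work was used.

    usage_tally = {          # work -> censused uses this period
        "song_A": 2_000_000,
        "novel_B": 150_000,
        "drug_C": 40_000_000,
    }
    creator_of = {"song_A": "Ana", "novel_B": "Ben", "drug_C": "Chen"}
    substrate_tax_fund = 10_000_000.00   # collected this period

    total_uses = sum(usage_tally.values())
    for work, uses in usage_tally.items():
        payout = substrate_tax_fund * uses / total_uses
        print(f"{creator_of[work]} receives {payout:,.2f} for {work}")

The hard part, of course, is the census itself, not the division.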
I need to discuss a terminology issue parenthetically. Since I'm
insisting that the products of creativity aren't property, I can't use
the convenient term "intellectual property" to describe the whole class
of patentable, copyrightable, and trademarkable things. I've used
"creations" instead, even though it's a clumsy-sounding term. There are
also of course differences among those three subgroups, some of them
necessary, some of them mere historical accidents. For instance, having
a different standard for patentable objects as opposed to copyrightable
expressions is necessary. Having a different term of protection — close
to 100 years at this point for copyrights, twenty years for most patents
— seems arbitrary. Most of what I'm discussing applies to new creations
generally, rather than either patent or copyright specifically.
Trademarks are a small subset where rights extend for as long as the
mark is used. That seems sensible, and I don't delve into
trademark-specific issues.
A census method with subsequent payout is superficially analogous to the
market system used now in that sales are a rough tally and the price the
market will bear determines payment. However, markets can only handle
property. Like the proverbial hammer to which everything is a nail,
markets have handled creativity as if it were property. When the nature
of creativity causes it to escape the inappropriate container the market
isn't able to use an appropriate non-market-based approach. Instead it
keeps attempting the useless job of trying to bottle the equivalent of
sunlight. That by itself is a big waste of everyone's time, energy, and
money.
But there are other, bigger problems. The category error has generated
injustices. Since creativity can't be bottled, who gets paid and for
what is rather arbitrary. That leads to the usual result: the powerful
get paid, the others not so much. Those powerful people are very rarely
the creative people themselves. The examples are legion, but to take
just one instance, Charles Goodyear invented the vulcanization of rubber
(without which it's about as useful as chewing gum) but died poor.
It's hard to imagine the industrial revolution without rubber, and yet
it wasn't Goodyear who saw much benefit from his work. The inventor,
programmer, or artist cheated of their work by those with more money is
such a common occurrence it's a stereotype.
The irony of "intellectual property" laws is that their stated purpose,
rewarding innovation, has been an almost accidental outcome. The actual
history shows that they were established as a tool for governments to
control content.
Distributors were the enforcers in return for a time-limited monopoly
providing guaranteed profits. The government control aspect has been
beaten back, but the philosophical tools needed to see the illegitimacy
of guaranteed profit haven't been widespread enough yet to correct the
other half of the injustice. The rewards for innovation go to
uninnovative third parties, and as the system is pushed ever further
away from its official goal it breeds mainly cynicism, not creativity.
Markets, by trying to pretend ideas are property, create a situation in
which all that matters is who "owns" the idea, not who created it. That
perverts the system of rewards and takes them away from the people with
a right to them.
Once "intellectual property" is recognized as a matter of rights rather
than markets, the institution which should handle it is clear.
Administering a system whose primary purpose is enforcing rights is
necessarily a government function. Granting patents and copyrights is
already done by a government office because it must be transparent and
accountable, without any considerations besides correctly logging who
had which idea. Distributing royalties is equally a government function
because it requires an objective census untainted by any motive except
accuracy, and transparent, accountable payments based on that. No other
entity has (or should have) the enforcement power and accountability
required.
Variants of the idea of royalty distribution based on a census have
cropped up repeatedly in recent years (e.g. 1, 2, 3)
because it's an obvious way to fairly distribute the rewards for useful
creations. The idea is applicable at any stage of technology, but it is
easiest to apply in a wired world. Headers in computer files can include
attribution lists of any degree of complexity, similar to software
version control systems, and they're also much easier to census than
physical products are to tally. (This just in, as they say: Google
is experimenting with tags that trace sources.)
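As a toy illustration of what such a header might contain (every field
here is invented, and a real standard would need far more detail, plus
the anti-fraud measures discussed next):

    # Hypothetical attribution header carried inside a file, recording
    # the lineage of contributions much as a version control system
    # records commits. A census program could read and count these
    # records instead of tallying physical copies.

    attribution_header = {
        "work_id": "urn:example:work:1234",        # invented identifier
        "title": "Example Song",
        "contributors": [
            {"name": "Ana", "share": 0.70, "role": "composer"},
            {"name": "Ben", "share": 0.30, "role": "arranger"},
        ],
        "derived_from": ["urn:example:work:0987"], # prior works built on
    }

    # A tallying crawler would verify the record before counting it:
    shares = sum(c["share"] for c in attribution_header["contributors"])
    assert abs(shares - 1.0) < 1e-9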
However, a wired system is also easier to game, and it should go without
saying that stringent safeguards and punishments against fraud have to
be in place. Physical sampling has an essential place, I expect, as one
of several parallel tallying methods. Used together, they would provide
one type of safeguard against fraud.
One difference between a centrally administered census system and
market-based distribution of royalties is that a census does not pretend
to have a perfect one-to-one correspondence between usage and payment.
Markets do have that goal, but their distribution of payments is on the
whole wildly unrelated to how much a given work is used. Anyone who
finds the current system good enough in terms of apportioning royalties
could not fault a census-based system for imprecision because it would
be far more precise than the markets for this purpose.
Now that the internet has made it easier for everyone to publish and
broadcast, a fundamental problem with assigning credit for creativity is
becoming increasingly evident. Creations never happen in isolation.
Every inventor, author, artist, and scientist stands on the shoulders of
others. The only way to assign credit is to draw more or less arbitrary
lines that delimit enough difference from previous work to merit
separate acknowledgement. The cutoff for a US copyright, for instance,
is more than 10% difference, although how that 10% is quantified is
rather subjective. In practice, it seems to mean "some easily noticeable
and material difference."
The imprecision is inevitable — there's no way to quantify a work of
art, for instance — and it's difficult to see any way to avoid arbitrary
and subjective demarcations. That implies that the bar should not be set
too low because the smaller the contribution, the murkier the
distinction between significance and insignificance.
Making the determination of originality is complicated by the need to be
fair. Now and in the past the "solution" to the complexity of creativity
has been to give up and simply assign a given work to the applicant(s)
for a patent or copyright. In this day of remixes, a more calibrated
system is essential. As I mentioned when discussing legislation in the
fifth chapter,
I think that methods used in software version control can point the way.
Software, and legislation for that matter, are just special cases of
works with multiple contributors.
Sometimes those tools are called content management systems, but that's
an unspecific term covering everything from tracking minor changes in a
word processor to blogging software to educational course management,
which may not have the necessary contribution tracking. Something like
Plone is perhaps the closest thing now available.
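A minimal sketch of the kind of bookkeeping I mean, with invented names
and a deliberately crude measure of contribution size; real systems
would track far finer detail:

    # Toy revision log in the spirit of software version control: each
    # accepted change records its author and its size, so shares of a
    # multiple-contributor work can be computed later.

    revisions = [
        {"author": "Ana", "lines_added": 400},
        {"author": "Ben", "lines_added": 100},
        {"author": "Ana", "lines_added": 100},
    ]

    totals = {}
    for rev in revisions:
        totals[rev["author"]] = totals.get(rev["author"], 0) + rev["lines_added"]

    grand_total = sum(totals.values())
    shares = {author: n / grand_total for author, n in totals.items()}
    print(shares)   # {'Ana': 0.833..., 'Ben': 0.166...}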
The software version control or Plone-type systems I'm aware of (I can't
say "familiar with") are used in the open source community by
volunteers. In that situation, most participants are honest to begin
with. Plus the community is rather small, skilled, and generally aware
of the facts, all of which acts to prevent cheating. A similar system
adapted to patents and copyrights, where money from royalties might be
in play, would need much stricter safeguards against cheating. I'm not
sufficiently close to the field to have an idea how that could be
applied, but it's a problem that must be solved in a fair system of
creator's rights.
There's also a psychological issue that should be taken into account
when apportioning royalties. One desirable effect of an equitable system
of rewards should be that more people feel motivated to act on their
creativity and to contribute. However, it doesn't seem practically
possible, at least to me, to assign rights to and then pay for every
single dot and comma that somebody might add to a body of work. There
has to be some level below which contributions are just part of the
background, as it were. But people react badly when somebody else gets a
better reward for what they see as equal or less work. It offends an
innate sense of justice
that goes right back to chimpanzees and earlier.
The point of equitable rewards for creativity is to facilitate its
expression. It would be counterproductive to implement a system that's
felt as even more unfair than the one we have now. The system needs to
take the psychological factors into account if that goal is to be met. I
would guess that a clear demarcation — difficult as that is — between
the amount of contribution that receives royalties and the amount that
doesn't would mitigate negative reactions. When work is obviously
original, that's not a hard call to make. When it's incremental, then
perhaps adding a time factor would help. To be eligible for royalties,
the amount of work involved should be equivalent to some sizable part of
a work week. In other words, if it's equivalent to a half-time job, it's
significant. If it's something one dabbles in occasionally, then
official recognition is probably misplaced.
My sense is that commitment of contributors is not a smoothly increasing
variable. A minority contributes a lot, then there's a more or less
sparsely populated intermediate gap, and then the majority whose
aggregate contribution may be huge but where any individual adds only a
little. If that sense is correct, research should be able to identify
that gap. If the demarcation line for receiving royalties runs through
the gap, it will align with the intuition that only larger contributions
deserve specific rewards.
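If that hunch is right, finding the demarcation could be nearly
mechanical. A toy sketch, assuming invented per-contributor hours and a
simple widest-gap rule:

    # Toy data (invented): hours each person contributed to a shared work.
    # If the distribution is bimodal, the widest gap between sorted values
    # suggests where a royalty threshold could fall.

    hours = [1, 2, 2, 3, 4, 5, 180, 200, 350, 410]

    values = sorted(hours)
    gaps = [(values[i + 1] - values[i], i) for i in range(len(values) - 1)]
    widest, i = max(gaps)
    threshold = (values[i] + values[i + 1]) / 2
    print(f"royalty threshold ~ {threshold} hours")   # here: 92.5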
The next difficulty is to identify the best group to make the
determination that new material has been contributed, and how much. The
people with the clearest concept of the work are others working on the
same or similar projects, but they are also interested parties. If there
are royalties to be distributed, they may be motivated more by a desire
to keep the pool of recipients small than by an honest appraisal of
work. Critics or specialists in the field who are unconnected with the
project seem like good possibilities. Public comment should always be
facilitated because expertise often crops up in unexpected places. Once
the outside expertise and input on the originality of a given piece of
work has been obtained, then the experts at the patent or copyright
office would evaluate all the input on its merits and make a
determination, subject to appeal.
The current process is supposed to work more or less like that, too, but
for various reasons it's veered off course. Patent officials are
evaluated on how much paperwork clears their desks, not how well it
stands up to litigation. So it's become customary in the US to grant
patents for practically anything and to assume that if there's a
problem, it'll come out in the courts. Furthermore, possibly related to
the desk-clearing standard, patents are granted apparently without
anything approaching due diligence on prior art. The assumption
throughout seems to be that litigation is a solution rather than a
problem. In reality, it's laziness and dereliction of duty to expect the
courts to clean up messes that should never have happened in the first
place. Patent and copyright officials who are doing their jobs will make
a thorough and good faith effort to determine that a piece of work is
indeed new, and how new. Litigation has to be a rare occurrence,
reserved for when a mistake has been made. If it's not rare enough,
that's a sign that the
responsible bureaucrats need to be fired.
So far, creator's rights have been discussed in the context of payment
for work. The other aspect is control over the work. Obviously, minor
contributors (less than 50%?) wouldn't have a controlling interest in
any case, but what of major ones? How much control is appropriate?
When discussing money, I stressed
that real free markets and monopolies are incompatible. That is no less
true if the monopolist is an artist or inventor. The creator has a right
to be paid if someone is benefiting from their work, but that doesn't
give them monopoly "rights." An unfair advantage is a privilege, not a
right. Creators cannot prevent others from manufacturing their
invention, playing their song, or publishing their books. The creators
do have the right to be paid in proportion to how much their work is
used and how critical it is to people's lives. The government would
disburse the funds based on pay scales that have been worked out for
different classes of products. (I would say that ringtones, for
instance, should have lower royalty rates than headache cures.)
Royalties received for similar classes of products should be much more
consistent under that system than they are now. The pay scales
themselves would necessarily be somewhat arbitrary because they're set
more or less by consensus. (The current system also sets royalties that
way, but the only factor taken into account is the bargaining power of
the creator.) Consistency should help avoid wide disparities in reward
for equivalent contributions.
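A toy sketch of such a disbursement, with every class and rate invented
for illustration:

    # Hypothetical pay scales: a royalty per censused use, set by
    # consensus for broad classes of products rather than bargained
    # work by work. All rates are invented.

    rate_per_use = {
        "ringtone": 0.0001,
        "novel": 0.02,
        "headache_cure": 0.05,   # more critical to people's lives
    }

    def royalty(product_class, censused_uses):
        """Payment owed to the creator(s) for one census period."""
        return rate_per_use[product_class] * censused_uses

    print(royalty("ringtone", 5_000_000))        # 500.0
    print(royalty("headache_cure", 5_000_000))   # 250000.0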
Requiring creators to license their work is currently called "compulsory
licensing," which makes it sound like a bad thing. "Compulsory" anything
meets with reflexive resistance. But all it does is take away the
ability to impose a monopoly. That reduces short term gain for a few
people, whose objections are no different from those of anyone losing
privileges. They're not valid if the goal is equitability.
However, there is one sense in which creators would have more control
over their work in a fair system than they do now. The entertainment
industry has something called "moral rights," which refers to how a
creation can be used in other contexts. Consider, for instance,
Shakespeare's character, Lady Macbeth. There's a famous scene in which
she sees the blood of murder on her hands, and nothing can wash it out.
Shakespeare did not retain any moral rights. They hadn't been invented
yet in the early 1600s. So a soap company could have made a zippy ad
showing a relieved Lady Macbeth after one handwashing with their
EverClean soap. The only thing saving Shakespeare's legacy is that Lady
Macbeth is too old and forgotten to be worth selling.
Moral rights clearly belong to creators. Their creations belong to them
in ways that money can never buy, and they have rights in them that no
sale can extinguish. Creators, therefore, have veto power over what they see
as inappropriate alteration of their work. Like the fact of their
authorship, moral rights are also permanent and inalienable. At least
for expressions, but not tools, i.e. for copyrights but not patents,
that control over usage is permanent. Creators who felt strongly enough
could include instructions about moral rights in their wills. Those
rights, however, cannot turn into a back door to permanent copyright.
The scope of moral rights needs to be limited to relatively broad
classes of usage. Creators have the right not to have their work
perverted, but they don't have the right to prevent legitimate use of
it. Making moral rights explicit and enforcing them is probably more
important in an age of remixes than ever before. It is only fair that if
someone wants to use a character or another expression in a way that's
odious to the creator, then they're under an obligation to come up with
their own concept and not copy someone else's.
There's a point that may need stressing in a broader sense. The rights
of a creator are inalienable because they're matters of justice, not
property. They're not something that can be bought or sold or passed on
to someone else. The creator, and no one else, has the right to moral
control over their work, and the right to royalty payments at a level
and for a period of time stipulated by law.
That would mean some changes in business practices. For instance,
corporations or any other institutions would not be able to take rights
to inventions made by workers. The corporation's benefit comes from the
profit of being first to market with a new product. The royalties go to
the individual inventor(s) no matter whose payroll they're on. The
corporation recoups the costs of research from profits, not from taking
the rewards for creativity that isn't theirs.
In the case of celebrities, the valuable "property" may not be a work,
strictly speaking, except in the sense that it's the carefully
constructed public persona of the celebrity him- or herself. Private
citizens already have the right to control their personal images and
data, as discussed under privacy
in the Rights
chapter. The creativity of entertainment and sports personalities is
packaging a specific public face, but the mere fact of being a public
person doesn't make them lose all rights. In the private aspects of
their lives, they have the same rights to privacy as any citizen. In the
public aspects, they have the same moral rights to their personas that
other creators have to their work. That would make paparazzi jobs
obsolete and complicate the lives of gossip columnists, but those aren't
sufficient reasons to deprive people of their rights.
The more equitable rules of copyrights, patents, and trademarks
envisioned here would obviously require big changes across a range of
industries. The main effect would be to render armies of middlemen
superfluous. As always when the application of just principles damages
entire business models, the simple fact that profits vanish is not a
sufficient reason to reject fairness. I'll repeat myself by saying that
slavery was once a profitable business model and that didn't make it
right. Nor does implementing justice mean increased poverty. On the
contrary, every single time, there is more wealth when there is more
justice. The illusion of lost wealth comes from the few people who lose
the ability to milk the system.
Furthermore, insofar as the middlemen provide a real service, there's no
reason why producers or publishers or agents would necessarily all
disappear. Expertise in packaging, distribution, and sales is not the
same as making an invention or an artwork. Artists, especially, are
stereotypically poor at those jobs. There's a need for some middlemen.
The difference is that they would have to be paid from actual added
value and not simply because they're gatekeepers who can extract a toll.
Moving on from creators to the creations themselves, one current large
source of problems is the definition of what can be patented. In the US,
things went along reasonably well for a while when the definition was
narrowly focused on the sort of original work an amateur would recognize
as an invention. New technology, however, introduced gray areas, and the
unfamiliarity of judges and lawyers with technical issues made that
standard hard for them to apply sensibly. The desire to give US business
a boost also worked to promote the granting of patents, although that
motivation doesn't appear in the dense legalese of court arguments.
That's eventually landed us where we are now. Patents are given for mere
and obvious ideas, such as one-click shopping. Patents are granted on
life although the life in question (parts of a DNA molecule) has merely
been read, not created. There have been some actual DNA inventions, such
as bacterial artificial chromosomes, BACs, but those do not constitute
the vast majority of patents granted for DNA sequences. In the
pharmaceutical industry, when the patent on one blockbuster drug is
close to running out, it's provided in a trivially different form, such
as a weekly dose instead of a daily dose. Somehow, patents are granted
for something which is nothing but a dosage change. User interfaces get
patented, as if in a former time somebody could have locked down the
concept of using dials to control machinery, forcing any competitors to
use more cumbersome systems, such as inserting pegs into holes.
The situation with trivial and proliferating patents is complicated by
the fact that copyrights are both easier to obtain and last much longer.
In the software industry, for instance, that's created an incentive to
seek copyrights rather than patents even though nobody reads software
like a book. People use it, like a machine. Yet the legal system has let
the applicants get away with the travesty of copyrighting software. That
could be due to a lack of technical knowledge in the legal system, or to
excessive accommodation of those with money. Either way, none of this
should happen in a rational system of patents and copyrights.
Rights should be granted when the consensus among knowledgeable people
agrees that there has been non-trivial original work. And insofar as
there are necessary differences between patents, which are granted
basically for tools, and copyrights, which are for expressions, then the
creation and the type of rights assigned should be in the correct class
based on the merits. The salient feature is not which office happens to
be processing the application. The important point is in which category
the creation actually belongs.
The criteria for what is patentable are a matter of opinion. There's no
law of nature involved, and for most of human history the concept didn't
exist. As a matter of opinion, it depends on consensus. The consensus,
after a few hundred years of experience with the idea, seems to me to be
coalescing around the concept of a new tool. Stripped of its legalese,
that's the core of the criterion used in US patents in the mid-1900s
when the system worked better than it does now. Adding some of the
legalese back in, it's called the "machine or transformation test."
An invention is patentable if it's a non-obvious device or method of
transforming one thing into another. There are, as always, gray areas in
that definition. For instance, the linked Wikipedia article points out
that a washing machine is a patentable tool that cleans wet clothes
using agitation, but a stick used to agitate wet clothes is not.
That example points up the fact that patentability is yet another
situation in which there is no substitute for good judgment. Unusual
creativity deserves recompense. Common or garden variety creativity
should be appreciated and facilitated, but it's an attribute almost
everyone shares and so it requires no special recognition.
Distinguishing between uncommon and common contributions is what takes
judgment, and always will. Most cases don't require exceptional ability
to judge. It's possible to discern whether a tool is new and
non-obvious, even if it's not always simple to articulate why. In the
washing machine example, it seems clear to me that the difference is the
obviousness of the tool. Even I could think of using a stick to work the
water through the cloth more easily. On the other hand, neither I nor
most people have ever been anywhere near inventing a washing machine.
Another example is the weed whacker. That useful invention was patented,
but also copied. The patent was not upheld when the inventor sued,
because the judge felt it was an obvious idea. However,
it was obvious only in a "why didn't I think of that" way. The fact is,
nobody else /had/ thought of it, and if it was really so obvious, that
mechanism for weeding should have cropped up repeatedly. Perhaps a
consensus of public comment by disinterested third parties would help
avoid such miscarriages of justice in patent law. All methods that prove
useful in promoting good judgment should be applied to the issue in the
interests of maintaining a just, effective, and, ideally, frictionless
system that requires no litigation.
Fair use is a shorthand term to describe a type of usage for which
royalties are not due. In practice, people tend to view it as any small
scale, private usage, as well as socially important ones, such as in
libraries, archives, education, or quotations that are part of other
work. In the system envisioned here, the entity not paying would be
different, the government rather than the individual, but the principle
should be the same. And the principle should explicitly date from the
time when fair use meant that people could use what they had. The
current push by content owners to turn copyright into an excuse to
extract rent every time anyone touches their "property" is an attempt to
charge whatever the market will bear. It has no relation to a fair price
or to fair use.
I'd like to spend some time on a topic not usually included in
discussions of creativity or intellectual "property." The government
doesn't only administer the rights involved, it's also a major customer
for the products. Under the current system where patents grant a
monopoly as well as rights to royalties, that can mean the government in
effect becomes a factor in creating monopolies as well as becoming one
of the trapped customers. Neither of those is acceptable for an entity
that must treat all citizens equally. I assume the situation would not
arise in a system that requires compulsory licensing. If it nonetheless
should develop, it's clear that it has to be stopped. The government
should make the explicit effort in its purchasing to buy from different
companies, if their products are of similar quality. The most efficient
way to do that is probably not to have preferred vendors at all, but to
leave buying decisions up to individual bureaucrats subject to audit, as
always, and random reviews for conflict of interest. The primary
criterion, of course, is getting good value for the taxpayers' money.
The mandate to spread the government's custom brings up a tangential
aspect of compulsory licensing. As tools grow increasingly complicated,
user interfaces become more and more of an issue. People don't generally
think of interfaces as a tool of monopoly, but they can be, and they can
be the most effective one of all. There's nothing we prize more than the
time and effort we have to put into learning something. If using a
competitor's product means learning a new way to do something, then it
won't be used, even if the result is better. Just look at the
impossibility of getting the world to use anything but qwerty keyboards.
Inventing a new and improved interface is a real contribution and a real
invention. But compulsory licensing has to explicitly include the right
to use that interface wherever appropriate. The relevance to the
government's situation is that the requirement to avoid favoritism among
vendors does not have to mean reduced efficiency of the workers. The
desired interface could be ordered with any product, since the two must
be independent of each other.
That leads to the further implication that promotion of competition
requires rules that ensure interoperability. It's essential at all
levels: in simple hardware such as electrical plugs (at both the wall
/and appliance/ ends), in machines where drivers for different hardware
must interface with all software, and for users where people must be
able to take the interface they're comfortable with to any application
or device of their choosing. That would require an effective
standards-setting process whose primary mission, in an equitable
society, would be
the convenience of and least expense to the most users.
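To illustrate the separation I have in mind, here is a toy sketch (all
names invented): a published standard that every conforming application
implements, so that a user's familiar interface can drive any vendor's
product.

    # Toy illustration: a published standard any application must
    # implement, so users can bring the interface they already know
    # to any vendor's product.

    from typing import Protocol

    class TextEditorStandard(Protocol):
        def insert(self, text: str) -> None: ...
        def contents(self) -> str: ...

    class VendorAEditor:
        def __init__(self):
            self._buf = []
        def insert(self, text):
            self._buf.append(text)
        def contents(self):
            return "".join(self._buf)

    def familiar_interface(editor: TextEditorStandard):
        # The user's preferred interface works with ANY conforming app.
        editor.insert("hello, world")
        print(editor.contents())

    familiar_interface(VendorAEditor())   # works; so would VendorB's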
The requirement for interoperability does not, by itself, preclude
closed and secret processes. The government itself, however, has to
operate on stricter rules because it cannot be in the position of
relying more on some citizens than others. The government itself must be
transparent and equally accessible to all, which means the tools it uses
have to share those characteristics. Unless they do, there's a big and
unacceptable loophole in the transparency requirement. There's nothing
to stop the government from using proprietary tools, but they must be
open. Closed-source hardware, software, or other tools have no legitimate
place in government offices.
Archiving is another function where interoperability and transparency
have not been issues heretofore. Librarians preserving books for
posterity might have had to worry about physical preservation, and the
time and expense of transcription, but they never had to worry about
losing the ability to read the books themselves. The words wouldn't turn
into a jumble of unrecognizable characters unless the language itself
was lost. However, through the miracles of modern technology, we're now
in a position where the ability to read something written a mere twenty
years ago depends on the continued existence of a company.
The problem is currently most evident in computer games, where it
doesn't worry most people since games are considered worthless. Some
games from the early history of computing are already unusable because
their programming was closed and the companies who held the secret of it
have vanished. Whatever one's opinion of the games themselves, they're
the canaries in the coal mine who are showing us the future of much
larger and weightier things.
It is not right for work to be lost because of a rights holder's desire
for secrecy. Right to a patent or copyright does not include the
privilege of destroying the work made when using the tool. How best to
implement that limitation in practice would need to be worked out. Maybe
it should be a requirement to deposit all necessary information with the
government archivist whenever a product starts being used by more than
some tiny percentage of the population. Maybe some other method would be
more effective. There do have to be procedures in place to ensure that
work isn't lost simply because the knowledge of how to read it was once
kept secret.
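I can only guess at the shape such a deposit might take. A toy record,
with every field invented:

    # Hypothetical deposit with a government archivist: enough
    # information to read a format even if its vendor vanishes.

    format_deposit = {
        "format_name": "ExampleDoc",                   # invented
        "version": "2.1",
        "vendor": "Example Corp",
        "trigger": "usage above 0.5% of population",   # invented rule
        "contents": [
            "full byte-level format specification",
            "reference decoder source code",
            "sample files with known-good renderings",
        ],
    }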
- + -
Getting the rewards for creativity right is probably more important in
an equitable society than in the kinds we currently have. In a world of
peace and equitable distribution of wealth and time, people's focus is
likely to be on finding ways to amuse themselves. For many people that
means having fun in the usual sense, playing sports, engaging in social
life, and the like. But there's a large minority for whom fun has a more
extensive definition. It means learning and doing, as well as being
amused. If the society as a whole facilitates that activity, and if it's
justly rewarded, that'll lead to a beneficial cycle of innovations and
more satisfying lives for all citizens.
+ + +
Afterword
This exploration of what equal rights means in our world shows mainly
how far we've drifted away from that goal. Many people set course for it
during and after the Enlightenment in the 1700s, but in recent
generations there's been the assumption that all the hard work has been
done.
Not so. Autopilot isn't working. It's past time to get back on track.
I see the future in grayish terms. We're not doing anywhere near enough
to avoid disaster, so we'll walk into it. We're headed for a dark age
and who knows how long it will last. That depends on how bad the
environmental damage will be and whether we react to the trouble by
pulling together or falling apart. It could be many decades, or it could
be centuries. If the worst case climate change comes to pass, it could
be millennia.
After the death of earlier empires, what's emerged from the rubble has
been better. There's no reason to assume it will be different this time.
So, even though I'm not enough of a seer to know what lies on the other
side of trouble, I'm pretty sure it'll feel like waking from a bad dream.
I've set these thoughts down in the hope that they can help toward
bridging the dark time. Maybe they'll be lost too soon to do any good,
but this is all I can do. I don't have the intestinal fortitude to do
much besides think.
This work is dedicated to all of those who have more of what it takes
than I do, who'll carry us through the time of trouble to the other side.
+ + + + +