Myodicy, Issue 21, June 2004

Bureaucratized Knowledge:
on Practical Epistemology

by Theodore Plantinga


"I know that my Redeemer liveth." These words are familiar to a great many people because Handel immortalized them in his Messiah. What some do not realize is that he did not make them up: he got them from the Book of Job (see Chapter 19, verse 25).

I often use Job's famous statement as an example in philosophy lectures. Job's confidence regarding his "redeemer" is no longer permitted us today: a number of significant thinkers have decided that we can no longer "know" anything about matters they characterize as "religious." And so we may not sing along with Handel -- or, if we do so anyway, we must be sure we do not mean it. (Is there a way to sing with one's tongue in one's cheek?)

Nowadays, so-called religious claims fall within the domain of belief -- or "mere belief," as I am inclined to call it. Perhaps people used to be utterly confident of such things as Job's thrilling affirmation in the face of almost unspeakable misery and torment, but nowadays they are strongly discouraged from saying so. Such, at least, is the prevailing wisdom.

When I mention Handel's affirmation in class, it gives me an opportunity to make a point about the philosophy of Immanuel Kant (1724-1804), who is famous for having said: "I have therefore found it necessary to deny knowledge in order to make room for faith." [NOTE 1] It is not clear to me whether the far-sighted Kant saw where his much-quoted statement would eventually lead, but his programmatic claim did help to lay the foundation for a widespread separation between, on the one hand, a domain of knowledge that is thought to be public and objective and scientific, and on the other hand, a domain of belief or faith (Glaube was Kant's German term) that is regarded as subjective and therefore widely considered inherently shaky and unreliable.

As we ponder Kant's input and its impact, it is worth noting that what certain Christians lament when they reflect on the aftermath of Kant's philosophy of religion and consider its great historical influence is precisely what is celebrated by a great many people in our culture. In other words, those who rejoice in what Kant has wrought take it that he bequeathed us a masterful modus vivendi that keeps religion in its proper (limited) place. In our new century, which looks back upon the horrors of 9/11, many people are more convinced than ever that religion must be "kept firmly in its place." Since they no longer have reason to fear the specter of Communism, religious fanaticism (sometimes called fundamentalism) appears to them to be the greatest of all destabilizing factors that threaten to upset the human applecart. And so, if you hear someone affirming that he knows -- as opposed to merely believes -- that his Redeemer lives, you may be dealing with a fundamentalist -- in which case you had better watch out!

Many Christians -- along with persons of faith rooted in other traditions -- have little awareness of Kant's philosophy of religion and therefore do not realize what is at stake in this discussion. Indeed, many have become so enmeshed in the presuppositions underlying contemporary ways of speaking that they would be little bothered by an instruction to the effect that they must now say that they believe that their Redeemer lives -- as opposed to knowing it.

On the other hand, there are still some who feel they have been stripped of something precious. They are aware that things have changed greatly during the last hundred years or so, even if some of the changes were so gradual that we were not aware of them as they came over us. But now it is as though we are waking up to discover that many of the things we used to do freely are no longer permitted us. Life has grown more constricted. Many things that once seemed quite ordinary, to the point that we took them for granted, have become bureaucratized: for example, we scarcely dare deal with vermin on our property, nor do we burn the leaves that fall from our trees. New government regulations and expectations intrude into domains that we used to regard as private. And while Handel's "I know that my Redeemer liveth" is still permitted in an oratorio and in the context of a church service, we gradually grow nervous about what new restrictions may be coming. Parliaments and legislatures pass bills intended to stamp out what are called "hate crimes," and churches nervously react to them as new restrictions on their activities -- and even on their speech.


Kant is not the only major philosopher whose thinking needs to be considered in this discussion. Also worthy of consideration is David Hume (1711-76), who is widely recognized among Christian theologians as an opponent of our faith, whereas Kant is thought by many to be sympathetic, or at least helpful. Hume was the greatest thinker in the tradition known as empiricism, which ostensibly proposed to ground all knowledge in the givens of sensory experience. The result was a gentle form of skepticism in which we were compelled to admit that we as human beings do not know nearly as many things as we once thought we did.

Hume is much misunderstood on this point and was not nearly so opposed to certain traditional Christian notions as some of his opponents feared. This observation applies especially to what he had to say on the subject of miracles. It is important to remember that in Hume's ontology, any succession of events is possible in principle. The issue is not whether such-and-such an event (an alleged miracle) could possibly have occurred but whether it would be rational to believe someone's report of such an event's having occurred. For example, let's say that you are examining an historical source in the process of research aimed at the composition of a historical narrative. You come upon a miracle report: can you take it at face value? Did the axehead of II Kings 6 really rise to the surface of the water? [NOTE 2] The author of the report about the floating axehead was probably either deluded or deceived, and so it would be prudent to discount his report. But there is no way to prove that such a thing could never have happened.

If Hume was no village atheist setting out to prove that certain established verities of the Christian tradition are flat-out falsehoods, neither did he give much comfort or encouragement to those who are eager to "prove" various truths of the Christian faith in old-fashioned ways apart from revelation. If we cannot know for certain that the sun will rise tomorrow or that the kettle will boil the next time we put some water on for tea, we can hardly be expected to know that our Redeemer is alive and well.

Hume also had a lively interest in history, politics and ethics. He did not intend that we as human beings should lead lives of irresolution; rather, he was confident that if we followed our normal propensities and sentiments, we would for the most part wind up engaging in what he would regard as good conduct. He cast aside the preachments of his youth about the desperate depravity of man and his inclination toward all manner of wickedness, which he imbibed in many, many long church services in what North Americans would call the Presbyterian Church. As a result he took a rather cheerful approach to life, as is evident especially from his attitude in the short "Autobiography" he composed when he knew he was dying of a bowel disorder. [NOTE 3]

But if Hume was not a great proponent of knowledge, and if he did not think that we possess much in the way of knowledge in the proper sense of the term, he did recommend belief. Still, he insisted on a nuanced approach to belief. One of his most important -- but somewhat overlooked -- statements is his claim to the effect that a wise man proportions his belief to the evidence. [NOTE 4] The suggestion is clearly that we are entitled to have quite a range of beliefs, but that we should not hold them all with equal fervor. Some of our beliefs are but poorly supported by evidence and count as not much more than opinions and hunches. Others would come very close to what more traditional thinkers would call knowledge. The wise man knows how to distinguish in this regard.


If one were to base an "ethics of belief" on Hume, one would make much of the venerable category of "evidence." It would then be essential to ascertain just what evidence one might have for this or that pet theory or assumption. But in the centuries that have passed since the death of Hume, we seem to have grown less confident regarding evidence. The many critics of positivism remind us that the so-called facts on which we would normally be expected to base our knowledge are "interpreted facts," that is to say, facts that are embedded in theoretical frameworks or bodies of assumptions. Especially in this age of postmodernism, we have become suspicious of science and old-fashioned epistemology. The domain of evidence has therefore shrunk in importance, and a new one, that of procedure, has come to the fore. It is this new domain of procedure, which has significant consequences for practical epistemology, that I propose to explore in this essay.

I'm not sure that W.K. Clifford (1845-79) would have understood the shift if he had lived to see it. Now, Clifford is popular with philosophy instructors because he represents a wonderful example of the old epistemology based strictly on evidence. His famous conclusion bears repeating here: "It is wrong, always, everywhere, and for any one, to believe anything upon insufficient evidence." He makes this affirmation in an essay entitled "The Ethics of Belief," which he introduces with a little tale about a ship-owner who allows his ship carrying poor people on their way to a new life in North America to sink in the middle of the Atlantic because he had acquired a comfortable conviction about the ship's seaworthiness and did not bother having it thoroughly checked out and upgraded so as to be made ready for the journey. [NOTE 5]

The classic opponent of Clifford is William James (1842-1910), who speaks for the pragmatist tradition. James argues that the ethics of belief which Clifford advocates is unrealistic: No one could live by such a creed. It is one thing to follow very stringent rules as to what one accepts as true while working in the natural science laboratory (no "jumping to conclusions" permitted!), but it is quite another thing to apply such a creed in everyday life when events come rushing at us. James is convinced that Clifford's approach will not do and so he develops an alternative practical epistemology in his famous essay "The Will to Believe" and in various other works. [NOTE 6] Indeed, the tradition of which James is a major spokesman is proud to characterize itself as pragmatism. The very term puts a question mark behind Clifford's cautiousness. Clearly, James and company understood the issue well.

I sometimes wonder what James would make of the emphasis on procedure that is creeping into practical epistemology today. I am inclined to suppose that parallel arguments could be developed against it. While I do not claim the backing of James for the critical remarks later in this essay, I do confess to being inspired in part by his reaction to Clifford.


A contemporary Clifford who had thrown evidence overboard and adopted procedure as the key to his thinking would have to say: It is wrong -- always and everywhere -- to believe any conclusion that has not been arrived at by following the proper procedure, a procedure that involves one's fellows, for it is a plurality of human beings that must together take responsibility for the judgment that has been reached. In other words: Don't jump the gun. Among more recent thinkers I would point especially to Ivan Illich as likely to dissent from this view. Illich has a keen sense of how the growth of government and organizations has disabled and undermined the individual and taken away his confidence that he can deal with life's challenges on his own. If we are always dependent on procedures and on the input of our fellows, we can no longer act on our own.

The changes in practical epistemology of which I have been speaking manifest themselves in many a workplace. Among them is my own, which is the university classroom. Because of the changes mentioned above, the way professors "police the classroom" is also changing and must continue to change. Indeed, it was a round of such changes at my own institution, Redeemer University College, that triggered certain reflections in my own mind and thereby led to the writing of this essay.

Yet I would not wish these reflections to be considered primarily a chapter in a debate that is specific to Redeemer. I believe they have application to many other institutions as well and to human life in general. The assumptions and consequences of practical epistemology are always with us, regardless of whether we have ever studied philosophy on a formal level.

It is now time for an example. One of the unpleasant realities of a professor's life is that he may have to deal with instances of plagiarism. What used to happen is that a professor would ascertain that plagiarism had taken place and would take punitive action as called for by the academic regulations of the institution in which he taught. If it was a question of a term paper or an essay having been plagiarized, the student would probably receive an F on the assignment and be given a verbal reprimand as well. It was also possible that the offense would need to be reported to some authority within the university, making it possible to check whether the student in question had committed plagiarism in other classes as well, in which case more stringent punishment would be called for. But the whole process began with the professor ascertaining what was the case, namely, that the student had stolen or "borrowed" ideas or language without acknowledging what he had done, passing it off as his own work.

Is such an epistemological situation strictly parallel to Job's claim "I know that my Redeemer liveth" (I know that my student plagiarized)? Or would the professor be on shakier ground than Job? Traditional university practice has by no means overlooked the fallibility of professors in these matters, and so it was presupposed that in some cases the student would appeal the professor's finding to somebody or other. Virtually all colleges and universities had some sort of provision or mechanism in place to deal with disputes about plagiarism. And if the appeal was denied, it might even be possible to make a second appeal.

But it must be remembered that in today's intellectual climate, we have postmodernism always at our elbow, questioning the motivation and integrity of the human subject making a cognitive claim (of course there are not supposed to be any subjects). In such a climate it has become credible to argue that no professor can simply ascertain that plagiarism has taken place. Still, to reach such a conclusion is not yet to despair of ever detecting such an offense as plagiarism. What we are now told is that only a body or committee or external authority, one that is detached from the original situation, would be competent to make such a judgment. What this means for institutions that have translated such thinking into policy is that professors may no longer claim to have caught a student in the act of plagiarism: what they are told to do instead is lay a charge of plagiarism, which is then to be brought to some other body or office or official to be dealt with. If the other body or official judges that plagiarism has in fact taken place and imposes a penalty, there is still the possibility of an appeal, and perhaps a second appeal. It is by no means unknown to bring a lawyer into the picture in an effort to move the whole argument into the courts.

Calvinistic institutions are also falling into line with such thinking. While they are no friends of postmodernism, their doctrine of total depravity does tend to cast a shadow of distrust over the individual professor. Might he have strange motives of which he is not even aware that drive his distrust of the student whom he suspects of plagiarizing? Is it appropriate to allow him to judge the student? Can he serve as both prosecutor and judge? If we have reasons for doubt about such matters, it might make sense to have some other body or official looking over his shoulder, a body that could even be asked to make the primary judgment whether plagiarism has taken place.

Of course one might wonder where all these appeals will end. The most significant penalty imposed by courts here on earth is the death penalty, and it is not imposed in democratic countries until a considerable number of appeal opportunities have been made available to the condemned prisoner. After all, a mistake may have been made at some lower level, and we surely would not want to execute an innocent man on the basis of a mistake someone made. And so one of the most common arguments against the death penalty -- whether in general or in its application to a particular accused person -- is sheer skepticism. Ultimately, the skeptical argument would leave us thinking that the death penalty can never be applied. I have written about this matter in my essay "You Never Can Tell."

It seems to me that however many types of appeals are permitted, the end of the appeal process must always come across to the person launching the appeals as arbitrary. If there are only interpretations and then interpretations of interpretations (postmodernism), how can anyone render a final and definitive judgment in a legal proceeding? In other words, why couldn't the appeal bodies and the procedures which they must follow be subject to the same corruption that postmodernism and Calvinism claim to find in the individual? In what sense is the "supreme court" of the land supreme? Once this brand of moral skepticism is injected into the discussion, how can we ever banish it?

There seems to be no other answer to this question than to admit openly that we have chosen to put our faith in procedures. A conclusion derived from a procedure is always to be preferred to a judgment arrived at by an individual. [NOTE 7]

In the background to such thinking is the secularization process. Our culture used to be broadly Christian -- or at least theistic. And so it used to be assumed that the true view of any situation here on earth would be the one in the mind of God. Likewise, anything in the accepted revelation of God that had a bearing on human affairs was accorded the status of unshaken truth. But when God was officially banished from the public discourse, which entailed emptying out the public square so that people started referring to it as "naked" (Richard John Neuhaus), something had to take his place. The successor to God was the committee, that is to say, a body of people working together. And people working together need some kind of method or set of rules -- in short, a procedure. Thus the notion of the procedure as the road to truth was gradually adopted. Therefore, if someone complains about appeals being exhausted and sees something arbitrary here (why not appeal a death sentence ad infinitum?), there is no other answer to give him than that there is no other game in town.


The developments I have been discussing are not entirely new in our culture. The notion of "official knowledge" has been with us for quite some time. We see it embodied in the domain of sports. In a number of competitive sports, there are usually officials or referees at higher levels of competition whose function it is to ascertain and rule "officially" what has happened, especially when it comes to scoring points. While it is possible to play such sports as North American football or tennis without officials, we have come to rely on them: these sports demand officials if they are to be played at the very highest levels, and so do many others.

In a football game, it may be quite evident to the spectators that a touchdown has been scored because someone ran across the goal line unopposed while carrying the ball, or perhaps because someone caught a pass in the end zone. Even so, in such a case an official declares that a touchdown has been scored. The spectators, presumably, pay no attention to the signal from the official. But there may also be a play in which a ball-carrier is tackled on what appears to be the goal line. Did he make enough forward progress to have advanced the ball into the end zone, thereby scoring a touchdown? Spectators may well disagree, but it is a matter for the officials to decide. Likewise, in tennis it may be evident that a ball was in bounds or out of bounds, but in some cases it is very hard to tell: again, spectators may disagree. In such cases, the declaration by a sports official must be accepted, although it could conceivably be appealed. In any case, scoring and "out of bounds" rulings in such settings involve "official knowledge."

Our society also presupposes "official knowledge" in the case of the death of a human being. If a pet expires in the family home, almost any member of the family can see what has taken place and can take appropriate steps to dispose of the remains. But if a human being dies in the family home, that person is not officially dead until someone who is legally competent to make a declaration to that effect comes to the scene and pronounces him or her dead, e.g. a medical doctor or the local coroner. Before such a thing happens, no steps may be taken to dispose of the remains. Of course there are emergency situations in which a person may be forced to act on his belief -- I would like to call it knowledge -- that someone in his vicinity has just died, but these are exceptions that prove the rule. And so, in this domain, we have "official knowledge."

Perhaps some of these requirements will eventually be extended to the death of pets. The death of animals in the barn and the proper disposal of their remains are already becoming more complicated from a legal point of view, partly because of the hazards posed by "mad cow disease."


The primary and longest-standing example of bureaucratized knowledge is the courtroom. And it is in the courtroom setting that a great many people sense the tension between "official knowledge" and common sense. Indeed, in certain situations people are outraged -- no less a term will do -- when they see that what some individual knows on a common-sense level and declares to be the case is not accepted as reality or truth by the courts. We are told that certain procedures must be carried out before a declaration to such-and-such an effect can be made. For example, an individual such as a police investigator may be absolutely certain who committed the murder about which the public is inflamed, but officially and corporately "we" do not yet know. We will only know who committed the murder if and when a trial has taken place and a guilty verdict has been reached. But for all sorts of reasons, the trial may not take place for a couple of years as yet. And so, in the meantime, we live in the tension of pretending not to know what we actually do know. Of course, even when the verdict has come in, an appeal is still possible -- indeed, a number of appeals are likely. The general principle here is that we know (officially) only what we can reasonably conclude after following a prescribed procedure.

Before the procedure has run its course, we are told to use terms like "alleged" and "allegation." After all, everyone is innocent unless and until proven guilty. And so we find ourselves speaking of the alleged assailant and the alleged misdeed, and so forth. For the victim of a crime, who may well have had plenty of opportunity to assure himself of the identity of his assailant, this requirement can be bitter medicine indeed.

The battle between bureaucratized knowledge and common sense is also played out on the level of the bail hearing. If the bureaucratized knowledge model were applied strictly in all situations, the person arrested for murder would quickly be out on the street again. And some, indeed, are -- to the consternation of the friends and family of the person who was murdered. But in many cases, common sense asserts itself to the point that a handy procedure is followed in which it is determined that although the alleged murderer is not yet officially guilty, he is somehow a danger to society and/or likely to flee if not confined. For that reason, bail is either denied or set at such a high level as to become an impossibility. The "alleged assailant" is then confined until the trial takes place.


I have no quarrel as such with the procedures used in courts of law to investigate very serious criminal acts like murder. The observation I wish to make in this essay is that a model of bureaucratized knowledge which has long flourished in our court system is now invading other sectors of life. For some reason, legal procedures have gained our confidence in a time in which we have become rather dubious about the motives and intentions of individuals. And so, in various non-legal sectors of life, bureaucratized approaches to knowledge are being introduced and imposed without much question. Private judgment is suspect nowadays: the individual can no longer be trusted.

Concern about the abuse of children who are in the custody of adults other than their parents is also to be explained in this framework. And so we have seen many new regulations imposed in church settings; for example, it seems that a committee is now required to take a small child to the bathroom during a church-sponsored event. If a single individual takes care of this matter, it is almost as though we must presume that sexual abuse has taken place.

It is especially the liberal element in politics that favors such regulations and the thinking behind them. And so it should come as no surprise that it was a well-known liberal in politics, Senator Hillary Clinton, who borrowed an African proverb to the effect that it takes a village to raise a child and made it the basis and title of a successful book. Of course there is some truth to the proverb and to the ideal of getting a broader sector of the community involved in the care and supervision of children. But the concept of the village is roughly coterminous with the committee within an organizational world. And so all kinds of tasks -- the organizational equivalent of taking a child to the bathroom -- get performed by committees rather than individuals. Perhaps the thinking is that even if the individuals on committees are corrupt and up to no good, they can keep an eye on one another.


What are we to make of these developments in our culture? Are we doomed to turn into a society of lawyers and quasi-lawyers? Is there any principial reason for resisting such trends? I do believe there is. In the balance of this essay I will offer some reasons for resisting, in the hope that we may yet be persuaded to stick to more traditional patterns.

Although there are some practical difficulties of which William James would undoubtedly make a great deal, my primary objection is quite general -- indeed, it is philosophical in nature. I believe that our society owes much of its success to our confidence in individuals and our reliance upon their judgment and integrity. In this regard, I number myself among the conservatives in politics and am distrustful of left-wing approaches. It is ironic that in the decade or two since the decline and collapse of Communism, we have started to move in a collective or corporative direction by distrusting the individual and enlisting committees and villages for all sorts of tasks.

More specifically, we need to encourage individuals to form their own judgment about significant matters. Almost everyone sees this point: otherwise, why would we lament low voter turnouts in democratic elections? If we are to be successful in this regard, we must resist the widespread pressure in schools to impose ever heavier doses of group-work upon students as a preparation for what happens in life -- when one finds oneself part of the village or committee carrying out some small task.

This challenge applies especially in the epistemological domain. Just as we must learn to take individual and personal responsibility for our deeds, we must learn to make calm, sound and settled judgments upon all sorts of matters in life, with an eye to taking action on the conclusions we have reached. Only if we manage this feat will the disabling tendency which Ivan Illich fears be reversed, so that individuals again become competent in their own eyes, no longer to be undercut by the rhetoric and public perception of a group-oriented society that looks to procedures as the guarantee of truth.

In my introductory philosophy class I supply a term that students can use in an effort to identify with the challenge I place before them: I encourage them to become "epistemological Daniels." I explain to them that when I was a child in Sunday school, we were taught a lively song that still runs through my head: "Dare to be a Daniel! / Dare to stand alone! / Dare to have a purpose firm, / dare to make it known!" I then relate the notion of an "epistemological Daniel" to the minority status of Christians in our society. Because being a Christian nowadays entails holding different views on certain significant issues, we must develop the courage and ability during the years of our upbringing and education to stick to our guns, so to speak. In short, it is important to learn to stand alone in epistemological respects and to accept the consequences. Daniel and his friends were willing to risk the lions' den; can we not risk a little ridicule and rejection?


In taking such a step philosophically, we are also engaged in the task of defending common sense -- and not just in the meaning which G.E. Moore (1873-1958) gave to this term in his essay bearing such a title. [NOTE 8] There is more to the kind of defense of common sense I have in mind than simply accepting the givens of the senses under ordinary circumstances. This is an area in which Christian philosophers of various stripes can agree.

In his plea for common sense, Moore was influenced by Cook Wilson (1849-1915), through whom we eventually get directed back to the figure of Thomas Reid (1710-96), that antagonist of Hume's who has recently been resurrected by Nicholas Wolterstorff and Alvin Plantinga as a seminal source of inspiration and ideas for Christian philosophy in our time. On the continental side of the philosophical divide, I would point to Herman Dooyeweerd (1894-1977) and his defense of what he calls "naive experience."

In common sense and naive experience, we have a channel for God to address the human individual as he makes normative choices in everyday life. Some philosophers would speak here of natural law and of "things we can't not know" (Jay Budziszewski). [NOTE 9] The individual must not only be able to ascertain what is the case but must also be able to intuit or sense what is right and what is wrong. To take away this ability -- or to declare that it does not exist, which is more properly what the philosophy current in our day is doing -- is to take a dangerous and disabling step.


Secondly, these issues also need some analysis from an economic point of view. In this essay I do not mean to denigrate all group activities aimed at ascertaining what is the case or what ought to be done. Neither do I deny that in some situations a decision reached by a group is likely to be superior to one reached by an individual; in other words, I accept the usual rationale for assigning certain decisions and tasks to committees. But even when all this is admitted, it is still a question whether we can afford group decision-making in all areas and situations. There is a wonderful increase in efficiency when individuals are entrusted with ascertaining what is the case and taking action on their own. As universities discovered in the 1960s and 1970s, the fully democratized institution is a luxury we cannot afford -- it is too costly in terms of both time and money.


Thirdly, these considerations invite reflection on the proper role of policemen (in the literal sense) in our society. Our policemen are much criticized; for example, they get accused of favoring certain racial groups over others, or of being open to bribes, or of participating in some of the very same activities that they are supposed to be stamping out, such as trading in drugs. One response to such accusations has been to tie the hands of the police with more and more procedures. And so, on a general level, the police find themselves in something of the same position as the university professor who is trying to stamp out plagiarism among his students or is trying to keep the students in his class from cheating on a test. Can the policeman or the professor actually make a finding of wrongdoing and take action on the spot, or is he only authorized to gather evidence and bring an accusation to a higher body?

It is not clear to me exactly what powers the policeman on the beat still possesses nowadays, but I do sense a general erosion in this area. It is reflected in the complaint one sometimes hears to the effect that the police no longer protect people who have reason to believe that their life is in danger ("We have nothing to go on ..."). In effect, what such people are sometimes told is: We cannot do anything to protect you right now, but if and when you are murdered, you can rest assured that we will investigate your murder thoroughly and do all we can to apprehend the murderer and bring him to justice. Gradually, policemen are turning bureaucratic and are acting more like investigators than guards. Hence there has been quite some growth in private security forces.

A century or so ago, the policeman meted out ready justice on the street, with a billy club in hand. Today many people shudder at the thought (and there certainly were excesses to shudder about!), just as they shudder at the thought of parents dispensing corporal punishment to their children. One result is that actual wrongdoers are freer than they ought to be in terms of what they know they can get away with. There is a perpetual tension between civil liberties and crime control. Yet it should be admitted that society's response to the events of 9/11 has meant that the pendulum is now swinging back in the direction of the police.


The professor who "polices the classroom" (a metaphor, I suppose, but a useful one) is also under strictures here. In many universities, he no longer has the power to pounce on a student whom he catches cheating on a test. What he must do is report to the student that he has evidence that the student has been cheating and declare that he intends to present the evidence to the proper official or committee at the university. Then, of course, a procedure follows. I regard this approach to the task of stamping out cheating in the classroom as a significant weakening of the moral authority of the professor, just as the policeman on the beat has of late been weakened in his moral authority.

It is my own conviction that we need to "police" quite a few sectors of life -- and not just the beat walked by the local constable. But I don't believe we can bureaucratize all such "policing." Even if policemen walk the beat in pairs, certain tense situations will lead to their splitting off from one another so that they are temporarily operating on their own.

We need to recover a robust notion of the integrity of the role of the individual policeman, and in similar fashion we need to assign appropriate authority and trust to professors who "police" the classroom. Likewise, teachers who deal with discipline situations daily in elementary and secondary schools need to know that they have the authority to take significant action in the face of misbehavior. The policeman may sometimes have to take away someone's car keys or even impound his car, thereby clearly making a judgment on the spot; the teacher sometimes also needs to take action along the lines of taking something away. But some schools nowadays use security officials, who are not teachers in either training or experience, to deal with the more blatant forms of misbehavior.

My experience as a teacher in a Christian elementary school was decisive for my thinking in this regard. During my twenties, when I was in the process of finishing my Ph.D. thesis, I got a job teaching in such a school even though I did not have teacher credentials. Of course I took a bit of informal training and advice from the principal before school opened. Among the things he told me was that discipline as applied to children of elementary-school age (I was teaching a split grades 5 and 6 class) needs to be swift and decisive -- also in the sense that once it is done, there must be no lingering after-effects. The child who has been disciplined should not be getting dirty looks from the teacher all day, nor should he have the feeling that he must now spend quite some time in the doghouse. And he must definitely not be left thinking he has now fallen permanently out of favor. Christians are supposed to believe in repentance and restoration. I did my best to apply the principal's advice in the classroom, and I also found it to be effective when I became a parent and had to discipline my own children.

Time is very important if discipline and punishment are to be effective, but when our approach to knowledge is so bureaucratized that we need to involve the village or even a mere committee in ascertaining what is the case -- to say nothing of deciding what we're going to do about it -- time just goes out the window. And so it does not surprise me to hear that many juvenile wrongdoers seem to think that not much happens to you when you break the law. In a sense they are right: even if you are quickly arrested, there are no immediate consequences. A procedure is begun, and it takes quite some time. Much of the effectiveness of the punishment that may eventually be meted out is thereby lost. And so I would point out that the fine advice given me by my principal can only be implemented if we have a robust sense of the individual and his integrity and his capacity to ascertain what is the case and to judge it from a moral point of view, with an eye to taking quick corrective action.


In conclusion, I want to take my stand with Job and with the believer who loves the music of Handel in saying that I know that my redeemer liveth. I would further take my stand in a venerable Christian tradition by affirming that I know certain things that are written on the heart (what many have called "natural law"). That murder is wrong is not a matter for debate or prevarication -- likewise stealing.

There was a time some centuries ago when it was easy to make such affirmations, but today one almost has to be an "epistemological Daniel" to make such claims. And so, in conclusion, I will nominate myself an honorary Daniel -- even as I think of three Daniels who came to Redeemer to major in philosophy, where they found out that their name has some additional significance. Two of them have already graduated: Daniel Mullin and Daniel Van Minnen. One is still under my tutelage: Daniel Horton. To the three of them I dedicate this small essay. END


NOTE 1: Preface to the second edition of the Critique of Pure Reason, Norman Kemp Smith translation (New York: St. Martin's Press, 1968), Bxxx. "Ich musste also das Wissen aufheben, um zum Glauben Platz zu bekommen."

NOTE 2: "And when they came to the Jordan, they cut down trees. But as one was felling a log, his axe head fell into the water; and he cried out, `Alas, my master! It was borrowed.' Then the man of God said, `Where did it fall?' When he showed him the place, he cut off a stick, and threw it in there, and made the iron float. And he said, `Take it up.' So he reached out his hand and took it." [II Kings 6:4-7 RSV]

NOTE 3: This autobiography is reprinted in the Open Court edition of Hume's Enquiry Concerning Human Understanding (LaSalle, Illinois, 1966), see pp. 5-16.

NOTE 4: See the Open Court edition of the Enquiry, "Of Miracles," Part I, p. 122.

NOTE 5: His essay has been printed in many works, including his Lectures and Essays, second edition, ed. Leslie Stephen and Frederick Pollock (London & New York: Macmillan and Company, 1886). His "It is wrong ..." conclusion is to be found on p. 346. His parable comes a little earlier, on p. 339: "A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not over-well built at the first; that she had seen many seas and climes, and had often needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him to great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections. He said to himself that she had gone safely through so many voyages and weathered so many storms that it was idle to suppose that she would not come safely home from this trip also. He would put his trust in Providence, which could hardly fail to protect all these unhappy families that were leaving their fatherland to seek for better times elsewhere. He would dismiss from his mind all ungenerous suspicions about the honesty of builders and contractors. In such ways he acquired a sincere and comfortable conviction that his vessel was thoroughly safe and seaworthy; he watched her departure with a light heart, and benevolent wishes for the success of the exiles in their strange new home that was to be; and he got his insurance-money when she went down in mid-ocean and told no tales."

NOTE 6: See "The Will to Believe," in The Will to Believe and Other Essays in Popular Philosophy (New York: Dover Publications, 1956). See also his comments in "The Sentiment of Rationality," which appears in the same volume, pp. 96-7.

NOTE 7: On the "procedure" emphasis, see Michael J. Sandel, Democracy's Discontent: America in Search of a Public Philosophy (Cambridge and London: Belknap Press of Harvard University Press, 1996).

NOTE 8: See "A Defence of Common Sense," reprinted in various works, including Twentieth-Century Philosophy: The Analytic Tradition, ed. Morris Weitz (New York: Free Press, 1966), pp. 99-123.

NOTE 9: See What We Can't Not Know (Dallas: Spence Publishing, 2003).
