The New Yorker / The Critics
A Critic at Large


The social logic of Ivy League admissions.
Issue of 2005-10-10 | Posted 2005-10-03

I applied to college one evening, after dinner, in the fall of my senior year in high school. College applicants in Ontario, in those days, were given a single sheet of paper which listed all the universities in the province. It was my job to rank them in order of preference. Then I had to mail the sheet of paper to a central college-admissions office. The whole process probably took ten minutes. My school sent in my grades separately. I vaguely remember filling out a supplementary two-page form listing my interests and activities. There were no S.A.T. scores to worry about, because in Canada we didn’t have to take the S.A.T.s. I don’t know whether anyone wrote me a recommendation. I certainly never asked anyone to. Why would I? It wasn’t as if I were applying to a private club.
I put the University of Toronto first on my list, the University of Western Ontario second, and Queen’s University third. I was working off a set of brochures that I’d sent away for. My parents’ contribution consisted of my father’s agreeing to drive me one afternoon to the University of Toronto campus, where we visited the residential college I was most interested in. I walked around. My father poked his head into the admissions office, chatted with the admissions director, and—I imagine—either said a few short words about the talents of his son or (knowing my father) remarked on the loveliness of the delphiniums in the college flower beds. Then we had ice cream. I got in.
Am I a better or more successful person for having been accepted at the University of Toronto, as opposed to my second or third choice? It strikes me as a curious question. In Ontario, there wasn’t a strict hierarchy of colleges. There were several good ones and several better ones and a number of programs—like computer science at the University of Waterloo—that were world-class. But since all colleges were part of the same public system and tuition everywhere was the same (about a thousand dollars a year, in those days), and a B average in high school pretty much guaranteed you a spot in college, there wasn’t a sense that anything great was at stake in the choice of which college we attended. The issue was whether we attended college, and—most important—how seriously we took the experience once we got there. I thought everyone felt this way. You can imagine my confusion, then, when I first met someone who had gone to Harvard.
There was, first of all, that strange initial reluctance to talk about the matter of college at all—a glance downward, a shuffling of the feet, a mumbled mention of Cambridge. “Did you go to Harvard?” I would ask. I had just moved to the United States. I didn’t know the rules. An uncomfortable nod would follow. Don’t define me by my school, they seemed to be saying, which implied that their school actually could define them. And, of course, it did. Wherever there was one Harvard graduate, another lurked not far behind, ready to swap tales of late nights at the Hasty Pudding, or recount the intricacies of the college-application essay, or wonder out loud about the whereabouts of Prince So-and-So, who lived down the hall and whose family had a place in the South of France that you would not believe. In the novels they were writing, the precocious and sensitive protagonist always went to Harvard; if he was troubled, he dropped out of Harvard; in the end, he returned to Harvard to complete his senior thesis. Once, I attended a wedding of a Harvard alum in his fifties, at which the best man spoke of his college days with the groom as if neither could have accomplished anything of greater importance in the intervening thirty years. By the end, I half expected him to take off his shirt and proudly display the large crimson “H” tattooed on his chest. What is this “Harvard” of which you Americans speak so reverently?
In 1905, Harvard College adopted the College Entrance Examination Board tests as the principal basis for admission, which meant that virtually any academically gifted high-school senior who could afford a private college had a straightforward shot at attending. By 1908, the freshman class was seven per cent Jewish, nine per cent Catholic, and forty-five per cent from public schools, an astonishing transformation for a school that historically had been the preserve of the New England boarding-school complex known in the admissions world as St. Grottlesex.
As the sociologist Jerome Karabel writes in “The Chosen” (Houghton Mifflin; $28), his remarkable history of the admissions process at Harvard, Yale, and Princeton, that meritocratic spirit soon led to a crisis. The enrollment of Jews began to rise dramatically. By 1922, they made up more than a fifth of Harvard’s freshman class. The administration and alumni were up in arms. Jews were thought to be sickly and grasping, grade-grubbing and insular. They displaced the sons of wealthy Wasp alumni, which did not bode well for fund-raising. A. Lawrence Lowell, Harvard’s president in the nineteen-twenties, stated flatly that too many Jews would destroy the school: “The summer hotel that is ruined by admitting Jews meets its fate . . . because they drive away the Gentiles, and then after the Gentiles have left, they leave also.”
The difficult part, however, was coming up with a way of keeping Jews out, because as a group they were academically superior to everyone else. Lowell’s first idea—a quota limiting Jews to fifteen per cent of the student body—was roundly criticized. Lowell tried restricting the number of scholarships given to Jewish students, and made an effort to bring in students from public schools in the West, where there were fewer Jews. Neither strategy worked. Finally, Lowell—and his counterparts at Yale and Princeton—realized that if a definition of merit based on academic prowess was leading to the wrong kind of student, the solution was to change the definition of merit. Karabel argues that it was at this moment that the history and nature of the Ivy League took a significant turn.
The admissions office at Harvard became much more interested in the details of an applicant’s personal life. Lowell told his admissions officers to elicit information about the “character” of candidates from “persons who know the applicants well,” and so the letter of reference became mandatory. Harvard started asking applicants to provide a photograph. Candidates had to write personal essays, demonstrating their aptitude for leadership, and list their extracurricular activities. “Starting in the fall of 1922,” Karabel writes, “applicants were required to answer questions on ‘Race and Color,’ ‘Religious Preference,’ ‘Maiden Name of Mother,’ ‘Birthplace of Father,’ and ‘What change, if any, has been made since birth in your own name or that of your father? (Explain fully).’ ”
At Princeton, emissaries were sent to the major boarding schools, with instructions to rate potential candidates on a scale of 1 to 4, where 1 was “very desirable and apparently exceptional material from every point of view” and 4 was “undesirable from the point of view of character, and, therefore, to be excluded no matter what the results of the entrance examinations might be.” The personal interview became a key component of admissions in order, Karabel writes, “to ensure that ‘undesirables’ were identified and to assess important but subtle indicators of background and breeding such as speech, dress, deportment and physical appearance.” By 1933, the end of Lowell’s term, the percentage of Jews at Harvard was back down to fifteen per cent.
If this new admissions system seems familiar, that’s because it is essentially the same system that the Ivy League uses to this day. According to Karabel, Harvard, Yale, and Princeton didn’t abandon the elevation of character once the Jewish crisis passed. They institutionalized it.
Starting in 1953, Arthur Howe, Jr., spent a decade as the chair of admissions at Yale, and Karabel describes what happened under his guidance:
The admissions committee viewed evidence of “manliness” with particular enthusiasm. One boy gained admission despite an academic prediction of 70 because “there was apparently something manly and distinctive about him that had won over both his alumni and staff interviewers.” Another candidate, admitted despite his schoolwork being “mediocre in comparison with many others,” was accepted over an applicant with a much better record and higher exam scores because, as Howe put it, “we just thought he was more of a guy.” So preoccupied was Yale with the appearance of its students that the form used by alumni interviewers actually had a physical characteristics checklist through 1965. Each year, Yale carefully measured the height of entering freshmen, noting with pride the proportion of the class at six feet or more.
At Harvard, the key figure in that same period was Wilbur Bender, who, as the dean of admissions, had a preference for “the boy with some athletic interests and abilities, the boy with physical vigor and coordination and grace.” Bender, Karabel tells us, believed that if Harvard continued to suffer on the football field it would contribute to the school’s reputation as a place with “no college spirit, few good fellows, and no vigorous, healthy social life,” not to mention a “surfeit of ‘pansies,’ ‘decadent esthetes’ and ‘precious sophisticates.’ ” Bender concentrated on improving Harvard’s techniques for evaluating “intangibles” and, in particular, its “ability to detect homosexual tendencies and serious psychiatric problems.”
By the nineteen-sixties, Harvard’s admissions system had evolved into a series of complex algorithms. The school began by lumping all applicants into one of twenty-two dockets, according to their geographical origin. (There was one docket for Exeter and Andover, another for the eight Rocky Mountain states.) Information from interviews, references, and student essays was then used to grade each applicant on a scale of 1 to 6, along four dimensions: personal, academic, extracurricular, and athletic. Competition, critically, was within each docket, not between dockets, so there was no way for, say, the graduates of Bronx Science and Stuyvesant to shut out the graduates of Andover and Exeter. More important, academic achievement was just one of four dimensions, further diluting the value of pure intellectual accomplishment. Athletic ability, rather than falling under “extracurriculars,” got a category all to itself, which explains why, even now, recruited athletes have an acceptance rate to the Ivies at well over twice the rate of other students, despite S.A.T. scores that are on average more than a hundred points lower. And the most important category? That mysterious index of “personal” qualities. According to Harvard’s own analysis, the personal rating was a better predictor of admission than the academic rating. Those with a rank of 4 or worse on the personal scale had, in the nineteen-sixties, a rejection rate of ninety-eight per cent. Those with a personal rating of 1 had a rejection rate of 2.5 per cent. When the Office of Civil Rights at the federal education department investigated Harvard in the nineteen-eighties, they found handwritten notes scribbled in the margins of various candidates’ files. “This young woman could be one of the brightest applicants in the pool but there are several references to shyness,” read one. 
Another comment read, “Seems a tad frothy.” One application—and at this point you can almost hear it going to the bottom of the pile—was notated, “Short with big ears.”
Social scientists distinguish between what are known as treatment effects and selection effects. The Marine Corps, for instance, is largely a treatment-effect institution. It doesn’t have an enormous admissions office grading applicants along four separate dimensions of toughness and intelligence. It’s confident that the experience of undergoing Marine Corps basic training will turn you into a formidable soldier. A modelling agency, by contrast, is a selection-effect institution. You don’t become beautiful by signing up with an agency. You get signed up by an agency because you’re beautiful.
At the heart of the American obsession with the Ivy League is the belief that schools like Harvard provide the social and intellectual equivalent of Marine Corps basic training—that being taught by all those brilliant professors and meeting all those other motivated students and getting a degree with that powerful name on it will confer advantages that no local state university can provide. Fuelling the treatment-effect idea are studies showing that if you take two students with the same S.A.T. scores and grades, one of whom goes to a school like Harvard and one of whom goes to a less selective college, the Ivy Leaguer will make far more money ten or twenty years down the road.
The extraordinary emphasis the Ivy League places on admissions policies, though, makes it seem more like a modelling agency than like the Marine Corps, and, sure enough, the studies based on those two apparently equivalent students turn out to be flawed. How do we know that two students who have the same S.A.T. scores and grades really are equivalent? It’s quite possible that the student who goes to Harvard is more ambitious and energetic and personable than the student who wasn’t let in, and that those same intangibles are what account for his better career success. To assess the effect of the Ivies, it makes more sense to compare the student who got into a top school with the student who got into that same school but chose to go to a less selective one. Three years ago, the economists Alan Krueger and Stacy Dale published just such a study. And they found that when you compare apples and apples the income bonus from selective schools disappears.
“As a hypothetical example, take the University of Pennsylvania and Penn State, which are two schools a lot of students choose between,” Krueger said. “One is Ivy, one is a state school. Penn is much more highly selective. If you compare the students who go to those two schools, the ones who go to Penn have higher incomes. But let’s look at those who got into both types of schools, some of whom chose Penn and some of whom chose Penn State. Within that set it doesn’t seem to matter whether you go to the more selective school. Now, you would think that the more ambitious student is the one who would choose to go to Penn, and the ones choosing to go to Penn State might be a little less confident in their abilities or have a little lower family income, and both of those factors would point to people doing worse later on. But they don’t.”
Krueger says that there is one exception to this. Students from the very lowest economic strata do seem to benefit from going to an Ivy. For most students, though, the general rule seems to be that if you are a hardworking and intelligent person you’ll end up doing well regardless of where you went to school. You’ll make good contacts at Penn. But Penn State is big enough and diverse enough that you can make good contacts there, too. Having Penn on your résumé opens doors. But if you were good enough to get into Penn you’re good enough that those doors will open for you anyway. “I can see why families are really concerned about this,” Krueger went on. “The average graduate from a top school is making nearly a hundred and twenty thousand dollars a year, the average graduate from a moderately selective school is making ninety thousand dollars. That’s an enormous difference, and I can see why parents would fight to get their kids into the better school. But I think they are just assigning to the school a lot of what the student is bringing with him to the school.”
Bender was succeeded as the dean of admissions at Harvard by Fred Glimp, who, Karabel tells us, had a particular concern with academic underperformers. “Any class, no matter how able, will always have a bottom quarter,” Glimp once wrote. “What are the effects of the psychology of feeling average, even in a very able group? Are there identifiable types with the psychological or what-not tolerance to be ‘happy’ or to make the most of education while in the bottom quarter?” Glimp thought it was critical that the students who populated the lower rungs of every Harvard class weren’t so driven and ambitious that they would be disturbed by their status. “Thus the renowned (some would say notorious) Harvard admission practice known as the ‘happy-bottom-quarter’ policy was born,” Karabel writes.
It’s unclear whether or not Glimp found any students who fit that particular description. (He wondered, in a marvellously honest moment, whether the answer was “Harvard sons.”) But Glimp had the realism of the modelling scout. Glimp believed implicitly what Krueger and Dale later confirmed: that the character and performance of an academic class is determined, to a significant extent, at the point of admission; that if you want to graduate winners you have to admit winners; that if you want the bottom quarter of your class to succeed you have to find people capable of succeeding in the bottom quarter. Karabel is quite right, then, to see the events of the nineteen-twenties as the defining moment of the modern Ivy League. You are whom you admit in the élite-education business, and when Harvard changed whom it admitted, it changed Harvard. Was that change for the better or for the worse?
In the wake of the Jewish crisis, Harvard, Yale, and Princeton chose to adopt what might be called the “best graduates” approach to admissions. France’s École Normale Supérieure, Japan’s University of Tokyo, and most of the world’s other élite schools define their task as looking for the best students—that is, the applicants who will have the greatest academic success during their time in college. The Ivy League schools justified their emphasis on character and personality, however, by arguing that they were searching for the students who would have the greatest success after college. They were looking for leaders, and leadership, the officials of the Ivy League believed, was not a simple matter of academic brilliance. “Should our goal be to select a student body with the highest possible proportions of high-ranking students, or should it be to select, within a reasonably high range of academic ability, a student body with a certain variety of talents, qualities, attitudes, and backgrounds?” Wilbur Bender asked. To him, the answer was obvious. If you let in only the brilliant, then you produced bookworms and bench scientists: you ended up as socially irrelevant as the University of Chicago (an institution Harvard officials looked upon with a shudder). “Above a reasonably good level of mental ability, above that indicated by a 550-600 level of S.A.T. score,” Bender went on, “the only thing that matters in terms of future impact on, or contribution to, society is the degree of personal inner force an individual has.”
It’s easy to find fault with the best-graduates approach. We tend to think that intellectual achievement is the fairest and highest standard of merit. The Ivy League process, quite apart from its dubious origins, seems subjective and opaque. Why should personality and athletic ability matter so much? The notion that “the ability to throw, kick, or hit a ball is a legitimate criterion in determining who should be admitted to our greatest research universities,” Karabel writes, is “a proposition that would be considered laughable in most of the world’s countries.” At the same time that Harvard was constructing its byzantine admissions system, Hunter College Elementary School, in New York, required simply that applicants take an exam, and if they scored in the top fifty they got in. It’s hard to imagine a more objective and transparent procedure.
But what did Hunter achieve with that best-students model? In the nineteen-eighties, a handful of educational researchers surveyed the students who attended the elementary school between 1948 and 1960. This was a group with an average I.Q. of 157—three and a half standard deviations above the mean—who had been given what, by any measure, was one of the finest classroom experiences in the world. As graduates, though, they weren’t nearly as distinguished as they were expected to be. “Although most of our study participants are successful and fairly content with their lives and accomplishments,” the authors conclude, “there are no superstars . . . and only one or two familiar names.” The researchers spend a great deal of time trying to figure out why Hunter graduates are so disappointing, and end up sounding very much like Wilbur Bender. Being a smart child isn’t a terribly good predictor of success in later life, they conclude. “Non-intellective” factors—like motivation and social skills—probably matter more. Perhaps, the study suggests, “after noting the sacrifices involved in trying for national or world-class leadership in a field, H.C.E.S. graduates decided that the intelligent thing to do was to choose relatively happy and successful lives.” It is a wonderful thing, of course, for a school to turn out lots of relatively happy and successful graduates. But Harvard didn’t want lots of relatively happy and successful graduates. It wanted superstars, and Bender and his colleagues recognized that if this is your goal a best-students model isn’t enough.
Most élite law schools, to cite another example, follow a best-students model. That’s why they rely so heavily on the L.S.A.T. Yet there’s no reason to believe that a person’s L.S.A.T. scores have much relation to how good a lawyer he will be. In a recent research project funded by the Law School Admission Council, the Berkeley researchers Sheldon Zedeck and Marjorie Shultz identified twenty-six “competencies” that they think effective lawyering demands—among them practical judgment, passion and engagement, legal-research skills, questioning and interviewing skills, negotiation skills, stress management, and so on—and the L.S.A.T. picks up only a handful of them. A law school that wants to select the best possible lawyers has to use a very different admissions process from a law school that wants to select the best possible law students. And wouldn’t we prefer that at least some law schools try to select good lawyers instead of good law students?
This search for good lawyers, furthermore, is necessarily going to be subjective, because things like passion and engagement can’t be measured as precisely as academic proficiency. Subjectivity in the admissions process is not just an occasion for discrimination; it is also, in better times, the only means available for giving us the social outcome we want. The first black captain of the Yale football team was a man named Levi Jackson, who graduated in 1950. Jackson was a hugely popular figure on campus. He went on to be a top executive at Ford, and is credited with persuading the company to hire thousands of African-Americans after the 1967 riots. When Jackson was tapped for the exclusive secret society Skull and Bones, he joked, “If my name had been reversed, I never would have made it.” He had a point. The strategy of discretion that Yale had once used to exclude Jews was soon being used to include people like Levi Jackson.
In the 2001 book “The Game of Life,” James L. Shulman and William Bowen (a former president of Princeton) conducted an enormous statistical analysis on an issue that has become one of the most contentious in admissions: the special preferences given to recruited athletes at selective universities. Athletes, Shulman and Bowen demonstrate, have a large and growing advantage in admission over everyone else. At the same time, they have markedly lower G.P.A.s and S.A.T. scores than their peers. Over the past twenty years, their class rankings have steadily dropped, and they tend to segregate themselves in an “athletic culture” different from the culture of the rest of the college. Shulman and Bowen think the preference given to athletes by the Ivy League is shameful.
Halfway through the book, however, Shulman and Bowen present what they call a “surprising” finding. Male athletes, despite their lower S.A.T. scores and grades, and despite the fact that many of them are members of minorities and come from lower socioeconomic backgrounds than other students, turn out to earn a lot more than their peers. Apparently, athletes are far more likely to go into the high-paying financial-services sector, where they succeed because of their personality and psychological makeup. In what can only be described as a textbook example of burying the lead, Bowen and Shulman write:
One of these characteristics can be thought of as drive—a strong desire to succeed and unswerving determination to reach a goal, whether it be winning the next game or closing a sale. Similarly, athletes tend to be more energetic than the average person, which translates into an ability to work hard over long periods of time—to meet, for example, the workload demands placed on young people by an investment bank in the throes of analyzing a transaction. In addition, athletes are more likely than others to be highly competitive, gregarious and confident of their ability to work well in groups (on teams).
Shulman and Bowen would like to argue that the attitudes of selective colleges toward athletes are a perversion of the ideals of American élite education, but that’s because they misrepresent the actual ideals of American élite education. The Ivy League is perfectly happy to accept, among others, the kind of student who makes a lot of money after graduation. As the old saying goes, the definition of a well-rounded Yale graduate is someone who can roll all the way from New Haven to Wall Street.
I once had a conversation with someone who worked for an advertising agency that represented one of the big luxury automobile brands. He said that he was worried that his client’s new lower-priced line was being bought disproportionately by black women. He insisted that he did not mean this in a racist way. It was just a fact, he said. Black women would destroy the brand’s cachet. It was his job to protect his client from the attentions of the socially undesirable.
This is, in no small part, what Ivy League admissions directors do. They are in the luxury-brand-management business, and “The Chosen,” in the end, is a testament to just how well the brand managers in Cambridge, New Haven, and Princeton have done their job in the past seventy-five years. In the nineteen-twenties, when Harvard tried to figure out how many Jews they had on campus, the admissions office scoured student records and assigned each suspected Jew the designation j1 (for someone who was “conclusively Jewish”), j2 (where the “preponderance of evidence” pointed to Jewishness), or j3 (where Jewishness was a “possibility”). In the branding world, this is called customer segmentation. In the Second World War, as Yale faced plummeting enrollment and revenues, it continued to turn down qualified Jewish applicants. As Karabel writes, “In the language of sociology, Yale judged its symbolic capital to be even more precious than its economic capital.” No good brand manager would sacrifice reputation for short-term gain. The admissions directors at Harvard have always, similarly, been diligent about rewarding the children of graduates, or, as they are quaintly called, “legacies.” In the 1985-92 period, for instance, Harvard admitted children of alumni at a rate more than twice that of non-athlete, non-legacy applicants, despite the fact that, on virtually every one of the school’s magical ratings scales, legacies significantly lagged behind their peers. Karabel calls the practice “unmeritocratic at best and profoundly corrupt at worst,” but rewarding customer loyalty is what luxury brands do. Harvard wants good graduates, and part of their definition of a good graduate is someone who is a generous and loyal alumnus. And if you want generous and loyal alumni you have to reward them. Aren’t the tremendous resources provided to Harvard by its alumni part of the reason so many people want to go to Harvard in the first place?
The endless battle over admissions in the United States proceeds on the assumption that some great moral principle is at stake in the matter of whom schools like Harvard choose to let in—that those who are denied admission by the whims of the admissions office have somehow been harmed. If you are sick and a hospital shuts its doors to you, you are harmed. But a selective school is not a hospital, and those it turns away are not sick. Élite schools, like any luxury brand, are an aesthetic experience—an exquisitely constructed fantasy of what it means to belong to an élite—and they have always been mindful of what must be done to maintain that experience.
In the nineteen-eighties, when Harvard was accused of enforcing a secret quota on Asian admissions, its defense was that once you adjusted for the preferences given to the children of alumni and for the preferences given to athletes, Asians really weren’t being discriminated against. But you could sense Harvard’s exasperation that the issue was being raised at all. If Harvard had too many Asians, it wouldn’t be Harvard, just as Harvard wouldn’t be Harvard with too many Jews or pansies or parlor pinks or shy types or short people with big ears.

Charlotte's Webpage

Computers are dramatically altering the way your children learn and experience the world—and not for the better.

THOMAS EDISON WAS A GREAT INVENTOR but a lousy prognosticator. When he proclaimed in 1922 that the motion picture would replace textbooks in schools, he began a long string of spectacularly wrong predictions regarding the capacity of various technologies to revolutionize teaching. To date, none of them—from film to television—has lived up to the hype. Most were quickly relegated to the audiovisual closet. Even the computer, which is now a standard feature of most classrooms, has not been able to show a consistent record of improving education.
"There have been no advances over the past decade that can be confidently attributed to broader access to computers," said Stanford University professor of education Larry Cuban in 2001, summarizing the existing research on educational computing. "The link between test-score improvements and computer availability and use is even more contested." Part of the problem, Cuban pointed out, is that many computers simply go unused in the classroom. But more recent research, including a University of Munich study of 174,000 students in thirty-one countries, indicates that students who frequently use computers perform worse academically than those who use them rarely or not at all. Whether or not these assessments are the last word, it is clear that the computer has not fulfilled the promises made for it. Promoters of instructional technology have reverted to a much more modest claim—that the computer is just another tool: "it's what you do with it that counts." But this response ignores the ecological impact of technologies. Far from being neutral, they reconstitute all of the relationships in an environment, some for better and some for worse. Installing a computer lab in a school may mean that students have access to information they would never be able to get any other way, but it may also mean that children spend less time engaged in outdoor play, the art supply budget has to be cut, new security measures have to be employed, and Acceptable Use Agreements are needed to inform parents (for the first time in American educational history) that the school is not responsible for the material a child encounters while under its supervision.
The "just-a-tool" argument also ignores the fact that whenever we choose one learning activity over another, we are deciding what kinds of encounters with the world we value for our children, which in turn influences what they grow up to value. Computers tend to promote and support certain kinds of learning experiences, and devalue others. As technology critic Neil Postman has observed, "What we need to consider about computers has nothing to do with its efficiency as a teaching tool. We need to know in what ways it is altering our conception of learning."
If we look through that lens, I think we will see that educational computing is neither a revolution nor a passing fad, but a Faustian bargain. Children gain unprecedented power to control their external world, but at the cost of internal growth. During the two decades that I taught young people with and about digital technology, I came to realize that the power of computers can lead children into deadened, alienated, and manipulative relationships with the world, that children's increasingly pervasive use of computers jeopardizes their ability to belong fully to human and biological communities—ultimately jeopardizing the communities themselves.
Several years ago I participated in a panel discussion on Iowa Public Television that focused on some "best practices" for computers in the classroom. Early in the program, a video showed how a fourth grade class in rural Iowa used computers to produce hypertext book reports on Charlotte's Web, E. B. White's classic children's novel. In the video, students proudly demonstrated their work, which included a computer-generated "spider" jumping across the screen and an animated stick-figure boy swinging from a hayloft rope. Toward the end of the video, a student discussed the important lessons he had learned: always be nice to each other and help one another.
There were important lessons for viewers as well. Images of the students talking around computer screens dispelled (appropriately, I think) the notion that computers always isolate users. Moreover, the teacher explained that her students were so enthusiastic about the project that they chose to go to the computer lab rather than outside for recess. While she seemed impressed by this dedication, it underscores the first troubling influence of computers. The medium is so compelling that it lures children away from the kind of activities through which they have always most effectively discovered themselves and their place in the world.

Ironically, students could best learn the lessons implicit in Charlotte's Web—the need to negotiate relationships, the importance of all members of a community, even the rats—by engaging in the recess they missed. In a school, recess is not just a break from intellectual demands or a chance to let off steam. It is also a break from a closely supervised social and physical environment. It is when children are most free to negotiate their own relationships, at arm's length from adult authority. Yet across the U.S., these opportunities are disappearing. By the year 2000, according to a 2001 report by University of New Orleans associate professor Judith Kieff, more than 40 percent of the elementary and middle schools in the U.S. had entirely eliminated recess. By contrast, U.S. Department of Education statistics indicate that spending on technology in schools increased by more than 300 percent from 1990 to 2000.
Structured learning certainly has its place. But if it crowds out direct, unmediated engagement with the world, it undercuts a child's education. Children learn the fragility of flowers by touching their petals. They learn to cooperate by organizing their own games. The computer cannot simulate the physical and emotional nuances of resolving a dispute during kickball, or the creativity of inventing new rhymes to the rhythm of jumping rope. These full-bodied, often deeply heartfelt experiences educate not just the intellect but also the soul of the child. When children are free to practice on their own, they can test their inner perceptions against the world around them, develop the qualities of care, self-discipline, courage, compassion, generosity, and tolerance—and gradually figure out how to be part of both social and biological communities.
It's true that engaging with others on the playground can be a harrowing experience, too. Children often need to be monitored and, at times, disciplined for acts of cruelty, carelessness, selfishness, even violence. Computers do provide an attractively reliable alternative to the dangers of unsupervised play. But schools too often use computers or other highly structured activities to prevent these problematic qualities of childhood from surfacing—out of fear or a compulsion to force-feed academics. This effectively denies children the practice and feedback they need to develop the skills and dispositions of a mature person. If children do not dip their toes in the waters of unsupervised social activity, they likely will never be able to swim in the sea of civic responsibility. If they have no opportunities to dig in the soil, discover the spiders, bugs, birds, and plants that populate even the smallest unpaved playgrounds, they will be less likely to explore, appreciate, and protect nature as adults.
Computers not only divert students from recess and other unstructured experiences, but also replace those authentic experiences with virtual ones, creating a separate set of problems. According to surveys by the Kaiser Family Foundation and others, school-age children spend, on average, around five hours a day in front of screens for recreational purposes (for children ages two to seven the average is around three hours). All that screen time is supplemented by the hundreds of impressive computer projects now taking place in schools. Yet these projects—the steady diet of virtual trips to the Antarctic, virtual climbs to the summit of Mount Everest, and trips into cyber-orbit that represent one technological high after another—generate only vicarious thrills. The student doesn't actually soar above the Earth, doesn't trek across icy terrain, doesn't climb a mountain. Increasingly, she isn't even allowed to climb to the top of the jungle gym. And unlike reading, virtual adventures leave almost nothing to, and therefore require almost nothing of, the imagination. In experiencing the virtual world, the student cannot, as philosopher Steve Talbott has put it, "connect to [her] inner essence."
On the contrary, she is exposed to a simulated world that tends to deaden her encounters with the real one. During the decade that I spent teaching a course called Advanced Computer Technology, I repeatedly found that after engaging in Internet projects, students came back down to the Earth of their immediate surroundings with boredom and disinterest—and a desire to get back online. This phenomenon was so pronounced that I started kidding my students about being BEJs: Big Event Junkies. Sadly, many readily admitted that, in general, their classes had to be conducted with the multimedia sensationalism of MTV just to keep them engaged. Having watched Discovery Channel and worked with computer simulations that severely compress both time and space, children are typically disappointed when they first approach a pond or stream: the fish aren't jumping, the frogs aren't croaking, the deer aren't drinking, the otters aren't playing, and the raccoons (not to mention bears) aren't fishing. Their electronic experiences have led them to expect to see these things happening—all at once and with no effort on their part. This distortion can also result from a diet of television and movies, but the computer's powerful interactive capabilities greatly accelerate it. And the phenomenon affects more than just experiences with the natural world. It leaves students apathetic and impatient in any number of settings—from class discussions to science experiments. The result is that the child becomes less animated and less capable of appreciating what it means to be alive, what it means to belong in the world as a biological, social being.
So what to make of the Charlotte's Web video, in which the students hunch over a ten-by-twelve-inch screen, trying to learn about what it means to be part of a community while the recess clock ticks away? It's probably unfair to blame the teacher, who would have had plenty of reasons to turn to computers. Like thousands of innovative teachers across the U.S., she must try to find alternatives to the mind-numbing routine of lectures, worksheets, and rote memorization that constitutes conventional schooling. Perhaps like many other teachers, she fully acknowledges the negative effects of computer instruction as she works to create something positive. Or her instructional choices may have simply reflected the infatuation that many parents, community leaders, school administrators, and educational scholars have had with technology. Computer-based education clearly energizes many students and it seems to offer children tremendous power. Unfortunately, what it strips away is much less obvious.

WHEN I WAS GROWING UP IN RURAL IOWA, I certainly lacked for many things. I couldn't tell a bagel from a burrito. But I always and in many ways belonged. For children, belonging is the most important function a community serves. Indeed, that is the message that lies at the heart of Charlotte's Web. None of us—whether of barnyard or human society—thrives without a sense of belonging. Communities offer it in many different ways—through stories, through language, through membership in religious, civic, or educational organizations. In my case, belonging hinged most decisively on place. I knew our farm—where the snowdrifts would be the morning after a blizzard, where and when the spring runoff would create a temporary stream through the east pasture. I knew the warmest and coolest spots. I could tell you where I was by the smells alone. Watching a massive thunderstorm build in the west, or discovering a new litter of kittens in the barn, I would be awestruck, mesmerized by mysterious wonders I could not control. One of the few moments I remember from elementary school is watching a huge black-and-yellow garden spider climb out of Lee Anfinson's pant cuffs after we came back from a field trip picking wildflowers. It set the whole class in motion with lively conversation and completely flummoxed our crusty old teacher. Somehow that spider spoke to all of us wide-eyed third graders, and we couldn't help but speak back. My experience of these moments, even if often only as a caring observer, somehow solidified my sense of belonging to a world larger than myself—and prepared me, with my parents' guidance, to participate in the larger community, human and otherwise.
Though the work of the students in the video doesn't reflect it, this kind of experience plays a major role in E. B. White's story. Charlotte's Web beautifully draws a child's attention to something that is increasingly rare in schools: the wonder of ordinary processes of nature, which grows mainly through direct contact with the real world. As Hannah Arendt and other observers have noted, we can only learn who we are as human beings by encountering what we are not. While it may seem an impossible task to provide all children with access to truly wild territories, even digging in (healthy) soil opens up a micro-universe that is wild, diverse, and "alien." Substituting the excitement of virtual connections for the deep fulfillment of firsthand engagement is like mistaking a map of a country for the land itself, or as biological philosopher Gregory Bateson put it, "eat[ing] the menu instead of your meal." No one prays over a menu. And I've never witnessed a child developing a reverence for nature while using a computer.
There is a profound difference between learning from the world and learning about it. Any young reader can find a surfeit of information about worms on the Internet. But the computer can only teach the student about worms, and only through abstract symbols—images and text cast on a two-dimensional screen. Contrast that with the way children come to know worms by hands-on experience—by digging in the soil, watching the worm retreat into its hole, and of course feeling it wiggle in the hand. There is the delight of discovery, the dirt under the fingernails, an initial squeamishness followed by a sense of pride at overcoming it. This is what can infuse knowledge with reverence, taking it beyond simple ingestion and manipulation of symbols. And it is reverence in learning that inspires responsibility to the world, the basis of belonging. So I had to wonder why the teacher from the Charlotte's Web video asked children to create animated computer pictures of spiders. Had she considered bringing terrariums into the room so students could watch real spiders fluidly spinning real webs? Sadly, I suspect not.
Rather than attempt to compensate for a growing disconnect from nature, schools seem more and more committed to reinforcing it, a problem that began long before the use of computers. Western pedagogy has always favored abstract knowledge over experiential learning. Even relying on books too much or too early inhibits the ability of children to develop direct relationships with the subjects they are studying. But because of their power, computers drastically exacerbate this tendency, leading us to believe that vivid images, massive amounts of information, and even online conversations with experts provide an adequate substitute for conversing with the things themselves.
As the computer has amplified our youths' ability to virtually "go anywhere, at any time," it has eroded their sense of belonging anywhere, at any time, to anybody, or for any reason. How does a child growing up in Kansas gain a sense of belonging when her school encourages virtual learning about Afghanistan more than firsthand learning about her hometown? How does she relate to the world while spending most of her time engaging with computer-mediated text, images, and sounds that are oddly devoid of place, texture, depth, weight, odor, or taste—empty of life? Can she still cultivate the qualities of responsibility and reverence that are the foundation of belonging to real human or biological communities?
During the years that I worked with young people on Internet telecollaboration projects, I was constantly frustrated by individuals and even entire groups of students who would suddenly disappear from cyber-conversations related to the projects. My own students indicated that they understood the departures to be a way of controlling relationships that develop online. If they get too intense, too nasty, too boring, too demanding, just stop communicating and the relationship goes away. When I inquired, the students who used e-mail regularly all admitted they had done this, the majority more than once. This avoidance of potentially difficult interaction also surfaced in a group of students in the "Talented and Gifted" class at my school. They preferred discussing cultural diversity with students on the other side of the world through the Internet rather than conversing with the school's own ESL students, many of whom came from the very same parts of the world as the online correspondents. These bright high school students feared the uncertain consequences of engaging the immigrants face-to-face. Would they want to be friends? Would they ask for favors? Would they embarrass them in front of others? Would these beginning English speakers try to engage them in frustrating conversations? Better to stay online, where they could control when and how they related to strange people—without much of the work and uncertainty involved with creating and maintaining a caring relationship with a community.

If computers discourage a sense of belonging and the hard work needed to interact responsibly with others, they replace it with a promise of power. The seduction of the digital world is strong, especially for small children. What sets the computer apart from other devices, such as television, is the element of control. The most subtle, impressive message promoted by the Charlotte's Web video was that children could take charge of their own learning.
Rather than passively listening to a lecture, they were directly interacting with educational content at their own pace. Children, who have so little control over so many things, often respond enthusiastically to such a gift. They feel the same sense of power and control that any of us feels when we use the computer successfully.
To develop normally, any child needs to learn to exert some control over her environment. But the control computers offer children is deceptive, and ultimately dangerous. In the first place, any control children obtain comes at a price: relinquishing the uniquely imaginative and often irrational thought processes that mark childhood. Keep in mind that a computer always has a hidden pedagogue—the programmer—who designed the software and invisibly controls the options available to students at every step of the way. If they try to think "outside the box," the box either refuses to respond or replies with an error message. The students must first surrender to the computer's hyper-rational form of "thinking" before they are awarded any control at all.
And then what exactly is awarded? Here is one of the most underappreciated hazards of the digital age: the problematic nature of a child's newfound power—and the lack of internal discipline in using it. The child pushes a button and the computer draws an X on the screen. The child didn't draw that X; she essentially "ordered" the computer to do it, and the computer employed an enormous amount of embedded adult skill to complete the task. Most of the time a user forgets this distinction because the machine so quickly and precisely processes commands. But the intensity of the frustration that we experience when the computer suddenly stops following orders (and our tendency to curse at, beg, or sweet talk it) confirms that the subtle difference is not lost on the psyche. This shift toward remote control is akin to taking the child out of the role of actor and turning her into the director. This is a very different way of engaging the world than hitting a ball, building a fort, setting a table, climbing a tree, sorting coins, speaking and listening to another person, acting in a play. In an important sense, the child gains control over a vast array of complex abstract activities by giving up or eroding her capacity to actually do them herself.

Perhaps more importantly, however, this emphasis on external power teaches children a manipulative way of engaging the world. The computer does an unprecedented job of facilitating the manipulation of symbols.
Every object within the virtual environment is not only an abstract representation of something tangible, but is also discrete, floating freely in a digital sea, ready at hand for the user to do with as she pleases. A picture of a tree on a computer has no roots in the earth; it is available to be dragged, cropped, shaded, and reshaped. A picture of a face can be distorted, a recording of a musical performance remixed, someone else's text altered and inserted into an essay. The very idea of the dignity of a subject evaporates when everything becomes an object to be taken apart, reassembled, or deleted. Before computers, people could certainly abstract and manipulate symbols of massive objects or living things, from trees to mountainsides, from buildings to troop movements. But in the past, the level of manipulative power found in a computer never rested in the hands of children, and little research has been done to determine its effect on them. Advocates enthuse over the "unlimited" opportunities computers afford the student for imaginative control. And the computer environment attracts children exactly because it strips away the very resistance to their will that so frustrates them in their concrete existence. Yet in the real world, it is precisely an object's resistance to unlimited manipulation that forces a child (or anyone) to acknowledge the physical limitations of the natural world, the limits of one's power over it, and the need to respect the will of others living in it. To develop normally, a child needs to learn that she cannot force the family cat to sit on her lap, make a rosebud bloom, or hurt a friend and expect to just start over again with everything just as it was before. Nevertheless, long before children have learned these lessons in the real world, parents and educators rush to supply them with digital tools. 
And we are only now getting our first glimpse of the results—even among teenagers, whom we would expect to have more maturity than their grade school counterparts.
On the day my Advanced Computer Technology classroom got wired to the Internet, it suddenly struck me that, like other technology teachers testing the early Internet waters, I was about to give my high school students more power to do more harm to more people than any teens had ever had in history, and all at a safe distance. They could inflict emotional pain with a few keystrokes and never have to witness the tears shed. They had the skill to destroy hours, even years, of work accomplished by others they didn't know or feel any ill-will toward—just unfortunate, poorly protected network users whose files provided convenient bull's-eyes for youth flexing their newfound technical muscles. Had anyone helped them develop the inner moral and ethical strength needed to say "no" to the flexing of that power?
On the contrary, we hand even our smallest children enormously powerful machines long before they have the moral capacities to use them properly. Then to assure that our children don't slip past the electronic fences we erect around them, we rely on yet other technologies—including Internet filters like Net Nanny—or fear of draconian punishments. This is not the way to prepare youth for membership in a democratic society that eschews authoritarian control.
That lesson hit home with particular force when I had to handle a trio of very bright high school students in one of the last computer classes I taught. These otherwise nice young men lobbied me so hard to approve their major project proposal—breaking through the school's network security—that I finally relented to see if they intended to follow through. When I told them it was up to them, they trotted off to the lab without a second thought and went right to work—until I hauled them back and reasserted my authority. Once the external controls were lifted, these teens possessed no internal controls to take over. This is something those who want to "empower" young children by handing them computers have tended to ignore: that internal moral and ethical development must precede the acquisition of power—political, economic, or technical—if it is to be employed responsibly.
Computer science pioneer Joseph Weizenbaum long ago argued that as the machines we put in our citizens' hands become more and more powerful, it is crucial that we increase our efforts to help people recognize and accept the immense responsibility they have to use those machines for the good of humanity. Technology can provide enormous assistance in figuring out how to do things, Weizenbaum pointed out, but it turns mute when it comes time to determine what we should do. Without any such moral grounding, the dependence on computers encourages a manipulative, "whatever works" attitude toward others. It also reinforces the exploitative relationship to the environment that has plagued Western society since Descartes first expressed his desire to "seize nature by the throat." Even sophisticated "environmental" simulations, which show how ecosystems respond to changes, reinforce the mistaken idea that the natural world conforms to our abstract representations of it, and therefore has no inherent value, only the instrumental value we assign to it through our symbols. Such reductionism reinforces the kind of faulty thinking that is destroying the planet: we can dam riparian systems if models show an "acceptable" level of damage, treat human beings simply as units of productivity to be discarded when inconvenient or useless, and reduce all things, even those living, to mere data. The message of the medium—abstraction, manipulation, control, and power—inevitably influences those who use it.
None of this happens overnight, of course, or with a single exposure to a computer. It takes time to shape a worldview. But that is exactly why it is wrong-headed to push such powerful worldview-shapers on impressionable children, especially during elementary school years. What happens when we immerse our children in virtual environments whose fundamental lesson is not to live fully and responsibly in the world, but to value the power to manipulate objects and relationships? How can we then expect our children to draw the line between the symbols and what they represent? When we remove resistance to a child's will to act, how can we teach that child to deal maturely with the Earth and its inhabitants?
OUR TECHNOLOGICAL AGE REQUIRES A NEW DEFINITION OF MATURITY: coming to terms with the proper limits of one's own power in relation to nature, society, and one's own desires. Developing those limits may be the most crucial goal of twenty-first-century education. Given the pervasiveness of digital technology, it is not necessary or sensible to teach children to reject computers (although I found that students need just one year of high school to learn enough computer skills to enter the workplace or college). What is necessary is to confront the challenges the technology poses with wisdom and great care. A number of organizations are attempting to do just that. The Alliance for Childhood, for one, has recently published a set of curriculum guidelines that promotes an ecological understanding of the relationship between humans and technology. But that's just a beginning.
In the preface to his thoughtful book, The Whale and the Reactor, Langdon Winner writes, "I am convinced that any philosophy of technology worth its salt must eventually ask, 'How can we limit modern technology to match our best sense of who we are and the kind of world we would like to build?'" Unfortunately, our schools too often default to the inverse of that question: "How can we limit human beings to match the best use of what our technology can do and the kind of world it will build?" As a consequence, our children are likely to sustain this process of alienation—in which they treat themselves, other people, and the Earth instrumentally—in a vain attempt to materially fill up lives crippled by internal emptiness. We should not be surprised when they "solve" personal and social problems by turning to drugs, guns, hateful Web logs, or other powerful "tools," rather than digging deep within themselves or searching out others in the community for strength and support. After all, this is just what we have taught them to do.
At the heart of a child's relationship with technology is a paradox—that the more external power children have at their disposal, the more difficult it will be for them to develop the inner capacities to use that power wisely. Once educators, parents, and policymakers understand this phenomenon, perhaps education will begin to emphasize the development of human beings living in community, and not just technical virtuosity. I am convinced that this will necessarily involve unplugging the learning environment long enough to encourage children to discover who they are and what kind of world they must live in. That, in turn, will allow them to participate more wisely in using external tools to shape, and at times leave unshaped, the world in which we all must live.

Copyright 2005 The Orion Society.

Lowell Monke, who has taught young people with and about computers for seventeen years, currently gets paid by Wittenberg University to confuse aspiring teachers as to what education is all about. He lives with his family in Springfield, Ohio, and is at work on a book about children, education, and computers.


Popes and preachers were once the main beneficiaries of human gullibility. These days, says Nassim Taleb, it's stock fund managers...



By Nassim Taleb

As a practitioner of science, I am opposed to teaching religious ideas in schools. But it strikes me as somewhat misplaced energy — more a fight over principles than over any bottom line. As an empirical skeptic, I would like to introduce a dimension to these debates: relevance, consequence, and our ability to correct a situation — in other words, the impact on our daily lives.
My portrait of the perfect fool of randomness is as follows: he does not believe in religion, providing entirely rational reasons for such disbelief. He opposes scientific method to superstition and blind faith. But alas, human skepticism appears to be quite domain-specific and relegated to the classroom. Somehow the skepticism of my fool undergoes severe atrophy outside of these intellectual debates:
1) He believes in the stock market because he is told to do so — automatically allocating a portion of his retirement money to it. And he does not realize that the manager of his mutual fund fares no better than chance — actually a bit worse, after the (generous) fees. Nor does he realize that markets are far more random and far riskier than he is being led to believe by the high priests of the brokerage industry.
He disbelieves the bishops (on grounds of scientific method) but replaces them with the security analyst. He listens to the projections of security analysts and "experts" without checking their past accuracy and track record. Had he checked, he would have discovered that these projections are no better than random — often worse.
2) He believes in the government's ability to "forecast" economic variables — oil prices, GNP growth, or inflation. Economics provides very complicated equations, but our historical track record in predicting is pitiful. It does not take long to verify these claims; simple empiricism would suffice. Yet we have confident forecasts of Social Security deficits from both sides (Democrats and Republicans) twenty and thirty years ahead! This Scandal of Prediction (which I capitalize) is far more severe than religion, simply because it determines policymaking. Last time I checked, no religious figure was being consulted for long-term business and economic projections.
3) He believes in the "skills" of the chairmen of large corporations and pays them huge bonuses for their "performance". He forgets that theirs are the least observable contributions. This attribution of skill is flimsy at best — it takes no account of the possible role of luck in their success.
4) His scientific integrity makes him reject religion, but he believes the economist, because "economic science" has the word "science" in it.
5) He believes that the news media provide an accurate representation of the risks in the world. They don't. Through what I call the narrative fallacy, the media distort our mental map of the world by feeding us whatever can be made into a story that can be squeezed into our minds. For instance, (preventable) cancer, not terrorism, remains the greatest danger. The number of persons killed by hurricanes, while consequential, is dwarfed by the thousands of isolated daily victims dying in hospital beds. These deaths are not story-worthy; the press's lack of attention maps into disproportionately reduced resources allocated to their welfare. The difference between actual, actuarially defined risks and the perception of dangers is enormous — and, sadly, growing with globalization, the media, and our increased vulnerability to visual stimuli.
Now I am not arguing that one should ignore the side effects of religion, given the accounts of past intolerance. But it was in these columns that Richard Dawkins, echoing the great Peter Medawar, recommended that bright students find something worthwhile "to be smart about". Likewise, I suggest exerting our skepticism "where it matters". Why? Because, alas, our cognitive capacity for doubt is rather limited.
We humans are naturally gullible; disbelieving requires an extraordinary expenditure of energy, and it is a limited resource. I suggest ranking our skepticism by its consequences for our lives. True, the dangers of organized religion used to be there — but they have been gradually replaced with a considerably more ruthless and unintrospective social-science ideology.
Religion gives many people solace. On a personal note I have to admit that I feel more elevated in cathedrals than in stock markets — be it only on aesthetic grounds. If I were going to be gullible about a subject, I would rather pick one that is the least harmful to my future — and one that is rewarding to my thirst for aesthetics.
It is high time to worry about the opiates of the middle class.

NASSIM NICHOLAS TALEB, an essayist and mathematical trader, is the author of Fooled By Randomness.

Monday, October 03, 2005

How to Prepare for One Really Quick Getaway

The New York Times
October 1, 2005

What is the first thing you will grab from your home if your house floods, catches on fire or comes tumbling down in an earthquake? Family photos? The pets? The Hummel figurines?
It probably will not be your financial and medical records, the very things you will need to rebuild your life after a disaster. If you are like most people, you have documents stashed in various places throughout your home, perhaps some under lock and key. And with your mind racing as danger hits, you are not going to have the time or wherewithal to figure out which ones you need.
In any case, your financial and medical records would be such a large and unwieldy pile that you would just say forget about it, grab Fluffy and scramble out of there. Indeed, that is probably your reaction any time someone suggests you get your records organized.
But wait. Do not run away yet. New technology is making this tedious task less odious, and surprisingly, it is not that expensive.
All told, you can secure your records in a weekend afternoon. Even better, doing all this has a wonderful side effect: it can put you in better financial shape to survive a disaster because you will end up a lot smarter about how you spend and save money. For instance, one of the first things to do is compile a list of where everything is - account numbers and the locations of important documents. The list will help you or anyone in your family locate things you need for the insurance adjuster or relief worker. (Download a template for this information that you can place right on your computer.)
This is really the "if hit by a bus" list that financial planners have been recommending you compile for your heirs. If you think of the list that way, you will be reminded of your mortality and you will not want to write it. But think of the families displaced by Hurricanes Katrina and Rita or by California wildfires, and the psychological barrier collapses. The list becomes a much easier sell now, said Brent Neiser, a director for the National Endowment for Financial Education. "It forces you to think," he said.
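For readers who would rather build the "where everything is" list themselves than hunt down a template, here is a minimal sketch in Python. The field names and sample entries are illustrative assumptions, not the article's actual downloadable template.

```python
import csv

# Hypothetical fields for the "where everything is" list. These are
# illustrative assumptions, not the template the article links to.
FIELDS = ["category", "institution", "account_number",
          "contact_phone", "document_location"]

SAMPLE_ROWS = [
    {"category": "Checking", "institution": "Example Bank",
     "account_number": "XXXX-1234", "contact_phone": "555-0100",
     "document_location": "Fire safe, home office"},
    {"category": "Homeowner's insurance", "institution": "Example Insurer",
     "account_number": "POL-0042", "contact_phone": "555-0101",
     "document_location": "Filing cabinet / U.S.B. drive"},
]

def write_template(path="records_inventory.csv", rows=SAMPLE_ROWS):
    """Write the inventory so any family member can locate the records."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)
    return path
```

Once the spreadsheet exists, updating it once a year when statements arrive is far easier than writing it from scratch after a disaster.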
Here is what else you have to do to protect your records and yourself:
RECORD: Once you have made your basic list, save it on a U.S.B. flash drive. A 256-megabyte drive, which you can buy for $20 or even less if you catch a store promotion, gives you enough space for that file and all the other suggestions mentioned below.
Several of the big flash drive makers, like SanDisk and Lexar Media, are now selling more advanced drives that allow you to encrypt the data so others cannot read it without knowing the alphanumeric key that unlocks the code. Some are even shock proofed with heavier rubber and plastic coatings. Those will cost about $10 to $20 more, but are certainly worth it when you consider the sensitivity of the data on them.
It is also a good idea to copy the contents onto additional drives for backup and for other members of the family.
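The backup step above - copying the drive's contents for other members of the family - can be sketched as a short script. This is a hedged example using only Python's standard library; the source and destination paths are assumptions, and each copy is verified by checksum so a bad copy is caught immediately rather than discovered during an emergency.

```python
import hashlib
import shutil
from pathlib import Path

def sha256sum(path):
    """Checksum a file in chunks so large scans and photos fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def mirror(src_dir, dest_dirs):
    """Copy every file under src_dir to each destination drive and
    verify the copy by comparing checksums."""
    src_dir = Path(src_dir)
    for dest in map(Path, dest_dirs):
        for src in src_dir.rglob("*"):
            if src.is_file():
                target = dest / src.relative_to(src_dir)
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, target)  # preserves modification times
                assert sha256sum(src) == sha256sum(target), f"bad copy: {target}"
```

On Windows the destination paths would be drive letters (for example, the mounted flash drives); the logic is the same.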
BONUS: When you are listing the credit cards, also note the credit limits so you will know how much you could spend in an emergency. If your credit cards are at their limits now, you are not going to have any cushion to fall back on. So start paying off balances, beginning with the card carrying the highest interest rate.
SCAN: Some important documents are on paper and you will want copies of them with you: tax returns for the last three years (Form 1040 is all you will need in an emergency), a recent pay stub, birth certificates, marriage license, the deed to your home and insurance policy pages that list your coverage. If you do not have a scanner or a printer with a flat scanner, take the pile of documents down to a copy center like Kinko's to scan. Record the image files on the U.S.B. drive.
BONUS: Take the opportunity to check your insurance coverage for potential disasters like flooding. With homes appreciating in value, you may also find you need to increase coverage.
SHOOT: Some personal finance advisers suggest that you make a spreadsheet listing everything you own, entering the date and price paid, and then file all the receipts and ... yeah, yeah. You will never do it. But creating a detailed inventory of everything you own need not be a major chore when technology comes to the rescue. Many households now have a camcorder or digital camera. Walk around each room and take a picture of each item. Then either store all the photos on a memory card (unless you live in the Biltmore mansion, a 256- or 512-megabyte card can hold them all) or transfer them to the same U.S.B. drive with your other documents.
Describe each object on the camcorder soundtrack or in the file name of the digital photo. Make an extra copy on another card or drive. "If you give one to your insurance adjuster, you go to the front of the line," Mr. Neiser said.
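If you describe each object in the file name, as suggested above, a short script can turn the folder of photos into the spreadsheet those personal finance advisers wanted, without the manual drudgery. This is a minimal sketch; the folder layout, file extensions, and column names are assumptions.

```python
import csv
from datetime import datetime
from pathlib import Path

def build_inventory(photo_dir, out_csv="home_inventory.csv"):
    """List each photo with its file name (which doubles as the item
    description), size, and date - a ready-made list for the adjuster."""
    rows = []
    for p in sorted(Path(photo_dir).glob("*")):
        if p.suffix.lower() in {".jpg", ".jpeg", ".png", ".avi", ".mov"}:
            stat = p.stat()
            rows.append({
                "item_description": p.stem.replace("_", " "),
                "file": p.name,
                "size_kb": round(stat.st_size / 1024, 1),
                "photographed": datetime.fromtimestamp(
                    stat.st_mtime).strftime("%Y-%m-%d"),
            })
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["item_description", "file",
                           "size_kb", "photographed"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```

A photo named living_room_sofa.jpg becomes the inventory line "living room sofa", so the naming discipline while you shoot is the only real work.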
For additional protection, you could upload the photos - as well as all your beloved family photos - to one of the free online photo services. Anybody you choose can then have access to them from any computer anywhere. (Make sure to set the privacy options, though.)
BONUS: You are going to discover a lot of stuff you no longer want or need. Sell it or donate it and take a tax deduction. Intuit, maker of Quicken and TurboTax, sells a $20 program called ItsDeductible that estimates the value of donated items, and free valuation guides are available from several Web sites.
SECURE: Now it is time for your medical records. You can place your health history as well as digitized copies of X-rays, scans and electrocardiograms on the same encrypted flash drive.
Those with serious medical conditions may want to consider a product sold by the nonprofit organization that developed the MedicAlert bracelet 50 years ago. It sells a special U.S.B. flash drive on its Web site, called the E-HealthKey, for $85. SanDisk originally developed the product for the Army. Pop the flash drive into any computer and a screen flashes with your medical condition to alert emergency room personnel, for instance, to an allergy or your use of a pacemaker. But beyond that screen, the medical information you enter - with the help of a user-friendly program right on the drive - is encrypted.
For an additional $20-a-year fee, MedicAlert uploads your data to its server so you have a backup.
The E-HealthKey is available only for PC's running Windows XP or Windows 2000. You may want to wait until November, when the organization issues an improved version.
BONUS: The E-HealthKey software, created by a division of Bio-Imaging Technologies, also plots your weight, cholesterol or anything you regularly record, onto a graph. "It's a great wellness tool," said Ramesh Srinivasan, MedicAlert's vice president for marketing. If you are going to run for your life, clutching your flash drive and the Hummels, you had better be healthy.