The ecology of a virus

What might “collapse” mean, anyway—as in the phrases “the collapse of Ur III,” around 2,000 BCE; “the collapse of Old Kingdom Egypt,” around 2,100 BCE; “the collapse of the Minoan Palatial Regime” on Crete, around 1,450 BCE? At the very least it means the abandonment and/or destruction of the monumental court center. This is usually interpreted not merely as a redistribution of population but as a substantial, not to say catastrophic, loss of social complexity. If the population remains, it is likely to have dispersed to smaller settlements and villages. Higher-order elites disappear; monumental building activity ceases; use of literacy for administrative and religious purposes is likely to evaporate; larger-scale trade and redistribution are sharply reduced; and specialist craft production for elite consumption and trade is diminished or absent. Taken together, such changes are often understood to be a deplorable regression away from a more civilized culture. In this respect, it is just as essential to emphasize what such events do not necessarily mean. They do not necessarily mean a decline in regional population. They do not necessarily mean a decline in human health, well-being, or nutrition, and, as we shall see, may represent an improvement. Finally, a “collapse” at the center is less likely to mean a dissolution of a culture than its reformulation and decentralization.

What I wish to challenge here is a rarely examined prejudice that sees population aggregation at the apex of state centers as a triumph of civilization, on the one hand, and decentralization into smaller political units, on the other, as a breakdown or failure of political order. We should, I believe, aim to “normalize” collapse and see it rather as often inaugurating a periodic and possibly even salutary reformulation of political order. In the case of more centralized command-and-rationing economies such as Ur III, Crete, and Qin China, the problems were further compounded, and cycles of centralization, decentralization, and reaggregation seem to have been common.

James C. Scott, Against the Grain

For something positive to emerge from moments of crisis, it is not enough to prophesy the end of the world and simply witness the sunset from our ivory tower. From now on, the necessary scaffolding, networks and organisation must be created, in order to respond to the wreckage, so as not to start from scratch when society breaks down and the human species loses its compass. If we do nothing in that direction, in creating today the cooperative and solidarity structures that already serve to test economic self-management and political autonomy in our neighborhoods (and not in isolated communities of the convinced), the future will look much more like what pop culture has taught us for decades on the big screen.

Collapse is an opportunity, but not necessarily an opportunity for improvement. That depends on us.

Ruymán Rodríguez, Sociedades de papel

Beyond the quarantines, the expansion and intensification of State control, and the states of emergency justified in the name of “public health”, we share further sources for reflection on the politics of the coronavirus (COVID-19) …

Headline of an article from the NASA Earth Observatory (March 2, 2020): “Airborne Nitrogen Dioxide Plummets Over China”.

A commentary following the headline …

Whoever believes that pollution, once produced and out of human control, is apolitical (only the options and decisions that increase or decrease the quantity and content of the pollution being political), is mistaken: pollution can also be an ideologically and politically constructed object. It is thanks to this construction that pollution does not officially produce deaths or sickness (except when there is an accident or a catastrophe like Chernobyl), while the victims of a virus are counted one by one and country by country. And what if the appearance of the virus, in China, was a conspiracy engendered in a laboratory by “green” terrorists to combat pollution? What if this viral remedy was relatively innocuous, from the perspective of public health, when compared with the tons of gases and polluting materials produced daily when a threatening virus does not circulate? Where, in the end, is the illness and where is the cure? We will never know. But the idea that the virus – this one or another to come – is not a catastrophe but a salvation is a source of great enthusiasm.

(“Livro de recitações”, in the Portuguese newspaper Público – 06/03/2020)

____

What follows is a partial translation of an article from the French newspaper Le Monde diplomatique …

Against Pandemics, Ecology: Where does the coronavirus come from?

Sonia Shah

Journalist, author of Pandemic: Tracking Contagions, From Cholera to Ebola and Beyond, Sarah Crichton Books, New York, 2016, and of The Next Great Migration: The Beauty and Terror of Life on the Move, Bloomsbury Publishing, London, to appear in June 2020. The text that follows was published in The Nation.

(Le Monde diplomatique, 03/2020)

Even in the 21st century, the old remedies appear, for the Chinese authorities, to be the best means of combating the coronavirus epidemic. Hundreds of millions of people will have their movements restricted. Is it not time to ask why pandemics succeed one another at an ever faster pace?

It could have been a pangolin. Or a bat. Or, as one later-debunked theory suggested, a snake. The race is on to identify the animal source of Covid-19, the coronavirus that now holds several hundred million people in quarantines and cordons sanitaires in China and elsewhere. The animal origin of the virus is a critical mystery to solve. But speculation about which wild creature originally harboured the virus obscures a more fundamental source of our growing vulnerability to pandemics: the accelerating pace of habitat loss.

Since 1940, hundreds of microbial pathogens have emerged or re-emerged into territory where they’ve never been seen before. They include HIV, Ebola in West Africa, Zika in the Americas, and many novel coronaviruses. The majority, 60%, originate in the bodies of animals. Some come from pets and livestock. Most of these, more than two thirds, originate in wildlife.

That’s not the fault of wild animals. Although stories illustrated with pictures of wild animals as ‘the source’ of deadly outbreaks might suggest otherwise, wild animals are not especially infested with deadly pathogens poised to infect us. In fact, most of these microbes live harmlessly in animals’ bodies. The problem is that cutting down forests and expanding towns, cities and industrial activities creates pathways for animal microbes to adapt to the human body.

Habitat destruction threatens vast numbers of wild species with extinction, including the medicinal plants and animals we’ve historically depended upon for our pharmacopeia.(2) It also forces wild species that hang on to cram into smaller fragments of remaining habitat, increasing the likelihood that they’ll come into repeated, intimate contact with the human settlements expanding into their habitat. It’s this kind of repeated, intimate contact that allows the microbes that live in their bodies to cross over into ours, transforming benign animal microbes into lethal pathogenic agents.

Ebola illustrates this well. A study conducted in 2017 revealed that outbreaks of the virus, whose source was traced to diverse species of bats, are more frequent in areas of Central and West Africa that have recently suffered deforestation. When their forests are cut down, the bats are forced to perch on the trees of our gardens and farms. From that point on, it is easy to imagine what follows: a human ingests bat saliva by biting into a fruit covered with it, or, in trying to hunt and kill the unwanted visitor, exposes her/himself to the microbes that found refuge in its tissues. Thus a multitude of viruses carried by bats, but which are inoffensive to them, found their way into human populations – for example, Ebola, but also Nipah (notably in Malaysia and Bangladesh) or Marburg (particularly in East Africa). This phenomenon is referred to as “crossing the species barrier”. However infrequently it occurs, it can allow animal-borne microbes to adapt to our organisms and evolve to the point of becoming pathogenic.

The same holds for illnesses transmitted by mosquitoes, for a connection has been established between the occurrence of epidemics and deforestation (3) – except that in this case it has less to do with the loss of habitat than with its transformation. With the trees disappear the ground cover of dead leaves and the roots. Water and sediments flow more easily over the bare soil, now bathed in sun, forming the puddles and ponds favourable to the reproduction of malaria-carrying mosquitoes. According to one study carried out in a dozen countries, species of mosquitoes that carry human pathogens are twice as numerous in deforested areas as in areas where the forests remain intact.

The dangers of industrial livestock breeding

The destruction of habitats also acts to modify the numbers of diverse species, which can increase the risk of propagation of a pathogenic agent. An example: the West Nile virus, carried by migratory birds. In North America, bird populations have fallen by more than 25% over the last fifty years due to the loss of habitats and other destruction.(4) But not all species are affected in the same way. Birds that specialize in a particular habitat, such as woodpeckers and rails, were hit much harder than generalists, such as robins and crows. While the former are poor vectors of the West Nile virus, the latter are excellent ones. Hence the strong presence of the virus among the domestic birds of the region, and the growing possibility of a mosquito biting an infected bird, and then a human.(5)

The same phenomenon occurs with illnesses borne by ticks. By slowly eating away at the forests of North America, urban development chases away animals like opossums, which contribute to controlling tick populations, leaving behind species much less effective at this task, such as the white-footed mouse and deer. The result: illnesses transmitted by ticks spread more easily. Among them, Lyme disease, which appeared for the first time in the United States in 1975. Over the course of the last twenty years, seven new pathogenic agents carried by ticks have been identified.(6)

The risk of the appearance of new illnesses is accentuated not only by the loss of habitats, but also by the way they are replaced. To assuage their carnivorous appetite, human beings have leveled an area equivalent to that of the African continent in order to feed and raise animals destined for slaughter. Some of these animals then follow the routes of illegal commerce or are sold at live animal markets (wet markets). There, species that would never have encountered each other in nature find themselves side by side in cages, and microbes can move freely from one to the other. This kind of arrangement, which in 2002-2003 already engendered the coronavirus responsible for severe acute respiratory syndrome (SARS), is perhaps at the origin of the unknown coronavirus that plagues us today.

Far more numerous, however, are the animals that grow and evolve in the heart of our industrial livestock breeding. Hundreds of thousands of animals piled on top of each other, waiting to be led to slaughter: these are the ideal conditions for microbes to transform themselves into deadly pathogens. The avian flu virus, for example, carried by waterfowl, ravages farms filled with captive chickens, where it mutates and becomes more virulent – a process so predictable that it can be reproduced in the laboratory. One of its strains, H5N1, is transmissible to human beings and kills more than half of the individuals infected. In 2014, in North America, tens of millions of chickens had to be killed to stem the propagation of another strain.(8)

The mountains of droppings produced by our livestock offer microbes of animal origin further occasions to infect populations. As there is infinitely more waste than can be absorbed by agricultural land in the form of fertilizer, it often ends up stored in leaking holding pools – a dream haven for the bacterium Escherichia coli. More than half of the animals held in American feedlots carry it, though it remains inoffensive to them.(9) In humans, however, E. coli causes bloody diarrhea and fever, and can lead to acute kidney failure. And because it is not uncommon for animal waste to spill into our drinking water and our food, 90,000 Americans are infected each year.

Even though this phenomenon of animal microbes mutating into human pathogens is accelerating, it is not new. Its appearance dates from the Neolithic revolution, when humans began to destroy wild habitats to extend cultivated lands and to domesticate animals to render them beasts of burden. In exchange, the animals offered us a few poisoned gifts: we owe measles and tuberculosis to cows, whooping cough to pigs, influenza to ducks.

The process continued during the European colonial expansion. In the Congo, the railways and the cities constructed by the colonists permitted a lentivirus hosted by the macaques of the region to perfect its adaptation to the human body. In Bengal, the British encroached on the immense wetlands of the Sundarbans to develop rice-growing, exposing the inhabitants to the aquatic bacteria present in those brackish waters. The pandemics caused by these colonial intrusions remain with us. The lentivirus of the macaques became HIV. The aquatic bacterium of the Sundarbans, now known by the name of cholera, has to this day provoked seven pandemics, the most recent epidemic occurring in Haiti.

Fortunately, to the extent that we have not been the passive victims of these processes, we can also do a great deal to reduce the risks of the emergence of these microbes.

As the epidemiologist Larry Brilliant declared, “the emergence of viruses is inevitable, not epidemics”. However, we will be spared the latter only on the condition that we are as determined in changing our politics as we have been in disturbing nature and animal life.

  1. Kai Kupferschmidt, “This bat species may be the source of the Ebola epidemic that killed more than 11,000 people in West Africa”, Science Magazine, Washington, DC – Cambridge, January 24, 2019.
  2. Jonathan Watts, “Habitat loss threatens all our futures, world leaders warned”, The Guardian, London, November 17, 2018.
  3. Katarina Zimmer, “Deforestation tied to changes in disease dynamics”, The Scientist, New York, January 29, 2019.
  4. Carl Zimmer, “Birds are vanishing from North America”, The New York Times, September 19, 2019.
  5. BirdLife International, “Diversity of birds buffer against West Nile virus”, ScienceDaily, March 6, 2009, www.sciencedaily.com.
  6. “Lyme and other tickborne diseases increasing”, Centers for Disease Control and Prevention, April 22, 2019, www.cdc.gov.
  7. George Monbiot, “There’s a population crisis all right. But probably not the one you think”, The Guardian, November 19, 2015.
  8. “What you get when you mix chickens, China and climate change”, The New York Times, February 5, 2016. In France, Avian Flu reached livestock farms during the winter of 2015-2016, and the minister of agriculture believes that a risk exists this year for chickens coming from Poland.
  9. Cristina Venegas-Vargas et al., “Factors associated with Shiga toxin-producing Escherichia coli shedding by dairy and beef cattle”, Applied and Environmental Microbiology, vol. 82, nº 16, Washington, DC, August 2016.

____

Against the Grain: A Deep History of the Earliest States

James C. Scott

Chapter Three: Zoonoses: A Perfect Epidemiological Storm

Drudgery and its History

Agro-pastoralism—ploughed fields and domestic animals—comes to dominate much of Mesopotamia and the Fertile Crescent well before the appearance of states. With the exception of areas favored by flood-retreat agriculture, this fact represents a paradox that, in my view, has still not been satisfactorily explained. Why would foragers in their right mind choose the huge increase in drudgery entailed by fixed-field agriculture and animal husbandry unless they had, as it were, a pistol at their collective temple? We know that even contemporary hunter-gatherers, reduced to living in resource-poor environments, still spend only half their time in anything we might call subsistence labor. As the students of a rare archaeological site in Mesopotamia (Abu Hureyra), where the entire transition from hunting and gathering to full-blown agriculture can be traced, put it, “No hunter-gatherers occupying a productive locality with a range of wild foods able to provide for all seasons are likely to have started cultivating their caloric staples willingly. Energy investment per unit of energy return would have been too high.”(1) Their conclusion was that the “pistol at their temple” in this case was the cold snap of the Younger Dryas (10,500–9,600 BCE), which reduced the abundance of wild plants, together with hostile adjacent populations, which restricted their mobility. This explanation, as noted earlier, is hotly contested in terms of both evidence and logic.

I am in no position to adjudicate, let alone resolve, the controversy over what drove people over several millennia to agriculture as a dominant mode of subsistence. The long-accepted explanation, virtually an orthodoxy, was an intellectually satisfying narrative of subsistence intensification covering a span of as much as six thousand years. The first pulse of intensification was termed “the broad-spectrum revolution,” a reference to the exploitation of more varied subsistence resources at lower trophic levels. The transition was brought about in the Fertile Crescent by the growing scarcity (by overhunting?) of the big-game sources of wild protein—aurochs, onager, red deer, sea turtle, gazelle—the “low-hanging fruit,” to mix metaphors, of early hunting. The result, perhaps impelled as well by population pressure, forced people to exploit resources that, while abundant, required more labor and were perhaps less desirable and/or nutritious. Evidence for this broad-spectrum revolution is ubiquitous in the archaeological record as the bones of large wild animals decline and the volume of starchier plant matter, shellfish, small birds and mammals, snails, and mussels begin to predominate. For the founders of this orthodoxy, the logic behind the broad-spectrum revolution and the adoption of agriculture was identical and, moreover, worldwide. The global increase in population, especially after 9,600 BCE, when the climate improved, together with the decline in big game (clearly documented in the Middle East and the New World), forced hunters and gatherers to intensify their foraging. Pressing ever more heavily on the carrying capacity of their environment’s resources, they were obliged to work harder for their subsistence. Thus the broad-spectrum revolution was, in this view, the first step in a long increase in drudgery that later reached its logical conclusion in the even more unremitting toil of plough agriculture and livestock rearing. In most versions of this narrative, the broad-spectrum revolution and agriculture were also nutritionally damaging, resulting in poorer health and higher mortality.

As an explanation for the broad-spectrum revolution, demographic pressure on carrying capacity seems in many locations to be in conflict with the available evidence. The “revolution” occurs in settings where there seems to be little population pressure on resources. It may also be the case that the wetter and warmer conditions after 9,600 BCE promoted a much greater abundance of plant life, as in the Mesopotamian alluvium, that could be easily gathered, though this would not explain the observed nutritional deficiencies in the archaeological record. There is no doubting the reality of the broad-spectrum revolution, but the jury is still out when it comes to understanding either its causes or its consequences.

About the development of agriculture proper, some three or four millennia later, however, the jury is in. There was growing population pressure; sedentary hunters and gatherers found it harder to move and were impelled to extract more, at a higher cost in labor, from their surroundings, and most large game was in decline or gone. This, then, is no Whiggish story of human invention and progress. Planting techniques were long known and occasionally used; wild plants were routinely gathered and their seeds stored; all the tools for grain processing were at hand, and even a captive animal or two might be held in reserve. Nevertheless, planting and livestock rearing as dominant subsistence practices were avoided for as long as possible because of the work they required. And most of the work arose from the need to defend a simplified, artificial landscape from the resurgence of nature excluded from it: other plants (weeds), birds, grazing animals, rodents, insects, and the rust and fungal infections that threatened a monocropped field. The tilled agricultural field was not only labor intensive; it was fragile and vulnerable.

The Late Neolithic Multispecies Resettlement Camp: A Perfect Epidemiological Storm

The world’s population in 10,000 BCE, according to one careful estimate, was roughly 4 million. A full five thousand years later, in 5,000 BCE, it had risen only to 5 million. This hardly represents a population explosion, despite the civilizational achievements of the Neolithic revolution: sedentism and agriculture. Over the subsequent five thousand years, by contrast, world population would grow twentyfold, to more than 100 million. The five thousand–year Neolithic transition was thus something of a demographic bottleneck, reflecting a nearly static level of reproduction. Supposing even a population growth rate just barely over replacement levels (for example, 0.015 percent), the total population would still have more than doubled over these five millennia. One likely explanation for this paradox of apparent human progress in subsistence techniques together with a long period of demographic stagnation is that, epidemiologically, this was perhaps the most lethal period in human history. In the case of Mesopotamia, the claim is that, owing precisely to the effects of the Neolithic revolution, it had become the focal point of chronic and acute infectious diseases that devastated the population again and again.(2)
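
To spell out the arithmetic behind that counterfactual (the 0.015 percent annual rate and the five-thousand-year span are Scott’s figures; the compounding is simply worked through here):

(1 + 0.00015)^5000 ≈ e^(0.00015 × 5000) = e^0.75 ≈ 2.1

At that rate, the 4 million of 10,000 BCE would have grown past 8 million; the estimated 5 million actually reached by 5,000 BCE implies an average growth rate of only about 0.004–0.005 percent per year, which is what makes the bottleneck so striking.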

Evidence in the archaeological record is hard to come by inasmuch as such diseases, unlike malnutrition, only rarely leave signature traces on human bones. Epidemic disease is, I believe, the “loudest” silence in the Neolithic archaeological record. Archaeology can assess only what it can recover and, in this case, we must speculate beyond the hard evidence. There are nonetheless good reasons for supposing that a great many of the sudden collapses of the earliest centers of population were due to devastating epidemic diseases.(3) Time and again there is evidence of a sudden and otherwise unexplained abandonment of previously well-populated sites. In the case of adverse climate change or soil salinization one would also expect depopulation, but in keeping with its cause it would be more likely to be regionwide and rather more gradual. Other explanations for the sudden evacuation or disappearance of a populous site are of course possible: civil war, conquest, floods. Epidemic disease, however, given the entirely novel crowding the Neolithic revolution made possible, is the most likely suspect, judging from the massive effects of disease that appear in the written records once they become available. The meaning of epidemic disease in this context is not confined to Homo sapiens alone. Epidemics affected domestic animals and crops that were also concentrated in the late-Neolithic multispecies resettlement camp. A population could as easily be devastated by a disease that swept through their flocks or their grain fields as by a plague that menaced them directly.

Once written records become available, however, we have ample evidence of deadly epidemics, which can, with caution, be read back to earlier periods. The Epic of Gilgamesh provides perhaps the most powerful evidence when its hero claims that his fame will outlive death as he depicts a scene of bodies felled, probably by pestilence, floating down the Euphrates. Mesopotamians, it seems, lived in the ever-threatening shadow of fatal epidemics. They had amulets, special prayers, prophylactic dolls, and “healing” goddesses and temples—the most famous of which was at Nippur—designed to ward off mass illness. Such events were, of course, poorly understood at the time. They were seen as “the devouring” of a god and as punishment for some transgression requiring compensatory ritual including the sacrifice of scapegoats.(4)

The first written sources also make it clear that early Mesopotamian populations understood the principle of “contagion” that spread epidemic disease. Where possible, they took steps to quarantine the first discernible cases, confining them to their quarters, letting no one out and no one in. They understood that long-distance travelers, traders, and soldiers were likely carriers of disease. Their practices of isolation and avoidance prefigured the quarantine procedures of the lazaretti of the Renaissance ports. An understanding of contagion was implicit not only in the avoidance of people who were infected but avoidance as well of their cups, dishes, clothes, and bed linen.(5) Soldiers returning from a campaign and suspected of carrying disease were obliged to burn their clothing and shields before entering the city. When isolation and quarantine failed, those who could fled the city, leaving the dying and deceased behind, and returning, if ever, only well after the epidemic had passed. In doing so, they must frequently have brought the epidemic to outlying areas, touching off a new round of quarantines and flight. There is little doubt in my mind that a good many of the earlier and unchronicled abandonments of populous areas were due more to disease than to politics.

Evidence for the role of pathogens in the diseases of humans, domesticated animals, and domesticated crops before the middle of the fourth millennium BCE is necessarily speculative. As written records proliferate, however, the evidence for epidemics grows in proportion; the texts refer, Karen Rhea Nemet-Nejat claims, to tuberculosis, typhus, bubonic plague, and smallpox.(6) One of the earliest and most amply attested is a devastating epidemic at Mari on the Euphrates in 1,800 BCE. The list of others is long, although the nature of the disease is typically obscure. The epidemic that destroyed the army of Sennacherib, son of Sargon II and Assyrian king, in 701 BCE, which figures as well in the Old Testament’s litany of plagues, is now ascribed to typhus or cholera, the traditional scourges of armies on campaign. Later, the crushing plague of Athens in 430 BCE, described memorably by Thucydides, and the Antonine and Justinian plagues of Rome play a decisive role in what amounts to early “imperial” history. Given the larger populations and growing long-distance trade of this later era, there is little doubt that epidemics touched more people and more areas than before. Nevertheless, Mesopotamia of the late fourth millennium BCE was a historically novel environment for epidemics. By 3,200 BCE, Uruk was the biggest city in the world, with anywhere from twenty-five thousand to fifty thousand inhabitants, together with their livestock and crops, dwarfing the concentrations of the earlier Ubaid period. As the most demographically packed area, the southern alluvium was especially vulnerable to epidemics; the Akkadian word for epidemic disease “literally meant ‘certain death’ and could be applied equally to animal as well as human epidemics.”(7) That concentration and an unprecedented flow of trade created, as we shall now explain, a uniquely new vulnerability to the diseases of crowding.

Sedentism alone, well before widespread cultivation of domesticated crops, created conditions of crowding that were ideal “feedlots” for pathogens. The growth of large villages and small towns in the Mesopotamian alluvium represented a ten- to twentyfold increase in the population density over anything Homo sapiens had previously experienced. The logic of crowding and disease transmission is straightforward. Imagine, for example, an enclosure with ten chickens, one of which is infected with a parasite spread by droppings. After a while—depending in part on the size of the enclosure, the activity of the fowl, and the ease of transmission—another chicken will become infected. Now, instead of ten chickens, imagine five hundred chickens in the same enclosure and the chances rise at least fiftyfold that another bird will become quickly infected, and so on exponentially. Two birds are now excreting the parasite, doubling the probability of a new infection. Recall that we have increased not only the poultry but also their droppings by fifty times, so that the likelihood of any bird avoiding contact with the pathogen, especially in a small enclosure, soon becomes vanishingly small.
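
To make that crowding logic concrete, here is a minimal sketch of a density-dependent transmission model. It is not from Scott, and the contact probability is an arbitrary illustrative value; the point it demonstrates is the one in the paragraph above: with the same enclosure and the same per-contact risk, fifty times as many birds means roughly fifty times the chance that a new bird is infected on any given day, and the whole flock is saturated far sooner.

```python
import random

# Assumed daily chance that one susceptible bird encounters any one
# infected bird's droppings in the enclosure (illustrative, not measured).
P_CONTACT = 0.002

def expected_new_infections_day_one(n_birds, n_infected=1):
    """Expected number of birds newly infected on the first day."""
    p_today = 1 - (1 - P_CONTACT) ** n_infected   # risk for each susceptible bird
    return (n_birds - n_infected) * p_today

def days_to_saturate(n_birds, seed=0):
    """Simulate day by day until every bird in the enclosure is infected."""
    random.seed(seed)
    infected, day = 1, 0
    while infected < n_birds:
        day += 1
        p_today = 1 - (1 - P_CONTACT) ** infected  # more infected birds, more droppings
        infected += sum(random.random() < p_today for _ in range(n_birds - infected))
    return day

for flock in (10, 500):
    print(f"{flock} birds: {expected_new_infections_day_one(flock):.3f} expected new "
          f"infections on day one; all infected after ~{days_to_saturate(flock)} days")
```

The ratio of the two day-one figures is roughly fifty, tracking the “at least fiftyfold” claim; the absolute numbers depend entirely on the assumed contact probability.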

For the present purposes we are applying the logic of crowding and diseases to Homo sapiens, but, as in the example above, it applies equally to the crowding of any disease-prone organism, flora or fauna. It is a crowding phenomenon that applies equally to flocks of birds and sheep, schools of fish, herds of reindeer or gazelle, and fields of cereals. The greater the genetic similarity—the less variation—the greater the likelihood that they will all be vulnerable to the same pathogen. Before extensive human travel, migratory birds that nested together combined long-distance travel with crowding to constitute, perhaps, the main vector for the spread of disease over distance. The association of infection with crowding was known and utilized long before the actual vectors of disease transmission were understood. Hunters and gatherers knew enough to stay clear of large settlements, and dispersal was long seen as a way to avoid contracting an epidemic disease. Late medieval Oxford and Cambridge maintained plague houses in the countryside to which students were dispatched with the first sign of the plague. Concentration could be lethal. Thus the trenches, demobilization camps, and troop ships at the conclusion of World War I provided the ideal conditions for the massive and lethal influenza pandemic of 1918. Social sites of crowding—fairs, military encampments, schools, prisons, slums, religious pilgrimages, such as the hajj to Mecca—have historically been locations where infectious diseases have been contracted and from which they have subsequently been dispersed.

The importance of sedentism and the crowding it allowed can hardly be overestimated. It means that virtually all the infectious diseases due to microorganisms specifically adapted to Homo sapiens came into existence only in the past ten thousand years, many of them perhaps only in the past five thousand. They were, in the strong sense, a “civilizational effect.” These historically novel diseases—cholera, smallpox, mumps, measles, influenza, chicken pox, and perhaps malaria—arose only as a result of the beginnings of urbanism and, as we shall see, agriculture. Until very recently they collectively represented the major overall cause of human mortality. It is not as if presedentary populations did not have their own parasites and diseases, but such diseases would have been not the crowding diseases but rather diseases characterized by long latency and/or a nonhuman reservoir: typhoid, amoebic dysentery, herpes, trachoma, leprosy, schistosomiasis, filariasis.(8)

The diseases of crowding are also called density-dependent diseases or, in contemporary public health parlance, acute community infections. For many viral diseases that have come to depend on a human host, it is possible, by knowing the mode of transmission, the duration of infectivity, and the duration of acquired immunity after infection, to infer the minimal population required to keep the infection from dying out for lack of new hosts. Epidemiologists are fond of citing the example of measles in the isolated Faroe Islands in the eighteenth and nineteenth centuries. An epidemic brought by sailors devastated the islands in 1781, and, given the lifelong immunity conferred on survivors, the islands were free of the measles for sixty-five years until 1846, when it returned, infecting all but the aged folks who had survived the earlier epidemic. A further epidemic thirty years later infected only those under thirty. For measles specifically, epidemiologists have calculated that at least 3,000 newly susceptible hosts would be required annually to sustain a permanent infection and that only a population of roughly 300,000 could provide this many hosts. Having a population far below this threshold, the Faroe Islands had to “import” its measles anew for each epidemic. By the same token, of course, this means that none of these diseases could have existed before the populations of the Neolithic. It also explains the generally vibrant good health of the New World populations—as well as their later vulnerability to the Old World pathogens. The groups crossing the Bering Strait in several waves around 13,000 BCE came before most such diseases had arisen and, in any case, in groups far too small to sustain any of the crowding diseases.
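
The threshold arithmetic is simple to check. Assuming, purely for illustration, a crude birth rate of about one percent per year, a population of roughly 300,000 supplies the required stream of new hosts:

300,000 × 0.01 ≈ 3,000 newly susceptible hosts per year

A community orders of magnitude smaller, like the Faroes, generates far too few new susceptibles between epidemics, so the virus burns through the population and dies out until sailors or traders reintroduce it from outside.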

No account of the epidemiology of the Neolithic is complete without noting the key role of domesticates: livestock, commensals, and cultivated grains and legumes. The key principle of crowding is again operative. The Neolithic was not only an unprecedented gathering of people but, at the same time, a wholly unprecedented gathering of sheep, goats, cattle, pigs, dogs, cats, chickens, ducks, geese. To the degree that they were already “herd” or “flock” animals, they would have carried some species-specific pathogens of crowding. Assembled for the first time around the domus, in close and continuous contact, they quickly came to share a wide range of infective organisms. Estimates vary, but of the fourteen hundred known human pathogenic organisms, between eight hundred and nine hundred are zoonotic diseases, originating in nonhuman hosts. For most of these pathogens, Homo sapiens is a final “dead-end” host: humans do not transmit it further to another nonhuman host.

The multispecies resettlement camp was, then, not only a historic assemblage of mammals in numbers and proximity never previously known, but it was also an assembly of all the bacteria, protozoa, helminths, and viruses that fed on them. The victors, as it were, in this pest race were those pathogens that could quickly adapt to new hosts in the domus and multiply. What was occurring was the first massive surge of pathogens across the species barrier, establishing an entirely new epidemiological order. The narrative of this breach is naturally told from the (horrified) perspective of Homo sapiens. It cannot have been any less melancholy from the perspective of, say, the goat or sheep that, after all, did not volunteer to enter the domus. I leave it to the reader to imagine how a precocious, all-knowing goat might narrate the history of disease transmission in the Neolithic.

The list of diseases shared with domesticates and commensals at the domus is quantitatively striking. In an outdated list, now surely even longer, we humans share twenty-six diseases with poultry, thirty-two with rats and mice, thirty-five with horses, forty-two with pigs, forty-six with sheep and goats, fifty with cattle, and sixty-five with our much-studied and oldest domesticate, the dog.(9) Measles is suspected to have arisen from a rinderpest virus among sheep and goats, smallpox from camel domestication and a cowpox-bearing rodent ancestor, and influenza from the domestication of waterfowl some forty-five hundred years ago. The generation of new species-jumping zoonoses grew as populations of man and beasts swelled and contact over longer distances became more frequent. It continues today. Little wonder, then, that southeast China, specifically Guangdong, probably the largest, most crowded, and historically deepest concentration of Homo sapiens, pigs, chickens, geese, ducks, and wild animal markets in the world, has been a major world petri dish for the incubation of new strains of bird and swine flu.

The disease ecology of the late Neolithic was not simply a result of the crowding of people and their domesticates in fixed settlements. It was rather an effect of the entire domus complex as an ecological module. The clearing of the land for agriculture and the grazing of the new domesticates created an entirely new landscape, and an entirely new ecological niche with more sunlight, more exposed soils, into which new suites of flora, fauna, insects, and microorganisms moved as the previous ecological pattern was disturbed. Some of the transformation was by design, as with crops, but much more represented the second- and third-order collateral effects of the domus’s invention.

Emblematic of this collateral effect was the concentration of animal and human wastes: in particular, feces. The relative immobility of sedentary humans and livestock and their wastes permits repeated infection with the same varieties of parasites. Mosquitoes and arthropods, often the vectors of disease, find the wastes ideal sites for breeding and feeding. Mobile groups of hunter-gatherers, by contrast, often leave their parasites behind by moving to a new environment where they cannot breed. Once stationary, the domus, with its humans, livestock, grain, feces, and plant wastes, makes an attractive feedlot for many commensals, from rats and swallows down the chain of predation to fleas and lice, bacteria and protozoa. The pioneers who created this historically novel ecology could not possibly have known the disease vectors they were inadvertently unleashing. In fact, it was not until the late nineteenth-century discoveries of the founders of microbiology, Robert Koch and Louis Pasteur, that it became clear what a heavy price in chronic and lethal infections Homo sapiens was paying for the absence of clean water, sanitation, and sewage removal. As devastating new illnesses left humans not knowing what hit them, folk theories and remedies proliferated. Only one nostrum—“dispersal”—implicitly identified crowding as the basic cause.

The density-dependent diseases afflicting the populations of the late-Neolithic multispecies resettlement camp represented a new and rigorous selection pressure from pathogens never experienced by their ancestors. One imagines that not a few early concentrations of sedentary peoples were all but exterminated by diseases to which they had virtually no resistance. For smaller preliterate societies it is all but impossible to know for sure the role of epidemics in mortality, and much of the evidence from early cemeteries is inconclusive. It is quite likely, however, that the crowding diseases, including especially zoonoses, were largely responsible for the demographic bottleneck of the early Neolithic. In time—how long is uncertain and varies with the pathogen—crowded populations developed a degree of immunity to many pathogens, which in turn became endemic, signifying a stable and less lethal pathogen-host relationship. After all, only those who survive live on to have children! Some diseases—whooping cough and meningitis, for example—might still endanger the very young, while others, if contracted by a young person, were relatively harmless and conferred immunity: polio, smallpox, measles, mumps, and infectious hepatitis.(10)

Once a disease becomes endemic in a sedentary population, it is far less lethal, often circulating largely in a subclinical form for most carriers. At this point, unexposed populations having little or no immunity against this pathogen are likely to be uniquely vulnerable when they come into contact with a population in which it is endemic. Thus war captives, slaves, and migrants from distant or isolated villages previously outside the circle of crowd immunity have fewer defenses and are likely to succumb to diseases to which large sedentary populations have become, over time, largely immune. It was for this reason, of course, that the encounter between the Old World and the New World proved so cataclysmic for the immunologically naïve Native Americans, isolated for more than ten millennia from Old World pathogens.

The diseases of sedentism and crowding in the late Neolithic were compounded by an increasingly agricultural diet, deficient in many essential nutrients. One’s chances of surviving an epidemic disease, other things equal, especially as an infant or a pregnant woman, depended very much on one’s nutritional status. The extremely high rates of mortality for infants (40–50 percent) among most early agriculturalists were a result of the conjuncture of a diet that weakened the vulnerable with new infectious diseases that carried them off.

Evidence for the relative restriction and impoverishment of early farmers’ diets comes largely from comparisons of skeletal remains of farmers with those of hunter-gatherers living nearby at the same time. The hunter-gatherers were several inches taller on average. This presumably reflected their more varied and abundant diet. It would be hard, as we have explained, to exaggerate that variety. Not only might it span several food webs—marine, wetland, forest, savanna, arid—each with its seasonal variation, but even when it came to plant foods, the diversity was, by agricultural standards, staggering. The archaeological site of Abu Hureyra, for example, in its hunter-gatherer phase, yielded remains from 192 different plants, of which 142 could be identified, and of which 118 are known to be consumed by contemporary hunter-gatherers.(11)

A symposium devoted to assessing the impact of the Neolithic revolution on human health worldwide concluded on the basis of paleopathological data:

[Nutritional] stress . . . does not seem to have become common and widespread until after the development of high degrees of sedentism, population density, and reliance on agriculture. At this stage . . . the incidence of physiological stress increases greatly and the average mortality rates increase appreciably. Most of these agricultural populations have high frequencies of porotic hyperostosis [overgrowth of poorly formed bone associated with malnutrition, particularly iron-deficiency related malnutrition] and cribra orbitalia [a localized version of the above condition, in the eye socket], and there is a substantial increase in the number and severity of [tooth] enamel hypoplasias and pathologies associated with infectious diseases.(12)

Much of the malnutrition detected in what we might call “agricultural woman”—for women, owing to blood loss with menses, were the most severely affected—seems to be due to iron deficiency. Preagricultural women had a diet that supplied abundant amounts of omega-6 and omega-3 fatty acids derived from game, fish, and certain plant oils. These fatty acids are important because they facilitate the uptake of iron necessary for the formation of oxygen-carrying red blood cells. Cereal diets, by contrast, not only lack the essential fatty acids but actually inhibit the uptake of iron. The result of the first increasingly intensive cereal diets in the late Neolithic (wheat, barley, millet) was therefore the appearance of iron-deficiency anemia, leaving an unmistakable forensic bone signature.

Most of the added vulnerability to novel infections seems due to a relatively high and narrow carbohydrate diet without much in the way of wild foods and meat. It was likely to lack some essential vitamins and to be protein poor. Even the meat of the domesticates on which they might occasionally feast contained far fewer vital fatty acids than wild game. Illnesses attributable to the Neolithic diet that do have bone signatures, such as rickets, can be documented; those that affect the soft tissues are far harder to document (except in the occasional well-preserved mummy). Nevertheless, on the basis of dietary knowledge and early written accounts of illnesses that can probably be assumed, again on dietary knowledge, to have existed earlier, the following nutrition-related diseases have been attributed to Neolithic foodways: beriberi, pellagra, riboflavin deficiency, and kwashiorkor.

What about crops? They too were subjected to a kind of “sedentism” on fixed fields and conditions of crowding, as well as a new, human-driven selection process that reduced their genetic diversity to foster desired characteristics. They too, like any organism, were subject to their own density-dependent diseases, as we shall see. Because “both herding and agriculture are frequently afflicted with epidemics, crop failure, or other misfortunes,” Nissen and Heine claim that early farmers preferred, when possible, to rely on hunting, fishing, and gathering.(13) Here again the archaeological record is not very helpful. It is possible to show, say, that a previously populous area was suddenly abandoned; before written records, however, knowing why it was deserted is another matter. A crop fungus, a rust, an insect infestation, or even a storm that destroys a ripe crop, like soft-tissue diseases, leaves little or no trace. Written records, when they are available, are more likely to record a “harvest failure” or famine than to specify the cause, which, in many cases, is not understood by the victims themselves.

Crops represented their own perfect “floral” epidemiological storm. Consider as a pathogen or insect might the attractions of the Neolithic agricultural landscape. It was not only crowded but, compared with wild grasslands, was largely devoted to just two major grains: wheat and barley. Furthermore, these were fixed fields cropped more or less continuously, as compared, say, with fire-field cultivation (aka swidden or slash-and-burn), where a field was planted for a year or two and then fallowed for a decade or more. Repeated annual cultivation provided, in effect, a permanent feedlot for insect pests and plant diseases—not to mention obligate weeds—which built up to population levels that could not have existed before fixed-field monocropping. Large sedentary communities necessarily meant many arable fields in close proximity, growing a similar variety of crop; this promoted a commensurate buildup of pest populations. As with the epidemiology of human crowding, it seems logical to suppose that many of the crop diseases besetting Neolithic planters were new pathogens that evolved to take advantage of such a nutritious agro-ecology. The literal meaning of “parasite,” from the original Greek root, is “beside the grain.”

Crops not only are threatened, as are humans, with bacterial, fungal, and viral diseases, but they face a host of predators large and small—snails, slugs, insects, birds, rodents, and other mammals, as well as a large variety of evolving weeds that compete with the cultivar for nutrition, water, light, and space.(14) The seed in the ground is attacked by insect larvae, rodents, and birds. During growth and grain development the same pests are still active, as well as aphids that suck sap and transmit disease. Fungal diseases are especially devastating, including mildew, smut, bunt, rusts, and ergot (famous as St. Anthony’s Fire when ingested by humans) at this stage. The part of the crop that does not succumb to these predators must compete with a host of weeds that have come to specialize in ploughed soil and to mimic certain crops. And once the harvest is in the granary it is still subject to weevils, rodents, and fungi.

It is common enough in the contemporary Middle East for several crops in succession to be lost to insects, birds, or disease. In an experiment in northern Europe, a crop of modern barley, fertilized but not protected with modern herbicides or pesticides, was reduced by half: 20 percent due to crop disease, 12 percent to animals, and 18 percent to weeds.(15) Threatened by the diseases of crowding and monoculture, domesticated crops must be constantly defended by their human custodians if they are to yield a harvest. It is largely for this reason that early agriculture was so dauntingly labor intensive. Various techniques were devised to reduce the labor involved and improve the yields. Fields were scattered so that they were less contiguous; fallowing and crop rotation were practiced; and seed was procured at a distance to reduce genetic uniformity. Ripening crops were closely guarded by farmers, their families, and scarecrows. But given the disease-prone agro-ecology of the domesticated crop, it was touch and go whether the crop would survive all the predators to feed its ultimate guardian and predator: the farmer.

The older narrative of civilizational progress is, in one basic respect, undoubtedly correct. The domestication of plants and animals made possible a degree of sedentism that did form the basis of the earliest civilizations and states and their cultural achievements. It rested, however, on an extremely slender and fragile genetic foundation: a handful of crops, a few species of livestock, and a radically simplified landscape that had to be constantly defended against a reconquest by excluded nature. At the same time, the domus was never even remotely self-sufficient. It required a constant subsidy, as it were, from that excluded nature: wood for fuel and building, fish, mollusks, woodland grazing, small game, wild vegetables, fruits, and nuts. In a famine, farmers resorted to all the extradomus resources that hunter-gatherers relied on.

The domus was at the same time a veritable feast and a pilgrimage site for uninvited commensals and pests large and small, down to the smallest viruses. Its very concentration and simplicity made it uniquely vulnerable to collapse. Late Neolithic agriculture was the first of many steps in the development of special techniques for maximizing the production of a small number of preferred plant and animal species. An illness—of crops, livestock, or people—a drought, excessive rains, a plague of locusts, rats, or birds, could bring the whole edifice down in the blink of an eye. Based on a narrow food web, Neolithic agriculture was far more productive, in a concentrated way, but also far more fragile than hunting and gathering or even shifting cultivation, which combined mobility with a reliance on a diversity of foods. How, despite its fragility, the domus module of fixed-field agriculture became a hegemonic, agro-ecological and demographic bulldozer that transformed much of the world in its image is something of a miracle.

1. Moore, Hillman, and Legge, Village on the Euphrates, 393. This is an amazingly comprehensive and valuable survey of the richest site in Mesopotamia.

2. Burke and Pomeranz, The Environment and World History, 91, citing Peter Christensen, The Decline of Iranshahr. The period Christensen is referring to falls later, but he dates the origin of such diseases to the Neolithic transition itself. See Chapter 7 and pp. 75 ff.

3. It is quite possible that advances in the recovery of genetic material will soon provide more robust evidence for such suspicions.

4. See, among others, Porter, Mobile Pastoralism, 253–254; Radner, “Fressen und gefressen werden”; Karen Radner, “The Assyrian King and His Scholars: The Syrio-Anatolian and Egyptian Schools,” in W. Lukic and R. Mattila, eds., Of Gods, Trees, Kings, and Scholars: Neo Assyrian and Related Studies in Honour of Simo Parpola, Studia Orientalia 106 (Helsinki, 2009), 221–233; Walter Farber, “How to Marry a Disease: Epidemics, Contagion, and a Magic Ritual Against the ‘Hand of the Ghost,’” in H. F. J. Horstmanshoff and M. Stol, eds., Magic and Rationality in Ancient Near Eastern and Graeco-Roman Medicine (Leiden: Brill, 2004), 117–132.

5. Farber, “Health Care and Epidemics in Antiquity.” Evidence here comes largely from Mari on the Euphrates and from Uruk, around the early second millennium BCE.

6. Nemet-Nejat, Daily Life in Ancient Mesopotamia, 80.

7. Ibid., 146. Nemet-Nejat adds, “An omen reported plague gods marching with the troops, most likely a reference to typhus.”

8. See especially Groube, “The Impact of Diseases”; Burnet and White, The Natural History of Infectious Disease, especially chapters 4–6; and McNeill, Plagues and People.

9. McNeill, Plagues and People, 51.

10. Polio is an example of an epidemic related to an excess of hygiene. In a major city in the global south like Bombay, for example, an overwhelming percentage of the children under five will have polio antibodies in their system, showing that they have been exposed to the disease, which is spread by feces and is rarely fatal to infants. For one not exposed at an early age, however, the disease contracted later in life is far more severe.

11. Moore, Hillman, and Legge, Village on the Euphrates, 369.

12. Roosevelt, “Population, Health, and the Evolution of Subsistence.”

13. Nissen and Heine, From Mesopotamia to Iraq.

14. Dark and Gent, “Pests and Diseases of Prehistoric Crops.”

15. Ibid., 60.

James C. Scott’s Against the Grain is available online here.
