Thursday, May 14, 2015

Karen Armstrong's Fields of Blood: Religion and the History of Violence

Karen Armstrong's Fields of Blood: Religion and the History of Violence (Knopf, 2014).

Fields of Blood is a frustratingly good book. Karen Armstrong knows her material and has reflected on it with rare perspicacity, but it seems to me that she fails to address much that is vital. Her thesis is that true religious violence is rare or non-existent, yet she never explains what religion is. Armstrong objects strenuously to the post-Reformation and Enlightenment tendency to place religion in a little box by itself, which was done in large part to limit what were perceived as religious wars. It seems that she knows religion when she sees it, and anything that has been tainted by politics is not religion. She also picks and chooses her coverage of wars. She is able to show that the European Wars of Religion were mostly political in nature, though she never quite explains away the extent to which they were viewed as religious at the time they were fought, as well as in later years. On the other hand, she skates over the two world wars, which allows her to avoid a deep discussion of how religion can abet and facilitate warfare. This was certainly the case with the First World War (see Philip Jenkins, The Great and Holy War: How World War I Became a Religious Crusade, HarperOne, 2014). If it was somewhat less the case the second time around, one must still deal with the very messy question of how modern Shintoism was so deeply embedded in every aspect of the Japanese war effort.

Having said that, Armstrong raises a lot of vital points about the history of violence and religion, and throws a strong light on many issues that plague us today. Examining the Warring States period in ancient China, she shows how military commanders and authors such as Sun-tzu "regarded themselves as sages and saw their warfare as a species of religion," while others took the same mythical and religious doctrines in completely different directions. Her point is that the same belief systems can lead away from war as easily as toward it. (Armstrong, 93.)

This is a fairly simple statement on the surface, but it is actually a fundamental principle in dealing with war and religion. It is not so much the case that religion breeds war as that religion interacts with war. The nature of this interaction is crucial: can you truly have war without the support of myth and religion, or analogs thereof? My answer is that you cannot. While religion is not necessarily the cause of wars or violence, it usually has been used to justify violence and all too often has amplified it. One must admit, though, that when religion breaks down, as it did in Russia in 1917, the results can be equally appalling. In such cases, ideologies with some aspects of religion play the same role.

Where Armstrong shines is in her discussions of the roles of ancient religion in war and of the complex relationship of fundamentalisms to war and violence. She argues persuasively that fundamentalism is a reaction to war and oppression, noting that American Protestant fundamentalism was a reaction to the Civil War and was intensified by the First World War. Its attitudes towards war and violence have changed over the decades as it has perceived external threats, but it has rarely turned towards civil violence. (She fails to consider the role of the fundamentalist or evangelical religious beliefs of the neocons and how those may have shaped our recent wars and foreign policy.) In her final chapters, Armstrong traces how Islamic fundamentalism followed a trajectory from an emphasis on social justice to horrific violence in the face of failed colonial and post-colonial regimes. The book's treatment of Islam is as nuanced as its discussion of Christian fundamentalism.

We need more books of this sort, histories that challenge assumptions, grapple with complexity, and continue to occupy and engage the mind long after the reader turns the last page. I found myself wanting to argue with her on almost every page, and found something to take exception to in almost every chapter, but Armstrong succeeded in making me revise some long-held opinions, in leaving some matters open to question (always a good sign), and in forcing me to struggle to integrate new insights with old. It is an important book for anyone interested in history, but also for anyone trying to wrap their mind around current events.



Wednesday, May 13, 2015

Adam Zamoyski's Phantom Terror

Adam Zamoyski, Phantom Terror: Political Paranoia and the Creation of the Modern State, 1789-1848 (Basic Books, 2015). 

In a previous work, Holy Madness: Romantics, Patriots, and Revolutionaries, 1776-1871 (Viking, 2000), Zamoyski chronicled the era and Weltanschauung of the Romantic revolutionaries. In his new book, he turns to the conservatives who governed Europe in the five decades following the storming of the Bastille. Here he traces how their reactions to real and imagined revolutions and conspiracies created the very atmosphere in which new kinds of revolutionaries, ones who would overthrow the old order completely in 1917-21, could flourish. The author also chronicles the paranoia that arose among the governing classes; how it gave rise to secret police forces, censorship, and other repressive laws and institutions; and how those institutions propagated that paranoia in a vicious cycle, eventually creating a mindset and toxic political atmosphere still found throughout the Western world.

The book is a powerful indictment of viewing everything through the lens of conspiracy and of refracting every fact, real or imagined, through that lens. The book is at its best in discussing Alexander I and Metternich, but the whole work is readable and scholarly, filled with memorable characters while being grounded in primary sources. Highly recommended.

Wednesday, December 24, 2014

Some Recent Books on Climate and History

One of the more recent trends in historical writing is the integration of climate history with political, military, economic, and social history. As we extract more and more details of past weather and climate from ice cores, lake sediments, tree rings, written records, and other sources, the data has increasing relevance for historians. Two years ago, Geoffrey Parker, one of the leading Early Modern Europeanists, published his award-winning Global Crisis: War, Climate Change and Catastrophe in the Seventeenth Century. I am only now starting to read it, but the main thrust of Parker's book is how humans, particularly leaders and governments, exacerbated the effects of climatic catastrophes through their policies and actions.

This past year a number of books have pursued the integration of climate with other aspects of history. Of those that I've read, William Rosen's The Third Horseman: Climate Change and The Great Famine of the Fourteenth Century is the closest thematically to Parker's book. Highly readable and aimed at a broader audience, Rosen's book lacks the depth and global breadth of Parker's work, but it still shows how the political and military ambitions (and ineptitude) of the rulers of England, Scotland, France, and the Low Countries were often thwarted by, motivated by, or intensified the effects of the onset of the Little Ice Age. Rosen tried to tie climate and history together in a previous book, Justinian's Flea, but had more difficulty there, owing to a lack of source material and a tendency to exaggerate the long-term effects of cooling caused by volcanism and the Plague of Justinian. He suffers from neither problem in his new book.

Also about climate, but covering a much briefer episode lasting less than a decade, is Gillen D'Arcy Wood's Tambora: The Eruption That Changed the World. There have been a few books about the Year Without a Summer (1816) over the past decades, but Wood is the first author to try to bring together the threads and trace the eruption's effects throughout the Northern Hemisphere. While he fully covers such well-known topics as the lack of a summer in New England and much of Europe, the skies in Turner's paintings, the influence of the weird weather on the writing of Frankenstein, and the lethal advance of the Swiss glaciers, Wood also explores its effects on the outbreak of cholera in Bengal (which became a global epidemic), the devastation of Yunnan (where famines, floods, and fear led both to brilliant poetry of despair and to the spread of opium production and consumption), and a famine in Ireland that was nearly as severe as the better-known one of the 1840s.

A third book, where climate plays a more uncertain role, because our sources are meager and the evidence more open to interpretation, is Eric H. Cline's 1177 B.C.: The Year Civilization Collapsed. The title is a bit misleading, as the main point of Cline's account is not that civilization suddenly collapsed in 1177 (or even within a year or two of that), but that the decline was much more gradual and that some civilizations, such as Egypt, survived, though much diminished. Cline has less to say about climate than the others, which is interesting, as climate changes of varying kinds have been posited for the Late Bronze Age Collapse for more than a half-century. Only recently has archeoclimatology begun to reveal solid data about the climate of the Late Bronze Age. As he does throughout much of the book, Cline dials back the rhetoric of the past and takes a much more cautious attitude towards climate as the cause of the breakdown. Instead, in his penultimate chapter, he looks at a variety of possible causes, including climate change and what are sometimes called earthquake storms (long spells of violent seismic activity in a region), that could have interacted, but for which there is a lack of conclusive evidence or even consensus among scholars.

We are probably still in the infancy of the fusion of archeoclimatology and historical climatology with other fields of archaeology, history, and the broader humanities, but I am struck by how far it has come in the two decades since I was in graduate school. We need to see a lot more of this integration in the future, and to look at very familiar events in this context. Thirty years ago, I became aware that the massive drought that afflicted the western US in 1862 and the years following affected the Confederate economy and logistics, and I mentioned it in passing in my MA thesis. I was unaware of its true extent (for a quick reference, see http://opinionator.blogs.nytimes.com/2012/10/12/the-drought-that-changed-the-war/?_r=0) and have hardly seen or heard it mentioned during the sesquicentennial of the Civil War. Likewise, there has always been a lot of comment on the hard winters during the Napoleonic and World Wars, but I have yet to see anyone go beyond the weather and look at the climate of those periods and how the policies and actions of the belligerents interacted with it. Quite possibly I've missed some early work in this regard, but with the exception of a few brief periods such as the Year Without a Summer or the Dust Bowl, American and European historians have paid little attention to climate. Now that they have the tools, this is finally changing.

Saturday, September 21, 2013

Walled Gardens

Occasionally I read something that bothers me in obscure and subtle ways. Such things gnaw at me and lead to reflection. Recently an article at an important journalism education site (Poynter) by a distinguished e-journalist and professor (Bill Adair, who has created a Pulitzer-winning news site) provoked such a reaction. The only way I can explain it is to say that it strikes at the heart of my beliefs about books: how we interact with them, and how important they are in shaping our patterns of cognition (and hence our fundamental construction of reality).

Adair writes about his disappointment in current ebooks and discusses the kind of multimedia ebooks he would like to see. He begins with a biography of Bruce Springsteen that he feels ought to have included recordings of the music. Given the mess that music copyright has become, the author did not try to incorporate audio files, but agreed with Adair that this should happen in the future. Perhaps this would have been a good idea, but the lack of music forced Adair to go out and construct his own soundtrack as he read the book. Adair says he downloaded several albums. I wonder whether he also sought out interesting and different performances of these songs on the web, performances of which he was previously unaware. Is that important? The difference between the two behaviors is the difference between two different levels or kinds of engagement with the book and the music.

The second book Adair read is Dan Brown's Inferno. He understandably wanted to see maps, illustrations, and other materials described in the book. He did find a website that pulled together this material, but felt it should have been built into the ebook. Unlike the Springsteen biography, what he wanted here was not too different from an illustrated edition of a print book, though he would have liked animated maps showing the movements of the characters. I find myself more sympathetic to this than I do to his desire for a soundtrack, partly because of the tradition of illustrations in books and partly because of a peculiarity in the way I relate to music.

Adair confronts me with a fundamentally different perspective on the book than the one I have evolved over the half-century of my existence. In this article, Adair gives the impression of wanting ebooks to be self-contained vehicles for consumption. I'm not sure that is his intention (some comments towards the end of the article point beyond that), but it is how it initially struck me. Quite possibly I misrepresent him in some important ways and am merely using him as a stalking horse.

Books have been at the center of my life since infancy. I was read to even before I was born, and many of my most vivid childhood memories are of being read to or reading. That seems to be true for a lot of people, each of whom has their own understanding of, and relationship to, books. Three books that I was given as a child and teenager shaped my perception of books in ways I did not then understand. I was a precocious reader, and when I was eight my grandmother Reed gave me a set of the full Encyclopedia Britannica. This was not the children's edition, but the complete, adult, 1968 edition. It sat on the shelves of my bedroom closet where I could easily reach it. My mother still laughs about seeing me sitting cross-legged on the bed with volumes of it open in front of me. In the days before Wikipedia it was my place for quickly finding information. Anything I wanted to find was seemingly in its pages, and it created in me the habit of looking things up as they struck me. Today it sits, cherished but rarely used, on the bottom shelf of a bookcase in my living room, replaced by the Internet. From those volumes I developed a lifetime's habits that have carried over into the age of the web, but I also gained an understanding of how all information is interrelated and of how one must go beyond the confines of a single book, even one as voluminous and authoritative as the Britannica.

Years later, the Britannica was inadvertently involved in another important lesson about the authority of books. In working in early modern European history, one inevitably confronts the witch craze and its difficult historiography. One of the stranger incidents is the witch-cult theory of Egyptologist Margaret Murray that rose to prominence in the 1920s and 1930s. Briefly, she held that there had been an organized pagan cult that survived from the neolithic to the end of the Middle Ages in Europe. Her influence on academics was never large, but her ideas were propagated to the wider public (being embraced by the emerging neopagan and wiccan movements, as well as by numerous horror writers and movie makers) both through her books and through having authored the article on witchcraft in the 1929 Britannica. She fell into complete disrepute academically in the Sixties after the publication of Elliot Rose's critique of her work, A Razor for a Goat. This drove home to me the transitory nature of the authority of books, but also something of the complex web of relations between books.

The second of the three works that shaped my understanding of books at that early age was W.S. Baring-Gould's The Annotated Sherlock Holmes. Baring-Gould collected all 56 original short stories and four novels, along with several essays on pressing subjects (such as the exact number of wives Dr. Watson had), into a wonderful two-volume edition, the broad margins of which were filled with notes on obscure points of Sherlockian chronology, names, places, events, objects that were no longer so familiar (such as antimacassars) and why they were so named, and the values of old British coins. (I gained a solid working knowledge of pre-decimal British coins that stood me in good stead when studying history at a graduate level.) Baring-Gould showed me how much could be gleaned from works of fiction about the world and also how those fictional worlds could interpenetrate with our own. It made me understand that scholarship and fiction could not only live together, but could also complement and supplement one another. Later, reading Samuel Rosenberg's Naked Is the Best Disguise: The Death and Resurrection of Sherlock Holmes, a deeply flawed but fascinating book on Conan Doyle and his creation, I was led back to Baring-Gould more than once. I think this was one of the first times I began to notice the interplay that can occur between books.

The third book was Brigadier Peter Young's Edgehill, 1642: The Campaign and Battle, published in the 1970s by Roundwood Press as part of a series that also includes Young's Marston Moor, 1644: The Campaign and Battle, and Margaret Toynbee's Cropredy Bridge, 1644: The Campaign and Battle. The book is a good, solid account of the two opposing armies and the actions of the first campaign of the English Civil Wars. What distinguishes this volume, and its companions, from most histories is the extensive publication of contemporary sources and accounts of the battle, which make up about half of the book. It is possible for the reader to have easy access to the materials the author used and to see the battle through many different eyes. It's something that I wish could be done more often. For me as a high school student, I suppose it opened up something of how the author had gone about his work, and also something of the historical method, for the first time. It also gave me access to the network of connections that exist between a published book and its sources, which of course is part of a much larger network of interdependent books and sources.

Peter Young's approach in Edgehill brings us back to a track parallel to what Adair would like to see in ebooks. Young provided original source material, pictures, and maps. All of that was possible in a printed book, but animations, video, and sound of course were not. Were he alive and writing today, I have little doubt that an Edgehill ebook would at least contain animated maps, along with animated diagrams or videos of re-enactors demonstrating the complexities of seventeenth-century military drill. In fact, it could well contain performances of contemporary songs as well.

So what is it that struck me so viscerally in Adair's article and gnaws at me as I write this? I think it is a fundamental difference between two different platforms, one I've not really seen explored in essays, though surely I've just missed it. What Young did in Edgehill and Baring-Gould did in his Annotated Sherlock Holmes was to try to expand the web of relationships beyond the text. In Young's case, he wanted to open up sources he had used, but sources that would, in most instances, have required travel, or at least a microfilm machine, to access. Baring-Gould was trying to open up an existing text to make it more accessible to readers and perhaps more interesting. Both were dealing with the limitations of ink on paper, and, as happens so often, pushing up against a technical limitation led to a creative solution. Fundamentally, it was about increasing access.

The ebooks Adair critiques are completely different creatures. An ebook exists on a platform that can do many other things, far beyond just presenting text and pictures. In almost all cases, the possibility of instantly going out and exploring the net to find additional resources exists. Adair is arguing that those resources be pre-packaged by the author and publisher. While that is convenient, it is also an attempt to reduce the scope of discovery by making it too easy for the reader not to explore what might be out there. Fundamentally, it is about limiting access.

More and more we are seeing this kind of approach in etextbooks from the major publishers. For them, textbooks are no longer books, but elaborate, interactive, multimedia platforms, seemingly intended to keep the student in a walled garden and dependent on their products. The etextbook today is often an accumulation of texts, interactive animations, video, online quizzes and assignments, and collaborative note-taking, to the point that the instructor can almost seem superfluous. It attempts to be a totality.

Adair may not have this in mind. He is writing for journalists, and in the last part of his article he advocates using multimedia in books as a means for journalists to bring difficult-to-access content to readers, yet in the next sentence he refers to Hollywood using enhanced ebooks to promote television and the recent film version of Les Miserables.

Let me suggest a scenario. Suppose Dan Brown's publishers decide to put out a special, deluxe, enhanced ebook of his best-known work, The Da Vinci Code, for its tenth anniversary. Presume for a moment that they want to provide an experience that will cause readers to pay a higher price for this ebook. They certainly have a mountain of material from which to choose. They could easily include the text and parts or all of the movie (perhaps having the book and movie keyed to each other in such a way that clicking on a line in the book would take one to the corresponding scene in the movie, and clicking on a scene in the movie would take the viewer to the corresponding chapter in the book). They could have very high definition images of the paintings that play such a critical part in the story. There is, for instance, a 16 gigapixel image of Da Vinci's Last Supper available that allows viewers to move in for super closeups. (That works out to more than one pixel for every 4/1000ths of an inch.) While the file size would make it prohibitive to include the whole image in an ebook right now, it would allow the publisher to include very high-quality closeups of specific features of the painting mentioned in the book. The ebook might also contain any of a number of essays (in print or video) that take readers through the symbolism and history of the book. Add to that an online forum where readers could discuss the book, maybe even chat with the author on occasion, and the publisher would certainly charge a premium. They might even be able to use the forum to market other products. But it would be closed, pulling the consumer (I'm not sure the buyer of this work should be called a reader) ever deeper into the product, rather than directing them out, to find other understandings of the art works, other interpretations of the symbols, the whole sordid history of the Priory of Sion, or even the larger world of Grail research or a reading of the Gnostic gospels.
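As a rough check on that pixel figure (a back-of-the-envelope sketch only, assuming the mural's commonly cited dimensions of roughly 8.8 m by 4.6 m, which are not given above):

$$
\frac{16\times 10^{9}\ \text{pixels}}{(346\ \text{in})\times(181\ \text{in})} \approx 2.6\times 10^{5}\ \text{pixels/in}^{2},
\qquad
\sqrt{2.6\times 10^{5}} \approx 505\ \text{pixels per linear inch},
$$

or one pixel for roughly every 2/1000ths of an inch, comfortably finer than one pixel per 4/1000ths of an inch.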

In short, I am reacting to a difference between consumption and reading, between life in a walled garden and a life of exploration. This brings us back to the web of books I mentioned earlier. At some point in my life, I became aware of this phenomenon. I'm not sure when or how, but I do know that historians were actively exploring it when I was in college. Carlo Ginzburg's oddly titled work, The Cheese and the Worms: The Cosmos of a Sixteenth-Century Miller, was the first that I read. Using transcripts of two heresy trials of a Friulian miller named Domenico Scandella (also known as Menocchio), Ginzburg tried to reconstruct the books he had read, how he read them, and how these books became interconnected and exerted a mutual influence on one another (and on his folk beliefs) in Menocchio's mind. (The title refers to Menocchio's belief that God did not create the world, but that it arose from spontaneous generation, just as it was believed that worms were spontaneously generated by the fermentation of cheese.)

A decade after Ginzburg, Lisa Jardine and Anthony Grafton published an important micro-study of the reading habits of the Elizabethan polymath Gabriel Harvey ("Studied for Action": How Gabriel Harvey Read His Livy, Past and Present, 129, Nov. 1990). By analyzing the copious annotations Harvey made in the margins of his books, Jardine and Grafton were able to study the way he referred back and forth among them, as well as how his reading influenced an important political circle at Elizabeth's court. As with Menocchio, but in a much more organized fashion, they were able to examine the way reading one work influenced the understanding of another. In short, they were able to sketch out a small web of works and their interconnections.

My own studies at the time were attempting to unravel how printing, firearms, and, to a lesser extent, clocks influenced the cognitive world of sixteenth-century military intellectuals. The most sophisticated and studied of them, Machiavelli, gave a clear, if metaphorical, guide to the interplay of his reading and experience in a famous letter to Francesco Vettori. He describes his composition process as a conversation with various historical figures, whom we know he drew from Livy and from more recent historical works, as well as from his own personal experience as a diplomat. He constructed his longest work, the Discourses on the First Ten Books of Livy, around the great Roman historian, whose work we know he had known since childhood. Yet he clearly drew conclusions from it that are unsupported by Livy's examples and that must have come from his reading of others, as well as from personal experience as a bureaucrat and diplomat.

Isn't this the manner in which we all read? We do not read any one book in isolation. Instead we read each book in the context of all we have read before. I am most aware of this in reading non-fiction, and particularly history, but I am conscious of it too in reading fiction. It may be that Sherlock Holmes pops up when reading a mystery, or Carl Jung when reading a science fiction novel. In some cases, there are straightforward reasons for this. Nero Wolfe and Archie Goodwin are clearly and openly a takeoff on Holmes (both Sherlock and Mycroft) and Watson. Plot elements are widely borrowed from one author of fiction by another. Rosenberg, who wrote the Sherlock Holmes study Naked Is the Best Disguise, at one point made his living as a lawyer for a film studio. One of the major lessons he learned was that when two stories share plot elements, it is more likely that they derive them from a third, common source than that one made a direct borrowing from the other. It's been said that most modern authors owe Shakespeare royalties.

But sometimes in fiction it is less obvious. I don't know if Frank Herbert consciously borrowed ideas about the collective unconscious from Jung in his six Dune novels, but I find it impossible to read them without thinking about Jung, as well, of course, as about the explicit references to the Oresteia, since most of the major characters bear the same family name (Atreides). With the references to the Oresteia, a whole other set of associations unfolds, across the many different versions of the plays, starting with Aeschylus and concluding with Sartre, Eugene O'Neill, and T.S. Eliot, the last two with their wonderful observations on the playing out of curses, metaphorical and otherwise, and Sartre's play with its desolation reflecting back on the existential crises of Herbert's main characters. Herbert's novels, for many years the best-selling science fiction of all time, encourage this kind of exploration beyond their boundaries and reflection on their ideas. They draw one inward into oneself but also push one out to see the world as a complex ecology of living beings and developing minds.

There are many different styles of reading. It may be that a closed and self-referencing platform is exactly what some readers want. The problem, to me, is that this may become the norm. Commerce has a way of progressively reducing and homogenizing our options until they are all the same. If our ebooks must be enhanced, I want the ability to turn those features on and off. I don't want them to be intrusive, obnoxiously distracting me from the text. For instance, the faint underlining that Kindle uses to show popular highlights is sometimes annoying, and something I toggle on and off. Even the page-turning animations that most reading apps offer as an option are distracting to me.

More problematic is what the walled gardens of our textbooks, and potentially of our children's and young adult books, might offer to the next generation. Will they be encouraged to go out and look things up on their own, or will it all be spoon-fed to them? Will critical reading be possible for such a generation, or will they fall for the apparent totality of information presented to them? That is what bothers me about Adair's article. It is not that I believe he advocates anything like this, but, rather, that the direction he suggests will lead publishers down a path that ends there.

Monday, December 26, 2011

A Most Dangerous Day

Sometimes turning points go unrecognized. Likewise, some things that seem like turning points are not.

Christmas Day of 1861 may have been the most dangerous day of the Civil War for the North. There were greater panics in Washington over later events. The emergence of the ironclad CSS Virginia (formerly the USS Merrimack) at Hampton Roads the following March 8 terrified many in the cabinet, though the scale of the threat was grossly exaggerated. Lee's invasion of Pennsylvania, which culminated with his defeat at Gettysburg in 1863, was a similar crisis, and is often seen, erroneously, as the turning point of the War. Neither event posed the kind of threat Lincoln's cabinet grappled with on December 25 and 26, 1861.

The threat on those dates came not from the Confederacy, but from across the Atlantic, and from the blustering of the American political establishment. It is known to history as the Trent Affair. Capt. Charles Wilkes, already in violation of his orders, had seized Confederate representatives from the British mail packet ship Trent, and the Lincoln administration had imprisoned Mason and Slidell, the diplomats Wilkes removed. Not only was the arrest a clear violation of international law, and a betrayal of the principles over which the United States had fought the War of 1812 (which had resulted from Royal Navy ships stopping American ships and removing any sailors who appeared to be British subjects to serve in the long wars against France), but it was also a highly popular act of political insanity. Americans in 1861 loved "twisting the lion's tail," but few understood that the recent ironclad revolution made that much more dangerous than in the past.

In 1861, the ironclad was the new super weapon, the stealth bomber of its day, but it was so new that it was extremely rare. While we tend to think of ironclads as quintessentially American, in 1861, both the USA and the CSA had exactly none in commission. Both countries were aware of the need for them, and were feverishly trying to complete one, but only England and France possessed them. In December 1861, Queen Victoria had two in commission, one launched but incomplete, and one building, while Napoleon III had one in commission, and several building or launched but incomplete. It was already assumed that no wooden warship could stand against them. And unlike the soon-to-be-launched Monitor, these were all sea-going vessels with armor superior to anything the US could produce. Finally, since it was clear that Napoleon would declare war if Victoria did, there was no question of breaking a blockade through political maneuvering.

On Christmas day, 1861, Lincoln's cabinet was assembled to consider the response to the British ultimatum. London had intentionally toned down its original response, but many still believed that the administration could not accept it as it stood. Lincoln initially favored the recommendation of Senator Charles Sumner to submit the matter to arbitration, but the French attitude made that almost impossible. There were still voices for war with Britain in the cabinet, but Secretary of State Seward wanted to release the prisoners and comply with the main British demands. No decision was reached that day, but Lincoln told Seward to bring his best arguments back to the cabinet the next day, while he would draw up the best argument he could make for arbitration. When the cabinet reconvened on the 26th, Lincoln presented no argument, accepting Seward's position completely and afterward telling him that he had been unable to find a single satisfactory argument for arbitration. He understood that a request for arbitration would not be acceptable to Britain or France, and that he would be faced with an international as well as a civil war.

Over the course of those two days, American foreign policy grew up, and the possibility of victory in world wars not yet even imaginable emerged. The future "Special Relationship" between America and Britain was still far off, but relations between the two countries pulled back permanently from the brink of war.



Bibliography:

Foreman, Amanda, A World on Fire: Britain's Crucial Role in the American Civil War, Random House, 2011.

Goodwin, Doris Kearns, Team of Rivals: The Political Genius of Abraham Lincoln, Simon & Schuster, 2005.

Reed, Sir Edward James, Our Iron-Clad Ships: Their Qualities, Performances, and Cost, J. Murray, 1869.

Symonds, Craig L., Lincoln and His Admirals, Oxford, 2008.

Tuesday, March 29, 2011

A Hall of Mirrors

We live in a hall of mirrors, a world of augmented memories constantly recreating not only the past, but also the memories of past futures never realized. Those futures were also built on memories reconstituted according to the beliefs of their times (as are ours) and with the memory technologies of their times, in turn incorporating earlier futures past, in infinite regress. Past and future are continuously revisited, re-imagined, and reconstructed as we feel they should have been, sometimes taken as fact, sometimes as fiction, but always fictitious.

Most of the time we are no more aware of this than we are that all of our memories are re-imagined each time we recall them. Memory, both personal and collective, is an act of imagination that rewrites each memory even as it is remembered. Like Heraclitus' river, it is an ever-changing stream into which we can never step twice.

This is what neuroscience tells us about memory, but we do not act upon these findings, instead continuing to behave as if memory, and thus reality, were fixed entities. Just as the ancient Greeks and most philosophers in the West have rejected Heraclitus' notions of flux and constant change as the bases for reality, in favor of eternal verities and archetypes, we will most likely reject the science that tells us that our minds are in a constant state of flux and change, forever failing to understand or acknowledge the consequences.

Curiously, we know these things to be true, as we speak often of how unreliable our memories are, and legally accept the possibility of false memories. Is it so much easier to ignore this and go on living in a world of a fixed and concrete sort? This is an issue long recognized by historians, though only a few books, such as Thomas Desjardin's These Honored Dead, about the construction and reconstruction of the Battle of Gettysburg, or Jill Lepore's The Name of War (about King Philip's War), deal with it at length.

These are no mere academic arguments; for a people that argues political and cultural positions from history on a daily basis in the national media, these are vital issues. If we are going to make cases based on the ideas of the Founding Fathers (though which ones and at what point in their lives is always a sticky point), beliefs found in the Bible (where again we are dealing with the problem of which one, and even more with what point in time), the Enlightenment (with a diversity of opinion ranging from Hume to Rousseau; generally I prefer Montesquieu), Lincoln (whose ideas evolved rapidly), or FDR (the idealistic pragmatist), then we need to understand how they re-imagined events themselves, and how we have re-imagined them as well. If we can't be bothered to do this, then we are simply surrendering our minds to manipulation by extremists, propagandists, and advertisers.

This is not the worst of it; the way it contributes to an inflexible mindset is the greater danger. If you think you can fit the past into a little, unchanging box, you are more likely to treat the present in the same way. In an era of rapid change and rolling crises, that is a recipe for disaster, if not extinction. Neuroscience is undoubtedly getting a lot wrong that will have to be corrected later, and a number of unsupportable claims are being based on its findings, but our own experiences of memory and perception show that it is fundamentally right about remembering being a form of re-imagining. We need to learn to act on that insight and stop ignoring it.

Wednesday, March 9, 2011

"The problem is to change the rules...."

Last weekend, I reread a favorite essay, Gregory Bateson's "From Versailles to Cybernetics." He delivered it as a speech in 1966 and published it a few years later in Steps to an Ecology of Mind. In places it reads as a jeremiad, but the overall point, and the overall tone, are something else entirely. Speaking just a few weeks shy of his sixty-fourth birthday, he considered the two most important events of his lifetime to be the 1919 Treaty of Versailles and the emergence of cybernetics in the half-decade after the Second World War. The first he saw as a great tragedy, the second as a great sign of hope, though one too easily misused and abused.

The Versailles Treaty seems as pivotal today, though for somewhat different reasons, as it did forty-five years ago. To us, its importance results from its boundary drawing in western and central Asia; to Bateson it represented a series of attitudinal changes, including important ones about how international relations should be conducted.

He saw cybernetics as having a similar impact, that is, as having changed attitudes about how the world should be run. But he also warned that the use of cybernetic theories (specifically game theory) to determine the parameters of international relations was quite dangerous. He might have added that using those theories in that way to determine any patterns of behavior is hazardous.

We tend to see cybernetics in a very limited and narrow way. It isn't just about computers, or games, or even thermostats, but about the complex relationships that computers, games, and thermostats analyze. Your immune system is just as much a cybernetic system as any supercomputer. Cybernetics is really the philosophy and study of systems that adapt to their environment.

One reason it represented such an attitudinal change was that it broke with older ideas of causation. (If you think concepts of causation are unimportant or incidental, I would direct your attention to the debates over evolution and creationism, to the arguments concerning the causes and remedies of the present economic and ecological crises, to most of the developments in cancer research over the past few decades, to the causes of almost any war you care to mention, or simply to how a flower grows.) As long as we saw cause as rather direct and acting in one direction, even if it might have multiple effects (so-called billiard-ball causality, as the arguments resembled a cue ball striking and transmitting motion to one or more other balls), we could have only the most limited understanding of complex systems. We were stuck with either trying to reduce everything to simple logic and physical systems, or throwing up our hands and shouting "Deus vult!" ("God wills it!")

Cybernetics added information to the equation; it developed the idea of feedback (and feedback loops) and made possible the understanding of the complex interactions within cells that become cancerous, and between those cells and the immune system, itself dependent on a variety of feedback mechanisms. It permitted us to look more closely at the events that led to both World Wars: the arms races, the diplomatic miscalculations, and the societal and psychological states that produced them.

Unfortunately, Bateson saw in his own time that, instead of being used to interpret the detailed complexity of political, economic, or social systems, cybernetics was employed in a deterministic way to give advice about what to do. Too few variables were allowed into play. The games and simulations were based on reductionist assumptions about human behavior. The results could have led to nuclear war. Used this way, cybernetics simply reinforced the existing rules of the game. It could not lead to a way out because it did not allow for new rules. In his words, "The problem is to change the rules...." Cybernetics could be (can be) used to lead to greater flexibility or greater rigidity. (Steps to an Ecology of Mind, 1972, p. 477.)

We remain in much the same situation today. Limited variables, limited rules, and limited choices are considered viable and acceptable. That should be unacceptable, and unless feedback kicks the social, political, and economic systems into new states through a chaotic series of events, the results are likely to be dire.