Saturday, August 19, 2023

From Zeppelins to Generative AI

This morning, I was reading about the planning for the fire raids on Hamburg (Keith Lowe, Inferno: The Fiery Destruction of Hamburg. Scribner, 2007, 65) and ran across this quote from Theodor Adorno: "Technology is making gestures precise and brutal, and with them men. It expels from movements all hesitation, deliberation, civility." That gelled with other recent reading and made a sudden connection with one of my favorite books on the First World War, Diana Preston's A Higher Form of Killing: Six Weeks in World War I That Forever Changed the Nature of Warfare (Bloomsbury, 2016), and also with Matt Ford and Andrew Hoskins, Radical War: Data, Attention and Control in the 21st Century (Oxford UP, 2022) - both books that I would highly recommend. (Were I teaching undergraduates, I would assign the former and recommend the latter.) I also made a connection to the present political and technological moment (never separate High Tech from politics; we've lost too much privacy and autonomy, and seen too much social and environmental damage inflicted by High Tech, to ignore the connections anymore). 


It has long been clear that, while winning the World Wars was necessary, what the Western Powers did to win was detrimental to their cultures, politics, and societies in the long run. One had to become just a bit satanic to defeat the Devil. Diana Preston chronicles the six weeks in the spring of 1915 that saw all pre-war thought about the use of new kinds of weapons and the limitation of violence and war overturned. She chronicles the first successful use of poison gas (chlorine) at Ypres; the first, brief, unrestricted submarine campaign, culminating in the sinking of the Lusitania; and the first Zeppelin raids on Britain, particularly London. They changed the way we thought about war, about the application of "frightfulness" (terror), about the use of new technologies, and about the relationship of soldiers to civilians. 


Those weeks, and what followed over the next 42 months of fighting, set the tone for the rest of the century and the first decades of the next. We are still under the spell. The destruction of Coventry, Hamburg, Dresden, Tokyo, Hiroshima, and Nagasaki was simply the logical continuation of the Zeppelin raids. The Cold War and post-Cold War militaries of the world continue to find no better way to fight than to target civilians directly, or through the infrastructure they rely on for survival in wartime. Everything and everyone became a target in the world after Hiroshima. 


The new century has seen the increased targeting and involvement of civilians. While hot, shooting wars continue unabated, information and psychological warfare have evolved since 1918 to play a greater and greater role, contributing to the growth of the forever war. As Ford and Hoskins write (149-150): 


And our answer to this question is that war does not end. That in Radical War it is more useful to see the battlefield as always live, being reproduced, recast and reframed in ways that draw connections between history, memory and contemporaneous events. The result is an uneasy convergence of memory and history as they become forged from the same seemingly limitless circulation and accumulation of weaponised content.

The details of the weaponization of content are complex, well explained by Ford and Hoskins but also by Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare (Picador, 2021). What is important to us is the sheer extent of weaponized content, software, and devices; who is doing the weaponizing; and how enmeshed, indeed submerged, in it we are.

It is coming not just from the military, ministries of propaganda, or intelligence agencies. It is spewed by terrorist groups, political groups, businesses, real and supposed news organizations, maybe even NGOs. It can take any form. It originates with individuals and groups in an intentional manner, but it is spread unintentionally through influencers, social media, and sheer stupidity. Its immediate effects may not be the most important. That is to say, it may persuade us that our government is lying to us, or that some event did or did not happen, but the long-term and cumulative effects are what matter most. At first they shape a certain approach to a topic or a kind of event, but over time they change our relationship with truth, trust, and reality itself. Thomas Rid's book is particularly strong on this aspect. After reading it and other things on the subject, I am convinced that the KGB/Stasi Operation Denver, a highly organized campaign to blame HIV/AIDS on US government labs and germ warfare projects, has contributed to the general distrust of medicine and official medical organizations (such as the CDC) that is so evident both in the anti-vaxxer movement and in the widespread weaponization of anti-COVID beliefs. 


The prototype of information warfare is the infamous tract The Protocols of the Elders of Zion, created by the Imperial Russian Okhrana, spread by the likes of Henry Ford and Nesta Webster, and an influence on Adolf Hitler, US Military Intelligence (and other intelligence and law-enforcement agencies), generations of anti-Semites, and those who follow variations of the New World Order beliefs. The libels the book paints are transparent. That it is a forgery is obvious to many who use it, but they choose it because they believe the core message is true, or because of how it allows them to reorganize and fight reality (and their opponents). As Colin Dickey puts it: "The Protocols is an invitation to disregard any shared consensus of meaning in favor of a privately held truth, one that supersedes any factual correction." (Colin Dickey, Under the Eye of Power: How Fear of Secret Societies Shapes American Democracy. Viking, 2023, 182.)


We also need to recognize how deeply information warfare is tied up with our ICT devices, the companies that make them, and the research labs and universities that underpin much of their work. This is covered, in part and in different ways, in Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019); Malcolm Harris, Palo Alto: A History of California, Capitalism, and the World (Little, Brown, 2023); and Chris Wiggins & Matthew Jones, How Data Happened: A History from the Age of Reason to the Age of Algorithms (W. W. Norton, 2023). 


Throughout the history of advanced electronics, digital computing, big data, and AI (not just the currently fashionable and recently developed Generative AI (GAI) such as ChatGPT or Stable Diffusion), the ties between business, government, and universities/research institutes have been deep and complex. The current situation with GAI is illustrative. The impetus to push forward on it has come largely from a small group of companies and individuals, many of them with associations with Stanford, and in some cases with intelligence and law enforcement. Most notable is Peter Thiel, co-founder of PayPal, Palantir, and OpenAI - Palantir being a darling of Western intelligence and law enforcement, though there are questions about whether its systems even work. The big push has come from OpenAI and Microsoft. The latter is vital to the government and a great beneficiary of weak regulation, granted in return for its cooperation with the government in general, and with the military and intelligence community on numerous occasions. The hardware GAI needs, from Nvidia or AMD, likewise depends on the US government for protection from Chinese competition and espionage, for the military protection of Taiwan, and for lax regulation of the environmental damage caused by data centers.


The extent to which other forms of AI are already embedded in our daily lives and most intimate devices may not be generally appreciated. It is at the heart of the speech recognition in smart speakers, autocorrect/autocomplete, sleep tracking, and much else. Newer iPhones, though not touting their AI capabilities, are quite capable of running some Generative AI natively, without much reliance on data centers in the cloud. Basically, older and newer forms of AI are all around us and commonly used by us every day, in some cases to track our habits. This, and the broader category of data science or big data, has changed our relationship with our bodies, our thoughts, our world, our realities, causing us to value and promote what we can quantify and track digitally over what we can experience directly. 


That makes us vulnerable, perhaps more vulnerable than we have ever been, at least in modern times. GAI itself plays into that. On one level, it is forcing us to reevaluate what it means to be human, or at least intelligent, and certainly what it means to be creative. This is not the first time that technology, or even computers, has forced these issues. Famously, Plato objected to what writing did to memory. Photography did it to us again. Since then, every new technology of reproduction has done so to some extent. Then we got computers - remember that they used to call them "Electronic Brains" back in the fifties? It wasn't long before philosophers, psychologists, and brain researchers began to explore, expand, and accept that metaphor, first as a good heuristic, and then, for some at least, as true. It has become the central metaphor in our society for brain and mind. 


Now we are confronted by a computer that not only seems to model the brain (we are dealing with neural networks - a suggestive term, even if their operation is orders of magnitude less complex than anything that occurs in the physiology of mammals, birds, and cephalopods), but can produce a good simulacrum of human language, art, and even music. They get better at most things, though their complexity is sufficiently high that one tweak to one part may degrade an ability in a seemingly unrelated part. We are beginning, or being asked, to wonder how much our language abilities, our creativity, our consciousness, our very minds, simply operate on the probability that one thing will follow another, the way GAI, or even autocomplete, does. Some are also asking if the many factual and other mistakes and errors we make are due to the same factors that cause so-called "hallucinations" in GAI implementations. In AI, these arise from trying to predict what should come next. You have likely seen the AI-generated images with too many fingers or teeth. These are less common now, but a year, or even six months, ago they were prevalent. The problem then was conceptually simple: if you draw a human finger or tooth, what is the thing most likely to follow it? The answer is another finger or tooth. The early models did not know when to stop, so you might get ten fingers on one hand or thirty teeth in a grin. Now the prevailing issue with hands is the relationship and length of the fingers, often just a little off, but just enough to make it clear that no human drew the image.
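
To make that intuition concrete, here is a deliberately toy sketch of "predict what comes next" with no sense of when to stop. The token names and probabilities are invented purely for illustration - real image and language models are vastly more complicated - but the failure mode is the same in spirit: purely local prediction, with no global picture of what a hand is, happily keeps adding fingers.

```python
import random

# A toy next-token model: given the previous token, it gives a probability
# distribution over what comes next. These numbers are made up for
# illustration only; they are not taken from any real model.
TOY_MODEL = {
    "hand":   {"finger": 0.95, "<stop>": 0.05},
    "finger": {"finger": 0.70, "<stop>": 0.30},  # a finger is usually followed by another finger
}

def sample_next(token):
    """Sample the next token from the toy distribution."""
    dist = TOY_MODEL[token]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

def draw_hand(max_tokens=20):
    """'Draw' a hand one token at a time until the model happens to stop."""
    tokens = ["hand"]
    while len(tokens) < max_tokens:
        nxt = sample_next(tokens[-1])
        if nxt == "<stop>":
            break
        tokens.append(nxt)
    return tokens

if __name__ == "__main__":
    random.seed(2023)
    for _ in range(3):
        hand = draw_hand()
        # With only local "what follows what" knowledge, six, seven,
        # or more fingers are entirely likely outcomes.
        print(len(hand) - 1, "fingers")
```

Larger models learn longer-range structure and so make the grosser errors less often, which is roughly why the extra fingers have faded while the subtler distortions remain.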


But, to return to the main thread, the overall effect of GAI, whether it is just hype, or a revolution, or some combination thereof, is to make us question our basic concepts of humanity and reality. We talk about the filter bubbles that social media creates. What about the ones that overreliance on GAI can produce? Particularly if we start talking to our artificial friends (such as Replika) more than to our real ones. Just as Operation Denver may have opened us to a greater distrust of our scientific and medical institutions, sapping the ability of science to influence public policy and discourse, and the Protocols showed how to transform personal beliefs into an alternative consensual reality, AI and GAI are loosening our sense of humanness (and perhaps our humanity). The meta-effects are as important as the immediate ones - bias, outright prejudice, inequities, and the impact on education and the economy. In the long term, they may be more so. 


For the people who have backed and developed these tools, that loosening may be a major point. A number of them, such as Thiel, Musk, and Altman, are advocates of longtermism. When they talk about future humans, they include virtual humans in the mix, and treat them as at least the equals of the real human proletariat. They, and others who have backed them, such as Satya Nadella, the CEO of Microsoft, who has bet the farm on AI "copilots," have every reason to want us to accept AI as our helpers and maybe even our equals or overlords. They believe they are in a race to develop Artificial General Intelligence (AGI, not to be confused with GAI), or even superhuman intelligences, which should be able to solve many of our problems. Even if they cannot achieve real AGI, which is either a couple of years or infinitely far away, if they can convince us to accept GAI into every aspect of our lives, and if they can control the technology and the infrastructure, it will put them in very, very powerful positions. This is also why they have diverted attention from the real-world effects of their technologies to apocalyptic visions: so they can steer regulation away from what they are working on now and adopt the sort of self-regulating regime that has prevailed with most computing advancements in the US. 


But let me return to the beginning. Surely GAI's rise is not to be likened to the firebombing of Hamburg. These people are not intentionally trying to create that kind of holocaust, and certainly not the other sort. Maybe we should liken it to the rise of the bomber. ChatGPT may not be a B-1, or even a B-17, but it might be a Zeppelin, a precursor of worse to come. Already, the effects of its vast consumption of cooling water, of minerals and plastics for its hardware, and especially of its huge power consumption may be as devastating to climate refugees as all those incendiaries were to the residents of Hamburg. Maybe we are already past the Zeppelin stage. Does it matter whether people literally burn up because of a bomb, or die from climate change - of heat, dehydration, or even burns suffered from falling on the hot ground? 

The bombing of Coventry, Hamburg, and all the others devastated the people who lived there, demoralized their fellow citizens, and frightened even the victors. In the longer term, though, what has mattered more are the meta-effects. The moral damage inflicted on the victors as well as the vanquished is sizable. By changing perceptions of our fellow humans and of what we ourselves were capable of, and by allowing calm, cool, scientific deliberation about mass destruction, the raids degraded our world, our reality. 


We face similar challenges with all our present technology. It is true that our cell phones and other devices, with their GPS and ability to triangulate off of cell towers, allow us to be used for reconnaissance and targeting in zones of military operations, and allow all sorts of governmental and other surveillance of our habits. Metadata is almost as important as the ability to read the content on our phones, and most of the systems reading it rely on one form of AI or another to make sense of it. The immediate effects are felt on battlefields in Ukraine, Syria, and elsewhere. They are felt on the streets of American and European cities, where flawed software misidentifies people as criminals - usually members of one minority or another, especially non-white people - and leads to false arrests, imprisonment, and death. (They are felt even more on the streets of China, where such systems are tied to other databases and used to repress not only dissidents but entire ethnic groups, in a dystopia far worse than the West is presently experiencing.) They are felt in welfare offices, loan offices, insurance offices, and many other places. The consequences are serious, and for some, existential. 


Those of us not put at immediate risk will also feel the meta-effects. Those from GAI, as opposed to other, older forms of AI, are just beginning to propagate. I have tried to suggest some of them above. What I am trying to suggest, though, is that GAI itself is also being weaponized, intentionally or not, by its creators. In this day and age, anything that can be weaponized will be. We have seen it over and over. We just have to dig into the people behind a new technology and its roots to begin to see how.


Saturday, August 5, 2023

Is 2023 a Pivotal Year?

I am trying to think when I have felt this way before. I suspect I must have, but cannot recall. Maybe it is age, or the converging foci of my professional and personal interests, but this year feels particularly significant. We are almost two-thirds of the way through 2023. It is clear that something has shifted or is shifting in climate change and our understanding of it. Instead of a decades-long shift, we seem to have hit a sudden change of state. We are seeing things happen in a few months that were supposed to take until the middle of the century. Despite some aspects, such as the rate of population growth, slowing in recent years, it feels as if the Great Acceleration is now in overdrive. 

Meanwhile, the world political situation seems to continue to undergo ever more rapid change. Maybe I am exaggerating this, but it is clear that America is at a crisis point where it must move one way or another. NATO has suddenly added more members. Ukraine and Russia appear locked in a deadly embrace for the foreseeable future while Putin stumbles and appears more withdrawn. China remains an enigma, but while Xi flexes his muscles internally and externally, there are signs of something wrong - extensive corruption in his nuclear missile forces has forced the replacement of their leaders with more politically reliable outsiders, while the foreign minister first went missing, then was officially replaced, and now Beijing is trying to remove all official references to him. China is locked in a long-term struggle with the US, but the two economies remain deeply enmeshed. That makes for a very different kind of Cold War than the one of my childhood and youth. The war in Ukraine continues to cause global problems with oil and grain supplies. Instability seems to have increased again in the Horn of Africa. South Asia's economic challenges are being compounded by climatological disasters. It goes on and on. Oil, wheat, heat - they are causing huge disruptions, destruction, and death even across affluent countries that are more or less self-sufficient in the first two. 

Since last November, we have had a heightened sense of both the threat and the promise of information and communication technology (ICT). We had grown so accustomed to the constant change and penetration of these technologies in our lives that we (myself included) had missed much of what was happening. Over the past decade, first with ISIS, then the Syrian Civil War, and now the Russo-Ukrainian War, the more deeply sinister aspects of our connectedness have come into focus: first as a medium of propaganda, radicalization, and disinformation; now as a means of passive intelligence collection, reconnaissance, and targeting. The distinction between civilians and soldiers has been blurring for more than a century, was further dissolved by the long-term requirements of nuclear deterrence, and is all but gone now. The front line is no longer in Syria, Sudan, or Ukraine. It is your cell phone, your tablet, or your laptop. Information and drone warfare are the defining characteristics of today's conflict (a forever war created as much by the greed of ICT companies and the surreality of intelligence agencies as by the GWOT), just as tanks, airplanes, and submarines were of that Great War that began 109 years ago last week. Oh, and just to make matters more unstable, the richest man in the world, whose actions appear to many to be increasingly erratic, owns half the satellites in orbit - communication satellites that are now vital to the functioning of most militaries around the world. 

Of course, last November marked the beginning of the hype and rapid spread of generative AI (GAI). There had been a lot of warning tremors, mostly graphical in nature, as we became familiar with the strange output of applications like Stable Diffusion, DALL-E, and Midjourney. At the end of November, OpenAI announced ChatGPT. Whatever you think of either the company or the product (or their critics), it made most of us aware of the potential for good and evil of GAI. It was gasoline thrown on a fire, or maybe on a lot of different fires. How do we react? How should we react? What does it mean? How much of it is hype? Going back to the events of 108 years ago, I am reminded of the confusion and difficulty the British government and military had responding to the first Zeppelin raids. How to give warnings? Was it best to attack their bases or to pull fighters back from the Western Front? What to do about shelters? What information to release after an attack? How to minimize disruption to war production? 

Those dilemmas, however, were less intense than the situation we face today. The crisis may be less about GAI itself and more about how companies like Microsoft have chosen to build it into all of their products, how companies have chosen to use it in pursuit of their holy cows (productivity and profit), and how seriously it threatens to disrupt every aspect of creativity and education. The intense exploitation of human labor to prepare training data for it is another aspect. The environmental and climatological problems will be severe if usage continues to increase and if the energy and thermal efficiency of processors is not increased by two or more orders of magnitude. This week a study predicted that by 2025, GAI might use as much energy as all human workers. And, of course, it is folded into both the Cold War and, because of its ability to crank out disinformation, our forever information war. 

The responses to all of this from Left, Center, and Right have been pretty predictable, and equally inadequate. Just to take the GAI mess, I have been very attracted to the critique of it that has built up around two groups, DAIR and Critical AI. I think they are right for the most part. Their months (really years) of critique are paying off in certain arenas. The press is finally beginning to understand the real dangers and to pay less attention to the self-serving, existential-risk mongering of the AI leaders. Educators are tuned into their critiques. Frankly, though, I do not think they are doing much good. As with most other technologies, our society, culture, and economy embrace them regardless of cost. They may not be Don Quixotes, but it feels a bit like France in May 1940 - much depends on whether they are more like the French Army or the BEF. 

I have deeper concerns. While the view of mind (consciousness, sentience, intelligence, etc.) that some of the GAI creators and supporters espouse is hideous - call it a sort of reduction of everything to data science - there are many other models of mind out there, and it is not always clear, in fact it rarely seems clear, which one they follow. (I will try to explain my own model in another blog post; it is derived from the one Gregory Bateson espoused, with additions to take into account later research and philosophy.) I know it is too much to ask people who are writing about these issues to go around explaining just what they conceive mind to be; it just bugs me. Why do I care? It tells you a great deal about their humanity, their attitudes toward humanity, and their attitudes toward the more general issue of dealing with the other sentient, conscious, or intelligent species (and possibly systems) that we share the planet with. Critics and commentators look at the problems GAI raises for humans, often with a very good display of feminist, queer, and decolonial theory and practice, but I have seen nothing about what it might mean for other species. My impression is that they are often locked into ideas about the nature of mind that are too narrow.

We inhabit a world with a great many other minds. They may range from the simplest to ones as complex as our own. We also need to acknowledge that some theories of mind - and these do not require mysticism and can be firmly grounded in materialist notions of the world - do not see mind as confined to the individual, but as the interactions between individuals and the world around them. Once we start down that road, we are in a different place regarding other species, our technologies, what we are doing to the environment, the climate, our cultures, societies, economies, etc. Maybe there is a theoretical framework out there that can embrace all of that. I don't know. My feeling is that our present theories are both limiting and failing us. Maybe they can expand, maybe we need a new theory, maybe we just need to walk away from any kind of overarching theory. The latter is where I am. I have been there for probably thirty-five years. Instead of elevating theory to dogma or ideology, theories to me are simply tools. Like all tools, a theory is good for some jobs and not others. Just as having the wrong tools limits what you can accomplish, so does having the wrong theories. While there is still a lot of good analysis going on within those theories, it feels more and more incomplete with each passing year. Maybe we need to break it apart and see what we can do with it. Maybe we need to walk away from theory for a while and work from an understanding that the problems are even bigger and more complex than we have theorized, that they must be approached both with a cold realism and with a deep empathy and compassion, that our best efforts will always be somewhat inadequate, and that we cannot ignore any part of the world or its systems. 

We are also going to have to change our understanding of who and what we are in more than one way. We can start by recognizing that we are part of not just a biosphere but also a noösphere. I am not using that in the sense Vernadsky and the Russian Cosmists meant it, nor in the usage of Pierre Teilhard de Chardin - though I was raised on some of his concepts. I propose this not as a philosophy or a theology but as a practical and pragmatic concept. The biological world is full of communication and many kinds of minds, networks of mind and communication that we know cross species. We are only now becoming aware of how complex this all is. We are adding to the complexity with our computers and digital communications. We have been thinking about how those tools affect us, and we may be thinking about how their energy and other resource demands affect the biosphere, but we have done very little so far to begin considering how they play in the larger mental life-world in which they, and we, are embedded. 

What is the effect on all living minds if we are poisoning not just our physical environment but also our mental environment with disinformation, misinformation, a basic distrust of reality, a basic distrust of ourselves and others? Does it make it easier for us to ignore them? Does it make it easier for us to destroy them? Does it keep us from making the connections we need, both with them and to help keep them from the destruction and harm we create? 

There are some who think that GAI will alter what it means to be human and to be creative, or at least make us differently creative. I have a lot of doubts about that, but the moment is pregnant with possibilities. I am convinced that we will end up with some kind of hybrid creativity. How we think about these technologies, ourselves, and the thinking world around us can lead to different outcomes, some of them probably unimaginable. Where we are now - our understanding or misunderstanding of the dramatic climate and weather we are experiencing, the levels and kinds of misinformation we accept, the theories or ideologies we hold to, the local, national, and global political situations and the infinite war - all of these will shape the sorts of creativity that emerge. 

These emergent creativities will be vitally important. They could give us tools to solve our problems and answer the most important questions, allowing us to become something more than we are, allowing the thinking and feeling world to become more than it is, or they could limit, stunt, and destroy. I firmly believe that this year, and the next two or three, are really pivotal - not just because of AI, but because our reactions to the environmental, climatological, social, and political turmoil are going to shape things for a very long time. 

I hope I am wrong. I hope that I am just blowing events and temporary conditions out of all proportion. Given that I am writing this, I obviously believe that I have at least some small insight, but I really want to be wrong.