Saturday, August 19, 2023

From Zeppelins to Generative AI

This morning, I was reading about the planning for the fire raids on Hamburg (Keith Lowe, Inferno: The Fiery Destruction of Hamburg. Scribner, 2007, 65) and ran across this quote from Theodor Adorno: "Technology is making gestures precise and brutal, and with them men. It expels from movements all hesitation, deliberation, civility." That gelled with other recent reading and made a sudden connection with one of my favorite books on the First World War, Diana Preston's A Higher Form of Killing: Six Weeks in World War I That Forever Changed the Nature of Warfare (Bloomsbury, 2016), and also with Matt Ford and Andrew Hoskins, Radical War: Data, Attention and Control in the 21st Century (Oxford UP, 2022), both books that I would highly recommend. (Were I teaching undergraduates, I would assign the former and highly recommend the latter.) I also made a connection to the present political and technological moment. (Never separate High Tech from politics; we have lost too much privacy and autonomy, and seen too much social and environmental damage inflicted by High Tech, to ignore the connections anymore.)


It has long been clear that, while winning the World Wars was necessary, what the Western Powers did to win was detrimental to their cultures, politics, and societies in the long run. One had to become just a bit satanic to defeat the Devil. Diana Preston chronicles the six weeks in the spring of 1915 that saw all pre-war thought about the use of new kinds of weapons and the limitation of violence and war overturned. She recounts the first successful use of poison gas (chlorine) at Ypres; the first, brief, unrestricted submarine campaign, culminating in the sinking of the Lusitania; and the first Zeppelin raids on Britain, particularly London. Those weeks changed the way we thought about war, about the application of "frightfulness" (terror), about the use of new technologies, and about the relationship of soldiers to civilians.


Those weeks, and what followed over the next 42 months of fighting, set the tone for the rest of the century and the first decades of the next. We are still under the spell. The destruction of Coventry, Hamburg, Dresden, Tokyo, Hiroshima, and Nagasaki was simply the logical continuation of the Zeppelin raids. The Cold War and post-Cold War militaries of the world continue to find no better way to fight than to target civilians directly, or through the infrastructure they rely on for survival in wartime. Everything and everyone became a target in the world after Hiroshima.


The new century has seen the increased targeting and involvement of civilians. While hot, shooting wars continue unabated, the evolution of information and psychological warfare since 1918 has come to play a greater and greater role, contributing greatly to the growth of the forever war. As Ford and Hoskins write (149-150):


And our answer to this question is that war does not end. That in Radical War it is more useful to see the battlefield as always live, being reproduced, recast and reframed in ways that draw connections between history, memory and contemporaneous events. The result is an uneasy convergence of memory and history as they become forged from the same seemingly limitless circulation and accumulation of weaponised content.

The details of the weaponization of content are complex, well explained by Ford and Hoskins, and also by Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare (Picador, 2021). What is important for us is the sheer extent of weaponized content, software, and devices; who is doing the weaponizing; and how enmeshed, indeed submerged, in it we are.

It is coming not just from the military, ministries of propaganda, or intelligence agencies. It is spewed by terrorist groups, political groups, businesses, real and supposed news organizations, maybe even NGOs. It can take any form. It originates with individuals and groups in an intentional manner, but it is spread unintentionally through influencers, social media, and sheer stupidity. Its immediate effects may not be the most important. That is to say, it may persuade us that our government is lying to us, or that some event did or did not happen, but the long-term and cumulative effects are what matter most. They first shape a certain approach to a topic or a kind of event, but over time they change our relationship with truth, trust, and reality itself. Thomas Rid's book is particularly strong on this aspect. After reading it and other things on the subject, I am convinced that the KGB/Stasi Operation Denver, a highly organized campaign to blame HIV/AIDS on US government labs and germ-warfare projects, has contributed to the general distrust of medicine and official medical organizations (such as the CDC) that is so evident both in the anti-vaxxer movement and in the widespread weaponization of beliefs about COVID.


The prototype of information warfare is the infamous tract The Protocols of the Elders of Zion. Created by the Imperial Russian Okhrana and spread by the likes of Henry Ford and Nesta Webster, it influenced Adolf Hitler, US Military Intelligence (and other intelligence and law-enforcement agencies), generations of anti-Semites, and those who follow variations of the New World Order beliefs. The libels the book paints are transparent. That it is a forgery is obvious to many who use it, but they choose it because they believe the core message is true, or because of how it allows them to reorganize reality and fight their opponents. As Colin Dickey puts it: "The Protocols is an invitation to disregard any shared consensus of meaning in favor of a privately held truth, one that supersedes any factual correction." (Colin Dickey, Under the Eye of Power: How Fear of Secret Societies Shapes American Democracy. Viking, 2023, 182.)


We also need to recognize how deeply information warfare is tied up with our ICT devices, the companies that make them, and the research labs and universities that underpin much of their work. This is covered, in part and in different ways, in Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019); Malcolm Harris, Palo Alto: A History of California, Capitalism, and the World (Little, Brown, 2023); and Chris Wiggins & Matthew Jones, How Data Happened: A History from the Age of Reason to the Age of Algorithms (W.W. Norton, 2023).


Throughout the history of advanced electronics, digital computing, big data, and AI (not just the currently fashionable and recently developed Generative AI (GAI), such as ChatGPT or Stable Diffusion), the ties between business, government, and universities/research institutes have been deep and complex. The current situation with GAI is illustrative. The impetus to push forward on it has come largely from a small group of companies and individuals, many of them with associations with Stanford, and in some cases with intelligence and law enforcement. Most notable is Peter Thiel, co-founder of PayPal and Palantir and an early backer of OpenAI; Palantir is a darling of Western intelligence and law enforcement, though there are questions whether its systems even work. The big push has come from OpenAI and Microsoft. The latter is vital to the government, and a great beneficiary of weak regulation, in return for cooperation with the government in general, and with the military and intelligence community on numerous occasions. The hardware GAI needs, from Nvidia or AMD, likewise depends on the US government for protection from Chinese competition and espionage, for the military protection of Taiwan, and for lax regulation of the environmental damage caused by data centers.


The extent to which other forms of AI are already embedded in our daily lives and most intimate devices may not be generally appreciated. It is at the heart of the speech recognition in smart speakers, of autocorrect and autocomplete, of sleep tracking, and much else. Newer iPhones, though not touting their AI capabilities, are quite capable of running some Generative AI natively, without much reliance on data centers in the cloud. Basically, older and newer forms of AI are all around us and commonly used by us every day, in some cases to track our habits. This, and the broader category of data science or big data, has changed our relationship with our bodies, our thoughts, our world, our realities, causing us to value and promote what we can quantify and track digitally over what we can experience directly.
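To make the autocomplete point concrete, here is a minimal sketch in Python of the statistical heart of next-word suggestion. It is a toy, with a made-up corpus and function names of my own invention; the models in real phones are far more sophisticated, but the principle of ranking likely continuations by observed frequency is the same.

    # Toy next-word autocomplete: count which word follows which,
    # then suggest the most frequent continuations.
    from collections import Counter, defaultdict

    corpus = "we lost the war we lost the peace we won the battle".split()

    # For each word, count the words observed to follow it.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def suggest(word, k=2):
        """Return the k most frequent words seen after `word`."""
        return [w for w, _ in following[word].most_common(k)]

    print(suggest("we"))   # ['lost', 'won']
    print(suggest("the"))  # ['war', 'peace']

Everything a real system adds, longer contexts, neural networks, personalization, is refinement on this one move: predict the next thing from what usually follows.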


That makes us vulnerable, perhaps more vulnerable than we have ever been, at least in modern times. GAI itself plays into that. On one level, it is forcing us to reevaluate what it means to be human, or at least to be intelligent, and certainly to be creative. This is not the first time that technology, or even computers, have forced these issues. Famously, Plato objected to what writing did to memory. Photography did it to us again. Since then, every new technology of reproduction has done so to some extent. Then we got computers; remember that they used to call them "Electronic Brains" back in the fifties? It wasn't long before philosophers, psychologists, and brain researchers began to explore, expand, and accept that metaphor, first as a good heuristic, and then, for some at least, as true. It has become the central metaphor in our society for brain and mind.


Now we are confronted by a computer that not only seems to model the brain (we are dealing with neural networks, a suggestive term, even if their operation is orders of magnitude less complex than anything that occurs in the physiology of mammals, birds, and cephalopods) but can produce a good simulacrum of human language, art, and even music. They get better at most things, though their complexity is sufficiently high that one tweak to one part may degrade an ability in a seemingly unrelated part. We are beginning, or being asked, to wonder how much our language abilities, our creativity, our consciousness, our very minds, are simply operating on probabilities that one thing will follow another, the way GAI, or even autocomplete, does. Some are also asking if the many factual and other mistakes we make are due to the same factors that cause so-called "hallucinations" in GAI implementations, which arise from trying to predict what should come next. You have likely seen the AI-generated images with too many fingers or teeth. These are less common now, but a year, or even six months ago, they were prevalent. The problem then was conceptually simple: if you draw a human finger or tooth, what is the thing most likely to follow it? The answer is another finger or tooth. The early models did not know when to stop, so you might get ten fingers on one hand or thirty teeth in a grin. Now the prevailing issue with hands is the relationship and length of the fingers, often just a little off, but off enough to make it clear that no human drew the image.
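The fingers problem can even be caricatured in a few lines of code. This is a deliberately crude sketch with invented probabilities; real image generators work with pixels and diffusion steps, not a list of parts, but the failure mode is analogous: if the model has mostly learned that a finger is followed by another finger, and its signal for stopping is weak, hands run long.

    # Caricature of the stopping problem: made-up probabilities, not a real model.
    import random

    random.seed(1)

    # After drawing each part, what tends to come next?
    transitions = {
        "palm":   [("finger", 1.0)],
        "finger": [("finger", 0.8), ("stop", 0.2)],  # weak stop signal
    }

    def draw_hand():
        part, hand = "palm", []
        while part != "stop":
            hand.append(part)
            options, weights = zip(*transitions[part])
            part = random.choices(options, weights=weights)[0]
        return hand

    for _ in range(3):
        hand = draw_hand()
        print(len(hand) - 1, "fingers")

With a four-in-five chance that a finger follows a finger, this toy draws five fingers on average but happily produces eight or ten; strengthen the stop signal and the extra fingers vanish. Something like that tuning is presumably part of why the extra fingers have become rarer.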


But, to return to the main thread, the overall effect of GAI, whether it is just hype, or a revolution, or some combination thereof, is to make us question our basic concepts of humanity and reality. We talk about the filter bubbles that social media creates. What about the ones that overreliance on GAI can produce? Particularly if we start talking to our artificial friends (such as Replika) more than to our real ones. Just as Operation Denver may have opened us to a greater distrust of our scientific and medical institutions, sapping the ability of science to influence public policy and discourse, and just as the Protocols showed how to transform personal beliefs into an alternative consensual reality, AI and GAI are loosening our sense of humanness (and perhaps our humanity). The meta-effects are as important as the immediate effects of bias, outright prejudice, inequities, and damage to education and the economy. In the long term, they may be more so.


For the people who have backed and developed these tools, that may be precisely the point. A number of them, such as Thiel, Musk, and Altman, are advocates of longtermism. When they talk about future humans, they include virtual humans in the mix, and treat them as roughly the equals of, at least, the real human proletariat. They, and others who have backed them, such as Satya Nadella, the CEO of Microsoft, who has bet the farm on AI "copilots," have every reason to want us to accept AI as our helpers and maybe even our equals or overlords. They believe they are in a race to develop Artificial General Intelligence (AGI, not to be confused with GAI), or even superhuman intelligences, which should be able to solve many of our problems. Even if they cannot achieve real AGI, which is either a couple of years or infinitely far away, if they can convince us to accept GAI into every aspect of our lives, and if they can control the technology and the infrastructure, it will put them in very, very powerful positions. This is also why they have diverted attention from the real-world effects of their technologies to apocalyptic visions: to steer regulation away from what they are working on now, and to preserve the sort of self-regulating regime that has prevailed with most computing advances in the US.


But let me return to the beginning. Surely GAI's rise is not to be likened to the firebombing of Hamburg. These people are not intentionally trying to create that kind of holocaust, and certainly not the other sort. Maybe we should liken it instead to the rise of the bomber. ChatGPT may not be a B-1, or even a B-17, but it might be a Zeppelin, a precursor of worse to come. Already, the effects of its vast consumption of cooling water, of minerals and plastics for its hardware, and especially of electric power may prove as devastating to climate refugees as all those incendiaries were to the residents of Hamburg. Maybe we are already past the Zeppelin stage. Does it matter whether people literally burn because of bombs, or die of heat, dehydration, or even burns suffered from falling on sun-baked ground, because of climate change?

The bombing of Coventry, Hamburg, and all the others devastated the people who lived there, demoralized their fellow citizens, and frightened even the victors. In the longer term, what has mattered more are the meta-effects. The moral damage inflicted on the victors as well as the vanquished is sizable. By changing perceptions of our fellow humans and of what we ourselves were capable of, by allowing calm, cool, scientific deliberation about mass destruction, the bombings degraded our world, our reality.


We face similar challenges with all our present technology. It is true that our cell phones and other devices, with their GPS and their ability to triangulate off of cell towers, allow us to be used for reconnaissance and targeting in zones of military operations, and allow all sorts of governmental and other surveillance of our habits. Metadata is almost as important as the ability to read the content on our phones, and most of the systems reading it rely on one form of AI or another to make sense of it. The immediate effects are felt on battlefields in Ukraine, Syria, and elsewhere. They are felt on the streets of American and European cities, where faulty software misidentifies people as criminals, usually minorities of one group or another, especially non-white people, and leads to false arrests, imprisonment, death. (They are felt even more on the streets of China, where such systems are tied to other databases and used to repress not only dissidents but entire ethnic groups, in a dystopia far worse than anything the West is presently experiencing.) They are felt in welfare offices, loan offices, insurance offices, and many other places. The consequences are serious, and for some, existential.
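A toy example may make the metadata point vivid. The records and names below are invented, and this is nothing like the scale or sophistication of real systems, but it shows how much a handful of who-contacted-whom records reveals without a single word of content being read.

    # Invented call records: (caller, callee, hour of day). No content at all.
    from collections import Counter

    calls = [
        ("alice", "bob", 9), ("alice", "bob", 21), ("bob", "alice", 22),
        ("alice", "clinic", 10), ("alice", "clinic", 10), ("carol", "bob", 13),
    ]

    # Tie strength: how often each pair communicates, in either direction.
    pairs = Counter(frozenset((a, b)) for a, b, _ in calls)
    for pair, n in pairs.most_common():
        print(sorted(pair), "contacts:", n)

Even this trivial count exposes a social graph, and repeated mid-morning calls to a clinic suggest a great deal more. Scale it up to a nation's phone records, add one form of AI or another, and the patterns become the targeting and surveillance described above.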


Those of us not put at immediate risk will also feel the meta-effects. Those from GAI, as opposed to other, older forms of AI, are just beginning to propagate; I have tried to suggest some of them above. What I want to argue, though, is that GAI itself is also being weaponized, intentionally or not, by its creators. In this day and age, anything that can be weaponized will be. We have seen it over and over. We just have to dig into the people behind a new technology and its roots to begin to see how.
