Wednesday, April 3, 2024

A Moment of Clarity

The fault is not in our technologies, it is in ourselves. (With apologies to Will Shakespeare.) Just now I was listening to "The Low Spark of High Heeled Boys," and things momentarily gelled for me; reality was momentarily clear. The things I want to say suddenly became possible to say, which is often hard in my crowded and addled mind.

Nothing new here. I've seen others make the same point over and over. Some days, though, I feel it more forcefully than others. Technology has no agency that we, our corporations, our governments, do not grant it. We may feel that we have little enough agency in our lives. As David Runciman writes (in The Handover), we have given up much of our agency to corporations and governments, even to markets, to make our lives longer, safer, more comfortable. It is what allows our world to function to the extent it does.

We live in a time of rage over the loss of agency without any clear way to reclaim it. We may rail at governments over this. We are too enamored of wealth and markets, too brainwashed by economists and their fellow travelers to really blame the corporations or the people who lead them. Those who do blame them just follow other economic dogmas and cannot break free. Economics is our real religion after all - the Market our Moloch.

Our impotent fury must be channeled elsewhere. For a few decades we focused on government, but it can only absorb so much and cannot keep going much longer with the hatred and loathing the extremes of our political spectrum heap on it. Technology is the convenient target now. Algorithms have already been given too much power by the state, by the corporations, by the wealthy. With the hype-driven rise of LLMs, we seem to have the perfect target. We are far from AGI. We may never see it, or even an AI worthy of the name. What we have instead is the stupidity and avarice of the powerful, never content with their share of the Market - itself an illusion we have created to rule over us. One Market to rule them all and in the darkness bind them. (Sorry, JRR, I'm really riffing on famous quotes tonight.)

The driving factor for so many is the plausible increase in efficiency, in productivity, in profit. They are chasing illusions. We have to teach ourselves a different way of thinking and being. We can use LLMs, or we can be used by them, and by the people behind them. It is important to understand both the machines and their overlords, though even they may believe they are in thrall to their bots. The same is true with all of our other hardware and software.

In Frank Herbert's universe, the Orange Catholic Bible enjoined: "Thou shalt not make a machine in the likeness of a human mind." (One quote I am not screwing with tonight.) We need to realize that we make our machines not just out of steel or silicon, but out of ideas, people, organization, as I should have known from my small acquaintance with Lewis Mumford (though it took many other writers over three decades, from Rosanne Stone, to Jeanette Winterson, to Amitav Ghosh, James Bridle, and David Runciman to make it real for me - I can be particularly obtuse about some things). There are many technologies in play now, some physical, some psychological, some sociological.

Only if we begin waking up to this can we begin to free ourselves, begin to make a more livable world for ourselves and for our fellow creatures. 

Or we can keep going around, blundering into things, raging, deluding ourselves.  

 

Saturday, August 19, 2023

From Zeppelins to Generative AI

This morning, I was reading about the planning for the fire raids on Hamburg (Keith Lowe, Inferno: The Fiery Destruction of Hamburg. Scribner, 2007, 65) and ran across this quote from Theodor Adorno: "Technology is making gestures precise and brutal, and with them men. It expels from movements all hesitation, deliberation, civility." That gelled with other recent reading and made a sudden connection with one of my favorite books on the First World War, Diana Preston's A Higher Form of Killing: Six Weeks in World War I That Forever Changed the Nature of Warfare (Bloomsbury, 2016), and also with Matthew Ford and Andrew Hoskins, Radical War: Data, Attention and Control in the 21st Century (Oxford UP, 2022) - both books that I would highly recommend. (Were I teaching undergraduates, I would assign the former and highly recommend the latter.) I also made a connection to the present political and technological moment (never separate High Tech from politics; we've lost too much privacy and autonomy, and seen too much social and environmental damage inflicted by High Tech, to ignore the connections anymore).


It has long been clear that, while winning the World Wars was necessary, what the Western Powers did to win was detrimental to their cultures, politics, and society in the long run. One had to become just a bit satanic to defeat the Devil. Diana Preston chronicles the six weeks in the Spring of 1915 that saw all pre-War thought about the use of new kinds of weapons and the limitation of violence and war overturned. She chronicles the first successful use of poison gas (chlorine) at Ypres; the first, brief, unrestricted submarine campaign culminating in the sinking of the Lusitania; and the first Zeppelin raids on Britain, particularly London. It changed the way we thought about war, about the application of "frightfulness" (terror), about the use of new technologies, and about the relationship of soldiers to civilians.


Those weeks, and what followed over the next 42 months of fighting, set the tone for the rest of the century and the first decades of the next. We are still under the spell. The destruction of Coventry, Hamburg, Dresden, Tokyo, Hiroshima, and Nagasaki was simply the logical continuation of the Zeppelin raids. The Cold War and post-Cold War militaries of the world continue to find no better way to fight than to target civilians directly, or through the infrastructure they rely on for survival in wartime. Everything and everyone became a target in the world after Hiroshima.


The new century has seen the increased targeting and involvement of civilians. While hot, shooting wars continue unabated, the evolution of information and psychological warfare since 1918 has come to play a greater and greater role, contributing greatly to the growth of the forever war. As Ford and Hoskins write (149-150):


And our answer to this question is that war does not end. That in Radical War it is more useful to see the battlefield as always live, being reproduced, recast and reframed in ways that draw connections between history, memory and contemporaneous events. The result is an uneasy convergence of memory and history as they become forged from the same seemingly limitless circulation and accumulation of weaponised content.

 The details of the weaponization of content are complex, well explained by Ford and Hoskins but also by Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare (Picador, 2021). What is important to us is the incredible extent of weaponized content, software, and devices, who is doing it, and how enmeshed, indeed submerged, in it we are.

It is coming not just from the military, ministries of propaganda, or intelligence agencies. It is spewed by terrorist groups, political groups, businesses, real and supposed news organizations, maybe even NGOs. It can take any form. It originates with individuals and groups in an intentional manner, but it is spread unintentionally through influencers, social media, and sheer stupidity. Its immediate effects may not be the most important. That is to say, it may persuade us that our government is lying to us, or that some event did or did not happen, but the long-term and cumulative effects are what matter most. They first shape a certain approach to a topic or a kind of event, but they grow and change our relationship with truth, trust, reality itself. Thomas Rid's book is particularly strong on these aspects. After reading it and other things on the subject, I am convinced that the KGB/Stasi Operation Denver, a highly organized campaign to blame HIV/AIDS on US government labs and germ warfare projects, has contributed to the general distrust of medicine and official medical organizations (such as the CDC) that is so evident both in the anti-Vaxxer movement and the widespread weaponization of anti-COVID beliefs.


The prototype of information warfare is the infamous tract, The Protocols of the Elders of Zion: created by the Imperial Russian Okhrana and spread by the likes of Henry Ford and Nesta Webster, it influenced Adolf Hitler, US Military Intelligence (and other intelligence and law-enforcement agencies), generations of anti-Semites, and those who follow variations of the New World Order beliefs. The libels the book paints are transparent. That it is a forgery is obvious to many who use it, but they choose it because they believe the core message is true, or because of how it allows them to reorganize and fight reality (and their opponents). As Colin Dickey puts it: "The Protocols is an invitation to disregard any shared consensus of meaning in favor of a privately held truth, one that supersedes any factual correction." (Colin Dickey, Under the Eye of Power: How Fear of Secret Societies Shapes American Democracy. Viking, 2023, 182.)


We also need to recognize how deeply information warfare is tied up with our ICT devices, the companies that make them, and the research labs and universities that underpin much of their work. This is covered, in part and in different ways, in Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019); Malcolm Harris, Palo Alto: A History of California, Capitalism, and the World (Little, Brown, 2023); and Chris Wiggins & Matthew Jones, How Data Happened: A History from the Age of Reason to the Age of Algorithms (W.W. Norton, 2023).


Throughout the history of advanced electronics, digital computing, big data, and AI (not just the currently fashionable and recently developed Generative AI (GAI) such as ChatGPT or Stable Diffusion), the ties between business, government, and universities/research institutes have been deep and complex. The current situation with GAI is illustrative. The impetus to push forward on it has come largely from a small group of companies and individuals, many of them with associations with Stanford, in some cases with intelligence and law enforcement. Most notably Peter Thiel, co-founder of PayPal, founder of Palantir, and an early backer of OpenAI - Palantir being a darling of Western intelligence and law enforcement, though there are questions whether their systems even work. The big push has come from OpenAI and Microsoft. The latter is vital to the government and a great beneficiary of weak regulation in return for cooperation with the government in general, and with the military and intelligence community on numerous occasions. The hardware needed for it, from Nvidia or AMD, is also dependent on US government protection from Chinese competition and espionage, on the military protection of Taiwan, and on lax regulation of the environmental damage caused by data centers.


The extent to which other forms of AI are already embedded in our daily lives and most intimate devices may not be generally appreciated. It is at the heart of the speech recognition in smart speakers, autocorrect/autocomplete, sleep tracking, and much else. Newer iPhones, though not touting their AI capabilities, are quite capable of running some Generative AI natively, without much reliance on data centers in the cloud. Basically, older and newer forms of AI are all around us and commonly used by us every day, in some cases to track our habits. This, and the broader category of data science or big data, has changed our relationship with our bodies, our thoughts, our world, our realities, causing us to value and promote what we can quantify and track digitally over what we can experience directly.


That makes us vulnerable, perhaps more vulnerable than we have ever been, at least in modern times. GAI itself plays into that. On one level, it is forcing us to reevaluate what it means to be human, or at least intelligent, and certainly creative. This is not the first time that technology, or even computers, has forced these issues. Famously, Plato objected to what writing did to memory. Photography did it to us again. Since then, every new technology of reproduction has done so to some extent. Then we got computers - remember they used to call them "Electronic Brains" back in the fifties? It wasn't long before philosophers, psychologists, and brain researchers began to explore, expand, and accept that metaphor as first a good heuristic, and then, for some at least, as true. It has become the central metaphor in our society for brain and mind.


Now we are confronted by a computer that not only seems to model the brain (we are dealing with neural networks - a suggestive term, even if their operation is orders of magnitude less complex than anything that occurs in the physiology of mammals, birds, and cephalopods), but can produce a good simulacrum of human language, art, and even music. They get better at most things, though their complexity is sufficiently high that one tweak to one part may degrade an ability in a seemingly unrelated part. We are beginning, or being asked, to wonder how much our language abilities, our creativity, our consciousness, our very minds, are simply operating on probabilities that one thing will follow another, the way GAI, or even autocomplete, does. Some are also asking if the many factual and other mistakes and errors we make are due to the same factors that cause so-called "hallucinations" in GAI implementations. In AI, these arise from trying to predict what should come next. You have likely seen the AI-generated images with too many fingers or teeth. These are less common now, but a year or even six months ago they were prevalent. The problem then was conceptually simple: if you draw a human finger or tooth, what is the thing most likely to follow it? The answer is another finger or tooth. The early models did not know when to stop, so you might get ten fingers on one hand or thirty teeth in a grin. Now the prevailing issue with hands is the relationship and length of the fingers, often just a little off, but just enough to make it clear that no human drew the image.
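A purely illustrative aside for the technically curious: here is a toy sketch (in Python, with invented probabilities) of the "what is most likely to follow?" failure mode described above. It is only an analogy - real image generators are diffusion models, not step-by-step samplers like this - but it shows how a system with only local "what follows what" statistics, and no global idea of a hand, can overshoot.

import random

# Toy, made-up conditional probabilities: given the last part drawn,
# how likely is each possible next part? (Numbers are invented for illustration.)
NEXT_PROBS = {
    "palm":   {"finger": 0.95, "stop": 0.05},
    "finger": {"finger": 0.80, "stop": 0.20},  # a finger is most often followed by another finger
}

def draw_hand(seed=None, max_parts=12):
    """Keep emitting whatever is locally likely to follow - no global notion of a hand."""
    rng = random.Random(seed)
    parts = ["palm"]
    while len(parts) < max_parts:
        probs = NEXT_PROBS[parts[-1]]
        choice = rng.choices(list(probs), weights=list(probs.values()))[0]
        if choice == "stop":
            break
        parts.append(choice)
    return parts

print(draw_hand(seed=3))
# Something like ['palm', 'finger', 'finger', 'finger', 'finger', 'finger', 'finger'] -
# nothing in the local statistics says a hand has five fingers, so the sampler
# can easily overshoot (or stop early). That is the stopping problem described above.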


But, to return to the main thread, the overall effect of GAI, whether it is just hype, or a revolution, or some combination thereof, is to make us question our basic concepts of humanity and reality. We talk about the filter bubbles that social media creates. What about the ones that overreliance on GAI can produce? Particularly if we start talking to our artificial friends (such as Replika AI) more than to our real ones. Just as Operation Denver may have opened us to a greater distrust in our scientific and medical institutions, sapping the ability of science to influence public policy and discourse, and the Protocols showed how to transform personal beliefs into an alternative consensual reality, AI and GAI are loosening our sense of humanness (and perhaps our humanity). The meta-effects are as important as the immediate effects of bias, outright prejudice, inequities, and effects on education and the economy. In the long term, they may be more so.


For the people who have backed and developed these tools, that loosening may be a major point. A number of them, such as Thiel, Musk, and Altman, are advocates of longtermism. When they talk about future humans, they include virtual humans in the mix, and treat them as roughly equal to, at the least, the real human proletariat. They, and others who have backed them, such as Satya Nadella, the CEO of Microsoft, who has bet the farm on AI "copilots," have every reason to want us to accept AI as our helpers and maybe even our equals or overlords. They believe they are in a race to develop Artificial General Intelligence (AGI, not to be confused with GAI), or even superhuman intelligences, which should be able to solve many of our problems. Even if they cannot achieve real AGI, which is either a couple of years or infinitely far away, if they can convince us to accept GAI into every aspect of our lives, and if they can control the technology and the infrastructure, it will put them in very, very powerful positions. This is also why they have diverted attention from the real-world effects of their technologies to apocalyptic visions: so they can steer regulation away from what they are working on now, and adopt the sort of self-regulating regime that has prevailed with most computing advancements in the US.


But let me return to the beginning. Surely GAI's rise is not to be likened to the firebombing of Hamburg. These people are not intentionally trying to create that kind of holocaust, and certainly not the other sort. Maybe we should liken it to the rise of the bomber. ChatGPT may not be a B-1, or even a B-17, but it might be a Zeppelin, a precursor of worse to come. Already, the effects of its vast consumption of cooling water, of minerals and plastics for its hardware, and especially of its huge power consumption may be as devastating to climate refugees as all those incendiaries were to the residents of Hamburg. Maybe we are already past the Zeppelin stage. Does it matter if people literally burn up because of a bomb, or die of heat, dehydration, or even burns suffered from falling on the hot ground, because of climate change?

The bombing of Coventry, Hamburg, and all the others devastated the people who lived there, demoralized their fellow citizens, and frightened even the victors. In the longer term, what has mattered more are the meta-effects. The moral damage inflicted on the victors as well as the vanquished is sizable. By changing perceptions of our fellow humans and of what we ourselves were capable of, by allowing calm, cool, scientific deliberation over mass destruction, they degraded our world, our reality.


We face similar challenges with all our present technology. It is true that our cell phones and other devices, with their GPS and ability to triangulate off of cell towers, allow us to be used for reconnaissance and targeting in zones of military operations, or allow all sorts of governmental and other surveillance of our habits. Metadata is almost as important as the ability to read the content on our phones. Most of the systems reading it rely on one form of AI or another to make sense of it. The immediate effects of that are felt on battlefields in Ukraine, Syria, and elsewhere. They are felt on the streets of American and European cities, where flawed software misidentifies people as criminals, usually minorities of one group or another, but especially non-white people, and leads to false arrests, imprisonment, death. (It is felt even more on the streets of China, where it is tied to other databases and used to repress dissidents but also entire ethnic groups in a dystopia far worse than the West is presently experiencing.) They are felt in welfare offices, loan offices, insurance offices, and many other places. The consequences are serious, for some, existential.


Those of us not put at immediate risk will also feel the meta-effects. Those from GAI, as opposed to other, older forms of AI, are just beginning to propagate. I have tried to suggest some of them above. What I am trying to suggest, though, is that GAI itself is also being weaponized, intentionally or not, by its creators. In this day and age, anything that can be weaponized will be. We have seen it over and over. We just have to dig into the people behind a new technology and its roots to begin to see how.


Saturday, August 5, 2023

Is 2023 a Pivotal Year?

I am trying to think when I have felt this way before. I suspect I must have, but cannot recall. Maybe it is age, or the converging foci of my professional and personal interests, but this year feels particularly significant. We are almost two-thirds of the way through 2023. It is clear that something has shifted or is shifting in climate change and our understanding of it. Instead of a decades-long shift, we seem to have hit a sudden change of state. We are seeing things happen in a few months that were supposed to take until the middle of the century. Despite some aspects, such as the rate of population growth, slowing in recent years, it feels as if the Great Acceleration is now in overdrive.

Meanwhile, the world political situation seems to continue to undergo ever more rapid change. Maybe I am exaggerating this, but it is clear that America is at a crisis point where it must move one way or another. NATO has suddenly added more members. Ukraine and Russia appear locked in a deadly embrace for the foreseeable future while Putin stumbles and appears more withdrawn. China remains an enigma, but while Xi flexes his muscles internally and externally, there are signs of something wrong - extensive corruption in his nuclear missile forces has forced the replacement of their leaders with more politically reliable outsiders, while the foreign minister first went missing, then was officially replaced, and now Beijing is trying to remove all official references to him. China is locked in a long-term struggle with the US, but the two economies remain deeply enmeshed. That makes for a very different kind of Cold War than the one of my childhood and youth. The War in Ukraine continues to cause global problems with oil and grain supplies. Instability seems to have increased again in the Horn of Africa. South Asia's economic challenges are being compounded by climatological disasters. It goes on and on. Oil, wheat, heat - they are causing huge disruptions, destruction, and death across even affluent countries that are more-or-less self-sufficient in the first two.

Since last November, we have had a heightened sense of both the threat and the promise of information and communication technology (ICT). We had grown so accustomed to the constant change and penetration of these technologies in our lives that we (myself included) had missed much of what was happening. Over the past decade, first with ISIS, then the Syrian Civil War, and now the Russo-Ukrainian War, the more deeply sinister aspects of our connectedness have come into focus: first as a medium of propaganda, radicalization, and disinformation; now as a means of passive intelligence collection, reconnaissance, and targeting. The distinction between civilians and soldiers had been blurring for more than a century, was further dissolved by the long-term requirements of nuclear deterrence, and is all but gone now. The front line is no longer in Syria, Sudan, or Ukraine. It is your cell phone, your tablet, or your laptop. Information and drone warfare are the defining characteristics of today's conflict (a forever war created as much by the greed of ICT companies and the surreality of intelligence agencies as by the GWOT) just as tanks, airplanes, and submarines were of that Great War that began 109 years ago last week. Oh, and just to make matters more unstable, the richest man in the world, whose actions appear to many to be increasingly erratic, owns half the satellites in orbit, communication satellites that are now vital to the functioning of most militaries around the world.

Of course, last November marked the beginning of the hype and rapid spread of generative AI (GAI). There had been a lot of warning tremors, mostly graphical in nature, as we became familiar with the strange output of applications like Stable Diffusion, DALL-E, and Midjourney. At the end of November, OpenAI announced ChatGPT. Whatever you think of either the company or the product (or their critics), it made most of us aware of the potential for good and evil of GAI. It was gasoline thrown on a fire, or maybe on a lot of different fires. How do we react? How should we react? What does it mean? How much of it is hype? Going back to events this time 108 years ago, I am reminded of the confusion and difficulty the British government and military had responding to the first Zeppelin raids. How to give warnings? Was it best to attack their bases or pull fighters back from the Western Front? What to do about shelters? What information to release after an attack? How to minimize disruption to war production?

Those, however, were less intense than the situation we face today. The crisis may be less about GAI itself and more about how companies like Microsoft have chosen to implement it as part of all of their products, how companies have chosen to use it in pursuit of their holy cows (productivity and profit), and how seriously it threatens to disrupt every aspect of creativity and education. The intense exploitation of human labor to prepare training data for it is another aspect. The environmental and climatological problems will be severe if usage continues to increase and if the energy and thermal efficiency of processors is not increased by two or more orders of magnitude. This week a study predicted that by 2025, GAI might use as much energy as all human workers. And, of course, it is folded into both the Cold War and, because of its ability to crank out disinformation, our forever information war.

The responses to all of this from Left, Center, and Right have been pretty predictable, and equally inadequate. Just to take the GAI mess, I have been very attracted to the critique of it that has built up around two groups, DAIR and Critical AI. I think they are right for the most part. Their months (really years) of critique are paying off in certain arenas. The press is finally beginning to understand the real dangers and to pay less attention to the self-serving, existential-risk mongering of the AI leaders. Educators are tuned into their critiques. Frankly, though, I do not think they are doing much good. As with most other technologies, our society, culture, and economy embrace them regardless of cost. They may not be Don Quixotes, but it feels a bit like France in May 1940 - much depends on whether they are more like the French Army or the BEF.

I have deeper concerns. While the view of mind (consciousness, sentience, intelligence, etc.) that some of the GAI creators and supporters espouse is hideous - call it a sort of reduction of everything to data science - there are many other models of mind out there, and it is not always clear, in fact it rarely seems clear, which one they follow. (I will try to explain my own model in another blog post; it is derived from the one Gregory Bateson espoused, with additions to take into account later research and philosophy.) I know it is too much to ask people who are writing about these issues to go around explaining just what they conceive mind to be; it is just that it bugs me. Why do I care? It tells you a great deal about their humanity, their attitudes toward humanity, and toward the more general issues of dealing with the other sentient, conscious, or intelligent species (and possibly systems) that we share the planet with. Critics and commentators look at the problems GAI raises for humans, often with a very good display of feminist, queer, and decolonial theory and practice, but I have seen nothing about what it might mean for other species. My impression is that they are often locked into ideas about the nature of mind that are too narrow.

We inhabit a world with a great many other minds. They may range from the simplest to ones as complex as our own. We need to also acknowledge that some theories of mind, and these do not require mysticism and can be firmly grounded in materialist notions of the world, do not see it as confined to the individual, but as the interactions between individuals and the world around them. Once we start down that road, we are in a different place regarding other species, our technologies, what we are doing to the environment, the climate, our cultures, societies, economies, etc. Maybe there is a theoretical framework out there that can embrace all of that. I don't know. My feeling is that our present theories are both limiting and failing us. Maybe they can expand, maybe we need a new theory, maybe we just need to walk away from any kind of overarching theory. The latter is where I am. I have been there for probably thirty-five years. Instead of elevating theory to dogma or ideology, theories to me are simply tools. Like all tools, a theory is good for some jobs and not others. Just as having the wrong tools limits what you can build, having the wrong theories limits what you can accomplish. While there is still a lot of good analysis going on within those theories, it feels more and more incomplete with each passing year. Maybe we need to break it apart and see what we can do with it. Maybe we need to walk away from theory for a while and work from an understanding that the problems are even bigger and more complex than we have theorized, that they must be approached both from a cold realism and from a deep empathy and compassion, that our best efforts will always be somewhat inadequate, and that we cannot ignore any part of the world or its systems.

We are also going to have to change our understanding of who and what we are in more than one way. We can start by recognizing that we are part of not just a biosphere but also a noösphere. I am not using that in the sense Vernadsky and the Russian Cosmists meant it, nor in the usage of Pierre Teilhard de Chardin - though I was raised on some of his concepts. I propose this not as a philosophy or a theology but as a practical and pragmatic concept. The biological world is full of communication and many kinds of minds, networks of mind and communication that we know cross species. We are only now becoming aware of how complex this all is. We are adding to the complexity with our computers and digital communications. We have been thinking of how those tools affect us, and we may be thinking about how their energy and other resource demands affect the biosphere, but we have done very little so far to begin considering how they play in the larger mental life-world in which they, and we, are embedded.

What is the effect on all living minds if we are not just poisoning our physical but our mental environment with disinformation, misinformation, a basic distrust of reality, a basic distrust of ourselves and others? Does it make it easier for us to ignore them? Does it make it easier for us to destroy them? Does it keep us from making the connections we need both with them and to help keep them from the destruction and harm we create? 

There are some who think that GAI will alter what it means to be human and make us more, or differently, creative. I have a lot of doubts about that, but the moment is pregnant with possibilities. I am convinced that we will end up with some kind of hybrid creativity. How we think about them, ourselves, and the thinking world around us can lead to different outcomes, some of them probably unimaginable. Where we are now, our understanding or misunderstanding of the dramatic climate and weather we are experiencing, the levels and kinds of misinformation we accept, the theories or ideologies we hold to, the local, national, and global political situations and the infinite war - all of these will shape the sorts of creativity that emerge.

These emergent creativities will be vitally important. They could give us tools to solve our problems and answer the most important questions, allowing us to become something more than we are, allowing the thinking and feeling world to become more than it is, or they could limit, stunt, and destroy. I firmly believe that this year, and the next two or three, are really pivotal - not just because of AI, but because our reactions to the environmental, climatological, social, and political turmoil is going to shape things for a very long time. 

I hope I am wrong. I hope that I am just blowing events and temporary conditions out of all proportion. Given that I am writing this, I obviously believe that I have at least some small insight, but I really want to be wrong. 

Saturday, May 13, 2023

Thoughts on Wiggins and Jones, How Data Happened

This morning I finished reading Chris Wiggins and Matthew L. Jones, How Data Happened: A History from the Age of Reason to the Age of Algorithms (Norton, 2023), based on a course the two teach at Columbia University. It does one of the things that a good history should do. It makes clear the contingent nature of the present, how things could have turned out differently but for specific events or decisions, and argues that we have more choices than we think. 

It also did something that great histories do, it made me see a subject in a new light. In the past, I have read books and articles discussing how data has been used in one fashion or another, whether it has been how it was abused by "race science," how data processing was used to facilitate the Holocaust, how governments have amassed huge archives of often incompatible and sometimes irretrievable data, how data facilitates surveillance, and much else. This book focuses on the parallel evolution of our understanding of data and of statistics over the last quarter-millennium or so in Europe, America, and India. 


Wiggins and Jones move through the early history of these subjects and explore how they were shaped by concerns about race and eugenics. So much of the early history of statistics was shaped by the concerns and beliefs of its founders about race or the possible decline of the "white race." "Moral panics can create new sciences," they note (p. 35). It was also in this era that conflicts arose between those who wanted statistics to be more grounded in mathematics and given a sound theoretical basis and those concerned with the applied problems of engineering, government, and business.


The middle of the book covers the effects of World War II, the Cold War, and the growth of what we loosely call AI. The conflict between the two sides (mathematical and applied) continued. For a time, it appeared that the mathematical statisticians had won the field, and the early history of AI was shaped accordingly. It was more complex than that of course. There were smaller conflicts within the larger ones, more nuanced differences of what was thought important, and real differences in world views. The different factions have won and lost a number of skirmishes, and the results of that shape the present hype, cautions, and battles over generative AI and ethics. 


One point the authors make in the middle of the book (123-124) is that Alan Turing, whose name is bandied about so often in these discussions, had a "capacious vision of intelligence" drawn from the human and the animal world, and including much more than logic and reason. The (mostly) men who began to develop the field of AI after him, concerned with building bombs, making money, breaking codes, or with more theoretical objects, narrowed that vision to calculation, data processing, and logic. Put another way, they bequeathed to us impoverished visions and expectations of AI.


The final section concerns how financial, social, political, and ethical factors have shaped the world of data that now surrounds and penetrates our every moment. This is where Wiggins and Jones really bring forward the contingency of the present and future. Their backgrounds are important here. Jones is a professor of History at Columbia University. His previous books have been about the Scientific Revolution and the early history of calculating machines. Wiggins is an associate professor of Applied Mathematics, but, perhaps of more importance for the insights it has afforded him in the final chapters, is also chief data scientist for the New York Times. The authors understand that the way we handle data, AI, ethics, privacy, and related issues is going to have an outsized importance in the future. They are concerned with the forces and structures that produced this situation and how those can be changed.


As I read these chapters, I began to understand more about the conflicts between different AI factions that have become so prominent and vehement over the past two years, especially since the release of ChatGPT. These go back to the beginning of AI, but they also reflect a split over the ethics and governance of AI between those who would try to encode ethics and governance into algorithms, reducing them to rules, and those who understand them in terms of larger human and political contexts.


They packed a lot into just over 300 pages, and they did it in a readable way. It is a good read. There is so much more to their topic. This book whetted my appetite for more. 

Monday, May 8, 2023

Drones, AI, and Information Warfare

I have been thinking about this Twitter thread. It has been gnawing at the back of my mind all day. It is tied up with Matthew Ford and Andrew Hoskins's book Radical War: Data, Attention and Control in the Twenty-First Century (OUP, 2022). Thomas Rid's Active Measures: The Secret History of Disinformation and Political Warfare (FSG, 2020) colors my thinking, along with the reading I have been doing on AI all year.

 What is emerging in Ukraine is a form of war based on the resources of Western militaries harnessed to the tactics of the underdog derived from the asymmetric warfare of the last few decades. The extreme importance of networks, cheap computing, and vast numbers of drones is striking. 

At the same time, we are witnessing the emergence and rapid evolution of so-called generative AI in the US and China. That keeps getting characterized in Cold War terms without too much thought being given to what that might mean. On the one hand, it means that, as with every information technology the PRC or the Soviets faced in earlier decades, it has to be tightly controlled. It has to be available for the state to manipulate the people but prevent the people from manipulating information in turn. The flip side is that AI will be used as a powerful driver of information warfare to manipulate the citizens of other countries. 

It also means the West will exercise comparatively less control (at least directly), which makes it more open to attack but also more open to novel uses and unexpected developments in AI. We need to recognize the military and intelligence complex is always deeply invested and involved in AI and all computing. We frankly would not have a lot of these techniques and technologies without DARPA and NSA. If the military and intelligence agencies can integrate these AI developments with the rapidly evolving techniques of warfare emerging in Ukraine, we should see further destabilization of our notions of warfare.

It is anything but certain that the American military can reorient itself that way in short order. It is also possible that anti-government groups in the US could reorient this way quickly and use the combination of AI, drones, and new tactics to try to create an environment they believe will allow them to triumph. 

We are already living in a world where our phones and watches are simultaneously devices intelligence agencies can use for real-time data collection and surveillance, the military can use for reconnaissance and targeting, and journalists and NGOs collecting information on war crimes can use for reporting. They also open us to constant propaganda and information warfare.

Something like this has been gestating in my mind for a few days. I am struggling to put together a coherent set of ideas, so, for now, it is just something I need to express so I can work out other ideas that may be more pertinent or that may constellate with it. 



Saturday, May 6, 2023

AI Tsunamis and Learning to See Larger Contexts

My thoughts about generative AI have been all over the place over the last few months. Trying to understand it and help others understand it has become a major focus since January, both at work and outside of it. For so many of us, ChatGPT hit as a tsunami. I was aware of what was happening with text-to-image apps like Stable Diffusion, DALL-E, and Midjourney, but was not following AI developments in general and only paid attention to them in the context of art and art history, of intellectual property and copyright.


At first, ChatGPT was just a distant rumbling beneath the sea. That was back in December. Then the hype and the angst built into a full-fledged eruption. A new landmass was rising from the boiling depths, and the waves it created towered over us. That was January and early February. We kept afloat and tried to steer our ships in the right direction. Then in mid-March, just as some small stability seemed achievable, we were hit by a whole succession of new waves: GPT-4, Midjourney 5, a string of announcements from Google, Microsoft, and Nvidia (most of these systems run on their hardware for now). It was a second tsunami. Over the following weeks, we had the notorious "Sparks of AGI" paper and the calls for a six-month moratorium on AI development, a well-developed critique of the motives behind it (which may include apocalyptic ideologies, eugenics, and the desire of some signers to catch up), and a fairly constant stream of other developments.


All of this is to say that I have learned a lot, been caught up in events, made mistakes, gotten equally caught up in the technology at times, sometimes, like now, been reflective, and often been in intellectual and emotional turmoil about the whole thing. I have not become an expert, but I keep plugging away at understanding it.


There are so many aspects that we need to comprehend. There is the technology itself. There are deep and extensive ethical issues, compounded by the ideologies of the backers and creators of this technology, as well as of the boosters and the critics. (For the record, I am closer in my views on the ethics and hype to ideas coming from DAIR and Critical AI than to those of any other group. This is tempered by my own thinking about technology, which often seems a little out of tune with everyone else's.) It is hard to overestimate the importance of the ethics of AI, or of grounding our approach to the technology in a realistic assessment of them, rather than in the paroxysm of existential, apocalyptic thinking we have had for the last few months.


I tell people that it is like the famous story of the person who said the Earth rests on the backs of elephants, that the elephants are on the back of a giant turtle, that the turtle rests on the back of another turtle, and that it is turtles all the way down. The difference is that these are ravenously hungry snapping turtles. That is to say that, every time I think I understand the extent of the issues, I discover it is even bigger than I thought. There is always another snapping turtle - often just a baby but sometimes a very old and irritable one.


Ethics must inform our decisions about how we use it as much as or more than politics, economics, or, as I suspect will happen, religion. We also need to understand it from a global perspective. I do not mean the supposed AI arms race between the United States and China, or the specific policies of EU countries, or how it might factor into Russian disinformation campaigns. All of those are important but are not my present concern. The American perspective, to the extent there is one in this chaos, is largely that of Silicon Valley, Redmond (Washington), Hollywood, New York, and inside the D.C. Beltway. It is shaped by utopian fantasies and apocalyptic fears, economic beliefs, and the scramble for power, profit, and position. It is caught up in ideas and fallacies that have been brewing since the 1850s at least.


The dominant perspective here is one of relentless, unstoppable technological change and unlimited economic potential and development. It can be decked out with flags and bunting, dressed in robes and vestments, and pronounced to be logical and scientific. It is what we, or at least late Baby Boomers like myself, were spoon-fed. I knew there were things wrong with it by the time I was ten - there were too many nuclear missiles in our area to let me accept it at face value - but it was, and is, so pervasive that it remains hard to shake. Sometimes the response it evokes feels like an atavistic instinct.


It is, frankly, baloney. Maybe if we had unlimited, clean electrical power, unlimited natural resources, a better moral compass, and lived in a world where everyone had just adopted Americans' view of ourselves, it might work. That is one hell of a counterfactual, though a lot of people treat the world as if it is, or will soon be, the case. One reason that some ethicists like Timnit Gebru and Emily Bender are too often ignored is that they do not buy it. They try to correct it. Maha Bali is gentler in her criticism and equally insightful. Even something that we take as a great triumph of AI, machine translation, is fraught with problems, not least of them problems translating across language families or the possibility that AI is reinforcing English as a hegemonic or imperial language. (See Emily Bender, Timnit Gebru, et al., On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, Paris Marx's interviews with Timnit Gebru and Emily Bender for the Tech Won't Save Us podcast, and Maha Bali, Agentic and Equitable Educational Development For a Potential Postplagiarism Era and her blog, Reflecting Allowed, in general. For the idea of imperial languages, see Nicholas Ostler's Empires of the Word: A Language History of the World. For the critical importance to a nation or a people of control over how its own language is expressed and transmitted, see Jing Tsu, Kingdom of Characters: The Language That Made China Modern.)


It goes way beyond language. That is an area that fascinates me but one I have hardly explored. There is also the exploitation of cheap African labor in cleaning inappropriate material from the training data. Resource extraction for the manufacture of the huge numbers of chips and other hardware required is also a concern. Heavy energy demands that generate greenhouse gasses and contribute to climate degradation, affecting many of the countries most exploited for their human and natural resources, are yet another.


Those are just some of the bigger issues and not to be minimized. They point to another aspect of the global - the environmental and climate costs of generative AI. It already has a substantial footprint in energy usage, greenhouse emissions, resource depletion, pollution (the mining of materials for chips and servers leaves behind a lot of toxic material locally), and water use (both in manufacturing and in cooling data centers). The chips and servers will get more efficient, and there may be a turn towards more use of renewable energy. Still, we should expect the use of and demand for generative AI to jump orders of magnitude, so we may see a net increase in all of these negative effects.


I want to return to language, though. Generative AI and language interest me for other reasons. Images do as well. Whenever humans encounter language, they assume thought like their own, even across cultural divides. We largely define both intelligence and humanity through language, and certainly through symbolization. We define behaviorally modern humans, that is, Homo sapiens exhibiting signs of consciousness and intelligence like our own, chiefly through the creation of images, which is why any decorative marks on early artifacts are closely studied, and also why cave art fascinates. Whenever a non-human exhibits any signs of language or begins to show an understanding, however basic, of human language, be it a primate, a bird, a dog, or a dolphin, we go a little bit ape. Some react reflexively, saying there must be a mistake. Others are overjoyed. Some of us just want to know more and are deeply fascinated with the phenomenon of language.


For years, we have been dealing with animals that can handle human language. Likewise, we have had computers for decades that can carry on written or spoken conversations within limits. We knew this was a result of programming and that the ability was no sign of intelligence. We knew it was possible for someone to explain every step of the process, even if we ourselves could not. Now we are confronted by an "intelligence" that is "trained" rather than programmed. We are repeatedly told that no one can really explain every step of the process that creates the output. We may intellectually understand that there is no thinking going on in any fashion we would recognize. Instead, it is all about the probability that one word will follow another in a given context. The output is constructed of tokens, and the AI is doing something like auto-complete on steroids.
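To make "auto-complete on steroids" a little more concrete, here is a deliberately tiny, purely illustrative sketch in Python of the underlying move: count which words follow which in some text, then generate by repeatedly picking a likely next word. Real systems use transformer networks over subword tokens and vastly more data, so this toy bigram model is an analogy, not a description of any actual product.

import random
from collections import defaultdict, Counter

# A tiny corpus standing in for the web-scale text real models are trained on.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt_word, length=8, seed=0):
    """Generate by repeatedly sampling a likely next word - auto-complete on steroids."""
    rng = random.Random(seed)
    words = [prompt_word]
    for _ in range(length):
        counts = follows.get(words[-1])
        if not counts:  # nothing ever followed this word in the tiny corpus
            break
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("the"))
# Fluent-looking word strings emerge, with no model of cats, mats, or meaning behind them.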


That is not how it feels. It is not the impression it leaves on us. It feels more like we are talking to a person. My first impression of Bard AI was that I was chatting with a very polite, somewhat inept, reference librarian. It is easy to impute feelings and thought to these programs. We tend to think of them in somewhat human terms. I even apologized to Bing AI on one occasion. This has a lot of implications. Do they have agency? Can they be held liable for their actions? Can they claim copyright? Are they really creative? Should they have rights? Are they sentient beings? 


One direction this is taking us is to focus on the human-like qualities they exhibit and to reinforce the idea or conviction that intelligence (and maybe sentience) is specifically human. Indeed, much of the fear evinced in the last two months has come from those who understand that they are not human-like and are something more like an alien mind. I find that fascinating, both because of the fear of the alienness of other intelligences, and for the possibilities it might create to extend our understanding of intelligence and mind - that is, if there is really something more going on in those servers than we suspect.


But it also means that we are still fixated on intelligence like our own. In the past few decades, we have begun to learn that intelligence and sentience are much more widespread than we ever thought. We are even beginning to see signs of it in plants, or at least in the complex ecosystems that plants create. We need AI to be non-human, less human-like, more alien, just to maintain our bearings in a world full of other intelligences. I believe there is a real danger that by focusing on the seemingly human qualities of AI, we will be led away from considering and trying to comprehend all of the intelligences that surround us and interact with us, often without our knowledge.


There is a double-edged sword here. One edge may force us to reconsider aspects of our own intelligence, sentience, creativity, agency, and uniqueness. This is a very real possibility, and when some AI proponents suggest that we think like an AI, they are doing what we have long done: likening our minds to the latest technology. We have been doing it for centuries. We like those machine metaphors and slip easily into them, often without understanding the implications for how we think of ourselves and behave towards others.


The other edge cuts us off from the nascent understanding we have of sentience and intelligence across the living world. By appearing to think somewhat like a human, even though they are not, these synthetic intelligences promote the idea that only human intelligence matters. We should focus just on ourselves and on these machines. For a long time, our attempts to communicate with "higher" mammals - primates and dolphins - followed this direction and reinforced false views of intelligence. We taught chimpanzees and gorillas to communicate with us through sign language or keyboards full of pictures and symbols. John C. Lilly even tried to teach dolphins to speak English. Those attempts met with limited success and began to show us a little more about the minds of our fellow mammals, at least. Things began to get a little stranger with birds, particularly an African Grey Parrot named Alex, who commanded a vocabulary of a hundred words and some of the apparent cognitive abilities of a two-year-old human. We also began to realize that birds use tools and have cultures, particularly members of the crow family. (Suddenly, Poe's Raven seemed less far-fetched.) Birds are pretty far from us evolutionarily - our lineages diverged hundreds of millions of years ago.


Then things got really strange. We began to understand that octopuses are not only sentient, but extremely smart, very resourceful, and even seem to have a sense of aesthetics. That shook things up. An octopus is about as alien, and for many, as scary, as any creature on Earth. Despite Ringo wanting to while away the time with his beloved in an Octopus's Garden, and despite Japan having had an erotic sub-genre dedicated to women and cephalopods for a couple of centuries, they have not been so well regarded in the Anglophone sphere. It is no accident that H.G. Wells made his Martians resemble octopuses, or that Lovecraft's horror god Cthulhu was a mashup of human, bat, and cuttlefish. To make things worse, they are invertebrates and have decentralized brains that work differently from ours. Yet they can solve problems and arrange their surroundings to their own liking. James Bridle suggests, in Ways of Being: Animals, Plants, and the Search for Planetary Intelligence, that the octopus teaches us there is more than one "way of 'doing' intelligence," and that intelligence "is not something to be tested, but something to be recognized, in all the multiple forms that it takes." (Bridle, 51-52)


Bridle goes on in his book to argue that we need to recognize all types of intelligence and sentience, even the collective sorts seemingly found in social insects and across plant species. We need all of them to give context to our own, and also to the artificial varieties we are developing. He also argues that we need to bring them all into conversation to save life on Earth. That may sound a little bizarre and idealistic - it may be - but we are seeing more and more thinkers who see the world in similar terms. As I said, one danger of our current obsession with the apparent humanity, as well as the inhuman threat, of generative AI is that it may take our attention away from those other kinds of intelligence. It might make it easier to disregard them and allow us to continue to destroy them without a second thought.


Maybe it also keeps us from understanding artificial intelligence. Do we need to have a lot of models of intelligence to understand artificial intelligence and to recognize the point at which it might become something more than the sum of what we put into it? I have no idea if computer intelligence can ever become conscious and exhibit real cognitive abilities. I am pretty sure that generative AI as we know it today will not, but we also have to ask what happens - and it is happening very quickly - when it can interact with other types of artificial intelligence, access all kinds of tools (as one story put it, though I do not recall the reference, ChatGPT learned to use a calculator), and begin to be embodied (both Google and OpenAI are working on this, and GPT has already been incorporated into at least one Boston Dynamics "dog").


Maybe generative AI will be just a piece of what is needed to create an AGI (Artificial General Intelligence, the dream of many of the creators of generative AI). Think of it as being like Broca's area, or some of the other areas associated with speech and symbolic expression, in the human brain. If we get AGI, I am betting it comes from synergies between different kinds of what we so loosely and inaccurately refer to as AI today. On the other hand, it may not have anything to do with AGI, or AGI may never be developed. If anything, the latter seems the most likely.


But then I am no expert. I am just trying to understand. What I do know is that we have to watch out for both utopian and apocalyptic thought in regard to AI. We need to understand it within larger contexts. Obviously, I think some speculation about it is a good thing, or I would not have written this, but it needs to be grounded, and the very real dangers and potentials of the technology have to be kept firmly in view. 



Note: Like everything else in this blog, this is my interpretation, and does not reflect the views of the University of Missouri System - or anyone else for that matter. My take on things is often idiosyncratic and sometimes eccentric.



Sunday, May 8, 2022

Attention and the Future

 For the past week, we have seen the full fury of battle in the culture war, one that signals exactly where the conservatives wish to take things. In the immediate instance, it is a matter of women's rights and privacy rights. It is a fight about the future on very specific terms, but it does assume there is a future. 

"Culture War" has an air of illegitimacy to it. In a world full of constant and literal warfare, it sounds like a cheap metaphor. In a country where "the economy" is all that matters, culture is taken as nothing more than something people manipulate for personal or corporate greed (and many, many do - one need look no further than our big nostalgia merchants, some presently embattled on the right, but which did much to create the corrupted, simplistic nostalgia culture we inhabit). 

Time, I think, to recast the "Culture Wars" in different terms. The key question is no longer about the kind of future but the very existence of a human future. We do have a significant percentage of the population who do not care. For some, the reason is derived from nineteenth-century Christian theology. For others, there is a nihilism that cares nothing for what comes after them; still others cannot imagine a world without them (that takes quite an ego). There are those who misunderstand what it means to construct a reality (a basic category mistake). Others may be forced by the struggle of daily existence (economic or emotional) to ignore the longer-term future. There are surely plenty, like myself, who are concerned, but who like their comforts, understand those comforts are traps they may lack the will to escape, and likewise know those comforts are inimical to the future.

We could frame this as a war of conservatives and progressives, even as a war of two realities. Does that really get us very far? Neither is monolithic; they are collections of more-or-less related beliefs and movements. Both are ultimately grounded in world views, largely economic and religious, that exist along continua and that precede the discovery of our present omni-crisis. It would be too easy to say they are different sides of the same coin, but maybe they do share too much of the same worldview to find their way forward. Their realities are not that different. They represent only a small part of the many psychological and cultural realities humans have inhabited over the last few hundred thousand years, or even over the last few thousand.

A lot has been written about the economics of attention or the war for our attention, in various permutations. I do think this is about attention, just not in the limited ways it is often portrayed. A culture is about attention. It is about what we are allowed to attend to and what is "dangerous" or prohibited to attend to. The individual is also about attention: what is necessary, what is interesting, and how far to push the boundaries set by the culture.

The attention of the culture and of the individual changes over time. Now the question is whether individuals and cultures can accept a major change in the boundaries and the accompanying shattering of illusions. If we can, then we may forge a new culture, a new civilization, that can find a future. That future may not look anything like our present world or any world we can predict. We may need to think and believe in terms completely different from our present-day economics, religion, or even what we think of as science (which is already changing in intriguing ways). These may be evolutionary changes or revolutionary, or, more likely, a combination of both. It may be totally alien to what we perceive now.

We may also forge one that is worse than what we have now, one so mired in a false nostalgia that it denies the needs of present-day people, killing, maiming, and stunting individual lives while leaving no room for a viable future. That may be where both our conservative and progressive visions are taking us. While I would prefer a progressive future to a conservative one, I do not think either vision can produce a viable human future.

We are not fighting a Culture War. We are not yet really fighting a War of Futures. We are trapped in ways of attending to the world that no longer work. It may be that the ways in which some of us have come to understand ourselves, attending to who we truly are, not what culture, society, and economics say we have to be, will lead to a larger change in cultural attention. That is the promise of the progressive side (though perhaps not of the radicals who are locked into their own kind of ideological puritanism). It does open possibilities for profound cultural change that may provide a path forward.

I do not see the same on the conservative side. There may be individual conservatives who could find a new kind of attention, but the general trend, particularly of the more reactionary sort, is to further constrict attention along with human rights, to limit what we may consider real and proper. That might lead to a kind of revolution that completely repudiates the restrictions, but that might never occur or be completely suppressed. 

It may be that neither side can really change. Locked as they are in constant combat, it seems likely that they will only intensify and focus their attention into ever narrower paths. Personally, I look elsewhere for the changes and the new kinds of attention that can permit us to find a human future. That may be a long shot, but it may also produce the most livable future.