Complicated Feelings: Generative AI and Technofascism, Part 2
Studio Ghibli, White House Propaganda, and the War on Art
Welcome to the oppressive month of April. This is a follow-up to my February 5th newsletter of the same title, which I wrote towards the beginning of Elon Musk and DOGE’s still-ongoing, indiscriminate slashing of government workers and services, both domestically and internationally. Since writing that entry, I feel as though, in the process of writing regular newsletters, I have inched my way closer to my voice. I am very proud of the work I have done on this Substack, and every day I feel more self-assured as a writer.
Washington, D.C. is coming down from the intense high that is “peak bloom”, a season that brings massive crowds into the District to admire the cherry blossoms in their splendor. The Tidal Basin at this time of year is one of the most breathtaking sights, despite the crowds, and I would recommend it to anyone who can make the trip. It is also one of my favorite times of the year, weather-wise: thunderstorm season, when a person can spend a day adequately clothed in a light jacket and watch a concrete wall of flashing clouds approach and retreat just as quickly. On Monday I boarded a bus just as the first raindrops stained the pavement and, by the time I reached my stop, the gutters had begun to act like rivers to be crossed. I worried I would get caught in the deluge without an umbrella, but, seemingly in an instant, the sun overtook my charcoal sky just as I stepped out into the open air. The world, despite its heavy fatigue, is still a stubbornly beautiful place… perhaps we could learn a thing or two.
Again, as of today, April 2nd, 2025, I write this for free and because I feel like it. As always, you can find me on Instagram and nowhere else on the Internet for now. If you enjoy this post, please consider sharing it with your network or subscribing (for free!) if you haven’t already. I write poetry, fiction, and sometimes I just feel like having a quick, well-researched rant. Today is one of those days.
I’ve wanted to write about this for quite some time and have held myself back for reasons that, as I’ve examined them, surprise me. The first is less surprising: fear. I am frightened to talk about this because a lot of people I deeply respect hold a fairly idealistic view of the systems I am about to trash talk; further, a lot of people on the Internet are rabidly militant in response to anyone expressing concern about the “next big thing”. The deeper reason, the one that surprises me, is certainty. I am so deeply convinced of this that I am terrified to put it in writing for fear that I cannot adequately articulate what is effectively a gut feeling. I’m not afraid of being wrong. I would actually prefer to be wrong, because sounding an alarm that turns out to have been unnecessary is far preferable to sounding an alarm that goes unheeded. I do not believe that I am wrong here.
There are applications of AI that I think are really exciting, especially in the medical field and for accessibility, but the value of a product cannot be measured by its ideal uses, only by its real-world consequences. So I am going to detail what I’ve been experiencing online, what I’ve been discussing with friends and neighbors, and what I fear are the implications of it all. Let’s get started.
Social Media and Art
As a writer and avid reader, I find that my social media activity and the personalized algorithms it feeds routinely tailor my newsfeeds to include content from book-related pages and groups. About a month or two ago I began to notice a pattern – I was being served content from pages I had never previously followed that all looked the same: a picture of a well-known book accompanied by a long, analytical caption in a bullet-list format. “Here are 7 Lessons From ‘What I Talk About When I Talk About Running’ by Haruki Murakami…” Around this time, Meta announced that it would be increasing the number of AI-generated profiles and the amount of AI-generated content on its platforms, including Facebook and Instagram. It strikes me that this content is not easily recognizable as AI-generated. It is straightforward, it is not labelled as such, and a number of seemingly real users interact with it uncritically. Content like this is all over every social media feed I have, and I assume I am not the only person experiencing this.
A depressing and frustrating evolution of the past decade is the one-sided agreement that has been foisted on artists of every medium: the Internet allows work to be seen and has effectively monopolized the “marketplace”, so the best way to get your work seen is to put it online. But now anything you post online is stolen, without pay, to train GenAI models, which in turn generate billions of dollars in profit for tech companies and nothing for the artists whose work was used. Further, because the Internet is home to nearly all media created in the past century (the Lost Media Wiki is a very fun rabbit hole to fall down), and because there is very little regulation of big tech companies in the United States, all art is subject to being stolen and fed into “the machine”.
This brings me to Studio Ghibli, the animation studio behind masterpieces like “Spirited Away”, “Howl’s Moving Castle”, “Princess Mononoke”, and “My Neighbor Totoro”. By now I’m sure that just about everyone has seen a “Ghibli-style recreation” of a friend’s photo on their social media feeds, a result of OpenAI’s latest ChatGPT update allowing users to upload a photo and generate such an image in seconds. An old clip of Hayao Miyazaki, a co-founder of the studio and one of the most influential artists in the world, began circulating in response. That Miyazaki’s work, which focuses heavily on humanity’s destructive impact on nature and stands as a testament to the deliberate and delicate process of creativity, has been stolen in such a manner and fed into such an environmentally destructive medium is deeply and darkly ironic. Images generated with the tool have been shared by a number of large corporations as marketing materials, and a propaganda image was posted by the official White House Twitter account.
There is a bizarre glee among people who purport to care about the pursuit of excellence when highly skilled and trained artists lose work. More than a couple of times I have come across lazily generated images from a specific type of nerd with captions like “graphic designers are so fucked!” or “time to get a real job” that receive tens of thousands of likes and reshares. The crux of the issue, in my mind, is that AI creates the illusion of expertise in anyone, which is then used to undermine the credibility of actual experts, giving rise to the antivax movement, to conspiracy theories, to outrage culture… it is the very death of objectivity. It strikes me that this particular subsection of nerds has hitched their identity to the idea that excellence is innate, but that it exclusively belongs to their in-group. This is the central tenet of white supremacy translated into techified language.
For what little it is worth to say this: I do not want my work to be used to train AI models. I do not want my work to be fed into a machine for editing or to be summarized, and I do not want my work to appear next to AI-generated “art”. I do not consent to that. Don’t be a dick.
Human Needs and Productivity
Let’s discuss, for a moment, ideal use-cases of AI. As I mentioned in my first post on the topic, I wrote a philosophical argument in 2018 while pursuing my degree at the University of Melbourne. In the paper, I argued that AI as a supplementary tool could revolutionize the way we, societally, solve complex problems, but that increased usage would need to be coupled with worker protections. At the time I had not fully wrapped my head around how many different applications this technology could have, did not yet understand the massive power of data, and was not pessimistic enough about the profit-by-any-means-necessary nature of the tech industry. Instead of utilizing this massively capable and powerful tool to solve pressing issues like hunger, disease, and poverty, the companies that profit (massively) from it are building it into literally every system we use without providing any means of opting out. Data is power and can effectively be leveraged for good, but it never will be when profitability is the first consideration, and large companies have very easily convinced the general public to freely provide their data and personal photographs without thinking about how they might be used. It is not a coincidence that fascism is on the rise in the United States at the same time that AI tools are becoming more and more common.
I read a lot about Mussolini in my research for this newsletter. Specifically, I read about his characterization of fascism, which, contrary to other prevalent schools of thought at the time, did not purport to be a means of ending suffering. Liberation from basic needs, he argued, would “...cause the pacification of drives and, consequently, the end of movement and the decline of civilization” (Source: Falasca-Zamponi, 2008). That is, hardship is not only unavoidable but necessary to ensure productivity, which might be partly true if we were talking about innate human hardships like grief and discomfort, but not when we are discussing manmade hardships like poverty. I hear echoes of this constantly: in pro-birth sentiments and Elon Musk’s weird sex-selective obsession with having as many children as possible, in hustle culture and arguments for longer work weeks/fewer days off, and, as I continue to tug on the mental thread, in anti-trans, anti-DEI, anti-intellectual, and anti-immigrant mentality.
ICE, DOGE, and Surveillance
I must also mention ICE in this discussion. In my previous post about AI I predicted it would be used by law enforcement agencies for surveillance purposes. Perhaps this was less of a prediction than an observation, but it is now a reality. We are currently observing mass deportations, without due process, to notoriously violent prisons overseas; the previously mentioned Ghiblified AI propaganda; and the illegal detentions of Mahmoud Khalil and a number of other students legally residing and studying in the U.S. over social media activity that the current administration, through the use of AI surveillance, deems “pro-Hamas”. So, how many more abductions have taken place that were not filmed and, thus, could not go viral online? What happens when agents try to detain the “wrong” person – someone with diplomatic immunity, a legal gun-carrier, a disabled person? And what happens when people start dying?
Meanwhile, massive cuts to the federal workforce by the so-called Department of Government Efficiency are simultaneously shocking essential bureaucratic systems, disrupting human lives, and prompting massive, expensive, and necessary legal responses across government and across the country, but especially in D.C. as the president turns a vengeful eye toward the District and its citizens. From the very start, these actions have been heavy-handed, sloppily executed, and really lame. Because, after all, these efforts are not about efficiency. They are about centralizing power, diminishing governmental checks and balances, and demonstrating authority to the obvious ends of a technofascist regime led by the current president, who is actively alienating the United States’ oldest allies, ensuring the end of American educational dominance, and taking golf trips to the tune of 26 million taxpayer dollars. If due process and privacy are not protected for everyone, they cease to exist for anyone.
Conclusion
I want to be idealistic about the evolution of artificial intelligence. I would, in fact, love to not have to think about it. But it is irresponsible to ignore the parallel proliferation of AI systems and of fascism in the United States, and more people should be talking, loudly, about it. Especially elected officials. Technology is inseparably tied to the political climate in which it exists; the atom bomb was invented in a time of war and for the purpose of war. I judge AI by its most destructive effects, not by its most promising capabilities. I am not foolish; I know that people will continue to use these tools directly or indirectly, and I know that the answers to complex ethical questions usually lie in the gray space between multiple answers, but a person cannot make a compelling philosophical argument while standing in the gray space. The rise of Generative AI is intimately tied to the rise of fascism, and we are standing with our toes dangling over the precipice.
Thank you for reading. Please consider sharing this post with your circle if you feel so inclined, and please subscribe (select “no pledge”) to receive future posts. I hope, for my sake and for the world’s, that I can write beautiful poetry in the coming months and not get swept up in the catastrophic headlines that cross my feed far too frequently. Stay sane, stay focused, and if you can manage it, please stop (intentionally) using GenAI.
J.K.