The Algorithm Is Your Asshole Boyfriend
Why creators are incentivized toward hyperbole, and how to stop it
Welcome to The Wayfinder, your guide to our toxic information environment. I’m so glad you’re here.
This newsletter is free to read, but in an increasingly challenging time for disinformation and online harms research and writing, paid subscribers help keep this public service content available.
If you purchase a yearly or founding subscription, I’ll send you a 10-Day Privacy and Online Safety Toolkit. This tune-up is designed for busy people who want to make sure they have their digital i’s dotted and t’s crossed but don’t know where to start. The toolkit will be released in April; subscribe today to get yours!
ON MARCH 5 I SPENT a few hours making a reaction video about a Congressional hearing in which Representative Bill Huizenga namechecked me and the conspiracy theory that I was hired by the Biden Administration to censor the internet. My goal was to make a lighthearted response that mocked MAGA’s continued obsession with a four-year-old lie and informed my audience about the cruelties of this administration; I’ve been trying—through Substack, through Instagram, through Bluesky—to “reach people where they are.”
I wrote a script. I pulled the relevant footage of the hearing. I filmed myself. I took screenshots of articles I mentioned, found old videos in my archive, and edited everything together.
As of this writing, it has 788 views.
On Instagram, I’m not a mega- or even a micro-influencer. I have just under 2,500 followers, whom I’ve been painstakingly cultivating for the past four years. My videos generally earn a respectable few thousand views, breaking five figures if I’m lucky. But fewer than 800 views after more than a week is not the payoff I expected.
A lot of online creators would probably be mouthing off about being shadowbanned. If I were a Biden-era conservative, I would call this censorship, and allege that disfavored views were being suppressed because of some sort of shadowy cabal between the administration and the platforms.
The reality is more banal. Is Meta suppressing me, personally? Probably not. Is it suppressing the topics I work on? Maybe; educating people about how disinformation works is actively bad for Meta’s bottom line, after all. Is Meta adding artificial friction between people and politics? Yes, and so are other platforms.
But the most frustrating obstacle to trying to inform people “where they are” is that what the algorithm demands is not only different on each platform; it changes regularly and opaquely within each environment. Keeping experts and journalists and influencers and policymakers guessing is baked into the social media platforms’ business models. If every piece of content were a banger, there would be no incentive to keep on creating, seeking users’—and the algorithm’s—approval.
The algorithm is our asshole boyfriend, and babe? It’s probably time to leave.
A short accounting of the ways that I, a writer, have contorted myself to please the algorithm looks something like this. I learned how to edit video. I learned basic graphic design. I have tested pseudo-ASMR voiceover videos, and B-roll videos that show me “at work.” I have made montages and curated meme carousels, substanceless “connection” videos slapped over trending audio. I have tested AI platforms, asking them to write hooks (they’re almost uniformly inaccurate and incendiary), and used moving captions and split screens to maintain interest. I wrote a song parody that, years later, generated a lot of ridicule (and no, I don’t regret it). I bought stuff—tripods, mini microphones, lighting. I have dropped everything to make reaction videos, disappearing from evenings and weekends with my family to get them online.
What haven’t I done? I do not post things I don’t firmly believe or can’t back up with a source. I don’t write clickbaity headlines or hooks. I don’t say anything I wouldn’t say on the news in a video online.
Sometimes, the algorithm rewards me with the manipulative partner’s equivalent of jewelry or a nice dinner. A few thousand views, maybe even tens of thousands. Nice comments. New subscribers and followers. I feel satisfied that I’m reaching my goals of reaching people “where they are.”
More frequently, I’m left wondering what I did wrong. Why did the algorithm ghost me this time? Should I try a “get ready with me” video? A walk and talk? (Can I make those work with my unique security concerns?) Can I show people my life without putting myself at risk? Do I want to? Should I have worn more makeup? Less? Am I too formal? Inauthentic? Too old?
Is the problem just… me?
No.
If you’ve been following me for a while—or read my first book—you’ll have heard this story before, but many of you are new here, so bear with me.
In 2018, I spoke at a conference for government communicators, most of whom were representing countries in Russia’s shadow. The information they needed to get to their followers wasn’t cute trending content. It was a national security imperative. But in the wake of revelations about Russia’s influence on social media in the 2016 U.S. elections, Facebook tweaked the algorithm to boost content that was from “friends and family” or about culture and suppress content that was “political.”
This change also affected information from official government sources, and the folks at the conference were annoyed. Luckily, a representative from Facebook was also at the conference. In clicking the “follow” button, one communications professional told her, their citizens had opted in to receiving content from them, but their engagement rates were a fraction of what they had been a few months prior. Were they expected to spend money on ads to reach these followers?
Of course not, the Facebook rep said, in her Zuckerbergian benevolence. But had they thought about making a Group? With the changes in Facebook’s algorithm, Groups were where users went to have “personal” conversations, so they might have more luck reaching their followers that way.
My jaw hit the floor. Facebook was telling this European government representative they couldn’t reach their constituents unless they paid or started a new form of outreach from scratch. The platform had risen to prominence in part because people were using it to learn about and participate in politics, everywhere from the Middle East to Ukraine to the United States. Now, it was suppressing legitimate content in favor of paid ads and rage bait. If you had the resources—like Trump, for instance, who spent an estimated $70 million per month on online ads during the 2016 campaign—you could reliably reach and grow your audience. If you were the communications rep from a small European country, you needed to pay or gamble on experimenting with a new format. If you were a plebe like me, you started constantly contorting yourself and your content into new shapes to fit whatever the platform was prioritizing that month.
In psychology, this is called intermittent reinforcement—“when rewards are handed out inconsistently and occasionally.” It’s the foundation of gambling addictions and toxic relationships, whether with asshole boyfriends or asshole algorithms. One of my best friends, psychologist Dr. Alexandra Stratyner, puts it this way:
The unpredictability of gambling is mimicked in social media algorithms; because the algorithm’s patterns (including manipulation of these patterns on an ongoing basis by social media companies) are concealed, we cannot predict when our posts will be rewarded (e.g. with high viewership) and this, in turn, reinforces a belief that we must be continuously posting. This is understandable, especially when social media is an important aspect of your career, because the consequence of not continuing to engage can be substantial (e.g., loss of livelihood); the potential for what behaviorists would describe as punishment (negative consequences for a behavior, in this case, not being active on social media) is high and potentially high-stakes.
If we’re successful all the time, we get bored. When we’re only successful sometimes, we keep coming back—and platforms want us coming back. Without our content, they can’t survive.1
Since that 2018 discussion, platforms have further incentivized creators to stay locked in through monetization. Creators get compensated for content that performs well, generating more clicks, more views, more engagement. And what performs well? It’s certainly not even-keeled, nuanced explainer videos. The most engaging content is frequently the most enraging content, meaning that the creators who were already chasing intermittent reinforcement are further motivated to shapeshift to pay their bills—and that’s before adding off-platform incentives into the mix.
In a recent viral LinkedIn post, political creator Maria Comstock detailed how she was offered $2,000 per video for left-leaning content and $36,000 for right-leaning content: a 1,700% difference. “For many creators, that delta makes the decision for you,” she writes. “Brand deals aren’t ‘extra.’ They’re rent. They’re healthcare. They’re payroll for editors. When someone offers you the equivalent of several months’ income for one deliverable, it’s not an abstract ethical debate; it’s survival math.”
The impacts of the asshole algorithm aren’t just financial. Online content creators experience suicidal ideation at nearly double the rate of the general U.S. population. They also report worsening burnout over time: “49% of newer creators report burnout, compared to 74% of those with eight or more years of experience.” Moreover, content creators are frequently self-employed, lacking the health insurance or the support networks—from editors to researchers to assistants—that traditional media rely on to create high-quality content and protect their people.
TLDR: asshole algorithms controlled by billionaires in bed with fascists—along with moneyed political interests—are driving people to create content they may not believe so they can keep paying their bills, and it’s driving them to burnout and suicidal thoughts, while polluting our information environment along the way.
Online content creation does not represent a major form of income for me; I do it because I want to reach people and help them better navigate the internet. But even that goal drives me to intense frustration when my content doesn’t perform.
So how should anyone making content for the internet navigate a relationship with an algorithm that turns toxic?
Break up: In late 2023, writer and Substack superstar Emma Gannon announced she was breaking up with Instagram:
“The relationship is finally over after dragging on miserably for a while. I haven’t deleted my account for now, but I’ve handed over the keys to someone else who will post updates for me, sparingly. (I have some upcoming book announcements that will be posted, that’s practically it.) But personally, I won’t use it. I don’t have the app on my phone, I will no longer have access to DMs or find my thumb reaching for a mind-numbing scroll. I haven’t felt this free in ages.”2
My friend Dr. Stratyner calls this abstinence. It was possible for Emma because she had a significant following on another platform (namely Substack) that she cultivated before it became increasingly enshittified (also Substack). She also had help in the form of a friend or colleague who posted for her so she didn’t fully sacrifice the platform. Dr. Stratyner notes that “abstinence—either short-term or permanently—is not a viable option [for some] because social media is central to work.” As a less drastic measure, creators might…
Set boundaries: Dr. Stratyner calls this “harm reduction,” which “might look like more intentional, introspective engagement online.” Creator Kristofer Goldsmith says his “theory of staying sane producing content for these apps” is “fire and forget...I assume everything is doing terrible by default, and just don’t look. I find out about my occasional successes when my wife tells me something went viral.” This seems like the most likely approach for me in the season of life I’m in. I already turned off push notifications and app badges a long time ago; maybe now it’s time for me to turn off notifications in the app altogether. “Fire and forget” might also have the added benefit of bringing back a little more spontaneity to my online creations, which have been almost entirely scripted and engineered since I dealt with major online abuse.
Get support: I’m lucky to have a therapist that I’ve leaned on throughout the ups and downs of my career, on and offline. That’s critical, according to Dr. Stratyner: “If your relationship to social media is getting in the way of self-care (e.g. sleep, eating, hygiene), your relationships, or other important roles (e.g., parenting, work/school performance), or if you are experiencing symptoms of, for example, depression or anxiety (in particular, suicidal thoughts), it’s time to ask for help.” Initiatives like CreatorCare aim to support content creators through the unique struggles that the asshole algorithms put them through.
If you or someone you know is struggling or in crisis, help is available. Call or text 988 or chat 988lifeline.org. You can also reach Crisis Text Line by texting HOME to 741741. If you are in immediate danger, call 911, or go to the nearest emergency room.
If you’re consuming content but not creating it, I hope this piece pulls back the curtain on the realities of the creator economy that may not be apparent when mindlessly scrolling your feeds. Maybe that fringe creator the algorithm suggested to you is saying crazy things because their livelihood depends on it. Try not to share or comment out of emotion alone. Make deliberate, reasoned choices about which content you consume, support, and engage with… if you can resist the pull of the intermittent reinforcement you might get from your story or comment racking up hundreds of likes, that is. 🧭
Well, maybe in the AI slop era they can, but that’s a topic for another post.
Emma is still on Instagram, seemingly posting more frequently, including to her stories, though it’s very possible her account is still managed by someone else.