In Outrage Machine (Legacy Lit, July), journalist Rose-Stockwell analyzes how social media negatively impacts users.

What do most people misunderstand about social media?

We tend to think that we are somewhat immune to how it might be training us to be slightly more extreme versions of ourselves—for some people, much more extreme versions of themselves. People think that they have a little bit more control than they perhaps do.

How would you summarize the patterns you identified in prior information revolutions?

For technologies that confer some kind of power, there is this thing called “the dark valley,” in which the harms of a technology are invisible for a period as it becomes widely adopted. Our attention is diverted for that time, but then there’s a moral panic as we arrive at the bottom of this dark valley, and eventually we come out the other side and reach a plateau that is higher than before in terms of human thriving. There’s always, particularly with media technologies, a point of significant confusion, and even violence, that comes with the new forms we’re exposed to.

You write that “as far as we know... algorithms have no understanding of what they’re creating.” Are you worried about the possibility of machine sentience?

It’s really incredible how little we actually understand about what’s happening inside of these more sophisticated algorithms, how little of it is actually legible to us. We don’t think there’s any sentience, but we can’t say that with high confidence. We don’t think they have a sense of self, but the level of legibility into what they’re actually doing is so small compared to how much we are relying on them and how influential they are in our lives. That is a strange and creepy truth of the current state of the field, that we really don’t know as much as we should about what’s happening.

How have advances in AI exacerbated the problem?

I think that the ability to create convincing, human-like videos, faces, and text is going to approach a point in the next year when most of us are not going to be able to tell whether something was generated by a human or a machine. The dangers of that are so much greater than I think people really understand. For example, the cost of making deepfakes is now approaching zero, so there’s going to be a step-function increase in the quantity and quality of these things. My friend Tristan Harris says that 2024 might be the last human election, because after that it will be too difficult to distinguish politicians’ real remarks from artificially generated falsehoods.