Anyone who’s mapped a route via GPS, taken advantage of predictive text, or let Spotify choose a playlist has used AI. But it’s one thing to reap the rewards of a new technology and another to interact with it directly. For most people, the November 2022 launch of OpenAI’s ChatGPT was the moment AI truly came to life. Suddenly, students could more easily fudge their term papers, job seekers could create a hundred targeted résumés with a few keystrokes, and meme makers had a whole new garden to play in.

ChatGPT relies on large language models, or LLMs, and those require the input of millions of lines of text from a wide array of sources, all originally written by human beings. The technology is still clunky—for instance, AI-generated advertisements for the much-maligned “Willy Wonka Experience” in Glasgow promised “cartchy tuns, exarserdray lollipops, a pasadise of sweet teats.” As ChatGPT continues to hone its ability to mimic human conversation, new books explore the implications of its training and use.

Neuroscientist Terrence J. Sejnowski offers a history of the development of LLMs and considers the distinctions between intelligence and information in ChatGPT and the Future of AI: The Deep Language Revolution (MIT, Oct.). He probes the reach and limits of LLMs’ abilities, including how much “independent” thinking they may produce, how the underlying algorithms work, and the energy-efficient technologies needed to power them at scale. And he test-drives his hypotheses: with the help of LLMs, he writes, “this book took about half the time” he spent on his previous work, 2018’s The Deep Learning Revolution.

Where Sejnowski focuses attention on the tech itself, Bloomberg opinion columnist Parmy Olson follows the competition between two companies jostling for dominance in Supremacy: AI, ChatGPT, and the Race That Will Change the World (St. Martin’s, Sept.). Sam Altman, CEO of OpenAI, and DeepMind CEO Demis Hassabis raced to bring the tech to market, heedless of its risks, according to Olson, author of the hacker exposé We Are Anonymous. ChatGPT, she says, is rife with inherent training bias and flawed data, and the draw of advertising revenue made the technology available to users long before it should have left the lab.

“Their story is one of idealism but also one of naivety and ego, and of how it can be virtually impossible to keep an ethical code in the bubbles of Big Tech and Silicon Valley,” Olson writes of Altman and Hassabis. The upshot: all-too-human failings are inextricable from ChatGPT’s DNA and, perhaps, its future as well.
