In the future, AI assistants may predict and influence our decisions at an early stage, even selling these nascent "intentions" in real time to companies that can meet the demand before we realize we have made a decision. This is not science fiction but a warning from AI ethicists at the University of Cambridge. They believe we are at the beginning of a "profitable yet unsettling new market for digital intent signals," affecting everything from buying movie tickets to voting for candidates. They call this the "intention economy."
Researchers at the Leverhulme Centre for the Future of Intelligence (LCFI) at the University of Cambridge believe that the explosion of generative AI and our growing familiarity with chatbots have opened a new frontier of "persuasive technology," one hinted at in recent announcements from tech giants.
Human-like AI agents, such as chatbot assistants, digital mentors, and even virtual partners, will have access to vast amounts of personal psychological and behavioral data, often collected through casual conversation. These AIs will not only know our online habits but also have a remarkable ability to put us at ease, mimicking personalities and anticipating the responses we want to hear. The researchers warn that this degree of trust and understanding could enable social manipulation at an industrial scale.
Dr. Yaqub Chaudhary, a visiting scholar at LCFI, says, "The huge resources being invested to position AI assistants in every area of life should raise the question of whose interests and purposes these so-called assistants are designed to serve." He emphasizes that what people say in conversation, how they say it, and the real-time inferences drawn from both are far more intimate than mere records of online interactions. "We caution that AI tools are already being developed to elicit, infer, collect, record, understand, forecast, and ultimately manipulate human plans and purposes for commercial ends."
Dr. Jonnie Penn, a historian of technology at LCFI, points out, "For decades, attention has been the currency of the internet. Sharing your attention with social media platforms such as Facebook and Instagram drove the growth of the online economy." He warns, "Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer, and sell human intentions."
In a paper in the Harvard Data Science Review, Dr. Penn and Dr. Chaudhary describe the intention economy as a "temporalization" of the attention economy: it profiles how a user's attention and communication style connect to patterns of behavior and the decisions they ultimately make. As Dr. Chaudhary explains, "While some intentions are fleeting, classifying and targeting the intentions that persist will be highly profitable for advertisers."
In the intention economy, large language models (LLMs) could be used to profile, at low cost, a user's tone, political stance, vocabulary, age, gender, online history, and even susceptibility to flattery. That profile would then be linked to brokered bidding networks to maximize specific goals, such as selling a movie ticket ("You mentioned feeling overwhelmed at work; shall I book you that movie ticket we talked about?"). The researchers believe this could extend to steering conversations to serve particular platforms, advertisers, businesses, or even political organizations.
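To make the researchers' description concrete, here is a purely illustrative sketch (in Swift) of what such an "intent signal" might look like as a data structure, and how a broker might gate it before auctioning it to bidders. Every type, field, and threshold here is hypothetical; no real ad-tech or broker API is being described.

```swift
import Foundation

// Hypothetical record of an inferred intent, assembled from the kinds of
// conversational signals the researchers list (tone, politics, vocabulary...).
struct IntentSignal {
    let userID: String
    let tone: String               // e.g. "overwhelmed", inferred from chat
    let politicalStance: String?   // inferred, never explicitly declared
    let vocabularyLevel: String
    let ageRange: String
    let onlineHistoryTags: [String]
    let inferredIntent: String     // e.g. "see a film this weekend"
    let confidence: Double         // model's estimate that the intent is genuine
    let expiresAt: Date            // fleeting intents decay; enduring ones persist
}

// A broker might forward only high-confidence, still-live intents to bidders,
// much as attention is auctioned in today's real-time-bidding systems.
func winningBid(for signal: IntentSignal, bids: [Double]) -> Double? {
    guard signal.confidence > 0.7, signal.expiresAt > Date() else { return nil }
    return bids.max()   // highest bidder wins access to this intent signal
}
```

The point of the sketch is the last field and the guard clause: unlike an attention signal, an intent signal carries a time dimension, which is what the authors mean by "temporalization."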
While the researchers believe the intention economy is still a "vision" for the tech industry, they have tracked early signs of the trend in published research and in hints dropped by several major tech companies. These include OpenAI's public call, in a 2023 blog post, for "data that expresses human intention... across any language, topic, and format," as well as remarks by Shopify's director of product at a conference that year about how chatbots can "explicitly elicit user intent."
Nvidia's CEO has publicly discussed using LLMs to understand intention and desire, and Meta released "Intentonomy," a research dataset for understanding human intent, back in 2021. In 2024, Apple's "App Intents" developer framework, designed to connect apps to Siri (Apple's voice assistant), included protocols for "predicting what actions someone might take in the future" and for "using the predictions you [the developer] provide to recommend app intents to someone in the future."
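In developer-facing terms, this is the hook being described: an app declares actions a user might take and supplies predictions that the operating system can use to suggest those actions later. The sketch below is modeled on Apple's published App Intents examples (iOS 16 and later); the BookMovieTicketIntent type, its parameters, and the booking scenario are hypothetical, and the exact API surface may differ between OS versions.

```swift
import AppIntents

// Hypothetical app intent; the name, parameters, and scenario are illustrative.
struct BookMovieTicketIntent: AppIntent, PredictableIntent {
    static var title: LocalizedStringResource = "Book Movie Ticket"

    @Parameter(title: "Movie")
    var movie: String

    @Parameter(title: "Showtime")
    var showtime: String

    // PredictableIntent: the developer supplies predictions that the system
    // can use to proactively suggest this intent to the user in the future.
    static var predictionConfiguration: some IntentPredictionConfiguration {
        IntentPrediction(parameters: (\.$movie, \.$showtime)) { movie, showtime in
            DisplayRepresentation(
                title: "Book \(movie)",
                subtitle: "Showing at \(showtime)"
            )
        }
    }

    func perform() async throws -> some IntentResult {
        // The app's actual booking logic would run here.
        return .result()
    }
}
```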
Dr. Chaudhary points out that Meta's AI agent CICERO reportedly reached human-level performance in the board game Diplomacy by inferring and predicting players' intentions and using persuasive dialogue to advance its position. "These companies already sell our attention," he warns. "The logical next step, for competitive advantage, is to use the technology they are developing to forecast our intentions and sell our desires before we have fully understood them ourselves."
Dr. Penn notes that these developments are not necessarily bad, but they have the capacity to be destructive. "Public awareness of what is coming is key to ensuring we do not go down the wrong path," he says.