
Ted Sumers

Anthropic

My research develops computational models of communication, with an emphasis on the real-world consequences of speech acts.
For example, how will an utterance affect a listener’s beliefs and actions? And how do people (or LLMs) weigh these outcomes? Answering these questions can both further our understanding of humans and help ensure the deployment of LLMs goes well for society.

I’m currently working on LLM safety at Anthropic. Previously, I got a PhD in cognitive science (advised by Tom Griffiths at Princeton and supported by an NDSEG fellowship, with an internship at DeepMind). Before the PhD, I was a data scientist and engineering manager at Automatic Labs (2013-2014) and Uber (2014-2019).

Cognitive Architectures for Language Agents

Recent efforts have augmented Large Language Models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, creating a new class of “Language Agents”. While these agents have achieved substantial empirical success, we presently lack organizing principles to measure progress or guide further development. We draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to survey and organize a large body of recent work, then identify actionable directions towards more capable agents.
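To make the framing concrete, here is a minimal Python sketch of a CoALA-style agent. The class and method names are illustrative (not from any reference implementation), and the `llm` parameter stands in for any prompt-to-text callable; the structure follows the paper’s division into memory modules (working, episodic, semantic, procedural), internal actions (retrieval, reasoning, learning), external grounding actions, and a decision loop.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """CoALA's modular memories."""
    working: dict = field(default_factory=dict)     # current context
    episodic: list = field(default_factory=list)    # past experiences
    semantic: list = field(default_factory=list)    # facts / knowledge
    procedural: list = field(default_factory=list)  # skills / code

class LanguageAgent:
    def __init__(self, llm):
        self.llm = llm          # any callable: prompt string -> text
        self.memory = Memory()

    # --- structured action space: internal actions ---
    def retrieve(self, query):
        # Substring match is a stand-in for real (e.g., embedding) retrieval.
        return [fact for fact in self.memory.semantic if query in fact]

    def reason(self, prompt):
        # Use the LLM to update working memory with a new thought/plan.
        thought = self.llm(prompt)
        self.memory.working["thought"] = thought
        return thought

    def learn(self, fact):
        # Write new information into long-term (semantic) memory.
        self.memory.semantic.append(fact)

    # --- structured action space: external (grounding) action ---
    def ground(self, command):
        # Stand-in for acting on an external environment or API.
        print(f"[env] {command}")

    # --- generalized decision-making loop ---
    def step(self, observation):
        self.memory.episodic.append(observation)
        facts = self.retrieve(observation)
        plan = self.reason(
            f"Observation: {observation}\nFacts: {facts}\nNext action?"
        )
        self.ground(plan)
```

A toy run, with a dummy model in place of a real LLM:

```python
agent = LanguageAgent(llm=lambda prompt: "reply politely")
agent.learn("the user prefers short answers")
agent.step("user asks for the weather")
```

The point of the sketch is the separation of concerns: swapping the retrieval scheme, adding a memory module, or changing the decision loop each touches one place, which is the kind of organizing principle CoALA is meant to provide.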