Power Rankings

The researchers, builders, and thought leaders shaping the future of AI agent memory. From pioneering papers to production systems, these are the people defining how agents remember.

Researcher: Academic/Research focus
Builder: Industry/Product focus
Both: Research + Building

🏆 Top 9

#1

Harrison Chase

Co-founder & CEO @ LangChain

builder

As co-founder and CEO of LangChain, Chase is a central figure in LLM agents and memory systems. LangChain is the de facto standard agent framework (for Python and JS) that first connected LLMs with tools and actions. Under Chase's guidance, LangChain has introduced high-level memory modules (vector memories, summarization memories, etc.) and launched LangGraph for managing agent context. With a Sequoia-backed platform valued at over $1B, he shapes much of the tooling and best practices around RAG and agent memory.

Key Contributions:

  • Created LangChain, the de facto standard LLM agent framework
  • Introduced high-level memory modules (vector, summarization, etc.)
  • Launched LangGraph for agent context management
  • Built Sequoia-backed platform valued over $1B
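The memory-module pattern described above (a verbatim buffer of recent turns plus a summary of older ones) can be sketched in a few lines. The names here (`ConversationMemory`, `summarize`) are illustrative stand-ins, not LangChain's actual API, and the summarizer is a toy placeholder for what would be an LLM call.

```python
from collections import deque

def summarize(turns):
    # Toy stand-in for an LLM summarization call.
    return f"[summary of {len(turns)} earlier turns]"

class ConversationMemory:
    """Rolling buffer of recent turns; older turns get compressed to a summary."""

    def __init__(self, max_buffer=4):
        self.buffer = deque()        # verbatim recent turns
        self.max_buffer = max_buffer
        self.evicted = []            # older turns awaiting summarization

    def add_turn(self, turn):
        self.buffer.append(turn)
        while len(self.buffer) > self.max_buffer:
            self.evicted.append(self.buffer.popleft())

    def context(self):
        # Prompt context = summary of old turns + recent turns verbatim.
        parts = []
        if self.evicted:
            parts.append(summarize(self.evicted))
        parts.extend(self.buffer)
        return "\n".join(parts)
```

The trade-off this pattern makes is classic: exact recall for recent turns, lossy compression for everything older, keeping the prompt within a fixed token budget.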

#2

Jerry Liu

Co-founder & CEO @ LlamaIndex

builder

Liu is co-founder/CEO of LlamaIndex (formerly GPT-Index), a popular data framework for LLM applications. LlamaIndex provides structured data ingestion and vector-based retrieval (RAG) tools that effectively serve as external memory for LLMs. Under Liu's leadership, LlamaIndex became widely adopted for building LLM knowledge graphs and memory buffers, cementing his influence on memory architectures (like semantic, document, and graph memory) in the agentic AI community.

Key Contributions:

  • Founded LlamaIndex, the leading RAG/data framework
  • Pioneered vector-based retrieval tools for LLM memory
  • Built widely-adopted LLM knowledge graph tooling
  • Shaped semantic, document, and graph memory architectures
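Vector-based retrieval as external memory, the core idea behind the tooling described above, reduces to "embed, store, rank by similarity." The sketch below is a minimal illustration under toy assumptions: a bag-of-words counter stands in for a learned embedding model, and `VectorMemory` is a hypothetical name, not LlamaIndex's API.

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: word-count vector. A real system uses a learned model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorMemory:
    def __init__(self):
        self.docs = []   # (embedding, text) pairs

    def add(self, text):
        self.docs.append((embed(text), text))

    def retrieve(self, query, k=2):
        # Rank stored memories by similarity to the query, return the top k.
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]
```

Retrieved texts are then injected into the LLM's prompt, which is what makes the store function as "external memory" rather than a plain database.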

#3

Toran Bruce Richards

Creator of AutoGPT @ Significant Gravitas

builder

The creator of AutoGPT, Richards (aka "Significant Gravitas") ignited mainstream interest in autonomous LLM agents. AutoGPT is an open-source agent that automatically breaks goals into sub-tasks and manages its own short-term memory to complete long-term objectives. Although not an academic, Richards became one of the field's most visible innovators when his early-2023 GitHub release of AutoGPT went viral. His project demonstrated both the power and the limitations of LLM agents' memory, spurring efforts to build better memory modules.

Key Contributions:

  • Created AutoGPT, sparking the autonomous agent movement
  • Demonstrated goal decomposition with memory management
  • Inspired industry-wide focus on agent memory limitations
  • Built one of the most-starred AI repos on GitHub
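The loop attributed to AutoGPT above (decompose a goal, execute sub-tasks, carry results forward) can be sketched as follows. `decompose` and `execute` are toy stand-ins for LLM calls, and the function names are illustrative, not AutoGPT's actual code.

```python
def decompose(goal):
    # Stand-in planner: a real agent would ask an LLM for sub-tasks.
    return [f"research {goal}", f"draft {goal}", f"review {goal}"]

def execute(task, memory):
    # Stand-in executor: a real agent would prompt an LLM with task + memory.
    return f"done: {task} (with {len(memory)} prior results in context)"

def run_agent(goal):
    memory = []                      # short-term memory of completed steps
    for task in decompose(goal):
        result = execute(task, memory)
        memory.append(result)        # feed results back into later steps
    return memory
```

The limitation the text mentions is visible even in this sketch: `memory` grows with every step, so without summarization or retrieval the context eventually overflows, which is exactly what motivated later memory modules.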

#4

Ahmed Hassan Awadallah

Senior Researcher, AI Frontiers @ Microsoft Research

researcher

A senior researcher at MSR AI Frontiers, Awadallah co-led AutoGen, Microsoft's multi-agent LLM framework. AutoGen lets developers compose teams of LLM-based agents with shared memory and knowledge (including "teachable" memory updates). The published AutoGen paper and tools (COLM 2024, open-source GitHub) are widely cited by industry as enabling agents with robust memory modules, making Awadallah a key figure in agentic memory systems.

Key Contributions:

  • Co-led Microsoft AutoGen multi-agent framework
  • Developed teachable memory updates for agents
  • Published influential COLM 2024 paper
  • Open-sourced tools enabling robust agent memory
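The "teachable" memory idea described above, where an agent saves facts a user teaches it and recalls relevant ones in later conversations, can be illustrated with a rough sketch. This shows the pattern only; the class and heuristic are hypothetical, not AutoGen's API.

```python
class TeachableStore:
    """Persists user-taught facts across chats and surfaces relevant ones."""

    def __init__(self):
        self.facts = []

    def maybe_learn(self, message):
        # Toy heuristic: treat "remember that ..." messages as teachings.
        # A real system would use an LLM to detect and extract teachings.
        prefix = "remember that "
        if message.lower().startswith(prefix):
            self.facts.append(message[len(prefix):])
            return True
        return False

    def relevant_facts(self, message):
        # Toy relevance test: any word overlap. Real systems use embeddings.
        words = set(message.lower().split())
        return [f for f in self.facts if words & set(f.lower().split())]
```

At the start of each new conversation, the agent would prepend `relevant_facts(...)` to its prompt, which is what makes the memory persist across sessions.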

#5

Wujiang Xu

AI Researcher @ Rutgers University / AIOS Foundation

researcher

An AI researcher at Rutgers and the AIOS Foundation, Xu introduced A-Mem: Agentic Memory for LLM agents. His work applies the Zettelkasten linking method to LLM memory: new "memories" automatically get structured notes and links to related past memories, with dynamic evolution of context vectors. By formalizing agent-driven memory updating, A-Mem is among the first systems to explicitly target long-term memory organization for LLM agents.

Key Contributions:

  • Created A-Mem framework for agentic memory
  • Applied Zettelkasten method to LLM memory organization
  • Pioneered dynamic context vector evolution
  • Formalized agent-driven memory updating
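The Zettelkasten-style linking described above can be sketched loosely: each new memory becomes a note, automatically linked to related past notes. Keyword overlap stands in for embedding similarity here, and the `NoteMemory` class is an illustration of the idea, not the A-Mem implementation.

```python
class NoteMemory:
    """Each memory is a note; related notes get linked bidirectionally."""

    def __init__(self, link_threshold=2):
        self.notes = []              # each note: {"text", "keywords", "links"}
        self.link_threshold = link_threshold

    def add(self, text):
        keywords = set(text.lower().split())
        note = {"text": text, "keywords": keywords, "links": []}
        for i, past in enumerate(self.notes):
            # Link both ways when enough keywords overlap (toy similarity).
            if len(keywords & past["keywords"]) >= self.link_threshold:
                note["links"].append(i)
                past["links"].append(len(self.notes))
        self.notes.append(note)
        return note
```

Note that adding a memory updates the links of older notes too, a simple analogue of the "dynamic evolution" of existing memories that the description mentions.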

#6

Charles Packer

Co-founder & CEO @ Letta (formerly MemGPT)

both

A Berkeley PhD alumnus and now co-founder/CEO of the startup Letta, Packer was the lead author of MemGPT. He originated the idea of teaching an LLM to manage "main memory" vs. "external memory" much like an operating system. His open-source MemGPT code and talks have popularized this OS-inspired memory model for LLM agents, earning him a reputation as a technical innovator in LLM memory systems.

Key Contributions:

  • Lead author of the influential MemGPT paper
  • Originated OS-inspired memory management for LLMs
  • Founded Letta to commercialize agent memory research
  • Popularized main vs. external memory architecture
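The OS-inspired idea described above, a bounded "main context" plus an unbounded external store with eviction and retrieval (like paging to disk), can be sketched as follows. This is an illustrative toy, not MemGPT's actual design; in MemGPT the LLM itself issues the memory-management calls.

```python
class PagedMemory:
    """Small always-in-context memory backed by a large archival store."""

    def __init__(self, main_capacity=3):
        self.main = []               # fits in the LLM's context window
        self.external = []           # archival store, searched on demand
        self.main_capacity = main_capacity

    def remember(self, item):
        if len(self.main) >= self.main_capacity:
            # Evict the oldest item to external storage ("swap out").
            self.external.append(self.main.pop(0))
        self.main.append(item)

    def recall(self, query):
        # Check main context first, then search the external store ("swap in").
        hits = [m for m in self.main if query in m]
        hits += [m for m in self.external if query in m]
        return hits
```

The hierarchy is the point: the agent's effective memory is unbounded even though its context window is not, because anything evicted remains recoverable via `recall`.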

#7

Ion Stoica

Professor & Co-founder (Databricks, Anyscale) @ UC Berkeley

both

A prominent computer systems researcher, Prof. Stoica co-led development of MemGPT, an OS-inspired memory-management framework for LLMs. As co-author on the MemGPT paper, he helped design a hierarchical memory architecture (main vs. external memory) that lets LLM agents "swap" information to extend context. Stoica's leadership in scaling AI systems and his high profile at Berkeley AI lend significant clout to the agentic-memory field.

Key Contributions:

  • Co-authored MemGPT paper with hierarchical memory design
  • Co-founded Databricks and Anyscale
  • Co-created the Apache Spark and Ray distributed systems
  • Brought systems research rigor to agent memory

#8

Mariya Toneva

Research Group Leader @ Max Planck Institute for Software Systems

researcher

A researcher specializing in memory and language, Toneva co-authored recent position papers on episodic memory in LLM agents. Formerly a senior scientist at Meta/Facebook AI, she now works at MPI-SWS and has championed modeling how LLMs can store and retrieve episodic details. Her work on integrating long-term, context-specific memory (drawing on cognitive memory theory) makes her a key academic influencer in agentic memory research.

Key Contributions:

  • Co-authored position papers on episodic memory in LLM agents
  • Champions cognitive memory theory for AI
  • Research on long-term, context-specific memory integration
  • Bridges neuroscience and LLM memory research

#9

Kenneth A. Norman

Professor of Psychology & Neuroscience @ Princeton University

researcher

A cognitive neuroscientist who helped pioneer computational memory models, Dr. Norman co-authored recent work evaluating episodic memory in LLMs. His Princeton lab's models of human memory inform memory architectures for AI, and he is a lead author on an arXiv paper on LLMs' sequence-recall tasks (episodic memory). Norman's insights into how agents remember context and temporal order position him as a thought leader on memory fundamentals in autonomous LLM systems.

Key Contributions:

  • Pioneered computational models of human memory
  • Lead author on LLM episodic memory evaluation
  • Research on sequence-recall and temporal memory
  • Bridges cognitive neuroscience and AI memory
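A sequence-recall probe of the kind mentioned above can be sketched in miniature: show a sequence of events, then test whether the system remembers which event followed which (temporal order, a hallmark of episodic memory). The helper names are hypothetical, and a plain dictionary of answers stands in for the LLM being evaluated.

```python
def build_successor_probe(events):
    # Ground truth: for each event, the event that immediately followed it.
    return {a: b for a, b in zip(events, events[1:])}

def score_recall(probe, model_answers):
    # Fraction of cues for which the model recalls the correct successor.
    correct = sum(1 for cue, target in probe.items()
                  if model_answers.get(cue) == target)
    return correct / len(probe)
```

The same scaffold generalizes to the harder probes such work explores, e.g. recalling what happened *before* an event, or how far apart two events were.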

Honorable Mentions

More influential figures advancing agent memory, reasoning, and AI systems.

10

Taranjeet Singh

Mem0

Co-founder & CEO building the memory layer for AI applications

11

Denny Zhou

Google DeepMind

Research on chain-of-thought reasoning and memory in LLMs

12

Jason Wei

OpenAI (formerly Google)

Co-authored chain-of-thought prompting and emergent abilities research

13

Shunyu Yao

Princeton / OpenAI

Creator of ReAct framework combining reasoning and acting

14

Noah Shinn

Northeastern University

Co-author of Reflexion, enabling agents to learn from memory

15

Joon Sung Park

Stanford

Lead author of Generative Agents (AI town simulation)

16

Ofir Press

Princeton

Research on self-ask and compositional reasoning in LLMs

17

Zihao Wang

UC Berkeley

Research on agent architectures and memory systems

18

Yujia Qin

Tsinghua University

Research on tool-augmented language models

19

Chi Wang

Microsoft Research

Co-lead of AutoGen, multi-agent conversation systems

20

Daniel Khashabi

Johns Hopkins / AI2

Research on reasoning and knowledge in NLP systems

21

Sewon Min

University of Washington

Research on in-context learning and memory in LLMs

22

Tianyi Zhang

Stanford

Research on code generation and agent capabilities

23

Saurav Kadavath

Anthropic

Research on language model capabilities and self-knowledge

24

Andy Zou

CMU

Research on AI safety and adversarial robustness

25

Eric Zelikman

Stanford

Co-author of STaR (Self-Taught Reasoner) for iterative learning

26

Aman Madaan

CMU

Author of Self-Refine, teaching LLMs to improve outputs

27

Hyung Won Chung

OpenAI (formerly Google)

Research on instruction tuning and model scaling

28

Yoav Shoham

AI21 Labs

Research on knowledge and reasoning in AI systems

29

Percy Liang

Stanford

HELM benchmark creator, research on foundation models

30

Christopher Manning

Stanford

Pioneer in NLP and neural network approaches to language

31

Yann LeCun

Meta AI

Chief AI Scientist, research on world models and memory

32

Jeff Dean

Google DeepMind

Chief Scientist, led Google Brain and large-scale ML infrastructure (TensorFlow, TPUs)

33

Ilya Sutskever

Safe Superintelligence Inc

Former OpenAI Chief Scientist, co-invented seq2seq

34

Andrej Karpathy

Independent

Former Tesla AI Director, educator on neural networks

35

Jim Fan

NVIDIA

Research on foundation agents and embodied AI

36

Alexandr Wang

Scale AI

CEO building data infrastructure for AI training

37

Arthur Mensch

Mistral AI

CEO, building efficient open-weight LLMs

38

Tri Dao

Together AI / Princeton

Creator of FlashAttention, enabling longer context

39

Matei Zaharia

Databricks / UC Berkeley

Co-creator of Apache Spark, MLflow, and Dolly

Know someone who should be on this list? Rankings are updated periodically based on contributions to the field.

Suggest an addition →