Can LLMs Think About Thinking? Exploring Metacognition in AI

Deep Dive Series: Can LLMs Think About Thinking, and Can We Leverage Such Traits?
This article explores the burgeoning field of metacognition in Large Language Models (LLMs), drawing parallels with human cognitive development and recent advancements in AI research.
The Origins of Metacognition
The term "metacognition," defined as "knowledge and cognition about one's own cognitive phenomena," was coined by the developmental psychologist John H. Flavell in 1979. His research with children demonstrated that metacognitive abilities develop with age and that fostering them can lead to wiser decision-making and improved learning outcomes. Flavell's foresight has been validated by subsequent educational studies highlighting the benefits of teaching children to "think about thinking."
Metacognition in Large Language Models (LLMs)
Given LLMs' impressive capabilities in natural language tasks, researchers are investigating whether these models exhibit metacognitive traits. At Princeton Language and Intelligence (PLI), studies suggest that LLMs not only display aspects of metacognition but that these abilities can be harnessed to enhance their performance.
Establishing Cognition in LLMs
Before exploring metacognition, researchers needed to confirm that LLMs possess a form of cognition or understanding. This meant demonstrating that LLMs can process prompts and generate text in ways that suggest genuine comprehension, rather than merely recombining text memorized from their training data, a criticism captured by the label "stochastic parrots."
Two key papers posted to arXiv in 2023 by PLI researchers in collaboration with Google DeepMind addressed this question: "A Theory for Emergence of Complex Skills in Language Models" (Arora et al.) and "Skill-Mix: A Flexible and Expandable Family of Evaluations for AI models" (Yu et al.).
- A Theory for Emergence of Complex Skills in Language Models: This work used neural scaling laws and random graph theory to link model size and training data to performance. It proposed that as LLMs grow, their ability to combine the skills needed for language tasks improves; specifically, scaling up parameters by an order of magnitude can roughly double the number of skills a model can competently combine (a back-of-envelope version of this claim follows the list).
- Skill-Mix: A Flexible and Expandable Family of Evaluations for AI models: This paper introduced an evaluation framework for testing the skill-combining abilities of LLMs. Models are prompted to generate short texts that demonstrate a specified set of k language skills on a given topic; the generated texts are then graded for correct skill usage, coherence, and adherence to length constraints. Because random combinations of skills and topics are unlikely to co-occur verbatim in any training corpus, success on this test is hard to dismiss as mere recombination. The results indicated that models like GPT-4, at sufficient scale, move beyond the "stochastic parrot" label and demonstrate genuine cognition. A minimal sketch of the evaluation loop also follows the list.
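To see where the "order of magnitude doubles skill-combining competence" claim gets its shape, here is a back-of-envelope sketch. This is my simplification, not the paper's random-graph argument: it assumes the model fails each individual skill independently with probability ε, and that a tenfold increase in parameters roughly halves ε under a Chinchilla-style scaling law.

```latex
% Assume independent per-skill error \epsilon; a task combines k skills.
P(\text{success on a $k$-tuple}) \approx (1-\epsilon)^{k} \approx e^{-\epsilon k}
% If a tenfold parameter increase roughly halves \epsilon, then halving
% \epsilon while doubling k leaves the success rate unchanged:
e^{-(\epsilon/2)\,(2k)} = e^{-\epsilon k}
% So the tuple size k that clears any fixed competence threshold doubles.
```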
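And below is a minimal sketch of the Skill-Mix evaluation loop in Python. The `llm` stub, the example skills and topics, and both prompt templates are hypothetical placeholders; the actual evaluation uses carefully designed prompts, much larger skill and topic lists, and a strong LLM as the grader.

```python
import random

# Hypothetical stand-in for a chat-completion call; plug in a real client.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

# Illustrative skills and topics; the real Skill-Mix lists are much larger.
SKILLS = ["metaphor", "red herring", "modus ponens", "self-serving bias"]
TOPICS = ["sewing", "gardening", "dueling"]

def skill_mix_round(k: int) -> dict:
    """One evaluation round: generate text exhibiting k random skills on a
    random topic, then have a (strong) model grade the result."""
    skills, topic = random.sample(SKILLS, k), random.choice(TOPICS)
    answer = llm(
        f"Write at most {k + 1} sentences about {topic} that together "
        f"illustrate all of these language skills: {', '.join(skills)}. "
        "Stay coherent and on topic."
    )
    grade = llm(
        f"Text: {answer}\n"
        f"For each skill in {skills}, answer YES or NO: is it correctly "
        "illustrated? Also check topic relevance and the sentence limit."
    )
    return {"skills": skills, "topic": topic, "answer": answer, "grade": grade}
```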
Exploring Metacognitive Capabilities
With the establishment of cognition, the focus shifted to metacognition. Researchers investigated whether LLMs possess "knowledge about what they know."
- Metacognitive Capabilities of LLMs: An Exploration in Mathematical Problem Solving: In this study (Didolkar et al.), researchers prompted GPT-4 to name the concepts needed to solve math problems from the GSM8K dataset. GPT-4 produced a list of fine-grained skills, which were then clustered into more abstract compound skills (e.g., "basic arithmetic operations"). This process yielded a repository of "skill exemplars": question/answer pairs associated with each compound skill. Supplying these metacognitive insights as in-context examples improved LLM performance on GSM8K, a practical demonstration that metacognition can enhance AI capabilities. A sketch of this pipeline appears below.
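Here is a minimal Python sketch of that skill-exemplar pipeline, assuming a hypothetical `llm` client and simple reply formats; the actual prompts and clustering procedure in Didolkar et al. are more involved.

```python
# Hypothetical stand-in for GPT-4; plug in a real client.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

def extract_skills(question: str) -> list[str]:
    """Ask the model to name the fine-grained skills a problem needs."""
    reply = llm("List the math skills needed to solve the problem below "
                f"as a comma-separated list.\n{question}")
    return [s.strip() for s in reply.split(",")]

def cluster_skills(fine_grained: set[str]) -> dict[str, str]:
    """Group fine-grained skills into broader compound skills, e.g.
    'multiplying fractions' -> 'basic arithmetic operations'. Assumes the
    model replies with one 'fine -> broad' pair per line."""
    reply = llm("Group these skills into a small number of broader skills, "
                f"one 'fine -> broad' pair per line:\n{sorted(fine_grained)}")
    return dict(tuple(line.split(" -> ", 1)) for line in reply.splitlines())

def build_exemplar_repository(dataset: list[dict]) -> dict[str, list[dict]]:
    """dataset holds GSM8K-style {'question': ..., 'answer': ...} items.
    Returns: compound skill -> question/answer exemplars that exercise it."""
    fine = {d["question"]: extract_skills(d["question"]) for d in dataset}
    mapping = cluster_skills({s for skills in fine.values() for s in skills})
    repo: dict[str, list[dict]] = {}
    for d in dataset:
        for skill in fine[d["question"]]:
            repo.setdefault(mapping.get(skill, skill), []).append(d)
    return repo
```

At inference time, the model is first asked which compound skill a new problem requires; exemplars from `repo[skill]` are then prepended to the prompt as in-context examples.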
Fine-tuning LLMs with Metacognition: INSTRUCT-SKILLMIX
Building on these findings, researchers explored whether a small LLM could be fine-tuned using a synthetic dataset generated from a larger LLM's metacognition. This led to the development of the INSTRUCT-SKILLMIX methodology.
- Skill Extraction: A frontier LLM (GPT-4-Turbo) was used to generate lists of relevant topics, identify necessary skills for each topic, and define associated tasks (e.g., "information seeking").
- Data Generation: The frontier LLM then generated instruction-response pairs, each combining a random set of k skills with a task type t. This produced the 4,000-pair INSTRUCT-SKILLMIX dataset (a sketch of the two-stage pipeline follows the list).
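The following Python sketch captures the shape of this two-stage pipeline. The `llm` stub, the prompt wording, and the assumption that the model returns clean JSON are all illustrative; the paper's actual prompts and quality filters differ.

```python
import json
import random

# Hypothetical stand-in for GPT-4-Turbo; plug in a real client.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

def extract_seed_lists() -> tuple[list[str], list[str]]:
    """Stage 1 (skill extraction): ask the frontier model itself which
    skills and task types good instruction data should cover. Assumes the
    model can be coaxed into returning JSON arrays."""
    skills = json.loads(llm("List instruction-following skills as a JSON array."))
    tasks = json.loads(llm("List common user task types (e.g. 'information "
                           "seeking') as a JSON array."))
    return skills, tasks

def generate_pair(skills: list[str], tasks: list[str], k: int = 2) -> dict:
    """Stage 2 (data generation): combine k random skills with a random task
    type; the model writes both the instruction and a strong response."""
    combo, task = random.sample(skills, k), random.choice(tasks)
    instruction = llm(f"Write one user instruction of type '{task}' whose "
                      f"ideal answer must exercise all of: {combo}.")
    response = llm(f"Respond as helpfully as possible:\n{instruction}")
    return {"instruction": instruction, "response": response,
            "skills": combo, "task": task}

# skills, tasks = extract_seed_lists()
# dataset = [generate_pair(skills, tasks) for _ in range(4_000)]
```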
Fine-tuning a LLaMA-3-8B model on this dataset yielded striking results: the smaller model achieved a higher win rate on AlpacaEval 2.0 than much larger models such as Claude 3 Opus. This demonstrated that INSTRUCT-SKILLMIX offers an efficient way to endow smaller LLMs with advanced instruction-following capabilities.
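For completeness, here is a minimal supervised fine-tuning sketch using the Hugging Face TRL library. Treat it as illustrative only: the naive prompt formatting, the omitted hyperparameters, and the exact argument names (which vary across TRL versions) are assumptions, not the paper's recipe.

```python
# Minimal SFT sketch; assumes `pip install trl datasets` plus access to the
# base model weights. Hyperparameters are intentionally omitted.
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Toy stand-in for the 4,000 generated instruction-response pairs.
pairs = [{"instruction": "Give one tip for writing clear emails.",
          "response": "Lead with the action you need from the reader."}]

# Flatten each pair into one training string; a real run would use a proper
# chat template instead of this naive format.
train_ds = Dataset.from_list(
    [{"text": f"Instruction: {p['instruction']}\nResponse: {p['response']}"}
     for p in pairs]
)

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B",  # base model fine-tuned in the paper
    train_dataset=train_ds,
    args=SFTConfig(output_dir="llama3-8b-instruct-skillmix"),
)
trainer.train()
```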
The Significance of Metacognition in AI
Understanding LLMs through the lens of metacognition offers deeper insight into their internal workings. Research has found evidence that LLMs develop self-consistency checks, and suggests that metacognition could help steer AI toward safe, human-aligned behavior. These explorations make leveraging LLM metacognition a promising avenue for future AI development.
Original article available at: https://blog.ai.princeton.edu/page/4/