Explaining the behavior of trained neural networks remains a compelling puzzle, especially as these models grow in size and sophistication. Like other scientific challenges throughout history, reverse-engineering how artificial intelligence systems work requires a substantial amount of experimentation: making hypotheses, intervening on behavior, and even dissecting large networks to examine individual neurons.
Facilitating this timely endeavor, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a novel approach that uses AI models to conduct experiments on other systems and explain their behavior. Their method uses agents built from pretrained language models to produce intuitive explanations of computations inside trained networks.
Central to this strategy is the “automated interpretability agent” (AIA), designed to mimic a scientist’s experimental processes. Interpretability agents plan and perform tests on other computational systems, which can range in scale from individual neurons to entire models, in order to produce explanations of these systems in a variety of forms: language descriptions of what a system does and where it fails, and code that reproduces the system’s behavior.
Unlike existing interpretability procedures that passively classify or summarize examples, the AIA actively participates in hypothesis formation, experimental testing, and iterative learning, thereby refining its understanding of other systems in real time.
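In pseudocode, an agent's experiment loop might look something like the minimal sketch below. The helper names (`system_under_test`, `lm_propose_inputs`, `lm_summarize`) are hypothetical stand-ins for a black-box system and language-model calls, not the paper's actual interface:

```python
# Minimal sketch of an automated interpretability agent's loop.
# `system_under_test`, `lm_propose_inputs`, and `lm_summarize` are
# hypothetical stand-ins, not the paper's actual interface.

def interpret(system_under_test, lm_propose_inputs, lm_summarize, rounds=5):
    """Iteratively probe a black-box system and describe its behavior."""
    observations = []  # (input, output) evidence gathered so far
    for _ in range(rounds):
        # The language model proposes new test inputs in light of
        # the evidence collected so far (hypothesis refinement).
        for x in lm_propose_inputs(observations):
            observations.append((x, system_under_test(x)))
    # Distill the accumulated evidence into a description of the system.
    return lm_summarize(observations)
```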
Complementing the AIA method is the new “function interpretation and description” (FIND) benchmark, a test bed of functions resembling computations inside trained networks, and accompanying descriptions of their behavior.
One key challenge in evaluating the quality of descriptions of real-world network components is that descriptions are only as good as their explanatory power: Researchers don’t have access to ground-truth labels of units or descriptions of learned computations. FIND addresses this long-standing issue in the field by providing a reliable standard for evaluating interpretability procedures: explanations of functions (e.g., produced by an AIA) can be evaluated against function descriptions in the benchmark.
For example, FIND contains synthetic neurons designed to mimic the behavior of real neurons inside language models, some of which are selective for individual concepts such as “ground transportation.” AIAs are given black-box access to synthetic neurons and design inputs (such as “tree,” “happiness,” and “car”) to test a neuron’s response. After noticing that a synthetic neuron produces higher response values for “car” than other inputs, an AIA might design more fine-grained tests to distinguish the neuron’s selectivity for cars from other forms of transportation, such as planes and boats.
When the AIA produces a description such as “this neuron is selective for road transportation, and not air or sea travel,” this description is evaluated against the ground-truth description of the synthetic neuron (“selective for ground transportation”) in FIND. The benchmark can then be used to compare the capabilities of AIAs to other methods in the literature.
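As a rough illustration, such a synthetic neuron can be pictured as a simple scoring function over words that an agent probes input by input. The word lists and activation values below are invented for illustration and are not taken from the benchmark itself:

```python
# Toy stand-in for a FIND-style synthetic neuron selective for
# ground transportation. Word lists and activation values are
# invented for illustration, not taken from the benchmark.

GROUND_TRANSPORT = {"car", "truck", "bus", "train", "bicycle"}
OTHER_TRANSPORT = {"plane", "boat", "helicopter", "ferry"}

def synthetic_neuron(word: str) -> float:
    """Return a high activation for ground transportation, low otherwise."""
    if word in GROUND_TRANSPORT:
        return 1.0
    if word in OTHER_TRANSPORT:
        return 0.2  # related concept, weak response
    return 0.0      # unrelated inputs such as "tree" or "happiness"

# An agent probing the neuron would compare responses like these,
# then design finer-grained tests around the high-scoring inputs.
for probe in ["tree", "happiness", "car", "plane", "boat"]:
    print(probe, synthetic_neuron(probe))
```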
Sarah Schwettmann, Ph.D., co-lead author of a paper on the new work and a research scientist at CSAIL, emphasizes the advantages of this approach. The paper is available on the arXiv preprint server.
“The AIAs’ capacity for autonomous hypothesis generation and testing may be able to surface behaviors that would otherwise be difficult for scientists to detect. It’s remarkable that language models, when equipped with tools for probing other systems, are capable of this type of experimental design,” says Schwettmann. “Clean, simple benchmarks with ground-truth answers have been a major driver of more general capabilities in language models, and we hope that FIND can play a similar role in interpretability research.”
Automating interpretability
Large language models still hold their status as the in-demand celebrities of the tech world. Recent advances in LLMs have highlighted their ability to perform complex reasoning tasks across diverse domains. The team at CSAIL recognized that, given these capabilities, language models may be able to serve as backbones of generalized agents for automated interpretability.
“Interpretability has historically been a very multifaceted field,” says Schwettmann. “There is no one-size-fits-all approach; most procedures are very specific to individual questions we might have about a system, and to individual modalities like vision or language. Existing approaches to labeling individual neurons inside vision models have required training specialized models on human data, where these models perform only this single task.
“Interpretability agents built from language models could provide a general interface for explaining other systems—synthesizing results across experiments, integrating over different modalities, even discovering new experimental techniques at a very fundamental level.”
As we enter a regime where the models doing the explaining are black boxes themselves, external evaluations of interpretability methods are becoming increasingly vital. The team’s new benchmark addresses this need with a suite of functions, with known structure, that are modeled after behaviors observed in the wild. The functions inside FIND span a diversity of domains, from mathematical reasoning to symbolic operations on strings to synthetic neurons built from word-level tasks.
The dataset of interactive functions is procedurally constructed; real-world complexity is introduced to simple functions by adding noise, composing functions, and simulating biases. This allows for comparison of interpretability methods in a setting that translates to real-world performance.
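In spirit, that construction might resemble the following sketch, in which a simple base function is wrapped with noise, composition, and a localized corruption. The specific transformations are illustrative assumptions, not FIND's actual generators:

```python
import random

# Illustrative sketch of procedurally complicating a simple base
# function, in the spirit of FIND's construction. The specific
# transformations here are assumptions, not the actual generators.

def base_fn(x: float) -> float:
    return 2 * x + 1  # a simple, fully known computation

def with_noise(fn, sigma=0.1):
    """Wrap a function so its outputs carry Gaussian noise."""
    return lambda x: fn(x) + random.gauss(0, sigma)

def compose(f, g):
    """Chain two known functions into a harder-to-describe one."""
    return lambda x: f(g(x))

def with_corruption(fn, low=-1.0, high=1.0, value=0.0):
    """Override behavior on a subdomain, simulating a localized bias."""
    return lambda x: value if low <= x <= high else fn(x)

harder = with_noise(compose(base_fn, lambda x: abs(x) ** 0.5))
```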
In addition to the dataset of functions, the researchers introduced an evaluation protocol to assess the effectiveness of AIAs and existing automated interpretability methods. The protocol involves two approaches. For tasks that require replicating the function in code, the evaluation directly compares the AI-generated estimate against the original, ground-truth function. The evaluation becomes more intricate for tasks involving natural language descriptions of functions.
In these cases, accurately gauging the quality of these descriptions requires an automated understanding of their semantic content. To tackle this challenge, the researchers developed a specialized “third-party” language model, trained to evaluate the accuracy and coherence of the natural language descriptions provided by the AI systems and to compare them against the ground-truth function behavior.
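The code-replication half of the protocol is the easier one to picture: run the agent's reconstructed function and the ground-truth function on shared test inputs and measure how often they agree. The sketch below assumes a numeric tolerance and test grid of our own choosing; the natural-language half, by contrast, relies on the trained third-party judge and has no equally simple analogue:

```python
# Sketch of the code-replication half of the evaluation: run the
# agent's reconstructed function and the ground-truth function on
# shared test inputs and measure agreement. The tolerance and test
# grid are illustrative choices, not the paper's exact protocol.

def agreement(ground_truth, candidate, test_inputs, tol=1e-3):
    """Fraction of test inputs on which the two functions agree."""
    matches = sum(
        abs(ground_truth(x) - candidate(x)) <= tol for x in test_inputs
    )
    return matches / len(test_inputs)

# A perfect reconstruction scores 1.0.
gt = lambda x: 2 * x + 1
est = lambda x: 2 * x + 1
print(agreement(gt, est, [i / 10 for i in range(-50, 50)]))
```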
Evaluation on FIND reveals that we are still far from fully automating interpretability: although AIAs outperform existing interpretability approaches, they still fail to accurately describe almost half of the functions in the benchmark.
Tamar Rott Shaham, co-lead author of the study and a postdoc at CSAIL, notes that “while this generation of AIAs is effective in describing high-level functionality, they still often overlook finer-grained details, particularly in function subdomains with noise or irregular behavior.
“This likely stems from insufficient sampling in these areas. One issue is that the AIAs’ effectiveness may be hampered by their initial exploratory data. To counter this, we tried guiding the AIAs’ exploration by initializing their search with specific, relevant inputs, which significantly enhanced interpretation accuracy.” This approach combines new AIA methods with previous techniques using pre-computed examples for initiating the interpretation process.
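Building on the earlier loop sketch, such seeding could look like the hypothetical snippet below, where the agent's observation list is initialized from exemplar inputs rather than left empty (again, the helper names are assumptions, not the paper's API):

```python
# Hypothetical sketch of seeding the agent's exploration with
# precomputed exemplar inputs instead of an empty slate, so early
# hypotheses are better targeted. Helper names are assumed.

def interpret_with_seed(system, lm_propose_inputs, lm_summarize,
                        exemplars, rounds=5):
    observations = [(x, system(x)) for x in exemplars]  # seed evidence
    for _ in range(rounds):
        for x in lm_propose_inputs(observations):
            observations.append((x, system(x)))
    return lm_summarize(observations)
```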
The researchers are also developing a toolkit to augment the AIAs’ ability to conduct more precise experiments on neural networks, both in black-box and white-box settings. This toolkit aims to equip AIAs with better tools for selecting inputs and refining hypothesis-testing capabilities for more nuanced and accurate neural network analysis.
The team is also tackling practical challenges in AI interpretability, focusing on determining the right questions to ask when analyzing models in real-world scenarios. Their goal is to develop automated interpretability procedures that could eventually help people audit systems—e.g., for autonomous driving or face recognition—to diagnose potential failure modes, hidden biases, or surprising behaviors before deployment.
Watching the watchers
The team envisions one day developing nearly autonomous AIAs that can audit other systems, with human scientists providing oversight and guidance. Advanced AIAs could develop new kinds of experiments and questions, potentially beyond human scientists’ initial considerations.
The focus is on expanding AI interpretability to include more complex behaviors, such as entire neural circuits or subnetworks, and predicting inputs that might lead to undesired behaviors. This development represents a significant step forward in AI research, aiming to make AI systems more understandable and reliable.
“A good benchmark is a power tool for tackling difficult challenges,” says Martin Wattenberg, computer science professor at Harvard University who was not involved in the study. “It’s wonderful to see this sophisticated benchmark for interpretability, one of the most important challenges in machine learning today. I’m particularly impressed with the automated interpretability agent the authors created. It’s a kind of interpretability jiu-jitsu, turning AI back on itself in order to help human understanding.”
Schwettmann, Rott Shaham, and their colleagues presented their work at NeurIPS 2023 in December. Additional MIT co-authors, all affiliated with CSAIL and the Department of Electrical Engineering and Computer Science (EECS), include graduate student Joanna Materzynska, undergraduate student Neil Chowdhury, Shuang Li, Ph.D., Assistant Professor Jacob Andreas, and Professor Antonio Torralba. Northeastern University Assistant Professor David Bau is an additional co-author.
More information: Sarah Schwettmann et al., FIND: A Function Description Benchmark for Evaluating Interpretability Methods, arXiv (2023). DOI: 10.48550/arxiv.2309.03886
