MIT researchers have introduced an efficient reinforcement learning algorithm that enhances AI’s decision-making in complex scenarios, such as city traffic control.
By strategically selecting the best tasks for training, the algorithm achieves significantly improved performance with far less data, offering up to a 50x boost in efficiency. This method not only saves time and resources but also paves the way for more effective AI applications in real-world settings.
AI Decision-Making
Across fields like robotics, medicine, and political science, researchers are working to train AI systems to make meaningful and impactful decisions. For instance, an AI system designed to manage traffic in a congested city could help drivers reach their destinations more quickly while enhancing safety and sustainability.
However, teaching AI to make effective decisions is a complex challenge.
Challenges in Reinforcement Learning
Reinforcement learning models, the foundation of many AI decision-making systems, often struggle when confronted with even slight changes in the tasks they are trained for. For example, in traffic management, a model might falter when handling intersections with varying speed limits, lane configurations, or traffic patterns.
To boost the reliability of reinforcement learning models for complex tasks with variability, MIT researchers have introduced a more efficient algorithm for training them.
Strategic Task Selection in AI Training
The algorithm strategically selects the best tasks for training an AI agent so it can effectively perform all tasks in a collection of related tasks. In the case of traffic signal control, each task could be one intersection in a task space that includes all intersections in the city.
By focusing on a smaller number of intersections that contribute the most to the algorithm’s overall effectiveness, this method maximizes performance while keeping the training cost low.
Enhancing AI Efficiency With a Simple Algorithm
The researchers found that their technique was between five and 50 times more efficient than standard approaches on an array of simulated tasks. This gain in efficiency helps the algorithm learn a better solution more quickly, ultimately improving the performance of the AI agent.
“We were able to see incredible performance improvements, with a very simple algorithm, by thinking outside the box. An algorithm that is not very complicated stands a better chance of being adopted by the community because it is easier to implement and easier for others to understand,” says senior author Cathy Wu, the Thomas D. and Virginia W. Cabot Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS), and a member of the Laboratory for Information and Decision Systems (LIDS).
She is joined on the paper by lead author Jung-Hoon Cho, a CEE graduate student; Vindula Jayawardana, a graduate student in the Department of Electrical Engineering and Computer Science (EECS); and Sirui Li, an IDSS graduate student. The research will be presented at the Conference on Neural Information Processing Systems.
Balancing Training Approaches
To train an algorithm to control traffic lights at many intersections in a city, an engineer would typically choose between two main approaches. She can train one algorithm for each intersection independently, using only that intersection’s data, or train a larger algorithm using data from all intersections and then apply it to each one.
But each approach comes with its share of downsides. Training a separate algorithm for each task (such as a given intersection) is a time-consuming process that requires an enormous amount of data and computation, while training one algorithm for all tasks often leads to subpar performance.
Wu and her collaborators sought a sweet spot between these two approaches.
Advantages of Model-Based Transfer Learning
For their method, they choose a subset of tasks and train one algorithm for each task independently. Importantly, they strategically select individual tasks that are most likely to improve the algorithm’s overall performance on all tasks.
They leverage a common technique from the reinforcement learning field called zero-shot transfer learning, in which an already trained model is applied to a new task without any further training. With transfer learning, the model often performs remarkably well on a closely related new task.
“We know it would be ideal to train on all the tasks, but we wondered if we could get away with training on a subset of those tasks, apply the result to all the tasks, and still see a performance increase,” Wu says.
MBTL Algorithm: Optimizing Task Selection
To identify which tasks they should select to maximize expected performance, the researchers developed an algorithm called Model-Based Transfer Learning (MBTL).
The MBTL algorithm has two pieces. First, it models how well each model would perform if it were trained independently on one task. Second, it models how much that performance would degrade if the model were transferred to each other task, a concept known as generalization performance.
Explicitly modeling generalization performance allows MBTL to estimate the value of training on a new task.
MBTL does this sequentially, choosing the task which leads to the highest performance gain first, then selecting additional tasks that provide the biggest subsequent marginal improvements to overall performance.
Since MBTL only focuses on the most promising tasks, it can dramatically improve the efficiency of the training process.
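The greedy selection loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the one-dimensional task layout, and the linear performance-decay model of generalization are all assumptions made for the example.

```python
# Hypothetical sketch of MBTL-style greedy task selection.
# Assumptions: tasks lie on a 1-D context axis (e.g., intersections ordered
# by speed limit); `train_perf[t]` estimates performance of a model trained
# on task t; transfer performance decays linearly with the context gap.

def transfer_perf(source, target, train_perf, decay=0.1):
    """Estimated performance of a model trained on `source`, run on `target`."""
    return train_perf[source] - decay * abs(source - target)

def mbtl_select(tasks, train_perf, budget, decay=0.1):
    """Greedily pick `budget` training tasks, each time choosing the task
    that yields the largest marginal gain in total estimated performance."""
    selected = []
    for _ in range(budget):
        def coverage(candidate):
            # Every task is served by the best model among those trained so far
            # plus the candidate; sum that best-case performance over all tasks.
            return sum(
                max(transfer_perf(s, t, train_perf, decay)
                    for s in selected + [candidate])
                for t in tasks
            )
        best = max((t for t in tasks if t not in selected), key=coverage)
        selected.append(best)
    return selected

tasks = list(range(10))               # e.g., 10 intersections
train_perf = {t: 1.0 for t in tasks}  # equal standalone performance
print(mbtl_select(tasks, train_perf, budget=2))
```

With equal standalone performance and distance-based decay, the greedy loop tends to spread the selected training tasks across the task space so that every task has a nearby trained model, mirroring the "smaller number of intersections that contribute the most" idea.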
Implications for Future AI Development
When the researchers tested this technique on simulated tasks, including controlling traffic signals, managing real-time speed advisories, and executing several classic control tasks, it was five to 50 times more efficient than other methods.
This means they could arrive at the same solution by training on far less data. For instance, with a 50x efficiency boost, the MBTL algorithm could train on just two tasks and achieve the same performance as a standard method which uses data from 100 tasks.
“From the perspective of the two main approaches, that means data from the other 98 tasks was not necessary or that training on all 100 tasks is confusing to the algorithm, so the performance ends up worse than ours,” Wu says.
With MBTL, adding even a small amount of additional training time could lead to much better performance.
In the future, the researchers plan to design MBTL algorithms that can extend to more complex problems, such as high-dimensional task spaces. They are also interested in applying their approach to real-world problems, especially in next-generation mobility systems.
Reference: “Model-Based Transfer Learning for Contextual Reinforcement Learning” by Jung-Hoon Cho, Vindula Jayawardana, Sirui Li and Cathy Wu, 21 November 2024, arXiv preprint.
arXiv:2408.04498
The research is funded, in part, by a National Science Foundation CAREER Award, the Kwanjeong Educational Foundation PhD Scholarship Program, and an Amazon Robotics PhD Fellowship.