Machine Learning Research

313 Posts

A Transformer Alternative Emerges: Mamba, a new approach that may outperform transformers

An architectural innovation improves upon transformers — up to 2 billion parameters, at least...
More Factual LLMs: FactTune, a method to fine-tune LLMs for factual accuracy without human feedback

Large language models sometimes generate false statements. New work makes them more likely to produce factual output.
Robo-Football From Simulation to Reality: Reinforcement learning powers humanoid robots to play football

Humanoid robots can play football (known as soccer in the United States) in the real world, thanks to reinforcement learning.
Cutting the Cost of Pretrained Models: FrugalGPT, a method to cut AI costs and maintain quality

Research aims to help users select large language models that minimize expenses while maintaining quality.
Toward Safer, More Helpful Models

The technique known as reinforcement learning from human feedback fine-tunes large language models to be helpful and avoid generating harmful responses such as suggesting illegal or dangerous activities. An alternative method streamlines this approach and achieves better results.
Learning Language by Exploration: Agent develops language skills through simulated exploration tasks

Machine learning models typically learn language by training on tasks like predicting the next word in a given text. Researchers trained a language model in a less focused, more human-like way.
Schooling Language Models in Math: GOAT (Good at Arithmetic Tasks), a method to boost large language models' arithmetic abilities

Large language models are not good at math. Tiedong Liu and Bryan Kian Hsiang Low at the National University of Singapore proposed a method to fine-tune them for arithmetic tasks.
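Fine-tuning a model on arithmetic requires supervised examples of problems and answers. A toy sketch of generating such data, assuming a simple prompt/completion format (the template and function name are illustrative, not the paper's actual setup):

```python
import random

def make_arithmetic_examples(n, seed=0):
    """Generate n addition problems as prompt/completion pairs.

    The formatting is a hypothetical example of arithmetic fine-tuning
    data, not the templates used in the actual research.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    examples = []
    for _ in range(n):
        a, b = rng.randint(0, 10**6), rng.randint(0, 10**6)
        examples.append({"prompt": f"{a} + {b} = ", "completion": str(a + b)})
    return examples
```

Pairs like these can be fed to any standard supervised fine-tuning pipeline.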
Human Feedback Without Reinforcement Learning: Direct Preference Optimization (DPO) fine-tunes pretrained large language models on human preferences without the cumbersome step of reinforcement learning.

Reinforcement learning from human feedback (RLHF) is widely used to fine-tune pretrained models to deliver outputs that align with human preferences. New work aligns pretrained models without the cumbersome step of reinforcement learning.
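DPO replaces the RLHF reward model and policy optimization with a single classification-style loss on preference pairs. A minimal sketch of that loss for one pair, assuming scalar log-probabilities of each response (real implementations operate on batched token log-probabilities; the function name is illustrative):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Inputs are log-probabilities (summed over tokens) of the preferred
    (chosen) and dispreferred (rejected) responses under the policy being
    trained and under a frozen reference model. beta scales the implicit
    reward derived from the log-probability ratios.
    """
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_logratio - rejected_logratio)
    # -log sigmoid(margin): shrinks as the policy favors the chosen response
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Minimizing this loss pushes the policy to assign relatively more probability to preferred responses than the reference model does, with no sampling or reward modeling during training.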
Swiss Army LLM

Language models equipped for retrieval-augmented generation can retrieve text from a database to improve their output. Further work extends this capability to retrieve information from any application that comes with an API.
Better, Faster Network Pruning: Researchers devise pruning method that boosts AI speed

Pruning weights from a neural network makes it smaller and faster, but it can take a lot of computation to choose weights that can be removed without degrading the network’s performance.
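As background, the simplest pruning criterion zeroes the weights with the smallest absolute values. A toy sketch of that baseline (not the researchers' method, which targets the cost of choosing weights more carefully; the flat-list representation is illustrative):

```python
def magnitude_prune(weights, sparsity=0.5):
    """Zero out the `sparsity` fraction of weights with the smallest
    absolute value, leaving the rest unchanged.

    Ties at the threshold may prune slightly more than the requested
    fraction; real implementations operate on tensors, not lists.
    """
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

Magnitude is a cheap proxy for importance; more accurate criteria account for each weight's effect on the network's output, which is where the computational cost the blurb mentions comes from.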
Memory-Efficient Optimizer: A method to reduce memory needs when fine-tuning AI models

Researchers devised a way to reduce memory requirements when fine-tuning large language models. Kai Lv and colleagues at Fudan University proposed low memory optimization (LOMO), a modification of stochastic gradient descent that stores less data than other optimizers during fine-tuning.
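A toy sketch of the core memory-saving idea: apply the SGD step to each parameter as soon as its gradient is available, then discard that gradient, so only one gradient is held in memory at a time rather than a full set. The `grad_fn` helper and dict-of-floats parameters are stand-ins for a real autograd backward pass, not LOMO's actual implementation:

```python
def fused_sgd_update(params, grad_fn, lr=0.01):
    """Update each parameter immediately when its gradient is computed.

    params:  dict mapping parameter names to float values (a stand-in
             for real tensors).
    grad_fn: hypothetical callable returning the gradient for one
             named parameter.
    """
    for name, value in params.items():
        g = grad_fn(name, value)        # gradient for this parameter only
        params[name] = value - lr * g   # in-place SGD step
        # g goes out of scope here, so peak memory holds one gradient
        # at a time instead of gradients for every parameter
    return params
```

In a real framework this fusion is done with per-parameter gradient hooks during the backward pass; the sequential loop above only illustrates the ordering.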
Better Images, Less Training: Würstchen, a speedy, high-quality image generator

The longer text-to-image models train, the better their output — but the training is costly. Researchers built a system that produced superior images after far less training.
LLMs Can Get Inside Your Head: AI models show promise in understanding human beliefs, research reveals

Most people understand that others’ mental states can differ from their own. For instance, if your friend leaves a smartphone on a table and you privately put it in your pocket, you understand that your friend continues to believe it was on the table.
More Consistent Generated Videos: Lumiere, a system that achieves unprecedented motion realism in video

Text-to-video has struggled to produce consistent motions like walking and rotation. A new approach achieves more realistic motion.
Learning the Language of Geometry: AlphaGeometry, a system that nears expert proficiency in proving complex geometry theorems

Machine learning algorithms often struggle with geometry. A language model learned to prove relatively difficult theorems. 