Stanford University

58 Posts

More Factual LLMs: FactTune, a method to fine-tune LLMs for factual accuracy without human feedback

Large language models sometimes generate false statements. New work makes them more likely to produce factual output.
Cross-Species Cell Embeddings: AI enhances cell type discovery, identifies previously elusive "Norn cells"

Researchers used an AI system to identify animal cell types from gene sequences, including a cell type that conventional approaches had discovered only in the past year. 
Cutting the Cost of Pretrained Models: FrugalGPT, a method to cut AI costs and maintain quality

Research aims to help users select large language models that minimize expenses while maintaining quality.
Learning Language by Exploration: Agent develops language skills through simulated exploration tasks

Machine learning models typically learn language by training on tasks like predicting the next word in a given text. Researchers trained a language model in a less focused, more human-like way.
Robot, Find My Keys: A machine learning model for robots to predict the location of objects in households

Researchers proposed a way for robots to find objects in households where things get moved around. Andrey Kurenkov and colleagues at Stanford University introduced Node Edge Predictor, a model that learned to predict the likely locations of household objects.
Foundation Model Transparency Index by the Stanford Center for Research on Foundation Models

What We Know — and Don’t Know — About Foundation Models: A new Stanford index to assess the transparency of leading AI models

A new index ranks popular AI models according to how much information their developers provide about their training, architecture, and usage. Few score well.
LLMs Get a Life: The generative agents that mimic human behavior in a simulated town

Large language models increasingly reply to prompts with a believably human response. Can they also mimic human behavior?
ChatGPT Ain’t What It Used to Be: ChatGPT's behavior change over time

It wasn’t your imagination: OpenAI’s large language models have changed. Researchers at Stanford and UC Berkeley found that the performance of GPT-4 and GPT-3.5 has drifted in recent months. On a limited selection of tasks, some prompts yielded better results than before and some worse.
Bug Finder: A system that provides feedback with near human-level accuracy

One challenge to making online education available worldwide is evaluating an immense volume of student work. Interactive computer programming assignments, such as coding a game, are especially difficult to assess.
Three Methods for Detecting Generated Text: Techniques to tell when you're reading AI-generated text

How can you tell when you’re reading machine-generated text? Three recent papers proposed solutions: watermarking, classification, and a statistical method.
AI Trends Tracked: 2023's AI trends from Stanford's AI Index

Stanford’s sixth annual AI Index takes stock of a rapidly growing field. The sprawling, 386-page report from the Institute for Human-Centered AI presents the past year’s developments in AI based on a wide variety of sources including benchmarks, papers, market research, job listings, and polls.
Graph showing the difference in test error when keeping hard versus easy examples

Unsupervised Data Pruning: New method removes useless machine learning data.

Large datasets often contain overly similar examples that consume training cycles without contributing to learning. A new method identifies overly similar training examples even when they’re unlabeled.
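
The summary above doesn’t spell out the mechanics, but one common recipe for pruning unlabeled data is to embed every example with a pretrained encoder, cluster the embeddings, and discard the points that sit closest to their cluster centers (the near-duplicates). The sketch below illustrates that idea; the encoder, cluster count, and pruning fraction are illustrative assumptions rather than the paper’s exact settings.

```python
# Illustrative sketch of unsupervised data pruning (not the paper's exact recipe):
# cluster embeddings of unlabeled examples and drop the most redundant ones.
import numpy as np
from sklearn.cluster import KMeans

def prune_indices(embeddings: np.ndarray, n_clusters: int = 100, keep_frac: float = 0.8) -> np.ndarray:
    """Return indices of examples to keep, discarding the most redundant ones."""
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)
    # Distance of each example to its assigned cluster center: a small distance
    # suggests the example is "easy" and has many near-duplicates.
    dists = np.linalg.norm(embeddings - kmeans.cluster_centers_[kmeans.labels_], axis=1)
    n_keep = int(keep_frac * len(embeddings))
    # Keep the examples farthest from their centers (the harder, rarer ones).
    return np.argsort(dists)[::-1][:n_keep]

# Usage with random stand-in embeddings; in practice they would come from a
# pretrained (e.g., self-supervised) encoder run over the unlabeled dataset.
embeddings = np.random.randn(10_000, 512).astype(np.float32)
kept = prune_indices(embeddings)
print(f"Kept {len(kept)} of {len(embeddings)} examples")
```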
Participant responses (Likert scale) to post-survey questions about beliefs regarding OpenAI's Codex

Generated Code Generates Overconfident Coders: Copilot AI tool encourages programmers to write buggy code.

Tools that automatically write computer code may make their human users overconfident that the programs are bug-free. Stanford University researchers found that programmers who used OpenAI’s Codex, a model that generates computer code, were more likely to write buggy code, and to believe their code was sound, than programmers who worked without it.
Douwe Kiela: Natural language processing researcher Douwe Kiela calls for less hype, more caution.

This year we really started to see the mainstreaming of AI. Systems like Stable Diffusion and ChatGPT captured the public imagination to an extent we haven’t seen before in our field.
Ground truth video of a road on the left and predicted video with MaskViT on the right

Seeing What Comes Next: Transformers predict future video frames.

If a robot can predict what it’s likely to see next, it may have a better basis for choosing an appropriate action — but it has to predict quickly. Transformers, for all their utility in computer vision, aren’t well suited to this because of their steep computational and memory requirements...