From System 1 to System 2: A Survey of Reasoning Large Language Models
Paper overview. Paper title: From System 1 to System 2: A Survey of Reasoning Large Language Models. Core theme: the cognitive evolution of AI from fast intuition to deliberate reasoning. Key insight: reasoning LLMs mark a major shift from System 1 to System 2 thinking. ...
TEST-TIME TRAINING ON NEAREST NEIGHBORS FOR LARGE LANGUAGE MODELS (ICLR 2024). Recent work on retrieval-augmented LLMs focuses on appending retrieved data to the input context. Although this works well, the retrieved data must be supplied at both training and test time, and because the input length grows linearly with the amount of retrieved data, the Transformer's complexity and compute cost rise sharply. ...
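The alternative named in the paper's title is to fine-tune the model on the retrieved nearest neighbors at test time, so the prompt length stays fixed instead of growing with the retrieved data. Below is a minimal sketch of that idea, not the authors' released code: the faiss index, the sentence-transformers encoder, and helper names such as `retrieve_neighbors` and `test_time_train` are assumptions for illustration.

```python
# Illustrative sketch of test-time training on nearest neighbors (hypothetical names,
# not the paper's implementation). Assumes a faiss index built over embeddings of a
# text datastore and a Hugging Face causal LM.
import faiss
import numpy as np
import torch
from sentence_transformers import SentenceTransformer
from transformers import AutoModelForCausalLM, AutoTokenizer

encoder = SentenceTransformer("all-MiniLM-L6-v2")      # embeds queries and datastore text
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def retrieve_neighbors(query: str, index: faiss.Index, texts: list[str], k: int = 8):
    """Return the k datastore passages whose embeddings are closest to the query."""
    q = encoder.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [texts[i] for i in ids[0]]

def test_time_train(query: str, index, texts, k: int = 8, lr: float = 2e-5):
    """Fine-tune on retrieved neighbors instead of stuffing them into the prompt,
    so attention cost does not grow with the amount of retrieved data."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for passage in retrieve_neighbors(query, index, texts, k):
        batch = tokenizer(passage, return_tensors="pt", truncation=True, max_length=512)
        loss = model(**batch, labels=batch["input_ids"]).loss  # standard causal LM loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    model.eval()
    # Answer the query with the temporarily adapted weights.
    inputs = tokenizer(query, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

In a real pipeline the original weights would be snapshotted and restored between queries; that bookkeeping is omitted here for brevity.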
Paper overview. Paper title: ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates. Institutions: Princeton University, Peking University. Code repository: https://github.com/Gen-Verse/ReasonFlux ...
Paper overview. Paper title: SUPERCORRECT: SUPERVISING AND CORRECTING LANGUAGE MODELS WITH ERROR-DRIVEN INSIGHTS. Institutions: Peking University, National University of Singapore, UC Berkeley, Stanford University. Code repository: https://github.com/YangLing0818/SuperCorrect-llm ...
Under construction. Paper translation: https://dppemvhuzp.feishu.cn/docx/Rp4YdgRXAohJBaxWqL7cO9FPnJf?from=from_copylink ...
Paper overview. Paper title: Towards Large Reasoning Models: A Survey of Reinforced Reasoning with Large Language Models. Core theme: the path from ordinary LLMs to large reasoning models. Key insight: OpenAI's o1 series marks a major breakthrough in AI reasoning capability. ...
Paper overview. Paper title: Towards System 2 Reasoning in LLMs: Learning How to Think With Meta Chain-of-Thought. Institutions: SynthLabs.ai, Stanford University, UC Berkeley. Core innovation: the Meta Chain-of-Thought (Meta-CoT) framework, a major leap from CoT to deep reasoning. ...