LongEval: A Comprehensive Analysis of Long-Text Generation Through a Plan-based Paradigm

Paper overview — Title: LongEval: A Comprehensive Analysis of Long-Text Generation Through a Plan-based Paradigm. Institutions: multiple institutions. Dataset: 166 real long-text samples across three domains. Key innovation: the first dual-paradigm evaluation framework for long-text generation (direct generation vs. plan-based generation) ...

March 25, 2025 · 12 min · 5563 words · ZhaoYang

From System 1 to System 2: A Survey of Reasoning Large Language Models

Paper overview — Title: From System 1 to System 2: A Survey of Reasoning Large Language Models. Core theme: the cognitive evolution of AI from fast intuition to deep reasoning. Key insight: reasoning LLMs mark a major shift from System 1 to System 2 thinking ...

March 16, 2025 · 12 min · 5827 words · ZhaoYang

SciQAG: A Framework for Auto-Generated Science Question Answering Dataset with Fine-grained Evaluation

Paper: https://arxiv.org/abs/2405.09939 Code: https://github.com/MasterAI-EAM/SciQAG/ ...

March 12, 2025 · 6 min · 2560 words · ZhaoYang

TEST-TIME TRAINING ON NEAREST NEIGHBORS FOR LARGE LANGUAGE MODELS

TEST-TIME TRAINING ON NEAREST NEIGHBORS FOR LARGE LANGUAGE MODELS (ICLR 2024). Recent work has focused on augmenting retrieval-capable LLMs by appending retrieved data to the input context. Although this approach performs well, the retrieved data must be supplied at both training and test time. Moreover, because the input length grows linearly with the amount of retrieved data, the Transformer's complexity and compute cost rise sharply. ...

March 12, 2025 · 2 min · 813 words · ZhaoYang

ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates

Paper overview — Title: ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates. Institutions: Princeton University, Peking University. Code: https://github.com/Gen-Verse/ReasonFlux ...

March 7, 2025 · 5 min · 2316 words · ZhaoYang

SUPERCORRECT: SUPERVISING AND CORRECTING LANGUAGE MODELS WITH ERROR-DRIVEN INSIGHTS

Paper overview — Title: SUPERCORRECT: SUPERVISING AND CORRECTING LANGUAGE MODELS WITH ERROR-DRIVEN INSIGHTS. Institutions: Peking University, National University of Singapore, UC Berkeley, Stanford University. Code: https://github.com/YangLing0818/SuperCorrect-llm ...

March 2, 2025 · 8 min · 3711 words · ZhaoYang

Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models

Work in progress. Paper translation: https://dppemvhuzp.feishu.cn/docx/Rp4YdgRXAohJBaxWqL7cO9FPnJf?from=from_copylink ...

March 1, 2025 · 1 min · 88 words · ZhaoYang