WritingBench: A Comprehensive Benchmark for Generative Writing
Paper overview. Title: WritingBench: A Comprehensive Benchmark for Generative Writing. Data scale: 1,239 carefully designed queries spanning 6 core domains and 100 subdomains. Key innovation: the first query-dependent evaluation framework, dynamically generating instance-specific criteria ...
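The query-dependent idea above can be sketched in a few lines. Everything here is an illustrative stand-in, not WritingBench's actual prompts or rubric: a judge first derives instance-specific criteria from the writing query, then aggregates per-criterion scores for a candidate response.

```python
# Minimal sketch of query-dependent evaluation (hypothetical names and
# rules): criteria are generated per query instead of being fixed.

def generate_criteria(query: str) -> list[str]:
    # Stand-in for an LLM call that turns a query into rubric items.
    criteria = ["relevance to the prompt"]
    if "report" in query.lower():
        criteria.append("clear sectioned structure")
    if "story" in query.lower():
        criteria.append("narrative coherence")
    return criteria

def evaluate(response: str, criteria: list[str], judge) -> float:
    # `judge` scores one (response, criterion) pair in [0, 10];
    # the final score averages over the instance-specific rubric.
    scores = [judge(response, c) for c in criteria]
    return sum(scores) / len(scores)
```

In the real benchmark the criteria generator and the judge would both be LLM calls; the fixed rules and the constant judge here only show the control flow.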
Paper overview. Title: LongEval: A Comprehensive Analysis of Long-Text Generation Through a Plan-based Paradigm. Institutions: multiple institutions. Data scale: 166 real long-text samples across three domains. Key innovation: the first dual-paradigm evaluation framework for long-text generation (direct generation vs. plan-based generation) ...
Paper overview. Title: From System 1 to System 2: A Survey of Reasoning Large Language Models. Core theme: the cognitive evolution of AI from fast intuition to deep reasoning. Key insight: reasoning LLMs represent a major shift from System 1 to System 2 thinking ...
Paper: https://arxiv.org/abs/2405.09939 Code: https://github.com/MasterAI-EAM/SciQAG/ ...
TEST-TIME TRAINING ON NEAREST NEIGHBORS FOR LARGE LANGUAGE MODELS (ICLR 2024). Recent work on retrieval-augmented LLMs has focused on appending retrieved data to the input context. While effective, this approach requires attaching the retrieved data at both training and test time; moreover, since the input length grows linearly with the amount of retrieved data, the Transformer's complexity and compute cost rise sharply ...
Paper overview. Title: ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates. Institutions: Princeton University, Peking University. Code: https://github.com/Gen-Verse/ReasonFlux ...
Paper overview. Title: SUPERCORRECT: SUPERVISING AND CORRECTING LANGUAGE MODELS WITH ERROR-DRIVEN INSIGHTS. Institutions: Peking University, National University of Singapore, UC Berkeley, Stanford University. Code: https://github.com/YangLing0818/SuperCorrect-llm ...