Editing Files at 1000 Tokens per Second
Frontier models such as GPT-4o struggle on large edits, with problems of laziness, inaccuracy, and high latency.
This is a weakness visible in coding agents. Accurately editing hundreds of lines can take multiple model calls, at times trapping the agent in an infinite loop. Even small, isolated edits are plagued with bugs.
Worst of all, existing models are slow at large edits, breaking the programmer out of flow.
We've trained a specialized model on an important version of the full-file code edit task called fast apply.
Difficult code edits can be broken down into two stages: planning, and applying.
In Cursor, the planning phase takes the form of a chat interface with a powerful frontier model. Applying the change to the current file should be straightforward and instant.
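As a rough sketch, the split looks something like this (the function and model names below are illustrative placeholders, not Cursor's actual interfaces):

```python
# Illustrative sketch of the planning / applying split; `frontier_model` and
# `fast_apply_model` are hypothetical stand-ins, not real APIs.

def plan_edit(frontier_model, conversation: str, current_file: str) -> str:
    """Planning: a powerful chat model proposes a (possibly partial) code block."""
    prompt = f"{conversation}\n\nCurrent file:\n{current_file}\n\nPropose an edit:"
    return frontier_model.generate(prompt)

def apply_edit(fast_apply_model, current_file: str, code_block: str) -> str:
    """Applying: a specialized model rewrites the whole file with the edit merged in."""
    return fast_apply_model.rewrite(current_file=current_file, edit=code_block)
```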
Our fast-apply model surpasses GPT-4 and GPT-4o performance and pushes the Pareto frontier on the accuracy / latency curve. We achieve speeds of ~1000 tokens/s (around 3500 char/s) on our 70b model using a speculative-decoding variant tailored for code edits, called speculative edits.
This means a ~13x speedup over vanilla inference using Llama-3-70b and a ~9x speedup over our previous GPT-4 speculative edits deployment.
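To give a sense of the idea behind speculative edits: because the rewritten file agrees with the original file almost everywhere, the original file's own tokens can serve as the draft, and the model only has to verify them in parallel rather than generate one token at a time. The sketch below is a simplified illustration of that principle, assuming a hypothetical batched `model.verify` call; it is not the production algorithm.

```python
def speculative_edit(model, prefix_tokens: list[int], draft_tokens: list[int],
                     chunk_size: int = 16) -> list[int]:
    """Greedy speculative decoding where the draft comes from the original file
    rather than from a smaller draft model (simplified sketch)."""
    accepted: list[int] = []
    i = 0
    while i < len(draft_tokens):
        chunk = draft_tokens[i:i + chunk_size]
        # Hypothetical batched call: one forward pass returns the model's greedy
        # prediction at every position of the drafted chunk.
        model_tokens = model.verify(prefix_tokens + accepted, chunk)
        # Accept the longest prefix on which the model agrees with the draft.
        k = 0
        while k < len(chunk) and chunk[k] == model_tokens[k]:
            k += 1
        accepted += chunk[:k]
        i += k
        if k < len(chunk):
            # The edit diverges from the original file here: take the model's own
            # token and skip the mismatched draft token. A real system needs
            # smarter re-alignment of the draft and a proper stopping condition.
            accepted.append(model_tokens[k])
            i += 1
    return accepted
```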
By default, we have language models generate the fully rewritten file conditioned on the current file, the conversation history, and the current code block.
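As a rough illustration, a prompt for such a rewrite could be assembled along the following lines (the template, delimiters, and function name are placeholders rather than the exact format used in production):

```python
# Illustrative prompt assembly for full-file rewriting; the tags and wording
# are placeholders, not the production format.

def build_fast_apply_prompt(current_file: str, conversation: str, code_block: str) -> str:
    return (
        "You are rewriting a source file to apply a proposed edit.\n\n"
        f"<conversation>\n{conversation}\n</conversation>\n\n"
        f"<current_file>\n{current_file}\n</current_file>\n\n"
        f"<edit>\n{code_block}\n</edit>\n\n"
        "Output the complete, fully rewritten file:\n"
    )
```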
In this post, we explain how we trained and evaluated our new model. We show why we rewrite the file instead of using diffs and how speculative edits give us such ridiculous speedups.