More accurate coding: Researchers adapt Sequential Monte Carlo for AI-generated code
Coding with the help of AI models continues to gain popularity, but many have highlighted issues that arise when developers rely on coding assistants.
However, researchers from MIT, McGill University, ETH Zurich, Johns Hopkins University, Yale and the Mila-Quebec Artificial Intelligence Institute have developed a new method for making AI-generated code more accurate and useful. The method spans various programming languages and instructs the large language model (LLM) to adhere to the rules of each one.
The group found that new sampling methods can guide AI models to follow programming language rules, and can even lift the performance of small language models (SLMs), which are typically used for code generation, above that of large language models.
In the paper, the researchers used Sequential Monte Carlo (SMC) to “tackle a number of challenging semantic parsing problems, guiding generation with incremental static and dynamic analysis.” Sequential Monte Carlo refers to a family of algorithms that approximate solutions to filtering problems by maintaining a weighted population of candidate samples.
João Loula, co-lead author of the paper, said in an interview with MIT’s campus paper that the method “could improve programming assistants, AI-powered data analysis and scientific discovery tools.” It can also cut compute costs and be more efficient than reranking methods.
The researchers noted that while AI-generated code can be powerful, it often disregards the semantic rules of programming languages. Other methods of preventing this can distort models or are too time-consuming.
Their method makes the LLM adhere to programming language rules by discarding unpromising code outputs early in the process and allocating effort toward outputs that are most likely to be valid and accurate.
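The early-discarding idea can be illustrated with an incremental syntax check. The sketch below is a simplification, not the authors' implementation (the paper's incremental static and dynamic analyses go well beyond syntax); it uses Python's standard `codeop` module to decide whether a partial program is still syntactically viable or already doomed:

```python
import codeop

def still_viable(partial_code: str) -> bool:
    """Return True if partial_code is complete or could still be
    extended into valid Python; False if it is already invalid."""
    try:
        # compile_command returns a code object for complete input and
        # None for incomplete-but-plausible input; it raises SyntaxError
        # only when no continuation of the source can fix it.
        codeop.compile_command(partial_code, symbol="exec")
        return True
    except (SyntaxError, ValueError, OverflowError):
        return False

# A sampler can drop a partial generation as soon as this returns False,
# rather than spend compute finishing a sequence that can never parse.
print(still_viable("x = [1, 2,"))  # unclosed list: still extendable
print(still_viable("return 0"))    # 'return' outside a function: doomed
```

In a constrained-decoding loop, a check like this would run after every generated line, so invalid partial programs are pruned long before a full sequence is produced.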
Adapting SMC to code generation
The researchers developed an architecture that brings SMC to code generation “under diverse syntactic and semantic constraints.”
“Unlike many previous frameworks for constrained decoding, our algorithm can integrate constraints that cannot be incrementally evaluated over the entire token vocabulary, as well as constraints that can only be evaluated at irregular intervals during generation,” the researchers said in the paper.
Key features of adapting SMC sampling to model generation include: a proposal distribution, in which token-by-token sampling is guided by cheap constraints; importance weights, which correct for the biases the proposal introduces; and resampling, which reallocates compute toward promising partial generations.
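As a rough illustration of those three ingredients, here is a minimal, generic SMC loop in Python. The `propose`, `weight` and `is_final` callables are hypothetical placeholders, not the authors' implementation; in the paper they would be backed by the LLM and by incremental static and dynamic analysis:

```python
import random

def smc_generate(propose, weight, is_final, n_particles=8, max_steps=20):
    """Sketch of SMC for sequence generation: propose tokens,
    weight partial sequences, resample to focus compute."""
    particles = [[] for _ in range(n_particles)]
    for _ in range(max_steps):
        # Proposal distribution: extend each partial sequence by one
        # token, with cheap constraints applied inside `propose`.
        particles = [p + [propose(p)] for p in particles]
        # Importance weights correct for the bias the proposal
        # introduces relative to the target (constrained) distribution.
        weights = [weight(p) for p in particles]
        total = sum(weights)
        if total == 0:
            return []  # every particle violated the constraints
        # Resampling: clone high-weight prefixes and drop low-weight
        # ones, reallocating compute toward promising generations.
        probs = [w / total for w in weights]
        particles = random.choices(particles, weights=probs, k=n_particles)
        if all(is_final(p) for p in particles):
            break
    return particles
```

With a real model, `propose` would sample a token from the LLM restricted by a cheap syntactic filter, while `weight` would fold in the more expensive checks that can only be evaluated at irregular intervals during generation.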
The researchers noted that while SMC can guide models toward more correct and useful code, the method has limitations.
“While importance sampling addresses several shortcomings of local decoding, it too suffers from a major weakness: weight corrections and expensive potentials are not integrated until after a complete sequence has been generated from the proposal. This is even though critical information about whether a sequence can satisfy a constraint is often available much earlier and can be used to avoid large amounts of unnecessary computation,” they said.
Model testing
To test their approach, Loula and his team ran experiments to see whether using SMC produces more accurate code.
These experiments were:
- Python code generation on data science tasks, which used Llama 3 70B to generate code line by line and test early versions
- Text-to-SQL generation with Llama 3 8B-Instruct
- Goal inference in planning tasks, predicting an agent’s goal condition, also using Llama 3 8B
- Molecular synthesis for drug discovery
They found that using SMC improved small language models’ accuracy and robustness, allowing them to outperform larger models.
Why it’s important
AI models have helped engineers and other coders work faster and more efficiently. They have also given rise to a whole new kind of software engineer: the vibe coder. But concerns remain over code quality, limited support for more complex coding tasks and the compute costs of even simple code generation.
New methods, such as adapting SMC, may make AI-powered coding more useful and give engineers more reason to trust the code that models generate.
Other companies have explored ways to improve AI-generated code. Together AI and Agentica released DeepCoder-14B, which harnesses fewer parameters. Google also improved its Code Assist feature to help enhance code quality.