Maximizing Coding Productivity with Large Language Models
Learn how to boost developer productivity by using large language models for rapid code refactoring.
Large language models like ChatGPT have tremendous potential to automate repetitive coding tasks and boost team effectiveness.
In this MAAS Show And Tell, Peter Makowski, Senior Web Engineer at Canonical, shares insights and a real-world example of using an LLM for a successful large-scale migration of hundreds of tests from enzyme to @testing-library/react.
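To give a sense of what that migration looks like in practice, here is an illustrative before/after sketch. The Counter component is hypothetical and not taken from the MAAS codebase; the real tests are more involved.

import React, { useState } from "react";
import { mount } from "enzyme";
import { render, screen, fireEvent } from "@testing-library/react";

// Hypothetical component, defined here only to keep the example self-contained.
const Counter = () => {
  const [count, setCount] = useState(0);
  return (
    <div>
      <span data-testid="count">{count}</span>
      <button onClick={() => setCount((c) => c + 1)}>increment</button>
    </div>
  );
};

// Before: enzyme drives the component through its wrapper API.
it("increments the count (enzyme)", () => {
  const wrapper = mount(<Counter />);
  wrapper.find("button").simulate("click");
  expect(wrapper.find('[data-testid="count"]').text()).toBe("1");
});

// After: @testing-library/react queries the rendered DOM the way a user would.
it("increments the count (@testing-library/react)", () => {
  render(<Counter />);
  fireEvent.click(screen.getByRole("button", { name: "increment" }));
  expect(screen.getByTestId("count").textContent).toBe("1");
});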
Further reading:
https://discourse.maas.io/tag/show-and-tell
https://petermakowski.io/
Key moments:
00:00 Introduction
01:09 Challenges of refactoring large portions of legacy code
03:02 LLM code refactoring loop
04:51 Prompt engineering for optimal LLM results
12:42 Final prompt
14:35 Automating refactoring at scale with scripting
15:45 Integrating LLMs into the development workflow
The presentation covers:
- Crafting effective prompts for LLMs
- Improving prompts through an iterative loop
- LLM code refactoring loop
- Automation scripts to scale LLM usage (see the sketch after this list)
- Example of migrating hundreds of tests using LLMs
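To illustrate the automation-scripts point above, the sketch below walks a directory of test files, asks an LLM to rewrite each enzyme test, and writes the result back for human review. It assumes the OpenAI Node SDK and the gpt-4o-mini model purely for illustration; the talk's own script and provider may differ, and the PROMPT string here is a stand-in rather than the final prompt shown at 12:42.

import { promises as fs } from "fs";
import path from "path";
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Stand-in prompt; the refined prompt from the talk would go here.
const PROMPT =
  "Rewrite this enzyme test to use @testing-library/react. Return only the code.";

async function refactorWithLLM(source: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: PROMPT },
      { role: "user", content: source },
    ],
  });
  return response.choices[0].message.content ?? source;
}

async function migrateTests(dir: string): Promise<void> {
  // Node 20+: readdir can walk the directory tree recursively.
  const entries = await fs.readdir(dir, { recursive: true });
  for (const entry of entries) {
    if (!entry.endsWith(".test.tsx")) continue; // only touch test files
    const file = path.join(dir, entry);
    const source = await fs.readFile(file, "utf8");
    if (!source.includes("enzyme")) continue; // skip already-migrated tests
    await fs.writeFile(file, await refactorWithLLM(source), "utf8");
    console.log(`migrated ${file}`); // the diff still goes through normal code review
  }
}

migrateTests("src").catch(console.error);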
Subscribe to Ubuntu on YouTube for more content like this:
https://bit.ly/3Sp6PKY
And follow our other social accounts:
LinkedIn:
https://bit.ly/3Jw6jGN
Twitter:
https://bit.ly/3OXSIJE
Facebook:
https://bit.ly/3Q15Yyn
Instagram:
https://bit.ly/3vE7Kxk
For more information visit https://www.ubuntu.com and https://www.canonical.com
#largelanguagemodels #coding #machinelearning #canonical