How to Catch AI Code Mistakes Before They Reach Production
AI can write code fast, but it makes mistakes humans often don't. In this session, Ole Lensmar, CTO of Testkube, breaks down the real quality risks of AI-generated code and how engineering teams can build guardrails before those bugs hit production.
What you'll learn:
- Common mistakes LLMs make (and which ones are unique to AI)
- How to use prompt engineering and context engineering to guide AI toward correct, secure code
- Why "shift left AND shift right" testing is critical for gen AI workflows
- How to build a continuous testing strategy across your entire SDLC — from pre-commit to production
- Why CI/CD tools alone aren't enough for continuous testing at scale
Whether you're a developer leaning on AI to ship faster or a QA lead trying to keep up with the pace of AI-generated code, this talk gives you a practical framework for staying ahead of quality issues.
GitKraken Desktop:
http://tr.ee/GKDYT
GitKraken CLI:
http://tr.ee/CLIYT
GitLens for VS Code:
http://tr.ee/GLYT
Git Integration for Jira:
http://tr.ee/GijYT
Git Blog:
http://gitkraken.com/blog