
Split your Bitbucket Pipelines workflows across multiple files | Bitbucket Blitz | Atlassian

Building and maintaining a 2000+ line bitbucket-pipelines.yml can be a lot of work. Now you can split large bitbucket-pipelines.yml files into multiple, smaller pipelines.yml files. These smaller files can be composed via shared pipeline syntax to replicate the functionality of the original bitbucket-pipelines.yml file. They can also be shared with and reused in other repositories.
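The composition works through Bitbucket's shared pipeline syntax: one file marks its pipeline definitions as exportable, and another file imports them by repository, ref, and pipeline name. A minimal sketch of the two halves, assuming a shared repository called `shared-pipelines` with a `master` branch (the repo slug, branch, and pipeline name here are placeholders, not names from the article):

```yaml
# bitbucket-pipelines.yml in the shared repository (shared-pipelines)
# `export: true` marks this file's pipeline definitions as importable elsewhere.
export: true

definitions:
  pipelines:
    build-and-test:
      - step:
          name: Build and test
          script:
            - echo "running build"
            - echo "running tests"
---
# bitbucket-pipelines.yml in a consuming repository
# Imports the shared pipeline as <repo-slug>:<ref>:<pipeline-name>.
pipelines:
  default:
    import: shared-pipelines:master:build-and-test
```

The same mechanism lets a single large file be decomposed within one workspace and then reused across repositories, since the import reference resolves by repository and ref rather than by local path.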

That's Not a Job for an LLM: The Right Way to Apply AI to Network Operations

LLMs have sucked all the oxygen out of the AI conversation — but AI is much more than just LLMs, and network engineers have been using AI techniques (machine learning, statistics, fuzzy logic, expert systems, neural networks) for decades. So what should LLMs be doing in network operations, what shouldn't they be doing, and how do agentic AI architectures fit in?

90% AI Adoption. Still Failing. DORA Explains Why.

AI adoption is nearly universal. So why are most teams still struggling? In this session from GitKon, Nathen Harvey, head of DORA at Google Cloud, shares findings from the 2025 DORA State of AI-Assisted Software Development report, drawing on data from nearly 5,000 developers worldwide. The answer isn't more AI. It's what surrounds it.

Do Hospitals Still Use Pagers in 2026? Why They're Not Secure (And What's Replacing Them)

Are hospitals still using pagers in 2026? The answer might surprise you. In this video, we break down why hospitals still use pagers today, what security risks they carry, and whether they meet HIPAA compliance standards. While pagers have long been trusted for their reliability, many healthcare organizations are now re-evaluating their role in modern clinical communication. We also explore why pagers are considered insecure and limiting, including the lack of encryption, the absence of read receipts, and their one-way communication model, all of which can impact patient care and coordination.

Debug Live Production Apps in Codex with Lightrun MCP

Lightrun’s Dan Putman demonstrates the power of the latest Lightrun MCP skill. Watch how your AI code agent can now debug live applications directly in production. By connecting OpenAI's Codex to real-time runtime data via the Lightrun MCP, engineers can now generate and validate hypotheses using live telemetry and snapshots, without breaking flow. Ready to bring runtime context to your AI agents?
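Connecting Codex to an MCP server is done through the Codex CLI's `config.toml`, which registers each server as a command the agent can launch. A hedged sketch of what wiring in a Lightrun MCP server could look like, where the `lightrun-mcp` command and the `LIGHTRUN_API_KEY` variable are illustrative placeholders, not confirmed Lightrun package or setting names:

```toml
# ~/.codex/config.toml
# Registers an MCP server named "lightrun" that Codex can start on demand.
# "lightrun-mcp" is a hypothetical command; check Lightrun's docs for the real one.
[mcp_servers.lightrun]
command = "npx"
args = ["-y", "lightrun-mcp"]

# Hypothetical credential for authenticating against the Lightrun backend.
[mcp_servers.lightrun.env]
LIGHTRUN_API_KEY = "your-api-key-here"
```

Once registered, the agent can call the server's tools during a session, which is what makes the live-telemetry and snapshot workflow in the video possible.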

Live Runtime Investigation in Claude Code with Lightrun MCP

In this video, Lightrun’s Dan Putman demonstrates what happens when Lightrun MCP is integrated with Claude Code. See how, once activated, Claude can query which services it can see and instrument, then run a deep production investigation to reach a validated root-cause analysis, without the friction of redeploying or switching contexts.
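In Claude Code, MCP servers are registered from the command line with `claude mcp add`, after which the server's tools become available in a session. A minimal sketch, where `lightrun-mcp` is a hypothetical placeholder command rather than a confirmed Lightrun package name:

```
# Register a Lightrun MCP server under the name "lightrun".
# Everything after "--" is the command Claude Code runs to start the server;
# "lightrun-mcp" is a placeholder -- consult Lightrun's docs for the real command.
claude mcp add lightrun -- npx -y lightrun-mcp

# Confirm the server is registered and reachable.
claude mcp list
```

From there, the flow shown in the video follows: Claude uses the server's tools to enumerate instrumentable services and pull runtime data without a redeploy.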