Arctic Vaults, AI Agents & The Future Of Dev Tools

An interview with Ryan J. Salva, Senior Director of Product Management at Google. ✨

👋 Howdy to the 1,243 new legends who joined this week! You are now part of a 148,213-strong tribe outperforming the competition together.

LATEST POSTS 📚

If you’re new, not yet a subscriber, or just plain missed it, here are some of our recent editions.

🫆 The Google & Coinbase Veteran Building The Agentic Future. An interview with Surojit Chatterjee, Founder & CEO at Ema.
🎯 Leading Intercom’s AI Transformation. Des Traynor on zero-to-one mess, AI momentum, and leading with intent.
🗝️ Unfiltered: Vanta’s Product & Growth Leaders. Vanta leaders spill the beans on how they are going from 1 to 100.

PARTNERS 💫

Scale globally with Paddle.

Self-serve software is global by default. But selling to every country is complicated. That’s where Paddle comes in.

They manage payments, sales tax, and compliance on every transaction around the world, so you can focus on what you do best: building a great product and serving your customers.

As a Merchant of Record, Paddle makes it easy for SaaS and digital product companies to sell anywhere.

Wherever you want to take your business, go there with Paddle.

Interested in sponsoring these emails? See our partnership options here.

HOUSEKEEPING 📨

You know, I never thought we would progress to the stage where you watched me sleeping, but this photo was too good not to share. That’s me, and Ziggy, the VP of barkingz at Athyna, snoozing recently. I’m the one at the back.

Anyway, I don’t have a lot of stuff on my mind today. I am still going super hard on my AI course, which is actually pretty excellent. This is not a sponsored post, but the guy who runs it, Mark, gave me a discount code (BILLxMAVEN) if you’d like to do the course. It’s pretty epic.

I am doing my AI course after I hit publish here, actually, then off to Spanish. Mucho learning for me today amigos.

As for today’s piece, we pulled in one of the big hitters for you. Ryan runs a large part of Google’s AI product suite for dev tooling and may be one of the more plugged-in men on the planet when it comes to what’s happening today and what may come tomorrow. I hope you enjoy it!

INTERVIEW 🎙️

Ryan J. Salva @ Google

Ryan J. Salva is Senior Director of Product Management at Google, where he leads the development of AI-assisted tools for developers, with a mission to transform how software is built, from code creation to full system orchestration. He brings deep expertise from his previous role as VP of Product at GitHub, where he helped launch groundbreaking initiatives like GitHub Copilot and the Arctic Code Vault.

His career has been defined by a focus on empowering developers through better tooling and automation. At GitHub, he championed projects that merged open source culture with long-term digital preservation. At Google, he’s now reimagining how AI can go beyond autocomplete to operate across the entire software development lifecycle. With a strong foundation in engineering and product strategy, Ryan continues to push the boundaries of what it means to build—and rebuild—technology at scale.

Tell us about the Arctic Code Vault. What is it and why is it important?

Sure. The Arctic Code Vault is essentially a long-term archival project where we stored a copy of open source code on silver halide film and placed it in a decommissioned coal mine in Svalbard, Norway, back in 2020.

At GitHub, where I was working at the time, we drew inspiration from global seed vaults—places that preserve seeds of the world's flora like apples, pears, wheat, and so on, in case of catastrophic events such as nuclear disasters or asteroid impacts. These vaults ensure that we have a way to restore biodiversity if the worst happens.

We started thinking: what are the ‘seeds’ of human knowledge in the digital age? Open source software plays a foundational role in our modern technological ecosystem. If we ever face a global disaster, we won't just need to reboot our environment—we’ll also need to reboot our technology.

This is the seed vault.

So we created a snapshot of all the open-source software on GitHub at that time and stored it in the Arctic Vault. It’s a way to preserve the knowledge and code that power so much of today’s infrastructure, ensuring it’s available for future generations—even in the event of a global catastrophe.

In your role, what is a problem that you are currently solving at Google?

I’m responsible for developer tools and services at Google, with a particular focus on AI-assisted tooling. Right now, a big part of my work is figuring out how we move beyond simple predictive text or conversational AI in an IDE, toward a world where we can deploy entire fleets of AI agents to help developers, not just as coding assistants, but as collaborators throughout the software development lifecycle.

For the past five years or so, most AI toolmakers have focused on helping developers write code a little faster—anticipating what they’re about to write, saving a few keystrokes, or sparking ideas.

That’s useful, but we’re thinking bigger now.

We want AI to write entire applications, with awareness not just of the code itself but of database schemas, third-party services like Datadog or Snowflake, and the infrastructure where the code is deployed. It’s not enough to build a cute demo like a Tetris app. Developers want to build software that people use at scale. That means AI has to understand things like microservice architecture, Kubernetes pod configurations, database types, and system bottlenecks.

So the problem I’m solving is: how do we use AI not just to write better code, but to assist across the full software development lifecycle—from coding to infrastructure to operations?

What's an example of this?

Picture this: it's 2 am, and you've just deployed an app, maybe even a mobile app, and it's gaining traction. Then, suddenly, you get a live site incident notification. Users are seeing 404 errors, and you have to fix it. You’re groggy, so you crawl out of bed, open your laptop, and start digging through a bunch of dashboards. You’re checking recent deployments, trying to roll back changes as quickly as possible. Sometimes, you didn’t even write the code that caused the problem—it might’ve come from another team. So now you’re scrambling to understand what was changed, by whom, and how it all connects.

You might have 100 tabs open, each one representing a question: a dashboard, a chat thread, a bit of documentation. Every tab is a data source you’re consulting to troubleshoot the issue. And after all that, you might only temporarily fix the problem, without really understanding the root cause.

So what we’re trying to do is consolidate all that information. We want to bring together data about your code repositories, infrastructure, test runs, artifacts, and even the JIRA tickets or GitHub issues that led to those changes. Then we combine all of that with historical playbooks: in the past, when we saw this type of error message or stack trace, it usually meant this. From there, we can surface two or three strong hypotheses for the developer. Instead of just sending a vague "Good luck, the site’s down!" notification, we provide context: “Hey, this error pattern typically relates to X, and here’s what helped resolve it before.” The goal is to get developers back to sleep by 3 a.m.—with fewer tabs open, less stress, and a faster path to resolution.

Have you quantified what this would mean for productivity?

That’s a great question—and yes, productivity is exactly where the industry has been focused for the last several years. How can we write more code? How can we release more often? But there’s another layer we need to consider.

Are you familiar with the DORA Report? It’s been published for over a decade and is one of the key benchmark studies for software development, along with things like the Stack Overflow Developer Survey and GitHub’s Octoverse.

Not this one.

This past year, DORA had a special focus on AI. Traditionally, the report balances metrics of productivity and velocity—like deployment frequency—with metrics of quality, like change failure rate and mean time to recovery.

What they found was interesting: a 25% increase in AI adoption was associated with a 7.2% drop in delivery stability. In other words, yes, teams were going faster, but they were also experiencing more failures, more bugs, and more incidents in production. So, as we look ahead, I think 2025 will be a turning point. It’s the year the industry shifts from focusing solely on speed to prioritizing quality. We’ve spent a lot of time trying to write more code, but now we need to write better code.

Over the next three to five years, I do think we’ll see a big shift. Engineers have huge backlogs full of ideas they want to build—things that bring real value. But time and maintenance are always blockers. AI will help us eliminate a lot of technical debt—like upgrading legacy Java apps or migrating off COBOL systems. But it’s not just about shipping faster. It’s about being able to maintain those features too. I hope we’ll see engineers spending less time fixing things and more time actually building.

Today, about 20% of a developer’s time is spent on operational tasks—things like capacity planning or environment management. And most teams, including mine, reserve 25–30% of our time for ‘keep the lights on’ work: little bugs, interruptions, emergencies. No one likes doing KTLO work—it’s not creative, and it’s rarely celebrated. That’s why my goal is that, by the end of this year, 50% of our KTLO tasks will be handled by AI. That will free up more capacity to focus on the features and innovations we’ve been dreaming of for years.

What is standing between us and the future you just laid out?

I’d say the biggest hurdle, though it’s a solvable one, isn’t the models themselves. Everyone assumes that model quality is the limiting factor, but that’s less and less the case. Whether it’s OpenAI, Anthropic, Google, or others, we can be confident the models will keep getting better. That’s almost a given.

The real challenge is the connective tissue—the integrations between the models and the engineering systems we rely on every day. Right now, an AI model is like an intelligence in a dark room, yelling into the void. It can’t affect the world unless it’s connected to real tools: databases, dashboards, services, and visual interfaces.

So the core problem is building those integrations—those connectors between AI and the actual engineering tools we use. This includes defining the protocols, standards, and APIs that allow models to interact meaningfully with software systems. We’ve seen some progress here, like MCP servers, which provide a useful standard for tool interaction. But there’s still a lot of engineering work required to stitch it all together.
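The "connective tissue" idea can be illustrated with a toy tool registry that routes a model-emitted tool call to a real implementation. This is not the MCP wire protocol, just a sketch of the connector pattern; the registry, the JSON call shape, and the `query_dashboard` tool are all assumptions made up for the example.

```python
from typing import Callable

# A toy registry mapping tool names to implementations. In a real
# integration each entry would wrap a database, dashboard, or service API.
TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Decorator that registers a function as a callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("query_dashboard")
def query_dashboard(service: str) -> str:
    # Stand-in for a call to a monitoring API.
    return f"error_rate for {service}: 4.2%"

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the registered implementation."""
    fn = TOOLS.get(tool_call["name"])
    if fn is None:
        return f"unknown tool: {tool_call['name']}"
    return fn(**tool_call["arguments"])

print(dispatch({"name": "query_dashboard",
                "arguments": {"service": "checkout"}}))
```

The hard engineering work Salva describes is everything this sketch waves away: authentication, schemas, error handling, and a shared standard so every tool doesn’t need a bespoke adapter.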

There’s also the issue of context management. Even though model context windows are getting bigger—millions of tokens now, in some cases—that still fills up fast when you add in videos or large datasets. So we need smart ways to select and deliver just the right context at the right time. Tools like knowledge graphs and RAG (retrieval-augmented generation) are helping, but they still have real limitations. We haven’t yet had a breakthrough in orchestrating many different tools across complex tasks.
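The context-management problem above boils down to: given more candidate material than the window can hold, pick what matters. Here is a deliberately naive sketch, keyword overlap instead of embeddings, a crude characters-per-token heuristic instead of a real tokenizer, to show the budget-packing shape of the problem.

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token.
    # Real systems use the model's actual tokenizer.
    return max(1, len(text) // 4)

def select_context(query: str, documents: list[str], budget: int) -> list[str]:
    """Greedily pack the most relevant documents into a token budget.

    Relevance here is bare keyword overlap with the query; production
    retrieval would use embeddings, RAG pipelines, or knowledge graphs.
    """
    q = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    chosen, used = [], 0
    for doc in ranked:
        cost = estimate_tokens(doc)
        if used + cost <= budget:
            chosen.append(doc)
            used += cost
    return chosen

docs = [
    "Kubernetes pod restarted after hitting its memory limit",
    "Quarterly marketing plan for the newsletter",
    "Postgres connection pool exhausted during the deploy",
]
print(select_context("why did the pod restart after deploy", docs, budget=30))
```

Even with million-token windows, some version of this selection step survives: the budget just moves, it never disappears.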

And lastly, there’s one that’s a bit more human: ego. Now that AI has been around in the dev space for five or six years, people are catching onto the terminology. They know what a large language model is, they’ve played with RAG, they understand knowledge graphs. That’s great—it means folks are experimenting.

But it also means a lot of early demos are being mistaken for finished products. You can get a ‘magical moment’ prototype up and running in two weeks—but getting it to produce reliable, consistent results? That takes months or years of deep experimentation, real data science, and new skill sets that go far beyond traditional software engineering. So part of what we need is patience—and a willingness to dig in and build sustainable systems, not just flashy demos.

What do you see as the future of the developer vs the AI developer agent?

A lot of this depends on the time horizon, but the general trend is clear: the builders of the future will look more like architects than syntax experts.

They’ll spend less time worrying about the specifics of an if-then-else statement or a switch case, and more time thinking about the systems they want to create—how those systems are structured, how the components interact, and what the success and failure outcomes should look like. In other words, the real value of engineers isn’t in knowing Python or Java or C#. Those are just languages.

The core skillset is problem solving: knowing which problems are worth tackling, breaking complex challenges into smaller parts, and applying critical thinking to build something meaningful.

That’s what developers will still be doing in the future—just with more powerful tools at their side. Yes, we’ll always have specialists who stay close to the machine, and that will continue to be important. But for most developers, it’s going to be about designing systems and orchestrating collaboration between humans and AI agents.

AI will help us move faster and build better, but it won’t replace the human intuition, creativity, and judgment that make great engineering possible. Developers aren’t just experts in writing code; they’re builders, and that role will only grow more important in the future.

And that’s it! You can follow and connect with Ryan over on LinkedIn here, or on Twitter over here.

HIRING ZONE 👀 

Today we are highlighting AI talent available through Athyna. If you are looking for the best bespoke tech talent, these stars are ready to work with you today! Reach out here if we can make an introduction, and get a $1,000 discount on us.

BRAIN FOOD 🧠

TWEETS OF THE WEEK 🐣 

ASK ME ANYTHING 🗣

I want to be a trusted resource for you. If you think anything I know in relation to brand, culture, global teams, sales and growth would help you unblock a problem in your week, shoot me a line.

Ask in the comments or reply to this email and I will do my best to answer it in a future edition. 🙌🏼

TOOLS WE USE 🛠️

Every week we highlight tools we actually use inside of our business and give them an honest review. Today we are highlighting Attio—powerful, flexible and data-driven, the exact CRM your business needs.

See the full set of tools we use inside of Athyna & Open Source CEO here.

HOW I CAN HELP 🥳

P.S. Want to work together?

  1. Hiring global talent: If you are hiring tech, business or ops talent and want to do it for up to 80% off, check out my startup Athyna. 🌏

  2. Want to see my tech stack? See our suite of tools & resources for both this newsletter and Athyna; you can check them out here. 🧰

  3. Reach an audience of tech leaders: Advertise with us if you want to get in front of founders, investors, and leaders in tech. 👀

That’s it from me. See you next week, Doc 🫡 

P.P.S. Let’s connect on LinkedIn and Twitter.
