Podcast: Doc testing, skills files, and the guardians of knowledge -- with Manny Silva
In this podcast, Fabrizio Ferri-Benedetti (passo.uno) and I chat with Manny Silva (instructionmanuel.com), head of documentation at Skyflow and author of *Docs as Tests*. Manny is working on a follow-up book that incorporates AI, covering validated generation, trusted agents, and self-healing documentation.
Our conversation in this podcast covers some of the topics he’s exploring. For example: documentation testing (testing docs vs. testing the product), skills files (versus regular markdown files that don’t follow the skills spec), the consultant model of docs (and whether this is the future of tech comm in companies), externalizing and sharing skills files (and why one might or might not want to do that), and much more.
Throughout, we wrestle with the big question lurking behind all of it: as tech writers pour their expertise into systems that machines can run, are we accelerating ourselves or automating ourselves out of a job? #ai #documentation #technicalwriting #automation #podcast Listen here: https://lnkd.in/gRpFKMR7
Technical writers should move towards building the rules that run AI.
This involves:
- writing clear instructions for machines
- testing that AI tools give accurate answers
- architecting the guardrails for organisational knowledge
The future of technical writing is about building information systems we can trust.
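To make "testing that AI tools give accurate answers" concrete, here is a minimal sketch of one flavour of doc testing: pulling fenced shell blocks out of a markdown page and checking that each command exits cleanly. This is an illustration of the general idea, not Manny's tooling; the function names (`extract_shell_blocks`, `run_doc_commands`) are hypothetical.

```python
import re
import subprocess

# Build the fence marker programmatically so this sample doesn't nest
# literal triple backticks inside its own code block.
FENCE = "`" * 3

def extract_shell_blocks(markdown_text):
    """Pull the contents of fenced shell blocks out of a markdown doc."""
    pattern = re.compile(FENCE + r"shell\n(.*?)" + FENCE, re.DOTALL)
    return [block.strip() for block in pattern.findall(markdown_text)]

def run_doc_commands(markdown_text):
    """Run each documented command and record whether it exited with 0."""
    results = []
    for command in extract_shell_blocks(markdown_text):
        proc = subprocess.run(command, shell=True,
                              capture_output=True, text=True)
        results.append((command, proc.returncode == 0))
    return results

doc = FENCE + "shell\necho docs-as-tests\n" + FENCE
for command, ok in run_doc_commands(doc):
    print(("PASS" if ok else "FAIL") + ": " + command)  # prints "PASS: echo docs-as-tests"
```

Real doc-testing tools do far more (UI steps, API calls, screenshots), but the principle is the same: the doc itself is the test spec.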
Must watch:
Here are a few points made regarding the concern that technical writers might be "automating ourselves" out of a job:
Our roles are evolving from writers to orchestrators: The tech writing industry is in constant flux, and the role is shifting from simply writing content to orchestrating the documentation pipeline. Technical writers need to become maintainers of the company's skill files, agent definitions, and AI artifacts.
AI accelerates rather than replaces: Automating processes with AI should be viewed as an acceleration of our abilities, allowing us to build, maintain, and orchestrate systems that help us do our jobs much better.
We must own the automation to own our future: Manny views owning the skill files and automation processes as the answer to the "existential question" of job security. By actively owning and building these systems, technical writers secure their future.
"Security by obscurity" is gone: You cannot protect your job by keeping your complex workflows secret. If you do not build formal skills and automated pipelines, someone else (like a well-meaning TPM using an AI tool) will eventually do it themselves, likely in a non-standard or inferior way.
If we don't build it, others will: The industry is moving toward semi-autonomous, self-healing documentation systems. It is essential that technical writers build these systems themselves rather than relinquishing control to people who lack the vision or taste for high-quality documentation.
Tech writers maintain the ultimate responsibility: Even if AI agents, engineers, or product managers use automation to generate documentation, they do not want the responsibility of ensuring its accuracy. Technical writers retain their value by taking on this responsibility and acting as the final quality gatekeeper.
The future is "content curation": Rather than just creating content from scratch, writers will act as "content curators" who gladly accept AI-generated or human-generated drafts but block publication if the material isn't accurate or doesn't meet quality standards.
Proving our value is the real challenge: The most difficult part of this transition isn't building the automated systems, but rather maintaining ownership over them and successfully demonstrating to others the unique value that technical writers bring to these workflows.
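For readers unfamiliar with the "skill files" the points above keep returning to: in the agent-skills convention that several AI coding tools follow, a skill is a markdown file with YAML frontmatter telling an agent when to load it. The example below is purely illustrative; the file name, fields, and rules are hypothetical.

```markdown
---
name: style-guide-check
description: Reviews documentation drafts against the company style guide.
  Use when editing or generating customer-facing docs.
---

# Style guide check

When reviewing a documentation draft:

1. Flag passive voice in procedural steps.
2. Verify every code sample names its prerequisites.
3. Check that product names match the approved terminology list.
```

The difference from a "regular markdown file" is that the structured frontmatter lets an agent discover and apply the skill automatically, rather than relying on someone pasting it into a prompt.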
I discussed my in-development book on Docs as Tests and AI with Tom Johnson and Fabrizio Ferri-Benedetti! It was a fantastic conversation, even as we grappled with some difficult questions. Plenty of constructive disagreement and openness.
Skills, AGENTS.md, evals, testing, all the current hotness is there, but also responsibility, dread, resilience, and what the future of our profession may well hold.
Give it a listen.
Lesson 4 from this week's Scaling With AI podcast best bits episode: build internal AI tools with the right people in the room.
Ben Lee from Bidwells said something that applies to any AI project: you need three people in the room.
The subject matter expert knows what the output needs to achieve. The IT team knows what's feasible and safe with your tech stack. And someone connects the two: a person focused on innovation, what's possible, and what the solution should look like.
Thirty minutes with those three people in a room saves weeks of rework.
Link in the comments.
We are honored to share this insightful podcast episode featuring Mr. Ali Aboauf, Product Lead of the Faheem Application.
In this conversation, Mr. Ali tackles the biggest challenge facing Gen Z and Gen Alpha today: distraction. He shares how the Faheem application is designed to help students navigate the overwhelming sea of information and focus on what truly matters. His passion for creating smarter, more intuitive learning tools offers a refreshing look at the role of AI in personalizing education.
Don't let distraction hold you back. Watch the full interview now on our YouTube channel for actionable insights.
https://lnkd.in/egE7PRcg
ElStudioo - الاستوديو
#EduVationSummit2025 #FutureOfLearning #EdTech #InnovationInEducation
Saving time isn’t the real win.
Reducing mental load is.
My podcast used to require constant context switching:
> Recording
> Writing show notes
> Drafting emails
> Organizing content
Now, that process runs through a trained system.
AI handles the repeatable pieces. I stay focused on the thinking.
That’s the role AI should play in a service business.
If you want to see how to build this, comment AI Voice.
AI adoption is everywhere.
But measurable AI ROI is still surprisingly rare.
In the latest episode of the ThinkData Podcast, I sat down with Jason Li, CTO at Laurel.
Jason previously spent nearly a decade as CTO at Ironclad, and now leads engineering at Laurel, a Series C company building AI-powered time intelligence software for professional services firms.
We discussed:
⌚ Why most companies struggle to measure real AI ROI
⌚ The hidden inefficiencies in how professional services firms track time
⌚ What it takes to build AI that actually works in complex enterprise environments
⌚ The biggest mistakes organisations make when deploying AI tools
⌚ How engineering teams are evolving in an AI-first world
A fascinating conversation about where AI is actually delivering value today.
🎧 Full episode below.
We're thrilled to have Jason Li as our CTO. Jason spent nearly a decade building world-class engineering at Ironclad, and now he's bringing that same rigor to one of the most important problems in enterprise AI: proving that it actually works.
Measuring real AI ROI is still surprisingly rare. Jason is here to change that. More next week 🔜
Let's go Season 2!!!
The world-renowned Building with AI podcast is live with a spanking new season! The first episode was so good, we actually broke it into two parts.
Part one is live NOW:
• YouTube - https://lnkd.in/gkmurbNV
• Spotify - https://lnkd.in/g6vH3Rpe
• Apple Podcasts - https://lnkd.in/g-vGZyMQ
In this fantastic episode with Todd James we zoom in on what organizations are getting wrong: AI is not the goal. It's the golf club.
If you’re building AI inside a real company (not a demo org), Todd’s core point is the one most teams avoid:
**It's not AI. It's transformation enabled through AI.**
We talked about:
• Why most AI programs get stuck in pilots (and what it takes to scale)
• The 70/30 reality: 70% org + operating model, 30% tech
• Why fragmented analytics kills “strategic AI”
• The human side of automation: people becoming managers of bots/agents
• Why incubations fail when you “plant them in the mothership”
For those who prefer the abridged version, read this: https://lnkd.in/gsEJX8Ru
AI is shifting the software translation layer from humans to computers, simplifying development. Full episode on The Web Talk Show podcast. #AI #SoftwareDevelopment #Technology #Innovation
🦞 My AI agent Köbi reads Hacker News, makes me podcasts, and once nuked my entire inbox 💥. Here's the full story.
OpenClaw is a personal AI agent that lives on your machine — not a chatbot you visit, but a colleague that's always running. I've been using it for weeks. In this 5-minute video, I share what it can do, how I use it daily, and the one time it went seriously wrong.
#OpenClaw #AIAgent #PersonalAI #Automation #CTO