The practice of coding isn't the only thing changing in the world of #AI; the delivery and consumption of #documentation is changing right along with it. Case in point: Google hasn't just introduced an #MCP server for its docs, it has built a dedicated Developer Knowledge #API that provides a canonical, machine-readable gateway to Google's official developer documentation. Developers can now query and retrieve documentation pages directly as Markdown instead of relying on fragile web scraping. Check out the full documentation here: https://lnkd.in/gAu88q_P. I'm excited to try this out, and I hope more platform providers follow this example!
Google Introduces Developer Knowledge API for Official Docs
-
Google just announced the public preview of the Developer Knowledge API and MCP Server! As AI-powered developer tools evolve, the biggest challenge has been ensuring they have access to the most accurate, up-to-date information. This new release provides a "programmatic source of truth" for Google's official documentation (Android, Firebase, Google Cloud, and more).

Key Takeaways:
→ Real-time Context: No more AI hallucinations based on outdated training data.
→ Standardized Integration: The MCP server acts as an open-standard bridge between AI assistants and Google's knowledge base.
→ Efficiency: Developers can now get implementation guidance and troubleshooting directly from the source of truth in Markdown.

Excited to see how this streamlines agentic workflows! Check out the full announcement on the Google Developers Blog.

#GoogleDevelopers #AI #ModelContextProtocol #SoftwareEngineering #CloudComputing #GenerativeAI
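For a sense of what "programmatic source of truth" means on the wire: MCP servers speak JSON-RPC 2.0, and clients invoke server tools via a `tools/call` request. The sketch below builds such an envelope in Python. The tool name `search_documentation` and its argument are assumptions for illustration, not the API's documented schema; check the official docs for the real tool names.

```python
import json

# Assumed endpoint for illustration (see the announcement for specifics).
MCP_ENDPOINT = "https://developerknowledge.googleapis.com/mcp"

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(payload)

# Example: ask for docs as Markdown (hypothetical tool and argument names).
msg = build_tool_call("search_documentation", {"query": "Firebase Auth quickstart"})
print(msg)
```

An MCP client would POST this message (with an `X-Goog-Api-Key` header) and read the result from the server's response stream.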
-
15 years of building data infrastructure taught me one thing: "serverless" is meaningless if the underlying architecture is still server-based. That's the problem we built LambdaDB to solve. Public preview is live today — would love to hear what you think.
LambdaDB Cloud is now in public preview. Get started in 5 minutes → https://lnkd.in/gXDXFJ_b

We built LambdaDB because most "serverless" AI databases aren't actually serverless — they're server-based products with a serverless API. That means limited region availability, performance that degrades under load, and costs that grow faster than your usage.

LambdaDB is built differently — fully distributed on AWS Lambda and S3, all the way down:
→ Compute, memory, and storage scale independently with no manual sharding
→ Full-text, multi-vector, and hybrid search in a single query on a flexible document model
→ 33 AWS regions — run it where your data needs to live
→ >1 GB/s write throughput per serverless collection
→ Git-like data branching — fork a production index, test new embedding models, promote when ready
→ Configurable strong consistency and point-in-time recovery
→ $0 monthly minimum — pay only for what you use

If you're building with LLMs and tired of infrastructure that can't keep up, we'd love for you to try it. Get started in 5 minutes → https://lnkd.in/gXDXFJ_b

Feedback welcome — we're actively building in the open.
-
#BlogAlert 📚: At TO THE NEW, we migrated from Logstash to Fluent Bit for a high-traffic ad-tech platform running on Amazon Web Services (AWS) ECS and EC2. The result: lower resource usage, faster troubleshooting, and reduced infrastructure cost. Read the full blog 👇 https://lnkd.in/gTcapGVa #DevOps #CloudNative #FinOps #AWS #AmazonECS #Fargate #FluentBit #Logstash #Observability #Logging #DistributedSystems #Microservices #PlatformEngineering #CloudArchitecture #SiteReliabilityEngineering #SRE #Containerization #AdTech #CostOptimization #OpenSearch #Kibana #Monitoring #DevOpsCommunity #CloudComputing #TechBlog #ToTheNew
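For readers wondering what the Fluent Bit side of such a migration looks like, here is a minimal configuration sketch: tailing container logs and shipping them to OpenSearch, which is the general shape described. All paths, hosts, and index names below are placeholders, not values from the blog.

```
[SERVICE]
    Flush        5
    Log_Level    info

[INPUT]
    Name         tail
    Path         /var/log/containers/*.log
    Parser       docker
    Tag          app.*

[FILTER]
    Name         modify
    Match        app.*
    Add          environment production

[OUTPUT]
    Name         opensearch
    Match        app.*
    Host         search-example.us-east-1.es.amazonaws.com
    Port         443
    TLS          On
    Index        app-logs
```

Compared with a JVM-based Logstash pipeline, a config like this runs in a small C daemon, which is where the resource savings in setups like this typically come from.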
-
Interesting perspective on the current AI agent hype. A lot of “autonomous agents” look impressive at first, but reliability and hallucinated commands are still a real challenge. What stood out to me here is the shift from relying on prompt guardrails to enforcing security at the infrastructure level using IAM. Instead of trusting the model to behave correctly, access is controlled through proper cloud permissions. That feels like a much more practical approach for production systems. Curious to see how MCP and IAM evolve together as more teams start building real AI agents on cloud infrastructure. #AI #GoogleCloud #MCP #CloudSecurity
I use Cline with an MCP setup for small tasks, but it regularly hallucinates made-up gcloud commands that look accurate at first glance. Everyone is building "autonomous AI agents." Most of them are just glorified web scrapers in a trench coat. But Google Cloud's Developer Knowledge MCP is actually solid. Connecting it outside their ecosystem, though? Tech LinkedIn acts like it's Sith alchemy. It's just standard Model Context Protocol. Secure your X-Goog-Api-Key with appropriate restrictions. Your agent probably expects local stdio, while Google serves an SSE stream; don't rewrite your entire stack. Just bridge it in your config:

"Developer Knowledge API": {
  "command": "npx",
  "args": [
    "-y",
    "mcp-remote@latest",
    "https://developerknowledge.googleapis.com/mcp",
    "--transport", "sse",
    "--header", "X-Goog-Api-Key: YOUR_API_KEY"
  ]
}

Boom. Your local setup now has Google-scale brainpower fed directly into its context window. It's ridiculous how simple it is once you understand the transport layer. A genuine jugaad (quick, resourceful hack) wrapped in enterprise JSON-RPC!

PS: This works with Cursor and Claude Code as well :)

#GoogleCloud #MCP #Agents
-
Most LLM agent demos are impressive only because they rely on cloud infrastructure, paid APIs, and a lot of hand-holding. I wanted to see how far a fully local system could go.

So I built an AI agent that decides for itself whether to answer from memory or retrieve live data. It runs locally with real tool-calling, uses llama3-groq-tool-use-8b through Ollama, and avoids hardcoded routing logic. When the user asks a product question, it fetches live Amazon data and synthesises a response from the retrieved fields. When the question is general knowledge, it answers directly from memory.

The result is a simple but realistic pattern for production agent systems: retrieval, validation, and generation working together under model control.

What I built:
→ Adaptive tool-calling architecture
→ Live product data retrieval
→ Validation safeguards
→ Test coverage for edge cases
→ $0 cloud cost

GitHub: https://lnkd.in/ecfFwJdW

#LLM #AIAgents #GenerativeAI #Python #RAG #OpenToWork
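The dispatch pattern described above (model decides between a tool call and a direct answer, with validation before generation) can be sketched in a few lines. This is a simplified illustration, not the repo's code: the LLM decision is stubbed with a keyword check, where the real project hands that choice to the model via Ollama's tool-calling API, and `fetch_product_data` is a placeholder for the live Amazon lookup.

```python
import json

def fetch_product_data(query: str) -> dict:
    """Placeholder for the live product lookup in the real project."""
    return {"title": f"Result for {query}", "price": "n/a", "rating": "n/a"}

def model_decision(question: str) -> dict:
    """Stub for the LLM's tool-use decision. The real version would call
    the model with a tools schema and read back either tool_calls or a
    direct answer; here a keyword check stands in for that choice."""
    if any(w in question.lower() for w in ("price", "product", "buy")):
        return {"tool": "fetch_product_data", "args": {"query": question}}
    return {"tool": None, "answer": "Answered from model memory."}

def run_agent(question: str) -> str:
    decision = model_decision(question)
    if decision["tool"] == "fetch_product_data":
        data = fetch_product_data(**decision["args"])
        # Validation safeguard: never synthesize from an empty retrieval.
        if not data.get("title"):
            return "Retrieval failed; refusing to answer."
        return "Live data: " + json.dumps(data)
    return decision["answer"]

print(run_agent("What is the price of this product?"))
print(run_agent("Who wrote Hamlet?"))
```

The point of the structure is that routing lives in the model's decision, not in application `if` statements; swapping the stub for a real tool-calling chat request keeps `run_agent` unchanged.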
-
Privacy-first AI 🧠 meets Laravel 11 🐘

I’m excited to share my latest project: Lara-Ollama-RAG. 🚀

While the world is moving toward cloud AI, I wanted to build a system that is 100% private, local, and built to scale. It’s a RAG (Retrieval-Augmented Generation) system that allows you to chat with your own documents without a single byte leaving your machine.

The Tech Stack:
✅ Framework: Laravel 11
✅ Brain: Ollama (Llama 3.2 & Nomic-Embed-Text)
✅ Storage: PostgreSQL + pgvector (for high-speed similarity search)
✅ Scalability: Asynchronous background jobs via Database Queues
✅ UX: Real-time character-by-character streaming via SSE & AlpineJS

No API keys. No monthly tokens. Just local, powerful intelligence.

Check out the repo here: https://lnkd.in/gYxX9Bj6

#Laravel #PHP #AI #Ollama #OpenSource #RAG #PostgreSQL #WebDevelopment #ArtificialIntelligence #Laravel11 #SelfHosted #Privacy #TechInnovation
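The retrieval core of a stack like this is cosine distance over stored embeddings, which pgvector exposes in SQL as the `<=>` operator. The sketch below shows the same ranking computed directly in Python, with toy 3-dimensional vectors standing in for real embeddings (nomic-embed-text produces 768-dimensional ones); the chunk names and query are made up for illustration.

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity, as pgvector's <=> operator computes."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

# Toy stored chunks: filename -> embedding vector.
chunks = {
    "intro.md": [0.9, 0.1, 0.0],
    "setup.md": [0.1, 0.9, 0.1],
    "faq.md":   [0.8, 0.2, 0.1],
}
query = [1.0, 0.0, 0.0]

# The equivalent SQL in a pgvector-backed app would be roughly:
#   SELECT id FROM chunks ORDER BY embedding <=> :query LIMIT 2;
top = sorted(chunks, key=lambda k: cosine_distance(chunks[k], query))[:2]
print(top)
```

In production the database does this ranking with an index (e.g. HNSW), so the ordering above is what the `ORDER BY ... <=>` query returns without loading vectors into application code.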
-
A new repo just gave your Claude Code instances the ability to find each other and talk!

Claude Code already has built-in agent teams. But those require pre-planning. You decide upfront who talks to whom.

This open-source tool works differently. Sessions you started independently can discover each other on their own. No setup needed. One instance notices overlapping work. It reaches out to coordinate. Spontaneously.

A lightweight broker runs on localhost with SQLite. Each session registers through an MCP server. Messages arrive instantly via channel push.

Here's what each instance can do:
> Find all other running sessions
> Send messages to any peer
> Share a summary of current work
> Auto-generate context from git state

Fully local. No cloud. One git clone to install. Link in comments. ↓

Check out AlphaSignal.ai to get a daily summary of top models, repos, and papers in AI. Read by 280,000+ devs.
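The register-and-discover core of a localhost SQLite broker like this is small. The sketch below is not the repo's actual schema (table and column names are assumptions); it just shows the idea of sessions writing themselves into a shared registry and listing their peers.

```python
import sqlite3

def open_registry(path=":memory:"):
    """Open the shared registry; a real broker would use a file on disk."""
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS sessions ("
        "  id TEXT PRIMARY KEY,"
        "  summary TEXT"
        ")"
    )
    return db

def register(db, session_id, summary):
    """Each session announces itself and a summary of its current work."""
    db.execute(
        "INSERT OR REPLACE INTO sessions (id, summary) VALUES (?, ?)",
        (session_id, summary),
    )
    db.commit()

def discover_peers(db, session_id):
    """List every other registered session."""
    return db.execute(
        "SELECT id, summary FROM sessions WHERE id != ?", (session_id,)
    ).fetchall()

db = open_registry()
register(db, "session-a", "refactoring auth module")
register(db, "session-b", "writing tests for auth module")
print(discover_peers(db, "session-a"))
```

The overlapping summaries are what let one instance notice related work and reach out; the actual tool layers MCP and channel push on top of this kind of registry.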
-
Day 2 of my Cloud Computing journey and I just completed Assignment 2! 🚀

Building on last session's Google App Engine deployment, today I took things a step further by implementing a fully functional web file viewer on Google Cloud!

Here's what today looked like:
✅ Cloned a repository from GitHub
✅ Deployed a Flask app to Google App Engine
✅ Uploaded files to a Google Cloud Storage bucket
✅ Located & downloaded files from the Google Cloud Console
✅ Modified main.py to implement a web viewer using BlobInfo.all, listing all files stored in the bucket with their filename, size, content type & upload date

💡 Key Takeaway: Last session I stored DATA. Today I stored actual FILES. 🤯

And here's the thing that hit me while building this... The difference between Assignment 1 and Assignment 2 perfectly mirrors how real-world apps work:
📌 Assignment 1 → Stored DATA (timestamps) in Google Cloud Datastore
📌 Assignment 2 → Stored FILES in Google Cloud Blobstore

Both are essential building blocks of any cloud-based application!

🔗 How does this connect to Customer Support? In customer support, we deal with file attachments EVERY. SINGLE. DAY.
→ Customers upload screenshots of their issues
→ Agents share documents and guides
→ Teams store call recordings and logs

All of that? Powered by exactly what I built today: a cloud storage bucket with a file viewer on top.

💡 Every bug I fix, every error I push through, every deployment I complete makes me a more effective and technically fluent support professional. Because when you understand the technology behind the tools, you solve problems faster, communicate better with tech teams, and deliver a smoother experience for customers. 💙

2 assignments down. More to come! 🔥

#GoogleCloud #CloudStorage #AppEngine #Flask #Python #CustomerSupport #TechJourney #LearningInPublic #CloudComputing #BDCC
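The listing step described above (iterate over blob metadata and show filename, size, content type, and upload date) can be sketched independently of any cloud SDK. The function below renders a plain-text table from per-file metadata dicts; the field names mirror the post, and the sample data is made up for illustration.

```python
def render_listing(blobs):
    """Render one row per stored file, as a bucket file viewer would."""
    header = f"{'Filename':<20}{'Size':>10}  {'Type':<12}{'Uploaded'}"
    rows = [
        f"{b['filename']:<20}{b['size']:>10}  {b['content_type']:<12}{b['uploaded']}"
        for b in blobs
    ]
    return "\n".join([header] + rows)

# Sample metadata, shaped like what a blob-listing API returns per file.
sample = [
    {"filename": "error.png", "size": 20480,
     "content_type": "image/png", "uploaded": "2025-11-01"},
    {"filename": "call-log.txt", "size": 1024,
     "content_type": "text/plain", "uploaded": "2025-11-02"},
]
print(render_listing(sample))
```

In the actual app the loop over `BlobInfo.all()` supplies these fields and a Flask template renders them as HTML instead of plain text.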
Opsera • 3K followers • 3w
This is a game changer. It's a significant step toward companies treating documentation as a first-class citizen.