You've written API extraction code before. Pagination, retries, rate limits, nested JSON: none of that is hard. The problem is writing it again for the 15th API this month. That's the actual bottleneck: repetitive pipeline setup that takes two hours every time you add a new source.

dltHub is built for this. Instead of rewriting pagination logic, retry mechanisms, and schema normalization for every API, you define the source once:

- Pagination and retries are handled automatically
- Nested JSON gets flattened without custom transforms
- Schema inference and versioning work out of the box

Learn how to use it here: https://lnkd.in/eKhz9YWV
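For contrast, here is the kind of per-API boilerplate the post says you end up rewriting by hand: a cursor-pagination loop with retries. This is a generic stdlib-only sketch, not dlt's API; the function names are invented for illustration.

```python
import time

def fetch_all(get_page, max_retries=3):
    """Drain a cursor-paginated endpoint, retrying transient failures.

    `get_page(cursor)` must return (items, next_cursor), where
    next_cursor is None once the last page has been reached.
    """
    items, cursor = [], None
    while True:
        for attempt in range(max_retries):
            try:
                page, cursor = get_page(cursor)
                break
            except ConnectionError:
                if attempt == max_retries - 1:
                    raise  # give up after the final attempt
                time.sleep(2 ** attempt)  # simple exponential backoff
        items.extend(page)
        if cursor is None:
            return items
```

Multiply this by every source you onboard (plus schema handling) and the two-hour estimate above starts to look conservative.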
Got 2 minutes? ⏱️ That's all you need to build an API workflow (without writing a single line of code 😉). http://spr.ly/6046h2j3K #AgenticAutomation
This Claude Code cheatsheet is the best I've seen in 2026. 12 sections. Every workflow pattern. Every shortcut! Here's what the full workflow looks like:

📌 Layer 1: CLAUDE.md
This is Claude's persistent memory about your project. It loads automatically at every session. Put your tech stack, architecture, and gotchas in here. Keep it under 200 lines and commit it to Git.

📌 Layer 2: Skills
Skills are markdown guides Claude auto-invokes from natural language. You don't call them manually; Claude reads the description and decides. Store them in `.claude/skills/<name>/SKILL.md` for project scope, or in `~/.claude/skills/<name>/SKILL.md` for personal use.

📌 Layer 3: Hooks
Hooks are deterministic callbacks that run before or after tool use. Use them for security scripts, linting, or notifications. Exit code 0 means allow; exit code 2 means block.

📌 Layer 4: Agents
Subagents with their own context for complex multi-step work. Store agent prompts in `agents/` and reference them with `@filename`.

The daily workflow that ties it all together:
→ `cd project && claude`
→ Shift + Tab + Tab: Plan Mode
→ Describe your feature intent
→ Shift + Tab: Auto Accept
→ `/compact` to compress context
→ Esc Esc to rewind if needed
→ Commit frequently, start a new session per feature

One cheatsheet. The entire Claude Code stack.

Credit: Brij Kishore Pandey for creating this. #aileadersvietnam #aileaders
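To make Layer 3 concrete: a hook is just an executable that reads the tool-call details as JSON on stdin and signals its decision via exit code (0 allow, 2 block, as described above). Here is a minimal sketch in Python; the `tool_input`/`command` field names and the blocked pattern are assumptions for illustration, not taken from the cheatsheet.

```python
import json
import sys

def decide(payload: dict) -> int:
    """Return 0 to allow the tool call, 2 to block it."""
    command = payload.get("tool_input", {}).get("command", "")
    if "rm -rf" in command:
        # Message on stderr explains the block to Claude.
        print("Blocked: destructive command", file=sys.stderr)
        return 2
    return 0

# In the actual hook script you would wire this up as:
#   sys.exit(decide(json.load(sys.stdin)))
```

Because the check is ordinary code rather than a prompt, the behavior is deterministic: the same input always allows or blocks.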
🚀 Building a REST API in Go Without Frameworks

Sometimes the best way to learn a technology is to remove the abstractions. I created a small project demonstrating how to build a REST API in Go using only the standard library. No frameworks. Just:
• net/http
• encoding/json
• simple routing
• concurrency with sync.Mutex

The API exposes three endpoints:
GET /health
GET /users
POST /users

It stores users in memory and demonstrates how Go handles requests internally.

Example handler registration:
http.HandleFunc("/users", usersHandler)
log.Fatal(http.ListenAndServe(":8080", nil))

Why I like this approach: before reaching for frameworks like Gin, Echo, or Fiber, it's important to understand what happens underneath. Go's standard library is powerful enough to build real services.

Repository: https://lnkd.in/dxHEhvzq
Full Article: https://lnkd.in/djxGbVEC
Auto-label your Claude Code sessions

If you're juggling multiple Claude Code sessions in VS Code, you know the pain: every tab says "Claude Code" and you can't tell which is which. I built a small open-source tool that fixes this. It auto-generates a label for each session (based on what you're working on) and shows it in the status line, optionally renaming VS Code tabs as well.

AI-native setup takes a couple of commands:
git clone <repo link>
cd claude-session-labels
claude "install this"

It uses Claude Code hooks + the built-in statusline API, so no external dependencies. Would love feedback if you try it! https://lnkd.in/dU2zsS6V
I built a little something to make our lives easier.

I've spent a lot of time thinking about why we still write so much boilerplate for APIs in 2026. Often we find ourselves spending a lot of time, or recently a lot of LLM tokens, generating the same CRUD routes and serialisation logic repeatedly. As a developer, I just wanted to define my data and start building, without the "plumbing" or the prompts getting in the way.

So I built Crate: a D language framework that turns a simple struct definition into a full REST, GraphQL, and MCP API in just three lines of code.

Why is this different? While ecosystems like Rust (crudcrate), F# (Type Providers), or Haskell (Servant) have ways to reduce boilerplate, Crate is built on D's unique compile-time introspection. It doesn't just "generate" code probabilistically; it semantically understands your models to bake in authentication, pagination, and multi-protocol support automatically. It's been a labour of love to get that level of safety and flexibility working seamlessly.

If you're a fan of clean code and saving time, I'd love for you to take a look. It's open-source and free to use: https://lnkd.in/dQ5jPerb. Feedback, stars, or just a "hello" are all very much appreciated!

#OpenSource #SideProject #BuildInPublic #DLang #SoftwareEngineering #Rust #FSharp #WebDev #AI
This week I rebuilt the same small API twice: once in Flask and once in FastAPI. Both frameworks work well, but they push you toward very different design habits.

With Flask, the process feels very open:
• define routes
• parse inputs manually
• organize the structure however you want

With FastAPI, the framework pushes structure earlier:
• typed request/response models
• automatic validation
• built-in API documentation

What stood out wasn't performance. It was how quickly API contracts become explicit. With Flask, the structure tends to emerge over time. With FastAPI, the structure is encouraged from the beginning.

Frameworks don't just help you build APIs; they shape how the system evolves.
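The contrast above can be sketched in plain code. FastAPI's typed models are Pydantic-based; as a dependency-free sketch of the underlying idea (the request contract lives in an explicit type, validated at the boundary), using only the standard library:

```python
from dataclasses import dataclass

# Flask-style: parse and validate by hand; the contract lives
# implicitly in the code path of each route.
def create_user_manual(payload: dict) -> dict:
    name = payload.get("name")
    if not isinstance(name, str) or not name:
        raise ValueError("name is required")
    return {"name": name}

# FastAPI-style: the contract is an explicit, reusable type;
# validation runs once, at construction.
@dataclass(frozen=True)
class CreateUser:
    name: str

    def __post_init__(self):
        if not isinstance(self.name, str) or not self.name:
            raise ValueError("name is required")

def create_user_typed(payload: dict) -> CreateUser:
    # Unknown fields fail fast with TypeError instead of being ignored.
    return CreateUser(**payload)
```

In real FastAPI the typed model additionally drives the OpenAPI docs, which is why the contract becomes explicit so early.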
There are two effective contracts for engaging with LLMs:

- BDD-formatted .feature files: the BDD format is a perfect contract language between humans and LLMs. It is easy to agree upon, write, and comprehend for both sides.
- Strictly-typed API clients: requests and responses generated directly from OpenAPI JSON files. They serve as the source of truth for the features, are easy to follow, and facilitate the cross-contextual translation of information within the domain.
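A minimal illustration of the first contract (the feature and step names here are invented for the example):

```gherkin
Feature: Password reset
  Scenario: User requests a reset link
    Given a registered user with email "ada@example.com"
    When she requests a password reset
    Then a reset link is sent to "ada@example.com"
```

Because each step is a short, unambiguous sentence, both a human reviewer and an LLM can agree on the behaviour before any implementation exists.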
If you’ve used code agents on a complex repository, you know the pain: the first part of the session is often just context building. What files to look at. What broke before. What the hidden requirements are. What your team already knows. Codeset helps by giving your existing agent, like Claude Code or Codex, that repository-specific context up front. So it can get to the actual work faster. If you’ve felt this pain too, we’d love to hear what your agent misses most. Try Codeset: https://codeset.ai
I've found adding this to CLAUDE.md helps keep Claude Code on point when reviewing PRs. Combined with Copilot code reviews, it's very effective. Copilot can be a little too pedantic at times, but it also picks up on some valid issues that Claude Code misses. When it goes over the top, Claude Code calls it out, so on the whole the combination works very well.

## PR Review Comments

When asked to address PR review comments:

1. **Fix the code** for each valid comment
2. **Reply to every comment** explaining what was done (or why it's not applicable), referencing the commit hash
3. **Resolve valid comments** after replying, using the GraphQL `resolveReviewThread` mutation
4. **Do not resolve comments you disagree with**: reply explaining why and leave them unresolved for discussion

Use `gh api` to fetch comments, reply via `POST /repos/{owner}/{repo}/pulls/{number}/comments` with `in_reply_to`, and resolve via GraphQL. Always fetch thread IDs first with the `reviewThreads` GraphQL query.

#Claude #ClaudeCode #CodeReviews #PRs
🧠 Turned an 8,400-function C++ codebase into a searchable knowledge base, using Groq + Qwen3

Honestly, the first time I opened llama.cpp, I felt completely lost. Thousands of functions, deep tensor math, barely any documentation. Navigating that codebase felt less like engineering and more like archaeology. So I built Dev Cognition System to fix exactly that.

What it does:
It takes any C/C++ repository, extracts every function using Tree-Sitter (proper AST-level parsing, no brittle regex), sends each one to a Groq-powered Qwen3-32B model for deep semantic analysis, and writes structured markdown notes into an Obsidian vault: automatically, overnight, completely hands-free.

Every note includes:
- Plain-English summary & design rationale
- Performance characteristics
- Hidden implementation insights
- Auto-tags: #memory #gpu #threading #kernel #recursion #accel

Open the vault in Obsidian's Graph View and the architecture of the entire codebase literally appears in front of you. It's one of those moments that genuinely feels like a superpower.

Why Groq + Qwen3-32B? Two reasons: quality and speed. Qwen3-32B brings significantly deeper reasoning to code analysis; the function summaries, hidden insights, and rationale it generates are noticeably more nuanced than those from smaller models. And Groq's inference hardware makes sure that running this across thousands of functions doesn't become a week-long waiting game. Paired with a custom thread-safe, rate-limit-aware Groq client (exponential backoff, 6 retries, configurable intervals), the pipeline runs unattended without babysitting. Start it before you sleep → wake up to a fully annotated codebase.

🌙 What landed in the latest push:
✅ Full test suite: 690+ lines across 7 modules (parser, extractor, Groq client, tagger, writer, prompts, pipeline)
✅ 60+ new AI-generated vault notes: SwiGLU, GEGLU (standard + quick + erf variants), RMS Norm, Group Norm, L2 Norm, SiLU, Leaky ReLU, Out-Product, Scale, Set, and optimised GEMM ops
✅ Vault now spans 77+ module directories from llama.cpp
✅ Rate-limit tuning validated: stable on the Groq free tier across multi-thousand-function runs

Stack: Python · Groq API · Qwen3-32B · Tree-Sitter · Obsidian · Multithreading

This whole thing started from a personal frustration. Turns out a lot of developers share the exact same one. If you work with large C/C++ codebases, or just appreciate AI actually solving a real engineering problem rather than a toy demo, do check it out. (Fun fact: the Linux kernel can be annotated too, with a powerful enough model.)

👉 https://lnkd.in/dAupDEyX

Have you ever felt completely lost dropping into a massive unfamiliar codebase? Drop a comment; I'd genuinely love to know how others handle it.

#Groq #Qwen3 #AI #LLM #OpenSource #Python #TreeSitter #Obsidian #DeveloperTools #BuildInPublic #CodeIntelligence
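The post mentions a thread-safe, rate-limit-aware client with exponential backoff and 6 retries. Here is a minimal sketch of that retry pattern; the names and parameters are illustrative, not the project's actual code.

```python
import random
import time

def with_backoff(call, max_retries=6, base_delay=1.0, max_delay=60.0):
    """Retry `call` on exception with capped exponential backoff plus jitter."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception:
            if attempt == max_retries:
                raise  # exhausted all retries, surface the error
            # Delays grow 1s, 2s, 4s, ... up to max_delay, with a
            # little random jitter so parallel workers desynchronize.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, delay * 0.1))
```

Wrapping every model call this way is what lets a multi-thousand-function run survive free-tier rate limits unattended.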