Gemini now offers a built-in markup tool, allowing you to annotate text and pinpoint precisely what you want to edit.
-
React Context API – Clean State Management

As React applications grow, prop drilling quickly becomes hard to manage. Passing data through multiple component layers makes the codebase messy and difficult to maintain. This is where the React Context API becomes a powerful solution.

Context API allows you to share global state across components without manually passing props at every level. It helps keep your components clean, readable, and well-structured.

Common use cases where Context API works best:
• Authentication & user data
• Theme management (dark/light mode)
• Global UI states (loaders, modals, alerts)
• Shared configuration or settings

For small to medium-sized projects, Context API is a lightweight and efficient alternative to heavier state management libraries. When used correctly, it improves both performance and maintainability.

👉 Clean architecture is not about using more tools, but about using the right tool.

Fiverr: https://lnkd.in/g2S2bkAk
-
Clean Text for LLMs: A 2025 Preprocessing Checklist

LLMs don’t fail because they’re “bad”; they fail because the input is messy.

In this infographic, I break down a 2025-ready checklist for preparing text before sending it to an LLM:
• Remove noise (HTML, boilerplate)
• Normalize & deduplicate
• Preserve structure
• Chunk intelligently
• Optimize for tokens

Whether you’re building RAG systems, embeddings, chatbots, or summarization pipelines, clean text is still the foundation.

👉 Full guide here: https://lnkd.in/d5qFmsyw
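A minimal Python sketch of the first few checklist items — strip HTML noise, normalize whitespace, deduplicate, and chunk. The regexes, the line-level dedup, and the ~4-characters-per-token heuristic are my own assumptions, not taken from the linked guide:

```python
import re

def clean_text(raw: str) -> str:
    """Strip HTML tags, collapse whitespace, and drop exact duplicate lines."""
    text = re.sub(r"<script.*?</script>|<style.*?</style>", " ", raw, flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", text)          # remove remaining HTML tags
    lines, seen = [], set()
    for line in text.splitlines():
        line = re.sub(r"\s+", " ", line).strip()  # normalize whitespace
        if line and line not in seen:             # deduplicate exact repeats
            seen.add(line)
            lines.append(line)
    return "\n".join(lines)

def chunk(text: str, max_tokens: int = 200) -> list[str]:
    """Greedy word-based chunking; assumes roughly 4 characters per token."""
    budget = max_tokens * 4
    chunks, current = [], ""
    for word in text.split():
        if current and len(current) + len(word) + 1 > budget:
            chunks.append(current)
            current = word
        else:
            current = f"{current} {word}".strip()
    if current:
        chunks.append(current)
    return chunks
```

In a real pipeline you would chunk on structural boundaries (headings, paragraphs) rather than raw words, so related content stays together for retrieval.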
-
China's new open-source code model beats Claude Sonnet 4.5 & GPT 5.1 despite far fewer parameters: SWE-Bench Verified (81.4%), BigCodeBench (49.9%), LiveCodeBench v6 (81.1%), all from a 40B-param model.

IQuest-Coder comes from Quest Research, backed by China’s quant hedge fund giant UBIQUANT.

Bifurcated post-training delivers two specialized variants: Thinking models (using reasoning-driven RL for complex problem-solving) and Instruct models (optimized for general coding assistance and instruction-following).

Efficient architecture: the IQuest-Coder-V1-Loop variant introduces a recurrent mechanism that optimizes the trade-off between model capacity and deployment footprint.

Native long context: all models natively support up to 128K tokens without requiring additional scaling techniques.

Check out the models: https://lnkd.in/gPFWWXeG
Technical report: https://lnkd.in/gTDyYPuk
-
Free Resource: Framer MCP Guide 🎁

I've been working with Framer's MCP (Model Context Protocol) integration and put together a comprehensive reference guide that I'm sharing with the community.

If you're using AI tools like Claude to programmatically interact with Framer projects, this guide covers:
→ Core workflow for reading and updating projects
→ Layout system (stack & grid) with all options
→ Styling patterns for colors and text
→ CMS operations
→ Component management
→ Code file creation and updates

It's the reference I wished I had when I started.

GitHub: https://lnkd.in/gSW7JdpH

Hope this helps someone out there. PRs welcome if you want to contribute!

#Framer #MCP #WebDevelopment #AI #OpenSource #DeveloperTools
-
"Context engineering is the delicate art and science of filling the context window with just the right information for the next step." - Karpathy

I have been working on a demo of Anthropic's Agent Skills standard using LangChain: a simple terminal-based content writer agent.

When the agent starts, it scans a directory of Markdown files and reads only the YAML frontmatter, just the skill name and description. This gives the agent a lightweight understanding of what it can do, such as blog_writer, technical_writer, or socialmedia_writer, without loading the full instructions into the context window.

When I ask for something specific, like a blog post, the agent recognizes the intent and dynamically loads the full contents of blog_writer.md. At that point, the detailed instructions, tone guidelines, and templates for that task are injected into the context only when they are needed.

This approach has three major benefits:
• Token efficiency: you are not paying for the tokens of every available skill when only one is in use.
• Focus: the model avoids conflicting instructions by activating only the relevant skill for the task.
• Extensibility: adding a new capability is as simple as dropping a new Markdown file into the skills folder.

Overall, this is a practical implementation of the modular Agent Skills pattern advocated by Anthropic, built with LangChain and LangGraph for orchestration.

LLM used: Mistral Large Latest
LangChain docs: https://lnkd.in/gfjJH9UA
Anthropic documentation: https://lnkd.in/gadXDbbx

The full code is available on my GitHub with a detailed README: https://lnkd.in/gyEVVTdg

#Langchain #Langgraph #AgentSkills #Anthropic #ClaudeCode
-
Hey guys, just saw this on GitHub and it's absolutely wild 🤯

A new open-source code model called IQuest-Coder is apparently beating Claude Sonnet 4.5 and GPT-4o on coding benchmarks. Yeah, you read that right: an OPEN SOURCE model competing with the big players.

As someone building my own systems, this is huge. I think we're hitting the point where the gap between proprietary and open-source AI is shrinking fast. The implications for indie devs and small teams are massive: imagine having GPT-level coding assistance without the API costs or data privacy concerns.

What really gets me excited is that this could democratize AI-assisted development. No more being locked into expensive subscriptions or sending your code to third-party servers. For me, this changes the economics of building products completely.

The timing feels right too. With so many devs getting priced out of premium AI tools, a strong open-source alternative could be a game changer for the whole industry. 🚀

Anyone tried it yet? Would love to hear if it actually lives up to the benchmarks in real-world use.

#ArtificialIntelligence #OpenSource #SoftwareDevelopment #AI

Read more: https://lnkd.in/g-a5GThW
-
📌 React Context API — Solving Prop Drilling the Right Way

If you’ve faced prop drilling, you already know the pain:
➡️ Passing data through multiple components
➡️ Components receiving props they don’t even use
➡️ Tight coupling and messy code

This is where the React Context API becomes useful.

🔍 What Is Context API?
Context API provides a way to share data globally across a component tree without manually passing props at every level. It’s built directly into React — no external library needed.

Typical use cases:
✔ Authentication state
✔ Theme (dark/light mode)
✔ Language / locale
✔ User preferences

🧩 How Context Works (Conceptually)
Context has three core parts:
1️⃣ Context creation
2️⃣ Context Provider — wraps components and provides shared data
3️⃣ Context Consumer — accessed using the useContext() hook

Once wrapped, any component inside the tree can access the data — no prop drilling.

❗ When the context value changes, all consuming components re-render.

🧩 When Context API Is the RIGHT Choice
Use Context API when:
✔ State is global
✔ Updates are infrequent
✔ App size is small to medium

🧠 When NOT to Use Context API
Avoid Context when:
❌ State updates frequently
❌ App is large and complex
❌ Debugging needs predictability

That’s where Redux Toolkit or React Query shine.

🎯 Final Thought
Context API is a powerful architectural tool, not a shortcut.

#ReactJS #ContextAPI #FrontendDevelopment #StateManagement
-
We just shipped a new code editor in Base44. You can now edit code and see the preview side by side - no more jumping between modes to check if your change actually worked. We also added something that's become my favorite little feature: click any element in the preview, and the editor jumps straight to that code. Like inspect element, but you can actually edit it. The kind of thing that just makes building faster.
-
I have seen even experienced developers cursing LLMs for not giving them what they were hoping for. To settle this, I am documenting my learnings, mostly for my own good, and I’ve just started an open-source GitHub page.

Repo link → https://lnkd.in/gECEmHcN

WHY I STARTED THIS
A lot of prompt advice sounds good but fails when you try to use it with real codebases, migrations, agents, and production constraints. This repo focuses on prompt engineering as an engineering discipline, not a chat trick.

TL;DR OF THE PAGE (so far)
• Garbage in, garbage out still applies
• Strong prompts include context, examples, and validation
• Examples beat long explanations (including edge and failure cases)
• Ask models to surface assumptions and uncertainty
• Tests are first-class: unit, integration, functional
• For complex tasks: plan first, then step-by-step execution
• Maintain repo-level agent instructions (global, project, folder)

STATUS
This is just the beginning. I’ll keep iterating, refining, and adding real-world examples. I’ll also keep sharing updates with the community as this evolves.

OPEN SOURCE
The repo is open source. Everyone is welcome to contribute: ideas, templates, counterexamples, or lessons learned.

QUESTION FOR YOU
If you know of any other good repos or write-ups on prompt engineering for development tasks, please share them. I’d love to learn and link to them.
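The "context, examples, and validation" idea can be sketched as a small prompt builder. This is purely my own illustration of the pattern — the section names, template, and function are hypothetical, not taken from the repo:

```python
def build_prompt(
    task: str,
    context: str,
    examples: list[tuple[str, str]],
    checks: list[str],
) -> str:
    """Assemble a prompt with explicit context, few-shot examples, and
    validation criteria the model must satisfy before answering."""
    parts = [f"## Context\n{context}", f"## Task\n{task}"]
    if examples:  # examples beat long explanations
        shots = "\n\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
        parts.append(f"## Examples\n{shots}")
    if checks:  # make validation explicit instead of implied
        bullets = "\n".join(f"- {c}" for c in checks)
        parts.append(f"## Validation\nBefore answering, verify:\n{bullets}")
    # ask the model to surface assumptions and uncertainty
    parts.append(
        "## Assumptions\nList any assumptions or uncertainty before the final answer."
    )
    return "\n\n".join(parts)
```

For example, `build_prompt("Rename the config keys", "Python 3.11 Flask service", [("FOO", "APP_FOO")], ["All call sites updated", "Tests still pass"])` yields a prompt where context, examples, and acceptance checks are separate, auditable sections rather than one vague paragraph.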