There’s a new breed of GenAI Application Engineers who can build more powerful applications faster than was possible before, thanks to generative AI. Individuals who can play this role are highly sought after by businesses, but the job description is still coming into focus. Let me describe their key skills, as well as the sorts of interview questions I use to identify them.

Skilled GenAI Application Engineers meet two primary criteria: (i) They are able to use the new AI building blocks to quickly build powerful applications. (ii) They are able to use AI assistance to carry out rapid engineering, building software systems in dramatically less time than was possible before. In addition, good product/design instincts are a significant bonus.

AI building blocks. If you own a lot of copies of only a single type of Lego brick, you might be able to build some basic structures. But if you own many types of bricks, you can combine them rapidly to form complex, functional structures. Software frameworks, SDKs, and other such tools are like that. If all you know is how to call a large language model (LLM) API, that's a great start. But if you have a broad range of building block types — such as prompting techniques, agentic frameworks, evals, guardrails, RAG, the voice stack, async programming, data extraction, embeddings/vector DBs, model fine-tuning, graph DB usage with LLMs, agentic browser/computer use, MCP, reasoning models, and so on — then you can create much richer combinations of building blocks. The number of powerful AI building blocks continues to grow rapidly, and as open-source contributors and businesses make more of them available, staying on top of what's available helps you keep expanding what you can build. Even as new building blocks appear, many from one to two years ago (such as eval techniques or frameworks for using vector DBs) remain very relevant today.

AI-assisted coding.
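To make the Lego-brick analogy concrete, here is a toy sketch of combining two of the building blocks named above — embeddings-based retrieval and prompting — into a minimal RAG flow. The `embed()` function here is a hypothetical stand-in (bag-of-words vectors); a real system would call an embedding model and a vector DB.

```python
# Toy RAG: retrieve the most relevant document, then ground a prompt in it.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Hypothetical stand-in for an embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Retrieval block: rank documents by similarity to the query.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Prompting block: ground the model's answer in retrieved context.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Evals measure how well an LLM pipeline performs on a task.",
    "Guardrails filter unsafe inputs and outputs.",
]
print(build_prompt("What do evals measure?", docs))
```

The point isn't this particular toy implementation, but that each block (embedding, retrieval, prompt assembly) is swappable: knowing many blocks lets you recombine them quickly.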
AI-assisted coding tools enable developers to be far more productive, and such tools are advancing rapidly. GitHub Copilot, first announced in 2021 (and made widely available in 2022), pioneered modern code autocompletion. Shortly after, a new breed of AI-enabled IDEs such as Cursor and Windsurf offered much better code Q&A and code generation. As LLMs improved, the AI-assisted coding tools built on them improved as well. Now we have highly agentic coding assistants such as OpenAI’s Codex and Anthropic’s Claude Code (which I really enjoy using and find impressive in its ability to write code, test, and debug autonomously for many iterations). In the hands of skilled engineers — who don’t just “vibe code” but deeply understand AI and software architecture fundamentals and can steer a system toward a thoughtfully selected product goal — these tools make it possible to build software with unmatched speed and efficiency. [Truncated due to length limit. Full post: https://lnkd.in/gsztgv2f ]
How AI Coding Tools Drive Rapid Adoption
Explore top LinkedIn content from expert professionals.
Summary
AI-powered coding tools are transforming the software development landscape by enabling rapid application creation and automation, thanks to capabilities like code generation, debugging, and integration of complex functionalities. These tools are enhancing collaboration, boosting productivity, and driving the faster adoption of AI in industries.
- Master AI building blocks: Stay updated on diverse and evolving AI tools like large language model APIs, embeddings, and agentic frameworks to design innovative, robust software systems.
- Adopt code-first approaches: Embrace code-driven workflows to improve speed, scalability, and efficiency while maintaining high governance and easy integration with AI technologies.
- Encourage collaboration: Create opportunities for team-wide AI knowledge-sharing through initiatives like hackathons, cross-functional working groups, and regular demonstrations of AI-powered prototypes.
UI Tools vs. Code-Driven Tools in an AI World (I’m betting on code)

UI-based tools are powerful for certain teams (think Informatica, Talend, Ab Initio, NiFi, ODI, OCI Data Integration, etc. — correct me if I missed or miscategorized anything). But in 2025, code-driven stacks (Airflow, dbt, PL/SQL, Python/SQL scripts, CI/CD) are where AI multiplies productivity.

Why code-first wins with AI:
👉 AI-native workflow. LLMs excel at reading/writing code. Prompt → generate DAGs, dbt models, tests, docs. PRs auto-reviewed with suggestions. Hard to do that on drag-and-drop canvases.
👉 Code review & governance. Git, pull requests, code owners, linters, type checks, unit tests → real software engineering for data.
👉 Faster iteration. Autocomplete & code generation speed up new pipelines, refactors, and bug fixes without hunting through visual canvases.
👉 Reproducibility. Everything as code (pipelines, configs, infra) → deterministic builds, ephemeral envs, rollback in seconds.
👉 Composability. Leverage rich ecosystems (Airflow providers, dbt packages, Python libs). Extend in minutes, not months.
👉 Portability & cost control. Run anywhere (containers/K8s). Avoid “designer tax” and proprietary lock-in for everyday changes.
👉 Observability-as-code. Embed SLAs, data tests, lineage, and quality checks directly next to the logic.

A day in code-first with AI:
1. Describe a new transformation in plain English.
2. AI scaffolds a dbt model, tests, and documentation.
3. PR triggers CI (compile, unit/data tests, style).
4. AI reviews the diff; you approve → auto-deploy.
5. Airflow picks it up with alerting & SLAs baked in.

TL;DR: In an AI-driven world, pipelines written as code are easier to generate, review, test, version, ship, and scale. UI tools still have a place for governed, one-off, or citizen scenarios — but for velocity and reliability, code-first + AI is unmatched.
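As a rough sketch of the scaffolding step in that day-in-the-life, here is what "everything as code" can look like in practice. The `scaffold_model()` helper and its templates are hypothetical; in a real setup an LLM would generate the SQL body, and dbt's own compile/test cycle plus CI would gate the result.

```python
# Sketch: scaffold a dbt-style model file and a schema.yml with data tests,
# so the output is reviewable in a PR like any other code.
from pathlib import Path
import textwrap

def scaffold_model(name: str, description: str, out_dir: str = "models") -> Path:
    """Write a stub model with its description wired into schema.yml."""
    model_dir = Path(out_dir)
    model_dir.mkdir(parents=True, exist_ok=True)
    # Model body: a placeholder select, to be filled in by AI and reviewed.
    sql = textwrap.dedent(f"""\
        -- {description}
        -- TODO: body to be generated and reviewed, then gated by CI tests.
        select * from {{{{ ref('stg_source') }}}}
    """)
    (model_dir / f"{name}.sql").write_text(sql)
    # Observability-as-code: tests and docs live next to the logic.
    schema = textwrap.dedent(f"""\
        version: 2
        models:
          - name: {name}
            description: "{description}"
            columns:
              - name: id
                tests: [not_null, unique]
    """)
    (model_dir / "schema.yml").write_text(schema)
    return model_dir / f"{name}.sql"

path = scaffold_model("daily_orders", "Daily order rollup by customer")
print(path.read_text())
```

Because both files are plain text under version control, every downstream step — linting, diff review, data tests, rollback — falls out of ordinary software engineering practice, which is exactly the point of code-first.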
If this resonates, follow Durga Gadiraju to hear more about how AI is disrupting Data Engineering, ETL/ELT, and Analytics — with practical playbooks you can apply today. #DataEngineering #ETL #ELT #AI #GenAI #Airflow #dbt #PLSQL #Python #DataQuality #MLOps #DevOpsForData #Infolob #OCI #GCP #CloudData
This seems to be on everyone’s mind: how to operationalize your product team around AI. Peter Yang and I recently chatted about this topic, and here’s what I shared about how we are doing this at Duolingo.

For improving our product:
- Using AI to solve problems that weren’t solvable before. One of the problems we had been trying to solve for years was conversation practice. With our Max feature, Video Call, learners can now practice conversations with our character Lily. The conversations are also personalized to each learner’s proficiency level.
- Prototyping with AI to speed up the product process. For example, for Duolingo Chess, PMs vibe-coded with LLMs to quickly build a prototype. This decreased rounds of iteration, allowing our engineers to start building the final product much sooner.
- Integrating AI into our tooling to scale. This allowed us to go from 100 language courses in 12 years to nearly 150 new ones in the last 12 months.

For increasing AI adoption:
- Building-with-AI Slack channels. Created an AI Slack channel for people to show and tell, and to share prototypes and tips.
- “AI Show and Tell” at all-hands meetings. Added a five-minute live demo slot in every all-hands meeting for people to share updates on AI work.
- FriAIdays. Protected a two-hour block every Friday for hands-on experimentation and demos.
- Function-specific AI working groups. Assembled a cross-functional group (Eng, PM, Design, etc.) to test new tools and share best practices with the rest of the org.
- Company-wide AI hackathon. Scheduled a 3-day hackathon focused on using generative AI.

Here are some of our favorite AI tools and how we are using them:
- ChatGPT as a general assistant
- Cursor or Replit for vibe coding or prototyping
- Granola or Fathom for taking meeting notes
- Glean for internal company search

#productmanagement #duolingo