How to Use Prompt Engineering for AI Projects

Explore top LinkedIn content from expert professionals.

Summary

Prompt engineering is the process of crafting clear, detailed instructions for artificial intelligence tools to guide them toward accurate and relevant results. Using prompt engineering for AI projects means thoughtfully designing how you talk to AI so it understands your needs and delivers helpful responses.

  • Specify context clearly: Give the AI information about your project, who you are, and what you need so it can tailor its responses to your situation.
  • Structure your requests: Break down tasks into smaller steps, define how you want the output formatted, and include constraints like tone or length for better results.
  • Adapt to each tool: Match your prompting style to the strengths of different AI platforms—some excel with structured commands, others with conversational or research-focused instructions.
Summarized by AI based on LinkedIn member posts
  • Usman Sheikh

    I co-found companies with experts ready to own outcomes, not give advice.

    56,083 followers

    Prompt engineering is the new consulting superpower. Most haven't realized it yet.

    Over the last couple of days, I reviewed the latest prompting guides from Google, Anthropic, and OpenAI. Some of their key recommendations for improving output:

    → Be very specific about the expertise level requested
    → Use structured instructions or meta-prompts
    → Explicitly reference project documents in the prompt
    → Ask the model to "think step by step"

    Based on the guides, here are four ways to immediately level up your prompting skill set as a consultant:

    1. Define the expert persona precisely. "You're a specialist with 15 years in retail supply chain optimization who has worked with Target and Walmart." Why it matters: the model draws on deeper technical patterns, not just general concepts.

    2. Structure the deliverable explicitly. "Provide 3 key insights and their implications, then support each with data-driven evidence." Why it matters: this gives me structured material that needs minimal editing.

    3. Set distinctive success parameters. "Focus on operational inefficiencies that competitors typically overlook." Why it matters: you push the model beyond obvious answers to genuine competitive insights.

    4. Establish the decision context. "This is for a CEO whose risk-averse investors are applying pressure to improve gross margins." Why it matters: the recommendations align with stakeholder realities and urgency.

    These were the main takeaways I found helpful in the guides. When you run these prompts against generic statements, you will see a massive difference in quality and relevance.

    Bonus tips that are working for me:
    → Create prompt templates using the four elements
    → Test different expert personas against the same problem (I regularly use a "senior McKinsey partner" persona to counter my position and detect gaps in my thinking)
    → Ask the model to identify contradictions or gaps in the data before finalizing any recommendations

    We're only scratching the surface of what these "intelligence partners" can offer. Getting better at prompting may be one of the most asymmetric skill opportunities any of us has today. Share your favourite prompting tip below! P.S. Was this post helpful? Should I share one post per week on how I'm improving my AI-related skills?
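The four elements above lend themselves to a reusable template, as the first bonus tip suggests. A minimal sketch in Python (the function name and example strings are my own, not from the cited guides):

```python
# Sketch: a reusable prompt template built from the four elements.
# All names and example strings here are illustrative.

def build_consulting_prompt(persona: str, deliverable: str,
                            success_criteria: str, decision_context: str) -> str:
    """Assemble the four elements into a single prompt string."""
    return "\n".join([
        f"Persona: {persona}",
        f"Deliverable: {deliverable}",
        f"Success criteria: {success_criteria}",
        f"Decision context: {decision_context}",
    ])

prompt = build_consulting_prompt(
    persona="You're a specialist with 15 years in retail supply chain optimization.",
    deliverable="Provide 3 key insights, their implications, and data-driven evidence for each.",
    success_criteria="Focus on operational inefficiencies that competitors typically overlook.",
    decision_context="This is for a CEO under investor pressure to improve gross margins.",
)
print(prompt)
```

Swapping only the persona argument makes it easy to test different experts against the same problem.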

  • Edward Frank Morris

    I build AI frameworks, lead strategy, and teach AI to anyone from Fortune 500s to universities. My face has been on NASDAQ, FT, and Forbes. My jokes have not. Yet.

    35,478 followers

    A few months ago, a colleague screamed at Microsoft Copilot like he was auditioning for Bring Me The Horizon. He typed, “Make this into a presentation.” Copilot spat out something. He yelled, “NO, I SAID PROFESSIONAL!” It revised it. Still wrong. “WHY ARE YOU SO STUPID?”

    And that, dear reader, is when it hit me. It’s not the AI. It’s you. Or rather, your prompts. So, if you've ever felt like ChatGPT, Copilot, Gemini, or any of those AI agents are more "artificial" than "intelligent," rethink how you’re talking to them. Here are 10 prompt engineering fundamentals that’ll stop you from sounding like you're yelling into the void.

    1. Lead with intent. Start with a clear command: “You are an expert…,” “Generate a monthly report…,” “Translate this to French…” This orients the model instantly.
    2. Scope and constraints first. Define boundaries up front: length limits, style guides, data sources, even forbidden terms.
    3. Format your output. Specify a JSON schema, markdown headers, or table columns. Models handle explicit structure far better than free-form prose.
    4. Provide minimal, high-quality examples. Two or three exemplar Q→A pairs beat a paragraph of explanation every time.
    5. Isolate subtasks. Break complex workflows into discrete prompts (prompt chaining). One prompt per action: analyze, summarize, critique, then assemble.
    6. Anchor with delimiters. Use triple backticks or XML tags to fence inputs. It cuts hallucinations sharply.
    7. Inject domain signals. Name specific frameworks (“Use SWOT analysis,” “Apply the Eisenhower Matrix,” “Leverage Porter’s Five Forces”) to nudge depth.
    8. Iterate rapidly. Version your prompts like code. A/B test variations and track which phrasing yields the cleanest output.
    9. Tune the “why.” Always ask for reasoning steps. Always.
    10. Template and automate. Build parameterized prompt templates in your repo.

    Still with me? Good. Bonus tips:

    1. Token economy awareness. Place critical context in the first 200 tokens; anything beyond 1,500 risks context drift.
    2. Temperature vs. prompt depth. Higher temperature amplifies creativity, but only if your prompt is concise. Otherwise you get noise.
    3. Use a “chain of questions.” Instead of one long prompt, fire sequential, linked questions. You’ll maintain context and sharpen focus.
    4. Mirror the LLM’s own language. Scan model outputs for phrasing patterns and reflect those idioms back in your prompts.
    5. Treat prompts as living docs. Embed metrics in comments: note output quality, error rates, and hallucination frequency. Keep iterating until the ROI justifies the effort.

    And finally, the bit no one wants to hear: you get better at using AI by using AI. Practice like you’re training a dragon. Eventually, it listens. And when it does, it’s magic. You now know more about prompt engineering than 98% of LinkedIn. Which means you should probably repost this. Just saying. ♻️
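Fundamentals 3 and 6 (explicit output format, delimiter-fenced input) combine naturally in one helper. A sketch, using XML-style tags as the delimiter; the function name and schema are hypothetical:

```python
# Sketch: explicit output format plus delimiter-fenced input.
# The helper name and the JSON schema shown are illustrative, not a real API.

def fenced_prompt(instruction: str, document: str) -> str:
    """Fence untrusted input in XML-style tags and pin down the output format."""
    return (
        f"{instruction}\n"
        "Respond only with JSON matching this schema: "
        '{"summary": str, "key_points": [str]}\n'
        f"<document>\n{document}\n</document>"
    )

p = fenced_prompt("Summarize the document below.",
                  "Q3 revenue grew 12% quarter over quarter.")
print(p)
```

The tags make the boundary between instructions and data unambiguous, so the model is less likely to treat document text as a command.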

  • Laura Jeffords Greenberg

    General Counsel at Worksome | Building AI-Native Legal Functions | Board Member & Speaker

    18,008 followers

    Most people don’t realize: AI can coach you on how to prompt it better. Here’s how to turn AI into your personal prompt coach, so you get better results and learn how to use AI faster.

    Try this two-step fix:
    1. State your goal and context.
    2. Ask one of these questions:
    ➡️ "How would you rewrite my prompt to get more [specific, creative, detailed, etc.] responses?"
    ➡️ "If you were trying to get [desired outcome], how would you modify this prompt?"
    ➡️ "If this were your prompt, what would you change to make it more effective?"
    ➡️ "What elements are missing from my prompt that would help you generate better responses?"
    ➡️ "How might you enhance this prompt to avoid common pitfalls or misinterpretations?"
    ➡️ Or simply: "Improve my prompt."

    Before: "Explain force majeure clauses."
    After: "Analyze how courts in California have interpreted force majeure clauses in commercial leases since COVID-19, focusing on what constitutes 'unforeseeable circumstances' and the burden of proof required to invoke these provisions."

    The difference? A broad, non-jurisdiction-specific, superficial overview vs. actionable legal insight on commercial leases in California. Not only will you get better outcomes, but you will learn how to improve your prompting in the process.

    What are your go-to strategies or favorite prompts to optimize AI responses?
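The two-step fix above is easy to wrap as a small helper: state the goal and context, then append one of the coaching questions. A sketch (the function and list names are my own):

```python
# Sketch of the two-step "prompt coach" pattern. Names are illustrative.

COACHING_QUESTIONS = [
    "How would you rewrite my prompt to get more detailed responses?",
    "What elements are missing from my prompt that would help you generate better responses?",
    "Improve my prompt.",
]

def coach_prompt(goal: str, context: str, draft_prompt: str,
                 question_index: int = 0) -> str:
    """Combine goal, context, and the draft prompt with one coaching question."""
    return (
        f"My goal: {goal}\n"
        f"Context: {context}\n"
        f"My current prompt: {draft_prompt}\n"
        f"{COACHING_QUESTIONS[question_index]}"
    )

msg = coach_prompt(
    goal="Get actionable legal analysis",
    context="Commercial leases in California since COVID-19",
    draft_prompt="Explain force majeure clauses.",
)
print(msg)
```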

  • Archana Dhankar

    LinkedIn Top Voice | VP Marketing EMEA, Proofpoint | TEDx Speaker | AI-First Marketing Leader | Building the Future of B2B Marketing & Sales with AI

    7,694 followers

    The difference between poor AI outputs and great ones? It's not the tool. It's how you prompt it.

    After working with teams across multiple industries on AI adoption, I've noticed this pattern: most people write prompts. The best people architect them.

    Here's what a typical prompt looks like: "Write me an email about our new product." That's just a task. You've given the AI 20% of what it needs.

    Here's the 5-part Universal Prompt Architecture that works across ChatGPT, Claude, Gemini, Copilot, and any platform:
    1. CONTEXT: Who you are + what the AI needs to know
    2. TASK: The specific output you need
    3. CONSTRAINTS: Your non-negotiables (tone, length, what to avoid)
    4. OUTPUT FORMAT: Show the structure; don't make the AI guess
    5. QUALITY CHECK: How you'll validate the output

    When you use all 5 parts together:
    ✅ Output quality jumps 50%+
    ✅ Revision cycles drop dramatically
    ✅ It works across every major AI platform

    I've trained hundreds of people on this framework. It sticks because it forces you to think before you prompt. The copy-paste template is pinned in the comments 📌👇

    This is Week 1 of my 5-part series, "AI That Ships." Every Tuesday for the next 5 weeks, I'm sharing practical AI frameworks that actually work across tools, teams, and industries. Follow me to get the full series 🔔

    What's the one thing you struggle with when prompting AI? #AIThatShips #AIinMarketing #PromptEngineering
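The 5-part architecture above maps cleanly onto a small data structure. A sketch, with my own class and field names standing in for the five parts:

```python
# Sketch: the 5-part Universal Prompt Architecture as a template.
# The dataclass, field names, and example values are illustrative.

from dataclasses import dataclass

@dataclass
class PromptSpec:
    context: str
    task: str
    constraints: str
    output_format: str
    quality_check: str

    def render(self) -> str:
        """Render the five parts as labeled lines, in order."""
        return "\n".join([
            f"CONTEXT: {self.context}",
            f"TASK: {self.task}",
            f"CONSTRAINTS: {self.constraints}",
            f"OUTPUT FORMAT: {self.output_format}",
            f"QUALITY CHECK: {self.quality_check}",
        ])

spec = PromptSpec(
    context="I'm a product marketer at a B2B SaaS company launching an analytics add-on.",
    task="Write a launch email to existing customers.",
    constraints="Friendly but professional tone, under 150 words, no jargon.",
    output_format="Subject line, then three short paragraphs, then one CTA label.",
    quality_check="Flag any claim in the draft that would need a source before sending.",
)
print(spec.render())
```

Because every field is required, the structure itself forces you to think through all five parts before prompting.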

  • Jonathan M K.

    VP of GTM Strategy & Marketing - Momentum | Founder GTM AI Academy & Cofounder AI Business Network | Business impact > Learning Tools | Proud Dad of Twins

    42,775 followers

    Most people prompt every AI the same way. That’s why their outputs are mediocre.

    I’ve tested hundreds of prompts across every major AI platform. The difference between average and exceptional outputs isn’t prompt length. It’s prompt style matched to the tool. This framework breaks it down:

    ChatGPT → Prompt like an instructor. Start with a role assignment: “Act as a productivity coach.” Define the specific task. Ask for step-by-step action plans with timelines. Specify your desired format: table, outline, bullet list. Request tool recommendations. ChatGPT excels at structured guidance and task planning. Give it constraints and it delivers.

    Perplexity → Prompt like a research analyst. Lead with specific information requests. Include relevant keywords, timeframes, and geographies. Ask for cited sources and reference links for verification. Request trend summaries with citations. Follow up with comparison questions that require data-backed reasoning. Perplexity is built for evidence-based analysis. Treat it like a junior analyst who needs clear research parameters.

    Grok → Prompt like a candid friend. Use a conversational tone: “Hey Grok, what do you think about…” Add emotional context. Ask for honest, unfiltered feedback and alternative perspectives. Request comparisons or opposing viewpoints to challenge your assumptions. Ask for common pitfalls and mistakes to avoid. Grok thrives on casual brainstorming and identifying blind spots others miss.

    Gemini → Prompt like a project planner. Explain the overall project goal upfront. Define expected outputs: tasks, subtasks, timelines. Ask about Google Workspace integrations. Request detailed weekly or daily action plans. Ask for dependency breakdowns and milestones. Request formatted outputs like tables and charts. Gemini is optimized for project management and collaborative workflows.

    Why this matters: each model has a personality bias baked into its training data and architecture. ChatGPT leans toward structured helpfulness, Perplexity toward verification and sourcing, Grok toward irreverence and contrarianism, Gemini toward organizational workflows. When you fight these tendencies, you get generic outputs. When you lean into them, you unlock capabilities most users never see.

    The tactical shift: stop copying prompts between platforms. Start adapting your communication style to each tool’s strengths. Same question, different framing = dramatically different quality. One prompt style for all tools is lazy. Adapted prompting is leverage.

  • The most underrated skill for 2025? (Not code. Not ads. Not funnels.) It's knowing how to talk to AI.

    Seriously. Prompt writing is becoming the new leverage skill, and almost no one is teaching it well. I’ve built AI workflows for content, marketing, and growth. They save me 10+ hours/week and cut down on team overhead.

    The key? 👉 It’s not just asking ChatGPT questions. It’s knowing how to structure your prompts with frameworks like these. Here are 4 frameworks I use to get 🔥 outputs in minutes:

    1. R-T-F → Role → Task → Format. “Act as a copywriter. Write an Instagram ad script. Format it as a conversation.”
    2. T-A-G → Task → Action → Goal. “Review my website copy. Suggest changes. Goal: boost conversion by 15%.”
    3. B-A-B → Before → After → Bridge. “Traffic is low. I want 10k monthly visitors. Give me a 90-day SEO plan.”
    4. C-A-R-E → Context → Action → Result → Example. “We’re launching a podcast. Write a guest outreach email. Goal: book 10 experts.”

    You’re not just prompting. You’re building AI systems. Mastering this skill will:
    ✅ 10x your productivity
    ✅ Reduce dependency on agencies
    ✅ Help you scale solo (or with a lean team)

    The AI era belongs to the strategic communicators. Learn how to prompt, and you won’t need to hire half as much. 📌 Save this post. 🔁 Repost if you believe AI is a partner, not a replacement. #ChatGPT #PromptEngineering
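The four frameworks above can be captured as fill-in templates. A sketch, where the dictionary, placeholder names, and template wording are my own approximations of the patterns:

```python
# Sketch: the R-T-F / T-A-G / B-A-B / C-A-R-E frameworks as fill-in templates.
# The dictionary, placeholder names, and template wording are illustrative.

FRAMEWORKS = {
    "R-T-F": "Act as {role}. {task} Format: {format}.",
    "T-A-G": "{task} {action} Goal: {goal}.",
    "B-A-B": "Before: {before} After: {after} Bridge: give me {bridge}.",
    "C-A-R-E": "Context: {context} Action: {action} Result: {result} Example: {example}",
}

def fill(framework: str, **fields: str) -> str:
    """Fill one framework template with the given field values."""
    return FRAMEWORKS[framework].format(**fields)

print(fill("R-T-F",
           role="a copywriter",
           task="Write an Instagram ad script.",
           format="a conversation"))
```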

  • Racheal Kuranchie

    AWS Community Builder | Backend Engineer | AI Security & Cloud Infrastructure | 98% Latency Reduction | Ex-Telecel | Google Certified GenAI Leader | Speaker | Helping Non-Techies Pivot into Tech

    6,113 followers

    Monday Technical Deep Dive: Prompting for Precision

    You've probably heard about AI everywhere, but are you prompting it right to get the best results? Getting useful output from models like Gemini or ChatGPT isn't magic; it's a skill called prompt engineering. If your prompt is weak, your output will be too. I recently attended Google’s Generative AI Leader Program, and it solidified a core principle: better inputs = better outputs.

    Here are three simple techniques to immediately improve your results:

    1. Zero-Shot Prompting (The Baseline). The simplest approach: you give the model no examples, just the instruction. Example: "Explain the concept of API idempotency." When to use it: for basic questions, definitions, or tasks where the model already has extensive knowledge. It's your starting point.

    2. Few-Shot Prompting (The Teacher). You give the model a few examples of the desired input/output format before asking your actual question. You are essentially teaching it your style. Example: "Here are three examples of how I write a professional email closing: [Example 1], [Example 2], [Example 3]. Now, write an email to a recruiter following this style." When to use it: when the output needs to match a specific format, tone, or structure (e.g., code functions, marketing copy, or technical documentation).

    3. Chain-of-Thought (CoT) Prompting (The Analyst). The most powerful technique for complex tasks: you instruct the model to explain its reasoning step by step before providing the final answer. Example: "Before giving the final answer, first list and explain the security risks associated with deploying this new cloud function. Then, suggest three mitigation strategies." When to use it: for complex analysis, multi-step problem-solving, or debugging. For me, this is essential when working on AI and security concepts, where you need verifiable reasoning.

    Prompting is a skill that will only grow in importance. Which of these techniques are you going to test today? Let me know your results! #GenerativeAI #PromptEngineering #TechnicalDeepDive #SoftwareEngineering #AI
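The three techniques differ mainly in how the message history is shaped. A sketch of each as a chat-style message list, the structure most LLM APIs accept; the example strings are illustrative:

```python
# Sketch: zero-shot, few-shot, and chain-of-thought prompts as message lists.
# Example strings are illustrative.

# Zero-shot: just the instruction, no examples.
zero_shot = [
    {"role": "user", "content": "Explain the concept of API idempotency."},
]

# Few-shot: demonstration pairs teach the desired style before the real request.
few_shot = [
    {"role": "user", "content": "Close this email professionally: meeting recap"},
    {"role": "assistant", "content": "Best regards,\nRacheal"},
    {"role": "user", "content": "Close this email professionally: follow-up"},
    {"role": "assistant", "content": "Warm regards,\nRacheal"},
    {"role": "user", "content": "Close this email professionally: note to a recruiter"},
]

# Chain-of-thought: ask for the reasoning before the final answer.
cot = [
    {"role": "user", "content": (
        "Before giving the final answer, first list and explain the security "
        "risks of deploying this new cloud function. Then suggest three "
        "mitigation strategies.")},
]

print(len(zero_shot), len(few_shot), len(cot))
```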

  • Vipul Kella MD, MBA

    ER Doc | Chief Medical Officer | Venture Capital | Medical Expert

    7,417 followers

    The Most Valuable Skill for the Future of Healthcare? Prompt Engineering.

    Most health tech conversations these days revolve around AI models, FDA pathways, or integration into clinical workflows. But the overlooked skill, the one that will separate winners from everyone else? Prompt engineering. Let me explain.

    AI is only as good as the instructions it receives. In healthcare, precision isn’t optional; it’s the difference between true insight and noise. And we are inundated with noise right now. Most discussions of adoption ignore the fact that the quality of the output depends entirely on the quality of the prompt.

    Think of prompts as the new “clinical order set.” Generic orders: generic care. Nuanced, context-rich orders: actionable outcomes. The same applies to AI: “Summarize this chart” yields a generic note. “Summarize the last three ER visits, focusing on NIHSS scores, timing of neuro consults, and anticoagulant use” produces a document you can act on.

    In practice: clinicians will need to master diagnostic prompting to ensure AI tools triage, document, and support care reliably. Administrators will need strategic prompting to model staffing, reimbursement, and readmission scenarios with real complexity. Researchers and policymakers will need analytical prompting to extract meaning from vast datasets and policy landscapes without falling into bias.

    Prompt engineering isn’t some technical hack. It’s a mindset shift toward structured, systems-level thinking. In healthcare, we already value precision in handoffs, consults, and policies; prompting is simply the extension of that discipline into the AI era. Over the next 3–5 years, prompt fluency will become as essential as EHR literacy was a generation ago. The difference: those who master it won’t just keep up with change, they’ll accelerate ahead of it.

  • Kashif M.

    President | CTO | GenAI • Cloud • SaaS • FinOps • M&A | Board & C-Suite Advisor

    4,239 followers

    🧠 Designing AI That Thinks: Mastering Agentic Prompting for Smarter Results

    Have you ever used an LLM and felt it gave up too soon? Or worse, guessed its way through a task? Yeah, I've been there. Most of the time, the prompt is the problem. To get AI that acts more like a helpful agent and less like a chatbot on autopilot, you need to prompt it like one.

    Here are the three key components of an effective agentic prompt:

    🔁 Persistence: Ensure the model understands it's in a multi-turn interaction and shouldn't yield control prematurely.
    🧾 Example: "You are an agent; please continue working until the user's query is resolved. Only terminate your turn when you are certain the problem is solved."

    🧰 Tool Usage: Encourage the model to use available tools, especially when uncertain, instead of guessing.
    🧾 Example: "If you're unsure about file content or codebase structure related to the user's request, use your tools to read files and gather the necessary information. Do not guess or fabricate answers."

    🧠 Planning: Prompt it to plan before actions and reflect afterward, preventing reactive tool calls with no strategy.
    🧾 Example: "You must plan extensively before each function call and reflect on the outcomes of previous calls. Avoid completing the task solely through a sequence of function calls, as this can hinder insightful problem-solving."

    💡 I've used this format in AI-powered research and decision-support tools and saw a clear boost in response quality and reliability.

    👉 Takeaway: agentic prompting turns a passive assistant into an active problem solver. The difference is in the details. Are you using these techniques in your prompts? I'd love to hear what's working for you; leave a comment, or let's connect! #PromptEngineering #AgenticPrompting #LLM #AIWorkflow
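The three components above are typically concatenated into one system prompt. A sketch, where the constant names and the example task path are my own illustrations:

```python
# Sketch: assembling persistence, tool usage, and planning into one system
# prompt. Constant names and the example task are illustrative.

PERSISTENCE = ("You are an agent; continue working until the user's query is "
               "resolved. Only terminate your turn when you are certain the "
               "problem is solved.")
TOOL_USAGE = ("If you're unsure about file content or codebase structure, use "
              "your tools to read files and gather information. Do not guess "
              "or fabricate answers.")
PLANNING = ("Plan extensively before each function call and reflect on the "
            "outcomes of previous calls.")

def agentic_system_prompt(task_description: str) -> str:
    """Join the three agentic components with the concrete task."""
    return "\n\n".join([PERSISTENCE, TOOL_USAGE, PLANNING,
                        f"Task: {task_description}"])

sp = agentic_system_prompt("Fix the failing unit test in utils/date_parse.py")
print(sp)
```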

  • Jason Gulya

    Exploring the Connections Between GenAI, Alternative Assessment, and Process-Minded Teaching | Professor of English and Communications at Berkeley College | Keynote Speaker | Mentor for AAC&U’s AI Institute

    41,663 followers

    At Berkeley College, I teach structured prompting. Or rather, I teach a structured approach to pre-prompting, prompting, and post-prompting. Here’s what it looks like.

    PRE-PROMPTING

    1. Reflect on your purpose and goal. Go in with a clear sense (as much as you can) of what you want to accomplish. Actually write it down. Lay out what your initial approach will be. Are you going to approach AI as a co-pilot, a co-thinker, or as a hybrid? Why are you choosing that model for this specific project and purpose?

    2. Design a workflow that you think will complete that project. Lay out the list of steps. Figure out what your role will be and what the AI’s role will be. Set up a brief pivot plan in case things start to go awry.

    PROMPTING

    3. Engage with the bot. If you’re treating AI as a co-pilot, set up a detailed prompt. If you’re treating it as a co-thinker, shorten the prompt and spend more time on follow-up messages. If you’re treating it as a hybrid, set out how you want to combine the two and defend that approach.

    4. Iterate. Keep going until you get what you want out of the bot. Share that full transcript. (I’m working with Mike Kentz on ways to assess the transcript.)

    POST-PROMPTING

    5. Work the AI’s response into your workflow. How are you going to use what the AI generated? Where does it fit into your process? This is where you check for hallucinations, move on from the AI to another AI (or to a non-AI approach), or something else. Defend your use of AI.

    6. Reflect. What did you gain or lose when you used AI? What would you change about your process? What were the implications (ethical and otherwise) of using AI this way?

    My students submit a process-folio detailing the process they used. (This is for our class, AI-Powered Communication.) I’m teaching my students to put their use of AI into the context of a larger, self-directed work process. Sometimes they keep going with how they’re using AI. Sometimes they move on from AI altogether. I’m happy as long as they’re directing it and making their own informed decisions.
