From the course: Building AI Literacy and Fluency with Microsoft

Prompt engineering

The way we interact with AI has transformed. We've moved beyond simple commands and queries to a more sophisticated method of communication known as prompt engineering. Prompt engineering is the art of crafting prompts that guide an LLM-based generative AI to produce a desired response. A prompt is the input you give to a generative AI experience like Microsoft Copilot, telling it what you want it to do and how. It consists of two main components: instruction and context. The instruction is the written part of the prompt that states the task and the objective. It should be clear and specific, so Copilot knows exactly what you expect from it. The context provides information for the response, such as the intended audience and the desired tone. It should be relevant and appropriate, so Copilot can tailor its response to your needs.

Once a prompt is submitted, Copilot reads the text in chunks called tokens, which can be as short as a single character or as long as a word. This matters because every model has a limit on the amount of text it can process at once, which is why it's important to keep our prompts concise and to the point. Let's look at this using Microsoft Copilot.

Step 1: Understanding the prompt. A prompt is your direct line of communication with Copilot. It's how you instruct it to assist you in your tasks. A well-constructed prompt consists of two essential elements: the instruction and the context. The instruction is what you're asking Copilot to do. It should be concise and explicit. For instance, "Draft an email to a client" or "Generate a project management plan." The context gives Copilot the necessary background to customize its response to your needs. It includes details like the target audience, desired tone, level of detail, and any specific guidelines or limitations.

Step 2: Crafting your prompt.
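The instruction-plus-context structure, and the token limit mentioned above, can be sketched in a few lines of Python. The helper names and the rough four-characters-per-token estimate are illustrative assumptions for this course, not part of Copilot or any official API:

```python
def build_prompt(instruction: str, context: str) -> str:
    """Combine a clear instruction with supporting context into one prompt."""
    return f"{instruction} {context}"

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: about four characters per token is a
    common rule of thumb for English text; real tokenizers differ."""
    return max(1, len(text) // 4)

prompt = build_prompt(
    "Draft an email to a client",
    "announcing a one-week delay; audience: a busy executive; tone: apologetic but confident.",
)
print(prompt)
print(f"Estimated tokens: {estimate_tokens(prompt)}")
```

Keeping an eye on even a rough token estimate like this reinforces the point above: concise prompts leave more of the model's capacity for its response.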
When you're prepared to engage with Copilot, launch the webpage and enter a prompt such as: "Please create a comprehensive professional guide for adult learners on how to effectively network in their industry, with a focus on digital platforms." Breaking down the example: the instruction is "Create a comprehensive professional guide," and the context is "for adult learners on how to effectively network in their industry, with a focus on digital platforms."

Step 3: Submitting the prompt. After crafting your prompt, submit it. Copilot will then dissect the instruction and context into tokens and produce a response grounded in its interpretation. Tokens are the segments of text that the model uses to comprehend and formulate responses. They can be complete words or fragments of words. For example, "network" may be a single token, while "networking" may be divided into "network" and "ing," depending on the model's tokenization method.

Step 4: Reviewing the response. Once Copilot delivers its output, examine it to verify it aligns with your objectives. If it doesn't quite hit the mark, you can refine your prompt and resubmit it to achieve a more precise outcome. This process helps you collaborate with Copilot more effectively over time.

As we advance in LLM-based generative AI, the art of prompt engineering becomes increasingly important. It's not just what we ask, but how we ask it. By understanding and effectively using prompts, we can harness the full potential of generative AI experiences like Microsoft Copilot. The key to a successful interaction lies in the clarity of your instruction and the relevance of your context.
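The subword splitting described in Step 3 can be illustrated with a toy greedy tokenizer. The vocabulary below is invented purely for this example; real models behind experiences like Copilot use byte-pair encoding with vocabularies of tens of thousands of entries, so actual splits will differ:

```python
def toy_tokenize(text: str, vocab: set[str]) -> list[str]:
    """Greedy longest-match tokenizer: repeatedly take the longest
    vocabulary entry that prefixes the remaining text, falling back
    to a single character when nothing matches."""
    tokens = []
    while text:
        for length in range(len(text), 0, -1):
            piece = text[:length]
            if piece in vocab or length == 1:
                tokens.append(piece)
                text = text[length:]
                break
    return tokens

# Invented vocabulary for illustration only.
vocab = {"network", "ing", "net", "work"}
print(toy_tokenize("networking", vocab))  # ['network', 'ing']
```

This mirrors the "network" plus "ing" split from the walkthrough: a word the vocabulary doesn't contain whole is broken into the largest known fragments.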