Writing Effective Prompts

Video Tutorial

A practical how-to guide with detailed techniques for writing prompts that get better results from Microsoft 365 Copilot. Covers specificity, context, examples, and iteration, illustrated with many government-relevant prompt examples.

Duration: 10:00 · Published: February 08, 2026 · Audience: End-user

Overview

Knowing the prompting framework—Goal, Context, Source, Expectations—is the foundation. But knowing the framework and consistently getting great results are two different things. The difference comes from practical techniques: being precise with your language, providing the right amount of context, showing Copilot what you want through examples, and refining responses through iteration.

This video gives you a toolkit of techniques with dozens of government-relevant examples that you can start using immediately.

What You’ll Learn

  • Specificity: How to replace vague prompts with precise requests
  • Context: How to layer context for richer, more relevant responses
  • Examples: How to use samples and templates to guide Copilot’s output
  • Iteration: How to refine responses through follow-up prompts

Script

Hook: good prompts vs. great prompts

You’ve learned the prompting framework. You know about Goal, Context, Source, and Expectations. But you’re still getting responses that feel generic, miss the mark, or need heavy editing before you can use them.

That’s normal. Knowing the framework is step one. Technique and practice make the difference between a prompt that gets a passable response and one that gets exactly what you need on the first try.

In the next ten minutes, you’ll learn practical techniques with many examples—all focused on government work. By the end, you’ll have specific strategies you can apply to your very next Copilot interaction.

Be specific about what you want

The single biggest improvement most people can make is replacing vague language with precise language. Specificity is what turns a mediocre prompt into an effective one.

Start with your verbs. “Help me with” is vague. “Summarize,” “Draft,” “Compare,” “Analyze,” “List,” “Recommend”—these are specific. Each verb tells Copilot exactly what action to take. “Help me with the report” could mean almost anything. “Summarize the key findings from the report” tells Copilot precisely what to do.

Next, specify the format. Do you want bullet points, a table, a narrative paragraph, or a numbered list? Don’t leave this to chance. “Give me the highlights from the meeting” might produce a paragraph when you needed bullets, or a long summary when you needed three key points.

Specify the length. “In 100 words or less” or “in no more than five bullet points” keeps output concise. “Provide a detailed analysis of at least 500 words” signals you want depth.

Specify the audience. “Written for a technical team” produces different vocabulary and detail than “written for senior leadership.”

And specify the tone. “Formal and professional” versus “direct and conversational” changes how Copilot structures sentences and chooses words.

Here’s a government example. Bad prompt: “Write something about our budget.” What does “something” mean? A memo? An email? A presentation? For whom?

Better prompt: “Draft a two-paragraph budget justification for our FY2026 cloud infrastructure request, written for the CFO in formal language, highlighting cost savings over five years compared to on-premises hosting.”

That prompt specifies the action (draft), the format (two paragraphs), the subject (budget justification for cloud infrastructure), the audience (CFO), the tone (formal), and even the angle (cost savings comparison). The response will be dramatically more useful.

Specificity reduces back-and-forth iterations. Every detail you add to your prompt is one fewer correction you’ll need to make afterward.
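To make the pieces of that budget-justification prompt concrete, here is a minimal sketch in Python that assembles a prompt from the specificity elements named above (action, format, subject, audience, tone, angle). This is purely illustrative string-building, not any Copilot API; the function name and parameters are invented for this example.

```python
def build_prompt(action, fmt, subject, audience, tone, angle=None):
    """Combine the specificity elements into one precise request string."""
    prompt = f"{action} {fmt} {subject}, written for {audience} in {tone} language"
    if angle:
        # The "angle" is the emphasis you want, e.g. a cost comparison.
        prompt += f", highlighting {angle}"
    return prompt + "."

# Reconstructing the "better prompt" from the example above:
example = build_prompt(
    action="Draft",
    fmt="a two-paragraph",
    subject="budget justification for our FY2026 cloud infrastructure request",
    audience="the CFO",
    tone="formal",
    angle="cost savings over five years compared to on-premises hosting",
)
print(example)
```

Notice that dropping any one argument leaves an obvious gap in the request, which is exactly the point: each element closes off a way the response could miss the mark.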

Provide rich context

Context is the background information that makes Copilot’s output relevant to your specific situation. The richer the context, the more tailored the response.

Start with your role and situation. “I’m a cybersecurity analyst in a DoD agency preparing for an upcoming audit” tells Copilot volumes about the vocabulary, standards, and formality level you expect. Without this, Copilot defaults to generic assumptions.

Describe your audience and their needs. “My audience is a group of non-technical program managers who need to understand why we’re recommending this security upgrade” changes how Copilot explains technical concepts. It will simplify jargon, use analogies, and focus on business impact instead of technical details.

Explain the purpose and constraints. “I need this for an interagency briefing tomorrow morning, so it needs to be concise and highlight only the most critical findings.” Purpose shapes what Copilot emphasizes. Constraints shape how much it produces.

Government work involves specific context that Copilot benefits from knowing. For inter-agency briefings, mention the agencies involved and the level of formality expected. For Hill inquiries, note the committee and the specific question being asked. For Inspector General requests, specify the nature of the review and the timeframe.

Context stacking means layering multiple context elements for maximum relevance. Here’s an example: “I’m the IT program manager for a mid-size civilian agency. We’re preparing for our quarterly review with the agency CIO. Our cloud migration is two months behind schedule due to staffing shortages, and I need to present the situation honestly while showing a credible recovery plan. The CIO values brevity and directness.”

That’s role, situation, audience, constraint, and tone preference—all in context. Copilot will now generate output perfectly tuned to that specific scenario.

How much context is too much? You’ll hit diminishing returns when you’re adding information that doesn’t change the output. If your context is longer than your expected response, you’ve probably over-specified. Start with the essentials—role, audience, purpose—and add more only if the output isn’t hitting the mark.

Use examples in your prompts

One of the most underused prompting techniques is showing Copilot what good output looks like. Instead of describing what you want, show it.

The simplest approach is to paste a sample and say “Write something similar for a different topic.” If you have a well-written executive briefing from a previous quarter, paste it into your prompt and say “Following this structure and tone, draft an executive briefing on our Q2 cybersecurity posture.” Copilot will mirror the format, length, vocabulary level, and structure of your example.

You can also use templates. “Following this structure: Background, Current Situation, Options, Recommendation, Next Steps—draft a decision memo on migrating our email system to Exchange Online.” The template tells Copilot exactly how to organize the output without you needing to describe each section.

In government work, this technique is especially powerful because so many deliverables follow established formats. Congressional correspondence has a specific structure. Decision memos follow a standard template. Briefing slides have expected formats. Instead of describing these formats in your prompt, paste an example and let Copilot match it.

When do examples help most? When you need Copilot to produce content in an unfamiliar or specialized format. When you want output that matches a specific organizational style. When you’ve been iterating without success and want to show rather than tell.

Here’s a practical tip. Keep a small collection of your best examples—a well-written briefing, a good status update, a strong email to leadership. When you need similar output, paste the relevant example into your prompt. This becomes your personal prompt library and saves time on every similar request.

Iterate and refine

The first response Copilot gives you is a starting point, not a final product. The best Copilot users know this and plan for it. Iteration is how you go from a decent first draft to output that’s actually ready to use.

Start with straightforward modifications. “Make it more formal” adjusts tone. “Shorten to 100 words” adjusts length. “Add a section on risks and mitigation” adds content. “Remove the background section and get straight to the recommendations” restructures the output. Each of these takes the existing response and refines it without starting over.

Building on responses is where iteration gets powerful. Ask Copilot to draft an outline. Then say “Now expand section two into full paragraphs.” Then “Add specific metrics to support each recommendation.” You’re building a complex deliverable piece by piece, and each step builds on the previous one.

The refinement cycle has three steps. Generate—write your prompt and get the initial response. Evaluate—read the response critically. What’s good? What’s missing? What’s off-target? Refine—tell Copilot exactly what to change. “The tone is too casual for this audience. Make it more formal and add specific data points to support each claim.”

Two or three rounds of refinement usually get you to a usable product. If you find yourself going beyond five rounds, the initial prompt probably needs to be rewritten entirely—which is also a valid strategy.

Here’s a government example of iterative refinement. You need a policy brief on your agency’s remote work practices. Round one: “Draft a policy brief on our agency’s remote work framework.” Copilot produces a general overview. Round two: “Good structure. Now make the tone more formal and add a section comparing our approach to the latest OPM guidance.” Round three: “Shorten the background section to two sentences and expand the recommendations to include specific timelines.” After three rounds, you have a polished policy brief that would have taken much longer to write from scratch.

Save prompts that produce great results. When you find a prompt structure that consistently works for a particular type of deliverable—status updates, decision memos, briefing slides—save it. Reuse it next time, substituting the specific details. Over time, you’ll build a personal library of proven prompts that dramatically speed up your workflow.
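One way to picture a personal prompt library is as a set of saved templates with placeholders for the specifics that change each time. The sketch below assumes nothing about Copilot itself; the template names and fields are hypothetical, chosen to mirror the deliverables mentioned above.

```python
# Hypothetical personal prompt library: reusable templates with
# placeholders, filled in per request. Not a Copilot feature --
# just a way to organize prompts that have worked before.
PROMPT_LIBRARY = {
    "status_update": (
        "Draft a status update on {project} for {audience}. "
        "Cover progress, risks, and next steps in no more than "
        "{max_bullets} bullet points, in a {tone} tone."
    ),
    "decision_memo": (
        "Following this structure: Background, Current Situation, Options, "
        "Recommendation, Next Steps, draft a decision memo on {topic}."
    ),
}

def fill_prompt(name, **details):
    """Look up a saved template and substitute the current specifics."""
    return PROMPT_LIBRARY[name].format(**details)

prompt = fill_prompt(
    "status_update",
    project="the cloud migration",
    audience="the agency CIO",
    max_bullets=5,
    tone="direct",
)
```

Whether you keep the templates in a script, a OneNote page, or a plain text file, the habit is the same: capture the structure once, substitute the details each time.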

Common prompting mistakes to avoid

Even experienced Copilot users fall into patterns that undermine their results. Here are the most common mistakes and how to avoid them.

Being too vague is the number one issue. “Help me with this document” gives Copilot almost nothing to work with. Always specify the action, the subject, and at least one constraint.

Overloading a single prompt with too many requests is the second major mistake. “Summarize this report, then draft a response email, then create a presentation slide, then list the action items” asks Copilot to do four different things at once. The quality of each will suffer. Break complex requests into separate prompts and handle them sequentially.

Not reviewing output before using it is risky in any context but especially in government work. Copilot can produce plausible-sounding content that contains inaccuracies. Always read the full response, verify key facts, and check that the tone matches your needs before sharing or submitting.

Assuming Copilot knows your organization’s jargon is a subtle mistake. Acronyms like “ATO,” “POAM,” or “FISMA” may be interpreted differently or not at all without context. When using specialized terminology, add a brief definition or context the first time you use it in a prompt.

For government-specific work, don’t forget to consider the sensitivity of information in your prompts. While Copilot in GCC, GCC High, and DoD environments maintains appropriate security postures, you should still be thoughtful about what information you include in prompts and ensure it aligns with your organization’s guidance on AI usage.

Close: building your prompt library

The techniques in this video—specificity, rich context, examples, and iteration—compound over time. The more you use them, the faster they become, and the better your results get.

Start building your prompt library today. Save prompts that work well. Share effective prompts with your team—when one person discovers a prompt that produces great status updates or briefing slides, the whole team benefits.

Prompting is a skill that improves with daily practice. You don’t need to master everything at once. Pick one technique from this video—start with specificity—and apply it to every prompt you write this week. Next week, add rich context. The improvement will be noticeable and cumulative.

In the next video, we’ll cover iterating on responses in depth—the techniques for turning Copilot’s first draft into exactly what you need.

Tags

GCC · GCC High · DoD · Prompting · Prompt techniques · Effective prompts

Related Resources

Watch on YouTube
