Prompting Fundamentals: Goal, Context, Source, Expectations

Video Tutorial

How-to guide covering the four elements of effective prompting for Microsoft 365 Copilot in government environments. Learn the Goal-Context-Source-Expectations framework that applies to every prompt you write.

Duration: 8:00 · Published: February 08, 2026 · Audience: End-user

Overview

Most people type a single sentence into Copilot and hope for a useful response. Sometimes it works, but more often the output is generic, off-target, or missing critical details. The difference between a prompt that produces mediocre results and one that produces exactly what you need comes down to a simple four-part framework: Goal, Context, Source, and Expectations.

This video teaches you each element of that framework with government-relevant examples so you can write better prompts every time.

What You’ll Learn

  • Goal: How to define exactly what you want Copilot to do
  • Context: How to provide the background Copilot needs for relevant output
  • Source: How to point Copilot to the right information
  • Expectations: How to specify the format, length, and tone of the output

Script

Hook: why most prompts fail

Here’s a prompt: “Help me with the budget.” And here’s what Copilot gives you—a generic overview of budgeting best practices that has nothing to do with your actual situation.

Now here’s a different prompt: “Draft a one-page executive summary of our FY2026 budget request, based on the budget spreadsheet shared in last week’s finance meeting, written for our agency director in a formal tone with bullet points for each major line item.”

Same person, same need—completely different results. The difference is structure.

Most people type one sentence and hope Copilot reads their mind. There’s a framework that fixes this. It has four elements: Goal, Context, Source, and Expectations. In the next eight minutes, you’ll learn each one and start writing prompts that consistently deliver.

Element 1: Goal – what you want Copilot to do

Every prompt needs a Goal. This is the single most important element—it tells Copilot what action to take and what deliverable to produce.

Start with a clear action verb. “Summarize” tells Copilot to condense information. “Draft” tells it to create new content. “Compare” tells it to analyze differences. “Analyze” tells it to examine data and draw insights. “List” tells it to enumerate items. Each verb sets a different expectation for what Copilot should produce.

Be specific about the deliverable. Don’t say “Help me write something about our project.” Say “Draft a project status update.” Don’t say “Tell me about the meeting.” Say “Summarize the three key decisions from yesterday’s meeting.”

Here’s a government example. Imagine you need to write a decision memo. A vague prompt would be “Help me write something for leadership about the new tool.” That gives Copilot almost nothing to work with. A goal-driven prompt would be “Draft a decision memo recommending adoption of the new collaboration platform.” The action verb is “draft,” the deliverable is a “decision memo,” and the subject is clearly defined. Copilot now knows exactly what to produce.

The goal doesn’t need to be long. It just needs to be specific. One clear sentence with a strong action verb and a defined deliverable is often enough to get a dramatically better response.

Element 2: Context – background Copilot needs

Context is what transforms a generic response into one that fits your specific situation. It answers three questions: Who are you? Who is the audience? Why do you need this?

Start with your role. “I’m a program manager in a federal civilian agency” gives Copilot a completely different frame of reference than “I’m an IT administrator in a DoD environment.” Your role shapes the vocabulary, depth, and perspective of the response.

Next, define the audience. Writing for your agency director requires a different approach than writing for your technical team or for the general public. Tell Copilot who will read or hear the output. “This is for senior leadership who need a high-level overview” produces very different content than “This is for the technical team who needs implementation details.”

Then explain the purpose. Why do you need this? “I need to brief the deputy secretary by end of day” adds urgency and formality. “I’m preparing talking points for an interagency call” tells Copilot the content needs to be concise and suitable for verbal delivery.

Government work adds specific context considerations. Think about your audience tiers—are you writing for political appointees, career senior executives, program staff, or the public? Each requires different language, detail levels, and framing.

Here’s how context changes everything. Same goal, different context. Goal: “Draft talking points about our cloud migration.” Context A: “I’m briefing our agency CIO who wants technical progress and risks.” Context B: “I’m preparing remarks for a Congressional staffer who wants cost savings and timeline.” Same topic, same action—but the context produces completely different and appropriate output for each audience.

The more relevant context you provide, the less you’ll need to iterate on the response. Think of context as front-loading the information that would otherwise require three rounds of follow-up prompts.

Element 3: Source – where to look

Source tells Copilot where to find the information it needs to complete your request. Without a source, Copilot relies on its general knowledge or whatever it can find in your organizational data. With a source, it works with exactly the right material.

You can point Copilot to specific files. “Based on the quarterly report in our team SharePoint site” or “Using the attached document” directs Copilot to a particular source. You can reference emails: “Based on the email thread from John about the procurement timeline.” You can reference meetings: “Using the notes from yesterday’s sprint review.”

When you’re working in Copilot Chat with a Microsoft 365 license, Copilot can search your organizational data. But pointing it to a specific source narrows the search and improves accuracy. “Summarize the project charter” is less precise than “Summarize the project charter that was uploaded to the Program Delta SharePoint site last week.”

There’s an important government consideration here. Copilot operates within your permissions boundary. It can only access content you’re authorized to see. When you reference sources, make sure they’re within your accessible scope. If you reference a document on a SharePoint site you don’t have access to, Copilot won’t be able to find it.

When should you upload files versus reference organizational data? Upload files when you want Copilot to work with a specific version of a document, when the file isn’t stored in Microsoft 365, or when you want to ensure Copilot uses exactly that file. Reference organizational data when the content is already in your Microsoft 365 environment and you want Copilot to find and use it in context.

Element 4: Expectations – output format and constraints

Expectations tell Copilot how to format and constrain its response. Without expectations, Copilot decides on its own—and its default might not match what you need.

Specify the format. “Present this as bullet points” or “Create a table comparing the three options” or “Write this as a narrative paragraph” or “Format this as an executive summary with headers.” Each format instruction produces fundamentally different output from the same information.

Set length constraints. “In 200 words or less” keeps Copilot concise. “Provide a comprehensive analysis of at least 500 words” signals you want depth. Without length guidance, Copilot tends to produce medium-length responses that may be too long for a quick brief or too short for a detailed analysis.

Define the tone. “Use a formal, professional tone suitable for official correspondence” versus “Keep it conversational for a team update.” Tone shapes word choice, sentence structure, and overall feel.

Here are government-specific expectations that work well. “In the format of an executive briefing with key findings and recommended actions.” “As a Federal Register notice following standard formatting conventions.” “In the structure of a decision memo with background, options, recommendation, and next steps.” “As a set of talking points with no more than five bullets, each under 25 words.”

Combining format, length, and tone in your expectations gives Copilot a clear target. Instead of generating something you need to reshape, it delivers output you can use with minimal editing.

Putting it all together

Let’s combine all four elements in a single prompt for a government scenario.

Here’s the scenario: you need to prepare a response to a Congressional inquiry about your agency’s AI adoption progress.

Without the framework, you might type: “Help me respond to Congress about AI.” That’s a goal without context, source, or expectations. The result will be generic.

With the framework: “Draft a formal response to a Congressional inquiry about our agency’s AI adoption progress. I’m the deputy CIO, and this response will go to the House Oversight Committee. Base the response on the AI implementation roadmap document in our CTO SharePoint site and the quarterly progress report from last month. Format it as a two-page memo with an executive summary, current status, planned milestones, and budget allocation. Use formal language appropriate for Congressional correspondence.”

Goal: draft a formal Congressional inquiry response. Context: your role, the audience, the purpose. Source: two specific documents. Expectations: format, length, and tone.

That single prompt will produce a response dramatically closer to what you actually need—often usable with only minor edits.
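
If your team reuses this structure often, it can help to keep the four elements as a fill-in template. The sketch below is a hypothetical illustration in Python, not part of Copilot or any Microsoft tool; the build_prompt function is an invented name for a plain string builder that joins whichever elements you supply, shown here with the Congressional-inquiry example above.

    # Illustrative sketch only: assemble a Copilot prompt from the four elements.
    # This is not a Microsoft API; it is a plain string builder you could adapt
    # for your own prompt library or team playbook.

    def build_prompt(goal, context=None, source=None, expectations=None):
        """Join whichever elements are provided; only the goal is required."""
        parts = [goal, context, source, expectations]
        return " ".join(part.strip() for part in parts if part)

    prompt = build_prompt(
        goal="Draft a formal response to a Congressional inquiry about our "
             "agency's AI adoption progress.",
        context="I'm the deputy CIO, and this response will go to the House "
                "Oversight Committee.",
        source="Base the response on the AI implementation roadmap document in "
               "our CTO SharePoint site and the quarterly progress report from "
               "last month.",
        expectations="Format it as a two-page memo with an executive summary, "
                     "current status, planned milestones, and budget allocation. "
                     "Use formal language appropriate for Congressional "
                     "correspondence.",
    )
    print(prompt)  # Paste the assembled prompt into Copilot Chat.

Only the goal is required; any element you leave out is simply skipped, which matches how the framework is used in practice.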

Close: the framework in practice

Here’s the good news: you don’t need all four elements in every prompt. Start with a clear Goal—that alone improves most prompts significantly. Add Context when the audience or purpose matters. Include Source when you want Copilot to use specific information. Set Expectations when format or tone is important.

With practice, this framework becomes second nature. You’ll find yourself structuring prompts automatically, and your results from Copilot will improve immediately.

Tags

GCC · GCC-High · DoD · Prompting · Prompt-engineering · Copilot-fundamentals

Related Resources

Watch on YouTube
