Agentic AI in Power BI and Fabric, Part 1: Concepts, Terminology, and How to Think About It

It has been a while since I published my last blog and YouTube video. Life got a bit busy, and to be honest, finding enough focused time became harder than I expected. But here I am, on the very last day of 2025.

I do not really see this blog as the final post of 2025. I see it more as an opening for what is coming next. In a couple of hours, we will be in 2026. Looking back, 2025 was a year full of ups and downs. Some very good moments, some sad ones too. But all in all, as Brian May from Queen once said, “The Show Must Go On”.

So let us start the next year with a topic that has been on my mind a lot recently. Agentic AI, and how it can realistically help us in Microsoft Fabric and Power BI projects.

If you prefer to listen to the content on the go, here is the AI-generated podcast covering this blog 👇.

Why this topic needs a series, not a single blog

Before we go into any definitions, I want to explain why I am turning this into a multi-part series.

Agentic AI is a broad topic. It touches tooling, process, safety, productivity, and also mindset. Trying to cover all of this properly in a single blog post would either make it too shallow, or too long and hard to follow. Neither is useful.

So I decided to break it down into a series:

  • This first blog is about concepts and terminology
  • The next blog will cover initial setup and tools
  • The following one will focus on hands-on Power BI scenarios

This first part intentionally stays away from tools and demos. The goal is to build a solid mental foundation first.

What this series is and what it is not

Agentic AI is one of those topics where expectations can easily go in the wrong direction. So it is important to be very clear.

This series is not:

  • A story about replacing engineers, analysts, or architects
  • A full AI or machine learning theory course
  • A generic prompt list without context

This series is:

  • About improving productivity in real delivery projects
  • About assisting people, not replacing them
  • About using AI in a controlled and responsible way
  • Focused on Microsoft Fabric and Power BI implementations

If you are expecting magic or shortcuts, this series is probably not for you.

Where Agentic AI fits today in the Microsoft Fabric world

Before going further, one important clarification is needed.

At the time of writing this blog, Agentic AI is not available in the built-in Copilot experiences in Microsoft Fabric or Power BI. Copilot today is mainly a conversational assistant. It does not plan tasks, use external tools freely, or execute multi-step workflows in the way Agentic AI does.

Everything discussed in this series is about agentic setups, for example using tools like VS Code, external agents, and Model Context Protocol servers, which we will cover later in the series.

This distinction is important, otherwise expectations will be wrong from the start.

Why Agentic AI makes sense for data and analytics work

Now let us talk about why Agentic AI even matters for data and analytics projects.

Most Power BI and Fabric projects are not hard because of advanced maths or algorithms. They are hard because of process. The same types of tasks come up again and again:

  • Reviewing semantic models
  • Checking relationships and cardinality
  • Validating measures and business logic
  • Reading and understanding existing documentation
  • Repeating the same checks across multiple projects

These tasks are important, but they are also repetitive and time-consuming. This is where Agentic AI fits very well.

Not because it is smarter than us, but because it is good at following structured steps and rules consistently.

Chat-based AI vs Agentic AI

Most of us already use chat-based AI tools. You ask a question, and you get an answer. This works well for learning and quick explanations.

But delivery work is different.

In real projects, you usually want:

  • A repeatable process
  • Evidence from real systems
  • Structured outputs you can review

Agentic AI is designed for this.

With Agentic AI:

  • You give a goal, not just a question
  • The agent breaks the goal into steps
  • It uses tools to inspect real systems
  • It applies rules and boundaries
  • It produces structured results

In simple terms, chat-based AI talks.
Agentic AI follows a workflow.

A simple mental model to keep in mind

Before defining individual terms, it helps to have a clear mental model.

There is always a human in control. The human defines the goal and gives feedback.

At the centre sits the AI agent. The agent plans what to do next. It does not act randomly.

Around the agent are several building blocks:

  • Skills
  • Guardrails
  • Memory
  • Tools

The agent uses planning to break goals into steps and executes them as actions.

The tools are exposed through a Model Context Protocol (MCP) server, which acts as a controlled bridge to real systems like files, APIs, Microsoft Fabric, or Power BI metadata.

Nothing here is magic. Everything is explicit and structured.
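
Even though this part of the series stays away from tools and demos, a tiny illustrative sketch can help make the mental model concrete. The sketch below is plain Python, every name in it is my own, and nothing is tied to a specific product; it simply shows that each building block is something you define explicitly.

```python
# A minimal sketch of the mental model in plain Python. All names are
# illustrative; the point is that every building block is explicit.

from dataclasses import dataclass, field

@dataclass
class AgentSetup:
    goal: str                                              # defined by the human
    skills: list[str] = field(default_factory=list)        # reusable task recipes
    guardrails: list[str] = field(default_factory=list)    # hard boundaries
    memory: dict[str, str] = field(default_factory=dict)   # decisions and assumptions so far
    tools: list[str] = field(default_factory=list)         # exposed through an MCP server

review_setup = AgentSetup(
    goal="Review the Sales semantic model and report findings",
    skills=["semantic-model-audit"],
    guardrails=["read-only", "no production workspaces"],
    tools=["read_model_metadata", "list_workspace_items"],
)
```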

Agentic AI

Before defining Agentic AI, it is worth taking a step back and thinking about why this term even exists. Over the last couple of years, many of us have been using AI tools in a conversational way. We ask questions, we get answers, and sometimes those answers are very good. But in real project work, especially in data and analytics, this style quickly hits its limits.

In real Power BI and Fabric projects, we rarely need just an answer. We need a sequence of steps. We need to inspect real systems, apply rules, check assumptions, and then produce something that we can review and trust. This is where the idea of Agentic AI comes in.

Agentic AI is not about making AI smarter. It is about making AI more structured.

When we say Agentic AI, we are talking about AI systems that are designed to behave more like an assistant that follows a process, rather than a chatbot that responds to individual questions. The key difference is not intelligence, but behaviour.

Agentic AI refers to AI systems that can:

  • Take a goal instead of a single question
  • Break that goal into smaller steps
  • Decide what needs to happen first and what comes next
  • Use tools to gather real information
  • Perform actions in a controlled way
  • Stop when boundaries are reached

This does not mean the AI is acting on its own without supervision. In fact, the opposite is true. Agentic AI only makes sense when a human is clearly in control. The human defines the goal, the boundaries, and what is considered acceptable output.

Another important point is that Agentic AI is not something you currently get from the built-in Copilot experience in Microsoft Fabric or Power BI. Today, Copilot is mainly conversational. It can explain, summarise, and suggest, but it does not plan multi-step workflows or use external tools in a controlled, agentic way. The Agentic AI discussed in this series is implemented outside of Fabric, using external tools and configurations, which we will cover later.

In simple terms, Agentic AI is about turning AI from a talking assistant into a working assistant. One that follows steps, uses tools, respects rules, and produces outputs you can review, validate, and trust.

This concept is the foundation for everything else in this series. Skills, tools, guardrails, memory, and MCP servers all exist to support this way of working. If this idea is clear, the rest of the concepts will start to make much more sense as we move forward.

The AI Agent

So far, we talked about Agentic AI at a high level and why it exists. At this point, it is natural to ask a very simple question. If Agentic AI is about planning, actions, tools, and rules, then what exactly is the thing that ties all of these together?

This is where the AI agent comes in.

When people hear the word “agent”, they often imagine something autonomous, acting on its own, maybe even making decisions without supervision. That mental image is not very helpful here. In the context of Agentic AI, an agent is not a free actor. It is a coordinator.

The AI agent is the component that sits in the middle of everything. Its main job is to decide what should happen next, based on the goal it was given, the rules it must follow, and the information it has access to.

The agent does not do the work itself. It does not directly read files, query systems, or change anything. Instead, it decides:

  • Which step should come next
  • Whether more information is needed
  • Which tool should be used
  • Whether a boundary or guardrail has been reached
  • When the task should stop

In other words, the agent thinks and orchestrates. It does not execute.

This distinction is very important, especially for data and analytics projects. In Power BI and Fabric work, we care a lot about traceability and accountability. If something goes wrong, we want to know why it happened and which decision led to it. Having a clear agent that makes decisions, separate from tools that execute actions, makes this much easier to reason about.

Another important point is that the agent always operates under instructions. These instructions usually come from system or chat-level configurations in the tool you are using, for example in VS Code. This is where you define:

  • What the agent is allowed to do
  • What its role is
  • What it should never attempt
  • How cautious it should be

The agent does not invent its role on the fly. It follows what you define for it.

It is also worth repeating that, today, this kind of AI agent does not exist inside the built-in Copilot experience in Microsoft Fabric. Copilot can assist through conversation, but it does not act as a coordinating agent that plans steps and uses tools in a controlled workflow. The agentic behaviour described in this series is achieved through external setups, which we will cover later.

If you keep only one thing in mind from this section, let it be this.

The AI agent is not the worker.
The AI agent is the coordinator.

Once this idea is clear, concepts like skills, guardrails, tools, and MCP servers start to fall into place much more naturally in the sections that follow.

Tools

Up to this point, we talked about the agent, planning, skills, and guardrails. All of these describe how decisions are made and controlled. However, none of that matters much if the agent cannot actually interact with the real world.

This is where tools come in.

Without tools, an agent can only think and talk. It can reason, explain, and suggest ideas, but it cannot inspect a semantic model, read a file, or check metadata. Tools are what turn an agent from a thinking assistant into a practical one.

In simple terms, tools are the agent’s way of touching real systems.

A tool is a very small and very focused capability. Each tool is designed to do one specific thing, and nothing more. This design is intentional. Tools are kept simple so they are predictable, safe, and easy to reason about.

Examples of tools in data and analytics work include:

  • Reading files from a folder or repository
  • Querying metadata from a semantic model
  • Calling an API to list Fabric items
  • Searching official documentation
  • Running a validation query

It is important to understand that tools do not make decisions. They do not analyse results or decide what to do next. A tool only executes an action and returns the result. The thinking always stays with the agent.

Another important point is that tools are not prompts. They are executable capabilities. When an agent uses a tool, it is not guessing or hallucinating. It is asking a real system for real information.

This distinction is critical, especially in Power BI and Fabric scenarios. When an agent reviews a semantic model using tools, it is working with actual metadata, not assumptions. That is what makes the output useful and trustworthy.
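
To make this a bit more tangible, here is a minimal sketch of what two tools could look like in plain Python. The function names, the endpoint, and the response shape are illustrative assumptions, not a real API; the point is that each tool does one narrow thing and hands the result back without deciding anything.

```python
# A minimal sketch of two tools. Each one does exactly one thing and returns
# data; neither decides what happens next. The endpoint is a placeholder.

from pathlib import Path
import requests  # third-party library, assumed installed

def read_file(path: str) -> str:
    """Read a single file and return its content."""
    return Path(path).read_text(encoding="utf-8")

def list_workspace_items(workspace_id: str, token: str) -> list[dict]:
    """Call a read-only (illustrative) endpoint to list items in a workspace."""
    response = requests.get(
        f"https://api.example.com/workspaces/{workspace_id}/items",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["value"]
```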

Later in this series, when we move into setup and hands-on scenarios, you will see how tools are exposed to the agent through Model Context Protocol servers, and how we control exactly what the agent is allowed to do with them.

For now, the key takeaway is this.

Tools are the agent’s hands.
They do not think.
They do not decide.
They simply do what they are told, and nothing more.

This is by design, and it is one of the reasons Agentic AI can be used safely in real projects.

Skills

Before going further, it is worth mentioning where the term skills comes from.

The concept of skills as a first-class building block in agentic systems was popularised by Anthropic. Anthropic introduced skills as reusable capabilities that sit between the agent and tools, helping structure how work is done. You can find more about this on their website and technical blogs.

A skill is a reusable recipe for completing a task.

A skill:

  • Uses one or more tools
  • Follows defined rules
  • Applies checks
  • Produces consistent outputs

In data projects, skills can represent things like:

  • A semantic model audit
  • A measure naming review
  • A governance readiness check

Skills are not tools, and they are not just prompts. They are structured task definitions.
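
To make that less abstract, here is a minimal sketch of a skill written as a structured definition. The field names are my own illustration rather than a standard format; real setups often store something like this as a markdown or YAML file that the agent reads.

```python
# A minimal sketch of a skill: a reusable recipe that names the tools it may
# use, the rules it must follow, the checks it applies, and the output it
# produces. The structure is illustrative, not a standard.

semantic_model_audit = {
    "name": "semantic-model-audit",
    "goal": "Review a semantic model and report issues with supporting evidence",
    "tools": ["read_model_metadata", "run_validation_query"],
    "rules": [
        "Read-only: never modify the model",
        "Stop and ask if metadata cannot be retrieved",
    ],
    "checks": [
        "Relationships have the expected cardinality",
        "Measure names follow the agreed naming convention",
    ],
    "output": "Markdown report with findings and evidence for each finding",
}
```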

Model Context Protocol (MCP)

By now, we have talked about agents, skills, tools, and guardrails. At this point, a very important question usually comes up, even if people do not ask it directly. If an agent can use tools, how does it actually connect to real systems in a safe and controlled way?

This is where the Model Context Protocol, usually referred to as MCP, comes into the picture.

Without MCP, every agentic setup would need its own custom and often messy way of connecting to files, APIs, databases, or services like Microsoft Fabric. That quickly becomes hard to manage, hard to secure, and very hard to reason about. MCP exists to solve this exact problem.

Model Context Protocol (MCP) is a standard protocol designed to expose tools, data, and capabilities to an AI agent in a structured and secure way. It defines how an agent can discover and use tools without knowing the internal details of the systems behind them.

An MCP server is an external service or process that implements this protocol. Its job is to sit between the agent and real systems.

In practice, an MCP server:

  • Exposes a set of tools the agent is allowed to use
  • Controls how those tools can be called
  • Enforces access rules and permissions
  • Acts as a clear boundary between the agent and external systems

This point is very important. An MCP server is not part of the language model. It is not a prompt. It is not a chat instruction. It runs outside of the AI interface you use, for example outside VS Code, and is configured separately.

Think of the MCP server as a controlled gateway. The agent can only see and use what the MCP server exposes. If a tool is not exposed through MCP, the agent cannot use it, no matter how clever it is.
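
As a small illustration of what a controlled gateway looks like, here is a minimal sketch of an MCP server, assuming the official MCP Python SDK and its FastMCP helper. It exposes exactly one read-only tool; anything not registered here simply does not exist from the agent's point of view.

```python
# A minimal sketch of an MCP server, assuming the MCP Python SDK (pip install mcp).
# It exposes a single read-only tool; the agent cannot use anything else.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("fabric-readonly")  # server name is illustrative

@mcp.tool()
def list_model_tables(model_name: str) -> list[str]:
    """Return table names for a semantic model (stubbed for illustration)."""
    # A real server would query model metadata through an approved, read-only
    # connection. Here it just returns placeholder values.
    return ["Date", "Product", "Sales"]

if __name__ == "__main__":
    mcp.run()  # the agent host (for example VS Code) connects to this process
```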

In a Power BI and Microsoft Fabric context, MCP servers are what allow an agent to safely:

  • Read semantic model metadata
  • List workspace items
  • Access files or repositories
  • Call approved APIs

At the same time, MCP servers are also where many safety decisions are enforced. For example, read-only access, environment separation, and permission boundaries often live at this layer.

This separation is intentional. It keeps responsibilities clear:

  • The agent plans and decides
  • Skills define how work should be done
  • Tools execute small actions
  • MCP servers control access to real systems

Later in this series, when we move into setup and hands-on scenarios, you will see how MCP servers are configured and connected to the tools we use. For now, the key takeaway is simple.

Model Context Protocol is the foundation that makes Agentic AI practical and safe. Without it, agentic systems would be fragile and risky, especially in real data and analytics projects.

Guardrails

By the time people reach this point in the discussion, they usually start feeling both excited and slightly uncomfortable. Excited, because the agent can plan, use tools, and interact with real systems. Uncomfortable, because a natural question appears very quickly. What stops this thing from doing something it should not do?

This is exactly why guardrails exist.

Guardrails are not an optional extra in Agentic AI. They are a core part of the design. In fact, without guardrails, Agentic AI should not be used at all in real projects, especially not in data and analytics environments where mistakes can be expensive.

In simple terms, guardrails define the boundaries of behaviour. They describe what the agent is allowed to do, what it must never do, and how cautious it should be when working with real systems.

It is important to understand that guardrails are not a single thing. They do not live in one place, and they are not just a paragraph of text somewhere in a prompt. Guardrails usually exist across multiple layers of an agentic setup.

At the highest level, guardrails often start in the system or chat instructions of the agent. This is where you define the role of the agent and its general behaviour. For example, you may state that the agent is only allowed to analyse and review, not to modify or deploy anything. These instructions shape how the agent thinks and plans.

Guardrails also exist inside skills. A skill may explicitly state that it must run in read-only mode, or that it must stop if certain conditions are met. For example, a semantic model audit skill might be allowed to read metadata and run validation queries, but never allowed to change a model or write files back.

Another very important layer for guardrails is external configuration, especially access and permissions. This is where tools and MCP servers come into play. Even if an agent tries to do something unsafe, it should not be technically possible. For example, if an MCP server exposes only read-only tools, then destructive actions are simply not available to the agent.
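
A tiny sketch can show what "technically not possible" means in practice. Everything below is illustrative, but the idea is that unsafe calls fail in code, regardless of what the agent asks for.

```python
# A minimal sketch of a technical guardrail: an allow-list enforced in code,
# not in a prompt. Names and values are illustrative.

ALLOWED_TOOLS = {"read_model_metadata", "run_validation_query"}  # read-only tools only
BLOCKED_WORKSPACES = {"Production"}

def call_tool(tool_name: str, workspace: str, **kwargs):
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool_name}' is not exposed to the agent")
    if workspace in BLOCKED_WORKSPACES:
        raise PermissionError(f"Workspace '{workspace}' is out of bounds")
    # ...dispatch to the real, read-only tool implementation here...
    return {"tool": tool_name, "workspace": workspace, "args": kwargs}
```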

Common examples of guardrails in data and analytics projects include:

  • Read-only access to models and metadata
  • No access to production environments
  • No execution of destructive operations
  • No handling or storage of secrets
  • Explicit stop conditions when uncertainty is high

One important thing to keep in mind is that guardrails are not there to slow you down. They are there to make the system predictable. When guardrails are clear, you can trust the agent more, because you know exactly what it cannot do.

In Power BI and Microsoft Fabric projects, guardrails are especially critical. We often work with shared datasets, production workspaces, and sensitive business logic. An agent that can inspect and analyse these safely is useful. An agent that can freely change them is dangerous.

As we move into the next blogs, you will see guardrails applied again and again. Sometimes as part of instructions, sometimes inside skills, and sometimes enforced entirely by MCP servers and permissions. This layered approach is intentional.

If you remember only one thing from this section, remember this.

Guardrails are not about limiting the agent.
They are about protecting your project, your data, and your responsibility.

Memory

After talking about agents, skills, tools, MCP servers, and guardrails, there is another concept that often gets misunderstood very quickly. Memory. Many people hear this word and immediately think about something mysterious or even risky, like the AI remembering everything forever. That is not a helpful way to think about it.

In Agentic AI, memory exists for a very practical reason.

In real projects, work is rarely done in a single step. Decisions are made, assumptions are agreed on, constraints are discovered, and context builds up over time. If the agent forgets everything between steps, it will keep asking the same questions, repeating the same checks, or even contradicting itself. That is where memory comes in.

Memory allows the agent to retain useful context across steps and tasks, so it can behave consistently instead of starting from zero every time.

It is important to be clear that memory is not the same as knowledge. The agent does not suddenly become smarter because it has memory. Memory simply helps the agent remember things that were already decided or discovered.

Examples of what memory might include in data and analytics projects:

  • Business rules that were clarified earlier
  • Assumptions about data granularity
  • Known limitations of a semantic model
  • Decisions made during an audit
  • Constraints such as read-only access or environment boundaries

Just like guardrails, memory does not live in one single place.

In practice, memory can exist in different forms:

  • Some tools manage short-term memory automatically during a session
  • Some setups store memory explicitly in files, such as notes or decision logs
  • Some memory is written and read as part of skill execution

What matters is not where the memory lives, but that it is explicit and reviewable. Hidden or implicit memory is dangerous. You should always be able to see what the agent remembers and why.
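
As a small illustration of explicit, reviewable memory, here is a sketch of a decision log stored as a plain JSON file that the agent reads before working and appends to as it goes. The file name and structure are my own assumptions.

```python
# A minimal sketch of explicit memory: a decision log in a plain JSON file that
# anyone can open, review, correct, or clear. Names and structure are illustrative.

import json
from datetime import date
from pathlib import Path

LOG_PATH = Path("decision_log.json")

def read_memory() -> list[dict]:
    return json.loads(LOG_PATH.read_text(encoding="utf-8")) if LOG_PATH.exists() else []

def remember(topic: str, decision: str) -> None:
    entries = read_memory()
    entries.append({"date": date.today().isoformat(), "topic": topic, "decision": decision})
    LOG_PATH.write_text(json.dumps(entries, indent=2), encoding="utf-8")

remember("granularity", "Sales table is at order-line level, confirmed with the business")
```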

Another important point is that memory should be treated as context, not truth. Memory can become outdated. Assumptions can change. That is why good agentic setups allow memory to be updated, corrected, or cleared when needed.

In Power BI and Microsoft Fabric projects, memory is especially useful when working across multiple steps. For example, during a semantic model review, the agent may identify certain design decisions early on and then use that context when reviewing measures or relationships later. Without memory, each step would feel disconnected.

Later in this series, when we look at hands-on scenarios, you will see memory used in a very controlled way. Often as simple as a small set of notes or a decision log that the agent reads and updates as it goes.

For now, the key idea to keep in mind is this.

Memory is not about making the agent clever.
It is about making the agent consistent.

Planning and Actions

At this stage, we have talked about many building blocks. The agent, skills, tools, MCP servers, guardrails, and memory. All of these pieces are important, but without one final concept, they do not really come together into something useful.

That missing piece is how work actually progresses from start to finish. This is where planning and actions come in.

In real data and analytics projects, work rarely happens in one big jump. We do not go from “review this semantic model” directly to a finished result. We first look at metadata, then relationships, then measures, then performance, and only after that do we form conclusions. This step-by-step way of working is very natural for humans, and Agentic AI follows the same pattern.

Planning is the phase where the agent takes a goal and breaks it down into smaller, manageable steps. Instead of trying to do everything at once, the agent asks itself what needs to happen first, what depends on what, and what information is missing.

For example, if the goal is to review a Power BI semantic model, the plan might include steps like:

  • Inspect model metadata
  • Identify tables and relationships
  • Review measures and calculations
  • Check naming conventions
  • Summarise findings

The plan is not the work itself. It is a roadmap.

Once a plan exists, the agent moves into actions.

Actions are the individual steps the agent executes one by one. Each action usually involves using a tool. For example, calling a tool to read metadata, or running a query to inspect measures. After each action, the agent looks at the result and decides what to do next.

This loop is important. Plan, act, observe, then act again. The agent does not blindly follow a fixed script. It adapts based on what it finds, while still staying within guardrails.
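
Here is a minimal sketch of that loop in plain Python, reusing the illustrative tool names from earlier. A real agent would decide the next step with a language model; in this sketch the plan is fixed, so only the shape of the loop matters.

```python
# A minimal sketch of the plan, act, observe loop. The plan is hard-coded here;
# a real agent would adapt it based on what each action returns.

def run_review(goal: str, tools: dict):
    plan = [
        ("inspect metadata", "read_model_metadata"),
        ("review measures", "run_validation_query"),
    ]
    findings = []
    for step, tool_name in plan:
        result = tools[tool_name]()                          # act: the tool executes
        findings.append({"step": step, "evidence": result})  # observe: record the evidence
        if result is None:                                   # simple stop condition (a guardrail)
            break
    return {"goal": goal, "findings": findings}
```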

This is also where the difference between Agentic AI and chat-based AI becomes very clear. A chat-based system responds once and stops. An agentic system plans, executes actions, checks results, and continues until the goal is reached or a boundary is hit.

Another important point is that planning and actions are usually visible. Good agentic tools show you the plan and the steps being taken. This transparency is critical in professional environments like Power BI and Microsoft Fabric projects, where you need to understand why a conclusion was reached.

Later in this series, when we move into hands-on examples, you will see planning and actions working together very clearly. Especially in scenarios like auditing a semantic model or starting a project from scratch, this step-by-step flow is what makes Agentic AI reliable instead of unpredictable.

For now, remember this.

Planning decides what should happen.
Actions perform what actually happens.

Together, they are what turn Agentic AI into a structured assistant instead of just another chat window.

Prompts

This is usually where another very common question comes up. If the agent plans and acts, where do prompts fit into all of this? Are prompts still important, or are they replaced by skills and tools?

The short answer is that prompts still matter a lot, but their role is different than what many people are used to.

In chat-based AI, prompts are often everything. You carefully craft a long prompt, hope it covers all cases, and then wait for a single response. In Agentic AI, prompts are no longer the whole solution. They become one part of a larger system.

A prompt in an agentic setup is mainly used to shape behaviour and intent. It tells the agent who it is, how it should behave, what tone to use, and what general rules to follow. Prompts provide guidance, not execution.

In practice, prompts are usually split into different layers.

At the top level, there are system or agent prompts. These define the role of the agent. For example, you might state that the agent is acting as a Power BI reviewer, that it must be cautious, and that it must never attempt to change production assets. These prompts live inside the agent configuration of the tool you are using, such as an agent definition in VS Code.

Then there are task or goal prompts. These are the instructions you give when you start a specific piece of work. For example, asking the agent to review a semantic model or to analyse a set of measures. These prompts are usually short and focused, because most of the behaviour is already defined elsewhere.

It is important to understand what prompts are not in an agentic setup. Prompts are not tools. They are not skills. And they are not guardrails by themselves. A prompt can say “do not modify anything”, but real safety should still be enforced by guardrails, permissions, and MCP server configuration.

Another important difference is that prompts in Agentic AI are often supported by files. Instead of writing everything inline, prompts can reference:

  • Skill definitions stored in separate files
  • Project context stored as documentation
  • Assumptions or decisions stored as notes

This makes prompts smaller, clearer, and easier to maintain.
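
A short sketch shows how little the task prompt itself needs to contain once behaviour and skills live in files. The file names below are illustrative assumptions, not a fixed convention.

```python
# A minimal sketch of layered prompts: behaviour and the skill live in files,
# and the task prompt only states the goal. File names are illustrative.

from pathlib import Path

system_prompt = Path("agent-instructions.md").read_text(encoding="utf-8")  # role, tone, boundaries
skill_definition = Path("skills/semantic-model-audit.md").read_text(encoding="utf-8")

task_prompt = "Audit the Sales model in the Dev workspace and report findings with evidence."

full_context = "\n\n".join([system_prompt, skill_definition, task_prompt])
```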

In Power BI and Microsoft Fabric projects, this approach is especially useful. Rather than writing a huge prompt every time you want to review a model, you define the behaviour once, reuse skills, and then use short prompts to trigger specific tasks.

So when working with Agentic AI, think of prompts as the voice and intent of the agent, not its brain or its hands. Planning decides the steps. Actions execute them. Prompts simply guide how the agent behaves along the way.

Understanding this separation early will save you a lot of confusion later, especially when we move into setup and hands-on examples in the next blogs.

Where these concepts live in practice

Up to now, we talked about many concepts. Agent, skills, tools, guardrails, memory, planning, actions, and MCP servers. Each one was explained on its own. This is usually the point where readers start feeling that everything makes sense individually, but the full picture is still a bit blurry. That is normal.

The confusion usually comes from one simple question that is not always asked clearly. Where do these things actually live when we use an agentic AI tool in real life?

If we do not answer this properly, everything stays theoretical. So let us bring all these concepts out of the abstract world and place them clearly into a real setup.

First, the AI agent itself lives inside the tool you are using. For example, if you are working in VS Code with an agentic extension, the agent is defined by that tool. Its role, behaviour, and general attitude are usually defined through system-level or chat-level instructions. This is also where the system prompt or agent prompt lives. These prompts define who the agent is, how it should behave, and what it must never attempt.

Next, skills usually live outside the chat window. They are often defined as separate prompt templates, instruction files, or structured configurations. The key point is that skills are reusable. You do not want to rewrite how to audit a semantic model every time. You define that once as a skill, then reuse it across projects.

Task prompts or goal prompts are different from skills. These are the short instructions you give when you start a specific piece of work. For example, asking the agent to review a semantic model or to analyse a particular issue. These prompts are usually written inline when you interact with the agent, and they rely on skills and guardrails that are already defined.

Guardrails do not live in a single place. This is very important to understand. Some guardrails are defined in the agent or system prompts, such as telling the agent it is only allowed to analyse and not modify anything. Some guardrails are defined inside skills, for example forcing a skill to run in read-only mode. Other guardrails are enforced technically, through permissions, credentials, and MCP server configuration. Good setups always use more than one layer.

Memory can live in different places depending on the tool and the setup. Sometimes it is managed automatically during a session. Sometimes it is stored explicitly in files, notes, or decision logs that the agent reads and updates. What matters most is not the storage method, but visibility. You should always know what the agent remembers and why.

Tools are usually provided by the platform or by extensions. They are not written inside prompts. A tool is something executable, like reading a file or calling an API. The agent can only use the tools that are exposed to it.

This is where Model Context Protocol (MCP) servers come in. MCP servers live completely outside the agent interface. They are external services or processes that expose tools to the agent in a controlled way. They define what tools exist, what data can be accessed, and under what permissions. If a tool is not exposed by an MCP server, the agent simply cannot use it.

Finally, planning and actions live inside the agent’s execution loop. Planning is how the agent decides what to do next. Actions are the individual steps it executes using tools. Good tools make this visible, so you can see the plan and follow each step.

If you put all of this together, the picture becomes much clearer.

  • The agent thinks and coordinates
  • Prompts shape behaviour and intent
  • Skills define how tasks should be done
  • Guardrails limit behaviour at multiple layers
  • Memory keeps context consistent
  • Tools execute small actions
  • MCP servers control access to real systems

Once you see where each concept lives, Agentic AI stops feeling like a black box. It becomes a structured system with clear responsibilities. This clarity is what makes it usable and safe in real Power BI and Microsoft Fabric projects.

Best practices to keep in mind

At this point in the blog, we have covered many concepts and it can start to feel a bit theoretical. This is usually the moment where readers ask a very practical question. “If I want to try this, how do I avoid making a mess?”

That is exactly why it makes sense to talk about best practices now, before touching any tools or setup. These are simple habits, but they make a big difference when working with Agentic AI in real Power BI and Microsoft Fabric projects.

The first and most important practice is still to start in read-only mode. Especially in data and analytics work, there is rarely a good reason for an agent to modify anything early on. Reading metadata, analysing models, and producing recommendations already deliver a lot of value. Write access can always come later, if it is needed at all.

Another important practice is to keep the scope small and clear. This applies very strongly to prompts. Do not give the agent a vague or overly broad instruction like “review everything”. Instead, be explicit about what you want reviewed, what is in scope, and what is not. Clear prompts lead to predictable behaviour.

You should also be careful to separate prompts by responsibility. System or agent prompts should define behaviour and boundaries. Skill definitions should describe how a task is performed. Task prompts should only describe the goal of the current work. Mixing these together into one long prompt usually creates confusion and inconsistent results.

It is also a good habit to avoid putting critical rules only in prompts. A prompt can say “do not modify anything”, but that should never be the only line of defence. Important rules must also be enforced through guardrails, permissions, and MCP server configuration. Prompts guide behaviour, but they do not guarantee safety.

Another key practice is to always ask for evidence in prompts. Especially in Power BI and Fabric scenarios, you should expect the agent to point to metadata, query results, or files that support its conclusions. If a prompt does not explicitly ask for evidence, the output is more likely to stay high-level and less useful.

You should also review and refine prompts over time. Prompts are not one-off instructions. As you learn how the agent behaves, you will notice where prompts can be simplified, tightened, or clarified. Keeping prompts small and focused usually works better than writing very long ones.

Finally, remember to document important prompts and decisions. If a certain prompt structure works well for auditing a semantic model, save it. If a prompt caused confusion, note why. Over time, this builds a small but very valuable library of prompts that fit your way of working.

When these practices are followed, prompts stop feeling like magic words you must get exactly right. They become simple instructions that sit alongside skills, tools, and guardrails. This is when Agentic AI starts to feel boring in a good way. Predictable, controlled, and trustworthy.

Where this fits in Power BI and Fabric projects

After going through all these concepts, it is fair to pause and ask a very practical question. Even if all of this sounds interesting, where does it actually make sense to use Agentic AI in Power BI and Microsoft Fabric projects?

The answer is not “everywhere”. Agentic AI is most useful in areas where work is structured, repeatable, and based on inspection rather than creativity. Luckily, a lot of data and analytics work falls exactly into that category.

One of the strongest use cases is reviewing existing semantic models. This includes tasks like checking relationships, reviewing measures, validating naming conventions, and identifying common modelling issues. These activities follow clear patterns and rules, which makes them a good fit for skills and structured workflows.

Another good fit is auditing and validation work. For example, checking whether a model follows internal standards, whether calculations align with agreed business rules, or whether certain governance requirements are met. Agentic AI can apply the same checks consistently across multiple models or projects, something that is hard to do manually at scale.

Agentic AI also fits well when you are joining an existing project and need to understand it quickly. Reading through models, metadata, and documentation can be time-consuming. An agent can help gather and summarise this information in a structured way, giving you a faster starting point.

In greenfield projects, Agentic AI can be helpful during the early stages. For example, when clarifying requirements, outlining a model structure, or creating a checklist for what needs to be built. It should not replace design decisions, but it can support them by making sure nothing obvious is missed.

What Agentic AI is not well suited for are areas that require strong business judgement or accountability. Decisions about architecture, trade-offs, or stakeholder priorities still belong to people. The agent can support these decisions, but it should not make them.

In the context of Microsoft Fabric and Power BI, it is also important to remember that Agentic AI, as described in this series, lives outside the built-in Copilot experience. We are talking about external agentic setups that interact with Fabric and Power BI through tools and controlled access, not about clicking a Copilot button inside the product.

If used in the right places, Agentic AI can remove a lot of friction from day-to-day work. If used in the wrong places, it can quickly become noise. Knowing where it fits is what makes the difference.

What comes next

This blog was about building a shared understanding.

In the next blog, we will move into:

  • Tools and setup
  • VS Code as the working environment
  • Skills in practice
  • MCP servers for Fabric and Power BI use cases

Once the foundation is clear, the hands-on work will be much easier to follow.

Summary

This blog was intentionally focused on concepts. No tools, no setup, and no demos. The goal was to build a clear and shared understanding before moving into anything practical.

We started by explaining why Agentic AI deserves more than a single blog post, especially in the context of real Power BI and Microsoft Fabric projects. Agentic AI is not about replacing people or automating decisions. It is about assisting structured work in a controlled and predictable way.

We then walked through the core building blocks one by one. The AI agent as the coordinator. Planning and actions as the way work progresses. Tools as the agent’s hands. Skills as reusable task definitions. Guardrails as safety boundaries. Memory as a way to keep context consistent. Model Context Protocol servers as the controlled bridge to real systems. Prompts as the way we shape behaviour and intent.

We also clarified where each of these concepts actually lives in a real setup. Some live in prompts, some in files, some in external services, and some in configuration. Understanding this separation is key to avoiding confusion and unsafe designs.

Finally, we discussed best practices and where Agentic AI fits, and where it does not fit, in Power BI and Fabric projects. Used in the right places, it can remove a lot of repetitive effort. Used in the wrong places, it can quickly become noise or risk.

In the next blog, we will move from concepts to practice. We will look at tools, VS Code setup, skills in action, and how to connect everything together safely. Now that the foundation is clear, the hands-on work will be much easier to follow.

Thanks for following this series so far. I hope this first part helped you better understand the big picture of Agentic AI, as well as the key technical concepts behind it, especially in the context of Power BI and Microsoft Fabric projects.

Since we are just stepping into a new year, I also want to wish you a very happy new year. I hope 2026 brings you good health, interesting projects, and plenty of learning opportunities.

You can follow me on LinkedIn, YouTube, Bluesky, and X, where I share more content around Power BI, Microsoft Fabric, and real-world data and analytics projects.

