**README.MD** (7 additions, 7 deletions)
@@ -13,20 +13,20 @@ If you will be delivering this session, check the [session-delivery-sources](./s
### Session Description
- Build a conversational AI agent for Zava, a retail DIY company, that analyzes sales data and helps customers find products. Learn to create secure, intelligent agents using Azure AI Foundry Agent Service, Model Context Protocol (MCP) for external data connections, and PostgreSQL with Row Level Security (RLS) and pgvector for role-based data protection and semantic search.
+ Build a conversational AI agent for Zava, a retail DIY company, that analyzes sales data and helps customers find products. Learn to create secure, intelligent agents using Microsoft Foundry Agent Service, Model Context Protocol (MCP) for external data connections, and PostgreSQL with Row Level Security (RLS) and pgvector for role-based data protection and semantic search.
### 🧠 Learning Outcomes
By the end of this session, learners will be able to:
- 1. **Azure AI Foundry Agent Service**: Build and deploy AI agents with integrated tools and observability.
+ 1. **Microsoft Foundry Agent Service**: Build and deploy AI agents with integrated tools and observability.
2. **Model Context Protocol (MCP)**: Connects the Agent Service to external tools and data over industry-standard protocols to enhance agent functionality.
3. **PostgreSQL**: Use PostgreSQL as a vector database for semantic search and implement Row Level Security (RLS) to protect sensitive data based on user roles.
- 4. **Azure AI Foundry**: An enterprise-grade AI development platform providing unified model access, comprehensive monitoring, distributed tracing capabilities, and production-ready governance for AI applications at scale.
+ 4. **Microsoft Foundry**: An enterprise-grade AI development platform providing unified model access, comprehensive monitoring, distributed tracing capabilities, and production-ready governance for AI applications at scale.
### 💻 Technologies Used
- 1. Azure AI Foundry
+ 1. Microsoft Foundry
1. PostgreSQL including Row Level Security (RLS) and Semantic Search with the pgvector extension
1. Model Context Protocol (MCP)
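The technologies list above mentions semantic search with the pgvector extension, which ranks rows by embedding distance. As a rough pure-Python sketch of that ranking idea (toy 3-dimensional vectors and hypothetical product names; in the workshop, this ranking happens inside PostgreSQL, not in application code):

```python
import math

# Toy "embeddings" for hypothetical Zava products. Real embeddings are
# high-dimensional vectors stored in a pgvector column.
PRODUCTS = {
    "claw hammer":  [0.9, 0.1, 0.0],
    "paint roller": [0.1, 0.9, 0.1],
    "wood screws":  [0.8, 0.2, 0.1],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def semantic_search(query_embedding, top_k=2):
    # pgvector expresses this as an ORDER BY on a distance operator with
    # a LIMIT; here we just sort by similarity in Python.
    ranked = sorted(
        PRODUCTS,
        key=lambda name: cosine_similarity(query_embedding, PRODUCTS[name]),
        reverse=True,
    )
    return ranked[:top_k]
```

A query embedding close to "tools for driving nails" would rank the hammer and screws above the paint roller, which is the behavior the agent relies on when matching customer questions to products.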
@@ -36,7 +36,7 @@ By the end of this session, learners will be able to:
| Workshop Repository |[Unlock your Agents Potential with MCP and PostgreSQL](https://github.com/microsoft/Unlock-Your-Agents-Potential-with-MCP-and-PostgreSQL)| Workshop materials and resources |
| Workshop Docs |[Workshop Documentation](https://microsoft.github.io/aitour26-WRK540-unlock-your-agents-potential-with-model-context-protocol/)| Workshop documentation site |
- | Documentation |[Azure AI Foundry](https://learn.microsoft.com/azure/ai-foundry/)|Azure AI Foundry documentation |
| Module |[Fundamentals of AI agents on Azure](https://learn.microsoft.com/training/modules/ai-agent-fundamentals/)| Training module on AI agent fundamentals |
| Documentation |[Tracing using Application Insights](https://learn.microsoft.com/azure/ai-services/agents/concepts/tracing)| Guide to tracing with Application Insights |
| Documentation |[Evaluating your AI agents with Azure AI Evaluation SDK](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/agent-evaluate-sdk)| AI agent evaluation documentation |
@@ -96,11 +96,11 @@ Microsoft’s approach to responsible AI is grounded in our AI principles of f
Large-scale natural language, image, and speech models - like the ones used in this sample - can potentially behave in ways that are unfair, unreliable, or offensive, in turn causing harms. Please consult the [Azure OpenAI service Transparency note](https://learn.microsoft.com/legal/cognitive-services/openai/transparency-note?tabs=text) to be informed about risks and limitations.
- The recommended approach to mitigating these risks is to include a safety system in your architecture that can detect and prevent harmful behavior. [Azure AI Content Safety](https://learn.microsoft.com/azure/ai-services/content-safety/overview) provides an independent layer of protection, able to detect harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes text and image APIs that allow you to detect material that is harmful. Within Azure AI Foundry portal, the Content Safety service allows you to view, explore and try out sample code for detecting harmful content across different modalities. The following [quickstart documentation](https://learn.microsoft.com/azure/ai-services/content-safety/quickstart-text?tabs=visual-studio%2Clinux&pivots=programming-language-rest) guides you through making requests to the service.
+ The recommended approach to mitigating these risks is to include a safety system in your architecture that can detect and prevent harmful behavior. [Azure AI Content Safety](https://learn.microsoft.com/azure/ai-services/content-safety/overview) provides an independent layer of protection, able to detect harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes text and image APIs that allow you to detect material that is harmful. Within Microsoft Foundry portal, the Content Safety service allows you to view, explore and try out sample code for detecting harmful content across different modalities. The following [quickstart documentation](https://learn.microsoft.com/azure/ai-services/content-safety/quickstart-text?tabs=visual-studio%2Clinux&pivots=programming-language-rest) guides you through making requests to the service.
Another aspect to take into account is the overall application performance. With multi-modal and multi-model applications, we consider performance to mean that the system performs as you and your users expect, including not generating harmful outputs. It's important to assess the performance of your overall application using [Performance and Quality and Risk and Safety evaluators](https://learn.microsoft.com/azure/ai-studio/concepts/evaluation-metrics-built-in). You also have the ability to create and evaluate with [custom evaluators](https://learn.microsoft.com/azure/ai-studio/how-to/develop/evaluate-sdk#custom-evaluators).
- You can evaluate your AI application in your development environment using the [Azure AI Evaluation SDK](https://microsoft.github.io/promptflow/index.html). Given either a test dataset or a target, your generative AI application generations are quantitatively measured with built-in evaluators or custom evaluators of your choice. To get started with the azure ai evaluation sdk to evaluate your system, you can follow the [quickstart guide](https://learn.microsoft.com/azure/ai-studio/how-to/develop/flow-evaluate-sdk). Once you execute an evaluation run, you can [visualize the results in Azure AI Foundry portal](https://learn.microsoft.com/azure/ai-studio/how-to/evaluate-flow-results).
+ You can evaluate your AI application in your development environment using the [Azure AI Evaluation SDK](https://microsoft.github.io/promptflow/index.html). Given either a test dataset or a target, your generative AI application generations are quantitatively measured with built-in evaluators or custom evaluators of your choice. To get started with the Azure AI Evaluation SDK to evaluate your system, you can follow the [quickstart guide](https://learn.microsoft.com/azure/ai-studio/how-to/develop/flow-evaluate-sdk). Once you execute an evaluation run, you can [visualize the results in Microsoft Foundry portal](https://learn.microsoft.com/azure/ai-studio/how-to/evaluate-flow-results).
**docs/docs/en/agent-service-overview.md** (1 addition, 1 deletion)
@@ -1,6 +1,6 @@
The Foundry Agent Service offers a fully managed cloud service with SDKs for [Python](https://learn.microsoft.com/azure/ai-services/agents/quickstart?pivots=programming-language-python-azure){:target="_blank"}, [C#](https://learn.microsoft.com/azure/ai-services/agents/quickstart?pivots=programming-language-csharp){:target="_blank"}, and [TypeScript](https://learn.microsoft.com/azure/ai-foundry/agents/quickstart?pivots=programming-language-typescript){:target="_blank"}. The Foundry SDKs simplify AI agent development, reducing complex tasks like calling MCP Server tools to just a few lines of code.
- The Azure AI Foundry Agent Service offers several advantages over traditional agent platforms:
+ The Microsoft Foundry Agent Service offers several advantages over traditional agent platforms:
- **Rapid Deployment**: Optimized SDK for fast deployment, letting developers focus on building agents.
- **Scalability**: Designed to handle varying user loads without performance issues.
**docs/docs/en/architecture.md** (2 additions, 2 deletions)
@@ -1,6 +1,6 @@
## Core technologies at a glance
- - **Azure AI Foundry Agent Service**
+ - **Microsoft Foundry Agent Service**
Hosts the LLM-driven agent; orchestrates tools (including MCP Servers); manages context, Code Interpreter, and token streaming; and provides authentication, logging, and scaling.
- **MCP Servers**
MCP (Model Context Protocol) is an open standard that gives LLMs a unified interface to external tools, APIs, and data. It standardizes tool discovery (like OpenAPI for REST) and improves composability by making tools easy to update or swap as needs evolve.
@@ -13,7 +13,7 @@
The Zava Sales Analysis solution architecture includes:
- - An **Azure AI Foundry Agent Service** instance that hosts the Zava Sales Analysis agent.
+ - A **Microsoft Foundry Agent Service** instance that hosts the Zava Sales Analysis agent.
- A **PostgreSQL** database with the **pgvector** extension, storing Zava sales data and embeddings.
- An **MCP Server** that exposes the PostgreSQL database to the agent via MCP.
- An **Agent Manager** app that manages the interaction between the user and the agent.
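The architecture above depends on PostgreSQL Row Level Security so each role sees only the rows it is permitted to. A rough sketch of that effect in plain Python, with hypothetical rows and role names; in the workshop the filtering is enforced server-side by database policies, not by application code:

```python
# Hypothetical sales rows; in the real solution these live in PostgreSQL.
SALES_ROWS = [
    {"store": "seattle", "total": 1200},
    {"store": "redmond", "total": 800},
    {"store": "seattle", "total": 450},
]

def visible_rows(rows, role):
    # A head-office role sees everything; a store role sees only its own
    # store's rows. This mimics the effect of an RLS policy along the
    # lines of `USING (store = current_setting('app.current_store'))`.
    if role == "head_office":
        return rows
    return [r for r in rows if r["store"] == role]
```

The key property RLS gives you over this sketch is that the restriction applies to every query the agent issues, so a prompt-injected `SELECT *` cannot leak another store's data.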
**docs/docs/en/hw-1-start-mcp-server.md** (3 additions, 3 deletions)
@@ -7,7 +7,7 @@ In this lab, you will:
## Introduction
- The Model Context Protocol (MCP) server is a crucial component that handles the communication between Large Language Models (LLMs) and external tools and data sources. You’ll run the MCP server on your local machine, but the Azure AI Foundry Agent Service requires internet access to connect to it. To make your local MCP server accessible from the internet, you’ll use a DevTunnel. This allows the Agent Service to communicate with your MCP server as if it were running as a service in Azure.
+ The Model Context Protocol (MCP) server is a crucial component that handles the communication between Large Language Models (LLMs) and external tools and data sources. You’ll run the MCP server on your local machine, but the Microsoft Foundry Agent Service requires internet access to connect to it. To make your local MCP server accessible from the internet, you’ll use a DevTunnel. This allows the Agent Service to communicate with your MCP server as if it were running as a service in Azure.
## Interface options for MCP
@@ -16,14 +16,14 @@ MCP supports two main interfaces for connecting LLMs with tools:
- **Streamable HTTP Transport**: For web-based APIs and services.
- **Stdio Transport**: For local scripts and command-line tools.
- This lab uses the Streamable HTTP transport interface to integrate with the Azure AI Foundry Agent Service.
+ This lab uses the Streamable HTTP transport interface to integrate with the Microsoft Foundry Agent Service.
!!! note
Normally, you'd deploy the MCP server in a production environment, but for this workshop, you'll run it locally in your development environment. This allows you to test and interact with the MCP tools without needing a full deployment.
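An MCP server's job, as described above, boils down to two operations: advertising the tools it offers and dispatching calls to them. A miniature sketch of that shape; the tool name, description, and payload below are hypothetical stand-ins, not the workshop's actual MCP tools:

```python
import json

# A registry mapping tool names to metadata and handlers. Real MCP servers
# declare typed input schemas for each tool as well.
TOOLS = {
    "get_sales_by_store": {
        "description": "Total sales grouped by store.",
        "handler": lambda args: {"seattle": 1650, "redmond": 800},
    },
}

def list_tools():
    # Corresponds in spirit to MCP tool discovery: the client asks the
    # server what tools exist before the LLM ever calls one.
    return [{"name": n, "description": t["description"]} for n, t in TOOLS.items()]

def call_tool(name, arguments):
    # Corresponds in spirit to an MCP tool invocation: look up the tool,
    # run its handler, and return a serializable result.
    result = TOOLS[name]["handler"](arguments)
    return json.dumps(result)
```

Because discovery is standardized, tools can be added, updated, or swapped in the registry without changing the agent that calls them, which is the composability benefit the docs describe.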
### Start up a DevTunnel for the MCP Server
- 1. In a new terminal, authenticate DevTunnel. You'll be prompted to log in with your Azure account, use the same account you used to log in to the Azure AI Foundry Agent Service or Azure Portal. Run the following command:
+ 1. In a new terminal, authenticate DevTunnel. You'll be prompted to log in with your Azure account; use the same account you used to log in to the Microsoft Foundry Agent Service or the Azure Portal. Run the following command:
**docs/docs/en/index.md** (6 additions, 6 deletions)
@@ -12,14 +12,14 @@ You need to analyze sales data to find trends, understand customer preferences,
Learn to build an AI agent that analyzes sales data, answers product questions, and helps customers find products. Key topics:
- 1. **Azure AI Foundry Agent Service**: Build and deploy AI agents with integrated tools and observability.
+ 1. **Microsoft Foundry Agent Service**: Build and deploy AI agents with integrated tools and observability.
2. **Model Context Protocol (MCP)**: Connects the Agent Service to external tools and data over industry-standard protocols to enhance agent functionality.
3. **PostgreSQL**: Use PostgreSQL as a vector database for semantic search and implement Row Level Security (RLS) to protect sensitive data based on user roles.
- 4. **Azure AI Foundry**: An enterprise-grade AI development platform providing unified model access, comprehensive monitoring, distributed tracing capabilities, and production-ready governance for AI applications at scale.
+ 4. **Microsoft Foundry**: An enterprise-grade AI development platform providing unified model access, comprehensive monitoring, distributed tracing capabilities, and production-ready governance for AI applications at scale.
### Just starting your AI Agents journey?
- New to AI agents? Start with [Build your code-first agent with Azure AI Foundry](https://aka.ms/aitour/WRK552){:target="_blank"}. You'll build a code-first agent integrating LLMs with databases, documents, and Bing Search—a solid foundation for advanced agents like Zava.
+ New to AI agents? Start with [Build your code-first agent with Microsoft Foundry](https://aka.ms/aitour/WRK552){:target="_blank"}. You'll build a code-first agent integrating LLMs with databases, documents, and Bing Search—a solid foundation for advanced agents like Zava.
## What is an LLM-Powered AI Agent?
@@ -29,8 +29,8 @@ Example: If a user asks, "Show total sales per store as a pie chart", the agent
This shifts much of the application logic from developers to the model. Clear instructions and dependable tools are essential for predictable agent behavior and results.
- ## Introduction to the Azure AI Foundry
+ ## Introduction to Microsoft Foundry
- [Azure AI Foundry](https://azure.microsoft.com/products/ai-foundry/){:target="_blank"} is Microsoft’s secure, flexible platform for designing, customizing, and managing AI apps and agents. Everything—models, agents, tools, and observability—lives behind a single portal, SDK, and REST endpoint, so you can ship to cloud or edge with governance and cost controls in place from day one.
+ [Microsoft Foundry](https://azure.microsoft.com/products/ai-foundry/){:target="_blank"} is Microsoft’s secure, flexible platform for designing, customizing, and managing AI apps and agents. Everything—models, agents, tools, and observability—lives behind a single portal, SDK, and REST endpoint, so you can ship to cloud or edge with governance and cost controls in place from day one.

**docs/docs/en/lab-2-start-the-agent.md** (2 additions, 2 deletions)
@@ -177,13 +177,13 @@ From the web chat client, you can start a conversation with the agent. The agent
!!! tip
=== "Python"
- Switch back to VS Code and select **MCP Server (workspace)** from the TERMINAL panel and you'll see the calls made to the MCP Server by the Azure AI Foundry Agent Service.
+ Switch back to VS Code and select **MCP Server (workspace)** from the TERMINAL panel and you'll see the calls made to the MCP Server by the Microsoft Foundry Agent Service.

=== "C#"
- In the Aspire dashboard, you can select the logs for the `dotnet-mcp-server` resource to see the calls made to the MCP Server by the Azure AI Foundry Agent Service.
+ In the Aspire dashboard, you can select the logs for the `dotnet-mcp-server` resource to see the calls made to the MCP Server by the Microsoft Foundry Agent Service.
You can also open the trace view and find the end-to-end trace of the application, from the user input in the web chat, through to the agent calls and MCP tool calls.
**docs/docs/en/lab-5-monitoring.md** (2 additions, 2 deletions)
@@ -1,12 +1,12 @@
## Introduction
2
2
3
- Monitoring keeps your Azure AI Foundry Agent Service available, performant, and reliable. Azure Monitor collects metrics and logs, provides real‑time insights, and sends alerts. Use dashboards and custom alerts to track key metrics, analyze trends, and respond proactively. Access monitoring via the Azure portal, CLI, REST API, or client libraries.
+ Monitoring keeps your Microsoft Foundry Agent Service available, performant, and reliable. Azure Monitor collects metrics and logs, provides real‑time insights, and sends alerts. Use dashboards and custom alerts to track key metrics, analyze trends, and respond proactively. Access monitoring via the Azure portal, CLI, REST API, or client libraries.
## Lab Exercise
1. From the VS Code file explorer, open the `resources.txt` file in the `workshop` folder.
1. **Copy** the value for the `AI Project Name` key to the clipboard.
- 1. Navigate to the [Azure AI Foundry Portal](https://ai.azure.com){:target="_blank"} page.
+ 1. Navigate to the [Microsoft Foundry Portal](https://ai.azure.com){:target="_blank"} page.
1. Select your project from the list of foundry projects.
**docs/docs/en/lab-6-tracing.md** (5 additions, 5 deletions)
@@ -1,6 +1,6 @@
## Introduction
2
2
3
- Tracing helps you understand and debug your agent's behavior by showing the sequence of steps, inputs, and outputs during execution. In Azure AI Foundry, tracing lets you observe how your agent processes requests, calls tools, and generates responses. You can use the Azure AI Foundry portal or integrate with OpenTelemetry and Application Insights to collect and analyze trace data, making it easier to troubleshoot and optimize your agent.
+ Tracing helps you understand and debug your agent's behavior by showing the sequence of steps, inputs, and outputs during execution. In Microsoft Foundry, tracing lets you observe how your agent processes requests, calls tools, and generates responses. You can use the Microsoft Foundry portal or integrate with OpenTelemetry and Application Insights to collect and analyze trace data, making it easier to troubleshoot and optimize your agent.
<!-- ## Lab Exercise
@@ -40,13 +40,13 @@ Write an executive report that analyzes the top 5 product categories and compare
40
40
41
41
## View Traces
- You can view the traces of your agent's execution in the Azure AI Foundry portal or by using OpenTelemetry. The traces will show the sequence of steps, tool calls, and data exchanged during the agent's execution. This information is crucial for debugging and optimizing your agent's performance.
+ You can view the traces of your agent's execution in the Microsoft Foundry portal or by using OpenTelemetry. The traces will show the sequence of steps, tool calls, and data exchanged during the agent's execution. This information is crucial for debugging and optimizing your agent's performance.
- ### Using Azure AI Foundry Portal
+ ### Using Microsoft Foundry Portal
- To view traces in the Azure AI Foundry portal, follow these steps:
+ To view traces in the Microsoft Foundry portal, follow these steps:
- 1. Navigate to the [Azure AI Foundry](https://ai.azure.com/) portal.
+ 1. Navigate to the [Microsoft Foundry](https://ai.azure.com/) portal.
2. Select your project.
51
51
3. Select the **Tracing** tab in the left-hand menu.
52
52
4. Here, you can see the traces generated by your agent.
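The trace tree shown in the portal's Tracing tab is assembled from nested spans, each recording its name, parent, and duration. A minimal pure-Python stand-in for that structure (a real agent would emit these via the OpenTelemetry SDK rather than hand-rolled code):

```python
import contextlib
import time

SPANS = []   # finished spans, innermost first
_stack = []  # names of currently open spans

@contextlib.contextmanager
def span(name):
    # Record the parent from whatever span is currently open, time the
    # body, and append the finished record on exit.
    record = {"name": name, "parent": _stack[-1] if _stack else None}
    _stack.append(name)
    start = time.perf_counter()
    try:
        yield record
    finally:
        record["duration_s"] = time.perf_counter() - start
        _stack.pop()
        SPANS.append(record)

# A toy trace: one agent run containing one MCP tool call.
with span("agent_run"):
    with span("mcp_tool_call"):
        pass
```

The parent links are what let the portal render the end-to-end view, from the user's chat input down through each agent step and MCP tool call.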