Getting started with the Azure AI Foundry SDK
One SDK to rule your entire AI project
If you have been building on Azure AI for a while, you will have noticed that the landscape of services and SDKs has been scattered: the Azure OpenAI SDK, the Azure Cognitive Services SDK, the Azure Machine Learning SDK, and a few others depending on what you were doing. Microsoft has been consolidating all of this under one roof, and the Azure AI Foundry SDK is the result of that effort. It gives you a unified way to work with models, indexes, evaluations, and deployments from a single package.
Install the SDK and dependencies:
```shell
pip install azure-ai-projects azure-identity openai
```

Set up your environment variables:

```shell
AZURE_SUBSCRIPTION_ID=<your subscription id>
AZURE_RESOURCE_GROUP=<your resource group>
AZURE_AI_PROJECT_NAME=<your project name>
AZURE_OPENAI_CONNECTION_NAME=<your openai connection name>
```

The first thing you need is a project client, which is your entry point into the SDK:
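Before constructing the client, it can help to fail fast if any of these variables are missing rather than getting a confusing error deep inside an SDK call. A minimal sketch (the `load_settings` helper is my own, not part of the SDK):

```python
import os

# The variables the project client needs; the connection name is only
# required if you look up a specific connection by name later.
REQUIRED_VARS = (
    "AZURE_SUBSCRIPTION_ID",
    "AZURE_RESOURCE_GROUP",
    "AZURE_AI_PROJECT_NAME",
)

def load_settings(env=os.environ) -> dict:
    # Report every missing variable at once instead of failing on the
    # first lookup, which makes misconfigured environments easy to fix.
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: env[name] for name in REQUIRED_VARS}
```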
```python
import os

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

client = AIProjectClient(
    subscription_id=os.getenv("AZURE_SUBSCRIPTION_ID"),
    resource_group_name=os.getenv("AZURE_RESOURCE_GROUP"),
    project_name=os.getenv("AZURE_AI_PROJECT_NAME"),
    credential=DefaultAzureCredential(),
)
```

From this client you can get a configured OpenAI connection without needing to manage endpoints and keys separately:
```python
openai_client = client.inference.get_azure_openai_client(api_version="2024-12-01-preview")

response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What are the key benefits of moving AI workloads to the cloud?"},
    ],
)
print(response.choices[0].message.content)
```

What I like about this approach is that credential management is handled through the project connection. You don't need to pass around endpoint URLs and API keys in your code. In a team environment this makes a big difference, because everyone connects to the same project and gets the same configuration.
You can also list the models available in your project:
```python
for model in client.models.list():
    print(f"{model.name} - {model.model_id}")
```

And deploy a new model directly from code:
```python
from azure.ai.projects.models import ModelDeployment

deployment = client.deployments.begin_create_or_update(
    deployment_name="my-gpt4o-deployment",
    deployment=ModelDeployment(model_id="gpt-4o"),
).result()
print(f"Deployment status: {deployment.provisioning_state}")
```

Why this matters for enterprise teams
When you are working with a larger team or multiple projects, having a centralised way to manage your AI resources is critical. The Foundry SDK ties directly into your Azure subscription, so access control, cost tracking, and governance all flow through your existing Azure setup. No more passing API keys around in Slack messages or storing them in notebooks.
It also means you can build internal tooling and deployment pipelines that work consistently across environments because the project is the source of truth.
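One way to make the project the source of truth across environments is to key the project identifiers off a single deployment-stage variable, so the same tooling code targets dev or prod without edits. A rough sketch (the stage names, resource groups, and helper below are illustrative, not an SDK convention):

```python
import os

# Hypothetical mapping from deployment stage to Foundry project identifiers;
# in practice these would come from your own configuration store.
PROJECTS = {
    "dev": {"resource_group": "rg-ai-dev", "project_name": "assistant-dev"},
    "prod": {"resource_group": "rg-ai-prod", "project_name": "assistant-prod"},
}

def project_for_stage(stage=None) -> dict:
    # Resolve the stage from the environment when not passed explicitly,
    # defaulting to dev so local runs never touch production resources.
    stage = stage or os.environ.get("DEPLOY_STAGE", "dev")
    if stage not in PROJECTS:
        raise ValueError(f"Unknown stage: {stage!r}")
    return PROJECTS[stage]
```

The returned dictionary can feed straight into the `AIProjectClient` constructor shown earlier.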
I have been using this on a few recent projects and it has simplified the setup quite a bit. Give it a try on your next project and let me know how you get on.