I am creating a small POC where I am trying to get a response from an LLM using the gpt-4.1 model. Please note I am using the SDK here; I am able to get the response if I make a plain REST API call instead.
Below is my code:
appSettings.json:
{
  "AzureOpenAI": {
    "Endpoint": "https://srch-nvi-genwizard-devres.openai.azure.com/",
    "ApiKey": "MyAPIKey",
    "Deployment": "gpt-4.1"
  }
}
Program.cs (just the configuration part):
using Azure;
using Azure.AI.Inference;
using Azure.Core.Diagnostics;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

builder.Services.AddSingleton(sp =>
{
    var config = sp.GetRequiredService<IConfiguration>();
    return new ChatCompletionsClient(
        new Uri(config["AzureOpenAI:Endpoint"]),
        new AzureKeyCredential(config["AzureOpenAI:ApiKey"])
    );
});
Controller: HotelController.cs (I am pasting it as an image, since there is some issue posting more code here).
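Since the controller is only attached as an image, here is a minimal sketch of roughly what HotelController.cs does; the route, action name, and prompt text are my assumptions, not the original code:

    using Azure;
    using Azure.AI.Inference;
    using Microsoft.AspNetCore.Mvc;

    [ApiController]
    [Route("api/[controller]")]
    public class HotelController : ControllerBase
    {
        private readonly ChatCompletionsClient _client;

        // The singleton registered in Program.cs is injected here
        public HotelController(ChatCompletionsClient client) => _client = client;

        [HttpGet("ask")]
        public async Task<IActionResult> Ask(string question)
        {
            // Assumption: "gpt-4.1" should match the AzureOpenAI:Deployment value
            var options = new ChatCompletionsOptions
            {
                Model = "gpt-4.1",
                Messages =
                {
                    new ChatRequestSystemMessage("You are a helpful assistant."),
                    new ChatRequestUserMessage(question)
                }
            };

            Response<ChatCompletions> response = await _client.CompleteAsync(options);
            return Ok(response.Value.Content);
        }
    }

This is only a sketch of the shape of the call (`await _client.CompleteAsync(...)`), not the actual controller from the image.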
When I run this, I am getting a 404 error: Request not found.
If I use the same configuration in Python, I am able to get the response.
FYI: the endpoint URL, API key, and deployment name are all correct here.
Comments:

- `Deployment` is seemingly not being used, is that intentional? You read `var endpoint = config["AzureOpenAI:Endpoint"];` and the other values.
- Regarding `await _client.CompleteAsync`: in which case please post the full exception details, including the full exception type name, its `.Message`, the `.StackTrace`, et cetera, and for all nested `.InnerException` objects.