
Key Approaches to AI/ML Integration with .NET Core

 



Integrating AI/ML-driven features like prediction services and classification engines into a .NET Core Web API is a powerful way to add intelligence to your applications. Here's a breakdown of good tools, resources, and approaches to consider:

Key Approaches to AI/ML Integration with .NET Core

There are generally two main ways to integrate AI/ML into your .NET Core Web API:

  1. Using ML.NET: This is Microsoft's open-source, cross-platform machine learning framework built specifically for .NET developers. It allows you to build, train, and deploy custom ML models directly within your .NET applications using C# or F#.

 

  2. Consuming External AI/ML Services/APIs: This involves leveraging pre-built or custom-trained models hosted as services (e.g., in the cloud like Azure Machine Learning, Google Cloud AI, AWS ML services, or third-party AI APIs like OpenAI). Your .NET Core Web API then acts as a client to these external services.

 Good Tools and Resources

1. ML.NET (for building and deploying models directly in .NET)

Why use it?

  • Native .NET Experience: If you're a .NET developer, ML.NET allows you to stay within the familiar .NET ecosystem, using C# or F# for your entire ML workflow (data preparation, model training, evaluation, and consumption).
  • Offline/On-premises Deployment: Models trained with ML.NET can be directly embedded into your .NET Core Web API, allowing for predictions without an internet connection (once the model is downloaded/deployed).
  • Customization: You can build highly custom models tailored to your specific data and problem.
  • Performance: For scenarios where low-latency predictions are crucial, hosting the model directly in your API can reduce network overhead.

 Tools & Resources:

  • ML.NET Library (NuGet Package: Microsoft.ML): The core library for all ML.NET operations. You'll add this to your .NET Core Web API project.
  • ML.NET Model Builder (Visual Studio Extension): This is an excellent tool for beginners and experienced developers alike. It's a GUI tool integrated into Visual Studio that helps you build, train, and consume ML.NET models without writing a lot of code. It can:
    • Walk you through the process of choosing an ML scenario (classification, regression, object detection, etc.).
    • Help you prepare your data (from files or SQL Server).
    • Automate model training (AutoML) and select the best algorithm.
    • Generate C# code for model consumption, which you can then integrate into your Web API.
  • ML.NET CLI: For command-line users or CI/CD pipelines, the ML.NET CLI allows you to train and evaluate models without Visual Studio.
  • ML.NET Samples (GitHub): The official ML.NET GitHub repository has numerous samples demonstrating various ML scenarios and how to implement them.
  • Microsoft Learn Documentation for ML.NET: Comprehensive guides, tutorials, and API references are available on the Microsoft Learn platform. Look for "ML.NET Tutorial - Get started in 10 minutes" and "Make predictions with a trained model - ML.NET".
  • PredictionEnginePool: For improved performance and thread safety when making multiple predictions with an ML.NET model in a Web API, use PredictionEnginePool with dependency injection. This object pool reuses PredictionEngine objects, which are not thread-safe.
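To make the training side of the workflow concrete, here is a minimal, self-contained ML.NET training sketch. The dataset, column names, and choice of trainer (SDCA regression) are illustrative assumptions, not a recommendation; in practice you would load real data from a file or SQL Server and pick a trainer suited to your scenario.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ML;
using Microsoft.ML.Data;

public class HouseData
{
    public float Size { get; set; }
    public float Price { get; set; }
}

public class PricePrediction
{
    [ColumnName("Score")]
    public float Price { get; set; }
}

public static class TrainDemo
{
    public static void Main()
    {
        var mlContext = new MLContext(seed: 0);

        // Tiny in-memory dataset for illustration; Model Builder or
        // LoadFromTextFile would normally supply real training data.
        var data = new List<HouseData>
        {
            new HouseData { Size = 1.1f, Price = 1.2f },
            new HouseData { Size = 1.9f, Price = 2.3f },
            new HouseData { Size = 2.8f, Price = 3.0f },
            new HouseData { Size = 3.4f, Price = 3.7f },
        };
        IDataView trainingData = mlContext.Data.LoadFromEnumerable(data);

        // Pipeline: assemble the feature vector, then append a regression trainer.
        var pipeline = mlContext.Transforms.Concatenate("Features", nameof(HouseData.Size))
            .Append(mlContext.Regression.Trainers.Sdca(
                labelColumnName: nameof(HouseData.Price),
                maximumNumberOfIterations: 100));

        ITransformer model = pipeline.Fit(trainingData);

        // Persist the model as the .zip file your Web API will later load.
        mlContext.Model.Save(model, trainingData.Schema, "model.zip");
    }
}
```

The resulting `model.zip` is exactly what the `PredictionEnginePool` registration in the next section loads.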

 How to integrate with .NET Core Web API using ML.NET:

  1. Train your ML.NET model: Use Model Builder or write C# code to train your model. This will typically result in a .zip file containing the trained model.
  2. Add Microsoft.ML NuGet package to your .NET Core Web API project.
  3. Load the model: In your API startup or a dedicated service, load the .zip model file using mlContext.Model.Load().
  4. Create a PredictionEnginePool: Configure PredictionEnginePool in your Startup.cs (or Program.cs for .NET 6+ Minimal APIs) for efficient, thread-safe predictions.
  5. Define input and output classes: Create C# classes that represent the input features your model expects and the output predictions it provides. These often correspond to the data schema used during training.
  6. Create API Endpoints: Expose HTTP endpoints (e.g., POST requests) that accept the input data, use the PredictionEnginePool to make predictions, and return the results.

 Example (simplified):

```csharp
// Requires the Microsoft.ML and Microsoft.Extensions.ML NuGet packages.

// Input class for your model
public class MyModelInput
{
    [ColumnName("Feature1")]
    public float Feature1 { get; set; }

    [ColumnName("Feature2")]
    public string Feature2 { get; set; }
}

// Output class for your model
public class MyModelOutput
{
    [ColumnName("Score")]
    public float Prediction { get; set; }
}

// In your Startup.cs (or Program.cs for Minimal APIs)
public void ConfigureServices(IServiceCollection services)
{
    services.AddPredictionEnginePool<MyModelInput, MyModelOutput>()
            .FromFile("path/to/your/model.zip", watchForChanges: true);

    // ... other services
}

// In your API controller
[ApiController]
[Route("[controller]")]
public class PredictionController : ControllerBase
{
    private readonly PredictionEnginePool<MyModelInput, MyModelOutput> _predictionEnginePool;

    public PredictionController(PredictionEnginePool<MyModelInput, MyModelOutput> predictionEnginePool)
    {
        _predictionEnginePool = predictionEnginePool;
    }

    [HttpPost("predict")]
    public IActionResult Predict([FromBody] MyModelInput input)
    {
        MyModelOutput prediction = _predictionEnginePool.Predict(input);
        return Ok(prediction);
    }
}
```

2. Consuming External AI/ML Services/APIs

Why use it?

  • Managed Services: Offload the burden of managing and scaling ML infrastructure to cloud providers.
  • Pre-trained Models: Access powerful, ready-to-use models for common tasks (e.g., natural language processing, image recognition, speech-to-text) without needing to train your own.
  • Scalability: Cloud services are designed for high scalability and can handle large volumes of requests.
  • Interoperability (ONNX): If your models are trained in other frameworks (like TensorFlow or PyTorch), you can often export them to the ONNX (Open Neural Network Exchange) format and then potentially consume them with ML.NET or directly with ONNX Runtime bindings in .NET.

 Tools & Resources:

  • Azure Machine Learning:
    • Azure ML Studio: A web portal for building, training, deploying, and managing your ML models.
    • Azure Machine Learning SDK for .NET: While the .NET tooling is not as extensive as the Python SDK, you can interact with Azure ML resources (e.g., deploying models as web services) programmatically.
    • Managed Endpoints: Azure ML allows you to deploy trained models as RESTful endpoints, which your .NET Core Web API can call.
    • Azure Cognitive Services (Azure AI Services): A collection of pre-built AI services for common tasks like:
      • Language: Text Analytics (sentiment, key phrases, entity recognition), Translator, Speech (text-to-speech, speech-to-text).
      • Vision: Computer Vision (image analysis, object detection), Face, Custom Vision.
      • Decision: Anomaly Detector, Content Moderator.
      • OpenAI Service on Azure: Access to powerful LLMs like GPT-3.5 and GPT-4 through Azure.
    • Azure OpenAI SDK for .NET (Azure.AI.OpenAI NuGet package): Provides a client library for interacting with Azure OpenAI Service from .NET applications.
  • Google Cloud AI Platform / Vertex AI:
    • Similar to Azure ML, offers tools for building and deploying custom models.
    • Google Cloud AI APIs: Pre-trained models for vision, natural language, speech, and more (e.g., Vision AI, Natural Language AI, Speech-to-Text, Dialogflow).
    • Google Cloud .NET Client Libraries: Libraries available for interacting with various Google Cloud services, including AI APIs.
  • AWS Machine Learning Services:
    • Amazon SageMaker: A fully managed service for building, training, and deploying ML models.
    • AWS AI Services: Pre-trained services like Amazon Rekognition (image/video analysis), Amazon Comprehend (NLP), Amazon Polly (text-to-speech), Amazon Lex (chatbot building).
    • AWS SDK for .NET: Used to interact with AWS services from your .NET Core application.
  • Third-party AI APIs (e.g., OpenAI API):
    • Many AI companies provide public APIs (e.g., OpenAI's standard API, Hugging Face).
    • HttpClient: The built-in HttpClient in .NET Core is your primary tool for making HTTP requests to these external APIs.
    • JSON Serialization/Deserialization: Use System.Text.Json (built-in) or Newtonsoft.Json (third-party) to serialize your request bodies to JSON and deserialize the API responses.
  • ONNX Runtime: If you have models in ONNX format (trained in Python, etc.), you can use the ONNX Runtime with its .NET bindings to run these models directly within your .NET Core application for inference. This offers a good balance between using a familiar .NET environment and leveraging models from diverse ML ecosystems.
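As a rough illustration of the ONNX Runtime route, here is a minimal inference sketch using the Microsoft.ML.OnnxRuntime package. The `model.onnx` path and the `"input"` tensor name are placeholders — check your model's real input/output names via `session.InputMetadata` or a tool like Netron.

```csharp
using System;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

public static class OnnxDemo
{
    public static float[] Predict(float[] features)
    {
        // Loads the model once per call here for brevity; in a Web API,
        // create the InferenceSession once and register it as a singleton.
        using var session = new InferenceSession("model.onnx");

        // Shape [1, N]: a single example with N features.
        var tensor = new DenseTensor<float>(features, new[] { 1, features.Length });
        var inputs = new[] { NamedOnnxValue.CreateFromTensor("input", tensor) };

        using var results = session.Run(inputs);
        return results.First().AsEnumerable<float>().ToArray();
    }
}
```

This lets a model trained in PyTorch or TensorFlow run in-process in .NET, with the same latency benefits as an embedded ML.NET model.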

 How to integrate with .NET Core Web API by consuming external APIs:

  1. Choose your AI/ML service: Decide which cloud provider or third-party API best suits your needs.
  2. Obtain API keys/credentials: Securely store and retrieve these (e.g., using Azure Key Vault, environment variables, or .NET user secrets).
  3. Use HttpClient: In your .NET Core Web API, create a service that uses HttpClient to send requests to the external AI/ML API.
  4. Handle request/response: Construct the JSON request body according to the API's documentation and deserialize the JSON response into appropriate C# objects.
  5. Implement retry policies and error handling: External APIs can have rate limits or temporary issues, so robust error handling is crucial. Consider using libraries like Polly for transient fault handling.
  6. Create API Endpoints: Expose your own HTTP endpoints that act as proxies or orchestrators, calling the external AI/ML service and returning processed results to your clients.
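Step 5 can be sketched by attaching Polly policies to the typed HttpClient registration via the Microsoft.Extensions.Http.Polly package. The retry counts, delays, and circuit-breaker thresholds below are illustrative defaults only, and `ExternalPredictionService` refers to the typed client from the example in this post:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;
using Polly;
using Polly.Extensions.Http;

public static class ResilienceSetup
{
    public static void Configure(IServiceCollection services)
    {
        services.AddHttpClient<ExternalPredictionService>()
            // Retry 5xx, 408, and HttpRequestException with exponential backoff.
            .AddPolicyHandler(HttpPolicyExtensions
                .HandleTransientHttpError()
                .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))))
            // After 5 consecutive failures, stop calling the service for 30 seconds.
            .AddPolicyHandler(HttpPolicyExtensions
                .HandleTransientHttpError()
                .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30)));
    }
}
```

Wiring the policies into the HttpClient pipeline keeps retry logic out of your service code entirely.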

 Example (simplified, calling a hypothetical external prediction service):

```csharp
// Requires System.Net.Http.Json (built into modern .NET).

// In your appsettings.json (or Azure Key Vault):
// "ExternalAIService": {
//   "BaseUrl": "https://api.externalai.com/",   // must end with "/" so relative paths resolve correctly
//   "ApiKey": "YOUR_API_KEY"
// }

// Input DTO for the external service
public class ExternalServiceInput
{
    public string Text { get; set; }
}

// Output DTO from the external service
public class ExternalServiceOutput
{
    public string Classification { get; set; }
    public float Confidence { get; set; }
}

// Prediction service (typed HttpClient)
public class ExternalPredictionService
{
    private readonly HttpClient _httpClient;

    public ExternalPredictionService(HttpClient httpClient, IConfiguration configuration)
    {
        _httpClient = httpClient;
        _httpClient.BaseAddress = new Uri(configuration["ExternalAIService:BaseUrl"]);
        _httpClient.DefaultRequestHeaders.Add("Authorization", $"Bearer {configuration["ExternalAIService:ApiKey"]}");
    }

    public async Task<ExternalServiceOutput> GetPredictionAsync(string text)
    {
        var input = new ExternalServiceInput { Text = text };
        var response = await _httpClient.PostAsJsonAsync("classify", input);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadFromJsonAsync<ExternalServiceOutput>();
    }
}

// In your Startup.cs (or Program.cs for Minimal APIs)
public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient<ExternalPredictionService>();
    // ... other services
}

// In your API controller
[ApiController]
[Route("[controller]")]
public class ExternalAIController : ControllerBase
{
    private readonly ExternalPredictionService _predictionService;

    public ExternalAIController(ExternalPredictionService predictionService)
    {
        _predictionService = predictionService;
    }

    [HttpPost("classifyText")]
    public async Task<IActionResult> ClassifyText([FromBody] string text)
    {
        if (string.IsNullOrWhiteSpace(text))
        {
            return BadRequest("Text cannot be empty.");
        }

        var result = await _predictionService.GetPredictionAsync(text);
        return Ok(result);
    }
}
```

 General Best Practices for AI/ML Integration

  • Define Clear Use Cases: Before diving into tools, clearly understand what AI/ML feature you want to build (e.g., spam detection, customer churn prediction, image classification).
  • Data Strategy: AI/ML relies heavily on data. Plan for data collection, storage, preprocessing, and ongoing updates.
  • Model Versioning: As models evolve, implement a strategy for versioning them to ensure consistent behavior and enable rollbacks.
  • Monitoring and Retraining: ML models can "drift" over time as real-world data changes. Implement monitoring to track model performance and establish a process for periodic retraining and redeployment.
  • Error Handling and Fallbacks: What happens if the AI/ML service is down or returns an unexpected result? Build in robust error handling and, where possible, a fallback mechanism (e.g., a cached or default response).
  • Performance Considerations:
    • Latency: For real-time predictions, choose solutions that offer low latency.
    • Throughput: Consider how many predictions your API needs to handle per second and scale accordingly.
    • Resource Usage: Be mindful of CPU/memory usage if hosting models directly in your API.
  • Security: Protect your API keys, restrict access to your prediction endpoints, and consider data privacy.
  • Responsible AI: Address potential biases in your models, ensure data privacy, and maintain transparency where possible. 
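The Model Versioning point above can be sketched with `PredictionEnginePool`'s support for multiple named models (Microsoft.Extensions.ML). The file paths and version names are placeholders, and `MyModelInput`/`MyModelOutput` are the classes from the ML.NET example in this post:

```csharp
// In your Startup.cs (or Program.cs for Minimal APIs)
public void ConfigureServices(IServiceCollection services)
{
    services.AddPredictionEnginePool<MyModelInput, MyModelOutput>()
        .FromFile(modelName: "v1", filePath: "models/model-v1.zip", watchForChanges: false)
        .FromFile(modelName: "v2", filePath: "models/model-v2.zip", watchForChanges: true);
}

// At prediction time, select the version explicitly; rolling back
// then becomes a one-line change:
// MyModelOutput result = _predictionEnginePool.Predict(modelName: "v2", example: input);
```

Keeping the active version name in configuration rather than code makes rollbacks possible without a redeployment.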

By considering these tools, approaches, and best practices, you can effectively integrate AI/ML capabilities into your .NET Core Web API, building intelligent and powerful applications.
