European ASP.NET 4.5 Hosting BLOG

BLOG about ASP.NET 4, ASP.NET 4.5 Hosting and Its Technology - Dedicated to European Windows Hosting Customer

European ASP.NET Core 9.0 Hosting - HostForLIFE :: Using .NET 9 to Build a Face Detection System

clock April 21, 2025 09:25 by author Peter

Face detection has evolved from a cool sci-fi concept to a practical tool in everything from photo tagging to biometric authentication. In this blog post, we’ll explore how to build a real-time face detection system using the latest .NET 9 and OpenCV via Emgu.CV, a popular .NET wrapper around the OpenCV library.


Prerequisites
To follow along, you'll need:

✅ .NET 9 SDK
✅ Visual Studio 2022+ or VS Code
✅ Basic knowledge of C#
✅ Emgu.CV NuGet packages
✅ A working webcam (for real-time detection)

Step 1. Create a New .NET Console App
Open a terminal and run:
dotnet new console -n FaceDetectionApp
cd FaceDetectionApp


Step 2. Install Required Packages
Add the Emgu.CV packages:
dotnet add package Emgu.CV
dotnet add package Emgu.CV.runtime.windows

Step 3. Add the Haar Cascade File
OpenCV uses trained XML classifiers for detecting objects. Download the Haar cascade for frontal face detection (haarcascade_frontalface_default.xml) from the OpenCV repository.
Place this file in your project root and configure it to copy to the output directory:
FaceDetectionApp.csproj
<ItemGroup>
  <None Update="haarcascade_frontalface_default.xml">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>


Step 4. Write the Real-Time Detection Code
Replace Program.cs with:
using Emgu.CV;
using Emgu.CV.Structure;
using System.Drawing;

var capture = new VideoCapture(0);
var faceCascade = new CascadeClassifier("haarcascade_frontalface_default.xml");

if (!capture.IsOpened)
{
    Console.WriteLine("Could not open the webcam.");
    return;
}

Console.WriteLine("Press ESC to exit...");

while (true)
{
    using var frame = capture.QueryFrame()?.ToImage<Bgr, byte>();

    if (frame == null)
        break;

    using var gray = frame.Convert<Gray, byte>();
    var faces = faceCascade.DetectMultiScale(gray, 1.1, 10, Size.Empty);

    foreach (var face in faces)
    {
        frame.Draw(face, new Bgr(Color.Green), 2);
    }

    CvInvoke.Imshow("Real-Time Face Detection", frame);

    if (CvInvoke.WaitKey(1) == 27) // ESC key
        break;
}

CvInvoke.DestroyAllWindows();


Output
Once you run the app using dotnet run, your webcam will open, and it will start detecting faces in real time by drawing rectangles around them.

Conclusion

With the power of .NET 9 and the flexibility of OpenCV, building a real-time face detection system has never been easier. Whether you're developing a hobby project or prototyping a serious application, this foundation sets you on the path to mastering computer vision in the .NET ecosystem.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: .NET MAUI 9: Adding Objects to a Local Database

clock April 17, 2025 08:07 by author Peter

Step 1. Now that we have our local database set up, let's use it to store game reviews.

Step 1.1. In Models.DTOs, create the DTOBase class:
using System.ComponentModel.DataAnnotations;

namespace Models.DTOs;

public class DTOBase
{
    [Key]
    public int? Id { get; set; }
    public int? ExternalId { get; set; }
    public DateTime CreatedAt { get; set; }
    public DateTime UpdatedAt { get; set; }
    public bool Inactive { get; set; }
}


Step 1.2. In the same folder, let's create GameDTO.
namespace Models.DTOs;

[Microsoft.EntityFrameworkCore.Index(nameof(IGDBId), IsUnique = true)]
public class GameDTO : DTOBase
{
    public int? IGDBId { get; set; }
    public int UserId { get; set; }
    public required string Name { get; set; }
    public string? ReleaseDate { get; set; }
    public string? Platforms { get; set; }
    public string? Summary { get; set; }
    public string? CoverUrl { get; set; }
    public required GameStatus Status { get; set; }
    public int? Rate { get; set; }
}

public enum GameStatus
{
    Want,
    Playing,
    Played
}

Step 2. In DbCtx.cs, add the Games table

Step 2.1. Code:
public virtual required DbSet<GameDTO> Games { get; set; }

Step 3. In the repo project, create GameRepo.cs.

Step 4. Implement the repository:
using Microsoft.EntityFrameworkCore;
using Models.DTOs;

namespace Repo
{
    public class GameRepo(IDbContextFactory<DbCtx> DbCtx)
    {
        public async Task<int> CreateAsync(GameDTO game)
        {
            using var context = DbCtx.CreateDbContext();
            await context.Games.AddAsync(game);
            return await context.SaveChangesAsync();
        }
    }
}


Step 4.1. Extract the interface from this class.
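The extracted interface is a one-liner mirroring the repository's public surface (a minimal sketch; the name IGameRepo matches the DI registration in Step 6):

```csharp
using Models.DTOs;

namespace Repo
{
    public interface IGameRepo
    {
        // Returns the number of affected rows, as SaveChangesAsync does.
        Task<int> CreateAsync(GameDTO game);
    }
}
```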
Step 5. Save the image locally so that, after adding the game, we can retrieve it without relying on an internet connection. In the ApiRepo project, inside the IGDBGamesAPIRepo class, create the GetGameImageAsync function.
public static async Task<byte[]> GetGameImageAsync(string imageUrl)
{
    try
    {
        using HttpClient httpClient = new();
        var response = await httpClient.GetAsync(imageUrl);

        if (!response.IsSuccessStatusCode)
            throw new Exception($"Exception downloading image URL: {imageUrl}");

        return await response.Content.ReadAsByteArrayAsync();
    }
    catch (Exception ex)
    {
        throw new Exception($"Error fetching image: {ex.Message}", ex);
    }
}


Step 5.1. In the Services project, inside the IGDBGamesApiService class, create the SaveImageAsync function.
public static async Task SaveImageAsync(string imageUrl, string fileName)
{
    try
    {
        string filePath = Path.Combine(GameService.ImagesPath, fileName);

        // Skip the download entirely if the image is already cached locally.
        if (File.Exists(filePath))
            return;

        byte[] imageBytes = await IGDBGamesAPIRepo.GetGameImageAsync(imageUrl);
        await File.WriteAllBytesAsync(filePath, imageBytes);
    }
    catch (Exception)
    {
        throw; // Consider logging the exception instead of rethrowing it blindly
    }
}

Step 6. Create a function to add the repositories in MauiProgram.
public static IServiceCollection Repositories(this IServiceCollection services)
{
    services.AddScoped<IGameRepo, GameRepo>();
    return services;
}

Step 7. Call this function inside CreateMauiApp()
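A sketch of CreateMauiApp() with the call in place (the Repositories() extension comes from Step 6; the rest of the builder setup is illustrative):

```csharp
public static MauiApp CreateMauiApp()
{
    var builder = MauiApp.CreateBuilder();

    builder.UseMauiApp<App>();

    // Wire up the repository registrations from Step 6.
    builder.Services.Repositories();

    return builder.Build();
}
```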

Step 7.1. Add a function to create the image path.

Step 7.2. Code:
if (!System.IO.Directory.Exists(GameService.ImagesPath))
    System.IO.Directory.CreateDirectory(GameService.ImagesPath);

Step 8. In Models.Resps, create an object to handle the service response for the mobile project.

Step 8.1. Code:
namespace Models.Resps;

public class ServiceResp
{
    public bool Success { get; set; }

    public object? Content { get; set; }
}


Step 9. Create the GameService.

Step 9.1. Code:
using Models.DTOs;
using Models.Resps;
using Repo;

namespace Services
{
    public class GameService(IGameRepo GameRepo)
    {
        // Note: this path must live inside the class, not directly in the namespace.
        public static readonly string ImagesPath = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "Inventory");

        public async Task<ServiceResp> CreateAsync(GameDTO game)
        {
            game.CreatedAt = game.UpdatedAt = DateTime.Now;

            // User id fixed to 1
            game.UserId = 1;
            await GameRepo.CreateAsync(game);
            return new ServiceResp { Success = true };
        }
    }
}


Step 9.2. Press [Ctrl + .] to extract the interface, then add the service to the Dependency Injection (DI). In MauiProgram, inside the Services function, add the line.

Step 9.3. Code:
services.AddScoped<IGameService, GameService>();

Step 10. In AddGameVM, add a variable to control the accessibility of the confirm button:

Step 10.1. Code:
private bool confirmIsEnabled = true;

public bool ConfirmIsEnabled
{
    get => confirmIsEnabled;
    set => SetProperty(ref confirmIsEnabled, value);
}

Step 11. Remove gameStatus from AddGameVM so that it uses the one from the DTO model.

Step 12. Add IGameService to the AddGameVM constructor.
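A sketch of that constructor change (classic constructor injection; ObservableObject is assumed from CommunityToolkit.Mvvm, which the [RelayCommand] attribute below also implies):

```csharp
public partial class AddGameVM : ObservableObject
{
    private readonly IGameService gameService;

    public AddGameVM(IGameService gameService)
    {
        // Injected via DI, as registered in Step 9.3.
        this.gameService = gameService;
    }
}
```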

Step 13. Create the command that adds the game to the local database with the information that will be displayed in the app. It simultaneously attempts to save the image for local use; after insertion, it displays a message and returns to the search screen.

[RelayCommand]
public async Task Confirm()
{
    // Disable button to prevent multiple requests
    ConfirmIsEnabled = false;
    string displayMessage;

    int? _rate = null;

    try
    {
        if (GameSelectedStatus == GameStatus.Played)
        {
            displayMessage = "Game and rate added successfully!";
            _rate = Rate;
        }
        else
        {
            displayMessage = "Game added successfully!";
        }

        if (IsOn && CoverUrl is not null)
        {
            _ = IGDBGamesApiService.SaveImageAsync(CoverUrl, $"{IgdbId}.jpg");
        }

        GameDTO game = new()
        {
            IGDBId = int.Parse(Id),
            Name = Name,
            ReleaseDate = ReleaseDate,
            CoverUrl = CoverUrl,
            Platforms = Platforms,
            Summary = Summary,
            Status = GameSelectedStatus.Value,
            Rate = _rate
        };

        var resp = await gameService.CreateAsync(game);
        if (!resp.Success) displayMessage = "Error adding game";

        // Display message and navigate back
        bool displayResp = await Application.Current.Windows[0].Page.DisplayAlert("Notice", displayMessage, null, "Ok");

        if (!displayResp)
        {
            await Shell.Current.GoToAsync("..");
        }
    }
    catch (Microsoft.EntityFrameworkCore.DbUpdateException)
    {
        bool displayResp = await Application.Current.Windows[0].Page.DisplayAlert("Error", "Game already added!", null, "Ok");

        if (!displayResp)
        {
            await Shell.Current.GoToAsync("..");
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Error: {ex.Message}");
        throw;
    }
    finally
    {
        ConfirmIsEnabled = true;
    }
}


Step 14. Bind the command to the button:

Step 14.1. Code:
Command="{Binding ConfirmCommand}"

Step 15. Update the database version to be recreated with the current structure.
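One common approach is to bump a version number and recreate the database when it changes (a hypothetical sketch, since the series' versioning helper isn't shown here; Preferences is the MAUI key-value store):

```csharp
// Hypothetical versioning: bump this constant whenever the schema changes.
const int DbVersion = 2;

if (Preferences.Get("DbVersion", 0) < DbVersion)
{
    using var context = DbCtx.CreateDbContext();
    context.Database.EnsureDeleted();   // drop the old structure
    context.Database.EnsureCreated();   // recreate with the current model
    Preferences.Set("DbVersion", DbVersion);
}
```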

Step 16. Run the system and add a game, then return to the search:

If the same game is saved again, display a message notifying that it has already been added.

Now, our game is successfully saved in the local database without duplication.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: .NET Aspire Service Orchestration

clock April 14, 2025 08:15 by author Peter

The orchestration layer of .NET Aspire helps everything sing, turning dispersed services into a coordinated symphony. In this chapter, we will get deeper into orchestration with .NET Aspire.

  • Introduction to Orchestration in .NET Aspire.
  • Understanding the .NET Aspire Orchestrator.
  • Setting Up Your Aspire Orchestration Project.
  • Comparison with Docker Compose.

Introduction to Orchestration in .NET Aspire
As modern applications become more modular and distributed, orchestration becomes essential to tame the complexity. Whether it's a set of APIs, background workers, message queues, or third-party integrations, every moving part needs to be started, connected, and observed in a predictable way. That’s where .NET Aspire's orchestration capabilities shine.

Unlike traditional infrastructure-first orchestrators like Kubernetes, Aspire brings orchestration closer to the developer experience. It helps you declare and manage dependencies between projects and services within your solution, ensuring everything runs smoothly—without needing YAML or external tooling during development.

With Aspire, orchestration is:

  • Declarative: You define what services exist and how they relate to each other.
  • Integrated: It’s built right into the Aspire tooling—no need for Docker Compose or Kubernetes just to test things locally.
  • Observable: Thanks to the built-in dashboard, you can monitor services, view logs, check health statuses, and trace failures all in one place.

The orchestration layer essentially acts as a central brain that boots up your projects in the right order, injects configuration and secrets, and ties everything together using environment variables and DI.

Understanding the .NET Aspire Orchestrator

At the heart of every .NET Aspire solution is a project typically named something like “YourApp.AppHost”. This isn't just another project in your solution—it's the orchestration engine that wires up everything else.

Think of it as the composition root for your distributed application.

What Does the Orchestrator Do?
The orchestrator’s job is to:

  • Register services like your Web APIs, workers, Blazor apps, databases, caches, and even external services.
  • Configure dependencies between these services so they can talk to each other.
  • Inject environment variables and secrets automatically where needed.
  • Control the service startup order, ensuring dependencies are initialized in the right sequence.
  • Launch and monitor the services as a single unit during development.

Here’s a quick snippet showing how services are wired up inside the “Program.cs” of an Aspire Orchestrator project.
var builder = DistributedApplication.CreateBuilder(args);

var redis = builder.AddRedis("cache");

var productsApi = builder.AddProject<Projects.MyApp_ProductsApi>("products-api")
                         .WithReference(redis);

var frontend = builder.AddProject<Projects.MyApp_Blazor>("frontend")
                      .WithReference(productsApi);

builder.Build().Run();

This simple and intuitive code accomplishes quite a bit.

  • AddProject registers each project in your solution.
  • WithReference defines dependencies between services (injected as environment variables).
  • Services are automatically exposed via the Aspire Dashboard.

How Does Dependency Injection Work?
Aspire takes care of dependency wiring by exporting connection strings and configuration values as environment variables. For example, if your Products API needs Redis, Aspire will:

  • Automatically assign the Redis service a connection string.
  • Export it as ConnectionStrings__cache (or similar).
  • Your API can consume it using standard configuration binding in .NET.
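On the consuming side, the API reads that value through normal configuration binding, with no knowledge that Aspire injected it. A sketch (assuming the resource was named "cache" as in the orchestrator snippet):

```csharp
var builder = WebApplication.CreateBuilder(args);

// Resolves "ConnectionStrings:cache", which Aspire supplied via
// the ConnectionStrings__cache environment variable.
var redisConnection = builder.Configuration.GetConnectionString("cache");
```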

What Services Can You Orchestrate?

  • ASP.NET Core APIs
  • Worker services
  • Blazor apps
  • Background jobs
  • Redis, PostgreSQL, SQL Server, MongoDB, RabbitMQ, etc.
  • External HTTP or gRPC endpoints.
  • Containerized services via AddContainer(...)

This makes it easy to emulate a realistic production environment locally, with minimal setup.
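Container-based services from the last bullet are registered with AddContainer(...). A sketch (the maildev image and port numbers are illustrative assumptions, not from the original post):

```csharp
// Illustrative: run an arbitrary container image next to your projects.
var mailDev = builder.AddContainer("maildev", "maildev/maildev")
                     .WithHttpEndpoint(port: 1080, targetPort: 1080);
```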

Setting Up Aspire Orchestration Project

If you are following along with this .NET Aspire quick guide, you should already have the Blazor weather app from the last chapter, where we added resiliency support using the .NET Aspire Service Defaults template.

To get started with service orchestration in your Aspire-powered solution, we first need to add an Aspire App Host Project.

  • Right-click the solution file in Solution Explorer.
  • Choose Add > New Project.
  • Search for .NET Aspire App Host and add it to your solution.

This project serves as the orchestration hub for your distributed application. In my example, I now have four projects in the solution, including the newly added project named BlazorWeatherApp.AppHost, following the standard Aspire naming convention.

Registering Projects with the App Host
Once the App Host project is added, the first step is to register all the required services, such as microservices, background workers, or frontend apps, by referencing them inside the App Host project.


You can double-check the .csproj file of your App Host project to ensure all project references are correctly added.
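Those references look like this in the App Host .csproj (a sketch; the exact project names in your solution may differ):

```xml
<ItemGroup>
  <ProjectReference Include="..\BlazorWeatherApp.API\BlazorWeatherApp.API.csproj" />
  <ProjectReference Include="..\BlazorWeatherApp\BlazorWeatherApp.csproj" />
</ItemGroup>
```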

What is the App Host and Why Use It?
In .NET Aspire, the App Host acts as the central coordinator for all services. It not only runs all your apps together but also provides features like:

  • Orchestration and centralized launching
  • Monitor all services
  • Built-in service discovery

Let’s look at the code to understand how services are registered.
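The registration code in the App Host's Program.cs looks roughly like this (a sketch using the "api" and "myweatherapp" names discussed below):

```csharp
var builder = DistributedApplication.CreateBuilder(args);

// Register each project with a logical resource name.
var api = builder.AddProject<Projects.BlazorWeatherApp_API>("api");

builder.AddProject<Projects.BlazorWeatherApp>("myweatherapp")
       .WithReference(api);

builder.Build().Run();
```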

What does builder.AddProject<>() do?
This method registers an application (like a microservice or a frontend project) with the App Host. It ensures:

  • The project is included in orchestration.
  • It’s monitored and tracked during execution.
  • It can discover and communicate with other registered services.

What role do these string parameters play here ("api", "myweatherapp")?
That string is the resource name or service identifier.

  • It's used for
    • Service discovery (e.g., resolving internal URLs)
    • Dashboard visualization (the names you see in Aspire Dashboard)
    • Logging and diagnostics labels
  • Think of it as a logical name for each project in the distributed system.

For example
builder.AddProject<Projects.BlazorWeatherApp_API>("api");

This means

  • Register the BlazorWeatherApp_API project
  • In service maps, configs, and communication, it's known as "api."

Setting App Host as Startup Project
Since the App Host handles orchestration, you no longer need to start multiple projects individually. Just set the App Host project as the startup project — it will launch and manage all other registered services automatically.

Run and Explore the Aspire Dashboard
After building the solution, run the App Host project. You’ll notice that a built-in dashboard UI launches with it.

On the Resources tab, you’ll see all your registered services listed — these are the ones added via the DistributedApplication.CreateBuilder() API.

This dashboard provides a single place to:

  • View the status of your services
  • Access service endpoints
  • Preview, test, or manage them in real time.

We’ll dive deeper into all the features of the Aspire Dashboard in the next chapter, but feel free to explore it now and get a feel for how Aspire centralizes the management of your distributed apps.

How .NET Aspire Orchestration Improves Manageability?
Orchestration isn't just about launching services—it's about taming complexity with structure, predictability, and control.

What Do We Mean by "Manageability"?
Manageability in this context refers to how easily a developer or team can:

  • Configure and run multi-service applications
  • Handle dependencies between services
  • Inject configuration (env vars, ports, secrets)
  • Maintain consistency across environments (local/dev/test)
  • Onboard new team members quickly

.NET Aspire Orchestration helps on all fronts
Comparing .NET Aspire Orchestration with Docker Compose
Docker Compose builds containers. Aspire builds confidence by making local orchestration feel like part of the codebase.

Why Does This Comparison Matter?
Both Docker Compose and .NET Aspire Orchestration aim to solve a similar problem:
“How do I spin up multiple dependent services locally in a predictable, repeatable way?”

But they approach it very differently.

Side-by-Side Breakdown

 

Feature | Docker Compose | .NET Aspire Orchestration
Setup Style | YAML (declarative) | C# code (imperative, type-safe)
Primary Use Case | Dev/prod container orchestration | Dev-time orchestration for .NET apps
Learning Curve | Familiar to DevOps teams | Friendly for .NET devs
Environment Setup | Manual port/env config | Automatic wiring + port/env injection
Telemetry Integration | Requires manual OpenTelemetry setup | Built-in with AddServiceDefaults()
Service Discovery | Use of service names and ports | Seamless via .WithReference()
Startup Order Management | Controlled via depends_on | Inferred through dependency references
Dashboard/Insights | None (requires external tools) | Built-in Aspire Dashboard
Production Readiness | More production-aligned (with Docker/K8s) | Dev-time only (not a prod orchestrator)
Tooling Integration | Works across languages & stacks | Tight integration with .NET ecosystem

DX (Developer Experience) with Docker Compose and Aspire

With Aspire (C#)

var redis = builder.AddRedis("cache");

var api = builder.AddProject<Projects.MyApp_Api>("api")
                 .WithReference(redis);

With Docker Compose (YAML)
services:
  redis:
    image: redis
    ports:
      - "6379:6379"
  api:
    build: ./api
    depends_on:
      - redis

Aspire gives you,

  • IntelliSense
  • Strong typing
  • Easier refactoring
  • Familiar project references

Where Docker Compose Still Shines?

  • Aspire is dev-time only, so for actual deployment scenarios, Docker Compose or Kubernetes is still necessary.
  • If you're working in a polyglot environment (e.g., Python + Node + .NET), Compose might still be the glue.
  • Aspire currently only supports orchestration of .NET projects and known service types (e.g., Redis, PostgreSQL, etc.).

What About Kubernetes?
Kubernetes is the heavyweight champion of production orchestration—but it’s overkill for your laptop.
While Kubernetes is the industry standard for production orchestration, it’s intentionally not used for local development in most cases. Here’s why:

Kubernetes vs .NET Aspire Orchestration

 

Feature | Kubernetes | .NET Aspire Orchestration
Target Environment | Production / Staging | Local Development
Complexity | High (requires YAML, Helm, CLI tooling) | Low (C# code + dotnet run)
Startup Time | Slower, may need Minikube / k3s | Instant via .NET CLI
Learning Curve | Steep for app devs | Low, friendly for .NET developers
Tooling Integration | External dashboards (Grafana, Prometheus) | Built-in Aspire Dashboard

Takeaway

  • Use Kubernetes when deploying to the cloud at scale.

  • Use Docker Compose for multi-container, multi-stack local dev.

  • Use .NET Aspire Orchestration for pure .NET solutions where developer experience and manageability matter.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: How to Use Blazor Server to Convert Images to Text (Easy OCR Example)?

clock April 11, 2025 08:49 by author Peter

Step 1. Create a Blazor Server Application
Launch Visual Studio and create a new Blazor Server App project.


Step 2. Install the Required NuGet Package
Open the Package Manager Console and install the Tesseract wrapper:

Install-Package Tesseract

Step 3. Download Language Data

  • Go to Tesseract tessdata GitHub repo
  • Download eng.traineddata (for English)
  • Create a folder in your project named tessdata
  • Add the downloaded file to that folder
  • Set its properties: Copy to Output Directory → Copy if newer
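The last bullet can also be expressed directly in the .csproj instead of through the Properties window (assuming the file lives in the tessdata folder):

```xml
<ItemGroup>
  <None Update="tessdata\eng.traineddata">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>
```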


Step 4. Create the OCR Page
@page "/ocr"
@rendermode InteractiveServer
@using Tesseract
@inject IWebHostEnvironment Env

<PageTitle>Image OCR</PageTitle>

<h1>Image to Text (OCR)</h1>

<InputFile OnChange="HandleFileSelected" />
<br />
<button class="btn btn-primary mt-2" @onclick="ExtractTextFromImage">Extract Text</button>

@if (!string.IsNullOrEmpty(extractedText))
{
    <h3>Extracted Text:</h3>
    <pre>@extractedText</pre>
}

@code {
    private string? imagePath;
    private string? extractedText;

    async Task HandleFileSelected(InputFileChangeEventArgs e)
    {
        var file = e.File;
        var uploads = Path.Combine(Env.WebRootPath, "uploads");

        if (!Directory.Exists(uploads))
            Directory.CreateDirectory(uploads);

        imagePath = Path.Combine(uploads, file.Name);

        using var stream = File.Create(imagePath);
        await file.OpenReadStream(maxAllowedSize: 10 * 1024 * 1024).CopyToAsync(stream);
    }

    void ExtractTextFromImage()
    {
        if (string.IsNullOrWhiteSpace(imagePath))
            return;

        var tessDataPath = Path.Combine(Env.ContentRootPath, "tessdata");

        using var engine = new TesseractEngine(tessDataPath, "eng", EngineMode.Default);
        using var img = Pix.LoadFromFile(imagePath);
        using var page = engine.Process(img);

        extractedText = page.GetText();
    }
}

Step 5. Run the Application

  • Run your Blazor Server app.
  • Navigate to /ocr in the browser.
  • Upload an image file with text.
  • View the extracted text displayed below.


Original Image


Output



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Understanding ASP.NET Core Web API Eager Loading

clock April 7, 2025 07:42 by author Peter

Imagine you are developing a blog application. When a user hits your API to view a blog post, they expect to see not only the content but also the comments, and possibly the name of each commenter. Now consider this:


Should I retrieve each piece individually, or make a single, efficient request to the database?
If your instinct says "one and done," you're already thinking of eager loading. Let's examine what it is, how it works, and when to apply it in your ASP.NET Core Web API.

What Is Eager Loading?
Eager loading is a technique used with Entity Framework Core (EF Core) where you load related data alongside your main data in a single query. It's like ordering a combo meal instead of going back to the counter for fries and then again for your drink.

By using .Include() and .ThenInclude(), you’re telling EF Core:
Hey, while you’re grabbing that Blog, go ahead and pull in the Posts and their Comments too.

Why Use Eager Loading?

Here’s the deal

  • Performance boost: One query instead of many.
  • Avoid the N+1 problem: Imagine fetching 1 blog and 50 posts with separate queries—EF will hit the database 51 times!
  • Cleaner API results: Users don’t have to make extra requests to get the full picture.
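For contrast, here is the N+1 anti-pattern the second bullet warns about, written against the entities defined in the next section (a sketch; each loop iteration is a separate database round trip):

```csharp
// One query for the posts...
var posts = await _context.Posts.Where(p => p.BlogId == id).ToListAsync();

// ...then one more query PER post for its comments: 1 + N round trips.
foreach (var post in posts)
{
    post.Comments = await _context.Comments
        .Where(c => c.PostId == post.PostId)
        .ToListAsync();
}
```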

Real-World Example. Blog, Posts, and Comments
Let’s build a sample project:
public class Blog
{
    public int BlogId { get; set; }
    public string Title { get; set; }
    public List<Post> Posts { get; set; }
}

public class Post
{
    public int PostId { get; set; }
    public string Content { get; set; }
    public int BlogId { get; set; }
    public Blog Blog { get; set; }
    public List<Comment> Comments { get; set; }
}

public class Comment
{
    public int CommentId { get; set; }
    public string Message { get; set; }
    public string AuthorName { get; set; }
    public int PostId { get; set; }
    public Post Post { get; set; }
}


This setup gives us:

  • A Blog with many Posts
  • Each Post with many Comments

DbContext Setup
public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }
    public DbSet<Comment> Comments { get; set; }
}


Sample Seed Data
Let’s seed some mock data for testing:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Blog>().HasData(new Blog { BlogId = 1, Title = "Tech with Jane" });

    modelBuilder.Entity<Post>().HasData(
        new Post { PostId = 1, Content = "ASP.NET Core is awesome!", BlogId = 1 },
        new Post { PostId = 2, Content = "Entity Framework Tips", BlogId = 1 }
    );

    modelBuilder.Entity<Comment>().HasData(
        new Comment { CommentId = 1, Message = "Loved it!", AuthorName = "John", PostId = 1 },
        new Comment { CommentId = 2, Message = "Very helpful!", AuthorName = "Alice", PostId = 1 },
        new Comment { CommentId = 3, Message = "Great explanation.", AuthorName = "Sara", PostId = 2 }
    );
}


Implementing Eager Loading in API
Here’s how to fetch a blog along with all its posts and the comments under each post:

[HttpGet("{id}")]
public async Task<IActionResult> GetBlogDetails(int id)
{
    var blog = await _context.Blogs
        .Include(b => b.Posts)
            .ThenInclude(p => p.Comments)
        .FirstOrDefaultAsync(b => b.BlogId == id);

    if (blog == null)
        return NotFound();

    return Ok(blog);
}


What’s happening here?
.Include(b => b.Posts) loads the Posts for the Blog.
.ThenInclude(p => p.Comments) loads the Comments for each Post.

All of this is done in one SQL query under the hood. No multiple round trips.

Best Practice. Use DTOs (Data Transfer Objects)
Returning full entities (especially with nested objects) is risky—you might expose internal or sensitive data. So, let’s clean it up using a DTO:
public class BlogDto
{
    public string Title { get; set; }
    public List<PostDto> Posts { get; set; }
}

public class PostDto
{
    public string Content { get; set; }
    public List<CommentDto> Comments { get; set; }
}

public class CommentDto
{
    public string Message { get; set; }
    public string AuthorName { get; set; }
}

Then in your controller
[HttpGet("{id}")]
public async Task<IActionResult> GetBlogDto(int id)
{
    var blog = await _context.Blogs
        .Where(b => b.BlogId == id)
        .Include(b => b.Posts)
            .ThenInclude(p => p.Comments)
        .Select(b => new BlogDto
        {
            Title = b.Title,
            Posts = b.Posts.Select(p => new PostDto
            {
                Content = p.Content,
                Comments = p.Comments.Select(c => new CommentDto
                {
                    Message = c.Message,
                    AuthorName = c.AuthorName
                }).ToList()
            }).ToList()
        })
        .FirstOrDefaultAsync();

    if (blog == null) return NotFound();
    return Ok(blog);
}


Now your API returns only what the client needs. Clean. Efficient. Secure.

Important Things to Remember

Use eager loading when:

  • You know you’ll need related data.
  • You want to optimize performance with fewer database hits.

Avoid eager loading when:

  • The related data is huge, and you don’t need all of it.
  • It slows down the response or affects performance negatively.

Output

 

Conclusion
Eager loading in ASP.NET Core Web API is a powerful tool that lets you efficiently fetch related data with Entity Framework Core. By leveraging .Include() and .ThenInclude(), you can address the N+1 problem, eliminate unnecessary database round trips, and improve API performance. Eager loading must be used carefully, though: include only the data you require, and consider using DTOs to keep responses secure and tidy. When used properly, eager loading helps you build faster, better-organized, and more scalable APIs.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: How to Start Using .NET Aspire?

clock March 25, 2025 07:36 by author Peter

.NET Aspire: What is it?
.NET Aspire is a modern, opinionated framework designed to make it easier to develop cloud-native and microservices-based applications within the .NET ecosystem. It offers a cohesive approach to creating, deploying, and managing distributed applications with minimal hassle.

.NET Aspire is built to solve common challenges developers face when working with microservices and cloud-native architectures, such as:

  • Service Orchestration: Managing multiple services seamlessly within a single environment.
  • Built-in Observability: Automatically integrating logging, tracing, and metrics for monitoring application health.
  • Configuration Management: Simplifying configuration across multiple services.
  • Dependency Injection: Enabling easy integration of external services like databases, message brokers, and caching mechanisms.
  • Cloud-Native Compatibility: Supporting seamless deployment to cloud environments, including Azure, AWS, and Kubernetes.

By leveraging .NET Aspire, developers can focus more on business logic and less on infrastructure concerns, resulting in faster development cycles, improved maintainability, and better performance.

What Was There Before .NET Aspire vs. Now?

Before .NET Aspire, developers working with microservices in .NET had to rely on multiple tools and frameworks to achieve the same functionality Aspire now provides natively. Some challenges and comparisons include:

  • Service Orchestration
    • Before: Developers manually orchestrated microservices using custom scripts, Kubernetes, or third-party tools.
    • Now: .NET Aspire provides a built-in orchestration model, reducing the complexity of managing services.
  • Observability (Logging, Tracing, and Metrics)
    • Before: Developers had to integrate multiple libraries (like OpenTelemetry, Serilog, and Prometheus) separately.
    • Now: Aspire includes built-in observability, making logging, tracing, and metrics easy to implement.
  • Configuration Management
    • Before: Configuration was handled manually via environment variables, JSON files, or external providers.
    • Now: Aspire offers a structured approach to configuration management, reducing manual effort and errors.
  • Service-to-Service Communication
    • Before: Developers had to implement gRPC, HTTP clients, or messaging systems manually.
    • Now: Aspire simplifies service communication with built-in abstractions.
  • Deployment and Cloud Readiness
    • Before: Deploying microservices required setting up infrastructure using Docker, Kubernetes, or cloud-specific tools.
    • Now: Aspire provides streamlined deployment options, making cloud-native application development more accessible.

A Quick Comparison Table

  • Service Orchestration. Before .NET Aspire: manual setup using Kubernetes or scripts. With .NET Aspire: built-in orchestration model.
  • Observability (Logging, Tracing, Metrics). Before: manual integration of OpenTelemetry, Serilog, Prometheus, etc. With Aspire: native support for logging, tracing, and monitoring.
  • Configuration Management. Before: environment variables, JSON, or third-party libraries. With Aspire: structured configuration approach.
  • Service-to-Service Communication. Before: manually implemented using gRPC, HTTP clients, etc. With Aspire: simplified with built-in abstractions.
  • Deployment & Cloud Readiness. Before: custom Docker/Kubernetes setup. With Aspire: streamlined cloud-native deployment.

With these improvements, .NET Aspire significantly reduces the development overhead and allows teams to focus more on building features rather than configuring infrastructure.

Why .NET Aspire for Modern .NET Developers?
As software development shifts towards microservices and cloud-first approaches, developers need tools that enable efficiency, scalability, and maintainability. .NET Aspire addresses these needs by offering:

  • Seamless Orchestration: Manage multiple services effortlessly within a unified environment.
  • Built-in Observability: Gain insights into application performance with logging and tracing.
  • Simplified Service Integration: Easily connect microservices and external dependencies.
  • Cloud-Ready Architecture: Deploy applications to cloud platforms with minimal configuration.
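To make the orchestration model concrete, here is a minimal sketch of what an Aspire app host Program.cs typically looks like. This is illustrative, not from the article: it assumes the Aspire.Hosting packages are installed and that `Projects.MyApi` is a hypothetical service project reference generated by the Aspire tooling.

```csharp
// App host Program.cs: a minimal sketch of Aspire's built-in orchestration.
// Assumes the Aspire.Hosting NuGet packages and a hypothetical
// "Projects.MyApi" project reference created by the Aspire tooling.
var builder = DistributedApplication.CreateBuilder(args);

// Declare a Redis container as a named resource of the distributed app.
var cache = builder.AddRedis("cache");

// Register the API project and inject the cache connection into it,
// so the service discovers Redis without manual configuration.
builder.AddProject<Projects.MyApi>("api")
       .WithReference(cache);

builder.Build().Run();
```

When this app host runs, Aspire starts the declared resources together and wires their connection strings, which is the "built-in orchestration model" referred to above.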



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Features of WebMethod and ScriptMethod in .NET Webforms

clock March 17, 2025 07:55 by author Peter

In WebForms, the WebMethod and ScriptMethod attributes let .NET applications expose server-side methods to client-side scripts such as JavaScript and jQuery AJAX calls. The method is declared with the Shared keyword on the server-side page; Shared together with the WebMethod attribute exposes the function as a web-callable endpoint that receives a POST request and returns a response. This kind of web method is commonly used to exchange data in JSON format. A code example is below.

Server End code

Imports System.Web.Services
Imports System.Web.Script.Services

Public Class WebCall
    Inherits System.Web.UI.Page

    ' Exposed to client-side script; page methods must be Shared.
    <WebMethod()>
    <ScriptMethod()>
    Public Shared Function GetServerTime() As String
        Return DateTime.Now.ToString()
    End Function

    Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
        ' No server-side logic needed for this example.
    End Sub
End Class


On the client side, an OnClick event sends a request and receives the server date and time in the response. The client-side code example is below.
<asp:Content ID="BodyContent" ContentPlaceHolderID="MainContent" runat="server"><main>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
    <script type="text/javascript">
        function callWebMethod() {
            alert("callweb");
            $.ajax({
                type: "POST",
                url: "WebCall.aspx/GetServerTime",
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (msg) {
                    alert(msg.d);
                    $("#lblTime").text(msg.d);
                },
                error: function (xhr, status, error) {
                    alert("Error: " + xhr.responseText);
                }
            });
        }
    </script>
    <div>

      <button type="button" onclick="callWebMethod()">Get current server time</button>
      <label id="lblTime"></label>
    </div>
    </main>
</asp:Content>


Output



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Real-Time Anomaly Detection in Server Logs with .NET 9 and ML.NET

clock March 10, 2025 07:55 by author Peter

Server logs hold a wealth of information: they can tell you a lot about user behavior, system performance, and emerging problems. However, identifying those problems manually is hard; examining logs is necessary, but it becomes impractical when there are many of them. Anomaly detection, a machine learning technique that automatically identifies outliers, can help. Within the .NET ecosystem, developers can build robust anomaly detection systems using .NET 9 and ML.NET, Microsoft's open-source machine learning framework. Using a real-world example, we will examine the capabilities of ML.NET and how to apply it to identify issues in server logs with .NET 9.

What is ML.NET?
ML.NET is a powerful, cross-platform machine learning framework designed for .NET developers. It allows you to create, train, and deploy custom machine learning models directly in C# or F# without requiring extensive data science expertise. Launched by Microsoft, ML.NET supports a wide range of scenarios.

  • Classification: Binary (e.g., spam detection) or multi-class (e.g., categorizing support tickets).
  • Regression: Predicting future values (e.g., sales forecasting).
  • Clustering: Grouping similar data points (e.g., segmenting customers).
  • Anomaly Detection: Finding outliers in datasets (e.g., identifying irregularities in server logs).
  • Time-Series Analysis: Finding trends and outliers in sequential data.
  • Recommendation Systems: Suggesting products or content based on user behavior.

ML.NET’s strengths include its integration with .NET tools like Visual Studio, support for pre-trained models (e.g., ONNX, TensorFlow), and the Model Builder GUI for simplified development. Whether you’re processing small datasets or scaling to enterprise-level applications, ML.NET offers flexibility and performance, making it an ideal choice for embedding intelligence into .NET 9 projects.

Project Overview: Detecting Error Spikes in Server Logs
We’re going to create a .NET 9 console application that uses ML.NET to find unusual activity in server log data. We’re especially looking at error counts over time. This example is like a real-world situation where a sudden increase in errors could mean there’s a problem with the server. This would let administrators fix the problem before it gets worse.

Step 1. Setting Up the Environment
Please find the complete source code: Click Here

To begin, ensure you have,

  • .NET 9 SDK: Installed from the official Microsoft site.
  • Visual Studio Code (or Visual Studio): For coding and debugging. I will be using VS Code for this project.
  • ML.NET Packages: Added via NuGet.

Let’s start. If you have the C# Dev Kit, you can create the project in VS Code, or use the commands below.

Create a new console application.
dotnet new console -n AnomalyDetection -f net9.0
cd AnomalyDetection


Add the ML.NET NuGet packages.
dotnet add package Microsoft.ML
dotnet add package Microsoft.ML.TimeSeries

Step 2. Defining the Data Models.
Create a Models folder and add two classes.
LogData.cs: Represents server log entries with timestamps and error counts.
namespace AnomalyDetection.Models;

public record LogData
{
    public DateTime Timestamp { get; set; }
    public float ErrorCount { get; set; }
}


AnomalyPrediction.cs: Represents the model’s output, indicating whether an anomaly is detected.
using Microsoft.ML.Data;

namespace AnomalyDetection.Models;

public record AnomalyPrediction
{
    [VectorType(3)]
    public double[] Prediction { get; set; } = [];
}

AnomalyResult.cs: Represents the output result from the model.
namespace AnomalyDetection.Models;

public record AnomalyResult
{
    public DateTime Timestamp { get; set; }
    public float ErrorCount { get; set; }
    public bool IsAnomaly { get; set; }
    public double ConfidenceScore { get; set; }
}

Step 3. Implementing Anomaly Detection Logic with ML.NET
Create a Services folder and add AnomalyDetectionTrainer.cs.
using System;
using AnomalyDetection.Models;
using Microsoft.ML;

namespace AnomalyDetection.Services;

public class AnomalyDetectionTrainer
{
    private readonly MLContext _mlContext;
    private ITransformer _model;

    public AnomalyDetectionTrainer()
    {
        _mlContext = new MLContext(seed: 0);
        TrainModel();
    }

    private void TrainModel()
    {
        // Simulated training data (in practice, load from a file or database)
        var data = GetTrainingData();

        var dataView = _mlContext.Data.LoadFromEnumerable(data);

        // Define anomaly detection pipeline
        var pipeline = _mlContext.Transforms.DetectIidSpike(
            outputColumnName: "Prediction",
            inputColumnName: nameof(LogData.ErrorCount),
            confidence: 95.0, // confidence must be a double, not an int
            pvalueHistoryLength: 5
        );

        // Train the model
        _model = pipeline.Fit(dataView);
    }

    public List<AnomalyResult> DetectAnomalies(List<LogData> logs)
    {
        var dataView = _mlContext.Data.LoadFromEnumerable(logs);
        var transformedData = _model.Transform(dataView);
        var predictions = _mlContext.Data.CreateEnumerable<AnomalyPrediction>(transformedData, reuseRowObject: false);

        return logs.Zip(predictions, (log, pred) => new AnomalyResult
        {
            Timestamp = log.Timestamp,
            ErrorCount = log.ErrorCount,
            IsAnomaly = pred.Prediction[0] == 1,  // index 0: alert flag (1 = spike)
            ConfidenceScore = pred.Prediction[1]  // index 1: raw score
        }).ToList();
    }

    //dummy training data
    private List<LogData> GetTrainingData()
    {
        return new List<LogData>(){
            new() { Timestamp = DateTime.Now.AddHours(-5), ErrorCount = 2 },
            new() { Timestamp = DateTime.Now.AddHours(-4), ErrorCount = 3 },
            new() { Timestamp = DateTime.Now.AddHours(-3), ErrorCount = 2 },
            new() { Timestamp = DateTime.Now.AddHours(-2), ErrorCount = 50 }, // Anomaly: Spike!
            new() { Timestamp = DateTime.Now.AddHours(-1), ErrorCount = 4 },
            new() { Timestamp = DateTime.Now.AddHours(-6), ErrorCount = 2 }
        };
    }
}

Explanation

_mlContext: An instance of MLContext, the core ML.NET object for managing data, models, and transformations. It’s initialized with a seed (0) for reproducible results.
_model: An ITransformer object that holds the trained anomaly detection model, applied later for predictions.
private void TrainModel()
{
    var data = GetTrainingData();
    var dataView = _mlContext.Data.LoadFromEnumerable(data);
    var pipeline = _mlContext.Transforms.DetectIidSpike(
        outputColumnName: "Prediction",
        inputColumnName: nameof(LogData.ErrorCount),
        confidence: 95.0,
        pvalueHistoryLength: 5
    );
    _model = pipeline.Fit(dataView);
}


Purpose: Trains the anomaly detection model using simulated data.

Steps

  • Data Loading: Calls GetTrainingData to fetch dummy server log data, then converts it into an IDataView using LoadFromEnumerable.
  • Pipeline Definition: Uses DetectIidSpike from MLContext.Transforms to create a pipeline for detecting anomalies in independent and identically distributed (IID) data:
    • outputColumnName: “Prediction”: Names the output column for anomaly results.
    • inputColumnName: nameof(LogData.ErrorCount): Specifies the input data (error counts).
    • confidence: 95.0: Sets a 95% confidence level for anomaly detection.
    • pvalueHistoryLength: 5: Defines a sliding window of 5 data points to evaluate anomalies.
  • Training: Fits the pipeline to the data, producing a trained _model.

DetectAnomalies Method
public List<AnomalyResult> DetectAnomalies(List<LogData> logs)
{
    var dataView = _mlContext.Data.LoadFromEnumerable(logs);
    var transformedData = _model.Transform(dataView);
    var predictions = _mlContext.Data.CreateEnumerable<AnomalyPrediction>(transformedData, reuseRowObject: false);
    return logs.Zip(predictions, (log, pred) => new AnomalyResult
    {
        Timestamp = log.Timestamp,
        ErrorCount = log.ErrorCount,
        IsAnomaly = pred.Prediction[0] == 1,
        ConfidenceScore = pred.Prediction[1]
    }).ToList();
}


Step 4. Check for anomalies in the logs using the service above
Update Program.cs
using AnomalyDetection.Models;
using AnomalyDetection.Services;

//create a dummy log data
var logs = new List<LogData>(){
    new() { Timestamp = DateTime.Now.AddHours(-5), ErrorCount = 2 },
    new() { Timestamp = DateTime.Now.AddHours(-4), ErrorCount = 3 },
    new() { Timestamp = DateTime.Now.AddHours(-3), ErrorCount = 2 },
    new() { Timestamp = DateTime.Now.AddHours(-2), ErrorCount = 50 },
    new() { Timestamp = DateTime.Now.AddHours(-1), ErrorCount = 4 },
    new() { Timestamp = DateTime.Now.AddHours(-6), ErrorCount = 2 }
};

//create an instance of the AnomalyDetectionTrainer
var trainer = new AnomalyDetectionTrainer();

//detect anomalies
var results = trainer.DetectAnomalies(logs);

//print the results
foreach (var result in results)
{
    Console.WriteLine($"Timestamp: {result.Timestamp}, ErrorCount: {result.ErrorCount}, IsAnomaly: {result.IsAnomaly}, ConfidenceScore: {result.ConfidenceScore}");
}

Let's run the console app and validate the results.
The project structure is as shown below.




European ASP.NET Core 9.0 Hosting - HostForLIFE :: Scalar-Based API Documentation for ASP.NET Core

clock March 6, 2025 05:54 by author Peter

Well-organized and aesthetically pleasing API documentation is essential in modern web development. Scalar is a great tool for creating visually appealing API documentation for ASP.NET Core applications. This post covers how to integrate Scalar into an ASP.NET Core project and generate API documentation.

What is Scalar?
Scalar is an open-source API documentation tool that offers a user-friendly and visually appealing interface for exploring and testing APIs. It is a straightforward alternative to Swagger UI and Redoc.

Why Use Scalar for API Documentation?

  • User-friendly Interface: Scalar provides a clean and modern UI.
  • Interactive API Testing: Allows developers to test APIs directly from the documentation.
  • Easy Integration: Simple setup process with minimal configuration.
  • Open Source: Free to use and modify according to project needs.

Setting Up Scalar in ASP.NET Core
Step 1. Install Scalar Package
To integrate Scalar into your ASP.NET Core project, you need to install the Scalar NuGet package. Run the following command in the terminal.
Install-Package Scalar.AspNetCore

Step 2. Configure Scalar in Startup.cs.
Modify the Startup.cs file to add Scalar middleware.
using Scalar.AspNetCore;

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddScalar();
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseRouting();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });

    app.UseScalar(); // Enable Scalar documentation
}

Step 3. Access API Documentation.
Once the configuration is complete, run your ASP.NET Core application and navigate to the following URL.
https://localhost:<port>/scalar

You should see a beautifully generated API documentation interface.

Customizing Scalar Documentation

Scalar provides customization options to enhance the API documentation experience.

1. Adding API Metadata
You can customize API metadata such as title, version, and description by modifying the Scalar configuration.
services.AddScalar(options =>
{
    options.DocumentTitle = "My API Documentation";
    options.ApiVersion = "v1.0";
    options.Description = "This is a sample API documentation using Scalar.";
});


2. Securing API Documentation
You can use authentication middleware to secure the Scalar endpoint and limit access to your API documentation.
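As a concrete sketch of this idea, using only the middleware calls already shown in Step 2, you could expose the documentation endpoint only outside production. This is illustrative; an authorization policy applied to the endpoint would achieve the same goal.

```csharp
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseRouting();
    app.UseAuthentication(); // assumes an authentication scheme is configured
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });

    // Expose the Scalar documentation only in non-production environments,
    // so internal API details are not reachable from the public site.
    if (!env.IsProduction())
    {
        app.UseScalar();
    }
}
```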

Conclusion

With ASP.NET Core, Scalar is a potent tool for producing visually appealing and interactive API documentation. It is a great substitute for more conventional documentation tools like Swagger because of its simplicity of integration, intuitive interface, and customizable features. You may rapidly set up Scalar and enhance your experience with API documentation by following the instructions in this article.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: In Clean Code, Null Coalescing vs. Null Conditional

clock February 17, 2025 07:14 by author Peter

In this article, I will illustrate Null Coalescing (??) and Null Conditional (?.). Programmers frequently struggle with null references, but modern languages like C# provide elegant solutions. The null coalescing operator (??) and the null conditional operator (?.) are two distinct but complementary ways to write safe and clear code. This article explains how to use them, gives examples, and examines their capabilities.

What is Null Coalescing (??)?
The null coalescing operator (??) provides a fallback value when a variable is null, serving as a concise alternative to traditional null-checking if statements.

Syntax
result = variable ?? fallbackValue;

How does Null Coalescing (??) work?
If variable is not null, its value is assigned to result.
If variable is null, the fallbackValue is assigned instead.
string userName = null;
string displayName = userName ?? "Guest User";
Console.WriteLine(displayName);

The null coalescing operator (??) simplifies null-checking logic.

In this example, displayName is guaranteed to have a value, even if userName is null.

This approach is more concise and readable than the equivalent if statement, which was the traditional method before ?? was introduced:
string displayName;
if (userName != null)
{
    displayName = userName;
}
else
{
    displayName = "Guest User";
}

When to use the null coalescing operator (??)

Default Values: Assign a default value when a variable is null.

Dictionary<string, string> userPreferences = new Dictionary<string, string>()
        {
            { "theme", "dark" },
            { "language", "en" }
        };

// Use ?? to provide a default value if the key doesn't exist
string theme = userPreferences.GetValueOrDefault("theme") ?? "light";
string fontSize = userPreferences.GetValueOrDefault("fontSize") ?? "medium";

Console.WriteLine($"Theme: {theme}");       // Output: Theme: dark
Console.WriteLine($"Font Size: {fontSize}"); // Output: Font Size: medium


Output

  • Theme: dark => the requested key exists in the dictionary, so its value is returned
  • Font Size: medium => the requested key does not exist, so the default value provided via the null coalescing operator (??) is returned

Chained Fallbacks: Provide multiple fallback options.
string preferredName = null;
string alternateName = null;
        
// Use ?? to check both variables and provide a default value if both are null
string displayName = preferredName ?? alternateName ?? "Unknown Person";
Console.WriteLine(displayName);  // Output: Unknown Person


Output
The output is "Unknown Person" because both variables were assigned null.

  • The null coalescing operator can be chained to check multiple variables.
  • The operator will return the first non-null value from the chain.
  • If all variables are null, it will return the last fallback value. In this case, "Unknown Person".

The traditional method to achieve multiple null checks before the introduction of ??
string preferredName = null;
string alternateName = null;

// Traditional method using if statements
string displayName;

if (preferredName != null)
{
    displayName = preferredName;
}
else if (alternateName != null)
{
    displayName = alternateName;
}
else
{
    displayName = "Unknown Person";
}

Console.WriteLine(displayName);  // Output: Unknown Person

What is Null Conditional (?.)?

The null conditional operator ?. allows safe navigation of objects and their properties, avoiding null reference exceptions. It short-circuits evaluation if the object or property being accessed is null.
result = object?.Property;

How does Null Conditional (?.) work?
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public Address Address { get; set; }
}

public class Address
{
    public int Id { get; set; }
    public string City { get; set; }
    public string PinCode { get; set; }
}
Person person = null;

// Use the null conditional operator to safely access a property
string name = person?.Name ?? "Unknown";

Console.WriteLine(name);  // Output: Unknown

// If person or person.Address is null, the default city "London" is assigned
string personCity = person?.Address?.City ?? "London";

The null conditional operator (?.) is used to safely access the Name property of person. Since person is null, person?.Name returns null, and ?? then supplies "Unknown".
When to use Null Conditional (?.)

  • The null conditional operator (?.) allows you to safely access properties or methods of potentially null objects without throwing exceptions.
  • It can be chained to access multiple levels of properties, like person?.Address?.City.
  • It works well in scenarios where the object might be null and you want to avoid explicit null checks.
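The bullets above also mention calling methods safely, not just reading properties. A short sketch (the variable names are illustrative):

```csharp
using System;

public static class NullConditionalDemo
{
    public static void Main()
    {
        string text = null;

        // Method call on a possibly-null object: the call is skipped and the
        // expression evaluates to null instead of throwing, so ?? supplies a fallback.
        string upper = text?.ToUpper() ?? "EMPTY";
        Console.WriteLine(upper); // Output: EMPTY

        // The null-conditional indexer (?[]) works the same way for arrays and lists.
        int[] numbers = null;
        int? first = numbers?[0];
        Console.WriteLine(first is null); // Output: True
    }
}
```

The same pattern is common for raising events safely: `SomethingChanged?.Invoke(this, EventArgs.Empty)` only invokes the delegate when at least one handler is attached.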

Comparing Null Coalescing (??) and Null Conditional (?.)
Null Coalescing (??)

  • Use it when you need to provide a fallback value if an expression evaluates to null.
  • It's commonly used in scenarios involving value assignment or default parameters.

Null Conditional (?.)

  • Use it when you need to safely access or call properties and methods on potentially null objects.
  • It's frequently used in deep object hierarchies or when dealing with nullable types.

Combining Both ?? and ?.
Consider the example above with the Person class, which has an Address reference, and we want to use a combination of both ?? and ?. to check for null. Here's how we can implement it:
string userCity = person?.Address?.City ?? "Unknown City";
Console.WriteLine(userCity); // Output: Unknown City


Summary
In this article, we have examined how to handle null checks more efficiently by utilizing C#'s Null Coalescing (??) and Null Conditional (?.) operators. We discussed how these features can be applied to implement null checks in a cleaner, more readable manner, improving both the clarity and maintainability of the code. Additionally, we explored how to use these operators individually as well as how to combine them in a single statement to handle multiple null checks effectively, making the code more concise and eliminating the need for verbose null-checking logic.

Feel free to follow me here, and you can read more articles at this link: Click Here to explore my articles.

  • New LINQ Methods in .NET 9: Index, CountBy, and AggregateBy
  • Working with JSON in .NET Core: Newtonsoft.Json, NetJSON, and System.Text.Json
  • Implement a retry policy in .NET 6 Applications with Polly
  • Understanding Model Binding in ASP.NET Core with .NET 8
  • Data Conversions in C# with SuperConvert


About HostForLIFE.eu

HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2016 Hosting, ASP.NET Core 2.2.1 Hosting, ASP.NET MVC 6 Hosting and SQL 2017 Hosting.

