European ASP.NET 4.5 Hosting BLOG

BLOG about ASP.NET 4, ASP.NET 4.5 Hosting and Its Technology - Dedicated to European Windows Hosting Customer

European ASP.NET Core 9.0 Hosting - HostForLIFE :: Types, Illustrations, and Best Practices of Table Sharding in SQL

June 11, 2025 09:24 by author Peter

Table Sharding in SQL
Table sharding is a database design technique used to improve the scalability and performance of large-scale applications. It involves splitting a large table into smaller, more manageable pieces called "shards," which are distributed across multiple database instances or servers. Each shard contains a subset of the data, and together they form the complete dataset.

Why Use Table Sharding?

  • Scalability: Sharding allows horizontal scaling by distributing data across multiple servers.
  • Performance: Queries are faster because they operate on smaller datasets.
  • Fault Tolerance: If one shard fails, only a portion of the data is affected.
  • Cost Efficiency: Sharding enables the use of smaller, less expensive servers instead of a single, high-performance server.

Types of Table Sharding
Range-Based Sharding

  • Data is divided based on a range of values in a specific column.
  • Example: A table storing user data can be sharded by user ID ranges (e.g., Shard 1: User IDs 1–1000, Shard 2: User IDs 1001–2000).
  • Pros: Simple to implement and query.
  • Cons: Uneven data distribution if ranges are not carefully chosen.


Hash-Based Sharding

  • A hash function is applied to a column (e.g., user ID) to determine which shard the data belongs to.
  • Example: hash(user_id) % number_of_shards determines the shard.
  • Pros: Ensures even data distribution.
  • Cons: Harder to query across shards and to add/remove shards dynamically.


Geographic Sharding

  • Data is divided based on geographic location.
  • Example: Users in North America are stored in one shard, while users in Europe are stored in another.
  • Pros: Useful for applications with geographically distributed users.
  • Cons: Can lead to uneven distribution if one region has significantly more users.

Key-Based Sharding

  • Similar to hash-based sharding, but uses a specific key (e.g., customer ID or order ID) to determine the shard.
  • Pros: Flexible and allows for custom sharding logic.
  • Cons: Requires careful planning to avoid hotspots.


Directory-Based Sharding

  • A lookup table (directory) maps each record to its corresponding shard (a minimal lookup sketch follows this list).
  • Pros: Highly flexible and allows for dynamic shard allocation.
  • Cons: Adds complexity and requires maintaining the directory.
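
Since the rest of this blog's examples use C#, here is a minimal, hypothetical sketch of how an application might consult such a directory before issuing a query; the dictionary contents and shard names are purely illustrative.

// Minimal sketch: an in-memory "directory" that maps a record key to its shard.
// In a real system the directory would live in its own highly available lookup store.
var shardDirectory = new Dictionary<int, string>
{
    [1]    = "ShardA",   // user_id 1 lives on ShardA
    [1001] = "ShardB",   // user_id 1001 lives on ShardB
};

string ResolveShard(int userId) =>
    shardDirectory.TryGetValue(userId, out var shard) ? shard : "DefaultShard";

Console.WriteLine(ResolveShard(1001)); // ShardB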

Examples of Table Sharding
Example 1. Range-Based Sharding
-- Shard 1: User IDs 1–1000
CREATE TABLE users_shard1 (
user_id INT PRIMARY KEY,
name VARCHAR(100),
email VARCHAR(100)
);

-- Shard 2: User IDs 1001–2000
CREATE TABLE users_shard2 (
user_id INT PRIMARY KEY,
name VARCHAR(100),
email VARCHAR(100)
);


Example 2. Hash-Based Sharding
-- Shard 1: Hash(user_id) % 2 = 0
CREATE TABLE users_shard1 (
user_id INT PRIMARY KEY,
name VARCHAR(100),
email VARCHAR(100)
);

-- Shard 2: Hash(user_id) % 2 = 1
CREATE TABLE users_shard2 (
user_id INT PRIMARY KEY,
name VARCHAR(100),
email VARCHAR(100)
);
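
Because the other posts on this blog use C#, here is a small, illustrative sketch of how an application could route a user to one of the two shard tables in Example 2; the helper name GetShardTable is hypothetical, not part of any library.

// Minimal sketch: compute hash(user_id) % number_of_shards and map it to a table name.
static string GetShardTable(int userId, int shardCount = 2)
{
    int shardIndex = Math.Abs(userId.GetHashCode()) % shardCount; // 0 -> users_shard1, 1 -> users_shard2
    return $"users_shard{shardIndex + 1}";
}

Console.WriteLine(GetShardTable(42));    // users_shard1 (even hash)
Console.WriteLine(GetShardTable(1001));  // users_shard2 (odd hash)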


Example 3. Geographic Sharding
-- Shard 1: North America
CREATE TABLE users_na (
user_id INT PRIMARY KEY,
name VARCHAR(100),
email VARCHAR(100),
region VARCHAR(50)
);

-- Shard 2: Europe
CREATE TABLE users_eu (
user_id INT PRIMARY KEY,
name VARCHAR(100),
email VARCHAR(100),
region VARCHAR(50)
);

Best Practices for Table Sharding

Choose the Right Sharding Key

  • Select a column that ensures even data distribution and minimizes cross-shard queries.
  • Example: User ID or Order ID.

Plan for Growth

  • Design shards to accommodate future data growth.
  • Avoid hardcoding shard ranges to allow for dynamic scaling.

Minimize Cross-Shard Queries

  • Cross-shard queries can be slow and complex. Design your application to minimize them.
  • Example: Use denormalization or caching to reduce the need for joins across shards.

Monitor and Balance Shards

  • Regularly monitor shard sizes and redistribute data if necessary to avoid hotspots.

Use Middleware or Sharding Libraries

  • Middleware tools like ProxySQL or libraries like Hibernate Shards can simplify sharding logic.

Implement Backup and Recovery

  • Ensure each shard is backed up independently and has a recovery plan.

Test for Performance

  • Test your sharding strategy under realistic workloads to identify bottlenecks.

Document Sharding Logic

  • Clearly document how data is distributed across shards to help developers and DBAs.

Challenges of Table Sharding

  • Complexity: Sharding adds complexity to database design and application logic.
  • Cross-Shard Transactions: Managing transactions across shards can be difficult.
  • Rebalancing Data: Adding or removing shards requires redistributing data, which can be time-consuming.
  • Query Optimization: Queries need to be optimized to avoid unnecessary cross-shard operations.

Conclusion
Table sharding is a powerful technique for scaling large databases, but it requires careful planning and implementation. By understanding the different types of sharding, following best practices, and addressing potential challenges, you can design a sharding strategy that meets your application's scalability and performance needs.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: ASP.NET Core Clean and Reliable Code Testing with Moq using C# 13 and xUnit

June 9, 2025 08:34 by author Peter

C# 13 and .NET 8 have greatly enhanced ASP.NET Core development capabilities. However, building scalable and maintainable systems requires robust testing in addition to feature implementation. Using xUnit, Moq, and the latest C# 13 features, you will learn how to write clean, reliable, and testable code.

This guide will walk you through testing a REST API or a service layer:

  • Creating a test project
  • Using xUnit to write clean unit tests
  • Using Moq to mock dependencies
  • Using best practices for test architecture and maintainability

Setting Up Your ASP.NET Core Project with C# 13
Begin with a .NET 8 solution and an ASP.NET Core Web API project, using C# 13.
dotnet new sln -n HflApi

dotnet new web -n Hfl.Api
dotnet new classlib -n Hfl.Domain
dotnet new classlib -n Hfl.Core
dotnet new classlib -n Hfl.Application
dotnet new classlib -n Hfl.Infrastructure


dotnet sln add Hfl.Api/Hfl.Api.csproj
dotnet sln add Hfl.Domain/Hfl.Domain.csproj
dotnet sln add Hfl.Core/Hfl.Core.csproj
dotnet sln add Hfl.Application/Hfl.Application.csproj
dotnet sln add Hfl.Infrastructure/Hfl.Infrastructure.csproj


Create a Test Project with xUnit and Moq
Add a new test project:
dotnet new xunit -n HflApi.Tests
dotnet sln add HflApi.Tests/HflApi.Tests.csproj
dotnet add HflApi.Tests/HflApi.Tests.csproj reference Hfl.Application/Hfl.Application.csproj
dotnet add HflApi.Tests/HflApi.Tests.csproj reference Hfl.Core/Hfl.Core.csproj
dotnet add HflApi.Tests/HflApi.Tests.csproj reference Hfl.Domain/Hfl.Domain.csproj
dotnet add HflApi.Tests package Moq


Use Case: Testing a Service Layer
Domain, Service, Repository, Interface, and API
namespace HflApi.Domain;
public record Order(Guid Id, string Status);


using HflApi.Domain;

namespace HflApi.Core.Interfaces;

public interface IOrderRepository
{
    Task<Order?> GetByIdAsync(Guid id);
}


using HflApi.Core.Interfaces;

namespace HflApi.Application.Services;

public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public async Task<string> GetOrderStatusAsync(Guid orderId)
    {
        var order = await _repository.GetByIdAsync(orderId);
        return order?.Status ?? "Not Found";
    }
}


using HflApi.Core.Interfaces;
using HflApi.Domain;

namespace Hfl.Infrastructure.Repositories
{
    public class InMemoryOrderRepository : IOrderRepository
    {
        private readonly List<Order> _orders = new()
    {
        new Order(Guid.Parse("7c3308b4-637f-426b-aafc-471697dabeb4"), "Processed"),
        new Order(Guid.Parse("5aee5943-56d0-4634-9f6c-7772f6d9c161"), "Pending")
    };

        public Task<Order?> GetByIdAsync(Guid id)
        {
            var order = _orders.FirstOrDefault(o => o.Id == id);
            return Task.FromResult(order);
        }
    }
}

using Hfl.Infrastructure.Repositories;
using HflApi.Application.Services;
using HflApi.Core.Interfaces;


var builder = WebApplication.CreateBuilder(args);


builder.Services.AddScoped<IOrderRepository, InMemoryOrderRepository>();
builder.Services.AddScoped<OrderService>();

var app = builder.Build();

app.MapGet("/orders/{id:guid}", async (Guid id, OrderService service) =>
{
    var status = await service.GetOrderStatusAsync(id);
    return Results.Ok(new { OrderId = id, Status = status });
});

app.Run();


Unit Testing with xUnit and Moq
Test Class
using Moq;
using HflApi.Application.Services;
using HflApi.Core.Interfaces;
using HflApi.Domain;


namespace HflApi.Tests;

public class OrderServiceTests
{
    private readonly Mock<IOrderRepository> _mockRepo;
    private readonly OrderService _orderService;

    public OrderServiceTests()
    {
        _mockRepo = new Mock<IOrderRepository>();
        _orderService = new OrderService(_mockRepo.Object);
    }

    [Fact]
    public async Task GetOrderStatusAsync_ReturnsStatus_WhenOrderExists()
    {
        var orderId = Guid.NewGuid();
        _mockRepo.Setup(r => r.GetByIdAsync(orderId))
                 .ReturnsAsync(new Order(orderId, "Processed"));

        var result = await _orderService.GetOrderStatusAsync(orderId);

        Assert.Equal("Processed", result);
    }

    [Fact]
    public async Task GetOrderStatusAsync_ReturnsNotFound_WhenOrderDoesNotExist()
    {
        var orderId = Guid.NewGuid();
        _mockRepo.Setup(r => r.GetByIdAsync(orderId))
                 .ReturnsAsync((Order?)null);

        var result = await _orderService.GetOrderStatusAsync(orderId);

        Assert.Equal("Not Found", result);
    }

    [Theory]
    [InlineData("Processed")]
    [InlineData("Pending")]
    [InlineData("Shipped")]
    public async Task GetOrderStatus_ReturnsCorrectStatus(string status)
    {
        var orderId = Guid.NewGuid();
        _mockRepo.Setup(r => r.GetByIdAsync(orderId))
                 .ReturnsAsync(new Order(orderId, status));

        var result = await _orderService.GetOrderStatusAsync(orderId);

        Assert.Equal(status, result);
    }
}


Best Practices
1. Use Dependency Injection for Testability
All dependencies should be injected, so don't use static classes or service locator patterns.

2. Keep Tests Isolated
Use Moq to keep database and network I/O out of your tests, so each test exercises only the code under test in isolation.
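
For example, you can prove the service only touched the mocked repository (and nothing external) with Moq's Verify APIs. A minimal sketch that would live inside the OrderServiceTests class shown above:

[Fact]
public async Task GetOrderStatusAsync_CallsRepositoryExactlyOnce()
{
    var orderId = Guid.NewGuid();
    _mockRepo.Setup(r => r.GetByIdAsync(orderId))
             .ReturnsAsync(new Order(orderId, "Processed"));

    await _orderService.GetOrderStatusAsync(orderId);

    // The service should hit the mocked repository exactly once and make no other calls.
    _mockRepo.Verify(r => r.GetByIdAsync(orderId), Times.Once);
    _mockRepo.VerifyNoOtherCalls();
}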

3. Use Theory for Parameterized Tests
[Theory]
[InlineData("Processed")]
[InlineData("Pending")]
[InlineData("Shipped")]
public async Task GetOrderStatus_ReturnsCorrectStatus(string status)
{
    var orderId = Guid.NewGuid();
    _mockRepo.Setup(r => r.GetByIdAsync(orderId))
             .ReturnsAsync(new Order(orderId, status));

    var result = await _orderService.GetOrderStatusAsync(orderId);

    Assert.Equal(status, result);
}

4. Group Tests by Behavior (Not CRUD)
Organize tests by the behavior the system exhibits, not by the CRUD operation performed. For example:

  • GetOrderStatus_ShouldReturnCorrectStatus
  • CreateOrder_ShouldSendNotification

5. Use Records for Test Data in C# 13
public record Order(Guid Id, string Status);

Immutable, concise, and readable test data objects can be created using records.

Test Coverage Tips

  • To measure test coverage, use Coverlet or JetBrains dotCover.
  • Target business rules and logic at the service layer.
  • Do not over-test third-party libraries or trivial getters/setters.

Recommended Tools

Tool                Purpose
xUnit               Unit Testing Framework
Moq                 Mocking Dependencies
FluentAssertions    Readable Assertions
Coverlet            Code Coverage
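
FluentAssertions (listed above) is optional; if you add it with dotnet add HflApi.Tests package FluentAssertions, the assertions from the earlier tests can be written more readably. A small sketch:

using FluentAssertions;

// Instead of Assert.Equal("Processed", result):
result.Should().Be("Processed");

// Collection assertions read naturally too:
new[] { "Pending", "Processed" }.Should().Contain("Processed");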

Summary

Use xUnit, Moq, and C# 13 capabilities to test ASP.NET Core applications. A clean architecture, isolated unit tests, and meaningful test names make your apps dependable. By combining DI, mocking, and xUnit assertions, developers can find and fix problems earlier in the development cycle, which means faster feedback, more confidence, and more maintainable systems. Isolating unit tests guarantees that every part functions on its own, improving the overall reliability of the system, while a clear design and descriptive test names keep the codebase easy to understand and maintain over time. These practices reduce regressions, improve code quality, and make onboarding and debugging easier for team members, ultimately resulting in more scalable and resilient applications.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Using Blazor Server to Call REST APIs: An Introduction with Example

June 5, 2025 08:27 by author Peter

Prerequisites

  • Basic knowledge of Blazor Server
  • Visual Studio or VS Code
  • .NET 6 or .NET 8 SDK
  • A sample REST API (we'll use JSONPlaceholder)

Step 1. Create a Blazor Server App

  • Open Visual Studio
  • Create a new Blazor Server App
  • Name it BlazorRestClientDemo

Step 2. Create the Model

public class Post
{
    public int UserId { get; set; }
    public int Id { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
}

Step 3. Register HttpClient in Program.cs
builder.Services.AddHttpClient("API", client =>
{
    client.BaseAddress = new Uri("https://jsonplaceholder.typicode.com/");
});


Step 4. Create a Service to Call the API

public class PostService
{
    private readonly HttpClient _http;

    public PostService(IHttpClientFactory factory)
    {
        _http = factory.CreateClient("API");
    }

    public async Task<List<Post>> GetPostsAsync()
    {
        var response = await _http.GetFromJsonAsync<List<Post>>("posts");
        return response!;
    }

    public async Task<Post?> GetPostAsync(int id)
    {
        return await _http.GetFromJsonAsync<Post>($"posts/{id}");
    }

    public async Task<Post?> CreatePostAsync(Post post)
    {
        var response = await _http.PostAsJsonAsync("posts", post);
        return await response.Content.ReadFromJsonAsync<Post>();
    }
}


Step 5. Register the Service in Program.cs

builder.Services.AddScoped<PostService>();

Step 6. Use in a Razor Component
@page "/posts"
@inject PostService PostService

<h3>All Posts</h3>

@if (posts == null)
{
    <p>Loading...</p>
}
else
{
    <ul>
        @foreach (var post in posts)
        {
            <li><b>@post.Title</b> - @post.Body</li>
        }
    </ul>
}

@code {
    private List<Post>? posts;

    protected override async Task OnInitializedAsync()
    {
        posts = await PostService.GetPostsAsync();
    }
}

Add a simple form and call CreatePostAsync().
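
As a starting point, a minimal sketch of such a form might look like the following; the component route, field names, and UserId value are illustrative only.

@page "/create-post"
@inject PostService PostService

<h3>New Post</h3>

<input placeholder="Title" @bind="title" />
<textarea placeholder="Body" @bind="body"></textarea>
<button @onclick="Submit">Create</button>

@if (created is not null)
{
    <p>Created post with Id @created.Id</p>
}

@code {
    private string title = string.Empty;
    private string body = string.Empty;
    private Post? created;

    private async Task Submit()
    {
        created = await PostService.CreatePostAsync(new Post { UserId = 1, Title = title, Body = body });
    }
}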

Conclusion
Blazor Server apps can easily consume REST APIs using HttpClient and typed models. In this article, you learned how to:

  • Register and inject HttpClient
  • Call GET and POST endpoints
  • Display data in the UI

Blazor is a powerful front-end technology, and now you know how to connect it with real-world APIs.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Using C# 13, EF Core, and DDD to Create a Clean ASP.NET Core API

June 3, 2025 07:30 by author Peter

We'll show you how to create a scalable, modular, and testable RESTful API using the newest .NET technology and industry best practices. The solution is designed with maintainability and clean architecture principles in mind, which makes it a good fit for real-world enterprise development. ASP.NET Core 8 handles HTTP requests and responses, and C# 13 (with preview features enabled) lets us write clearer, more expressive code by using the most recent language improvements. To access and save data, we'll use SQL Server and Entity Framework Core 8.

By prioritizing key domain logic and dividing concerns across layers, the design complies with Domain-Driven Design (DDD). We'll use the Repository and Unit of Work patterns to abstract data access and guarantee transactional consistency. The project also includes structured logging, centralized exception management, and FluentValidation for input validation for stability and traceability.

  • ASP.NET Core 8
  • C# 13 (Preview features)
  • Entity Framework Core 8
  • MS SQL Server
  • Domain-Driven Design (DDD)
  • Repository + Unit of Work Patterns
  • Dependency Injection
  • Validation, Error Handling & Logging


Project Setup & Structure
Clean Folder Structure (Clean Architecture & DDD-Aligned)

In order to promote separation of concerns, maintainability, and testability, the project uses Clean Architecture principles and Domain-Driven Design principles. The src directory is divided into well-defined layers, each with a specific responsibility.

  • CompanyManagement.API: As the interface between the application layer and the outside world, CompanyManagement.API contains API controllers, dependency injection configurations, and middleware such as error handling and logging.
  • CompanyManagement.Application: In CompanyManagement.Application, the business logic is encapsulated in services (use cases), data transfer objects (DTOs), and command/query handlers. Without concern for persistence or infrastructure, this layer coordinates tasks and enforces application rules.
  • CompanyManagement.Domain: CompanyManagement.Domain defines the business model through entities, interfaces, enums, and value objects. This layer is completely independent of any other project and represents the domain logic.
  • CompanyManagement.Infrastructure: The CompanyManagement.Infrastructure class implements the technical details necessary to support the application and domain layers. This includes Entity Framework Core configurations, the DbContext, repository implementations, and database migrations.
  • CompanyManagement.Tests: CompanyManagement.Tests contains unit and integration tests to help maintain code quality and prevent regressions by testing each component of the system in isolation or as part of a broader workflow.

As a result of this layered structure, the application is able to evolve and scale while keeping the codebase clean, decoupled, and easy to test.

Step-by-Step Implementation
Define the Domain Model

Company.cs – Domain Entity
In Domain-Driven Design (DDD), the domain model captures the core business logic of your application. In the CompanyManagement.Domain.Entities namespace, the Company entity encapsulates key business rules and states of a real-world company.

There are three primary properties in the Company class:

  • Id: A unique identifier (Guid) generated at the time of creation.
  • Name: The company's name, which cannot be changed from outside the class.
  • EstablishedOn: The date the company was founded.

As Entity Framework Core (EF Core) requires a parameterless constructor for materialisation, the constructor is intentionally private. To ensure that Id is always generated and that required fields (Name, EstablishedOn) are always initialised during creation, a public constructor is provided.

The Rename method contains a guard clause to ensure the new name is not null, empty, or whitespace, enforcing business rules directly within the domain model.

Rich domain modelling in DDD adheres to encapsulation, immutability (where appropriate), and self-validation, which are key principles.

Company.cs

namespace CompanyManagement.Domain.Entities;

public class Company
{
    public Guid Id { get; private set; }
    public string Name { get; private set; } = string.Empty;
    public DateTime EstablishedOn { get; private set; }

    private Company() { }

    public Company(string name, DateTime establishedOn)
    {
        Id = Guid.NewGuid();
        Name = name;
        EstablishedOn = establishedOn;
    }

    public void Rename(string newName)
    {
        if (string.IsNullOrWhiteSpace(newName))
            throw new ArgumentException("Name cannot be empty");

        Name = newName;
    }
}


Define the Repository Interface
In Domain-Driven Design (DDD), the repository pattern provides an abstraction over data persistence, enabling the domain layer to remain independent from infrastructure concerns like databases or external APIs. The ICompanyRepository interface, defined in the CompanyManagement.Domain.Interfaces namespace, outlines a contract for working with Company aggregates.

This interface declares the fundamental CRUD operations required to interact with the Company entity:

  • Task<Company?> GetByIdAsync(Guid id): Asynchronously retrieves a company by its unique identifier. Returns null if no match is found.
  • Task<List<Company>> GetAllAsync(): Returns a list of all companies in the system.
  • Task AddAsync(Company company): Asynchronously adds a new Company to the data store.
  • void Update(Company company): Marks an existing company as modified, typically after domain methods like Rename change the entity's state.
  • void Delete(Company company): Removes a company from the data store.

Because this interface is defined in the domain layer, the rest of the application depends only on abstractions, not implementations. This follows the Dependency Inversion Principle (DIP), a key component of Clean Architecture, and promotes loose coupling and testability (e.g., mocks in unit tests).

ICompanyRepository.cs

using CompanyManagement.Domain.Entities;

namespace CompanyManagement.Domain.Interfaces;

public interface ICompanyRepository
{
    Task<Company?> GetByIdAsync(Guid id);
    Task<List<Company>> GetAllAsync();
    Task AddAsync(Company company);
    void Update(Company company);
    void Delete(Company company);
}

EF Core Implementation
As the heart of the Entity Framework Core data access layer, AppDbContext represents a session with the SQL Server database, allowing us to query and save domain entity instances. Input into the constructor is handled through dependency injection, which allows the application to configure the context externally, such as setting the connection string or enabling SQL Server-specific functionality.

The Companies property is a strongly typed DbSet<Company>, which EF Core uses to track and manage Company entities in the database. It abstracts the Companies table and allows you to perform operations like querying, inserting, updating, and deleting data through LINQ and asynchronous methods.

The OnModelCreating method is overridden to configure entity mappings using the Fluent API. Here, we map the Company entity to the Companies table explicitly using modelBuilder.Entity<Company>().ToTable("Companies"). This approach provides flexibility for configuring additional constraints, relationships, and database-specific settings in the future — while keeping the domain model clean and free of persistence concerns.

We follow the Separation of Concerns principle by isolating all database configurations within AppDbContext, ensuring our domain remains pure and focused only on business logic.

AppDbContext.cs
using CompanyManagement.Domain.Entities;
using Microsoft.EntityFrameworkCore;

namespace CompanyManagement.Infrastructure.Data;

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

    public DbSet<Company> Companies => Set<Company>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Company>().ToTable("Companies");
    }
}
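
To actually create the Companies table from this model, you would typically add an EF Core migration. A sketch of the commands, assuming the dotnet-ef tool is installed, the Microsoft.EntityFrameworkCore.Design package is referenced by the API project, and the project names match the structure described above:

dotnet tool install --global dotnet-ef
dotnet ef migrations add InitialCreate --project CompanyManagement.Infrastructure --startup-project CompanyManagement.API
dotnet ef database update --project CompanyManagement.Infrastructure --startup-project CompanyManagement.API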
 

The CompanyRepository class is the concrete implementation of the ICompanyRepository interface for managing Company entities in the database. It is injected with an AppDbContext instance, so it can interact directly with the Entity Framework Core database context. The GetAllAsync method retrieves all Company records from the database asynchronously using ToListAsync(), and the GetByIdAsync method locates a specific company by its unique identifier (Guid id), returning null if no match is found.

With AddAsync, a new Company entity is asynchronously added to the context's change tracker, preparing it for insertion into the database when SaveChangesAsync() is called. Update marks an existing company entity as modified, and Delete flags an entity for deletion.

Using the Repository Pattern, the data access logic is abstracted from the rest of the application, which keeps the underlying persistence mechanism hidden. As a result, we can swap or enhance persistence strategies without affecting the domain or application layers, promoting separation of concerns, testability, and maintainability.

CompanyRepository.cs

using CompanyManagement.Domain.Entities;
using CompanyManagement.Domain.Interfaces;
using CompanyManagement.Infrastructure.Data;
using Microsoft.EntityFrameworkCore;

namespace CompanyManagement.Infrastructure.Repositories;

public class CompanyRepository(AppDbContext context) : ICompanyRepository
{
    public Task<List<Company>> GetAllAsync() => context.Companies.ToListAsync();
    public Task<Company?> GetByIdAsync(Guid id) => context.Companies.FindAsync(id).AsTask();
    public Task AddAsync(Company company) => context.Companies.AddAsync(company).AsTask();
    public void Update(Company company) => context.Companies.Update(company);
    public void Delete(Company company) => context.Companies.Remove(company);
}


Application Layer – Services & DTOs
As part of the Application Layer, Data Transfer Objects (DTOs) are crucial to separating external inputs from internal domain models. Defined as a C# record (available since C# 9), CreateCompanyDto is a simple, immutable data structure that encapsulates only the data needed to create a new company: its name and its establishment date.

Using a DTO like CreateCompanyDto ensures that the API or service layer receives only the necessary information while maintaining a clear boundary from domain entities. It improves maintainability, validation, and security, helps prevent over-posting, simplifies testing, and supports serialization/deserialization out of the box.

It is a clean, minimal contract aligned with Clean Architecture and the Single Responsibility Principle, protecting the domain from direct external access.

CreateCompanyDto.cs

namespace CompanyManagement.Application.DTOs;

public record CreateCompanyDto(string Name, DateTime EstablishedOn);
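
The introduction mentions FluentValidation for input validation; a minimal, illustrative validator for this DTO could look like the sketch below. It assumes the FluentValidation package is installed and registered, and the class name is hypothetical.

CreateCompanyDtoValidator.cs

using FluentValidation;
using CompanyManagement.Application.DTOs;

namespace CompanyManagement.Application.Validation;

public class CreateCompanyDtoValidator : AbstractValidator<CreateCompanyDto>
{
    public CreateCompanyDtoValidator()
    {
        RuleFor(x => x.Name).NotEmpty().MaximumLength(200);              // a company must have a bounded, non-empty name
        RuleFor(x => x.EstablishedOn).LessThanOrEqualTo(DateTime.UtcNow); // a company cannot be founded in the future
    }
}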

This class is the core application service that handles business operations related to Company entities. When persisting changes, it relies on two key abstractions: ICompanyRepository for data access and IUnitOfWork for managing transactional consistency.

In the CreateAsync method, a new Company domain entity is instantiated from the provided CreateCompanyDto and added asynchronously via the repository. The unit of work's SaveChangesAsync method then commits the transaction to the database. Finally, the unique identifier (Id) of the newly created company is returned to the caller for future reference.

GetAllAsync asynchronously retrieves all existing companies by invoking the corresponding repository method and returns the resulting list of Company entities.

The service contains the business logic and coordinates domain operations, while the repository abstracts the persistence details. It follows best practices for asynchronous programming and dependency injection, and its reliance on interfaces makes it easy to unit test.

CompanyService.cs
using CompanyManagement.Application.DTOs;
using CompanyManagement.Domain.Entities;
using CompanyManagement.Domain.Interfaces;

namespace CompanyManagement.Application.Services;

public class CompanyService(ICompanyRepository repository, IUnitOfWork unitOfWork)
{
    public async Task<Guid> CreateAsync(CreateCompanyDto dto)
    {
        var company = new Company(dto.Name, dto.EstablishedOn);
        await repository.AddAsync(company);
        await unitOfWork.SaveChangesAsync();
        return company.Id;
    }

    public async Task<List<Company>> GetAllAsync() => await repository.GetAllAsync();
}

Unit of Work
IUnitOfWork represents a fundamental pattern for managing data persistence and transactional consistency in an application. It abstracts the concept of a "unit of work," which encapsulates a series of operations that should be treated as a single atomic transaction. The SaveChangesAsync() method commits all pending changes to the underlying data store asynchronously. This method returns an integer indicating the number of state entries committed.

By relying on this abstraction, the application ensures that all modifications across multiple repositories can be coordinated and saved together, preserving data integrity and consistency. Furthermore, implementations can be mocked or swapped without changing the consuming code, improving testability and separation of concerns.

IUnitOfWork.cs
namespace CompanyManagement.Domain.Interfaces;

public interface IUnitOfWork
{
    Task<int> SaveChangesAsync();
}

The UnitOfWork class implements the IUnitOfWork interface, encapsulating the transaction management logic for the application. It depends on the AppDbContext, which represents the Entity Framework Core database context.

It simply delegates the call to context.SaveChangesAsync(), asynchronously persisting all tracked changes. This ensures that any changes made through repositories within a unit of work are saved as one atomic operation.

Using the UnitOfWork class to centralize the commit logic makes it easier to coordinate multiple repository operations under one transaction, while also supporting dependency injection and testing.

UnitOfWork.cs
using CompanyManagement.Domain.Interfaces;

namespace CompanyManagement.Infrastructure.Data;

public class UnitOfWork(AppDbContext context) : IUnitOfWork
{
    public Task<int> SaveChangesAsync() => context.SaveChangesAsync();
}


API Layer
ASP.NET Core's Dependency Injection (DI) container is used to wire up the essential services; in this solution that happens in the AddInfrastructure extension method, which Program.cs calls. AddDbContext registers the AppDbContext with the DI container and configures Entity Framework Core to use SQL Server as the database provider, using the connection string retrieved from the application's configuration under the key "Default". As a result, the application can access the database seamlessly.

Through AddScoped, the repository and unit of work abstractions are then mapped to their concrete implementations. The result is that a new instance of CompanyRepository and UnitOfWork is created and shared for every HTTP request, providing scoped lifetime management for database operations.

Also registered as a scoped service is the CompanyService, which contains the business logic for managing companies. With this setup, controllers and other components receive these dependencies via constructor injection, thereby promoting loose coupling, testability, and separation of concerns in the API layer.

ServiceRegistration.cs
using CompanyManagement.Application.Services;
using CompanyManagement.Domain.Interfaces;
using CompanyManagement.Infrastructure.Data;
using CompanyManagement.Infrastructure.Repositories;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

namespace CompanyManagement.Infrastructure.DependencyInjection;

public static class ServiceRegistration
{
    public static IServiceCollection AddInfrastructure(this IServiceCollection services, IConfiguration configuration)
    {
        services.AddDbContext<AppDbContext>(options =>
            options.UseSqlServer(configuration.GetConnectionString("Default")));

        services.AddScoped<ICompanyRepository, CompanyRepository>();
        services.AddScoped<IUnitOfWork, UnitOfWork>();
        services.AddScoped<CompanyService>();

        return services;
    }
}

SwaggerConfiguration.cs
using Microsoft.OpenApi.Models;

namespace CompanyManagement.API.Configurations;

public static class SwaggerConfiguration
{
    public static IServiceCollection AddSwaggerDocumentation(this IServiceCollection services)
    {
        services.AddEndpointsApiExplorer();

        services.AddSwaggerGen(options =>
        {
            options.SwaggerDoc("v1", new OpenApiInfo
            {
                Title = "Company API",
                Version = "v1",
                Description = "API for managing companies using Clean Architecture"
            });

            // Optional: Include XML comments
            var xmlFilename = $"{System.Reflection.Assembly.GetExecutingAssembly().GetName().Name}.xml";
            var xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFilename);
            if (File.Exists(xmlPath))
            {
                options.IncludeXmlComments(xmlPath);
            }
        });

        return services;
    }
}


SwaggerMiddleware.cs
namespace CompanyManagement.API.Middleware;

public static class SwaggerMiddleware
{
    public static IApplicationBuilder UseSwaggerDocumentation(this IApplicationBuilder app)
    {
        app.UseSwagger();

        app.UseSwaggerUI(options =>
        {
            options.SwaggerEndpoint("/swagger/v1/swagger.json", "Company API V1");
        });

        return app;
    }
}

Program.cs
using CompanyManagement.API.Configurations;
using CompanyManagement.API.Endpoints;
using CompanyManagement.API.Middleware;
using CompanyManagement.Infrastructure.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Register all services before building the app.
builder.Services.AddInfrastructure(builder.Configuration);
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerDocumentation();

var app = builder.Build();

app.UseSwaggerDocumentation();

app.UseHttpsRedirection();

app.MapCompanyEndpoints();

app.Run();


appsettings.json
{
  "ConnectionStrings": {
    "Default": "Server=localhost;Database=CompanyDb;Trusted_Connection=True;MultipleActiveResultSets=true"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*"
}

CompanyEndpoints is the REST API entry point that handles HTTP requests related to company management. It is implemented as minimal API endpoints: the static MapCompanyEndpoints method registers routes under the /api/companies prefix, and each endpoint handler receives the CompanyService it needs as a parameter, so business logic stays in the application layer and the endpoints remain thin.

The POST endpoint accepts a CreateCompanyDto from the request body, calls the service's CreateAsync method, and returns a 201 Created response with the new resource's location.
The GET endpoint that takes a company ID in the route retrieves all companies via the service, searches for the one with the matching ID, and returns 200 OK if a match is found, or 404 Not Found otherwise.
The parameterless GET endpoint returns 200 OK with all companies.

By offloading business logic to the service layer, these endpoints adhere to the Single Responsibility Principle and remain maintainable and testable.

CompanyEndpoints.cs
using CompanyManagement.Application.DTOs;
using CompanyManagement.Application.Services;

namespace CompanyManagement.API.Endpoints;

public static class CompanyEndpoints
{
    public static void MapCompanyEndpoints(this WebApplication app)
    {
        app.MapPost("/api/companies", async (CreateCompanyDto dto, CompanyService service) =>
        {
            var id = await service.CreateAsync(dto);
            return Results.Created($"/api/companies/{id}", new { id });
        });

        app.MapGet("/api/companies", async (CompanyService service) =>
        {
            var companies = await service.GetAllAsync();
            return Results.Ok(companies);
        });

        app.MapGet("/api/companies/{id:guid}", async (Guid id, CompanyService service) =>
        {
            var companies = await service.GetAllAsync();
            var match = companies.FirstOrDefault(c => c.Id == id);
            return match is not null ? Results.Ok(match) : Results.NotFound();
        });
    }
}

Best Practices
Several key best practices are followed in this project to ensure that the code is clean, maintainable, and scalable. For instance, Data Transfer Objects (DTOs) are used to avoid exposing domain entities directly to external clients, improving security and abstraction. The Single Responsibility Principle (SRP) is applied by ensuring each layer has a distinct, focused responsibility, resulting in improved code organisation and clarity.

The project uses Dependency Injection to promote flexibility, ease testing, and maintain separation of concerns. For I/O-bound operations such as database access, asynchronous programming with async/await improves application responsiveness and scalability.

As a result of exception handling middleware, error logging is consistent, and maintenance is simplified. Finally, the project includes comprehensive unit testing using xUnit and Moq frameworks for each layer, which ensures code quality and reliability in the future.
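
As a rough sketch of what one of those tests might look like for CompanyService, using the same xUnit + Moq approach shown earlier on this blog (the test class name and values are illustrative):

using CompanyManagement.Application.DTOs;
using CompanyManagement.Application.Services;
using CompanyManagement.Domain.Entities;
using CompanyManagement.Domain.Interfaces;
using Moq;
using Xunit;

namespace CompanyManagement.Tests;

public class CompanyServiceTests
{
    [Fact]
    public async Task CreateAsync_AddsCompany_AndSavesChanges()
    {
        var repo = new Mock<ICompanyRepository>();
        var unitOfWork = new Mock<IUnitOfWork>();
        var service = new CompanyService(repo.Object, unitOfWork.Object);

        var id = await service.CreateAsync(new CreateCompanyDto("Acme", new DateTime(2020, 1, 1)));

        Assert.NotEqual(Guid.Empty, id);
        repo.Verify(r => r.AddAsync(It.Is<Company>(c => c.Name == "Acme")), Times.Once);
        unitOfWork.Verify(u => u.SaveChangesAsync(), Times.Once);
    }
}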



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Use Cases and Performance Comparison of PLINQ vs LINQ in C#

May 26, 2025 08:20 by author Peter

Large datasets and computationally demanding workloads are increasingly common in software applications, so developers need effective tools to process data. PLINQ (Parallel LINQ) and LINQ (Language Integrated Query) are two popular choices in C#. Although their syntax and functionality are very similar, their query execution models are very different. With the help of real-world examples and performance comparisons, this article examines the main distinctions, applications, and performance factors between LINQ and PLINQ.

What is LINQ?
LINQ (Language Integrated Query) is a feature of C# that enables developers to perform data querying in a syntax integrated into the language. Introduced in .NET Framework 3.5, LINQ provides a consistent method to work with different data sources like collections, databases, XML, and more. It executes queries sequentially, processing each item in turn.

LINQ Example

var numbers = new List<int> { 1, 2, 3, 4, 5 };
var evenNumbers = numbers.Where(n => n % 2 == 0).ToList();

foreach (var number in evenNumbers)
{
    Console.WriteLine(number); // Output: 2, 4
}

LINQ is straightforward to use and works well for small-to-medium-sized datasets or queries that are not computationally intensive.

What is PLINQ?

PLINQ (Parallel LINQ) was introduced with .NET Framework 4.0 and extends LINQ by enabling parallel query execution. Built on the Task Parallel Library (TPL), PLINQ uses multiple CPU cores to process large datasets or computationally expensive operations more efficiently. It partitions data into chunks and executes them concurrently using threads.

PLINQ Example
var numbers = Enumerable.Range(1, 10_000);
var evenNumbers = numbers.AsParallel()
                         .Where(n => n % 2 == 0)
                         .ToList();

Console.WriteLine(evenNumbers.Count); // Output: 5000


The AsParallel() method enables parallel execution of the query, leveraging all available processor cores.

Performance Comparison Between LINQ and PLINQ

To better understand how LINQ and PLINQ differ in performance, let’s process a large dataset and measure the time taken for each.

Example: LINQ vs PLINQ Performance
The following code processes a dataset of numbers from 1 to 5,000,000 and filters prime numbers using both LINQ and PLINQ. We also measure execution time.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class Program
{
    static void Main()
    {
        // Prepare a large dataset
        var largeDataSet = Enumerable.Range(1, 5_000_000).ToList();

        // LINQ benchmark
        var stopwatch = Stopwatch.StartNew();
        var linqPrimes = largeDataSet.Where(IsPrime).ToList();
        stopwatch.Stop();
        Console.WriteLine($"LINQ Time: {stopwatch.ElapsedMilliseconds} ms");
        Console.WriteLine($"LINQ Prime Count: {linqPrimes.Count}");

        // PLINQ benchmark
        stopwatch.Restart();
        var plinqPrimes = largeDataSet.AsParallel().Where(IsPrime).ToList();
        stopwatch.Stop();
        Console.WriteLine($"PLINQ Time: {stopwatch.ElapsedMilliseconds} ms");
        Console.WriteLine($"PLINQ Prime Count: {plinqPrimes.Count}");
    }

    static bool IsPrime(int number)
    {
        if (number <= 1) return false;
        for (int i = 2; i <= Math.Sqrt(number); i++)
        {
            if (number % i == 0) return false;
        }
        return true;
    }
}

Explanation of Benchmark

  • Dataset: A large range of numbers (1 to 5,000,000) serves as the input.
  • LINQ: The query is processed sequentially, examining each number to determine if it is prime.
  • PLINQ: The query runs in parallel, dividing the dataset into chunks for multiple threads to process concurrently.

Expected Output
On a multi-core machine, the PLINQ query typically completes noticeably faster than the sequential LINQ query for this workload, while both report the same prime count; the exact timings depend on your CPU and core count.


Ordered vs Unordered Processing in PLINQ
By default, PLINQ processes data in unordered mode to maximize performance. However, if your application requires results to be in the same order as the input dataset, you can enforce order using .AsOrdered().

Example. Using .AsOrdered() in PLINQ
var numbers = Enumerable.Range(1, 10);
var orderedResult = numbers.AsParallel()
                       .AsOrdered()
                       .Where(n => n % 2 == 0)
                       .ToList();
Console.WriteLine(string.Join(", ", orderedResult)); // Output: 2, 4, 6, 8, 10

If maintaining the order doesn’t matter, you can use .AsUnordered() to further optimize performance.

Benchmark. Ordered vs Unordered PLINQ
var numbers = Enumerable.Range(1, 1_000_000).ToList();

var stopwatch = Stopwatch.StartNew();

// Ordered PLINQ
var orderedPrimes = numbers.AsParallel()
                       .AsOrdered()
                       .Where(IsPrime)
                       .ToList();
stopwatch.Stop();
Console.WriteLine($"AsOrdered Time: {stopwatch.ElapsedMilliseconds} ms");

stopwatch.Restart();

// Unordered PLINQ
var unorderedPrimes = numbers.AsParallel()
                         .AsUnordered()
                         .Where(IsPrime)
                         .ToList();
stopwatch.Stop();
Console.WriteLine($"AsUnordered Time: {stopwatch.ElapsedMilliseconds} ms");

Expected Output
AsOrdered Time: 210 ms
AsUnordered Time: 140 ms

Use Cases for LINQ and PLINQ

When to Use LINQ?

  • Small datasets where sequential processing is efficient.
  • Tasks requiring strict order preservation.
  • Easy debugging and simple queries.
  • Real-time systems where lower latency matters more than raw throughput.

When to Use PLINQ?

  • Large datasets where parallel execution can reduce runtime.
  • Computationally intensive tasks, such as processing images or mathematical operations.
  • Bulk operations where order doesn’t matter, e.g., statistical analysis of logs.
  • Applications running on multi-core machines, where the available CPU resources can actually be utilized.

Summary Table of Insights
Key Differences Between LINQ and PLINQ

Feature            | LINQ                               | PLINQ
Execution          | Sequential                         | Parallel
Performance        | Best suited for small datasets     | Designed for large datasets
Utilization        | Uses a single CPU core             | Utilizes multiple CPU cores and threads
Order Preservation | Preserves element order by default | Unordered by default (order can be enforced)
Error Handling     | Simple error propagation           | Requires handling of thread-specific exceptions
Control            | Limited control over execution     | Offers options like cancellation and partitioning
Overhead           | No additional overhead             | Thread management and partitioning may add overhead
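
The last two rows of the table deserve a short illustration. WithDegreeOfParallelism and WithCancellation are PLINQ operators for controlling execution, and exceptions thrown on worker threads surface as an AggregateException. A small sketch reusing the IsPrime method from the benchmark above:

var cts = new CancellationTokenSource();
var numbers = Enumerable.Range(1, 1_000_000);

try
{
    var primes = numbers.AsParallel()
                        .WithDegreeOfParallelism(Environment.ProcessorCount) // cap the number of worker threads
                        .WithCancellation(cts.Token)                         // allow cooperative cancellation
                        .Where(IsPrime)
                        .ToList();
    Console.WriteLine($"Prime Count: {primes.Count}");
}
catch (OperationCanceledException)
{
    Console.WriteLine("Query was cancelled.");
}
catch (AggregateException ex)
{
    // PLINQ wraps exceptions thrown on worker threads in an AggregateException.
    foreach (var inner in ex.InnerExceptions)
        Console.WriteLine(inner.Message);
}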

Conclusion
In C#, LINQ and PLINQ are both great tools for data queries. PLINQ performs best in situations requiring extensive data processing or operations on huge datasets where parallelism may be used, whereas LINQ is appropriate for smaller, simpler datasets.


Depending on whether result ordering or raw efficiency is your first priority, PLINQ offers both ordered and unordered processing options. Benchmarking your query against real-world workloads is the best way to find the optimal approach for your use case.

By striking a balance between order sensitivity, performance, and application complexity, you can get the most out of LINQ and PLINQ and write code that is both efficient and maintainable.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Using Consul for Service Discovery

May 22, 2025 07:37 by author Peter

We'll look at how to use Consul to set up Service Discovery in this post. The technique by which separate services can automatically identify one another without hardcoding network information is known as service discovery in the context of microservices. The IP/port configuration may fluctuate dynamically, and the services may scale up or down regularly. Because of this, hardcoding the port and IP addresses is unreliable.


There are two independent services, UserService and PaymentService, which provide their own functions. The AggregatorService is an endpoint exposed to the client, which would need to fetch information from both UserService and PaymentService, aggregate the result, and send it back to the client. For this, the AggregatorService would need to use ServiceDiscovery to resolve the details of both independent services.


Consul
The first step is to ensure our Consul service is running. In this example, we will use Docker containers.

services:
  servicediscovery:
    image: hashicorp/consul
    container_name: servicediscovery
    ports:
      - "9500:8500"    # HTTP UI/API
      - "9600:8600/udp" # DNS
    command: agent -dev -client=0.0.0.0
    networks:
        commonnetwork:
networks:
  commonnetwork:
    driver: bridge

We also define a common network, which the other services will join as well, so that all containers can communicate with each other.
Services

The next step is to create our services, which will register themselves in the Consul registry. Let us begin with UserService. For the sample scenario, we will create a demo endpoint to fetch user info.
[ApiController]
[Route("[controller]")]
public class UserController : ControllerBase
{
    private readonly ILogger<UserController> _logger;

    public UserController(ILogger<UserController> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    [Route("GetUserInfo")]
    public ActionResult<UserInfo> Get([FromQuery]string userName)
    {
        return Ok(new UserInfo("John Doe","1234456","john.doe@example.com"));
    }
}

public record UserInfo([property: JsonPropertyName("name")] string Name, [property: JsonPropertyName("phone")] string Phone, [property: JsonPropertyName("email")] string Email);


As you can observe, the UserController exposes a single endpoint to fetch the User details when provided with a username.

We need to make an additional endpoint, which would be used by the Consul service for health checks on the UserService.
[ApiController]
[Route("[controller]")]
public class HeartBeatController:ControllerBase
{
    private readonly ILogger<HeartBeatController> _logger;

    public HeartBeatController(ILogger<HeartBeatController> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    [ProducesResponseType(StatusCodes.Status200OK)]
    [ProducesResponseType(StatusCodes.Status400BadRequest)]
    [Route("health")]
    public ActionResult Health()
    {
        return Ok();
    }
}


The health check endpoint is a single HTTP GET endpoint that returns HTTP 200. This indicates to the Consul service (or any other client that needs to check the health of the service) that the service is up and running.

We can proceed to register the UserService with Consul. We need to install the Consul NuGet package for this.
Install-Package Consul

We can define our configuration for the service in appsettings.json as
"ConsulConfig": {
  "serviceName": "userservice", // The name under which the service will be registered in Consul
  "serviceId": "userservice001", // Unique service ID for Consul registration
  "serviceAddress": "userservice", // The address or hostname for Consul to reach the service (can be a Docker container name or IP)
  "servicePort": 8081, // The port that the service is listening on
  "healthCheckUrl": "/HeartBeat/health", // The health check URL to monitor the service's health
  "consulAddress": "http://servicediscovery:8500", // Address of the Consul agent (can be changed based on your setup)
  "deregisterAfterMinutes": 5, // Time to wait before deregistering a service after health check failure
  "TLSSkipVerify": true // Skip TLS verification for Consul (useful for self-signed certificates)
}


With the configuration in place, we can now register our service as follows.
var consulConfig = builder.Configuration.GetSection(nameof(ConsulConfig)).Get<ConsulConfig>();
if(consulConfig is not null)
{
    var consulClient = new ConsulClient(x => x.Address = new Uri(consulConfig.ConsulAddress));
    var registration = new AgentServiceRegistration
    {
        ID = consulConfig.ServiceId,
        Name = consulConfig.ServiceName,
        Address = consulConfig.ServiceAddress,
        Port = consulConfig.ServicePort,
        Check = new AgentServiceCheck
        {
            HTTP = $"https://{consulConfig.ServiceAddress}:{consulConfig.ServicePort}{consulConfig.HealthCheckUrl}",
            Interval = TimeSpan.FromSeconds(10),
            Timeout = TimeSpan.FromSeconds(5),
            DeregisterCriticalServiceAfter = TimeSpan.FromMinutes(consulConfig.DeregisterAfterMinutes),
            TLSSkipVerify = consulConfig.TLSSkipVerify,
        }
    };


    // Register service with Consul
    await consulClient.Agent.ServiceRegister(registration);
}

Where ConsulConfig is defined as
public record ConsulConfig
{
    public string ConsulAddress { get; set; } = null!;
    public string ServiceName { get; set; } = null!;
    public string ServiceId { get; set; } = null!;
    public string ServiceAddress { get; set; } = null!;
    public int ServicePort { get; set; }
    public string HealthCheckUrl { get; set; } = null!;
    public int DeregisterAfterMinutes { get; set; }
    public bool TLSSkipVerify { get; set; } = true;
}
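
One optional refinement, not shown in the original registration code, is to deregister the service when the application shuts down so that Consul does not have to wait for failing health checks. A minimal sketch, assuming the registration code above runs in Program.cs and app is the WebApplication built afterwards:

// Deregister from Consul when the application is stopping.
app.Lifetime.ApplicationStopping.Register(() =>
{
    consulClient.Agent.ServiceDeregister(consulConfig.ServiceId).GetAwaiter().GetResult();
});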


The last step is to ensure our Docker Compose runs the UserService in a container and shares the common network with Consul.
userservice:
    image: ${DOCKER_REGISTRY-}userservice
    container_name: userservice
    build:
      context: .
      dockerfile: UserService/Dockerfile
    ports:
      - "7000:8080"
      - "7001:8081"
    networks:
      commonnetwork:
    depends_on:
      - "servicediscovery"


We can proceed to create another Service (namely, PaymentService) and register it with the Consul. I have skipped the sample here for brevity, but refer to the source code enclosed for details.

Once both services are registered, we can view them in the Consul dashboard.


AggregatorService
In our example context, the consumer code is an aggregator service, which would fetch data from both UserService and PaymentService to aggregate the results. Once the individual services register themselves with Consul, we can resolve them from the AggregatorService.

Our aim would be to create an endpoint that can use both individual services and aggregate the results.
[HttpGet]
public async Task<ActionResult<PaymentDetails?>> Get([FromQuery]string userName)
{
    var user = await _userService.GetUserByIdAsync(userName).ConfigureAwait(false);
    var paymentDetails = await _paymentService.GetPaymentInfo("123").ConfigureAwait(false);

    return Ok(new PaymentDetails()
    {
        User = user,
        Payment = paymentDetails
    });
}

We will delve into the details of the UserService and PaymentService classes in a bit. But to resolve the API Services, we need to configure the Consul service details in AggregatorService.
"ServiceDiscoveryOptions": {
  "ResolverName": "servicediscovery",
  "ResolverPort": 8500,
  "Services": [
    {
      "Key": "UserService",
      "Name": "userservice"
    },
    {
      "Key": "PaymentService",
      "Name": "paymentservice"
    }
  ]
}


The configuration can, of course, be resolved using the IOptions<T> pattern.
builder.Services.Configure<ServiceDiscoveryOptions>(
    builder.Configuration.GetSection(nameof(ServiceDiscoveryOptions)));


public record ServiceDiscoveryOptions
{
    public List<Service> Services { get; set; } = [];
    public string ResolverName { get; set; } = null!;
    public string ResolverPort { get; set; } = null!;
}

public record Service(string Key, string Name);

We can now create our ConsulServiceResolver, which would be responsible for resolving services.
public class ConsulServiceResolver : IDisposable
{
    private readonly ConsulClient _client;
    private bool _disposed = false;
    public ConsulServiceResolver(IOptions<ServiceDiscoveryOptions> serviceDiscoveryOptions)
    {
        var serviceDiscovery = serviceDiscoveryOptions.Value;
        _client = new ConsulClient(cfg => cfg.Address = new Uri($"http://{serviceDiscovery.ResolverName}:{serviceDiscovery.ResolverPort}"));
    }

    /// <summary>
    /// Resolves a healthy instance of the given service name from Consul.
    /// </summary>
    public async Task<(string Address, int Port)> ResolveServiceAsync(string serviceName)
    {
        var result = await _client.Health.Service(serviceName, tag: null, passingOnly: true);

        if (result.Response == null || result.Response.Length == 0)
            throw new Exception($"No healthy instances found for service '{serviceName}'");

        var serviceEntry = result.Response.First();

        return (serviceEntry.Service.Address, serviceEntry.Service.Port);
    }


    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed)
            return;

        if (disposing)
        {
            // Dispose managed state (managed objects).
            _client?.Dispose();
        }
        _disposed = true;
    }

    // Destructor (finalizer) only if needed
    ~ConsulServiceResolver()
    {
        Dispose(false);
    }
}

The ResolveServiceAsync method resolves an individual service based on its service name. We use the Consul Health.Service() method to list the healthy instances registered under the given service name. If more than one instance is found (for example, when multiple instances are running), we simply return the first one. In the real world we would use a proper load-balancing strategy, but for the simplicity of this example we take the first.
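
For instance, a very simple improvement over always taking the first entry would be to pick a random healthy instance. A rough sketch of how the selection inside ResolveServiceAsync could change (this is not production-grade load balancing):

// Instead of result.Response.First(), pick a random healthy instance.
var entries = result.Response;
var chosen = entries[Random.Shared.Next(entries.Length)];
return (chosen.Service.Address, chosen.Service.Port);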

We can register the ConsulServiceResolver in our DI as well.
builder.Services.AddScoped<ConsulServiceResolver>();

We do, however, have one complication. We cannot resolve the services at the startup of AggregatorService, because UserService and PaymentService might not have registered themselves yet, even if we set dependencies in Docker Compose. Additionally, since we are using a secure connection (HTTPS), we need to bypass SSL validation in the development environment.

For the latter, we introduce a custom IHttpClientFactory implementation, which will be used to create our HttpClient instances.
public class DevelopmentHttpClientFactory : IHttpClientFactory
{
    private readonly IServiceProvider _serviceProvider;

    public DevelopmentHttpClientFactory(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public HttpClient CreateClient(string name)
    {
        var handler = new HttpClientHandler
        {
            ServerCertificateCustomValidationCallback = HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
        };
        return new HttpClient(handler);
    }
}

builder.Services.AddSingleton<IHttpClientFactory>(sp => new DevelopmentHttpClientFactory(sp));


IHttpClientFactory.CreateClient() now returns a new HttpClient whose handler bypasses SSL validation.
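Because this factory turns off certificate validation, it is safest to register it only for the Development environment. A minimal sketch, assuming the standard AddHttpClient() registration is what you want everywhere else:

// Register the SSL-bypassing factory only in Development; in all other
// environments fall back to the default HttpClient registration (assumption).
if (builder.Environment.IsDevelopment())
{
    builder.Services.AddSingleton<IHttpClientFactory>(sp => new DevelopmentHttpClientFactory(sp));
}
else
{
    builder.Services.AddHttpClient();
}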

To give the individual services time to register themselves with Consul, we delay the creation of the HttpClient instance until we actually need it for the first time. This is done in the service wrapper classes.
public class UserService : ServiceBase, IUserService
{
    private readonly ILogger<UserService> _logger;
    public UserService(
        IHttpClientFactory httpClientFactory,
        ConsulServiceResolver consulResolver,
        ILogger<UserService> logger,
        IOptions<ServiceDiscoveryOptions> serviceDiscovery) : base(httpClientFactory, logger, consulResolver,serviceDiscovery,nameof(UserService))
    {
        _logger = logger;
    }

    public async Task<UserDto?> GetUserByIdAsync(string userId)
    {
        var client = await GetClientAsync();
        var response = await client.GetAsync($"/user/GetUserInfo?userName={userId}");

        if (response.IsSuccessStatusCode)
        {
            var json = await response.Content.ReadAsStringAsync();
            return JsonSerializer.Deserialize<UserDto>(json);
        }

        _logger.LogError("Failed to get user {UserId}: {StatusCode}", userId, response.StatusCode);
        throw new Exception($"Failed to get user {userId}: {response.StatusCode}");
    }
}


As seen in UserService, GetUserByIdAsync() obtains an HttpClient instance specific to that service via the GetClientAsync() method, which is defined in the base class ServiceBase.
public abstract class ServiceBase
{
    protected readonly Task<HttpClient> _httpClientTask;
    protected readonly ConsulServiceResolver _consulResolver;
    protected ServiceBase(IHttpClientFactory httpClientFactory, ILogger<ServiceBase> logger,ConsulServiceResolver consulResolver,IOptions<ServiceDiscoveryOptions> serviceDiscoveryOptions, string serviceName)
    {
        _consulResolver = consulResolver;
        var registeredService = serviceDiscoveryOptions.Value.Services.FirstOrDefault(s => s.Key == serviceName)?.Name;
        if (registeredService == null)
        {
            logger.LogError("Service {ServiceName} not found in service discovery options", serviceName);
            throw new ArgumentException($"Service {serviceName} not found in service discovery options");
        }

        _httpClientTask = InitializeHttpClientAsync(httpClientFactory,registeredService);
    }

    private async Task<HttpClient> InitializeHttpClientAsync(IHttpClientFactory httpClientFactory,string serviceName)
    {
        var client = httpClientFactory.CreateClient(); // unnamed/default
        var (address, port) = await _consulResolver.ResolveServiceAsync(serviceName);
        client.BaseAddress = new Uri($"https://{address}:{port}");
        return client;
    }

    protected Task<HttpClient> GetClientAsync()
    {
        return _httpClientTask;
    }
}

The InitializeHttpClientAsync() method creates a new HttpClient instance through the injected IHttpClientFactory, which in development is our DevelopmentHttpClientFactory with SSL validation disabled. We then use the ConsulServiceResolver to resolve the service by the serviceName parameter and set HttpClient.BaseAddress from the resolved address and port.

Later on, the UserService and PaymentService wrapper classes use the specifically initialized HttpClient to make the request to the specific API service.

Conclusion

The article outlines the importance of Service Discovery and uses the Consul library for service discovery. The complete source code of the sample application is attached to the article for further reference.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Using C# with .NET 9 for Advanced Data Warehouse Modeling and Querying

clock May 14, 2025 07:25 by author Peter

Data warehouses exist to support business intelligence, analytics, and reporting. While SQL and ETL technologies receive most of the attention, C# can be a useful complement for model definition, metadata management, warehouse schema design, and large-scale querying. This article covers data warehousing with C# 14 and .NET 9.

Why Use C# for Data Warehousing?

  • Automate the creation of warehouse schemas (fact and dimension tables).
  • Regulate and validate source-to-target mapping models.
  • Construct dimensional models automatically.
  • Generate surrogate keys and SCD Type 2 rows.
  • Integrate with Azure Synapse, Snowflake, and BigQuery APIs.
  • Execute high-performance warehouse queries through parameterization and batching.

Data warehouse maintainability relies heavily on metadata programming, code generation, and system integration, all of which C# excels at.

Programmatic Warehouse Modeling in C#
Let's create a straightforward dimensional model in modern C#.

public record CustomerDim(string CustomerKey, string CustomerName, string Country);
public record OrderFact(
    string OrderKey,
    string CustomerKey,
    DateTime OrderDate,
    decimal TotalAmount,
    string CurrencyCode);


You can cast the source data to those types before loading, or even generate the equivalent SQL CREATE TABLE scripts from attributes.
[WarehouseTable("dw.CustomerDim")]
public record CustomerDim(string CustomerKey, string CustomerName, string Country);


Use source generators or reflection to derive the DDL from the annotated classes, as sketched below.
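A minimal reflection-based sketch is shown below. The WarehouseTableAttribute definition and the CLR-to-SQL type map are assumptions made for illustration; a production version would also handle keys, nullability, and string lengths.

using System;
using System.Linq;
using System.Reflection;

// Assumed definition of the attribute used above (not shown in the article).
[AttributeUsage(AttributeTargets.Class)]
public sealed class WarehouseTableAttribute(string name) : Attribute
{
    public string Name { get; } = name;
}

public static class DdlGenerator
{
    // Minimal CLR-to-SQL type map, for illustration only.
    private static string SqlType(Type t) =>
        t == typeof(int)      ? "INT" :
        t == typeof(decimal)  ? "DECIMAL(18,2)" :
        t == typeof(DateTime) ? "DATETIME2" :
        t == typeof(string)   ? "NVARCHAR(200)" :
                                "NVARCHAR(MAX)";

    public static string CreateTableScript<T>()
    {
        var tableName = typeof(T).GetCustomAttribute<WarehouseTableAttribute>()?.Name
                        ?? typeof(T).Name;

        var columns = typeof(T)
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Select(p => $"    {p.Name} {SqlType(p.PropertyType)} NOT NULL");

        return $"CREATE TABLE {tableName} (\n{string.Join(",\n", columns)}\n);";
    }
}

// Usage: Console.WriteLine(DdlGenerator.CreateTableScript<CustomerDim>());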

Warehouse Querying from C#

Instead of scattering raw SQL across the codebase, wrap parameterized warehouse queries in reusable methods.
public async Task<List<OrderFact>> GetSalesByDateAsync(DateTime from, DateTime to)
{
    const string sql = @"
        SELECT
            OrderKey,
            CustomerKey,
            OrderDate,
            TotalAmount,
            CurrencyCode
        FROM dw.OrderFact
        WHERE OrderDate BETWEEN @from AND @to";
    using var conn = new SqlConnection(_warehouseConn);
    var results = await conn.QueryAsync<OrderFact>(sql, new { from, to });
    return results.ToList();
}


This design pattern allows you to:

  • Develop C# console applications or analytics APIs to display dashboards or reports
  • Export to Excel, Power BI, CSV, or PDF
  • Run batch summaries or ML feature generation jobs

High-Level Features

Surrogate Key Generation

C# can handle surrogate keys either in-process or through sequences.
int nextKey = await conn.ExecuteScalarAsync<int>(
    "SELECT NEXT VALUE FOR dw.CustomerKeySeq"
);

Slowly Changing Dimensions (SCD Type 2)
Use EF Core or Dapper to insert new rows for updated attributes with validity ranges.
if (existing.Name != updated.Name)
{
    // End old record
    existing.EndDate = DateTime.UtcNow;

    // Add new record
    var newVersion = new CustomerDim
    {
        // Assign necessary properties here
    };
    await conn.ExecuteAsync("INSERT INTO dw.CustomerDim ...", newVersion);
}
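For a slightly fuller picture, here is a hedged Dapper-based sketch of an SCD Type 2 change. The validity columns (ValidFrom, ValidTo, IsCurrent), the surrogate key CustomerSk, and the CustomerDimRow shape are illustrative assumptions, not the article's schema.

using System;
using System.Threading.Tasks;
using Dapper;
using Microsoft.Data.SqlClient;

// Assumed row shape with validity tracking (illustrative only).
public record CustomerDimRow(int CustomerSk, string CustomerKey, string CustomerName, string Country);

public static class Scd2Writer
{
    public static async Task ApplyNameChangeAsync(SqlConnection conn, CustomerDimRow current, string newName)
    {
        if (current.CustomerName == newName)
            return; // nothing changed, no new version needed

        var now = DateTime.UtcNow;

        // 1. Close the current version of the row.
        await conn.ExecuteAsync(
            "UPDATE dw.CustomerDim SET ValidTo = @now, IsCurrent = 0 WHERE CustomerSk = @sk",
            new { now, sk = current.CustomerSk });

        // 2. Insert a new version with an open-ended validity range.
        await conn.ExecuteAsync(
            @"INSERT INTO dw.CustomerDim (CustomerKey, CustomerName, Country, ValidFrom, ValidTo, IsCurrent)
              VALUES (@CustomerKey, @CustomerName, @Country, @now, NULL, 1)",
            new { current.CustomerKey, CustomerName = newName, current.Country, now });
    }
}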

Query Materialization
Use a ToDataTable() extension method to convert warehouse query results into in-memory tables (a minimal sketch of the helpers follows the snippet below).
var table = queryResults.ToDataTable();
ExportToCsv(table, "output/sales.csv");
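Neither ToDataTable() nor ExportToCsv() is a built-in .NET API, so here is a minimal sketch of both, assuming simple reflection over the result type and no CSV escaping beyond quoting:

using System;
using System.Collections.Generic;
using System.Data;
using System.IO;
using System.Linq;
using System.Reflection;

public static class MaterializationHelpers
{
    // Build an in-memory DataTable from any sequence of POCOs/records.
    public static DataTable ToDataTable<T>(this IEnumerable<T> rows)
    {
        var table = new DataTable(typeof(T).Name);
        var props = typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance);

        foreach (var p in props)
            table.Columns.Add(p.Name, Nullable.GetUnderlyingType(p.PropertyType) ?? p.PropertyType);

        foreach (var row in rows)
            table.Rows.Add(props.Select(p => p.GetValue(row) ?? DBNull.Value).ToArray());

        return table;
    }

    // Write the DataTable to a quoted CSV file (minimal, no embedded-quote handling).
    public static void ExportToCsv(DataTable table, string path)
    {
        var lines = new List<string>
        {
            string.Join(",", table.Columns.Cast<DataColumn>().Select(c => c.ColumnName))
        };

        lines.AddRange(table.Rows.Cast<DataRow>()
            .Select(r => string.Join(",", r.ItemArray.Select(v => $"\"{v}\""))));

        File.WriteAllLines(path, lines);
    }
}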

BI Tool and API Integration

C# can:

  • Feed Power BI through REST or tabular model APIs
  • Push metrics to dashboards
  • Develop REST APIs that wrap SQL with business-oriented endpoints (see the sketch below)
  • Automate report sharing via email, Teams, or Slack
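As an example of the REST API point, a minimal-API endpoint wrapping the earlier warehouse query might look like this. Dapper, the "Warehouse" connection string name, and the /sales route are assumptions for the sketch:

using Dapper;
using Microsoft.Data.SqlClient;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Business-oriented endpoint that wraps the dw.OrderFact query shown earlier.
app.MapGet("/sales", async (DateTime from, DateTime to, IConfiguration config) =>
{
    const string sql = @"
        SELECT OrderKey, CustomerKey, OrderDate, TotalAmount, CurrencyCode
        FROM dw.OrderFact
        WHERE OrderDate BETWEEN @from AND @to";

    await using var conn = new SqlConnection(config.GetConnectionString("Warehouse"));
    var rows = await conn.QueryAsync<OrderFact>(sql, new { from, to });
    return Results.Ok(rows);
});

app.Run();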

Conclusion
With .NET 9 and C# 14, you can be hands-on and flexible in data warehouse modeling and querying. Whether you are modeling dimensions, building APIs, or filling dashboards, C# gives you control, performance, and maintainability that you simply can't get with SQL scripts alone.




European ASP.NET Core 9.0 Hosting - HostForLIFE :: Using C# Examples to Implement the Saga Pattern for Distributed Transactions Across Services

clock May 9, 2025 08:28 by author Peter

Distributed transactions are challenging in microservices since each service will have its own data store. The Saga pattern provides a means to handle transactions across services without a global transaction manager. Instead of an atomic transaction, a saga splits it into a sequence of local transactions with compensating transactions for failure. In this post, I describe how you can implement the Saga pattern in C#, with real examples to demonstrate the flow.

Saga Concepts
A saga consists of:

  • A series of operations (local transactions)
  • Compensating actions to undo steps in case anything goes wrong

Sagas can be dealt with in two basic ways:

  • Choreography: all services subscribe to events and respond accordingly (no central coordinator).
  • Orchestration: a central saga orchestrator guides the flow.

Example Scenario: Order Processing Saga

Consider an e-commerce process:

  • Create Order (Order Service)
  • Reserve Inventory (Inventory Service)
  • Charge Payment (Payment Service)

If payment fails, we need to:

  • Cancel payment (if partial)
  • Release inventory
  • Cancel the order

Orchestration Example (C#)
We'll utilize a basic saga orchestrator.

Saga Orchestrator

public class OrderSagaOrchestrator
{
    private readonly IOrderService _orderService;
    private readonly IInventoryService _inventoryService;
    private readonly IPaymentService _paymentService;

    public OrderSagaOrchestrator(
        IOrderService orderService,
        IInventoryService inventoryService,
        IPaymentService paymentService)
    {
        _orderService = orderService;
        _inventoryService = inventoryService;
        _paymentService = paymentService;
    }

    public async Task<bool> ProcessOrderAsync(OrderData order)
    {
        try
        {
            await _orderService.CreateOrderAsync(order);
            await _inventoryService.ReserveInventoryAsync(order.OrderId, order.Items);
            await _paymentService.ChargeAsync(order.OrderId, order.TotalAmount);
            return true;
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Saga failed: {ex.Message}. Starting compensation...");
            await _paymentService.RefundAsync(order.OrderId);
            await _inventoryService.ReleaseInventoryAsync(order.OrderId);
            await _orderService.CancelOrderAsync(order.OrderId);
            return false;
        }
    }
}

Example Interfaces
public interface IOrderService
{
    Task CreateOrderAsync(OrderData order);
    Task CancelOrderAsync(string orderId);
}

public interface IInventoryService
{
    Task ReserveInventoryAsync(string orderId, List<Item> items);
    Task ReleaseInventoryAsync(string orderId);
}

public interface IPaymentService
{
    Task ChargeAsync(string orderId, decimal amount);
    Task RefundAsync(string orderId);
}

public class OrderData
{
    public string OrderId { get; set; }
    public List<Item> Items { get; set; }
    public decimal TotalAmount { get; set; }
}

public class Item
{
    public string ProductId { get; set; }
    public int Quantity { get; set; }
}
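A short, hypothetical usage example of the orchestrator; the concrete service instances are assumed to be created or resolved from DI elsewhere:

// Assume orderService, inventoryService, and paymentService have been resolved
// from DI or constructed elsewhere; the order values below are illustrative.
var orchestrator = new OrderSagaOrchestrator(orderService, inventoryService, paymentService);

var order = new OrderData
{
    OrderId = "ORD-1001",
    Items = new List<Item> { new Item { ProductId = "SKU-42", Quantity = 2 } },
    TotalAmount = 59.90m
};

bool succeeded = await orchestrator.ProcessOrderAsync(order);
Console.WriteLine(succeeded ? "Order completed." : "Order rolled back and compensated.");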

Choreography Example
In a choreography-based saga, all services listen to events. When, for example, the OrderCreated event is published, the Inventory Service hears it and reserves inventory.

Example RabbitMQ consumer for Inventory Service.
consumer.Received += async (model, ea) =>
{
    var message = Encoding.UTF8.GetString(ea.Body.ToArray());
    var orderCreated = JsonConvert.DeserializeObject<OrderCreatedEvent>(message);
    await _inventoryService.ReserveInventoryAsync(
        orderCreated.OrderId,
        orderCreated.Items
    );
    // Then publish InventoryReserved event
};
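The final step noted in the comment, publishing an InventoryReserved event, could look roughly like this. The exchange name, routing key, and event shape are assumptions, and channel is taken to be the same IModel used to set up the consumer (RabbitMQ.Client 6.x style API):

// Publish the follow-up event so the next service in the saga can react to it.
var inventoryReserved = new { orderCreated.OrderId, Status = "Reserved" };
var body = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(inventoryReserved));

channel.BasicPublish(
    exchange: "saga-events",
    routingKey: "inventory.reserved",
    basicProperties: null,
    body: body);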


Best Practices

  • Ensure compensating transactions are always idempotent (a minimal sketch follows this list).
  • Employ reliable messaging (such as RabbitMQ, Kafka) to prevent lost events.
  • Log saga progress for traceability.
  • Research using libraries such as MassTransit (saga support) for production.
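A hedged sketch of an idempotent compensating action: the refund first checks whether it has already run and becomes a no-op on repeat calls. The Refunds table, the _db Dapper connection, and the _paymentGateway client are all assumptions for illustration.

// Idempotent refund: safe to execute more than once for the same order.
public async Task RefundAsync(string orderId)
{
    var alreadyRefunded = await _db.ExecuteScalarAsync<int>(
        "SELECT COUNT(1) FROM Refunds WHERE OrderId = @orderId", new { orderId });

    if (alreadyRefunded > 0)
        return; // compensation already applied, nothing more to do

    await _paymentGateway.RefundAsync(orderId);

    await _db.ExecuteAsync(
        "INSERT INTO Refunds (OrderId, RefundedAt) VALUES (@orderId, @now)",
        new { orderId, now = DateTime.UtcNow });
}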

Conclusion
The Saga pattern enables your C# services to orchestrate complex workflows without distributed transactions. With orchestration or choreography, sagas ensure data consistency between services and handle failures elegantly. By using these principles carefully, you can create scalable and resilient distributed systems.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Partial Properties and Indexers Simplified

clock May 6, 2025 07:53 by author Peter

Code cleanliness and scalability are critical, particularly in large projects or when numerous developers are collaborating. One golden rule we always hear is “separation of concerns.” To follow it, many of us use partial classes to split different responsibilities cleanly. However, through C# 12 there was one irksome restriction: we could not make properties or indexers partial. As a result, we occasionally had to combine several pieces of logic in one place, which caused the code to become disorganized. It was particularly annoying when using tools like code generators and Entity Framework.

Now in C# 13.0, this small but powerful feature has come with Partial Properties and Indexers. It may look like a small update, but in real development, it’s actually a big one.

In this blog, let’s see how we were managing earlier, what’s new in C# 13, and how this feature can help us keep our code clean, readable, and more maintainable.

How was it earlier?

Let's say you're working with a code generator (like EF Core) that creates a class like this.
public partial class Product
{
    public string Name { get; set; }
}


Now, suppose you want to add some custom logic, like trimming extra spaces or checking that the name is not empty or null. But since the Name property is already defined in the generated code, you can't touch it there. You're left with only a few not-so-nice options:

  • Shadow property: create a new property with a similar name and manually sync the values. A bit messy.
  • Backing fields: override the generated class and handle it yourself, but then you might break the generated code. Risky.
  • Partial methods or extension methods: they can work, but they feel like a jugaad (hack), not clean or natural.

End result? Code becomes untidy, logic is scattered across multiple files, and, worst of all, some developers stop following best practices just because the tools don't support them well.

How can we resolve this in C# 13.0?

In C# 13.0, we have a new feature called partial properties and partial indexers. This allows you to declare a property in one part of a partial class and implement it in another.

This update makes property logic work the same way as partial methods; now we can do things like this.
// Part 1
public partial void DoSomething();

// Part 2
public partial void DoSomething()
{
    // logic
}

We can now do the same with properties.
// Part 1 (e.g., generated code)
public partial string Name { get; set; }

// Part 2 (your implementation)
public partial string Name
{
    get => _name;
    set => _name = value.Trim();
}
private string _name;

Simple and powerful! Now let's see a real-life example where we can apply this.

Real-World example - Email validation in a generated model

Let’s say we have a model generated by EF Core or a source generator.

// File: Customer.Generated.cs
public partial class Customer
{
    public partial string Email { get; set; }
}

Above is the auto-generated code, which we can't change directly. Hence, we create another partial class and extend it as shown below.

// File: Customer.cs
public partial class Customer
{
    public partial string Email
    {
        get => _email;
        set
        {
            if (!value.Contains("@"))
                throw new ArgumentException("Invalid email address.");

            _email = value.Trim().ToLower();
        }
    }
    private string _email;
}


This is now clean, separated, and maintainable.

How does it work with Collections?
Earlier, indexers also had the same restriction. Now with partial indexers, working with custom data structures becomes much easier. Let’s see how.

// File: Matrix.Generated.cs
public partial class Matrix
{
    public partial double this[int row, int col] { get; set; }
}


// File: Matrix.cs
public partial class Matrix
{
    public partial double this[int row, int col]
    {
        get => _data[row, col];
        set => _data[row, col] = value;
    }
    private double[,] _data = new double[3, 3];
}


Now your indexer logic can be maintained cleanly alongside internal data handling.

Benefits we are getting out of it

So, what do we really get with these new Partial Properties and Indexers?

  • Cleaner Code Separation: Now your business logic stays in your own file, and the generated code stays untouched. No more mixing things up.
  • Better Tooling Support: Works smoothly with Entity Framework, Blazor, source generators, and even your own scaffolding tools, if you use any.
  • No More Jugaad (Hacks): No need to hide properties or write duplicate logic. Just write behavior where it actually belongs.
  • Team Work Becomes Easy: The UI team can handle things like trimming or formatting, while the backend team focuses on saving data. Clean division.
  • Easier to Maintain: Just open the file, and at a glance, you know what’s written by you and what’s coming from code generation.


European ASP.NET Core 9.0 Hosting - HostForLIFE :: Developing an ASP.NET E-Commerce Chatbot with SQL Server

clock May 2, 2025 07:45 by author Peter

On digital platforms, chatbots are effective tools for enhancing the user experience. In this blog post, I built a basic e-commerce chatbot using ASP.NET Web Forms and SQL Server that gives users immediate answers about digital products such as planners, templates, and file types, without the need for third-party AI libraries.

What does this Chatbot do?

  • Responds to product-related user queries using predefined keywords
  • Displays a friendly chat UI with user-bot interaction
  • Stores each conversation in a chat log table for analysis
  • Uses SQL queries to retrieve responses based on keyword match
  • Fully functional without JavaScript or AJAX

Tech Stack

  • Frontend: ASP.NET Web Forms, Bootstrap 5
  • Backend: C# (.NET Framework), ADO.NET
  • Database: Microsoft SQL Server
  • Tables Used
    • BootResponses: Stores bot responses
    • ChatLogs: Stores chat history

UI Design (ChatBoot.aspx)
The chatbot is placed inside a styled Bootstrap card. Here's the code.

<div class="chat-wrapper">
    <div class="chat-header">
        <h5>E-Commerce Chatbot - Ask Me About Products!</h5>
    </div>

    <div class="chat-body" id="chatBody" runat="server">
    </div>

    <div class="chat-footer">
        <asp:TextBox
            ID="txtUserInput"
            runat="server"
            CssClass="form-control"
            placeholder="Type your question...">
        </asp:TextBox>

        <asp:Button
            ID="btnSend"
            runat="server"
            Text="Send"
            CssClass="btn btn-primary"
            OnClick="btnSend_Click" />
    </div>
</div>


Here’s how the UI looks.

Backend Logic (ChatBoot.aspx.cs)
On Page_Load, I show a friendly welcome message and sample questions.
chatBody.InnerHtml += @"
<div class='bot-msg'>
    Welcome! I'm your assistant bot for digital product shopping and selling.<br/>
    Try asking:
    <ul>
        <li>What templates do you have?</li>
        <li>How to list my product?</li>
        <li>Steps to buy a planner?</li>
    </ul>
</div>";


When the user clicks "Send", the bot checks the database for matching keywords.
string query = @"
    SELECT TOP 1 ResponseText
    FROM BootResponses
    WHERE @msg LIKE '%' + QuestionKeyword + '%'";


If a match is found, the bot replies with that message; otherwise, it shows a fallback message.

Also, both user messages and bot replies are logged into the ChatLogs SQL table. A sketch of the full handler follows.
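Below is a hedged sketch of what the btnSend_Click handler described above might look like with plain ADO.NET. The connection string name, the user-msg CSS class, and the fallback text are assumptions rather than the article's exact source.

// At the top of the code-behind file:
// using System;
// using System.Configuration;
// using System.Data.SqlClient;

protected void btnSend_Click(object sender, EventArgs e)
{
    string userMessage = txtUserInput.Text.Trim();
    if (string.IsNullOrEmpty(userMessage)) return;

    string botResponse = "I'm sorry, I don't understand.";
    string connStr = ConfigurationManager.ConnectionStrings["ChatDb"].ConnectionString;

    using (var conn = new SqlConnection(connStr))
    {
        conn.Open();

        // Keyword lookup using the LIKE-based query shown above.
        using (var cmd = new SqlCommand(
            @"SELECT TOP 1 ResponseText
              FROM BootResponses
              WHERE @msg LIKE '%' + QuestionKeyword + '%'", conn))
        {
            cmd.Parameters.AddWithValue("@msg", userMessage);
            var result = cmd.ExecuteScalar();
            if (result != null) botResponse = result.ToString();
        }

        // Log both sides of the conversation to ChatLogs.
        using (var log = new SqlCommand(
            @"INSERT INTO ChatLogs (UserMessage, BotResponse, Timestamp)
              VALUES (@user, @bot, GETDATE())", conn))
        {
            log.Parameters.AddWithValue("@user", userMessage);
            log.Parameters.AddWithValue("@bot", botResponse);
            log.ExecuteNonQuery();
        }
    }

    chatBody.InnerHtml += $"<div class='user-msg'>{userMessage}</div>";
    chatBody.InnerHtml += $"<div class='bot-msg'>{botResponse}</div>";
    txtUserInput.Text = string.Empty;
}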

 

 

Database Structure

BootResponses Table

Id | QuestionKeyword | ResponseText
1  | Price           | Our digital product prices start at just $5.
2  | Template        | We offer a variety of templates including resumes, portfolios, and business plans.
3  | Hello           | Hi there! Welcome to our Digital Marketplace. How can I assist you today?
4  | Hi              | Hello! I’m your Assistant bot. Ask me anything about our Digital Products.

It supports flexible keywords like "Price", "Template", "Upload", "Download", "Support", "Account", and many more.

ChatLogs Table

This logs every user query and bot reply for future review or enhancement.

Id | UserMessage                             | BotResponse                                                              | Timestamp
1  | How to buy and sell the digital product | I'm sorry, I don't understand.                                           | 2025-04-29 16:09:58
2  | How to create an Account?               | You can register a free account to manage your purchases and downloads. | 2025-04-29 16:22:15

Highlights

 

  • Keyword Matching: Uses SQL LIKE to match partial questions.
  • Data Logging: Every conversation is stored in ChatLogs.
  • Responsive UI: Built with Bootstrap for a clean mobile-friendly interface.
  • Expandable: Easily add more QuestionKeyword and ResponseText entries.

Future Enhancements

  • Add AJAX to make the chatbot work without page refresh.
  • Train a basic ML model to better understand fuzzy queries.
  • Show suggestions dynamically as the user types.
  • Include voice-based input for accessibility.

Final Thoughts
This chatbot project demonstrates how even a basic keyword-based bot can serve as an interactive assistant for digital product platforms. It’s a great starting point if you're building your own chatbot and want to connect frontend UI with backend logic and a real database.


 




