European ASP.NET 4.5 Hosting BLOG

BLOG about ASP.NET 4, ASP.NET 4.5 Hosting and Its Technology - Dedicated to European Windows Hosting Customer

European ASP.NET Core 9.0 Hosting - HostForLIFE :: Manage Transactions in the Entity Framework

clock June 16, 2025 09:36 by author Peter

In this article, we will look at how transactions work in Entity Framework.

Let's get started.

Comprehending Transactions

  • Transactions treat several activities as a single unit, ensuring data integrity.
  • They are crucial for preserving consistency, particularly in complex applications.

Built-in Transaction Management
Entity Framework (EF) provides built-in transaction management through the DbContext.
EF automatically handles transactions for single SaveChanges calls.
using (var context = new AppDbContext())
  {
     // Operations
     context.SaveChanges(); // EF handles transaction automatically
  }


Handling Multiple Operations as a Unit
Transactions are crucial for ensuring data integrity when performing multiple related operations. For instance, transferring money between accounts involves debiting one account and crediting another. Both operations should be part of a single transaction.

Example
using (var context = new AppDbContext())
{
    using (var transaction = context.Database.BeginTransaction())
    {
        try
        {
            // Debit one account
            // Credit another account

            context.SaveChanges();
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
            // Handle the exception
        }
    }
}


Rollback Strategies

Rollbacks revert all changes if any operation fails.
They are essential for maintaining data integrity when errors occur.
using (var context = new AppDbContext())
{
    using (var transaction = context.Database.BeginTransaction())
    {
        try
        {
            // Operations
            context.SaveChanges();
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
        }
    }
}


Commit Strategies

Use Commit to finalize transactions after successful operations.
Always wrap Commit within a try-catch block to handle exceptions.
using (var context = new AppDbContext())
{
    using (var transaction = context.Database.BeginTransaction())
    {
        try
        {
            // Operations
            context.SaveChanges();
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
            // Log or handle the exception
        }
    }
}

Combining Explicit Transactions with Multiple Contexts
Handle transactions across multiple DbContext instances.
Ensure consistency across different data sources.
using (var context1 = new AppDbContext1())
using (var context2 = new AppDbContext2())
{
    using (var transaction = context1.Database.BeginTransaction())
    {
        try
        {
            // Operations on context1
            context1.SaveChanges();

            // Operations on context2
            // Note: both contexts must share the same database connection
            context2.Database.UseTransaction(transaction.GetDbTransaction());
            context2.SaveChanges();

            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
        }
    }
}


Key Takeaways

  • Transactions are critical for data integrity.
  • EF provides built-in and explicit transaction management.
  • Rollbacks and commits ensure consistency and handle errors effectively.
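Besides BeginTransaction, .NET also offers ambient transactions via System.Transactions, which SaveChanges will enlist in automatically. A minimal sketch, assuming an AppDbContext like the ones above and a provider that supports ambient transactions:

```csharp
using System.Transactions;

// Sketch: an ambient TransactionScope spanning EF operations.
// TransactionScopeAsyncFlowOption.Enabled is required if async code runs inside the scope.
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    using (var context = new AppDbContext())
    {
        // Operations
        context.SaveChanges(); // enlists in the ambient transaction
    }

    scope.Complete(); // without this call, the transaction rolls back on dispose
}
```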

Conclusion
In this article, I have tried to cover how to handle Entity Framework Transactions.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: ASP.NET Core Clean and Reliable Code Testing with Moq using C# 13 and xUnit

clock June 9, 2025 08:34 by author Peter

C# 13 and .NET 9 have greatly enhanced ASP.NET Core development capabilities. However, building scalable and maintainable systems requires robust testing in addition to feature implementation. Using xUnit, Moq, and the latest C# 13 features, you will learn how to write clean, reliable, and testable code.

This guide will walk you through testing a REST API or a service layer:

  • Creating a test project
  • Using xUnit to write clean unit tests
  • Using Moq to mock dependencies
  • Using best practices for test architecture and maintainability

Setting Up Your ASP.NET Core Project with C# 13
Begin with a .NET 9 solution containing an ASP.NET Core Web API project:
dotnet new sln -n HflApi

dotnet new web -n Hfl.Api
dotnet new classlib -n Hfl.Domain
dotnet new classlib -n Hfl.Core
dotnet new classlib -n Hfl.Application
dotnet new classlib -n Hfl.Infrastructure


dotnet sln add Hfl.Api/Hfl.Api.csproj
dotnet sln add Hfl.Domain/Hfl.Domain.csproj
dotnet sln add Hfl.Core/Hfl.Core.csproj
dotnet sln add Hfl.Application/Hfl.Application.csproj
dotnet sln add Hfl.Infrastructure/Hfl.Infrastructure.csproj


Create a Test Project with xUnit and Moq
Add a new test project:
dotnet new xunit -n HflApi.Tests
dotnet add HflApi.Tests/HflApi.Tests.csproj reference Hfl.Application/Hfl.Application.csproj
dotnet add HflApi.Tests/HflApi.Tests.csproj reference Hfl.Core/Hfl.Core.csproj
dotnet add HflApi.Tests/HflApi.Tests.csproj reference Hfl.Domain/Hfl.Domain.csproj
dotnet add HflApi.Tests package Moq


Use Case: Testing a Service Layer
Domain, Service, Repository, Interface, and API
namespace HflApi.Domain;
public record Order(Guid Id, string Status);


using HflApi.Domain;

namespace HflApi.Core.Interfaces;

public interface IOrderRepository
{
    Task<Order?> GetByIdAsync(Guid id);
}


using HflApi.Core.Interfaces;

namespace HflApi.Application.Services;

public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public async Task<string> GetOrderStatusAsync(Guid orderId)
    {
        var order = await _repository.GetByIdAsync(orderId);
        return order?.Status ?? "Not Found";
    }
}


using HflApi.Core.Interfaces;
using HflApi.Domain;

namespace Hfl.Infrastructure.Repositories
{
    public class InMemoryOrderRepository : IOrderRepository
    {
        private readonly List<Order> _orders = new()
        {
            new Order(Guid.Parse("7c3308b4-637f-426b-aafc-471697dabeb4"), "Processed"),
            new Order(Guid.Parse("5aee5943-56d0-4634-9f6c-7772f6d9c161"), "Pending")
        };

        public Task<Order?> GetByIdAsync(Guid id)
        {
            var order = _orders.FirstOrDefault(o => o.Id == id);
            return Task.FromResult(order);
        }
    }
}

using Hfl.Infrastructure.Repositories;
using HflApi.Application.Services;
using HflApi.Core.Interfaces;


var builder = WebApplication.CreateBuilder(args);


builder.Services.AddScoped<IOrderRepository, InMemoryOrderRepository>();
builder.Services.AddScoped<OrderService>();

var app = builder.Build();

app.MapGet("/orders/{id:guid}", async (Guid id, OrderService service) =>
{
    var status = await service.GetOrderStatusAsync(id);
    return Results.Ok(new { OrderId = id, Status = status });
});

app.Run();


Unit Testing with xUnit and Moq
Test Class
using Moq;
using HflApi.Application.Services;
using HflApi.Core.Interfaces;
using HflApi.Domain;


namespace OrderApi.Tests;

public class OrderServiceTests
{
    private readonly Mock<IOrderRepository> _mockRepo;
    private readonly OrderService _orderService;

    public OrderServiceTests()
    {
        _mockRepo = new Mock<IOrderRepository>();
        _orderService = new OrderService(_mockRepo.Object);
    }

    [Fact]
    public async Task GetOrderStatusAsync_ReturnsStatus_WhenOrderExists()
    {
        var orderId = Guid.NewGuid();
        _mockRepo.Setup(r => r.GetByIdAsync(orderId))
                 .ReturnsAsync(new Order(orderId, "Processed"));

        var result = await _orderService.GetOrderStatusAsync(orderId);

        Assert.Equal("Processed", result);
    }

    [Fact]
    public async Task GetOrderStatusAsync_ReturnsNotFound_WhenOrderDoesNotExist()
    {
        var orderId = Guid.NewGuid();
        _mockRepo.Setup(r => r.GetByIdAsync(orderId))
                 .ReturnsAsync((Order?)null);

        var result = await _orderService.GetOrderStatusAsync(orderId);

        Assert.Equal("Not Found", result);
    }

    [Theory]
    [InlineData("Processed")]
    [InlineData("Pending")]
    [InlineData("Shipped")]
    public async Task GetOrderStatus_ReturnsCorrectStatus(string status)
    {
        var orderId = Guid.NewGuid();
        _mockRepo.Setup(r => r.GetByIdAsync(orderId))
                 .ReturnsAsync(new Order(orderId, status));

        var result = await _orderService.GetOrderStatusAsync(orderId);

        Assert.Equal(status, result);
    }
}


Best Practices
1. Use Dependency Injection for Testability
Inject all dependencies through constructors; avoid static classes and service-locator patterns, which cannot be substituted in tests.
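The OrderService above already follows this rule: its repository arrives through the constructor, so a test can hand it a Moq mock. A small contrast sketch (the static StaticOrderRepository is hypothetical, shown only to illustrate the anti-pattern):

```csharp
// Hard to test: welded to a concrete, static dependency (hypothetical class).
public class UntestableOrderService
{
    public async Task<string> GetOrderStatusAsync(Guid orderId)
    {
        var order = await StaticOrderRepository.GetByIdAsync(orderId); // cannot be mocked
        return order?.Status ?? "Not Found";
    }
}

// Testable: the dependency is injected via an interface, so tests can pass a mock.
public class TestableOrderService
{
    private readonly IOrderRepository _repository;

    public TestableOrderService(IOrderRepository repository) => _repository = repository;

    public async Task<string> GetOrderStatusAsync(Guid orderId)
    {
        var order = await _repository.GetByIdAsync(orderId);
        return order?.Status ?? "Not Found";
    }
}
```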

2. Keep Tests Isolated
Use Moq to keep database and network I/O out of unit tests, so each test exercises only the code under test.
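Beyond stubbing return values, Moq can also verify that the service interacted with its dependency exactly as expected. A sketch of an extra test method for the OrderServiceTests class above:

```csharp
[Fact]
public async Task GetOrderStatusAsync_QueriesRepositoryExactlyOnce()
{
    var orderId = Guid.NewGuid();
    _mockRepo.Setup(r => r.GetByIdAsync(orderId))
             .ReturnsAsync(new Order(orderId, "Processed"));

    await _orderService.GetOrderStatusAsync(orderId);

    // Fails the test if the repository was not called exactly once with this id,
    // or if any other repository member was touched.
    _mockRepo.Verify(r => r.GetByIdAsync(orderId), Times.Once);
    _mockRepo.VerifyNoOtherCalls();
}
```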

3. Use Theory for Parameterized Tests
[Theory]
[InlineData("Processed")]
[InlineData("Pending")]
[InlineData("Shipped")]
public async Task GetOrderStatus_ReturnsCorrectStatus(string status)
{
    var orderId = Guid.NewGuid();
    _mockRepo.Setup(r => r.GetByIdAsync(orderId))
             .ReturnsAsync(new Order(orderId, status));

    var result = await _orderService.GetOrderStatusAsync(orderId);

    Assert.Equal(status, result);
}

4. Group Tests by Behavior (Not CRUD)
Organize tests by the behavior the system exhibits, not by CRUD operation. For example:

  • GetOrderStatus_ShouldReturnCorrectStatus
  • CreateOrder_ShouldSendNotification

5. Use Records for Test Data in C# 13
public record Order(Guid Id, string Status);

Immutable, concise, and readable test data objects can be created using records.

Test Coverage Tips

  • To measure test coverage, use Coverlet or JetBrains dotCover.
  • Target business rules and logic at the service layer.
  • Avoid over-testing third-party libraries or trivial getters/setters.

Recommended Tools

Tool              Purpose
xUnit             Unit Testing Framework
Moq               Mocking Dependencies
FluentAssertions  Readable Assertions
Coverlet          Code Coverage

Summary

Use xUnit, Moq, and C# 13 capabilities to test ASP.NET Core applications. A clean architecture, isolated unit tests, and descriptive test names make your applications dependable. By combining DI, mocking, and xUnit assertions, developers can find and fix problems earlier in the development cycle, which means quicker feedback, more confidence, and more maintainable systems. Isolated unit tests guarantee that each part functions on its own, while a clear design and meaningful test names keep the codebase easy to understand and maintain as it grows. These practices reduce regressions, improve code quality, and make onboarding and debugging easier for the whole team, ultimately producing more scalable and resilient applications.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Using C# with .NET 9 for Advanced Data Warehouse Modeling and Querying

clock May 14, 2025 07:25 by author Peter

The purpose of data warehouses is to offer business intelligence, analytics, and reporting. Even though SQL and ETL technologies receive most of the focus, C# can be a useful complement for model definition, metadata management, warehouse schema design, and large-scale querying. This article provides expert coverage of data warehousing with C# 14 and .NET 9.

Why C# Data Warehousing?

  • Automate the creation of warehouse schemas (fact/dimension tables).
  • Validate and govern source-to-target mapping models.
  • Build dimensional models automatically.
  • Generate surrogate keys and SCD Type 2 rows.
  • Integrate with Azure Synapse, Snowflake, and BigQuery APIs.
  • Execute high-performance warehouse queries through parameterization and batching.

Data warehouse maintainability relies heavily on metadata-programming, code generation, and system integration, all of which C# excels at.

Programmatic Warehouse Modeling in C#
Let's create a straightforward dimensional model in modern C#:

public record CustomerDim(string CustomerKey, string CustomerName, string Country);
public record OrderFact(
    string OrderKey,
    string CustomerKey,
    DateTime OrderDate,
    decimal TotalAmount,
    string CurrencyCode);


You can map the source data to those types before loading, or even generate the equivalent SQL CREATE TABLE scripts from attributes.
[WarehouseTable("dw.CustomerDim")]
public record CustomerDim(string CustomerKey, string CustomerName, string Country);


Use source generation or introspection to obtain the DDL based on annotated classes.
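One possible shape for the introspection approach is a reflection-based generator that reads the [WarehouseTable] attribute and the record's properties to emit a CREATE TABLE statement. This is a hedged sketch; the SQL type mapping is illustrative, not exhaustive:

```csharp
using System;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Class)]
public sealed class WarehouseTableAttribute : Attribute
{
    public string Name { get; }
    public WarehouseTableAttribute(string name) => Name = name;
}

[WarehouseTable("dw.CustomerDim")]
public record CustomerDim(string CustomerKey, string CustomerName, string Country);

public static class DdlGenerator
{
    // Illustrative SQL type mapping; extend for your warehouse's column types.
    private static string SqlType(Type t) => t switch
    {
        _ when t == typeof(string)   => "NVARCHAR(200)",
        _ when t == typeof(int)      => "INT",
        _ when t == typeof(decimal)  => "DECIMAL(18,2)",
        _ when t == typeof(DateTime) => "DATETIME2",
        _ => "NVARCHAR(MAX)"
    };

    public static string CreateTable<T>()
    {
        // Fall back to the type name if the attribute is missing.
        var table = typeof(T).GetCustomAttribute<WarehouseTableAttribute>()?.Name
                    ?? typeof(T).Name;
        var columns = typeof(T).GetProperties()
            .Select(p => $"    {p.Name} {SqlType(p.PropertyType)}");
        return $"CREATE TABLE {table} (\n{string.Join(",\n", columns)}\n);";
    }
}
```

Calling `DdlGenerator.CreateTable<CustomerDim>()` then yields a CREATE TABLE script for dw.CustomerDim.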

Warehouse Querying from C#

Instead of running raw SQL inline, put parameterized warehouse queries inside reusable methods (this example uses Dapper's QueryAsync and Microsoft.Data.SqlClient).
public async Task<List<OrderFact>> GetSalesByDateAsync(DateTime from, DateTime to)
{
    const string sql = @"
        SELECT
            OrderKey,
            CustomerKey,
            OrderDate,
            TotalAmount,
            CurrencyCode
        FROM dw.OrderFact
        WHERE OrderDate BETWEEN @from AND @to";
    using var conn = new SqlConnection(_warehouseConn);
    var results = await conn.QueryAsync<OrderFact>(sql, new { from, to });
    return results.ToList();
}


This design pattern allows you to:

  • Develop C# console applications for analytics APIs to display dashboards or reports
  • Export to Excel, Power BI, CSV, or PDF
  • Run batch summaries or ML feature generation jobs

High-Level Features

Surrogate Key Generation

C# can handle surrogate keys either in-process or through sequences.
int nextKey = await conn.ExecuteScalarAsync<int>(
    "SELECT NEXT VALUE FOR dw.CustomerKeySeq"
);

Slowly Changing Dimensions (SCD Type 2)
Use EF Core or Dapper to insert new rows for updated attributes with validity ranges.
if (existing.Name != updated.Name)
{
    // End the old record (assumes the dimension carries validity columns
    // such as StartDate/EndDate in addition to the attributes shown earlier)
    existing.EndDate = DateTime.UtcNow;

    // Add a new record version with the updated attributes
    var newVersion = new CustomerDim
    {
        // Assign key, updated attributes, and a new StartDate here
    };
    await conn.ExecuteAsync("INSERT INTO dw.CustomerDim ...", newVersion);
}

Query Materialization
Use a custom ToDataTable() extension method to convert warehouse query results into in-memory tables.
var table = queryResults.ToDataTable();
ExportToCsv(table, "output/sales.csv");
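Such an extension is not part of the BCL; one possible reflection-based sketch that mirrors a type's public properties as DataTable columns:

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;
using System.Reflection;

public static class EnumerableExtensions
{
    // Builds a DataTable whose columns mirror T's public instance properties.
    public static DataTable ToDataTable<T>(this IEnumerable<T> rows)
    {
        var table = new DataTable(typeof(T).Name);
        var props = typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance);

        foreach (var p in props)
        {
            // DataTable columns cannot be Nullable<T>; unwrap to the underlying type.
            var type = Nullable.GetUnderlyingType(p.PropertyType) ?? p.PropertyType;
            table.Columns.Add(p.Name, type);
        }

        foreach (var row in rows)
        {
            // Null property values are stored as DBNull in the table.
            table.Rows.Add(props.Select(p => p.GetValue(row) ?? DBNull.Value).ToArray());
        }

        return table;
    }
}
```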

BI Tool and API Integration

C# can:

  • Feed Power BI through REST or tabular model APIs
  • Push metrics to dashboards
  • Develop REST APIs that wrap SQL with business-oriented endpoints

Conclusion
You can also automate report sharing via email, Teams, or Slack. With .NET 9 and C# 14, you can be hands-on and flexible in data warehouse modeling and querying. Whether modeling dimensions, building APIs, or filling dashboards, C# gives you control, performance, and maintainability that you simply can't get from SQL scripts alone.




European ASP.NET Core 9.0 Hosting - HostForLIFE :: Using .NET 9 to Build a Face Detection System

clock April 21, 2025 09:25 by author Peter

Face detection has evolved from a cool sci-fi concept to a practical tool in everything from photo tagging to biometric authentication. In this blog post, we’ll explore how to build a real-time face detection system using the latest .NET 9 and OpenCV via Emgu.CV, a popular .NET wrapper around the OpenCV library.


Prerequisites
To follow along, you'll need:

✅ .NET 9 SDK
✅ Visual Studio 2022+ or VS Code
✅ Basic knowledge of C#
✅ Emgu.CV NuGet packages
✅ A working webcam (for real-time detection)

Step 1. Create a New .NET Console App
Open a terminal and run:
dotnet new console -n FaceDetectionApp
cd FaceDetectionApp


Step 2. Install Required Packages
Add the Emgu.CV packages:
dotnet add package Emgu.CV
dotnet add package Emgu.CV.runtime.windows

Step 3. Add the Haar Cascade File
OpenCV uses trained XML classifiers for detecting objects. Download the Haar cascade for frontal face detection (haarcascade_frontalface_default.xml, available in the data/haarcascades folder of the OpenCV repository).
Place this file in your project root and configure it to copy to the output directory:
FaceDetectionApp.csproj
<ItemGroup>
  <None Update="haarcascade_frontalface_default.xml">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>


Step 4. Write the Real-Time Detection Code
Replace Program.cs with:
using Emgu.CV;
using Emgu.CV.Structure;
using System.Drawing;

var capture = new VideoCapture(0);
var faceCascade = new CascadeClassifier("haarcascade_frontalface_default.xml");

if (!capture.IsOpened)
{
    Console.WriteLine("Could not open the webcam.");
    return;
}

Console.WriteLine("Press ESC to exit...");

while (true)
{
    using var frame = capture.QueryFrame()?.ToImage<Bgr, byte>();

    if (frame == null)
        break;

    var gray = frame.Convert<Gray, byte>();
    var faces = faceCascade.DetectMultiScale(gray, 1.1, 10, Size.Empty);

    foreach (var face in faces)
    {
        frame.Draw(face, new Bgr(Color.Green), 2);
    }

    CvInvoke.Imshow("Real-Time Face Detection", frame);

    if (CvInvoke.WaitKey(1) == 27) // ESC key
        break;
}

CvInvoke.DestroyAllWindows();


Output
Once you run the app using dotnet run, your webcam will open, and it will start detecting faces in real time by drawing rectangles around them.

Conclusion

With the power of .NET 9 and the flexibility of OpenCV, building a real-time face detection system has never been easier. Whether you're developing a hobby project or prototyping a serious application, this foundation sets you on the path to mastering computer vision in the .NET ecosystem.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: MAUI.NET 9: Adding Objects to a Local Database

clock April 17, 2025 08:07 by author Peter

Step 1. Now that we have our local database set up, let's use it to store game reviews.

Step 1.1. Code:
using System.ComponentModel.DataAnnotations;

namespace Models.DTOs;

public class DTOBase
{
    [Key]
    public int? Id { get; set; }
    public int? ExternalId { get; set; }
    public DateTime CreatedAt { get; set; }
    public DateTime UpdatedAt { get; set; }
    public bool Inactive { get; set; }
}


Step 1.2. In the same folder, let's create GameDTO.
namespace Models.DTOs;

[Microsoft.EntityFrameworkCore.Index(nameof(IGDBId), IsUnique = true)]
public class GameDTO : DTOBase
{
    public int? IGDBId { get; set; }
    public int UserId { get; set; }
    public required string Name { get; set; }
    public string? ReleaseDate { get; set; }
    public string? Platforms { get; set; }
    public string? Summary { get; set; }
    public string? CoverUrl { get; set; }
    public required GameStatus Status { get; set; }
    public int? Rate { get; set; }
}

public enum GameStatus
{
    Want,
    Playing,
    Played
}

Step 2. In DbCtx.cs, add the Games table

Step 2.1. Code:
public virtual required DbSet<GameDTO> Games { get; set; }

Step 3. In the repo project, create the GameRepo.cs.

Step 4. Implement GameRepo.cs:
using Microsoft.EntityFrameworkCore;
using Models.DTOs;

namespace Repo
{
    public class GameRepo(IDbContextFactory<DbCtx> DbCtx)
    {
        public async Task<int> CreateAsync(GameDTO game)
        {
            using var context = DbCtx.CreateDbContext();
            await context.Games.AddAsync(game);
            return await context.SaveChangesAsync();
        }
    }
}


Step 4.1. Extract the interface from this class.
Step 5. Save the image locally so that, after adding the game, we can retrieve it without relying on an internet connection. In the ApiRepo project, inside the IGDBGamesAPIRepo class, create the GetGameImageAsync function.
public static async Task<byte[]> GetGameImageAsync(string imageUrl)
{
    try
    {
        using HttpClient httpClient = new();
        var response = await httpClient.GetAsync(imageUrl);

        if (!response.IsSuccessStatusCode)
            throw new Exception($"Exception downloading image URL: {imageUrl}");

        return await response.Content.ReadAsByteArrayAsync();
    }
    catch (Exception ex)
    {
        throw new Exception($"Error fetching image: {ex.Message}", ex);
    }
}


Step 5.1. In the Services project, inside the IGDBGamesApiService class, create the SaveImageAsync function.
public static async Task SaveImageAsync(string imageUrl, string fileName)
{
    try
    {
        byte[] imageBytes = await IGDBGamesAPIRepo.GetGameImageAsync(imageUrl);
        string filePath = Path.Combine(GameService.ImagesPath, fileName);

        if (File.Exists(filePath))
            return;

        await File.WriteAllBytesAsync(filePath, imageBytes);
    }
    catch (Exception)
    {
        throw; // Consider logging the exception instead of rethrowing it blindly
    }
}

Step 6. Create a function to add the repositories in MauiProgram.
public static IServiceCollection Repositories(this IServiceCollection services)
{
    services.AddScoped<IGameRepo, GameRepo>();
    return services;
}

Step 7. Call this function inside CreateMauiApp()

Step 7.1. Add a function to create the image path.

Step 7.2. Code:
if (!System.IO.Directory.Exists(GameService.ImagesPath))
    System.IO.Directory.CreateDirectory(GameService.ImagesPath);

Step 8. In Models.Resps, create an object to handle the service response for the mobile project.

Step 8.1. Code:
namespace Models.Resps;

public class ServiceResp
{
    public bool Success { get; set; }

    public object? Content { get; set; }
}


Step 9. Create the GameService.

Step 9.1. Code:
using Models.DTOs;
using Models.Resps;
using Repo;

namespace Services
{
    public class GameService(IGameRepo GameRepo)
    {
        public static readonly string ImagesPath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "Inventory");

        public async Task<ServiceResp> CreateAsync(GameDTO game)
        {
            game.CreatedAt = game.UpdatedAt = DateTime.Now;

            // User id fixed to 1
            game.UserId = 1;
            await GameRepo.CreateAsync(game);
            return new ServiceResp { Success = true };
        }
    }
}


Step 9.2. Press [Ctrl + .] to extract the interface, then add the service to the Dependency Injection (DI). In MauiProgram, inside the Services function, add the line.

Step 9.3. Code:
services.AddScoped<IGameService, GameService>();

Step 10. In AddGameVM, add a variable to control the accessibility of the confirm button:

Step 10.1. Code:
private bool confirmIsEnabled = true;

public bool ConfirmIsEnabled
{
    get => confirmIsEnabled;
    set => SetProperty(ref confirmIsEnabled, value);
}

Step 11. Remove gameStatus from AddGameVM so that it uses the one from the DTO model.

Step 12. Add IGameService to the AddGameVM constructor.

Step 13. Create the command that adds the game to the local database with the information that will be displayed in the app. It simultaneously attempts to save the image for local use; after insertion, it displays a message and returns to the search screen.

[RelayCommand]
public async Task Confirm()
{
    // Disable button to prevent multiple requests
    ConfirmIsEnabled = false;
    string displayMessage;

    int? _rate = null;

    try
    {
        if (GameSelectedStatus == GameStatus.Played)
        {
            displayMessage = "Game and rate added successfully!";
            _rate = Rate;
        }
        else
        {
            displayMessage = "Game added successfully!";
        }

        if (IsOn && CoverUrl is not null)
        {
            _ = IGDBGamesApiService.SaveImageAsync(CoverUrl, $"{IgdbId}.jpg");
        }

        GameDTO game = new()
        {
            IGDBId = int.Parse(Id),
            Name = Name,
            ReleaseDate = ReleaseDate,
            CoverUrl = CoverUrl,
            Platforms = Platforms,
            Summary = Summary,
            Status = GameSelectedStatus.Value,
            Rate = _rate
        };

        var resp = await gameService.CreateAsync(game);
        if (!resp.Success) displayMessage = "Error adding game";

        // Display message and navigate back
        bool displayResp = await Application.Current.Windows[0].Page.DisplayAlert("Notice", displayMessage, null, "Ok");

        if (!displayResp)
        {
            await Shell.Current.GoToAsync("..");
        }
    }
    catch (Microsoft.EntityFrameworkCore.DbUpdateException)
    {
        bool displayResp = await Application.Current.Windows[0].Page.DisplayAlert("Error", "Game already added!", null, "Ok");

        if (!displayResp)
        {
            await Shell.Current.GoToAsync("..");
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Error: {ex.Message}");
        throw;
    }
    finally
    {
        ConfirmIsEnabled = true;
    }
}


Step 14. Bind the command to the button:

Step 14.1. Code:
Command="{Binding ConfirmCommand}"

Step 15. Update the database version to be recreated with the current structure.
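How the version bump looks depends on how the local database was created earlier in this series. A hedged sketch, assuming a version number stored alongside the SQLite file (DbVersion, the version file, and DbInitializer are illustrative names, not from the original code):

```csharp
// Illustrative sketch: recreate the local database when the schema version changes.
// FileSystem.AppDataDirectory comes from .NET MAUI (Microsoft.Maui.Storage).
public static class DbInitializer
{
    private const int DbVersion = 2; // bump this after changing the model
    private static readonly string VersionFile =
        Path.Combine(FileSystem.AppDataDirectory, "db.version");

    public static void EnsureUpToDate(DbCtx context)
    {
        int current = File.Exists(VersionFile)
            ? int.Parse(File.ReadAllText(VersionFile))
            : 0;

        if (current < DbVersion)
        {
            context.Database.EnsureDeleted(); // drops existing local data!
            context.Database.EnsureCreated();
            File.WriteAllText(VersionFile, DbVersion.ToString());
        }
    }
}
```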

Step 16. Run the system and add a game, then return to the search:

If the same game is saved again, display a message notifying that it has already been added.

Now, our game is successfully saved in the local database without duplication.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Scalar-Based API Documentation for ASP.NET Core

clock March 6, 2025 05:54 by author Peter

Having well-organized and aesthetically pleasing API documentation is essential in modern web development. For creating visually appealing API documentation for ASP.NET Core applications, Scalar is a great tool. The easy integration of Scalar into an ASP.NET Core project and the creation of API documentation will be covered in this post.

Scalar: What is it?
An open-source tool for API documentation, Scalar offers a user-friendly and aesthetically pleasing interface for testing and researching APIs. It is a straightforward and user-friendly substitute for Redoc and Swagger UI.

Why Use Scalar for API Documentation?

  • User-friendly Interface: Scalar provides a clean and modern UI.
  • Interactive API Testing: Allows developers to test APIs directly from the documentation.
  • Easy Integration: Simple setup process with minimal configuration.
  • Open Source: Free to use and modify according to project needs.

Setting Up Scalar in ASP.NET Core
Step 1. Install Scalar Package
To integrate Scalar into your ASP.NET Core project, you need to install the Scalar NuGet package. Run the following command in the terminal.
Install-Package Scalar.AspNetCore

Step 2. Configure Scalar in Program.cs.
Scalar.AspNetCore exposes its documentation UI as a mapped endpoint on top of an OpenAPI document. Modify Program.cs to generate the document (via the Microsoft.AspNetCore.OpenApi package built into .NET 9) and map the Scalar endpoint.
using Scalar.AspNetCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddOpenApi(); // built-in OpenAPI document generation in .NET 9

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.MapOpenApi();            // serves the OpenAPI document
    app.MapScalarApiReference(); // enables Scalar documentation
}

app.MapControllers();

app.Run();

Step 3. Access API Documentation.
Once the configuration is complete, run your ASP.NET Core application and navigate to the following URL.
https://localhost:<port>/scalar

You should see a beautifully generated API documentation interface.

Customizing Scalar Documentation

Scalar provides customization options to enhance the API documentation experience.

1. Adding API Metadata
You can customize metadata such as the documentation title through the endpoint's fluent options.
app.MapScalarApiReference(options =>
{
    options.WithTitle("My API Documentation");
});


2. Securing API Documentation
You can use authentication middleware to secure the Scalar endpoint and limit access to your API documentation.
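One hedged sketch of restricting access, assuming authentication and authorization are already configured in the app (MapScalarApiReference returns an endpoint convention builder, so standard endpoint metadata such as RequireAuthorization can be chained):

```csharp
// Sketch: only authenticated users can open the Scalar documentation page.
// Assumes UseAuthentication()/UseAuthorization() and an auth scheme are registered.
app.MapScalarApiReference()
   .RequireAuthorization();
```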

Conclusion

With ASP.NET Core, Scalar is a potent tool for producing visually appealing and interactive API documentation. It is a great substitute for more conventional documentation tools like Swagger because of its simplicity of integration, intuitive interface, and customizable features. You may rapidly set up Scalar and enhance your experience with API documentation by following the instructions in this article.



About HostForLIFE.eu

HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2016 Hosting, ASP.NET Core 2.2.1 Hosting, ASP.NET MVC 6 Hosting and SQL 2017 Hosting.

