European ASP.NET 4.5 Hosting BLOG

BLOG about ASP.NET 4, ASP.NET 4.5 Hosting and Its Technology - Dedicated to European Windows Hosting Customer

European ASP.NET Core 9.0 Hosting - HostForLIFE :: .NET 9 : Task.WhenEach

November 19, 2024 07:15 by author Peter

In .NET 9, a new method, Task.WhenEach, has been introduced to streamline asynchronous programming. This method allows you to process tasks as they complete, rather than waiting for all tasks to finish. This is particularly useful in scenarios where tasks have varying completion times and you want to act on each one as soon as it's done.


Step 1. Create a function named PrintWithDelay
async Task<int> PrintWithDelay(int delay)
{
    await Task.Delay(delay);
    return delay;
}


This code defines an asynchronous method named PrintWithDelay that takes an integer delay as input and returns an integer.

async Task<int>

  • async: This keyword indicates that the method is asynchronous, meaning it can yield execution to other tasks while waiting for I/O operations or other asynchronous operations to complete.
  • Task<int>: This specifies the return type of the method. It will return a Task object that, when awaited, will yield the integer result.

await Task.Delay(delay)

  • Task.Delay(delay): This creates a new task that will complete after the specified delay milliseconds.
  • await: This keyword pauses the execution of the current method until the Task.Delay task completes. While waiting, the thread can be used by other tasks.

return delay
Once the delay has elapsed, the method resumes execution and returns the original delay value.

Step 2. Create a list of tasks that will each execute the PrintWithDelay method with different delay values.
List<Task<int>> printTasks =
[
    PrintWithDelay(4000),
    PrintWithDelay(6000),
    PrintWithDelay(2000)
];

  • This declares a list named printTasks to store tasks. Each task in this list will return an integer.
  • There are three calls to the PrintWithDelay method, each with a different delay value (4000, 6000, and 2000 milliseconds, respectively).
  • Each call returns a Task<int>, representing an asynchronous operation that will eventually return an integer.
  • These tasks are added to the printTasks list.

Step 3. Utilize Task.WhenEach in .NET 9.

Task.WhenEach yields an IAsyncEnumerable, allowing asynchronous processing of tasks as they complete.
await foreach (var task in Task.WhenEach(printTasks))
{
    Console.WriteLine(await task);
}

Task.WhenEach(printTasks)

  • This part takes a collection of Task<int> objects (stored in printTasks).
  • It returns an IAsyncEnumerable<Task<int>>. This enumerable represents a sequence of tasks that will complete over time.

await foreach (var task in Task.WhenEach(printTasks))

  • This is an asynchronous foreach loop that iterates over the IAsyncEnumerable returned by Task.WhenEach.
  • The await keyword signifies that the loop will pause execution until the next task in the sequence completes.

In short, the code does the following

  • Schedules Tasks: The printTasks are scheduled to run asynchronously.
  • Processes Completed Tasks: As each task completes, it is yielded from the Task.WhenEach enumerable.
  • Logs Task Completion: The await foreach loop iterates over these completed tasks and, for each one, awaits it and writes its result to the console. A complete example follows.
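
Putting the three steps together, a complete top-level program would look roughly like this (a minimal sketch targeting .NET 9; with these delays the results print in the order 2000, 4000, 6000):

// Top-level program (.NET 9, C# 12 collection expressions)
async Task<int> PrintWithDelay(int delay)
{
    await Task.Delay(delay);
    return delay;
}

List<Task<int>> printTasks =
[
    PrintWithDelay(4000),
    PrintWithDelay(6000),
    PrintWithDelay(2000)
];

// Tasks are yielded in completion order, not in the order they were added.
await foreach (var task in Task.WhenEach(printTasks))
{
    Console.WriteLine(await task);
}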

Output


By leveraging Task.WhenEach, you can write more efficient and responsive asynchronous code in .NET 9. Happy Coding!



European ASP.NET Core 9.0 Hosting - HostForLIFE :: A Beginner's Guide to .NET Core 8 Web API CRUD Operations

November 13, 2024 07:59 by author Peter

Web API development is a crucial part of the modern development landscape because it enables applications that scale and adapt to the requirements of different services and platforms. CRUD operations, which provide the fundamental interface with the application's data, are one of the cornerstones of API development. Starting with CRUD operations makes perfect sense if you are new to .NET Core because it teaches you the basics of creating a Web API and handling data.

This tutorial covers the creation of a .NET Core 8 Web API project and the specification of the endpoints for each CRUD operation. Data handling is done with Entity Framework Core, an ORM designed to let .NET applications communicate with databases. By the end of this article, you will have learned the fundamentals of RESTful design in .NET Core and developed a functional Web API that can manipulate data.

This tutorial is intended both for a novice front-end developer looking to broaden their skills and for an aspiring backend developer with some experience in .NET Core 8 API development. If you fit into either category, you will learn how to begin creating APIs in .NET Core 8. Let's go over this initial phase.

Open Visual Studio 2022 and Choose "Create a new project".

On the Create a new project page, search for "Web API" on the search bar, select the project template and press the "next" button.

On the configuration of the project, enter the Project Name and choose the check box to keep the solution file and project in the same directory.

To configure the Framework version, tick the boxes according to the screenshot, and click "create" on the project's additional details page. To get things started, it will generate the project using the default files.

Initially, the project folder structure looked like this.

First, we need to install the packages required for the ORM to interact with the database. To install the packages, right-click on the solution and choose "Manage NuGet Packages...".

On the NuGet page, search for the two packages below and install version 8 or later of each.
Microsoft.EntityFrameworkCore.Tools
Microsoft.EntityFrameworkCore.SqlServer

Then, right-click on the project, create a new class file named Employee.cs, and paste in the code below.
namespace EmployeePortal.Models.Entities
{
    public class Employee
    {
        public Guid Id { get; set; }
        public required string Name { get; set; }
        public required string Email { get; set; }
        public required string PhoneNumber { get; set; }
        public decimal Salary { get; set; }
    }
}


Then, we need to create a DB Context file for the application that holds the configuration for the ORM and its Entities. Create a file named ApplicationDbContext.cs and paste the below code.
using EmployeePortal.Models.Entities;
using Microsoft.EntityFrameworkCore;
namespace EmployeePortal
{
    public class ApplicationDbContext : DbContext
    {
        public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
            : base(options)
        {
        }
        public DbSet<Employee> Employees { get; set; }
    }
}

Then, we need to add a DB connection string to the appsettings.json file.
"ConnectionStrings": {
  "DefaultConnection": "Server=your_server_name;Database=your_database_name;User Id=your_username;Password=your_password;TrustServerCertificate=True;"
}


Then, we need to register the SQL Server services in our Program.cs file. Add the code below with the other service registrations; it tells the application to use SQL Server with the connection string.
builder.Services.AddDbContext<ApplicationDbContext>(options => options.UseSqlServer(
    builder.Configuration.GetConnectionString("DefaultConnection")));


Now, we add a migration to create a snapshot of our entities, because we are using the Code First approach in EF. We specify the entities and their relationships, and then we run the migration, which creates the database and tables based on those entities and relationships.

Open the package manager console and run the following commands.

  • add-migration "initial one": generates a migration file based on the current state of your data models compared to the database schema.
  • update-database: applies the migration to your database, creating or altering tables, columns, or relationships as defined in the migration file.
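
If you prefer the .NET CLI over the Package Manager Console, the equivalent commands are shown below (assuming the dotnet-ef tool is installed, e.g. via dotnet tool install --global dotnet-ef, and the Microsoft.EntityFrameworkCore.Design package is referenced by the project):
dotnet ef migrations add "initial one"
dotnet ef database update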

Now, if you open SQL Server, you can see the database and the tables.


We need to create a Web API controller to expose the data from the database through endpoints. Right-click on the Controllers folder and choose to add a new Web API Controller.


You can then call the endpoints with the required request data.

Endpoints and their use cases with return types

  • Get All Employees: GET api/employees. Method: GetEmployees(). Purpose: Fetches all employees from the store. Response: Returns an Ok (HTTP 200) status with the list of employees in the body.
  • Get Employee by ID: GET api/employees/{id}. Method: GetEmployeeById(Guid id). Purpose: Retrieves a single employee by its unique ID. Response: Returns an Ok (HTTP 200) status with the employee data when found, or NotFound (HTTP 404) when no matching employee exists (a sketch of this endpoint follows the list).
  • Add a New Employee: POST api/employees. Method: AddEmployee(EmployeeDto employeeDto). Purpose: Creates a new employee in the store from the submitted DTO. Response: Returns a 201 Created status when the employee is successfully added.
  • Update Employee: PUT api/employees/{id}. Method: UpdateEmployee(Guid id, UpdateEmployeeDto employeeDto). Purpose: Finds the employee by its unique ID and updates its information. Response: Returns an Ok (HTTP 200) status when the update succeeds, or NotFound (HTTP 404) when no matching employee exists.
  • Delete Employee: DELETE api/employees/{id}. Method: DeleteEmployee(Guid id). Purpose: Deletes the employee with the given unique ID. Response: Returns NoContent (HTTP 204) when the deletion succeeds, or NotFound (HTTP 404) if no matching employee is found.
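
As a quick illustration, here is a minimal sketch of the GET-by-ID endpoint inside an EmployeesController that receives the ApplicationDbContext created earlier as _dbContext (the remaining endpoints follow the same pattern):
[HttpGet("{id:guid}")]
public IActionResult GetEmployeeById(Guid id)
{
    // Look up the employee by its primary key.
    var employee = _dbContext.Employees.Find(id);
    if (employee == null)
    {
        return NotFound();   // HTTP 404 when no matching employee exists
    }
    return Ok(employee);     // HTTP 200 with the employee in the response body
}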

Conclusion
In this article, we explored a basic implementation of CRUD operations for managing employee data in a .NET Core Web API. By following these steps, we created endpoints to add, retrieve, update, and delete employees, using DTOs to encapsulate and simplify data transfer. This approach helps establish a solid foundation for building RESTful APIs and managing data flow in a secure and organized manner.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Explaining IExceptionFilter in .NET Core

November 4, 2024 12:50 by author Peter

So, let's get started.


Exception Filter in ASP.NET Core
Exception Filter allows us to handle unhandled exceptions that occur while processing an HTTP request within our application. Exception Filters in ASP.NET Core Applications are used for tasks such as Logging Exceptions, Returning Custom Error Responses, and Performing any other necessary action when an exception occurs.

Exception Filter also provides a way to centralize the exception-handling logic and keep it separate from our controller or from the business and data access logic, making our code more organized and maintainable.

IExceptionFilter

IExceptionFilter is an interface in ASP.NET Core that provides a mechanism for handling exceptions that occur during the processing of a request. By implementing IExceptionFilter, you can write custom logic to handle exceptions globally or per controller.

Advantages of IExceptionFilter

  • Centralized Exception Handling: You can manage all your exceptions in a single place, making it easier to maintain and modify your exception-handling strategy.
  • Separation of Concerns: By handling exceptions separately, you keep the error-handling logic away from your business logic, improving code readability and maintainability.
  • Consistent Error Responses: It allows you to standardize the way errors are reported back to the client, which can improve the API's usability. You can return consistent model formats, error codes, and messages.
  • Access to HttpContext: Since filters have access to the `HttpContext`, you can easily log errors, modify responses, or perform any other operation based on the context of the request.
  • Interception of All Exceptions: It can catch exceptions that aren't handled anywhere else, ensuring that your application can respond gracefully to unexpected errors.
  • Custom Logic: You can implement any custom logic needed for exception handling, such as logging specific exceptions differently.

Disadvantages of IExceptionFilter

  • Global Scope: When implemented globally, all exceptions will be handled by the same filter. This might not be desirable if different controllers or actions require different handling strategies.
  • Complex Error Handling Logic: If you have complex error-handling needs, managing too many unique cases in a single filter could lead to convoluted code.
  • Performance Concerns: Introducing additional logic in exception handling can potentially add overhead, especially if the handling involves extensive processing or logging.
  • Limited to Web Context: Unlike middleware, exception filters are limited in scope to the MVC pipeline. They cannot handle exceptions that occur outside of the controller actions, such as in middleware.
  • Difficulty in Testing: Since exception filters are tied to the ASP.NET Core filter pipeline and dependency injection system, they can introduce complexity when writing unit tests, particularly if they depend on the HttpContext.

Implementing IExceptionFilter
Implementing IExceptionFilter can greatly benefit your ASP.NET Core applications by providing structured and centralized exception handling. However, balance must be struck in how it's used to avoid complexity, ensure performance, and maintain flexibility. Choosing the right approach to exception handling may also involve combining it with other options like middleware, custom error pages, or even using logged service responses as needed.
using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;

public class HandleExceptionFilter : IExceptionFilter
{
    private readonly ILogger<HandleExceptionFilter> _logger;

    public HandleExceptionFilter(ILogger<HandleExceptionFilter> logger)
    {
        _logger = logger;
    }

    public void OnException(ExceptionContext filterContext)
    {
        bool isAjaxCall = filterContext.HttpContext.Request.Headers["x-requested-with"] == "XMLHttpRequest";
        filterContext.HttpContext.Session.Clear();

        if (isAjaxCall)
        {
            filterContext.HttpContext.Response.StatusCode = (int)HttpStatusCode.InternalServerError;
            var data = new
            {
                filterContext.Exception.Message,
                filterContext.Exception.StackTrace
            };
            filterContext.Result = new JsonResult(data);
            filterContext.ExceptionHandled = true;
        }

        if (!isAjaxCall)
        {
            filterContext.Result = new RedirectResult("/Error/Error");
        }

        _logger.LogError(GetExceptionDetails(filterContext.Exception));

        // IExceptionFilter is an interface, so there is no base method to call here.
        filterContext.ExceptionHandled = true;
    }

    private string GetExceptionDetails(Exception exception)
    {
        var properties = exception.GetType()
            .GetProperties();
        var fields = properties
            .Select(property => new
            {
                Name = property.Name,
                Value = property.GetValue(exception, null)
            })
            .Select(x => $"{x.Name} = {(x.Value != null ? x.Value.ToString() : String.Empty)}");
        return String.Join("\n", fields);
    }
}


// Register the filter globally in Startup.cs (pre-.NET 6 hosting model)

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers(options =>
    {
        options.Filters.Add<HandleExceptionFilter>();
    });
}

// In .NET 6 and later, the same AddControllers call works on builder.Services in Program.cs.
// Alternatively, register the filter in DI and apply it selectively per controller or action:

builder.Services.AddScoped<HandleExceptionFilter>();
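
When the filter is registered with AddScoped as above, one way to apply it is the [ServiceFilter] attribute on a controller or action; here is a minimal sketch (SampleController is just a hypothetical target):
[ServiceFilter(typeof(HandleExceptionFilter))]
[ApiController]
[Route("api/[controller]")]
public class SampleController : ControllerBase
{
    // Any unhandled exception thrown by this controller's actions
    // is now routed through HandleExceptionFilter.
}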

Then, add a Web API controller named EmployeesController.cs and paste the code below.
using EmployeePortal;
using EmployeePortal.DTO;
using EmployeePortal.Models.Entities;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

namespace EmployeePortal.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class EmployeesController : ControllerBase
    {
        private readonly ApplicationDbContext _dbContext;

        public EmployeesController(ApplicationDbContext dbContext)
        {
            _dbContext = dbContext;
        }

        [HttpGet]
        public IActionResult GetEmployees()
        {
            return Ok(_dbContext.Employees);
        }

        [HttpGet]
        [Route("{id:guid}")]
        public IActionResult GetEmployeeById(Guid id)
        {
            var employee = _dbContext.Employees.Find(id);
            if (employee == null)
            {
                return NotFound();
            }
            return Ok(employee);
        }

        [HttpPost]
        public IActionResult AddEmployee(EmployeeDto employeeDto)
        {
            var employee = new Employee
            {
                Name = employeeDto.Name,
                Email = employeeDto.Email,
                PhoneNumber = employeeDto.PhoneNumber,
                Salary = employeeDto.Salary
            };
            _dbContext.Employees.Add(employee);
            _dbContext.SaveChanges();
            return StatusCode(StatusCodes.Status201Created);
        }

        [HttpPut]
        [Route("{id:guid}")]
        public IActionResult UpdateEmployee(Guid id, UpdateEmployeeDto employeeDto)
        {
            var employee = _dbContext.Employees.Find(id);
            if (employee == null)
            {
                return NotFound();
            }
            employee.Name = employeeDto.Name;
            employee.Email = employeeDto.Email;
            employee.PhoneNumber = employeeDto.PhoneNumber;
            employee.Salary = employeeDto.Salary;
            _dbContext.SaveChanges();
            return Ok(employee);
        }

        [HttpDelete]
        [Route("{id:guid}")]
        public IActionResult DeleteEmployee(Guid id)
        {
            var employee = _dbContext.Employees.Find(id);
            if (employee == null)
            {
                return NotFound();
            }
            _dbContext.Employees.Remove(employee);
            _dbContext.SaveChanges();
            return NoContent();
        }
    }
}


Also, you need to create a new folder for DTOs (Data Transfer Objects), which are used to transfer data between layers or services within an application.
Create two DTO files named EmployeeDto.cs and UpdateEmployeeDto.cs.
// Employee DTO
namespace EmployeePortal.DTO
{
    public class EmployeeDto
    {
        public required string Name { get; set; }
        public required string Email { get; set; }
        public required string PhoneNumber { get; set; }
        public decimal Salary { get; set; }
    }
}

UpdateEmployeeDto.cs
// Update Employee DTO
namespace EmployeePortal.DTO
{
    public class UpdateEmployeeDto
    {
        public string? Name { get; set; }
        public string? Email { get; set; }
        public string? PhoneNumber { get; set; }
        public decimal Salary { get; set; }
    }
}

Then, build and run your application. You will be able to see the endpoints of the application in the browser with the help of Swagger.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Using EF Core, Create a Model with a Database Table in .NET 8

October 29, 2024 07:30 by author Peter

Building scalable and reliable apps is now simpler than ever thanks to .NET 8 and Entity Framework Core (EF Core). Creating models that map to database tables is a crucial step in developing a data-driven application because it enables object-oriented data manipulation. This post will demonstrate how to use .NET 8 and Entity Framework Core to create a model and link it to a database table.

Prerequisites
Before we dive in, make sure you have the following set up.

  • .NET SDK 8.0 or higher.
  • Visual Studio or VS Code installed. In my scenario, I will develop the project in Visual Studio.
  • Basic knowledge of C# and object-oriented programming.

Step 1. Setting Up the Project
First, create a new .NET 8 project. In this example, we'll make an ASP.NET Core Web API project, but the process is similar for other types of .NET applications.
Open Visual Studio and click on Create a new project. Select ASP.NET Core Web API, and click on the Next button.



Name the project E-POS and the solution E-Business, then click the Next button.

After clicking the Create button, the E-POS project is created.

Now, I'm going to create the Models folder in my project. Right-click on your project to open a pop-up menu, then click 'Add,' select 'New Folder,' and name it 'Models.'

Inside the Models folder, create a new class file. Right-click on the Models folder, select 'Add,' then choose 'Class,' click it, and give it the name ‘Product’.

Step 2. Define the Model Class

A model in Entity Framework Core is simply a C# class that defines the structure of a database table. Let's define a simple product model with properties for Id, Name, Price, and CreatedDate.

Inside the Models folder that we created earlier, open Product.cs and add the following code.
using System.ComponentModel.DataAnnotations.Schema;

namespace E_POS.Models
{
    public class Product
    {
        public int Id { get; set; }      // Primary Key
        public required string Name { get; set; }

        [Column(TypeName = "decimal(18, 2)")]
        public decimal Price { get; set; }

        public DateTime CreatedDate { get; set; }
    }
}

‘Id’ is the primary key, which EF Core will map to the table's primary key by convention.
Other properties (Name, Price, and CreatedDate) will be mapped to corresponding columns in the table.
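
If you need to deviate from these conventions, the mapping can also be configured explicitly, either with data annotations (as done for Price above) or with the Fluent API inside the DbContext that we create in Step 4. A minimal sketch (the max length is just an example value):
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Product>(entity =>
    {
        entity.HasKey(p => p.Id);                          // explicit primary key
        entity.Property(p => p.Name).HasMaxLength(100);    // example column constraint
        entity.Property(p => p.Price).HasColumnType("decimal(18, 2)");
    });
}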

Install Required NuGet Packages
There are multiple ways to install the NuGet packages required for Entity Framework Core, depending on your environment and tools. Here's a summary of the different methods you can use:

1. Using the .NET CLI (Command Line Interface)
This method uses the terminal or command prompt. It's a simple and effective way to add packages to your project.

For Microsoft.EntityFrameworkCore
dotnet add package Microsoft.EntityFrameworkCore

For Microsoft.EntityFrameworkCore.Tools
dotnet add package Microsoft.EntityFrameworkCore.Tools

For Microsoft.EntityFrameworkCore.Design
dotnet add package Microsoft.EntityFrameworkCore.Design

For Microsoft.EntityFrameworkCore.SqlServer (to work with SQL Server)
dotnet add package Microsoft.EntityFrameworkCore.SqlServer

2. Using NuGet Package Manager in Visual Studio
Visual Studio offers a graphical interface to manage NuGet packages.

Steps

  • Right-click on your project in Solution Explorer.
  • Select Manage NuGet Packages.
  • In the Browse tab, search for the required packages (e.g., Microsoft.EntityFrameworkCore, Microsoft.EntityFrameworkCore.Tools, etc.).
  • Click Install for each package.

3. Using the NuGet Package Manager Console in Visual Studio
This console allows you to install packages directly within Visual Studio using Install-Package commands (for example, Install-Package Microsoft.EntityFrameworkCore).

Check whether the packages are installed or not.

Step 3. Configure the Connection String
Now, let's configure the database connection string.

Open the appsettings.json file and add your database connection details under the ConnectionStrings section.
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=LAPTOP-A40000TJ\\SQLEXPRESS; Database=EBusiness; Integrated Security=True; Encrypt=True; TrustServerCertificate=True;"
  }
}


This connection string defines how a .NET application connects to a SQL Server database. It specifies,

  • Server: LAPTOP-A40000TJ\SQLEXPRESS (machine name and SQL instance)
  • Database: EBusiness
  • Integrated Security: Uses Windows Authentication.
  • Encrypt: Secures data transmission.
  • TrustServerCertificate: Accepts self-signed SSL certificates.

This connection string assumes you are using SQL Server. Replace the placeholders with your actual database credentials.

Step 4. Create the DbContext Class
The DbContext class in Entity Framework Core manages the database connection and is responsible for querying and saving data. Let’s create the AppDbContext class that represents our database session and exposes the Product model as a DbSet.

In the root of your project, create a folder named Data and add a class file called AppDbContext.cs.
using E_POS.Models;
using Microsoft.EntityFrameworkCore;

namespace E_POS.Data
{
    public class AppDbContext: DbContext
    {
        public AppDbContext(DbContextOptions<AppDbContext> options) : base(options)
        {
        }
        public DbSet<Product> Products { get; set; }
    }
}

DbSet<Product> represents the Products table in the database.
The AppDbContext constructor accepts DbContextOptions to configure the database connection.

Step 5. Register the DbContext in the Program.cs

To ensure that Entity Framework Core can access your DbContext, register it in the Program.cs file. Modify the Program.cs to include AppDbContext.
#region Database Configure

var connectionString = builder.Configuration.GetConnectionString("DefaultConnection");

builder.Services.AddDbContext<AppDbContext>(options => options.UseSqlServer(connectionString));

#endregion


Step 6. Create and Apply Migrations
Migrations allow you to create or modify the database schema to match your models.
In Visual Studio, you can open the Package Manager Console by following these steps.

  • Go to the Tools menu at the top of Visual Studio.
  • Select NuGet Package Manager.
  • Click on Package Manager Console.

 

To make an initial migration and apply it, run the following commands.
Add-Migration InitialCreate
Update-Database



Add-Migration InitialCreate: Creates a new migration based on the Product model and AppDbContext.
Update-Database: Applies the migration to your database, creating the necessary tables.
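
The generated migration lives in the Migrations folder and describes the schema change in Up/Down methods. It looks roughly like the trimmed sketch below (the exact column types and annotations depend on your model and provider):
using System;
using Microsoft.EntityFrameworkCore.Migrations;

namespace E_POS.Migrations
{
    public partial class InitialCreate : Migration
    {
        protected override void Up(MigrationBuilder migrationBuilder)
        {
            // Creates the Products table that maps to the Product entity.
            migrationBuilder.CreateTable(
                name: "Products",
                columns: table => new
                {
                    Id = table.Column<int>(nullable: false)
                        .Annotation("SqlServer:Identity", "1, 1"),
                    Name = table.Column<string>(nullable: false),
                    Price = table.Column<decimal>(type: "decimal(18,2)", nullable: false),
                    CreatedDate = table.Column<DateTime>(nullable: false)
                },
                constraints: table => table.PrimaryKey("PK_Products", x => x.Id));
        }

        protected override void Down(MigrationBuilder migrationBuilder)
        {
            // Reverts the migration by dropping the table.
            migrationBuilder.DropTable(name: "Products");
        }
    }
}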

Let’s check whether the database is created or not.


 



Summary
In this article, we’ve explored how to create a model and map it to a database table in .NET 8 using Entity Framework Core. The process involves defining a model, creating a DbContext, configuring the database connection, and using migrations to keep your database schema in sync with your models.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Mastering API Testing with Api.http Files in .NET Projects

October 24, 2024 08:17 by author Peter

The Api.http file in a .NET project is a text file containing one or more HTTP requests that can be executed directly from the development environment, such as Visual Studio or Visual Studio Code. This file enables developers to quickly test API endpoints without the need for external tools such as Postman or cURL. It is particularly useful when you are debugging, developing, or testing an API. This article will describe what Api.http is and how to use it, along with an example.

What is Api.http?
It’s a simple text file with a .http extension that contains the definitions of HTTP requests (such as GET, POST, PUT, DELETE, etc.).
Purpose: It allows developers to send HTTP requests directly from their code editor for testing API endpoints, simulating client requests, and checking API responses.
File Format: It contains HTTP requests along with headers and request bodies. This is similar to how requests are made using tools like Postman or cURL.

Benefits of using Api.http file

Quick API testing
You can make HTTP requests directly from your development environment without having to switch to external tools such as Postman.
This provides a faster feedback loop during API development.

Easy debugging

You can simulate client requests and test your API directly without writing a separate client or using a browser.
When working with RESTful services, you can easily test endpoints, troubleshoot problems, and validate responses.

Lightweight and simple
The .http file is lightweight and easy to set up. It doesn’t require complex configuration or external dependencies.
You can keep all your requests in one file or split them into different files based on API resources or services.

Version control
Because the .http file is just a text file, you can commit it to version control (like Git) along with your project.

This allows teams to share API requests and helps document the expected API inputs and outputs for specific endpoints.

Multiple requests in one file
You can write multiple requests in a single .http file, separating them with ### comment lines.

This is useful for testing different aspects of an API, for example, retrieving data, posting data, and updating data in one go.

Environment variables (in Visual Studio Code with the REST client)
In VS Code, you can use environment variables to avoid hardcoding values such as API tokens, endpoints, or other dynamic data, making it easier to move between development, staging, and production environments.

Use case example

Suppose you’re working on a .NET API that manages users. You want to test your API endpoints during development. Instead of using Postman or writing a front client, you can use the Api.http file to test different endpoints directly from your code editor.

Example flow
We can

  • Add a GET request to retrieve a list of users.
  • Add a POST request to create a new user with a JSON payload.
  • Add a PUT request to update user information.
  • Add a DELETE request to remove a user.

All of these can be executed directly from the Api.http file. This provides immediate feedback on the functionality of the API.

Structure of http file
The http file contains multiple requests with multiple api calls. Now, let’s go for the structure of the Api.http file. Below is a sample Api.http file.

### Get Users
GET https://api.example.com/users
Authorization: Bearer YOUR_TOKEN

### Create a New User
POST https://api.example.com/users
Content-Type: application/json
Authorization: Bearer YOUR_TOKEN

{
  "name": "John Doe",
  "email": "[email protected]"
}

### Update a User
PUT https://api.example.com/users/123
Content-Type: application/json
Authorization: Bearer YOUR_TOKEN

{
  "name": "Jane Doe",
  "email": "[email protected]"
}

### Delete a User
DELETE https://api.example.com/users/123
Authorization: Bearer YOUR_TOKEN


How to create Api.http file?
If you create a new Web API project in .NET 8 using Visual Studio 2022 with the default Web API project template, it will automatically create a YourProjectName.http file where you can add your API requests.

However, we can add it manually for existing projects as well. To add the .http file manually, follow the steps below:

Step 1. Adding http file
For Visual Studio – Right-click on your project in Solution Explorer.
Select Add > New Item and choose a Text File or File. Rename it to Api.http.

For Visual Studio Code – Right-click on your project folder, select New File, and name it Api.http.

Step 2. Write Http Request
Now, we can add HTTP requests to the Api.http file. Each request can be a GET, POST, PUT, or DELETE with optional headers and body, like in the example above.

Step 3. Running the request
In Visual Studio 2022
Open the Api.http file.
Visual Studio automatically detects the HTTP requests in the file. For each request, you will see a Send Request button below.

Then click on the Send Request button to execute the request.
The response status code, headers, and body will be displayed in the output pane.

In Visual Studio Code
Install the REST Client extension.
Open the Api.http file.
Hover over the HTTP request you want to execute and click on the Send Request button.
The response will be shown in a separate tab or panel.

Step 4. Check the Response
When you send the request, you can view the response in your editor, which includes:

  • HTTP status code, e.g., 200, 404, 500.
  • Response body (JSON, XML, or other formats).
  • Headers such as Content-Type, Authorization, etc.

Below is an example output of the Get Weather Forecast API.


Conclusion
The Api.http file is a powerful tool that allows you to test and debug API endpoints directly within your development environment. It provides a lightweight and integrated alternative to external tools such as Postman and can be a significant time saver when it comes to API development and testing in .NET projects.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Discover How to Use NLog Advanced Features

October 21, 2024 08:55 by author Peter

Logging is an essential part of the application, especially when you want to monitor traffic communicating with an external client that is having some problems connecting to your site. Let's say a very obscure error is happening with regard to third-party software you are using, and the software provider is asking for debug-level logs in order to help you.

So the solution seems simple: add a logging component like NLog and log to a file by configuring the nlog.config file. But your application may not allow such a simple implementation. Complications appear when your application is a multi-tenant deployment, when you can't modify nlog.config or appsettings.json after deployment, or when you need to temporarily enable Debug logging for a specific library only (or any temporary logging, for that matter).

If we just add NLog to a multi-tenant application with five clients, all five clients will start writing logs to the same location, be it a file or a database. And what if the deployed code cannot be modified (for example, a read-only zipped deployment)?

There is a solution for all of the above. NLog libraries provide all the necessary features to resolve the above problems.

The best part is that NLog allows you to push properties scoped to each request and modify the logging behavior just for that scope (per request).

Let's address the issues one by one.

To start, let's use a middleware to set the NLog in a generic manner.

In the middleware class, we will set all the dynamic properties specific to the tenant. In my case, they are a DB connection string and logger filters for specific libraries (like Microsoft, ThirdPartyLib (substitute any namespace you want), Nlog.API (the current application), and everything else (*)).

In the Nlog.config file, we set the logger like the following code (please note mdlc:Nlog_mainDebug). I found that this is the only way to modify the logger dynamically - by using the 'filters' tag.

For loggers
<logger name="Nlog.API*" minlevel="Debug" writeTo="dbTarget" ><!--only works if Nlog_mainDebug is Yes-->
        <filters defaultAction="Log">
            <when condition="'${mdlc:Nlog_mainDebug}' != 'Yes'" action="Ignore" /><!--comparison is case-sensitive-->
        </filters>
</logger>


For the database target we set a connectionString with a dynamic variable (please note mdlc:NLog_DbCon).
<target xsi:type="Database"
        name="dbTarget"
        connectionString="${mdlc:NLog_DbCon}"
        commandText="INSERT INTO NLogs(CreatedOn,Message,Level,Exception,StackTrace,Logger,Url)
                     VALUES (@datetime,@msg,@level,@exception,@trace,@logger,@url)">
    <parameter name="@datetime" layout="${date}" />
    <parameter name="@msg" layout="${message}" />
    <parameter name="@level" layout="${level}" />
    <parameter name="@exception" layout="${exception}" />
    <parameter name="@trace" layout="${stacktrace}" />
    <parameter name="@logger" layout="${logger}" />
    <parameter name="@url" layout="${aspnet-request-url}" />
</target>

In the middleware, the properties are set to specific values (TenantMiddleware.cs).
NLog.ScopeContext.PushProperty("Nlog_Microsoft",Constants_NLogMicrosoft);
NLog.ScopeContext.PushProperty("Nlog_ThirdPartyLib", Constants_NLogThirdPartyLib);
NLog.ScopeContext.PushProperty("Nlog_mainDebug", Constants_NLogMainDebug);
NLog.ScopeContext.PushProperty("Nlog_mainWarning", Constants_NLogMainWarning);
NLog.ScopeContext.PushProperty("Nlog_EverythingElseWarn", Constants_NLogEverythingElseWarn);
NLog.ScopeContext.PushProperty("NLog_DbCon", connectionString);

In this sample, the connection string is static, but it can be retrieved based on the current domain + subdomain dynamically. Depending on the subdomain, the connection string could vary. The same is true for the logger enabling. The source of the values could be dynamic (like a database).
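
Here is a rough sketch of how the middleware could pick the connection string per tenant from the request host (the subdomain parsing and the GetConnectionStringForTenant helper are hypothetical; substitute your own tenant store):
public class TenantMiddleware
{
    private readonly RequestDelegate _next;

    public TenantMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        // e.g. "tenant1.example.com" -> "tenant1"
        var subdomain = context.Request.Host.Host.Split('.')[0];

        // Hypothetical lookup; the value could come from configuration or a database.
        var connectionString = GetConnectionStringForTenant(subdomain);

        // Push the per-request NLog properties used by the nlog.config above.
        NLog.ScopeContext.PushProperty("NLog_DbCon", connectionString);
        NLog.ScopeContext.PushProperty("Nlog_mainDebug", "Yes");

        await _next(context);
    }

    private static string GetConnectionStringForTenant(string tenant) =>
        $"Server=.;Database=NLogs_{tenant};Integrated Security=True;TrustServerCertificate=True;";
}
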
Testing

Try the Cool, Fantastic, and WeatherForecast APIs by making requests to all three controllers.

As a result, you should see the logged database rows.


There are multiple possible scenarios. And here are a few.

In day-to-day logging, I would set only one logger, with Warning level for everything, like:
<logger name="*" minlevel="Warn" writeTo="dbTarget" >

When you need to debug a specific library, additionally enable this:
<logger name="ThirdPartyLib*" minlevel="Debug" writeTo="dbTarget">

And of course, set the DB destination per URL if multi-tenant. If not multi-tenant, for simplicity, set the DB connection in nlog.config instead. I'm demonstrating the most complex case here to see what's possible.

The source code is attached, and more details can be found in the Nlog Multi-tenant Strategy.txt file inside the project. Cheers!



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Minimal APIs or Controllers in ASP.NET Core

October 15, 2024 07:17 by author Peter

In the previous five to six years, .NET has seen substantial evolution. It's now simpler than ever to create applications using MVC or APIs (Application Programming Interfaces). The Minimal APIs feature, new in ASP.NET Core 6, streamlines API development by doing away with the requirement to create controllers, which traditionally sit at the front of APIs. Currently, .NET supports two methods: Minimal APIs (the new method) and Controllers (the old method). But which one should you use?

What are Minimal APIs?
Minimal APIs define endpoints as logical handlers using lambdas or methods. They utilize method injection for services, while controllers use constructor or property injection. In Minimal APIs, each endpoint only requires the specific services it needs. This is in contrast to controllers, where all endpoints within the controller use the same class constructor, which can make the controller "fat" as it grows. Minimal APIs are designed to hide the host class by default and emphasize configuration and extensibility via extension methods that take lambda expressions.

Here is an example of minimal API.

app.MapGet("/weatherforecast", (HttpContext httpContext) =>
{
    var forecast = Enumerable.Range(1, 5).Select(index =>
        new WeatherForecast
        {
            Date = DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
            TemperatureC = Random.Shared.Next(-20, 55),
            Summary = summaries[Random.Shared.Next(summaries.Length)]
        })
        .ToArray();

    return forecast;
});
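
To illustrate the per-endpoint method injection mentioned above, a handler simply declares the services it needs as parameters. Here is a minimal sketch (IWeatherService is a hypothetical service assumed to be registered with builder.Services elsewhere):
app.MapGet("/weatherforecast/{city}", async (string city, IWeatherService weatherService) =>
{
    // Only this endpoint depends on IWeatherService; other endpoints declare
    // their own dependencies independently instead of sharing one constructor.
    var forecast = await weatherService.GetForecastAsync(city);
    return Results.Ok(forecast);
});

// Hypothetical service contract; type declarations go after the top-level statements in Program.cs.
public interface IWeatherService
{
    Task<WeatherForecast[]> GetForecastAsync(string city);
}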

Here are the important points that we should know before jumping into the minimal API. These points are from the Microsoft docs.

Minimal APIs lack built-in support for,

  • Model binding (IModelBinderProvider, IModelBinder). However, support can be added via custom binding shims.
  • Validation (IModelValidator).
  • Application parts or the application model. There is no way to apply or build your conventions.
  • View rendering. For this, it is recommended to use Razor Pages.
  • JsonPatch.
  • OData.

You can work around these limitations by implementing custom solutions for each of these missing features.

Conclusion
Minimal APIs are an excellent way to start building APIs. One practical use case for Minimal APIs is in Vertical Slice Architecture (Vertical Slice Architecture is a design approach where features are implemented end-to-end, encapsulating all layers (UI, business logic, and data access) within self-contained slices.). In this approach, you can define endpoints for each module separately, making management easier. Since each endpoint in Minimal APIs declares the specific services it needs, this approach helps avoid the issue of controller classes becoming "fat" as they grow and more endpoints are added.
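
For the per-module idea mentioned above, endpoint groups (available since .NET 7) are a convenient fit. Here is a minimal sketch (ProductEndpoints and its routes are hypothetical):
public static class ProductEndpoints
{
    public static void MapProductEndpoints(this IEndpointRouteBuilder app)
    {
        // All endpoints of the "products" slice share one prefix and live in one place.
        var group = app.MapGroup("/products");

        group.MapGet("/", () => Results.Ok("list products"));
        group.MapGet("/{id:int}", (int id) => Results.Ok($"product {id}"));
        group.MapPost("/", () => Results.Created("/products/1", new { id = 1 }));
    }
}

// In Program.cs:
// app.MapProductEndpoints();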

For me, minimal APIs are good to go for small-scale projects and they are easy to handle when endpoints grow.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Expression Trees in Real Life: C# Dynamic Filtering with Minimal API

October 7, 2024 08:39 by author Peter

We covered the essentials of expression trees, their use cases, and their restrictions in our prior tutorial. In programming, any topic without a real-world example is hard to make sense of. This article covers the second part of expression trees in C#, along with some practical examples of how they can be used effectively.

What are we going to construct?
Our primary goal is to create an ASP.NET Core Web API with dynamic filtering capabilities using expression trees, EF Core, and a minimal API.
To demonstrate the true potential of expression trees for building intricate and dynamic searches, we will implement filtering over a product database. The final example, with several dynamic filtering arguments, is as follows:

Getting started
First, open Visual Studio and select the ASP.NET Core Web API template with the following configuration:


We use .NET 8.0, but the topic itself doesn't depend on any .NET version; you can even use the classic .NET Framework with expression trees. The project name is "ExpressionTreesInPractice". Here is the generated template from Visual Studio:

To keep storage simple, we will use the EF Core InMemory provider. You can use any other EF Core database provider.

Now go to Tools -> NuGet Package Manager -> Package Manager Console and type the following command:
Install-Package Microsoft.EntityFrameworkCore.InMemory

Now, let’s create our DbContext implementation. Create a folder called ‘Database’ and add a class called ProductDbContext to it with the following implementation:

using ExpressionTreesInPractice.Models;
using Microsoft.EntityFrameworkCore;

namespace ExpressionTreesInPractice.Database
{
    public class ProductDbContext : DbContext
    {
        public DbSet<Product> Products { get; set; }
        public ProductDbContext(DbContextOptions<ProductDbContext> options) : base(options) { }
        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Product>().HasData(new List<Product>
            {
                new Product(){ Id = 1, Category = "TV", IsActive = true, Name = "LG", Price = 500},
                new Product(){ Id = 2, Category = "Mobile", IsActive = false, Name = "Iphone", Price = 4500},
                new Product(){ Id = 3, Category = "TV", IsActive = true, Name = "Samsung", Price = 2500}
            });
            base.OnModelCreating(modelBuilder);
        }
    }
}

We just need some basic initialized data when we run our application, and that is why we need to override OnModelCreating from DbContext. A great example of a template method pattern, isn’t it?
We need our Entity model called Product, and you can create a folder called ‘Models’ and add the Product class to it with the following content:
namespace ExpressionTreesInPractice.Models
{
    public class Product
    {
        public int Id { get; set; }
        public string Category { get; set; }
        public decimal Price { get; set; }
        public bool IsActive { get; set; }
        public string Name { get; set; }
    }
}

It is time to register our DbContext implementation in the Program.cs file:
builder.Services.AddDbContext<ProductDbContext>(x => x.UseInMemoryDatabase("ProductDb"));

By the way, Program.cs has tons of unnecessary code snippets that we need to remove. After the cleaning process, our Program.cs should look like this:
using ExpressionTreesInPractice.Database;
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddDbContext<ProductDbContext>(x => x.UseInMemoryDatabase("ProductDb"));

var app = builder.Build();

// Configure the HTTP request pipeline.
app.UseHttpsRedirection();

app.Run();


We don’t want to use controllers because they are heavy and cause additional problems. That is why we choose to use minimal API. If you don’t know what minimal API is, please refer to our video tutorial to learn more.

After understanding it, open Program.cs and add the following code snippet:
app.MapGet("/products", async ([FromBody] ProductSearchCriteria productSearch, ProductDbContext dbContext) =>
{
    // The filtering logic is added step by step below ([FromBody] requires using Microsoft.AspNetCore.Mvc;).
});


The above code defines a route in a minimal ASP.NET Core API and creates an endpoint for an HTTP GET request to the /products path. The method uses asynchronous programming to handle potentially long-running operations without blocking the main application flow.

ProductSearchCriteria is a parameter passed to the method, which contains the criteria used to filter the products. It's marked with [FromBody], meaning the request body will be bound to this parameter. Usually, GET requests don't use request bodies, but this setup is allowed if you need to pass a complex object.

ProductDbContext is the database context, which represents the session with the database. It's injected into the method, allowing the application to perform operations like querying the database for products based on the search criteria.

The reason for using `ProductSearchCriteria` instead of `Product` is that the query needs to be dynamic. In this case, the user may provide some of the attributes of the `Product`, but not all of them. Since the properties of `Product` are not nullable, the user would be required to provide every property, even if they don't want to filter by all of them.

By using `ProductSearchCriteria`, we allow for more flexibility. It acts as a container for optional and dynamic parameters. The user can choose to provide only the attributes they want to search by, making it a better fit for scenarios where not all product properties are needed in the query.

Here is what our ProductSearchCriteria class looks like in the ‘Models’ folder.
namespace ExpressionTreesInPractice.Models
{
    public record PriceRange(decimal? Min, decimal? Max);
    public record Category(string Name);
    public record ProductName(string Name);
    public class ProductSearchCriteria
    {
        public bool? IsActive { get; set; }
        public PriceRange? Price { get; set; }
        public Category[]? Categories { get; set; }
        public ProductName[]? Names { get; set; }
    }
}

Now, let's focus on our minimal API implementation. Please take into account that the purpose of the current tutorial is not to show the best practices or write clean code. The purpose is to demonstrate Expression trees in practice and after learning the point you can easily refactor the code.

Here is our first code snippet inside the MapGet function:
await dbContext.Database.EnsureCreatedAsync();

ParameterExpression parameterExp = Expression.Parameter(typeof(Product), "x");

Expression predicate = Expression.Constant(true); // x => true && x.IsActive == true/false

if (productSearch.IsActive.HasValue)
{
    MemberExpression memberExp = Expression.Property(parameterExp, nameof(Product.IsActive));
    ConstantExpression constantExp = Expression.Constant(productSearch.IsActive.Value);
    BinaryExpression binaryExp = Expression.Equal(memberExp, constantExp);
    predicate = Expression.AndAlso(predicate, binaryExp);
}

var lambdaExp = Expression.Lambda<Func<Product, bool>>(predicate, parameterExp);

var data = await dbContext.Products.Where(lambdaExp).ToListAsync();

return Results.Ok(data);

This code is using C#'s Expression classes to dynamically build a predicate for querying a database. Let's break it down step by step.

  • await dbContext.Database.EnsureCreatedAsync(); asynchronously ensures that the database is created. If it doesn't exist, it will be created. This is typically used in development or testing environments to ensure the database schema is in place.
  • ParameterExpression parameterExp = Expression.Parameter(typeof(Product), "x"); creates a parameter expression representing an instance of the Product class. This will act as the input parameter (x) in the expression tree, similar to how you define a lambda expression like x => ....
  • Expression predicate = Expression.Constant(true); creates the initial predicate as a constant boolean expression with the value true. This is useful for building the dynamic predicate incrementally, as you can use it as a base to add more conditions (e.g., true AND other conditions). It serves as a starting point for combining additional expressions.
  • if (productSearch.IsActive.HasValue) checks if the IsActive property in productSearch is not null, meaning the user has provided a filter for whether the product is active or not. Inside the if block:
  • MemberExpression memberExp = Expression.Property(parameterExp, nameof(Product.IsActive)); creates a MemberExpression that accesses the IsActive property of the Product instance represented by parameterExp (x.IsActive). Essentially, it represents the expression x => x.IsActive.
  • ConstantExpression constantExp = Expression.Constant(productSearch.IsActive.Value); creates a ConstantExpression with the value of productSearch.IsActive. This represents the value to compare against (true or false).
  • BinaryExpression binaryExp = Expression.Equal(memberExp, constantExp); creates a BinaryExpression comparing the IsActive property with the provided value. This represents x.IsActive == productSearch.IsActive.
  • predicate = Expression.AndAlso(predicate, binaryExp); combines the current predicate (which started as true) with the new condition (x.IsActive == productSearch.IsActive) using a logical AND. This results in an expression that can be used to filter products based on their active status.

Overall, the above code dynamically builds an expression tree that will eventually be used to filter products based on whether they are active. The initial predicate (true) allows additional conditions to be added easily without special handling for the first condition. If productSearch.IsActive is provided, it adds a condition that checks whether the product's IsActive property matches the given value (true or false).

Then, the lambdaExp variable is assigned a lambda expression that represents a filtering function for the Product entities. This lambda expression is created from the predicate built earlier, which may contain conditions like checking whether the product is active (IsActive). The Expression.Lambda<Func<Product, bool>> call generates a Func<Product, bool>, meaning a function that takes a Product as input and returns a boolean value, determining whether the product satisfies the filtering criteria.

Next, this lambda expression is passed to the Where method of the Products DbSet in dbContext. The Where method applies this filter to the product records in the database. It creates a query that retrieves only the products matching the conditions defined in the lambda expression.

Finally, the ToListAsync() method asynchronously executes the query and retrieves the matching products as a list. This list is then returned as part of an HTTP 200 OK response using Results.Ok(data). The result is the filtered list of products, which is sent back as the API's response.

In order to test it, just run the application and send the following GET request with Body via Postman:

This approach is useful for us when building queries dynamically, as it allows the flexibility to add conditions based on which filters are provided.

Here is how your LINQ expression should look after compiling your expression tree:
{x => (True AndAlso (x.IsActive == True))}

So far, we have implemented the easiest property, which has only two values: true or false. But what about other properties like categories, names, and prices? Users may also choose not to filter products by whether they are active but instead, for example, by the category field. We allow users to provide multiple categories at the same time, which is why we implemented it as an array in our ProductSearchCriteria class.

if (productSearch.Categories is not null && productSearch.Categories.Any())
{
    //x.Category
    MemberExpression memberExp = Expression.Property(parameterExp, nameof(Product.Category));
    Expression orExpression = Expression.Constant(false);
    foreach (var category in productSearch.Categories)
    {
        var constExp = Expression.Constant(category.Name);
        BinaryExpression binaryExp = Expression.Equal(memberExp, constExp);
        orExpression = Expression.OrElse(orExpression, binaryExp);
    }
    predicate = Expression.AndAlso(predicate, orExpression);
}


The code is adding dynamic filtering for product categories. It first checks if the `Categories` in the `productSearch` object is not null and contains any items. If so, it proceeds to build a dynamic expression to filter products by category.

It starts by accessing the `Category` property of the `Product` class through an expression. This member expression represents `x => x.Category`, where `x` is an instance of `Product`.

An initial `orExpression` is set to `false`. This will serve as the base for the dynamic category comparison. It uses a loop to iterate over each category in `productSearch.Categories`. For each category, a constant expression with the category name is created, and a binary expression checks if the product's `Category` equals this name.

The binary expressions are then combined using `OrElse`, meaning that if the product matches any of the given categories, the condition becomes true. After processing all categories, the combined `orExpression` is appended to the main `predicate` with `AndAlso`. This means the overall predicate will now check both the previous conditions and whether the product's category matches any of the categories in the search criteria.

This approach allows for dynamically filtering products by multiple categories, and it integrates the category filtering into the existing predicate.

At the end of the last code, you would get a LINQ expression that represents a lambda function used to filter products based on dynamic conditions. This expression can be translated into a predicate for use in a LINQ query, which can be applied to your ProductDbContext or any IQueryable<Product>.

The LINQ expression, in this case, would be a combination of logical operations (AND and OR) that filter products. Specifically, it looks like this in pseudocode:
products.Where(x => (x.Category == "Category1" || x.Category == "Category2" || ...) && other conditions)

If a user provides both filters (IsActive and categories), then we should get the following lambda expression:
{x => ((True AndAlso (x.IsActive == True)) AndAlso (((False OrElse (x.Category == "TV")) OrElse (x.Category == "Some Other")) OrElse (x.Category == "Mobile")))}

We follow the same approach for the Names field. Here is our code snippet:
if (productSearch.Names is not null && productSearch.Names.Any())
{
    //x.Name
    MemberExpression memberExp = Expression.Property(parameterExp, nameof(Product.Name));
    Expression orExpression = Expression.Constant(false);
    foreach (var productName in productSearch.Names)
    {
        var constExp = Expression.Constant(productName.Name);
        BinaryExpression binaryExp = Expression.Equal(memberExp, constExp);
        orExpression = Expression.OrElse(orExpression, binaryExp);
    }
    predicate = Expression.AndAlso(predicate, orExpression);
}


This code snippet dynamically builds a filtering condition for product names using expression trees. It first checks if the `productSearch.Names` property is not null and contains any items. If there are product names to filter by, it proceeds to build an expression for comparing the `Name` property of the `Product` entity.

The `memberExp` expression refers to the `Name` property of the `Product` (`x.Name` in a lambda expression). An initial expression, `orExpression`, is created, starting as `false`. This `orExpression` will be updated in a loop to accumulate comparisons for each name in `productSearch.Names`.

Within the loop, for each name in the `productSearch.Names` collection, a constant expression is created from the product name. A binary expression is then formed to check if the product's `Name` equals the current name from the search. The loop builds up a series of `OR` conditions using `Expression.OrElse`, which creates a logical OR operation between the current `orExpression` and the new comparison.

After the loop, the final `orExpression` represents a chain of OR conditions where the product's `Name` must match one of the names in `productSearch.Names`. This expression is combined with the existing `predicate` using `Expression.AndAlso`, ensuring that the name filter is applied along with any other conditions previously defined in the `predicate`.

Long story short, our block of code dynamically constructs a query filter that matches products based on their `Name`, allowing for multiple possible names from the `productSearch.Names` collection.

If the user provides only Names in the request body, we will get approximately the following lambda expression at the end:
{x => (True AndAlso (((False OrElse (x.Name == "LG")) OrElse (x.Name == "LG2")) OrElse (x.Name == "Samsung")))}

If we get all filter parameters like isActive, categories, and names from the request body, we will get the following lambda expression at the end:
{x => (((True AndAlso (x.IsActive == True)) AndAlso (((False OrElse (x.Category == "TV")) OrElse (x.Category == "Some Other")) OrElse (x.Category == "Mobile"))) AndAlso (((False OrElse (x.Name == "LG")) OrElse (x.Name == "LG2")) OrElse (x.Name == "Samsung")))}

Here is what it looks like when running the application and sending the query:

The final argument for our dynamic filtering is Price. It is a complex object consisting of Min and Max values. The user should be able to provide either of them, both, or neither, which is why we designed it with nullable properties.

Here is what our code implementation looks like:
if (productSearch.Price is not null)
{
    // x.Price
    MemberExpression memberExp = Expression.Property(parameterExp, nameof(Product.Price));
    // x.Price >= min
    if (productSearch.Price.Min is not null)
    {
        var constExp = Expression.Constant(productSearch.Price.Min);
        var binaryExp = Expression.GreaterThanOrEqual(memberExp, constExp);
        predicate = Expression.AndAlso(predicate, binaryExp);
    }
    // x.Price <= max
    if (productSearch.Price.Max is not null)
    {
        var constExp = Expression.Constant(productSearch.Price.Max);
        var binaryExp = Expression.LessThanOrEqual(memberExp, constExp);
        predicate = Expression.AndAlso(predicate, binaryExp);
    }
}

This code dynamically constructs a predicate for filtering products based on their `Price` range using expression trees. It starts by checking if the `productSearch.Price` object is not null, which indicates that a price filter is applied.

The `memberExp` expression is created to represent the `Price` property of the `Product` (`x.Price`). This expression is used to compare the product's price against the minimum and maximum values in the `productSearch.Price` object.

If the minimum price (`productSearch.Price.Min`) is provided (not null), an expression is built to check if the product's `Price` is greater than or equal to this minimum value. This condition is added to the overall `predicate` using `Expression.AndAlso`, meaning the product must satisfy this condition to be included in the results.

Similarly, if the maximum price (`productSearch.Price.Max`) is provided, another expression is constructed to check if the product's `Price` is less than or equal to the maximum value. This condition is also combined with the existing `predicate` using `Expression.AndAlso`, ensuring that both the minimum and maximum price conditions are applied.

Long story short, the code builds a predicate that filters products by a specified price range, ensuring that products have a price greater than or equal to the minimum (if provided) and less than or equal to the maximum (if provided).

If the user provides only Price in the request body, we will get approximately the following lambda expression at the end:
{x => ((True AndAlso (x.Price >= 400)) AndAlso (x.Price <= 5000))}

If we get all filter parameters like IsActive, Categories, Names, and Price from the request body, we will get the following lambda expression at the end:
{x => (((((True AndAlso (x.IsActive == True)) AndAlso (((False OrElse (x.Category == "TV")) OrElse (x.Category == "Some Other")) OrElse (x.Category == "Mobile"))) AndAlso (((False OrElse (x.Name == "LG")) OrElse (x.Name == "LG2")) OrElse (x.Name == "Samsung"))) AndAlso (x.Price >= 400)) AndAlso (x.Price <= 5000))}

Here is what it looks like when running the application and sending the query:

The elegant ending

This article serves as a practical continuation of the previous tutorial on C# expression trees, focusing on their real-world usage within an ASP.NET Core web API. It explores the creation of dynamic filtering functionality using minimal API, Entity Framework Core (EF Core), and expression trees.

The project involves building a product database with dynamic filtering capabilities, such as filtering by product attributes like `IsActive`, `Category`, `Name`, and `Price`. The use of expression trees is highlighted to construct flexible, dynamic queries without hardcoding specific filters.

The setup begins with an ASP.NET Core Web API using an in-memory database for storage, although other EF Core-supported databases could be used. The article emphasizes using minimal API over traditional controllers for simplicity and performance and guides the user through the necessary steps, including setting up the database context (`DbContext`) and initializing data.

One of the core features demonstrated is how expression trees are used to build predicates dynamically. For example, when filtering by the `IsActive` property, the system checks whether the user provided this filter and then dynamically constructs a condition that compares the product's `IsActive` status with the provided value. The process is extended to handle dynamic filtering of other properties such as `Category`, `Name`, and `Price`, each of which allows flexible criteria for querying.

By using expression trees, the article illustrates how complex and flexible queries can be constructed without writing multiple hardcoded query methods. The example of filtering products by `Name` and `Category` demonstrates how logical `OR` conditions can be combined dynamically, depending on user input, resulting in concise and reusable query logic.

Additionally, the price filtering is handled by checking both minimum and maximum values and dynamically adjusting the predicate to include only those products within the specified price range.

In conclusion, this article demonstrates the power of expression trees in building dynamic, flexible queries in C# applications. It provides hands-on code examples of using expression trees to construct queries for an ASP.NET Core web API, offering a practical way to manage complex, real-world scenarios like filtering product databases based on varying user input.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Differences in Span<T> and List<T>

clock October 1, 2024 07:13 by author Peter

The two data structures in .NET, Span<T> and List<T>, serve different purposes and have different characteristics, particularly with regard to memory management and performance. Below, I'll discuss the differences and offer insights into which is better for performance, depending on the use case. Note: If you are a Java fan, think of Span<T> as roughly comparable to an array and List<T> to an ArrayList.

Memory Management

  • Span<T>
    • Span<T> is a stack-only structure that provides a view over a contiguous block of memory (such as an array, memory from the stack, or a portion of an existing array).
    • It does not own the memory but rather operates over existing memory, meaning it does not allocate memory on the heap.
    • Span<T> is lightweight and efficient because it doesn't involve allocations or resizing, and it is used in scenarios where you need to work with slices of arrays or memory buffers without making copies.
  • List<T>
    • List<T> is a heap-allocated collection that dynamically manages a resizable array under the hood.
    • It manages its memory by growing the internal array as needed when new items are added, which incurs allocation and copy costs.
    • List<T> has a lot of flexibility in terms of adding, removing, and accessing elements, but this comes with overhead due to heap allocations and resizing (see the sketch after this list).
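To make the difference concrete, here is a small sketch of mine (not from any particular library) contrasting a span view with a list copy:

int[] numbers = { 1, 2, 3, 4, 5, 6 };

// List<T> built from the array: it allocates its own backing array and copies every element.
List<int> list = new List<int>(numbers);

// Span<T> over the middle of the same array: no allocation, no copy, writes go straight to the array.
Span<int> middle = numbers.AsSpan(2, 3);   // views { 3, 4, 5 }
middle[0] = 30;

Console.WriteLine(numbers[2]);   // 30 -- the span wrote through to the original array
Console.WriteLine(list[2]);      // 3  -- the list copied the data, so it is unaffected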

Mutability and Resizing

  • Span<T>
    • Span<T> cannot be resized. It represents a fixed-size view of existing memory. You cannot add or remove elements from a Span<T>; you can only modify the elements within the given range.
    • If you need to add or remove elements dynamically, Span<T> is not suitable, but it excels in scenarios where the size is fixed and known in advance.
  • List<T>
    • List<T> is dynamically resizable, making it convenient for scenarios where the number of elements is unknown or changes frequently.
    • However, resizing comes with performance costs, as it requires allocating a new array and copying over the elements whenever the capacity of the list is exceeded (illustrated in the sketch after this list).
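A quick illustration of mine (with an intentionally tiny initial capacity) of what resizing costs and what "fixed-size" means:

// List<T> replaces its internal array when the capacity is exceeded.
var list = new List<int>(capacity: 2);
Console.WriteLine(list.Capacity);   // 2

list.Add(1);
list.Add(2);
list.Add(3);                        // exceeds capacity: a larger array is allocated and the items copied
Console.WriteLine(list.Capacity);   // typically 4 (capacity roughly doubles)

// Span<T> keeps the same length for its whole lifetime: you can modify elements, never add or remove them.
Span<int> span = stackalloc int[3];
span[0] = 1;
span[1] = 2;
span[2] = 3;
Console.WriteLine(span.Length);     // always 3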

Performance

  • Span<T>
    • Faster for fixed-size data manipulation: Since Span<T> avoids heap allocations and runs directly on existing memory, it can be faster than List<T> for operations like slicing arrays or working with buffers.
    • Minimal overhead: Because Span<T> is designed to work with stack-allocated data or fixed-length buffers, there is virtually no memory overhead, making it more efficient for memory-constrained operations.
    • Ideal for scenarios where the performance of accessing and manipulating in-memory data is critical (e.g., high-performance applications like games, parsers, or real-time systems).

  • List<T>
    • More overhead due to dynamic resizing: Each time List<T> grows beyond its current capacity, it has to allocate a larger array and copy elements, which impacts performance, especially in scenarios with frequent additions.
    • Despite these costs, List<T> is still performant for general use cases where dynamic size changes are necessary, and the slight performance overhead is acceptable.
    • Access to elements in a List<T> (indexer-based access) is very fast (O(1) time complexity), but modifications like Add (which may trigger a resize) or Remove (which shifts elements) can incur extra costs. A short sketch follows this list.
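To make the allocation argument concrete, here is a small sketch of mine that parses a string with ReadOnlySpan<char> slices instead of Substring calls, so no intermediate strings are allocated:

// Parse "key=value" without allocating intermediate strings.
// string.Substring would allocate a new string for each part;
// AsSpan + Slice only creates lightweight views over the original characters.
string input = "price=400";

ReadOnlySpan<char> span = input.AsSpan();
int separator = span.IndexOf('=');

ReadOnlySpan<char> key = span.Slice(0, separator);      // "price" (a view, no copy)
ReadOnlySpan<char> value = span.Slice(separator + 1);   // "400"   (a view, no copy)

// int.Parse has a ReadOnlySpan<char> overload, so even parsing avoids a string allocation.
int price = int.Parse(value);
Console.WriteLine($"{key.ToString()} -> {price}");      // ToString() materializes a string only here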

Use Cases

  • Span<T>
    • High-performance scenarios: Span<T> is designed for performance-critical code, such as working with buffers, memory manipulation, or slices of arrays where dynamic allocation and copying should be avoided.
    • Memory-efficient processing: If you're working with large datasets (e.g., image processing, networking buffers) where you just need to process data and not store it permanently, Span<T> is a good choice.
    • Fixed-size operations: If you have data that won’t change in size (e.g., you’re reading data into an array and just want to operate on parts of it), Span<T> is perfect.
  • List<T>
    • Dynamic collection handling: List<T> is great when you need to manage a collection whose size changes over time. It’s ideal for situations where elements are frequently added or removed.
    • General-purpose collection: List<T> is a high-level data structure that offers a lot of functionality out of the box, such as sorting, searching, and collection-wide operations like ForEach.
    • Higher-level use cases: If performance isn’t the absolute top priority and you need a flexible collection that grows and shrinks, List<T> is the better choice.

Memory Safety and Stack Limitations

  • Span<T>
    • The Span<T> struct itself always lives on the stack (it is a ref struct), and spans created over stack memory are constrained by the thread's stack size. Stack sizes are typically much smaller than heap sizes, so you can't place large stack-allocated buffers in a Span<T>.
    • However, Span<T> can also reference heap-allocated arrays without copying them. For stack-allocated spans (created with stackalloc), large allocations can lead to stack overflow exceptions (see the sketch after this list).
  • List<T>
    • Since List<T> is heap-allocated, it is not constrained by the stack size. You can store significantly larger amounts of data in a List<T>, although at the cost of dynamic memory management.
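A minimal sketch of mine (with illustrative sizes) of a stack-allocated span versus a heap-backed list:

// A small scratch buffer on the stack: no GC allocation, but limited by the thread's stack size
// (typically around 1 MB), so keep stackalloc sizes small.
Span<byte> buffer = stackalloc byte[256];
buffer.Fill(0xFF);

// The same data in a List<byte> lives on the heap and can grow far beyond stack limits.
var bytes = new List<byte>(capacity: 256);
for (int i = 0; i < 256; i++)
{
    bytes.Add(0xFF);
}

Console.WriteLine($"span length: {buffer.Length}, list count: {bytes.Count}");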

Safety and Lifetime Constraints

  • Span<T>
    • Lifespan constraints: Span<T> is meant to be short-lived and cannot be stored on the heap, which limits its use outside of local scopes.
    • Stack Safety: You can't return a Span<T> from a method if it's referencing stack-allocated memory, as that memory would no longer be valid once the method returns.
  • List<T>
    • No such lifespan restrictions exist for List<T>, as it’s stored on the heap. This makes it easier to pass between methods and store in class fields (see the sketch after this list).
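A minimal sketch of mine of the lifetime rule: returning a span over a heap array is fine, while returning a span over stackalloc memory is rejected by the compiler:

// Returning a span over a heap-allocated array is fine: the array outlives the method.
static Span<int> HeapBackedSpan()
{
    int[] data = { 1, 2, 3 };
    return data.AsSpan();
}

// Returning a span over stack memory is NOT allowed: the stack frame dies when the method returns.
// static Span<int> StackBackedSpan()
// {
//     Span<int> local = stackalloc int[3];
//     return local;   // compile-time error: would expose stack memory outside its scope
// }

Span<int> ok = HeapBackedSpan();
Console.WriteLine(ok[0]);   // 1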

Span<T> vs List<T>

Aspect            | Span<T>                                             | List<T>
Memory Allocation | Stack-allocated or a slice of existing memory       | Heap-allocated, dynamically resizable array
Resizing          | Fixed-size (non-resizable)                          | Dynamically resizable
Performance       | Faster for fixed-size, in-memory operations         | Slower due to resizing and heap allocations
Use Cases         | High-performance scenarios, low-level memory access | General-purpose dynamic collections
Context           | Short-lived, stack-constrained                      | Long-lived, heap-allocated
Memory Safety     | Stack-safe, cannot be heap-allocated                | No stack constraints, heap-based
Thread Safety     | It can be used safely for memory slices             | Not inherently thread-safe without synchronization

When to Choose Span<T> or List<T>?

  • Choose Span<T>
    • If you are working with slices of memory or arrays and need maximum performance with minimal memory overhead.
    • If you know the size of the data and don’t need the collection to grow or shrink dynamically.
    • For high-performance applications (e.g., parsers, network buffers, game engines).
  • Choose List<T>
    • If you need a dynamic, resizable collection where elements will be added and removed frequently.
    • If you need the convenience of built-in operations like searching, sorting, and enumerating.
    • This is for general-purpose applications where performance is important but not critical.

Conclusion
For performance-critical operations involving fixed-size memory manipulation, Span<T> offers significant advantages because it avoids the overhead of heap allocations and resizes. However, if you need flexibility in a dynamic collection, List<T> is more appropriate, even though it has additional overhead.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Versioning the API and Turning on Authorization in the Swagger UI for .NET Core

clock September 26, 2024 07:18 by author Peter

API Versioning
API versioning is the practice of letting multiple versions of an API coexist at the same time. It allows newer clients to use new features and enhancements while preserving backward compatibility for older clients. As your application evolves, it may require breaking changes to the API; versioning helps you manage those changes without affecting existing consumers that depend on previous versions.

There are various ways to implement API versioning in .NET Core web API. The most common ways to implement API versioning are:

  • URL Versioning: This is the most common approach to implementing API versioning. In this technique, the version is part of the endpoint itself.
    For example: /api/v1/purchaseOrders
  • Query String Versioning: In this technique, the version is passed as a query parameter. This approach maintains the same URL path, with the version indicated by an additional query parameter.
    For example: /api/purchaseOrders?api-version=1.0
  • Header Versioning: In this technique, the version is passed in a custom request header rather than in the URL.
    For example: GET /api/purchaseOrders with the header x-api-version: 1.0
  • Media Type Versioning (Accept Header Versioning): In this technique, the version is passed via content negotiation. The client requests a specific version of the API by setting the Accept header to a custom media type.
    For example: GET /api/purchaseOrders with the header Accept: application/vnd.companyname.v1+json

How to implement API versioning?
Let's implement API versioning using URL versioning and enable the token authorization option in Swagger, step by step. For this, I am using .NET Core 8.

Create a web API: Create a .NET Core web API and name it "APIVersioingPOC".
Add required packages: Add the packages below for API versioning using the NuGet Package Manager.

Install-Package Asp.Versioning.Mvc
Install-Package Asp.Versioning.Mvc.ApiExplorer


Create Entity Class: Create a folder with the name "Entity" and add an entity class named "PurchaseDetails"
namespace APIVersioingPOC.Entity
{
    public class PurchaseDetails
    {
        public string ProductName { get; set; }
        public int Rate { get; set; }
        public int Qty { get; set; }
        public int Amount { get; set; }
    }
}


Create Service: Create a folder with the name "Service" and add an interface with the name "IPurchaseOrderService" and a service class with the name "PurchaseOrderService" as below.

using APIVersioingPOC.Entity;

namespace APIVersioingPOC.Service
{
    public interface IPurchaseOrderService
    {
        List<PurchaseDetails> GetPurchaseOrders();
    }
}


using APIVersioingPOC.Entity;

namespace APIVersioingPOC.Service
{
    public class PurchaseOrderService: IPurchaseOrderService
    {
        public List<PurchaseDetails> GetPurchaseOrders()
        {
            return new List<PurchaseDetails>
            {
               new PurchaseDetails { ProductName="Laptop", Rate=80000, Qty=2, Amount=160000},
               new PurchaseDetails { ProductName="Dekstop", Rate=40000, Qty=1, Amount=40000},
               new PurchaseDetails { ProductName="Hard Disk", Rate=4000, Qty=10, Amount=40000},
               new PurchaseDetails { ProductName="Pen Drive", Rate=600, Qty=10, Amount=6000},
            };
        }
    }
}

To resolve the dependency, register the service in Program.cs as below.
// Add custom services
builder.Services.AddScoped<IPurchaseOrderService, PurchaseOrderService>();

Configure the versioning: In Program.cs file, add the below code.
using Asp.Versioning;
using Asp.Versioning.ApiExplorer;

var builder = WebApplication.CreateBuilder(args);

// Add API Explorer that provides information about the versions available
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddApiVersioning(options =>
{
    options.DefaultApiVersion = new ApiVersion(1, 0); // Default API version (v1.0)
    options.AssumeDefaultVersionWhenUnspecified = true; // Assume the default version if not specified
    options.ReportApiVersions = true; // Report API versions in response headers
    options.ApiVersionReader = new UrlSegmentApiVersionReader(); // Use URL segment versioning (e.g., /api/v1/resource)

}).AddApiExplorer(options =>
{
    options.GroupNameFormat = "'v'VVV";
    options.SubstituteApiVersionInUrl = true;
});

builder.Services.AddSwaggerGen();

var app = builder.Build();

app.UseSwagger();
app.UseSwaggerUI(options =>
{
    var provider = app.Services.GetRequiredService<IApiVersionDescriptionProvider>();

    foreach (var description in provider.ApiVersionDescriptions)
    {
        options.SwaggerEndpoint($"/swagger/{description.GroupName}/swagger.json", description.GroupName.ToUpperInvariant());
    }
});
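The article only shows the versioning and Swagger pieces of Program.cs; the usual controller and pipeline wiring is assumed. A minimal skeleton (my sketch, indicating where the snippets above fit) would look roughly like this:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();                        // needed for PurchaseOrderController
// ... AddEndpointsApiExplorer / AddApiVersioning / AddSwaggerGen as shown above ...
// ... AddScoped<IPurchaseOrderService, PurchaseOrderService>() as shown earlier ...

var app = builder.Build();

app.UseSwagger();
// ... UseSwaggerUI with one endpoint per API version, as shown above ...

app.UseHttpsRedirection();
app.UseAuthentication();                                  // required for the [Authorize] attribute used below
app.UseAuthorization();
app.MapControllers();

app.Run();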

ConfigureSwaggerOptions: Create a folder with the name "OpenApi" and add the ConfigureSwaggerOptions class to configure swagger options.
using Asp.Versioning.ApiExplorer;
using Microsoft.Extensions.Options;
using Microsoft.OpenApi.Models;
using Swashbuckle.AspNetCore.SwaggerGen;

namespace APIVersioingPOC.OpenApi
{
    public class ConfigureSwaggerOptions : IConfigureOptions<SwaggerGenOptions>
    {
        private readonly IApiVersionDescriptionProvider _provider;

        public ConfigureSwaggerOptions(IApiVersionDescriptionProvider provider)
        {
            _provider = provider;
        }

        public void Configure(SwaggerGenOptions options)
        {
            foreach (var description in _provider.ApiVersionDescriptions)
            {
                options.SwaggerDoc(description.GroupName, CreateInfoForApiVersion(description));
            }

            // Add token authentication option to pass bearer token
            options.AddSecurityDefinition("Bearer", new OpenApiSecurityScheme
            {
                In = ParameterLocation.Header,
                Description = "Please enter token",
                Name = "Authorization",
                Type = SecuritySchemeType.Http,
                BearerFormat = "JWT",
                Scheme = "bearer"
            });

            // Add security scheme
            options.AddSecurityRequirement(new OpenApiSecurityRequirement
            {
                {
                    new OpenApiSecurityScheme
                    {
                        Reference = new OpenApiReference
                        {
                            Type = ReferenceType.SecurityScheme,
                            Id = "Bearer"
                        }
                    },
                    new string[] { }
                }
            });
        }

        private static OpenApiInfo CreateInfoForApiVersion(ApiVersionDescription apiVersionDescription)
        {
            var info = new OpenApiInfo
            {
                Title = "API Versioning",
                Version = apiVersionDescription.ApiVersion.ToString(),
                Description = "Swagger document for API Versioning.",
            };

            // Add deprecated API description
            if (apiVersionDescription.IsDeprecated)
            {
                info.Description += " This API version has been deprecated.";
            }

            return info;
        }
    }
}


Add the code below to the Program.cs

// Add custom services
builder.Services.AddSingleton<IConfigureOptions<SwaggerGenOptions>, ConfigureSwaggerOptions>();


Create Controller: Create a controller with the name "PurchaseOrderController". For demo purposes, I have created two versions of the same API endpoint.
using APIVersioingPOC.Service;
using Asp.Versioning;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace APIVersioingPOC.Controllers
{
    [ApiController]
    [Route("api/v{version:apiVersion}/[controller]")]
    [ApiVersion("1.0")]
    [ApiVersion("2.0")]
    [Authorize]
    public class PurchaseOrderController : ControllerBase
    {
        private readonly IPurchaseOrderService _purchaseOrderService;

        public PurchaseOrderController(IPurchaseOrderService purchaseOrderService)
        {
            _purchaseOrderService = purchaseOrderService;
        }

        [HttpGet("GetPurchaseOrders")]
        [MapToApiVersion("1.0")]
        public IActionResult GetPurchaseOrders()
        {
            var users = _purchaseOrderService.GetPurchaseOrders();
            return Ok(users);
        }

        [HttpGet("GetPurchaseOrders")]
        [MapToApiVersion("2.0")]
        public IActionResult GetPurchaseOrdersV2()
        {
            var purchaseDetails = _purchaseOrderService.GetPurchaseOrders();
            return Ok(purchaseDetails);
        }
    }
}
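One note on the [Authorize] attribute: the article does not show the authentication setup, so it assumes a bearer scheme is registered in Program.cs. A minimal JWT bearer sketch (my assumption, using the Microsoft.AspNetCore.Authentication.JwtBearer package with placeholder values) could look like this:

using System.Text;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;

// Placeholder issuer, audience, and signing key -- replace with your own configuration.
builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateIssuerSigningKey = true,
            ValidIssuer = "https://your-issuer",
            ValidAudience = "your-audience",
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes("replace-with-a-sufficiently-long-signing-key"))
        };
    });

With something like this in place, the token you paste into the Swagger Authorize dialog is validated against these parameters.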


Let's run the project

For the default version V1, you will get the Swagger document as below.

When you use the V2 option in the "Select a Definition" dropdown box, the Swagger document will appear as shown below.


You can pass the authentication token as below and click on the Authorize button.

In this way, we learned how to implement API versioning and enable authorization in Swagger UI.

Happy Learning!



