European ASP.NET 4.5 Hosting BLOG

BLOG about ASP.NET 4, ASP.NET 4.5 Hosting and Its Technology - Dedicated to European Windows Hosting Customer

European ASP.NET Core 9.0 Hosting - HostForLIFE :: How to Start Using .NET Aspire?

clock March 25, 2025 07:36 by author Peter

.NET Aspire: What is it?
.NET Aspire is a contemporary, opinionated framework created to simplify the development of cloud-native and microservices-based applications within the .NET ecosystem. It offers a cohesive approach to building, deploying, and managing distributed applications with minimal hassle.

.NET Aspire is built to solve common challenges developers face when working with microservices and cloud-native architectures, such as:

  • Service Orchestration: Managing multiple services seamlessly within a single environment.
  • Built-in Observability: Automatically integrating logging, tracing, and metrics for monitoring application health.
  • Configuration Management: Simplifying configuration across multiple services.
  • Dependency Injection: Enabling easy integration of external services like databases, message brokers, and caching mechanisms.
  • Cloud-Native Compatibility: Supporting seamless deployment to cloud environments, including Azure, AWS, and Kubernetes.

By leveraging .NET Aspire, developers can focus more on business logic and less on infrastructure concerns, resulting in faster development cycles, improved maintainability, and better performance.
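To make this concrete, here is a minimal sketch of what Aspire orchestration looks like in an AppHost project. This is an illustrative example, not taken from the article: the project name MyApi and the Redis resource are assumptions, and the AddRedis call requires the Aspire.Hosting.Redis package.

```csharp
// AppHost Program.cs - a minimal sketch of .NET Aspire orchestration.
// Assumes an Aspire AppHost project that references a web project named
// "MyApi" and the Aspire.Hosting.Redis package (both are assumptions).
var builder = DistributedApplication.CreateBuilder(args);

// Declare a Redis container as a resource Aspire will start and monitor.
var cache = builder.AddRedis("cache");

// Register the API project and inject the Redis connection into its configuration.
builder.AddProject<Projects.MyApi>("api")
       .WithReference(cache);

// Run the distributed application; the Aspire dashboard shows logs,
// traces, and metrics for every declared resource.
builder.Build().Run();
```

Running the AppHost starts all declared resources together, which is the built-in orchestration and observability described above.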

What Was There Before .NET Aspire vs. Now?

Before .NET Aspire, developers working with microservices in .NET had to rely on multiple tools and frameworks to achieve the functionality Aspire now provides natively. Some challenges and comparisons include:

  • Service Orchestration
    • Before: Developers manually orchestrated microservices using custom scripts, Kubernetes, or third-party tools.
    • Now: .NET Aspire provides a built-in orchestration model, reducing the complexity of managing services.
  • Observability (Logging, Tracing, and Metrics)
    • Before: Developers had to integrate multiple libraries (like OpenTelemetry, Serilog, and Prometheus) separately.
    • Now: Aspire includes built-in observability, making logging, tracing, and metrics easy to implement.
  • Configuration Management
    • Before: Configuration was handled manually via environment variables, JSON files, or external providers.
    • Now: Aspire offers a structured approach to configuration management, reducing manual effort and errors.
  • Service-to-Service Communication
    • Before: Developers had to implement gRPC, HTTP clients, or messaging systems manually.
    • Now: Aspire simplifies service communication with built-in abstractions.
  • Deployment and Cloud Readiness
    • Before: Deploying microservices required setting up infrastructure using Docker, Kubernetes, or cloud-specific tools.
    • Now: Aspire provides streamlined deployment options, making cloud-native application development more accessible.

A Quick Comparison Table

| Feature | Before .NET Aspire | With .NET Aspire |
|---|---|---|
| Service Orchestration | Manual setup using Kubernetes or scripts | Built-in orchestration model |
| Observability (Logging, Tracing, Metrics) | Manual integration of OpenTelemetry, Serilog, Prometheus, etc. | Native support for logging, tracing, and monitoring |
| Configuration Management | Environment variables, JSON, third-party libraries | Structured configuration approach |
| Service-to-Service Communication | Manually implemented using gRPC, HTTP clients, etc. | Simplified with built-in abstractions |
| Deployment & Cloud Readiness | Custom Docker/Kubernetes setup | Streamlined cloud-native deployment |

With these improvements, .NET Aspire significantly reduces the development overhead and allows teams to focus more on building features rather than configuring infrastructure.

Why .NET Aspire for Modern .NET Developers?
As software development shifts towards microservices and cloud-first approaches, developers need tools that enable efficiency, scalability, and maintainability. .NET Aspire addresses these needs by offering:

  • Seamless Orchestration: Manage multiple services effortlessly within a unified environment.
  • Built-in Observability: Gain insights into application performance with logging and tracing.
  • Simplified Service Integration: Easily connect microservices and external dependencies.
  • Cloud-Ready Architecture: Deploy applications to cloud platforms with minimal configuration.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Features of WebMethod and ScriptMethod in .NET Webforms

clock March 17, 2025 07:55 by author Peter

The WebMethod and ScriptMethod attributes let .NET Web Forms applications expose server-side methods to client-side scripts such as JavaScript and jQuery AJAX calls. The technique uses a Shared function on the server-side page: the Shared keyword together with the WebMethod attribute exposes the function as a web service endpoint that receives POST requests and returns a response. The same kind of page method can also be used to exchange data in JSON format. Below is an example of the code.

Server End code

Imports System.Web.Services
Imports System.Web.Script.Services

Public Class WebCall
    Inherits System.Web.UI.Page

    ' Shared (static) page method exposed to client-side AJAX calls.
    <WebMethod()>
    <ScriptMethod()>
    Public Shared Function GetServerTime() As String
        Return DateTime.Now.ToString()
    End Function

    Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
        ' No server-side logic needed for this example.
    End Sub
End Class


From the client side, an OnClick event sends a request to fetch the server date and time in the response. The client-side code example is below.
<asp:Content ID="BodyContent" ContentPlaceHolderID="MainContent" runat="server"><main>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
    <script type="text/javascript">
        function callWebMethod() {
            alert("callweb");
            $.ajax({
                type: "POST",
                url: "WebCall.aspx/GetServerTime",
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (msg) {
                    alert(msg.d);
                    $("#lblTime").text(msg.d);
                },
                error: function (xhr, status, error) {
                    alert("Error: " + xhr.responseText);
                }
            });
        }
    </script>
    <div>
        <button type="button" onclick="callWebMethod()">Get current server time</button>
        <p><label id="lblTime"></label></p>
    </div>
    </main>
</asp:Content>


Output



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Real-Time .NET 9 with ML.NET Anomaly Detection in Server Logs

clock March 10, 2025 07:55 by author Peter

Server logs are a wealth of information: they can tell you a great deal about user behavior, system performance, and emerging problems. Identifying those problems manually, however, is challenging, especially when there are many logs. Anomaly detection can help with that. Anomaly detection is a machine learning technique that automatically identifies outliers. Within the .NET ecosystem, developers can create robust anomaly detection systems using .NET 9 and ML.NET, Microsoft's open-source machine learning framework. Using a real-world example, we will examine the capabilities of ML.NET and how to apply it to identify issues in server logs with .NET 9.

What is ML.NET?
ML.NET is a powerful, cross-platform machine learning framework designed for .NET developers. It allows you to create, train, and deploy custom machine learning models directly in C# or F# without requiring extensive data science expertise. Launched by Microsoft, ML.NET supports a wide range of scenarios:

  • Classification: Binary (e.g., spam detection) or multi-class (e.g., categorizing support tickets).
  • Regression: Predicting future values (e.g., sales forecasting).
  • Clustering: Grouping similar data points (e.g., segmenting customers).
  • Anomaly Detection: Finding outliers in datasets (e.g., identifying irregularities in server logs).
  • Time-Series Analysis: Finding trends and outliers in sequential data.
  • Recommendation Systems: Suggesting products or content based on user behavior.

ML.NET’s strengths include its integration with .NET tools like Visual Studio, support for pre-trained models (e.g., ONNX, TensorFlow), and the Model Builder GUI for simplified development. Whether you’re processing small datasets or scaling to enterprise-level applications, ML.NET offers flexibility and performance, making it an ideal choice for embedding intelligence into .NET 9 projects.

Project Overview: Detecting Error Spikes in Server Logs
We’re going to create a .NET 9 console application that uses ML.NET to find unusual activity in server log data. We’re especially looking at error counts over time. This example is like a real-world situation where a sudden increase in errors could mean there’s a problem with the server. This would let administrators fix the problem before it gets worse.

Step 1. Setting Up the Environment
Please find the complete source code: Click Here

To begin, ensure you have,

  • .NET 9 SDK: Installed from the official Microsoft site.
  • Visual Studio Code (or Visual Studio): For coding and debugging. I will be using VS Code for this project.
  • ML.NET Packages: Added via NuGet.

Let’s start. If you have the C# Dev Kit extension, you can create the project in VS Code, or use the commands below.

Create a new console application.
dotnet new console -n AnomalyDetection -f net9.0
cd AnomalyDetection


Add the ML.NET NuGet packages.
dotnet add package Microsoft.ML
dotnet add package Microsoft.ML.TimeSeries

Step 2. Defining the Data Models.
Create a Models folder and add two classes.
LogData.cs: Represents server log entries with timestamps and error counts.
namespace AnomalyDetection.Models;

public record LogData
{
    public DateTime Timestamp { get; set; }
    public float ErrorCount { get; set; }
}


AnomalyPrediction.cs: Represents the model’s output, indicating whether an anomaly is detected.
using Microsoft.ML.Data;

namespace AnomalyDetection.Models;

public record AnomalyPrediction
{
    // DetectIidSpike outputs a 3-element vector per row: [alert flag, raw score, p-value]
    [VectorType(3)]
    public double[] Prediction { get; set; } = [];
}

AnomalyResult.cs: Represents the output result from the model.
namespace AnomalyDetection.Models;

public record AnomalyResult
{
    public DateTime Timestamp { get; set; }
    public float ErrorCount { get; set; }
    public bool IsAnomaly { get; set; }
    public double ConfidenceScore { get; set; }
}

Step 3. Implementing Anomaly Detection Logic with ML.NET
Create a Services folder and add AnomalyDetectionTrainer.cs.
using System;
using AnomalyDetection.Models;
using Microsoft.ML;

namespace AnomalyDetection.Services;

public class AnomalyDetectionTrainer
{
    private readonly MLContext _mlContext;
    private ITransformer _model;

    public AnomalyDetectionTrainer()
    {
        _mlContext = new MLContext(seed: 0);
        TrainModel();
    }

    private void TrainModel()
    {
        // Simulated training data (in practice, load from a file or database)
        var data = GetTrainingData();

        var dataView = _mlContext.Data.LoadFromEnumerable(data);

        // Define the anomaly detection pipeline
        var pipeline = _mlContext.Transforms.DetectIidSpike(
            outputColumnName: "Prediction",
            inputColumnName: nameof(LogData.ErrorCount),
            confidence: 95.0, // confidence is a double, not an int
            pvalueHistoryLength: 5);

        // Train the model
        _model = pipeline.Fit(dataView);

    }

    public List<AnomalyResult> DetectAnomalies(List<LogData> logs)
    {
        var dataView = _mlContext.Data.LoadFromEnumerable(logs);
        var transformedData = _model.Transform(dataView);
        var predictions = _mlContext.Data.CreateEnumerable<AnomalyPrediction>(transformedData, reuseRowObject: false);

        return logs.Zip(predictions, (log, pred) => new AnomalyResult
        {
            Timestamp = log.Timestamp,
            ErrorCount = log.ErrorCount,
            IsAnomaly = pred.Prediction[0] == 1, // alert flag
            ConfidenceScore = pred.Prediction[1] // raw score
        }).ToList();
    }

    //dummy training data
    private List<LogData> GetTrainingData()
    {
        return new List<LogData>(){
            new() { Timestamp = DateTime.Now.AddHours(-5), ErrorCount = 2 },
            new() { Timestamp = DateTime.Now.AddHours(-4), ErrorCount = 3 },
            new() { Timestamp = DateTime.Now.AddHours(-3), ErrorCount = 2 },
            new() { Timestamp = DateTime.Now.AddHours(-2), ErrorCount = 50 }, // Anomaly: Spike!
            new() { Timestamp = DateTime.Now.AddHours(-1), ErrorCount = 4 },
            new() { Timestamp = DateTime.Now.AddHours(-6), ErrorCount = 2 }
        };
    }
}

Explanation

_mlContext: An instance of MLContext, the core ML.NET object for managing data, models, and transformations. It’s initialized with a seed (0) for reproducible results.
_model: An ITransformer object that holds the trained anomaly detection model, applied later for predictions.
private void TrainModel()
{
    var data = GetTrainingData();
    var dataView = _mlContext.Data.LoadFromEnumerable(data);
    var pipeline = _mlContext.Transforms.DetectIidSpike(
        outputColumnName: "Prediction",
        inputColumnName: nameof(LogData.ErrorCount),
        confidence: 95.0,
        pvalueHistoryLength: 5
    );
    _model = pipeline.Fit(dataView);
}


Purpose: Trains the anomaly detection model using simulated data.

Steps

  • Data Loading: Calls GetTrainingData to fetch dummy server log data, then converts it into an IDataView using LoadFromEnumerable.
  • Pipeline Definition: Uses DetectIidSpike from MLContext.Transforms to create a pipeline for detecting anomalies in independent and identically distributed (IID) data:
    • outputColumnName: “Prediction”: Names the output column for anomaly results.
    • inputColumnName: nameof(LogData.ErrorCount): Specifies the input data (error counts).
    • confidence: 95.0: Sets a 95% confidence level for anomaly detection.
    • pvalueHistoryLength: 5: Defines a sliding window of 5 data points to evaluate anomalies.
  • Training: Fits the pipeline to the data, producing a trained _model.

DetectAnomalies Method
public List<AnomalyResult> DetectAnomalies(List<LogData> logs)
{
    var dataView = _mlContext.Data.LoadFromEnumerable(logs);
    var transformedData = _model.Transform(dataView);
    var predictions = _mlContext.Data.CreateEnumerable<AnomalyPrediction>(transformedData, reuseRowObject: false);
    return logs.Zip(predictions, (log, pred) => new AnomalyResult
    {
        Timestamp = log.Timestamp,
        ErrorCount = log.ErrorCount,
        IsAnomaly = pred.Prediction[0] == 1,
        ConfidenceScore = pred.Prediction[1]
    }).ToList();
}


Step 4. Check the anomalies in the logs with the above service
Update Program.cs
using AnomalyDetection.Models;
using AnomalyDetection.Services;

//create a dummy log data
var logs = new List<LogData>(){
    new() { Timestamp = DateTime.Now.AddHours(-5), ErrorCount = 2 },
    new() { Timestamp = DateTime.Now.AddHours(-4), ErrorCount = 3 },
    new() { Timestamp = DateTime.Now.AddHours(-3), ErrorCount = 2 },
    new() { Timestamp = DateTime.Now.AddHours(-2), ErrorCount = 50 },
    new() { Timestamp = DateTime.Now.AddHours(-1), ErrorCount = 4 },
    new() { Timestamp = DateTime.Now.AddHours(-6), ErrorCount = 2 }
};

//create an instance of the AnomalyDetectionTrainer
var trainer = new AnomalyDetectionTrainer();

//detect anomalies
var results = trainer.DetectAnomalies(logs);

//print the results
foreach (var result in results)
{
    Console.WriteLine($"Timestamp: {result.Timestamp}, ErrorCount: {result.ErrorCount}, IsAnomaly: {result.IsAnomaly}, ConfidenceScore: {result.ConfidenceScore}");
}

Let's run the console app and validate the results.
The project structure is as shown below.




European ASP.NET Core 9.0 Hosting - HostForLIFE :: Scalar-Based API Documentation for ASP.NET Core

clock March 6, 2025 05:54 by author Peter

Well-organized and visually appealing API documentation is essential in modern web development. Scalar is a great tool for creating such documentation for ASP.NET Core applications. This post covers how to integrate Scalar into an ASP.NET Core project and generate API documentation.

Scalar: What is it?
Scalar is an open-source API documentation tool that offers a user-friendly, visually appealing interface for exploring and testing APIs. It is a straightforward alternative to Swagger UI and Redoc.

Why Use Scalar for API Documentation?

  • User-friendly Interface: Scalar provides a clean and modern UI.
  • Interactive API Testing: Allows developers to test APIs directly from the documentation.
  • Easy Integration: Simple setup process with minimal configuration.
  • Open Source: Free to use and modify according to project needs.

Setting Up Scalar in ASP.NET Core
Step 1. Install Scalar Package
To integrate Scalar into your ASP.NET Core project, install the Scalar NuGet package along with Microsoft.AspNetCore.OpenApi, which generates the OpenAPI document that Scalar renders. Run the following commands in the Package Manager Console.
Install-Package Scalar.AspNetCore
Install-Package Microsoft.AspNetCore.OpenApi

Step 2. Configure Scalar in Startup.cs.
Modify the Startup.cs file to register the OpenAPI document generator and map the Scalar endpoint. Note that Scalar.AspNetCore exposes an endpoint extension, MapScalarApiReference, rather than a middleware.
using Scalar.AspNetCore;

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddOpenApi(); // from Microsoft.AspNetCore.OpenApi; generates the OpenAPI document
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseRouting();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
        endpoints.MapOpenApi();            // serves the OpenAPI JSON document
        endpoints.MapScalarApiReference(); // enables the Scalar documentation UI
    });
}

Step 3. Access API Documentation.
Once the configuration is complete, run your ASP.NET Core application and navigate to the following URL.
https://localhost:<port>/scalar

You should see a beautifully generated API documentation interface.

Customizing Scalar Documentation

Scalar provides customization options to enhance the API documentation experience.

1. Adding API Metadata
You can customize the documentation, such as the page title, when mapping the Scalar endpoint.
endpoints.MapScalarApiReference(options =>
{
    options.WithTitle("My API Documentation");
});


2. Securing API Documentation
You can use authentication middleware to secure the Scalar endpoint and limit access to your API documentation.
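As a hedged sketch (assuming the MapScalarApiReference extension from Scalar.AspNetCore and an already-configured authentication scheme), the documentation endpoint can be restricted like any other ASP.NET Core endpoint:

```csharp
// Restrict the Scalar documentation endpoint to authenticated users.
// Assumes authentication and authorization middleware are already registered
// in the pipeline; the authorization policy name is up to your project.
endpoints.MapScalarApiReference()
         .RequireAuthorization();
```

This keeps the API documentation out of reach of anonymous visitors while leaving the rest of the setup unchanged.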

Conclusion

Scalar is a powerful tool for producing visually appealing, interactive API documentation with ASP.NET Core. Its simple integration, intuitive interface, and customization options make it a great alternative to more conventional documentation tools such as Swagger UI. By following the steps in this article, you can quickly set up Scalar and improve your API documentation experience.



About HostForLIFE.eu

HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2016 Hosting, ASP.NET Core 2.2.1 Hosting, ASP.NET MVC 6 Hosting and SQL 2017 Hosting.

