European ASP.NET 4.5 Hosting BLOG

BLOG about ASP.NET 4, ASP.NET 4.5 Hosting and Its Technology - Dedicated to European Windows Hosting Customer

European ASP.NET Core Hosting :: File Upload in Angular 7

clock September 20, 2019 10:52 by author Scott

In this article, we will see how to upload a file to the server using an Angular 7 app in an ASP.NET Core project. Let us create a new project in Visual Studio 2017. We are using ASP.NET Core 2 and Angular 7 for this project.

Prerequisites:

  • Node JS must be installed
  • Angular CLI must be installed
  • Basic knowledge of Angular

Let’s get started.

Create a new project and name it FileUploadInAngular7

Select the Angular template under .NET Core and uncheck “Configure for HTTPS”

Your project will be created. Right-click on ClientApp and open it in File Explorer

Type cmd in the File Explorer address bar, as shown in the picture.

Run npm install in the command prompt

Now it will take some time and all the node_modules will be installed in the project.

Create a Web API controller as UploadController.

Add the following code in it.

using System.IO;
using System.Net.Http.Headers;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc;
namespace FileUploadInAngular7.Controllers
{
    [Produces("application/json")]
    [Route("api/[controller]")]
    public class UploadController : Controller
    {
        private IHostingEnvironment _hostingEnvironment;
        public UploadController(IHostingEnvironment hostingEnvironment)
        {
            _hostingEnvironment = hostingEnvironment;
        }
        [HttpPost, DisableRequestSizeLimit]
        public ActionResult UploadFile()
        {
            try
            {
                var file = Request.Form.Files[0];
                string folderName = "Upload";
                string webRootPath = _hostingEnvironment.WebRootPath;
                string newPath = Path.Combine(webRootPath, folderName);
                if (!Directory.Exists(newPath))
                {
                    Directory.CreateDirectory(newPath);
                }
                if (file.Length > 0)
                {
                    string fileName = ContentDispositionHeaderValue.Parse(file.ContentDisposition).FileName.Trim('"');
                    string fullPath = Path.Combine(newPath, fileName);
                    using (var stream = new FileStream(fullPath, FileMode.Create))
                    {
                        file.CopyTo(stream);
                    }
                }
                return Json("Upload Successful.");
            }
            catch (System.Exception ex)
            {
                return Json("Upload Failed: " + ex.Message);
            }
        }
    }
}

Navigate to the ClientApp/src/app/home folder and open both files (home.component.html and home.component.ts).

In home.component.html write the following code.

<h1>File Upload Using Angular 7 and ASP.NET Core Web API</h1>
<input #file type="file" multiple (change)="upload(file.files)" />
<br />
<span style="font-weight:bold;color:green;" *ngIf="progress > 0 && progress < 100">
  {{progress}}%
</span>
<span style="font-weight:bold;color:green;" *ngIf="message">
  {{message}}
</span>

In the home.component.ts file, write the following code.

import { Component, OnInit } from '@angular/core';
import { HttpClient, HttpRequest, HttpEventType, HttpResponse } from '@angular/common/http';
@Component({
  selector: 'app-home',
  templateUrl: './home.component.html',
})
export class HomeComponent {
  public progress: number;
  public message: string;
  constructor(private http: HttpClient) { }
  upload(files) {
    if (files.length === 0)
      return;
    const formData = new FormData();
    for (let file of files)
      formData.append(file.name, file);
    const uploadReq = new HttpRequest('POST', `api/upload`, formData, {
      reportProgress: true,
    });
    this.http.request(uploadReq).subscribe(event => {
      if (event.type === HttpEventType.UploadProgress)
        this.progress = Math.round(100 * event.loaded / event.total);
      else if (event.type === HttpEventType.Response)
        this.message = event.body.toString();
    });
  }
}

Run the project and you will see the following output.



European ASP.NET Core Hosting :: How to Fix “An error occurred while starting the application” in ASP.NET Core on IIS

clock July 5, 2019 07:33 by author Scott

.NET Core is one of Microsoft's latest products, and Microsoft keeps updating its technologies. We have written a tutorial about how to publish ASP.NET Core on an IIS server, but we know that some of you sometimes receive an error when deploying your ASP.NET Core app. Frustrated trying to fix the issue? You are not the only one with this headache; some of our clients experience the same problem. That's why we wrote this tutorial, and we hope it helps you fix your issue!

Have you seen this error? Having problems solving it? We want to help you here!

The Problem

 

It basically means something bad happened with your application. Here are the things you need to check:

  • You might not have the correct .NET Core version installed on the server.
  • You might be missing DLLs
  • Something went wrong in your Program.cs or Startup.cs before any exception handling kicked in

If you use Windows Server, you may notice that there is nothing in your Event Viewer log either. Why? This is because Event Logging must be wired up explicitly and you'll need to use the Microsoft.Extensions.Logging.EventLog package, and depending on the error, you might not even have a chance to catch it and log it to the Event Viewer.
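If you do want errors in the Event Viewer, a minimal sketch of wiring up the Event Log provider in Program.cs could look like the following (this assumes an ASP.NET Core 2.x-style host builder and the Microsoft.Extensions.Logging.EventLog package; it is Windows-only and only catches errors thrown after logging is configured):

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureLogging(logging =>
        {
            // Event Log provider from Microsoft.Extensions.Logging.EventLog (Windows only)
            logging.AddEventLog();
        })
        .UseStartup<Startup>();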

How to Fix it?

So, how do you fix the error above? The following are the steps to fix it:

1. Open your web.config

2. Change stdoutLogEnabled=true

3. Create a logs folder

  • Unfortunately, the AspNetCoreModule doesn’t create the folder for you by default
  • If you forget to create the logs folder, an error will be logged to the Event Viewer that says: Warning: Could not create stdoutLogFile \\?\YourPath\logs\stdout_timestamp.log, ErrorCode = -2147024893.
  • The “stdout” part of the value “.\logs\stdout” actually references the filename prefix, not the folder, which is a bit confusing.

4. Run your request again, then open the \logs\stdout_*.log file

Note – you will want to turn this off after you’re done troubleshooting, as it is a performance hit.

So your web.config’s aspNetCore element should look something like this

 <aspNetCore processPath=".\YourProjectName.exe" stdoutLogEnabled="true" stdoutLogFile=".\logs\stdout" />

Doing this will log all the requests to this file, and when an exception occurs, it will give you the full stack trace of what happened in the \logs\stdout_*.log file.

  

 

 

Hope this helps!

Looking for ASP.NET Core hosting with affordable cost?

We know it is quite hard to find reliable hosting that supports ASP.NET Core. We believe that some of you have signed up for a hosting plan that turned out to be below your expectations. Maybe they don't support the latest ASP.NET Core, or they support .NET Core but their support team is not competent. We are here to help you. If you are looking for cheap, reliable ASP.NET Core hosting with the best support, then you can visit our site at https://www.hostforlife.eu.



European ASP.NET Core Hosting :: Alert Dialog From Controller Without JavaScript In View

clock May 9, 2019 12:27 by author Peter

We can show an alert dialog in the browser from the controller without using any JavaScript in the view, which saves time and makes popping up dynamic data much faster.
 
Displaying an alert dialog popup can be done from the controller, i.e., from the server side, and it is very useful when you want to display an alert with much less code.
 
In your controller, use code like the below; the TempData message is set just before returning the view.

public ActionResult SmartRegister(csUser model)  
 {               
     User us = new User();  
     rfSocietyEntities db = new rfSocietyEntities();  
     if (ModelState.IsValid)  
     {  
         int count = db.Users.Where(a => a.Email.Equals(model.Email)).Count();  
         if (count == 0)  
         {  
             us.Admin = model.Admin;  
             us.Email = model.Email;  
             us.FullName = model.FullName;  
             us.Password = model.Password;  
             us.PhoneNo = model.PhoneNo;  
             db.Users.Add(us);  
             db.SaveChanges();  
             return RedirectToAction("Dashboard", "Dashboard");  
         }  
         else  
         {  
             TempData["msg"] = "<script>alert('Email id already registered.');</script>";  
             return View (model);  
         }  
     }  
     else  
     {  
         TempData["msg"] = "<script>alert('Please Check Data entered or try later.');</script>";  
         return View(model);  
     }  
}    
In your View file, add the below code.
  @Html.Raw(TempData["msg"])




European ASP.NET Core Hosting - HostForLIFE.eu :: ASP.NET Core Security Headers

clock April 30, 2019 11:13 by author Peter

With the help of headers, your website could send some useful information to the browser. Let’s see how it is possible to add more protection to your website.
To add a header for each request, we can use middleware.

XSS and CSP
Still in the OWASP Top 10, there is XSS - the Cross-Site Scripting attack. Sure, it helps a lot to encode symbols before displaying text on the website (using any one of HtmlEncoder, JavaScriptEncoder, and UrlEncoder). And it's better never to use @Html.Raw(). But it is also possible to add a header that will tell the browser to stop XSS attacks. This kind of header is useful mostly for old browsers.
app.Use(async (context, next) =>  
{  
    context.Response.Headers.Add("X-Xss-Protection", "1");  
    await next();  
}); 


For new browsers, it is better to use CSP. Here is how it is possible to add the CSP header.
app.Use(async (context, next) =>  
{  
    context.Response.Headers.Add(  
        "Content-Security-Policy",  
        "default-src 'self'; " +  
        "img-src 'self' myblobacc.blob.core.windows.net; " +  
        "font-src 'self'; " +  
        "style-src 'self'; " +  
        "script-src 'self' 'nonce-KIBdfgEKjb34ueiw567bfkshbvfi4KhtIUE3IWF' " +  
        " 'nonce-rewgljnOIBU3iu2btli4tbllwwe'; " +  
        "frame-src 'self';" +  
        "connect-src 'self';");  
    await next();  
});  


In this example, scripts (.js files) are allowed to run only from the current website (that is the meaning of 'self'), plus the two inline scripts specified with the "nonce" attribute that are inserted in the page inside <script> tags. For example, if you are using some script like this one inside your page.
<script>  
function showMessage() {  
alert("Just for demo");  
}   
</script>  

Then, you will not be able to run this script without adding 'unsafe-inline' to your CSP definition.

But adding 'unsafe-inline' means leaving your website unprotected. So it's better to move the script into a .js file or to use a nonce. Just add a nonce attribute with some random value to your script tag. For example,
<script nonce="KUY8VewuvyUYVEIvEFue4vwyiuf"> </script>  

Then, you can add the value 'nonce-KUY8VewuvyUYVEIvEFue4vwyiuf' to your CSP script-src, and you will be able to run scripts from exactly this <script> section.

'unsafe-inline' is also related to events that are added to your HTML as attributes, like onclick, onchange, onkeydown, and onfocus. For example, instead of the following onclick event, you should add an id or class to your element and attach the handler from a <script> block or a .js file.
<p onclick="showMessage()">Show message</p>  

Like this,
<p id="message-text">Show message</p>  

<script nonce="KUY8VewuvyUYVEIvEFue4vwyiuf">  
$(document).ready(function() {  
    $("#message-text").click(function() {  
        alert("Just for demo");  
    });   
});  
</script>  


X-Frame-Options
By default, it is possible to display your website inside an iframe. But with one small header, it is possible to disallow this. Why? Because someone could display your website inside a frame and place a transparent layer over it. Users would think they are clicking your website's buttons/links, but in reality they would be clicking items placed in the transparent layer. And since cookies could still be in the user's browser, some operations could be authenticated. This kind of attack is called Clickjacking. Here is a header to protect your website from this attack.
context.Response.Headers.Add("X-Frame-Options", "DENY");  

Content sniffing
At the link File Upload XSS, you can find a fairly recent sample of how it is possible to inject JavaScript into an SVG file. If a file like this were located on a server with content sniffing protection enabled, the JavaScript wouldn't run, because the svg extension doesn't correspond to JS content. Hopefully you believe me now that the next header is required.
context.Response.Headers.Add("X-Content-Type-Options", "nosniff");  

Referrer-Policy
One of the headers that is automatically added by browsers is “Referer”. It contains the site from which the user was transferred. Sometimes that is convenient for analytics. But sometimes the URL could contain private information that is better not disclosed.

If you don't want browsers to expose your website as the last visited site in the “Referer” header, use Referrer-Policy: no-referrer.
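Written the same way as the other headers above, this is a single middleware line:
context.Response.Headers.Add("Referrer-Policy", "no-referrer");  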

Here is an example of all headers in one middleware.
app.Use(async (context, next) =>  
{  
    context.Response.Headers.Add("X-Xss-Protection", "1");  
    context.Response.Headers.Add("X-Frame-Options", "DENY");  
    context.Response.Headers.Add("Referrer-Policy", "no-referrer");  
    context.Response.Headers.Add("X-Content-Type-Options", "nosniff");  
    context.Response.Headers.Add(  
        "Content-Security-Policy",  
        "default-src 'self'; " +  
        "img-src 'self' myblobacc.blob.core.windows.net; " +  
        "font-src 'self'; " +  
        "style-src 'self'; " +  
        "script-src 'self' 'nonce-KIBdfgEKjb34ueiw567bfkshbvfi4KhtIUE3IWF' " +  
        " 'nonce-rewgljnOIBU3iu2btli4tbllwwe'; " +  
        "frame-src 'self';" +  
        "connect-src 'self';");  
    await next();  
});  


Sure, you can read more about each header and change its value to something more appropriate for your needs.
Strict-Transport-Security

To activate Strict-Transport-Security, a web security policy mechanism that helps protect your website from protocol downgrade attacks and cookie hijacking, add the following to your middleware pipeline (or just don't remove it):
app.UseHsts();  

This middleware will add the “Strict-Transport-Security” header.
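The defaults can also be adjusted in ConfigureServices; here is a minimal sketch (the values below are only an example, not a recommendation from this article):
services.AddHsts(options =>  
{  
    options.MaxAge = TimeSpan.FromDays(365);  
    options.IncludeSubDomains = true;  
    options.Preload = true;  
});  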

Removing Server Header
Sometimes, headers could provide some information that is better to hide. To disable the Server header from Kestrel, you need to set AddServerHeader to false. Use UseKestrel() if your ASP.NET Core version is lower than 2.2 and ConfigureKestrel() if not.
WebHost.CreateDefaultBuilder(args)  
     .UseKestrel(c => c.AddServerHeader = false)  
     .UseStartup<Startup>()  
     .Build();
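
For ASP.NET Core 2.2 or later, an equivalent sketch using ConfigureKestrel() would be:
WebHost.CreateDefaultBuilder(args)  
     .ConfigureKestrel((context, options) => options.AddServerHeader = false)  
     .UseStartup<Startup>()  
     .Build();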



European ASP.NET Core Hosting :: Create A Typed HttpClient With HttpClientFactory

clock April 23, 2019 10:57 by author Peter

HttpClient is used for sending HTTP requests and receiving HTTP responses from a resource identified by a URI. But HttpClient has some issues. To read more on the issues of HttpClient, you can check this link. In the .NET Core 2.1 release, Microsoft introduced a new way of designing HttpClients to solve these issues, called HttpClientFactory. HttpClientFactory is an opinionated factory, available since .NET Core 2.1, for creating HttpClient instances in our applications. This means that we can create HttpClients, register them with the HttpClientFactory in our application, and leverage the dependency injection capabilities of .NET Core to inject these HttpClients into our code. HttpClientFactory allows us to no longer care about the lifecycle of the HttpClient by leaving it to the framework.
 
There are three ways to use HttpClientFactory to instantiate HttpClients.

  • Default client
  • Named client
  • Typed client

In order to use the factory, we need to register it in the DI container. So, we need to use an extension method AddHttpClient() on IServiceCollection interface in our Startup.cs class. This will allow us to inject the HttpClient in our class constructors.
 
In this article, we will see how to create a Typed HttpClient using the HttpClient factory in a .NET core MVC application and use it for making HTTP calls. I prefer Typed HttpClient over the other two because,

  • As the name suggests, typed clients provide type safety.
  • Typed clients help in encapsulating the API calls when we make use of the HttpClient in one place, thus making our code DRY (Don't Repeat Yourself). The other two will scatter the implementation details of making HTTP calls throughout the codebase.

We will make a simple MVC application to learn the workings of typed HttpClient. This application will receive the name of a movie and call a REST API to fetch the details of that movie and shall display it to the user. I will be using Visual Studio Code for developing the application. The REST API used for fetching the movie details is the OMDB API. The OMDB API is a RESTful web service to obtain movie information. This is a free API with a 1000 requests per day limit for a user. We need an API key for accessing this API. To know more details about this API you can check their website.

Create a folder called MovieFinder and open it in VS Code. Create an MVC application by running the following command in the terminal.
    dotnet new mvc --name MovieFinder 

This shall create a basic .NET Core MVC application. Now, let’s create a View Model class to hold the data from the OMDB API. So, let’s add a class named MovieDetailModel.
    public class MovieDetailModel 
    { 
        public string Title { get; set; } 
        public string Year { get; set; } 
        public string Director { get; set; } 
        public string Actors { get; set; } 
        public string IMDBRating { get; set; } 
        public string PosterImage { get; set; } 
        public string Plot { get; set; } 
    } 


Now, we need to create an interface for our typed client. Let's name it IMovieDetailsClient.
    public interface IMovieDetailsClient 
    { 
        Task<MovieDetailModel> GetMovieDetailsAsync(string movieName); 
    } 


This interface contains a single method, GetMovieDetailsAsync, which accepts the movie name as the parameter and shall return the details of that movie. Now we need to create a class which implements this interface. This class shall contain the actual logic of calling the OMDB API to fetch the movie details.
    using System; 
    using System.Net.Http; 
    using System.Threading.Tasks; 
    using Newtonsoft.Json.Linq; 
     
    public class MovieDetailsClient : IMovieDetailsClient 
    { 
        private readonly HttpClient _httpClient; 
     
        public MovieDetailsClient(HttpClient httpClient) 
        { 
            httpClient.BaseAddress = new Uri("http://www.omdbapi.com/"); 
            _httpClient = httpClient; 
        } 
     
        public async Task<MovieDetailModel> GetMovieDetailsAsync(string movieName) 
        { 
            var queryString = $"?t={movieName}&apikey=<your-api-key>"; 
            var response = await _httpClient.GetStringAsync(queryString); 
     
            JObject json = JObject.Parse(response); 
     
            if (json.SelectToken("Response").Value<string>() == "True") 
            { 
     
                var movieDetails = new MovieDetailModel 
                { 
                    Title = json.SelectToken("Title").Value<string>(), 
                    Year = json.SelectToken("Year").Value<string>(), 
                    Director = json.SelectToken("Director").Value<string>(), 
                    Actors = json.SelectToken("Actors").Value<string>(), 
                    IMDBRating = json.SelectToken("imdbRating").Value<string>(), 
                    PosterImage = json.SelectToken("Poster").Value<string>(), 
                    Plot = json.SelectToken("Plot").Value<string>() 
                }; 
     
                return movieDetails; 
            } 
     
            return new MovieDetailModel 
            { 
                Title = movieName 
            }; 
        } 
    } 


In this class, we inject the HttpClient in our class constructor and set the base address of our OMDB API endpoint. We also implement GetMovieDetailsAsync method declared in our interface. We called the OMDB API from our method and mapped the API response to our view model and returned it. I have used the JSON.NET library for parsing the response from the API.
Note that I have hardcoded the API base address and the API key in the code. This is not a good practice. We should always prefer moving these kinds of values to configuration files and reading them from the configuration in our code.
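As a rough sketch only (the configuration keys below are hypothetical, not from this article), those values could live in appsettings.json and be read through an injected IConfiguration:

    // appsettings.json (hypothetical section): 
    // "Omdb": { "BaseAddress": "http://www.omdbapi.com/", "ApiKey": "<your-api-key>" } 
     
    // requires: using Microsoft.Extensions.Configuration; 
    public MovieDetailsClient(HttpClient httpClient, IConfiguration configuration) 
    { 
        httpClient.BaseAddress = new Uri(configuration["Omdb:BaseAddress"]); 
        _apiKey = configuration["Omdb:ApiKey"]; // assumes a new private string _apiKey field 
        _httpClient = httpClient; 
    } 

GetMovieDetailsAsync could then build its query string from the injected key instead of the hardcoded one.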
 
We have now created our typed client. Now let's create the view for user interaction. In the Index.cshtml file in the Views\Home folder, replace the existing code with the following code.
    @{ 
        ViewData["Title"] = "Home Page"; 
    } 
     
    @model MovieDetailModel 
     
    <form  
        asp-controller="Home"  
        asp-action="Submit" 
        method="post"  
        class="form-horizontal"  
        role="form"> 
     
        <div class="form-group"> 
            <label for="Title">Title</label> 
            <input  
                class="form-control"  
                placeholder="Enter Title" 
                asp-for="Title">    
        </div> 
        <button type="submit" class="btn btn-primary">Submit</button> 
    </form> 
     
     
    @{ 
        <br> 
         
        if(!string.IsNullOrEmpty(Model?.Year)) 
        { 
            var title = Model.Title; 
            var year = Model.Year;   
            var message = $"{title} was released in {year}";  
      
            <br> 
            <div class="card" style="width: 18rem;"> 
                <img class="card-img-top" [email protected] alt="Poster Not Available"> 
                <div class="card-body"> 
                    <h5 class="card-title">@Model.Title (@Model.Year)</h5> 
                    <p class="card-text">@Model.Plot</p>               
                </div> 
                <ul class="list-group list-group-flush"> 
                    <li class="list-group-item"><strong>Director : @Model.Director</strong></li> 
                    <li class="list-group-item"><strong>Actors : @Model.Actors</strong></li> 
                    <li class="list-group-item"><strong>Rating : @Model.IMDBRating</strong></li> 
                </ul> 
            </div>      
        } 
         
        if(Model != null && string.IsNullOrEmpty(Model?.Year)) 
        { 
            <div class="alert alert-danger" role="alert"> 
                <strong>Sorry!! Requested Movie Details are not available..</strong>  
            </div> 
        } 
    } 


Now, we need to add the Controller code for accepting the movie name from the view and for displaying the movie details. For that, we need to inject the typed client we had created into the constructor of our controller. So, we need to register this typed client with the HttpClient factory in our Startup.cs class. Add the following code in the ConfigureServices method in the startup class.
    services.AddHttpClient<IMovieDetailsClient, MovieDetailsClient>(); 

Now, let's add our controller methods. In the HomeController make the changes as below.
    public class HomeController : Controller 
    { 
        private readonly IMovieDetailsClient _movieDetailsClient; 
     
        public HomeController(IMovieDetailsClient movieDetailsClient) 
        { 
            _movieDetailsClient = movieDetailsClient; 
        } 
     
        public IActionResult Index() 
        { 
            return View(); 
        } 
     
        [HttpPost] 
        public async Task<IActionResult> Submit(MovieDetailModel model) 
        { 
            var movieDetail = await _movieDetailsClient.GetMovieDetailsAsync(model.Title); 
            return View("Index", movieDetail); 
        }       
    } 


We are injecting our IMovieDetailsClient into the constructor of our controller and assigning it to a read-only field _movieDetailsClient. We have also defined an action named Submit which takes the title of the movie from the view as a parameter. This method uses our typed HttpClient to fetch the details of that movie and returns the view with those details.

Now, run the application. Execute the command dotnet run in the terminal. Open a browser and navigate to https://localhost:5001/.

 



European ASP.NET Core Hosting :: RESTful WebAPI With Onion Architecture

clock April 9, 2019 11:29 by author Peter

Hello friends, here I will show you how to create a WebApi with the following characteristics:

  • ASP.NET Core 2.1
  • EntityFramework
  • FluentValidation
  • Nlogger
  • Swagger
  • Jwt

Let's start. First create an empty project, then add the following folders:

  • Application
  • Domain
  • Service
  • Infrastructure

Then, in the Domain folder, we create a .NET Core 2.1 class library project named WebApi.Domain and add the following dependency:

FluentValidation.AspNetCore

In this project, we add the following folders:

  • Dtos
  • Entities
  • Interfaces

In the Entities folder, we create the BaseEntity class:

    namespace WebApi.Domain.Entities 
    { 
        public abstract class BaseEntity 
        { 
            public virtual int Id { get; set; } 
        } 
    } 

Our project classes will inherit the field Id from this abstract class (if you want you can add other fields like CreatedAt or CreatedBy).
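For instance, a variant with those audit fields might look like this (the extra properties are just an illustration and are not used later in the article):

    using System; 
     
    namespace WebApi.Domain.Entities 
    { 
        public abstract class BaseEntity 
        { 
            public virtual int Id { get; set; } 
            public DateTime CreatedAt { get; set; } 
            public string CreatedBy { get; set; } 
        } 
    } 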

Then we create the Country class with the properties that define a Country.

    namespace WebApi.Domain.Entities 
    { 
        public class Country : BaseEntity 
        { 
            public string Name { get; set; } 
            public int Population { get; set; } 
            public decimal Area { get; set; } 
            public string ISO3166 { get; set; } 
            public string DrivingSide { get; set; } 
            public string Capital { get; set; } 
     
        } 
    } 

Now, to make the exercise more interesting, we are going to assume that we do not want to expose the entire Country class. In the Dtos folder, we create the following CountryDensityDTO class.

    using System; 
    using WebApi.Domain.Entities; 
     
    namespace WebApi.Domain.Dtos 
    { 
        public class CountryDensityDTO : BaseEntity 
        { 
            public string Name { get; set; } 
            public string Capital { get; set; } 
            public decimal Area { get; set; } 
            public int Population { get; set; } 
     
     
            public int Populationdensity 
            { 
                get 
                { 
                    return Decimal.ToInt32(Population / Area); 
                } 
            } 
        } 
    }

This class exposes Name, Capital, Area, Population and a calculated field Populationdensity.
Now we will continue with the Infrastructure layer and then we will finish the missing parts. We go to the Infrastructure folder and create a .NET Core 2.1 class library. We name it WebApi.Infrastructure.Data.

We add the following Packages:

  • Microsoft.EntityFrameworkCore.SqlServer 2.1.4
  • Microsoft.EntityFrameworkCore.Tools 2.1.4
  • Microsoft.Extensions.Identity.Stores 2.1.1
  • Microsoft.VisualStudio.Web.CodeGeneration.Design 2.1.5
  • Add Project reference WebApi.Domain 

We create the following folders:

  • Context
  • EntityDbMapping
  • Repository

In the Context folder, we add the SqlServerContext class. We refer to our Country entity with a DbSet to work with the database. As we work with the Code First approach, we will create a mapping for our Country entity in the database:
"modelBuilder.Entity<Country>(new CountryMap().Configure);"

Optionally, in this part we can also add seed data when creating a table.

using Microsoft.AspNetCore.Identity.EntityFrameworkCore; 
using Microsoft.EntityFrameworkCore; 
using WebApi.Domain.Entities; 
using WebApi.Infrastructure.Data.EntityDbMapping; 

namespace WebApi.Infrastructure.Data.Context 
{ 
    // ApplicationUser is the Identity user class (added when Identity is configured) 
    public class SqlServerContext : IdentityDbContext<ApplicationUser> 
    { 
        public DbSet<Country> Country { get; set; } 

        public SqlServerContext(DbContextOptions<SqlServerContext> options) : base(options) 
        { 
        } 

        protected override void OnModelCreating(ModelBuilder modelBuilder) 
        { 
            base.OnModelCreating(modelBuilder); 
            modelBuilder.Entity<Country>(new CountryMap().Configure); 
            // ModelBuilderExtensions.Seed(modelBuilder); 
        } 
    } 

    // Seed data for the first time the table is created 
    public static class ModelBuilderExtensions 
    { 
        public static void Seed(this ModelBuilder modelBuilder) 
        { 
            modelBuilder.Entity<Country>().HasData( 
                new Country 
                { 
                    Id = 1, 
                    Name = "Venezuela", 
                    Population = 300000000, 
                    Area = 230103 
                }, 
                new Country 
                { 
                    Id = 2, 
                    Name = "Peru", 
                    Population = 260000000, 
                    Area = 33249 
                } 
            ); 
        } 
    } 
} 

In the EntityDbMapping folder, we create the CountryMap class. In this class, we define the physical representation of the properties of the Country class as fields in the database table.

    using Microsoft.EntityFrameworkCore;   
    using Microsoft.EntityFrameworkCore.Metadata.Builders;   
    using WebApi.Domain.Entities;   
       
    namespace WebApi.Infrastructure.Data.EntityDbMapping   
    {   
        public class CountryMap : IEntityTypeConfiguration<Country>   
        {   
            public void Configure(EntityTypeBuilder<Country> builder)   
            {   
                builder.ToTable("Country");   
       
                builder.HasKey(c => c.Id);   
       
                builder.Property(c => c.Name)   
                    .IsRequired()   
                    .HasColumnName("Name")   
                    .HasColumnType("varchar(150)");   
       
                builder.Property(c => c.Population)   
                    .IsRequired()   
                    .HasColumnType("int")   
                    .HasColumnName("Population");   
       
                builder.Property(c => c.Area)   
                    .IsRequired()   
                    .HasColumnType("decimal(14,2)")   
                    .HasColumnName("Area");   
       
                builder.Property(c => c.ISO3166)   
                .IsRequired()   
                .HasColumnType("varchar(3)")   
                .HasColumnName("ISO3166");   
       
                builder.Property(c => c.DrivingSide)   
                .IsRequired()   
                .HasColumnType("varchar(50)")   
                .HasColumnName("DrivingSide");   
       
                builder.Property(c => c.Capital)   
                .IsRequired()   
                .HasColumnType("varchar(50)")   
                .HasColumnName("Capital");   
            }   
       
        }   
    }   

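The article stops before wiring the context into the application, but as a minimal sketch (assuming a connection string named "DefaultConnection" in appsettings.json, which is not shown here), the registration in Startup.ConfigureServices could look like this:

public void ConfigureServices(IServiceCollection services) 
{ 
    // "DefaultConnection" is an assumed key name; use whatever you add to appsettings.json 
    services.AddDbContext<SqlServerContext>(options => 
        options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"))); 
} 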

This is it for now.
In the next chapter, we will implement validations with FluentValidation. We will also configure Mapper to use it with our DTOs and will implement Identity using Jwt.



European ASP.NET Core Hosting :: How to Use Auth Cookies in ASP.NET Core

clock March 29, 2019 11:49 by author Scott

Cookie-based authentication is the popular choice to secure customer facing web apps. For .NET programmers, ASP.NET Core has a good approach that is worth looking into. In this take, I will delve deep into the auth cookie using ASP.NET Core 2.1. Version 2.1 is the latest LTS version as of the time of this writing. So, feel free to follow along, I’ll assume you’re on Visual Studio or have enough C# chops to use a text editor. I will omit namespaces and using statements to keep code samples focused. If you get stuck, download the sample code found at the end.

With ASP.NET Core 2.1, you can use cookie-based authentication out of the box. There is no need for additional NuGet packages. New projects include a metapackage that has everything, which is Microsoft.AspNetCore.App. To follow along, type dotnet new mvc in a CLI or do File > New Project in Visual Studio.

For those of you who come from classic .NET, you may be aware of the OWIN auth cookie. You will be happy to know those same skills transfer over to ASP.NET Core quite well. The two implementations remain somewhat similar. With ASP.NET Core, you still configure the auth cookie, set up middleware, and set identity claims.

Setup

To begin, I’ll assume you know enough about the ASP.NET MVC framework to gut the scaffolding into a skeleton web app. You need a HomeController with an Index, Login, Logout, and Revoke action methods. Login will redirect to Index after it signs you in, so it doesn’t need a view. I’ll omit showing view sample code since views are not the focus here. If you get lost, be sure to download the entire demo to play with it.

I’ll use debug logs to show critical events inside the cookie authentication. Be sure to enable debug logs in appsettings.json and disable Microsoft and system logs.

My log setup looks like this:

"LogLevel": {
  "Default": "Debug",
  "System": "Warning",
  "Microsoft": "Warning"
}

Now you’re ready to build a basic app with cookie authentication. I’ll forgo HTML forms with a user name and password input fields. These front-end concerns only add clutter to what is more important which is the auth cookie. Starting with a skeleton app shows how effective it is to add an auth cookie from scratch. The app will sign you in automatically and land on an Index page with an auth cookie. Then, you can log out or revoke user access. I want you to pay attention to what happens to the auth cookie as I put authentication in place.

Cookie Options

Begin by configuring auth cookie options through middleware inside the Startup class. Cookie options tell the authentication middleware how the cookie behaves in the browser. There are many options, but I will only focus on those that affect cookie security the most.

  • HttpOnly: A flag that says the cookie is only available to servers. The browser only sends the cookie but cannot access it through JavaScript.
  • SecurePolicy: This limits the cookie to HTTPS. I recommend setting this to Always in prod. Leave it set to None in local.
  • SameSite: Indicates whether the browser can use the cookie with cross-site requests. For OAuth authentication, set this to Lax. I am setting this to Strict because the auth cookie is only for a single site. Setting this to None does not set a cookie header value.

There are cookie options for both the auth cookie and a global cookie policy. Stay alert since the cookie policy can override auth cookie options and vice versa. Setting HttpOnly to false in the auth cookie will override cookie policy options. While setting SameSite in the cookie policy overrides auth cookie options. In my demo, I’ll illustrate both scenarios, so it is crystal clear how this works.

In the Startup class, find the ConfigureServices method and type:

services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
  .AddCookie(options =>
  {
    options.Cookie.HttpOnly = true;
    options.Cookie.SecurePolicy = _environment.IsDevelopment()
      ? CookieSecurePolicy.None : CookieSecurePolicy.Always;
    options.Cookie.SameSite = SameSiteMode.Lax;
  });

This creates the middleware service with the AddAuthentication and AddCookie methods. AuthenticationScheme is useful when there is more than one auth cookie. Many instances of the cookie authentication let you protect endpoints with many schemes. You supply any string value; the default is set to Cookies. Note that the options object is an instance of the CookieAuthenticationOptions class.

The SecurePolicy is set through a ternary operator that comes from _environment. This is a private property that gets set in the Startup constructor. Add IHostingEnvironment as a parameter and let dependency injection do the rest.

In this same ConfigureServices method, add the global cookie policy through middleware:

services.Configure<CookiePolicyOptions>(options =>
{
  options.MinimumSameSitePolicy = SameSiteMode.Strict;
  options.HttpOnly = HttpOnlyPolicy.None;
  options.Secure = _environment.IsDevelopment()
    ? CookieSecurePolicy.None : CookieSecurePolicy.Always;
});

Take a good look at SameSite and HttpOnly settings for both cookie options. When I set the auth cookie, you will see this set to HttpOnly and Strict. This illustrates how both options override each other.

Invoke this middleware inside the request pipeline in the Configure method:

app.UseCookiePolicy();
app.UseAuthentication();

The cookie policy middleware is order sensitive. This means it only affects components after invocation. By invoking the authentication middleware, you will get a HttpContext.User property. Be sure to call this UseAuthentication method before calling UseMvc.

Login

In the HomeController add an AllowAnonymous filter to the Login and Logout methods. There are only two action methods available without an auth cookie.

Inside the Startup class, look for the AddMvc extension method and add a global auth filter:

services.AddMvc(options => options.Filters.Add(new AuthorizeFilter()))

With the app secure, configure the cookie name and login / logout paths. Find where the rest of the CookieAuthenticationOptions are and do:

options.Cookie.Name = "SimpleTalk.AuthCookieAspNetCore";
options.LoginPath = "/Home/Login";
options.LogoutPath = "/Home/Logout";

This will cause the app to redirect to the login endpoint to sign in. However, before you can take this for a spin, you’ll need to create the auth cookie.

Do this inside the Login action method in the HomeController:

var claims = new List<Claim>
{
  new Claim(ClaimTypes.Name, Guid.NewGuid().ToString())
}; 

var claimsIdentity = new ClaimsIdentity(
  claims, CookieAuthenticationDefaults.AuthenticationScheme);
var authProperties = new AuthenticationProperties(); 

await HttpContext.SignInAsync(
  CookieAuthenticationDefaults.AuthenticationScheme,
  new ClaimsPrincipal(claimsIdentity),
  authProperties);

AuthenticationProperties drives further auth cookie behavior in the browser. For example, the IsPersistent property persists the cookie across browser sessions. Be sure to get explicit user consent when you enable this property. ExpiresUtc sets an absolute expiration; to use it, enable IsPersistent and set it to true. The default values will give you a session cookie that goes away when you close the tab or browser window. I find the default values in this object enough for most use cases.
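
For example, a persistent cookie with an absolute expiration could be set up like this (the values here are illustrative, not from the demo):

var authProperties = new AuthenticationProperties
{
  // Persists the cookie across browser sessions; get explicit user consent first
  IsPersistent = true,
  // Absolute expiration for the authentication ticket
  ExpiresUtc = DateTimeOffset.UtcNow.AddDays(14)
};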

To take this for a spin load up the browser by going to the home or Index page. Note it redirects to Login which redirects back to Index with an auth cookie.

Once this loads it looks something like this. Be sure to take a good look at how the auth cookie is set:

JWT Identity Claim

Often, an auth cookie isn’t enough to secure API endpoints or microservices. For the web app to call a service, it can use a JWT bearer token to authenticate. To make the access token accessible, place it inside the identity claims.

In the Login action method within HomeController, expand the list of claims with a JWT:

var userId = Guid.NewGuid().ToString();
var claims = new List<Claim>
{
  new Claim(ClaimTypes.Name, userId),
  new Claim("access_token", GetAccessToken(userId))
}; 

private static string GetAccessToken(string userId)
{
  const string issuer = "localhost";
  const string audience = "localhost"; 

  var identity = new ClaimsIdentity(new List<Claim>
  {
    new Claim("sub", userId)
  }); 

  var bytes = Encoding.UTF8.GetBytes(userId);
  var key = new SymmetricSecurityKey(bytes);
  var signingCredentials = new SigningCredentials(
    key, SecurityAlgorithms.HmacSha256); 

  var now = DateTime.UtcNow;
  var handler = new JwtSecurityTokenHandler(); 

  var token = handler.CreateJwtSecurityToken(
    issuer, audience, identity,
    now, now.Add(TimeSpan.FromHours(1)),
    now, signingCredentials); 

  return handler.WriteToken(token);
}

I must caution, don't ever do this in production. Here, I use the user id as the signing key, which is symmetric, to keep it simple. In a prod environment use an asymmetric signing key with public and private keys. Client apps will then use a well-known configuration endpoint to validate the JWT.

Placing the JWT in ClaimsIdentity makes it accessible through the HttpContext.User property. For this app, say you want to put the JWT in a debug log to show off this fancy access token.

In the Startup class, create this middleware inside the Configure method:

app.Use(async (context, next) =>
{
  var principal = context.User as ClaimsPrincipal;
  var accessToken = principal?.Claims
    .FirstOrDefault(c => c.Type == "access_token"); 

  if (accessToken != null)
  {
    _logger.LogDebug(accessToken.Value);
  } 

  await next();
});

The _logger is another private property you set through the constructor. Add ILogger<Startup> as a parameter and let dependency injection do the rest. Note the ClaimsPrincipal has a list of Claims you can iterate through. What I find useful is to look for a Type of claim like an access_token and get a Value. Because this is middleware, always call next() so it doesn't block the request pipeline.

Logout

To log out of the web app and clear the auth cookie do:

await HttpContext.SignOutAsync(
  CookieAuthenticationDefaults.AuthenticationScheme);

This belongs inside the Logout action method in the HomeController. Note that you specify the authentication scheme. This tells the sign-out method which auth cookie it needs to blot out. Inspecting HTTP response headers reveals Cache-Control and Pragma headers set to no-cache. This shows the auth cookie disables browser caching when it wants to update the cookie. The Login action method responds with the same HTTP headers.

Revocation

There are use cases where the app needs to react to back-end user access changes. The auth cookie will secure the application but remains valid for the lifetime of the cookie. With a valid cookie, the end-user will not see any changes until they log out or the cookie expires. In ASP.NET Core 2.1, one way to validate changes is through cookie authentication events. The validation event can do back-end lookups from identity claims in the auth cookie. Create the event by extending CookieAuthenticationEvents. Override the ValidatePrincipal method and set the event in the auth cookie options.

For example:

public class RevokeAuthenticationEvents : CookieAuthenticationEvents
{
  private readonly IMemoryCache _cache;
  private readonly ILogger _logger; 

  public RevokeAuthenticationEvents(
    IMemoryCache cache,
    ILogger<RevokeAuthenticationEvents> logger)
  {
    _cache = cache;
    _logger = logger;
  } 

  public override Task ValidatePrincipal(
    CookieValidatePrincipalContext context)
  {
    var userId = context.Principal.Claims
      .First(c => c.Type == ClaimTypes.Name); 

    if (_cache.Get<bool>("revoke-" + userId.Value))
    {
      context.RejectPrincipal(); 

      _cache.Remove("revoke-" + userId.Value);
      _logger.LogDebug("Access has been revoked for: "
        + userId.Value + ".");
    } 

    return Task.CompletedTask;
  }
}

To have IMemoryCache set by dependency injection, put AddMemoryCache inside the ConfigureServices method in the Startup class. Calling RejectPrincipal has an immediate effect and kicks you back out to Login to get a new auth cookie. Note this relies on in-memory persistence which gets set in the Revoke action method. Keep in mind that this event runs once for every request, so you want to use an efficient caching strategy. Doing an expensive lookup at every request will affect performance.
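
For example, in ConfigureServices:

services.AddMemoryCache();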

Revoke access by setting the cache inside the Revoke method in the HomeController:

var principal = HttpContext.User as ClaimsPrincipal;
var userId = principal?.Claims
  .First(c => c.Type == ClaimTypes.Name); 

_cache.Set("revoke-" + userId.Value, true);
return View();

After visiting the Revoke endpoint, the change does not have an immediate effect. After revocation, navigating home will show the debug log and redirect to Login. Note that landing in the Index page again will have a brand new auth cookie.

To register this event, be sure to set the EventsType in the CookieAuthenticationOptions. You will need to provide a scoped service to register this RevokeAuthenticationEvents. Both are set inside the ConfigureServices method in the Startup class.

For example:

options.EventsType = typeof(RevokeAuthenticationEvents);
services.AddScoped<RevokeAuthenticationEvents>();

The CookieValidatePrincipalContext in ValidatePrincipal can do more than revocation if necessary. This context has ReplacePrincipal to update the principal, then renew the cookie by setting ShouldRenew to true.

Session Store

Setting a JWT in the claims to have a convenient way to access identity data works well. However, every identity claim you put in the principal ends up in the auth cookie. If you inspect the cookie, you will notice it doubles in size with an access token. As you add more claims in the principal the auth cookie gets bigger. You may hit HTTP header limits in a prod environment with many auth cookies. In IIS, the default max limit is set to 8KB-16KB depending on the version. You can increase the limit, but this means bigger payloads per request because of the cookies.

There are many ways to quell this problem, like a user session to keep all JWTs out of the auth cookie. If you have current code that accesses the identity through the principal, then this is not ideal. Moving identity data out of the principal is risky because it may lead to a complete rewrite.

One alternative is to use the SessionStore found in CookieAuthenticationOptions. OWIN, for example, has a similar property. Implement the ITicketStore interface and find a way to persist data in the back-end. Setting the SessionStore property defines a container to store the identity across requests. Only a session identifier gets sent to the browser in the auth cookie.

Say you want to use in-memory persistence instead of the auth cookie:

public class InMemoryTicketStore : ITicketStore
{
  private readonly IMemoryCache _cache; 

  public InMemoryTicketStore(IMemoryCache cache)
  {
    _cache = cache;
  } 

  public Task RemoveAsync(string key)
  {
    _cache.Remove(key); 

    return Task.CompletedTask;
  } 

  public Task<AuthenticationTicket> RetrieveAsync(string key)
  {
    var ticket = _cache.Get<AuthenticationTicket>(key); 

    return Task.FromResult(ticket);
  } 

  public Task RenewAsync(string key, AuthenticationTicket ticket)
  {
    _cache.Set(key, ticket); 

    return Task.CompletedTask;
  } 

  public Task<string> StoreAsync(AuthenticationTicket ticket)
  {
    var key = ticket.Principal.Claims
      .First(c => c.Type == ClaimTypes.Name).Value; 

    _cache.Set(key, ticket); 

    return Task.FromResult(key);
  }
}

Set an instance of this class in SessionStore inside CookieAuthenticationOptions, these options are set in the ConfigureServices method in the Startup class. One caveat is getting an instance since it needs a provider from BuildServiceProvider. A temporary IoC container here feels hacky and pines after a better solution.

A better approach is to use the options pattern in ASP.NET Core. Post-configuration scenarios set or change options at startup. With this solution, you leverage dependency injection without reinventing the wheel.

To put in place this options pattern, implement IPostConfigureOptions<CookieAuthenticationOptions>:

public class ConfigureCookieAuthenticationOptions
  : IPostConfigureOptions<CookieAuthenticationOptions>
{
  private readonly ITicketStore _ticketStore; 

  public ConfigureCookieAuthenticationOptions(ITicketStore ticketStore)
  {
    _ticketStore = ticketStore;
  } 

  public void PostConfigure(string name,
           CookieAuthenticationOptions options)
  {
    options.SessionStore = _ticketStore;
  }
}

To register both InMemoryTicketStore and ConfigureCookieAuthenticationOptions, place this in ConfigureServices:

services.AddTransient<ITicketStore, InMemoryTicketStore>();
services.AddSingleton<IPostConfigureOptions<CookieAuthenticationOptions>,
  ConfigureCookieAuthenticationOptions>();

Make sure you make this change in the Startup class. If you peek inside Configure<CookiePolicyOptions>, for example, and crack open the code, note the pattern: configuration runs through a singleton object and a lambda expression. ASP.NET Core uses this same options pattern under the hood. Now, firing this up in the browser and inspecting the auth cookie will show a much smaller footprint.

Conclusion

Implementing an auth cookie is seamless in ASP.NET Core 2.1. You configure cookie options, invoke middleware, and set identity claims. Sign in and sign out methods work based on an authentication scheme. Auth cookie options allow the app to react to back-end events and set a session store. The auth cookie is flexible enough to work well with any enterprise solution. 



European ASP.NET Core Hosting :: Consuming RabbitMQ Messages In ASP.NET Core

clock March 26, 2019 11:42 by author Peter

Background tasks play a very important role when we are building a distributed system. The most common scenario is consuming messages from a service bus. In this article, I'd like to present how to consume RabbitMQ messages via a BackgroundService in ASP.NET Core.
Run RabbitMQ Host

We should set up an instance of RabbitMQ. The fastest way is to use Docker.
docker run -p 5672:5672 -p 15672:15672 rabbitmq:management  

After running the Docker container, we are able to view the management page via http://localhost:15672.

Setup the BackgroundService
Here, we create a new class named ConsumeRabbitMQHostedService that inherits from BackgroundService.
BackgroundService is a base class for implementing a long-running IHostedService. It provides the main work needed to set up the background task.
Here is an example to demonstrate how to consume RabbitMQ messages.
public class ConsumeRabbitMQHostedService : BackgroundService 
{ 
    private readonly ILogger _logger; 
    private IConnection _connection; 
    private IModel _channel; 
 
    public ConsumeRabbitMQHostedService(ILoggerFactory loggerFactory) 
    { 
        this._logger = loggerFactory.CreateLogger<ConsumeRabbitMQHostedService>(); 
        InitRabbitMQ(); 
    } 
 
    private void InitRabbitMQ() 
    { 
        var factory = new ConnectionFactory { HostName = "localhost" }; 
 
        // create connection 
        _connection = factory.CreateConnection(); 
 
        // create channel 
        _channel = _connection.CreateModel(); 
 
        _channel.ExchangeDeclare("demo.exchange", ExchangeType.Topic); 
        _channel.QueueDeclare("demo.queue.log", false, false, false, null); 
        _channel.QueueBind("demo.queue.log", "demo.exchange", "demo.queue.*", null); 
        _channel.BasicQos(0, 1, false); 
 
        _connection.ConnectionShutdown += RabbitMQ_ConnectionShutdown; 
    } 
 
    protected override Task ExecuteAsync(CancellationToken stoppingToken) 
    { 
        stoppingToken.ThrowIfCancellationRequested(); 
 
        var consumer = new EventingBasicConsumer(_channel); 
        consumer.Received += (ch, ea) => 
        { 
            // received message 
            var content = System.Text.Encoding.UTF8.GetString(ea.Body); 
 
            // handle the received message 
            HandleMessage(content); 
            _channel.BasicAck(ea.DeliveryTag, false); 
        }; 
 
        consumer.Shutdown += OnConsumerShutdown; 
        consumer.Registered += OnConsumerRegistered; 
        consumer.Unregistered += OnConsumerUnregistered; 
        consumer.ConsumerCancelled += OnConsumerConsumerCancelled; 
 
        _channel.BasicConsume("demo.queue.log", false, consumer); 
        return Task.CompletedTask; 
    } 
 
    private void HandleMessage(string content) 
    { 
        // we just print this message  
        _logger.LogInformation($"consumer received {content}"); 
    } 
     
    private void OnConsumerConsumerCancelled(object sender, ConsumerEventArgs e)  {  } 
    private void OnConsumerUnregistered(object sender, ConsumerEventArgs e) {  } 
    private void OnConsumerRegistered(object sender, ConsumerEventArgs e) {  } 
    private void OnConsumerShutdown(object sender, ShutdownEventArgs e) {  } 
    private void RabbitMQ_ConnectionShutdown(object sender, ShutdownEventArgs e)  {  } 
 
    public override void Dispose() 
    { 
        _channel.Close(); 
        _connection.Close(); 
        base.Dispose(); 
    } 
} 

Configure Services
We should configure this hosted service with the background task logic in ConfigureServices method.
public void ConfigureServices(IServiceCollection services) 
{ 
    // others ... 
     
    services.AddHostedService<ConsumeRabbitMQHostedService>(); 
}  

Result

After running this app, we may get the following output in the terminal. Turning to the Management UI of RabbitMQ, we find that it creates a new exchange and a new queue.

Next, we publish a message to show that the background task is running well, and we get the following result.
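
If you prefer to publish the test message from code rather than from the management UI, a minimal publisher sketch with the RabbitMQ.Client package (reusing the exchange and routing key declared by the consumer above) could look like this:

var factory = new ConnectionFactory { HostName = "localhost" }; 
using (var connection = factory.CreateConnection()) 
using (var channel = connection.CreateModel()) 
{ 
    // the routing key matches the "demo.queue.*" binding declared in InitRabbitMQ 
    var body = System.Text.Encoding.UTF8.GetBytes("hello from the publisher"); 
    channel.BasicPublish("demo.exchange", "demo.queue.log", null, body); 
} 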




European ASP.NET Core Hosting :: Using Docker to Store ASP.NET Core Kestrel Certificates

clock March 22, 2019 08:48 by author Scott

When working with ASP.NET Core in Docker containers, it can be cumbersome to deal with certificates. While there is documentation about setting up a certificate for the dev environment, there's no real guidance on how to make it work when deploying containers in a Swarm, for example.
In this article we are going to see how to take advantage of Docker secrets to store ASP.Net Core Kestrel certificates in the context of Docker Swarm.

Hosting the service

First of all, we are going to create a Swarm service on our machine that uses the sample ASP.NET Core app. The purpose of this article is to make SSL work in the container without changing anything in an existing image.

docker service create --name mywebsite --publish published=8080,target=80,mode=host microsoft/dotnet-samples:aspnetapp

We are creating the service mywebsite, publishing only port 8080, bound to port 80 in the container using host mode, and using the image microsoft/dotnet-samples:aspnetapp. Please note that you can use other configurations (for example, exposing the port in routing mesh mode).

Preparing the certificate

We need a certificate. It can be created via an external certificate authority, but here, for the sake of the article, we are going to create a self-signed certificate (of course, don't use this in production). We are using PowerShell for this task (you can skip this if you already have a pfx certificate signed by a real CA).

$cert = New-SelfSignedCertificate -DnsName "mywebsite" -CertStoreLocation "cert:\LocalMachine\My"
$password = ConvertTo-SecureString -String "mylittlesecret" -Force -AsPlainText
$cert | Export-PfxCertificate -FilePath c:\temp\mywebsite.pfx -Password $password

Once you have your pfx, we are going to remove the password protection from it. It might seem insecure, but once it is added to the Docker secret store, it will be stored securely. For this task I will use OpenSSL (not possible with PowerShell as far as I know). OpenSSL is provided with Git, for example.

& 'C:\Program Files\Git\mingw64\bin\openssl.exe' pkcs12 -in c:\temp\mywebsite.pfx -nodes -out c:\temp\mywebsite.pem -passin pass:mylittlesecret
& 'C:\Program Files\Git\mingw64\bin\openssl.exe' pkcs12 -export -in c:\temp\mywebsite.pem -out c:\temp\mywebsite.unprotected.pfx -passout pass:

Now that the pfx is unprotected, we can add it to the Docker secret store and display it.

docker secret create kestrelcertificate c:\temp\mywebsite.unprotected.pfx

docker secret ls

ID                          NAME                 DRIVER              CREATED             UPDATED

iapy6rolt7po1mwm9aw6z0qc5   kestrelcertificate                       13 minutes ago      13 minutes ago

Now that our secret is in the store, you can delete your pfx (or store it securely somewhere else).

Making it work

We can now update our service to take in account this secret. When adding a secret to a service, Docker will create a file in a specific directory containing the value of the secret. On Windows it’s c:\programdata\docker\secrets.

Let’s update our service and see what happened inside the container.

docker service update --secret-add kestrelcertificate mywebsite

docker exec 4b51e736ce65 cmd.exe /c dir c:\programdata\docker\secrets

 Volume in drive C has no label.
 Volume Serial Number is 3CBB-E577

 Directory of c:\programdata\docker\secrets

11/15/2018  10:38 PM    <DIR>          .
11/15/2018  10:38 PM    <DIR>          ..
11/15/2018  10:38 PM    <SYMLINK>      kestrelcertificate [C:\ProgramData\Docker\internal\secrets\iapy6rolt7po1mwm9aw6z0qc5]
               1 File(s)              0 bytes
               2 Dir(s)  21,245,009,920 bytes free

We can see that our secret exists and is named kestrelcertificate, as we named it in the command line.

We can therefore update our service to remove the old binding on port 80, replace it with a binding on port 443, tell Kestrel to use this port and finally give Kestrel the path of our secret.
This can be done with only one command:

docker service update --publish-rm published=8080,target=80,mode=host --publish-add published=8080,target=443,mode=host --env-add ASPNETCORE_URLS=https://+:443 --env-add Kestrel__Certificates__Default__Path=c:\programdata\docker\secrets\kestrelcertificate mywebsite

Wait a moment for your service to update, browse to it, and it should work! Well, actually, it only works on Linux.

Making it work on Windows

If you have a look at the logs generated by your service, you will see something like this.

docker service logs mywebsite

mywebsite.1.uy3vm8txwxec@nmarchand-lt    | crit: Microsoft.AspNetCore.Server.Kestrel[0]
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |       Unable to start Kestrel.
mywebsite.1.uy3vm8txwxec@nmarchand-lt    | Internal.Cryptography.CryptoThrowHelper+WindowsCryptographicException: Unspecified error
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at Internal.Cryptography.Pal.CertificatePal.FromBlobOrFile(Byte[] rawData, String fileName, SafePasswordHandle password, X509KeyStorageFlags keyStorageFlags)
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at System.Security.Cryptography.X509Certificates.X509Certificate..ctor(String fileName, String password, X509KeyStorageFlags keyStorageFlags)
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at System.Security.Cryptography.X509Certificates.X509Certificate2..ctor(String fileName, String password)
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at Microsoft.AspNetCore.Server.Kestrel.KestrelConfigurationLoader.LoadCertificate(CertificateConfig certInfo, String endpointName)
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at Microsoft.AspNetCore.Server.Kestrel.KestrelConfigurationLoader.LoadDefaultCert(ConfigurationReader configReader)
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at Microsoft.AspNetCore.Server.Kestrel.KestrelConfigurationLoader.Load()
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.ValidateOptions()
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.StartAsync[TContext](IHttpApplication`1 application, CancellationToken cancellationToken)
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |
mywebsite.1.uy3vm8txwxec@nmarchand-lt    | Unhandled Exception: Internal.Cryptography.CryptoThrowHelper+WindowsCryptographicException: Unspecified error
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at Internal.Cryptography.Pal.CertificatePal.FromBlobOrFile(Byte[] rawData, String fileName, SafePasswordHandle password, X509KeyStorageFlags keyStorageFlags)
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at System.Security.Cryptography.X509Certificates.X509Certificate..ctor(String fileName, String password, X509KeyStorageFlags keyStorageFlags)
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at System.Security.Cryptography.X509Certificates.X509Certificate2..ctor(String fileName, String password)
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at Microsoft.AspNetCore.Server.Kestrel.KestrelConfigurationLoader.LoadCertificate(CertificateConfig certInfo, String endpointName)
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at Microsoft.AspNetCore.Server.Kestrel.KestrelConfigurationLoader.LoadDefaultCert(ConfigurationReader configReader)
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at Microsoft.AspNetCore.Server.Kestrel.KestrelConfigurationLoader.Load()
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.ValidateOptions()
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.StartAsync[TContext](IHttpApplication`1 application, CancellationToken cancellationToken)
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at Microsoft.AspNetCore.Hosting.Internal.WebHost.StartAsync(CancellationToken cancellationToken)
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at Microsoft.AspNetCore.Hosting.WebHostExtensions.RunAsync(IWebHost host, CancellationToken token, String shutdownMessage)
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at Microsoft.AspNetCore.Hosting.WebHostExtensions.RunAsync(IWebHost host, CancellationToken token)
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at Microsoft.AspNetCore.Hosting.WebHostExtensions.Run(IWebHost host)
mywebsite.1.uy3vm8txwxec@nmarchand-lt    |    at aspnetapp.Program.Main(String[] args) in C:\app\aspnetapp\Program.cs:line 18

We can see a nasty Windows bug here (see the related GitHub issue).
What happened? If you look closely at the dir command we ran in the container, you'll see that the secret is not really a file but a symbolic link to another file. Unfortunately, Windows cannot load a certificate from a symlink. One solution could be to manually read the certificate with File.ReadAllBytes() and pass the bytes to the X509Certificate2 constructor, but that would defeat the purpose of this article, which is to avoid modifying the Docker image.
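For reference only, here is a minimal sketch of what that alternative could look like if you were willing to rebuild the image with a modified Program.cs; the port, the secret path and the Startup class are assumptions based on this article's setup, not something the sample image actually does.

using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

namespace aspnetapp
{
    public class Program
    {
        public static void Main(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseKestrel(options =>
                {
                    // File.ReadAllBytes follows the symlink, so Windows can load the certificate bytes.
                    var certBytes = File.ReadAllBytes(@"c:\programdata\docker\secrets\kestrelcertificate");
                    var cert = new X509Certificate2(certBytes);

                    // Listen on 443 using the certificate read from the Docker secret.
                    options.Listen(IPAddress.Any, 443, listenOptions => listenOptions.UseHttps(cert));
                })
                .UseStartup<Startup>()
                .Build()
                .Run();
    }
}

Again, this is only shown for completeness; the rest of the article sticks to the unmodified image.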

We can find a workaround in the Docker documentation, which states that the real file containing the secret (in fact, the target of the symlink) can be found at c:\programdata\docker\internal\secrets\<secretid>, where secretid is the id of the secret (as shown by docker secret ls).

We can update our service to point the environment variable at this path. It now works on Windows as well!

docker service update --env-rm Kestrel__Certificates__Default__Path=c:\programdata\docker\secrets\kestrelcertificate --env-add Kestrel__Certificates__Default__Path=c:\programdata\docker\internal\secrets\iapy6rolt7po1mwm9aw6z0qc5 mywebsite

docker logs 7b54cdc42a86

Hosting environment: Production
Content root path: C:\app
Now listening on: https://[::]:443
Application started. Press Ctrl+C to shut down.

Final word

We have seen in this article how to use Docker secrets to store ASP.NET Core Kestrel certificates in a Docker Swarm. However, please keep in mind that the Windows workaround should be used with care, as noted in the Docker documentation.

A final word about SSL offloading: usually the reverse proxy (Nginx, Traefik, etc.) acts as the SSL termination point, but sometimes you still want SSL end to end.



European ASP.NET Core Hosting :: Smart Logging Middleware for ASP.NET Core

clock March 13, 2019 09:44 by author Scott

ASP.NET Core comes with request logging built-in. This is great for getting an app up-and-running, but the events are not as descriptive or efficient as hand-crafted request logging can be. Here is a typical trace from a single GET request to /about:



If the request fails because of an error, you'll see instead:



While these fine-grained events are sometimes useful, they take up storage space and bandwidth, and add noise to the log.

Spreading the information across many events also makes some analyses harder: for example, the HTTP method and elapsed time are attached to different events, making it hard to separate the timings of GET and POST requests to the same RequestPath.

In production we want:

One "infrastructure" event per normal request, with basic HTTP information attached

Extra information to help with debugging if a request fails due to a server-side (5xx) error

The same format and event type for all requests, so that logs from both successful and failed requests can be easily grouped, sorted and analyzed together

Here's the format we're aiming for, expanded so that you can see all of the attached properties:

Instead of five infrastructure events + one application event, just a single infrastructure event is logged. This makes the application's own "Hello" event much easier to spot.

This post includes the full source code for the middleware, so you can take it and modify it to include the information you find most useful.

Step 1: Install and Configure Serilog

ASP.NET Core includes some basic logging providers, but to get the most out of it you'll need to plug in a full logging framework like Serilog. If you haven't done that already, these instructions should have you up and running quickly.
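As a rough sketch of what that setup can look like (assuming the Serilog, Serilog.Extensions.Logging and Serilog.Sinks.Console packages; the linked instructions remain the reference), the logger is typically configured in Program.cs before the host starts:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Serilog;

public class Program
{
    public static void Main(string[] args)
    {
        // Configure the shared static logger before the web host is built.
        Log.Logger = new LoggerConfiguration()
            .Enrich.FromLogContext()
            .WriteTo.Console()
            .CreateLogger();

        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}

The loggerFactory.AddSerilog() call shown in Step 4 then routes ASP.NET Core's own logging through this logger.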

Step 2: Turn off Information events from Microsoft and System

Where Serilog is configured, add level overrides for the Microsoft and System namespaces:

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
    .MinimumLevel.Override("System", LogEventLevel.Warning)
    // Other logger configuration (sinks, enrichers, ...)
    .CreateLogger();

These can be turned back on for debugging when they're needed.

If you're using appsettings.json configuration, check out the level overrides example in the README.
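For reference, a sketch of the equivalent overrides in appsettings.json; this assumes the Serilog.Settings.Configuration package and a .ReadFrom.Configuration(...) call when building the logger.

{
  "Serilog": {
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft": "Warning",
        "System": "Warning"
      }
    }
  }
}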

Step 3: Add the SerilogMiddleware class

The SerilogMiddleware class hooks into the request processing pipeline to collect information about the requests:

using Microsoft.AspNetCore.Http; 
using Serilog; 
using Serilog.Events; 
using System; 
using System.Collections.Generic; 
using System.Diagnostics; 
using System.Linq; 
using System.Threading.Tasks;

namespace Datalust.SerilogMiddlewareExample.Diagnostics 
{
    class SerilogMiddleware
    {
        const string MessageTemplate =
            "HTTP {RequestMethod} {RequestPath} responded {StatusCode} in {Elapsed:0.0000} ms";

        static readonly ILogger Log = Serilog.Log.ForContext<SerilogMiddleware>();

        readonly RequestDelegate _next;

        public SerilogMiddleware(RequestDelegate next)
        {
            if (next == null) throw new ArgumentNullException(nameof(next));
            _next = next;
        }

        public async Task Invoke(HttpContext httpContext)
        {
            if (httpContext == null) throw new ArgumentNullException(nameof(httpContext));

            var sw = Stopwatch.StartNew();
            try
            {
                await _next(httpContext);
                sw.Stop();

                var statusCode = httpContext.Response?.StatusCode;
                var level = statusCode > 499 ? LogEventLevel.Error : LogEventLevel.Information;

                var log = level == LogEventLevel.Error ? LogForErrorContext(httpContext) : Log;
                log.Write(level, MessageTemplate, httpContext.Request.Method, httpContext.Request.Path, statusCode, sw.Elapsed.TotalMilliseconds);
            }
            // Never caught, because `LogException()` returns false.
            catch (Exception ex) when (LogException(httpContext, sw, ex)) { }
        }

        static bool LogException(HttpContext httpContext, Stopwatch sw, Exception ex)
        {
            sw.Stop();

            LogForErrorContext(httpContext)
                .Error(ex, MessageTemplate, httpContext.Request.Method, httpContext.Request.Path, 500, sw.Elapsed.TotalMilliseconds);

            return false;
        }

        static ILogger LogForErrorContext(HttpContext httpContext)
        {
            var request = httpContext.Request;

            var result = Log
                .ForContext("RequestHeaders", request.Headers.ToDictionary(h => h.Key, h => h.Value.ToString()), destructureObjects: true)
                .ForContext("RequestHost", request.Host)
                .ForContext("RequestProtocol", request.Protocol);

            if (request.HasFormContentType)
                result = result.ForContext("RequestForm", request.Form.ToDictionary(v => v.Key, v => v.Value.ToString()));

            return result;
        }
    }
}

I've deliberately kept this to a single (somewhat ugly!) source file so that it's easy to copy, paste and modify.

There are a lot of design details and trade-offs involved. You can see that when a request is deemed to have failed, some additional information is attached, including the headers sent by the client, and the form data if any.

Step 4: Add the middleware to the pipeline

In your application's Startup.cs file, you can add the middleware either before or after the outermost exception handling components. If you add it before them, you won't get full exception information in error events, but you will always be able to record the exact status code returned to the client. Adding the middleware inside the exception handling components (i.e., after them in the source code) provides full exception detail, but has to assume that the status code is 500 in that case.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, 
                      ILoggerFactory loggerFactory)
{
    loggerFactory.AddSerilog();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseBrowserLink();
    }
    else
    {
        app.UseExceptionHandler("/Home/Error");
    }

    app.UseMiddleware<SerilogMiddleware>();

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

Step 5: Profit!

Now you have a stream of request log events each with paths, status codes, timings, and exceptions:

We've seen how a simple customized logging strategy can not only produce cleaner events, but also reduce the volume of infrastructure log events.




