European ASP.NET 4.5 Hosting BLOG

BLOG about ASP.NET 4, ASP.NET 4.5 Hosting and Its Technology - Dedicated to European Windows Hosting Customer

European ASP.NET Core Hosting :: Harnessing the Power of UserManager and RoleManager for Robust User Control in .Net Core Identity

clock July 4, 2023 07:05 by author Peter

In this article, I will demonstrate how to implement page access control using the .NET Core Identity Framework, with a focus on the UserManager and RoleManager classes. The web application involved is a .NET Core Razor Pages application, and the database is a SQL Server (MSSQL) database managed by Entity Framework Core.

Imagine you are a member of the HR department responsible for administering the internal website of your company. Your organisation consists of three kinds of employees:

  • Only personnel of the IT Department are granted access to IT-related pages.
  • Only personnel of the Sales Department are permitted access to sales-related pages.
  • The chief executive officer has access to all pages, including CEO-only pages.

Final Resolution Summary

(This is how it looks when HR configures which pages the IT Department is allowed to access on the Page Access Management page.)

(This is how it looks when Bob from the IT department accesses the IT page.)

(This is how it looks when Bob from the IT department tries to view the Sales Team page.)

Programming
First, I create two projects:
1. A .NET Core Razor Pages web app named "WebApp1"
2. A class library named "DataLibrary"

In the appsettings.json of my WebApp1, I configure the database connection:
"ConnectionStrings": {
  "MyDBContextConnection": "Server=.\\SQLEXPRESS;Database=WebApp1;Integrated Security=True; TrustServerCertificate=True;"
}


In my DataLibrary project, I have created a class named "MyDbContext.cs" that looks like the following:
public class MyDbContext : IdentityDbContext<IdentityUser, IdentityRole, string,
  IdentityUserClaim<string>, IdentityUserRole<string>, IdentityUserLogin<string>,
  IdentityRoleClaim<string>, IdentityUserToken<string>>
{
    public MyDbContext() : base()
    {
    ....//omitted code

Take note of IdentityUser, IdentityRole, and IdentityRoleClaim. These three classes are integral to nearly all sections of my code.

This is what I register in Program.cs:
var connectionString = builder.Configuration.GetConnectionString("MyDBContextConnection")
    ?? throw new InvalidOperationException("Connection string 'MyDBContextConnection' not found.");

builder.Services.AddDbContext<MyDbContext>(options =>
    options.UseSqlServer(connectionString, b => b.MigrationsAssembly("WebApp1")));


builder.Services.AddIdentity<IdentityUser, IdentityRole>(options =>
{
    options.SignIn.RequireConfirmedAccount = false;
    options.User.RequireUniqueEmail = false;

    options.Password.RequireDigit = false;
    options.Password.RequireLowercase = false;
    options.Password.RequireUppercase = false;
    options.Password.RequireNonAlphanumeric = false;
    options.Password.RequiredLength = 0;
    options.Password.RequiredUniqueChars = 0;

}).AddEntityFrameworkStores<MyDbContext>().AddDefaultTokenProviders();


From the code snippet in my Program.cs, it is evident that I have disabled the requirement for users to have an email as their account and have also disabled the password policy. I made these modifications to simplify the development process, as I prefer not to repeatedly enter email addresses and complex passwords during development.

By the way, ensure both projects are set to target framework .NET 7.0.

Once all required configurations are in place, the Entity Framework (EF) commands can be executed to create the database. In my case, I run add-migration FirstTimeCreateDB first; after it completes successfully with no errors, I run update-database to apply the migration.
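For reference, these are the two commands as typically run in the Visual Studio Package Manager Console (the dotnet CLI equivalents are dotnet ef migrations add and dotnet ef database update):

Add-Migration FirstTimeCreateDB
Update-Database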

If everything is in order, the following tables will be added to the "WebApp1" database:

    AspNetUsers is the table that stores all the user information.
    AspNetRoles is the table that stores all the roles (in my case, it is the Department)
    AspNetUserRoles is the table that stores the relationship between the user and role (in my case, it is the Employee and the Department)
    AspNetRoleClaims is the table that stores the claims of each department (in my case, it is the information about which pages each department can access)

.NET Core provides Scaffold Identity, which offers the convenience of auto-generating user interface components such as Register, Login, Logout, and Forgot Password. However, considering the specific focus of my demo project on showcasing the functionalities of the User Manager and Role Manager, and the specific responsibilities of the HR department in this scenario, certain pages like Register and Forgot Password are not required.

Department Page

First, I establish a Department Page to create departments. It appears as follows:

The source code for this task is very straightforward. It primarily involves the use of RoleManager. To begin, declare the RoleManager, then inject it via the constructor, and finally, invoke the CreateAsync() method.
private readonly RoleManager<IdentityRole> _roleManager;

public DepartmentManagementModel(  RoleManager<IdentityRole> rolemanager)
{
    _roleManager = rolemanager;
}

public async Task<IActionResult> OnPostAsync()
{
    IdentityRole IdentityRole = new IdentityRole();
    IdentityRole.Name = this.IdentityRole.NormalizedName;
    IdentityRole.NormalizedName = this.IdentityRole.NormalizedName;
    await _roleManager.CreateAsync(IdentityRole);
}


If the creation is successful, you’ll be able to see the newly created role in the AspNetRoles table.

Employee Page
Next, create an Employee Page to create employees.

This page contains a drop-down box to store all the Departments. We use Role Manager to get all the roles (Department):
DepartmentOptions = _roleManager.Roles
.Select(r => new SelectListItem
{
    Text = r.Name,
    Value = r.Id
}).ToList();

The source code for creating an employee is straightforward and primarily utilizes the UserManager. First, declare the UserManager, then inject it via the constructor, and finally, invoke the CreateAsync() method.
private readonly UserManager<IdentityUser> _userManager;
public CreateEmployeeModel(UserManager<IdentityUser> userManager,
            RoleManager<IdentityRole> rolemanager, UrlEncoder urlencoder, IWebHostEnvironment env)
{
    _userManager = userManager;
    _roleManager = rolemanager;
}

public async Task<IActionResult> OnPostAsync()
{
    var user = new IdentityUser { UserName = Input.UserName };
    var result = await _userManager.CreateAsync(user, Input.Password);
}


The source code to add the user to a role (Department) is:
await _userManager.AddToRoleAsync(user, Groupname);

Page Access Management Page
Then, we can see in the AspNetUsers and AspNetUserRoles tables that a user and the role she belongs to have been added.

Next is the Page Access Management Page. First, I created a class called PageNameConfiguration to configure the names of the pages whose access we will control.
public static class PageNameConfiguration
{
    public static IServiceCollection ConfigurPageNameFunction(this IServiceCollection services)
    {
        PageNameDictionary.Instance.Add("Sales Report Page", PagesNameConst.SalesReport);
        PageNameDictionary.Instance.Add("Information Technology Page", PagesNameConst.IT);
        //...omitted codes
        return services;
    }

    public static void AddPageAccessPolicy(this AuthorizationOptions options)
    {
        options.AddPolicy(PagesNameConst.SalesReport,
            policy => policy.RequireClaim(PagesNameConst.SalesReport, "Yes"));

        options.AddPolicy(PagesNameConst.IT,
          policy => policy.RequireClaim(PagesNameConst.IT, "Yes"));

   //...omitted code

The AddPageAccessPolicy method is used to establish the Access Policy. Here, ‘claims’ refer to the names of each page, with the assigned value of "Yes". The value could be anything; you can use "Yes" or "True" to indicate that the page can be accessed by a certain user.

To ensure the functionality of the AddPageAccessPolicy method, it must be registered within the Program.cs file.
builder.Services.AddAuthorization(options =>
{
    options.AddPageAccessPolicy();
});

This is the page that manages page access. It features a dropdown menu containing all the departments. Listed below are the page names. I utilize checkboxes to control which pages can be accessed by the selected department.

Once selected, click the update button, and the claims will be updated. The code-behind that adds the claim to the role is:
await _roleManager.AddClaimAsync(role, new Claim(pagename.ClaimType, "Yes"));

Please note that I have hard-coded the value of ‘claims’ to "Yes". This is because, in the AddPageAccessPolicy method, I specified that the value must be set to "Yes". Consequently, in the AspNetRoleClaims table, the Claim Type is now associated with the page name, and the Claim Value is designated as "Yes".


Authorize Attribute Set Policy
To set up the page to be viewable only with certain claims, it is imperative to set the policy attribute directly on the page itself. For instance, take my SalesReport Page as an example, where I’ve implemented the policy attribute at the class level.
[Authorize(Policy = PagesNameConst.SalesReport)]
public class SalesReportModel : PageModel
{
    public void OnGet()
    {
    }
}

Access Denied Page
I have created an ‘Access Denied’ page. This page will be displayed when an employee attempts to access a page for which they do not have the necessary authorization. Like the login page, the AccessDenied page needs to be registered in Program.cs as well.
builder.Services.ConfigureApplicationCookie(options =>
{
    options.LoginPath = "/Identity/Login";
    options.AccessDeniedPath = "/Identity/AccessDenied";
});

Now, we are done. I created an employee named ‘Bob’, who is from the IT Department; he can access the IT page but not the Sales Report page.

2FA with Sign In Manager
Now, let’s take this a step further. Imagine there’s a page named "TopSecretCEOOnly", which can only be accessed by the CEO. Since this page holds the company’s top secrets, a standard username and password combination isn’t secure enough. Management has requested an additional layer of security: Two-Factor Authentication (2FA).

This added security can be accomplished by creating a 2FA verification page. I have developed a page called VerifyWith2FA.cshtml (note that this page can also be auto-generated using identity scaffolding).

Then, I implemented a redirect to the 2FA page in my "TopSecretCEOOnly" page. This ensures that before a user can view this page, they must successfully pass the 2FA verification.
[Authorize(Policy = PagesNameConst.TopSecret)]
public class TopSecretCEOOnlyModel : PageModel
{
    public IActionResult OnGet()
    {
        if (HttpContext.Session.GetString("Passed2FA") != "true")
        {
            TempData["ReturnUrl"] = "/DailyVisit/TopSecretCEOOnly";
            return RedirectToPage("/Identity/VerifyWith2FA");
        }
        HttpContext.Session.Remove("Passed2FA");
        return Page();
    }
}


In my VerifyWith2FA page, I use the UserManager method VerifyTwoFactorTokenAsync to verify the 2FA OTP code.
var authenticatorCode = Input.TwoFactorCode.Replace(" ", string.Empty).Replace("-", string.Empty);
bool isOTPValid = await _userManager.VerifyTwoFactorTokenAsync(user, _userManager.Options.Tokens.AuthenticatorTokenProvider, authenticatorCode);


Please take note that I have created TWO 2FA pages for verification

  • LoginWith2FA.cshtml - This is for 2FA DURING login; it uses SignInManager to verify the user's sign-in with the OTP (see the sketch after this list)
  • VerifyWith2FA.cshtml - This is for 2FA verification AFTER login. To visit certain sensitive pages, it uses UserManager to verify the user
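The article's LoginWith2FA code is not shown here, but a minimal sketch of that SignInManager-based flow could look like the following (assuming an injected SignInManager<IdentityUser> field named _signInManager and page-model members Input and returnUrl; this is not the article's exact code):

var code = Input.TwoFactorCode.Replace(" ", string.Empty).Replace("-", string.Empty);
// SignInManager validates the OTP against the user currently in the 2FA sign-in flow
var result = await _signInManager.TwoFactorAuthenticatorSignInAsync(code, isPersistent: false, rememberClient: false);
if (result.Succeeded)
{
    return LocalRedirect(returnUrl ?? "/");
}
ModelState.AddModelError(string.Empty, "Invalid authenticator code.");
return Page();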

So, to access the "Top Secret" page, one must input an OTP (One-Time Password). Once this OTP has been successfully verified, only then will the page's contents be displayed.

Since I am using 2FA to log in/verify, I updated my Create Employee UI to allow selecting whether a user will have 2FA or not. The generated 2D QR code is scanned with the Google Authenticator app, which then generates the OTP codes.
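For completeness, here is a minimal sketch of how such 2FA enrollment can be wired up with UserManager; the otpauth URI is what the QR code image encodes (the issuer name "WebApp1" is only an example, and this is not the article's exact code):

// generate a fresh shared secret for the authenticator app
await _userManager.ResetAuthenticatorKeyAsync(user);
var key = await _userManager.GetAuthenticatorKeyAsync(user);
await _userManager.SetTwoFactorEnabledAsync(user, true);

// this URI is rendered as a QR code (e.g. with QRCoder) and scanned by Google Authenticator
var otpauthUri = $"otpauth://totp/WebApp1:{user.UserName}?secret={key}&issuer=WebApp1&digits=6";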


Reset Password
I have a page to reset password. You can use ResetPasswordAsync from User Manager to reset the password.
var token = await _userManager.GeneratePasswordResetTokenAsync(user);
var password = GeneratePassword();
var resetResult = await _userManager.ResetPasswordAsync(user, token, password);


If it is a 2FA user, you need to GenerateNewTwoFactorRecoveryCodesAsync.
await _userManager.SetAuthenticationTokenAsync(user, "Google", "secret", null);
await _userManager.GenerateNewTwoFactorRecoveryCodesAsync(user, 1);


Personal Level Claim

So far, I have only discussed claims at the group (role) level; how about claims at the PERSONAL user level?

I have created a page named UserClaimActionManagement.cshtml. This page is designed to control which actions a user is permitted to perform. For instance, Bob has been granted both the right to view and to edit a document.

The underlying code for associating a claim with a user involves using the _userManager.AddClaimAsync() method. As you can see, if the claim is at the personal level, we use the UserManager and invoke the AddClaim method. In the previous example, the claim was at the role level, which is why we used RoleManager.

await _userManager.AddClaimAsync(user, new Claim(actionname.ClaimType, "Yes"));

For the demo, I have another page named DemoUserClaim.cshtml. In its OnGet() method, I get the claims from the user and decide what they can do.
public async Task OnGet()
{
    var user = await _userManager.GetUserAsync(User);
    var userClaims = await _userManager.GetClaimsAsync(user);

    CanViewDocument = userClaims.Any(c => c.Type == DemoUserClaimConst.CanViewDocument && c.Value == "Yes");
    CanEditDocument = userClaims.Any(c => c.Type == DemoUserClaimConst.CanEditDocument && c.Value == "Yes");
}


With the claim, I can control the user’s rights, either in the view or in the backend. A user without the "CanEditDocument" claim cannot see the edit section, and the edit method is also blocked in the backend code.
@if(Model.CanViewDocument)
{
<h3>You can see me means you have the claims for <span style="color:orange;">Can View document</span></h3>
}

@if (Model.CanEditDocument)
{
    <h3>You can see me means you have the claims for <span style="color: green;">Can Edit document</span></h3>

    <form method="post">
        <input type="submit" value="Edit Document" />
    </form>
}


And in the code-behind:
public async Task OnPost()
{
    if (!CanEditDocument)
    {
        //sorry you are not allowed to Edit Document
        return;
    }
    else
    {
        //proceed with edit document
    }
}

Generate Navigation Bar Contents By Claims
Naturally, a more effective and refined approach would be to generate the web navigation bar based on claims. This way, when users log in, they will only see menu options corresponding to their respective claims. The following code snippet demonstrates how I utilize "User.HasClaim" in the _NavigationBar to dictate access to the IT Page and Sales Report Page respectively.
if (User.HasClaim(x => x.Type == Constant.PagesNameConst.IT))
{
    <a href="/DailyVisit/InformationTechnology" class="list-group-item list-group-item-action">
        Information Technology Page
    </a>
}
if (User.HasClaim(x => x.Type == Constant.PagesNameConst.SalesReport))
{
    <a href="/DailyVisit/SalesReport" class="list-group-item list-group-item-action">
        Sales Report Page
</a>
}
//and so on...

However, it’s still necessary to apply the authorization policy to individual pages. Relying on navigation bar control alone isn’t secure enough; without the policy, users could still attempt to access a page if they have the URL, even if it isn’t displayed in the navigation bar.

Customized User Manager or Role Manager
Imagine you need extra information; say each employee has an extra piece of information called the mode of work: working in an office, WFH, or hybrid. Or each department needs a maximum number of working hours per week. The existing AspNetUsers and AspNetRoles tables do not have such columns, so you can use inheritance in C# to customize them.

To solve this, I created another project (new Data Library and New Web App and DB) similar to the previous one but slightly modified.

For the employee, I created a customized class named "MyEmployee", which inherits from IdentityUser, with a new custom work-mode field.
public class MyEmployee : IdentityUser
{
    public WorkMode EmployeeWorkMode { get; set; }
//... omitted code


The same goes for the role. I created a customized class named "MyDepartment", which inherits from IdentityRole, with a new custom Max Working Hours field.
public class MyDepartment : IdentityRole
    {
        public int MaxWorkingHours{ get; set; }
//... omitted code


The same thing goes for claims.
public class MyDepartmentClaim : IdentityRoleClaim<string>
{
    //... omitted code
public class MyEmployeeDepartment : IdentityUserRole<string>
{
    //... omitted code


For my DB Context configuration, I now created a CreateMyOwnIdentity method to rename those identity tables.
public class MyDbContext : IdentityDbContext<MyEmployee, MyDepartment, string,
      IdentityUserClaim<string>, MyEmployeeDepartment, IdentityUserLogin<string>,
      MyDepartmentClaim, IdentityUserToken<string>>
{
    ....
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
        modelBuilder.CreateMyOwnIdentity();
    }

Following is how I renamed the tables (not necessary, but I did it for demo purposes)
public static class IdentityModelBuilderExtensions
{
    public static void CreateMyOwnIdentity(this ModelBuilder builder)
    {
        builder.Entity<MyEmployee>(b =>
        {
            b.ToTable("MyEmployee");

        });

        builder.Entity<MyDepartment>(b =>
        {
            b.ToTable("MyDepartment");


            b.HasMany(e => e.MyDepartmentClaim)
                .WithOne(e => e.MyDepartment)
                .HasForeignKey(rc => rc.RoleId)
                .IsRequired();
        });
        builder.Entity<MyDepartmentClaim>(b =>
        {
            b.ToTable("MyDepartmentClaim");

        });

        builder.Entity<MyEmployeeDepartment>(b =>
        {
            b.ToTable("MyEmployeeDepartment");
        });
    }
}


After executing the EF create migration and update DB commands, I have the new tables with new names.

The rest is pretty much the same; just set the extra field when creating an employee or a department, as sketched below.
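A minimal sketch of what that could look like, assuming UserManager<MyEmployee> and RoleManager<MyDepartment> are injected and WorkMode.Hybrid is one of the enum values (the exact values are not shown in the article):

var employee = new MyEmployee
{
    UserName = Input.UserName,
    EmployeeWorkMode = WorkMode.Hybrid   // the new custom column
};
await _userManager.CreateAsync(employee, Input.Password);

var department = new MyDepartment
{
    Name = "IT",
    MaxWorkingHours = 40                 // the new custom column
};
await _roleManager.CreateAsync(department);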

The data will be updated in the database as well.

Further enhancements and notes

  • I do not do input validation for most of my UI.
  • Some pages can be grouped more tidily. For example, the manage employee and reset employee pages can be mixed into one page. I separate it for demo purposes.
  • I used QRCoder v1.4.1 to create a 2D QR Code for this project. My code for QR Code creation will not work if you use the latest version, v1.4.3. You may also use other QR Coder libraries.

Conclusion
When I first started developing this project using .NET Core Identity, I encountered a myriad of questions and issues, especially when it came to navigating UserManager, RoleManager, and claims. I hope this article can serve as a useful guide for others using .NET Core Identity to develop their own applications.



European ASP.NET Core Hosting :: Integration of the Google Calendar API with .Net Core

clock June 27, 2023 07:50 by author Peter

I will explain how web server applications implement OAuth 2.0 authorization to access Google APIs using Google OAuth 2.0 endpoints.
OAuth 2.0


OAuth 2.0 enables users to grant selective data-sharing permission to applications while preserving their own credentials. As an example, an application can use OAuth 2.0 to obtain the user's permission to create calendar events in their Google Calendar.

Pre-Requisite

    Enable a Google API project.
    Create credentials for authorization.
    Get access and refresh tokens in exchange for the authorization code.
    Redirect URL: The Google OAuth 2.0 server verifies the user's identity and obtains the user's permission to grant your application access to the requested scope. The response is sent back to the application using the 'Redirect URL' you specified.

Tokens
Access tokens have a finite lifespan and expire over time. To prevent interruptions in access to the related APIs, an access token can be refreshed without asking the user for permission again. This is possible by requesting the token with offline access (access_type=offline); a sketch of the refresh call follows the list below.

    Authorization code: 1024 bytes
    Access token: 2048 bytes
    Refresh token: 512 bytes
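For illustration, refreshing an access token is a plain POST to Google's token endpoint with grant_type=refresh_token; a minimal sketch in the same HttpClient style used by the service below (clientId, clientSecret, and refreshToken are placeholders):

var content = new StringContent(
    $"client_id={clientId}&client_secret={clientSecret}&refresh_token={refreshToken}&grant_type=refresh_token",
    Encoding.UTF8, "application/x-www-form-urlencoded");
var response = await _httpClient.PostAsync("https://accounts.google.com/o/oauth2/token", content);
var responseContent = await response.Content.ReadAsStringAsync();   // contains a new access_token and expires_in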

Code Structure

Packages

  • NodaTime
  • Google.Apis
  • Google.Apis.Auth
  • Google.Apis.Calendar.v3

UserController

public class UserController: Controller {

  private IGoogleCalendarService _googleCalendarService;
  public UserController(IGoogleCalendarService googleCalendarService)

  {
    _googleCalendarService = googleCalendarService;
  }

  [HttpGet]
  [Route("/user/index")]
  public IActionResult Index() {
    return View();
  }

  [HttpGet]
  [Route("/auth/google")]
  public async Task < IActionResult > GoogleAuth() {
    return Redirect(_googleCalendarService.GetAuthCode());
  }

  [HttpGet]
  [Route("/auth/callback")]
  public async Task < IActionResult > Callback() {
    string code = HttpContext.Request.Query["code"];
    string scope = HttpContext.Request.Query["scope"];

    //get token method
    var token = await _googleCalendarService.GetTokens(code);
    return Ok(token);
  }

  [HttpPost]
  [Route("/user/calendarevent")]
  public async Task < IActionResult > AddCalendarEvent([FromBody] GoogleCalendarReqDTO calendarEventReqDTO) {
    var data = _googleCalendarService.AddToGoogleCalendar(calendarEventReqDTO);
    return Ok(data);
  }
}

DTO
public class GoogleCalendarReqDTO {
  public string Summary {
    get;
    set;
  }

  public string Description {
    get;
    set;
  }

  public DateTime StartTime {
    get;
    set;
  }

  public DateTime EndTime {
    get;
    set;
  }

  public string CalendarId {
    get;
    set;
  }

  public string refreshToken {
    get;
    set;
  }
}

public class GoogleTokenResponse {
  public string access_type {
    get;
    set;
  }

  public long expires_in {
    get;
    set;
  }

  public string refresh_token {
    get;
    set;
  }

  public string scope {
    get;
    set;
  }

  public string token_type {
    get;
    set;
  }
}

IGoogleCalendarService
public interface IGoogleCalendarService {
  string GetAuthCode();

  Task < GoogleTokenResponse > GetTokens(string code);
  string AddToGoogleCalendar(GoogleCalendarReqDTO googleCalendarReqDTO);
}

GoogleCalendarService
public class GoogleCalendarService: IGoogleCalendarService {

  private readonly HttpClient _httpClient;

  public GoogleCalendarService() {
    _httpClient = new HttpClient();
  }

  public string GetAuthCode() {
    try {

      string scopeURL1 = "https://accounts.google.com/o/oauth2/auth?redirect_uri={0}&prompt={1}&response_type={2}&client_id={3}&scope={4}&access_type={5}";
      var redirectURL = "https://localhost:7272/auth/callback";
      string prompt = "consent";
      string response_type = "code";
      string clientID = ".apps.googleusercontent.com";
      string scope = "https://www.googleapis.com/auth/calendar";
      string access_type = "offline";
      string redirect_uri_encode = Method.urlEncodeForGoogle(redirectURL);
      var mainURL = string.Format(scopeURL1, redirect_uri_encode, prompt, response_type, clientID, scope, access_type);

      return mainURL;
    } catch (Exception ex) {
      return ex.ToString();
    }
  }

  public async Task < GoogleTokenResponse > GetTokens(string code) {

    var clientId = "_____.apps.googleusercontent.com";
    string clientSecret = "_____";
    var redirectURL = "https://localhost:7272/auth/callback";
    var tokenEndpoint = "https://accounts.google.com/o/oauth2/token";
    var content = new StringContent($"code={code}&redirect_uri={Uri.EscapeDataString(redirectURL)}&client_id={clientId}&client_secret={clientSecret}&grant_type=authorization_code", Encoding.UTF8, "application/x-www-form-urlencoded");

    var response = await _httpClient.PostAsync(tokenEndpoint, content);
    var responseContent = await response.Content.ReadAsStringAsync();
    if (response.IsSuccessStatusCode) {
      var tokenResponse = Newtonsoft.Json.JsonConvert.DeserializeObject < GoogleTokenResponse > (responseContent);
      return tokenResponse;
    } else {
      // Handle the error case when authentication fails
      throw new Exception($"Failed to authenticate: {responseContent}");
    }
  }

  public string AddToGoogleCalendar(GoogleCalendarReqDTO googleCalendarReqDTO) {
    try {
      var token = new TokenResponse {
        RefreshToken = googleCalendarReqDTO.refreshToken
      };
      var credentials = new UserCredential(new GoogleAuthorizationCodeFlow(
        new GoogleAuthorizationCodeFlow.Initializer {
          ClientSecrets = new ClientSecrets {
            ClientId = "___.apps.googleusercontent.com", ClientSecret = "__"
          }

        }), "user", token);

      var service = new CalendarService(new BaseClientService.Initializer() {
        HttpClientInitializer = credentials,
      });

      Event newEvent = new Event() {
        Summary = googleCalendarReqDTO.Summary,
          Description = googleCalendarReqDTO.Description,
          Start = new EventDateTime() {
            DateTime = googleCalendarReqDTO.StartTime,
              //TimeZone = Method.WindowsToIana();    //user's time zone
          },
          End = new EventDateTime() {
            DateTime = googleCalendarReqDTO.EndTime,
              //TimeZone = Method.WindowsToIana();    //user's time zone
          },
          Reminders = new Event.RemindersData() {
            UseDefault = false,
              Overrides = new EventReminder[] {

                new EventReminder() {
                    Method = "email", Minutes = 30
                  },

                  new EventReminder() {
                    Method = "popup", Minutes = 15
                  },

                  new EventReminder() {
                    Method = "popup", Minutes = 1
                  },
              }
          }

      };

      EventsResource.InsertRequest insertRequest = service.Events.Insert(newEvent, googleCalendarReqDTO.CalendarId);
      Event createdEvent = insertRequest.Execute();
      return createdEvent.Id;
    } catch (Exception e) {
      Console.WriteLine(e);
      return string.Empty;
    }
  }
}


Method.cs
public static class Method {
  public static string urlEncodeForGoogle(string url) {
    string unreservedChars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-.~";
    StringBuilder result = new StringBuilder();
    foreach(char symbol in url) {
      if (unreservedChars.IndexOf(symbol) != -1) {
        result.Append(symbol);
      } else {
        result.Append("%" + ((int) symbol).ToString("X2"));
      }
    }

    return result.ToString();

  }

  public static string WindowsToIana(string windowsTimeZoneId) {
    if (windowsTimeZoneId.Equals("UTC", StringComparison.Ordinal))
      return "Etc/UTC";

    var tzdbSource = TzdbDateTimeZoneSource.Default;
    var windowsMapping = tzdbSource.WindowsMapping.PrimaryMapping
      .FirstOrDefault(mapping => mapping.Key.Equals(windowsTimeZoneId, StringComparison.OrdinalIgnoreCase));

    return windowsMapping.Value;
  }
}


Program.cs
builder.Services.AddScoped<IGoogleCalendarService, GoogleCalendarService>();

Run your application

 

It will redirect to [Route("/auth/google")].


After a successful sign-in, you will see a warning window because our calendar project on GCP is not verified for the domain name. You can visit the Google Drive link, as I mentioned above.

Google will ask you to give access to your Google calendar, then continue.

It will redirect to [Route("/auth/callback")], and you will get an authorization code.

Now you can copy the refresh token and send the request.
Save the refresh token for future use.




European ASP.NET Core Hosting :: Action Filters in .Net Core

clock June 22, 2023 07:55 by author Peter

What are filters and what are their advantages?
Filters allow us to execute a piece of code before or after any request processing pipeline stage.

The advantages of filters

  • Instead of writing code for each individual action, we write it once and then reuse it.
  • Extensible: They allow us to perform some action ABC before the method and some operation XYZ after the method.


Varieties of Filters

  • Authorization Filters: Run first in the pipeline and determine whether the user is authorized.
  • Resource Filters: Execute after authorization and before the rest of the request pipeline.
  • Action Filters: Execute immediately before and after an action method runs.
  • Exception Filters: Used to handle global exceptions; they execute when an unhandled exception is thrown.
  • Result Filters: Execute after the action method, both before and after the execution of action results.

Action Filters in .NET Core
Action filters execute immediately before and after the execution of an action method. They can perform multiple operations, including modifying the passed arguments and the output.

How Do I Configure Action Filters?
They can be configured globally or per controller/action (custom action-based).

The following steps are identical for both:

  • Create a filter class that implements the IActionFilter interface.
  • The interface requires you to implement two methods: OnActionExecuting and OnActionExecuted.
  • OnActionExecuting is executed before the action method is invoked, while OnActionExecuted is executed after it.

Global Action Filter
After creating this filter, you must add it to the services container.

public class GlobalFilter: IActionFilter {
  public void OnActionExecuted(ActionExecutedContext context) {
    Console.WriteLine("Global Action Executed");
  }

  public void OnActionExecuting(ActionExecutingContext context) {
    Console.WriteLine("Global Action is Executing");
  }
}

public void ConfigureServices(IServiceCollection services) {
  services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
  services.AddHangfire(x =>
    x.UseSqlServerStorage("Data Source=DJHC573\\MYSYSTEM;Initial Catalog=Hangfire-DB;Integrated Security=true;Max Pool Size = 200;"));
  services.AddHangfireServer();
  services.AddMvcCore(options => {
    options.Filters.Add(new GlobalFilter());
  });

}

That’s all you need to do, so this one will run whenever any action gets called.

Custom Action-Based Filter
    Register your filter in Startup.cs; if you have multiple filters, register them in a similar fashion.
    Add ServiceFilter(typeof(YourFilter)) at the controller level or action level, and that’s all.

public class NewsletterFilter: IActionFilter {
  public void OnActionExecuted(ActionExecutedContext context) {
    Console.WriteLine("Action is Executed");
  }

  public void OnActionExecuting(ActionExecutingContext context) {
    Console.WriteLine("Action is Executing");
  }
}


public void ConfigureServices(IServiceCollection services) {
  services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
  //services.AddHangfire(x =>
  //x.UseSqlServerStorage("Data Source=DJHC573\\MYSYSTEM;Initial Catalog=Hangfire-DB;Integrated Security=true;Max Pool Size = 200;"));
  //services.AddHangfireServer();
  services.AddMvcCore(options => {
    options.Filters.Add(new GlobalFilter());
  });
  services.AddScoped < NewsletterFilter > ();

}

public class ValuesController: ControllerBase {
  // GET api/values
  [ServiceFilter(typeof (NewsletterFilter))]
  [HttpGet]
  public ActionResult < IEnumerable < string >> Get() {
    return new string[] {
      "value1",
      "value2"
    };
  }
}


When I applied the Global Action Filter and the Newsletter (custom) filter to the same controller, they executed like this.


*If we want an asynchronous action filter, we can implement IAsyncActionFilter instead of IActionFilter; it has a single method, OnActionExecutionAsync, which takes the action context and a delegate for the next pipeline stage as parameters.
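A minimal sketch of such an asynchronous filter, registered and applied the same way as NewsletterFilter above (an illustrative example, not code from the article):

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Filters;

public class AsyncNewsletterFilter: IAsyncActionFilter {
  public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next) {
    Console.WriteLine("Async filter: before the action executes");
    var executedContext = await next(); // invokes the action (and any later filters)
    Console.WriteLine("Async filter: after the action executes");
  }
}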



European ASP.NET Core Hosting :: Full Cache, Cache Aside, and Read-Through Caching: NCache and Caching Patterns

clock June 14, 2023 07:24 by author Peter

Caching is a technique for storing data in the server's memory. It will enhance the application's performance by reducing database calls. Indirectly, it will improve the efficacy of the database by delivering higher throughput and lower latency than most databases. This blog will cover caching techniques/patterns such as full Cache, cache-aside, and read-through using the NCache cache service provider.

What exactly is NCache?
NCache is an open-source distributed cache for .NET, Java, and Node.js that operates in memory. NCache is incredibly quick and scalable, and it caches application data to minimize database access. NCache is utilized to address the performance issues associated with data storage, databases, and the scalability of .NET, Java, and Node.js applications.

Learn how to use NCache here. Please complete the NCache installation process.

Full Cache

A full cache is a technique that loads all datasets at the outset of an application. This technique is commonly used for applications with static or infrequently changing data. It enhances application performance by decreasing data source calls. NCache provides the Cache Startup Loader feature for implementing the full cache. In other terms, it populates the cache with all data beforehand.

By implementing the ICacheLoader interface, Cache Startup Loader can be simply integrated into our current and future applications. Enabling the Cache Loader/Refresher from NCache Web Manager is the next step. This capability is exclusive to the enterprise edition.

Launch NCache Web Manager, select the existing Clustered Cache or create a new clustered cache with the minimum number of nodes required to configure the full Cache, and then click View Details.

This link will transport you to the Cache Settings page, where you can enable Cache Loader/Refresher.

Please visit the aforementioned link and finalize the CacheLoader / Refresher configuration. Consider, for instance, a web application that imports hundreds of images on the initial page. In this instance, we should utilize a cache startup loader that pre-loads the Cache with all images at application startup, allowing all images to load from the Cache, which is extremely fast and also eliminates the data source request.

Any modifications to the images from the data source will render the Cache obsolete. Cache Refresher is another feature offered by NCache to prevent this invalidation.

public class LoaderRefresher: ICacheLoader {
  private string _connectionString;
  private static ICache _cache;
  public void Init(IDictionary < string, string > parameters, string cacheName) {

    if (parameters != null && parameters.Count > 0)
      _connectionString = parameters.ContainsKey("ConnectionString") ?
        parameters["ConnectionString"] as string : string.Empty;

    _cache = CacheManager.GetCache(cacheName);
  }

  public object LoadDatasetOnStartup(string dataSet) {

    IList < object > loadDatasetAtStartup = null;   // assigned in the omitted switch below
    if (string.IsNullOrEmpty(dataSet))
      throw new InvalidOperationException("Invalid dataset.");

    switch (dataSet.ToLower()) {
      //call FetchProductsFromDataSource based on dataset value
    }

    string[] keys = GetKeys(loadDatasetAtStartup);
    IDictionary < string, CacheItem > cacheData = GetCacheItemDictionary(keys, loadDatasetAtStartup);
    _cache.InsertBulk(cacheData);

    object userContext = DateTime.Now;
    return userContext;
  }

  public object RefreshDataset(string dataSet, object userContext) {
    if (string.IsNullOrEmpty(dataSet))
      throw new InvalidOperationException("Invalid dataset.");

    DateTime ? lastRefreshTime;

    switch (dataSet.ToLower()) {

      //call FetchUpdatedSuppliers based on dataset value and insert the update value in cache.

    }

    userContext = DateTime.Now;
    return userContext;
  }

  public IDictionary < string, RefreshPreference > GetDatasetsToRefresh(IDictionary < string, object > userContexts) {
   IDictionary < string, RefreshPreference > DatasetsNeedToRefresh = new Dictionary < string, RefreshPreference > ();
    DateTime ? lastRefreshTime;
    bool datasetHasUpdated;

    foreach(var dataSet in userContexts.Keys) {
    switch (dataSet.ToLower()) {
        // Based on switch input check the dataset to be updated, if it returns true call invoke DatasetsNeedToRefresh method for the dataset.
      }
    }

    return DatasetsNeedToRefresh;
  }
  public void Dispose() {
    // clean your unmanaged resources
  }
}

Local/Clustered Cache information and Database connection string information are set by the Init Function.

During program startup, the loadDatasetOnStartup function imports the dataset from the data source.

RefreshDataset
This function is invoked according to the Refresh Frequency specified during CacheLoader / Refresher configuration.  
GetDatasetsToRefresh – This function is used to obtain a new dataset in real-time via polling if the refresh-on-event property is enabled, as depicted in the following illustration. This configuration can be enabled on the NCache web manager cache details page under the Cache Loader/Refresher section.
Create a new class to support the IReadThruProvider interface.

The Cache keeps calling this method to update the data set based on the frequency/interval set for “Refresh-Interval”. By default, it is 900 sec.

You can refresh the cached dataset manually using NCache Web Manager or Manager.

NCache web Manager: The refresh now option for the dataset is available in the cache details page under Cache Loader/Refresher section from advanced settings.

PowerShell command: Use the below command to refresh the dataset.
Invoke-RefresherDataset -CacheName [your clustered cache name] -Server [server IP address] -Dataset [your dataset name] -RefreshPreference RefreshNow

Cache-Aside

In the Cache-aside technique, the database and Cache won’t interact. Let me elaborate on the step-by-step process involved in this caching Technique.
Step 1. The application tries to get data from the Cache.
Step 2. In this step, the flow will check whether the requested data (key) is available in the cache server.
Step 3. If the Key is unavailable, the application will directly talk to the database.
Step 4. The data from the database is transferred to the application.
Step 5. Add the data into the Cache.


To implement the cache-aside functionality in our application with NCache, one of the leading caching service providers, use the code snippet below.
try
{
    // Pre-condition: Cache is already connected

    // Get key to fetch from cache
    string key = $"Product:{product.ProductID}";

    // Get item against key from cache in template type
    var retrievedItem = cache.Get<Product>(key);
    if (retrievedItem != null)
    {
        if (retrievedItem is Product)
        {
            // Perform operations according to business logic
        }
    }
    else
    {
        // block to query the datasource and write it to the Cache server
    }
}

So, in the process, if the data is not cached, the application will directly connect with the database to get new data. Each Technique in caching has its pros and cons.
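For completeness, the cache-miss branch (the else block in the snippet above) typically loads the item from the data source and writes it back to the cache. A minimal sketch, where _productRepository.GetById is a hypothetical data-access call and the expiration policy is just an example:

var freshProduct = _productRepository.GetById(product.ProductID);   // hypothetical DB call
if (freshProduct != null)
{
    var cacheItem = new CacheItem(freshProduct)
    {
        Expiration = new Expiration(ExpirationType.Absolute, TimeSpan.FromMinutes(10))
    };
    cache.Insert(key, cacheItem);
}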

Advantages
If the caching layer is unavailable, the application still works: instead of querying the cache, it queries the database directly, so you always have complete control over the database.

Disadvantages

The very first time, access will be slower because the keys may not be available in the cache yet, so the application first has to update the cache by fetching the data from the database. This process may slow down the performance of the application. On the other hand, you may have inconsistent data: a separate service is required to update the data in the cache whenever a table is updated in the database.

Read-through technique
Read-thru is another technique where the cache layer is always synced with the database layer. The application will query the Cache directly to get the data. If the requested Key is not available in the Cache, the cache layer will directly talk with the database layer and update the Cache. The step-by-step process involved in this caching Technique.

Step 1. The application tries to get data from the Cache.
Step 2. Check the requested key availability in Cache.
Step 3. If the Key is not available, the cache layer will directly talk with the database layer.
Step 4. The new data will be added to the Cache.
Step 5. The cache layer will return the data to the application.


public class SqlReadThruProvider : Runtime.DatasourceProviders.IReadThruProvider
    {
        private SqlDatasource sqlDatasource;

        public void Init(IDictionary parameters, string cacheId)
        {
            object connString = parameters["connstring"];
            sqlDatasource = new SqlDatasource();
            sqlDatasource.Connect(connString == null ? "" : connString.ToString());
        }

        public void Dispose()
        {
            sqlDatasource.DisConnect();
        }

        public ProviderCacheItem LoadFromSource(string key)
        {
            ProviderCacheItem cacheItem = new ProviderCacheItem(sqlDatasource.LoadCustomer(key));
            cacheItem.ResyncOptions.ResyncOnExpiration = true;
            return cacheItem;
        }

        public IDictionary<string, ProviderCacheItem> LoadFromSource(ICollection<string> keys)
        {
            IDictionary<string, ProviderCacheItem> providerItems = new Dictionary<string, ProviderCacheItem>();
            foreach (string key in keys)
            {
                object data = sqlDatasource.LoadCustomer(key);
                ProviderCacheItem item = new ProviderCacheItem(data);
                item.Expiration = new Expiration(ExpirationType.DefaultAbsolute);
                item.Group = "customers";

                providerItems.Add(key, item);
            }
            return providerItems;
        }

        public ProviderDataTypeItem<IEnumerable> LoadDataTypeFromSource(string key, DistributedDataType dataType)
        {
            ProviderDataTypeItem<IEnumerable> providerItem = null;
            switch (dataType)
            {

                case DistributedDataType.Counter:
                    providerItem = new ProviderDataTypeItem<IEnumerable>(sqlDatasource.GetCustomerCountByCompanyName(key));
                    break;
                case DistributedDataType.Dictionary:
                    providerItem = new ProviderDataTypeItem<IEnumerable>(sqlDatasource.LoadCustomersByCity(key));
                    break;
                case DistributedDataType.List:
                    providerItem = new ProviderDataTypeItem<IEnumerable>(sqlDatasource.LoadCustomersFromCountry(key));
                    break;
                case DistributedDataType.Queue:
                    providerItem = new ProviderDataTypeItem<IEnumerable>(sqlDatasource.LoadCustomersByOrder(key));
                    break;
                case DistributedDataType.Set:

                    providerItem = new ProviderDataTypeItem<IEnumerable>(sqlDatasource.LoadOrderIDsByCustomer(key));
                    break;
            }

            return providerItem;
        }
    }

We have five methods implemented from the IReadThruProvider interface.

  • Init(): establish the connection to the data source.
  • LoadFromSource(string key): This method is used to load the data to the Cache from the source based on the Key.
  • LoadFromSource(ICollection<string> keys): This method is used to load multiple objects from the data source and returns the dictionary of keys with respective data contained in ProviderCacheItem
  • LoadDataTypeFromSource(string key, DistributedDataType dataType): This method load data structure from the external data source and returns the data Structure contained in ProviderCacheItem, which is enumerated.
  • Dispose(): This function will release the resource.

var ReadThruProviderName = "SqlReadThruProvider";
var customer = _cache.Get<Customer>(CustomerID, new ReadThruOptions(ReadMode.ReadThru, ReadThruProviderName));


The above statements are used to get data from the Cache with the read mode as ReadThru.

Pros
The data in the cache is always up to date, the application code doesn’t query the database directly, and everything is taken care of by the caching layer.

Cons
Single point of failure - if the caching layer is not available, the application can’t retrieve data because it is not connected to the database directly.

We have seen what full cache, cache-aside, and read-through caching are, and how NCache, as a distributed cache provider, can be used to implement all of these techniques. Each technique has its pros and cons, so based on the requirement, we need to understand which is the right choice to solve our business problem.



European ASP.NET Core Hosting :: File Picker in .NET MAUI

clock June 6, 2023 07:11 by author Peter

.NET MAUI is a cross-platform framework that enables developers to create native mobile and desktop applications using C# and XAML. It facilitates the creation of apps that operate seamlessly on Android, iOS, macOS, and Windows using a single codebase. This open-source platform is an evolution of Xamarin.Forms, extending its applicability to desktop scenarios and enhancing UI controls for better performance and extensibility.

Using .NET MAUI, you can create applications that run on multiple platforms, including Android, iOS, macOS, and Windows, from a single codebase.

.NET MAUI enables efficient cross-platform development, eliminating the need for distinct codebases for each platform and simplifying application maintenance and updates.

.NET MAUI saves time and effort and simplifies application maintenance. Visual Studio 2022 supports .NET 7 and .NET MAUI application development.


This article demonstrates how to implement a File Picker in a.NET MAUI application.
Project Setup

    To create a new project, launch Visual Studio 2022 and select Create a new project in the start window.

In the Create a new project window, select MAUI in the All project types drop-down, select the .NET MAUI App template, and click the Next button:

Give your project a name, pick an appropriate location for it, then click the Next button in the Configure your new project window:


In the Additional information window, click the Create button:

Implementation
First, we need to implement the screen design as per our requirements. In this tutorial, we will use three controls - a button, an image, and a label - as in the following code block.

<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://schemas.microsoft.com/dotnet/2021/maui"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="MauiFilePicker.MainPage">

    <ScrollView>
        <VerticalStackLayout
            Spacing="25"
            Padding="30"
            VerticalOptions="Start">

            <Button
                x:Name="ImageSelect"
                Text="Select Barcode"
                SemanticProperties.Hint="Select Image"
                Clicked="SelectBarcode"
                HorizontalOptions="Center" />
            <Label x:Name="outputText"
                   Padding="10"/>
            <Image
                x:Name="barcodeImage"
                SemanticProperties.Description="Selected Barcode"
                HeightRequest="200"
                HorizontalOptions="Center" />
        </VerticalStackLayout>
    </ScrollView>

</ContentPage>


Here,

    Button - used to select images from the device.
    Image - used to display the selected image.
    Label - used to display the file path once the image is selected through the file picker.

Functional part
    Add a click event for the button to select the image like below.
private async void SelectBarcode(object sender, EventArgs e)
{
    var images = await FilePicker.Default.PickAsync(new PickOptions
    {
        PickerTitle = "Pick Barcode/QR Code Image",
        FileTypes = FilePickerFileType.Images
    });
    if (images == null)
        return; // the user cancelled the picker
    var imageSource = images.FullPath.ToString();
    barcodeImage.Source = imageSource;
    outputText.Text = imageSource;
}


Here,
FileTypes: The default file types available for selection in the FilePicker are FilePickerFileType.Images, FilePickerFileType.Png, and FilePickerFileType.Videos. However, if you need to specify custom file types for a specific platform, you can create an instance of the FilePickerFileType class. This allows you to define the desired file types according to your requirements, as sketched below.
PickerTitle: The PickOptions.PickerTitle is presented to the user, and its behavior varies across platforms.
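A small sketch of such a custom FilePickerFileType, built from a per-platform dictionary (the type identifiers shown are examples, not from the article):

var customFileType = new FilePickerFileType(
    new Dictionary<DevicePlatform, IEnumerable<string>>
    {
        { DevicePlatform.iOS, new[] { "public.image" } },                 // UTType identifiers
        { DevicePlatform.Android, new[] { "image/png", "image/jpeg" } },  // MIME types
        { DevicePlatform.WinUI, new[] { ".png", ".jpg", ".jpeg" } },      // file extensions
    });

var options = new PickOptions
{
    PickerTitle = "Pick Barcode/QR Code Image",
    FileTypes = customFileType
};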

Full Code
namespace MauiFilePicker;

public partial class MainPage : ContentPage
{
    int count = 0;

    public MainPage()
    {
        InitializeComponent();
    }

    private async void SelectBarcode(object sender, EventArgs e)
    {
        var images = await FilePicker.Default.PickAsync(new PickOptions
        {
            PickerTitle = "Pick Barcode/QR Code Image",
            FileTypes = FilePickerFileType.Images
        });
        if (images == null)
            return; // the user cancelled the picker
        var imageSource = images.FullPath.ToString();
        barcodeImage.Source = imageSource;
        outputText.Text = imageSource;
    }
}


<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://schemas.microsoft.com/dotnet/2021/maui"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="MauiFilePicker.MainPage">

    <ScrollView>
        <VerticalStackLayout
            Spacing="25"
            Padding="30"
            VerticalOptions="Start">

            <Button
                x:Name="ImageSelect"
                Text="Select Barcode"
                SemanticProperties.Hint="Select Image"
                Clicked="SelectBarcode"
                HorizontalOptions="Center" />
            <Label x:Name="outputText"
                   Padding="10"/>
            <Image
                x:Name="barcodeImage"
                SemanticProperties.Description="Selected Barcode"
                HeightRequest="200"
                HorizontalOptions="Center" />
        </VerticalStackLayout>
    </ScrollView>

</ContentPage>


Demo
Android


 



European ASP.NET Core Hosting :: Web API HttpContextAccessor in Asp.NET Core

clock May 29, 2023 09:55 by author Peter

The HttpContextAccessor class provides access to the present HTTP request and response context within ASP.NET Core Web API. It provides access to headers, cookies, query parameters, and user claims, among other aspects of the HTTP request and response.

Step 1. Add the dependency in your Program.cs

Example source code
builder.Services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();

Utilizing HttpContextAccessor
In the example given below, the User is obtained, and then the company ID and user ID are obtained from the Auth Microservice.

Code Example
using AC_Projects_API_Domain_Layer.Models;
using AC_Projects_API_Repository_Layer.IRepository;
using AC_Projects_API_Service_Layer.IService;
using AutoMapper;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Claims;
using System.Text;
using System.Threading.Tasks;

namespace AC_Projects_API_Service_Layer.Services
{
    public class ColumsService : ICustomService<Columns>
    {
        private readonly IRepository<Columns> _statusRepository;
        public long LoggedInUserCompanyId;
        public long LoggedInUserId;
        private readonly IHttpContextAccessor _httpContextAccessor;

        public ColumsService(
            IRepository<Columns> statusRepository,
            IMapper mapper,
            IHttpContextAccessor httpContextAccessor
            )
        {
            _statusRepository = statusRepository;
            _httpContextAccessor = httpContextAccessor;
            ClaimsIdentity user = (ClaimsIdentity)_httpContextAccessor.HttpContext.User.Identity;
            LoggedInUserCompanyId = Convert.ToUInt32(user.FindFirst("CompanyId").Value.ToString());
            LoggedInUserId = Convert.ToUInt32(user.FindFirst("UserId").Value.ToString());
        }
        public void Delete(Columns entity)
        {
            try
            {
                if (entity != null)
                {
                    entity.ModifiedBy = LoggedInUserId;
                    entity.ModifiedDate = DateTime.Now;
                    _statusRepository.Delete(entity);
                    _statusRepository.SaveChanges();
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        public Columns Get(long Id)
        {
            try
            {
                var obj = _statusRepository.Get(Id);
                if (obj != null)
                {
                    return obj;
                }
                else
                {
                    return null;
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        public IEnumerable<Columns> GetAll()
        {
            try
            {
                var obj = _statusRepository.GetAll();
                if (obj != null)
                {
                    return obj;
                }
                else
                {
                    return null;
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        public Columns Insert(Columns entity)
        {
            try
            {
                if (entity != null)
                {
                    entity.CreatedBy = LoggedInUserId;
                    entity.CreatedDate = DateTime.Now;
                    entity.CompanyId = LoggedInUserCompanyId;
                    _statusRepository.Insert(entity);
                    _statusRepository.SaveChanges();
                    return entity;
                }
                return null;
            }
            catch (Exception)
            {
                throw;
            }
        }

        public void Remove(Columns entity)
        {
            try
            {
                if (entity != null)
                {
                    _statusRepository.Remove(entity);
                    _statusRepository.SaveChanges();
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        public void Update(Columns entity)
        {
            try
            {
                if (entity != null)
                {
                    entity.ModifiedBy = LoggedInUserId;
                    entity.ModifiedDate = DateTime.Now;
                    _statusRepository.Update(entity);
                    _statusRepository.SaveChanges();
                }
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}

Why do we use HttpContextAccessor?
For the following reasons, the `HttpContextAccessor` is frequently utilized in ASP.NET Core Web API applications:

Accessing HTTP Request and Response
Access to the current HTTP request and response context is the main goal of the `HttpContextAccessor` component. You can use it to get details about incoming requests, including headers, cookies, query parameters, and user claims. By gaining access to the "Response" property, you can also alter the HTTP response.
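A minimal sketch of that pattern: a small service reading a request header through the injected accessor (the name ClientInfoService and the header choice are just an example, not from the article).

using Microsoft.AspNetCore.Http;

public class ClientInfoService
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public ClientInfoService(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    public string GetUserAgent()
    {
        // HttpContext can be null outside of a request (e.g. in background work)
        var context = _httpContextAccessor.HttpContext;
        return context?.Request.Headers["User-Agent"].ToString();
    }
}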

Authentication and Authorization

You can gain access to authentication-related data via the `HttpContextAccessor`, including the claims, identity, and roles of an authenticated user. This can be used to implement authorization checks inside of your application, figure out the permissions of the current user, or alter behavior based on authentication status.

Logging and Diagnostics
You can record information relevant to the current HTTP request using the `HttpContextAccessor`. To assist with debugging or monitoring, you can use it to record request-specific information, such as the URL, headers, or user information. It enables you to detect problems effectively by correlating log entries with particular requests.

Custom Middleware and Filters
You can design action filters or middleware components in ASP.NET Core that need access to the HTTP context. You can access request-specific data and adjust the request or response pipeline by using the `HttpContextAccessor`, which enables you to inject the current context into your custom components.

Integration with External Services

To carry out specific tasks, some third-party services or libraries may need access to the HTTP context. By using the `HttpContextAccessor` to give them the necessary context, these external components or services can interact with the current request and response.

Although the `HttpContextAccessor` offers convenient access to the HTTP context, it should be used sparingly. By exposing the HTTP context too broadly in your application, you risk creating security holes or adding dependencies that make your code harder to maintain. The scope and purpose of `HttpContextAccessor` usage in your application should therefore be carefully considered.

In an ASP.NET Core Web API, the `HttpContextAccessor` gives you access to the current HTTP request and response context. It enables you to carry out actions based on authentication and authorization, alter the response, and retrieve information about the request. It is frequently employed for logging, diagnostics, building custom middleware, integrating with outside services, and other purposes. It should be used cautiously, taking security into account and keeping the codebase clean.
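As a minimal sketch of the wiring this relies on: the accessor is not registered by default, so it must be added to the service container once and then injected wherever request data is needed. The service class name below is illustrative; the "UserId" claim mirrors the one read in the service class earlier in this post.

using Microsoft.AspNetCore.Http;

// Program.cs: register the accessor (it is not added to the container by default)
builder.Services.AddHttpContextAccessor();

// Any registered service can then read the current request through the accessor
public class CurrentUserService
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public CurrentUserService(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    // Returns the value of the "UserId" claim, or null when there is no request or user
    public string? GetUserId() =>
        _httpContextAccessor.HttpContext?.User?.FindFirst("UserId")?.Value;
}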



European Entity Framework Hosting - UK :: Top 8 Reasons to use it in .Net Application

clock May 22, 2023 09:39 by author Peter

EF Core, or Entity Framework Core, is based on the ORM (Object-Relational Mapper) model. With the help of the ORM model, EF Core gives us an intermediate layer between the domain model and the database objects. Microsoft positions EF Core as a data access API that lets us communicate with the database using .NET POCOs and LINQ. Using Entity Framework in an application typically reduces the time needed to implement the CRUD layer against the database. In this article, we will mainly focus on the new features introduced in EF Core 7.0.

EF Core Architecture Design
Entity Framework Core, or EF Core, is a cross-platform, lightweight, and open-source version of the existing Entity Framework data access technology. In this architecture, we focus on our application code while EF Core handles the database plumbing. With EF Core, we can do things such as:

  • Can load data by using C# class objects or entities.
  • Can perform add, update, and delete operations by invoking related methods for those entities.
  • Can map multiple database table objects into a single C# entity object.
  • Can handle concurrency when multiple users update the same database table data.
  • Can use LINQ query to fetch data from the database.
  • Can establish a connection with different types of databases like SQL Server, SQLite, Azure Cosmos DB, PostgreSQL, etc.
  • Can build and develop our domain model based on any existing database.
  • Can maintain our database schema based on our domain model.

The diagram below represents the architecture behind Entity Framework Core.


The DbContext is a special class that represents a unit of work and provides methods to configure options, connection strings, logging, and the model used to map your domain to the database. A class derived from DbContext typically does the following (a minimal sketch follows this list):
    Establish an active connection session with the database.
    Save and query instances of entities.
    Include properties of type DbSet<T> representing tables in the database.
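A minimal sketch of such a derived context might look like this (the Blog entity and connection string are illustrative, not part of the article's project):

using Microsoft.EntityFrameworkCore;

public class BloggingContext : DbContext
{
    // Each DbSet<T> property represents a table in the database
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseSqlServer(
            "Server=.\\SQLEXPRESS;Database=Blogging;Integrated Security=True;TrustServerCertificate=True;");
}

public class Blog
{
    public int Id { get; set; }
    public string Name { get; set; }
}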

In EF Core, the database provider translates entity operations and LINQ queries into SQL commands at execution time. A database provider:
    Is a plug-in library designed for a specific database engine, such as SQL Server, Azure Cosmos DB, or PostgreSQL.
    Translates method calls and LINQ queries to the database's native SQL dialect.
    Extends EF Core to enable functionality that's unique to the database engine.

What’s new in EF Core 7.0?
Alongside .NET 7.0, Microsoft also released a new version of Entity Framework, EF Core 7.0. It has many new features compared to previous versions. Some of the major changes in this new version are described below.
New Bulk Update and Bulk Delete Methods

Before EF Core 7.0, we commonly used the SaveChanges method to save or update data in the database. This method detects changes in the tracked entity objects and sends the updated information to the database. So, in earlier versions of EF Core, to update a record we first had to query it, modify the result, and then call SaveChanges. To delete records we followed a similar workflow: retrieve the objects, mark their state as deleted, and then call SaveChanges.

Because of this workflow, developers and community members had long requested a feature, similar to a LINQ query, that pushes changes directly to the database. In EF Core 7.0, Microsoft introduced two new methods for this purpose: ExecuteUpdate and ExecuteDelete. Keep in mind that these two methods do not replace SaveChanges; they give us more flexibility when working with the database.

To delete records, we can now write the code below:
context.Product.Where(s => s.ProductId == 100).ExecuteDelete();

In the above example we delete a single record, but we can delete multiple records at once by providing an appropriate filter condition in the LINQ expression, as shown in the sketch below. Most importantly, this command executes immediately and does not require a call to SaveChanges.
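For instance, a hedged sketch of a multi-row delete (the IsDiscontinued and Price properties are illustrative):

// Deletes every matching row in a single round trip and returns the number of rows affected
int rowsDeleted = context.Product
    .Where(p => p.IsDiscontinued && p.Price < 10)
    .ExecuteDelete();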

Just like the delete operation, we can perform updates as well. For that, we use the SetProperty method to assign the new values.
context.People
        .Where(p => p.City == "Calcutta")
        .ExecuteUpdate(s => s.SetProperty(c => c.City, c => "Kolkata"));


In the above example, we update people's city name from Calcutta to Kolkata. Executing this statement updates the City value of every matching record in a single shot, running the SQL below against the database.
UPDATE [p]
SET [p].[City] = N'Kolkata'
FROM [People] AS [p]
WHERE [p].[City] = N'Calcutta'

Mapping JSON Type Column

Storing data in JSON columns is one of the most widely used database features today. It lets us store object-shaped data in a single column without requiring a strict relational structure. EF Core 7 now supports JSON columns, allowing us to map .NET object types to JSON documents. We can also run LINQ queries against these JSON columns, as well as save and update changes. As noted by the team, this feature is still basic, and more advanced functionality is expected in the next release.

To demonstrate this support, we will use an Employee class that contains a child object, AddressInfo, as shown below.
public class Employee
{
    public int Id { get; set; }
    public string FullName { get; set; }
    public int Age { get; set; }
    public Address AddressInfo { get; set; }
}

public class Address
{
    public string State { get; set; }
    public string City { get; set; }
    public string Country { get; set; }
    public string PostalCode { get; set; }
}

Now, for the Employee entity, we configure the Employee.AddressInfo property in the OnModelCreating method. The OwnsOne mapping provides an overload that lets us configure the owned navigation through an OwnedNavigationBuilder, and calling ToJson() maps it to a JSON column.
modelBuilder.Entity<Employee>()
            .OwnsOne(p => p.AddressInfo, jb => { jb.ToJson(); });
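With that mapping in place, a LINQ query can reach into the JSON document. A small sketch, assuming the context exposes a DbSet<Employee> named Employees:

// EF Core 7 translates this into SQL that reads inside the JSON column (e.g. JSON_VALUE on SQL Server)
var kolkataEmployees = context.Employees
    .Where(e => e.AddressInfo.City == "Kolkata")
    .ToList();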


Performance Improvement on SaveChanges Method
In EF Core 6.0, one of the most important features was a performance improvement for non-tracking queries. At the time, the EF Core team promised further performance improvements in the next version, EF Core 7, and they have kept that promise.

In EF Core's default transaction handling, calling SaveChanges wraps the work in a database transaction so that if one statement fails, the whole operation is rolled back. Previously this also happened when only a single statement against a single table was sent, which is unnecessary because a lone statement does not need an explicit transaction. In EF Core 7, when SaveChanges affects a single entity, EF Core sends just one command (the DML statement) instead of three (BEGIN TRANSACTION, the DML statement, COMMIT TRANSACTION). As a result, SaveChanges in EF Core 7 is noticeably faster.

Another change concerns insert statements in EF Core 7. Previously, an insert in EF Core executed two statements against the database: the INSERT statement paired with a SELECT statement to return the database-generated value for a primary or foreign key. In EF Core 7, the SQL Server provider collapses this into a single command by using the OUTPUT clause in the INSERT statement.

SQL command executed by EF Core 6:
INSERT INTO [Department] ([Name])
    VALUES (@p0);
    SELECT [DeptId]
    FROM [Department]
    WHERE @@ROWCOUNT = 1
    AND [DeptId] = scope_identity();

Now, in EF Core 7, the following SQL command is executed:
INSERT INTO [Department] ([Name])
    OUTPUT INSERTED.[DeptId]
    VALUES (@p0);


Mapping Stored Procedure
In the classic Entity Framework, we could map entities to stored procedures so that SaveChanges invoked the stored procedure with the appropriate parameters. EF Core 7 makes this much simpler by introducing several fluent API methods: InsertUsingStoredProcedure, UpdateUsingStoredProcedure, and DeleteUsingStoredProcedure. These are applied to an entity in the OnModelCreating method. Each of these methods takes the procedure name as a string and a builder that configures the parameters passed to the stored procedure.
modelBuilder.Entity<Person>()
    .InsertUsingStoredProcedure("uspPeopleInsert",
        spbuilder => spbuilder
            .HasParameter(p => p.PersonId, pb => pb.IsOutput().HasName("id"))
            .HasParameter(p => p.FirstName)
            .HasParameter(p => p.LastName)
);


Encrypt Defaults to True in SQL Server Connections
In EF Core 7, another breaking change is that Encrypt now defaults to true in the connection string. This is a breaking change in the Microsoft.Data.SqlClient package. In earlier versions, the default was Encrypt=False; in the latest version it is treated as true by default. This means the database server must be configured with a valid certificate, and the client must trust this certificate when establishing a connection.
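For local development against a server without a trusted certificate, a hedged workaround is to opt out explicitly in the connection string (do not do this in production):

// Development only: either turn encryption off...
var devConnectionString =
    "Server=.\\SQLEXPRESS;Database=WebApp1;Integrated Security=True;Encrypt=False;";

// ...or keep encryption but trust the server's self-signed certificate
var trustedConnectionString =
    "Server=.\\SQLEXPRESS;Database=WebApp1;Integrated Security=True;Encrypt=True;TrustServerCertificate=True;";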
DbContext and API Enhancements

In EF Core 7.0, many small improvements have been made to DbContext and its related classes. Some of them are:
  • Warnings for uninitialized DbSet properties are now suppressed automatically. DbSet properties are public and settable by default and are initialized by the DbContext, so the compiler no longer flags them as unassigned.
  • Logs can now distinguish a canceled operation from a genuine failure. Cancellation may come from the application explicitly canceling a query, or from the CancellationToken passed to the operating methods.
  • EntityEntry methods on the model classes now provide overloads that accept IProperty and INavigation.
  • EntityEntry can be used with shared entities, where EF Core uses the same CLR type for several different entity types. These are commonly called "shared-type entity types" and are treated as a dictionary whose key/value pairs hold the entity type's properties (a hedged sketch follows).
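A hedged sketch of such a shared-type ("property bag") entity type, with illustrative names:

// Requires: using System.Collections.Generic; using Microsoft.EntityFrameworkCore;
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // One CLR type (Dictionary<string, object>) mapped as a named entity type
    modelBuilder.SharedTypeEntity<Dictionary<string, object>>("BlogMetadata", b =>
    {
        b.IndexerProperty<int>("Id");
        b.IndexerProperty<string>("Title");
    });
}

// Usage: query the named set rather than a CLR type
// var metadata = context.Set<Dictionary<string, object>>("BlogMetadata").ToList();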

Model Building
In EF Core 7.0, there are also many small improvements to model building. Some of them are:
Indexes can be configured as ascending or descending:
    modelBuilder
        .Entity<Post>()
        .HasIndex(post => post.Title)
        .IsDescending();

    modelBuilder
        .Entity<Blog>()
        .HasIndex(blog => new { blog.Name, blog.Owner })
        .IsDescending(false, true);


    Composite keys can be mapped with the [PrimaryKey] attribute:
    [PrimaryKey(nameof(PostId), nameof(CommentId))]
    public class Comment
    {
        public int PostId { get; set; }
        public int CommentId { get; set; }
        public string CommentText { get; set; } = null!;
    }


The delete behavior of a relationship can be defined with a mapping attribute. The default value is DeleteBehavior.Cascade, but it can be changed, for example to DeleteBehavior.NoAction:
    public class Post
    {
        public int Id { get; set; }
        public string? Title { get; set; }

        [DeleteBehavior(DeleteBehavior.NoAction)]
        public Blog Blog { get; set; } = null!;
    }

Improved value generation
EF7 provides two significant improvements to the automatic generation of values for key properties. First, key types based on value converters can now use automatically generated key values, as long as the underlying type supports this; it is configured in the usual way with ValueGeneratedOnAdd. Second, EF7 introduces support for a database sequence attached to the key column's default constraint. In its simplest form, this just requires telling EF Core to use a sequence for the key property, as in the sketch below.
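In code, that can look like the following sketch (the Blog entity is illustrative):

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Key values come from a database sequence attached to the column's default constraint (SQL Server)
    modelBuilder.Entity<Blog>()
        .Property(b => b.Id)
        .UseSequence();
}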



European ASP.NET Core Hosting :: Non-nullable Property Must Contain a Non-null Value in .NET 6

clock January 17, 2022 08:19 by author Peter

Prior to .NET 6, null-state analysis and nullable variable annotations were disabled for existing projects; for .NET 6 projects they are enabled by default. Because of this, all properties and models show warnings for non-nullable elements. In this article, I will explore this null-reference warning in more detail and also provide the recommended solutions.

With recent versions of Visual Studio 2022 and .NET 6, we get a common warning for models and properties. The warning is shown below.

    Non-nullable property must contain a non-null value when exiting constructor. Consider declaring the property as nullable

Recently, I was getting this warning for most of the properties which are not specified as nullable. I was not happy with this warning though I was able to run my .NET 6 application smoothly.

Exact Warning

    Non-nullable property ‘propertyname’ must contain a non-null value when exiting constructor. Consider declaring the property as nullable.

After carefully reading this message, the warning makes sense for those properties. To minimize the likelihood that our code causes the runtime to throw System.NullReferenceException, we need to resolve it.

In other words, the compiler is warning that the default value of these properties (null) does not match their declared type (a non-nullable string).

Let’s explore a bit more on nullable references.

Null state Analysis

This analysis tracks null references: places where our code may produce a null value, which could result in a System.NullReferenceException at runtime. The .NET compiler uses null-state analysis to determine the null-state of a variable or property.

To overcome this System.NullReferenceException and resolve this warning, we can assign a default value to the variable. Alternatively, we can check against null before doing any modification or operation.

Sample:
string peter = null;
//code
//code

// Warning: dereference of a possibly null reference.
Console.WriteLine($"The length of the message is {peter.Length}");

if (peter != null)
{
    // No warning. Analysis determined "peter" is not null here.
    Console.WriteLine($"The length of the message is {peter.Length}");
}

peter = "Hello, World!";

// No warning. Analysis determined "peter" is not null.
Console.WriteLine($"The length of the message is {peter.Length}");

This is how static analysis determines that peter may be null when dereferenced (maybe-null). If we check for null before using it, or assign it a value, the warning goes away.

Null variable Annotations
With nullable annotations, we explicitly declare whether a reference-type variable may be null or not.

We can declare a variable as non-nullable reference type or nullable reference type, which provides clear indication to null-state analysis. Based on the declaration, the compiler gets enough information to make assumptions about the variable.

Nullable reference type: we use the “?” syntax, the same one used for nullable value types. By appending ? to the type of the variable, we declare it as nullable.

Example

string? peter;

With nullable annotations enabled, any reference-type variable declared without ? is treated as non-nullable in the existing code. However, implicitly typed local variables (var) are considered nullable.

We can also override a warning if we know the variable is not null. Even when we know the value cannot be null, the compiler may still flag it as maybe-null in its null-state analysis. In that case we can place the null-forgiving operator ! after the variable to override the compiler's analysis:

Example
peter!.Length

Generally, the rules for handling a nullable type parameter T? as of C# 8.0 are as follows (see the sketch after this list).
    If the type argument for T is a reference type, T? references the corresponding nullable reference type. For example, if T is a string, then T? is a string?.
    If the type argument for T is a value type, T? references the same value type, T. For example, if T is an int, then T? is also an int.
    If the type argument for T is a nullable reference type, T? references that same nullable reference type. For example, if T is a string?, then T? is also a string?.
    If the type argument for T is a nullable value type, T? references that same nullable value type. For example, if T is an int?, then T? is also an int?.
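A small sketch of these rules for an unconstrained type parameter (the Box<T> type is illustrative):

#nullable enable
public class Box<T>
{
    // For unconstrained T, T? means "default-able T": null for reference types, default(T) for value types
    public T? ValueOrDefault { get; set; }
}

// Box<string>.ValueOrDefault is string? (may be null)
// Box<int>.ValueOrDefault is int (defaults to 0, not int?)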

For more detail Click here.

Now we will explore possible solutions.

Solution
Now, we will discuss solutions for the above warning. We can resolve it in three ways.

Solution 1
We can simply make the properties nullable, as shown below:

Sample:
public class AppSetting {
    public string? ReferenceKey { get; set; }
    public string? Value { get; set; }
    public string? Description { get; set; }
}

Solution 2

We can assign default value to those properties as shown below:
public class AppSetting {
    public string ReferenceKey { get; set; } = "Default Key";
    public string Value { get; set; } = "Default Value";
    public string Description { get; set; } = "Default Description";
}


Alternatively, you can give non-nullable strings a reasonable default value, such as string.Empty.

Example
public class AppSetting {
    public string ReferenceKey { get; set; } = string.Empty;
    public string Value { get; set; } = string.Empty;
    public string Description { get; set; } = string.Empty;
}


Solution 3
You can disable this warning at the project level by removing the line below from the project (.csproj) file, or by setting its value to disable.
<Nullable>enable</Nullable>

For more detail check this link.

Alternatively, right-click the project and open its properties, then change the Nullable setting under Build > General, as shown.
Non-nullable property must contain a non-null value when exiting constructor. Consider declaring the property as nullable



European ASP.NET Core Hosting :: Send Push Notification To Android Device From .Net Core Web API

clock September 20, 2021 06:57 by author Peter

In this article, you will learn how to send push notifications to Android devices using FCM from a .NET Core Web API. You will also learn how to create a Firebase project and get the Server Key and Sender ID step by step. Our main focus is how to handle the send-notification functionality on the API side.
What is FCM?

Firebase Cloud Messaging (FCM) is a free cloud service from Google that allows you to send notifications and messages to your users.

It is a cross-platform messaging solution. You can send notifications and messages across a variety of platforms, including Android, iOS and web applications.
Create Firebase Project

Go to Firebase Console.

Follow the steps to set up the project.
After adding the project to the Firebase, add App to the same project.

 

Enter package name, app name, and SHA-1 key of your Android Studio project.

Follow all the steps to complete the add Firebase to your Android app. If you face any problem, you can refer to  Add Firebase to your Android Project.
Get FCM Sender ID & Server Key

Click on the “Gear” icon and access “Project settings”.

Go to the “Cloud Messaging” section and you will have access to the Sender ID and Server Key. We will use it later in the API.


For setting up a Firebase Cloud Messaging client app on Android and getting FCM registration token or device token please refer to this Set up an Android client.
How to send FCM push notification from an ASP.NET Core Web API project.

What is the CorePush Package?

It is a very lightweight .NET Core push notification library for Android and iOS. I used it in my project to send Firebase Android and Apple iOS push notifications. In this article, I focus only on sending push notifications to Android devices. Useful links:

  • NuGet package
  • Documentation

Step 1 – Create Project
Open Visual Studio click on “Create a new project”.

Select ASP.NET Core Web Application option.


Add Project name and Solution name.

Select the “API” option with “.NET Core” and “ASP.NET Core 3.1” to create ASP.NET API.


You can see the default folder structure.

Step 2 – Install NuGet Packages
Open Package Manager Console


And run the below commands one by one:

  1. Install-Package CorePush
  2. Install-Package Newtonsoft.Json
  3. Install-Package Swashbuckle.AspNetCore

Step 3 – Create Models for the controller
Now, create a directory named Models and add the following files:

  • ResponseModel.cs
  • NotificationModel.cs
  • FcmNotificationSetting.cs

ResponseModel.cs will contain definitions for response models.
NotificationModel.cs will contain definitions for notification and google notification model
FcmNotificationSetting.cs will contain definitions for FCM notification settings.

Code for ResponseModel.cs file,
using Newtonsoft.Json;

namespace net_core_api_push_notification_demo.Models
{
    public class ResponseModel
    {
        [JsonProperty("isSuccess")]
        public bool IsSuccess { get; set; }
        [JsonProperty("message")]
        public string Message { get; set; }
    }
}


Code for NotificationModel.cs file,
using Newtonsoft.Json;

namespace net_core_api_push_notification_demo.Models
{
    public class NotificationModel
    {
        [JsonProperty("deviceId")]
        public string DeviceId { get; set; }
        [JsonProperty("isAndroiodDevice")]
        public bool IsAndroiodDevice { get; set; }
        [JsonProperty("title")]
        public string Title { get; set; }
        [JsonProperty("body")]
        public string Body { get; set; }
    }

    public class GoogleNotification
    {
        public class DataPayload
        {
            [JsonProperty("title")]
            public string Title { get; set; }
            [JsonProperty("body")]
            public string Body { get; set; }
        }
        [JsonProperty("priority")]
        public string Priority { get; set; } = "high";
        [JsonProperty("data")]
        public DataPayload Data { get; set; }
        [JsonProperty("notification")]
        public DataPayload Notification { get; set; }
    }
}


Code for FcmNotificationSetting.cs file,
namespace net_core_api_push_notification_demo.Models
{
    public class FcmNotificationSetting
    {
        public string SenderId { get; set; }
        public string ServerKey { get; set; }
    }
}


Step 4 – Update appsettings.Development.json file

Code for appsettings.Development.json file,

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "FcmNotification": {
    "SenderId": "*SENDER_ID*",
    "ServerKey": "*SERVER_KEY*"
  }
}

Replace *SENDER_ID* with your sender Id.
Replace *SERVER_KEY* with your server key.
These are the values we found in the Cloud Messaging section of the Firebase project settings.

Step 5 – Create Service
Now, create a directory with the name Services and add the following file.
Code for NotificationService.cs file,
using CorePush.Google;
using Microsoft.Extensions.Options;
using net_core_api_push_notification_demo.Models;
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using static net_core_api_push_notification_demo.Models.GoogleNotification;

namespace net_core_api_push_notification_demo.Services
{
    public interface INotificationService
    {
        Task<ResponseModel> SendNotification(NotificationModel notificationModel);
    }

    public class NotificationService : INotificationService
    {
        private readonly FcmNotificationSetting _fcmNotificationSetting;
        public NotificationService(IOptions<FcmNotificationSetting> settings)
        {
            _fcmNotificationSetting = settings.Value;
        }

        public async Task<ResponseModel> SendNotification(NotificationModel notificationModel)
        {
            ResponseModel response = new ResponseModel();
            try
            {
                if (notificationModel.IsAndroiodDevice)
                {
                    /* FCM Sender (Android Device) */
                    FcmSettings settings = new FcmSettings()
                    {
                        SenderId = _fcmNotificationSetting.SenderId,
                        ServerKey = _fcmNotificationSetting.ServerKey
                    };
                    HttpClient httpClient = new HttpClient();

                    string authorizationKey = string.Format("key={0}", settings.ServerKey);
                    string deviceToken = notificationModel.DeviceId;

                    httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Authorization", authorizationKey);
                    httpClient.DefaultRequestHeaders.Accept
                            .Add(new MediaTypeWithQualityHeaderValue("application/json"));

                    DataPayload dataPayload = new DataPayload();
                    dataPayload.Title = notificationModel.Title;
                    dataPayload.Body = notificationModel.Body;

                    GoogleNotification notification = new GoogleNotification();
                    notification.Data = dataPayload;
                    notification.Notification = dataPayload;

                    var fcm = new FcmSender(settings, httpClient);
                    var fcmSendResponse = await fcm.SendAsync(deviceToken, notification);

                    if (fcmSendResponse.IsSuccess()) {
                        response.IsSuccess = true;
                        response.Message = "Notification sent successfully";
                        return response;
                    } else {
                        response.IsSuccess = false;
                        response.Message = fcmSendResponse.Results[0].Error;
                        return response;
                    }
                }
                else {
                    /* Code here for APN Sender (iOS Device) */
                    //var apn = new ApnSender(apnSettings, httpClient);
                    //await apn.SendAsync(notification, deviceToken);
                }
                return response;
            }
            catch (Exception ex) {
                response.IsSuccess = false;
                response.Message = "Something went wrong";
                return response;
            }
        }
    }
}

Step 6 – Update Startup.cs file
Code for Startup.cs file,

using CorePush.Apple;
using CorePush.Google;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.OpenApi.Models;
using net_core_api_push_notification_demo.Models;
using net_core_api_push_notification_demo.Services;

namespace net_core_api_push_notification_demo
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddControllers();
            services.AddTransient<INotificationService, NotificationService>();
            services.AddHttpClient<FcmSender>();
            services.AddHttpClient<ApnSender>();

            // Configure strongly typed settings objects
            var appSettingsSection = Configuration.GetSection("FcmNotification");
            services.Configure<FcmNotificationSetting>(appSettingsSection);

            // Register the swagger generator
            services.AddSwaggerGen(c => {
                c.SwaggerDoc(name: "V1", new OpenApiInfo { Title = "My API", Version = "V1" });
            });
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment()) {
                app.UseDeveloperExceptionPage();
            }

            // Enable middleware to serve generated Swagger as a JSON endpoint
            app.UseSwagger();
            // Enable the SwaggerUI
            app.UseSwaggerUI(c => {
                c.SwaggerEndpoint(url: "/swagger/V1/swagger.json", name: "My API V1");
            });

            app.UseHttpsRedirection();
            app.UseRouting();
            app.UseAuthorization();
            app.UseEndpoints(endpoints => {
                endpoints.MapControllers();
            });
        }
    }
}

Step 7 – Add Controller
Now, add the NotificationController.cs file in the Controllers folder
Code for NotificationController.cs file,

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using net_core_api_push_notification_demo.Models;
using net_core_api_push_notification_demo.Services;

namespace net_core_api_push_notification_demo.Controllers
{
    [Route("api/notification")]
    [ApiController]
    public class NotificationController : ControllerBase
    {
        private readonly INotificationService _notificationService;
        public NotificationController(INotificationService notificationService)
        {
            _notificationService = notificationService;
        }

        [Route("send")]
        [HttpPost]
        public async Task<IActionResult> SendNotification(NotificationModel notificationModel)
        {
            var result = await _notificationService.SendNotification(notificationModel);
            return Ok(result);
        }
    }
}

Step 8 – Running Web API
Now, press F5 to start debugging the Web API project. If everything is OK, we'll get the following output in the browser.


Now, we will call the send notification API and check whether the notification is sent. As per the screenshot below, enter all property details and hit the Execute button. If everything is fine, the isSuccess property will be true and the message property will be “Notification sent successfully”.
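If you prefer to call the endpoint from code instead of the Swagger UI, here is a hedged sketch using HttpClient and the Newtonsoft.Json package installed earlier. The base address and device token are placeholders, and the call must run inside an async method.

// Requires: using System; using System.Net.Http; using System.Text;
//           using Newtonsoft.Json; using net_core_api_push_notification_demo.Models;
using var client = new HttpClient { BaseAddress = new Uri("https://localhost:5001") };

var payload = new NotificationModel
{
    DeviceId = "DEVICE_FCM_REGISTRATION_TOKEN",  // FCM registration token from the Android app
    IsAndroiodDevice = true,
    Title = "Hello",
    Body = "Push notification from .NET Core Web API"
};

var content = new StringContent(
    JsonConvert.SerializeObject(payload), Encoding.UTF8, "application/json");

var response = await client.PostAsync("api/notification/send", content);
Console.WriteLine(await response.Content.ReadAsStringAsync());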

Wow, here is the notification!



European ASP.NET Core Hosting :: JWT Authentication In ASP.NET Core

clock May 3, 2021 06:48 by author Peter

JWT in ASP.NET Core
JWT (JSON web token) has become more and more popular in web development. It is an open standard which allows transmitting data between parties as a JSON object in a secure and compact way. The data transmitting using JWT between parties are digitally signed so that it can be easily verified and trusted.

In this article, we will learn how to set up JWT in an ASP.NET Core web application. We can create the application using Visual Studio or the CLI (Command Line Interface).

    dotnet new webapi -n JWTAuthentication   

The above command creates an ASP.NET Core Web API project named "JWTAuthentication" in the current folder.

The first step is to configure JWT-based authentication in our project. To do this, we need to register a JWT authentication scheme by using the "AddAuthentication" method and specifying JwtBearerDefaults.AuthenticationScheme. Here, we configure the authentication scheme with JWT bearer options.
    public void ConfigureServices(IServiceCollection services)    
    {    
        services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)    
        .AddJwtBearer(options =>    
        {    
            options.TokenValidationParameters = new TokenValidationParameters    
            {    
                ValidateIssuer = true,    
                ValidateAudience = true,    
                ValidateLifetime = true,    
                ValidateIssuerSigningKey = true,    
                ValidIssuer = Configuration["Jwt:Issuer"],    
                ValidAudience = Configuration["Jwt:Issuer"],    
                IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(Configuration["Jwt:Key"]))    
            };    
        });    
        services.AddMvc();    
    }   


In this example, we have specified which parameters must be taken into account for the JWT to be considered valid. As per our code, the following conditions make a token valid:

    Validate the server (ValidateIssuer = true) that generates the token.
    Validate the recipient of the token is authorized to receive (ValidateAudience = true)
    Check if the token is not expired and the signing key of the issuer is valid (ValidateLifetime = true)
    Validate signature of the token (ValidateIssuerSigningKey = true)
    Additionally, we specify the values for the issuer, audience, signing key. In this example, I have stored these values in appsettings.json file.

appsettings.json

    {    
      "Jwt": {    
        "Key": "ThisismySecretKey",    
        "Issuer": "Test.com"    
      }    
    }   

The above-mentioned steps configure a JWT-based authentication service. The next step is to make the authentication service available to the application. To do this, we call the app.UseAuthentication() method in the Configure method of the Startup class. The UseAuthentication method must be called before the UseMvc method.

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)    
    {    
        app.UseAuthentication();    
        app.UseMvc();    
    }

Generate JSON Web Token
I have created a LoginController and a Login method within this controller, which is responsible for generating the JWT. I have marked this method with the AllowAnonymous attribute to bypass authentication. This method expects a UserModel object carrying the Username and Password.
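The article does not show the UserModel class itself; a minimal sketch consistent with the properties used in the controller and in the claims example later in this article could be:

using System;

public class UserModel
{
    public string Username { get; set; }
    public string Password { get; set; }
    public string EmailAddress { get; set; }
    public DateTime DateOfJoing { get; set; }  // spelling matches the "DateOfJoing" claim used below
}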
 
I have created the "AuthenticateUser" method, which is responsible for validating the user credentials and returning a UserModel. For demo purposes, it returns a hardcoded model when the username is "Peter". If the "AuthenticateUser" method returns a user model, the API generates a new token by using the "GenerateJSONWebToken" method.
 
Here, I have created a JWT using the JwtSecurityToken class. I have created an object of this class by passing some parameters to the constructor such as issuer, audience, expiration, and signature.
 
Finally, JwtSecurityTokenHandler.WriteToken method is used to generate the JWT. This method expects an object of the JwtSecurityToken class.
    using Microsoft.AspNetCore.Authorization;    
    using Microsoft.AspNetCore.Mvc;    
    using Microsoft.Extensions.Configuration;    
    using Microsoft.IdentityModel.Tokens;    
    using System;    
    using System.IdentityModel.Tokens.Jwt;    
    using System.Security.Claims;    
    using System.Text;    
        
    namespace JWTAuthentication.Controllers    
    {    
        [Route("api/[controller]")]    
        [ApiController]    
        public class LoginController : Controller    
        {    
            private IConfiguration _config;    
        
            public LoginController(IConfiguration config)    
            {    
                _config = config;    
            }    
            [AllowAnonymous]    
            [HttpPost]    
            public IActionResult Login([FromBody]UserModel login)    
            {    
                IActionResult response = Unauthorized();    
                var user = AuthenticateUser(login);    
        
                if (user != null)    
                {    
                    var tokenString = GenerateJSONWebToken(user);    
                    response = Ok(new { token = tokenString });    
                }    
        
                return response;    
            }    
        
            private string GenerateJSONWebToken(UserModel userInfo)    
            {    
                var securityKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(_config["Jwt:Key"]));    
                var credentials = new SigningCredentials(securityKey, SecurityAlgorithms.HmacSha256);    
        
                var token = new JwtSecurityToken(_config["Jwt:Issuer"],    
                  _config["Jwt:Issuer"],    
                  null,    
                  expires: DateTime.Now.AddMinutes(120),    
                  signingCredentials: credentials);    
        
                return new JwtSecurityTokenHandler().WriteToken(token);    
            }    
        
            private UserModel AuthenticateUser(UserModel login)    
            {    
                UserModel user = null;    
        
                //Validate the User Credentials    
                //Demo Purpose, I have Passed HardCoded User Information    
                if (login.Username == "Peter")    
                {    
                    user = new UserModel { Username = "Peter", EmailAddress = "[email protected]" };    
                }    
                return user;    
            }    
        }    
    }   


Once JWT-based authentication is enabled, I create a simple Web API method that returns a list of strings when invoked with an HTTP GET request. I have marked this method with the Authorize attribute, so this endpoint triggers validation of the token passed with the HTTP request.
 
If we call this method without a token, we will get a 401 (Unauthorized) HTTP status code as a response. If we want to bypass authentication for any method, we can mark that method with the AllowAnonymous attribute.
 
To test the Web API, I am using Fiddler. First, I send a request to the "api/login" method to generate the token, passing the following JSON in the request body.
    {"username": "Peter", "password": "password"}

As a response, we will get the JSON like the following,
    {    
        "token" : "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJKaWduZXNoIFRyaXZlZGkiLCJlbWFpbCI6InRlc3QuYnRlc3RAZ21haWwuY29tIiwiRGF0ZU9mSm9pbmciOiIwMDAxLTAxLTAxIiwianRpIjoiYzJkNTZjNzQtZTc3Yy00ZmUxLTgyYzAtMzlhYjhmNzFmYzUzIiwiZXhwIjoxNTMyMzU2NjY5LCJpc3MiOiJUZXN0LmNvbSIsImF1ZCI6IlRlc3QuY29tIn0.8hwQ3H9V8mdNYrFZSjbCpWSyR1CNyDYHcGf6GqqCGnY"    
    }  

Now, we will try to get the list of values by passing this token in the Authorization HTTP header. The following is my action method definition.
    [HttpGet]    
    [Authorize]    
    public ActionResult<IEnumerable<string>> Get()    
    {    
        return new string[] { "value1", "value2", "value3", "value4", "value5" };    
    }  

    Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJKaWduZXNoIFRyaXZlZGkiLCJlbWFpbCI6InRlc3QuYnRlc3RAZ21haWwuY29tIiwiRGF0ZU9mSm9pbmciOiIwMDAxLTAxLTAxIiwianRpIjoiYzJkNTZjNzQtZTc3Yy00ZmUxLTgyYzAtMzlhYjhmNzFmYzUzIiwiZXhwIjoxNTMyMzU2NjY5LCJpc3MiOiJUZXN0LmNvbSIsImF1ZCI6IlRlc3QuY29tIn0.8hwQ3H9V8mdNYrFZSjbCpWSyR1CNyDYHcGf6GqqCGnY 

Handle Claims with JWT
Claims are data contained by the token. They are information about the user which helps us to authorize access to a resource. They could be Username, email address, role, or any other information. We can add claims information to the JWT so that they are available when checking for authorization.
 
In the above example, if we want to include claims in our token, the claim information needs to be added in the GenerateJSONWebToken method of the Login controller. In the following example, I have added the username, email address, and date of joining as claims in the token.

    private string GenerateJSONWebToken(UserModel userInfo)    
    {    
        var securityKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(_config["Jwt:Key"]));    
        var credentials = new SigningCredentials(securityKey, SecurityAlgorithms.HmacSha256);    
        
        var claims = new[] {    
            new Claim(JwtRegisteredClaimNames.Sub, userInfo.Username),    
            new Claim(JwtRegisteredClaimNames.Email, userInfo.EmailAddress),    
            new Claim("DateOfJoing", userInfo.DateOfJoing.ToString("yyyy-MM-dd")),    
            new Claim(JwtRegisteredClaimNames.Jti, Guid.NewGuid().ToString())    
        };    
        
        var token = new JwtSecurityToken(_config["Jwt:Issuer"],    
            _config["Jwt:Issuer"],    
            claims,    
            expires: DateTime.Now.AddMinutes(120),    
            signingCredentials: credentials);    
        
        return new JwtSecurityTokenHandler().WriteToken(token);    
    }   

The claims are an array of key-value pairs. The keys may be values from the JwtRegisteredClaimNames structure (which provides names for public, standardized claims) or custom names (such as DateOfJoing in the example above).
 
These claims can be used to filter the data. In the following example, I change the list of values returned when the user has spent more than 5 years with the company.
    [HttpGet]    
    [Authorize]    
    public ActionResult<IEnumerable<string>> Get()    
    {    
        var currentUser = HttpContext.User;    
        int spendingTimeWithCompany = 0;    
        
        if (currentUser.HasClaim(c => c.Type == "DateOfJoing"))    
        {    
            DateTime date = DateTime.Parse(currentUser.Claims.FirstOrDefault(c => c.Type == "DateOfJoing").Value);    
            spendingTimeWithCompany = DateTime.Today.Year - date.Year;    
        }    
        
        if(spendingTimeWithCompany > 5)    
        {    
            return new string[] { "High Time1", "High Time2", "High Time3", "High Time4", "High Time5" };    
        }    
        else    
        {    
            return new string[] { "value1", "value2", "value3", "value4", "value5" };    
        }    
    }   

JWT is very popular in web development. It is an open standard that allows transmitting data between parties as a JSON object in a secure and compact way. In this article, we learned how to generate and use JWT in an ASP.NET Core application.



About HostForLIFE.eu

HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2016 Hosting, ASP.NET Core 2.2.1 Hosting, ASP.NET MVC 6 Hosting and SQL 2017 Hosting.

