European ASP.NET 4.5 Hosting BLOG

BLOG about ASP.NET 4, ASP.NET 4.5 Hosting and Its Technology - Dedicated to European Windows Hosting Customers

European ASP.NET Core Hosting :: How to Use HTTP-REPL tool to test WEB API in ASP.NET Core 2.2

clock February 26, 2019 07:37 by author Scott

Today there are no tools built into Visual Studio to test a WEB API. Using a browser, you can only test HTTP GET requests; you need third-party tools like Postman, SoapUI, Fiddler or Swagger to perform complete testing of a WEB API. In ASP.NET Core 2.2, a new CLI-based dotnet global tool named "http-repl" is introduced to interact with API endpoints. It can list all the routes and execute all HTTP verbs. In this post, let's find out how to use the HTTP-REPL tool to test a WEB API in ASP.NET Core 2.2.

HTTP-REPL Tool to test WEB API in ASP.NET Core 2.2

The "http-repl" is a dotnet core global tool; to install it, run the following command. At the time of writing this post, the http-repl tool is in preview and available for download from dotnet.myget.org.

dotnet tool install -g dotnet-httprepl --version 2.2.0-* --add-source https://dotnet.myget.org/F/dotnet-core/api/v3/index.json

Once installed, you can verify the installation using the following command.

dotnet tool list -g



Now that the tool is installed, let's see how we can test the WEB API. For this tool to work properly, the prerequisite is that your service exposes a Swagger/OpenAPI description.
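If your project doesn't already expose an OpenAPI description, one common way to add one is the Swashbuckle.AspNetCore package (this is an assumption for illustration; http-repl only needs some OpenAPI document, not this package specifically). A minimal sketch for a 2.2-era project might look like this:

// In ConfigureServices(); "v1" and the title are placeholder values for this sketch
services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new Swashbuckle.AspNetCore.Swagger.Info { Title = "Sample API", Version = "v1" });
});

// In Configure(), expose the generated swagger.json
app.UseSwagger();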

We need to add this tool to Visual Studio's web browser list so that we can browse the API with it. To do that, follow the steps shown in the image below.



The HTTP-REPL tool executable is located at "C:\Users\<username>\.dotnet\tools". Once added, you can verify it in the browser list.

Run the app (make sure HTTP REPL is selected in the browser list) and you should see a command prompt window. As mentioned earlier, it's a CLI-based experience, so you can use commands like dir, ls, cd and cls. Below is an example run where I start up a Web API.
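To give a flavour of the experience, here is a hypothetical sequence of commands (the api/values route is just an example; the actual routes and output come from your own API's OpenAPI description):

> ls                 // list the endpoints described by the Swagger/OpenAPI document
> cd api/values      // navigate to an endpoint, just like a folder
> get                // execute a GET against the current endpoint
> get 5              // execute a GET against api/values/5
> ui                 // open the Swagger UI in the browser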

You can use all the HTTP verbs, and when using the POST verb you should set a default text editor to supply the JSON. You can set Visual Studio Code as the default text editor using the following command.

pref set editor.command.default "C:\Program Files (x86)\Microsoft VS Code\Code.exe"

Once the default editor is set and you issue a POST, the tool launches the editor with the JSON template written for you. See the GIF below.

You can also navigate to the Swagger UI from the command prompt by executing the ui command.

Similarly, you can also execute DELETE and PUT. For the PUT command, use the following syntax and supply the updated data in the default code editor.

> delete 2 //This would delete the record with id 2.
>
> put 2010 -h "Content-Type: application/json"

When you issue the PUT command, the behavior is the same as with the POST verb: the text editor opens with the JSON written for you, so just supply the updated values to execute the PUT.

Pros and Cons

Pros

  • Helps in debugging a WEB API
  • Fast, with quick switching between API endpoints
  • Shows descriptive error responses

Cons:

  • Dependency on Swagger/Open API specification
  • Not as informative as UI tools

After playing with this for a while, I strongly feel it's a command-line version of the Swagger UI, and it would be very handy when there are many API endpoints. You can easily navigate or switch between the APIs and execute them.



European ASP.NET Hosting :: Overriding ASP.NET Core Framework

clock February 20, 2019 10:57 by author Scott

OVERVIEW

In .NET it’s really easy to create your own interfaces and implementations. Likewise, it’s seemingly effortless to register them for dependency injection. But it is not always obvious how to override existing implementations. Let’s discuss various aspects of “dependency injection” and how you can override the “framework-provided services”.

As an example, let’s take a recent story on our product backlog for building a security audit of login attempts. The story involved the capture of attempted usernames along with their corresponding IP addresses. This would allow system administrators to monitor for potential attackers. This would require our ASP.NET Core application to have custom logging implemented.

LOGGING

Luckily ASP.NET Core Logging is simple to use and is a first-class citizen within ASP.NET Core.

In the Logging repository there is an extension method named AddLogging; here is what it looks like:

public static IServiceCollection AddLogging(this IServiceCollection services)
{
    if (services == null)
    {
        throw new ArgumentNullException(nameof(services));
    }

    services.TryAdd(ServiceDescriptor.Singleton<ILoggerFactory, LoggerFactory>());
    services.TryAdd(ServiceDescriptor.Singleton(typeof(ILogger<>), typeof(Logger<>)));

    return services;
}

As you can see, it is rather simple. It adds two ServiceDescriptor instances to the IServiceCollection, effectively registering the given service type to the corresponding implementation type.

FOLLOWING THE RABBIT DOWN THE HOLE

When you create a new ASP.NET Core project from Visual Studio, all the templates follow the same pattern. They have the Program.cs file with a Main method that looks very similar to this:

public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseIISIntegration()
        .UseStartup<Startup>()
        .UseApplicationInsights()
        .Build();

    host.Run();
}

TEMPLATES 

One thing that is concerning about a template like this is that the IWebHost is an IDisposable, so why then is this statement not wrapped in a using you ask? The answer is that the Run extension method internally wraps itself in a using. If you were wondering where the AddLogging occurs, it is a result of invoking the Build function.

[ Microsoft.AspNetCore.Hosting.WebHostBuilder ]
    public IWebHost Build() ...
        private IServiceCollection BuildCommonServices() ...
            creates services then invokes services.AddLogging()

A FEW WORDS ON THE SERVICE DESCRIPTOR

The ServiceDescriptor class describes a service and is used by dependency injection. In other words, instances of ServiceDescriptor are descriptions of services. The class exposes several static methods that allow its instantiation.

The ILoggerFactory interface is registered as a ServiceLifetime.Singleton and its implementation is mapped to the LoggerFactory. Likewise, the generic type typeof(ILogger<>) is mapped to typeof(Logger<>). This is just one of the several key “Framework-Provided Services” that are registered.
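As a quick illustration (a sketch written for this post, not framework source), the following two descriptors describe the same registration, and TryAdd only adds a descriptor when the service type isn't registered yet:

// inside ConfigureServices(IServiceCollection services), for example

// generic factory method
var byGeneric = ServiceDescriptor.Singleton<ILoggerFactory, LoggerFactory>();

// equivalent constructor call
var byCtor = new ServiceDescriptor(typeof(ILoggerFactory), typeof(LoggerFactory), ServiceLifetime.Singleton);

// added only if no ILoggerFactory registration exists yet
services.TryAdd(byGeneric);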

PUTTING IT TOGETHER

Now we know that the framework provides the implementations of ILogger<T> and resolves them as Logger<T>. We also know that we could write our own implementation of the ILogger<T> interface. Since this is open source, we can look to the framework's implementation for inspiration.

public class RequestDetailLogger<T> : ILogger<T>
{
    private readonly ILogger _logger;

    public RequestDetailLogger(ILoggerFactory factory,
                               IRequestCategoryProvider requestCategoryProvider)
    {
        if (factory == null)
        {
            throw new ArgumentNullException(nameof(factory));
        }
        if (requestCategoryProvider == null)
        {
            throw new ArgumentNullException(nameof(requestCategoryProvider));
        }

        var category = requestCategoryProvider.CreateCategory<T>();
        _logger = factory.CreateLogger(category);
    }

    IDisposable ILogger.BeginScope<TState>(TState state)
        => _logger.BeginScope(state);

    bool ILogger.IsEnabled(LogLevel logLevel)
        => _logger.IsEnabled(logLevel);

    void ILogger.Log<TState>(LogLevel logLevel,
                             EventId eventId,
                             TState state,
                             Exception exception,
                             Func<TState, Exception, string> formatter)
        => _logger.Log(logLevel, eventId, state, exception, formatter);
}

The IRequestCategoryProvider is defined and implemented as follows:

using static Microsoft.Extensions.Logging.Abstractions.Internal.TypeNameHelper;

public interface IRequestCategoryProvider
{
    string CreateCategory<T>();
}

public class RequestCategoryProvider : IRequestCategoryProvider
{
    private readonly IPrincipal _principal;
    private readonly IPAddress _ipAddress;

    public RequestCategoryProvider(IPrincipal principal,
                                   IPAddress ipAddress)
    {
        _principal = principal;
        _ipAddress = ipAddress;
    }

    public string CreateCategory<T>()
    {
        var typeDisplayName = GetTypeDisplayName(typeof(T));

        if (_principal == null || _ipAddress == null)
        {
            return typeDisplayName;
        }

        var username = _principal?.Identity?.Name;
        return $"User: {username}, IP: {_ipAddress} {typeDisplayName}";
    }
}

If you're curious how to get the IPrincipal and IPAddress into this implementation (with DI), it is pretty straightforward. In the Startup.ConfigureServices method do the following:

public void ConfigureServices(IServiceCollection services)
{
    // ... omitted for brevity

    services.AddTransient<IRequestCategoryProvider, RequestCategoryProvider>();
    services.AddTransient<IHttpContextAccessor, HttpContextAccessor>();
    services.AddTransient<IPrincipal>(
        provider => provider.GetService<IHttpContextAccessor>()
                           ?.HttpContext
                           ?.User);
    services.AddTransient<IPAddress>(
        provider => provider.GetService<IHttpContextAccessor>()
                           ?.HttpContext
                           ?.Connection
                           ?.RemoteIpAddress);
}

Finally, we can Replace the implementations for the ILogger<T> by using the following:

public void ConfigureServices(IServiceCollection services)
{
    // ... omitted for brevity
    services.Replace(ServiceDescriptor.Transient(typeof(ILogger<>),
                                                 typeof(RequestDetailLogger<>)));
}

Notice that we replace the framework-provided service with a ServiceLifetime.Transient lifetime, as opposed to the default ServiceLifetime.Singleton. This is more or less an extra precaution. We know that with each request we get the HttpContext from the IHttpContextAccessor, and from this we have the User. This is what is passed to each ILogger<T>.

CONCLUSION

This approach is valid for overriding any of the various framework-provided service implementations. It is simply a matter of knowing the correct ServiceLifetime for your specific needs. Likewise, it is a good idea to look to the framework's open-source libraries for inspiration. With this you can take fine-grained control of your web stack.



European ASP.NET Core Hosting :: How to Use IOptions for ASP.NET Core 2 Configuration

clock February 7, 2019 11:48 by author Scott

Almost every project will have some settings that need to be configured and changed depending on the environment, or secrets that you don't want to hard code into your repository. The classic example is connection strings and passwords etc which in ASP.NET 4 were often stored in the <applicationSettings> section of web.config.

In ASP.NET Core this model of configuration has been significantly extended and enhanced. Application settings can be stored in multiple places - environment variables, appsettings.json, user secrets etc - and easily accessed through the same interface in your application. Further to this, the new configuration system in ASP.NET allows (actually, enforces) strongly typed settings using the IOptions<> pattern.

While working on an RC2 project the other day, I was trying to use this facility to bind a custom configuration class, but for the life of me I couldn't get it to bind my properties. Partly that was down to the documentation being somewhat out of date since the launch of RC2, and partly down to the way binding works using reflection. In this post I'm going to demonstrate the power of the IOptions<> pattern, and describe a few of the problems I ran into and how to solve them.

Strongly typed configuration

 

In ASP.NET Core, there is now no default AppSettings["MySettingKey"] way to get settings. Instead, the recommended approach is to create a strongly typed configuration class with a structure that matches a section in your configuration file (or wherever your configuration is being loaded from):

public class MySettings
{
    public string StringSetting { get; set; }
    public int IntSetting { get; set; }
}

This would map to the "MySettings" section of the appsettings.json below.

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  },
  "MySettings": {
    "StringSetting": "My Value",
    "IntSetting": 23
  }
}

Binding the configuration to your classes

 

In order to ensure your appsettings.json file is bound to the MySettings class, you need to do 2 things.

1. Setup the ConfigurationBuilder to load your file

2. Bind your settings class to a configuration section

When you create a new ASP.NET Core application from the default templates, the ConfigurationBuilder is already configured in Startup.cs to load settings from environment variables, appsettings.json, and in development environments, from user secrets:

public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

    if (env.IsDevelopment())
    {
        builder.AddUserSecrets();
    }

    builder.AddEnvironmentVariables();
    Configuration = builder.Build();
}

If you need to load your configuration from another source then this is the place to do it, but for most common situations this setup should suffice. There are a number of additional configuration providers that can be used to bind other sources, such as XML files.
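For example, pulling in an additional XML source is just another call on the builder (a sketch; it assumes the Microsoft.Extensions.Configuration.Xml package and a hypothetical file name):

var builder = new ConfigurationBuilder()
    .SetBasePath(env.ContentRootPath)
    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
    .AddXmlFile("moresettings.xml", optional: true);   // hypothetical extra source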

In order to bind a settings class to your configuration you need to configure this in the ConfigureServices method of Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<MySettings>(options => Configuration.GetSection("MySettings").Bind(options));
}

Note: The syntax for model binding has changed from RC1 to RC2 and was one of the issues I was battling with. The previous method, using services.Configure<MySettings>(Configuration.GetSection("MySettings")), is no longer available.

You may also need to add the configuration binder package to the dependencies section of your project.json:

"dependencies": {
  ...
  "Microsoft.Extensions.Configuration.Binder": "1.0.0-rc2-final"
  ...
}

Using your configuration class

 

When you need to access the values of MySettings you just need to inject an instance of an IOptions<> class into the constructor of your consuming class, and let dependency injection handle the rest:

public class HomeController : Controller
{
    private MySettings _settings;
    public HomeController(IOptions<MySettings> settings)
    {
        _settings = settings.Value;
        // _settings.StringSetting == "My Value";
    }
}

The IOptions<> service exposes a Value property which contains your configured MySettings class.

It's important to note that there doesn't appear to be a way to access the raw IConfigurationRoot through dependency injection, so the strongly typed route is the only way to get to your settings.

You can expose the IConfigurationRoot directly to the DI container using services.AddSingleton(Configuration). (Thanks Saša Ćetković for pointing that out!)
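A minimal sketch of that approach, assuming the usual Configuration property built in the Startup constructor and a hypothetical controller name, would be:

public void ConfigureServices(IServiceCollection services)
{
    // registers the IConfigurationRoot instance built in the Startup constructor
    services.AddSingleton(Configuration);
}

public class RawConfigController : Controller
{
    private readonly IConfigurationRoot _config;

    public RawConfigController(IConfigurationRoot config)
    {
        _config = config;
        // read a raw value by key
        var value = _config["MySettings:StringSetting"];
    }
}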

Complex configuration classes

 

The example shown above is all very nice, but what if you have a very complex configuration, nested types, collections, the whole 9 yards?

public class MySettings
{
    public string StringSetting { get; set; }
    public int IntSetting { get; set; }
    public Dictionary<string, InnerClass> Dict { get; set; }
    public List<string> ListOfValues { get; set; }
    public MyEnum AnEnum { get; set; }
}

public class InnerClass
{
    public string Name { get; set; }
    public bool IsEnabled { get; set; } = true;
}

public enum MyEnum
{
    None = 0,
    Lots = 1
}

Amazingly, we can bind all of that using the same Configure<MySettings> call against the following JSON, and it all just works:

{
  "MySettings": {
    "StringSetting": "My Value",
    "IntSetting": 23,
    "AnEnum": "Lots",
    "ListOfValues": ["Value1", "Value2"],
    "Dict": {
      "FirstKey": {
        "Name": "First Class",
           "IsEnabled":  false
      },
      "SecondKey": {
        "Name": "Second Class"
      }
    }
  }
}

When values aren't provided, they get their default values (e.g. MySettings.Dict["SecondKey"].IsEnabled == true). Dictionaries, lists and enums are all bound correctly. That is, until they aren't...

Models that won't bind

 

So after I'd beaten the RC2 syntax change into submission, I thought I was home and dry, but I still couldn't get my configuration class to bind correctly. Getting frustrated, I decided to dive into the source code for the binder and see what's going on (woo, open source!).

It was there I found a number of interesting cases where a model's properties won't be bound even if there are appropriate configuration values. Most of them are fairly obvious, but could feasibly sting you if you're not aware of them. I am only going to go into scenarios that do not throw exceptions, as these seem like the hardest ones to figure out.

Properties must have a public Get method

 

The properties of your configuration class must have a getter, which is public and must not be an indexer, so none of these properties would bind:

private string _noGetter;
private string[] _arr;

public string NoGetter { set { _noGetter = value; } }
public string NonPublicGetter { private get; set; }
public string this[int i]
{
    get { return _arr[i]; }
    set { _arr[i] = value; }
}

Properties must have a public Set method...

 

Similarly, properties must have a public setter, so again, none of these would bind:

public string NoSetter { get; }
public string NonPublicSetter { get; private set; }

...Except when they don't have to

 

The public setter is actually only required if the value being bound is null. If it's a simple type like a string or an int, then the setter is required as there's no other way to set the value. You can create readonly properties with default values, but they just won't be bound. For properties which are complex types, you don't need a setter, as long as the property already has a value at binding time:

public MyInnerClass ComplexProperty { get; } = new MyInnerClass();
public List<string> ListValues { get; } = new List<string>();
public Dictionary<string, string> DictionaryValue1 { get; } = new Dictionary<string,string>();
private Dictionary<string, string> _dict = new Dictionary<string,string>();
public Dictionary<string, string> DictionaryValue2 { get { return _dict; } }

The sub properties of the MyInnerClass object returned by ComplexProperty would be bound, values would be added to the collection in ListValues, and KeyValuePairs would be added to the dictionaries.

Dictionaries must have string keys

 

This is one of the gotchas that got me! While integers are obviously perfectly valid dictionary keys in general, they are not allowed in this case thanks to this snippet in ConfigurationBinder.BindDictionary:

var typeInfo = dictionaryType.GetTypeInfo();

// IDictionary<K,V> is guaranteed to have exactly two parameters
var keyType = typeInfo.GenericTypeArguments[0];
var valueType = typeInfo.GenericTypeArguments[1];

if (keyType != typeof(string))
{
    // We only support string keys
    return;
}
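In practice that means a settings class like this sketch would have its string-keyed dictionary bound, while the integer-keyed one is silently skipped:

public class MySettings
{
    // binds: string keys are supported
    public Dictionary<string, string> ByName { get; set; }

    // silently ignored: only string keys are supported by the binder
    public Dictionary<int, string> ById { get; set; }
}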

Don't expose IDictionary

 

This is another one that got me accidentally. While coding to interfaces is nice, the model binder uses reflection and Activator.CreateInstance(type) to create the classes to be bound. If your properties are interfaces or abstract then the binder will throw when trying to create them.

If you are exposing your properties as readonly getters, however, then the binder does not need to create the property and you might think the configuration class would bind correctly. And that is true in almost all cases. Unfortunately, while the binder can bind any property whose type derives from IDictionary<,>, it will not bind an IDictionary<,> property directly. This leaves you with the following situation:

public interface IMyDictionary<TKey, TValue> : IDictionary<TKey, TValue> { }

public class MyDictionary<TKey, TValue>
    : Dictionary<TKey, TValue>, IMyDictionary<TKey, TValue>
{
}

public class MySettings
{
  public IDictionary<string, string> WontBind { get; } = new Dictionary<string, string>();
  public IMyDictionary<string, string> WillBind { get; } = new MyDictionary<string, string>();
}

Our wrapper type IMyDictionary, which is really just an IDictionary, will be bound, whereas the directly exposed IDictionary will not. This doesn't feel right to me and I've raised an issue with the team.

Make properties implementing ICollection also expose an Add method

 

Types deriving from ICollection<> are automatically bound in the same way as dictionaries; however, the ICollection<> interface exposes no methods to add an object to the collection, only methods for enumerating and counting. It may seem strange then that it is this interface the binder looks for when checking whether a property can be bound.

If a property exposes a type that implements ICollection<> (and is not an ICollection<> itself, as for IDictionary above, though that makes sense in this case), then it is a candidate for binding. In order to add an item to the collection, reflection is used to invoke an Add method on the type:

var addMethod = typeInfo.GetDeclaredMethod("Add");
addMethod.Invoke(collection, new[] { item });

If an Add method does not exist on the exposed type (e.g. it could be a ReadOnlyCollection<>), then the property will not be bound; no error is thrown, you just get an empty collection. This one feels a little nasty to me, but I guess the common use case is that you will be exposing List<> and IList<> etc. It feels like they should be looking for IList<> if that is what they need, though!
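As a sketch of that behaviour, the first property below would be populated while the second is silently left empty:

public class MySettings
{
    // binds: List<> exposes a public Add method the binder can invoke
    public List<string> Values { get; } = new List<string>();

    // silently left empty: ReadOnlyCollection<> declares no public Add method
    public ReadOnlyCollection<string> FixedValues { get; }
        = new ReadOnlyCollection<string>(new List<string>());
}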

Summary

 

The strongly typed configuration is a great addition to ASP.NET Core, providing a clean way to apply the Interface Segregation Principle to your configuration. Currently it seems more convoluted to retrieve your settings than in ASP.NET 4, but I wouldn't be surprised if they add some convenience methods for quickly accessing values in a forthcoming release.

It's important to consider the gotchas described above if you're having trouble binding values (and you're not getting an exception thrown). Pay particular attention to your collections, as that's where my issues arose.



European ASP.NET Core Hosting :: ASP.NET Core 2.0 MVC Filters

clock January 28, 2019 09:58 by author Scott

The following is a tutorial on how to run code before and after the MVC request pipeline in ASP.NET Core.

Solution

In an empty project update Startup class to add services and middleware for MVC:

        public void ConfigureServices
            (IServiceCollection services)
        {
            services.AddMvc();
        } 

        public void Configure(
            IApplicationBuilder app,
            IHostingEnvironment env)
        {
            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller=Home}/{action=Index}/{id?}");
            });
        }

Add the class to implement filter:

    public class ParseParameterActionFilter : Attribute, IActionFilter
    {
        public void OnActionExecuting(ActionExecutingContext context)
        {
            object param;
            if (context.ActionArguments.TryGetValue("param", out param))
                context.ActionArguments["param"] = param.ToString().ToUpper();
            else
                context.ActionArguments.Add("param", "I come from action filter");
        } 

        public void OnActionExecuted(ActionExecutedContext context)
        {
        }
    }

In the Home controller add an action method that uses Action filter:

        [ParseParameterActionFilter]
        public IActionResult ParseParameter(string param)
        {
            return Content($"Hello ParseParameter. Parameter: {param}");
        }

Browse to /Home/ParseParameter and you'll see:

 

Discussion

Filters run after an action method has been selected for execution. MVC provides built-in filters for things like authorisation and caching. Custom filters are very useful for encapsulating reusable code that you want to run before or after action methods.

Filters can short-circuit the result, i.e. stop the code in your action from running and return a result to the client. They can also have services injected into them via the service container, which makes them very flexible.

Filter Interfaces

Creating a custom filter requires implementing an interface for the type of filter you require. There are two flavours of interface for most filter types, synchronous and asynchronous:

    public class HelloActionFilter : IActionFilter
    {
        public void OnActionExecuting(ActionExecutingContext context)
        {
            // runs before action method
        } 

        public void OnActionExecuted(ActionExecutedContext context)
        {
            // runs after action method
        }
    } 

    public class HelloAsyncActionFilter : IAsyncActionFilter
    {
        public async Task OnActionExecutionAsync(
            ActionExecutingContext context,
            ActionExecutionDelegate next)
        {
            // runs before action method
            await next();
            // runs after action method
        }
    }

You can short-circuit the filter pipeline by setting the Result (of type IActionResult) property on the context parameter (for async filters, don't call the next delegate):

    public class SkipActionFilter : Attribute, IActionFilter
    {
        public void OnActionExecuting(ActionExecutingContext context)
        {
            context.Result = new ContentResult
            {
                Content = "I'll skip the action execution"
            };
        } 

        public void OnActionExecuted(ActionExecutedContext context)
        { }
    } 

    [SkipActionFilter]
    public IActionResult SkipAction()
    {
       return Content("Hello SkipAction");
    }

For result filters you can short-circuit by setting the Cancel property on the context parameter and writing a response:

        public void OnResultExecuting(ResultExecutingContext context)
        {
            context.Cancel = true;
            context.HttpContext.Response.WriteAsync("I'll skip the result execution");
        } 

        [SkipResultFilter]
        public IActionResult SkipResult()
        {
            return Content("Hello SkipResult");
        }
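The SkipResultFilter attribute used above isn't shown in full in this post; a minimal sketch, assuming it implements IResultFilter in the same style as the other examples, could look like this:

    public class SkipResultFilter : Attribute, IResultFilter
    {
        public void OnResultExecuting(ResultExecutingContext context)
        {
            // cancel the result execution and write the response directly
            context.Cancel = true;
            context.HttpContext.Response.WriteAsync("I'll skip the result execution");
        }

        public void OnResultExecuted(ResultExecutedContext context)
        { }
    }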

Filter Attributes

MVC provides abstract base classes that you can inherit from to create custom filters. These abstract classes inherit from the Attribute class and can therefore be used to decorate controllers and action methods:

  • ActionFilterAttribute
  • ResultFilterAttribute
  • ExceptionFilterAttribute
  • ServiceFilterAttribute
  • TypeFilterAttribute

Filter Types

There are various types of filters that run at different stages of the filter pipeline. The figure below, from the official documentation, illustrates the sequence:

 

 

Authorization

 

 

This is the first filter to run and it short-circuits requests for unauthorised users. Authorization filters only have one method (unlike most other filters, which have Executing and Executed methods). Normally you won't write your own authorization filters; the built-in filter calls into the framework's authorisation mechanism.

Resource

They run before model binding and can be used to change how it behaves. They also run after the result has been generated, so they can be used for caching etc.

Action

They run before and after the action method, and thus are useful for manipulating action parameters or the action's result. The context supplied to these filters lets you manipulate the action parameters, controller and result.

Exception

They can be used to handle unhandled exceptions before they're written to the response. The exception handling middleware works for most scenarios; however, this filter can be used if you want to handle errors differently based on the invoked action.

Result

They run before and after the execution of an action method's result, if the result was successful. They can be used to manipulate the formatting of the result.

Filter Scope

Filters can be added at different levels of scope: action, controller and global. Attributes are used for action- and controller-level scope. For globally scoped filters you need to add them to the filter collection of MvcOptions when configuring services in Startup:

            services.AddMvc(options =>
            {
                             // by instance
                options.Filters.Add(new AddDeveloperResultFilter("Tahir Naushad")); 

                // by type
                options.Filters.Add(typeof(GreetDeveloperResultFilter));
            });

Filters are executed in a sequence:

1. The Executing methods are called first for Global > Controller > Action filters.

2. Then Executed methods are called for Action > Controller > Global filters.

Filter Dependency Injection

In order to use filters that require dependencies injected at runtime, you need to add them by type. You can add them globally (as illustrated above); however, if you want to apply them to an action or controller (as attributes) then you have two options:

ServiceFilterAttribute

This attribute retrieves the filter using the service container. To use it:

Create a filter that uses dependency injection:

    public class GreetingServiceFilter : IActionFilter
    {
        private readonly IGreetingService greetingService; 

        public GreetingServiceFilter(IGreetingService greetingService)
        {
            this.greetingService = greetingService;
        } 
        public void OnActionExecuting(ActionExecutingContext context)
        {
            context.ActionArguments["param"] =
                this.greetingService.Greet("James Bond");
        } 

        public void OnActionExecuted(ActionExecutedContext context)
        { }
    }
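The IGreetingService used by the filter isn't defined in this post; a minimal sketch (names assumed) could be:

    public interface IGreetingService
    {
        string Greet(string name);
    }

    public class GreetingService : IGreetingService
    {
        public string Greet(string name) => $"Hello, {name}!";
    }

    // register the service itself alongside the filter
    services.AddScoped<IGreetingService, GreetingService>();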

Add filter to service container:

services.AddScoped<GreetingServiceFilter>();

Apply it using ServiceFilterAttribute:

[ServiceFilter(typeof(GreetingServiceFilter))]
public IActionResult GreetService(string param)

TypeFilterAttribute

This attribute doesn't require registering the filter in the service container; it instantiates the type using an ObjectFactory delegate. To use it:

Create a filter that uses dependency injection:

    public class GreetingTypeFilter : IActionFilter
    {
        private readonly IGreetingService greetingService; 

        public GreetingTypeFilter(IGreetingService greetingService)
        {
            this.greetingService = greetingService;
        } 

        public void OnActionExecuting(ActionExecutingContext context)
        {
            context.ActionArguments["param"] = this.greetingService.Greet("Dr. No");
        } 

        public void OnActionExecuted(ActionExecutedContext context)
        { }
    }

Apply it using TypeFilterAttribute:

[TypeFilter(typeof(GreetingTypeFilter))]
public IActionResult GreetType1(string param)

You could also inherit from TypeFilterAttribute and then use it without TypeFilter:

public class GreetingTypeFilterWrapper : TypeFilterAttribute
{
   public GreetingTypeFilterWrapper() : base(typeof(GreetingTypeFilter))
   { }
}


[GreetingTypeFilterWrapper]
public IActionResult GreetType2(string param)
 



European ASP.NET Core Hosting - HostForLIFE.eu :: File logging on ASP.NET Core

clock March 8, 2017 10:13 by author Scott

ASP.NET Core introduces a new framework-level logging system. Although it is feature-rich, it is not complex to use, and it provides decent abstractions that fit well with the architecture of most web applications. This blog post shows how to set up and use Serilog file logging using framework-level dependency injection.

Configuring logging

Logging is configured in the Configure() method of the Startup class. ASP.NET Core comes with console and debug loggers. For other logging targets like the file system, log servers etc., third-party loggers must be used. This blog post uses the Serilog file logger.

"dependencies": {
 
// ...
  "Serilog.Extensions.Logging.File": "1.0.0"
},

The project now has a reference to the Serilog file logger. Let's introduce it to the ASP.NET Core logging system. AddFile(string path) is the extension method that adds the Serilog file logger to the logger factory's collection of loggers. Notice that there can be multiple loggers active at the same time.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();
    loggerFactory.AddFile("Logs/ts-{Date}.txt");
 
    // ...
}

Serilog will write log files to the Logs folder of the web application. File names look like ts-20170108.txt.
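The AddFile extension also accepts optional arguments; for example, it should be possible to raise the minimum level that gets written to the file. This is a sketch only, so double-check the signature of the package version you install:

loggerFactory.AddFile("Logs/ts-{Date}.txt", minimumLevel: LogLevel.Warning);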

Injecting logger factory

Loggers are not injected into other classes directly. Instead, it's possible to inject the logger factory and let it create a new logger. If that sounds weird to you, just check the internal loggers collection of the logger factory to see that other classes needing a logger also have their own instances. The code below shows how to get a logger into a controller through framework-level dependency injection.

public class DummyController : Controller
{
    private ILogger _logger;
 
    public DummyController(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger(typeof(DummyController));
    }
 
    // ...
}

Why do we have to inject the logger factory and not a single instance of ILogger? The reason is simple: the application may use multiple loggers, as shown above. This is a fact we don't want to know about in the parts of the application where logging is done; it's an external detail that is not related to the code that uses logging.

Logging

Logging is done using extension methods on the ILogger interface. All the classic methods you would expect are there:

  • LogDebug()
  • LogInformation()
  • LogWarning()
  • LogError()
  • LogCritical()

Now let’s write something to log.

public class DummyController : Controller
{
    private ILogger _logger;
 
    public DummyController(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger(typeof(DummyController));
    }
 
    public void Index()
    {
        _logger.LogInformation("Hello from dummy controller!");
    }
}

Making a request to the Dummy controller results in the log message being written to the debug window and the log file. The following image shows the log message in the output window.

And here is the same log message in the log file.



European ASP.NET Core Hosting - HostForLIFE.eu :: How to Setup Webpack in ASP.NET Core

clock February 10, 2017 11:11 by author Scott

Webpack is a great tool for bundling the client side assets in your web application. In this post we'll briefly discuss why you should create bundles and see how Webpack can automate that for us in ASP.NET Core.

Why should I bundle

When building web applications, regardless of the server side framework, you'll need to get your client side resources over to the browser. You may have dozens of JavaScript and CSS files in your project but having to reference each of them individually in your HTML markup is just not ideal for production deployments.

Each browser only allows so many concurrent requests per hostname. BrowserScope has some data on this that you can view on their site. If your web application makes more than the allowed number of simultaneous requests then the additional requests will end up being queued. This leads to longer load times and a not so smooth experience for your users; especially on mobile devices.

It would be much better to group resources into bundles so that the browser would have fewer files to download and thus fewer requests to make. This will help in keeping bandwidth usage low and even with battery life on your users' devices.

What is Webpack?

Webpack is a module bundler for the static assets in your web application. Essentially, you point Webpack at the main entry point(s) of your code and it will determine the dependencies, run transformations and create bundles that you can serve to your browser. What's even better is that in addition to JavaScript, Webpack can also handle CSS, LESS, TypeScript, CoffeeScript, images, web fonts and more.

Setting up

We're going to start by setting up a new ASP.NET Core project using the dotnet CLI tooling. If you don't have the tooling installed, you can find setup files and instructions here. If you're not a Windows user, don't worry: the tooling works on Windows, OSX and Linux.

Let's get started by opening a terminal and creating an empty directory. Now, we'll generate a new ASP.NET Core project by running the following command:

dotnet new -t web 

Currently, the generated project includes .bowerrc and bower.json files. You can delete these since we'll be using NPM to install packages. If you don't have NodeJS installed on your system, make sure you install it before continuing.

The next thing we'll do is create a folder called Scripts in the root of your project. You'll find out why later on. Also add an empty webpack.config.js to the root of your project. As you might have guessed, this is the file we'll use to configure Webpack. Your project layout should look something like this.

Configuring Webpack

Before we start configuring Webpack, let's check out where our assets actually are. ASP.NET Core places all the files destined for the browser inside of the wwwroot folder by default. If you take a peek inside that folder, you'll see sub folders for your JavaScript, CSS and image files. Note, the names of these sub folders aren't important. Feel free to rename them if you wish.

Personally, I prefer to reserve the wwwroot folder for the bundles that I want to provide to the browser. What we'll do is use the Scripts directory that was created earlier for the working files that will get included in the bundles.

Let's add two pretty trivial JavaScript files to our Scripts folder.

//other.js
function func() {
    alert('loaded!');
}
module.exports = func;

//main.js
var other = require('./other');

other();

Our scripts have been written using the CommonJS module syntax. The main.js file imports other.js and calls the exported function. OK, simple enough. Let's take a look at webpack.config.js.

var path = require('path');

module.exports = {
    entry: {
        main: './Scripts/main'
    },
    output: {
       publicPath: "/js/",
       path: path.join(__dirname, '/wwwroot/js/'),
       filename: 'main.build.js'
    }
};

The webpack.config.js file is a CommonJS module that we'll use to setup Webpack. The sample above shows a fairly bare bones Webpack configuration. Inside of entry, we define a main module and point it to the location of main.js file since that's where our app starts. The name of the bundle can be changed to something else if you like. There's no requirement for it to be called main. Inside of output, we let Webpack know what to name the bundle file and where to place it. The publicPath property configures the relative URL that the browser will use to reference our bundles.

Alright, that's good for now. Before we generate our bundle, make sure you have Webpack installed globally on your machine. Type the following command in your terminal. You'll only have to do this once.

npm i -g webpack 

Now we're ready to create our bundle. Make sure your terminal path is at the root of your project directory. Now run

webpack 

Your terminal output should look similar to below.

In your code editor, open Views/_Layout.cshtml. Near the bottom of the file, add a script reference to our bundle.

<script src="~/js/main.build.js"></script>

The generated ASP.NET Core template adds a few script tags wrapped in environment tag helpers. Go ahead and remove these for now.

<environment names="Development">
        <script src="~/lib/jquery/dist/jquery.js"></script>
        <script src="~/lib/bootstrap/dist/js/bootstrap.js"></script>
        <script src="~/js/site.js" asp-append-version="true"></script>
</environment>

Finally, we can run our application and see if our bundle works. Execute the following commands in the command terminal.

dotnet restore
dotnet run

Navigate to http://localhost:5000 in your browser. If everything works as expected, you should see an alert with loaded! in the browser window.

Tying the builds together

We can reduce the number of commands we have to type by leveraging the build events in project.json. Update the scripts section to include the precompile event. To see the other available events, head over to the .NET Core Tools docs.

"scripts": {
    "precompile": ["webpack"],
  },

Now running dotnet run or dotnet build will also run Webpack to generate the bundle.

Conclusion

In this post, we got a short introduction to setting up Webpack in ASP.NET Core. Webpack often gets labeled as overly complex and difficult to setup. Hopefully, this post showed you how easy it is to get started and made you a little more curious about what else it can do.



European ASP.NET Core 1.0 Hosting - HostForLIFE.eu :: How to Publish Your ASP.NET Core in IIS

clock November 3, 2016 09:27 by author Scott

When you build ASP.NET Core applications and you plan on running your applications on IIS you'll find that the way that Core applications work in IIS is radically different than in previous versions of ASP.NET.

In this post I'll explain how ASP.NET Core runs in the context of IIS and how you can deploy your ASP.NET Core application to IIS.

Setting Up Your IIS and ASP.NET Core

The most important thing to understand about hosting ASP.NET Core is that it runs as a standalone, out of process Console application. It's not hosted inside of IIS and it doesn't need IIS to run. ASP.NET Core applications have their own self-hosted Web server and process requests internally using this self-hosted server instance.

You can, however, run IIS as a front-end proxy for ASP.NET Core applications, because Kestrel is a raw Web server that doesn't support all the features a full server like IIS supports. This is actually a recommended practice on Windows in order to provide port 80/443 forwarding, which Kestrel doesn't support directly. For Windows, IIS (or another reverse proxy) will continue to be an important part of the server picture even with ASP.NET Core applications.

Run Your ASP.NET Core Site

Running an ASP.NET Core site is quite different from previous ASP.NET versions. ASP.NET Core runs its own web server using the Kestrel component. Kestrel is a .NET Web Server implementation that has been heavily optimized for throughput performance. It's fast and functional at getting network requests into your application, but it's 'just' a raw Web server. It does not include Web management services the way a full-featured server like IIS does.

If you run on Windows you will likely want to run Kestrel behind IIS to gain infrastructure features like port 80/443 forwarding via Host Headers, process lifetime management and certificate management to name a few.

ASP.NET Core applications are standalone Console applications invoked through the dotnet runtime command. They are not loaded into an IIS worker process, but rather loaded through a native IIS module called AspNetCoreModule that executes the external Console application.

Once you've installed the hosting bundle (or you install the .NET Core SDK on your Dev machine) the AspNetCoreModule is available in the IIS native module list:

The AspNetCoreModule is a native IIS module that hooks into the IIS pipeline very early in the request cycle and immediately redirects all traffic to the backend ASP.NET Core application. All requests - even those mapped to top level Handlers like ASPX bypass the IIS pipeline and are forwarded to the ASP.NET Core process. This means you can't easily mix ASP.NET Core and other frameworks in the same Site/Virtual directory, which feels a bit like a step back given that you could easily mix frameworks before in IIS.

While the IIS Site/Virtual still needs an IIS Application Pool to run in, the Application Pool should be set to use No Managed Code. Since the App Pool acts merely as a proxy to forward requests, there's no need to have it instantiate a .NET runtime.
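If you prefer to script that setting, it can also be applied from an elevated command prompt with appcmd (the app pool name here is just a placeholder):

%systemroot%\system32\inetsrv\appcmd set apppool "MyAspNetCoreAppPool" /managedRuntimeVersion:""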

The AspNetCoreModule's job is to ensure that your application gets loaded when the first request comes in and that the process stays loaded if for some reason the application crashes. You essentially get the same behavior as classic ASP.NET applications that are managed by WAS (Windows Activation Service).

Once running, incoming Http requests are handled by this module and then routed to your ASP.NET Core application.

So, requests come in from the Web into the kernel-mode http.sys driver, which routes them into IIS on the primary port (80) or SSL port (443). The request is then forwarded to your ASP.NET Core application on the HTTP port configured for your application, which is not port 80/443. In essence, IIS acts as a reverse proxy, simply forwarding requests to your ASP.NET Core Web application running the Kestrel Web server on a different port.

Kestrel picks up the request and pushes it into the ASP.NET Core middleware pipeline which then handles your request and passes it on to your application logic. The resulting HTTP output is then passed back to IIS which then pushes it back out over the Internet to the HTTP client that initiated the request - a browser, mobile client or application.

The AspNetCoreModule is configured via the web.config file found in the application's root, which points at the startup command (dotnet) and argument (your application's main DLL) used to launch the .NET Core application. The configuration in the web.config file points the module at your application's root folder and the startup DLL that needs to be launched.

Here's what the web.config looks like:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <!--
    Configure your application settings in appsettings.json. Learn more at http://go.microsoft.com/fwlink/?LinkId=786380
  -->
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*"
        modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <aspNetCore processPath="dotnet"
                arguments=".\AlbumViewerNetCore.dll"
                stdoutLogEnabled="false"
                stdoutLogFile=".\logs\stdout"
                forwardWindowsAuthToken="false" />
  </system.webServer>
</configuration>

You can see that the module references dotnet.exe and the compiled entry-point DLL that holds your Main method in your .NET Core application.

IIS is Recommended!

We've already discussed that when running ASP.NET Core on Windows, it's recommended you use IIS as a front-end proxy. While it's possible to directly access Kestrel via an IP address and available port, there are a number of reasons why you don't want to expose your application directly this way in production environments.

First and foremost, if you want to have multiple applications running on a single server that all share port 80 and port 443 you can't run Kestrel directly. Kestrel doesn't support host header routing which is required to allow multiple port 80 bindings on a single IP address. Without IIS (or http.sys actually) you currently can't do this using Kestrel alone (and I think this is not planned either).

The AspNetCoreModule running through IIS also provides the necessary process management to ensure that your application gets loaded on the first access, ensures that it stays up and running and is restarted if it crashes. The AspNetCoreModule provides the required process management to ensure that your AspNetCore application is always available even after a crash.

It's also a good idea to run secure SSL requests through IIS proper by setting up certificates through the IIS certificate store and letting IIS handle the SSL authentication. The backplane HTTP request from IIS can then simply be a non-secure HTTP request to your application. This means only the front-end IIS server needs a certificate, even if you have multiple servers on the backplane serving the actual HTTP content.

IIS can also provide static file serving, gzip compression of static content, static file caching, Url Rewriting and a host of other features that IIS provides natively. IIS is really good and efficient at processing non-application requests, so it's worthwhile to take advantage of that. You can let IIS handle the tasks that it's really good at, and leave the dynamic tasks to pass through to your ASP.NET Core application.

The bottom line for all of this is if you are hosting on Windows you'll want to use IIS and the AspNetCoreModule.

How to Publish ASP.NET Core in IIS

In order to run an application with IIS you have to publish it first. There are two ways you can do this today:

1. Use dotnet publish

Using dotnet publish builds your application and copies a runnable, self-contained version of the project to a new location on disk. You specify an output folder where all the files are published. This is not so different from classic ASP.NET which ran Web sites out of temp folders. With ASP.NET Core you explicitly publish an application into a location of your choice - the files are no longer hidden away and magically copied around.

A typical publish command may look like this:

dotnet publish
      --framework netcoreapp1.0
      --output "c:\temp\AlbumViewerWeb"
      --configuration Release

If you open this folder you'll find that it contains your original application structure plus all the nuget dependency assemblies dumped into the root folder:

Once you've published your application and you've moved it to your server (via FTP or other mechanism) we can then hook up IIS to the folder.

After that, just make sure you set the Application Pool's .NET Runtime to No Managed Code as shown above.

And that's really all that needs to happen. You should be able to now navigate to your site or Virtual and the application just runs.

You can now take this locally deployed Web site, copy it to a Web Server (via FTP or direct file copy or other publishing solution), set up a Site or Virtual and you are off to the races.

2. Publish Using Visual Studio

The dotnet publish step works to copy the entire project to a folder, but it doesn't actually publish your project to a Web site (currently - this is likely coming at a later point).

In order to get incremental publishing to work, which is really quite crucial for ASP.NET Core applications because there are so many dependencies, you need to use MsDeploy which is available as part of Visual Studio's Web Publishing features.

Currently the Visual Studio Tooling UI is very incomplete, but the underlying functionality is supported. I'll point out a few tweaks that you can use to get this to work today.

When you go into the Publish dialog in Visual Studio's RC2 Web tooling, you'll find that you can't create a publish profile that points at IIS. There are options for file and Azure publishing, but there's no way through the UI to create a new Web site publish.

However, you can cheat by creating your own .pubxml file and putting it into the \Properties\PublishProfiles folder in your project.

To create a 'manual profile' in your ASP.NET Core Web project:

  • Create a folder \Properties\PublishProfiles
  • Create a file <MyProfile>.pubxml

You can copy an existing .pubxml from a non-ASP.NET Core project or create one. Here's an example of a profile that works with IIS:

<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>MSDeploy</WebPublishMethod>
    <LastUsedBuildConfiguration>Release</LastUsedBuildConfiguration>
    <LastUsedPlatform>Any CPU</LastUsedPlatform>
    <SiteUrlToLaunchAfterPublish>http://samples.west-wind.com/AlbumViewerCore/index.html</SiteUrlToLaunchAfterPublish>
    <LaunchSiteAfterPublish>True</LaunchSiteAfterPublish>
    <ExcludeApp_Data>False</ExcludeApp_Data>
    <PublishFramework>netcoreapp1.0</PublishFramework>
    <UsePowerShell>True</UsePowerShell>
    <EnableMSDeployAppOffline>True</EnableMSDeployAppOffline>
    <MSDeployServiceURL>https://publish.west-wind.com</MSDeployServiceURL>
    <DeployIisAppPath>samples site/albumviewercore</DeployIisAppPath>
    <RemoteSitePhysicalPath />
    <SkipExtraFilesOnServer>True</SkipExtraFilesOnServer>
    <MSDeployPublishMethod>RemoteAgent</MSDeployPublishMethod>
    <EnableMSDeployBackup>False</EnableMSDeployBackup>
    <UserName>username</UserName>
    <_SavePWD>True</_SavePWD>
    <ADUsesOwinOrOpenIdConnect>False</ADUsesOwinOrOpenIdConnect>
    <AuthType>NTLM</AuthType>
  </PropertyGroup>
</Project>

Once you've created a .pubxml file you can now open the publish dialog in Visual Studio with this Profile selected:

At this point you should be able to publish your site to IIS on a remote server and use incremental updates with your content.

And it's a Wrap

Currently IIS hosting and publishing is not particularly well documented and there are some rough edges around the publishing process. Microsoft knows of these issues and this will get fixed by RTM of ASP.NET Core.

In the meantime I hope this post has provided the information you need to understand how IIS hosting works and a few tweaks that let you use the publishing tools available to get your IIS applications running on your Windows Server.



European ASP.NET Core Hosting - HostForLIFE.eu :: How to Fix Error 502.5 - Process Failure in ASP.NET Core

clock October 4, 2016 19:35 by author Scott

This is an issue that you sometimes face when deploying your ASP.NET Core application in a shared hosting environment.

Problem

HTTP Error 502.5 - Process Failure

Common causes of this issue:

* The application process failed to start
* The application process started but then stopped
* The application process started but failed to listen on the configured port

Although your hosting provider has set up .NET Core for you, you can still face the error above. So, how do you fix this problem?

Solution

There are 2 issues that cause the above error:

#1. Different ASP.NET Core Version

1. The first issue is a version difference. Microsoft just released the newest ASP.NET Core 1.0.1, but you may still be using ASP.NET Core 1.0, so you must upgrade your .NET Core version.

2. When you upgrade your app, make sure that the 'Microsoft.NETCore.App' setting in your project.json is changed to:

"Microsoft.NETCore.App": "1.0.1",

3. Fix your app 'type':

"Microsoft.NETCore.App": { "version": "1.0.1", "type": "platform" },

#2. Set path to dotnet.exe in the web.config

Here is an example of how to change your web.config file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <!--
    Configure your application settings in appsettings.json. Learn more at https://go.microsoft.com/fwlink/?LinkId=786380
  -->
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <aspNetCore processPath="C:\Program Files\dotnet\dotnet.exe" arguments=".\CustomersApp.dll" stdoutLogEnabled="true" stdoutLogFile=".\logs\stdout" forwardWindowsAuthToken="false" />
  </system.webServer>
</configuration>

Conclusion

We hope the tutorial above helps you fix this ASP.NET Core problem. It is based on our own experience solving the error above, and we hope it helps you. If you need ASP.NET Core hosting, you can visit our site at http://www.hostforlife.eu. We support the latest ASP.NET Core 1.0.1 hosting in our environment, and we can make sure that it works fine.



About HostForLIFE.eu

HostForLIFE.eu is a European Windows Hosting Provider which focuses on the Windows platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We offer the latest Windows 2016 Hosting, ASP.NET Core 2.2.1 Hosting, ASP.NET MVC 6 Hosting and SQL 2017 Hosting.

