Wednesday, 20 June 2018

Dotnet core 2.x mocking HttpContext etc.

Unit Test or Integration Test?

Unit testing and integration testing are two very black-and-white concepts - on paper! A unit test for an MVC action should call the code as directly as possible, injecting any mocks into the controller constructor and/or the action - easy, right?

Not really! What if your action accesses the request, the response etc.? These should all be injected into the controller, right? That would make sense and would make it much easier to test the writing of headers, the reading of request parameters and so on.

No, you need an integration test, right? Integration testing means testing the application with things joined together: databases wired up, services in place - and these tests can be automated too. Except that dotnet core doesn't (obviously) provide a way to inject any mocks, and for integration testing you shouldn't be using mocks anyway, which brings me back to the original problem.

When testing special actions like uploading files, multi-part forms etc., I need to access the context and, in one case, the response, since I am writing a file to the response directly. I also need to mock certain other services because I do not want to wire everything up just to check that the things I think I am setting are actually being set!

How do we mock HttpContext etc. in our dotnet core unit tests?

Things to know first:

  • HttpContext is quite complicated!
  • Not all properties have setters, some have to be set indirectly
  • ControllerBase does not use the injected IHttpContextAccessor for its HttpContext property
  • They use this weird Features mechanism to attach data
  • Classes like DefaultHttpResponse have a reference to their parent object (the context), which creates a slight chicken-and-egg problem.
  • In my example, I use DI to get my controller instance but you could instead create one directly and pass the service mocks into the constructor yourself.
So here's what I did, using Moq for mocks and, in this case, just providing a concrete response object so that my action could set response headers and query a client IP address without falling over. You could easily extend this for request objects etc.:


var httpContext = new Mock<HttpContext>();
httpContext.Setup(ct => ct.Connection.RemoteIpAddress)
           .Returns(new System.Net.IPAddress(0x2414188f));

var contextAccessor = new Mock<IHttpContextAccessor>();
contextAccessor.Setup(ca => ca.HttpContext)
               .Returns(httpContext.Object);

// attach a response feature so the concrete response object has something to write to
var features = new FeatureCollection();
features.Set<IHttpResponseFeature>(new HttpResponseFeature());
httpContext.Setup(ct => ct.Features)
           .Returns(features);

// the response needs a reference to its parent context - the chicken-and-egg problem
var response = new DefaultHttpResponse(httpContext.Object);
httpContext.Setup(ct => ct.Response)
           .Returns(response);

// the controller type here is illustrative - resolve your own controller under test
var controller = ActivatorUtilities.CreateInstance<ImagesController>(services);
controller.ControllerContext = new ControllerContext();
controller.ControllerContext.HttpContext = contextAccessor.Object.HttpContext;

var result = await controller.GetImage(new GetImageModel { accesstoken = VALID_ACCESS_TOKEN, imagename = VALID_IMAGE_NAME }) as FileStreamResult;

Using SASS in a project with Zurb Foundation

So I discovered Zurb Foundation the other day, a well-featured alternative to Bootstrap that is a bit lighter and basically, not Bootstrap!

What is SASS?

Writing a new project, I wanted to use SASS for the CSS generation. If you have never used SASS but you know what CSS is, SASS allows you to generate CSS using an additional set of functionality such as variables, mixins, nested styles and functions. Once you generate the CSS, it is just normal CSS which can be deployed with your site as before.

If you've never used it, you should, and you can learn it here. It is very similar to LESS but, for reasons I do not understand, SASS is much more popular.

Foundation

Back to Foundation. Like any good modern framework, its CSS is generated from SASS (or LESS), usually to make it super easy both for them to alter things and for you to customise it. For example, imagine their default button is blue but your brand is green. In most cases, changing a colour in CSS that is also tweaked for edges, shadows etc. is not a simple find-and-replace and would be very difficult. On the other hand, changing the single $brand-default variable from one colour to another and re-generating the CSS files is much simpler.

Firstly, there are two formats of SASS, distinguished by the file extensions .sass and .scss. The .sass extension is older and uses an indented, non-CSS style to make the content smaller and "terse", but it is not very commonly used any more. .scss is more modern and looks like CSS with extra bits in it, so Foundation uses that.

Building SASS into CSS

In general, building SASS into CSS involves using a scripting program such as gulp or grunt, pointing it at the location of the SASS file(s) and then letting it do its job using a pre-built module. In the case of gulp, the code is a few lines long and relatively easy to understand, as in the sketch below.
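For example, a minimal gulpfile.js sketch, assuming the sass/ source and web/css output directories used later in this post (gulp and gulp-sass installed via npm):

var gulp = require('gulp');
var sass = require('gulp-sass');

gulp.task('sass', function () {
    // only files without a leading underscore will produce CSS (see below)
    return gulp.src('sass/*.scss')
        .pipe(sass().on('error', sass.logError))  // compile, logging errors rather than crashing the task
        .pipe(gulp.dest('web/css'));              // write the generated CSS where the site can serve it
});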

One thing to know about the files is that a SASS compiler will attempt to build a CSS file for each SCSS file it finds in the source directory(s) unless the filename starts with an underscore. Files that start with an underscore tell the sass compiler that they are only partial content and will be included by another file. 

Example 1: If your source location had file1.scss, file2.scss and _file3.scss, the sass compiler would produce file1.css and file2.css. Note that the compiler won't know or care whether _file3.scss is actually used anywhere or not.

Example 2: If your source location has main.scss, _file2.scss, _file3.scss and _file4.scss, then the compiler will only produce a main.css.

In the case of Foundation, there is only a single file, called foundation.scss, that does not begin with an underscore. This means you can point your compiler either at that specific file or at the entire scss directory; it won't make any difference.

If you don't have anything set up, gulp is a great compiler tool and is based on node modules. There is a guide to setting it up here.

Using SASS Compilation with Foundation

So the problem with foundation is that the documents about building sass are a little poor, especially when you are completely new to the framework. The video doesn't really cover Foundation's use of sass, just sass in general and the documents start talking about a "Foundation project", whatever that is. Most of us will be building it into another project. This is how...

Firstly, you have a number of ways of downloading Foundation, as the installation page on the Zurb site describes. Since gulp is based on node, the NPM package is a good starting point, but it doesn't really matter since all of the package managers will give you the same code.

Create yourself a directory that will contain your build script (and node modules) or, if you are already using node, you could instead create a gulpfile.js in your root directory (or extend one that is already there), then npm install gulp and gulp-sass.

You will need a project-specific _settings file. Do not simply link to the one in the package: that copy should be free to change when the package updates without breaking your own code, so instead copy it out of the package and use it as the basis for your own settings. (You still need to consider how you will manage updates and ensure you have not accidentally broken anything.) You will also need your own "app.scss", or whatever you want to call it. Neither file is needed directly by the web site, so you should put them both into a separate folder somewhere, perhaps called sass (obviously you can move them later if you don't like the choice now).

|- app/
    |- controllers/
    |- node_modules/
    |- sass/
        |- _settings.scss
        |- app.scss
    |- web/
        |- css/
    |- gulpfile.js
    |- package.json

Once you have this basic setup, you will need to edit _settings.scss and change the import of util, which by default points to its package-relative location, to use the full path to the package's util directory (relative to the scss file you are compiling), e.g.

from

@import 'util/util';

to

@import '../node_modules/foundation-sites/scss/util/util';

Since Foundation doesn't include the settings by default (to allow you to provide your own), you then need to populate your own app.scss to import the settings FIRST and then the foundation file. Something like this:

@import 'settings';
@import '../node_modules/foundation-sites/scss/foundation';


But the important thing to realise is that, by default, you will not get any CSS! This is deliberate: each component in Foundation defines mixins as well as content, and you might want access to the mixins without having to generate all the content. The import gives you access to everything, but you will only get CSS for what you INCLUDE into your scss file AFTER your @import declarations. You can get the full list of items to include from the sass docs page. You might not know what they all are right now, but it's up to you how much you include now and how much you try to bring in later. A sketch follows below.
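For example, an app.scss that imports everything but only generates the global styles, the XY grid and buttons might look like this (the @include names are the ones listed on the Foundation sass docs page):

@import 'settings';
@import '../node_modules/foundation-sites/scss/foundation';

// only these components will actually produce CSS
@include foundation-global-styles;
@include foundation-xy-grid-classes;
@include foundation-button;

// or, if you simply want the lot:
// @include foundation-everything;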

Obviously your sass compiler in gulp (or whatever) can put the output CSS wherever it wants; I put mine directly into the web/css directory for use by the site. You can also set various sass options, like compression, to minify it for production. It is up to you whether you want to use the watch function to automatically rebuild the CSS any time you change your settings or app.scss, and also whether you want to use browserlink, which allows the browser to be reloaded automatically after changes, something that can be useful for rapid testing of CSS changes.

Configuring the grids

Although you won't necessarily need to change colours and the like right now, you should consider which parts of the grid to include and what to set flexbox to. Flexbox is a nice newer CSS layout feature, but it is not supported in really old browsers. Depending on whether you care, you should leave it enabled or disable it with the $global-flexbox variable in _settings. You can also disable $xy-grid, the preferred new grid (which is built on flexbox), if you cannot rely on flexbox and need to fall back to the legacy float grid. A sketch of these switches is below.
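As a sketch, the relevant switches near the top of _settings.scss look something like this:

// in _settings.scss
$global-flexbox: false;  // emit float-based CSS instead of flexbox for older browsers
$xy-grid: false;         // use the legacy float grid instead of the newer XY grid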

Friday, 15 June 2018

InvalidOperationException: Cannot resolve scoped service from root provider.

Dotnet Core 2 and a DI error that kind of makes sense in my head but I couldn't see why I was getting it.

I am using a Controller with an Encryptor injected into the constructor and saw the above message about the encryptor, which is registered as Scoped. My understanding was that Controllers were created in scope, so there should be no problem. As with most things I don't understand, it turned out I needed to read more carefully!

I thought the error call-stack was all framework code, but there was one line that wasn't - and it wasn't in the controller, it was in a library, where I had the following code:

// the generic type arguments here are illustrative - the final parameter is
// the Scoped encryption provider that triggers the error
services.AddSingleton(c => new ClientUtilities(
                readonlyDatabase,
                writableDatabase,
                sharedRedis,
                c.GetRequiredService<ILoggerFactory>(),
                c.GetRequiredService<IEncryptor<ClientUtilities>>()
            ));

Since the encryption provider is Scoped, I am not allowed to do this. Once I realised the actual mistake I had made, it all made more sense, so I thought I would explain it here.

Dependency Injection takes a little while to fully appreciate but most frameworks will allow you to register services in a couple of different ways. In Dotnet core, the built-in DI framework has the following types:


  1. Singleton
  2. Scoped
  3. Transient
1 and 3 are the easiest to understand. 

When you register a Singleton, the first time you resolve the service, it will create a single instance and this will live forever! It will be shared by any subsequent calls to resolve the service. The only exception to this description is that you can register a Singleton (and only a Singleton) using an already created object, in which case, obviously, it is already created and not instantiated at resolve time.

A transient registration means that you will get a new instance every time you call resolve.

A scoped registration means that you will always get the same instance returned within a single scope. You can set these scopes up yourself but an easy to understand example is the request on a web application which automatically creates a scope, meaning that within a single request, you will always get the same instance of a service registered with AddScoped().
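As a quick sketch of all three (the service types are hypothetical):

// one instance for the whole application lifetime
services.AddSingleton<IClock, SystemClock>();

// a new instance every time the service is resolved
services.AddTransient<IEmailBuilder, EmailBuilder>();

// one instance per scope - in a web application, one per request
services.AddScoped<IEncryptor, Encryptor>();

// the Singleton-only overload that takes an already created object
services.AddSingleton<IClock>(new SystemClock());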

The general choice of which to use is related to performance. If you have a utility service that simply performs some static functionality, it would be wasteful to create a new instance of it every time it was needed. In general, you should prefer to register Singletons where possible. The problem with the Singleton occurs if there are member variables i.e. the object has state. If the object has state then multiple threads accessing it would screw things up. You can either make the class thread-safe and keep it as a Singleton or otherwise decide that you should use a Scoped or Transient pattern instead.

Since Transient is the least well performing method, you should reserve this for small, lightweight objects that have no state that needs sharing.

Using Scoped objects can be helpful because you can share state within a single request in a web application, which will only be single-threaded (unless you create more threads yourself), so the service doesn't need to be thread-safe, but you can keep the state.

Another use of Scoped is where your service has a resource that itself is not thread safe and you don't want to create a new resource in every single method that needs it, each time the method is called.

The problem you have, however, is that a Singleton cannot use a Scoped service as a dependency. This is for two reasons. The first is philosophical: if you have a single instance of a Singleton, then it doesn't make sense for it to reference something that is designed to have multiple instances, since the scope could change at random times from the Singleton's view of the world. The second is practical: the system can create instances of Singletons as early as it likes in the application's lifecycle, and if there is no request, there is no scope in which to resolve the Scoped service, so it can't work.

I believe that you will only get the above exception if the Singleton is resolved early on and otherwise you would get unexpected behaviour later in the lifecycle but don't quote me on that!
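A minimal repro sketch (the types are hypothetical; note that a plain BuildServiceProvider() only performs this check when scope validation is enabled, which ASP.NET Core turns on for you in the Development environment):

public class EncryptionProvider { }

public class ClientUtilities
{
    public ClientUtilities(EncryptionProvider encryptor) { }
}

var services = new ServiceCollection();
services.AddScoped<EncryptionProvider>();

// a Singleton whose factory resolves a Scoped service
services.AddSingleton(c => new ClientUtilities(c.GetRequiredService<EncryptionProvider>()));

var provider = services.BuildServiceProvider(validateScopes: true);

// resolving the Singleton from the root provider (i.e. with no scope) throws:
// InvalidOperationException: Cannot resolve scoped service 'EncryptionProvider' from root provider.
var utilities = provider.GetRequiredService<ClientUtilities>();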

Thursday, 14 June 2018

Porting .Net Framework to .Net Core is not a 5 minute job!

There are lots of things to like about DotNet core. It is much faster than the dotnet framework; it has better abstractions, which have allowed genuine cross-platform functionality; and it has used the best practices of software development, like Dependency Injection, to remove much of the smoke and mirrors and legacy code that underpinned the dotnet framework!

Should you port?

In most situations, you don't get many chances to make a big code change to an existing application, especially if it was written for a customer who often won't pay for the update. There are benefits for maintenance and performance, as well as cross-platform capability, but these are marginal benefits for most businesses, who would rather throw more hardware at the problem than spend the time and money (and take the risk) of something bigger.

If you are about to do a major revamp/rewrite, why not do it at the same time? It will take longer, but not as long as doing it in two separate stages, and you get to share the system testing time between your design changes and the framework updates!

Technically though, there is lots to like about dotnet core. It always feels nice to do things the right way and not to have magic global or static variables doing unexpected things and which make it hard to Unit Test properly.

On the other hand, there is some functionality, particularly in third-party libraries, that has not been ported to DotNet core and which might or might not have an impact on your application.

What do you need to know?

Firstly, you should know that we are currently on dotnet core 2.x and there were a number of changes from 1.x (as expected), so be careful with instructions on the web, which might apply to older versions and might not work properly, or at all, on 2.x. Some things have also been made easier, which is nice.

If you are porting a library, you can use .Net Standard, which is specifically for shared libraries since it can support dotnet core and dotnet framework 4.x, which can be useful for porting in stages. You can start with the libraries and then work up to the application.

Applications are not .Net Standard because they are not shared. New applications can be either dotnet framework 4.x or dotnet core 2.x.

As mentioned before, not everything is the same and some things are not supported in dotnet core so don't burn any bridges when planning the work.

How to port

In general, the easiest way to port is to create a completely new dotnet core or dotnet standard project and then copy in your original code. This ensures you don't end up keeping orphaned files, especially things like AssemblyInfo.cs, which your build tools might be using even though it is not used in dotnet core. This doesn't mean you have to lose your history though. Although many things will change, you can still do the work in a branch and simply merge it over the original code. If you keep the structure as close to the original as possible, it might not look too bad in the history! You can then make "work arising" changes later.

Things that change

There are various things that will have to change, and the amount of work will depend partly on how well your existing code is written: if you already use DI, for example, the port will go much more smoothly. Conversely, if you have used a lot of System.Web code, you will have more work. If your current code is already MVC then there will still be changes, but the idea of controllers and actions holds in the same way.

If your current project is really old web forms then you will basically need to bin it and start again as you would if you were porting web forms to non-dotnet core MVC. The whole mindset is different and porting away from web forms is a mammoth job that generally involves going back to the spec and writing the system again!

Dependency Injection

This is a cornerstone of dotnet core. The whole framework is written around dependency injection and, since it is built in, you no longer need to install a DI framework, although you still can if you are happier with what you already know. If you want to remove the debt of a third-party framework when porting (we removed autofac and used the MS one), you will need to spend some time reworking the registrations, because every framework sets up slightly differently and MS doesn't support all the mechanisms that autofac does, such as named registrations.

DI can be done badly so if you haven't used it yet, get a good handle on what it is so that you don't create any anti-patterns or code smell in your efforts!

Configuration

Configuration in the dotnet framework was all about web/app.config, and the helpers like WebConfigurationManager were very much about IIS. This has all gone in dotnet core. Although you can still use XML config files, the standard method is to use json, which can be cascaded by environment in the way web.config transforms could be - and not just for the main config, for any config! A sketch follows below.
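For example, a hand-rolled setup might look like the following sketch (in 2.x, WebHost.CreateDefaultBuilder does most of this for you; env here is assumed to be the IHostingEnvironment):

var config = new ConfigurationBuilder()
    .SetBasePath(env.ContentRootPath)
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
    // the environment-specific file overrides the base one, like config transforms used to
    .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
    // environment variables (e.g. Azure app settings) override the json at runtime
    .AddEnvironmentVariables()
    .Build();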

Whole sections of web server management are now unavailable in dotnet core since the configuration would be web-server specific and not appropriate for an app configuration.

The best way to use config now is to bind it to POCO objects so it can be injected into services really easily. It can either be registered using services.Configure<T> (which allows it to be injected as IOptions<T>) or you can create concrete config objects that can be registered like any other singletons. Doing it properly allows a hierarchy of configs and notification of config changes. The mechanism also allows environment variables to be bound to the same config, so that environment variables or app settings on Azure can override config items in your json files at runtime.
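A sketch of both approaches, using a hypothetical "Smtp" section of appsettings.json:

public class SmtpConfig
{
    public string Host { get; set; }
    public int Port { get; set; }
}

// option 1: register for injection as IOptions<SmtpConfig>
services.Configure<SmtpConfig>(Configuration.GetSection("Smtp"));

// option 2: bind a concrete instance and register it like any other singleton
var smtp = Configuration.GetSection("Smtp").Get<SmtpConfig>();
services.AddSingleton(smtp);

// consuming option 1 in a service:
public class MailSender
{
    private readonly SmtpConfig _config;
    public MailSender(IOptions<SmtpConfig> options) { _config = options.Value; }
}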

MVC and Web API

The return types from MVC and Web API have finally been merged into a single type called IActionResult, which is not the same as the original IHttpActionResult from Web API. There will be some changes here, including different Controller methods to return various responses. For example, you used to have to call Content(HttpStatusCode.OK, object), whereas now there is an Ok(object) method. Many of these have changed slightly, so you will have some work to do there. StatusCode() now takes values from the StatusCodes class, which are named more verbosely but more usefully, such as StatusCodes.Status500InternalServerError. A sketch follows below.
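For example (the model type and repository are hypothetical):

[HttpGet("{id}")]
public async Task<IActionResult> GetWidget(int id)
{
    var widget = await _repository.Find(id);
    if (widget == null)
        return NotFound();  // 404
    return Ok(widget);      // 200 with a body - was Content(HttpStatusCode.OK, widget)
    // explicit codes: return StatusCode(StatusCodes.Status500InternalServerError);
}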

General Libraries

Some of the libraries you are using might well have had their architecture changed for .Net Standard/.Net Core to match the use of DI. For example, we used to have a helper library where we would call IMainInterface.GetSessionUtilities() to return an ISessionUtilities, but now we are using DI, the main interface has no value and I can simply inject ISessionUtilities directly into the controller. There were lots of changes due to this. A sketch is below.
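In other words, something like this (the controller name is hypothetical):

// before: resolve utilities through a hub interface
// var session = _mainInterface.GetSessionUtilities();

// after: the DI container provides the dependency directly
public class AccountController : Controller
{
    private readonly ISessionUtilities _session;

    public AccountController(ISessionUtilities session)
    {
        _session = session;
    }
}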

Unit Testing

You can still, theoretically, unit test dotnet core apps with non-dotnet-core unit test projects, but you will probably face problems if you are using json config files that the normal unit test projects won't understand (although I think you can run the old-style project using "dotnet test").

It is easier to create a dotnet core test project which has the same kinds of changes as normal dot net core apps but which will play much more nicely with injection and configuration.

The amount of work to change your original unit tests will depend massively on how much you had to change your app during the port. It could be anything from minor changes to inject configuration through to massive DI and re-factoring due to major re-architecting. In my case, the Unit Test updates took much longer than the main web service!

Conclusion

There should not be anything that is too hard to port as long as you understand the high-level differences and you haven't done anything crazy in your existing app, but just because it isn't hard doesn't mean it won't take a long time to go through things and unpick them. You should also allocate a large amount of testing time so that you can try to touch all the lines of code and make sure any changes you've made haven't broken anything!

Wednesday, 13 June 2018

DotNet Core Dependency Injection does not have named instances

I have seen various people asking on forums about named instances for Microsoft Dependency Injection. Although other DI frameworks support them and they seem useful, wanting them might also be a code smell that can usually be solved by abstracting things a bit more!

An Example

You might want something like this:

var db1connstring = "whatever";
var db2connstring = "something else";
services.AddSingleton<IDBConnection>(s => new DBConn(db1connstring)).Named("db1");
services.AddSingleton<IDBConnection>(s => new DBConn(db2connstring)).Named("db2");

services.AddSingleton(s => new Service(s.GetNamedService<IDBConnection>("db1"),
                                       s.GetNamedService<IDBConnection>("db2")));

It all seems legit - after all, our service requires two db connections of the same type, so we need to disambiguate, don't we? But we can't do this in dotnet core, either because it isn't supported yet or because they took the decision that it causes more harm than good.

Workaround 1 - Concrete types

A common mistake is to add types to the services collection just for the sake of constructing something else that is itself explicitly created at registration time, when we could simply create the instances and pass them in directly. This is not fully DI, but it works for simple cases. In the above example, we could do this:

var db1connstring = "whatever";
var db2connstring = "something else";
var db1 = new DBConn(db1connstring);
var db2 = new DBConn(db2connstring);

services.AddSingleton(new Service(db1, db2));

This works fine as long as nothing else that is being automatically created requires the type IDBConnection, since it is no longer registered in the services collection. You could register the connections as concrete objects in services as well, but then they would need to be Singletons, since you are only creating a single instance when using a concrete type (you'll notice that only AddSingleton has an overload that takes an object). It would also potentially cause ambiguity again, although it would be fine if they were only used via GetServices<IDBConnection>() elsewhere.

Workaround 2 - Abstracting the Config

If you ever have more than one parameter of the same type in a DI constructor - and this is very common with strings - you can instead abstract the whole lot into a configuration type rather than separate items. In our example, we could easily create a ServiceConfiguration type with named properties of the relevant types and inject this configuration instead. In the following example, we have a single constructor parameter on Service and, since the ServiceConfiguration type is available in services, we do not need a lambda for the creation of Service but can use the simple form below.

var serviceConfiguration = Configuration.Get<ServiceConfiguration>();
serviceConfiguration.MainDB = new DBConn(serviceConfiguration.MainConfig);
serviceConfiguration.BackupDB = new DBConn(serviceConfiguration.BackupConfig);
services.AddSingleton(serviceConfiguration);
services.AddSingleton<Service>();
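For completeness, a sketch of the types this assumes:

public class ServiceConfiguration
{
    // bound from the json config
    public string MainConfig { get; set; }
    public string BackupConfig { get; set; }

    // populated at startup
    public IDBConnection MainDB { get; set; }
    public IDBConnection BackupDB { get; set; }
}

public class Service
{
    private readonly ServiceConfiguration _config;

    // a single constructor parameter, so DI can build Service without a lambda
    public Service(ServiceConfiguration config)
    {
        _config = config;
    }
}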

Monday, 11 June 2018

Readable useful documentation

Today's award for very useful, concise and helpful documentation goes to the NuGet page on configuration.

Ever wondered how NuGet coalesces configuration? Just visit this helpful page.

Friday, 8 June 2018

Method not found: 'Void Microsoft.Azure.KeyVault.KeyVaultClient..ctor(AuthenticationCallback, System.Net.Http.DelegatingHandler[])'

Ah, Fridays. The time of relaxation, chilling and really annoying bugs that should be easy to understand and track down but are not! In this case, I pretty much knew the cause, but I didn't know how to diagnose exactly what was causing the problem.

The Bug

I ran my unit tests locally and they ran fine. I uploaded to the build server and got the error above. The same problem could happen for any number of libraries and methods, the error being something like this: System.MissingMethodException: Method not found: 'Void Microsoft.Azure.KeyVault.KeyVaultClient..ctor(AuthenticationCallback, System.Net.Http.DelegatingHandler[])'.

The Basic Problem

If it builds OK, then the references are working correctly. Your compiler has found the method but, of course, it only links symbolically to an assembly with a version (in this case System.Net.Http.dll 4.2.0.0). At runtime, however, the DotNet assembly loader is used, which you have probably learned is a pain in the neck, because there can be multiple versions of assemblies on one machine and not another, as well as assembly redirects, dependencies of dependencies and different framework versions.

The runtime loads whatever assembly the binding policy tells it to, and if that assembly doesn't have the method (or the types in it come from a different version of a dll, in which case they are treated as different types), you get the error.

In my case, the only real clue was that it worked locally and not on the Build Server. Fortunately I had access to both.

Diagnosing

The first thing that is important is that you shouldn't believe what Visual Studio tells you about versions. When I right-clicked the method that couldn't be found in my code, went to the definition and then followed it into the dll that held the dependency, Visual Studio told me it was in System.Net.Http.dll 4.2.0.0 from the SDK install, which made sense and which should have been correct.

This was not the case at runtime. I will spare you the things I tried, but the first step is to run fuslogvw on the local machine and log all bindings to disk. This tool is critical for debugging things like this! You should run it as Administrator from a command prompt and then, in settings, enable logging of all bindings and set a custom path (e.g. c:\temp\fusion). Then run the task that works locally but not on the build server, and go back to fusion log and disable the logging, just to reduce noise!

You should then look through and find the assemblies of interest, which are listed by the version being requested by the app, in my case 4.2.0.0:


Double-click the entry and you will see an html file with the loading details in it. Note that the entries appear in the order they are processed, so you might see the same item twice in the list. In my case, the issue was in the unit test project, so I started at the bottom of the list and scrolled up until I found the requested version.

On my local machine, this file contained something very interesting (snipped):

LOG: Version redirect found in framework config: 4.2.0.0 redirected to 4.0.0.0.
LOG: Post-policy reference: System.Net.Http, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
LOG: Switch from LoadFrom context to default context.
LOG: Binding succeeds. Returns assembly from C:\WINDOWS\Microsoft.Net\assembly\GAC_MSIL\System.Net.Http\v4.0_4.0.0.0__b03f5f7f11d50a3a\System.Net.Http.dll.
LOG: Assembly is loaded in default load context.

Note the first line, which says that although I requested 4.2.0.0, a redirect was changing this to 4.0.0.0, which was in the GAC and which was loaded - not the 4.2.0.0 that Visual Studio thought it was using.

Looking at the Build Machine

Well, I did exactly the same thing on the build machine with fuslogvw and sure enough, it was loading the expected 4.2.0.0 and not 4.0.0.0. Ironically, the Build Server seemed more correct but was not working!

There were two obvious questions. 1) Why was my local machine redirecting the assembly and 2) Why did it even matter?

I tried a find-in-files on my local machine and could not find an answer. I could not find the assembly redirect so I decided to look into the second question, which ended up partially answering the first.

The Microsoft Cock-up

Microsoft, I love you, but sometimes you make basic errors that affect thousands of people, especially with versions and packaging. One such example I found on a massive thread here.

Basically, when dotnet core was released, more work was needed on the main DotNet assemblies to move shared types into the correct places and move platform-specific code out into platform-specific libraries. These changes were not really needed for the normal .Net Framework, so the decision was taken to simply continue developing System.Net.Http.dll for .net core without updating the .Net Framework version (which stayed at 4.0.0.0). The newer versions had changes, including breaking ones, and the version number of the .net framework library was later bumped (to keep it in line with the .net standard version) even though the library itself had not changed, causing all manner of dependency problems - especially where .net standard packages were referencing the NuGet package for this library instead of just referencing it directly (why is the package even there?).

The suggested fix for most problems was simple: redirect anything above 4.0.0.0 back to 4.0.0.0, which is the version in the GAC. This explains why the redirect was happening on my local machine, if not how it got there. It also explains why the build was failing on the Build Server, which didn't have the redirect in place.

The Solution

Once all of that was understood, the solution was simple: An assembly redirect in the app.config for the test project and it was all happy!

<dependentAssembly>
    <assemblyIdentity name="System.Net.Http" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
    <bindingRedirect oldVersion="0.0.0.0-4.2.0.0" newVersion="4.0.0.0" />
</dependentAssembly>