Thursday, 18 August 2016

Error: Unable to Start Debugging on the Web Server

Got this on an app that had previously worked, and I definitely knew everything was installed on my local PC.

The problem was that the Web tab of the project Properties had a URL set in Project Url that pointed to a live site (which I usually redirect to 127.0.0.1 in the hosts file). Pressing debug, Visual Studio was trying to connect to that live site, which obviously doesn't have the remote debugger installed, and was failing.

Simples! Just added the entry back into the hosts file so VS can connect to the debugger on the local machine and we're all good again.
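For what it's worth, the hosts entry is just something along these lines (the hostname here is a made-up example):

# Redirect the live hostname back to the local machine for debugging
127.0.0.1    www.mylivesite.example.com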

Thursday, 21 July 2016

PHP loads extension from CGI but NOT from CLI

So this is a weird one, and one I had to work around instead of fixing.

I've had PHP 5.5 working fine, but Magento requires 5.6, so I downloaded it along with all the extensions I have been using, like pdo_sqlsrv, xcache and memcache.

I then loaded a local PHP web app to confirm it was set up correctly, and that worked fine.

I then ran composer from the command line to download Magento and it errored with: PHP Startup: Unable to load dynamic library php_pdo_sqlsrv_56_nts.dll - %1 is not a valid Win32 application.

There are all kinds of pitfalls when using PHP on Windows, and I tried/checked the following:

  1. CLI pointing to the correct php.ini? Yes
  2. PHP build is 32-bit? Yes
  3. Extension is 32-bit? Yes
  4. Extension path correct? Yes
  5. Anything useful in Event Viewer? No, only the same as the CLI prints (no error for the web app)
  6. Everything else pointing to the right place? Yes

The biggest annoyance is that it was definitely working OK in the web app, whereas it would just error for the CLI (bearing in mind the local web app was not using the sqlsrv driver). It also seemed to be only that one specific extension it was complaining about (and I have had problems with the same extension in an earlier life).

The only thing I could do was to comment out the offending line in php.ini before running composer update and then put it back when I next need it. Maybe it's something as simple as the web version carrying on past warnings while the CLI version does the opposite?
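In php.ini, the workaround is just toggling the comment on the offending line (the extension name is the one from the error above):

; commented out before running composer update, restored afterwards
;extension=php_pdo_sqlsrv_56_nts.dll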

Wednesday, 13 July 2016

Convert.ToDateTime() ALSO converts the timezone!

Few things in programming sound as simple as dates yet are so complicated. Partly, I think, previous companies have not appreciated the complexity and have tried to over-simplify things, and even JavaScript still lacks basic formatting methods like a ToString(format).

Anyway, I am trying to display a date to the user in the browser-local timezone and format. This sounds easy enough: I render a UTC datetime to the page and then use the following JavaScript to render it in the locale-sensitive format:

// Define a simple String.format helper if one doesn't already exist
if (!String.format) {
    String.format = function (format) {
        var args = Array.prototype.slice.call(arguments, 1);
        return format.replace(/{(\d+)}/g, function (match, number) {
            return typeof args[number] != 'undefined'
                ? args[number]
                : match;
        });
    };
}

// Format the UTC datetime rendered into the hidden inputs using the
// browser's locale; slice(0, -3) drops the seconds from the time string
$('#lastLoginDate').text(String.format(
    $('[name="LastLoginFormat"]').val(),
    new Date($('[name="LastLogin"]').val()).toLocaleDateString(),
    new Date($('[name="LastLogin"]').val()).toLocaleTimeString().slice(0, -3)
));

The first bit is actually formatting the string to say something like "Last login at {0} {1}", with the localised format string rendered into the page by the server.

This bit worked, but the time was wrong, and I suspected some timezone problem. I thought I was storing all dates in the database as UTC - they are cloud servers and always stay on UTC without daylight savings. A quick check confirmed this was accurate, so what was going on?

Well, I wanted to make sure an ISO 8601 string was being passed to JavaScript, so I had the following:

model.LastLogin = LastLogin.ToString("yyyy-MM-ddTHH:mm:ssZ");
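As an aside, be careful with that format string: in a .NET custom format, "Z" is emitted as a literal character regardless of the DateTime's Kind, so it can stamp a Z onto a non-UTC value. A sketch of an alternative using the round-trip specifier (not what I originally had):

// "o" produces an ISO 8601 round-trip string; the Z/offset suffix
// reflects the DateTime's actual Kind
model.LastLogin = LastLogin.ToString("o");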

But the time it produced was wrong. So, after a simple console app and some copying and pasting of the code I was using in my main app, I found the following was happening:

(Web service from database)
return Convert.ToDateTime(dataReader["lastlogin"]).ToString("u");

This was correct and returned the string as UTC datetime as per the database.

(Main app)
LastLogin = Convert.ToDateTime(stringFromWebService);

This was NOT correct. The UTC datetime was being converted into the local timezone of my development machine when this code was called. Why do I think this is wrong? Because it assumes I want the time to be local, whereas the SAFE default would be to leave it in UTC and allow me to convert it later, or to provide other overloads that are more specific about whether the conversion should take place or not.

The fix? I have to convert it BACK again by doing something like this:

LastLogin = Convert.ToDateTime(stringFromWebService).ToUniversalTime();

Which is also poorly named, since it is really converting it to the universal time ZONE.
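An alternative that avoids the round trip through local time is to be explicit when parsing - a sketch, assuming the web service hands back an ISO 8601-style UTC string (the value here is made up):

using System;
using System.Globalization;

var stringFromWebService = "2016-07-13 09:30:00Z";  // hypothetical value

// AdjustToUniversal parses straight to UTC instead of silently
// converting to the local timezone; the result has Kind == Utc
LastLogin = DateTime.Parse(
    stringFromWebService,
    CultureInfo.InvariantCulture,
    DateTimeStyles.AdjustToUniversal);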

I don't know whether we will ever understand how all this date stuff works, but it would be easier if the framework classes treated everything as UTC and only performed locale-dependent conversions with specific functions like a DateTime.ToTimeZone() or DateTime.ToLocalTimezone() or whatever.

I'm trying to do the right thing with UTC but .NET is not helping :-(


Friday, 8 July 2016

Calls to Azure DocumentDB hang

TL;DR It was my async deadlock and was nothing to do with DocumentDB!

I have been trying to use DocumentDB for ASP.NET session state. Why? Although it is not the recommended "Redis" way, it is resilient, supposedly fast, and will save us a packet on the current three cache worker roles we have to support.

So I wanted to see whether it was going to work, and for reasons I could not understand, the unit tests in my NoSQL library all ran OK, but doing the SAME thing in my web app, ReadDocumentAsync worked fine while CreateDocumentAsync would hang. It added the document OK but just hung. Fortunately, when I searched for something more generic than "DocumentDB is hanging", I chanced upon a few articles about the dreaded async deadlock in .NET.

Hopefully, you all know what async is: some magic glue that Microsoft invented to increase performance in .NET applications, particularly when they are waiting for external things to happen. It is not really the same thing as multi-threading, and also not quite what people think of when we talk about asynchronous coding. It can therefore be quite confusing, and this confusion is where the deadlock comes from.

What does async do? When you call an async method, it returns a task, which is a kind of "handle" on the work that allows you to carry on and do something else if you want, OR you might just want to wait there for it to finish. Let us first look at the correct way to await an async method:

var returnValue = await MyMethodAsync();

NOW, this is the thing. Under the covers, once the async method is called, the thread calling it is released back to the thread pool so it can be used for something else. Once the task has finished, the framework waits to re-acquire the original context (loosely speaking, the SAME thread), at which point control continues where it left off. Not surprising, since there is calling state that needs to be restored before execution can resume.

You could alternatively get a task and await later on:

var task = MyMethodAsync();
DoSomethingElse();
var returnValue = await task;

So what's the problem? To use the "await" keyword, you must be in an async method, and if you are not careful, you end up with async methods everywhere, starting at the lowest level and working their way all the way to the top. This can be confusing and seem over-the-top, especially when ReSharper keeps telling you that you have an async method that might not be using await!

So there are a couple of other ways to call async methods. One is the Result property (which returns the value) and the other is Wait(), used if there is no return value. These allow you to call an async method without using await and implicitly wait on the call. These are synchronous calls; in other words, the thread blocks - it does NOT get released like it would if you called await.

Can you see the problem yet? IF you call the async method on the SAME thread that you then call Result or Wait() on, you will probably deadlock, because once the async task has finished, it will wait to re-acquire the previous thread, but it can't - that thread is blocked on the call to Result/Wait().
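A minimal sketch of the deadlock (the method names are made up; it assumes a SynchronizationContext that posts continuations back to the calling thread, as in classic ASP.NET or a UI app):

using System.Threading.Tasks;

// Called on a request/UI thread
public string GetData()
{
    // Result blocks this thread until the task completes...
    return GetDataAsync().Result;
}

public async Task<string> GetDataAsync()
{
    // ...but after the await, the continuation is queued back to the
    // blocked thread's context, so neither side can ever make progress
    await Task.Delay(1000);
    return "done";
}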

So why was this working in my unit tests? Well, note my use of the word "probably". Obviously, if you do not wait for the task to complete, you will not deadlock, although that is likely to be wrong, since you will probably need to handle errors and return values from async methods. But also, if the async call is fast enough, it might finish before the task is even returned, in which case it will have already re-acquired the original thread, and by the time that thread calls Wait() or Result, the data is already there.

So you can use async tasks and await to avoid this problem, but there is also another clever trick, certainly in newer versions of the .NET Framework, and that is to invoke your async task on another thread, not on the one you are calling your method from. It is as simple as this:

var task = Task.Run(() => MyMethodAsync());

which invokes the method on a thread from the thread pool. When your calling thread then waits and blocks using Wait() or Result, the async task will NOT need to wait for your thread; it will re-acquire one from the thread pool, finish, and signal your waiting thread to allow it to continue!
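So the blocking call becomes something like this:

// Safe(r) to block here: the async method's continuations resume on
// thread-pool threads, not on this blocked one
var returnValue = Task.Run(() => MyMethodAsync()).Result;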

It is important to carefully consider your use of async. For instance, you want lots of spare threads when you are waiting on long-running tasks, but at the same time, letting more people in when the backend is already choked might not improve performance; it might make everything much slower. You should consider the mix of functionality your site provides (if everyone does the same thing, you might not save anything), and you should generally not make async calls to the database unless it is replicated: the database is usually the slowest point in the system, and letting the web servers deluge it with even more requests will not make things faster - quite the opposite!

I'm still learning though.



Friday, 1 July 2016

IdentityServer doesn't work with OAuth2 and it's probably OpenID's fault!

I am really excited to be working with Dominick Baier of IdentityServer next week to help PixelPin implement OpenID Connect using the IdentityServer library. We already have a (homemade) OAuth2 solution, but OpenID Connect has some more complexity which I didn't particularly want to hack around with myself, especially since the number of test cases goes up exponentially with every option, grant type, response mode, etc.

In fact, I have already got the OpenID Connect login working and have been testing it with a WordPress OAuth2 plugin. I realised it doesn't work, even though OpenID Connect is supposed to be a superset of OAuth2 and IdentityServer is supposed to work with OAuth2.

My first problem was that "scope" is required by IdentityServer and is required by OpenID Connect, but it is NOT required by OAuth2, and not all plugins will pass scope, since OAuth2 allows an IdP to have a default scope if none is passed. Whether or not that was a good idea, it is what RFC 6749 allows, and it should be permitted. I had a discussion on GitHub with Brock Allen, the other main author of IdentityServer, and he didn't seem to understand what I was saying and why it was broken. Since OAuth2 allows a default scope, I offered to create a pull request that allows the user to specify a default and only errors if the default is not set AND scope is not passed, but Brock didn't agree. I have already modified my copy of IdentityServer to not require scope.

The second problem is that AuthorizeRequestValidator has some sloppy (in my opinion) logic when checking the request. It basically says, "if there are any OpenID scopes but one of the scopes is not openid, throw an error". The problem is that this assumes OpenID scopes are unique to OpenID Connect, and that is not true. Many OAuth2 providers will use a scope called email in a non-OpenID request, and this does not indicate an incorrect OpenID request, just that it is not OpenID at all. I am going to raise this on GitHub and see what happens!

Anyway, it raises another issue: what happens when you create something like OpenID Connect to "sit on top of" OAuth2 and the specs conflict? OpenID Connect basically says that the userinfo endpoint should use Bearer Token authentication, whereas OAuth2 is not specific. What does that mean for implementers? IdentityServer is clearly very much OpenID oriented and requires the userinfo request to use a Bearer Token, whereas my OAuth2 plugin simply provides the token as a POST body param, which is also allowed in certain specific scenarios but is not guaranteed to be supported by OpenID resource servers.

For me, the key should be the openid scope. If it is NOT present, then the system should behave in the same way as any normal OAuth2 provider - something that IdentityServer does NOT do. If openid is a requested scope, then the system can go to town on validation and error messages, since it is now in the stricter world of OpenID Connect.

OpenID Connect is gaining ground, but there are still many OAuth2 clients that don't support it and possibly won't work with IdentityServer, at least without some changes to them, which might or might not be possible.

Anyway, hopefully, Dominick will put me straight! Until then, you have been warned.

Cannot add IIS App Pool identity to Windows permissions

IIS 7 and above have a really useful and secure feature where each Application Pool gets its own user account, which makes it harder for a hacked web application to access directories that it shouldn't.

Of course, this means that some developers get lazy and change it to use Network Service or suchlike because "it just works", but you really shouldn't do this. You should assign permissions correctly so it works. Microsoft makes this easy, if slightly unintuitive, when adding permissions to folders.

Usually, the app pool gets no permissions by default - not even read. You will almost certainly have to start by editing the folder permissions of the web app on disk and adding the app pool user. You do this by searching for the application pool user by name, prefixed with "IIS AppPool", as in IIS AppPool\appPoolName.

When you then click Check Names, if it WORKS, the name changes to an underlined version.

BUT. If you try and just type the pool name on its own and press Check Names, it DOESN'T work - you have to type IIS AppPool\ at the start.

BUT sometimes it still doesn't work, even if you know the user exists. There are two reasons this will happen.

1) You have the wrong "location" set. You need to use the local machine location, not a domain, since these users do not exist on the domain. If you press Locations and it asks for a domain login, just press Escape and eventually the list will come up, where you can select the local machine without needing to enter domain creds.
2) You do not have ownership of the folder you are trying to add users to. For some reason, this does not prevent you opening the dialog, but when you search for the IIS names, they simply aren't found. Go back up a level and make sure you are the owner of the folder. I had an example where, although I was an Administrator, the folder was owned by "Administrators" and not by "Luke". Weird but true.
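Incidentally, if you prefer the command line to the dialogs, icacls can grant the same permission - the path and pool name here are examples:

icacls "C:\inetpub\wwwroot\MyApp" /grant "IIS AppPool\appPoolName":(OI)(CI)RX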

Wednesday, 29 June 2016

TelemetryConfiguration with different Instrumentation Keys

The last few days have involved learning lots of things, breaking lots of things, fixing them and then breaking some other things. I have lived the Facebook "Fail fast" motto!

Anyway, the latest fun and games is with Application Insights, a pretty swish Azure service that collects not only loads of server and performance data from your web apps but also lets you record custom metrics. I have a shared library to track these metrics, but I realised I needed to track events from multiple apps into the same App Insights bucket while keeping separate buckets for server data - in other words, to have more than one instrumentation key.

Initially, I simply passed the instrumentation key I wanted to my library like this:

client = new TelemetryClient(new TelemetryConfiguration() { InstrumentationKey = myParam });

But when called, this produced the rather obtuse error: "Telemetry channel should be configured for telemetry configuration before tracking telemetry".

This is another MS classic that makes sense after you've worked out what went wrong rather than showing you what you did.

The telemetry channel is set up in config, so why wasn't it working? Duh - because I had passed new TelemetryConfiguration() rather than getting TelemetryConfiguration.Active, which is what the default constructor uses. I didn't want to use Active, however, because changing that could affect the config used by the rest of the app, and I wanted the change to stay local. Fortunately, TelemetryConfiguration provides two helper methods: CreateDefault(), which creates a copy of the active configuration and lets you change what you need (this is what I used), and CreateFromConfiguration(), which does the same thing from a configuration source if you need something more complex.
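So the fix was along these lines (myParam being the key passed in, as above):

// CreateDefault() gives a configuration loaded with the usual settings,
// including the telemetry channel, so only the key needs overriding
var config = TelemetryConfiguration.CreateDefault();
config.InstrumentationKey = myParam;
client = new TelemetryClient(config);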

Once I created the default one and simply changed the key, hey presto, it all worked.

Easy when you know how!