Wednesday, 21 December 2016

UnableToResolvePhysicalConnection UnableToConnect

"No connection is available to service this operation: GET"
"It was not possible to connect to the redis server(s); to create a disconnected multiplexer, disable AbortOnConnectFail. ConnectTimeout"

Redis, running on Windows with the default settings.

The problem? ssl was set to true in the connection string but not set up on the server. I set ssl to false (it is only for development) and it all works fine. Why would this not return a useful error like "SSL not set up on server"?
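For reference, the ssl flag lives in the StackExchange.Redis connection string itself. A minimal sketch - the host names and password are placeholders, not our real values:

```
localhost:6379,ssl=false,abortConnect=false
myserver.example.com:6380,ssl=true,password=PLACEHOLDER,abortConnect=false
```

The first form suits a development box with no TLS configured; the second is the shape for a server that does have TLS listening (6380 is the conventional TLS port for Redis). Setting abortConnect=false is the "disable AbortOnConnectFail" that the error message mentions, and at least lets the multiplexer be created so you can log a clearer failure.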

We only found out when one of our devs happened to spot the ssl setting and wondered if that was why it worked in production but not in dev!

Tuesday, 20 December 2016

MSTest and Unit Tests for .Net on Bamboo - setting the path to the dll

We are starting to use Atlassian Bamboo, a build server that integrates really well with Bitbucket for building git repos.

Anyway, we mostly use .Net and getting that to build was OK, but now we have added unit tests as well. I added another stage with another job and an MSTest task, typed in the name of the test dll, and it wouldn't work (file not found). I then found out about Bamboo server variables and tried adding one of these to the path to the dll, but it still didn't work.

Basically, each job has its own build directory so although the application and test dll had been built by the previous stage/job, the unit test job then starts from an empty build directory.

You can share artifacts across jobs using the artifacts tab but in this case, it seems to make more sense simply to put the unit test task into the same stage as the compile task so they share the directory.

Easy peasy.

Wednesday, 23 November 2016

Getting Web Deploy working

When web deploy works from Visual Studio, you get lazy and forget how much hassle it is to set up!

We have a new staging server and want to be able to web deploy so here is what you need to do!

Do these in the order given. There is at least one scenario where attempting to install Web Deploy before running the management service will not work!

  1. Make sure the Web Management Service is running. If it's not installed, you have to add the feature to the server. Assuming you have IIS installed, you need to add the feature Management Service.
  2. Open IIS, click on the server in the left-hand pane and double-click Management Service under Management. If the page is disabled, click Stop on the right, tick Enable remote connections and then click Start. Optionally, you can lock down the remote IP addresses for the service. (You get a 403 if this is not set up.)
  3. Open firewall port 8172 for TCP. You can lock this down to IP addresses if required.
  4. Install Web Deploy (current version is 3.5) by downloading direct from Microsoft. The web platform installer might not install the handler needed! (You get a 404 if this isn't installed)
  5. Create a site in IIS, if it does not exist, pointing to a new folder that will be the destination for your site. This will need to match the name you specify for Site name in the web deploy dialog. You can setup https bindings at this point if required.
  6. You have the choice of two ways of setting up user access: either use an IIS user, which you create under the server tab in IIS Manager Users, or use a Windows user. If you want to use IIS users, you need to enable this under the Management Service page on the server. (I couldn't get the Windows user to work!)
  7. Click on the web site you want to set permissions for and double-click IIS Manager Permissions. Click Allow User on the right and choose either a Windows or IIS user to give permissions to.
  8. If you have used an IIS user, you need to add a delegate rule to allow the user to create an application and content. Double-click Management Service Delegation in the server tab and click Add Rule. Choose Deploy Application and Content and then once it is added, ensure the rule has contentPath, createApp, iisApp and setAcl ticked. Then add the IIS user you created to the rule.
  9. Make sure the user you are using has full control permission to the web root folder on the server to create the directories and files (it needs full control, even for subsequent deployments, which is sad but true!). For the IIS Users, you need to add these permissions for whatever user is running the Web Management Service (Local Service by default). If using a windows user, that user needs modify permission also.
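Once the server side is in place, it is worth testing a deployment from the client machine outside Visual Studio using msdeploy.exe, since it tends to give clearer errors than the Visual Studio dialog. A hedged sketch - the server name, site name, user, and package path are all placeholders:

```
"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" ^
  -verb:sync ^
  -source:package="C:\packages\MySite.zip" ^
  -dest:auto,computerName="https://staging.example.com:8172/msdeploy.axd?site=MySite",userName="deployUser",password="PLACEHOLDER",authType="Basic" ^
  -allowUntrusted
```

-allowUntrusted is only needed if the management service is using a self-signed certificate. The 403 and 404 responses mentioned in the steps above show up plainly in this tool's output, which makes it easier to tell which step you got wrong.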

Tuesday, 22 November 2016

Automated Builds for Visual Studio Project/Solution with custom NuGet packages and Bamboo

Atlassian Bamboo

We've started using Atlassian Bamboo as a build server for our mainly .Net projects. Apart from the fact that it plays nicely with Jira and Bitbucket and it seems to be very nice to use, it also supports a useful feature that it can build every branch that is pushed rather than just the "master". This allows a build for every defect fix which will live in its own branch but without having to set them up each time.

Bamboo Concepts

Anyway, the concepts are a little confusing in Bamboo because it is capable of very complex builds and workflows but you basically create a:

  1. Project - This is basically what you expect and will likely map onto a single solution in source control. It might however be a "super" project which would integrate several solutions into a single product.
  2. Plan - Each plan is like a "build type" so you might have one for Continuous Integration, one for each type of deployment etc. CI is likely to be automatically triggered and deployments are likely to be manually triggered only.
  3. Stage - Each plan has 1 or more stages. Stages will be run one after the other but are able to share artifacts between them i.e. the output of one stage can be fed to the next.
  4. Job - Each stage can have 1 or more jobs. Jobs are designed to be run in parallel on multiple agents so if you want something simple, just use a single job. Bamboo licensing is based around jobs so adding multiple jobs can cause you to require more licensing.
  5. Task - Each job runs 1 or more tasks. This is the lowest level entity. Each task will run one after the other but you are allowed to say whether the task always runs on failure of any tasks. This might be useful for calling a cleanup task. These are called "Final tasks".

Tasks for Building .NET

Source Code Checkout

Firstly, you will need a Source Code Checkout task. Note that the default properties for Bamboo will check everything out under Bamboo Home, which is in C:\Users\ by default and which might easily fill your C drive. I changed these paths and restarted Bamboo to make it use another drive for builds and artifacts.

Most of this should be easy to setup but you might need to setup SSH keys etc. for Bamboo to access Git, Subversion etc. I did this from the command line using git-scm but it looks like Bamboo does its own stuff by creating the application link to BitBucket and making Bamboo create a key.

NuGet Package Restore

In .NET, you are likely to need to restore NuGet packages. You shouldn't store these in your code repository because they are a waste of space and are easy to pull down onto the build server on build.

You will need NuGet.exe, which you can download as a single standalone executable and which might as well be copied into C:\Program Files (x86)\NuGet. You will probably already have this directory if you have installed Visual Studio.

Now is the fun part! If you are only using public NuGet packages, then you should be able to skip the next part and go to "Create the Task" in this section.

Custom Feeds

If you are getting packages from any custom feeds, there are various ways for NuGet to find these. Depending on your version of NuGet, the following config files are used when NuGet attempts a package restore:
  1. Project-specific NuGet.Config files located in any folder from the solution folder up to the drive root.
  2. A solution-specific NuGet.Config file located within a .nuget folder in the solution.
  3. The global config file located in %APPDATA%\NuGet\NuGet.Config
  4. Additional machine-wide config files (NuGet 2.6 and later) located in %ProgramData%\NuGet\Config[\{IDE}[\{Version}[\{SKU}\]]]NuGet.Config
  5. (NuGet 2.7 and later) The "defaults" file located at %PROGRAMDATA%\NuGet\NuGetDefaults.config
So what should we use? As with anything, changing the fewest files is the best option. Option 5, above, is a way to share NuGet config between separate users by having a potentially network-wide config. As long as security isn't an issue, this file could be pushed by group policy (or some other way) and avoid any additional setup. Option 4 is if you want to lock it to the build machine only, option 3 for the build user only, and options 2 and 1 if the settings need to be project specific.
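Whichever file you pick, the shape of the entry is the same. A sketch of a NuGet.Config with one custom feed added - the key name and URL are placeholders for your own feed:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- The key is a display name of your choosing; the value is the feed endpoint -->
    <add key="MyCompanyFeed" value="https://example.pkgs.visualstudio.com/_packaging/MyFeed/nuget/v3/index.json" />
  </packageSources>
</configuration>
```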

BE CAREFUL that you are not accidentally pulling a config file in from the user's roaming profile. This happened to me and caused a feed to be disabled! You can find out by opening C:\Users\&lt;username&gt;\AppData\Roaming\NuGet and seeing if anything unexpected is there.

Once you've decided the appropriate config file to edit, add your feed endpoint in the normal NuGet format. If you are using VSTS feeds, you can copy the URL from the "package source URL" box in the "Connect to feed" dialog in VSTS. If the feed does not require authentication, it will simply be a key="" and value="" added under packageSources.

If you need to use a VSTS feed, the easiest way to authenticate is to add the CredentialProvider from VSTS, which is designed to bring up Internet Explorer to allow you to authenticate using your VSTS account and allow the provider to download and secure the creds needed to access the package. The instructions for this are linked in the "Connect to feed" dialog and are rubbish!

Download the link onto the build machine from the "Download NuGet + VSTS Credential Provider" link on the "Connect to feed" dialog in VSTS. Unzip it and copy it into a directory named "CredentialProviders" under C:\Users\&lt;username&gt;\AppData\Local\NuGet. NuGet will look in all subdirectories of this directory, so you can use a different subdirectory name if you prefer. If you cannot easily log in as the build user onto the build machine, you might also want to copy the credential provider into another user's AppData directory for testing. The credentials will be cached per user, so you will need to run NuGet as the build user at some point.

You MUST disable Internet Explorer Enhanced Security (also known as "no functionality" mode) if using the VSTS credential provider. NuGet won't use another default browser possibly due to the file type it is opening.

Log into the build machine somewhere that you can test package restore. I ran a Bamboo plan that just did a source code checkout so I had a starting point. Open a command prompt (doesn't need to be admin) and navigate to the directory that has the solution (unless you want to put in full paths) and run "C:\Program Files (x86)\NuGet\NuGet.exe" restore SolutionName.sln. It should realise that it needs to authenticate using the MS credential provider and open that Visual Studio login window for you to enter your VSTS credentials. Once that is done, they will be downloaded and stored (I believe in the TPM of the machine) so it won't need to be done again.

Remember that it will need to work as the build user since Bamboo will not be running the same thing as you (hopefully)!

Create the Task

Create a second task of type "Script", set the interpreter to cmd.exe and the script location to inline, and paste something like this into "Script Body": "C:\Program Files (x86)\NuGet\nuget.exe" restore "${}\MySolution.sln"

Building .NET

You can theoretically build .NET projects with just MSBuild, which is usually installed into the .NET framework directories in C:\Windows\Microsoft.NET, but in practice there are various dependencies that can make this tricky to set up. You will also need to install the Visual Studio Build Tools to ensure you get a batch file that is used by Bamboo.

By FAR the easiest way to do this is to install Visual Studio Community edition for free and ensure you also install Visual C++, which installs the required batch file.

Add a new task of type Visual Studio (or you can try to get an MSBuild task working). Add a new executable if required, and note that it wants the path to devenv.exe only, not including the file name! Options needs something, e.g. /build Debug, and the platform is likely to be amd64.

This should now be enough to manually run the plan and see any errors that are produced by the build in the various logs. A common problem is that the build server rarely has as many libraries installed as the development machine so it will potentially cause dependency errors but that is for another day!

Thursday, 20 October 2016

Azure App Services not ready for the big time yet?

I really like the idea of App Services for web apps. It basically looks like cloud services but with shared tenancy so lower cost per "server" than a cloud service.

The other really big winner for me is the much quicker scalability when scaling out. Cloud services understandably take a long time to scale out and cannot handle short traffic bursts for that reason.

App Services on the other hand can scale out in seconds. I think they do this by sharing the code and just increasing the number of virtual servers.

It looks great on paper and deployment is as easy as Right-Click->Publish from visual studio, using web deploy and therefore taking seconds. Cloud service deployment is still cryingly slow!

So that's the good news, what's the bad news?

It doesn't seem to work properly!

We set up an app service plan (Standard) and deployed a web application that calls a web service, which accesses a database. We also use Azure Redis Cache.

We fired a load test at the system to see what sort of load we could handle and set up auto-scale at 80% CPU, allowing from 2 (minimum) to 6 (maximum) instances.

So what happens when we run the test? Loads of Redis timeout errors. Hmm, the request rate is only about 100-150 per second and the response times are less than 1 second, so why is the 1.5-second Redis timeout occurring?

We had a few pains, where re-deployments didn't seem to fix the problem, and even though we could see the remote files easily using the Server Explorer in Visual Studio, the presence of the machineKey element didn't seem to remove the errors related to "could not decrypt anti-forgery token". Hmmm.

To be fair, it did expose a problem related to the default number of worker threads. In .NET, the thread pool minimum is set to the number of cores (1 in our case), and any new threads beyond that can take a minimum of 500 ms each to create, easily tipping the request over the 1.5 seconds. So I increased the number of threads available, but we were still getting timeouts even when the number of threads was way below the threshold for delays.
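Raising the worker thread floor can be done at application start. A sketch, assuming the value of 100 is something you arrive at by testing rather than a recommendation:

```csharp
using System.Threading;

// Read the current minimums so we only change the worker count,
// not the IO completion port count.
int workerThreads, completionPortThreads;
ThreadPool.GetMinThreads(out workerThreads, out completionPortThreads);

// Below this floor, the pool creates threads immediately instead of
// throttling thread creation (the ~500 ms delay mentioned above).
ThreadPool.SetMinThreads(100, completionPortThreads);
```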

Maybe we were pegging CPU or memory causing the delays? No idea. The shared tenancy of App Service hosting does not currently allow the real time metrics to display CPU or memory for the individual virtual server so that's no help. It also wasn't scaling up so I guessed it wasn't a problem with resources and why should it be at such a low user load?

I finally got fed up and created a cloud service project, set the instances to the same size as the app service instances, using the SAME redis connection and ran the same test. Only a single error and this pointed to an issue with a non-thread safe method we were using (which is now fixed). No redis timeouts and a CPU and memory barely breaking a sweat.

We ended up in a weird situation where we couldn't seem to get much more than about 100 requests per second from our AWS load test but that is another problem.

Why does a cloud service running the same project work where the same thing on App Services doesn't? I couldn't quite work out what was wrong. A problem with Kudu deployment? Some kind of low-level bug in App Services? A setting in my project that was not playing nice with App Services?

No idea but there is NO way I am going to use App Services in production for another few months until I can properly understand what is wrong with it.

These PaaS tools are great but when they go wrong, there is almost no way to debug them usefully. The problem might have been me (most likely!) but how can I work that out?

Cannot answer incoming calls on Android

Weird problem: Phone rings and/or vibrates but the screen doesn't change. All you can see is the home screen. How can you answer the call? You can't.

Well you can but you need to know what you did!

In my case, I am on the Three mobile network, which is, basically, terrible around where I live and work. They have an app that uses the Internet for calls and texts if you are not in mobile range. If you are, all kinds of pain occurs because texts sometimes go to the app and not the phone etc.

Anyway, like many annoying apps, this one decides that it can fill up your notification area with a permanent message saying that Wi-Fi calling is ready. Amazing!

Anyway, I had disabled notifications for the Three app to get rid of that annoying message but the thing I hadn't realised is that it will also disable the incoming call notification. For some reason, the phone still rings and vibrates but you can't do anything.

I had to re-enable notifications and then, of course, I can answer the phone! I also have to live with the annoying permanent notification.

The same has also occurred with the Google Dialer according to forums, but I haven't seen it myself.

Friday, 14 October 2016

Azure App Services (Websites) - Cannot find the X.509 certificate using the following search criteria

Deployed a brand new WCF .Net web service to Azure App Services and when trying to load up the svc file in the browser, got the message above.

Here's what you need to know:

  1. You need to upload the required certificates to the Azure portal
  2. You need to make sure that you are referencing the certs from the CurrentUser store, NOT the LocalMachine store. App Services uses shared hardware so you can only access the CurrentUser location.
  3. You need to add an App Setting to tell the service which certs to make available to the web service. You only need to do this for certs referenced in the app, not for certs you are only using for an SSL endpoint. The key is WEBSITE_LOAD_CERTIFICATES and the value is 1 or more comma-delimited thumbprints for the certs you want to load.
  4. You CANNOT add this in the web.config file alone; despite Azure merging portal and web.config values, it MUST be added in the portal on the Application Settings tab.
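Point 2 above is the one that catches people out, because code that worked on a VM usually opens StoreLocation.LocalMachine. A sketch of the CurrentUser version - the thumbprint is a placeholder:

```csharp
using System.Security.Cryptography.X509Certificates;

var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
try
{
    store.Open(OpenFlags.ReadOnly);
    // validOnly: false, otherwise self-signed certs are filtered out
    var matches = store.Certificates.Find(
        X509FindType.FindByThumbprint,
        "0123456789ABCDEF0123456789ABCDEF01234567",  // placeholder thumbprint
        false);
    // matches.Count == 0 here usually means WEBSITE_LOAD_CERTIFICATES
    // is missing or does not include this thumbprint
}
finally
{
    store.Close();
}
```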

Thursday, 13 October 2016

Encrypting/Decrypting Web.config sections with PKCS12

What is it?

I'm not sure why I haven't blogged this before, since it can be quite confusing but this is a cool way of protecting web.config secrets from being seen casually by Developers.

The basic idea is that you use an SSL certificate as the key and then the sections in web.config are not readable. They will decrypt automatically, however, when used by IIS if the certificate is installed locally.

It is worth mentioning that if a developer can run the code locally, they will still be able to find out the secrets, it is more of a protection from people who can see code in a repository but not run it.

You can use any SSL/PKCS12 certificate for encryption/decryption but I recommend using a self-signed certificate that should be in-date (since some services e.g. Azure App Services will not allow upload of expired certificates). If you use a self-signed certificate, you get to control its lifetime more easily if you are worried about it being hacked. If you share one of your public certs, you will probably have more work to do when the cert expires.

In theory, you can encrypt any web.config section, but in practice certain sections cannot be encrypted because of when they are accessed by the system. I can't find a useful resource listing which ones, so you will have to try it and see. It will be obvious if a section fails to decrypt.

So how do you do it?

First, you should reference or install Pkcs12ProtectedConfigurationProvider, which does exist as a NuGet package.

You then use aspnet_regiis.exe, which lives in the .Net framework directory(ies) in C:\Windows\Microsoft.Net. If you use the Visual Studio Developer Command Prompt, then this program should be in the path already.

Make sure you run the command prompt as Administrator, otherwise you will not be able to run gacutil, and aspnet_regiis will return the error "Keyset does not exist" because it will not have permission to access the local certificate store.

So how does it know which certificate to use to encrypt? You need to create a section in your web config that references the thumbprint of the certificate you want to use. It looks like this:

        <configProtectedData>
          <providers>
            <add name="CustomProvider"
                 thumbprint="d776576a90b5c345f8a9d94e732c86c1076eff78"
                 type="Pkcs12ProtectedConfigurationProvider.Pkcs12ProtectedConfigurationProvider, PKCS12ProtectedConfigurationProvider, Version=, Culture=neutral, PublicKeyToken=34da007ac91f901d" />
          </providers>
        </configProtectedData>

aspnet_regiis performs various other functions as well but the two options we are interested in are:

-pe to encrypt a section
-pd to decrypt it.

If you use the f option as well (giving -pef and -pdf), you can specify a physical location for a web.config, which I have found is easier than using the VirtualPath (and in some cases you won't have a virtual path).

The -f should point to the directory containing the web.config, not the web.config file itself. You will end up with a command like this:

aspnet_regiis.exe -pef "connectionStrings" "C:\Users\luke\Documents\Visual Studio 2015\Projects\SolutionFolder" -prov "CustomProvider"

Note that -prov specifies the name of the provider entry (CustomProvider above) that you want to use to encrypt the section.
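The matching decrypt command swaps -pef for -pdf and, as far as I can tell, does not need -prov, because the encrypted section records which provider was used:

```
aspnet_regiis.exe -pdf "connectionStrings" "C:\Users\luke\Documents\Visual Studio 2015\Projects\SolutionFolder"
```

This is handy for verifying the round trip before committing the encrypted file.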

If you are encrypting a custom section, you might have some fun and games trying to get aspnet_regiis to find and load the dll that defines the config section. You can use gacutil to add these dlls to the global assembly cache and then use the correct version and public key (which you can find in a tool like ILSpy, or by browsing to the GAC at C:\Windows\assembly\GAC_MSIL and looking at the folder name for your dll, e.g. v4.0_1.0.0.0__34da007ac91f901d) so that aspnet_regiis knows to look for the dll in the GAC.

If your section is not at the top level, you need to specify it like system.web\mysection. Also, some sections cannot be encrypted, which I think is because they are needed before it would be possible to run the decryption in IIS.

Sometimes, it pretends to work but doesn't. Run it again, it sometimes catches up! Google any errors, they are fairly easy to work out.

You will then end up with a section of the same name but containing a reference to the custom provider you are using and an XML-encrypted packet.
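Roughly, the encrypted result looks like this (cipher text abbreviated):

```xml
<connectionStrings configProtectionProvider="CustomProvider">
  <EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element"
                 xmlns="http://www.w3.org/2001/04/xmlenc#">
    <CipherData>
      <CipherValue>...base64 cipher text...</CipherValue>
    </CipherData>
  </EncryptedData>
</connectionStrings>
```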

Remember to upload the certificate you are using to any systems that will need to read the section otherwise you will quickly get an error!

And as always, start with a really simple example and make sure it works before going crazy and trying to encrypt everything.

Wednesday, 28 September 2016

.Net MVC5 - RedirectToAction not Redirecting in the browser

I had a funny one on a test site I was using. Very simple MVC app, a page that requires a login and then a logout link that does the following:

public ActionResult Logout()
{
    AuthenticationManager.SignOut(); // the IAuthenticationManager call discussed below
    return RedirectToAction("Index");
}

However, after calling this code, a 302 is returned to the browser for the redirect but it doesn't include the Location HTTP header pointing to the new location, so the browser does not redirect. Instead, the user has to click the link on the displayed page.

I thought this was very strange and dug into the code that produces the Redirect inside System.Web.HttpResponse and found that the Location was always added using the same url that was correctly inserted into the response body. Clearly something was removing the header after it was added and before the redirect took place.

After I commented out SignOut(), the redirect worked correctly so somehow, for some reason, IAuthenticationManager.SignOut() is removing the Location header, but AFTER it has been added in the next line.

I haven't found a reason on stack overflow but I might dig a bit deeper into SignOut to find out what is happening.

Monday, 26 September 2016

Help, my servo isn't working!

I bought some remote-control servos from China recently - got stitched up a bit with VAT getting added later and DHL charging me £11 for the processing, adding a total of 50% to my order so they ended up working out about £2.50 each. I think you can probably get them cheaper if you buy them in smaller numbers so they fly in under the radar as it were (legally!).

Anyway, I knew how they were supposed to work, connected one to my PIC prototype board and... it didn't work! Nothing.

While Googling the potential problem, I have learned the following, some of which is generally good advice for most things!

1) Start with the simplest PIC program you can and if possible, add some checks to make sure it is working as expected. For instance, I made a program that set two servo positions and changed every 2 seconds. I then connected the timer to another LED so I could make sure that the program was running at the correct rate and probably working.
2) Remember that the servo will draw a significant amount of power when it is moving (as little as 45mA but potentially much more). Most controller outputs cannot source that level of current, so don't do what I did and connect the power wire to a switched output of the PIC (I did it so the plug would fit somewhere useful on my board, where I couldn't get 5V, GND and a digital IO next to each other!)
3) If it moves but doesn't go where you want it, it is possible the pulse width to position is not what you think. See if you can find the spec for your servo - it is not always that easy. You could also experiment with different values.
4) Check (carefully) that the arm is not stuck at one of the extremes. Although the gearing will make the arm hard to move, even under no load, if you are careful, you should be able to move it away from either end.
5) It obviously could be broken so having a second spare one is obviously helpful.
6) Be careful with oscillator settings. If you have accidentally got the timings wrong, the pulses could be 1000s of times too long or short. Again, it is worth connecting a spare LED to an output to ensure the cycle times are as expected.

Monday, 19 September 2016

Git stops working with VSTS - Authentication failed

Annoying bug with no obvious cause!

I have been using Git (via Tortoise Git) to push code to VSTS. I've been using it all day and then suddenly, "Git did not exit cleanly- Authentication failed".

After re-entering my credentials about 5 times, each time taking even more care that I was typing them in correctly, it wasn't making any difference.

I fixed it in the end by changing my alternate credentials in VSTS to the same password as before but it looks like the act of saving it had refreshed the privileges and then it worked.

You can get to that by clicking your name in the top-right of VSTS and choosing Security->Alternate Credentials.

Migrating Azure Cloud Services to App Services is not a 5 minute job!

So we use Cloud Services; they were the most PaaS you could get 4 years ago when PixelPin started, but they are a bit old hat. They are thinly veiled VMs, managed by Microsoft, but they don't permit auto-scale to happen very quickly and, because they don't make good use of shared hardware, they are not the most cost effective.

Enter App Services, which support APIs, mobile apps and web apps for less money. More importantly, they provide a very quick and therefore effective way to scale from 2 up to 20 instances in only a few seconds, using hot standby services.

Perfect, so why not simply copy and paste your app from Cloud Services to App Services? Here's why:

  1. You have to learn the concepts and setup of App Services on Azure. You need to learn about plans - shared and dedicated and how you can then have multiple apps hosted on the same plan (which is cool, because you aren't paying more to host more apps, just adding more load to what you are already paying for).
  2. You can theoretically deploy directly from Visual Studio using "Publish", but this always crashes on my machine, which leaves publishing to Git and getting App Services to pull in the changes (works OK, but you have to use Git, which I hate, as well as spend time working out the deployment workflow). You can also use VSTS, but only if you use Git, or otherwise you have to push the deployment from VSTS onto App Services - again, more learning required.
  3. You cannot run your startup script like you used to before because you are basically sharing an environment docker style. For instance, we setup cipher suites by running our startup script on Cloud Services but we are paying for the whole system so that's OK. You can't do that on App Services, you can only control your web.config settings (and I can't guarantee that some of these might not work as expected/be overridden/locked by the parent config). You must CAREFULLY consider what your startup script does and whether it can be done another way or you can accept the risk of not doing it.
  4. Things like certificates and app settings are done differently. App settings work in a similar way, except you no longer access them with the Azure Runtime functions but instead, simply access them as App Settings. It also means if you want to deploy them, they are specified as App Settings in config instead of settings in your cscfg file. There might be security considerations because of this and there are various ways to workaround that.
  5. Certificates are more of a pain. You no longer specify them in your cscfg files and you don't have direct access in the same way. What you have to do is add an App Setting to expose the certificate to your app, which then appears in the CurrentUser store (since you are sharing hardware!). It looks like you can expose either one or all certificates using the App Setting.
  6. All of the stuff you used to set in your cloud project (VM sizes, numbers etc.) is done in the portal or using an Azure Resource Manager (ARM) Template. This decouples the development process from needing Visual Studio and a specific project type etc. and opens up the use of things like Kudu or docker to deploy code directly to Azure.
  7. If you used cloud services for session then you will also need to change that to use Redis, which to be honest is probably cheaper and easier than the Azure Cloud Service provider anyway.
  8. No doubt I will find some more things that are different but be warned! Spend some time on it, attempt to move a really cut down version of the app so you can see the sorts of issues you are having and then spend some time planning how you will need to change things to move forwards.
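As a sketch of point 4 above, settings access changes from the Azure runtime API to plain app settings - "MySetting" is a placeholder name:

```csharp
using System.Configuration;

// Cloud Services style (cscfg-backed):
// string value = RoleEnvironment.GetConfigurationSettingValue("MySetting");

// App Services style: portal App Settings override <appSettings> in web.config
string value = ConfigurationManager.AppSettings["MySetting"];
```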
That said, I am excited about moving to App Services and think we will eventually get a much more responsive and cheaper system as a result.

Friday, 16 September 2016

Git, maybe one day you will be my friend

I have had a terrible week at work. I have basically done nothing for 4 days due to a whole raft of issues related to mostly Microsoft Technologies. Partly due to bugs and partly due to lack of understanding (or both). What people don't always realise is that bugs are annoying to experts but fatal to newbies because when you don't know what something is supposed to do, you cannot spot the bug.

My problems this week have included:

  1. Attempting to get a Load Balancer and Scale Set configured on Azure except the interface was allowing invalid configurations and there is no obvious feedback on healthcheck probes. Needed MS support.
  2. Trying to deploy directly from Visual Studio to Azure App Services except VS publish crashes due to a weird dll reference problem (its looking for an old method in a new DLL). If MS hadn't CHANGED the interface to the dll, there wouldn't be a problem. Spent a DAY removing old visual studio installs and installing VS 2015 again and still no dice. Broken out of the box.
  3. Trying to deploy to Azure instead via VSTS. This doesn't support TFVC, although the docs don't say this, you just wonder why your project is not listed in the portal. Roll on a few days and have finally worked out that you have to use Git on VSTS.
  4. Create a Git repo, check in all my code and get the system to auto deploy to Azure. Deployment error - nice and esoteric, basically means it can't access my private repos but this is not obvious. I can't even remember how I realised this.
  5. Tried to work out how to get Nuget working properly in my project. The project seemed to be OK according to the 10 different versions of documentation, decided I needed feed credentials that were not encrypted to use on App Services but the VSTS portal has changed and the Get Credentials button isn't there any more, despite the documentation being based on the old dialog which worked. So no credentials and no articles describing what must be App Services 101 - using custom NuGet packages. Decided I would have to copy the NuGet dlls into the project manually and reference them locally to the project for now.
  6. Finally get my app up on App Services and a big fat error. No site, just a random 500 error. Tried to remotely debug as per the helpful guide. Doesn't work, can't connect.
  7. DNS outage on Azure for about 4 hours affecting our production system!
  8. Decide the only way this is going to work is to start from a vanilla app in Visual Studio and add the stuff in bit-by-bit until something breaks. Of course, I can't use the Publish mechanism, that's still broken even on a new project.
  9. The vanilla app loads OK, I set up SSL and a public DNS so this should be good right? Add some files, try and do as few as possible but of course the references quickly spider and I end up with half the app in place. Deploy it - compile error. This time, the generously supplied gitignore file for Visual Studio excludes pfx files, which seems reasonable except my system uses them so I had to include them. The app still displays (it doesn't do anything but hey).
  10. Copy the rest of the app over and commit: error 500, 502, all kinds of stuff like that. Custom errors off (doesn't work), turn on all the supposedly wonderful diagnostics - none of them say anything useful. Try to upload a debug version etc.; eventually find that there is a duplicate web.config entry. Why does this work locally? So it will work now? Nope, just more and more server errors and no way to debug them. The remote debugger still doesn't work and all attempts to enable logging and everything else are futile. I find a nice article about debugging errors using the App Services SCM endpoint but this also doesn't match reality. The "capture now" button simply doesn't exist.
  11. I decide there is only one thing to do. Revert back to the last good deployment that was still looking like the default MVC app (without CSS!) and add in items in much smaller stages, if possible. It's clearly a configuration type issue to make it 500 instead of getting an Application error so this should be doable.
And then there was GIT. I'd forgotten how much I hated it and locking horns at 8pm on Friday evening is not comforting to the soul! Subversion and even TFS are easy enough to revert but, remember, I HAD to use Git to deploy to Azure App Services so what do I do?

I'll check the log first (Tortoise Git) to see where to revert to. Choose the right point and "Reset master to this". OK, soft, mixed or hard? No idea, but let's copy the entire folder first - I have been burned before. OK, mixed looks good.

Hmm, it says it's done something but all my files are still here - what a pile of crap. Why would I ever want to reset something and leave the files here? OK, let's try again. Reset hard right? OK, some things seem to have been reverted but I still have a load of files present that were not present when this was committed!! Why are they here? Copy the entire directory again, I know I am going to be loud swearing soon!
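For what it's worth, the behaviour that bit me here is by design. A sketch of the three reset modes in a throwaway repo (all file names and commit messages are invented for the demo):

```shell
# Demonstrate what git reset does and doesn't touch, in a throwaway repo.
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name demo

echo one > tracked.txt && git add tracked.txt && git commit -qm "first"
echo two > second.txt  && git add second.txt  && git commit -qm "second"
echo scratch > untracked.txt              # never committed

# --soft  : move the branch pointer only (index and working tree untouched)
# --mixed : also reset the index, but leave working-tree files alone (the default)
# --hard  : reset index AND working tree to the target commit
git reset -q --hard HEAD~1                # back to "first"; second.txt disappears

# ...but even --hard never deletes UNTRACKED files, which is why "extra" files
# remain after a reset. Removing those is a separate (destructive) step:
git clean -fd
```

So a mixed reset leaving every file in place, and a hard reset leaving the untracked ones, is Git working as documented - infuriating as that is when you expect a Subversion-style revert.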

Try and open the project into Visual Studio to see what it is looking like and it won't open - some IIS config problem. Why? It was working fine when I committed this code. Fixed that and it looks OK but still has all these uncontrolled files. Why are they there? Git - I hate you.

Manually delete the files I don't need, commit the ones that should belong to this build. It looks like a pretty early cut but I can handle that, I have all the files elsewhere (please God, let me have the files elsewhere and Git not trying to also reset the other files since I copied a git folder!).

Now. Git push. FAIL. Oh for f**k's sake. Pushed files were rejected because your branch is based on an earlier part of the remote branch. Oh really? Well I did f**king reset it, so perhaps you could simply ask me, like a sane product, "do you really want to overwrite everything with this?" Nope. It talks about another load of crappy commands I might have to use to get this to work.
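In case it helps anyone in the same hole: the rejection is Git refusing a non-fast-forward push after the local reset, and the fix it hints at is a force push. A sketch against a local stand-in remote (all names are invented for the demo, and this assumes the remote branch is yours to overwrite):

```shell
# Reproduce the rejected push after a reset, then overwrite the remote deliberately.
cd "$(mktemp -d)"
git init -q --bare remote.git             # stands in for the hosted repo
git clone -q remote.git work 2>/dev/null && cd work
git config user.email demo@example.com && git config user.name demo

echo a > a.txt && git add a.txt && git commit -qm "first"  && git push -q origin HEAD
echo b > b.txt && git add b.txt && git commit -qm "second" && git push -q origin HEAD

git reset -q --hard HEAD~1                # rewind local history by one commit

# A plain push is now rejected as a non-fast-forward:
git push origin HEAD 2>/dev/null || echo "push rejected"

# --force-with-lease overwrites the remote branch, but refuses if someone
# else has pushed in the meantime (safer than a bare --force):
git push -q --force-with-lease origin HEAD
```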

Git I hate you. After looking at the (futile) attempts at newbie Git tutorials, which still leave me utterly confused, even the ones with nice pictures, do you know what I am going to do (and this is VERY damning of the cult of Git and its contempt for people like me who don't get it): I am going to start again. That's right. I am going to delete the entire Git repo, create a new one and copy files in manually. Won't that take a while? I've already lost about 4 hours and have moved on precisely zero knowledge, so why not burn 4 hours doing something manually that VCSs are supposed to automate.

Git, maybe one day you will be my friend - but I doubt it.

Source Control from Visual Studio is still terrible!

Sourcesafe used to just about work as long as you used it in the Sourcesafe interface. Trying to use Sourcesafe from inside Visual Studio was fraught and it still is. TFS, VSTS, TFVC, whatever you want to call them are basically still Sourcesafe, you can see by the files that are dumped in controlled directories and the weird entries that are added to project and solution files.

As with many decisions in life, to know we are doing it properly, we need to ask the most pure questions: What is source control supposed to do?

Answer: It is supposed to provide protection for code

It also provides a record of changes but fundamentally it is about protecting code from accidental changes - which can be reversed if needed and also from intentional changes that can be traced easily by check in comments. If your Source Control does not ALWAYS protect code, it is not good.

Let's look back to good old Microsoft who have somehow managed to mangle Sourcesafe into several different forms and somehow keep its geriatric body alive. How does Source Control work in Visual Studio?

1) Connecting a project to source control. This is a complete ball ache in Visual Studio. In VS2015, there is a Team Explorer, a Source Control Explorer, Context Menus, Menus under File->Source Control and workspace mappings. Do I create the project first in VSTS and then connect or can I simply "create new project" from my uncontrolled source? This is so hard to follow that even though I have done it about 20 times on different projects, I still can't do it right first time. Also, why is it that sometimes you "Add solution to source control" and not all of the projects show that little green plus symbol? Answer - no idea.
2) Disconnecting a project from source control. The short answer is that I think this is largely impossible UNLESS you disconnect, move the project to another folder and rename the destination project in Source Control. You can select File->Source Control->Advanced! and then choose to unbind the project. Why this is advanced, I do not know. Unbind it and guess what? There are still source control binding files on the disk, it is just pretending to be unbound. Try and bind it to another project in Source Control? Good luck, you will get that infamous message, "The item blah is already under source control at the selected location". What? I just disconnected it! No you didn't, we were just pretending.
3) Deleting a project on the server. Be VERY careful when you do this. Do you know what happened last time I did that? Visual Studio decided that since I deleted the server project, I obviously didn't want any of my local files. Yes, it deleted everything on my local disk, which is the OPPOSITE of what anyone ever wants. At minimum, you would ask, by default it would keep whatever was there. Another gotcha is basically Visual Studio being completely unable to handle the deleted project when there are local changes - how does it know there are local changes if the project was deleted in SCC? It basically forces you to undo the pending changes, even though they are irrelevant and if you don't? You get errors every time you try and edit workspace mappings. What happens if you undo your changes? You guessed it, you lose all the changes which you cannot undo because the project is now deleted!
4) Moving a project from one SCC binding to another? Forget it. The best way is to create a new project and manually copy stuff over. Honestly, that is Source Control 101 but for some reason, MS still haven't got the basics right.

Other options? You can use the command line or some tool outside of VS to manage source control, it's annoying but it seems to work much better than VS does and makes more sense because everything is in the same place.

The truth is, if you do this all the time, you will HOPEFULLY not delete anything but for the rest of us, be prepared to lose your job...oh yeah and ALWAYS make backup copies of folders before doing anything other than checking-in changes to files.

Thursday, 15 September 2016

Load Testing is not as easy as it sounds

So we have an authentication-as-a-service product called PixelPin. We want to load test it so we can a) tell our customers that it can definitely handle their X,000 customers logging in at the same time, and b) benchmark performance to find any bottlenecks and see the effect of architecture changes.

So far, so good.

When it comes to planning the load test, it is actually really hard. Setting up the tests is easy enough using Visual Studio Team Services and Visual Studio Enterprise Edition but actually working out what to test and how is hard.

Why? Because there are so many variables. Various cloud services, bandwidth, CPU, memory and even dependent services that are used for testing like a dummy OAuth login site. If the test site cannot handle enough user load then the load test won't be stressing the tested system anywhere near enough.

In fact, our first step is to try and work out which variables can be ignored. If a certain system is hardly struggling to manage the load, let's not spend any time with that and instead work on the really slow parts.

Another question that can be hard is how to simulate a realistic load test. There's no point saying we can handle 5000 requests per second hitting the home page when that is not what people will be doing. We need to think about the mix of journeys, the effect that caching and browser mix might have on things and a whole host of other reality checks.

You also get another problem if you are not careful about setting up your test environment. We created 5000 test users and put them into a spreadsheet that the tests use to fire at the test site but if you aren't careful (and we weren't), then the same user account is hit twice in VERY quick succession and causes an error that would never happen in real life (unless a real user and an attacker happened to do that). You start having to really understand what the load test is doing under the covers, how is it selecting the users from the spreadsheet? Can I have 2 test agents running at the same time or will that cause parallel errors that shouldn't happen?
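One low-tech mitigation for the duplicate-user problem is simply making sure the data source has enough distinct accounts for the worst-case concurrency. A sketch of generating such a list (the column names and password scheme are invented - match whatever your web test actually binds to):

```shell
# Generate a CSV of 5000 distinct test accounts so no two concurrent virtual
# users ever share credentials (the column names here are just an example).
{
  echo "username,password"
  for i in $(seq 1 5000); do
    echo "loadtest_user_${i},Dem0Pass_${i}"
  done
} > testusers.csv

wc -l < testusers.csv        # 5001: one header line plus 5000 users
```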

It is going to take weeks to sort through the random errors (errors which are usually caused by an underlying problem, the error itself doesn't show that) and then to decide how to measure what we are trying to measure in a suitable way.

Load Testing is not as easy as it sounds!

Wednesday, 14 September 2016

Making Azure Virtual Machine Scale Sets work!


So we are trying to create a test environment for load testing our application. Unfortunately, we use older resources (Cloud Services and VMs) and these were all originally deployed manually so I have been trying to create a close to exact replica of our production system bit-by-bit.

Resource Manager is the new thing in Azure and it is basically a concept and set of APIs that allow you to group resources together and to script the deployment, which will be very useful going forwards for our multiple deployments.

One of the things I had to replicate was a VM set, which we were using to run a PHP app. The reason we used a VM was that the cloud service tools for PHP had various shortcomings and were not suitable so I went old-school. Although there are now App Services Web Apps for PHP, we want to load test the original architecture so that we can then benchmark that against changes, including moving to App Services.

Virtual Machine Scale Sets

To use the new portal, you have to use Resource Manager and resources that are compatible, and the closest to classic VMs are called Virtual Machine Scale Sets, which are basically the same thing but with more configuration.

I assumed this would be a relatively quick and easy setup like it was on classic but the problem with all configuration-heavy features is that if it doesn't work - which it didn't - it is hard to know which bit is wrong.

I got it to work eventually so I thought I would describe the process.

Create the Scale Set

This is like normal. Go to your subscription in the new portal, click Add and choose Virtual Machine Scale Set. It will list several, I am using the Microsoft one. Select it and press Create.

Fill in the details. The password is what RDP is set to use. It is useful to choose a new Resource Group since there will be about 10 resources created so it is easier to group them into 1.

The next page gives you options for the IP address (you can use the default), a domain name, which must be globally unique and options like whether you want to auto-scale. Auto-scale is only useful if you are using an image that will automatically install and run when the instances are increased. In my case, I am installing the code manually so auto-scale isn't much use!

I think that the only way to correctly template the installation is to use a Resource Manager template which is not supported in the portal - so the portal just gives you the vanilla setup.

Setup your instances

By default, the load balancer is set up with two inbound NAT rules for port-forwarding RDP to your specific instances. You should probably change the port numbers from the defaults (they are in the 50000 range). With these port numbers you can simply RDP to your public IP address (shown in the overview for the load balancer and several other places) on the specific port. Obviously each instance should be set up the same, since they are there to balance the load.

This can obviously take time, especially with things like PHP to install, but as with all installations, test it as you go along to narrow down what might be wrong. Once you've finished, access the web apps locally to ensure they work. Another cool trick is to point one instance at the other in the hosts file and attempt to access one instance from the other, just to make sure firewall ports are open. This takes place on the virtual network so won't give any public access yet.

Setup a probe

In order to load balance instances, the load balancer needs a way to probe the health of each instance. Your choices are HTTP or TCP, and these probes are set up in the Probes menu for the load balancer.

When you are first testing it, it might be tempting to use HTTP simply to your web site but be warned that there are some cases where this won't work and you will not know anything except it doesn't work! I'm not sure that https is supported (although you can type it into the Path box) and it doesn't appear to be happy with host headers.

Unfortunately, there is no obvious way to test the probes in real time; you can only enable diagnostics and hope for some useful information, but it is very user-unfriendly.

You can always start by trying an HTTP probe, if it doesn't work try the following trick.

Download psping from Sysinternals onto each instance and run it as administrator from the command line with the arguments psping -4 -f -s and it will run a TCP server that simply returns a response to a request on that port. You can then set up a TCP probe pointing at the port you specified. This worked in my case, showing me that the HTTP part was the problem.

If you must use HTTP, which is the most useful in most cases, you might need to create a default web site with no host header and put your probe endpoint in there. That worked for me (I used iisstart.htm, which was already in wwwroot). Note that ideally the probe would do more than just see a web page load - it would also carry out other basic tests to make sure the instance is healthy. Not too heavy, though, as it will be called every few seconds.
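For reference, the psping invocation above needs an address and port on the end. A sketch, run from an elevated Windows command prompt (the IP and port below are placeholders for your instance's private address and a free port):

```
:: On the instance: -f opens a Windows Firewall rule for the port,
:: -s runs a minimal TCP server on the given address:port (placeholder values)
psping -4 -f -s

:: From the other instance (or anywhere on the virtual network), test it:
psping -4
```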

Setup a Load Balancing Rule

The Load Balancing Rule is for endpoints that are load balanced across the instances (e.g. HTTP and HTTPS). To create the rule, you must have a probe but otherwise it is straightforward: a name, plus a front and back port (these are likely to be well-known ports for public sites, like 80 and 443). By default, there will only be one pool and the probe you just created to choose from, and unless you need sticky sessions (best to design your system NOT to need them), disable session persistence, which will round-robin requests to each server.
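For anyone scripting this with an ARM template instead of the portal, the probe and rule land inside the load balancer resource roughly like this. This is a sketch from memory of the Microsoft.Network/loadBalancers schema - the names and variable references are invented, so check them against the current template reference:

```json
"probes": [{
  "name": "httpProbe",
  "properties": {
    "protocol": "Http",
    "port": 80,
    "requestPath": "/iisstart.htm",
    "intervalInSeconds": 15,
    "numberOfProbes": 2
  }
}],
"loadBalancingRules": [{
  "name": "httpRule",
  "properties": {
    "frontendIPConfiguration": { "id": "[variables('frontendIpId')]" },
    "backendAddressPool":      { "id": "[variables('backendPoolId')]" },
    "probe":                   { "id": "[variables('probeId')]" },
    "protocol": "Tcp",
    "frontendPort": 80,
    "backendPort": 80,
    "loadDistribution": "Default"
  }
}]
```

"Default" load distribution is the plain hash-based spread, i.e. no session persistence.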

Other Things to Check

As with all of these things, don't forget things like public DNS settings to point to the correct location, SSL certificates and consideration of how you will update/fix/maintain your instances. It is possible for these instances to die so ideally you should have an automated deployment although for now, I am not going to bother spending time on that!

Friday, 9 September 2016

What is it like owning a Toyota Auris Hybrid?

Hybrid Electric Cars

There is lots of excitement about both electric cars and hybrids. The hybrid has a petrol engine and an electric motor, which basically extends the range of the car, since only a few electric cars have a useful range unless used as a second car for local journeys only.

Within the world of hybrids, there are two main types: the cheaper standard hybrid like mine, and the more expensive plug-in type, which is only available on certain models. The basic difference is that to make use of cheaper and cleaner plug-in (mains) energy, the car needs more batteries to store it, which makes the car more expensive. I'm sure there are also design issues with fitting these extra batteries in. The non-plug-in models like mine simply reclaim energy from braking or active deceleration (the same effect as taking your foot off the accelerator in a petrol car), which is a much smaller amount and therefore needs only a handful of batteries.

So what it is like? Is it amazing? Is the fuel economy fantastic?

The simple answer is no. It's OK but it's not great and here's why.

What's bad?

Firstly, the MPG figures (or KPL) are vastly overstated as they are for many cars. Quite simply, and to the shame of the industry, the MPG factory tests are completely useless for any real-world usage and in the case of the hybrid, obviously the tests are not done over a long enough period to remove the effect of the stored energy in the battery, which is equivalent to extra petrol that is not being counted by the test. The book figures? 75MPG urban and extra-urban. The real figures? No more than 55MPG, and this drops in the winter to about 45 and I think this is partly due to some configuration that would be nice to fix!

The active deceleration - where the electric motor slows the car - seems way too weak compared to a petrol car. I don't know why they can't increase this. The problem is that it requires more braking when slowing down, which leads to the danger of bringing in the brake pads when the motor could be capturing the spare energy instead.

You will find it harder to get it maintained or fixed. Even my local Kwik Fit wouldn't look at the exhaust because, "the only stuff we can do on Hybrids is tyres and wipers". This basically means main dealer for everything, which is potentially expensive (I do have a warranty which might or might not cover some of it) but also, their wait time is currently 3 weeks for a booking.

Another setting which seems strange is the setting which keeps the battery topped up really high (about 80/90% on the indicator), which means that sometimes, when you are for instance coasting down a long hill, the battery quickly charges right up to the maximum and then what happens? Any more energy that could be captured by the car is lost. This happens reasonably easily, especially in the hilly areas near where I live. It would make more sense to either keep the battery more empty or to do something cleverer, which perhaps switches modes on a longer journey to assume that long hills are more likely than needing the battery as full as possible for the next day. It could even have a button to stop charging it before a hill.

What's good?

Well a petrol car with 55MPG is not bad, although other cars can match this and some diesels can beat it soundly (but who wants a diesel?). This is to be expected since most of the time, the car is running on its petrol engine - the batteries could only power the car for about a mile. There is no reason why the petrol engine would be amazing, so it's a normal engine with the benefit of some reclaimed energy.

In the UK, the tax is free for electrics and hybrids, so that saves about £160 per year.

My insurance is cheap but I don't believe that this is because of the hybrid car. Most quotes were in the same ballpark as my last car, which was a Toyota Avensis petrol.

It drives smoothly, being an automatic, and the power mode is very pokey when needing some quick energy, although it can take a few seconds to kick in so don't rely on it for emergencies!

You can also switch to EV mode to run pure electric for short distances - useful if you know you are not travelling far, and if the battery gets too low or you put your foot down, the engine will cut in and it will switch back to eco mode automatically.

It's a generally nice car - in as much as it has a nice trim level, auto wipers and lights, nice seats etc. but none of that is because it is a hybrid.

Friday, 2 September 2016

Do people still not understand "safe by default"?

So I've been on another Microsoft journey today. Journeys that should be SOOOOO much quicker than they are, due to ill-thought-out, unlayered documentation, which means reading pages of stuff looking for things that might not be there because it's unclear what is going on.

I'm talking about multi-factor authentication (MFA), something that is a no-brainer for any admin accounts for cloud accounts.

Do this in AWS and it's REALLY easy to find and set up, and it works with Google Authenticator. On Azure? Nowhere near as useful.

Why? Because Microsoft do 3 sorts of MFA that are different but really similar: Office 365, Azure Administrators and a general system for Azure AD users.

The first two are free - included in your subscription and the 3rd isn't. I wanted the second one only, to secure access to the portal.

Eventually, after seeing a number of MS people confusingly pointing to links for the 3rd option (which is much more involved), I found a useful clue: if you use a Windows Live ID to access Azure (like I do) then you CAN'T set up the MFA that is part of Azure - wherever that is supposed to be done - for reasons that are not really very clear. You have to set it up instead under your Microsoft account's own security settings.

So off to there, download the Microsoft Authenticator app and enter my password on that (which actually enables 2-step implicitly, which I found slightly confusing). If you then go to "enable 2-step verification" on the web site, it doesn't seem to do anything, just takes you through some screens asking if you do certain things (which I don't).

Anyway, the main moan is about the screen that comes up when you try and login with your account:

You see the line that very helpfully says, "I sign in frequently on this device. Don't ask me to approve requests here". It is ALWAYS ticked by default. If you untick it, next time you login, it will be ticked.

Of course, the first time it happened, I wasn't reading this screen, I was clicking the approve button on the device, only to look up and see this checkbox just as the screen automatically navigated to the page I was going to. Too late to untick it.

Of course, I could not find any way to "un-remember" this selection anywhere; the only option seemed to be disabling 2-step and then enabling it again. The next time I logged in, I unticked the box - since I always want to approve sign-ins for this email address - but the time after that, it showed up ticked again.

Microsoft, and other companies, why do you KEEP getting these basics so wrong? Don't tick the box by default. If I want to tick it, I can and I'll not see the message again. If I forget to tick it, the next time it comes up, I can easily tick it and we're all good - this is just wrong - as if no-one thinks about these things in an objective way.

Which means that the award for the most recent example I have found of "unsafe by default" is Microsoft.

Sure, the attack vector is that someone would have to login from my device but for those of us in the corporate world, it is highly likely that an attacker would use my own device against my account and therefore this one poor decision has completely undermined 2-step security.

Wednesday, 31 August 2016

Microsoft seriously need to sort out their online Identity model

Account Nightmares with Microsoft

I have spent hours over the past 2 weeks with various account problems with Microsoft. Partner Network, MSDN, Azure, Bizspark and others. They seriously need to sort this out because it is becoming a joke so old, it will become a cliché!

What are they doing wrong? Like many people, their identity model is FAR too simplistic and therefore it causes great confusion.

Example 1 - Bizspark, confusing personal identity with account identity

Bizspark is a program by Microsoft that supplies software, hosting and support (thanks MS, it has helped us enormously) but to login to your Bizspark account, you only use a Microsoft account and nothing else. Why is this bad? Because I have about 6 Microsoft accounts and I don't know which one I have used for Bizspark. I thought I did but to make it worse, it was originally setup (for reasons I can't remember) linked to an old email account I no longer have access to, and of course, I need to use the email for verification so I had to contact MS to change the email address over. To make matters worse, they pulled in the "name" from the billing contact, who is not me and which makes it even more fun when dealing with support.

Here's the thing. I should have an unambiguous identifier for the Bizspark account. Perhaps a federated company identifier but something that says, "I know which door I need to open, now let me try my various keys to get in". This is of course not helped by the security community who insist on trying to hide "helpful" information like which account I should be using to login or which account simply has a wrong password entered but is otherwise correct. Today, it is so easy to add an extra step to reset via an email or send a token to a phone with 2-factor or something that basically means they can be helpful without risking security.

Example 2 - MSDN, having multiple identities that don't securely match

So I also signed up for the MS Action pack subscription, a useful way for small companies like ours to get some office and development software and it gives you 3 MSDN subscriptions. Great. How do we use them? We create or login to MSDN using a Microsoft account (any account) and then add the subscription details, which might well live under a different email address. Basically, if I have an MSDN subscription number (or technical ID from Action Pack), I can link the MSDN account.

If the MSDN subscription is given to me under the Action pack subscription, then MSDN should require me to login. It should know that the account I use for MSDN is the same person as the MSDN subscription because I should be able to link the accounts. This way, MSDN should be able to pull the data through automatically without the additional time delay problems that we experience when trying to link these purely in MSDN.

Example 3 - Transferring ownership.

One of the problems with the email identity model is that if I want to give someone else ownership of one of the products we use online, if I'm lucky I can add the other user and even possibly remove mine but most of the time, I would have to give that person my password so they can login using my account.

Again, this breaks the idea that I should have a company account, which I can give people access to, either globally (which is useful in small companies) or on a more specific basis, perhaps access to Azure and Bizspark but not the Partner Network etc. This way, I can plan for the day when I leave the company or am fired and someone else will already be able to take over the accounts and remove my permissions.

Example 4 - Multiple account hell

There is another major problem when your federation doesn't link accounts to a single entity. Multiple account hell. I log into Azure with my support account and then open up the Partner Network in another tab. Of course, it automatically logs me in with the support account and there is nothing useful to see. Why? Because I need to be logged in with my luke account. What do I have to do? Log out as support and log in as luke. Great, now back to Azure and - oh yeah, can't see anything any more. Account switching is poor with MS but it might not be required if the sites were identifying ME and not just a specific email address.

Example 5 - Unregistered access to program sites

What happens on most web sites when you click to go into a restricted area without having registered? You will be told to sign in or sign up. Authentication 101. What happens on Bizspark or the Partner Network? You can log in with any MS account and not immediately realise that you are not registered, because you get let in with some "helpful" context-sensitive menus that have hidden some options because you are not registered to the program, even though you have authenticated to the site. This is terrible when attempting to remember creds for a specific site, because it always lets you in when you would rather it said, "this account does not have a registration for Bizspark". Even if it then let you click through to start the program registration, at least you would see the problem straight away.


So you see, there are various ways in which MS's problems show themselves. Unlike Google (which is pretty bad itself) and Apple, MS have many sites and sub-sites, which just makes it worse. Apple and Google seem to basically have developer and normal sites with everything behind these umbrellas.

The terrible mess of online accounts with Microsoft also seems to get worse and worse. You are constantly redirected to various systems and there are inconsistencies that simply shouldn't exist. Log out from one site and it takes you to a page where you are still logged in(!), but log out from somewhere else and you end up somewhere completely different (for goodness knows what reason?).

There needs to be global coordination of these sites, both in design and consistency of UX, as well as a better thought-out strategy for taking them forwards, like the UK Government did when they consolidated lots of their portals and won awards for simple and easy-to-use sites.

Why Microsoft don't have a global VP for web access I don't know - it appears each area runs their own sites to their own designs, with varying levels of usability and design quality. Some sites, like the Partner Network, are terribly commercial-looking and hard to navigate due to excessive functionality and menus; others, like Bizspark, look nice but still lack coherent navigation, so some items are not immediately visible and others that you are told to click on are not there because you are logged into the wrong account!

Thursday, 18 August 2016

Error: Unable to Start Debugging on the Web Server

Got this on an app that has previously worked and I definitely knew everything was installed on my local PC.

The problem was that the Web tab of Properties had a URL set in Project Url that was pointing to a live site (which I usually have redirected to my local machine in the hosts file). Pressing debug, the project was trying to connect to that live site, which obviously doesn't have the remote debugger installed, and was failing.

Simples! Just added the entry back into the hosts file so VS can connect to the debugger on the local machine and we're all good again.
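For reference, the hosts entry is just a one-liner like the following (the hostname is a made-up example; use the live site's hostname), which points the live URL back at the local machine so Visual Studio attaches to the local debugger:

```
# C:\Windows\System32\drivers\etc\hosts
127.0.0.1    www.my-live-site.example
```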

Thursday, 21 July 2016

PHP loads extension from CGI but NOT from CLI

So this is a weird one, and one I had to work around instead of fixing.

I've had PHP5.5 working fine but Magento requires 5.6 so I downloaded it along with all the extensions I have been using like pdo_sqlsrv, xcache and memcache.

I then loaded a local PHP web app to confirm it was setup correctly and that worked fine.

I then ran composer from the command line to download Magento and it errors with "PHP Startup: Unable to load dynamic library php_pdo_sqlsrv_56_nts.dll - %1 is not a valid Win32 application".

There are all kinds of pitfalls when using PHP on windows and I tried/checked this:

  1. CLI pointing to the correct php.ini? Yes
  2. PHP build is 32 bit? Yes
  3. extension is 32 bit? Yes
  4. Extension path correct? Yes
  5. Anything useful in event viewer? No, only the same as the CLI prints (no error for web app)
  6. Everything else pointing to the right place? Yes
The biggest annoyance is that it was definitely working OK in the web app, whereas it would just error for the CLI (bearing in mind the local web app was not using the sqlsrv driver). It also only seemed to be that one specific extension it was complaining about (and I have had problems in an earlier life with the same thing).

The only thing I could do was to comment out the offending line in php.ini before running composer update and then put it back in when I next need it. Maybe it's something as simple as the web version not failing on warnings and the CLI one doing the opposite?
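In php.ini terms, the workaround is just toggling the extension line (the filename is the one from the error above; adjust the path to your setup):

```ini
; Temporarily disabled so the CLI (composer) doesn't choke on it;
; re-enable when the web app needs the SQL Server PDO driver again.
;extension=php_pdo_sqlsrv_56_nts.dll
```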

Wednesday, 13 July 2016

Convert.ToDateTime() ALSO converts the timezone!

Few things in programming sound as simple as dates but are so complicated. Partly I think previous companies have not appreciated the complexity and tried to simplify things, and even Javascript still lacks basic helpers like a ToString(format) method.

Anyway, I am trying to display a date to the user in the browser-local timezone and format. This sounds easy enough, I render a UTC datetime to the page and then use the following javascript to render it into the locale sensitive format:

        new Date($('[name="LastLogin"]').val()).toLocaleDateString(),
        new Date($('[name="LastLogin"]').val()).toLocaleTimeString().slice(0, -3)

if (!String.format) {
    String.format = function (format) {
        var args = Array.prototype.slice.call(arguments, 1);
        return format.replace(/{(\d+)}/g, function (match, number) {
            return typeof args[number] != 'undefined'
                ? args[number]
                : match;
        });
    };
}

The first bit is actually formatting the string to say "Last login at ..." followed by the locale-formatted date and time.

This bit worked but the time was wrong and I suspected there was some timezone problem. I thought I was storing all dates in the database as UTC, since they are cloud servers and always stay in UTC without daylight savings. A quick check and this was accurate so what was going on?

Well, I wanted to make sure that an ISO8601 string was being passed to javascript, so I had the following:

model.LastLogin = LastLogin.ToString("yyyy-MM-ddTHH:mm:ssZ");

But it was wrong. So after a simple console app and some copying and pasting of the code from my main app, the following turned out to be happening:

(Web service from database)
return Convert.ToDateTime(dataReader["lastlogin"]).ToString("u")

This was correct and returned the string as UTC datetime as per the database.

(Main app)
LastLogin = Convert.ToDateTime(stringFromWebService);

This was NOT correct. What was happening was that the UTC datetime was being converted into the local timezone of my development machine when this code was called. Why? I think this behaviour is simply wrong. It assumes that I want the time to be local, whereas the SAFE default would be to leave it in UTC and allow me to convert it later, or provide other overloads of the function that are more specific about whether the conversion should take place.

The fix? I have to convert it BACK again by doing something like this:

LastLogin = Convert.ToDateTime(stringFromWebService).ToUniversalTime();

Which is also poorly worded since it is really converting it to the Universal Time ZONE.
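For what it's worth, one way to avoid the convert-then-convert-back dance entirely (a sketch, not what the original code used) is to parse with explicit DateTimeStyles so the framework never applies the local timezone in the first place:

```csharp
using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        var s = "2016-07-13T09:30:00Z";

        // AssumeUniversal + AdjustToUniversal: treat the input as UTC and
        // keep the result in UTC instead of converting it to local time.
        var utc = DateTime.Parse(
            s,
            CultureInfo.InvariantCulture,
            DateTimeStyles.AdjustToUniversal | DateTimeStyles.AssumeUniversal);

        Console.WriteLine(utc.Kind);          // Utc
        Console.WriteLine(utc.ToString("u")); // 2016-07-13 09:30:00Z
    }
}
```

Unlike Convert.ToDateTime, this makes the intent explicit, and the resulting DateTime has Kind set to Utc so later formatting doesn't silently shift it again.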

I don't know whether we will ever fully understand how all this date stuff works, but it would be easier if the framework classes treated everything like UTC and only performed locale-dependent conversions in specific functions like DateTime.ToTimeZone() or DateTime.ToLocalTimezone() or whatever.

I'm trying to do the right thing with UTC but .net is not helping :-(

Friday, 8 July 2016

Calls to Azure DocumentDB hang

TL;DR It was my async deadlock and was nothing to do with DocumentDB!

I have been trying to use DocumentDB for session state. Why? Although it is not the recommended "redis" way, it is resilient, supposedly fast, and will save us a packet on the current 3 cache worker roles we have to support.

So I wanted to see whether it was going to work, and for reasons I could not understand, the Unit Tests in my NoSQL library all ran OK but doing the SAME thing in my web app, although ReadDocumentAsync worked fine, CreateDocumentAsync would hang. It added the document OK but just hung, and fortunately, when I searched for something more generic than "DocumentDB is hanging", I chanced upon a few articles that talk about the dreaded async deadlock in .Net.

Hopefully, you all know what async is. Some magic glue that Microsoft invented to increase performance in .Net applications, particularly when they are waiting for external things to happen. It is not really the same thing as multi-threading and also not quite what people think of when we talk about asynchronous coding. It can therefore be quite confusing and this confusion is where the deadlock comes from.

What does async do? When you call an async method, it returns a Task, which is a kind of "handle" on the operation and which allows you to carry on and do something else if you want, OR you might just want to wait there for it to finish. Let us first look at the correct way to await an async method:

var returnValue = await MyMethodAsync();

NOW, this is the thing. Under the covers, once the async method is awaited, the calling thread is released back to the thread pool so it can be used for something else. Once the task has finished, the framework will, by default, wait for the captured context - in ASP.NET and UI apps, effectively the SAME thread - to become available, at which point control continues on it. Not surprising, since you have a stack that needs to be restored when you resume.

You could alternatively get a task and await later on:

var task = MyMethodAsync();
var returnValue = await task;

So what's the problem? To use the "await" keyword, you must be in an async method and if you are not careful, you end up with all kinds of async methods starting at the lowest level and working their way all the way to the top. This can be confusing and seems to be over-the-top, especially when ReSharper keeps telling you that you have an async method that might not be using await!

So there are a couple of other ways to call async methods. One of them is Result (a property which returns the value) and the other, Wait(), is used when there is no return value. These allow you to call an async method without using await and implicitly wait on the call. These are synchronous calls; in other words, the thread blocks - it does NOT get released like it would if you called await.

Can you see the problem yet? IF you call the async method on the SAME thread that you then call Result or Wait() on, you will probably deadlock, because once the async task has finished, it will wait to re-acquire the previous thread but it can't, because that thread is blocked on the call to Result/Wait().
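The deadlock reduces to a tiny sketch (the names are mine, not from the real code; it assumes a context-capturing environment such as classic ASP.NET MVC or a UI thread):

```csharp
using System.Threading.Tasks;
using System.Web.Mvc;

public class HomeController : Controller
{
    public string Index()
    {
        // .Result blocks the ASP.NET request thread. When Task.Delay
        // completes, the continuation inside GetDataAsync wants that SAME
        // thread back (the captured context), but it is busy blocking
        // right here => deadlock.
        return GetDataAsync().Result;
    }

    private async Task<string> GetDataAsync()
    {
        await Task.Delay(100); // resumes on the captured context by default
        return "done";
    }
}
```

The same two methods in a console app or test runner often complete fine, because there is no single-threaded SynchronizationContext to fight over.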

So why was this working in my Unit Tests? Well, note my use of the word "probably". Obviously, if you do not wait for the task to complete, you will not deadlock (though that is likely to be wrong, since you will usually need to handle errors/return values from async methods). Also, if the async call is fast enough, it might finish before the task is even returned, in which case it will have already re-acquired the original thread, and when that thread then calls Wait() or Result, the data is already there.

So you can use async tasks and await to avoid this problem but there is also another clever trick, certainly in newer versions of the .net framework and that is to invoke your async task on another thread, not on the one you are calling your method with. It is as simple as this:

var task = Task.Run(() => myMethodAsync());

which invokes the method on a thread from the thread pool. When your calling thread then waits and blocks using Wait() or Result, the async task will NOT need to wait for your thread; it will re-acquire the one from the threadpool, finish, and signal your waiting thread to allow it to continue!
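As an aside (this isn't the fix described above, so treat it as an extra option), library code can also sidestep the deadlock itself by opting out of context capture with ConfigureAwait(false):

```csharp
using System.Threading.Tasks;

public class DataService
{
    public async Task<string> GetDataAsync()
    {
        // Don't capture the caller's SynchronizationContext; the
        // continuation may then run on any thread-pool thread, so a
        // caller blocked in .Result/.Wait() no longer starves it of
        // the one thread it needs.
        await Task.Delay(100).ConfigureAwait(false);
        return "done";
    }
}
```

This is why it is commonly recommended that general-purpose libraries use ConfigureAwait(false) on their internal awaits.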

It is important to carefully consider use of async. For instance, you want lots of spare threads when you are waiting for long-running tasks, but at the same time, letting more people in when you are already choked in the backend might not improve performance but make everything much slower. You should consider the mix of functionality your site is providing (if everyone does the same thing, you might not save anything), and you should generally not make calls to the database async unless it is replicated, since the database is usually the slowest point of the system and letting the web servers deluge it with even more threads will not make things faster - quite the opposite!

I'm still learning though.

Friday, 1 July 2016

IdentityServer doesn't work with OAuth2 and it's probably OpenID's fault!

I am really excited to be working with Dominick Baier of IdentityServer next week to help PixelPin implement OpenID Connect using the IdentityServer library. We already have a (homemade) OAuth2 solution but OpenID Connect has some more complexity which I didn't particularly want to hack around with myself, especially since the number of test cases goes up exponentially with every option, grant type, response mode etc.

In fact, I have already got the OpenID Connect login working and have been testing it with a WordPress OAuth2 plugin - and realised it doesn't work, even though OpenID Connect is supposed to be a superset of OAuth2 and IdentityServer is supposed to work with OAuth2.

My first problem was that "scope" is required by IdentityServer and is required by OpenID Connect but it is NOT required by OAuth2 and not all plugins will pass scope, since OAuth2 allows an IdP to have a default scope if not passed. Whether or not that was a good idea, it is what RFC6749 allows and it should be permitted. I had a discussion on Github with Brock Allen, the other main author of IdentityServer and he didn't seem to understand what I was saying and why it was broken. Since OAuth2 allows a default scope, I offered to create a Pull Request that allows the user to specify a default and only to error if the default is not set AND scope is not passed but Brock didn't agree. I have already modified my copy of IdentityServer to not require scope.

The second problem is that AuthorizeRequestValidator has some sloppy (in my opinion) logic when checking the request. It basically says, "if there are any openid scopes but one of the scopes is not openid, throw an error". The problem is that this assumes that openid scopes are unique to openid and that is not true. Many OAuth2 providers will use a scope called email in a non-openid request and this does not indicate that it is an incorrect openid request, just that it is not openid. I am going to raise this on Github and see what happens!

Anyway, it raises another issue in what happens when you create something like OpenID Connect to "sit on top of" OAuth2 and where the specs conflict. OpenID Connect basically says that the userdata endpoint should use Bearer Token authentication whereas OAuth2 is not specific. What does that mean for implementers? IdentityServer is clearly very much OpenID oriented and requires the userdata request to be Bearer Token whereas my OAuth2 plugin simply provides it as a POST body param, which is also allowed in certain specific scenarios but which is not guaranteed to be supported by OpenID resource servers.

For me, the key should be the openid scope. If it is NOT present then the system should behave in the same way as any normal OAuth2 provider - something that IdentityServer does NOT do. If openid is a requested scope, then the system can go to town on validation and error messages since it is now in the secure world of open id connect.

OpenID Connect is gaining ground but there are still many OAuth2 clients that don't support it and possibly won't work with IdentityServer, at least without some changes to them, which might or might not be possible.

Anyway, hopefully, Dominick will put me straight! Until then, you have been warned.

Cannot add IIS App Pool identity to Windows permissions

IIS7 and above have a really useful and secure feature where Application Pools each have their own user account which makes it harder for a hacked web application to access directories that it shouldn't.

Of course, this means that some developers get lazy and change it to use Network Service or suchlike because "it just works", but you really shouldn't do this. You should assign permissions correctly so it works. Microsoft makes this easy, if slightly unintuitive, when adding permissions to folders.

Usually, the app pool gets no permissions by default - not even read. You will almost certainly have to start by editing the folder permissions of the web app on disk and adding the app pool user. You do this by searching for the application pool user by its name but prefixing "IIS AppPool", as in IIS AppPool\appPoolName.

When you then click Check Names, if it WORKS, the name changes to an underlined version.

BUT. If you try and just type Default and press Check Names, it DOESN'T work, you have to type IIS AppPool\ at the start.

BUT sometimes it still doesn't work, even if you know the user exists. There are two reasons this will happen.

1) You have the wrong "location" set. You need to use the local machine location, not a domain since the users do not exist on the domain. If you press Locations and it wants a domain login, just press escape and eventually the list will come up where you can select the local machine without needing to enter domain creds.
2) You do not have Ownership of the folder you are trying to add users to. For some reason, this does not prevent you opening the dialog but when you search for the IIS names, they simply aren't found. Go back up a level and make sure you are the owner of the folder. I had an example where although I was an Administrator, the folder was owned by "Administrators" and not by "Luke". Weird but true.
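If the GUI keeps fighting you, the same grant can be scripted with icacls (the path and pool name here are placeholders), which also resolves the IIS AppPool\ prefix:

```
icacls "C:\inetpub\wwwroot\MyApp" /grant "IIS AppPool\MyAppPool:(OI)(CI)RX"
```

(OI)(CI) makes the grant inherit to child files and folders, and RX is read plus execute, which is usually enough for a web app to serve its content.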

Wednesday, 29 June 2016

TelemetryConfiguration with different Instrumentation Keys

The last few days have involved learning lots of things, breaking lots of things, fixing them and then breaking some other things. I have lived the Facebook "Fail fast" motto!

Anyway, the latest fun and games is using Application Insights, a pretty swish Azure service that collects not only loads of server and performance data from your web apps but also allows you to collect custom metrics. I have a shared library to track these metrics but realised I needed to be able to track events from multiple apps into the same App Insights bucket, while keeping separate buckets for server data - in other words, have more than one instrumentation key.

Initially, I simply passed the instrumentation key I wanted to my library like this:

client = new TelemetryClient(new TelemetryConfiguration() { InstrumentationKey = myParam });

But when called, this produced the rather obtuse error "Telemetry channel should be configured for telemetry configuration before tracking telemetry"

This is another MS classic that makes sense after you've worked out what went wrong rather than showing you what you did.

The Telemetry channel is set up in config so why wasn't it working? Duh, because I had just passed new TelemetryConfiguration() rather than getting TelemetryConfiguration.Active which is what the default constructor uses. I didn't want to use Active however because changing that could affect the config being used by the app and I wanted it just to be used locally. Fortunately, TelemetryConfiguration provides two helper methods: CreateDefault(), which creates a copy of the active configuration and allows you to change what you need to (which I used) and also CreateFromConfiguration(), which does the same thing from a configuration source if you prefer to do something more complex.

Once I created the default one and simply changed the key, hey presto, it all worked.
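Putting that together, the working version looks something like this (a sketch; myParam stands in for whatever key you pass around):

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

public static class Telemetry
{
    public static TelemetryClient CreateClient(string myParam)
    {
        // CreateDefault() copies the active configuration - including
        // the telemetry channel set up in config - so only the key
        // needs changing, and .Active itself is left untouched.
        var config = TelemetryConfiguration.CreateDefault();
        config.InstrumentationKey = myParam; // per-app events bucket

        return new TelemetryClient(config);
    }
}
```

Because .Active is never modified, the server/performance data from the app keeps going to its own bucket while custom events from the shared library go to the one identified by myParam.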

Easy when you know how!

Tuesday, 28 June 2016

Contained users in Azure BACPAC files not working

So we have an Azure SQL database that previously we have downloaded onto a test server every night both as a backup and also to act as a source of metrics and a test copy of the database. Apart from a few hiccups over the years (mostly when I updated the Azure instance and didn't check the backup process), it works OK.

The backup is a .net exe which runs in Task scheduler and calls an old SOAP web service to create the bacpac and store it in Blob storage. It is then downloaded and I call SqlPackage.exe to restore the bacpac onto a local instance of SQL Server.

Then the other day, we created a Contained user - a user that is linked to the database rather than a server login and which allows geo replication to work without screwing up permissions during the copying (since the users are really a security ID not just the display name). This all worked online but again, it broke the backup process or specifically the database restore process.
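For reference, a contained user is created in the database itself rather than from a server login (the name and password here are placeholders):

```sql
-- Run in the user database, not master; no CREATE LOGIN is needed.
CREATE USER app_user WITH PASSWORD = 'placeholder-Password1!';

-- Then grant whatever the app needs, e.g.:
ALTER ROLE db_datareader ADD MEMBER app_user;
```

Because the user travels with the database, geo-replicated copies keep working without re-mapping logins on each server.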

Long story short, the web service we were using to create the bacpac does not add the "Containment" setting in the output. This might be because Azure seems to cheat this setting and automatically have it enabled regardless of what settings your database has. I tried to use their new service API and I just couldn't get it to work. I then looked at the documentation for their .Net libraries which was woefully lacking in guidance. I decided for now to manually modify the bacpac to at least get a working database.

The bacpac is just a zip file with another name, so renaming it from .bacpac to .zip, you can then extract the files to make them easier to work with. I then edited model.xml and added the containment setting element at the top where the other settings are. This actually breaks the package, because there is a checksum in origin.xml.

Fortunately, there is a program called dacchksum, hosted on GitHub, which calculates the new hash for you. The idea is to put all the stuff back into the bacpac again and run the program against it, which will tell you what checksum is currently in the file and what it should be. You then copy the value it should be, replace the current value in origin.xml and package it up again.

IMPORTANT. I had a problem with the app telling me the package was invalid, and the GitHub source doesn't build, so I couldn't debug it. The problem is caused by the way that Windows sends files to a compressed folder. If you right-click the entire folder, you get a zip file of that name, but the zip has a root folder inside it above the child files. In other words, if you want to make the package, you must select the files that should exist at the root level of the package (i.e. model.xml, origin.xml etc.), send those to the compressed folder and then rename it to what you want.

I did this and then imported the bacpac manually, which looked like it worked OK, but it did not correctly import the password for an Azure contained user. I checked the model and it does have some encrypted password information but, for reasons I have yet to work out, this doesn't work - possibly because it uses a server key for security reasons, I don't know.

So be careful anyway, these portable users are not obviously portable - although the geo-replication seems to work OK!

Monday, 27 June 2016

The depressing feeling of the software slippery slope....

Today has been a day when I have implemented precisely nothing and have spent the day fault-finding and fixing two problems. I have only managed one of them but it is all too common in software, especially when you manage several different parts of the same system.

1) The web app errors
I came in to an email saying our site was not working. Although it seemed to be working for me, the error emails (very useful, highly recommended!) told a different story. A part of our web service is supposed to speak to a read-only replica of our database as part of our move towards higher bandwidth on our site; obviously, only read-only procs are supposed to use this connection, otherwise you get errors (which we had). I had seen this before and it seemed weird, since the code "clearly" showed that these methods were not calling the read-only replica, so should not have produced the errors. After probably too long, I noticed that my lazy loading was "very lazy" and I had reused the same variable between the writable and the read-only databases, which meant whichever one was called first became the connection and every subsequent database call would try and use it. Bah. But not too bad; at least it was an easy fix.
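The "very lazy" lazy loading boiled down to something like this (a reconstructed sketch with made-up names, not the actual code):

```csharp
using System.Data.SqlClient;

class DataAccess
{
    private readonly string _writableCs;
    private readonly string _readOnlyCs;
    private SqlConnection _connection; // BUG: one field shared by both paths

    private SqlConnection Writable =>
        _connection ?? (_connection = Open(_writableCs));

    private SqlConnection ReadOnlyReplica =>
        _connection ?? (_connection = Open(_readOnlyCs)); // same field!

    // Whichever property is hit first "wins"; every later call reuses
    // that connection, so write procs can end up on the read-only
    // replica. The fix is simply a separate backing field per connection.

    private static SqlConnection Open(string cs)
    {
        var connection = new SqlConnection(cs);
        connection.Open();
        return connection;
    }
}
```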

Except I had stepped onto the slippery slope

2) The updated project
Since this site was last deployed, we have been moving some of the projects from SubVersion to Visual Studio Team Services (online), which will hopefully track and streamline our deployment moving forwards. This meant that some of the projects that were referenced in my solution were now referenced as NuGet packages, which is much neater and pretty sweet. What I hadn't realised was quite how bad some of my old library references were. Not only was I referencing projects, but also dlls in other projects on disk and even some really weird references, like something that used to exist in my downloads folder. The new online projects were good because they were building online without any nasty physical references, but not all missing libraries will cause a build failure. My web service had already been updated and I rather optimistically deployed my fix with the project reference changes, and it failed online.

You would think the library loading errors would be more useful by now, but nope: "Cannot load Security.Cryptography or one of its dependencies, a strong named assembly is required". Well, my calling library is strong named - I checked. Security.Cryptography doesn't seem to have any non-system dependencies, so what on earth is going on? Trying to roll the project back was a pain, since I had deleted the original folders once the VSTS projects had built, because I hadn't changed the code and didn't need the old copies any more.

About two hours of fiddling, rebuilding, debugging, enabling remote desktops and faff later, I narrowed it down to one library and restored the old project bindings just to get it working. So now would be a good time to find out exactly what is wrong by using a local test server and fix all the dependencies.

3) The broken web publish in Visual Studio 2015
The NuGet package system requires Visual Studio 2015 for some reason, so I opened my project in VS2015 (it was 2013 originally), built it and clicked "Publish". Bang - VS crashed. Followed some online suggestions. Clean. Crash. Delete profiles and recreate. Crash. Save the profile before deployment, choose publish again, crash. Debug in another Visual Studio and there is some dirty error about a missing method in the publish mechanism. Like most Visual Studio searches, there are about a trillion different forum questions about everything and about 100 different answers which might or might not work, including loads of "this worked for me". Still nothing. In the end, I had to use VS2013 to publish it - didn't have any more will to try and get help.

The slope was still slippery though

4) The case of the missing database tables
I have a task that runs daily to back up our live database, download it to a test server and install the most recent one as a test copy of the database. It also runs some statistics processing - number of users etc. - but at some point recently it had obviously stopped working: the database was being created but none of the data was being loaded from the bacpac.

The error log was hopeless, just another generic SQL Server error - I have no idea why it couldn't have logged the actual (and very simple) SQL error that was occurring when SqlPackage was being run: "You can only create a user with password in a contained database".

Of course you can, and our database is contained, which is why it contains the contained user in the first place - something we learned the hard way that you need for geo-replication. Somehow the bacpac file did not contain the containment setting, which caused the failure. Head is hurting now.

5) How to export the bacpac with the correct option
A Google search revealed a few other people had the same issue, and a lovely comment from MS that since it only affects contained users, it only affects a few people and will be a low priority. Anyway, it was allegedly fixed according to the Connect issue although, very lazily, there were no details about when/how/where it was fixed. Clearly it didn't work using the horrible old web-service method of database backups, which was basically a Soap web service.

I clearly needed to use a newer method which was much more likely to work correctly. There are two ways that you can allegedly work with SQL databases from .Net (3 if you count the "classic" method): 1. the REST API and 2. the .net libraries. Both techniques are woefully under-documented. I spent about 2 hours trying to get the REST API to work, since the current app uses an API-style syntax. Nothing. I eventually managed to get a 401 with the detail that "the Authorization header is not present". Really interesting, since it is COMPLETELY absent from the API documentation - not just what credentials are supposed to be used, but even that it is required at all! OK, I'll assume the response is correct and I need an Authorization header.

Whoops, the slope is still descending

6) The HttpWebRequest class in .Net is often used for API connections. It appears pretty obvious to use (although several people online recommended some better third-party alternatives). So what do you do? You set request.Credentials. OK, I'll try that. Nope. PreAuthenticate = true? Nope, that doesn't pre-authenticate; it only caches creds for subsequent calls to the same url. Now, what normally happens is that if a protected resource is requested, the first response is 401 (Not Authorized), which MUST include a header that says which auth mechanisms are acceptable. The client can then choose one, send it back in the correct format, and the second request will return the correct response if authentication is successful. Except it wasn't working. Does HttpWebRequest do the 2 calls automatically under the covers, or do I need to catch the first 401 and call it again? I couldn't find an answer to that, which makes me think that it should. Except it wasn't - or at least I was still getting 401. Then I saw something weird about the web client not handling the 401 if it comes back in JSON format rather than plain HTML.
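One approach worth trying (a hedged sketch - the exact scheme and token the Azure endpoint expects are precisely what the docs fail to spell out, so apiUrl and accessToken are placeholders) is to set the Authorization header yourself on the first request instead of relying on the 401 challenge dance:

```csharp
using System.IO;
using System.Net;

public static class ApiClient
{
    public static string Get(string apiUrl, string accessToken)
    {
        var request = (HttpWebRequest)WebRequest.Create(apiUrl);

        // Preemptive auth: request.Credentials/PreAuthenticate wait for
        // a challenge, so just send the header on the first request.
        request.Headers["Authorization"] = "Bearer " + accessToken; // assumed scheme
        request.Accept = "application/json";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}
```

This bypasses the question of whether HttpWebRequest replays the request after a 401 at all, since the credentials arrive with the very first call.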

I'm tumbling down now, the top of the slope is far away

7) Fiddler problems
Fiddler is a great tool but it is getting harder to use now with SSL sites, because although Fiddler should be able to decrypt SSL, it didn't work with this API request - the API connection just treated the trust as broken, even though I had trusted Fiddler's root certificate. So I can't even see what is happening - whether there are two requests or whether something else weird is going on. Maybe the MS API is not returning the headers correctly so that the client can see them.

This was feeling like a real dead end so perhaps the .net libraries would be an easier way to perform the same job.

8) .net documentation is like a Brutalist housing estate
Most of you who program in .net have experienced the sinking feeling amongst the vast, vast reams of .net documentation. You have experienced the lack of examples, lack of complete documentation, lack of documentation of all possible return values and all errors - even basics like the specific format of a parameter, rather than something useless like "the azure server name".

The documentation for .Net SQL Azure access is appalling. It is a vast list of auto-generated documentation. No guidance on the front page, no list of basic tasks, just hundreds of classes. Basically unusable.

Then I tried finding the front page of the Azure documentation and again: nothing under "Develop", nothing under "Manage", and no real thought about moving away from these useless abstract verbs towards more concise groupings that would allow me to find what I want.

9) I am dead
My system doesn't work and I have exhausted all my avenues right now, so unless I get some help on Stack Overflow, I might have to download the backups and not process them until such a point as I can afford someone on my team to spend a week getting it to work again.