Wednesday, 6 December 2017

MongoDB Insert from Mongo Shell but not from C# API

Another annoying one that makes sense after you work it out!

I tried to run a console app which was inserting documents into a MongoDB database via a tunnel. The app had previously been used successfully on another DB that wasn't on a tunnel and didn't have any auth so I assumed it was related.

When I inserted, I got the following error:

Command insert failed: not authorized on resources to execute command

(resources is the name of the database I was using)

But when I ran an insert directly in Mongo Shell with the same creds, it was fine.

The problem? I was not specifying the database in the connection string, which means that even though the insert command was specifying the database, the connection wouldn't have authenticated the user against the resources database (I guess it would have tried to authenticate against admin or something).

Basically, instead of this (server address is a placeholder):

mongodb://MongoUser:thepassword@myserver:27017

it should have been this, with the authentication database added on the end:

mongodb://MongoUser:thepassword@myserver:27017/resources

You can also use the MongoUrlBuilder, which allows you to set all the options you might need and have it build the URL for you, e.g.

var builder = new MongoUrlBuilder();
builder.Server = new MongoServerAddress("");
builder.Password = "thepassword";
builder.Username = "MongoUser";
builder.DatabaseName = "resources";
var url = builder.ToMongoUrl();
var client = new MongoClient(url);

Monday, 4 December 2017

IdentityServer 3 certificate loading woes!

TL;DR: Set Load User Profile to True on the Application Pool (advanced settings)

We have a cloud system that uses IdentityServer and loads signing certificates from pfx files in the App_Data folder - works great.

We've deployed an on-premises system based on the same code and it doesn't work. The following errors are logged:

  • ERROR Startup - Signing certificate has no private key or the private key is not accessible. Make sure the account running your application has access to the private key
Which is definitely not true. Same certs as production, same passwords, files definitely there and can be imported into the Cert store to prove it.

I ignored this initially but when we then log in over OpenID Connect, it gets to token signing and bang:

  • System.InvalidOperationException: IDX10614: AsymmetricSecurityKey.GetSignatureFormater( '' ) threw an exception.
  • SignatureAlgorithm: '', check to make sure the SignatureAlgorithm is supported.
  • Exception:'System.Security.Cryptography.CryptographicException: Invalid provider type specified.
  • If you only need to verify signatures the parameter 'willBeUseForSigning' should be false if the private key is not be available
Which is actually a pile of errors that are not directly related to the problem. I wasted a ton of time creating a different format of signing key (lots of forums say that .Net doesn't work with the Enhanced Security Provider), created some new certs directly in the local Cert store, exported them as pfx into the same location and changed the web.config to use these new keys, but then the site wouldn't load at all! Disabling CustomErrors revealed this error:

  • System.Security.Cryptography.CryptographicException: The system cannot find the file specified.
Which is really weird since the new certs are in the same place. This is, however, a more useful error. I double-checked everything and it seemed strange until I realised that the error doesn't necessarily mean that the cert file itself is not found: when loading the pfx, .Net tries to use the user profile as the store for the private key (which seems odd when loading from a file, but anyway...). I then found a useful blog post explaining that the app pool doesn't load the user profile by default, which is why it doesn't work. The fix: IIS -> Application Pools -> Select your pool -> Advanced Settings -> Load User Profile -> True

I enabled this and the site worked! I then reverted to the original certs and they worked too so this was simply IdentityServer covering up a random error or not providing a helpful enough error message. This also explains why it works on Azure, where the App Services system must simply enable Load User Profile by default.
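If you cannot change the app pool setting, another approach that is often suggested (I have not tested it in this scenario, so treat it as an assumption) is to pass X509KeyStorageFlags.MachineKeySet when loading the pfx, so the private key is stored under the machine keys rather than in the user profile:

using System.Security.Cryptography.X509Certificates;

// certPath/certPassword are placeholders for your App_Data pfx and its password
var cert = new X509Certificate2(certPath, certPassword,
    X509KeyStorageFlags.MachineKeySet);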

Wednesday, 29 November 2017

NuGet package not visible in Manage NuGet Packages

I had published a package to a private NuGet repo and even though it was visible in one project, it was not visible in another. I looked on the server and it was definitely there.

This happened because the NuGet package was targeting .Net 4.6.1 but the project was only targeting 4.5.1 so it was simply not listed.

I updated the project to 4.6.1 and the package appeared!

Cannot install NuGet package even though it is listed

I had a package that I wanted to install listed in the NuGet packages list but after clicking install, it seemed to wait a bit and then do nothing.

HOWEVER, clicking back onto the output tab in Visual Studio (since it was unhelpfully changing to the Error List tab), I noticed that there was a constraint that didn't allow the installation. In my case, the Microsoft.Net.Compilers package was too old for the NuGet package to install.

Once I saw that, it was easy to install the NuGet package after updating the other one!

Monday, 20 November 2017

No, your web site being broken is not OK!

Those of us who write software know that we make mistakes. Developers don't consider specific scenarios, Testers miss certain tests and with the best will in the world, even the simplest applications can have bugs.


The main happy paths should work largely fine. If something goes seriously wrong for a large-scale public-facing web site, one of a number of things absolutely must happen:

  1. Ideally, the company will already know because they will get an error message emailed/displayed on a big screen/whatever
  2. If it is more subtle, maybe a user will contact the company and if this happens, it is embarrassing, so you act immediately, especially when the bug relates to a happy path that you ABSOLUTELY should have tested
  3. If it is something with a non-obvious workaround (or none at all) the Development Team make it number 1 priority and work flat out, 24/7 if required, until it is fixed. Why? Because it was a screw up that something so serious got out the door and it is a matter of quality and corporate pride that it gets fixed and quickly.
  4. The Test Manager gets a serious talking to along the lines of, if this happens again, you're fired.
  5. The Technical Team has a serious review about how this was allowed to happen and puts in place real measures to prevent a repeat the next time. This is fed back to the Management Team so that people can be accountable where they need to be - the Management Team need to ensure they are getting the whole truth, not just what someone might say to cover their own back.
What isn't OK is:

  1. Not putting any kind of banner on the web site to say that you are experiencing problems
  2. Not working with whoever found the problem to quickly work out exactly what has happened and why
  3. Telling users to delete their cookies to make it work
  4. Telling users that only some users are having the problem (as if that makes it better that it's broken for me)
  5. Not properly testing updates to consider not just the new site in a clean happy place, but what happens when a user with existing cookies and a number of different browsers comes back to a new site.
  6. Acting like a serious bug report from an end user is just business-as-usual rather than, "I'm really sorry, I'm just going to call the Software Manager to tell them" or even, "We know of a problem and the Team are still trying to find exactly what causes it".

What do we assume when we contract people to write software for us?

Many of you have been there. You need something done, you find someone, they give you a quote for the work, you agree, they do the work and send it to you. It's just not really good enough. It's probably not terrible (but it might be); it's probably OK, but there are enough things wrong with it that you can't just shake hands and pay them their money.

On the other hand, they have put in the hours, so it's probably not right just to not pay them anything but if you are going to have to mostly rewrite it - which defeats the point of getting the work done - what can you do short of legal action?

One of the most obvious things that we don't always do is write a good contract/requirements. Like a good Job Description, if you write it well, it should simply be a case of "is the person doing this?" if so, great, if not, they don't get paid.

Let us take an example. We want someone to write a plugin for Magento that handles an OAuth2 handshake for a web site. Sounds simple and it sounds like something that a Contractor would say, yeah, OK, I'll do that! But there are many things missing from such a simple requirement. One of those might be a simple question: "Have you ever written a Magento plugin before?" Why? Because although the PHP might be easy, the architecture and philosophy of Magento is not something you can simply learn from a book in a few days and I certainly don't want to pay a Contractor to watch videos and try and learn it. What if they produce something that seems to work but they did it really badly? You might not know until later.

Secondly, assuming that they have some experience that you are comfortable with, there is then a question of quality and speed. Most of the time, we are not contracting to crazy deadlines but there is still a large difference between fast and slow, especially when you are paying a day-rate of money and to make it worse, speed and quality are proportional so fast is not always good. How can you tell what their quality and speed are like before taking the plunge and committing to large amounts of money?

You can do two simple things up-front. Ask them to send you an example of some of their code from another project (or even something they have contributed to GitHub or wherever). Does it read well? Does it look like the work of a professional, or of someone who might have made something work by luck rather than skill? Secondly, set them a test - or rather the first part of the work. For example, in our example above, ask them for the basics of a plugin that doesn't do much (maybe does a browser redirect) using some hard-coded values and some basic UI - anything that should be quick and easy; the hard stuff is always in the details. You can pay them for that work if they have done OK up until now, then review what they've done and decide whether it is good and whether it matches the expectations they set. You should be honest with these people - if they don't convince you that they are producing good enough code in reasonable time, you are not going to continue to use them. Paying 2 days up-front for a project that might be 2 months long is good business! Better to lose a small amount early on and find someone else than to have to fix everything later.

Another important point is to document and communicate your expectations. If you need the code to look neat, it needs to be said. Not all Developers care and if they don't know you need that, you can't complain when they don't deliver it. What about Unit Tests? Design sign-off? Acceptable libraries? Browser testing? If there is something complex that your project involves, can you separate that into another package and get them to prove they can do it? If not, let them do the easy stuff and pay someone else to do the hard bit.

Hopefully, you eventually find some good Contractors who you trust, whose code you know is quality, who are responsive to the work you are giving them and who are not charging crazy amounts of money in the process. This will be ongoing if your business is growing but so many of us have to use Contractors that it is a skill that your company needs to have.

Wednesday, 15 November 2017

How to interview for a Senior Developer

This is based on my experiences in the UK, trying to recruit quality people into Senior positions. My conclusion: there is a very big difference between how people view themselves and how I view the role of Senior Developer. The average salary request for a Senior Dev in the UK (outside of London) is about £50K+ ($65K+), which for many companies is a lot of money to pay out in addition to the recruiter's fees, which can be anything up to about 25% of that yearly salary - and all of this before you even know whether the person is any good.

I am an employer and I get nervous when I interview someone. They are usually polite and of course they say they can do the job that you need them to do, but the simple truth is that the recruiter and the potential recruit have a virtually zero-risk opportunity to talk themselves up to convince me they can do the job. If I take the risk on them and they are not very good, I either have to let them go at 3 months, losing several thousand in recruiter's fees, or I waste a lot of time on a person who takes more than they bring to the company.

If you are that person who is applying for a role at my company, what am I going to ask you?

Firstly and hopefully this shouldn't be a shock, I am going to ask you about your experience in the areas of the job description. Example: This position requires a strong interest or experience in web application security. "Tell me about your experience in web application security", "I haven't done much". "Then why are you here wasting my time just on the hope that somehow you will convince me that I should still take you on?"

We even had a guy apply for a Development Manager position and all of my questions about, "What will you need to do as a Manager that you don't currently do as a Developer?" basically caused responses along the lines of "erm...", "hmmm.." as if the person hadn't even asked himself what a Development Manager actually does.

Secondly, I will ask what it is that makes you Senior (even if you are not a Senior, I would still ask you what separates you from the crowd) and I am fed up with the number of times that the answer is basically, "It means I have more experience", "What experience do you have that a Junior doesn't?". "Ermm...."

What do you know about Dependency Injection? IoC Containers? Test-Driven Development? Deployment? The cloud? Node.js? Angular? These are all things that I would expect a Senior Developer to understand. Not to be super-experienced - we don't all get to do these at work - but anyone with any decent interest in web development meets these subjects all across the web. Even if you don't know exactly what it is, do you not even know the basics of why an IoC container might be useful? If not, why not?

Thirdly, I will ask why you are special. So you know some stuff about .Net and you have been programming for 15 years? Top tip: I don't care about anything before the last 5 years because we don't use Web Forms, VBA or FoxPro here! We are a startup and it takes commitment, interest and passion. Don't have a blog? Why not? Your own web site? Involved in any clubs outside of work? Developer hangout events? Member of an Institution?

The simple reality is that for most of the people we have interviewed, the sum total of their CV is: I have been writing code for average companies for X years and there is nothing that demonstrates that I am anything other than a sheep who will do what I'm told but I never think of the bigger picture and my job is largely just to pay the bills.

Even though the market for Developers in the UK is massive and the supply is terrible, I will not take any person on who is asking for £50K just because they have 15 years in the business. If you want that Senior Developer job, you should love coding. You should love it so much that you can easily demonstrate how much you love it. How you owned stuff in your previous job, you were the go-to person, you built stuff, fixed stuff, upgraded it, especially when you weren't asked to do it!

.Net MVC Controller Action 404 only on one system!

We have an existing web app that is running successfully in production. We deployed it to a new machine and any action we try and access on a certain controller returns a 404 (an IIS one, not the custom 404 page that we use in the app). The Home controller and another controller seemed to work as expected.

Logging proved fruitless, perhaps because the underlying problem was masking the logs.

Long story short: the controller that wasn't working had a default constructor (that was creating some services). One of these services was failing (as it happens, due to the format of a connection string being incorrect), which caused the constructor to fail and, rather than a 500, IIS/.Net produced a 404!

Because of the lack of usable logs, I had to keep commenting things in and out to work out what was actually going wrong (as well as adding an empty 'test' action to remove any other possible variables).
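One way to make this class of failure easier to diagnose (a sketch only; the service names are hypothetical) is to defer service creation out of the constructor, so any exception surfaces inside the action, where it goes through normal MVC error handling and logging, rather than while the controller is being constructed:

public class OrdersController : Controller   // hypothetical controller
{
    private readonly Lazy<IOrderService> _service;

    public OrdersController()
    {
        // Nothing here can throw; construction is deferred until first use
        _service = new Lazy<IOrderService>(() => new OrderService());
    }

    public ActionResult Index()
    {
        // If the connection string is bad, the exception happens here,
        // inside normal MVC error handling, not during controller creation
        return View(_service.Value.GetAll());
    }
}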

Tuesday, 24 October 2017

Address labels in PHP without special software

This sounded like a simple job. Print out address labels from a database in a certain format (8 x 2 labels on a page) in a way that can then be printed directly onto the label sheets. Easy right?

Not so much. HTML and CSS3 are supposed to add a load of print functionality and physical sizes but they don't work well at all. Browsers treat them all differently: Chrome applies margins in addition to what you set in CSS so everything gets squashed and, whatever I tried, it didn't seem to make sense. On top of that, the Developer tools allow you to render using the print css but this does not really give you a real print preview while tweaking the styles.

Fortunately, I chanced upon a suggestion to use FPDF, a PHP library to generate PDFs in code. It looked easy enough although, unfortunately, you cannot simply create a fixed size "cell" and wrap text in it. A Cell is one line of text and MultiCell will simply create more cells for each new line of text. Not quite right, but fortunately, using the position functions SetX, SetY etc., the maths is relatively simple to keep track of column number and row number and then work out where to add a new page.

Use the following code as a reference - note it is from Yii 2 framework and so some of this won't be relevant to you. Then check out the notes below for additional help.

public function actionAddresslabels()
{
    $request = \Yii::$app->request;

    // Set defaults for layout
    $cols = $request->get('cols', 2);           // Number of columns
    $rows = $request->get('rows', 8);           // Number of rows
    $top = $request->get('top', 8);             // Top margin in mm
    $left = $request->get('left', 5);           // Left margin in mm
    $vspacing = $request->get('vspacing', 0);   // Spacing vertically between each label in mm (excludes outside margins)
    $hspacing = $request->get('hspacing', 2.5); // Spacing horizontally between columns in mm (excludes outside margins)
    $padding = $request->get('padding', 3);     // Padding inside each label in mm

    // Compute some numbers
    $pageSize = $rows * $cols;
    $colSpacing = (210.0 - (2 * $left) + $hspacing) / $cols;
    $rowSpacing = (297.0 - (2 * $top) + $vspacing) / $rows;

    $dataProvider = new ActiveDataProvider([
        'query' => User::find()
            ->where(['year' => Date('Y')]),
        'pagination' => false,
    ]);

    // Load data into local variables for loop
    $models = $dataProvider->getModels();
    $modelCount = $dataProvider->getCount();
    $currentModel = 0;
    $currentY = 0;
    $currentX = 0;

    // Basic setup of PDF
    $pdf = new FPDF();
    $pdf->SetLeftMargin($left + $padding);
    $pdf->SetTopMargin($top + $padding);

    // For each cols x rows of addresses, add a page and render them correctly
    while ($currentModel < $modelCount) {
        if ($currentModel % $pageSize === 0) {
            // First label on a page: add the page and reset the positions
            $pdf->AddPage();
            $currentX = $left + $padding;
            $currentY = $top + $padding;
        }
        $pdf->SetXY($currentX, $currentY);
        $pdf->SetLeftMargin($currentX);     // Hard left edge for this column
        $model = $models[$currentModel];
        $this->writeAddressLabel($pdf, $model);
        $currentY += $rowSpacing;
        if ($currentY > (297 - 20)) {
            // Past the bottom of the page: back to the top of the next column
            $currentY = $top + $padding;
            $currentX += $colSpacing;
        }
        $currentModel++;
    }

    $this->layout = false;
    \Yii::$app->response->format = \yii\web\Response::FORMAT_RAW;
    return $pdf->Output();
}

  • The first section allows you to pass different values from the defaults into the query string for this action.
  • $padding allows all the text to be in from the top-left corner of each label and needs to be included in various calculations
  • The second section does some calculations: page size is the total number of labels per page; column and row spacing are the pitch values, so they include the gutters between the labels.
  • The ActiveDataProvider is simply how I am querying the people to produce labels for. What I end up with is an array of objects ($models) that I will pull the individual address parts from.
  • $modelCount is simply used to control how long the loop below will continue for
  • The next section sets some static values for the PDF instance. The margins will shift all of the setXY stuff in from the edges of the page.
  • The main loop goes through all of the "users" in my models array one by one.
  • The first section inside the loop uses mod arithmetic to see whether the current item is the first on a page, in which case a new page is created, and the X and Y positions are reset (they are relative to the current page, not the entire document).
  • The cursor is then positioned with SetXY
  • SetLeftMargin is called to ensure the current column has a hard left edge, otherwise the text becomes indented.
  • The method writeAddressLabel is a helper method in my class that simply contains a number of calls to $pdf->Write(5, $model->town.PHP_EOL); with some wrapped in if ( $model->address3 !== "" ) so that they are not printed if blank. In your code, they might equate to null but in my code, they are blank strings if not set.
  • After the address is written, the Y position is moved down by a label pitch and then if this goes below the bottom of the page (hard-coded for A4 paper size 297mm minus a margin), then the column is incremented, Y is reset back to the top. We do not need to check for the column overflowing the page, since the mod arithmetic at the top of the loop will automatically create a new page when we have written the total number of items on the page.
  • The lines below the loop tell Yii to output the correct format and not render an HTML layout, and the call to $pdf->Output() closes the document and sends it to the standard output, which in this case is the response object.

Monday, 16 October 2017

JWT, JWE, JWS in .Net - Pt 3 - JWE

JWE is the encrypted version of a JWT. Encryption provides a way to ensure privacy of the data from an attacker and if using a pre-shared key, a very strong way of transmitting private data.

The .Net version of the JWT libraries does not require a signature to also be applied; you could assume that the data has integrity if you use an AEAD algorithm for encryption - which you should. However, it appears that you cannot validate the token if it does not have a signature - I'm not sure if there is a way to do that or whether it simply does not make sense to validate a token with no signature.

Fortunately, to produce a JWE in .Net is very similar to producing a JWS, although you need to generate a cryptographically secure symmetrical key as well as using a certificate to sign it. Naturally, all of this has overhead so although encryption-by-default can be useful, it does come at a price, especially for high-volume systems.

To create a key (the Content Encryption Key - CEK), you can either just use RNGCryptoServiceProvider from the System.Security.Cryptography namespace like this:

var crng = new RNGCryptoServiceProvider();
var keyBytes = new byte[32];   // 256 bits for AES256
crng.GetBytes(keyBytes);       // Fill with cryptographically secure random bytes

Or you can hash some other piece of data using SHA256 to stretch it. Be careful with this method since you need the input to the SHA function to already be cryptographically secure random, or an attacker could discover a pattern and work out how you generate your keys! For instance, do not stretch a numeric userid or guid. In my case, I was stretching a 32 character randomly generated "secret" from an OAuth credential to create my pre-shared key.

var keyBytes = SHA256.Create().ComputeHash(Encoding.UTF8.GetBytes("some data to stretch"));

Be careful with SHA256 and other cryptography classes regarding thread safety. It might be quicker to pre-create instances of types like SHA256, but if ComputeHash is not thread safe, you might break something when it is used by multiple threads. I believe some of the .Net cryptography classes are thread safe and others are not.
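Given that uncertainty, a cautious pattern is to create a fresh, disposable instance per call rather than sharing one (Stretch is a hypothetical helper name):

using System.Security.Cryptography;
using System.Text;

static byte[] Stretch(string secret)
{
    // A new instance per call side-steps any shared-state thread safety issues,
    // and using() disposes it when done
    using (var sha = SHA256.Create())
    {
        return sha.ComputeHash(Encoding.UTF8.GetBytes(secret));
    }
}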

Once you have your CEK, the only extra step is to create EncryptingCredentials as well as SigningCredentials:

var symSecurityKey = new SymmetricSecurityKey(keyBytes);
var creds = new EncryptingCredentials(symSecurityKey, SecurityAlgorithms.Aes256KW, SecurityAlgorithms.Aes256CbcHmacSha512);

Note that you need to use algorithms that are supported in .Net (I can't guarantee that everything in the SecurityAlgorithms class is supported), that the selected algorithms match the length of the key provided (i.e. 32 bytes for AES256) and that the second algorithm, which is used to encrypt the actual data, is a type that includes authenticated data - i.e. a signature for the encrypted data to verify it was not tampered with before decrypting (such as GCM or HMACSHA). If you choose the wrong length of key, the call to CreateSecurityToken will throw an ArgumentOutOfRangeException. The first algorithm is the one that will be used to encrypt the key itself before it is added to the JWE token.

You can also use RSA-OAEP for the first algorithm, but this changes the model from the pre-shared key described above. It will still only wrap a 256 bit CEK to match the second algorithm, but it needs a public key to encrypt the CEK and the recipient of the token will need the related private key to decrypt it.

By providing the SigningCredentials and EncryptingCredentials to the call to CreateSecurityToken(), the library will create the token, sign it and then encrypt this as the payload in an outer JWE. This means that the header for the JWT will only contain data about the encrypting parameters (alg, enc etc) and only after it is decrypted, will the signing parameters be visible.
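Putting the pieces together, a minimal sketch (issuer, audience, lifetime and the two credentials objects are assumed to be set up as in the snippets above; the CreateJwtSecurityToken overload is from JwtSecurityTokenHandler in System.IdentityModel.Tokens.Jwt):

var handler = new JwtSecurityTokenHandler();
var token = handler.CreateJwtSecurityToken(
    issuer,
    audience,
    subject: null,                           // add claims afterwards if needed
    notBefore: DateTime.UtcNow,
    expires: DateTime.UtcNow.AddSeconds(lifetime),
    issuedAt: DateTime.UtcNow,
    signingCredentials: signingCreds,        // inner JWS signature
    encryptingCredentials: encryptingCreds); // outer JWE encryption
var jwe = handler.WriteToken(token);         // 5 dot-separated base64url sections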

As mentioned before, you do not have to set a SigningCredential, but when I tried this, the call to ValidateToken failed. It seems it cannot validate data that is only encrypted, although it might be possible to bypass validation, since the encrypted data already requires the use of an authenticated algorithm.

Validating is otherwise the same as it is for JWS, except for also setting the value of the TokenDecryptionKey in the TokenValidationParameters in the same way as it was set when it was created.

JWT, JWE, JWS in .Net - Pt 2 - Validating JWS

Fortunately, validating a JWS (and for that matter, a JWE) is very straight-forward thanks to JwtSecurityTokenHandler.ValidateToken().

Quite simply, you take the serialized string, create a TokenValidationParameters object with the relevant fields filled in to validate and then call ValidateToken, it looks like the following. Note that the same code is used for JWS and JWE tokens, the only difference is whether you fill in the TokenDecryptionKey property. This shows both:

 private ClaimsPrincipal ValidateToken(string tokenString, byte[] keybytes)
 {
    var signingkey = new X509SecurityKey(new X509Certificate2(certFilePath, certPassword));
    var jwt = new System.IdentityModel.Tokens.Jwt.JwtSecurityToken(tokenString);  // Parse to inspect the header (e.g. kid) if needed
    // Verification
    var tokenValidationParameters = new TokenValidationParameters()
    {
        ValidAudiences = new string[]
        {
            "123456"  // Needs to match what was set in aud property of token
        },
        ValidIssuers = new string[]
        {
            ""        // Needs to match iss property of token
        },
        IssuerSigningKey = signingkey,
        TokenDecryptionKey = keybytes == null ? null : new SymmetricSecurityKey(keybytes)
    };
    SecurityToken validatedToken;
    var handler = new System.IdentityModel.Tokens.Jwt.JwtSecurityTokenHandler();
    return handler.ValidateToken(tokenString, tokenValidationParameters, out validatedToken);
 }

In my method (in a Unit Test), I simply return the ClaimsPrincipal that is returned from ValidateToken() but you could also get the validated and decrypted token that is returned as an out parameter if you wanted to continue to use it.

Also note that I am simply loading the same pfx I used to sign the token to validate it, whereas in real life, you are likely to visit the url of the issuer and find the public key for the signed data using the kid property from the token.

This method allows the caller to pass null for the keybytes if only validating a signed JWS or real key bytes matching the encryption key used for the JWE. This is for pre-shared keys only. In a later post, I will talk about extracting the encryption key, which is actually embedded in a JWE and does not need to be pre-shared.

In part 3, we'll look at JWE (encrypted JWT)

Friday, 13 October 2017

JWT, JWE, JWS in .Net

JWT in .Net

When I first approached the idea of doing JWT (json web tokens) in .Net, it all seemed a little confusing.

Firstly, it IS confusing because Microsoft started with a Microsoft.IdentityModel.Tokens namespace, which was eventually migrated into System.IdentityModel.Tokens, deprecating the original namespace. But THEN they added new functionality in System.IdentityModel.Tokens version 5 that references NEW code in a resurrected Microsoft.IdentityModel.Tokens. All the usual chaos ensues since some things are the same (most class names) and some are different, and some code written for v4 of System.IdentityModel.Tokens will not work in version 5. Anyway...

The Basics

Before you can understand how to do this, you should know what json is (JavaScript Object Notation), which is a fairly compact way to move data around - much smaller than xml for instance - and generally web friendly.

You should also understand the basic concepts of signing and encryption.

Signing using asymmetric key encryption (RSA, DSA etc.) allows you to create a packet of data, sign it with your private key and send it to a recipient. Even though the data is NOT private because it is NOT encrypted, the recipient can use your PUBLIC key to verify the signature that you applied to the data, which provides 2 protections (assuming keys are secure etc.). Firstly, it provides integrity of the data. An attacker could not modify the data and leave a valid signature since the private key needed to produce the signature is not available to the attacker. The recipient would know this when they validate the token and should/must discard the data if the signature fails. Secondly, signing provides non-repudiation, which means the sender cannot deny signing the data unless they admit to losing their private keys to an attacker.

It is also possible to sign the data with a symmetrical key, which is useful if the sender and receiver already have a securely shared secret, since it removes the need to perform asymmetrical signing/verifying, which is computationally expensive. (See Amey's answer here.) Obviously this means that either party, or anyone who is given this secret, can also sign data, so it is not normally used.
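In the .Net libraries, that symmetric variant is just a SigningCredentials built from a SymmetricSecurityKey with an HMAC algorithm (the key bytes here are a placeholder; in practice use a securely shared, cryptographically random secret):

using System.Text;
using Microsoft.IdentityModel.Tokens;

// Placeholder 32-byte shared secret; both parties must already hold this securely
var keyBytes = Encoding.UTF8.GetBytes("0123456789abcdef0123456789abcdef");
var creds = new SigningCredentials(new SymmetricSecurityKey(keyBytes),
    SecurityAlgorithms.HmacSha256);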

Encryption is about obscuring the real data so an attacker cannot read the data when it is at rest or in transit. With encryption, it is assumed that either there is a securely shared key or that the same key can be derived using something like Diffie-Hellman key exchange.

JWT is an abstract idea that is made concrete in 2 sub-types.

JWS is a form of JWT for signed data, it is not encrypted.

JWE is an encrypted and signed form of JWT.

JWS - JSON Web Signature

JWS is relatively straight-forward. It is composed of a json header with typ, alg and kid to identify the type ("JWT"), the algorithm (the signing algorithm, for example "RS256" or a URL for RSA) and the key identifier, so the recipient knows which key can be used to verify the signature.

You will need the namespaces:

using Microsoft.IdentityModel.Tokens;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Cryptography.X509Certificates;

You can create a header either explicitly in .Net or you can allow the helper method CreateSecurityToken to do it for you:

Method 1: Create the JwtHeader yourself (from certificate in this case)

var key = new X509SecurityKey(new X509Certificate2(certFilePath,certPassword));
var algorithm = SecurityAlgorithms.RsaSha256;   // "RS256"
var creds = new SigningCredentials(key, algorithm);
var header = new JwtHeader(creds);

Method 2: Use CreateJwtSecurityToken helper method

var handler = new JwtSecurityTokenHandler();
var token = handler.CreateJwtSecurityToken(issuer, audience, null, creationTime, creationTime.AddSeconds(lifetime), creationTime, creds);
// token now has header automatically

In method 2, the payload is being populated at the same time as the header.

After the header comes the payload, which is another JSON dictionary (a claims list) with some standard claims such as nbf (not before), exp (expiry), iss (issuer) and aud (audience i.e. recipient), as well as any other additional claims that you wish to send. The issuer can be any relevant string, but if you are using public key discovery it is useful to use a URL that can be used to look up .well-known/jwks and see a list of keys related to key ids (the kid in the header of the JWT).

If using the first method, you can create the payload in a number of ways but this is probably the easiest:

var payload = new JwtPayload(issuer, audience, claims, notBefore, expiry);

var handler = new JwtSecurityTokenHandler();
var token = new JwtSecurityToken(header, payload);

token is simply an object that contains the data to serialize into JWT.

The second method above already shows how to add the required payload (nbf,exp,iss,aud) in the same call to CreateJwtSecurityToken. Other claims would need to be added afterwards simply by calling token.Payload.AddClaims().

Once these are created, the header and payload are each base64 encoded and joined with a period (.), the signature is computed across that data using the specified algorithm and key, and the encoded signature is appended as the third block. This part is really easy because, if you have specified your key correctly, you simply tell the handler to write the token:

var serializedJwt = handler.WriteToken(token);

The result might look something like this (three base64url blocks separated by periods):

base64url(header).base64url(payload).base64url(signature)
NOTE: The spec uses URL friendly base 64, which means + becomes -, / becomes _ and the = symbol is stripped from the end.
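A minimal sketch of that URL-friendly encoding (the helper class name is my own, not part of the JWT libraries, which do this for you):

```csharp
using System;

static class Base64Url
{
    // Standard base64, then apply the JWT spec's substitutions:
    // '+' -> '-', '/' -> '_', trailing '=' padding stripped
    public static string Encode(byte[] data) =>
        Convert.ToBase64String(data)
            .Replace('+', '-')
            .Replace('/', '_')
            .TrimEnd('=');

    public static byte[] Decode(string text)
    {
        var s = text.Replace('-', '+').Replace('_', '/');
        // Restore the stripped padding before decoding
        switch (s.Length % 4)
        {
            case 2: s += "=="; break;
            case 3: s += "="; break;
        }
        return Convert.FromBase64String(s);
    }
}

class Demo
{
    static void Main()
    {
        var bytes = new byte[] { 0xfb, 0xff, 0xfe };      // plain base64 uses both special chars
        Console.WriteLine(Convert.ToBase64String(bytes)); // +//+
        Console.WriteLine(Base64Url.Encode(bytes));       // -__-
    }
}
```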

In part 2, we'll describe how to validate the token on the other end.

SecurityTokenEncryptionFailedException: IDX10615: Encryption failed. No support for: Algorithm

Microsoft.IdentityModel.Tokens.SecurityTokenEncryptionFailedException: IDX10615: Encryption failed. No support for: Algorithm when trying to create an instance of Microsoft.IdentityModel.Tokens.EncryptingCredentials()

I tried variations for parameter 2 (alg) but none seemed to work. I was really foxed until I found the source code online and realised that I was passing in the signing key (an RSA key) rather than the encryption key (an AES key) for the first parameter!

The code will attempt to use the key's crypto factory to lookup the specified algorithm and obviously that won't work when trying to specify AES on an RSA key.

Just a typo!

While I'm here, the algorithms for alg and enc need to be the same length because they will use the same key (parameter 1).
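For reference, a sketch of creating EncryptingCredentials with a symmetric AES key rather than the RSA signing key (the key material is a placeholder; with the "dir" algorithm the key is used directly as the content-encryption key, so its size must match what the enc algorithm requires):

```csharp
using System.Security.Cryptography;
using Microsoft.IdentityModel.Tokens;
using System.IdentityModel.Tokens.Jwt;

// A 256-bit symmetric key, as A128CBC-HS256 requires
// (128 bits for AES plus 128 bits for the HMAC half)
var keyBytes = new byte[32];
RandomNumberGenerator.Fill(keyBytes);

var encryptingCreds = new EncryptingCredentials(
    new SymmetricSecurityKey(keyBytes),      // the AES key, NOT the RSA signing key
    JwtConstants.DirectKeyUseAlg,            // alg: "dir" (use the key directly)
    SecurityAlgorithms.Aes128CbcHmacSha256); // enc
```

This can then be passed alongside the SigningCredentials when creating a JWE via the token handler.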

Tuesday, 18 July 2017

.Net Web API Validation

So I'm writing a Web Api .Net service to call from some mobile apps. Before you ask, I haven't used .Net Core since it requires that all the supporting libraries are portable and that is not a 5 minute job!

Anyway, it basically works but I found a couple of funnies that have been reported elsewhere but they are not things that are obviously broken - thank goodness for Unit Tests!

1) I have an attribute that validates the model required by the API action and then sets BadRequest if the model doesn't validate - this saves calling if (ModelState.IsValid) everywhere. It didn't seem to work: IsValid was true when I called an action with no parameters. The reason? If the model is null, it passes validation! Terrible but true. I had to add an additional line of code to ensure the model was not null before checking whether it was valid.

2) The RegularExpressionAttribute does not validate empty strings against the regex. It would be nice if this behaviour was a property of the attribute, but it isn't - empty strings simply pass. Again, I had to subclass RegularExpressionAttribute, override IsValid to ensure the value is not empty, and then call the base class IsValid. I then subclassed this into my specific attributes so that they all work as expected.
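A minimal sketch of that subclass using System.ComponentModel.DataAnnotations (the attribute names and the postcode regex are just illustrative, not the ones from my project):

```csharp
using System.ComponentModel.DataAnnotations;

// Reject null/empty first, then defer to the normal regex check,
// since the base attribute treats null/empty strings as valid.
public class RequiredRegexAttribute : RegularExpressionAttribute
{
    public RequiredRegexAttribute(string pattern) : base(pattern) { }

    public override bool IsValid(object value)
    {
        var text = value as string;
        if (string.IsNullOrEmpty(text))
            return false;            // empty no longer passes silently
        return base.IsValid(value);  // normal regex validation
    }
}

// A specific attribute can then derive from it:
public class PostcodeAttribute : RequiredRegexAttribute
{
    public PostcodeAttribute() : base(@"^[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$") { }
}
```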

Tuesday, 20 June 2017

Client Certificate does not appear in Windows Credential Manager

This is one of those jobs I have done several times but couldn't remember why it didn't work the next time.

You add a client certificate to your personal store under Current User; it is in-date and it chains OK, but when using Windows Credential Manager to add a connection, it doesn't offer you this certificate to choose.

As pointed out here, you have to edit the properties of the certificate and untick "Smart Card Logon" and "Any Purpose" otherwise Windows will ask for a Smart Card to access the client cert!

Wednesday, 14 June 2017

OutOfRangeInput One of the request inputs is out of range on Azure CDN

Setting up a new environment that was (theoretically) the same as an existing system. Created a new CDN on Azure, pointed it at blob storage and tried to access it and Azure gives you a rather esoteric (and apparently catch-all) error.

Most answers that I found related to using invalid naming i.e. requesting a table with upper-case letters, when tables are not allowed to have upper-case letters (which matches the error message).

The issue here is that the CDN is hiding an error that is actually a storage error and, surprise, surprise, has nothing to do with the request but is related to a permission error.

I had setup the storage blob with "Private" permission but it actually needs "Blob" permission, which allows anonymous to read but not write blobs.

I updated it to use the correct permission but it still didn't work - of course, it's a CDN and everything takes ages to propagate. I waited a while and it worked.

Wednesday, 24 May 2017

Build sqlproj projects with MSBuild on a Build Server

God bless Microsoft, each time a new Visual Studio comes out, they make an improvement, like making the install directories more logical and allowing better side-by-side installations. The problem? Most of these are not backwards compatible and it creates a whole load of compatibility problems.

Install VS 2017 and create a database project (sqlproj) in a solution. Open up the sqlproj file and you will see some really poorly thought out targets:

    <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">11.0</VisualStudioVersion>
    <!-- Default to the v11.0 targets path if the targets file for the current VS version is not found -->
    <SSDTExists Condition="Exists('$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v$(VisualStudioVersion)\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets')">True</SSDTExists>
    <VisualStudioVersion Condition="'$(SSDTExists)' == ''">11.0</VisualStudioVersion>

Basically, what this says is: if I don't know what the Visual Studio version is when I build, I will assume I should look for the v11 (VS2012) directories and fail if I don't find them, rather than what most people would do, which is either to fail if the version is not passed in, or to hard-code the version chosen when the project was added.

Run this on a build server with MSBuild instead of Visual Studio and you might see the following error:

The imported project "C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\VisualStudio\v11.0\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets" was not found

Which makes sense because I don't have VS2012 installed on the build server at all.

I eventually realised the issue is that VS injects the version into the target whereas MSBuild does not. A simple parameter passed to MSBuild (/p:VisualStudioVersion=15.0) sorts that problem and tells it to use VS2017, which I have installed on the server, although only the Build Tools.

I then get a different error:

The imported project "C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\VisualStudio\v15.0\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets" was not found

Well, it looks the same but this time it should work, since I have v15 installed. I had a look and sure enough, the SQL tools were not installed. Installations have changed in VS2017 and although I tried to install the Data Processing workload for the Build Tools, the option was not there. I installed the workload using the VS2017 community edition, checked for the target file, which was now there but the build failed again.

Looking closer, I noticed that the path was almost correct. MSBuild uses the BuildTools subdirectory of 2017 whereas proper Visual Studio uses community (in my case). Basically, there is no obvious way to install SSDT into the Build Tools area, which is where MSBuild looks so instead I copied over the MSBuild\Microsoft\VisualStudio\v15.0\SSDT folder from community into buildtools (with its directory structure) and also copied over Common7\IDE\Extensions\Microsoft\SQL* directories, which are used by the sqlproj target and the build worked!

Weird errors deploying new MVC site to IIS with Web Deploy

This is a brand new MVC .Net site, which has old functionality ported into it and it works fine on my local machine. Deploy it to an internal web server using Web Deploy and I get some strange errors:

The first is obvious, .Net 4.5.2 is not recognised. Using the Web Platform installer, I download and install that.

Then I get a weird compiler error: "The compiler failed with error code -2146232576". This was simply because the site was trying to compile but the App Pool Identity did not have modify access to the web site folder so I added that permission.

Then I get another weird error: "%1 is not a valid Win32 application". This basically means that something is 32-bit but is being accessed by a 64-bit only app pool. I tried enabling 32-bit in the app pool but that didn't fix it. Then I found that there is an issue running the Roslyn compiler (I don't know why) and the workaround is to disable the "Allow precompiled site to be updatable" option in the publish settings. This means everything will be compiled during deployment and it won't need to happen in-place.

Not sure why these things are alive in the wild but at least the site works now.

If you do need to be able to update the site in place, you might be stuck for now...

Tuesday, 9 May 2017

New to MongoDB and starting out

When you first try something new, you don't know what you don't know. Unfortunately with MongoDB, there is a large mixture of old and new tutorials. Some of them are still linked from the official site even though they are not relevant any more.

So there are two things I wanted to point out when using the instructions from MongoDB and doing your first operations on a database.

Firstly, the instructions about setting up auth and creating an admin user are incomplete. You try to connect to a test database and it doesn't work. Why? Because the official docs only tell you to give the admin user the userAdminAnyDatabase role, which is exactly what it sounds like. If you are just playing around and don't want to start creating users, you will also need the dbAdminAnyDatabase and readWriteAnyDatabase roles. If you have already set the user up, you will need to use the console and run db.updateUser().

Secondly, you should know that the operations on the SDK are lazily invoked. For instance, if you call GetDatabase(), it will return a meta-object whether or not the client can reach the server. It is only when you actually need to query or write to the database that the connection is attempted, and at this point the operation might fail for several reasons. This means that you cannot, for instance, call GetCollection() and test for null to see if a collection exists, because the result will never be null even if the collection doesn't exist (you'll only find out later!). Instead, in that example, you would use something like await db.ListCollectionsAsync(), which will block and call onto the database.
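A sketch of that existence check with the C# driver (the connection string, database and collection names are placeholders; this assumes the MongoDB.Driver package):

```csharp
using System.Linq;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

class MongoDemo
{
    static async Task Main()
    {
        var client = new MongoClient("mongodb://MongoUser:thepassword@localhost/resources");
        var db = client.GetDatabase("resources");   // returns immediately, no I/O yet

        // GetCollection() also never touches the server, so this proves nothing:
        var maybe = db.GetCollection<BsonDocument>("widgets"); // never null

        // This actually contacts the server and will throw if auth/connection fails:
        using var cursor = await db.ListCollectionsAsync();
        var collections = await cursor.ToListAsync();
        bool exists = collections.Any(c => c["name"] == "widgets");
    }
}
```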

Thirdly, you should know that users are usually added to individual databases, so you would need to use the database name as part of the credential. HOWEVER, if you need to access several databases with the same user, you should instead create a single user in admin (which is then the database name you would pass in the credential) and add roles to this user that specify the target databases (see the example here and the large list of built-in roles here). Please don't deploy production systems with super user connections!

Tuesday, 2 May 2017

Removing google secondary email address

Just when you thought Google couldn't make their interface any more confusing, I got tripped up, couldn't find anything useful by searching Google and had to work it out myself. Not the pinnacle of usability!

I wanted to delete someone's secondary email address, which was actually an alias that was added to the user to continue to receive an ex-employees emails. She didn't want them any more.

I opened up the details in the Admin screens and pressed Edit next to the secondary email address contact information. I deleted the email address, pressed Update User, it all looked happy but behold, the email was still listed as a secondary email address.

The problem? You first have to delete the alias for the user and press Save. Then you can edit the contact information, remove the email address and it stays removed!

Wednesday, 12 April 2017

Bamboo Visual Studio Build Fails - Works in Visual Studio

Trying to create a new Plan in Bamboo, cloning an existing plan that works, and the build fails with no useful error except:

Failing task since return code of [C:\Users\Bamboo\bamboo-home\DotNetSupport\devenvrunner.bat E:\bamboo\xml-data\build-dir\SL-CCI-JOB1 C:\Program Files (x86)\Microsoft Visual Studio 14.0 amd64 MyProject.sln /build Debug] was 255 while expected 0

I opened the same code in Visual Studio and it built with no errors.

Thanks Bamboo! I tried various comparisons of good and bad files but eventually, I compared the sln file itself between the original working plan and the new broken one and saw two things that were different:

1) The broken build had an older solution version at the top ("2013" instead of "14")
2) The presence of SourceSafe bindings (that we weren't using any more).

I removed both of these and the build works fine! Not sure which was causing the problem, maybe it was asking if I wanted to upgrade but I couldn't say yes!

Thursday, 9 March 2017

Could not establish trust relationship for the SSL/TLS secure channel with authority

This was a surprising and annoying error we experienced on Microsoft Azure when our web app was calling a WCF web service but it was only happening randomly.

Fortunately, I knew certain things worked which made it easier to narrow-down the problem.

I knew the web service worked, I knew I could connect to it with a valid SSL certificate chain, the only variable was that I was using Azure Traffic Manager to balance load between Japan and Ireland. Normally, the web apps in their respective areas would get sent to their local web service but in the unlikely event an entire web service dies, Traffic Manager could send the request to the other data centre.

Every now and then, I would see the following error:

The underlying issue is that when you make an SSL connection to a load balancer, the load balancer terminates the SSL (usually) so that if your request gets sent to a different web server, your connection stays up OK.

HOWEVER, if your request gets sent to another load-balancer, which would happen if Traffic Manager decided your previous one was unavailable, then the SSL connection cannot be resumed on the new load-balancer and you get the error above which, as often, contains a misleading message.

You wouldn't notice this effect in a browser, since browsers will automatically retry a connection if it drops, in which case they would re-establish the connection to the new load balancer and carry on. The call from the web app to the web service uses a proxy class generated by SvcUtil.exe, and this doesn't have any built-in functionality for re-establishing dropped SSL connections; it will instead throw the exception and fail.

There is a project that provides some error handling for web service clients provided here, which I haven't tried but which looks like it might get around the problem.

I have worked around the problem by disabling Traffic Manager for the app to web service call so it is always local, which opens up a small risk if one web service died, but it should be OK for now.

Friday, 3 March 2017

Azure Traffic Manager shows degraded status for App Services https

I was surprised to see that the endpoints that Azure Traffic Manager was monitoring were showing degraded.

I looked into it and Google said that the Traffic Manager would check for a 200 response (and it won't follow 3xx responses) from the site but from where was it calling?

I thought that the problem might be the http->https redirect I had on the site so I needed the Traffic Manager to call the https endpoint and not the http one but when you click on the endpoint and press Edit, it doesn't show the endpoint.

What you need to do INSTEAD is to click Configure on the Traffic Manager itself and set the endpoint location in there:

Note that I am using the favicon in the path. The reason for this is that if I hit the default endpoint (/) it might cause a redirect to another page. Favicon is a nice static known resource that should always return 200. You could, of course, point it to anything else.

Tuesday, 7 February 2017

nginx returns 200 for image but no image!

This was a weird one. I copied a site into a new directory on a web server, set up nginx for the new site and accessed it. The php files worked as expected but I noticed an image wasn't displayed properly.

When I looked in the network tab, the server had returned a 200 for the image but trying to view the image showed something that wasn't the image it was supposed to be.

Very confusing but very simple!

The images I had copied up were corrupted somehow (possibly a previous use of WinSCP in text mode?) so the browser couldn't display them properly, although nginx found them and returned them.

I had the originals on the web server so re-copied those into the new directory and it was all good!

Monday, 30 January 2017

Azure App Services. http works but https doesn't

The statement in the title is not true but I thought it was. Why?

I deployed from Visual Studio, using web deploy, directly to an App Services web app. It was a WCF web service project and when I visited with http, it worked fine. I then uploaded a TLS cert, setup the custom domain, tried to visit and BANG.

Once I had enabled all my detailed errors and read the log, the only information was 0x80070005 - Access Denied.

No real clues about what was going on or what "access" was denied.

Anyway, after 90 minutes of poking around by an MS support technician, it appears that there is a compatibility problem when using client certificates. I was using one and, although I had not used Resource Manager to deploy the app, there is a resource manager entry for the site and its default configuration is:

"clientCertEnabled": false,

Open up the site's resource group in Resource Explorer, navigate to the site itself and you'll see the json on the right-hand side. Press the Edit button, find this setting, edit it to be true and PUT it and it should all magically come to life!

I need to automate this, but it will be fine for now.

Wednesday, 25 January 2017

WSUS client download error 0x800b0109 "Some update files aren't signed correctly"

This is another error that says what the problem is but doesn't give you any clues.

In the case of setting up a WSUS server to serve Windows Updates over a LAN, the WSUS server creates an SSL certificate for the endpoints and chains this to a self-signed root cert that is installed on the server only.

When a client connects, due to the absence of a chain of trust, downloading metadata fails and the brief error above appears.

What you need to do is find the SSL cert being used on the WSUS server in IIS (under bindings on the main site), then export this certificate without private key from mmc.exe, then distribute this to your client PCs.

I'm sure you can automate this with GP but I just emailed it out for people to use!