Monday, 16 October 2017

JWT, JWE, JWS in .Net - Pt 3 - JWE

JWE is the encrypted form of a JWT. Encryption keeps the data private from an attacker and, when used with a pre-shared key, gives a very strong way of transmitting private data.

The .Net JWT libraries do not require a signature to be applied as well; you could assume the data has integrity if you use an AEAD algorithm for encryption - which you should. However, it appears that you cannot validate the token if it does not have a signature - I'm not sure whether there is a way to do that or whether it simply does not make sense to validate a token with no signature.

Fortunately, producing a JWE in .Net is very similar to producing a JWS, although you need to generate a cryptographically secure symmetric key as well as using a certificate to sign it. Naturally, all of this has overhead, so although encryption-by-default can be useful, it does come at a price, especially for high-volume systems.

To create the key (the Content Encryption Key - CEK), you can either just use RNGCryptoServiceProvider from the System.Security.Cryptography namespace like this:

var crng = new RNGCryptoServiceProvider();
var keyBytes = new byte[32];   // 256 bits for AES256
crng.GetBytes(keyBytes);

Or you can hash some other piece of data with SHA256 to stretch it. Be careful with this method: the input to the SHA function must already be cryptographically secure random data, or an attacker could discover a pattern and work out how you generate your keys! For instance, do not stretch a numeric user id or a guid. In my case, I was stretching a 32-character randomly generated "secret" from an OAuth credential to create my pre-shared key.

var keyBytes = SHA256.Create().ComputeHash(Encoding.UTF8.GetBytes("some data to stretch"));

Be careful with SHA256 and other cryptography classes regarding thread safety. It might be quicker to pre-create instances of types like SHA256, but if ComputeHash is not thread safe, you might break something when it is used by multiple threads. I believe some of the .Net cryptography classes are thread safe and others are not.
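One way to sidestep the question entirely is to create (and dispose) a fresh instance per call, at a small allocation cost. A minimal sketch:

```csharp
using System.Security.Cryptography;
using System.Text;

public static class KeyStretcher
{
    // A new SHA256 instance per call avoids any shared mutable state,
    // so there is no thread-safety question to answer.
    public static byte[] StretchKey(string secret)
    {
        using (var sha = SHA256.Create())
        {
            return sha.ComputeHash(Encoding.UTF8.GetBytes(secret));
        }
    }
}
```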

Once you have your CEK, the only extra step is to create EncryptingCredentials as well as SigningCredentials:

var symSecurityKey = new SymmetricSecurityKey(keyBytes);
var creds = new EncryptingCredentials(symSecurityKey, SecurityAlgorithms.Aes256KW, SecurityAlgorithms.Aes256CbcHmacSha512);

Note that you need to use algorithms that are supported in .Net (I can't guarantee that every constant in the SecurityAlgorithms class is supported), that the selected algorithms match the length of the key provided (i.e. 32 bytes for AES256), and that the second algorithm, which encrypts the actual data, is an authenticated type - i.e. one that includes a signature over the encrypted data to verify it was not tampered with before decrypting (such as GCM or HMAC-SHA). If you choose the wrong length of key, the call to CreateSecurityToken will throw an ArgumentOutOfRangeException. The first algorithm is the one used to encrypt (wrap) the CEK itself before it is added to the JWE token.

You can also use RSA-OAEP for the first (key wrapping) algorithm, but note that this works differently: it is asymmetric, so the sender needs the recipient's public key to encrypt the CEK, and the recipient needs the related private key to decrypt it. The CEK itself is still a symmetric key sized to match the second algorithm.
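As a sketch of that variant (the certificate path and password names here are hypothetical; the recipient would hold the matching private key):

```csharp
// The recipient's certificate provides the public key used to wrap the CEK.
var rsaPublicKey = new X509SecurityKey(
    new X509Certificate2(recipientCertPath, recipientCertPassword));

var rsaCreds = new EncryptingCredentials(
    rsaPublicKey,
    SecurityAlgorithms.RsaOAEP,               // wraps the generated CEK with RSA-OAEP
    SecurityAlgorithms.Aes256CbcHmacSha512);  // encrypts the payload with the CEK
```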

By providing both the SigningCredentials and the EncryptingCredentials to the call to CreateSecurityToken(), the library will create the token, sign it, and then encrypt the signed token as the payload of an outer JWE. This means that the header of the outer JWT only contains data about the encryption parameters (alg, enc etc.); only after it is decrypted are the signing parameters visible.
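Putting it together, the outer JWE comes out of the same helper used for a JWS, just with the extra EncryptingCredentials argument. A sketch (signingCreds and encryptingCreds are assumed to have been created as shown earlier; the issuer, audience and lifetime values are illustrative):

```csharp
var handler = new JwtSecurityTokenHandler();

var token = handler.CreateJwtSecurityToken(
    issuer: "https://issuer.example.com",
    audience: "123456",
    subject: null,
    notBefore: DateTime.UtcNow,
    expires: DateTime.UtcNow.AddMinutes(5),
    issuedAt: DateTime.UtcNow,
    signingCredentials: signingCreds,        // inner JWS signature
    encryptingCredentials: encryptingCreds); // outer JWE encryption

var serializedJwe = handler.WriteToken(token);
```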

As mentioned before, you do not have to set a SigningCredential, but when I tried this, the call to ValidateToken failed, which suggests it cannot validate data that is only encrypted. It might be possible to bypass that check, since the encrypted data already requires the use of an authenticated algorithm.

Validating is otherwise the same as for a JWS, except that you also set the TokenDecryptionKey in the TokenValidationParameters in the same way as it was set when the token was created.

JWT, JWE, JWS in .Net - Pt 2 - Validating JWS

Fortunately, validating a JWS (and for that matter, a JWE) is very straight-forward thanks to JwtSecurityTokenHandler.ValidateToken().

Quite simply, you take the serialized string, create a TokenValidationParameters object with the relevant fields filled in, and then call ValidateToken. Note that the same code is used for JWS and JWE tokens; the only difference is whether you fill in the TokenDecryptionKey property. The following shows both:

private ClaimsPrincipal ValidateToken(string tokenString, byte[] keybytes)
{
    var signingkey = new X509SecurityKey(new X509Certificate2(certFilePath, certPassword));
    // Verification
    var tokenValidationParameters = new TokenValidationParameters()
    {
        ValidAudiences = new string[]
        {
            "123456"  // Needs to match what was set in the aud property of the token
        },
        ValidIssuers = new string[]
        {
            "https://logindev.pixelpin.co.uk"  // Needs to match the iss property of the token
        },
        IssuerSigningKey = signingkey,
        TokenDecryptionKey = keybytes == null ? null : new SymmetricSecurityKey(keybytes)
    };
    SecurityToken validatedToken;
    var handler = new System.IdentityModel.Tokens.Jwt.JwtSecurityTokenHandler();
    return handler.ValidateToken(tokenString, tokenValidationParameters, out validatedToken);
}

In my method (in a unit test), I simply return the ClaimsPrincipal that ValidateToken() returns, but you could also use the validated and decrypted token that comes back in the out parameter if you wanted to continue working with it.

Also note that I am simply loading the same pfx I used to sign the token in order to validate it, whereas in real life you are likely to visit the URL of the issuer, perhaps https://theurl.com/.well-known/jwks, and find the public key for the signed data using the kid property from the token.

This method allows the caller to pass null for keybytes if only validating a signed JWS, or the real key bytes matching the encryption key used for the JWE. This is for pre-shared keys only. In a later post, I will talk about extracting the encryption key, which is actually embedded in the JWE itself and does not need to be pre-shared.
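Calling the helper above for the two cases might look like this (serializedJws, serializedJwe and keyBytes are assumed from the earlier steps):

```csharp
// JWS only: no decryption key needed, so pass null.
var jwsPrincipal = ValidateToken(serializedJws, null);

// JWE: pass the same 32-byte pre-shared key that was used to encrypt the token.
var jwePrincipal = ValidateToken(serializedJwe, keyBytes);
```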

In part 3, we'll look at JWE (encrypted JWT)

Friday, 13 October 2017

JWT, JWE, JWS in .Net

JWT in .Net

When I first approached the idea of doing JWT (json web tokens) in .Net, it all seemed a little confusing.

Firstly, it IS confusing, because Microsoft started with a Microsoft.IdentityModel.Tokens namespace, which was eventually migrated into System.IdentityModel.Tokens, deprecating the original namespace. But THEN they added new functionality in System.IdentityModel.Tokens version 5 that references NEW code in a resurrected Microsoft.IdentityModel.Tokens. All the usual chaos ensues since some things are the same (most class names) and some are different. Some code written for v4 of System.IdentityModel.Tokens will not work in version 5. Anyway...

The Basics

Before you can understand how to do this, you should know what json (JavaScript Object Notation) is: a fairly compact, generally web-friendly way to move data around - much smaller than xml, for instance.

You should also understand the basic concepts of signing and encryption.

Signing using asymmetric key cryptography (RSA, DSA etc.) allows you to create a packet of data, sign it with your private key and send it to a recipient. Even though the data is NOT private because it is NOT encrypted, the recipient can use your PUBLIC key to verify the signature that you applied to the data, which provides 2 protections (assuming keys are secure etc.). Firstly, it provides integrity of the data. An attacker could not modify the data and leave a valid signature, since the private key needed to produce the signature is not available to the attacker. The recipient would know this when they validate the token and should/must discard the data if the signature check fails. Secondly, signing provides non-repudiation, which means the sender cannot deny signing the data unless they admit to losing their private keys to an attacker.

It is also possible to sign the data with a symmetric key, which is useful if the sender and receiver already have a securely shared secret; this removes the need to perform asymmetric signing/verifying, which is computationally expensive (see Amey's answer here). Obviously this means that either party, or anyone who is given the secret, can also sign data, so it is not normally used.
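If you did want symmetric signing in this library, a sketch (assuming sharedSecret is an existing, securely shared 32-byte key) might look like:

```csharp
// sharedSecret must already be a high-entropy, securely shared key.
var hmacKey = new SymmetricSecurityKey(sharedSecret);
var hmacCreds = new SigningCredentials(hmacKey, SecurityAlgorithms.HmacSha256); // HS256
```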

Encryption is about obscuring the real data so an attacker cannot read the data when it is at rest or in transit. With encryption, it is assumed that either there is a securely shared key or that the same key can be derived using something like Diffie-Hellman key exchange.

JWT is an abstract idea that is made concrete in 2 sub-types.

JWS is a form of JWT for signed data, it is not encrypted.

JWE is an encrypted form of JWT, usually wrapping a signed token (JWS) as its payload.

JWS - Json Web Signature

JWS is relatively straight-forward. It is composed of a json header with typ, alg and kid to identify the type ("JWT"), the signing algorithm (for example "RS256", or a URL form for RSA-SHA256) and the key identifier, so the recipient knows which key can be used to verify the signature.

You will need the namespaces:

using Microsoft.IdentityModel.Tokens;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Cryptography.X509Certificates;

You can create a header either explicitly in .Net or you can allow the helper method CreateSecurityToken to do it for you:

Method 1: Create the JwtHeader yourself (from certificate in this case)

var key = new X509SecurityKey(new X509Certificate2(certFilePath,certPassword));
var algorithm = "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256";     // RS256
var creds = new SigningCredentials(key, algorithm);
var header = new JwtHeader(creds);

Method 2: Use CreateJwtSecurityToken helper method

var handler = new JwtSecurityTokenHandler();
// Parameters: issuer, audience, subject (null here), notBefore, expires, issuedAt, signingCredentials
var token = handler.CreateJwtSecurityToken(issuer, audience, null, creationTime, creationTime.AddSeconds(lifetime), creationTime, creds);
// token now has its header populated automatically

In method 2, the payload is being populated at the same time as the header.

After the header is the payload, which is another json dictionary (a claims list) with some standard claims such as nbf (not before), exp (expiry), iss (issuer) and aud (audience, i.e. recipient) as well as any other additional claims that you wish to send. Issuer can be any string that is relevant, but if you are using public key discovery it is useful to use a URL that can be combined with .well-known/jwks to look up the list of keys related to key ids (the kid in the header of the JWT).

If using the first method, you can create the payload in a number of ways but this is probably the easiest:

// tokenData here is your own model holding the issuer, audience, times and claims
var payload = new JwtPayload(
    tokenData.Issuer,
    tokenData.Audience,
    null,
    tokenData.CreationTime.UtcDateTime,
    tokenData.CreationTime.AddSeconds(tokenData.Lifetime).UtcDateTime);

payload.AddClaims(tokenData.Claims);
var token = new JwtSecurityToken(header, payload);

token is simply an object that contains the data to serialize into JWT.

The second method above already shows how to add the required payload claims (nbf, exp, iss, aud) in the same call to CreateJwtSecurityToken. Other claims need to be added afterwards by calling token.Payload.AddClaims().
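For example (the claim name and value here match the sample token below, but any custom claims work the same way):

```csharp
using System.Security.Claims;

// token is the JwtSecurityToken created above; custom claims are
// appended to the payload after creation.
token.Payload.AddClaims(new[]
{
    new Claim("mydata", "somethinginhere")
});
```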

Once these are created, the header and payload are combined and the signature is computed across the data using the specified algorithm and key. Each of the three parts is then base64 encoded and concatenated with a period (.) into 3 blocks. This part is really easy: if you have specified your key correctly, you simply tell the handler to write the token:

var serializedJwt = handler.WriteToken(token);

The result might look something like this:

eyJhbGciOiJodHRwOi8vd3d3LnczLm9yZy8yMDAxLzA0L3htbGRzaWctbW9yZSNyc2Etc2hhMjU2Iiwia2lkIjoiMjVGN0Y2NThDQ0E2NjI2QjRENTdBNEExQTMyOUUyMjBDMjNEQUI3QSIsInR5cCI6IkpXVCJ9
.eyJuYmYiOjE1MDc5MTA0NDAsImV4cCI6MTUwNzkxMDQ3MCwiaXNzIjoicHAiLCJhdWQiOiIxMjM0NTYiLCJteWRhdGEiOiJzb21ldGhpbmdpbmhlcmUifQ
.TNSLgLuj-j9eSC5HiQqS86LLjZ6ZJnGoMAkGMsTqY-pjptQ8qItmrrSE3bf12E5aYHfNr4IWmSdY4-qkQRUXmtb-Ev2c0wE5NkABYqRdAok-AHuBUcRds7VrEQTanB69sKAhtEIZHshLPc4D9IMlYpc2opOzTCBGIB15mX0HodF6hdP6-LeaEeM-rR8v6bnmMsuvzu3GSGOSvPRm_yvZv25ywp4IzEYlbmLcw6NBJt4fx_8ZSEvIQtvCYtMgtAkFbiJ85Lo3sY7m8o8w84ChIG4AgDQi-woRwGU-3RFouppmAgjqPMCgMYn5Tt7Q3rjVtAgulMv0z0tVNvmOVg0zt04sI-CJIkgimAdaNM-O35Lyh4DzPasOXM_HsZ5_3EQoQn1pVRNU_6iBy4X1vGTRNICZon0x3v2MLvQQskaKfbUkSEqs1mceKXgu3cp6GRdT18z1ZUduP5hNYrysCvXtYjmcmmjC-RbnjgNcv3vC51gZvoyQ5OvmtBIvYv_QF14paV1uIxxd8Z_P0z0Z4RrNokUkWwb2n4RYOmW0Ihs7HR-1ba6Cervh7noGM9MQnbSD9lLHu9aQRp9Jl7vS4YPJI98IizYOzCndRcDKpv9LsJNJgXX3OVpxaUqbEiRVGcYMu7m72mdj4kNAyO86JejnaEgkFItUSg8jAU6DnkBwDHU

NOTE: The spec uses URL friendly base 64, which means + becomes -, / becomes _ and the = symbol is stripped from the end.
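The handler does this conversion for you, but the mapping is simple enough to sketch:

```csharp
using System;

public static class Base64Url
{
    // URL-safe base64 per the JOSE specs: '+' becomes '-', '/' becomes '_',
    // and trailing '=' padding is removed.
    public static string Encode(byte[] data)
    {
        return Convert.ToBase64String(data)
            .Replace('+', '-')
            .Replace('/', '_')
            .TrimEnd('=');
    }
}
```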

In part 2, we'll describe how to validate the token on the other end.

SecurityTokenEncryptionFailedException: IDX10615: Encryption failed. No support for: Algorithm

Microsoft.IdentityModel.Tokens.SecurityTokenEncryptionFailedException: IDX10615: Encryption failed. No support for: Algorithm when trying to create an instance of Microsoft.IdentityModel.Tokens.EncryptingCredentials()

I tried variations for parameter 2 (alg) but none seemed to work. I was really foxed until I found the source code online and realised that I was passing in the signing key (RSA) rather than the encryption key (AES) for the first parameter!

The code will attempt to use the key's crypto factory to lookup the specified algorithm and obviously that won't work when trying to specify AES on an RSA key.

Just a typo!

While I'm here: the algorithms for alg and enc need to match in key length, because they will both use the same key (parameter 1).

Tuesday, 18 July 2017

.Net Web API Validation

So I'm writing a Web Api .Net service to call from some mobile apps. Before you ask, I haven't used .Net Core, since it requires that all the supporting libraries are portable, and that is not a 5 minute job!

Anyway, it basically works but I found a couple of funnies that have been reported elsewhere but they are not things that are obviously broken - thank goodness for Unit Tests!

1) I have an attribute that validates the model required by the API action and then sets BadRequest if the model doesn't validate - this saves calling if (ModelState.IsValid) everywhere. It didn't seem to work: IsValid was true when I called an action with no parameters. The reason? If the model is null, it passes validation! Terrible but true. I had to add an additional line of code to ensure the model was not null before checking whether it was valid.
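A sketch of such an attribute (the exact response shape and naming are my own; the key point is the explicit null check alongside ModelState.IsValid):

```csharp
using System.Net;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

// Rejects requests whose model is null or fails validation,
// so actions never need to check ModelState themselves.
public class ValidateModelAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        // A null model passes ModelState validation, so test for it explicitly.
        bool modelMissing = actionContext.ActionArguments.ContainsValue(null);

        if (modelMissing || !actionContext.ModelState.IsValid)
        {
            actionContext.Response = actionContext.Request.CreateErrorResponse(
                HttpStatusCode.BadRequest, actionContext.ModelState);
        }
    }
}
```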

2) The RegularExpressionAttribute does not validate empty strings against the regex. It would be nice if that was a property of the attribute, but it isn't - it just doesn't. Again, I had to subclass RegularExpressionAttribute, override IsValid to ensure the value is not empty, and then call the base class IsValid. I then subclassed this into my specific attributes so that they all work as expected.
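A minimal sketch of that subclass (the class name is my own):

```csharp
using System.ComponentModel.DataAnnotations;

// RegularExpressionAttribute treats null/empty values as valid; this
// subclass fails them before delegating to the normal regex check.
public class RequiredRegexAttribute : RegularExpressionAttribute
{
    public RequiredRegexAttribute(string pattern) : base(pattern) { }

    public override bool IsValid(object value)
    {
        var text = value as string;
        if (string.IsNullOrEmpty(text))
            return false;

        return base.IsValid(value);
    }
}
```

Specific attributes can then derive from this class so the empty-string behaviour is consistent everywhere.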

Tuesday, 20 June 2017

Client Certificate does not appear in Windows Credential Manager

This is one of those jobs I have done several times but couldn't remember why it didn't work the next time.

You add a client certificate to your personal store under Current User; it is in-date and it chains OK, but when using Windows Credential Manager to add a connection, it doesn't offer you this certificate to choose.

As pointed out here, you have to edit the properties of the certificate and untick "Smart Card Logon" and "Any Purpose" otherwise Windows will ask for a Smart Card to access the client cert!

Wednesday, 14 June 2017

OutOfRangeInput One of the request inputs is out of range on Azure CDN

Setting up a new environment that was (theoretically) the same as an existing system: I created a new CDN on Azure, pointed it at blob storage, tried to access it, and Azure gave me a rather esoteric (and apparently catch-all) error.

Most answers that I found related to using invalid naming i.e. requesting a table with upper-case letters, when tables are not allowed to have upper-case letters (which matches the error message).

The issue here is that the CDN is hiding an error that is actually a storage error and, surprise, surprise, is nothing to do with the request but is related to a permission error.

I had setup the storage blob with "Private" permission but it actually needs "Blob" permission, which allows anonymous to read but not write blobs.

I updated it to use the correct permission but it still didn't work because.....it's a CDN and everything takes ages to propagate. I waited a while and it worked.

Wednesday, 24 May 2017

Build sqlproj projects with MSBuild on a Build Server

God bless Microsoft, each time a new Visual Studio comes out, they make an improvement, like making the install directories more logical and allowing better side-by-side installations. The problem? Most of these are not backwards compatible and it creates a whole load of compatibility problems.

Install VS 2017 and create a database project (sqlproj) in a solution. Open up the sqlproj file and you will see some really poorly thought out targets:

<PropertyGroup>
    <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">11.0</VisualStudioVersion>
    <!-- Default to the v11.0 targets path if the targets file for the current VS version is not found -->
    <SSDTExists Condition="Exists('$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v$(VisualStudioVersion)\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets')">True</SSDTExists>
    <VisualStudioVersion Condition="'$(SSDTExists)' == ''">11.0</VisualStudioVersion>
  </PropertyGroup>

Basically, what this says is: if I don't know what the Visual Studio version is when I build, I will assume I should look for the v11 (VS2012) directories and fail if I don't find them - rather than what most people would do, which would be either to fail if the version is not passed in, or to hard-code the version that was current when the project was added.

Run this on a build server with MSBuild instead of Visual Studio and you might see the following error:

The imported project "C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\VisualStudio\v11.0\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets" was not found

Which makes sense because I don't have VS2012 installed on the build server at all.

I eventually realised the issue is that VS injects the version into the target whereas MSBuild does not. A simple parameter passed to MSBuild (/p:VisualStudioVersion=15.0) sorts that problem and tells it to use VS2017, which I have installed on the server, although only the Build Tools.

I then get a different error:

The imported project "C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\VisualStudio\v15.0\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets" was not found

Well, it looks the same but this time it should work, since I have v15 installed. I had a look and sure enough, the SQL tools were not installed. Installations have changed in VS2017 and although I tried to install the Data Processing workload for the Build Tools, the option was not there. I installed the workload using the VS2017 community edition, checked for the target file, which was now there but the build failed again.

Looking closer, I noticed that the path was almost correct. MSBuild uses the BuildTools subdirectory of 2017, whereas full Visual Studio uses Community (in my case). Basically, there is no obvious way to install SSDT into the Build Tools area, which is where MSBuild looks. Instead, I copied the MSBuild\Microsoft\VisualStudio\v15.0\SSDT folder from Community into BuildTools (with its directory structure), and also copied the Common7\IDE\Extensions\Microsoft\SQL* directories, which are used by the sqlproj target, and the build worked!

Weird errors deploying new MVC site to IIS with Web Deploy

This is a brand new MVC .Net site, which has old functionality ported into it and it works fine on my local machine. Deploy it to an internal web server using Web Deploy and I get some strange errors:

The first is obvious, .Net 4.5.2 is not recognised. Using the Web Platform installer, I download and install that.

Then I get a weird compiler error: "The compiler failed with error code -2146232576". This was simply because the site was trying to compile but the App Pool Identity did not have modify access to the web site folder so I added that permission.

Then I get another weird error: "%1 is not a valid Win32 application". This basically means that something is 32 bit but is being accessed by a 64-bit only app pool. I tried enabling 32 bit in the app pool but that didn't fix it. Then I found that there is an issue running the Roslyn compiler (I don't know why) and the workaround is to disable "Allow precompiled site to be updatable" in the publish settings. This means everything is compiled during deployment and it doesn't need to happen in-place.

Not sure why these things are alive in the wild but at least the site works now.

If you do need to be able to update the site, you might be stuck for now....

Tuesday, 9 May 2017

New to MongoDB and starting out

When you first try something new, you don't know what you don't know. Unfortunately with MongoDB, there is a large mixture of old and new tutorials. Some of them are still linked from the official site even though they are not relevant any more.

So there are a few things I wanted to point out when using the instructions from MongoDB and doing your first operations on a database.

Firstly, the instructions about setting up auth and creating an admin user are incomplete. You try to connect to a test database and it doesn't work. Why? Because the official docs only tell you to give the admin user the userAdminAnyDatabase role, which is exactly what it sounds like. If you are just playing around and don't want to start creating users, you will also need the dbAdminAnyDatabase and readWriteAnyDatabase roles. If you have already set the user up, you will need to use the console and run db.updateUser().

Secondly, you should know that the operations in the SDK are lazily invoked. For instance, GetDatabase() will return a meta-object whether or not the client can reach the server. It is only when you actually query or write to the database that the connection is attempted, and at that point the operation might fail for several reasons. This means that you cannot, for instance, call GetCollection() and test for null to see if a collection exists - it will never be null even if the collection doesn't exist (you'll just find out later!). Instead, you would use something like await db.ListCollectionsAsync(), which blocks and actually calls the database.
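A sketch of that existence check using the C# driver (assuming the 2.x driver API; the helper name is my own):

```csharp
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

public static class MongoHelpers
{
    // GetCollection() never returns null, so check for existence by listing
    // collection names with a filter - this actually round-trips to the server.
    public static async Task<bool> CollectionExistsAsync(IMongoDatabase db, string name)
    {
        var options = new ListCollectionsOptions
        {
            Filter = new BsonDocument("name", name)
        };

        using (var cursor = await db.ListCollectionsAsync(options))
        {
            var matches = await cursor.ToListAsync();
            return matches.Count > 0;
        }
    }
}
```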

Thirdly, you should know that users are usually added to individual databases, so you would need the database name as part of the credential. HOWEVER, if you need to access several databases with the same user, you should instead create a single user in admin (which is then the database name you pass in the credential) and add roles to this user that specify the target databases - see the example here and the large list of built-in roles here. Please don't deploy production systems with super-user connections!

Tuesday, 2 May 2017

Removing google secondary email address

Just when you thought Google couldn't make their interface any more confusing, I got tripped up, couldn't find anything useful by searching Google and had to work it out myself. Not the pinnacle of usability!

I wanted to delete someone's secondary email address, which was actually an alias that was added to the user to continue to receive an ex-employees emails. She didn't want them any more.

I opened up the details in the Admin screens and pressed Edit next to the secondary email address contact information. I deleted the email address, pressed Update User, and it all looked happy - but behold, the email was still listed as a secondary email address.

The problem? You first have to delete the alias for the user and press Save. Then you can edit the contact information, remove the email address and it stays removed!

Wednesday, 12 April 2017

Bamboo Visual Studio Build Fails - Works in Visual Studio

Trying to create a new Plan in Bamboo by cloning an existing plan that works, the build fails with no useful error except:

Failing task since return code of [C:\Users\Bamboo\bamboo-home\DotNetSupport\devenvrunner.bat E:\bamboo\xml-data\build-dir\SL-CCI-JOB1 C:\Program Files (x86)\Microsoft Visual Studio 14.0 amd64 MyProject.sln /build Debug] was 255 while expected 0

I opened the same code in Visual Studio and it built with no errors.

Thanks Bamboo! I tried various comparisons of good and bad files but eventually, I compared the sln file itself between the original working plan and the new broken one and saw two things that were different:

1) The broken build had an older solution version at the top ("2013" instead of "14")
2) The presence of SourceSafe bindings (that we weren't using any more).

I removed both of these and the build works fine! Not sure which was causing the problem, maybe it was asking if I wanted to upgrade but I couldn't say yes!

Thursday, 9 March 2017

Could not establish trust relationship for the SSL/TLS secure channel with authority

This was a surprising and annoying error we experienced on Microsoft Azure when our web app was calling a WCF web service but it was only happening randomly.

Fortunately, I knew certain things worked which made it easier to narrow-down the problem.

I knew the web service worked, I knew I could connect to it with a valid SSL certificate chain, the only variable was that I was using Azure Traffic Manager to balance load between Japan and Ireland. Normally, the web apps in their respective areas would get sent to their local web service but in the unlikely event an entire web service dies, Traffic Manager could send the request to the other data centre.

Every now and then, I would see the "Could not establish trust relationship for the SSL/TLS secure channel with authority" error from the title.
The underlying issue is that when you make an SSL connection to a load balancer, the load balancer terminates the SSL (usually) so that if your request gets sent to a different web server, your connection stays up OK.

HOWEVER, if your request gets sent to another load-balancer, which would happen if Traffic Manager decided your previous one was unavailable, then the SSL connection cannot be resumed on the new load-balancer and you get the error above which, as often, contains a misleading message.

You wouldn't notice this effect in a browser, since browsers will automatically retry the connection if it drops, in which case they re-establish the connection to the new load balancer and carry on. The call from the web app to the web service uses a proxy class created by SvcUtil.exe, and this doesn't have any built-in functionality for re-establishing dropped SSL connections; it will instead throw the exception and fail.

There is a project that provides some error handling for web service clients provided here, which I haven't tried but which looks like it might get around the problem.

I have worked around the problem by disabling Traffic Manager for the app to web service call so it is always local, which opens up a small risk if one web service died, but it should be OK for now.

Friday, 3 March 2017

Azure Traffic Manager shows degraded status for App Services https

I was surprised to see that the endpoints that Azure Traffic Manager was monitoring were showing degraded.

I looked into it, and Google said that the Traffic Manager checks for a 200 response (and will not follow 3xx responses) from the site - but from where was it calling?

I thought the problem might be the http->https redirect I had on the site, so I needed the Traffic Manager to call the https endpoint and not the http one. But when you click on the endpoint and press Edit, it doesn't show the endpoint path to monitor.

What you need to do INSTEAD is to click Configure on the Traffic Manager itself and set the endpoint location in there:


Note that I am using the favicon in the path. The reason for this is that if I hit the default endpoint (/) it might cause a redirect to another page. Favicon is a nice static known resource that should always return 200. You could, of course, point it to anything else.

Tuesday, 7 February 2017

nginx returns 200 for image but no image!

This was a weird one. I copied a site into a new directory on a web server, set up nginx for the new site and accessed it. The php files worked as expected but I noticed an image wasn't displayed properly.

When I looked in the network tab, the server had returned a 200 for the image but trying to view the image showed something that wasn't the image it was supposed to be.

Very confusing but very simple!

The images I had copied up were corrupted somehow (possibly a previous use of WinSCP in text mode?), so the browser couldn't display them properly even though nginx found them and returned them.

I had the originals on the web server so re-copied those into the new directory and it was all good!

Monday, 30 January 2017

Azure App Services. http works but https doesn't

The statement in the title is not true but I thought it was. Why?

I deployed from Visual Studio, using web deploy, directly to an App Services web app. It was a WCF web service project and when I visited with http, it worked fine. I then uploaded a TLS cert, setup the custom domain, tried to visit and BANG.

Once I had enabled all my detailed errors and read the log, the only information was 0x80070005 - Access Denied.

No real clues as to what was going on or what "access" was denied.

Anyway, after 90 minutes of poking around by an MS support technician, it appears that there is a compatibility problem when using client certificates. I was using one, and although I had not used Resource Manager to deploy the app, there is a resource manager entry for it and its default configuration is:

"clientCertEnabled": false,

Open up the site's resource group in Resource Explorer, navigate to the site itself and you'll see the json on the right-hand side. Press the Edit button, find this setting, edit it to be true and PUT it and it should all magically come to life!

I need to automate this, but it will be fine for now.

Wednesday, 25 January 2017

WSUS client download error 0x800b0109 "Some update files aren't signed correctly"

This is another error that says what the problem is but doesn't give you any clues.

In the case of setting up a WSUS server to serve Windows Updates over a LAN, the WSUS server creates an SSL certificate for the endpoints and chains this to a self-signed root cert that is installed on the server only.

When a client connects, due to the absence of a chain of trust, downloading metadata fails and the brief error above appears.

What you need to do is find the SSL cert being used on the WSUS server in IIS (under bindings on the main site), then export this certificate without private key from mmc.exe, then distribute this to your client PCs.

I'm sure you can automate this with GP but I just emailed it out for people to use!

Simples.