Monday, 8 April 2019

The CodeDom provider type could not be located

Team City has this weird behaviour (it might actually be MSBuild) where certain build problems are treated as warnings, so the build carries on until something more serious fails later on. In this case, I then got the following message:

error ASPPARSE: The CodeDom provider type "Microsoft.CodeDom.Providers.DotNetCompilerPlatform.VBCodeProvider, Microsoft.CodeDom.Providers.DotNetCompilerPlatform, Version=2.0.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" could not be located. (C:\TeamCity\buildAgent\work\54cc4b00255b0695\Presentation\Admin\web.config line 148)
/global.asax(1): error ASPPARSE: Could not load type 'Admin.Global_asax'.

This was very strange since we hadn't changed much since the last successful build. Fortunately, I could quickly check that we DID have the nuget packages installed, they WERE the correct and latest versions, and web.config WAS correctly set up.
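For reference, what "correctly set up" means here is the system.codedom section that the Microsoft.CodeDom.Providers.DotNetCompilerPlatform nuget package adds to web.config. Trimmed down, it looks roughly like this (the version will match whatever package version you actually have installed):

<system.codedom>
  <compilers>
    <compiler language="vb;vbs;visualbasic;vbscript" extension=".vb"
      type="Microsoft.CodeDom.Providers.DotNetCompilerPlatform.VBCodeProvider, Microsoft.CodeDom.Providers.DotNetCompilerPlatform, Version=2.0.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  </compilers>
</system.codedom>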

It was only when I dug into the build log that I found some project references could not be resolved (we were manually modifying reference paths to point to an area outside of the project root), so certain types could not be found. I don't really know why broken references aren't treated as errors, but hey-ho!

Fixed the references and the random error disappeared!

Thursday, 28 March 2019

Database projects in Visual Studio 2017 with SSDT and Devops deployments

Introduction

If you work in the .Net stack, the main way to have versioned database projects where changes can be easily monitored is to use SQL Server Data Tools (SSDT), which are available as part of Visual Studio. For some sad reason, however, these have been left to languish and there are various shortcomings and tooling problems that have not been fixed in 5+ years.

Fortunately, most of these have workarounds, which are not that hard to understand.

Using a combination of hacks and workarounds, we can plug this into our build pipeline and deploy it manually or automatically to our SQL servers.

Getting the tooling

Getting the tooling is straight-forward. Run the Visual Studio installer and choose the Data storage and processing workload to install. This gives you the Visual Studio templates, which is enough to get your project started and to be able to deploy directly from Visual Studio to a SQL server. You can also reverse engineer an existing database and deploy updates rather than dropping and recreating a database.

Creating your project

In Visual Studio you click to create a new project as usual and search for SQL Server Database Project, which should be underneath Other Languages -> SQL Server. Give it a name etc. as you do with any other project and this gives you an empty project that does nothing.

Reverse Engineering an existing database

You can skip this step if you are not creating a project from an existing database. Otherwise, right-click the project name and choose Schema Compare. This brings up the comparison dialog and, by default, it points from your (empty) project to a target. Click the double-arrow in the middle to swap this round so the project becomes the target, then choose Select source. This will bring up a dialog to compare against either another project, a Data-tier application (if you have a DACPAC file available) or a live database server.

In this example, I will choose Database and press Select Connection. This might already have history in it; otherwise click Browse and either browse local servers or directly enter the details to connect to your server. Warning: make sure you click Remember Password here, otherwise a bug in Visual Studio might leave you unable to connect to this same data source again! From what I understand, if you don't choose to remember, the history tab will contain a password-less connection which will be used for subsequent connections even if you attempt to browse to it again - and it won't prompt you for the password. If you have done this, you might need to create a new schema compare file, right-click to remove the old item from history and browse to it again.

Click OK and then press Compare in the tabbed view to compare the database with your empty project. You will see all of the objects in the source database listed and you can choose which of these to copy into your project. With what you have selected, press the Update button and SQL files will be created in your project to generate the schema required.

Referential Integrity

Something that is both useful and annoying is that the project will not build unless it can understand all of the object references. If you build your new project and look at the errors, you will understand what I mean (unless you have used a simple database which is already OK). A typical error will be User username has an unresolved reference to Login loginname. This will be caused by the SQL script CREATE USER username FOR LOGIN loginname, which was reverse engineered from the database.

Of course, this occurs because many of the references are to objects that are outside of your database. For example, any references to sys views, logins or sys tables will all exist in master and will not compile in your project.

If you are never going to create the database from scratch, you can probably just delete all the roles and users but if you reference system items, you cannot build until you add a reference to the master database schema.

If you want everything to work and/or you need to create everything from scratch, you need to create another database project for every external reference including master. If you are referencing logins, they will not exist in the default master reference so you will need another project just for shared logins.
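As a sketch of what that logins project contains (all names invented), it is really just the login definitions that live in master on a real server, which the CREATE USER ... FOR LOGIN scripts in your main project can then resolve against once the dacpac is referenced:

-- Hypothetical contents of the shared logins project (e.g. Security\Logins.sql)
CREATE LOGIN [WebAppUser] WITH PASSWORD = 'placeholder-used-only-for-the-build';
CREATE LOGIN [MYDOMAIN\ReportingUser] FROM WINDOWS;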

Referencing master

Referencing master (and msdb if needed) is straightforward because you simply right-click references and Add Database Reference, then choose System database and select the one you want to reference. It will then import, which should remove any errors related to references to master objects like sys.views.

Referencing other projects

The way you reference other projects depends on whether you want them all to be in the same solution (which is easier) or whether you want them to exist and be maintained separately.

If you add them to the same solution, they work in the same way as your current project. Add a new project to the solution and either reverse-engineer it or just add types for the references that are broken in your other project. Then add it as a reference by right-clicking References, choosing Add Database Reference and then selecting Database projects in the current solution. Set a default name for the database and choose a variable name if it needs substituting into deployment scripts.

To reference separate projects, create another solution, repeat the steps above and, once completed, build the project and reference the output DACPAC file inside this project by right-clicking References, choosing Add Database Reference and selecting the Data-tier Application (.dacpac) option. I don't know if the build will fail if the dacpac is not available on the build server, but I suspect it will, so I have a pre-build step that copies it from the source project's bin/Release folder into a local XRefs folder and reference it from there.
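For what it's worth, that pre-build step is nothing clever - just a copy in the project's pre-build event along these lines (the folder layout and project name here are made up, adjust for your own):

rem Pre-build event on the consuming database project: copy the externally-built dacpac
rem into the local XRefs folder that the database reference points at
xcopy /Y /I "$(SolutionDir)..\SharedLogins\SharedLogins\bin\Release\SharedLogins.dacpac" "$(ProjectDir)XRefs\"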

Database Settings

Some issues you will only find by trial and error, but one thing I found was that the default collation for my database project was not automatically reverse-engineered from the source database. To set database settings for when deploying a new database, right-click the project and choose Properties -> Project Settings -> Database Settings, of which there are many: contained databases, all the usual file settings and also weird stuff that most of us don't understand. Some of these can be overridden at deployment time via SqlPackage.
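I won't cover them all here, but as an example of a deployment-time override, SqlPackage accepts /p: properties on top of whatever the project settings say - something like this made-up example:

rem Hypothetical example: override deployment behaviour at publish time with /p: properties
SqlPackage.exe /Action:Publish /SourceFile:DatabaseProj.dacpac /TargetServerName:myserver /TargetDatabaseName:MyDb /p:DropObjectsNotInSource=True /p:BlockOnPossibleDataLoss=False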

Making it Build

When I first built the project, there were lots of warnings related to the case-sensitivity of object references, even though SQL doesn't care, e.g. MemberID instead of MemberId. You can suppress specific SQL warnings in the Build tab of the project properties, in the Suppress Transact-SQL warnings box. Although removing warnings is optional, it is good practice to either fix things you care about or suppress things you don't. That way, any new warnings are obvious.

You should be able to get the project to build without errors and hopefully without warnings in which case you might decide to either get it building on your CI server or otherwise continue to add post-deployment scripts to populate default data into the tables.

Post-deployment Data

If you are creating a database from scratch, you probably need to add some default data into certain tables. This is quite easy, except that the SQL project only allows a single post-deployment script. Fortunately, that script can pull in other files as long as it is written in SQLCMD mode!

Create an optional folder and add a new "Post deployment script" from the new item dialog. All this does is add an empty script and set its Build Action to PostDeploy. Be very careful that you don't try and add another one, since the first one will be set back to None (there should be a warning about this).

Once you have added this file, you can add other generic SQL files (ideally in the same folder) and don't set them to PostDeploy. Once this is done, you can then add references to the generic scripts from the post deployment one like this:

:r .\FirstPart.sql
:r .\SecondPart.sql

If the editor shows an error on these lines, just press the SQLCMD Mode button (the one with an exclamation mark - a bang - on the end of it) at the top of the file. Note that the paths are relative to the current script, not the project.

Top tip: if you want to generate the insert statements automatically from a database, right-click the database in SQL Server Management Studio and choose Tasks -> Generate Scripts. Here you can choose tables to export data for, but on the Set Scripting Options page, ensure you select Advanced and make sure Types of data to script is set to Data only (in my case, I also disable the generation of the USE DATABASE statement). Also, choose file as the output medium, since the new query window option has a very low memory limit and will error for anything except the smallest exports. The results can then be copied into your post-deployment scripts.
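The output is just a list of INSERT statements, with IDENTITY_INSERT toggled where needed - something like this invented example, which you can paste into one of the part scripts:

-- Example of the kind of data-only script the wizard generates (table and values invented)
SET IDENTITY_INSERT [dbo].[MemberType] ON
INSERT [dbo].[MemberType] ([MemberTypeId], [Name]) VALUES (1, N'Standard')
INSERT [dbo].[MemberType] ([MemberTypeId], [Name]) VALUES (2, N'Administrator')
SET IDENTITY_INSERT [dbo].[MemberType] OFF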

Note that separating the data scripts and merging via the single file is optional since the file is treated as a single large file when deployed to a database.

Variable Replacement

SSDT allows you to do variable replacement in deployment scripts using $(variablename) syntax - for example, you could change the database name for an external reference. Variables are always created for external references and should be used in place of hard-coded names. This allows, for example, deployment of Database1->ExternalDatabase1 to be done with the same project as a deployment of Database2->ExternalDatabase2.

You can add your own variables also in the Project Settings and access these in the same way.
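For example (the table names here are invented), a post-deployment script that reads from the external database should use the variable rather than the real database name:

-- $(ExternalDatabase) is the SQLCMD variable created for the database reference,
-- so the same script works whichever physical database it is deployed against
INSERT INTO dbo.Lookup (LookupId, Name)
SELECT l.LookupId, l.Name
FROM [$(ExternalDatabase)].dbo.Lookup l
WHERE NOT EXISTS (SELECT 1 FROM dbo.Lookup d WHERE d.LookupId = l.LookupId)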

Warning: this can cause problems if your post-deployment scripts have e.g. javascript code in them which includes things like $(function), which will look like a variable but will fail the deployment. You can either remove them, rename them to something like jQuery(function), or you could try adding a variable called function which expands to the word "function"!

Building on the Build Server

You might think that this is simply a case of adding the Data storage and processing workload to the build server and then building with the Build Tools, but you would be wrong! For reasons that can only be described as poor, the Build Tools a) do not include all of the files that Visual Studio does and b) put the files they do add into a different location. The build will fail.

Fortunately, the errors are very explicit and will allow you to copy files from your developer PC or from elsewhere on the build server into the correct locations, saving you from installing either an old version of the Build Tools or the full version of Visual Studio.

Also note that in my case Team City detected that I needed Visual Studio 2013, which is the version that Visual Studio 2017 stamps on the solution when creating a SQL Server Database project - another very poor situation. I simply told Team City to use 2017 instead, which is fine.

The two files the build is looking for are:
1) The targets file for the build. This is present on the build server but is installed to another location - simply find it and copy it to the location the build says it is looking in.
2) The schema definitions for master and msdb. These are NOT installed by the Build Tools and need to be copied from a machine that has the full Visual Studio installed into the location specified by the error.
Again, this is relatively easy, even though it is annoying.

Deploying from the Build Server/CD Server

At this point, your individual situation will dictate what you want to do next but currently, I have my build deploying a fixed-name instance to a test database server for use in our functional tests. This simply involves SqlPackage.exe, which can take the dacpac from the build and do a number of things with it, including publishing, scripting and reporting. See the SqlPackage.exe documentation for more details.

The use of build variables is up to you, but I have had success with the following command (in Team City, I am using the Command Line - Executable with parameters task):

C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\SqlPackage.exe
/Action:Publish
/SourceFile:%teamcity.build.checkoutDir%/DatabaseProj/bin/Release/DatabaseProj.dacpac
/TargetDatabaseName:Luke_Test
/TargetServerName:%targetDatabaseServer%
/TargetUser:luke
/TargetPassword:%targetDatabaseServerPassword%
/TargetTrustServerCertificate:True
/Variables:ExternalDatabase=ExternalDatabase
/Variables:Shared=master


The % items are Team City build variables that are injected at run time. Also, you must provide values for your variables EVEN IF they have default values. The publish should only run scripts against the main database project, so it won't attempt to create the logins that are defined in the ExternalDatabase or master references.

Versioning and deployment

I haven't got this far yet but basically you can package the dacpac and whatever else you need into a nuget package like anything else. Mine is only 3MB including the deployment scripts so it won't kill nuget. You can then use things like Octopus deploy to both version and move these packages to the correct environments.

I want to think carefully about variables for these deployments so this can all be as automatic as possible but I will try and update this article later if I remember.

Tuesday, 26 March 2019

OpenQA.Selenium.WebDriverException : The HTTP request to the remote WebDriver server for URL...

...timed out after 60 seconds
 
So here's the thing: we have functional tests that use Selenium and they have mostly been working fine for many months with the occasional random failure. In the last week, as sod's law would have it, I updated loads of projects to start referencing netstandard dlls and also migrated some web sites to web applications in .Net.

I pushed the code and, as expected, various tests were failing, so I had to look into them. Some were the tests that often failed for reasons unknown, so I introduced some more delays, WaitForDisplayed calls and what have you, but it wasn't helping - tests were still randomly failing.

Then loads of other projects started failing too, and most of these had some kind of timeout exception - either the one in the title or a similar one thrown while we were waiting for something to load.

Much head-scratching and fault-finding ensued, including trying to work out why the tests worked when we ran them locally but failed on the build server. They also worked fine when run on the build server via RDP in the NUnit GUI.

Well the problem was that ChromeDriver.exe was sandboxing the tests by default. This meant that calls to non-sandbox friendly methods like Window.Maximize() were failing. They failed, however, by not receiving a response and therefore eventually timing out - why they didn't return friendly errors I do not know. I also do not know why the failures were random instead of consistent in the past.

I don't know, but I do know this: we had to tell Chrome not to sandbox the tests, using code like the following in VB.net:

Dim options = New ChromeOptions
options.AddArgument("no-sandbox")
Driver = New ChromeDriver(driversPath, options, TimeSpan.FromMinutes(1))

or C#

var options = new ChromeOptions();
options.AddArgument("no-sandbox");
WebDriver = new ChromeDriver(driversPath, options, TimeSpan.FromMinutes(1));

Monday, 25 March 2019

StructureMap Type from assembly does not have an implementation

An annoying error where the error message made sense but the cause was not obvious:

System.TypeInitializationException: The type initializer for 'MyCallingAssembly' threw an exception. ---> System.TypeLoadException: Method 'Set' in type 'MyAssemblyName.SmartDbContext' from assembly 'MyAssemblyName.DataAccess, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' does not have an implementation

The type SmartDbContext inherits System.Data.Entity.DbContext, which DOES have a Set() method but for some reason, the StructureMap assembly scanner was erroring to say it didn't, and therefore SmartDbContext did not implement ISmartDbContext.Set().

Ignoring these two types in Scan() did not work.

The problem was simply that the consuming and the creating assemblies had different versions of EntityFramework installed. Not very different versions, but maybe it was a missing binding redirect or maybe the interface did break between versions.
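If you can't simply align the package versions, the usual alternative is a binding redirect in the consuming application's config - something roughly like this (the version range obviously depends on which EF versions you actually have installed):

<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="EntityFramework" publicKeyToken="b77a5c561934e089" culture="neutral" />
      <bindingRedirect oldVersion="0.0.0.0-6.0.0.0" newVersion="6.0.0.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>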

Anyway, I simply updated both versions of EF to the same version and the error went away.

I am having an Assembly Redirect nightmare week - any suggestions welcome.

Wednesday, 13 March 2019

StructureMap not automatically resolving ICollection

I had one of those annoying problems when you don't know enough about a library to know if you have coded it correctly or not!

I have registered a number of IFileServerManager types in StructureMap and was trying to inject them into the constructor of a FileManager as an ICollection<IFileServerManager>. StructureMap says this will automatically resolve according to this: http://structuremap.github.io/the-container/working-with-enumerable-types/

The problem is, it doesn't. At least not in my program. All the other types work but ICollection<IFileServerManager> doesn't, so the solution was simply to change the constructor injection to IList<IFileServerManager> instead and it worked fine.
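For reference, the working shape ended up roughly like this (FileManager and IFileServerManager are my own types; the registrations are whatever your existing Scan()/For() calls already do):

using System.Collections.Generic;

public class FileManager
{
    // StructureMap happily injects all registered IFileServerManager instances here as IList<T>,
    // whereas asking for ICollection<T> would not resolve for me
    private readonly IList<IFileServerManager> _fileServerManagers;

    public FileManager(IList<IFileServerManager> fileServerManagers)
    {
        _fileServerManagers = fileServerManagers;
    }
}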

Hmmmm

Tuesday, 12 March 2019

Creating Good Devops Channels on Team City

Actually, this applies to most CI/CD tooling but I am using Team City so let's start here. I assume you know basically what the CI/CD tool is doing and how you have some hierarchy of projects, jobs, tasks and channels.

For my post, I will call a Project something that links to a single code repository. A job can be described as a set of tasks that are run against a project so you might have one job per project or you might have e.g. CI and FullTest builds triggered differently. A task is a single unit of work.

The main end-game for Devops is to automate as much as possible: it reduces your deployment workload, it finds problems quickly, and it allows you to repeat the good stuff multiple times with almost no risk, while bad things are magnified quickly so they are easier to find.

1) Do your homework for a good tool

Probably most of the CI/CD tools can handle most project types, since almost all of them have the ability to run command-line, powershell, bash etc. to do anything that is not built-in. This is good, but it is also an excuse for the tool developers to be slower at releasing features - you want .Net support? Write it in a script! Some tools are much better than others for certain frameworks, so look into it; the time you might save could be measured in weeks if you can avoid writing loads of hard-to-maintain powershell scripts just to do something that should be simple.

2) Carefully think about project layout

A common question is whether to have a single giant project or to split things into smaller projects. Smaller projects build quicker, but then you have dependency problems and the latency of making a library change, waiting for a build, then making a change in the consuming project that might or might not work. Sure, you can temporarily import the library into your work area while testing stuff out, but that is asking for accidents to happen.

Build (and test) times are key so smaller is generally better. We have a fairly simple app that runs a good level of functional tests (50%ish - could be better) and that takes 20+ minutes to run. Currently, we don't have capacity to build feature branches so that is a long time to realise you broke something. Reducing the risk by separating libraries and running their own unit tests first should help with reducing build times.

3) Have a fast way to deploy new build agents

Many CI/CD tools only include a limited number of agents before you start paying the big dollars, but paying a few hundred pounds for the extra clout might be worth it when you are up against a deadline - and it is no use if you cannot deploy new agents quickly.

Team City, for instance, can deploy an agent using push, which is relatively painless except it can only auto-install a limited number of tools like DotCover. From scratch, installing a new agent onto Windows Server for a .net build (node, nunit, dotnet core etc.) takes a good few hours. The solution? Find a good virtual appliance or build your own and keep it somewhere safe.

We keep a snapshot of a VM on the cloud which can be used to run up new servers relatively quickly (5 minutes ish) and all it requires is occasional updates and a re-snapshot.

4) Make sure your steps work on remote agents

It is common to start by putting everything onto one server and getting it to work. We did this, and as soon as I added a new remote build agent, a load of the builds wouldn't run. Some of the causes were easy to fix, others took more effort. These causes include:
1) Hard-coded paths in the tasks. Use build agent parameters instead.
2) Resource code on the build server that is copied in to the build. This should be part of the project if it is necessary and pulled in from the source checkout.
3) Accessing netbios paths and other internal areas. Use nuget, myget, sftp or built-in tasks to ship code back from the agent. Team City supports automatic artifact uploads but this requires that you setup the artifacts correctly.
4) Tests that access a dev database. We do not want to make our database publicly accessible so we need to change the build to deploy a temp database to use for functional testing.
5) Any tests that are locale sensitive. We shouldn't have any but a few were assuming what the default locale was, which then failed on the build agent despite trying to set its locale correctly.

5) Keep your config in source control

Most of us hopefully plan for a web server going down and can redeploy quickly but what happens if our build server goes down? Link your CI/CD tool to source control so you get visibility of changes and can quickly redeploy a new instance. You can also make changes directly in the config files and get the build server to automatically apply them the next time it runs e.g. adding some new dlls to the unit test task.

6) Learn to read the build logs

Most build logs are extremely verbose and are not always easy to understand when an error occurs. This only comes with time but learning this can make your job a lot easier when debugging!

Thursday, 7 March 2019

Amazon Broke My Music (And how to fix it)

It astounds me that somebody can't seem to make a good music player. You know, one that simply allows you to play from an album or artist, that integrates well with Windows or Android (which I mostly use) and which doesn't screw you over.

I haven't found one.

I tried iTunes but that was completely proprietary and meant I had to re-rip all my CDs on the Macbook which has that famous problem that if it doesn't like a CD, it lowers to something like 2x reading and takes 10 minutes to rip. I also hate the interface. How long can Apple pretend that the Mac interface is anything other than obnoxious art that Mac fanbois pretend to love?

I haven't tried Google music but no-one likes them any more, right?

I ended up on the Amazon music player for no other reason than it seemed simple enough, you could buy music online and it was automatically added to your library and also, you could upload your own tracks from CD to add to your library. This was to be my downfall!

I have had many issues with the Amazon music player app:
1) When you exit, it is still running enough to cause Android to tell you that it's still running - why? No other app does that.
2) It keeps trying to make me buy their stupid Music Unlimited and the options on the dialog are "Yes please" or "Maybe later" rather than "bugger off and stop asking me".
3) If you press pause on the lock screen controls, the controls disappear soon afterwards, meaning you can't resume without unlocking the phone and going back into the app. I think this relates to...
4) It has a habit of not staying where you were in the app. Leave it alone for a while and it seems to take you away from the tracks you were playing.
5) If you search for something, it defaults to an online search at Amazon.com rather than assuming I am more likely to be trying to search my own music!

Anyway.......I received the email saying that Amazon couldn't be bothered to offer music storage any more, they only want to host the stuff you bought on Amazon so download your songs or they will be deleted!

I did so this evening and decided to import them into Windows Media Player (why not!) and saw a problem more clearly that I had thought was related to the Amazon music app: that terrible pain when it splits an album into (usually) two parts. They seem to have the same details (one might have different album art) but are otherwise the same - or they appear to be - so what gives?

Closer examination reveals that some of the MP3 tags don't match (particularly "Album artist" which is not displayed in WMP but is critical in the grouping) but more weirdly, some albums have some tracks in m4a and others in mp3 format, which WMP splits by default (it makes technical sense but not usability sense).

What the?

I then remembered the wonderful Amazon music upload process that must have been at least 5 years ago when I spent an eternity uploading my CD collection to Amazon. My rips were all in m4a because they were all ripped in iTunes and m4a was supposed to be a good audio format. But Amazon doesn't want to do anything as simple as uploading your crappy CD rips and having to store them, oh no. It will attempt to look up the track in their online database and then if it has it, it will give you its amazing mp3 file in place of your uploaded file i.e. a mix of m4a and mp3. This wasn't particularly noticeable on the Amazon app, it was probably clever enough to know it had butchered things. The other problem though was that its tagging was not the same as my tagging on the uploaded files. It had populated the Album Artist on its own tracks but my m4as did not have that set.

The real issue is that Amazon should NEVER have partially replaced album tracks leading to dissimilar files. They should have found an exact match or none at all but they were searching singles so some albums show tracks from 10 other compilation albums and not the actual album they came from (no offence Amazon but the album name was there in the tag - find it or give up!)

So fast-forward to me downloading everything, and WMP has even more trouble with these files: about 50% of my albums are now split and the only way to fix them is manually fixing the tags! Fortunately, you can find the tracks via WMP by right-clicking the album icon, and you can set the tags on multiple files at the same time, but what a pain.

Thanks for nothing Amazon. You had years to sort this out but never did!