Tuesday, 26 March 2013

Bootstrap carousel modifications

I ended up using a lot of Bootstrap to make my user interface look snazzy and modern, and realised that it could do everything I was previously using jQuery UI for, so I was able to remove jQuery UI entirely. One of the more interesting controls is the carousel, which performs some fairly basic (but nice) functionality: scrolling elements (usually images) across the screen in order.
Since the functionality is basic, you always get navigation anchors, and the elements loop from the end back to the beginning. That is fairly standard fare for image carousels, but not so useful for my "tutorial", which would look nice in a carousel but which I didn't want to loop and which required some extra code. Firstly, here is the code I have for the basic carousel (with my content removed for clarity):

<div id="myCarousel" class="carousel slide" data-interval="false">
    <div class="carousel-inner">
        <div class="active item" data-location="first"><!-- content --></div>
        <div class="item"><!-- content --></div>
        <div class="item" data-location="last"><!-- content --></div>
    </div>
    <a class="carousel-control left" href="#myCarousel" data-slide="prev" style="display: none;">&lsaquo;</a>
    <a class="carousel-control right" href="#myCarousel" data-slide="next">&rsaquo;</a>
</div>

Nothing much to note here; the classes and names are all default. However, I have added the data-location attribute to the first and last tabs and also hidden the left-hand navigation anchor by default.
I have then hooked into the JavaScript event "slid", which fires after a slide transition has occurred:

<script type="text/javascript">
(function ($) {
    $('#myCarousel')
        .carousel({ interval: false })
        .on('slid', function (event) {
            var currTab = $('.active', $(event.target));

            if (currTab.data('location') == 'first') {
                $('.carousel-control.left', $(event.target)).hide();
            } else if (currTab.data('location') == 'last') {
                $('.carousel-control.right', $(event.target)).hide();
            } else {
                $('.carousel-control', $(event.target)).show();
            }
        });
})(jQuery);
</script>
This is not massively neat but you can probably get the idea. If you hit the first or last tab, the relevant navigation anchor is hidden; otherwise both are shown. I have also set interval: false so that the elements only transition when a button is clicked, not automatically.

Tuesday, 19 March 2013

The pain of costing

My brother is a builder. He gets lots of people calling him up asking for "quotes" on work that they want doing. Of course, they want an accurate quote - a price that will remain the same if they go ahead - but for a job that might cost £100,000, this kind of quote is expensive to produce. My brother has to take a day or two off work (or worse still, use his evenings and weekends instead of spending time with his family) in order to produce something accurate.
You might think that he has other options. Firstly, he could tell the customer either "no", or that he can only produce a ballpark figure. The problem with that is that the customer will simply go elsewhere and find plenty of builders who will make up a price - knowing full well that they will not necessarily be able to do the job for that money.
He could work out a very crude figure, add 20% to allow for inaccuracies in the quote and run with that, but you cannot compete in building like that. Plenty of other builders (including those who don't pay VAT) will provide what look, on paper, like very good prices (again, with little chance of either having a quality job done, or having the job done at all!).
So he is caught between a rock and a hard place. The problem is that the customer is not usually educated in the building industry. To them, it is like calling up and asking how much a car costs. If they call 3 builders, 2 of those will have to produce a quote that won't even get used. Most people go for the lowest quote, again because they don't understand that there is more to building work than price, e.g. quality and reliability - but so what? If people don't understand, they go on blindly and then moan that their builder wasn't very good.
Why do I tell you this? Well, the exact same problem exists in the software world. Someone calls up and asks, "how much is a web site?" or worse, "how much is a super-duper financial system that supports 1 million customers and takes less than a second to do something?". You might as well ask how much a city costs; the answer is that it depends on about a million things.
There is a power play going on. The customer probably either has a budget or otherwise wants to know roughly how much something is before they decide whether they want the software. They don't want to tell the supplier their budget because they don't trust the supplier not to charge potentially more than they would otherwise charge once they know the budget. The supplier on the other hand doesn't want to fix pricing contractually until they know what is actually required. They also don't want to say nothing to the customer knowing they will simply go somewhere else and get the answer they want.
The two key points are embedded in that last paragraph: trust and specification. My last company was starting to formalise a Software Development Lifecycle (SDLC), which some of you possibly already have, but this was not just for the coding stage - it covered everything from initial contact through to delivery and sign-off. The key to the SDLC is both building trust with your customers (that you know what you are doing) and being able to explain, in a common language, what dictates the time and therefore cost of software development, and at what stage they might expect to know the price with more certainty.

At the same time, the commercial mind must appreciate that you cannot tell a potential customer that you can't give them a price until they've paid you to complete a detailed design. You should be able to offer them example prices for different types of systems, state the accuracy of that ballpark price (say +100%/-50%) and, more importantly, educate your customers on the kinds of things that make a solution more expensive and those that make it simpler. For example, most non-technical customers will not understand that using a downloaded theme as-is is much quicker and cheaper than changing everything around, fiddling with colours, placements etc. Again, you should be able to demonstrate to the customer the kind of extra time these changes might take and what that will cost. Even an idiot should understand the choice between an off-the-shelf product for, say, £5,000 and a fully customised solution for £10,000, even if they don't understand why that is!

Tuesday, 12 March 2013

Working Example - Azure Diagnostics using log4net and table storage


This has had me tearing my hair out - and I don't have much anyway. I have had so many problems with diagnostic logging in Azure that it has genuinely tempted me to ditch Platform as a Service, go back to Infrastructure as a Service and hand-craft all my boxes.
The problem is a cloud problem: how do you use traditional file-based logs on boxes that might be recycled, added or removed at any time? How do you merge logs from multiple instances into a single file? The obvious answer is that you can't really. Microsoft have, however, provided a mechanism using the DiagnosticMonitorTraceListener, which is designed to copy log data from the standard System.Diagnostics trace output into permanent Azure storage, and this works across multiple instances into a single table.
Before we start, I would seriously advise starting with the simplest example possible and building it up once you know it works. I fall foul of this so often when I start with a large example with too many variables.
You will need all of these sections so in no particular order...


WebRole.cs

This is the entry point for your role (it might be called something different) and you will need to modify OnStart to set up your transfer schedule:

public override bool OnStart()
{
    Trace.Listeners.Add(new DiagnosticMonitorTraceListener());
    string wadConnectionString = "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString";
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(wadConnectionString));
    RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = storageAccount.CreateRoleInstanceDiagnosticManager(RoleEnvironment.DeploymentId, RoleEnvironment.CurrentRoleInstance.Role.Name, RoleEnvironment.CurrentRoleInstance.Id);
    DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();
    config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1D);
    config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Error;
    config.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
    // Only transfer Application log events raised by the hostable web core
    config.WindowsEventLog.DataSources.Add("Application!*[System[Provider[@Name='HostableWebCore']]]");
    roleInstanceDiagnosticManager.SetCurrentConfiguration(config);
    return base.OnStart();
}

The first line might or might not be necessary, but it does allow you to trace information from within the web role itself. This is important because the web role runs under a different account from the IIS worker process and will have different permissions.
The rest of the code sets up the connection string for storage (Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString) and configures how often, and at what level, to transfer logs from trace into storage. Most examples use 1 minute and, in my case, I only want the Error traces (and above) transferred, not everything.
Notice I am also copying Windows event log entries, but only where the Provider is HostableWebCore, which is what the web application runs as.


web.config

Because of the order I made these changes in, I'm not sure whether adding the trace listener to web.config simply duplicates the one added in WebRole.cs or whether it is also required for the web application's own logging. It doesn't seem to do any harm, so here it is:

<system.diagnostics>
    <trace autoflush="true">
        <listeners>
            <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics" name="AzureDiagnostics">
                <filter type="" />
            </add>
        </listeners>
    </trace>
</system.diagnostics>

Nothing magical here, just the standard way to specify a trace listener. Because I copy the diagnostics assembly locally, I do NOT add the public key and culture information, which I believe would force web.config to look in the GAC for the assembly, which may or may not be present.


log4net configuration

This might be included in your web.config, but in my case it is in a separate file to keep web.config cleaner.

<!-- declared in the <configSections> element of web.config -->
<section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>

<log4net>
  <appender name="TraceAppender" type="MyAssemblyName.Security.AzureTraceAppender, MyAssemblyName">
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="ERROR"/>
    <appender-ref ref="TraceAppender"/>
  </root>
</log4net>
What you will notice is that I am NOT using the log4net TraceAppender directly but a sub-class called AzureTraceAppender. This code comes courtesy of Pete McEvoy, who noticed that the log4net TraceAppender logs everything via Trace.Write, which effectively outputs everything at INFO level and is therefore blocked by any filtering done by the DiagnosticMonitorTraceListener. His suggestion is the following sub-class in your project, which does what the TraceAppender should have done in the first place:


using System.Diagnostics;
using log4net.Appender;
using log4net.Core;

namespace MyAssemblyName.Security
{
    public class AzureTraceAppender : TraceAppender
    {
        protected override void Append(LoggingEvent loggingEvent)
        {
            var level = loggingEvent.Level;
            var message = RenderLoggingEvent(loggingEvent);
            if (level >= Level.Error)
                Trace.TraceError(message);
            else if (level >= Level.Warn)
                Trace.TraceWarning(message);
            else if (level >= Level.Info)
                Trace.TraceInformation(message);
            else
                Trace.Write(message);
            if (ImmediateFlush)
                Trace.Flush();
        }
    }
}


In your code, you can then simply use log4net as you normally would: log.Error("My Error");
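For completeness, here is a minimal sketch of what that looks like in a class, assuming the configuration above lives in an external file called log4net.config (the file name and class are illustrative, not from the original setup):

```
using System;
using System.IO;
using log4net;
using log4net.Config;

public class MyWorker
{
    // One static logger per class is the usual log4net pattern
    private static readonly ILog log = LogManager.GetLogger(typeof(MyWorker));

    public static void Configure()
    {
        // Load the external config file; use XmlConfigurator.Configure()
        // instead if the log4net section lives in web.config
        XmlConfigurator.ConfigureAndWatch(new FileInfo(
            Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "log4net.config")));
    }

    public void DoWork()
    {
        try
        {
            // ... work ...
        }
        catch (Exception ex)
        {
            // ERROR and above pass the root level filter and reach the appender
            log.Error("DoWork failed", ex);
        }
    }
}
```

Anything logged below ERROR is filtered out by the root level in the config, which matches the ScheduledTransferLogLevelFilter set in OnStart.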


If this all works as expected then, once deployed to Azure, your application should create a table called WADLogsTable in the table storage of the storage account it uses. To ensure early on that this works, you could add a simple Trace.TraceError("Starting role"); to your web role's OnStart method. Windows logs are copied into blob storage; I can't remember the exact container name but it is something obvious like wad-iislogs.
If it doesn't work, try using System.Diagnostics.Trace.TraceError() instead of log.Error(); this bypasses log4net and shows whether the problem is with your log4net config or with the DiagnosticMonitorTraceListener configuration. Also, check what your diagnostic connection string is set to in your cloud role configuration, especially after moving storage accounts!

Friday, 8 March 2013

basicHttpsBinding could not be found

I am undergoing the infuriatingly slow process of trying to get a WCF service working with client certificates and have used the above binding for the metadata so that I can add a service reference from a Visual Studio client. I found the solution here: http://blogs.msdn.com/b/praburaj/ and it relates to the fact that this (and a few other bindings) are only available in .NET 4.5. Despite my project being set to use 4.5, the web.config, by default, was targeting version 4.0. The fix is to adjust web.config as follows under system.web (obviously debug="true" is temporary):

<compilation debug="true" targetFramework="4.5" />
<httpRuntime targetFramework="4.5"/>

The full error was Configuration binding extension 'system.serviceModel/bindings/basicHttpsBinding' could not be found. Verify that this binding extension is properly registered in system.serviceModel/extensions/bindingExtensions and that it is spelled correctly.
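For reference, a minimal sketch of the kind of system.serviceModel section this error applies to (the service and contract names here are placeholders, not from my actual project):

```
<system.serviceModel>
  <services>
    <service name="MyNamespace.MyService">
      <!-- basicHttpsBinding is only registered when the runtime targets .NET 4.5 -->
      <endpoint address="" binding="basicHttpsBinding" contract="MyNamespace.IMyService" />
      <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <serviceMetadata httpsGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```

With the targetFramework fix in place, this parses and the HTTPS metadata endpoint becomes reachable from the Add Service Reference dialog.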

Wednesday, 6 March 2013

Command line arguments not allowed during New User Setup

I am using SpiderOak for backup and I like the idea. It is dead easy to set up and use, and you get 2GB for free, so you can essentially try before you buy. The only problem you sometimes encounter is this dreaded error, which sounds obvious to fix but isn't always.

Anyway, I have a headless Linux box running SpiderOak and when I log in and run it, it works fine but when it runs from cron.daily, I get the above error emailed to me.

The actual issue is easy once explained. Cron (in my case, and by default) runs as root. When I log into the box, I log in as me. The error occurs because root looks for a .SpiderOak directory in its home folder, but there isn't one - I set SpiderOak up, so it lives in my home folder. EDIT: apparently in later versions the directory is called .config

All I've done is sudo cp the folder into /root and let root find it.

Beware that if I log in again and change my folder selection, I need to re-copy the folder into root's home. You could set up anacron to run as another user, but that was a bit of a ball-ache so I'd rather do it this way.
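The copy step could be scripted so it handles both the old and new directory names; this is just a sketch, and the paths you pass in (your home directory and root's) are assumptions you will need to adjust:

```shell
copy_spideroak_config() {
    # $1 = home directory of the user who configured SpiderOak
    # $2 = home directory of the user cron runs as (e.g. /root)
    for d in .SpiderOak .config/SpiderOak; do
        if [ -d "$1/$d" ]; then
            # Recreate the parent directory (e.g. .config) then copy the settings
            mkdir -p "$2/$(dirname "$d")"
            cp -r "$1/$d" "$2/$(dirname "$d")/"
        fi
    done
}

# Example (run as root): copy_spideroak_config /home/me /root
```

Re-running this after changing the folder selection keeps root's copy in sync.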