Wednesday, 29 May 2013

The compute emulator had an error: Can't locate service model.

Another annoying message from Start-AzureEmulator, but this time very much my fault. I was trying to get something working locally and was mucking around with the various config files to make it work. In the end, the problem was that I had removed an SSL endpoint and added a non-SSL (port 80) one instead, but hadn't updated the endpoint reference under the site/bindings element in ServiceDefinition.csdef.

That wasn't the end of the problem with PHP Azure but that is another story....

A parameter cannot be found that matches parameter name 'Subscription'

Never mind the winter of discontent or the night of the long knives; today has been the PowerShell pain-in-the-a**e day. I think I have spent most of the day trying to fix one thing while breaking another. It all started with the suspicion that, although the publish of my PHP Azure project went without error, it wasn't actually updating the live site, certainly not with the newly changed files.

Anyway, one of the errors I ended up with was the above when attempting to run my (previously working) PowerShell deployment script. It turns out that the updated version of the Azure PowerShell cmdlets has removed this parameter, something to do with a new way of deploying. What you need to do now is the following:

Select-AzureSubscription SubscriptionName
Publish-AzureServiceProject etc....

Tuesday, 21 May 2013

MacBook Pro very poor wifi reception

I saw this problem a while back and I'm not sure whether I blogged it, so here goes.

I bought a MacBook Pro a while back because "Macs just work" and the battery life is very good. However, I was having problems connecting to the WiFi: the router would appear and disappear from the list, and trying to connect caused all manner of errors and timeouts. I had a D-Link wireless router and, despite these being the cheap cable-company ones, they should be more than adequate, especially sitting across the room from the laptop. The MacBook has a lot of metal on it and the antenna is alongside the screen, so reception is never going to be great. No doubt the Apple AirPort is cranked so that this issue is not so obvious, and to give Apple staff a chance to sell you something you shouldn't need!

I called up Virgin Media and complained and they sent me a new router since the one I had was a good 3 or 4 years old and might have been dying. I had the same problem.

Fortunately, the new wireless router was supported by a free, open-source router firmware (available for many types of hardware). The feature I was particularly taken with was the adjustable antenna power. I installed the firmware, bumped up the power, and all was again fine in MacBook land.

Anyway, I have just been upgraded to a Virgin Super Hub, which is a cable modem and wireless router combined and which allows 50Mb downloads. Not really thinking, I tried to get the MacBook to connect and had problems again, eventually remembering my previous troubles. It seems even the new Super Hub is not powerful enough for a MacBook Pro. In the end, I plugged the other wireless router back into the ethernet on the Super Hub, so I have two wireless access points, but at least I can connect the MacBook. I might disable the wireless on the new Hub. Seems daft, but the local airwaves are already polluted with WiFi anyway.

Friday, 17 May 2013

Debugging SQL Server Connection Problems

After a bit of a nightmare last night trying to get a SQL Server connection working, I decided it was a good time for a debugging blog post. You often get all manner of errors when you cannot connect to SQL Server, and they are not always strictly accurate in their description. I have had the usual "error: 40 - could not open a connection" (which mentions Named Pipes), but also "invalid username and/or password" when I knew that was not the case. Anyway, here is a fairly complete list of things to check when you cannot connect to SQL Server. Retry the connection after each thing you change; some of these will already be correct.

  1. Sanity check. Ensure the server is connected to the network and that you can ping it from your client. You might need to allow ICMP Echo Request through the firewall, although it is often enabled by default.
  2. Run up SQL Server Configuration Manager (in the Start Menu under SQL Server/Configuration Tools) and work through the following.
  3. First click on the SQL Server Services tree item and ensure that the SQL Server service is started. If not, it might be set to manual or disabled, or there may be some other problem, which will usually be logged in the Windows Event Viewer if the service fails to start. (You do not need the Browser service to be running; it just broadcasts your server to the network, which makes it easier to find from various database applications.)
  4. Secondly, click on the SQL Server Network Configuration tree item and then the "Protocols for ..." item for your instance (in most cases there will only be one). If you are connecting from another machine, TCP/IP must be enabled; sometimes it is disabled by default for security reasons. Right-click it and choose "Enable" if it is disabled.
  5. After this, right-click TCP/IP, choose Properties and click the IP Addresses tab. The entry under IPAll should have nothing in TCP Dynamic Ports and 1433 (or another port if you prefer) in the TCP Port box. 1433 is the standard port for SQL Server.
  6. Open SQL Server Management Studio on the server and connect locally using an sa account. Right-click the server in Object Explorer, choose Properties, then select the Security page and check that the server allows Windows and/or SQL Server authentication, depending on what you want to use (e.g. if you are trying to connect with a username and password but the server is set to Windows authentication only, it will not let you connect, even as sa).
  7. Open the security folder at the server level in Management Studio and open Logins. Ensure that this lists the account you are trying to log in with and that the account is enabled. At this point, these logins only mean that you can connect to the server, not that the login has any database access; that is a different issue, related to roles, database users and permissions. If your problem is permission-related, the error will be more specific (e.g. cannot open database X; it does not exist or you do not have permission).
  8. Ensure that the Windows firewall has TCP port 1433 open (or whatever port you have used) and that the rule's scope is correct. For instance, it might be restricted to specific IP addresses or remote ports. If your server is behind a corporate firewall, it is often safe to reduce the restrictions on the rule, although you can always restrict connections to the local subnet. If you are using SQL Server Browser, you will also need UDP port 1434 open.
  9. Ensure that the client machine doesn't have outgoing connections firewalled. This is not the default for Windows but it still might be a problem if someone has switched it on (outgoing connections are usually allowed unless explicitly blocked).
  10. If you are having problems with Windows logins, ensure that the SQL Server machine has been joined to your domain, so it can find Active Directory and has permission to check the login credentials.
  11. If you are having problems with a SQL Server login, create a new login on the server to rule out a forgotten password or something else unusual that has been done to the login or its permissions.
  12. If you are logging in to SQL Server Express, your connection usually needs to use machinename\SQLEXPRESS as the server name, since Express is installed as a named instance. On full SQL Server there is usually a default instance, in which case you can connect using just the machine name (or "." locally). You can see the instance name in Object Explorer when you connect in Management Studio.
  13. If you are logging in using a name rather than an IP address, make sure that DNS is resolving the name to the right IP address. You can use ping on the command line to see what IP address it resolves to. If this is wrong, you might have an entry in your hosts file (windows\system32\drivers\etc\hosts) pointing to a hard-coded IP address; otherwise your DNS server needs adjusting.
  14. If your server and your clients are not on the same subnet, traffic must be routed between the two subnets; this usually needs to be configured by your IT department.
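Several of the network-level checks above (steps 1, 5 and 8) can be verified in one go with a quick TCP probe of the SQL Server port. Here is a minimal sketch in Python; "db-server" is a placeholder host name, and the default port 1433 is assumed:

```python
import socket

def can_reach_sql_server(host, port=1433, timeout=3):
    """Return True if a TCP connection to host:port succeeds.

    A successful connect shows the server is up, TCP/IP is enabled,
    the port is configured and the firewall allows the connection.
    It does NOT prove your login will work; that is a separate step.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "db-server" is a placeholder; use your own server name or IP address.
if can_reach_sql_server("db-server"):
    print("TCP connection OK - check logins and permissions next")
else:
    print("Cannot reach port 1433 - work through the checklist above")
```

If the probe succeeds but logins still fail, the problem is almost certainly at step 6 onwards (authentication mode, logins, permissions) rather than the network.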

Thursday, 16 May 2013

When customers spend way too much on software....

If you work in anything related to IT, you have probably experienced the dreaded "Oh? You work in computers? Can you fix my printer?" or something very similar. To the uninitiated, IT is a single entity: you either know about it or you don't.

The problem with this is that most people now think that, since they have an email address and use a few computer programs, they are "in" IT, and that buying something in, such as a new web site or even, God forbid, a business application, is easy, right? These insecure people try to cover their ignorance when discussing backup policies, scalability and choice of hardware by pretending that they understand, and will chastise the expert when their idea for a button is apparently going to cost another £1000 for reasons that are not immediately obvious. On the other side, a supplier will gladly regale the customer with buzzwords and strange marketing names for standard things, or bedazzle them with proprietary systems that add no value, cost too much and lock the customer in.

The problem we have now, though, is that most companies realise they need some kind of IT, at very minimum a web site, and so all these people are let loose on the technical suppliers who offer these kinds of services, and there don't seem to be any hard and fast rules about how to go about it effectively. This is, of course, very similar to getting a builder or plumber in, but the stakes are often much higher. Spending a few hundred more on a plumber is annoying, but spending thousands on a piece of business software which, at best, might add no value to the business and, at worst, might make it less efficient and add overhead, can be the death of a company. This is pressure that many of these 'buyers' don't seem able to manage properly, and they end up putting unrealistic expectations on their IT suppliers. I know one company who spent £200K-ish on what was basically a conference booking database which, to be honest, could have been written from scratch for much less, but this is what happens when the stakes are high and the expertise (possibly even of the supplier) is low.

So what do we do about it?

Well, firstly, we could employ someone with experience to manage these projects: people who know the lingo, who can tell a supplier when he is wrong, and who can translate in both directions between supplier and customer. Of course, this might not work very well, because you then need this consultant to be trustworthy and, naturally, they are likely to tell you why they are the best choice, even if they are a complete chancer! It only moves the problem, it doesn't solve it, sadly (unless you happen to already know a capable person, of course).

We can buy off-the-shelf products. This is not as bad as it sounds. One thing I've noticed is the tendency for companies not to accept the built-in functionality of something and therefore to feel the need to create bespoke, i.e. expensive, software. With off-the-shelf software you will usually know, or be able to find out, what functionality is included; you can try the software out and see whether it suits your staff; and the pricing is often very straightforward. The same goes for web sites, since you can now buy basic off-the-shelf sites with minimal customisation for not very much money. It might be far cheaper overall to miss out on some functionality but buy something cheap and standard.

One other issue that is often missed is that unless a company understands and has optimised its own business processes, it is almost impossible to support them effectively in software. The simplest software always mirrors the most optimal processes, whereas scatterbrain processes (possibly implemented by control-freak managers for their own ends, or evolved over time so that none of the current staff even know why things work that way) create complex software which will be costly and almost certainly riddled with bugs. Best case, it works fine but is impossible to modify later when the customer changes their mind.

I am currently of the opinion that the software lifecycle is only as strong as its weakest link. While there are uninitiated customers, untrained suppliers, egotists, poor communicators and poor business people, all of these chains are, by definition, fragile and potentially expensive.

The answer, therefore, is to remove people as much as possible from the process. This means specifications and documentation. It means that software is not changed until the changes are agreed and signed off. It means that a design costs x and a re-design costs y. It means that changes are managed by process ABC and, most importantly, that these procedures and processes are continually refined and adjusted to suit. No-one wants a one-line text modification to take 2 weeks of paperwork to sign off but, equally, it is foolish, in this poisonous business that we're in, to "just make the change" when a customer can then come back and claim they never asked for it.

This requires suppliers to manage the process, because they are the domain experts. These processes should be explained up-front, along with day rates or unit rates as appropriate, and I would also suggest a few examples of where a poorly managed project, from the customer's point of view, costs double or more that of a well-managed one, so the customer can appreciate why and how indecisiveness or pickiness is expensive. At this stage, if the customer knows that something is likely to change often, they can decide up front to build in the functionality rather than manually changing things every time the change is required.

It is time that software suppliers upped the ante. It is time to stop acting like people sitting in garden sheds and to learn about quality assurance, communication, appropriate contracts and general business sense; otherwise I fear their stress levels will continue to increase, along with the number of businesses that fold because they cannot charge the amount of money they should for a project that they know should have cost much less.

Using hashing AND symmetric encryption

I'm talking about using both on the same piece of data! What?

Well, we sometimes think we must choose one or the other, but in some cases we need both. Let me explain; it all comes down to the way symmetric encryption works and the way hashing has to work.

Hashing a value MUST produce a consistent result. Why? Because all you can use a hash for is to compare with another hash of the same value; if hashing were not consistent, you could never tell whether two values match. This consistency is what makes hashing useful, but sadly it is also its downfall: since I know a hash algorithm is consistent, I can hash candidate values and compare the results with a hash I am attacking. If I find a match, I know what data created the hash!

Symmetric encryption, which is designed to be decrypted with the same (or a directly related) key as the encryption key, would be weak if it were consistent, since this would allow attacks similar to those against hashes. For this reason, a good implementation will create a random initialisation vector (IV) and use it to seed the encryption. That means encrypting the same thing twice produces two different results. In this case, however, we don't care, because we store the IV along with the encrypted data and use it to seed the decryption process to recover the original data.

Now take a scenario: imagine you have encrypted an email address in your database against each user, and someone wants to register a new account. You want to ensure the email address is not already taken; what do you do? You cannot encrypt the given email and compare it in the database, because with IV-seeded symmetric encryption (which you need, since you might want to decrypt the original email and use it somehow) encrypting it again produces a different value. The other horrible alternative is to fetch every user row and iteratively compare the given email against the decrypted version from each record. As well as being horribly inefficient (and not scalable), this exposes decrypted data into memory, which is another risk.

The solution in this case is to do both: symmetric encryption gives you the option of decrypting and using the data, while hashing gives you the option of a simple WHERE clause in your database to check for duplicates. So, you see, there is a place for both. Is it more secure than just using a non-initialised symmetric algorithm that produces the same output for a given input? I'm not certain, but I think slightly, because the hashes never have to leave the database, whereas the symmetric data could be exposed (accidentally or otherwise) and, used with an IV, is less likely to be cracked.
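A minimal sketch of the store-both pattern, in Python. The lookup hash uses HMAC-SHA256 (keyed, so an attacker without the key cannot precompute hashes); the "encryption" here is a deliberately simple XOR keystream seeded with a random IV, standing in for a real cipher such as AES purely to show the behaviour described above: the same plaintext encrypts to different ciphertexts, while the hash stays constant. The key values are illustrative only.

```python
import hashlib
import hmac
import os

HASH_KEY = b"lookup-hash-key"   # illustrative only; keep real keys in a vault
ENC_KEY = b"encryption-key"

def lookup_hash(email: str) -> bytes:
    """Deterministic keyed hash: same input always gives the same output,
    so it can be stored in a column and used in a WHERE clause."""
    return hmac.new(HASH_KEY, email.lower().encode(), hashlib.sha256).digest()

def _keystream(key: bytes, iv: bytes, length: int) -> bytes:
    """Toy keystream built from SHA-256 in counter mode. NOT a real cipher;
    it only illustrates how a random IV makes output non-deterministic."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(plaintext: bytes, key: bytes = ENC_KEY) -> bytes:
    iv = os.urandom(16)                      # random IV: different every call
    ks = _keystream(key, iv, len(plaintext))
    return iv + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(blob: bytes, key: bytes = ENC_KEY) -> bytes:
    iv, ciphertext = blob[:16], blob[16:]    # the IV is stored with the data
    ks = _keystream(key, iv, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

email = "user@example.com"
c1, c2 = encrypt(email.encode()), encrypt(email.encode())
assert c1 != c2                              # encryption is non-deterministic
assert decrypt(c1) == decrypt(c2) == email.encode()
assert lookup_hash(email) == lookup_hash("USER@example.com")  # hash is stable
```

In the database you would store both columns per user: the hash for the duplicate check and the IV-plus-ciphertext blob for when you need the address back.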

Monday, 13 May 2013

What is wrong with our software?

Every time I read articles like this, I shiver at the way in which, some 20-odd years after the internet began, we still allow systems to be broken by basic insecurities.

I am not talking about zero-day exploits, weaknesses in operating systems, DNS poisoning or SSL attacks, most of which we could neither protect against nor, in some cases, even understand. What we have seen in this recent attack is another system failing on the very basics: the gang was able to hack into the banks involved, modify the database unseen, and then carry out a mass cash-withdrawal against only 12 bank accounts without any attack detection occurring, costing the banks (read: bank customers or investors) in excess of $45M.

This is not an edge case. This is not the same scenario as wondering whether 5 new accounts created from a single IP constitute an attack; this was 3,000 cash withdrawals, made in New York (and other cities around the world), against just 12 credit accounts. Accounts that had already had their withdrawal limits raised to unlimited.

Firstly, the fact that there are unlimited-withdrawal cards around at all is worrying but, given that this might be the case, you would expect a set of standard, obvious security controls to ensure it didn't lead to abuse, as it did here. For instance, you could have a mechanism which says that any withdrawal over, say, $500 must be authorised directly by the bank, not by any intermediary; in this day and age, that shouldn't take long. It should also include logic that will not allow the card to be used again within, say, 5 minutes at another ATM; even the London Boris Bikes will not let you take another bike out until 5 minutes after you return the previous one! In this case, the cards were mag-stripe cards, something still in use (very sadly!) in America. America, take note! But even so, why weren't the cards locked down to a specific country? If the real cards were chip-and-PIN enabled, did the database record this in a way that would cause a mag-stripe withdrawal to be denied? It's one thing to pay a merchant using mag-stripe, where you at least have a chance of identifying the culprits, but an ATM?
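The controls described above (an authorisation threshold, a minimum gap between withdrawals on the same card, and a country lock) amount to a few lines of logic. A sketch in Python, using the $500 threshold and 5-minute gap from the text as illustrative values:

```python
from datetime import datetime, timedelta

AUTH_THRESHOLD = 500              # withdrawals above this need direct bank authorisation
MIN_GAP = timedelta(minutes=5)    # minimum time between withdrawals per card

last_withdrawal = {}              # card id -> time of last accepted withdrawal

def check_withdrawal(card_id, amount, when, issuing_country, atm_country):
    """Return a decision for an ATM withdrawal request.

    Illustrative rules only: deny rapid re-use of a card, deny use
    outside the card's issuing country, refer large amounts to the bank.
    """
    previous = last_withdrawal.get(card_id)
    if previous is not None and when - previous < MIN_GAP:
        return "deny: card used again too soon"
    if atm_country != issuing_country:
        return "deny: card locked to issuing country"
    last_withdrawal[card_id] = when
    if amount > AUTH_THRESHOLD:
        return "refer: bank must authorise directly"
    return "allow"

now = datetime(2013, 5, 13, 12, 0)
print(check_withdrawal("card1", 200, now, "US", "US"))               # allow
print(check_withdrawal("card1", 200, now + timedelta(minutes=2),
                       "US", "US"))                                  # deny: too soon
```

A real system would obviously be far more sophisticated, but the point stands: none of these checks is hard, and any of them would have blunted the attack described above.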

Going back to the original hack, it does beg the question of how any bank detects intrusion into its systems. With something as massively valuable as an unlimited-withdrawal card, why are these records not inside a protected database vault with limited access? Why were the database changes not detected or, if the bank's own system was used to make them, why were they not logged and verified before being actioned?

So many questions, but I suspect the answers are obvious, as they usually are: "process" shortcomings, lack of responsibility (both before and after the attack), probably a combination of ignorance and laziness, and security not being given a high enough priority. All of these are OWASP Top-10 type issues, and most of the defences could be thought up by any half-decently trained software developer, but I wonder how much longer this will go on before governments start holding companies criminally accountable for gross security breaches, even if it is only money that is stolen.

Friday, 10 May 2013

Chrome always redirecting http site to https

I have killed off an old domain and redirected the old name to our new server and new domain name: PicturePin used to host one of our live/test sites, whereas the new domain is for a marketing web site.

After updating DNS, I was trying to check that the update had worked by typing in the old domain and expecting it to redirect to the new one (with the new web server rewriting the URL to match). It worked fine in Firefox, IE went to a landing page, and Chrome would always insist on rewriting the URL to https before accessing the site (and then failing for certificate reasons). The IE issue was down to flaky, not-quite-updated DNS, but the Chrome issue was puzzling.

What I eventually found out was that, as well as caching DNS results, Chrome also caches sites that have sent the HTTP Strict Transport Security (HSTS) header, which tells browsers to insist that all access to the site is carried out over https only and never http. In my case, the old name used to point to a server that had HSTS enabled, but the new one does not. The HSTS entry is cached against the host name, not the IP address, so Chrome didn't notice that the name now pointed to a new server without HSTS.

The solution was to go to chrome://net-internals/#hsts in Chrome and enter the domain name in the "Delete domain" box to remove the cached entry. This stops Chrome from automatically forwarding you to https!

Thursday, 9 May 2013

Who or what is Application Pool Identity?

In IIS, there is often some confusion as to the identities that application pools can take, which determine the permissions you need to set for folder access if, for instance, your web site writes to a log file or stores files on the file system.

Historically, there was a fixed set of identities to choose from, including a specific user but, since most people are lazy, they tended to opt for Network Service, which would always be available and which had pretty good access to everything. You could also choose to use Windows Integrated authentication in your site, which could impersonate the user accessing your web app.

In IIS 7, however, the folks at Microsoft added another (now default) identity for an application pool, "ApplicationPoolIdentity". This is, very simply, a virtual account created with the same name as the application pool itself, and it gives you very fine-grained control over who has access to what, especially on a server that hosts multiple sites.

So, if your application pool is called "My App Pool", the account will be "IIS AppPool\My App Pool". To add permissions to folders, type that account name into the permissions dialog (it will not appear in the normal browse list, so set the location to the local machine and type it in full) and give it the appropriate permissions.

Warning: Null value is eliminated by an aggregate or other SET operation.

You might see this when you execute some SQL against MS SQL Server.

The simple answer: you are aggregating across multiple rows where the column you are using contains NULLs, so the result might not be what you expect. Use COALESCE to state explicitly what you want to happen.

The longer answer: imagine a user account table with a rowid primary key and two further columns, the datetime of the last login and the number of logins. Now suppose you want to average the number of logins per user. You write the following:

SELECT AVG(nologins) from userdata

And when you run this, you get the warning above, which makes complete sense. When you average a NULL, what do you expect? We might assume NULL is like zero, but of course it isn't necessarily; it literally means the value is undefined. What actually happens is that rows with a NULL value are EXCLUDED from the function. For SUM this is probably what you would expect, but AVG divides by the number of rows it actually counts, so if some rows are excluded, the result changes.

In my case, a NULL nologins means the user has never logged in, so it is equivalent to zero (and I could have used that as the column default!), but here I need to modify my select to the following:

SELECT AVG(COALESCE(nologins,0)) from userdata

This tells SQL Server that I want a zero when it finds a NULL, rather than an excluded row. In my small set of data, the first select statement returns 6 (if I ignore the warning), whereas the second returns 5!
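The same arithmetic can be reproduced outside SQL to see exactly why the two results differ. The values below are illustrative (not the actual table data, which isn't shown here), chosen to give the same 6-versus-5 outcome:

```python
# nologins column for six users; None marks users who have never logged in
nologins = [None, 10, 8, 6, 4, 2]

# AVG(nologins): SQL excludes the NULL rows, so it divides by 5, not 6
non_null = [n for n in nologins if n is not None]
avg_excluding_nulls = sum(non_null) / len(non_null)

# AVG(COALESCE(nologins, 0)): NULLs become 0, so it divides by all 6 rows
coalesced = [0 if n is None else n for n in nologins]
avg_with_coalesce = sum(coalesced) / len(coalesced)

print(avg_excluding_nulls)   # 6.0 - NULL rows silently dropped
print(avg_with_coalesce)     # 5.0 - never-logged-in users count as zero
```

The sum is 30 either way; only the denominator changes, which is exactly what the warning is trying to tell you.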

You might not even see the warning sometimes, so take a little extra time when using aggregate functions to think about what you want the result to be.

Tuesday, 7 May 2013

Backing Up SQL Azure Databases to offsite

I have just been looking at doing something fairly routine: automating backups from SQL Azure so that they can be stored off-site. This is partly for resilience, partly to reduce storage costs, and partly to be able to use the backup for reporting without loading up the live database.

There is a surprising lack of such tools for SQL Azure, but I found the start of a helpful project, SQLDatabaseBackup, which I forked and modified to perform the backup from Task Scheduler on a Windows server.

The project on GitHub only does the first bit, which is to copy the database and then export it to a bacpac that can be stored and/or restored onto another machine.

IMPORTANT: you should understand the trade-off between copy-then-export and export-only, especially as it relates to charges.

Copy/Export - Since export does not guarantee transactional consistency of your database, you need to use the copy functionality in Azure first. Copy is much slower than export (perhaps 20 or 30 times slower); even my few-hundred-kilobyte database took over a minute to "copy", which is done via SQL with CREATE DATABASE ... AS COPY OF... Also, for every new database you create, you will be charged a minimum of a whole day of database hosting (even if it only exists for a few minutes), as per the Azure pricing. If you copy, say, 12 or 24 times per day, you would be charged for 12 or 24 database-days PER DAY, the amount depending on the size of your database. This quickly becomes very expensive, especially since your database copies are largely transient.

Export Only - Exporting without copying first is much quicker, but leaves you with something that is potentially transactionally inconsistent, meaning the export might contain partially applied transactions. Because the export is made directly to blob storage, you are only charged for the storage of the export and not for another database. Blob storage is charged as an average over the month (and is very cheap anyway), so storing something while you download it costs next to nothing.

In my fork of SQLDatabaseBackup, I copy the blob down to the local file system and then delete the blob; I then drop the current local database copy and import the one I downloaded. Since the blob name changes each time it is created, the files all end up in c:\temp and can be kept, backed up or deleted as required. The whole process takes perhaps 10 seconds, although my database is small (~700Kb), so I don't know how the time scales with database size.

You need to decide what you actually need the backups for. Since SQL Azure keeps full transaction logs, it might be enough to copy/export once per day (effectively doubling your database charges, unfortunately) and rely on the cloud's resilience for anything in between, bearing in mind the data centres have backup power supplies, redundancy and everything else. The other way to tackle this is as a basic cost/flexibility trade-off: if you are asked for multiple daily backups, you might point out that this comes at a large cost. For a bank or suchlike, the extra cost is peanuts compared with the requirement for backups; otherwise, live with what old-school applications did, where checks and validation protected against some stupidity, and training and disciplinaries dealt with the rest!

If, like me, the purpose is largely reporting, you might be able to live with the odd inconsistent row and simply export directly to blob storage.

Microsoft have mentioned on some forums an intention to create something more usable and less expensive for regular backups, and imho they need it urgently if they expect people to use Azure for heavy business use. There are no dates or confirmations about when this will happen, so don't hold your breath.

Friday, 3 May 2013

error .popover() is not a function

I had this problem on a page that was using jQuery and Bootstrap to enable the Bootstrap popover. This was confusing, since I had used the function elsewhere in the project with the same script includes.

After getting some expert help and a second pair of eyes on the source, the problem turned out to be that jQuery and Bootstrap were included explicitly by me, but ASP was then including another link to jQuery as part of its validation code. This second include threw away the Bootstrap extensions to jQuery, causing the error. I only worked out that it was included for validation because it sat alongside other code that was obviously validation-related.

I dug into System.Web.dll with ILSpy to find out what was happening in code and why this additional jQuery is included. The culprit is the method OnPreRender in System.Web.UI.WebControls.BaseValidator which, amongst other things, calls RegisterUnobtrusiveScript(), which in turn registers the jQuery include. The only way to prevent this script being registered (assuming you still want the validators enabled client-side) is to make the IsUnobtrusive check return false.

Looking at IsUnobtrusive, it is clear that the only thing we can change is a property called Page.UnobtrusiveValidationMode, something I had not heard of. Anyway, it was easy enough to set in my (master) page, and you could set it in an individual page if required, by setting UnobtrusiveValidationMode to UnobtrusiveValidationMode.None.

Setting the mode to None means you don't want the validation and its dependents added in an "unobtrusive" way, i.e. in a way that doesn't require the site author to add other dependencies just to make something work out of the box. This simple change meant that the validators didn't add their jQuery include, so it no longer redefined the $ symbol and broke Bootstrap.