Thursday, 27 June 2013

Base64.encodeToString adds newline character

In another installment of the weird decisions that were taken when Java was written.....

The class Base64 is useful when doing web work. As you will know, sending data in a web request, particularly in a URL or a URL-encoded form, is fraught with problems because the text tokens used to delimit fields might actually appear in your data. Binary is particularly problematic. Enter Base64 encoding, which uses alphanumeric characters plus (usually) +, / and the = padding character to represent binary data. Lovely.

Anyway, using Base64.encodeToString() in Java happens to do something strange. The method requires a flag to tell it how to encode and, being naive, I decided to use Base64.DEFAULT, thinking it would just do that. But no! In the infinite wisdom of someone, somewhere, the default flag adds a newline character (\n) to the end of every string it encodes, so you end up with something like AB65SS=\n which is almost certainly not something you want.

You have to use the flag NO_WRAP to leave out this newline. Here's an idea, Java authors: why not have DEFAULT do what 99% of people would expect and produce a vanilla encoding? If people need a newline for some application, create a flag for that instead.
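For what it's worth, the same kind of trap exists in the desktop JVM's java.util.Base64 (the post above is about android.util.Base64, so this is an analogy, not the same class): the split there is between the basic encoder, which never inserts line breaks, and the MIME encoder, which wraps at 76 characters. A small sketch:

```java
import java.util.Base64;

public class Base64Wrap {
    // true if the encoder inserted any line break characters
    static boolean hasLineBreaks(String s) {
        return s.contains("\r") || s.contains("\n");
    }

    public static void main(String[] args) {
        byte[] data = new byte[60]; // encodes to 80 chars, past the 76-char MIME line limit
        String basic = Base64.getEncoder().encodeToString(data);
        String mime = Base64.getMimeEncoder().encodeToString(data);
        System.out.println(hasLineBreaks(basic)); // false - basic encoder never wraps
        System.out.println(hasLineBreaks(mime));  // true - MIME encoder wraps with \r\n
    }
}
```

If the receiving end does not expect MIME line wrapping (and a URL-encoded form certainly doesn't), the unwrapped encoder is almost always the one you want.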

Missing POST body in Android HttpsURLConnection request

This was driving me nuts. I had an app calling a web service. It all worked fine. I added a new web service method and changed my app to call the new method. All of a sudden, nothing worked any more.
I knew the web service was OK because I could call it from a test harness. I knew I hadn't changed any code and I knew that it used to work fine. The failure was really early on in the web service so I knew the web service was being called but that it couldn't find any post variables and returned an error (I returned an HTTP 409 so I knew it was definitely coming from there).
Basically, the HttpsURLConnection which is a subclass of HttpURLConnection is a funny beast. I can see why people still swear by the Apache web client despite Android "advising" that people use the Java class.
So my first piece of code was this:

conn = (HttpsURLConnection)url2.openConnection();
conn.setRequestProperty("Accept-Charset", CHARSET);
conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded;charset=" + CHARSET);

Now the first weird thing is that calling openConnection does NOT actually open a connection, it merely starts the process. I know this because if you call setDoOutput() on an already connected connection, it throws an IllegalStateException, and this was happening to me. It is daft to call it openConnection if it doesn't open one. They should deprecate this method and call it createConnection().

Anyway, the exception meant that the connection was open. How could it be, if openConnection does not connect and the immediately following line throws the exception? After fannying with the debugger and trying to attach source, sure enough, conn.connected in the debugger window showed false. It was not connected (as expected), but setDoOutput effectively just says if (connected) throw IllegalStateException.

I tried to attach a proxy to see at what point a connection was established but the error occurs too soon to see what was happening. Also, in my code, I was catching the error and carrying on when I should have been returning an error, which caused another error further on.

Eventually, it dawned on me. There was no proxy stripping out my POST body: I had added conn.getResponseCode() to the DEBUGGER watch window. Naturally, this is evaluated after every line of code is stepped over. What it does under the covers is open the connection, if not already open, and ask for a response code. Effectively, this happened before my POST body was set up and, unsurprisingly, this caused the 409 error to occur. Removing the watch sorted out the problem.

Why would I do this? Well, another (in my opinion) stupid design in this class is that if the server returns any 4XX code, rather than just report this as a 4XX and put the error text in the response stream, getInputStream throws an exception! A FileNotFoundException of all things (really, I have not made this up!). To avoid this, you have to check that the response code is not 4XX before you attempt to open the input stream. I was doing this and having problems with the codes, which is why I had added it to the watch window. From then on, everything had gone downhill.
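For reference, here is a sketch of a POST sequence that avoids the trap described above (the names here are mine, not from the app): nothing that forces a response is touched until the body has been written, and the form body is built with a plain helper.

```java
import java.io.OutputStream;
import java.io.UnsupportedEncodingException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Map;

public class PostSketch {
    // Build an application/x-www-form-urlencoded body from name/value pairs
    static String formBody(Map<String, String> params, String charset)
            throws UnsupportedEncodingException {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            sb.append(URLEncoder.encode(e.getKey(), charset))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), charset));
        }
        return sb.toString();
    }

    // Hypothetical flow: setDoOutput before anything connects, write the body,
    // and only then ask for the response code
    static int post(URL url, Map<String, String> params, String charset) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true); // must happen before the connection is actually opened
        conn.setRequestProperty("Content-Type",
                "application/x-www-form-urlencoded;charset=" + charset);
        OutputStream out = conn.getOutputStream();
        out.write(formBody(params, charset).getBytes(charset));
        out.close();
        return conn.getResponseCode(); // only now is a response forced
    }
}
```

The same ordering applies in the debugger: a watch on getResponseCode effectively moves that last line to the top of the sequence.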

This is the law of unforeseen circumstances. A poor design has cost me time/money and although the Eclipse debugger was behaving as expected, the strangeness of HttpURLConnection had caused me all kinds of problems.

So, here is my list of the major weird design issues in these classes:

  1. url.openConnection does not open a connection and should not be called that, it is misleading.
  2. You should NOT have to call setDoInput and setDoOutput just to tell the connection whether you are writing to or reading from the request/response. This is all normal behaviour and should be transparent.
  3. Input and output are confusing because output actually refers to the "output to the other end" and not the response, which is what I think most people would expect. Why not call them requestBody and responseBody?
  4. Calling getInputStream should definitely NOT throw an exception. A 4XX error is not necessarily exceptional and in many cases, there will be data in the response with more information that the caller would want to interrogate.
If I knew enough about Java, I would write my own version of these classes, but reading other people's rants makes me think that Oracle should just rewrite them and move them to another namespace to avoid any confusion. To their credit, Microsoft do this in .Net when they realise that something was designed wrongly.

Wednesday, 26 June 2013

RSA Public Key Encryption, Java to PHP

I want to use RSA public key encryption to encrypt something in Java in an Android app using a public key and then to decrypt this in a web service in PHP. As with most of these tasks, there are so many variations, file formats and pieces of missing functionality in each language that these tasks can cause muchos confusion.

Anyway, I managed it, thanks partly to an article from 2009. Some basic information: I have a public key in DER format, which is the binary format and not the Base64 encoded format. The code to utilise this in Java/Android, from the blog article, is like this:

private String PublicKeyEncrypt(byte[] data) {
    PublicKey pk = null;
    try {
        InputStream f = getAssets().open("publickey.der");
        DataInputStream dis = new DataInputStream(f);
        byte[] keyBytes = new byte[f.available()];
        dis.readFully(keyBytes); // without this read, keyBytes would stay empty
        dis.close();
        X509EncodedKeySpec spec = new X509EncodedKeySpec(keyBytes);
        KeyFactory kf = KeyFactory.getInstance("RSA");
        pk = kf.generatePublic(spec);
    } catch (Exception e) {
        // log/handle the failure appropriately
    }
    final byte[] cipherText = encrypt(data, pk);
    // NO_WRAP avoids the trailing newline described in the post above
    return Base64.encodeToString(cipherText, Base64.NO_WRAP);
}

private static byte[] encrypt(byte[] data, PublicKey key) {
    byte[] cipherText = null;
    try {
        final Cipher cipher = Cipher.getInstance("RSA/ECB/PKCS1Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        cipherText = cipher.doFinal(data);
    } catch (Exception e) {
        // log/handle the failure appropriately
    }
    return cipherText;
}

Note that getAssets() is just how I obtain the location of the embedded public key in Android. Also note that I use RSA/ECB/PKCS1Padding; there are variations but this is the most standard of them. You could combine these two methods into one, I just separated them to make the code more readable.
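If you want to sanity-check the transformation string on a plain JVM before wiring in the DER key and the PHP side, a self-contained round trip with a throwaway generated key pair (my sketch, not the app code) looks like this:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class RsaRoundTrip {
    // Encrypt with the public key, decrypt with the private key, return the result
    static String roundTrip(String msg) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        Cipher enc = Cipher.getInstance("RSA/ECB/PKCS1Padding");
        enc.init(Cipher.ENCRYPT_MODE, kp.getPublic());
        byte[] cipherText = enc.doFinal(msg.getBytes("UTF-8"));

        Cipher dec = Cipher.getInstance("RSA/ECB/PKCS1Padding");
        dec.init(Cipher.DECRYPT_MODE, kp.getPrivate());
        return new String(dec.doFinal(cipherText), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello")); // hello
    }
}
```

Once this works, the only moving parts left are loading the real public key and matching the padding on the PHP side.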

The much simpler decryption using a .pem base64 encoded private key in PHP looks like this:

private function DecryptAndValidateData($data)
{
    try
    {
        if (openssl_private_decrypt(base64_decode($data), $decrypted,
            file_get_contents(Yii::getPathOfAlias('webroot') . '/privatekey.pem')))
        {
            // Do something with $decrypted
        }
    }
    catch (CException $e)
    {
        $this->sendResponse(500, 'Internal Server Error');
    }
}

Note that I base64 encoded/decoded the encrypted data since it is sent across a web channel and I didn't want any of the data to be lost or corrupted along any non-ascii safe connections. The Yii stuff is because I am using the Yii framework so obviously the obtaining of the private key file path might be different for you.

Easy as that! I started with the .Net version but reading key files is a bit of a pain in .net for some reason and the BouncyCastle library download site is currently broken!

If you would like to use OAEP padding instead of PKCS1 (standard) padding, which apparently is a good idea, you can use RSA/ECB/OAEPWithSHA-1AndMGF1Padding in the Java call to Cipher.getInstance and pass the flag OPENSSL_PKCS1_OAEP_PADDING to openssl_private_decrypt as the third parameter in PHP.

Tuesday, 25 June 2013

HttpURLConnection with 4XX responses

Did you know that if you connect to a web service from the Java HttpURLConnection and the method returns a 4XX error code, then getInputStream throws a FileNotFoundException? Well, I just found out, after scrambling around, and fortunately found a helpful post on Stack Overflow. A 400 is surely an error but in many cases might not be considered exceptional. 401 means authentication required, which is very likely to be a normal operation, and various other 4XX codes might simply mean that the particular user is not authorised to access a resource; again, not exactly exceptional.

There is a workaround. You can call getResponseCode first and avoid getInputStream if you have an error situation but this seems heavy handed, especially since my web service responses contain information in the body that my Android app is supposed to use to determine the situation.
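A sketch of what that workaround looks like in practice (the helper names are mine): check the code first and, for 4XX/5XX, read the body from getErrorStream, which hands back the error response without throwing.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;

public class ResponseReader {
    // 4XX and 5XX are "error" codes as far as HttpURLConnection is concerned
    static boolean isErrorCode(int code) {
        return code >= 400;
    }

    // Read the body whether the call succeeded or failed: for error codes the
    // body comes from getErrorStream (which does not throw, though it can
    // return null if there is no body), not getInputStream
    static String readBody(HttpURLConnection conn) throws IOException {
        InputStream in = isErrorCode(conn.getResponseCode())
                ? conn.getErrorStream()
                : conn.getInputStream();
        if (in == null) return "";
        StringBuilder sb = new StringBuilder();
        BufferedReader r = new BufferedReader(new InputStreamReader(in, "UTF-8"));
        for (String line; (line = r.readLine()) != null; ) sb.append(line);
        r.close();
        return sb.toString();
    }
}
```

This way the informative body that the web service puts in a 4XX response is still available to the app.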

I think Java has misunderstood what "Exception" means. These scenarios are supposed to indicate something majorly broken, not just something that is an error but is an expected error. What's the difference? Looking for a user in a database might fail, this is an error but it is (probably) not exceptional because presumably, someone might have searched for the wrong thing. Trying to connect to a database and failing is (probably) exceptional since in most cases, a missing database is a major issue since we would always expect to be able to connect.

Exceptions incur a large overhead in loading the call stack and any symbols that are present to provide to the error handler. It should be possible to design a system that does not throw any exceptions in normal/expected operations.
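To illustrate the distinction, here is a minimal sketch (entirely hypothetical names and data) of modelling an expected failure as a return value rather than an exception:

```java
public class LookupResult {
    final boolean found;
    final String value;
    final String error;

    private LookupResult(boolean found, String value, String error) {
        this.found = found;
        this.value = value;
        this.error = error;
    }

    static LookupResult ok(String value) {
        return new LookupResult(true, value, null);
    }

    static LookupResult notFound(String error) {
        return new LookupResult(false, null, error);
    }

    // A user search that finds nothing is an expected outcome, not an exception;
    // a failure to reach the database at all would still warrant one
    static LookupResult findUser(String name) {
        if ("alice".equals(name)) return LookupResult.ok("id-42");
        return LookupResult.notFound("no such user: " + name);
    }
}
```

The caller inspects the result instead of catching anything, so the normal "not found" path never pays the cost of building a stack trace.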

Monday, 24 June 2013

GMail relay stopped working on Azure Site

I had a bit of a mare after starting work Monday morning and seeing several error messages from my Azure-hosted ASP site like this:

System.Net.Mail.SmtpException: The SMTP server requires a secure connection or the client was not authenticated. The server response was: 5.5.1 Authentication Required. Learn more at
   at System.Net.Mail.MailCommand.CheckResponse(SmtpStatusCode statusCode, String response)
   at System.Net.Mail.SmtpTransport.SendMail(MailAddress sender, MailAddressCollection recipients, String deliveryNotify, Boolean allowUnicode, SmtpFailedRecipientException& exception)
   at System.Net.Mail.SmtpClient.Send(MailMessage message)
   at ...

The thing that was annoying was that no updates had been made to the site for probably at least a week and certainly nothing in the email sending code for a couple of months. Of course, being on Azure meant it would not be easy to debug or fix.

Firstly, I tried to change the gmail port from 587 to 465; apparently 465 is for implicit SSL and 587 for STARTTLS, which is confusing since TLS is the replacement for SSL, so I don't know whether this is something historical. Anyway, it didn't work.

What I then did was to use Azure role configuration for the various gmail settings, since changing the hard-coded values and redeploying was too slow. After doing this, it all appeared to start working again, but with the original values that had been causing the error.

Basically, something was happening/had happened somewhere between Azure and GMail and this had broken my site. Since I use emails for account validation, as many sites do, this was a big deal for me since it means people could not sign up.

The thing I despise the most about the net is that we are still largely unable to work out what happens at network level, what part of the internet is slowing us down, where things are broken etc. How many times do you have to call and complain to your service providers who may or may not be at fault? The Azure site monitoring thing is OK but when I had problems with the mail, the site was locked up but the monitoring said the site was running fine so I don't have a lot of trust.

Anyway, I wrote this so others might realise that they haven't done anything wrong!

Friday, 21 June 2013

The Android Activity lifecycle

When I first started developing Android apps, I knew there was a lifecycle for activities but I didn't really care because I, like many, assumed that all Activities very quickly move from one state to another and nothing runs for long enough to be a problem. This is bad thinking and there are a few reasons why this has caught up with me:

  1. If your activity has a slow running process, such as connecting to a web service, your activity might be destroyed while this is happening. If you have referenced things incorrectly, you can cause your activity to leak!
  2. Even if you think like a stack, an activity might be destroyed underneath the current activity for various reasons, mostly related to resources. This means that when you exit and return to the calling activity, it will be created again (by calling onCreate) before it then calls handlers like onActivityResult. This can screw things up if, like me, you have a habit of putting code into onCreate which actually belongs elsewhere.
  3. Rotating the screen causes the current activity to be destroyed and recreated potentially doing something twice that should only happen once.
To be fair, most activities are probably little more than a set of widgets which are displayed and then things only happen when you press buttons etc. These will automatically be saved and restored into their current state but anything more than that or things that you don't want to be repeated have to use various techniques.

Firstly, you need to consider carefully what would be done in onCreate and what should be done in, perhaps, onStart or onResume. By referencing the Android Activity Lifecycle diagram (there are loads on Google), you should work out what you would need to happen every time, for instance, the activity is paused and resumed, what you would need to happen only after destroying etc. and place the code in the relevant places. For instance, anything to do with layouts and views should be done in onCreate since it will be called each time the orientation changes and the resources might need different images/layouts etc for the new orientation. If, as in my case, I want to force the orientation in a particular activity to be landscape, my first call in onCreate is to check the orientation, and if it is not correct, set the requested orientation and immediately return. This will cause the activity to be destroyed, re-orientated and then onCreate called again, at which point I can load all the resources etc.

Long running processes are another thing that can catch you out. What happens if you start something and then the user turns the phone round? I'll tell you what happens: the activity will be destroyed and recreated, and you could possibly call your long running process again. Firstly, you should carry out this work in another thread, using something like a subclass of AsyncTask, which allows your UI to remain responsive and allows you to, for instance, update the screen with progress while you wait. I found a good tutorial here, and it is cool because it also protects against the potential of a destroyed activity by allowing the activity to detach from and re-attach to the thread if it is recreated, and not run the process again.

The last issue is a little harder. Imagine this situation: I have an activity which itself calls a long running process to authorise the mobile device with a web service. If this is successful, it starts another activity. When the child finishes and calls finish(), I expect the parent's onActivityResult to be called, but for some reason the parent was destroyed beneath the child and is being recreated, so onCreate is called before onActivityResult. This is documented but is a problem. There is no obvious way to distinguish between the activity being created for the first time and being recreated after exiting the child activity, as far as I know.

There are a couple of things that can be useful but these are a little flaky and are only workarounds as far as I know. The first is that the bundle state passed into the onCreate method will always be null the first time into the method, so this can indicate that you are coming in fresh and need to call the long running process. I have done this but I am concerned that there might be places where this is not null (like rotating the screen) which should also call the long running process but which won't. The other, more horrible but more reliable, is to set a flag in the application configuration which says whether you are entering or exiting the activity. I suppose you could store the result of the long running process in configuration and then use this to dictate whether to do it again or not; it depends on what your long running process does.

I would be interested in whether there is a proper way to determine this, such as a piece of data passed in saying whether this is a brand-new Activity or whether it was destroyed for lack of resources and is being recreated.
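The bundle-state workaround described above can be sketched as follows (the helper and the commented Android usage are my sketch, not code from the app):

```java
public class FirstRunCheck {
    // Android passes a null Bundle into onCreate only on a genuinely fresh
    // start; after a destroy/recreate it passes back the saved state
    static boolean isFirstCreate(Object savedInstanceState) {
        return savedInstanceState == null;
    }
}

// Hypothetical Android usage (sketch only, not compiled here):
//
// @Override
// protected void onCreate(Bundle savedInstanceState) {
//     super.onCreate(savedInstanceState);
//     if (FirstRunCheck.isFirstCreate(savedInstanceState)) {
//         startDeviceAuthorisation(); // kick off the long running process once
//     }
//     // otherwise wait for onActivityResult to fire on the recreated activity
// }
```

As noted above, this is flaky: anything that saves state (rotation included) makes the bundle non-null, so the check only distinguishes "fresh start" from "any recreation".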

Wednesday, 12 June 2013

Calling WCF SOAP web services from PHP

Why would I want to? There are two reasons why this might be necessary, firstly, there might be a third-party web service that is SOAP and secondly, as in my case, I need a PHP web service but it needs to use functionality that does not readily exist in PHP but does in .Net - so I will call a .Net web service to do the heavy lifting.

In my case, the requirements are:
  1. Fairly standard WCF web service
  2. Transport security (https)
  3. Client certificate security
  4. SOAP 1.2
  5. PHP 5.3 SOAP client
Now, I eventually got this to work using the code below but by all means try and remove some of the options to see if they are already set. As for me, I kept making changes until it worked.

$protected_url = "";
$my_cert_file = Yii::getPathOfAlias( 'webroot' ) . "/login.pem";

$client = new SoapClient($protected_url, array('local_cert' => $my_cert_file,
                                               'soap_version' => SOAP_1_2,
                                               'trace' => TRUE));
// Fill in the WS-Addressing namespace and the Action/To values from your WSDL
$actionHeader = array(new SoapHeader('', 'Action', ''),
                      new SoapHeader('', 'To', ''));
$client->__setSoapHeaders($actionHeader);
try
{
    $something = $client->CheckImage(array('userId' => "23", 'imagename' => 'test'));
}
catch (Exception $e)
{
    $this->sendResponse(500, $client->__getLastRequest(), FALSE);
}
$this->sendResponse(200, $something);

So, what's going on? The first url is where the soap client will retrieve the WSDL from. In WCF, the standard location is the service name with ?wsdl on the end. There is also a similar wsdl with ?singleWsdl on the end but I don't know the difference (maybe this was my problem!).

The second line, in my case, is just building a path to a locally installed client certificate (in my case, I am using Yii).

When creating the client, I have to specify the client certificate and the soap version in the options array. I also set trace to TRUE and this allows me to use __getLastRequest() later on if I have problems.

I had various SOAP errors and these were related to missing headers, which meant the endpoint filters were complaining, so I specify these as additional headers. If you search your wsdl for your function, you will find an Action attribute that has this information in it. For the "To" header, similarly, you can find this under the relevant endpoint as the value of the Address element (which might also have a namespace prefix, like wsa10:Address).

The try block calls the name of my function and passes in an associative array of parameters. This allows the endpoint to marshall them to the correct positions in the function.

In my case, if an error occurs, I return the value of __getLastRequest() to the caller and this returns the XML that was sent to the web service. This is really helpful when things don't seem to work because you can compare it with known good calls that you could make from .Net to .Net.

sendResponse is just a helper method that I use to return HTTP codes.

Tuesday, 11 June 2013

Setting a default value for object property in ASP.Net MVC4 Razor

Well, my scenario is this: I have a hierarchy of items that I am editing in MVC, when an item is created, therefore, it already has a parent id and I pass this into the Create action on the controller. The problem is that an object is not created at this point, just a view returned so where should this parent id be stored so that it is picked up when saving the new object away to database? There are many horrible hacks that could provide what you need but I found a neat way, which even looks like it might be the right way!

Firstly, add your default parameters into your action so they can be passed in when you are creating the child items. Inside the action, pass this to ViewData - but importantly, ensure the key name is the same as the property name you want to set:

public ActionResult Create(int parentId)
{
    ViewData["Parent"] = parentId;
    return View();
}

Then in the view, all you have to do is use @Html.Hidden (note, NOT HiddenFor):

@Html.Hidden("Parent", ViewData["Parent"])
Presumably, you could also use the overload that doesn't take a second parameter; I haven't actually tried! A couple of things happen here which make it all work. Firstly, the hidden field will automatically take its value from ViewData (and, as it happens, various other places, failing that the value you pass in), obviously using the key name as a match. The second part, however, is that by using the name of the object property (Parent in my case), the framework will automatically pass this value into the object when binding it for saving, in the same way it does with the WhateverFor(model => model.Property) set of methods.

It feels so slick, it might just be right!

PHP Azure - HTTP Error Message: A storage account named something already exists

This PHP Azure stuff is doing my nut a bit. I need to download the source for the Powershell scripts so I can make them better but errors like the above are not helpful, because I know the storage account exists, it is what I want to use to deploy the web service I am developing.

The problem in my case is that although I "imported my publish settings" using the powershell script GetAzurePublishSettingsFile, I am the admin of two subscriptions and this call automatically downloads both and therefore uses the first one by default. I didn't notice this at the time but it meant I was deploying using the wrong subscription and when it wanted to create the storage account I had specified, it already existed in another subscription so it couldn't create a new one.

The answer is (unsurprisingly) to specify the correct subscription in your call to Publish-AzureServiceProject using the Select-AzureSubscription cmdlet. You can find the name in the downloaded publish settings file and copy it across. I created a simple Powershell script to avoid having to repeat this:

Select-AzureSubscription "Windows Azure Bizspark 1111"
Publish-AzureServiceProject -Location "North Europe" -StorageAccountName something -Slot Production

Wednesday, 5 June 2013

Setting up linux to relay mail via gmail

We use gmail for business and have gmail accounts etc. We also run servers that need to send email out so we do it via gmail to ensure we have credibility and don't end up in SPAM filters etc. I have recently installed wordpress and that uses phpmail to send emails but which relies on an underlying mail delivery system to send it on. Below is how you get it to work with gmail.

  1. sudo apt-get install postfix and choose "satellite system" as the system to configure. Set the relay host to smtp.gmail.com. You can optionally add port 587 to the end, but I think it does that automatically anyway.
  2. Once the install is finished, sudo nano /etc/postfix/main.cf, scroll to the end and set the following entries: 
  3. Set myhostname = whatever you want to appear as the mail host in the email trail; this will also affect how delivery addresses are resolved if, for instance, you mail an address without an @ sign. A standard choice would be something like mail.yourdomain.com.
  4. Add mydomain = the domain part of the host, e.g. yourdomain.com.
  5. Remove the entries from mydestination, otherwise you might get addresses rejected.
  6. Add the following new entries in the same block. Items in bold are my choice of names:
    smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    smtp_sasl_auth_enable = yes
    smtp_use_tls = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_sasl_tls_security_options = noanonymous
  7. You now need to create the sasl_passwd file required to authenticate with gmail.
  8. Run sudo nano sasl_passwd in the postfix directory and add a single line like the following, with the username and password set to relevant values: [smtp.gmail.com]:587 username@gmail.com:password
  9. Note the colon between username and password. These need to be valid login credentials for gmail (obviously?).
  10. Run sudo chown root:root sasl_passwd and  sudo chmod go-rw sasl_passwd
  11. Run the command sudo postmap sasl_passwd and if you get an error, you will need to install the relevant package (and run the command again). It is called sasl2-bin on ubuntu.
  12. Ensure that both the sasl_passwd and sasl_passwd.db files are owned by root:root and are only read/writable by root.
  13. sudo service postfix restart
  14. To test it, create a file (called, for example, test.mail) in your home directory and type sendmail -v user@domainname < test.mail

Because your install is in a directory, the sites in your WordPress network must use sub-directories.

This was doing my nut in! I was installing (what I thought) was a vanilla installation of WP for multi-site but in the network setup, I got this error. This error is meaningful in the case where your site is not in the root of your web server but in a sub-directory (folder); sub-domain sites require the install to be at the document root.

The problem with the error is that it is also displayed if you are accessing your site via "localhost" or an ip address - which in my case I was doing while I was waiting for a new DNS entry to be setup. What this meant was that the wordpress installation had used my ip address in its configuration rather than the domain name I intended to use.

What I had to do was access the WP database (I used phpmyadmin) and edit two entries in wp_options table, the entries for siteurl and home, changing them to my domain name.

Creating a hardened LAMP server on Amazon Web Services (AWS)

This has become something I have had to do a few times now and rather than keep scratching my head to remember what to do, I thought I would describe the process of creating an Ubuntu LAMP stack on AWS which is never unnecessarily exposed to the web before it is hardened.

Firstly, create a new security group. It should have only the ssh port enabled on 22 and be locked down only to your public IP address, it can also have the rule for your eventual ssh port added, either open to the whole world (if like me you move around a lot) or your company/home public IP(s) if you usually only access it from one place.

Note that in the above image, it also shows the public ports 80 and 443 that I eventually added for my web server; you can choose not to add these now, it is up to you. Since you will probably have a fairly hardened version of apache installed by default, it is not a big deal either way. In the case of port 22, note that I added my public ip address and a subnet mask of 32, which locks it down to only that one ip address. If you want a range of ip addresses in the rule, the number after the slash is the number of binary ones in the subnet mask. For instance, the common subnet mask of 255.255.255.0 is equivalent to 24 ones and would restrict the rule to any ip address between A.B.C.0 and A.B.C.255. Note also, the port I have coloured over is the port I use for ssh (it is a 4 digit number way up there) and this prevents a massive number of the ssh attacks which are launched against port 22.
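The mask arithmetic above can be checked mechanically. A quick sketch (assumes a contiguous mask, as subnet masks always are):

```java
public class CidrPrefix {
    // Count the one-bits in a dotted-quad subnet mask,
    // e.g. "255.255.255.0" -> 24, "255.255.255.255" -> 32
    static int prefixLength(String mask) {
        int bits = 0;
        for (String part : mask.split("\\.")) {
            bits += Integer.bitCount(Integer.parseInt(part));
        }
        return bits;
    }
}
```

So a /32 rule matches exactly one address, and /24 matches the 256 addresses sharing the first three octets.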

Once your security group is set up, create your new Ubuntu instance (the same instructions will be pretty much true of most Linux distros). Then choose to create a new key-pair, or use an existing one if you have one - I prefer to create multiple keys to reduce the chance of one being stolen/obtained and used to access all systems, despite the slight additional hassle. Obviously choose to use your new security group for this new instance. You could alternatively have a security group you use for all new instances and then a second group to use once the instance has been hardened.

Run up the new instance and connect to it. If you are connecting from Linux, you will need to add the private key for the instance to your ssh keys (there are plenty of guides but you probably already know how to do this anyway). If you are using Windows, you can use Putty, but you first need to use Puttygen to convert the downloaded PEM into a Putty PPK, since Putty has its own key format for some very annoying reason. To do this, run Puttygen and File-Load Private Key. Change the file filter to *.*, find your key.pem and select it; Puttygen will automatically import it and then you have to choose Save Private Key and add an optional passphrase - this means that even if someone else had access to the key, e.g. on a shared pc, they would still need the passphrase to connect.

Once you have generated the ppk file for the key (and added an optional passphrase to it), you can create a Putty session that uses the public dns for your instance and port 22. You then need to set the key under Connection-SSH-Auth in the private key file box. Note, the aws instances do not allow password access by default - which is good - so if you do not specify the key (or an incorrect one) when you try and connect, you will get a console error which says Permission Denied (publickey). If your firewall rule is using the wrong ip address, you will get a dialog saying Permission Denied when you attempt to connect from Putty.

Once you are in, the first thing I always do is to run sudo apt-get update and sudo apt-get upgrade to get any package updates before I do anything else.

The next thing to do is to sudo nano /etc/ssh/sshd_config, which will change the configuration for the ssh daemon. The first thing you want to do is move the default port from 22 to, let's say, 8989, which is not standard and so is unlikely to be attacked. There are a couple of other things to do, like disable PasswordAuthentication (set it to no) and ensure that PermitRootLogin is also set to no, since you will not be able to log in as root anyway (the private key uses the pre-created user "ubuntu") and this adds another defence against an attacker who could otherwise do anything that root can do on the box. Once you have edited and saved it, sudo service ssh restart and then ctrl-d to log off. Now the port will have changed and your current putty session will no longer be able to connect. If you have not yet done so, you can remove port 22 from your security group, change to a new security group and/or add a rule for your new port, optionally locked down to specific hosts.
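Assuming the example port above, the sshd_config changes amount to three settings:

```
# /etc/ssh/sshd_config (the relevant lines after editing)
Port 8989
PasswordAuthentication no
PermitRootLogin no
```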

You can now edit your Putty config and point it at the new port (and save your settings). When you connect again, you will get another warning about trusting the server's key, which you can go past, and you should be into your box again.