Monday, 22 December 2008

More efficiency in forums

I sometimes trawl ASP and C# forums looking to share a little of my knowledge with people who are learning or perhaps are just stuck on one thing. Something really annoys me, however: somebody asking a question whose answer would be the first Google hit on a search (like how do I assign a value to a textbox), or something that is obviously way above that person's skill level (I am designing a content management system, please tell me what to do). It wastes a lot of time, and if people don't reply to these unreasonable requests, the askers might assume there is a rudeness or arrogance to the profession.

Here are some suggestions, particularly for people who are new to software, that they can follow to get more helpful advice:

  1. Software is not something you can just start doing without some training. Spend some time in a classroom or on a dedicated tutorial web site before asking really easy questions. You would not go onto a doctors' website and ask for advice on drug doses if you were not medically trained.

  2. Do not agree to do a project that is far too difficult. If I am asked to do a job, I might not know everything, but I will know enough to approach the problem and think about what I need to learn in order to complete it. Asking somebody how to do the whole thing gives them nothing to work with, and I'm not sure you can expect somebody to sit down and spend a day designing a system for you for free, just because you asked.

  3. Ask specific questions; you are more likely to get answers. If you ask "what style sets the strikethrough on a text label", you will probably get a quick and friendly reply. If you ask "how can I write a music download site", you are unlikely to get anything helpful.

  4. Use the search engines. I cannot believe how many people do not start here. They are fast and effective, although sometimes you need to know the correct words to search for.

  5. Ask friends/colleagues/teachers for help before asking complete strangers. They are more likely to be able to help and you will not spend hours trying to explain what you are doing.

  6. Pay somebody else to do it (if you are a commercial outfit, anyway). Why should you get free technical support and design services? If you are able to do it - great; if not, you either don't get it or you pay. You wouldn't ask for a free car just because you can't afford one. Either that, or improve your own knowledge until maybe you can do it yourself.

There are lots of people like me who really do want to help people - even newbies - but are put off by people who seem lazy, or who seem unqualified for what they are doing and unwilling to pay for the right person.

Tuesday, 2 December 2008

Anonymous JavaScript function woes in IE

I have a control which detects any changes made to a page's controls and then, if the user clicks away from the page without saving, displays an error. It works by iterating over all controls on a page and attaching event handlers to the onchange-type attributes of the various controls.

I've noticed a strange behaviour when running in IE7 which does not occur in Firefox 3.0. This might be because Firefox totally ignores the offending code and therefore works by default, or because it correctly binds each anonymous function.

if ( o.tagName == 'SELECT' )
{
    if ( o.onchange == null )
        o.onchange = SetDirty ;
    var oldevent = o.onchange ;
    o.onchange = function() {
        oldevent() ; // ... plus other things (not shown) ...
    } ;
}

I know there is a way to add an event handler instead of assigning to the attribute, but the handler also does other things (not shown), which is why the anonymous function was chosen. What happens is that after this code is called on 3 drop-down lists, ALL 3 end up pointing to the oldevent of the 3rd list rather than their own events. It is as if the anonymous function is placed statically in memory and the last call to it points oldevent() at the 3rd event handler. This is very annoying since my 3rd drop list invokes a postback which my 2nd list doesn't, so whichever way round I put the code I either get postbacks where I don't want them (and not the JavaScript I am supposed to call) or I don't get postbacks when I need them. Strangely, Firefox doesn't do this. I would appreciate it if anyone knows whether the whole approach is wrong (i.e. wrong syntax) or whether IE just has a bug in this regard. My workaround is to hard-code the change-tracking JavaScript and customise it for each control, but that is not very neat or maintainable.
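In case it helps anyone hitting the same thing: a likely cause is that var is function-scoped in JavaScript, so every anonymous function created in a loop closes over the same oldevent variable, which by the end of the loop holds the 3rd list's handler. The usual fix is to capture a per-iteration copy with an immediately-invoked function. A minimal sketch (wrapHandlers, the handlers array and setDirty are invented stand-ins for the real page code):

```javascript
// Wrap each existing handler so it marks the page dirty first,
// capturing a per-iteration copy of the old handler.
function wrapHandlers(handlers, setDirty) {
    var wrapped = [];
    for (var i = 0; i < handlers.length; i++) {
        // Without this immediately-invoked function, every wrapper would
        // share the same loop variable and all would call the last handler.
        wrapped.push((function (oldevent) {
            return function () {
                setDirty();
                if (oldevent) oldevent();
            };
        })(handlers[i]));
    }
    return wrapped;
}
```

Each wrapper now calls its own original handler rather than whichever one the shared variable ended up pointing at.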

Friday, 21 November 2008

What flavour of HTML should I use?

Something that has plagued the web developer since the days when Netscape and Internet Explorer 2.0 were battling for the web market is how to code a web page. It is a strange question to the uninitiated: surely I just want to code it in a way that will look the same on all browsers?

Herein lies the problem. For acceptable and unacceptable reasons, various browsers do not agree on how to lay out a page. In some cases, a browser is simply incomplete and does not support a particular CSS style; you have to drop its use, but then you can't do what you want, so sometimes you have to use a workaround. Sometimes the specification is not clear about what attributes do, and different browsers have interpreted them in different ways. For the common candidates here, like font and paragraph spacing, it is often best in CSS to default them to 0 at the page's top level and then apply them specifically at each lower level to be explicit. Sometimes browsers are simply wrong: a classic example was the old broken box model in Internet Explorer. Does the margin go inside or outside the specified size of a div? What about the padding? The border? Again there were various workarounds, and you would now think that things have been mostly sorted in the latest browsers.

You would be correct, but there are two problems: 1) the newest technologies are always likely to be incomplete or misunderstood until the browsers are released, tested and (hopefully) fixed, and 2) many of your potential users are not using the latest browsers and, for one reason or another, maybe cannot use them. This means you have to code your page in a way that works on an older browser, with all the quirks and workarounds, which means at no point can you code a page that is fully HTML or XHTML compliant. What is one to do?

It is good practice to develop a site using a compliant browser before trying it on a non-compliant one and fixing any major flaws. Try a newer version of HTML/XHTML only if it gives you something you need; there is no point being at the bleeding edge of new development if everything your site needs is available in HTML 4.0.

Personally, I use HTML 4.01, which is reasonably well known by now but not so old as to be rubbish. Most browsers interpret it correctly. It is not worth trying to create two sites for, say, HTML 4.01 and HTML 5.0, since web sites are usually rewritten wholesale when they are updated (if they are updated) - people tend to want something much more snazzy for their money than a change of font or colour. For this reason, any second site 'for the future' would be binned along with the first site. If you need one for older browsers, use an older HTML.

The other approach, which can seem slightly militant but is perhaps called for, is to lay out a page so it is usable on browsers that are not compliant, even if it looks a bit rough there, and then put a link to a compliant browser at the bottom saying "this page is best viewed with Opera/Firefox/whatever", with perhaps a 'why?' link next to it which explains why some browsers do not encourage good site design and why you would like to write your pages properly. If you do this, please word it fairly; there are too many people out there who do nothing more for the industry than reinforce the stereotype of a social misfit with nothing better to campaign for than the abolition of Microsoft and anything related to it!

If anyone has any more useful ideas then let me know.

Monday, 22 September 2008

Headers and Footers referring to report items

If you have used Microsoft Reporting Services and have tried to make a header or footer more complex than just plain text and images, you have probably at some point received the error: "The Value expression for the textbox 'Item_Desc' refers to a field. Fields cannot be used in page headers or footers." What a pain in the jacksie.
You'll be pleased to know there is a workaround, although it is limited. What you need to do is put the field you want to refer to into the report body and set its Hidden property to true (assuming you do not want to show it in the body). In the textbox for the header or footer, refer to the hidden textbox's value rather than the field itself: =ReportItems("Item_DescTextBox").Value. This is permitted for some reason, although it is annoying. Presumably the restriction is related to the header having to render outside of the report body, but who knows.
There is a gotcha, though: you might find in some reports that the item appears in some headers/footers but not others. Quite simply, if the hidden textbox is not visible on the page that the header or footer is on, it cannot be referred to - which is total garbage, but that is how it is. If this happens, you need to try to put the field into a part of the report body that is repeated on every page. For instance, if you have some intro text, a table and then some summary text, put the hidden field inside one of the repeating table rows and try to make sure part of this table is on the first and last pages; this way, the header and footer will always be able to refer to it.

Tuesday, 16 September 2008

Poor quality images in generated PDF

PDF has become the de facto standard interchange format for documents. It handles lots of types of data and is, surprisingly, one of the few media that are consistent across computer platforms.
I was using it at work the other day after realising that one of our corporate brochures had a very poor quality copy of our logo on it. I checked the original Word doc and the image looked fine.
I ran it through CutePDF just to make sure this was the correct original, and sure enough the image came out bad - looking like 20 dots per inch!! The original image was a few thousand pixels in width and height and was scaled down to about 50mm x 15mm, so resolution wasn't an issue.
I then installed Adobe Acrobat (which fortunately we already had, so I didn't have to shell out £300 for the standard version!!). I tried converting the file from Acrobat and then, after realising there was a bug, fixed it so that I got the buttons inside Word and converted it from there, both with the menu item and also using the Adobe PDF printer driver. None of this worked; the image still looked bad. Interestingly, I tried several image formats and they came out different but equally terrible. I then switched off compression and downsampling and got a massive PDF (45MB for 2 pages) but still a rubbish logo.
Almost at my wits' end, I put in copies of all the logo formats I had and generated the PDF again. The ONLY one that looked OK was the Windows Meta File (WMF) version, probably because it is vector-based, unlike the other bitmapped images.
So I found a workaround but no help on the net (other than lots of people saying how rubbish the Acrobat application is). I am very disappointed. Even MS Word is cheaper than Acrobat and has much more functionality; in fact you can probably get Office Basic for the same price!! Acrobat gave me nothing useful and obviously couldn't handle what were very basic images. It managed to convert other JPEGs, so if my images weren't in a format it could handle, something should have complained.
Bloatware Bah!

Monday, 8 September 2008

Patch Tuesday

I notice this morning that Microsoft are touting 4 critical security patches for next Tuesday in their monthly update. Apparently these affect all versions of Windows, including Vista.

Every time I read these things I get angry but not surprised. Remember, every time MS have released a new version of Windows, they have touted the "more secure" marketing jibe. This seems to make sense, except that not long after they release these things, another load of patches comes out which undermines the fundamental claim of security. To be fair, these patches might be related to applications rather than the operating system, but for goodness sake, Microsoft, these problems have been occurring for years and you still haven't fixed your security model. You've released Vista on the promise of more security, and it certainly adds a lot in the annoyance department with all the "are you sure" messages, but you still haven't thought about the underlying problems. Quite simply, it should be impossible on most configurations for a user visiting a web site to do anything dubious such as deleting or reading files off the hard disk. Why doesn't Internet Explorer simply not permit it for any site in any scenario? How many people really do need to access files from the browser? Certainly not most home users.

The other problem is that saying "more secure" is not a lie. Having 10 known vulnerabilities instead of its predecessor's 100 does make it "more secure", until the next 1000 are found anyway!

I used to really like Windows. XP isn't bad, but over time it has become bloated, slow and generally annoying. Linux, on the other hand, has got better and better and now gives me little hassle while being fast, robust, secure and free!

Please go and try it.

Wednesday, 13 August 2008

Don't use exceptions for normal program flow!

This might sound like an obvious statement, since "exception" doesn't mean "normal", but in .NET, exceptions are used for things like "key not found" in a map.
In one of my programs, I was reading in a database table of projects, not all of which I was working with. I thought it was simple enough to try to look up each key in my map and catch the key-not-found exception, ignoring it and carrying on:

foreach (DataRow row in ds.Tables[0].Rows)
{
    try { m_Jobs[Convert.ToInt32(row["projectid"])].ActualLabourDays = hours; }
    catch (Exception) { } // ignore projects that are not in the map
}

This seemed like the solution with the least code, but I hadn't appreciated the overhead of generating exceptions, even when they are caught, ignored and the program continues. When I ran this block of code over 3000 rows, it took about 20 seconds, since probably 2900 exceptions would have been thrown. I modified it to check for the key in the map (ContainsKey) before setting the value and bingo: 1 second!

Thursday, 31 July 2008

Email Best Practices

Email is very commonly used in the workplace for a variety of reasons, but in some cases it is totally unmanageable, with people having several thousand emails in their various folders, most of which are not read, or certainly not digested, causing important information to be missed and affecting the proper process flow of the business. Hopefully your company spends 90% of its time doing what it does in a normal way and 10% of its time dealing with problems that naturally arise when things break, aren't delivered, are affected by individual lack of performance, etc. Of course, in your company it might be more than 10%, but if that amount is too great then your company is simply wasting money. Email, and a lack of best practice around it, can be a major player here. What follows are some straightforward tips on email.

  1. Give everyone in your company email training. In my experience, more people than you might think don't know how to do the basics, like bcc and expiry dates, let alone the more advanced features that will make email their slave rather than their master.

  2. Ask whether sending an email is the best way to communicate. I have been part of many email discussions that took much more time to type than if I had simply picked up the phone and spoken to someone. You can still record the fact that you had the conversation, if you need to, in a CRM tool or even a Word document.

  3. If you do need to send it, ask whether all the recipients need to read it - avoid overusing mailing groups when perhaps only a subset of the people need the email. Adding people as recipients when the email is for their information only is fine, unless they are senior managers or people who receive a lot of mail, in which case you should really ask whether they need to see the information. Once they are cc'd, they might receive lots of replies which they don't need to see.

  4. If you are receiving lots of emails that are not relevant, do not be afraid to ask the sender to not send them to you. Tell them you are trying to reduce your inbox.

  5. Don't be lazy with subject lines. Carefully thought-out subjects mean people can see exactly what you want to talk about and can choose to ignore something that is low priority until they want to look at it. Subjects like "Question" should not be used, whereas something like "How do I return an item to stores?" allows the recipient to see that it is something to be answered straight away. This is especially true when you are adding cc recipients to the email.

  6. If you are sending out mass emails externally to your company, put everyone's email address in the bcc field so that no recipient can see the other email addresses. You have a duty (sometimes a legal one) to protect email addresses from prying eyes.

  7. Do not leave emails in your inbox. People can easily miss new emails because they fall amongst the others. Read each email and then either action it and delete it, or file it - perhaps in a todo folder. This way you will not miss emails, and they are easier to delete when no longer required because everything gets moved or deleted.

  8. If you need clarification on an email, ask yourself whether a phone call would be more efficient; then you can delete the email and know exactly what you need to do.

  9. Make good use of tasks to know what you need to do rather than keeping emails for the same task. It makes things neater and you can easily copy and paste text from the email into the task description. You get the extra benefit of priorities and scheduling.

  10. If you have a problem with spam and junk mail, use junk filters or change mailbox names every once in a while; people who need to contact you will be able to get your new email address easily enough if they really need it.

  11. Make use of inbox rules to automatically move regular emails into a folder where you can then choose to read them, keep them or delete them.

  12. Remember that a lot of company-confidential information is kept in emails, so make sure you lock your PC when you leave it, and regularly delete unwanted emails and sent items (perhaps after 1 year), which will reduce the potential impact of somebody reading your emails.

  13. Have a company email policy and treat the subject seriously. Being informal is fine in theory, but we are talking about wasted time in your company, and it should be taken seriously. You then have specific comeback on somebody who continues to pollute people's inboxes with junk mail or who generally increases other people's workload by not following best practices.

Monday, 21 July 2008

Software Development Coming of Age

There was a time when computer science was the preserve of academics and big business, for the simple reason that computers were expensive and their per-hour cost was high. You wouldn't have had an A-Level student being let loose on a system for hours trying to hack out a project for school.
Times have changed, however, and computers are extremely cheap. Even in countries where the average income is low, many people have at least limited access to a PC which can be used for, amongst other things, programming.
Like most things, this is both a blessing and a curse. It is a blessing because people who might be very skilled programmers have access to something they wouldn't have had 20 years ago, and these people form part of the skill base that businesses use to produce productive software (we hope). The curse is that this skill base is polluted with thousands of people of very limited skill, and although that is not necessarily bad in itself, a percentage of these people seem very free with their advice to others who are struggling with something and propose solutions that might be, well, rubbish. There is no easy way to work out the value of this advice, because programming is often treated like mathematics, where if the solution works, it must be correct. A better analogy would be car mechanics, where just because an engine fits and turns over doesn't mean it is the best type of engine or the best way to connect it up - although it might well work. Programming is often a set of balances where speed is offset against readability, or where pure theory can be the enemy of pragmatism and of getting something that is 'good enough' rather than 'perfect' in a reasonable timescale. The skill of the programmer is not whether they always produce the fastest code but whether what they produce is appropriate to the given requirement.
This is one problem, but it can get worse: people sometimes write rubbish because of poor advice or lack of training, but people also often re-invent the wheel. How many people must have written a 3-tier database web application which is 80% the same as every other one in the world? Why can't we share what we have done and move more quickly into the future? Well, we sort of share, but then we face problems similar to the above: we are given example code by somebody when it might be some level of rubbish, or we take some existing code and, by not understanding it, either modify it and break something that was OK before we changed it, or apply it to a system where it is not appropriate. For example, a non-secure database application might be fine on a corporate network where hacking is seen as unlikely, but it would be inappropriate on a public network where hacking is commonplace. This is compounded by the seemingly high number of people on forums who appear to have little or no programming knowledge asking things like "how do I generate 3D graphics" or "how can I write a flight simulator" - can we trust these people to write robust software?
So what do we do? I read a book not too long ago called "Emergent Design: The Evolutionary Nature of Professional Software Development" by Scott Bain, in which he talks about more regulation for the profession that is called Software Engineering. A person cannot merely decide that he wants to be a doctor or lawyer and start practising. Even if he is poor or seemingly able, he must attend various courses and take exams to prove his competence. Even for mundane things like driving a car, people have to be a certain age and pass a driving test. Why? Because these things carry responsibility. Driving or practising medicine without proven skills is dangerous. Being a lawyer without skill can cause somebody to be prosecuted without good reason, or cause somebody who is guilty to be released into society when they should be locked up. What about software development? Poor software is often blamed by companies for various corporate problems, and who is in a position to deny it? We have all experienced poorly written software, so we almost expect things to be less than perfect. These bugs cost us time and money as well as frustration. Although the year 2000 'bug' was not really a bug in one sense, it cost companies millions in proactive and reactive costs around the eve of the new millennium in case their systems crashed. While we have a totally unregulated industry, we are all in danger. So imagine we had a regulated system where somebody has to reach a certain level of qualification before they can call themselves a "developer" or "software engineer". This would help to improve the general quality of the systems being developed - or at least improve it over time, since currently many people who teach computer science won't necessarily have a qualification themselves, or else they learnt their trade a generation ago when priorities were different.
To solve the second problem, i.e. people re-inventing the wheel, I think that if the industry became regulated, the industry body could maintain a single 'red book' describing all of the best practices in software, where they exist, with any caveats that might apply to a design. It would not be copy-and-paste, because we don't want people to copy and paste from one context into another - that causes bugs. What we do want is a single defining place to say, "if you are designing a new database, you must consider 1) security of database access (link to sub-page), 2) layout of tables and links (link to sub-page), etc.". A sub-page might say "you must implement a security model for stored procedures if you a) have a publicly visible server, or b) have an application that behaves differently for different users... but if you secure the procedures you will a) incur additional development time, and b) need to produce a comprehensive test case to ensure you have secured them (or create a process that means they will definitely be secured as they are created)... etc."
Hopefully you get the idea. It will never be totally definitive for a specific scenario, because the context of software always differs, but at least if there is a one-stop shop for this information, people will not forget something and will be able to see the pros and cons of every decision before they make it. Of course, good practice might change with time, so the system would need a way of notifying users when this happens, but what we end up with is a way of sharing knowledge from bona fide engineers who know what they are talking about, in a way that does not encourage copy-and-paste with all of its pitfalls.
I suspect such systems exist inside various companies, and probably a lot of the content is the same, but rather than trawling Google to find something of dubious value, if this content were all in one place with known reliability, then we could all move onwards and upwards.

Thursday, 17 July 2008

Writing good logic in code

This article does not relate to a specific language but to many, although I am only familiar with about 10, so you will have to decide whether the comments apply to yours or not.

There are two major points to cover and the reason for needing good logic is simple. I'm not sure there are many statistics but I bet the majority of software bugs are related to broken or incomplete logic. The first point is that we need to reduce the amount of logic required in the first place and the second is that we need to simplify and rework any logic that is required in order to make it readable, maintainable and testable.

How can we reduce logic? The first and obvious point is that we need to learn it properly. I saw some code the other day that said something like:

if ( myString.Length == 2 && myString.Substring(0, 2).ToUpper() == "SC" )...

and it made me chuckle, but this is typical of the first point. Can you see what is wrong with it? If you are checking that the first two characters of a string are "SC" and also that its length is equal to 2, then surely the whole string simply EQUALS "SC"; in other words, the following is equivalent:

if ( myString.ToUpper() == "SC" )...

Which of the two is clearer? The second is by far the clearer and has many fewer potential defects. The first example could have an accidental = instead of ==, it could use different string references for the first and second checks, and it could get the substring values wrong - all of these on top of the basic potential defects found in both examples. Now imagine counting the potential defects in a piece of software before and after rationalising poor logic like this; you might see a 50% reduction in defect risk - nice! There are plenty of places where logic gets confusing: how many people have used a horrific "if, then, else if, if, then, end if..." as if that were perfectly acceptable? Ask your bosses to enforce a simple policy: logic is either simple or it gets refactored.
The second way to reduce logic is to use polymorphism and inheritance to create structural logic. You need to understand the difference between structural logic and behavioural logic, because one should always be implemented with polymorphism and the other only might use it. A car does not need to control whether the drive-shafts turn the wheels, because for a given car they always will. That is not to say that all drive-shafts turn wheels, but for a given car, once it is built, that is how it works. On the other hand, the gear to use in the car is not fixed but depends on how the driver is driving; it requires behavioural control and can change frequently, whereas the drive-shafts are fixed and have behaviour dependent on the structure. How do we translate this to software? Suppose we have a simple application with 2 dialogs, one for normal users and one for administrators; they have largely the same properties and perhaps one has an additional option on it. When we want to query the properties set by the user in the dialog, we could say:

if ( Dialog.Name == "Admin" )

or we could think we are being slightly cleverer by using the pointers/references:

if ( AdminDialog != null )

Sure, it's not the end of the world, but the first example has a built-in assumption that the dialog has a name that is not going to change, and both examples assume that there are only two dialogs. If you add another type, the code in both of these examples will still compile. This is a structural situation, since once the dialog type is defined, it presumably stays set until the application is 'rebuilt' or restarted for another user. In this case, we could create a base class or interface for our dialogs, keep a pointer to the base class, and lose the logic:

MyDialog.DoSomething(); // Will call whatever is currently attached

Ah, you say, but I do things differently for each dialog and need the logic branches to separate them. Most of the time you do not. At the level you are handling this, you probably do not need to know the detail, but if you do, ask the dialog to do it, or ask the dialog for a handler class which can be specialised for each type, so that the caller still does not need to know what type of dialog is displayed. If you create a new type, you will need to implement the base class interface, so once the compiler is happy you will have precisely 0 structural defects in your code!
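To make the structural idea concrete, here is a minimal sketch in JavaScript (the dialog factories and the getSettings method are invented for illustration): each dialog type supplies the same method, the type decision is made exactly once at construction, and callers never branch on which dialog they hold.

```javascript
// Two dialog types sharing the same 'interface': a getSettings() method.
function makeUserDialog() {
    return { getSettings: function () { return { admin: false }; } };
}
function makeAdminDialog() {
    return { getSettings: function () { return { admin: true, audit: true }; } };
}

// The structural decision is made once, when the dialog is 'built'.
// After this, callers just call getSettings() with no type checks.
function buildDialog(isAdmin) {
    return isAdmin ? makeAdminDialog() : makeUserDialog();
}
```

Adding a third dialog type means adding a factory with the same method, not hunting down every if statement that tests the dialog's name.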
Behavioural logic relates to things that change all the time. You might receive a network message and decide what to do based on the message number:

if ( Message.Number == 1 )
// Handle message 1

Remember that although this logic might be required, you can still use design patterns to handle things in a way that does not create large and convoluted logic that is hard to maintain. You could argue that for simple cases the logic is OK, but my experience is that there is no simple case - what if you accidentally mistype the value you are checking or put a value in twice? Do you really test every single message to make sure it works?
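One hedge against mistyped or duplicated values is to replace the if chain with a dispatch table, so each message number appears exactly once as a key and unknown numbers are handled in a single place. A minimal sketch (the message numbers and handler bodies are invented):

```javascript
// Map message numbers to handlers instead of a long if/else chain.
var handlers = {
    1: function (msg) { return 'handled ping: ' + msg.body; },
    2: function (msg) { return 'handled data: ' + msg.body; }
};

function dispatch(msg) {
    var handler = handlers[msg.number];
    if (!handler) {
        // One place to deal with unknown message numbers.
        return 'unknown message ' + msg.number;
    }
    return handler(msg);
}
```

Adding a new message type is then a one-line registration rather than another branch in the logic.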

The second important area is simplifying whatever logic remains in order to make it usable. This follows on from the behavioural discussion, since the structural logic should already be hidden away in polymorphic function calls.

How do we simplify logic? Again, we need to learn how to re-factor logic and we can do things like logic reversal, so instead of:

if ( VariousConditions )
{
    Control1.Enable = true;
}
else if ( SomethingElse )
{
    Control1.Enable = true;
    Control2.Enable = true;
}
else if ( AnotherThing )
{
    // You get the idea
}

which is very common in software, you can reverse the logic and have the individual items dictate what their logic needs to be:

Control1.Enable = VariousConditions || SomethingElse;
Control2.Enable = SomethingElse || AnotherThing;

Can you see how much neater that is? Consider this as a possible refactoring whenever you see sprawling conditional statements like these.

Another technique is very simple but often underused: create a function with a helpful name and move the logic into that function (or, more commonly, functions). I can only think it is laziness that stops us creating these helper functions, which can turn otherwise impossibly complex logic statements into a collection of helpful and very readable statements:

if ( MessageIsTaggedAsUrgent(Message) || MessageIsFirstInQueue(Message) )

The logic that defines each of the two unrelated functions will make more sense in separate functions than lumped together in a single statement. Also if one type of message is not being handled, a single easily testable function can be examined.

The last suggestion, for majorly helping after everything else is done, is to use automated testing tools (many great free ones exist, such as csUnit - although I am not sure why we don't like paying for things!). I am constantly amazed at how often I am caught out by a very innocuous defect in an otherwise simple function. A function to add items to a list? How hard could that be? But let us consider what might fail even in a single function:

1) The list could be null/uninitialised
2) The item might already exist in the list, which might not be allowed
3) The list could be full
4) The item might not be the right type for the list (not always easy to trap with the compiler for un-typed collections)
5) You might be trying to insert it at an invalid position
6) The object you are trying to add might be null, and this might not be allowed

You get the idea - there are often more considerations than we can think of, so how do we cope? We don't, at least not alone. We have peer code reviews, we get trained, we get experience, we use robust languages and frameworks, and we test, test, test. How can we test the function? We can throw a whole load of automatic data at the function and find out what happens in realistic conditions, or we specifically handle the exceptions or errors that might be thrown by an invalid condition (or both). We don't necessarily care about running out of memory after adding 4 gazillion items to a list, but it might be an issue. We might instead care about what happens with a full list or a null object; we might want to test the logic by setting external conditions and calling the function. If you can't easily test the function, break it down until you can. Remember that our functions often assume too much about the parameter data or member variables they use. These assumptions, coupled with poor logic design, are defect central!
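To make the shape of such tests concrete, here is a sketch using NUnit-style attributes (csUnit is very similar); the BoundedList class and its rules - rejecting nulls, rejecting adds when full - are invented purely for illustration:

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class BoundedListTests
{
    [Test]
    public void Add_StoresTheItem()
    {
        var list = new BoundedList<string>(2); // hypothetical class under test, capacity 2
        list.Add("a");
        Assert.AreEqual(1, list.Count);
    }

    [Test]
    public void Add_NullItem_IsRejected()
    {
        var list = new BoundedList<string>(2);
        Assert.Throws<ArgumentNullException>(() => list.Add(null));
    }

    [Test]
    public void Add_WhenFull_IsRejected()
    {
        var list = new BoundedList<string>(1);
        list.Add("a");
        Assert.Throws<InvalidOperationException>(() => list.Add("b"));
    }
}
```

Each failure mode from the list above becomes one small, named test, so when something breaks you know exactly which assumption was violated.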

Unfortunately, most logic issues are not found until after release, because there are too many of them to test and many are very subtle or specific. By carefully approaching the design and build process, you will find a massive reduction in these!

Monday, 14 July 2008

Reporting Services, Margins, Page Layout etc

If you use Reporting Services, I bet you have spent a while trying to get reports to print out as expected! I'm not sure why something that wastes so much of everybody's time (printing things out properly) is not well-known and fixed in software by now - it should be impossible to get it wrong. The printer knows its paper size, the software knows the paper size, yet it prints most of your document on one sheet and then a little strip on the next - obviously what you wanted!
Anyway, in reporting services there are some quirks that you need to know about in order to get your report correct.
1) Select Report -> Report Properties and then the Layout tab. There are paper sizes and margins in here. Note that the page sizes here need to match the physical paper size. I'm not sure what other effect they have, because the designer doesn't draw them on your report design!
2) You might think that is it, BUT you then need to right-click on the grid area of the report in the layout view and choose Properties (the properties of the body of the report), and lo and behold there is another field here called "Size", which consists of width,height in measurement units (mm, inches etc) and defines the area of the report body. For some reason this is not restricted by the page size in Report Properties, and you won't know if you make it too big! This needs to equal the page size minus the margins if you want the report to fit on one sheet.

Example: A4 paper is 210mm by 297mm, so you set the report layout (in Report -> Report Properties) to these figures. Then suppose you set the margins to 10mm all the way round. The report body should be set to 210 - 20 = 190mm wide and 297 - 20 = 277mm high in portrait, or 277mm wide by 190mm high in landscape mode. It sounds really simple, but it still takes time to find these things out!

Wednesday, 9 July 2008

Why Windows Vista is pointless

Vista is a strange beast, touted as the next big thing by Microsoft (MS) and attracting much scorn from various people in the IT and business world. My own take is quite simple: Vista is an operating system, in which case it provides a 'desk' on which to run various applications or applets. Windows XP is also an operating system and also ran most things I need, so why would I upgrade?
1) If it was free then I might upgrade just to get something that looks more modern but it is not free, it is pretty damn expensive in fact.
2) OK, so it costs money; I therefore have to weigh up the cost/benefit ratio, and this should help me decide whether the big dollars are worth it. As far as I can see, for most users anyway - particularly business users, who prefer not to use bells and whistles - there is precious little value that it adds. One of the biggest selling points was a snazzier interface, but this only came with the ridiculously expensive and humorously named "Ultimate" edition, so most people didn't benefit from it. Why would people want to pay massive bucks for a snazzier interface if that is all it is?
3) It is supposed to be more secure from hacking etc, but we were convinced by MS that XP was secure, so has something happened since Vista was released that has made XP less secure? If Vista were basically unhackable whereas XP was certainly vulnerable in some areas (and perhaps there were many latent security problems waiting to be discovered), then this might be a reason to upgrade. But it isn't, and we still get patches for Vista; obviously the fundamental security model is still very lacking. Still no reason to upgrade.
4) People have complained about the lack of driver support, but this will always be the case with new operating systems and, to be honest, I think most people would bear with MS while the drivers were developed if this was the only issue with Vista. What MS seem to have forgotten is that most people in the world still use XP, so hardware manufacturers are not in a rush to write Vista drivers!
5) OK, let us assume it was not so expensive for nothing much; would we then automatically upgrade our current (probably XP) OS to Vista? Not on your life, as far as I am concerned. In supposedly going at least partly back to the drawing board to code Vista, we could reasonably have expected a load of bloat and slowness to be cut out - after all, it does not need to support 16-bit Windows applications (does it?), and there must be other stuff which is basically redundant. They could have simplified lots of the Windows API and generally made it AT LEAST as fast as XP by the time they added in some new bits, but no sir-ee-bob, it is slower - noticeably slower, except on the latest machines with 4 cores in their processor, which can pretty much handle it. The problem is, all that hidden power being used to run the OPERATING SYSTEM is not available to run what needs to be running, i.e. the APPLICATIONS. I'd rather run XP on an older machine and still have loads of power to spare for my apps.
6) All of this performance hit would be forgivable if ultimate security (which Vista is presumably striving for) always cost loads of processor cycles and memory overhead, but interestingly the latest incarnations of Linux are much more secure than Vista and run much faster too. Why? Because they have a good security model that does not require massive OS overhead to manage.
7) I've always wondered why Windows XP and Vista make generous random use of the hard disk when I have not noticed it once on Linux. Linux installs and removals are quick and painless; Windows ones can take hours! Despite Linux potentially being very flaky, with all those grubby programmers having fingers in pies, it is, I am sad to say, scoring higher on my usability list than Windows! The one thing that is lacking in Linux, but getting better all the time, is the number and quality of applications available. Office, internet and email are fine, and these are 95% of what I use anyway. The development environments are not quite up to Visual Studio quality, but they are very much usable despite this. There are even cool programs on Linux that you can't get (at least for free) on Windows, including XTrkCad, a model railway CAD program.
8) Sorry MS, you have well and truly missed the point with Vista and you probably know it so stop telling us we desperately need Vista and go back to the drawing board!

Monday, 7 July 2008

The definition of the object is hidden

Strange compiler error today in a VS2005 C# web site. I had renamed a couple of old files, copied some back into the web directory from elsewhere and hit "Build". I got loads of errors saying that fields in the C# file I copied back into the solution were not defined, even though they clearly were in the aspx file. I checked all the usual spellings, @Page names etc and still nothing. When I right-clicked the field names and selected "Go To Definition", it went into the aspx page and then gave the suitably abstract error "The definition of the object is hidden", which is fine except it was very confusing (apparently it is talking about the actual code rather than the server control on the page).
Anyway, to cut a long story long: it was because the build tool digs everything out of the web directories, including my old definitions of the pages I was building. It then linked the names to the old class definition and complained that my new definitions didn't work. All I had to do was remove the backup files from the directory (or presumably I could have renamed them to a different extension) and it was all fine.
How flipping annoying.

Friday, 4 July 2008

Request for permission SqlClientPermission failed

I am developing an XAML Windows Presentation Foundation (WPF) app which is designed to be browser only (an XBAP), and when trying to debug it using a shared database library, I got the above exception on the connection call, even though it used an identical connection string to another working part of the system. I was concerned that it was related to user contexts, sessions, authentication and other nasties that hide under the covers of a web app, but fortunately it was easier than that.
My XBAP application was marked as partial trust, which means it is more likely to be loaded without hassle in a stranger's browser if you posted the xbap on your company web site, for example. This trust level means, amongst other things, that you cannot open a connection to a database (fair enough, I guess - you could be trying to hack someone's PC). If, however, you mark it as Full Trust (under Project Properties -> Security), it will only be loaded from a trusted location - usually the local intranet - unless a user specifically allows it or turns down the security on their browser, but it does mean you can do more useful stuff, like opening connections to databases.
I switched it to full trust since this is for a corporate network and it worked fine (well in fact it came up with an unrelated error but that is another story!).

Thursday, 3 July 2008


VisualTreeHelper.HitTest is a cunning function available in Windows Presentation Foundation (WPF) classes that provides mouse hit testing for the given panel and point. It returns a result if the given point is inside a control in the given panel (grid, stack panel etc).
The basic form takes a panel and a point and returns a result which, if not null, contains a VisualHit object that can be cast to whatever control was hit.
The reason I am telling you what you could find out on MSDN is that when you pass a point and a panel into the function, it assumes the point is relative to the child coordinates of the panel; it will NOT take into account the fact that the panel is not necessarily located at 0,0 on its own parent panel. To ensure the point is passed in child coordinates, inside your mouse handler use e.GetPosition() and pass it a reference to the panel on which you will be hit-testing, rather than null or the parent panel, which would give you the wrong child coordinates. For example:
<StackPanel MaxWidth="5000">
    <Grid Name="JobsGrid" Canvas.Bottom="0" MaxWidth="5000" />
    <Grid Name="PeopleGrid" Canvas.Top="0" MaxWidth="5000" />
</StackPanel>

protected void CanvasMouseMove(object sender, MouseEventArgs e)
{
    HitTestResult Result = VisualTreeHelper.HitTest( JobsGrid, e.GetPosition(JobsGrid) );
    // etc
}

Wednesday, 2 July 2008

static or non-static?

Are static functions good or bad? Should data ever be static? Well, firstly we need to say that we are talking about OO design, not procedural code. Static in C++ sometimes meant that something had file visibility, and sometimes that a single function existed requiring no instance of the parent class to call it.
So good or bad? We should be pragmatic about this. It is rarely correct to say that there is "never any reason to use static" or vice-versa, so let us state what we know about the pros and cons of static methods.

The pros:
1) Quick and dirty (bad reason)
2) Allows a consumer class to obtain a reference to something without knowing its concrete type (e.g. encapsulated constructor) means that the consumer calls something like
IMyInterface inf = HelperClass.CreateObject();
rather than
IMyInterface inf = new ConcreteClass();

3) In the case of certain scenarios, such as database helper functions, it might seem neater and clearer to have

ClassName.StaticFunction(whatever);

rather than

new ClassName().NonStaticFunction(whatever);

or

ClassName cn = new ClassName();
cn.NonStaticFunction(whatever);

And the cons:

1) No polymorphism of function, i.e. a subclass cannot override the static method, it can only hide it.
2) If it is a non-constructor then it ties the consumer to the type of class providing the function. If you want to change the provider, you have to modify the consumer.
3) Static functions are not implicitly thread safe if they touch static (shared) state: static fields are shared across all threads, whereas instance state can at least be confined to the object a given thread is using. (Local variables are created on each thread's stack either way, so those are isolated in both cases.)
4) Static functions can gloss over a badly levelled system. If you have a class called Employee that has a static function

Employee[] GetAllEmployees();

then, levelled correctly, you should have another class - Company, say - and this new class would have a non-static method called GetAllEmployees() which would return an array of type Employee. The levelling has removed the need for the static function.

My own opinion is that I start with instance functions and only use static ones where the instance ones do not suit the situation for some reason. Up until this point, I had only ever used static functions to match existing code or to call static functions in libraries that I did not write, although I have recently been convinced that the encapsulated constructor using a static method is a good idea (not to be confused with the Singleton pattern, which is similar but only permits a single instance).

Thursday, 26 June 2008

Reporting Services Data Corruption

Not sure how you might find this post via Google, or what you would search for, but maybe: Reporting Services, corrupted data, corrupted layout, preview layout not working, array index out of bounds in preview, preview doesn't match data, grouping not working as expected.
Anyway, however you got here: I experienced some very weird stuff in a Reporting Services report, and it was related to RS caching the data for the report from before I changed the SQL for the data source. Despite pressing refresh and save and all sorts of things, I was getting very weird errors - fields appearing in the wrong place in the preview, weird error messages, and things just not appearing right. The fix? Close Visual Studio, find the folder with the report in it and delete the cached data file (the .rdl.data file sitting next to the report), then reopen. VS will requery the data and you should hopefully be alright. Sweet as.

Copy and Paste is Not your friend!!

I heard that quote a few years back from a trainer on a .NET training course, and the more time goes by, the truer it becomes. Almost all of my code errors are related to copying and pasting code and forgetting to modify it for the new location. It is not surprising, then, that there are techniques to avoid copying and pasting, or "inheriting from the clipboard" as I've also heard it called.
Firstly, if you are copying and pasting, you might well be hiding a required function. Even a simple repetition of 2 lines could be a function:
Rectangle rect = new Rectangle();
rect.Width = 100;

When copied several times (with the variables almost invariably getting renamed in the process), this is prone to bugs, even though it is surely so simple it is idiot proof!? A "factory method" is simply a method that creates an object for you, hiding the detail, and then passes it back, e.g.:
Rectangle Create100WideRect()
{
    Rectangle rect = new Rectangle();
    rect.Width = 100;
    return rect;
}

Note that although the function is 6 lines instead of the original 2, you only need to replace 3 call sites to save room, and - more importantly - there is no grey area: every single place that needs a 100-wide rectangle can call the factory method. If you need to make slightly different instances, then either call the factory method and modify the object afterwards, or pass the difference into the function and let the function customise it for you. If the items you are creating exist in another library, then try to avoid the need for a local variable by passing the create function itself directly to whatever needs the object - that way you haven't tied the consumer to the creation of the object.
The second common reason for copy and paste is when you really need another class. If you are doing two things similarly, maybe you need a base class doing all the shared work and then two sub-classes that specialise the code. Doing this means that no two blocks of code are the same, because the shared bits live in a base class in one place! Just because you think the two blocks of similar code belong in two functions in the same class doesn't mean you can't create some helper classes purely for those two functions, so that rather than:

if ( something ) Function1(); else Function2();

you can hold a reference to whichever helper applies and simply call its method - no ifs, no buts, and solid separation of functionality. You can pass in any variables these functions need from your class as constructor or function parameters.
The third common use for copy-and-paste, like the second, is when your single class actually needs to be abstracted into some sort of hierarchy. Suppose, for instance, you have a large switch statement with lots of similar code; this could be a place to use a pattern, perhaps Chain of Responsibility or the State pattern, where each case block becomes an object and the code that is the same for multiple blocks can be abstracted into base classes. Remember that two blocks of code means double the bug risk, so keeping things in one place means there is only one place to fix them!

Events and access in .Net

I was trying something today in C# (.net) that I thought might be doable, although I wasn't sure.
I wanted to pass the event from an object into another object so that the second (helper) object could subscribe to the event. I think of events as objects so didn't think this would be a problem. I added the event to my event source class:
public event EventHandler WorkDayChanged;

I then passed this event externally to the constructor of my helper class:
ColourBoundRectangle r = new ColourBoundRectangle(w.WorkDayChanged, w.GetTypeNumber);

but when I compiled, I got the error:
The event 'ResourcingBusinessObjects.WorkDay.WorkDayChanged' can only appear on the left hand side of += or -=

I didn't really understand, but here is the crack: when you use the keyword "event" on the delegate in the event source class, external objects are only allowed to add and remove handlers, even though it is public. If you want to access the event as a plain delegate object, you must leave off the word event and use (in my case):
public EventHandler WorkDayChanged;

Which compiled OK! I tried the code, but the events weren't being fired (in the debugger I noticed that the delegate was empty at the time of firing) even though I was calling += on it from my helper class. It is nice having a good debugger: I realised that because the 'event' is really a delegate, and += creates a new delegate object and assigns it back, by not passing it by ref into my helper class I was adding a handler onto a local copy, which went out of scope outside the constructor. I simply changed my code to:
ColourBoundRectangle r = new ColourBoundRectangle(ref w.WorkDayChanged, w.GetTypeNumber);

And it was all fine. Sweet.
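For what it's worth, a more conventional alternative (a sketch using the names from above; the handler body is invented) is to keep the event keyword and pass the event *source* object into the helper, which then subscribes with += against the real event - avoiding both the public delegate field and the ref parameter:

```csharp
using System;

public class WorkDay
{
    public event EventHandler WorkDayChanged; // keep the event keyword

    protected void OnWorkDayChanged()
    {
        EventHandler handler = WorkDayChanged;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}

public class ColourBoundRectangle
{
    // Subscribing against the source object means += updates the real event,
    // not a copy of the delegate.
    public ColourBoundRectangle(WorkDay day)
    {
        day.WorkDayChanged += HandleWorkDayChanged;
    }

    private void HandleWorkDayChanged(object sender, EventArgs e)
    {
        // recolour the rectangle here
    }
}
```

This keeps the event encapsulated (outsiders can only += and -=), while the helper still gets its notifications.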

Wednesday, 25 June 2008

Problems opening .net project

I grabbed a .NET project off Code Project the other day to find out about drag and drop. When I tried to open it in Visual Studio, I got an error that a project import, microsoft.csharp.targets, could not be found and therefore the project couldn't open. I had a look around the net, and even on the page I downloaded it from, but to no avail.
Firstly, I found out that I had installed the .NET Framework 3.0 redistributable rather than the full framework for building apps. This hadn't caused problems before, but anyway, I found and installed .NET Framework 3.5 and then found two copies of the file being looked for: one in c:\windows\microsoft.net\framework\v2.0.50727 and one in the new v3.5 directory. Since the project I was looking at was based on v3 stuff, I decided that the project should find the version 3 one.
I opened the project file in WordPad and found that the import looked something like this:

<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

When I checked, there was no environment variable set up for MSBuildToolsPath, even for version 2 of the .NET framework, so I went into Control Panel - System - Advanced - Environment Variables, added a new variable MSBuildToolsPath = c:\windows\microsoft.net\framework\v3.5 and OK'd out of the dialog.
I restarted Visual Studio to force it to reload the environment variables, then opened the project and hey presto. Quality.

Tuesday, 17 June 2008

Tables and Repeater Controls

Still knocking around with an ASP.NET 2.0 web site. Surprise, surprise, it is possible to make it XHTML 1.0 Strict compliant. I found one issue which was due to the way a previous coder had implemented a table with an asp:Repeater control. He had effectively written the opening <table> tag before the repeater and the closing </table> tag after it, with the relevant row markup in between. The problem arises when the repeater has no data: the page then renders an empty <table></table>, which is illegal in XHTML, and during validation it complains: "end tag for "table" which is not finished".
The solution is easy enough. Use the <HeaderTemplate> and <FooterTemplate> tags inside the repeater control to start and close the table. That way, if the repeater is empty, none of the table is rendered and no XHTML errors are produced. The header template is rendered once when there is one or more items in the repeater, and likewise the footer is rendered once, after the items. As well as rows you actually want to display (header and total rows), you can use these templates to output functional markup like the table tags.
Always easy when you know how!

Tuesday, 10 June 2008

Changing VB to C#

I thought I would share a few experiences of changing VB 'code-behind' files into C#, using Visual Studio 2005, for an ASP.NET web site.
1) Make sure you are not changing files in the wrong place - it won't work!!
2) You can leave the solution open when you do so: Go into windows explorer and rename the code file from something.aspx.vb to something.aspx.cs
3) Open the aspx markup file for the page you have changed in the solution and change the page directive to Language="C#" and CodeFile="something.aspx.cs".
4) Important! Save and close the aspx file before compiling. If you don't do this, the compiler will change things in the 'C#' file back to VB style - e.g. "namespace" will become "Namespace" and "try" will become "Try". It took me a while to find that out.
5) Open the code file, which is currently full of VB but is now expected to be C#, and do some find-and-replaces to make the syntax correct. For example, find and replace "If " with "if ( " and "End If" with "}". (Don't worry about indenting - you can tidy that up later.)
6) Hopefully you can work out most of the differences between C# and VB, but some of the funny ones are: a) VB uses = instead of == in logical comparisons - fortunately C# will complain about this! b) if VB functions take no arguments, they are called with no brackets, which in C# will cause an error that is slightly helpful as to the cause (i.e. you need ToString() instead of ToString); c) VB doesn't use braces and semi-colons for scoping code, so these will need adding.
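As a small worked example of the conversion (invented code, purely to show the shape of the changes), a VB block like the one in the comment comes out as follows:

```csharp
// VB original:
//   If user.IsValid Then
//       message = "Welcome " & user.Name
//   Else
//       message = "Access denied"
//   End If

// C# equivalent - note ==/braces/semi-colons, and + rather than & for concatenation:
if ( user.IsValid )
{
    message = "Welcome " + user.Name;
}
else
{
    message = "Access denied";
}
```

Once you have done a handful of these, the rest of the file is mostly mechanical.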


Tuesday, 3 June 2008

The pains of SQL Server 2005!

I quite like SQL Server and have used it in various jobs, but I recently had a problem restoring a backup, which failed with "There is insufficient free space on disk volume" and "restore failed for server" or suchlike, and I had to start a long foogle (find out on Google) to try and track down the cause. It is worth mentioning that the free space reported was much lower than was actually available.
It was confusing because I had successfully restored it previously but since the last restore I had installed Windows over the top of the old installation and re-installed SQL Server so I assumed it was a SQL Server setting that had been reset but couldn't find anything.
It turns out the message is quite common and, like a lot of half-arsed programs, provides a very unhelpful message that maybe sounds more reasonable once the real fault is found out, but does not really lead you there. I thought it was because SQL Server might require contiguous space on the disk, so I copied off the 35Gb backup, which took hours, and this didn't help! I also checked that the restored database would fit onto the server, since some people think the backup is the same size it will take up when restored, but unfortunately not: SQL Server pre-allocates the sizes of the tables required even if they are empty. You can fix this on the database, but you would then need to take another backup afterwards, since you can't modify the backup in this way. Since I had already restored this backup previously, the number reported (35Gb) was about right, and I had this space.
My problem turned out to be quota management, which is turned on by default in Windows Server 2003 and was preventing me from using over 30Gb of disk space during the restore, even though I was not putting the database into My Documents! Not sure why, because the quota management looked more like I had a 5Gb maximum, but some random number was obtained and displayed to me in the error dialog.
I turned off the quota management (right-click the disk in Explorer and choose Properties -> Quota) and everything was fine, except that, having copied the backup away from the server, I now had to restore it over the network, which was painfully slow.
Not sure why it is such a difficult thing, but you get a help link when you get the error, and it goes to a Microsoft site which I have yet to see offer any help at all; it usually says "there is no additional help for this error". What a load of crap. These types of problems are massively common, but rather than nailing down the help system and making it sweet and usable, we have 400,000 pages of Google to try and filter down to our specific problem - oh, and old links get broken, which is also crap.

Wednesday, 5 March 2008

Simple midi interface from Linux

I wanted to connect my electric piano (a Roland RD700) to my laptop so that when playing live, I can quickly and simply change sound settings to my favourite sounds without having to click lots of pluses and minuses on the piano itself.
I found an M-Audio Midisport Uno interface which is USB on one end and 2 midi connectors on the other - one in, one out.
I rather foolishly didn't check out the linux driver issue beforehand but was pleased to find out that the interface was theoretically supported on Linux with a third-party sound driver in OSS. I rather impulsively installed OSS and needless to say, it totally shafted my working alsa system and stopped other things working. Eventually for this and other reasons, I reinstalled Kubuntu 7.10 and read an article about installing midi support.
It seems that all I needed to do was "sudo apt-get install midisport-firmware", which brought in the drivers and fxload (a USB firmware loader), and then simply plug in my interface. It now appears in my midi hardware list, so although I haven't tried it yet, it appears to have worked fine.
On another note, check out Rosegarden. It is a sequencer and has incredible midi and notation support. It enables banks of sounds to be set up in profiles so that you can choose exactly what sound on your sound module (or keyboard) to use for each track. There are lots built in but it probably won't take much to tailor one for your keyboard if it isn't present.

Ripping in flac - the way ahead!!

After reading a post about which format to rip into, I decided that the best thing to do was to rip all audio to flac format. This is lossless, which means you don't lose the (theoretically) unimportant parts of the sound that you do with lossy compression such as mp3, wma and m4a. Of course this means that the resulting files are much larger - about 20-30Mb each - but the advantage is that you will never have to re-rip when a better format comes out in the future (if one does at all); you can simply convert your flac files to the new format, which will give exactly the same quality as if they were ripped from CD, but much, much quicker, because they are on a hard disk rather than loads of CDs.
Since hard disks are cheap now, unless you have millions of CDs, they will only take up a few 10s of Gb of space and save the re-ripping.
My plan is to put them on a server in my house and use the full quality files for playing in the house and then convert them to m4a format for my mp3 player and laptop.
By the way, KAudioCreator will not rip to flac straight out of the box, even though it is theoretically set up to do so. You need to install the flac encoder, which you can get simply with the command line "sudo apt-get install flac". If you don't have it installed, when you select flac it will start ripping and then fail when it tries to encode the file.