'Software Development' Archive

Sticking with interfaces that work

October 9th, 2007

I’m often viewed as a luddite when I suggest avoiding fancy-pants, gadgety interfaces for software applications. A fictional example? SmartCompany wants to build an application that allows team managers to take the score sheets from their sports team home, scan them, store them online, and then (insert features here)…

High-level meetings like this get me thinking about interfaces. In this fictional example, paper is the interface and it currently works just fine. Why mess with it?

The problem I see over and over again is failing to recognize these successful interfaces and instead assuming that they pale in comparison to the new technological ones you’re planning to create and sell. That’s often not the case. At least for a minute, take a step back and assume that the existing interfaces are there because they just plain work.

A good example of this thinking is ScanCafe. I haven’t tried them out, I’ve just read about them at CoolTools. What I like is that instead of building an application that requires users to migrate to a new interface (digital cameras, scanners, etc.), they’ve stuck with what users already know how to work with (prints, slides, negatives, mail, etc.).

“Here is how it works: You pack up your images and mail them to ScanCafe’s headquarters in Northern California. They count them up, and repackage them before shipping the pieces to India. In India they are scanned, touched up, rotated and then privately posted to your account at their website. You then go through the images online and select which ones you’d like to keep.”

Open Sourcing

August 14th, 2007

Mark Shuttleworth, if you’re reading this, please contact me ASAP! Ok fine, it was worth a shot.

As we delve deeper into creating open source projects, I suppose I’ll just have to do the research myself. I like what I’m reading about Mark, Ubuntu, Canonical, etc. Canonical seems to be thriving with a cool mix of open source and proprietary projects.

It appears we share some heritage as Canonical “originally started as a wholly virtual organisation, all of the employees working from home. With no traditional office space at all”.

Prototypes and Crappy Software

June 19th, 2007

Typically in software, when someone asks you to build them a prototype what they’re actually saying is “I have little to no cash”. I’m cheap myself so I love the idea of building things quickly for little cash. The sh*t hits the fan when the person asking doesn’t truly understand what they’re getting.

There are those rare-to-non-existent occasions where they’re actually saying “I know exactly what I’ll end up with when I ask you to meet these unreasonable deadlines, but just build it fast”.

The longer the time horizon gets, the more expensive building software this way becomes. So if you build it, get it into some sales meetings or into some customers’ hands, and the concept is proven flawed so you either abandon the business idea or significantly refactor it, then perfect.

You just saved yourself a ton of cash and are a genius. Clearly building a so-called enterprise grade product would have been a very expensive mistake. Of course you also lack a business but let’s ignore that for now. It was good learning, it was better to know sooner than later, and it didn’t cost you much.

Short time horizon, cheap.

This will sound strange, but the worst thing that can happen from a purely software perspective is the opposite: you hit a home run with a valid money-making business idea and your software ‘prototype’ proves it. Now you start tacking things on, expanding, etc. Two years later, you’re almost in the black and you’ve got an actual working business, but somehow your ability to progress technically has ground to a halt. You and your team have stopped dreaming up cool ideas because you’ve been conditioned to believe that technically they can’t be done, regardless of reality.

What happened? You delivered a prototype to prove a business concept and get some quick cash in the door. That prototype then became your product, and eventually an unmaintainable code base.

What does unmaintainable mean? In most cases, unmaintainable means unreasonably expensive to make even the slightest change, whether that’s adding new features, handling more volume, etc.

Well, there’s no such thing as unmaintainable, right? Anything’s maintainable with a sufficient pile of cash? Sort of, but it’s uglier than that. One example is people. Smart, talented developers will flee from code bases like this; I guarantee it and have seen it over and over again. So now you’re paying a lot of money to bad developers to tack pushpins onto your ball of rubber bands.

Long time horizon, expensive.

I wish I knew where I copied this quote down from… “being first to market and developing the application as rapidly as possible often comes at the expense of long-term maintainability. In addition, these goals require different sets of skills, quality control practices, and management mentality”

Reread the above, paying close attention to the role of the word “often”. Ah, a glimmer of hope. So can you do both? Can you build as rapidly as possible AND deliver long-term maintainability?

Short answer: no. Trust me, it’s just simpler for everyone involved if you take that answer and don’t read any further. Still reading, eh? Ok, so I will tell you that yes, you can have both, but it’s so rare and takes such an obscure combination of people and skills that it’s just safer to work on the assumption that it can’t be done. How do you do it? Well, as Hammy Hamster would say, that’s another story…

Code Reviews

May 31st, 2007

I’m looking into moving some of our projects to some form of tool-assisted code review. The intent is to steer clear of traditional ‘big company’ reviews and all the emotional baggage that goes along with them. The goal of our reviews will still be better quality, more maintainable software; almost equal to that, however, is generating team collaboration in the context of actual code.

This is about mistakes and finding them; however, that cannot be a negative experience. To state it the other way around: he who makes the most mistakes wins. This will be about finding mistakes as early as possible, learning from them, and moving forward. If you fail on that part and drift into using the number of mistakes as a carrot or stick, then you’ll build development teams that are very good at hiding their mistakes, which is terrible for everyone involved.

Bugs are good. Mistakes are good. If you aren’t finding mistakes then something’s broken: your testing could be broken, or your developers may be hiding or not fully disclosing bugs. Another, equally bad, possibility is that your developers aren’t pushing themselves and are instead only doing what they are 100% certain they can safely deliver. If that’s the case then I guarantee you’ll be sitting in a meeting someday soon titled “How do we get our developers to innovate?”.

I’d like to try an open source product but at this point I’m most impressed with what SmartBear has. Some related links:

http://smartbear.com/docs/BestPracticesForPeerCodeReview.pdf
http://www.ganssle.com/Inspections.pdf

and a free book offer…
http://smartbearsoftware.com/codecollab-code-review-book.php

Python Notes

March 30th, 2007

I’m taking Python, the language not the non-venomous snake, for a test drive of late. One of Python’s core tenets reminds me of something some guy once wrote. Mister Python says:

“By philosophy, Python adopts a somewhat minimalist approach. This means that although there are usually multiple ways to accomplish a coding task, there is usually just one obvious way, a few less obvious alternatives, and a small set of coherent interactions everywhere in the language.” (quote from Learning Python)

I wish I’d written that in that previous post because it’s certainly another perk of building the smallest API footprint possible. Doing so makes the framework simpler to use and accelerates a developer’s ability to become familiar with it.

Is Your Software Doomed?

March 29th, 2007

Leon over at secretGeek put together a way for you to check.

Unit Tests <> QA

March 15th, 2007

Based on this post, I’m guessing that secretGeek isn’t a hardcore fan of test driven development (TDD). I’m actually unsure whether I’m a fan of pure TDD, as I’ve never adhered to it in the “pure” sense of always writing a test before writing code.

What I can say is that I’m a huge fan of test driven bug fixing (TDBF), and yes, I did just make that up; catch phrases really are that simple. What’s TDBF? When you find a bug, in development or QA or whenever, you start by writing a unit test to reproduce it.

Here’s why I’m a fan of this. What’s the generic process for fixing a bug?

1. Reproduce: Hack away in a GUI, web app or debugger until you can continually reproduce the bug in a controlled environment.
2. Research: Once you can reproduce it, it’s recon time to understand the bug.
3. Solve: Design an appropriate solution and implement it.
4. Validate: This consists of doing exactly what you started with: trying to reproduce the bug by hacking away in a GUI, web app or debugger, etc.

The first and last steps are almost identical. You will likely add to the validate stage, possibly testing in varied environments, etc., but no matter how you did the first step, you will have to reproduce it again at the end. So why not start by capturing it in code? Do that and you’ve not only accelerated the last step, you now have a repeatable means of ensuring this bug doesn’t exist tomorrow or a year from now.
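
To make that concrete, here’s a rough sketch of TDBF in C# with NUnit. The Invoice class, its AddLineItem method, and the refund bug are all made up for illustration, nothing more:

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Hypothetical class under test: Total has a bug that skips negative (refund) amounts.
public class Invoice
{
    private readonly List<decimal> amounts = new List<decimal>();

    public void AddLineItem(string description, decimal amount)
    {
        // Sketch only: the description is ignored, we just track amounts.
        amounts.Add(amount);
    }

    // Buggy on purpose: refund (negative) line items are silently dropped from the total.
    public decimal Total
    {
        get { return amounts.Where(a => a > 0).Sum(); }
    }
}

[TestFixture]
public class InvoiceBugTests
{
    // Step 1 (Reproduce), captured as code: this test fails until the bug is fixed.
    [Test]
    public void Total_IncludesRefundLineItems()
    {
        var invoice = new Invoice();
        invoice.AddLineItem("Widget", 100m);
        invoice.AddLineItem("Refund", -25m);

        // Step 4 (Validate) becomes re-running this exact test, and it stays
        // in the suite afterwards as a regression guard.
        Assert.AreEqual(75m, invoice.Total);
    }
}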

My take on unit tests is that you are capturing debugger statements in a usable form, NOT performing any form of quality assurance. Unit tests should never ever ever ever ever be considered a replacement, in any form, for QA.

Think of this in terms of development cost. Every time you pay a developer to fix a bug without using unit tests, you are paying that developer to write a lot of code that escapes into the void. Sure, you may not think of the act of reproducing a bug as writing code, but it’s a small shift for it to be just that. Without that shift, this ‘code’ is never checked in anywhere; it’s likely not even saved in any fashion. In a sense, you are throwing away valuable code.

So I agree with secretGeek on that point; however, the flaw is in allowing unit tests to be viewed as replacing QA. It’s not a flaw in TDD itself.

Will you have to refactor your unit tests? Definitely, and if you’re going to have unit tests then I suggest you stop thinking of them as some appendage. If you’re building a software product and you’re using unit tests then you must think of them as being part of your product.

Flexible Code = Expensive Code

March 8th, 2007

I got into a conversation today about code bases and building them to be flexible. An overly simplified example is constructors.

Let’s say we’ve got a Person class in Person.cs. Person has properties for Name and HairColour, and those properties have setters and getters. So conceivably we could make Person more flexible by adding upwards of 4 constructors:

Person()
Person(string name)
Person(string hairColour)
Person(string name, string hairColour)

So now you can use the empty constructor and set the properties, or call one of the other three. (Ignore for a moment that in real C# the two single-string overloads would collide; the example is only illustrative.) More flexible to use than having a single empty constructor. Flexible = good?

Sure, but flexible = expensive as well. It’s hard to see in this overly simplified example, but by adding this so-called flexibility we’ve increased the number of available code paths, all of which we’re on the hook to develop and support, which translates directly into cost, i.e. cash.

As well, by creating more code paths we’ve increased our chances of introducing bugs, and more importantly we’ve made it more challenging to hunt those bugs down. Why? Again, a simple example:

// path 1
Person me = new Person();
me.Name = "duder";

// path 2
Person me2 = new Person("duder");

Clearly path 1 and path 2 will produce identical results, right? Well no, not necessarily. They’re unique code paths. The second constructor could be writing directly to the private field where Name is stored instead of going through the Name property’s setter. Someone may have incorrectly added name-related logic to the path 2 constructor instead of to the Name setter.
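
Purely for illustration, here’s one made-up way the path 2 constructor could quietly bypass logic living in the Name setter; the trimming is just a stand-in for “name related logic”:

public class Person
{
    private string name;

    public string Name
    {
        get { return name; }
        // Path 1 runs this logic because it assigns through the property.
        set { name = value.Trim(); }
    }

    public string HairColour { get; set; }

    public Person() { }

    public Person(string name)
    {
        // Path 2 writes the private field directly and silently skips the trimming.
        this.name = name;
    }
}

Call new Person(" duder ") and the two paths no longer produce the same object, which is exactly the kind of divergence that makes these bugs hard to hunt down.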

Bottom line: the more code paths available, the more expensive the application is to build and the more brittle it becomes. Brittle meaning it’s easier to introduce bugs and harder to find them.

So flexibility in a code base is bad? No, not at all. Flexibility is important when it adds to the overall functionality of the application. It’s great to have all those constructors on Person; however, they don’t add any functionality. I guarantee you it’s cheaper to build and support that class with only an empty constructor, and it has no less functionality.

Programming Languages

March 6th, 2007

I really don’t care much anymore what language I work in. I’d be happy to work on non-MS stuff, as most know. You’d be hard-pressed to convince me that one language has something the others don’t. It’s like video game consoles: there’s always going to be a flavour of the day, but they all watch and learn from each other.

If you want to know about a language, you need to look long term at its overall architectural direction. Don’t get caught up in the day-to-day hype. Look at who’s leading the technology and whether their track record shows strong decisions that make sense over the long term, some of which may be to NOT implement certain hyped features. Sometimes the strongest decision you’ll have to make is one of non-action.

You’ll spend a lot of money buying new games and learning new controllers if you change video game consoles every year.

A Wrong Solution

February 15th, 2007

Drew passed me this link to Microsoft’s Partial Class Definitions:

“It is possible to split the definition of a class or a struct, or an interface over two or more source files. Each source file contains a section of the class definition, and all parts are combined when the application is compiled.”
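
For context, the mechanics look roughly like this; the class, method and file names are invented for illustration, not taken from the MSDN page:

// TranslateEngine.Parsing.cs
public partial class TranslateEngine
{
    public string Parse(string input)
    {
        return input;   // placeholder body
    }
}

// TranslateEngine.Output.cs
public partial class TranslateEngine
{
    public string Render(string parsed)
    {
        return parsed;  // placeholder body
    }
}

The compiler stitches the two files into a single TranslateEngine class at build time.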

Why would you ever want to do this? Well, Microsoft says:

“When working on large projects, spreading a class over separate files allows multiple programmers to work on it simultaneously.”

Ummm… pardon? Let me translate that for you…

“We couldn’t really get SourceSafe working well so instead we added this. So now you can have TranslateEngine_Dave.cs, TranslateEngine_Steve.cs, etc. Cool eh?”

Unless I’m missing something, this is a grossly wrong solution to a real problem. This is something I personally would not allow in a code base. When you have to start writing tools to parse your code base to ensure that certain language features, like goto statements and partial classes, are NOT being used, then you start to wonder about the overall direction of a language. Have the strength to say no, please.