Tuesday, December 20, 2011

The architectural success of any solution is directly related to the capabilities of the people who deliver upon it

One thing that an Architect must never forget is the capabilities of those around them. Over-complicate your solution to adhere to ridiculously high standards, or to lengthy, time-consuming review and approval processes, and it will more than likely fail. Your vision must be translatable into modular, workable pieces that can be designed to integrate seamlessly into the larger picture as robust, reusable and deliverable software components. In other words you must ensure your designers and developers are able to design, develop, test and deploy their work in as simple and efficient a manner as possible. Too much fluff leads to too much uncertainty and guesswork; too much complexity and the project timeline will be half-over before you’ve got any code running on a build server.

Being a good architect requires (amongst many other skills) the ability to choreograph a high-wire balancing act: producing a solution that solves the business problem, meets the business functional requirements, meets the quality standards of the client’s IT department (and yours), delivers on time and, most importantly of all, can be delivered with the people you have at your disposal. To blindly assume every developer is as competent as you are, or better, is to fail before you’ve even started.

A good test for assessing how easy or complex your delivery environment has become is the ramp-up time for a new starter. If a new person needs to spend a considerable amount of time building their workstation, asking numerous questions of co-workers, or performing operating system tweaks and custom software installs just to get to the starting line, then something is already going wrong.

Anyway, rather than waffle any further, let me present a story from the trenches to illustrate the point.

Early in my career I cut my teeth on a highly complex, bespoke CRM application that went through a number of versions and iterations over its lifetime. This application was massive (and I do mean massive): at its peak it had over 100 developers, more than 1,000 CRs and hundreds of Use Cases. As it grew bigger with every release, more architects got involved to assist with the delivery of the enhancements. This injection of architects (all with their own opinions and persuasions) caused complexity to go through the roof, as they deemed that new architecture frameworks were necessary to meet the increasing functionality demands of the business. Wild ideas ran rampant, and various splinter groups went off on tangents creating their own processes and standards as they all strove to create the perfect architecture framework. A fall-out of this was that highly complex co-existence measures were required to ensure the new frameworks still worked with the previously established ones – primarily because re-platforming was not an option. Knowing that delivery complexity was increasing exponentially as a result of adopting these new frameworks (and fearing that the technical designers, developers and testers were struggling to understand their vision), the architects started micro-managing every minute detail of the solutions being delivered to ensure they conformed to the numerous standards and processes they had created. They established review and QA stage-gates to measure and ensure project delivery remained compliant with their architectural vision, and lengthy review and approval processes to govern the production of key deliverables.

Starting to see the tidal wave building now? The guiding mantra of simplicity and efficiency effectively went out the window at this point.

Approval processes for small changes took weeks to finalise, and developers who could not grasp the complex processes started to under-perform and produce poor code. This in turn caused numerous defects, blew out testing times and sent projects well over budget, causing late nights for everyone concerned (the record was set by a deployment architect – 40 hours straight; I only managed 24, as it was around then that the code started to dance on the screen and I developed muscle spasms around my eyes). In short, the desire to gold-plate and genericise every aspect of the delivery put such high demands on developers that they began to crumble under the strain. Delivering new business initiatives on the new architecture frameworks became impossible within the time-frames established in the past for similarly sized initiatives. The large numbers of developers simply could not grasp the complex coding structures, frameworks and processes set by the architects.

Delivery costs are now increasing. Try explaining to the business why you need 30% more time to deliver a CR than you did in the past for an equivalently sized piece of functional change. Architectural purity? Good luck!

Compensating for the increased delivery times, and the exponentially increasing complexity, meant the estimates produced for new projects blew out by ever-increasing margins, because no-one really knew how it all hung together. Tech Leads built in large amounts of contingency as they were afraid of not making the deadlines with the teams they had at their disposal. Delivery resources who had been on the project for a number of years were fought over at the resourcing table by PMs, as no-one wanted to take on a “new guy” because they knew they would not cope. And by and large they didn’t, so those who were competent were kept in their roles because they could be relied upon – to the detriment of their own promotional aspirations.

Now a “hero” culture has been created.

So as the estimates kept growing, fights broke out between the PMs, technical designers and the architects, as those at the coal-face could see the process was not working. The architects struggled to translate their vision and ideas; a lot did not really know the intricacies of the frameworks, and others could not see how it had all become so difficult. This malaise in turn caused the other groups involved in the delivery of the application (PMs, testers, DBAs etc.) to grow frustrated at the late deliveries, which were also costing them late nights to test and deploy changes to get projects over the line. The finger pointing began in earnest as everyone grew disenchanted with the entire delivery process and lost faith in the architecture team, because they could not see the value in what it was doing.

Once people lose faith in the architecture you’re on a very slippery slope

The architects, however, despite the numerous issues, could see the value in what they were doing. Their frameworks were creating a much more modular, generic, extensible and reusable enterprise architecture platform. It was highly configurable, well-documented and the code (at least at face value) looked very well-written, as they had condensed so much of the common functionality that developers only needed (in theory) to fill in the code logic gaps. Unfortunately, to everyone else, all the new architecture frameworks did was make everything take longer to deliver and more complex to test and deploy than it had been on the older frameworks. In other words, it’s not that it was fundamentally flawed and would never work; it’s just that it was too clever for its own good and for the capabilities of the people tasked to deliver on it.

So what did I learn from being involved in all of this?

Always consider the simplicity and efficiency of the architectural decisions you make as they are interpreted and communicated down the chain to the delivery, testing and deployment teams. Always keep in mind the capabilities of those who will implement them and the SDLC process they must follow to deliver them. Compromises must always be made in order to deliver IT projects successfully; knowing where and how best to make them unfortunately only comes with experience – you can’t teach it.

And finally, always remember when choreographing your high-wire act not to set the rope so high that it endangers your team, nor so low that it provides little value for your audience. Better make sure you have some safety nets in place to catch those that are likely to fall, too.

Thursday, October 20, 2011

How I fell in love with the stored procedure all over again

There was once a time when getting data in and out of a database involved embedding SQL script directly in code. It was a horrible, ugly way to extract data, prone to error and more often than not a drain on performance, not just through inefficient connection handling but also in the parsing of the raw SQL itself on every call. Not to mention the serious security flaw of SQL injection attacks, which burst onto the scene with the uptake of the web as the 1990s drew to a close and are still highly prevalent today.

So why the stored proc?

To address these issues, many software systems started being architected more and more with stored procedures as the standard API into the database. Besides the obvious performance benefits there were security benefits too, as the use of parameters when calling them meant that SQL injection attacks had nothing to latch onto. In short it was a win-win, and as their use grew more widespread, support for them in coding frameworks grew along with them: from the early days of COM-based ADO with Microsoft Visual Basic, through to ADO.NET with .NET, and of course the various incarnations of JDBC amongst other languages. How data was extracted and processed was now controlled within the database itself.
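To make that concrete, here’s a minimal sketch in Java/JDBC of what calling a parameterised stored procedure looks like (the procedure name, connection string and column name are hypothetical, purely for illustration); the parameter value travels separately from the SQL text, which is why an injection attack has nothing to latch onto:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class CustomerLookup {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details and procedure name, for illustration only.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=crm", "app_user", "secret");
             // The SQL text is fixed; only the parameter placeholder varies.
             CallableStatement stmt = conn.prepareCall("{call usp_GetCustomerByEmail(?)}")) {

            // The value is passed as data, never concatenated into the statement,
            // so malicious input cannot change the shape of the query.
            stmt.setString(1, "alice@example.com");

            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("customer_name"));
                }
            }
        }
    }
}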

For all intents and purposes, leaving this domain of control in the database was a very good idea. Stored procedures, after all, get pre-compiled and optimised for performance within the database (well, at least in the bigger enterprise-styled ones they do), which made for a very fast and efficient way to pull data in and out. But like any service-based layer it is only as good as the code and architecture behind it, and as over-zealous teams and budget-cut projects relegated the programming of stored procedures to application developers rather than database specialists, the results were often disastrous. Stored procedures would get bloated and use inefficient data-processing SQL statements, a situation only exacerbated if the underlying database design was poor. Projects that had good DBAs and database developers did not suffer the same fate, of course, and it was soon realised that on large development projects a dedicated database programmer for stored procedures was essential.

But ultimately, as development processes matured, the goal of maintaining an architecturally pure, service-based façade around a data layer became achievable. This of course is in line with modern design best-practices for software systems. To complement these advancements, frameworks started arriving that auto-generated the code around stored procedures to make their usage even more efficient. Writing code to call stored procedures was, after all, a laborious process, so any tool that could auto-generate the code-wrappers to call them saved a great deal of time. As these frameworks got more and more sophisticated, they matured into what are now known as Object Relational Mapping tools. These were heralded with great fanfare as they could bypass stored procedures altogether and instead enable the creation, in code, of a strongly-typed object model based on the tables and relationships in the database. Architects and developers flocked to them: no longer would there need to be a dedicated SQL programmer on a development team; architects could design the database (mostly) themselves and a tool would write all the code so developers could use the data structures in code with all the relationships maintained and protected. Hibernate, NHibernate, LLBLGEN, CodeSmith and many other tools flourished, and their uptake became so popular that almost overnight it seemed inconceivable to run a development project without them. Development teams would claim upwards of 30% in reduced development times, some even higher, and even justifications for why DBAs were no longer needed were thrown around by architects – ridiculous, I know, but I did hear it!

But then things got ugly.

Because creating a framework that is generic enough to manage complex relationships in code, based on a well-normalised relational database, gets very, very difficult the more joins you introduce. Underneath that lovely, generated, strongly-typed object model you still need to get the data out of the database – and that means SQL has to be written. So the framework needs to generate that SQL for all the possible relations and permutations, and those statements get really big, really quickly, the more joins you make. Some of the biggest SELECT statements I have ever seen have been produced by Microsoft’s LINQ to SQL and the Entity Framework; the same goes for NHibernate and LLBLGEN, although the latter two are usually a bit better. All of these statements have to be parsed and compiled by the database every time they run, and that means the database starts taking a lot longer to get data in and out. Lazy lookups and delayed execution have been introduced in these object models, which has helped address the immediate performance issues, but can you imagine, being the architect, having to explain to the DBA at your client why a SELECT query with three joins needs to be printed on an A3-sized piece of paper for the system you have just crafted?
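As a hedged illustration (the entity names and mappings are hypothetical; Hibernate’s Session and HQL are assumed), one innocuous-looking line of query code can fan out into exactly the kind of monster statement described above:

import java.util.List;
import org.hibernate.Session;

public class OrderReportDao {
    // Hypothetical mapped entities: Customer -> orders -> lineItems (not defined here).
    @SuppressWarnings("unchecked")
    public List<Customer> loadCustomersWithOrders(Session session, String state) {
        // One short, readable query in code...
        return (List<Customer>) session.createQuery(
                  "from Customer c "
                + "join fetch c.orders o "
                + "join fetch o.lineItems li "
                + "where c.state = :state")
            .setParameter("state", state)
            .list();
        // ...which the ORM expands into a single generated SELECT joining the
        // CUSTOMER, ORDERS and LINE_ITEM tables and aliasing every mapped column
        // of every entity: the A3-sized query the DBA ends up asking about.
    }
}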

So where does this leave us?

Well, put simply:

  • Don’t stop using stored procedures.
  • Do stop trying to build complex object models of database relationships, tables and structures in code.
  • Only use ORMs to manage calling stored procedures.

But admittedly this is not doing ORMs justice. They have their uses and their place, but they are not the total solution for pulling data from a database that many in this industry would have you believe. They work well for simple table inserts and updates, and when complex joining operations are not being performed. For small applications, or ones where calls to the database are infrequent, they are also very suitable. They cut down on the amount of code that needs to be written manually and provide safeguards around how data is managed through a strongly-typed object model.

Data caching is a big plus with ORMs. With stored procs, and indeed any calls to a database, the same static information gets served time and time again. ORMs have become good at caching data, so for static, referential data this is a big bonus, especially when volumes are large and being served to web-based applications.

Development-wise, ORMs also facilitate a much faster turnaround in getting the modelling of the data from the database done correctly, and of course the type-safety of the generated objects eliminates the plethora of bugs this always used to generate back in the days when you had to roll your own database access layer.

So what do we use: ORMs or stored procs?

Simple answer here: use the right tool for the job and follow some basic rules:
  • Big bulk inserts, lots of complex joins and database logic required – use a stored proc.
  • Simple CRUD apps, lots of reads of static referential data – use an ORM.

And yes, the two can coexist if you architect the design well enough, because no single approach can ever be a blanket answer for an IT system when it comes to managing transactional and referential data.
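As a closing sketch of that coexistence (all names hypothetical, mixing Hibernate for the ORM side and plain JDBC for the stored-proc side), a single repository can quite happily apply both rules:

import java.sql.CallableStatement;
import java.sql.Connection;
import org.hibernate.Session;

public class ProductRepository {

    private final Session session; // ORM session for the simple CRUD side

    public ProductRepository(Session session) {
        this.session = session;
    }

    // Simple read of static referential data: let the ORM do it (and cache it).
    public Product findProduct(long id) {
        return (Product) session.get(Product.class, id);
    }

    // Big bulk operation with database-side logic: hand it to a stored procedure.
    public void archiveStaleProducts(Connection conn, int daysOld) throws Exception {
        try (CallableStatement stmt =
                 conn.prepareCall("{call usp_ArchiveStaleProducts(?)}")) {
            stmt.setInt(1, daysOld);
            stmt.execute();
        }
    }
}
// Product is a hypothetical ORM-mapped entity class, not defined here.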

Thursday, July 14, 2011

Gold-plating or the Curse of the Architect

No solution should ever be engineered to be so technically complex, or genericised to the nth degree, that it becomes virtually impossible to redevelop, extend and maintain. While your years of technical experience may have made things that once seemed complex now seem easy, the same is not true of those in your team, who are likely to have much less experience than you. The same applies to the process and method you must implement, which extends across gathering and documenting requirements, designing the software, developing it, building it, testing it, deploying it, maintaining it and so on, and making sure it all integrates seamlessly to deliver what is required on time and on budget. If only a select few, or no-one at all, “gets it”, then you’ll fall behind the moment you start. Communication is the one key attribute you have to master; being able to communicate what must be done in clear, simple language that is easily understood by all is a fundamental skill for an architect.


Complexity will bind you

Create too many stage-gates or too many cumbersome, lengthy review and QA cycles; fail to clearly specify the deliverables, who owns them and how they align to the methodology and project plan; or enforce a tightly coupled, rigid developer environment with no automation of quality and far too much room for creative thought, and things will fall apart. Nowhere is this more pressing than in the offshoring of software development. The method, the process, the standards – they must be so well-defined and translatable, from the architecture and the requirements right down to the lines of code, that the concept of the “code factory” can actually be realised. But more on that in another article.

Consistency will save you

You must make sure the solution is designed and broken down into components that can be easily understood by designers and developers, so that they ultimately become reusable, testable and maintainable. Make sure that every single artefact is produced in a consistent fashion. There is no shame in creating more components within a solution if it improves the overall simplicity and consistency of the design and development process. In fact it may end up being quicker to produce than alternative approaches, because a simple and efficient process, once ingrained and embedded in the minds of those following it, becomes innate, repeatable, measurable and predictable. Make aspects of the solution, or the process to produce it, do too many things and it will quickly grow out of control, because you will lose track of where and how things are being done. If consistency is inherent in everything you do, changing things is simple. A highly modularised design is easier to modify and extend than one which is tightly coupled, cumbersome and inconsistent from one software layer to the next. We’ve all heard of the importance of architectural patterns, and no doubt we’ve all read the work of Erich Gamma and co; one of the principles that underpins that way of thinking is consistency.

A saying I picked up early in my career as a junior developer, from a highly skilled if somewhat socially inept architect, is one I have never forgotten: “I don’t care if you make mistakes; all I care about is that if you do make them, you make them consistently. Consistent mistakes we can fix; inconsistent ones we cannot.”

Minimalism will break you

There is a perception amongst many architects and developers that being as minimalist as possible, by cramming as much complexity as they can into the artefacts they produce, is somehow conducive to creating a highly elegant, functioning application. It isn’t. Unless you are blessed with a team of people as smart as yourself it will not work, because fundamentally all IT projects are produced by humans, and humans all think differently. Know your team’s capabilities, know the expectations of the client, and create processes, standards and a solution that meet these requirements in a simple and consistent fashion, and you will be successful. Your worst enemy is always yourself: over-think, over-engineer or over-complicate it for your own ego’s sake and it will fail. You can sometimes get away with it on a small project (under $500,000 AUD), but you won’t on anything at or beyond $1M AUD.

Owning the failures and sharing the success = respect

Back yourself and your judgement. Be confident in your decisions and people will buy in to what you are selling; be cagey, un-cooperative and aloof and those below you will lose faith in the directions you set. There is no shame in being wrong or not knowing all the answers; just be accountable for your mistakes and learn to accept that you are not always right, and you will be amazed at how well things turn out. Don’t be afraid to stick your neck out and take responsibility when you fail. Because you will fail. The most important thing is the way in which you handle and respond to it. Start pointing fingers, shouting and blaming others and you will lose respect. Own the response to fix the problem, commit yourself and always tell the truth, even if it hurts to do so, and you’ll be respected.

How to sum it up? Why, quote a luminary of course.

I am both a victim and a perpetrator of what Frederick Brooks describes in this quote; bookmark it and remember it, to keep yourself grounded:
An architect’s first work is apt to be spare and clean. He knows he doesn’t know what he’s doing, so he does it carefully and with great restraint.
As he designs the first work, frill after frill and embellishment after embellishment occur to him. These get stored away to be used “next time.” Sooner or later the first system is finished, and the architect, with firm confidence and a demonstrated mastery of that class of systems, is ready to build a second system.
This second is the most dangerous system a man ever designs. When he does his third and later ones, his prior experiences will confirm each other as to the general characteristics of such systems, and their differences will identify those parts of his experience that are particular and not generalizable.
The general tendency is to over-design the second system, using all the ideas and frills that were cautiously sidetracked on the first one. The result, as Ovid says, is a “big pile.”

Monday, July 4, 2011

Common Information Model, Canonical Schema, whatever you call it, just do it. Always.

Whatever name you apply to it, for any software being developed – be it a custom ground-up build or a piece of integration middleware – one of the first and foremost tasks of any designer is to model the data that the system is going to use and the structures and relationships that compose it.

Before you start creating your sequence and activity diagrams, survey the domain of the problem you are trying to solve. Look at all the unique pieces and groupings of data that will be used throughout the software layers and interfaces, how the host systems categorise and organise relationships, how the business requirements reference and refer to it, and so on, and use that information to create the model. You’ll get a lot of this from the use-cases being constructed (if they are thorough enough) and also from system interface specifications, database structures and screen layouts. If you’re lucky the client may have already done this task on a previous project, and hence you may be able to leverage the work already done; sometimes you can find evidence of it within the enterprise architecture, although more often than not in this case it will be very high-level and difficult to leverage without a lot of decomposition.

Generating the model is pretty straightforward; you can use any modelling tool that is available, but try to use one that can generate code from a class diagram so that it is always easy to keep the two in sync. My preferred tool of choice is Enterprise Architect by Sparx Systems – not just because it is Australian, but because it is simple to use, cheap and very, very powerful. Another option is to define the model in XML schemas; middleware tools such as BizTalk Server adopt this approach when defining their data schemas.

The level of modularity you build into the model is important and should take into consideration how the model can be extended and reused, both within the project you are working on and in potential future ones. One of my preferred methods for breaking a model down is to use a common database design technique known as normalisation. Once you have created the first drafts of your data model, start the process of normalisation and break it down so that it becomes more modularised and hence more extendable and reusable. The extent to which it gets broken down depends on what is appropriate for the system being built and on a number of other factors, so at least get it to second or third normal form and leave it there.
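As a rough sketch of that instinct applied to a class model (hypothetical classes, nothing from a real project), the repeating group of fields gets pulled out into its own type so it can be reused and extended independently:

// Before: a first-draft model with the address fields repeated inline.
class CustomerDraft {
    String name;
    String billingStreet;
    String billingCity;
    String billingPostcode;
    String shippingStreet;
    String shippingCity;
    String shippingPostcode;
}

// After: the repeating group is factored into its own type, so it can be
// reused by other entities (suppliers, branches, sites) and extended in one place.
class Address {
    String street;
    String city;
    String postcode;
}

class Customer {
    String name;
    Address billingAddress;
    Address shippingAddress;
}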

Once the model is defined, its usage should be permitted only within the layer it has been created for. A Business Logic Layer model should not be used in the Presentation Layer, nor in the Data Layer (each should model its own data accordingly); if it is for an integration solution, the concept of internal and external schemas should be adhered to – the principle is the same. Exposing any of the model’s entities within service interfaces should be forbidden, as the flow-on impact of a change to a model object will then not be contained within the layer itself but will impact the services that expose it as well. For these reasons, all requests should be translated to and from the data model within the services that expose the interfaces. The following diagram illustrates this concept in more detail.


Encapsulated Data Model
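To make the translation at the boundary concrete, here is a hedged sketch (hypothetical class and method names) of a service that accepts an external request type and maps it to and from the internal business-layer model, so the model itself never leaks outside its layer:

// External contract: the only types the service interface exposes.
class CustomerRequest {
    String name;
    String email;
}

class CustomerResponse {
    long id;
    String name;
}

// Internal business-layer model: never exposed outside this layer.
class CustomerEntity {
    long id;
    String name;
    String email;
}

// The service translates at the boundary in both directions, so a change to
// CustomerEntity stays contained within the business layer.
class CustomerService {

    public CustomerResponse createCustomer(CustomerRequest request) {
        CustomerEntity entity = new CustomerEntity();
        entity.name = request.name;
        entity.email = request.email;

        save(entity); // hypothetical persistence call; assigns entity.id

        CustomerResponse response = new CustomerResponse();
        response.id = entity.id;
        response.name = entity.name;
        return response;
    }

    private void save(CustomerEntity entity) {
        entity.id = 1L; // stand-in for a real data-layer call
    }
}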

Now I bet some of you read that and thought it was a load of crap? If you didn’t, good; if you did, then consider this question: why did the major database vendors start incorporating stored procedures into their platforms to control access to the data held within tables, as an alternative to making direct table access calls from code? Not sure? Because direct access was a bad idea 20 years ago and it still is now. Keeping that in mind, let us ponder another: it is both accepted fact and considered best-practice within the IT industry that all logical layers of a software system should have a boundary of controlled entry points; that these entry points must not be bound to the data structures and logical functions within, to avoid exposing data and logic (sometimes a security issue); and that the entry points should be able to be versioned and extended without impacting the logic and functionality underneath. Sound familiar? This is one of the principles that govern the implementation of service-based systems – also known as being part of a broader SOA implementation. See how what I have described above just follows the same pattern? Yes, you could avoid it on small applications where the code base is small, but if you don’t do it on enterprise-scale applications with large development and design teams you’ll be screwed, so why not follow the same pattern and just make it a habit. At times it may be a bit more work, but I believe the trade-offs are worth it.

Wednesday, June 29, 2011

IT Architecture – some of the basics

Simplicity, Efficiency and Consistency

Early on in my career I was lucky to work with some very brilliant minds. I learned a lot from these people, both good and bad, but the two main things I always remembered from working with them were:
  1. Simplicity and efficiency of the production of design, development and method are the two things that must be strived for; and
  2. It doesn’t matter if you make mistakes, just be consistent. A consistently made mistake is easy to correct, inconsistent ones can become impossible.
What I have always taken from those pieces of wisdom is that IT architecture is not so much about the constant pursuit of perfection, but more about maintaining the balance between solving the problem the client has described in the best way possible and the time, budget and capabilities of both their organisation and yours. Get too carried away with trying to create a solution that is plated in gold and you’ll most likely watch it die a death by a thousand cuts. Overcommit and you’ll be working 7 days a week; under-commit and you’ll more than likely deliver a piece of crap that gets an underwhelmed response, and you’ll be shown the door when the next piece of work is up for tender. Keep simplicity, efficiency and consistency at the forefront of your mind and you’ll be closer than you think to delivering a robust, scalable and high-value solution that will make your clients happy and give you a stepping stone to hang your career hat on.

You don’t need to be a genius

It doesn’t take a powerful brain to architect 90% of business software solutions. You just need to know what the client wants, what problem it is trying to solve, what methods and processes you need to follow or implement to deliver it, how to make it all work effectively to stay on time and on budget, and how to lead those within your teams to implement it correctly. You do need to know the technology, unless of course your role is very high-level and you’re just drawing boxes on a board, because you need to know what works and how to assist and advise the people you work with.

You don’t need to feel like you must do everything yourself.

Don’t make the mistake of trying to do it all yourself. I have, several times, but late nights and missed time with family stopped being fun after having a couple of kids, and I started to really need my weekends to get my mind into another space. Identify and know the skills and abilities of those in your team to ensure they can be relied on, and driven, to get the job done. Always get yourself a good technical architect (or several) to diagnose difficult technical issues, assist with hardware and infrastructure planning, performance testing and platform configuration, and produce the all-important proof-of-concept applications to confirm technical and design directions. You also need competent technical team leads: people who can delegate work, lead development teams, ensure software is produced in line with established standards and ensure as much automated QA tooling as possible is integrated into the build and development environments. Remember that all good architects have to be good leaders; they have to drive teams, direct people and keep them on the right path. In other words you need good social and communication skills, and yes, you do need to be across everything, so being well-organised is a must.

If you’re micro-managing everything then you are either not delegating work properly or not communicating effectively what needs to be done and how. Worse, of course, you could be working with a team of idiots – it happens to us all at one point or another – and in such cases you need to be on top of the issues and raise or escalate them as they arise.

You must manage the architectural risks and issues – to ignore them is fatal

As an architect, one of the most important things I always look to do first on any project is to identify and mitigate architectural risk, because without this exercise your entire solution could be flawed from the outset and it’s likely you’ll miss something critical that will only rear its head at a later date. The change from 32 to 64 bit over the last few years is one that has caught a lot of people out; a good one for the future will be data migration to relational database engines in the cloud, when you’ll find that certain system functions you have come to love no longer exist. Remember that as the architect you’re accountable for the solution, so always err on the side of caution. Don’t commit to changes in technology platforms, business requirements etc… without doing impact assessments first. If you see a risk, raise it, put it in a log that is visible to all relevant parties (Project Managers, your own managers, project stakeholders etc…) and ALWAYS suggest a resolution or course of action to mitigate it. Don’t just raise issues without offering a way out; it always looks more credible when you’re the one taking the lead – after all, that is your job. And finally, always follow issues up.

So why have I bothered?


I contemplated long and hard about doing this blog thing. But after a number of years in the job I forget things, and I don’t want to have to rely on just a collection of learned experiences to see out the next 10 to 20 years. So what better place to put them than online, because out here it ain’t going anywhere – well, at least not for a while – and if it helps others and doesn’t raise too many people’s bullshit flags, well then that’s a plus as well. Also, I like having my beliefs and ideas challenged.