Thursday 9 October 2008

IASA Connections - Day 3

Day 1, day 2, and now day 3:

Scott Ambler – Practice Leader in Agile Development, IBM Software Group
• Data modelling from an agile perspective.
• Don’t get too hung up on purist agile; good techniques are good techniques regardless of where they come from.
• Software development is more of a craft or an art than engineering – hence we tend to apply some practices that don’t make things the easiest for us.
• Agile is becoming mainstream – you cannot dodge it – and data is not going to change from being an asset, so how can these things coexist?
• Data and quality are 2 words that are hardly ever seen together, yet data quality is so important.
• In most “Big Requirements Up Front” projects, 45% of features are never used and 19% are rarely used, so this is effectively a waste of over half the development budget.
• 66% of teams work around internal data groups; the most common reasons are difficulty of engagement, slow responses, and development teams not understanding the value. So data architects have work to do.
• Modelling is like anything else, it just needs to be good enough for the task at hand (repeatable results, not repeatable processes).
• If you can’t get active stakeholder participation, then cancel the project, because all that will happen is disaster.
• Clear box testing is just as important as black box testing, because you can test triggers, views, constraints, and referential integrity rather than just observable functionality.
• Do continuous integration of databases – there is nothing special about data, so why shouldn’t it be treated the same? (There’s a sketch of a clear-box database check after this list.)
• Make it easy for people to do the right thing – loads of verbose documentation and white papers are less likely to be effective than a small number of runnable assets and some 1:1 support to integrate them.
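To make the clear-box testing and database CI points concrete, here is a minimal sketch of my own; it assumes an in-memory H2 database on the classpath, and the CUSTOMERS/ORDERS tables and their foreign key are invented for illustration. A check like this can run on every build, exercising the constraint itself rather than just observable application behaviour.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class ReferentialIntegrityCheck {

    public static void main(String[] args) throws SQLException {
        // Fresh in-memory database per run, so the check is repeatable in CI.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:citest", "sa", "");
             Statement stmt = conn.createStatement()) {

            // Apply the same DDL the build would deploy (invented schema).
            stmt.execute("CREATE TABLE CUSTOMERS (ID INT PRIMARY KEY)");
            stmt.execute("CREATE TABLE ORDERS (ID INT PRIMARY KEY, CUSTOMER_ID INT, "
                       + "FOREIGN KEY (CUSTOMER_ID) REFERENCES CUSTOMERS(ID))");

            // Clear-box test: poke the constraint directly, not via application code.
            try {
                stmt.execute("INSERT INTO ORDERS VALUES (1, 999)"); // no such customer
                System.out.println("FAIL: orphan order was accepted");
            } catch (SQLException expected) {
                System.out.println("PASS: referential integrity enforced");
            }
        }
    }
}

The same idea extends to triggers, views and constraints generally: treat them as code, put their checks in the build, and a broken database change fails fast like any other regression.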

What I think
• Obviously Scott was wearing his data hat this time, but clearly the whole session was predicated on the assumption that relational databases are themselves the right solution…
• Really like the “repeatable results not repeatable processes” phrase, it is such a powerful idea – I am always battling the ‘1 process to rule them all’ crowd.
• Probably best whiteboard walkthrough of the “karate school” example I’ve ever seen.
• My approach to modelling has always been to define things in order of ‘most difficult to change later’ to ‘least difficult to change later’ so when you run out of steam the unknowns you’re left with are the cheapest ones.
• Abstractions – a lot of the examples in the talk assumed that applications access data directly; we need to think more about how to build systems that don’t need to know schema details and instead go through an object layer (see the sketch after this list).
• Totally agree with the active stakeholder participation thing: if it’s important enough for them to expect you to deliver it, then it’s important enough for them to invest themselves in it.
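As a rough illustration of that abstraction point (my own sketch, not something from the talk), the example below hides all schema knowledge behind a repository interface; the Customer and CustomerRepository names are invented.

import java.util.HashMap;
import java.util.Map;

// Callers depend on a domain-level interface and never see table or column names.
public class RepositoryExample {

    interface CustomerRepository {
        Customer findById(long id);
        void save(Customer customer);
    }

    static class Customer {
        final long id;
        final String name;
        Customer(long id, String name) { this.id = id; this.name = name; }
    }

    // An in-memory stand-in; a JDBC or ORM implementation could replace it
    // without any caller knowing that tables were renamed, split or denormalised.
    static class InMemoryCustomerRepository implements CustomerRepository {
        private final Map<Long, Customer> store = new HashMap<>();
        public Customer findById(long id) { return store.get(id); }
        public void save(Customer customer) { store.put(customer.id, customer); }
    }

    public static void main(String[] args) {
        CustomerRepository repo = new InMemoryCustomerRepository();
        repo.save(new Customer(1, "Alice"));
        System.out.println(repo.findById(1).name);
    }
}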

Dr Neil Roodyn – independent consultant, trainer and author
• A session titled “software architecture for real time systems” was about patterns for performance and performance tuning.
• The distinction between hard and soft real-time systems is important.
• Time is so often not considered in system design – we think about features and quality and failure but so little about latency.
• Automated failure recovery is so much more important in real time computing because you cannot stop to allow human intervention upon failure.
• There are some strong similarities between real time computing thinking and distributed systems thinking:
1. Consistency
2. Initialisation
3. Processor communication
4. Load distribution
5. Resource allocation
• Asynchronous processing is kind of “cheating”, as it creates the illusion that actions have completed when the responses haven’t actually come back yet.
• The 3 most important considerations in real time computing are time, time, and time (haha very good Neil).
• Common software tricks and patterns for real time systems (obviously assuming real time performance is your overriding requirement); a sketch combining a few of these follows this list:
1. Use lookup tables for decision making
2. Use fixed-size arrays
3. Avoid dynamic memory allocation
4. Reduce number of tasks in the system
5. Avoid multithreaded design
6. Only optimise popular scenarios
7. Search can be more efficient than hash
8. Use state machines
9. Use timestamps instead of timers
10. Avoid hooks for future enhancements
11. Avoid bit packed variable length messages
12. Reduce message handshakes
13. Avoid mixed platform support
14. Minimise configurable parameters
• Overall know your system and approach performance tuning scientifically, observe where time is lost and spend energy there, don’t just guess.
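Here is the sketch referenced above: my own illustration (not from the talk) of a lookup-table state machine that also uses fixed-size arrays, allocates nothing after construction, and records a timestamp rather than running a timer. The states and events are invented.

public class LookupStateMachine {

    static final int RED = 0, GREEN = 1, AMBER = 2; // states
    static final int TICK = 0, FAULT = 1;           // events

    // Decision making via a lookup table: NEXT[state][event], sized up front.
    static final int[][] NEXT = {
        /* RED   */ { GREEN, RED },
        /* GREEN */ { AMBER, RED },
        /* AMBER */ { RED,   RED },
    };

    private int state = RED;
    private long lastTransition = System.nanoTime(); // timestamp, not a timer

    void onEvent(int event) {
        state = NEXT[state][event]; // O(1), no branching, no allocation
        lastTransition = System.nanoTime();
    }

    public static void main(String[] args) {
        LookupStateMachine fsm = new LookupStateMachine();
        fsm.onEvent(TICK);  // RED -> GREEN
        fsm.onEvent(TICK);  // GREEN -> AMBER
        fsm.onEvent(FAULT); // AMBER -> RED
        long ageNanos = System.nanoTime() - fsm.lastTransition;
        System.out.println("state=" + fsm.state + ", nanos since last transition=" + ageNanos);
    }
}

The table trades a little memory for completely predictable decision time, which is exactly the kind of trade these patterns keep making.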

What I think
• When we think about SLAs for latency we have to make sure we consider time from the user’s perspective – if you have a very fast back end but it takes ages for results to render for users, then is it really high performance? (There’s a toy sketch of this after this list.)
• Even if you have a few processes in your system that need to be real-time, chances are the majority of your system does not, so don’t be afraid to mix memes because if you make the whole thing real time end-to-end you might be unnecessarily sacrificing some scalability or maintainability.
• Totally agree with points on developers needing to know more about the tin their systems run on and how this will lead to better software overall.
• I can’t help but think we’re getting lazy from our (relative) lack of constraints – back when you had 32K to play with you really thought hard about how you used that memory, and when you had to load everything from tape you really planned that storage hit…
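Here is the toy sketch mentioned above. The timings and method names are invented; the point is simply that the number the user feels is the whole round trip including rendering, not the back-end call alone.

public class PerceivedLatency {

    static void callBackEnd() throws InterruptedException { Thread.sleep(20); }  // fast service
    static void renderPage() throws InterruptedException { Thread.sleep(300); }  // slow client

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        callBackEnd();
        long backEndMillis = (System.nanoTime() - start) / 1_000_000;

        renderPage();
        long perceivedMillis = (System.nanoTime() - start) / 1_000_000;

        // The SLA that matters is the second number.
        System.out.println("back end: " + backEndMillis + " ms, user sees: " + perceivedMillis + " ms");
    }
}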

Neal Ford – Software Architect and Meme Wrangler, Thoughtworks
• Fairly by-the-book session on “SOA won’t save the world” but via the humorous and engaging analogy of Chuck Norris facts.
• “Peripherally technical manager” types are a key cause of SOA overpromising.
• Good definition of a service as a unit of coarse-grained, self-contained business functionality.
• The tension will always be on between centralised planning and tactical action, so you need to learn how to plan ahead without the planning becoming a constraint on business agility.
• Beware of “cleaning up” the messy communications paths in the enterprise by hiding them inside a box (ESB) – you still have the same underlying problems, but you suddenly have less visibility of them.
• Beware of the difference between ‘standards based’ and ‘standardised’ i.e. most vendor ESB solutions share some common basic standard foundations but the functionality has been extended in a proprietary way – so it can still mean lock in.
• Keep track of the number of exceptions you’re granting against the governance you have in place – too many and you might have excessive governance.
• Using ubiquitous language is a must; perhaps even have a formal dictionary for each project.
• The business must be involved in integration projects, not just initiate them.
• We all like a bit of ESB-bashing but they can still be useful for connecting up things like mainframes and 3rd party systems to your nifty REST/WS fabric.
• Exchanging metadata is a great way to negotiate communication parameters (reliability, security, synchronicity etc) between services and consumers (see the sketch after this list).
• SOA is undebuggable – but it is testable – so a good testing discipline is essential.
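As a rough illustration of the metadata idea, the sketch below has a consumer read some capability metadata and adapt how it calls the service. The capability names and values are my own invention, not anything Neal showed; WS-Policy is the standards-based way of expressing this sort of thing.

import java.util.Map;

public class MetadataNegotiation {

    public static void main(String[] args) {
        // Capabilities the service publishes alongside its contract (invented values).
        Map<String, String> serviceMetadata = Map.of(
            "delivery", "at-least-once",
            "security", "message-level",
            "invocation", "async"
        );

        // The consumer adapts rather than assuming synchronous, unsecured calls.
        boolean callAsynchronously = "async".equals(serviceMetadata.get("invocation"));
        System.out.println("Consumer will call asynchronously: " + callAsynchronously);
    }
}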

What I think
• The insurance industry is where I’ve worked with the most legacy out there (you can’t retire certain mainframe systems until all the policyholders die!), and the ESB attachment is just the right milk for this cookie.
• We as engineers contributed to the overselling as much as Neal’s peripherally technical managers did – I think we got carried away with the interest that a bit of technology was suddenly getting, and we were happy to be on the bandwagon because we usually struggle so hard to get technical things on the agenda – how could we not ride this wave?
• There are more benefits to SOA than reuse yet that’s all we ever seem to talk about. How about what it can do for scalability? Failure isolation? Concurrent change? Hot maintenance?
• Yes. BPEL. Terrifying.

The Big Summary
Overall I think this was a very good flagship event; my thanks to the organisers. The turnout was a good size – small enough to feel personal and allow plenty of networking opportunities, yet big enough to ensure a variety of engaging discussions.

The IASA mission is a worthwhile one, and one that I think we don’t do enough about in any technology discipline. Whether we’re talking about architects, system administrators or developers, how much time do we spend investing in our community? In ourselves as a profession? When was the last time you looked for someone to mentor? Went out of your way to share your ideas beyond your organisational boundaries? Visited a university to see what the talent you’ll have to work with in 5 years is actually learning? Investing in ourselves as an industry is undervalued, and I’m happy to be part of a group trying to address this.

If there is 1 thing I would change for next year, it would be the format of the talks. There are already enough conferences you can go to and watch someone meander through a slide deck (this was great meandering though!). If we change the focus so that speakers only use 50% of the allotted time and treat their role as setting the scene, then we could use the other 50% to get a real group discussion going on the topic at hand. I would certainly find that more valuable, and I would suggest that promoting high intensity peer discussions on tough technology and strategy issues would probably better serve the mission of establishing architecture as a profession.

I have been assured that you will eventually be able to find the slides from the formal sessions and keynotes here.

So that was IASA Connections. Tomorrow I’m off to UC Berkeley for the day, so probably best ease back on the beer this evening (it makes it harder to maintain the illusion of cleverness I’ll need). Sayonara.
