Author  Dan Boutin - Vice President of Digital Strategy, SOASTA

November 4, 2015 | 5 min read

Though SOASTA is a performance analytics and testing company, it uses ITIL® best practice to identify the key processes in any application lifecycle and has applied them to its own development methodology. In this blog, Dan discusses a little of what the team has learned so far.

[Image: three Boston Celtics players]

As everyone who has read my blogs knows by now, I am a big Boston sports fan, so I have applied a little Boston sports history to application development. In Boston, the "Big 3" is a term associated with the Boston Celtics, starting in the early 1980s with Larry Bird, Kevin McHale and Robert Parish, and continuing from 2007-2012 with Pierce, Allen and Garnett.

Our Top 4 Key Findings from Our Development Methodology

1. Configuration management is more than version control

Without version control, there is no reliable way to know what a given unit of work contains. At any point in the development, testing and release process, the first debugging question should always be 'What changed?' A version control system helps to answer that question, but it doesn't tell the whole story.

Configuration management covers the entire process and includes not only software but hardware, tests, documentation, connection pool settings, other configuration files and more. It identifies every end-user component and tracks every proposed and approved change to it from day one of the project to the day the project ends. It ensures reproducible builds and removes the waste of manually assembling code - a key component of safety, because you can't safely deliver frequent changes if you don't know what you're releasing.

Manual system configuration exacerbates consistency problems – primarily environmental inconsistency. Developers often run different versions of a Software Development Kit (SDK) and test against system software that differs from the software running in the integration environment. That environment, in turn, doesn’t match production; the result is pure waste.

Using configuration automation and treating infrastructure as code solves this problem. A single set of configuration scripts can be used to provision development, testing and production environments; consistently deploying configuration changes across numerous environments and machines becomes as simple as checking in a configuration script change. This is one of SOASTA's internal best practices; it should be yours, too.
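To make the idea concrete, here is a minimal Python sketch of the "single set of configuration scripts" principle: one shared definition merged with small per-environment overrides, so that dev, test and production are provisioned from the same source. The settings, values and environment names are illustrative assumptions, not SOASTA's actual stack.

```python
# Minimal sketch: one configuration definition, applied to every environment.
# Settings and values are illustrative placeholders only.

BASE_CONFIG = {
    "jdk_version": "1.8.0",
    "app_server": "tomcat-8.0",
    "connection_pool_size": 50,
}

ENV_OVERRIDES = {
    "dev":        {"connection_pool_size": 5},
    "test":       {"connection_pool_size": 20},
    "production": {},  # production uses the base values unchanged
}

def render_config(environment: str) -> dict:
    """Merge the shared base configuration with per-environment overrides."""
    if environment not in ENV_OVERRIDES:
        raise ValueError(f"Unknown environment: {environment}")
    return {**BASE_CONFIG, **ENV_OVERRIDES[environment]}

if __name__ == "__main__":
    for env in ENV_OVERRIDES:
        print(env, render_config(env))
```

Because every environment is rendered from the same checked-in definition, a configuration change is just another reviewed commit rather than a manual edit repeated across machines.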

A good user experience (UX) starts here. Whether you are developing with an agile, lean, waterfall or other process, this is the foundation for the 'Big 3' and a key part of any solid DevOps environment.

2. Understand change management (aka ‘the only constant is change’)

Though seemingly simple, change management needs to be done right: potential changes to an application must be identified continually, no matter where it is in its lifecycle.

Potential changes should only originate from three places in the development of application software (a sketch of how to record this follows the list):

  • The business owner/customer who requests that the application be developed (e.g. submitting a requirement)
  • An end user, via a ticketing system (e.g. a problem report or a feature request)
  • Quality Assurance (QA), internally, during any of the testing phases - the functional and performance tests that should be executed continuously across the development lifecycle.
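One way to keep those three sources visible throughout the lifecycle is to record the origin on every change request. The sketch below is a hypothetical Python model; the field names, enum values and ticket number are illustrative, not part of any specific ITIL tool.

```python
# Hypothetical change record that captures where a change originated.
from dataclasses import dataclass
from datetime import date
from enum import Enum

class ChangeOrigin(Enum):
    BUSINESS_REQUIREMENT = "business owner/customer requirement"
    END_USER_TICKET = "end-user problem report or feature request"
    QA_FINDING = "QA finding during functional/performance testing"

@dataclass
class ChangeRequest:
    identifier: str
    origin: ChangeOrigin
    description: str
    raised_on: date
    approved: bool = False

# Example: a change raised by QA during performance testing (illustrative values).
cr = ChangeRequest(
    identifier="CR-1042",
    origin=ChangeOrigin.QA_FINDING,
    description="Login page exceeds 2s response time under load",
    raised_on=date(2015, 11, 4),
)
print(cr.identifier, "-", cr.origin.value)
```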

3. Know the difference between release management and build management

These terms are used interchangeably far too often, but the two processes are not the same. The standard definition of a release is "a set of changes or features approved for the application" - for example, change requests that add several new features, or numerous problem-report resolutions that affect the features being added or changed in the release.

A build, on the other hand, is typically an incremental set of requirements, changes and problem resolutions that, in the continuous integration (CI) world, is released and tested at each stage of the conveyor belt (e.g. functional and performance testing in Dev, Test/QA and pre-production/staging).
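A rough way to picture the distinction: a release is the approved set of change requests, while a build is an incremental artifact that travels the conveyor belt and accumulates test stages. The Python sketch below is purely illustrative; the class names, version numbers and stage names are assumptions, not a prescribed model.

```python
# Illustrative contrast between a release (approved set of changes)
# and a build (an incremental, testable artifact in the CI conveyor belt).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Build:
    number: int
    commits: List[str]                       # incremental changes in this build
    stages_passed: List[str] = field(default_factory=list)

    def promote(self, stage: str) -> None:
        """Record that the build passed a stage (e.g. Dev, Test/QA, staging)."""
        self.stages_passed.append(stage)

@dataclass
class Release:
    version: str
    approved_changes: List[str]              # change requests approved for this release
    builds: List[Build] = field(default_factory=list)

release = Release(version="2.4.0", approved_changes=["CR-1042", "CR-1055"])
build = Build(number=317, commits=["fix login latency", "add report export"])
for stage in ("dev-functional", "qa-performance", "staging"):
    build.promote(stage)
release.builds.append(build)
print(release.version, [b.number for b in release.builds], build.stages_passed)
```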

CI build failures may be relatively quick and easy to fix; however, they should be avoided where possible. Development teams should have the goal of checking in complete, correct code. The further you "shift left" in the lifecycle to find a bug, the less costly it is to fix - and it is less costly still to find bugs before check-in.

The best way to do that is to have developers and testers work together - sharing, reviewing and running each other's code - rather than treating testing as a post-coding activity. Running the test suite locally, combined with short-lived feature branches, minimizes the likelihood of discovering bugs later in the lifecycle.
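As a simple illustration, a local gate can run the same test suite CI would run before anything is checked in. The sketch below assumes a pytest-based suite under a tests/ directory, which is an illustrative choice rather than a prescription.

```python
# Hypothetical pre-check-in gate: run the local test suite and block the
# check-in if anything fails. The test command and path are placeholders.
import subprocess
import sys

def run_local_tests() -> int:
    """Run the project's test suite exactly as CI would, but on the developer's machine."""
    result = subprocess.run(["python", "-m", "pytest", "tests/", "-q"])
    return result.returncode

if __name__ == "__main__":
    code = run_local_tests()
    if code != 0:
        print("Tests failed - fix before checking in.")
    sys.exit(code)
```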

4. Achieving high-speed cycle times requires automation of more than just the test itself

SOASTA's internal approach is particularly conducive to compressing the Systems Development Lifecycle (SDLC) of web and mobile applications: CI tools and best practice applied to the key processes above, coupled with a solid performance engineering process.

When combined with Jenkins/Hudson, for example, it is possible to automate the entire process from build through test and into reporting and diagnostics. Results are displayed in a common interface, and automated regression testing can be done completely hands-off. This alone does not obviate the need for all manual testing, but it does make automation, maintenance and reusability accessible to developers and testers, helping them achieve speed with a quality focus.
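As a rough illustration of what such a hands-off step looks like, the sketch below shows the kind of script a Jenkins/Hudson job might invoke: build, run functional and performance tests, and write a report artifact in one pass. The commands, script names and report format are placeholders, not SOASTA's or Jenkins' actual interfaces.

```python
# Illustrative CI step: build, test, and emit a report artifact in one run.
# All commands and file names are hypothetical placeholders.
import json
import subprocess
from datetime import datetime

def run_step(name: str, command: list) -> dict:
    """Run one pipeline step and capture its outcome for the report."""
    result = subprocess.run(command, capture_output=True, text=True)
    return {"step": name, "returncode": result.returncode}

def main() -> None:
    steps = [
        ("build", ["python", "-m", "build"]),                                 # placeholder build command
        ("functional-tests", ["python", "-m", "pytest", "tests/functional", "-q"]),
        ("performance-tests", ["python", "perf/run_load_test.py"]),           # placeholder script
    ]
    report = {"started": datetime.utcnow().isoformat(), "results": []}
    for name, command in steps:
        outcome = run_step(name, command)
        report["results"].append(outcome)
        if outcome["returncode"] != 0:
            break  # fail fast, as a CI pipeline would
    with open("ci_report.json", "w") as fh:
        json.dump(report, fh, indent=2)

if __name__ == "__main__":
    main()
```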

Are change, release and configuration management important to you or your organization's services? Do you use these techniques or other methodologies? Please share your thoughts and experiences in the comments box below.