Testing Client/Server Systems


Paul Gerrard


Systeme Evolutif Ltd.
www.evolutif.co.uk

What's different about testing client/server?

Client/server architectures allow complex systems to be assembled from components. However, multiple operating systems, changing technologies and greater architectural complexity make integration more difficult. Risks relating to reliability, performance, configuration management, security and other non-functional concerns are more prominent. None of the risks in client/server is new, but there has been a change in emphasis. Since the purpose of testing is to address risk, the emphasis of testing in client/server must change too.

The complexity of client/server also makes testing more difficult. Complex systems can be delivered faster because they are assembled from bought-in components. Testing is often estimated in proportion to the development cost, but in a typical client/server system the development cost may be small compared with the overall cost of the system. The end result in such projects is that testers are presented with more complex systems, yet the time left for testing is squeezed. In client/server, testers often have to do MORE WITH LESS!

Client/server test process

A client/server test strategy must identify the risks of concern and define a test process that ensures these risks are addressed. It is axiomatic that a problem is cheaper to fix if identified early, so the test process should be aligned very closely to the development process. Testing of a deliverable should occur as soon as possible after it has been built.

Functionality is usually tested in three stages: a unit or component test, often performed by programmers; a system test of the complete system in a controlled environment, performed by a dedicated test resource; and finally an acceptance test under close-to-production conditions, often performed by users. The objectives, techniques and responsibilities for these three test stages are directly comparable to those of test stages in more traditional host-based systems.

The changed emphasis in testing client/server is associated with integration and non-functional testing.

Integration is a big issue because client/server systems are usually assembled from around twelve components for a simple two-tier system to perhaps twenty for a complex architecture. These components are usually sourced from multiple suppliers. Although standards are emerging, client/server architectures often combine components that have never been used together before.

The sheer number of interfaces involved makes interface problems and inter-component conflicts more likely. Assumptions made by the developers of one supplier's component may contradict those made by the developers of another's. Such problems may be encountered for the first time ever in your installation, and getting suppliers to take them seriously may be difficult because, in client/server, it is the system integrator who takes responsibility for integration testing.

Performance consistently presents a problem in client/server. 'Intelligent' clients may call for and process large volumes of data across networks. The amount of code exercised across the many architectural layers may be substantial. Delays between distributed processes may be only ten or twenty milliseconds, but when a single transaction requires hundreds of network messages, the accumulated delay can be considerable.
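
As a rough illustration (the figures below are assumptions, not measurements), a 15-millisecond round trip multiplied by 200 messages adds three seconds to a single transaction before any processing time is counted:

    #include <iostream>

    // Illustrative arithmetic only: many small network round trips
    // accumulate into a noticeable per-transaction delay, even when
    // each individual hop is fast.
    int main() {
        const double roundTripMs = 15.0;        // assumed per-message delay
        const int messagesPerTransaction = 200; // assumed message count

        const double totalDelaySeconds =
            (roundTripMs * messagesPerTransaction) / 1000.0;
        std::cout << "Network delay per transaction: "
                  << totalDelaySeconds << " seconds\n"; // prints 3 seconds
        return 0;
    }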

Other non-functional issues, such as security, backup and recovery, and system administration, also present risks. What was taken for granted in a mainframe environment often presents problems in client/server.

The test strategy must address all these risks, but testing does not happen 'at the end'. Testing occurs at all stages and includes reviews, walkthroughs and inspections. Developers should be responsible for the products they deliver and should test their own code. System tests should cover non-functional areas as well as the functionality. The overarching principle is test early, test often.

Managing testing

To ensure testing gets attention, define testing as a specific activity within the development phase. Instil a regime in which components are deemed delivered and acceptable only when they have been tested without error and signed off.

Promote better test practices within the development team. Developers are often reluctant to test their own code (or incapable of doing so), so implement a 'buddy scheme' in which pairs of programmers test each other's code. If practical, put testers into the development team to obtain the benefits of independence.

When a project moves from the development phase into system and acceptance testing, the nature of the project changes. Incident logging, management and reporting become the project manager's main method of project control. Progress is measured by tests performed, errors detected, fixed and retested. A simple PC database is usually adequate for recording incidents and providing useful progress reports and statistics.
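
The fields below are a hypothetical sketch of what such an incident record might hold; the names are assumptions rather than the layout of any particular tool, but they are enough to produce the progress statistics described above:

    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical incident record: field names are illustrative only.
    enum class IncidentStatus { Raised, Fixed, Retested, Closed };

    struct Incident {
        int            id;
        std::string    summary;
        std::string    component;    // which part of the build is affected
        int            severity;     // e.g. 1 (critical) to 4 (cosmetic)
        IncidentStatus status;
        std::string    dateRaised;
        std::string    dateFixed;
        std::string    dateRetested;
    };

    // Progress reports then reduce to counting incidents in each state.
    int countByStatus(const std::vector<Incident>& log, IncidentStatus s) {
        int n = 0;
        for (const auto& incident : log)
            if (incident.status == s) ++n;
        return n;
    }

    int main() {
        std::vector<Incident> log;   // populated from the incident database
        std::cout << "Open incidents: "
                  << countByStatus(log, IncidentStatus::Raised) << "\n";
        return 0;
    }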

Without a strong configuration management and change control regime, systematic testing can be ineffective. Complexity makes this more important, but also more difficult. Ensure that a definitive inventory of components exists, that system builds are automated where possible, and that releases are baselined.

Test automation

If you are not planning to conduct formal, documented functional tests, do not buy a tool to 'do regression testing'. It will probably be a distraction and will lose you both time and money. If you do intend to implement regression tests, formulate a highly selective and economically viable regression test policy, and only attempt to create automated regression tests when the software is sufficiently stable.

Integration and interface testing using test drivers or harnesses should be encouraged. Given test-running and dynamic analysis tools, your developers can usually build test harnesses quickly. Where object-oriented languages such as C++ are used, dynamic analysis tools are essential to detect memory leaks, and they give programmers invaluable insights into the behaviour of their code. Consider developing extended or 'soak' tests of components where bespoke, low-level code has been written.
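
As a minimal sketch (the priceOrder routine and its contract are invented for illustration), a test driver of this kind calls the component's interface directly with known inputs, checks the results, and can be repeated in a loop as a crude soak test:

    #include <cassert>
    #include <iostream>

    // Hypothetical component interface under test: any bespoke,
    // low-level routine with a well-defined contract would do.
    int priceOrder(int quantity, int unitPricePence) {
        return quantity * unitPricePence;
    }

    // Minimal test driver: exercise the interface directly with known
    // inputs and check the outputs, independently of the rest of the system.
    void testPriceOrder() {
        assert(priceOrder(0, 100) == 0);
        assert(priceOrder(3, 250) == 750);
    }

    int main() {
        testPriceOrder();

        // A simple 'soak' loop: repeat the same calls many times to
        // expose leaks or cumulative failures in long-running use.
        for (int i = 0; i < 100000; ++i)
            testPriceOrder();

        std::cout << "Component tests passed\n";
        return 0;
    }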

Architectural components should be load and stress tested individually, and as a complete system, if resilience or performance is a concern. Whether you adopt a top-down, bottom-up or other integration strategy, the principle is to gain confidence in one build configuration before integrating the next components for test.

System performance testing requires considerable tool support. Load generation, response time measurement, server, database and client monitoring, and performance analysis may all be required. Although several vendors offer sophisticated test-running tools with load regulation and performance analysis facilities, no single product covers everything. A combination of proprietary test tools, in-house utilities and innovation is usually needed to build, execute and analyse comprehensive tests.
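
Where no better load generation facility is available, an in-house utility can start as simply as the sketch below, in which sendRequest is a placeholder standing in for the real client/server interaction; each thread acts as a virtual user and records its own elapsed time:

    #include <chrono>
    #include <functional>
    #include <iostream>
    #include <thread>
    #include <vector>

    // Placeholder for the real client call; in practice this would drive
    // the server through its network interface or a capture/replay tool.
    void sendRequest() {
        std::this_thread::sleep_for(std::chrono::milliseconds(20));
    }

    // Each virtual user issues a fixed number of requests and records the
    // total time taken, giving a crude response-time measurement.
    void virtualUser(int requests, double& elapsedSeconds) {
        const auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < requests; ++i)
            sendRequest();
        const std::chrono::duration<double> elapsed =
            std::chrono::steady_clock::now() - start;
        elapsedSeconds = elapsed.count();
    }

    int main() {
        const int users = 10, requestsPerUser = 50;
        std::vector<std::thread> clients;
        std::vector<double> timings(users, 0.0);

        for (int u = 0; u < users; ++u)
            clients.emplace_back(virtualUser, requestsPerUser,
                                 std::ref(timings[u]));
        for (auto& client : clients)
            client.join();

        for (int u = 0; u < users; ++u)
            std::cout << "virtual user " << u << ": "
                      << timings[u] << " s\n";
        return 0;
    }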

Performance testing and tuning tends to follow three stages. Early tests tend to expose failures caused by bugs or grossly mis-set system parameters. In the second stage, application design errors, such as inefficient SQL, are identified. Finally, when the system is reliable and the application tuned, database and server operating system parameters can be tuned to achieve optimal performance.

Testing skills

Testing is a skill that requires a disciplined, systematic approach, yet few developers, testers or users have ever received formal training in it. Training helps testers not only to test better but also to stay motivated. Users, in particular, benefit from testing training because they learn how to organise themselves, prepare more comprehensive tests and be more critical of the systems they are expected to accept. If RAD is used, responsibility for testing shifts from the developer to the user, so user testing skills are essential.

Whether training courses are used or consultants hired to provide support and skills transfer, the reduction in client/server risk ensures a rapid payback.