Understanding the question
Professional Scheme
Relevant to Paper 2.1
A common reason for examination candidates gaining low marks on a question is that they do not realise, or do not take care to fully appreciate, what it is that the question is asking. This article focuses on the meanings attached to terms such as measure, quantitative measure and metric.
Answer the question
Question 1: What does this room measure?
Answer: It measures 3 metres long by 4 metres wide.
Question 2: What measure would most enhance the appearance of this room?
Answer: Painting the walls.
Question 1 is clearly asking for a quantitative answer. Question 2 is asking for an action that should be taken to achieve the desired result. These two meanings of the word measure should be clear enough in the above examples. Frequently, though, examination candidates write inappropriate answers because they fail to recognise when a question is specifically seeking quantitative ways of measuring something.
Words such as measure, measures, measured, measuring, metric or metrics, and terms such as quantitative measure or quantitative measures, may occur in questions and it is important to recognise what type of answer is required. Let's look at some more examples:
Question 3: What measures could be taken to protect computer data against unauthorised access?
Answer:
- employing security guards at the entrance to the building which houses the computer installation
- controlling access to the computer area with swipe cards
- using passwords to control access to files and databases
- installing anti-virus software and updating it frequently.
Question 3 is asking for measures in the sense of steps or actions that could be taken, so the above answer is appropriate. The question is not asking for ways of measuring anything, although there would certainly be no objection to including quantitative information in the answer. For example: 'Issue two different passwords, one to staff who are authorised only to read data, and another to staff who are authorised to read and modify data.'
Question 4: Suggest two ways of measuring the usability of an information system.
Answer:
- log each call to the helpdesk and calculate the average number of calls per user, per week
- ask users to complete a questionnaire in which they rate the usability of the system on a scale of 1 to 5, and then calculate the average score.
Question 4 is asking for metrics, that is, ways of obtaining quantitative measures of something, in this case usability. The answer is appropriate because the ways suggested for measuring usability would provide quantitative results such as three calls per user, per week and an average usability score of 4.2 out of 5. However, a typical answer given by many examination candidates to Question 4 would be:
- use a graphical user interface with easily recognised icons
- for the input of data from paper forms, the layout of the screen should match that of the paper forms.
This type of answer to Question 4 will gain few, if any, marks because it has missed the point of the question, which is about suitable metrics. The candidates have answered a different question, such as: 'How can a user interface be made easy to use?'
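As a brief, informal illustration of the questionnaire metric in the model answer above (the ratings shown are invented for the example, and an examination answer would not require code), the average usability score could be calculated as follows:

```python
# Hypothetical questionnaire results: each user rates the usability
# of the system on a scale of 1 (poor) to 5 (excellent).
ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 4]

# The metric is the average (mean) rating across all respondents.
average_score = sum(ratings) / len(ratings)
print(f"Average usability score: {average_score:.1f} out of 5")  # 4.2
```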
What is a relevant metric?
In the answer to Question 4, the first way suggested for measuring the usability of an information system was:
log each call to the helpdesk and calculate the average number of calls per user, per week.
Notice that the reference to 'the average number of calls per user, per week' is better than 'the average number of calls per user' or merely 'the number of calls'. An information system that prompts 1,000 calls to a helpdesk in a year from 200 users is likely to be far more usable than a system which prompts 100 calls from 20 users in a week. Even better would be to note that only calls to the helpdesk about usability issues should be counted for this particular metric. Having said that, Paper 2.1 is not looking for particularly sophisticated metrics, so do not be deterred from proposing some reasonably plausible metrics. Showing that you understand the idea of a metric is a good start.
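To make the comparison concrete, the figures from the example above can be normalised to calls per user, per week (a minimal sketch, using only the numbers quoted above):

```python
def calls_per_user_per_week(total_calls, users, weeks):
    """Normalise a raw helpdesk call count to calls per user, per week."""
    return total_calls / (users * weeks)

# System A: 1,000 calls in a year (52 weeks) from 200 users.
system_a = calls_per_user_per_week(1000, users=200, weeks=52)

# System B: 100 calls in one week from 20 users.
system_b = calls_per_user_per_week(100, users=20, weeks=1)

print(f"System A: {system_a:.2f} calls per user, per week")  # about 0.10
print(f"System B: {system_b:.2f} calls per user, per week")  # 5.00
```

On this normalised basis, System A looks far more usable than System B, even though its raw call count is higher.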
Study session objectives
Metrics are relevant to the learning objectives of several Study Guide sessions for Paper 2.1. For example, metrics could be used in evaluating competing bids for an invitation to tender (Session 17) and in quality assurance (Session 22). Session 25 refers directly to metrics.
Session 25, Post-implementation issues:
- describe the metrics required to measure the success of the system
- discuss the procedures that have to be implemented to effectively collect the agreed metrics
- explain the possible role of software monitors in measuring the success of the system
- describe the purpose and conduct of an end-project review and a post-implementation review.
Some aspects of these objectives are illustrated by the following examples of metrics (a short computational sketch follows the list):
- the number of transactions (eg customer orders) processed per day
- the growth rate for the number of transactions processed
- the response time for database queries by users of an application
- the time taken to print a standard report
- the mean time between failures of the system (eg an average of 30 days between crashes)
- quantitative data obtained from questionnaires
- the number of calls per user, per day to a helpdesk
- the total time spent on corrective maintenance of an application
- the volume of data stored on a database (eg 10 terabytes)
- the number of requirements changes requested for adaptive or perfective maintenance
- the percentage of customer orders delivered by the agreed delivery date
- the percentage of business functions defined in the requirements specification which have been successfully implemented
- the cost of implementing corrective changes to a system as a percentage of the cost of producing the system.
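As an informal illustration of how two of these metrics might be calculated (all dates and figures below are invented for the example), consider the percentage of orders delivered on time and the mean time between failures:

```python
from datetime import date

# Hypothetical order records: (agreed delivery date, actual delivery date).
orders = [
    (date(2024, 3, 1), date(2024, 2, 28)),
    (date(2024, 3, 5), date(2024, 3, 7)),
    (date(2024, 3, 9), date(2024, 3, 9)),
]

# Percentage of customer orders delivered by the agreed delivery date.
on_time = sum(1 for agreed, actual in orders if actual <= agreed)
print(f"Delivered on time: {100 * on_time / len(orders):.0f}%")

# Mean time between failures, from the dates on which the system crashed.
crashes = [date(2024, 1, 10), date(2024, 2, 9), date(2024, 3, 10)]
gaps = [(later - earlier).days for earlier, later in zip(crashes, crashes[1:])]
print(f"Mean time between failures: {sum(gaps) / len(gaps):.0f} days")  # 30 days
```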
Collecting metrics
Metrics may be gathered manually or by computer. Calls to a helpdesk may be logged manually on paper records by helpdesk staff, although it is more likely nowadays that a computerised call-management system would be used.
The helpdesk staff would classify each help request: for example, a paper jam would be a printer problem, a forgotten or compromised password would be a security problem, and an inability to amend a customer's address would be a database update problem or a usability problem, as appropriate. Statistical reports could then be generated easily by computer. A software monitor could be used to measure the performance of the system, for example the number of hits per hour on a web page, or the number of input data errors per 1,000 transactions.
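A minimal sketch of how such a call-management system might record classified help requests and produce a simple statistical report (the categories and field names are assumptions made for illustration, not a prescribed design):

```python
from collections import Counter

# Hypothetical helpdesk log: each call has been classified by the helpdesk staff.
calls = [
    {"user": "U001", "category": "printer"},    # eg paper jam
    {"user": "U002", "category": "security"},   # eg forgotten password
    {"user": "U001", "category": "usability"},  # eg cannot amend an address
    {"user": "U003", "category": "usability"},
]

# Statistical report: the number of calls logged in each category.
report = Counter(call["category"] for call in calls)
for category, count in report.most_common():
    print(f"{category}: {count} call(s)")
```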
Use of metrics in review stages
Both a post-implementation review and an end-project review are likely to include metrics. The metrics used to assess the success of system development should be agreed at an early stage of the project and specified in the Project Initiation Document.
The post-implementation review will then use these metrics to evaluate the implemented system. For example, two objectives for a company trading on the Internet might be (i) to confirm orders by e-mail within one hour, and (ii) to despatch the goods within one day. When the system goes live, statistics on time-to-confirm-order and time-to-despatch-goods can be collected and subsequently assessed at the post-implementation review to see how well the system has met these objectives.
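A minimal sketch of how statistics for these two objectives might be derived from the collected data (the order records below are invented for the example):

```python
from datetime import datetime, timedelta

# Hypothetical records: when each order was placed, confirmed and despatched.
orders = [
    {"placed": datetime(2024, 5, 1, 9, 0),
     "confirmed": datetime(2024, 5, 1, 9, 40),
     "despatched": datetime(2024, 5, 1, 16, 0)},
    {"placed": datetime(2024, 5, 1, 10, 0),
     "confirmed": datetime(2024, 5, 1, 11, 30),
     "despatched": datetime(2024, 5, 3, 9, 0)},
]

# Objective (i): orders confirmed by e-mail within one hour of being placed.
confirmed = sum(1 for o in orders
                if o["confirmed"] - o["placed"] <= timedelta(hours=1))

# Objective (ii): goods despatched within one day of the order being placed.
despatched = sum(1 for o in orders
                 if o["despatched"] - o["placed"] <= timedelta(days=1))

print(f"Confirmed within one hour: {100 * confirmed / len(orders):.0f}%")
print(f"Despatched within one day: {100 * despatched / len(orders):.0f}%")
```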
For an end-project review, one metric might be the average percentage overrun on project tasks. The end-project review could use this data to recommend changes in how future projects are managed. Statistics could be gathered for individual stages of system development, such as analysis, design, programming, testing and implementation. The end-project review could then assess whether there were significant problems with the conduct of any of these stages and recommend what measures (meaning actions this time!) should be taken to avoid such problems in future projects.
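Similarly, the average percentage overrun on project tasks could be derived from estimated and actual effort figures, as in this sketch (the task names and figures are invented):

```python
# Hypothetical task records: estimated and actual effort in person-days.
tasks = [
    {"name": "analysis", "estimated": 20, "actual": 25},
    {"name": "design", "estimated": 15, "actual": 18},
    {"name": "programming", "estimated": 40, "actual": 44},
]

# Percentage overrun for each task, then the average across all tasks.
overruns = [100 * (t["actual"] - t["estimated"]) / t["estimated"] for t in tasks]
print(f"Average percentage overrun: {sum(overruns) / len(overruns):.1f}%")
```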
Examination question wording
Examples of question wording that might be used to refer to measures, in the sense of steps or actions, are:
- describe three ways of mitigating the risk of fire or flood for a computer installation
- describe measures that could be taken to quality assure a bespoke application development
- what clerical and software controls are available to assist in maintaining the integrity of the system?
- what steps could an organisation take to ensure the physical security of its IT systems?
- identify four measures that would assist compliance with data protection legislation.
Examples of question wording that might be used to refer to metrics include:
Question 5: Briefly describe the following three elements of software quality and explain how each element can be measured:
- functional correctness of the software
- reliability of the software
- usability of the software.
Question 6: For each of the following, identify two possible quantitative measures:
- system performance
- effectiveness of change control.
Question 7: Define two metrics for each of the following and briefly explain how these metrics could be collected:
- functionality
- performance.
Sometimes a question may involve both measures (as in steps or actions) and metrics, as in: 'What measures could be used to collect metrics on system performance?'
Conclusion
Always read each question carefully. Make sure that you answer the question that was asked: this may not be the question you thought had been asked, nor the question you would like to have been asked. If a question asks for 'measures', ask yourself: 'Is this question about steps or actions, or is it about metrics?' You can then ensure that your answer is a measured response (meaning well-thought-out, carefully considered) which measures up to (conforms to, reaches) the required standard.
David Howe is assessor for Paper 2.1.