Lab Evaluation 4… Khaja Moinuddin

ISO 9001:2000 Clauses

Description

Related ITD Procedure

1.         Scope

Scope of the QMS, areas covered and any exclusions identified

Quality Manual

2.         Normative Reference

The normative reference ISO 9000 (fundamentals/vocabulary) must be used in conjunction with the standard itself

Quality Manual

3.         Terms & Definitions

The terms and definitions given in ISO 9000 apply.

Quality Manual

4.1       General Requirements

This clause covers the requirement of the organisation to actually set up a quality management system, and broadly sets out the activities associated with this. The organisation shall document, implement and maintain a quality management system (QMS) and continually improve its effectiveness in accordance with the requirements of the standard.

Quality Manual; the Effectiveness Review section of each procedure; Management Review; ITD Corrective & Preventive Action Procedure

4.2       Documentation Requirements

The important issue here is that people must have the information they need to do their job. The QMS must include quality policy, quality objectives, quality manual, documented procedures, documents to ensure the effective planning, operation and control of its processes, and records. All documents are required to be controlled. Also required is the control of records, i.e. identification, storage, protection, retrieval, retention time and disposition of records.

Document Control Procedure

5.         Management Responsibility

Specifically identifies the responsibility of top management and the need for effective leadership

Quality Manual

5.1       Management Commitment

Top management shall provide evidence of its commitment to the development and implementation of the QMS and continually improving its effectiveness

Quality Policy

5.2       Customer Focus

Top management shall ensure that customer requirements are determined and are met with the aim of enhancing customer satisfaction

Quality Policy

5.3       Quality Policy

A Quality Policy establishes a commitment to quality, a commitment to continuous improvement, the context for quality objectives and how the objectives relate to customer requirements

Quality Policy

5.4       Planning

Top management shall ensure that the planning of the QMS is carried out in order to meet the requirements of the standard as well as the quality objectives. The integrity of the QMS is maintained when changes to the QMS are planned and implemented.

Quality Manual

5.5       Responsibility, Authority & Communication

Top management shall ensure that responsibility and authorities are defined and communicated within the organisation. Everybody should know what they are expected to do (responsibilities) and what they are allowed to do (authorities). They should understand how these responsibilities and authorities relate to each other. A Management representative needs to be appointed who has responsibility for the QMS. Appropriate communication processes need to be established, and the effectiveness of the QMS must be communicated to the organisation.

Quality Manual

5.6       Management Review

The QMS must be reviewed at planned intervals, to ensure its continuing suitability, adequacy and effectiveness. Records from the review must be maintained. The review inputs and outputs must be clearly stated.

Quality Manual (Section 3)

6.         Resource Management

To ensure that the resources needed to both maintain and improve the QMS are available, and also to carry out the work required in a manner that will satisfy customer requirements.

Quality Manual (Section 4)

6.1       Provision of Resources

The organisation shall determine and provide resources needed to implement and maintain the QMS and continually improve its effectiveness, and to enhance customer satisfaction by meeting customer requirements

Quality Manual (Section 3.1, 3.4 & 4)

6.2       Human Resources

Personnel performing work affecting quality of service shall be competent on the basis of appropriate education, training, skills and experience. The organisation must determine the necessary competence and provide training to satisfy these needs. It must evaluate the effectiveness of the training and maintain appropriate records. The organisation must also ensure that its personnel are aware of the relevance and importance of their activities and how they contribute to the achievement of the quality objectives.

Quality Manual (Section 4) & Training Procedure.

6.3       Infrastructure

The organisation shall determine, provide and maintain the infrastructure needed to perform work. Infrastructure includes buildings, workspace, utilities, equipment and supporting services. This means determining and providing for current infrastructure requirements and planning for expected future needs.

Quality Manual (Section 4)

6.4       Work Environment

The organisation shall determine and manage the work environment needed to achieve conformity of service

Quality Manual (Section 4)

7.1       Planning of Product Realisation

The organisation shall plan and develop the processes needed for service realisation. This includes the need to establish processes and documents and to provide resources specific to the service; the required verification, validation, monitoring, inspection and test activities specific to the service; and the records needed to provide evidence that the realisation processes and the resulting service meet the requirements.

ITD Service Provision Procedure

7.2       Customer Related Processes

The organisation shall determine customer requirements, statutory and regulatory requirements and any additional requirements determined by the organisation, and review these requirements. Records resulting from the review and actions arising should be maintained. This clause focuses mainly on the service provided, but can cover additional factors such as regulatory or legal requirements, and unspecified customer expectations.

ITD Service Provision Procedure & Quality Manual (Section 2). ITD Project Management & System Development Procedure

7.3       Design & Development

The organisation shall plan and control the design and development of its service. This requires determining the design and development stages, the review, verification and validation of each stage, and the responsibilities and authorities for design and development. This clause requires you to have controls for the design processes and to establish a disciplined approach to the design process. The design process should include inputs, outputs, review, verification and validation. Design and development changes also need to be identified and records maintained.

ITD Project Management & System Development Procedure

7.4       Purchasing

The organisation shall ensure that purchased product conforms to specified purchase requirements. The organisation shall evaluate and select suppliers based on their ability to supply product in accordance with these requirements. Criteria for selection, evaluation and re-evaluation shall be established. Records of results of evaluation shall be maintained. The organisation shall also establish and implement the inspection or other activities necessary (verification) for ensuring that purchased product meets specified requirements.

Purchasing Procedure

7.5.1    Control of production & service provision

This clause describes the various types of controls that you might need to have in place for the delivery of service. These include the availability of information that describes the service, the availability of work instructions, the use of suitable equipment, the availability of monitoring and measuring devices and the implementation of monitoring and measurement.

Quality Manual (Section 6) and related procedures

7.5.2    Validation of processes for production and service provision

ITD does not have any processes whose resulting outputs cannot be verified by subsequent monitoring and measurement; therefore this clause is excluded.

Excluded

7.5.3    Identification & Traceability

Where appropriate, the organisation shall identify the product/service by suitable means. Where traceability is a requirement, the organisation shall control and record the unique identification of the product / service. Identification is knowing what the product or service resulting from a particular process is. Where you need to identify a product/service, the methods used and the records to be kept need to be defined. Traceability is knowing where a product/service came from.

Equipment Management Procedure

7.5.4    Customer Property

The organisation shall identify, verify, protect and safeguard customer property provided for use or incorporation into the product/service.

Equipment Management Procedure

7.5.5    Preservation of Product

The organisation shall preserve the conformity of the product/service during internal processing and delivery to the intended destination. This preservation shall include identification, handling, packaging, storage and protection.

Equipment Management Procedure; Print Room Procedure and Post Room Procedure

7.6       Control of monitoring & measuring devices

This means having confidence in the equipment used to check your work. This clause applies to organisations where monitoring devices and measuring equipment are used to verify that what you are providing meets your customer requirements. Monitoring implies observation and supervision activities. Measurement considers the determination of a quantity, magnitude or dimension. Where necessary to ensure valid results, measuring equipment shall be calibrated, adjusted, safeguarded and protected from damage and deterioration.

Equipment Management Procedure

8.1       Measurement Analysis & Improvement

This clause covers the wider monitoring, measurement, analysis and improvement of the quality management system. The organisation shall plan and implement the monitoring, measurement, analysis and improvement processes needed to demonstrate conformity of the product/service, to ensure conformity of the QMS and to continually improve the effectiveness of the QMS. This shall include determination of applicable methods, including statistical techniques, and the extent of their use.

Quality Manual (Section 5)

8.2.1    Customer Satisfaction

The organisation shall monitor information relating to customer perception as to whether the organisation has met customer requirements. The methods for obtaining and using this information shall be determined. Monitoring of customer satisfaction should be performed on an ongoing basis, as customers’ perceptions of performance change over time. The results of the customer satisfaction monitoring should be addressed in the management review and continual improvement activities, to identify and implement those changes which will improve the relationship with customers. Note: satisfaction is not the opposite of dissatisfaction. Satisfaction can produce a neutral response, whereas dissatisfaction can produce a strong negative response.

Quality Manual (Section 2)

8.2.2    Internal Audit

The organisation shall conduct internal audits at planned intervals to determine whether the QMS conforms to the standard and to the QMS requirements established by the organisation, and is effectively implemented and maintained. The responsibilities and requirements for planning and conducting audits, and for reporting results and maintaining records shall be defined in a documented procedure. Management must ensure that actions are taken without undue delay to eliminate detected nonconformities and their causes. Follow-up activities shall include the verification of the actions taken and the reporting of verification results.

Internal Audit Procedure

8.2.3    Monitoring & Measurement of Processes

The organisation shall apply suitable methods for monitoring and measuring the QMS processes. These methods shall demonstrate the ability of the processes to achieve planned results. When planned results are not achieved, corrective action shall be taken, as appropriate, to ensure conformity of the service.

 Quality Manual (Section 5)

8.2.4    Monitoring & Measurement of Product

There is considerable overlap between this clause and the previous one. In many cases the same monitoring or measurement procedures will be adequate for the purposes of monitoring or measuring both processes and products / services.

Quality Manual (Section 5)

8.3       Control of Nonconforming product

The standard requires you to have ways to identify a product or service nonconformity and to decide what to do about it.  You need to have a documented procedure describing how you comply with the requirements and to record any such activities. When nonconforming product is corrected it shall be subject to re-verification to demonstrate conformity to the requirements.

Quality Manual (Section 7), Print Room Procedure, Equipment Management Procedure

8.4       Analysis of Data

Analysing data is an essential activity for any possible improvement in the quality management system, in the processes and in the product/service. The organisation shall determine, collect and analyse appropriate data to demonstrate the suitability and effectiveness of the QMS and to evaluate where continual improvement of the effectiveness of the QMS can be made. This shall include data generated as a result of monitoring and measurement and from other relevant sources. The analysis of data shall provide information on customer satisfaction; conformity of product/service requirements; characteristics and trends of processes and products including opportunities for preventive action; and suppliers.

Quality Manual (Section 6)

8.5.1    Continual Improvement

Continual improvement is the process of taking actions on a recurring basis to implement agreed solutions that should bring positive benefits. The organisation shall continually improve the effectiveness of the QMS through the use of the quality policy, quality objectives, audit results, analysis of data, corrective and preventive actions and management review.

Quality Manual, Quality Policy, Quality Objectives, Management Review and Corrective & Preventive Action Procedure

8.5.2    Corrective Action

Corrective action is an important improvement activity. Corrective action identifies measures needed to correct identified problems. It seeks to eliminate permanently the causes and consequent effects of problems that could have a negative impact on business results; the organisation’s products/services, processes and QMS; and the satisfaction of customers. Corrective action involves finding the cause of a particular problem and then putting in place the necessary actions to prevent it from recurring.

Corrective & Preventive Action Procedure

8.5.3    Preventive Action

Preventive action seeks to prevent the occurrence of potential problems that could have a negative effect on business results, products/services, processes, QMS or customer satisfaction. A documented procedure shall define requirements for determining potential nonconformities and their causes; evaluating the need for action to prevent occurrence of nonconformities; determining and implementing action needed; records of the results of action taken; and reviewing preventive action taken.

Corrective & Preventive Action Procedure

 

ISO 9126 Std

Description

Related ITD Procedure

1) Functionality

The functionality characteristic allows conclusions to be drawn about how well the software provides the desired functions.

QM

a) Suitability

It correlates with metrics that measure attributes of the software from which the presence and appropriateness of a set of functions for specified tasks can be judged.

QM

b) Accuracy

The accuracy sub-characteristic allows conclusions to be drawn about how well the software achieves correct or agreeable results.

QM

c) Interoperability

It correlates with metrics that measure attributes of the software from which its ability to interact with specified systems can be judged.

QM

d) Security

The security sub-characteristic allows conclusions to be drawn about how secure the software is.

QM

 

 

 

2) Reliability

The reliability characteristic allows conclusions to be drawn about how well the software maintains the level of system performance when used under specified conditions.

DC

a) Maturity

It correlates with metrics that measure attributes of the software from which the frequency of failure due to faults in the software can be judged.

DC
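
As a rough illustration (not taken from the ITD procedures), a maturity-type metric can be as simple as the mean time between failures computed from a failure log. A minimal Python sketch, with hypothetical timestamps:

    from datetime import datetime

    # Hypothetical failure log entries.
    failures = [
        datetime(2010, 1, 4, 9, 30),
        datetime(2010, 1, 9, 14, 10),
        datetime(2010, 1, 17, 11, 5),
    ]

    # Hours between consecutive failures, then the mean time between failures.
    gaps = [(b - a).total_seconds() / 3600 for a, b in zip(failures, failures[1:])]
    mtbf_hours = sum(gaps) / len(gaps)
    print(f"MTBF: {mtbf_hours:.1f} hours")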

b) Fault tolerance

It correlates with metrics that measure attributes of the software from which its ability to maintain a specified level of performance in the event of software faults or infringement of its specified interface can be judged.

QM

c) Recoverability

The recoverability sub-characteristic allows conclusions to be drawn about how well the software recovers from software faults or infringement of its specified interface.

QM

d) Compliance

It correlates with metrics that measure attributes of the software from which its adherence to application-related standards, conventions, and regulations in laws and similar prescriptions can be judged.

QM

 

 

 

3) Usability

The usability characteristic allows conclusions to be drawn about how well the software can be understood, learned, used and liked by the developer.

QP

a) Understandability

Internal understandability metrics assess whether new software engineers/developers can understand whether the software is suitable and how it can be used for particular tasks.

DC

b) Learnability

Internal learnability metrics assess how long software engineers or developers take to learn how to use particular functions, and the effectiveness of the documentation. Learnability is strongly related to understandability, and understandability measurements can be indicators of the learnability potential of the software.

QM

c) Operability

Internal operability metrics assess whether software engineers/developers can integrate and control the software.

QM

4) Reusability

Internal usability metrics are used for predicting the extent to which the software in question can be understood, learned and operated, is attractive, and complies with usability regulations and guidelines, where "using" here means integrating it into a larger software system.

QM

a) Understandability for reuse

Software engineers/developers should be able to select a software product that is suitable for their intended use. Internal understandability-for-reuse metrics assess whether new software engineers/developers can understand whether the software is suitable and how it can be used for particular tasks.

QM

b) Learnability for reuse

Internal learnability metrics assess how long software engineers or developers take to learn how to use particular functions, and the effectiveness of the documentation. Learnability is strongly related to understandability, and understandability measurements can be indicators of the learnability potential of the software.

DC

 

 

 

5) Efficiency

The efficiency characteristic allows conclusions to be drawn about how well the software provides the required performance relative to the amount of resources used. It can be used for assessing, controlling and predicting the extent to which the software product (or parts of it) in question satisfies efficiency requirements.

QP

a) Time behavior

The time behavior sub-characteristic allows conclusions to be drawn about how suitable the time behavior of the software is for a particular purpose.

DC

b) Resource utilization

It correlates with metrics that measure attributes of the software from which the amount and duration of resources used while performing its function can be judged.

QM
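
As a rough illustration (not part of the ITD procedures), time behavior and resource utilization can be measured directly in code. A minimal Python sketch, where work() is a hypothetical stand-in for the operation being measured:

    import time
    import tracemalloc

    def work():
        # Hypothetical stand-in for the operation being measured.
        return sum(i * i for i in range(100_000))

    tracemalloc.start()
    start = time.perf_counter()
    work()
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    print(f"elapsed time: {elapsed * 1000:.1f} ms")   # time behavior
    print(f"peak memory:  {peak / 1024:.1f} KiB")     # resource utilization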

 

 

 

6) Maintainability

The maintainability characteristic allows conclusions to be drawn about how well the software can be maintained. It can be used for assessing, controlling and predicting the effort needed to modify the software product (or parts of it) in question.

QM

a) Analyzability

The analyzability sub-characteristic allows conclusions to be drawn about how well the software can be analyzed. It correlates with metrics that measure attributes of the software from which the effort needed to diagnose deficiencies or causes of failures, or to identify the parts to be modified, can be judged.

QM

b) Stability

The stability sub-characteristic allows conclusions to be drawn about how stable the software is. It correlates with metrics that measure attributes of the software from which the risk of unexpected effects resulting from modifications can be judged.

QM

c) Testability

The testability sub-characteristic allows conclusions to be drawn about how well the software can be tested, and is tested. It correlates with metrics that measure attributes of the software from which the effort needed to validate the software, and the test coverage, can be judged.

QM

 

 

 

7) Portability

The portability characteristic allows conclusions to be drawn about how well the software can be ported from one environment to another. It can be used for assessing, controlling and predicting the extent to which the software product (or parts of it) in question satisfies portability requirements.

QM

a) Adaptability

The adaptability sub-characteristic allows conclusions to be drawn about how well the software can be adapted to environmental change. It correlates with metrics that measure attributes of the software from which the amount of change needed to adapt the software to different specified environments can be judged.

QM

b) Installability

The installability sub-characteristic allows conclusions to be drawn about how well the software can be installed in a designated environment. It correlates with metrics that measure attributes of the software from which the effort needed to install the software in a specified environment can be judged.

QP

c) Replaceability

The replaceability sub-characteristic allows conclusions to be drawn about how well the software can replace other software, or parts of it. It correlates with metrics that measure attributes of the software from which the opportunity for, and effort of, using it in place of other specified software in that software's environment can be judged.

QM

Traditional software configuration management (SCM) identifies four procedures that must be defined for each software project to ensure a good SCM process is implemented. They are:

  • Configuration Identification
  • Configuration Control
  • Configuration Status Accounting
  • Configuration Authentication

 

3. Testing for these features can be carried out as follows.

Reliability: Software reliability testing requires checking the features provided by the software and the load that the software can handle, as well as regression testing.

Feature test

A feature test for the software is conducted in the following steps (a minimal sketch follows the list):

  • Each operation in the software is executed once.
  • Interaction between operations is reduced.
  • Each operation is checked for its proper execution.
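
A minimal sketch of such a feature test in Python, assuming a hypothetical module whose operations are add and subtract (real software would substitute its own operations):

    import unittest

    # Hypothetical operations standing in for the software's features.
    def add(a, b):
        return a + b

    def subtract(a, b):
        return a - b

    class FeatureTest(unittest.TestCase):
        # Each operation is executed once, in isolation, and checked for
        # proper execution.
        def test_add(self):
            self.assertEqual(add(2, 3), 5)

        def test_subtract(self):
            self.assertEqual(subtract(5, 3), 2)

    if __name__ == "__main__":
        unittest.main()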

The feature test is followed by the load test.

Load test

This test is conducted to check the performance of the software under maximum workload. Any software performs well only up to a certain load, after which its response time starts to degrade. For example, a web site can be tested to see how many simultaneous users it can serve without performance degradation. This testing is particularly useful for databases and application servers. Load testing also involves software performance testing, which checks how well the software performs under a given workload.
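
A minimal load-test sketch using only the Python standard library; the URL and the number of simulated users are hypothetical, and in practice a dedicated tool (JMeter, Locust, etc.) would normally be used:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/"   # hypothetical service under test
    USERS = 50                       # simulated simultaneous users

    def one_request(_):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as response:
            response.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        times = list(pool.map(one_request, range(USERS)))

    print(f"requests sent:         {len(times)}")
    print(f"average response time: {sum(times) / len(times):.3f} s")
    print(f"worst response time:   {max(times):.3f} s")

Repeating the run with an increasing number of users shows the point at which response times start to degrade.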

Regression test

Regression testing is used to check whether fixing a bug in the software has introduced new bugs, and whether a change in one part of the software affects other parts. Regression testing is conducted after every change to the software's features. This testing is periodic; the period depends on the size and features of the software.
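
A minimal regression-test sketch; the discounted_price function and the bug it once had are hypothetical. The point is that the test which exposed the old bug stays in the suite, and the whole suite is re-run after every change:

    import unittest

    def discounted_price(price, percent):
        # Hypothetical function under test.
        return round(price * (1 - percent / 100), 2)

    class RegressionTest(unittest.TestCase):
        def test_zero_percent_discount(self):
            # Reproduces a previously fixed bug (the function once returned 0.0
            # for a 0% discount); kept in the suite to catch regressions.
            self.assertEqual(discounted_price(10.0, 0), 10.0)

        def test_normal_discount(self):
            self.assertEqual(discounted_price(10.0, 25), 7.5)

    if __name__ == "__main__":
        unittest.main()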

Compatibility: Compatibility testing can cover the following dimensions:

  • Computing capacity of the hardware platform (IBM 360, HP 9000, etc.)
  • Bandwidth handling capacity of networking hardware
  • Compatibility of peripherals (printer, DVD drive, etc.)
  • Operating systems (Linux, Windows, etc.)
  • Database (Oracle, SQL Server, MySQL, etc.)
  • Other system software (web server, networking/messaging tool, etc.)
  • Browser compatibility (Chrome, Firefox, Netscape, Internet Explorer, Safari, etc.)

Browser compatibility testing can be more appropriately referred to as user experience testing. This requires that the web application is tested on different web browsers, to ensure the following (a minimal cross-browser check is sketched below):

  • Users have the same visual experience irrespective of the browser through which they view the web application.
  • In terms of functionality, the application must behave and respond the same way across different browsers.

Further compatibility dimensions include:

  • Carrier compatibility (Verizon, Sprint, Orange, O2, AirTel, etc.)
  • Backwards compatibility
  • Hardware (different phones)
  • Different compilers (compile the code correctly)
  • Running on multiple host/guest emulators
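
A minimal cross-browser check, assuming Selenium WebDriver and the corresponding browser drivers are installed; the URL and expected page title are hypothetical:

    from selenium import webdriver

    URL = "http://localhost:8080/login"   # hypothetical page under test
    EXPECTED_TITLE = "Login"              # hypothetical expected title

    for name, make_driver in (("Firefox", webdriver.Firefox), ("Chrome", webdriver.Chrome)):
        driver = make_driver()
        try:
            driver.get(URL)
            # The same check is run in every browser.
            assert EXPECTED_TITLE in driver.title, f"{name}: unexpected title {driver.title!r}"
            print(f"{name}: OK")
        finally:
            driver.quit()

The same pattern extends to other dimensions (operating systems, databases, etc.) by parameterising the environment rather than the browser.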

 

Usability testing:

The best way to implement usability testing is twofold: first from a design and development perspective, then from a testing perspective.

From a design viewpoint, usability can be tackled by (1) including actual users as early as possible in the design stage. If possible, a prototype should be developed; failing that, screen layouts and designs should be reviewed on-screen and any problems highlighted. The earlier potential usability issues are discovered, the easier it is to fix them.

(2) Following on from the screen reviews, standards should be documented, i.e. screen layout, labelling/naming conventions, etc. These should then be applied throughout the application.

Where an existing system or systems are being replaced or redesigned, usability issues can be avoided by using similar screen layouts: if users are already familiar with the layout, the implementation of the new system will present less of a challenge, as it will be more easily accepted (provided, of course, that the layout is not the reason the system is being replaced).

(3) Including provisions for usability within the design specification will assist later usability testing. Usually for new application developments, and nearly always for custom application developments, the design team should have an excellent understanding of the business processes/rules/logic behind the system being developed, and should include users with first-hand knowledge of the same. However, although they design the system, they rarely include specific usability provisions in the specifications.

An example of a usability consideration within the functional specification may be as simple as specifying a minimum size for the ‘Continue’ button.

(4) At the unit testing stage, there should be an official review of the system, where most of these issues can be dealt with more easily. At this stage, with screen layout and design already reviewed, the focus should be on how a user navigates through the system. This should identify any potential issues such as having to open an additional window where one would suffice. More commonly though, the issues identified at this stage relate to the default or most common actions. For example, a system may be designed to cope with multiple eventualities and thus have 15 fields on the main input screen, yet 7 or 8 of these fields are only required in rare instances. These fields could then be hidden unless triggered, or moved to another screen altogether.

(5) All the previous actions could be performed at an early stage if prototyping is used. This is probably the best way to identify any potential usability/operability problems. You can never lessen the importance of user-centered design, but you can solve usability problems before they get to the QA stage (thereby cutting the cost of rebuilding the product to correct the problem) by using prototypes (even paper prototypes) and other “discount usability” testing methods.
 

(6) From a testing viewpoint, usability testing should be added to the testing cycle by including a formal “User Acceptance Test”. This is done by getting several actual users to sit down with the software and attempt to perform “normal” working tasks when the software is near release quality. I say “normal” working tasks because testers will have been testing the system from/using test cases, i.e. not from a user’s viewpoint. User testers must always take the customer’s point of view in their testing.

User Acceptance Testing (UAT) is an excellent exercise, because not only will it give you the users’ initial impression of the system and tell you how readily they will take to it, it will also tell you whether the end product is a close match to their expectations, so that there are fewer surprises. (Even though usability testing at the later stages of development may not affect software changes, it is useful for pointing out areas where training is needed to overcome deficiencies in the software.)

(7) Another option to consider is to include actual users as testers within the test team. One financial organization I was involved with reassigned actual users to the test team as “Business Experts”. I found their input as actual “tester users” invaluable.

(8) The final option may be to include user testers who are eventually going to be (a) using the system themselves and/or (b) responsible for training and effectively “selling” it to the users.

 

Affordability testing:

One way to carry out affordability testing is to ask the customer what their budget is; if the project manager judges that the work can be done within that budget, the affordability requirement can be met, otherwise it cannot.

 
