
In addition to the top-level classification of formal and informal, the plethora of review types can be further decomposed and categorized. The position presented here is that the review types defined by IEEE 1028-1997, the IEEE Standard for Software Reviews, will support any other standard that calls for review, or peer review, activities. In fact, IEEE 1028-1997 was designed to support other standards, and it is the best framework currently available for structuring review procedures. For direct applicability to systems engineering, only slight modification and augmentation of IEEE 1028 is required, resulting in the following organization of review types:

  • Desk checking.
  • Walkthroughs.
  • Technical reviews.
  • Management reviews.
  • Artifact inspections.

The Desk Check

Desk checking does not often come to mind when reviews are discussed, but the desk check is indeed a type of peer review. The desk check is a one-person walkthrough in which "someone other than the author" (Frank, et al. 2002) examines the work product for errors or adherence to specification, based on the instruction of the designer. This is typically a casual review, since no documentation or defined process is required; however, a standard checklist may be used.


Walkthroughs

A walkthrough is more of an analysis technique than a class of review. However, when the various project controls are applied, such as a schedule, defined participants, a statement of objectives, and output recording, a walkthrough becomes a formal review class. Without these controls it is informal. A walkthrough may be held for the purpose of educating the audience about various technical aspects of the work. Another important objective of a walkthrough could be to exchange ideas as each subject area is sequentially addressed (IEEE 1998).

Typically, the designer of the element under review makes an overview presentation, followed by a general discussion among the participants. Then, the designer (author) "walks through" the artifact under review in detail. Suggested changes and improvements are noted and, optionally, recorded. If the walkthrough is structured, the recorded minutes are consolidated into a review report and distributed (Frank, Marriott, & Warzusen 2002). If defined roles are used in the walkthrough, they will include a leader (facilitator), a recorder (scribe), and the author.


Technical Reviews

The technical review identified in the IEEE 1028 standard could also be referred to as an engineering review. The standard defines it as "a systematic evaluation" performed "by a team of qualified personnel" (IEEE 1998). The primary purpose is to identify "discrepancies from specification and standards" (IEEE 1998). However, recommendations of alternatives and examinations of optional approaches may also be on the agenda for this type of review. Examples of reviews that fall into this classification include the system requirements review, all design reviews, software-related reviews, and documentation reviews. The critical design review would typically be in this category due to the technical nature and formality of the proceedings, but it might also be a management review, depending on the participation of management personnel. The distinction is purely academic and is at the discretion of the implementing organization. Defined roles will include a decision maker, a leader, and a recorder.


Management Reviews

Management reviews are carried out by and for the management personnel who have responsibility for the system (IEEE 1998). Although the customary exception will always exist, management reviews are those that monitor the life cycle and the processes used to create the system design, not the designed elements themselves. The purpose of a management review "is to manage progress, determine the status of plans and schedules, confirm requirements and their system allocation, or evaluate the effectiveness of management approaches" (IEEE 1998). Examples of management reviews include a project feasibility analysis; project plan reviews, including risk analysis; reviews involving the management of suppliers; test readiness reviews; reviews of verification and validation reports; operational readiness reviews; and any postmortem reviews. The management review has the most potential to overlap with the technical review due to many similarities in the content and expertise involved. The distinction is not a critical one; as before, it is academic. Categorize reviews however best suits your organization's needs. Defined roles for this type of review will include a decision maker, a leader, and a recorder.


Artifact Inspections

This type of review is developed from the concept of the software inspection identified in IEEE 1028 and pioneered by Michael Fagan at IBM (Fagan 1976). The original theory was to improve productivity and quality through inspection by a panel of experts. Inspection techniques have since evolved and matured in the software engineering field based on additional contributions (Frank, et al. 2002). It is easy to envision how these same techniques could be applied to systems engineering artifacts such as drawings, specifications, and design documents. An inspection is in a class by itself for two main reasons: the author of the artifact under inspection should not fill the role of leader, reader, or recorder, and the issues identified must be resolved to meet the exit criteria. Errors identified in an inspection are tracked to resolution much like errors reported from a test. An inspection is not a replacement for a test, but it can be an effective, parallel error-detection mechanism (Frank, et al. 2002).

An artifact inspection is a visual examination of a document, specification, or drawing to identify errors and deviations from standards and specifications (derived from IEEE 1998). What differentiates an inspection from a technical review is that the "determination of remedial or investigative action for an anomaly is a mandatory element" (IEEE 1998). Developing a solution to the issue(s) discovered is not an agenda item in an inspection; however, the inspection is not complete until all of the issues identified are resolved. For example, a technical review could identify issues, report them, and be considered a completed activity. In contrast, an inspection is not complete until it has been shown, in subsequent sessions, that the identified issues have been resolved to the satisfaction of the inspection team.
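
This exit rule can be illustrated with a small sketch. The data structure and status flag below are hypothetical illustrations, not terminology from IEEE 1028; they simply contrast "issues reported", where a technical review may stop, with "issues resolved", the condition an inspection must reach before it can close.

    # Hypothetical sketch of the inspection exit rule: every anomaly found during
    # the inspection must be dispositioned before the inspection can close.
    from dataclasses import dataclass

    @dataclass
    class Anomaly:
        description: str
        resolved: bool = False   # set True once remedial or investigative action is verified

    def inspection_complete(anomalies: list) -> bool:
        """An inspection exits only when every identified anomaly is resolved."""
        return all(a.resolved for a in anomalies)

    findings = [
        Anomaly("Weld symbol missing on drawing sheet 4"),
        Anomaly("Pressure rating conflicts with specification section 3.2"),
    ]

    assert not inspection_complete(findings)   # issues reported; the inspection stays open
    for finding in findings:
        finding.resolved = True                # resolution verified in a follow-up session
    assert inspection_complete(findings)       # exit criterion met; the inspection can close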

Another differentiation for the inspection is the role of "the inspector". This ominous-sounding term can be euphemistically read as subject matter expert, and the role can be filled by a team of experts. Where it is merely implied for other review types that the individual or group scrutinizing the work product be skilled in the field, for an inspection it is an explicit requirement. The IEEE standard calls for inspectors "chosen to represent different viewpoints" and gives examples such as "sponsor, requirements, design, safety, test" (IEEE 1998), among others. The standard goes on to state that the inspectors should be assigned "specific review topics", for example, adherence to a "specific standard" (IEEE 1998). One can envision this specificity being extended into the SE domain for conformance to pressure vessel, welding, building, and other codes or standards.

Roles for an inspection are the leader, the recorder, a reader, and the inspector. As stated previously, the author (designer) may be present but should not act as the leader, the reader, or the recorder, so that any potential influence or bias is negated.


Effective Techniques

Many doubt the usefulness of formal reviews and inspections. Pajerek, for one, discusses how quality processes fail to recognize that in work product reviews the people involved do "take it personally" (Pajerek 2002). Pajerek goes on to state, "People do have an attachment to their own creations, and criticism is resented" (Pajerek 2002). It would be difficult to refute this position. Reviews have the potential to degenerate into finger-pointing sessions where the only outcomes are bruised egos and wasted time. The application of effective techniques and proper planning can mitigate the negatives associated with a critique from peers.

The most important elements of a formal review, regardless of type, are well-defined objectives and explicit entry and exit criteria. If a group gathers at a conference table without having read the objective for the meeting, it is very likely that the objective will not be realized. It is a good idea for the leader of the review to read the objective aloud in some opening remarks. This will reduce the possibility that the designer believes the purpose of the review is to look at the cosmetics of the design, when in fact it is to investigate the cost to manufacture!

Written, unambiguous entry and exit criteria are essential to conducting successful formal reviews. Many preliminary design reviews have been conducted on presentations that were too incomplete to discuss intelligently. The entry criteria for the review should be invoked to prevent this waste of time. Similarly, exit criteria serve to frame the activities in the review, the next steps, and the output documentation. IEEE 1028-1997 offers an excellent archetype of procedures for the formal review types.

Planning the review activity is critical to its success, as it will ensure that the participants understand the process and its purpose. The selection of reviewers is very important. It is best to include those with a vested interest in a quality product. Distribute the material under review several days in advance of the meeting to allow time for preparation. Ensure that willing participants who understand the responsibilities fill the roles identified for the review type. Usually, it is not a good idea to have a recorder who is also expected to act as a reviewer, as note taking is a demanding task that will not allow for full participation in the session. A strong leader (facilitator) who does not allow side discussions and has good timekeeping skills is an asset.

Preparation for a formal review should always include at least one informal review event, either a desk check or an informal walkthrough. Desk checks are highly effective for several reasons:

  • The communication channel is exclusive.
  • The critique is not public.
  • Errors can be corrected immediately.


Many studies have shown that smaller groups are more productive; as McCumber states, "The ideal size of a single-discipline team is two" (McCumber 2003b), because of the communication advantages over larger teams. Consider that one senior engineer checking a design at a desk could identify an issue in one project hour (or less); that same issue might be overlooked in a formal technical review due to multiple conversation tracks, side discussions, and parliamentary distractions. Consider too the elementary cost advantage: 5 reviewers x 2 hours each = 10 project hours, plus administrative costs. The expense of a desk check is low in terms of project cost and high in terms of quality benefit.
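
To make the arithmetic above concrete, the comparison can be expressed as a small calculation. The figures used below, one hour for the desk check and three administrative hours for the formal review, are illustrative assumptions only; substitute your own organization's numbers.

    # Illustrative comparison of desk check cost versus formal technical review cost.
    # All hour figures are assumed for the sake of the example.

    def review_cost(participants: int, hours_each: float, admin_hours: float = 0.0) -> float:
        """Total project hours consumed by one review event."""
        return participants * hours_each + admin_hours

    desk_check = review_cost(participants=1, hours_each=1.0)          # one senior engineer, one hour
    formal_review = review_cost(participants=5, hours_each=2.0,       # five reviewers, two hours each
                                admin_hours=3.0)                      # scheduling, minutes, report

    print(f"Desk check:    {desk_check:.0f} project hour(s)")
    print(f"Formal review: {formal_review:.0f} project hours")
    print(f"Cost ratio:    {formal_review / desk_check:.0f}x")

The point is not the particular numbers but the order of magnitude: an issue caught at the desk typically costs a small fraction of one caught, or missed, in a full session.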

The liberal use of informal reviews implicitly helps to solve the issue of the public critique and the natural aversion to it. This is accomplished through iterations of error detection and correction in the more cordial, informal working atmosphere. Hopefully, by the time the work is presented at a formal review, it will have been cleansed of the trouble spots, at least the most embarrassing ones. The correction of errors is also more cost-effective in the informal review mechanism, as it can be accomplished without the overhead of formal review controls and reporting. Do not overlook desk checks and casual walkthroughs as highly effective, low-cost review techniques.


Conclusions

Review and inspection are integral elements of a quality system. Reviews can be a communication tool invaluable to the disparate engineering specialties that must collaborate to succeed on large, complex engineering assignments. As long as electrical needs to interface with structural, structural with plumbing, and plumbing with the municipal authority, the need for systematic reviews will endure.

The quagmire of review types identified in the various SE standards and guidelines can be coalesced into a handful of formal review classes derived from IEEE 1028-1997. Informal reviews, the desk check and the unstructured walkthrough, should be a precondition for entry into any formal review. The informal reviews are highly effective in terms of cost and error resolution, and they serve to minimize the stigma associated with public critique of a work product.

References
Carnegie Mellon University, About the SEI - Welcome. Retrieved June 8, 2003, from http://www.sei.cmu.edu/about/about.html, 2003.

CMMI Product Team, Capability Maturity Model Integration (CMMI SM), Version 1.1 Staged Representation. PA: Carnegie Mellon University, 2002.

Electronic Industries Alliance, EIA Standard (EIA-731.1). VA: Electronic Industries Alliance, 2002.

Fagan, M., Design and Code Inspections to Reduce Errors in Program Development. IBM Systems Journal, Vol. 15, No. 3, 1976.

Forsberg, K., Mooz, H., System Engineering for Faster, Cheaper, Better. CA: Center for Systems Management, Inc., 1998.

Frank, B., Marriott, P., & Warzusen, C., The Software Quality Engineer Primer. Terre Haute, IN: Quality Council of Indiana, 2002.

IEEE, IEEE Standard 1028-1997, IEEE Standard for Software Reviews. NY: IEEE Service Center, 1998.

International Council on Systems Engineering, INCOSE Position on CMMI. Feb. 2001, Retrieved June 27, 2003 from http://www.incose.org/lib/cmmi-pos-2001.html, 2001.

Leveson, N. G., and Turner, C. S., An Investigation of the Therac-25 Accidents. IEEE Computer, pp. 18-41, July 1993.

McCumber, W., Formal Design Reviews. University of Maryland University College graduate course MSWE603 Instructor's Notes, Retrieved June 21, 2003, from http://www.eagleridgestore.com/secure/msit650/mswe603_in10.html, 2003a.

McCumber, W., Teams and Management of Teams. University of Maryland University College graduate course MSWE603 Instructor's Notes, Retrieved July 1, 2003, from http://www.eagleridgestore.com/secure/msit650/mswe603_in06.html, 2003b.

Pajerek, L., To Engineer Is Human, Draft paper submitted for publication in The Journal of the International Council on Systems Engineering, John Wiley & Sons, 2002.

Patterson, F. G., Jr., Systems Engineering Life Cycles: Life Cycles for Research, Test, Evaluation; and Planning and Marketing. In Sage, A. P. and Rouse, W. B. (eds.), Handbook of Systems Engineering and Management. New York: John Wiley and Sons, 1999.

Perry, William J., Acquisition Reform: A Mandate for Change. OSD, February 9, 1994.

Abstract

This paper attempts to bring a common vernacular into systems engineering project reviews. Use of the term "review" by itself is too generic to identify any specific objectives. The position is that reviews specified by disparate standards organizations and quality management systems can be mapped to a modest set of informal and formal review types using IEEE 1028-1997 as a foundation. Project reviews encompass activities ranging from an informal desk check, to a walkthrough, to technical and management reviews, to a formal inspection. The role of the informal review as the support structure of formal procedures is emphasized.

Reviews and Systems Engineering
Formal design reviews have long been the domain of Department of Defense (DoD) contracts (McCumber 2003a). Prior to the elimination of MIL-STD 1521 by then Secretary of Defense William Perry in 1994, systems engineers working on DoD contracts would have been contractually required to participate in formal reviews. Ten years after the removal of the federal mandate, the engineer is sometimes asked, and more often required, to be involved in some form of review. Reviews did not disappear with the federal mandate that required them. In fact, since the DoD specified the "adoption of commercial business practices" (Perry 1994), reviews that could have been discarded by contract managers have endured. The specification of reviews is ubiquitous in government and commercial projects, and with good reason: the abandonment of review activities has been cited as a contributor to project failure on more than one occasion.

A failure analysis conducted on the Therac-25 computerized radiation therapy machine used to treat cancer patients is one example. The report showed that this device caused "six known accidents involving massive overdoses – with resultant deaths and serious injuries" (Leveson 1993). Several problems in the engineering approach used to design the machine were cited, including the lack of "effective peer review during the system development phase" (Forsberg and Mooz 1998). Another example is the Lewis spacecraft launched by NASA in August 1997. According to Forsberg and Mooz (1998), "Lewis was launched on 23 August 1997. The first problem was found 20 minutes after launch. All contact with the spacecraft was lost on 26 August 1997. There are specific engineering and operational causes of the failure. However, from the system engineering process standpoint, one of the primary causes was the abandonment of informal peer reviews two years prior to flight."

Considering these findings, it is appropriate that reviews of various types are specified in systems engineering standards. All of the current standards defined by the INCOSE Standards Technical Committee (STC) include some type of review activity. However, it is notable that although each of the current SE standards stipulates that reviews be held, none describes the mechanics of how they should be conducted. The standards do not advise who should attend or what artifacts should be produced. In addition, each uses the term in differing contexts: defect review, peer review, design review, etc. It is easy to understand why the term "review" has various connotations for diverse engineering disciplines. Perhaps due to the lack of alternatives, many of the MIL-STD review types have continued to be part of the acquisition models used for contracting with the DoD and other governmental entities. For example, a U.S. State Department systems contract issued today will specify, at a minimum, that preliminary design reviews and a critical design review be conducted. The following figure shows how ubiquitous reviews are in current systems engineering standards.


The desk check is the most effective form of engineering review

Effective Reviews for Systems Engineers

Prepared for an INCOSE Symposium


John R. Snyder, July 2003


About the same time that DoD was handing the control of process over to the private sector, it recognized acute quality issues in the commercial software industry. The DoD funded the Software Engineering Institute (SEI) at Carnegie Mellon University so that the "DoD can acquire and sustain its software-intensive systems with predictable and improved cost, schedule, and quality" (Carnegie Mellon University 2003). The SEI developed a model, known as the Capability Maturity Model (CMM), that "is based on documented and dependable process that an organization can use with predictable results" (Patterson 1999). In the CMM, as with ISO-9000, the "details of the process are of little interest, as long as the process is repeatable" (Patterson 1999). The model was intended to help software organizations implement a disciplined, repeatable approach to bring software products to market.


Based on the success and acceptance of the software CMM (SW-CMM), additional models were developed, for example, a Systems Engineering Capability Maturity Model (SE-CMM). To resolve the issues of having multiple, discipline-specific models, the SEI produced the Capability Maturity Model Integration (CMMI) in November of 2000. The CMMI incorporates the "Maturity Level" concepts in the form of "Capability Levels", and it extends and expands on the concepts developed in the original CMM. The CMMI is one of the standards under development by the Standards Technical Committee, and "INCOSE supports the use of CMMI as an integrated model for appraisals and process improvement" (International Council on Systems Engineering 2001).

The software CMM was significant because it introduced the term "peer review" into the vocabulary of the engineering community. The CMMI continues the use of peer review, but does a better job of defining it pragmatically. The CMMI framework states that a peer review is synonymous with the term "work product inspection", and it goes on to define a peer review as "the review of work products performed by peers during development of the work products to identify defects for removal". CMMI gives examples of peer review types as "inspections, structured walkthroughs, and active reviews" (CMMI Product Team 2002).

Attempting to map the CMMI peer review terminology to other SE standards, and to the older, ensconced MIL-STD terms, is a daunting task. For example, how does a peer review relate to a systems requirements review? How does a design review compare to the technical reviews called for in IEEE 1220-1998? EIA 731 states, "Defect reviews are sometimes called peer reviews" (Electronic Industries Alliance 2002). CMMI widens the scope of the peer review term even further by stating that it can be implemented by "a number of other collegial review methods" (CMMI Product Team 2002).

Believe it or not, all of this confusing jargon may be by design. Remember, the CMMI is not a standard; it is a guideline for developing processes. Similarly, the SE standards cannot dictate to every organization on the globe exactly how to conduct their day-to-day business. The processes defined in standards need to be generic enough to allow for broad coverage and scope. Another factor to consider is that contractual agreements may prevail in the word-game. For example, the project contract may dictate that you hold a preliminary design review, but your organization would otherwise refer to the activity as a system design review. The difference is only a matter of semantics.


Review Classes

The premise is that only two types of review exist in the systems engineering domain: the formal and the informal. The attributes that distinguish the two classes are the controls, the group dynamic, and the procedures. A review without any controls or defined procedures is informal. The majority of the reviews discussed in SE standards and guidelines are formal. To be formal, a review must be "systematic" (IEEE 1998). The following characteristics define a formal review (a minimal way of recording them is sketched after the list):

  • Defined entry and exit criteria.
  • A definite list of participant roles.
  • Documented procedures for conducting the proceedings.
  • Artifacts that require configuration management.
  • Required output documents.
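
One way to make these characteristics operational is to capture them in a simple review record, as in the sketch below. This is a hypothetical template written in Python for illustration; the field names and sample values are assumptions, and an organization would substitute its own review template or tooling.

    # Minimal, hypothetical record of a formal review; the fields mirror the
    # characteristics listed above and are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class FormalReviewRecord:
        objective: str
        entry_criteria: list            # must be satisfied before the meeting is held
        exit_criteria: list             # must be satisfied before the review is closed
        roles: dict                     # defined participant roles, e.g. leader, recorder
        artifacts_under_cm: list        # items that require configuration management
        output_documents: list = field(default_factory=list)   # e.g. minutes, review report

    review = FormalReviewRecord(
        objective="Evaluate conformance of the pump skid design to its specification",
        entry_criteria=["Design package released", "Material distributed five days prior"],
        exit_criteria=["All action items recorded", "Review report issued"],
        roles={"leader": "lead engineer", "recorder": "reviewer B", "decision maker": "chief engineer"},
        artifacts_under_cm=["P&ID rev C", "Pump specification rev 2"],
    )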


The following summarizes the informal and formal review attributes:

  • Objective. Informal: identify errors and issues, examine alternatives, provide a forum for learning. Formal: evaluate conformance to specification and plans; ensure change integrity.
  • Decision making. Informal: the designer makes all decisions; change is the prerogative of the designer. Formal: the review team petitions management or technical leadership to act on recommendations.
  • Change verification. Informal: left to other project controls. Formal: the leader verifies that action items are documented and incorporated into external processes.
  • Recommended group size. Informal: 2-7 people. Formal: 3 or more people.
  • Attendance. Informal: the designer and any other interested party. Formal: management and engineering leadership.
  • Leadership. Informal: the designer or designate. Formal: lead engineer.
  • Artifact volume. Informal: low. Formal: moderate to high, depending on the specific "statement of objectives" for the meeting.
  • Presenter. Informal: the designer. Formal: engineering team lead or representative.
  • Data collection. Informal: as desired. Formal: meeting minutes.
  • Outputs. Informal: as desired. Formal: review report.

The noun "review" by itself is too generic to identify any specific objectives. It must be qualified with a modifier to communicate what class of review is being discussed to avoid confusion or misrepresentation. The term "peer review" is also nebulous in terms of process and procedure. Any review, informal or formal could be called a peer review. The peer review concept is noble, but calling for a peer review does not help to clearly identify any objectives. These terms fall into the quintessential "gray area"; can only be used to identify collections of review types as shown in the following figure.