Measuring Perceived Risk of Pitfalls Associated with Systems Engineering Tradeoff Analyses

The U.S. Department of Defense (DoD) has recently revised the defense acquisition system to address suspected root causes of unwanted acquisition outcomes. One of the major changes in the revised acquisition system is an increased emphasis on systems engineering trade-offs made between capability requirements and lifecycle costs early in the acquisition process (Cilli, Parnell, Cloutier, & Zigh, 2015). Given that systems engineering trade-off analyses will play a pivotal role in future defense acquisition efforts, this paper takes an in-depth look at the state of systems engineering trade-off analysis capability through a review of relevant literature and a survey of systems engineering professionals and military operations research professionals involved in defense acquisition. The survey was developed to measure the perceived level of difficulty associated with compliance with the revised defense acquisition system's mandate for early systems engineering trade-off analyses, and to measure the perceived likelihood and impact of potential pitfalls within systems engineering trade-off studies. The survey instrument was designed using SurveyMonkey and was deployed through a link posted on several groups within LinkedIn, a professional social media site; it was also sent directly via email to those with known experience in this research area. Although increased systems engineering activity early in the life cycle is a compelling change for DoD, the findings of the literature review and the survey of practitioners both indicate that there is much to be done to position the systems engineering community for success so that the improved defense acquisition outcomes envisioned by the architects of the 2015 DoDI 5000.02 can be realized.


The Problem Statement
On 7 January 2015, the United States Department of Defense (DoD) released the latest update to DoDI 5000.02, Instruction for the Operation of the Defense Acquisition System. This Government document places increased emphasis on systems engineering trade-off analyses early in the acquisition life cycle to address root causes of unwanted acquisition outcomes. Unaffordable weapon systems are an unwanted acquisition outcome that received heightened attention in this latest instruction: the word "affordability" appears 108 times in the 2015 version of DoDI 5000.02, whereas affordability was mentioned only 4 times in the 2008 release. Increased systems engineering activity early in the life cycle is an exciting and necessary change but will likely prove insufficient if the systems engineers charged with executing the new elements of the process are not experienced, high performing, well trained, appropriately placed, and adequately equipped (Cilli, Parnell, Cloutier, & Zigh, 2015). This paper explores the degree to which the systems engineering community is prepared to execute the new DoD instruction, first through a review of relevant literature and then through a survey of practitioners.

Finding and Organizing Relevant Scholarship
A broad examination of academic works was conducted across disciplines involved with new product development, covering at least one aspect of systems engineering trade-off decision practices and theories. The findings are presented in a conceptual format synthesized through a decision management process lens. The activities and tasks of the decision management process, as identified in Section 6, form the basis of the decision management lens and associated taxonomy used to organize selected references. Reviewing the literature through this decision management lens allowed the information gathered across disciplines to be sorted into categories aligned with the elements of a decision management process and facilitated the assessment of previous research.

Summary of Key Findings
Although a comprehensive review of all the selected references is beyond the scope of this paper, this section summarizes the key threads that run through them to convey the state of previous research, from which potential research gaps are identified in section 1.4.1.

Many Systems Engineering Publications Are Light on Decision Methods
The literature search found many textbooks and papers devoted to general product development or systems engineering (Maier & Rechtin, 2000; Sage & Rouse, 2011; Kossiakoff et al., 2011; Ulrich, 2003; Crawford & Di Benedetto, 2008; Urban et al., 1993; Pugh, 1991; Ullman, 1992; Dasu & Eastman, 2012; Suh, 1990). Many of these publications, and others like them, provide only a cursory treatment of decision making. Some notable exceptions (Buede, 1994; Buede, 2011; Parnell et al., 2011) take great care to address decision making as an integral part of systems engineering and devote a good portion of the publication to best practices associated with decision making.

Popular Representation Schemes Prove Useful within Bounds
In their paper "Product Development Decisions: A Review of the Literature," Ulrich and Krishnan take the position that product development can be viewed as a long list of decisions. Towards the end of their extensive cross-functional literature review of new product development, they recommend that the development of representation schemes be a high priority.
We observe that research seems to flourish in problem areas with powerful representational schemes. For instance, the development of attribute-based representations by the marketing community led to the large body of work on conjoint analysis. The parametric representation of the engineering design problem led to hundreds of papers on design optimization. More recently, the Design Structure Matrix spawned dozens of research efforts on organizing product development tasks. We might therefore infer that the development of representation schemes should be a high priority in the product development research community (Ulrich, 2001).
Green and Srinivasan penned a well-read paper on conjoint analysis (Green & Srinivasan, 1990), and Browning penned a highly cited paper on the application of the Design Structure Matrix (DSM) technique to integration problems (Browning, 2001).

Limitations of Representation Schemes When Applied as a Decision Method
Krishnan and Ulrich continue in their literature review, highlighting the limitations of conjoint analysis.
Much of the research on setting attribute values is also aimed at maximizing customer satisfaction or market share, and does not explicitly consider design and production costs or overall profitability. In addition, the research on setting attribute values (done in the context of packaged goods) often assumes that arbitrary combinations of specifications are possible. While it may be feasible to provide any combination of "crunchiness" and "richness" in a chocolate bar, it is not possible to offer an arbitrary combination of "compactness" and "image quality" in a camera (Krishnan & Ulrich, 2001).
Krishnan and Ulrich conclude their extensive literature review with a recommendation for developing tools that facilitate the link between marketing models and engineering models.
Several areas for future research seem promising. Research in the marketing community has flourished on methods for modeling consumer preferences and for optimally establishing the values of product attributes. Yet, a weakness identified (earlier in the paper) is that models of the product as a bundle of attributes tend to ignore the constraints of the underlying product and production technologies. Parametric optimization of complex engineering models is a well-developed area within the engineering design community. We see an opportunity for these communities to work together to apply the product-design methods developed in marketing to product domains governed by complex technological constraints (Krishnan & Ulrich, 2001).
Many papers have been written on the notion of capturing customer requirements and then attempting to carry them through the product design process, generally referred to as a Multiple Domain Matrix (MDM) (Elezi et al., 2010) or a Domain Mapping Matrix (DMM) (Danilovic & Browning, 2007). A specific and popular example of an MDM technique is Voice of the Customer and QFD, addressed most notably by Griffin and Hauser (Griffin & Hauser, 1993; Griffin & Hauser, 1996; Hauser & Clausing, 1988; Hauser et al., 2006; Hauser & Toubia, 2005).
In a separate paper, Ramaswamy and Ulrich point out the limitations of QFD for use in design decision making (Ramaswamy & Ulrich, 1993), as does (Delano et al., 2000). Both papers offer thoughts on how one might overcome these limitations. The Ramaswamy and Ulrich paper puts it this way: To facilitate customer focus, several structured methodologies for organizing and presenting customer information have been developed. One such methodology is the House of Quality (HOQ)…the HOQ is most often used to set targets for the engineering performance of a product. In a typical situation, marketing staff collect data about customers and competing products and, with some input from engineering, decide a set of performance targets which are then communicated to the designers. In this paper we address two weaknesses of this methodology: (1) Targets set on customer information alone are often unrealistic. Hence designers cannot achieve them, and this results in time-consuming iterations until a compromise is reached. (2) The roof of the HOQ alone cannot adequately capture the complex coupling between design variables. Hence the trade-offs that must be made in the design are over-simplified or even ignored.
We believe that engineering models, if used in conjunction with the HOQ, can help address these problems. Designers often have engineering models which they can use to test the limits of product performance. The inputs to these models, the design variables, are the actual quantities that the designer can control, and the outputs are the important performance metrics of the product. Engineering models can therefore be a valuable tool for exploring design tradeoffs and product performance without building extensive prototype hardware (Ramaswamy & Ulrich, 1993).

Design Engineers Are Exhibiting High Risk Behavior
The likelihood of design engineers applying a flawed decision method is high, and the consequences to decision quality can be catastrophic. In his paper "Validation of Engineering Design Alternative Selection Methods," Hazelrigg warns about the shortcomings of using QFD as a decision tool, as did Ramaswamy and Ulrich, but Hazelrigg does not hold out hope of fixing it through modifications or extensions. In addition to his criticism of QFD, Hazelrigg provides strong warnings about seven other popular design selection methods and about the catastrophic consequences of design engineers not using classical decision theory when structuring their design choice decisions.
Decision theory and optimization are closely linked in that decision making and optimization contain exactly the same elements, and all decisions involve some amount of optimization. The idea that the optimal choice is that alternative whose outcome is most preferred is the underlying objective of all optimization. However, optimization theory is almost universally presented in the context, maximize f(x) subject to constraints g(x)<0, and it deals exclusively with search techniques. Optimization theory does not deal with the questions of where does f(x) come from and what conditions must it satisfy? These questions are the domain of decision theory. Yet, they are equally important, as the most efficient search toward the wrong objective is no more useful than no search at all (Hazelrigg, 2003).
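Hazelrigg's point can be made concrete with a toy sketch (the objectives below are purely illustrative and not drawn from his paper): the search machinery is identical in both cases, yet the "optimal" design depends entirely on which f(x) the search is handed.

```python
# Toy illustration: the same exhaustive search over a feasible set returns
# different "optimal" alternatives depending on the objective being maximized.
# (Hypothetical objectives for illustration only.)

candidates = [x / 100 for x in range(0, 101)]   # feasible set: g(x) <= 0 on [0, 1]

def f_performance(x):
    return x                                    # "maximize raw performance"

def f_value(x):
    return x - 1.5 * x**2                       # performance penalized by cost

best_perf = max(candidates, key=f_performance)
best_value = max(candidates, key=f_value)

print(best_perf)    # the performance-only objective picks the costliest design, 1.0
print(best_value)   # the cost-aware objective selects a different alternative, 0.33
```

An efficient search toward f_performance is of no help if f_value is what the decision maker actually prefers; deciding which objective is right is the province of decision theory, not optimization.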
Decision theory places great emphasis on the proper formulation of decision objectives for, as Hazelrigg puts it, …the optimal choice is that alternative whose outcome is most preferred by the decision maker; that if the decision maker is indifferent to the outcomes, then a random choice is acceptable; and that all decisions result in outcomes. By and large, what modern normative decision theory does is to wrap a rigorous mathematical framework around these basic notions (Hazelrigg, 2003).
He goes on to lament about how design engineers, despite the availability of decision techniques that are well founded in classical decision theory, are prone to use inferior methods that lead to poor decisions.

Yet, despite its long history and wide acceptance in other communities, design engineers have not adopted the formalisms of decision theory, and there exists considerable disagreement among members of the engineering design community as to the extent that engineering design involves decision making and to which classical decision theory applies to engineering. Nonetheless, faced with the imperative of selection in design, engineers have developed and promoted a broad range of decision tools that have been widely taught and that are in widespread use for design selection. These methods have, almost entirely, failed to reference or utilize the extensive literature and research results in the decision sciences, and many deviate from the simple basic tenets given to Alice by the Cheshire Cat. Unfortunately, this failure has led to methods that are inconsistent with the rigorous principles of decision theory, and these inconsistencies lead to several undesirable behaviors of the engineering methods (Hazelrigg, 2003).

Properties of a Sound Design Alternative Selection Method
Hazelrigg references (Howard & others, 1968) and suggests ten properties be used to validate an engineering design alternative selection method. Hazelrigg assesses eight design alternative selection methods against the ten required properties (Hazelrigg, 2003). A summary of the results is shown in Tables 2-6. Notice that none of the eight methods met all ten properties. The opportunities for fixing any one of these eight methods are non-existent in Hazelrigg's view.
By and large, these methods cannot be salvaged. The logical errors that lead to their misbehavior are generally central to the construct of the methods themselves, and attempts to rectify them are so basic as to alter the method totally (Hazelrigg, 2003).
Hazelrigg suggests that poor design alternative selection methods are used despite their severe flaws because the shortcomings of these methods are often obscured by the complexity of the decision problem itself.
Often, such undesirable behaviors are obscured by the complexity of the engineering design process, and lead to poor engineering design decision making in a way that goes unrecognized by the engineers (Hazelrigg, 2003).
Additionally, it is argued that these methods are often less taxing to implement than methods involving classical decision theory. Hazelrigg puts this claim into proper perspective.
Often, extant methods are touted for their "ease of use". However, one should note that it is possible to create methods that are quite easy to use so long as it is not important that they provide valid results. Demanding that a method provide valid results often places demanding constraints on its ease of use (Hazelrigg, 2003).
In summary, Hazelrigg seeks to highlight very clearly the treacherous nature of the path paved by inferior decision methods.
The decision sciences comprise a rigorous branch of study that has emerged over the past 300 years, with contributions from brilliant individuals of great fame. Yet, many decision methods developed for engineering design have neglected this body of knowledge, and many elements of the engineering methods are in direct contradiction with well-established principles of the decision sciences. Using these contradictions, and given the rigor of the decision sciences, it is not surprising to see that pathological case … can be generated that demonstrate highly undesirable behavior of the engineering methods, and to show that many of these methods are quite capable of recommending even extremely poor design alternatives. Indeed, it is even suggested that such behaviors are more the norm than the exception. The fact that any such pathological cases exist should be extreme cause for alarm, as they give clear indication of the failure of their respective underlying methods (Hazelrigg, 2003).
The roots of decision analysis are set in the works of Blaise Pascal, Pierre de Fermat, Jacob Bernoulli, and Thomas Bayes: seminal work on the concepts of probability, utility, expected value, the mathematics of updating probabilities given new information, and the philosophical underpinnings of the subjective view of probabilities. In the mid 1940s, John von Neumann and Oskar Morgenstern established the theory of decision analysis, and Ward Edwards created a psychology field of study around the theory in the mid 1950s. Howard Raiffa carried the field forward, authoring the first book on decision analysis in 1968 and co-authoring the first multiple objective decision analysis book with Ralph Keeney in 1976 (Parnell et al., 2013; Edwards et al., 2007). Foundational concepts for single objective decision analysis across a wide range of business and policy decisions include decision trees, influence diagrams, sensitivity analyses, probability distributions, and Monte Carlo simulations (Clemen & Reilly, 2013). For design trade decisions involving multiple competing objectives, Value-Focused Thinking embedded in Multiple Objective Decision Analysis is an established approach (Parnell et al., 2011; Parnell et al., 2013; Keeney, 2009; Kirkwood, 1996).
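The expected-value calculation that sits at the heart of a decision tree can be sketched in a few lines. The alternatives, probabilities, and payoffs below are hypothetical and serve only to illustrate the mechanics.

```python
# Minimal sketch of decision-tree expected value: each alternative leads to
# uncertain outcomes, each with a probability and a payoff (hypothetical numbers).

alternatives = {
    "upgrade_existing": [(0.8, 100), (0.2, 40)],   # (probability, payoff) pairs
    "new_development":  [(0.5, 180), (0.5, 20)],
}

def expected_value(outcomes):
    """Probability-weighted sum of payoffs for one alternative."""
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in alternatives.items():
    print(name, expected_value(outcomes))
# upgrade_existing -> 88.0, new_development -> 100.0
```

A risk-neutral decision maker would prefer the alternative with the higher expected value; utility functions, sensitivity analyses, and Monte Carlo simulation extend this same skeleton to risk attitudes and uncertain inputs.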

Literature Review Synthesis and Link to Research
This literature review broadly examined academic works across disciplines involved with new product development pertaining to at least one aspect of systems engineering trade-off decision practices and theories. The review produced a list of citations to over three hundred fifty papers organized through a decision management process lens taxonomy, and the findings regarding the state of current research were summarized and presented in a conceptual format. Note that only seventy of those citations are provided as part of this paper. The full list of citations, structured through the full decision management lens taxonomy, can be found in (Cilli, 2015).

Identification of Remaining Research Gaps
This final section of the literature review is dedicated to the identification of remaining research gaps and the link between these research gaps and this paper's research questions and hypotheses.

Research Gap (i)
The literature review revealed that design engineers apply a wide variety of decision methods during design alternative selection activities such as systems engineering trade-off analyses. Few of the methods employed are founded on classical decision theory, and thus many are prone to misguiding the decision maker. To make things worse, even decision methods that are based on classical decision theory can be misused or poorly executed and lead to poor choices. Consequently, many academic critics in the current literature warn of the potential hazards associated with systems engineering trade-off analyses and similar decision opportunities. What seems to be missing, however, is any measure of perceived level of difficulty from the practitioners' point of view. Are the warnings from academics purely "academic," in that practitioners have this figured out and there is no cause for concern? Do all the criticisms in the literature boil down to subtle differences in terminology with no real consequences?
Although there are warnings of potential errors associated with the execution of systems engineering trade-off analyses and similar design alternative decision opportunities scattered throughout the literature, there does not seem to be a comprehensive list of pitfalls tied to decision process steps. Additionally, there appears to have been no attempt to measure the frequency at which particular errors are encountered, nor to measure the severity of the consequences given that an error does occur.
Hypothesis A: A significant percentage of systems engineers and operations researchers involved with the U.S. defense acquisition process find the large number of variables involved with the exploration of emerging requirements, and their combined relationship to conceptual materiel systems, to be difficult (where a significant percentage is defined as >70%).

Research Gap (ii)
Question 2: How can systems engineering trade-off analyses go wrong? How many potential pitfalls can be identified? What are the potential pitfalls associated with such analyses? How likely are the pitfalls to occur? How severe are the consequences of each pitfall?
Hypothesis B: There are many potential pitfalls associated with systems engineering trade-off studies that can be characterized as a medium or high risk to systems engineering trade-off study quality (where "many" is defined as >10).

Method
In an effort to counterbalance the strengths and weaknesses associated with either quantitative or qualitative research approaches, this research effort employed a two-phase, exploratory sequential and embedded mixed-methods approach (Creswell, 2014). In the first phase, the needs of requirements writers and product development decision makers were explored through field observations and direct interviews as they sought to understand system-level cost, schedule, and performance consequences of a set of contemplated requirements. From these case studies, hypotheses associated with the qualitative research questions described in section 1 were generated. In the second phase of this research, a larger sample of systems engineering professionals and military operations research professionals involved in defense acquisition was surveyed to help interpret the qualitative findings of the first phase and test the quantitative hypotheses. The survey was largely quantitative, with an open-ended qualitative question embedded at the end of the first section. The planning of the data collection effort involved many decisions regarding the survey instrument, the number and type of questions, the target population, the deployment technique, and ethical considerations. The following two subsections provide a summary of the choices made while designing the data collection process.

Survey Design
The survey consisted of nineteen questions, some with multiple parts. The survey included a question that asked the subject to rate systems engineering trade-off analysis difficulty on a Likert-type scale and to provide supporting narrative through an open-ended response portion of the question. Eighteen of the nineteen questions pertained to measuring the risk associated with potential pitfalls within trade-off analyses: nine questions asked the subject to indicate how often they have observed various pitfalls, and nine questions asked the subject to assess the impact to overall study quality if such a pitfall were to occur.
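The paired likelihood and impact ratings collected by these nine-plus-nine questions lend themselves to a standard risk characterization. The sketch below shows one way such ratings might be combined into the low/medium/high levels referenced by Hypothesis B; the 1-5 scales and the thresholds are assumptions for illustration, not the survey's actual rubric.

```python
# Hedged sketch: combine a likelihood rating and an impact rating (assumed
# 1-5 scales) into a risk level via a simple multiplicative risk score.
# The score thresholds below are illustrative assumptions.

def risk_level(likelihood, impact):
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(risk_level(4, 5))   # frequently observed, severe impact -> high
print(risk_level(3, 3))   # occasional, moderate impact -> medium
print(risk_level(1, 2))   # rare, minor impact -> low
```

Under a rubric of this general shape, Hypothesis B amounts to the claim that more than ten distinct pitfalls land in the medium or high cells of the resulting risk matrix.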
The decision to collect Likert-type scale responses and open-ended responses for at least one of the survey questions was made in an effort to counterbalance the strengths and weaknesses associated with either quantitative or qualitative research approaches. The survey was designed so that the total time commitment required of each subject would be on the order of thirty minutes and so that there would be no requirement or opportunity for follow-up questions or interviews. No identifiable information was collected from participants.

Survey Deployment to Target Population
The survey was administered using the online tool SurveyMonkey. The online survey was deployed to systems engineers and operations researchers involved in defense acquisition through a link posted on five groups within the professional social media site LinkedIn: the INCOSE, MBSE, DAU Alumni, MORS, and INFORMS groups. The link to the online survey was also sent directly via email to members of the International Council on Systems Engineering (INCOSE) and the Military Operations Research Society (MORS) with known activity in this research area. The survey design and deployment strategy is illustrated in Figure 1.
Figure 1. Survey design and deployment strategy

Population Size, Sample Size, and Confidence Interval
The survey was deployed on February 1, 2015 and closed on March 31, 2015. Over this two-month period, 184 participants entered the disclosure page, of which 181 completed the survey. Because survey participants were allowed to skip questions, confidence intervals associated with response statistics were developed for each specific question. In general, however, 181 respondents represent a meaningful sample of the population of systems engineers and operations researchers engaged in the defense acquisition domain.
The U.S. Department of Defense has 39,000 positions coded as systems engineers, but the number of DoD employees who perform true systems engineering tasks is unknown and likely far fewer than 39,000. The size of the subject population is estimated to be on the order of 4,000 people, given that INCOSE membership is approximately 10,000 people working across at least ten domains, including automotive, mass transit, urban planning, power & energy, communication systems, information systems, healthcare, hospitality, aerospace systems, and defense systems, with aerospace systems and defense systems identified as core domains. Therefore, of the 10,000 systems engineers within INCOSE, it is reasonable to believe that between 2,000 and 3,000 focus most of their efforts on defense systems. Combining this estimate with the 2,122 members of the Military Operations Research Society (MORS) group within LinkedIn, and adjusting the sum to account for the fact that some members of MORS may also be members of INCOSE, yields the estimate of 4,000 systems engineers and operations researchers associated with defense acquisition.
As seen in Table 2, the 181 respondents represent a meaningful sample of the true population of systems engineers and operations researchers engaged in the defense acquisition domain across a wide range of population estimates. The confidence intervals were generated using the confidence interval calculator found on the Creative Research Systems site, using inputs for a sample size of 181, a population ranging from 2,000 through 20,000, and assuming 50% of the sample will select a particular answer. This 50% is considered worst case and most appropriate for generating a general confidence interval associated with a multi-question survey for a given sample size. As shown in Table 6, if the original estimate of the true population of 4,000 is correct, a sample size of 181 would allow one to be 95% confident that the results reflect the perceptions of the true population of 4,000 within a margin of error of about ±7.12%.
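The ±7.12% figure can be reproduced with the standard margin-of-error formula for a proportion, applying the finite population correction (the sketch below assumes the calculator uses this standard formula; it does reproduce the reported value).

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """95% margin of error for a proportion, with finite population correction."""
    se = math.sqrt(p * (1 - p) / n)        # standard error of the proportion
    fpc = math.sqrt((N - n) / (N - 1))     # finite population correction
    return z * se * fpc

# Worst-case (p = 0.5) margin for 181 respondents from an estimated 4,000 practitioners
print(round(100 * margin_of_error(181, 4000), 2))  # → 7.12
```

Because p = 0.5 maximizes p(1 - p), this is the widest interval any single question can have at this sample size, which is why it serves as the survey-wide worst case.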

Demographics of Sample-Discipline
169 participants answered this question. As shown in Figure 2, all but 22 of the 169 respondents self-identified with the systems engineering or operations research discipline, or some combination of the two, with a majority of the sample associating more strongly with systems engineering.
Twenty-two of the respondents selected "other" and identified their disciplines as follows: T & E with some Systems Engineering; and Procurement. Notice that even within the twenty-two respondents who selected "other" as their discipline, there is a systems engineering aspect to many of their self-identified disciplines.
Figure 2. Demographics of respondents-discipline

Demographics of Sample-Experience
Of the 181 respondents, 15 skipped this question. Figure 3 shows that, of the ten experience categories, the most common level of experience was 25-29 years, with 88% of respondents having at least 5 years of experience, 76% having at least 10 years, and 6% having over 40 years.

Results
The survey was developed to measure the perceived level of difficulty associated with compliance with the revised defense acquisition system's mandate for early systems engineering trade-off analyses and to measure the perceived likelihood and impact of potential pitfalls within systems engineering trade-off studies. The following subsections provide these results.

Perceived Level of Difficulty Associated with Early SE Trade-off Analysis
In order to measure the perceived level of difficulty associated with early systems engineering trade-off analyses, participants were presented with an observation informed by case study research (Cilli, 2015). To help interpret the observation and perhaps generalize results to the population of all early defense acquisition activity, survey respondents were asked to indicate the degree to which the following observation reflects their experience.
Observation: Defense acquisition professionals are eager to comply with the new defense acquisition instructions, but the large number of variables involved with the exploration of emerging requirements and their combined relationship to notional materiel systems causes compliance with the new instructions to be difficult. Consequently, requests for tools and techniques that can help model the relationship between physical design decisions and the consequences of those decisions at the system level, as measured across all elements of stakeholder value, are growing more frequent and intense.
Figure 4 shows that approximately 81% of the respondents found their own experience to be somewhat similar, very similar, or extremely similar to the observation that defense acquisition professionals find the large number of variables involved with the exploration of emerging requirements, and their combined relationship to notional materiel systems, makes compliance with the new instructions difficult. With a sample size of 135, one can be 95% confident that 81% ± 6.5% of the true population (estimated to be 4,000 as discussed earlier) would respond that their observations were at least somewhat similar to the survey author's observation that acquisition professionals find the large number of variables involved with the exploration of emerging requirements and their combined relationship to conceptual materiel systems to be difficult.
Qualitative comments were provided by 33 of the 136 participants who responded to this question. Many of the qualitative statements supported the quantitative results, suggesting that a strong majority of those surveyed sense that acquisition professionals are indeed struggling with the complexities of informing requirements. For example, one respondent writes, "program/project managers are constantly asked to understand the relationship between requirements and cost. It's not a simple measure to make, if at all even possible to do, on complex weapons systems". Qualitative comments seem to show that the question of whether such struggles are leading to an increase in requests for tools and processes is met with a more varied response. One respondent writes, "I believe the first part of the observation. I haven't seen the second half". Another survey taker seems to agree that the demand for such tools is not necessarily on the rise, not because there is no need, but rather because affected parties are slow to react, stating, "Requests lag actual need. Very few Program Managers are consciously concerned with the complexity, intricacy, and rigor of true and honest trade analysis to guide program decisions. But the availability of good decision analysis tools and techniques can help change the practice of looking to solve design implementation problems with more money and more time". A different respondent sees increased interest among acquisition professionals: "I see program managers being more and more interested in tools that support integrated system level architectures (integrated behavior allocated to physical system level architecture, with a value model against user requirements)".
Others noted an observed upward trend in chatter surrounding tools but cite either a mismatch in organizational responsibility, a funding shortfall, or both. As one survey participant writes, "(the) trend in the last five years has been to increase reliance on tools of this nature. The problem with getting tools prepared to assist in the process is that tools take time and resources to develop. These are often not within the purview of the organization dealing with the acquisition." Another states, "I work directly with ASA (ALT) System of Systems Engineering & Integration (SoSE & I); every week, someone brings a new 'tool' to the director for portfolio management, decision analysis, dashboarding, etc. Also, I see new policies coming out that require more from systems engineering, but there are never enough resources to tackle all of it." A separate survey taker sees it this way: "In my experience acquisition professionals want compliance with the instructions to occur by magic; it isn't anyone's job (which is why it wasn't happening) and it still isn't anyone's job, despite instructions and exhortations." Bureaucratic and security concerns sometimes also present hurdles to the introduction of new tools, as discussed by one of the survey takers: "Modeling and simulation is one of the least understood and most under-utilized tool sets in the acquisition tool box (actually most of those tool boxes [have] almost no real tools in that domain). Better tools are needed; however, bureaucratic and legitimate security requirements make this very difficult in practice."
One respondent warns that even the best of tools would have limited impact on decision quality. As this respondent sees it, "Several other things would also have to happen even if the tools were available. (1) Sufficient time & resources need to be allocated to the analyses; they usually aren't. (2) Leadership would need to spend adequate time and become more deeply involved in the decision making process; they probably won't/can't because of constraints on their time. And (3) political pressures would need to be mitigated; they definitely won't be."

Perceived Risk of 40 Potential Pitfalls Associated with Systems Engineering Trade-Off Analyses
The risk of the 40 potential pitfalls was compiled by taking the mode of the response distribution for each question for each potential pitfall. Figure 5 depicts the cluster of perceived risk that these potential pitfalls present to a trade-off analysis. Respondents found some of the 40 potential pitfalls to pose a higher risk than others but, surprisingly, found none of them to be low risk. Table 2 lists each potential pitfall and identifies the perceived risk to systems engineering trade-off analyses. Full questions and the response distribution for each are provided in the Appendix.
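The compilation step described above, taking the modal likelihood and consequence response for each pitfall and reading a risk level off a risk matrix, can be sketched as follows. This is an illustrative sketch only: the 5-point scales, the `mode` and `risk_level` helpers, and the low/medium/high thresholds are assumptions for demonstration, not the survey's actual instrument or scoring rule.

```python
from collections import Counter

def mode(responses):
    """Most frequent response category on an assumed 1-5 Likert scale."""
    return Counter(responses).most_common(1)[0][0]

def risk_level(likelihood_mode, consequence_mode):
    """Map (likelihood, consequence) modes to a risk level via a simple
    risk-matrix rule; the product thresholds here are illustrative."""
    score = likelihood_mode * consequence_mode
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Hypothetical response distributions for one pitfall
likelihood = [3, 4, 4, 4, 5, 3, 4, 2, 4, 3]   # mode = 4
consequence = [3, 3, 4, 3, 2, 3, 4, 3, 3, 5]  # mode = 3
print(risk_level(mode(likelihood), mode(consequence)))  # prints "medium"
```

Under a rule like this, a pitfall would have to have modal responses near the bottom of both scales to register as low risk, which is consistent with the observation that none of the 40 pitfalls did.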
The number of subjects who responded to these questions regarding the likelihood and consequence of the potential pitfalls ranged from 124 to 132. Using 124 as a worst-case sample size, the population estimate of 4,000, and the approximation that 50% of the sample selected the answer identified as the mode, one can be 95% confident that the results reflect the perceptions of the true population within a margin of error of about ±8.7%.
In some cases, this margin of error could be enough to cause the mode of the true population to differ from the mode of the sample. However, examination of the response distributions for each question shows that this margin of error would at most cause the mode to shift by one category on the likelihood or severity scale. Examination of Figure 6 reveals that such movement would not perturb the overarching observation that some of the potential pitfalls were thought to be higher risk than others but none were low risk. This result indicates that the community has a long way to go to improve trade-off studies.
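The margin-of-error figures quoted in this section are consistent with a standard normal-approximation confidence interval for a proportion, with a finite population correction. The sketch below is illustrative (the `margin_of_error` helper is not from the paper), but it reproduces both the ±6.5% and ±8.7% values:

```python
import math

def margin_of_error(p: float, n: int, N: int, z: float = 1.96) -> float:
    """Margin of error for a sample proportion p from n respondents,
    with a finite population correction for population size N
    (z = 1.96 for a 95% confidence level)."""
    se = math.sqrt(p * (1 - p) / n)      # standard error of the proportion
    fpc = math.sqrt((N - n) / (N - 1))   # finite population correction
    return z * se * fpc

# Hypothesis A: 81% agreement, n = 135, population estimate N = 4,000
print(round(margin_of_error(0.81, 135, 4000) * 100, 1))  # prints 6.5

# Pitfall questions: worst-case n = 124, p = 0.50 (modal share), N = 4,000
print(round(margin_of_error(0.50, 124, 4000) * 100, 1))  # prints 8.7
```

Note that p = 0.50 maximizes the standard error, so the ±8.7% figure is conservative for any question whose modal response actually drew more than half the sample.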

Discussion
The data collected supported both hypotheses, as described in the two subsections below.

Hypothesis A
A significant percentage of systems engineers and operations researchers involved with the U.S. defense acquisition process find the large number of variables involved in the exploration of emerging requirements, and their combined relationship to conceptual materiel systems, to be difficult (where a significant percentage is defined as >70%).
The data show that, with a sample size of 135, one can be 95% confident that 81% ± 6.5% of the true population (estimated to be 4,000, as discussed earlier) would respond that their observations were at least somewhat similar to the survey author's observation that acquisition professionals found the large number of variables involved in the exploration of emerging requirements and their combined relationship to conceptual materiel systems to be difficult.

Hypothesis B
There are many potential pitfalls associated with systems engineering trade-off studies that can be characterized as a medium or high risk to systems engineering trade-off study quality (where "many" is defined as >10).
Survey questions 12-29 asked the subject to assess the likelihood and consequence of these 40 potential pitfalls associated with systems engineering trade-off analyses. The risk of the 40 potential pitfalls was compiled by taking the mode of the response distribution for each question for each potential pitfall. Respondents found some of the 40 potential pitfalls to pose a higher risk than others but found none of them to be low risk. The number of subjects who responded to these questions regarding the likelihood and consequence of the potential pitfalls ranged from 124 to 132. Using 124 as a worst-case sample size, the population estimate of 4,000, and the approximation that 50% of the sample selected the answer identified as the mode, one can be 95% confident that the results reflect the perceptions of the true population within a margin of error of about ±8.7%. In some cases, this margin of error could be enough to cause the mode of the true population to differ from the mode of the sample. However, examination of the response distributions for each question shows that this margin of error would at most cause the mode to shift by one category left or right on the likelihood or severity scale. Examination of Figure 5 reveals that such movement would not perturb the overarching observation that some of the potential pitfalls were thought to be higher risk than others but none were low risk.

Conclusions
As discussed in the opening of this paper, increased systems engineering activity early in the life cycle is an exciting and necessary change for DoD but will likely prove insufficient if the systems engineers charged with executing the new elements of the process are not experienced, high performing, well trained, appropriately placed, and adequately equipped (Cilli, Parnell, Cloutier, & Zigh, 2015). The review of relevant literature and the survey of practitioners presented in this paper indicate that there is much to be done to position the systems engineering community for success so that the improved defense acquisition outcomes envisioned by the architects of the 2015 DoDI 5000.02 can be realized. The integration of systems engineering and Multiple Objective Decision Analysis (MODA) offers potential to help improve trade-off analyses. The process in the INCOSE SE Handbook and the SEBoK is a step in the right direction and has the potential to help avoid many of the identified pitfalls.
The decision management process activities in Section 6.3.3 of the International Standard for Systems and Software Engineering-System Life Cycle Processes, ISO/IEC/IEEE 15288:2015(E), are shown in the bullet list below and were used to create the decision management lens for this literature review.
• Prepare for decisions
  o Define a decision management strategy
  o Identify the circumstances and need for a decision
  o Involve relevant stakeholders in decision-making in order to draw on experience and knowledge
• Analyze the decision information
  o Select and declare the decision management strategy for each decision
  o Determine desired outcomes and measurable selection criteria
  o Identify the trade space and alternatives
  o Evaluate each alternative against the criteria
• Make and manage decisions
  o Determine the preferred alternative for each decision
  o Record the resolution, decision rationale, and assumptions
  o Record, track, evaluate, and report decisions

Figure 4. The degree to which survey respondents' experience matched the survey author's observation

Figure 5. Cluster of perceived risk that potential pitfalls present to a systems engineering trade-off analysis

Table 1. Research gaps crosswalked with research questions and hypotheses (links from the two research gaps to qualitative research questions and quantitative hypotheses)

Table 2. Confidence interval at 95% confidence level for sample size of 181 across estimates of population