Expert Choice V11 Exercises
Note Due to the specific nature of this case study, a number of features that could be helpful to you for RSM will not be implemented in this tutorial. Many of these features are used in the earlier tutorials. If you have not completed all of those tutorials, consider doing so before starting this one. We will presume that you are knowledgeable about the statistical aspects of RSM. For a good primer on the subject, see RSM Simplified (Anderson and Whitcomb, Productivity, Inc., New York, 2005). You will find overviews on RSM and how it’s done via Design-Expert in the online Help system. To gain a working knowledge of RSM, we recommend you attend our Response Surface Methods for Process Optimization workshop.
Call Stat-Ease or visit our website at www.statease.com for a schedule. The case study in this tutorial involves production of a chemical. The two most important responses, designated by the letter “y”, are:

y1 - Conversion (% of reactants converted to product)
y2 - Activity
The experimenter chose three process factors to study. Their names and levels are shown in the following table.

Factor           Units      Low Level (-1)  High Level (+1)
A - Time         minutes    40              50
B - Temperature  degrees C  80              90
C - Catalyst     percent    2               3

Factors for response surface study

You will study the chemical process using a standard RSM design called a central composite design (CCD). It’s well suited for fitting a quadratic surface, which usually works well for process optimization.

Default CCD option for alpha set so design is rotatable

Many options are statistical in nature, but one that produces less extreme factor ranges is the “Practical” value for alpha.
This is computed by taking the fourth root of the number of factors (in this case 3^(1/4), or 1.31607). See RSM Simplified Chapter 8 “Everything You Should Know About CCDs (but dare not ask!)” for details on this practical versus other levels suggested for alpha in CCDs – the most popular of which may be the “Face Centered” (alpha equals one). Press OK to accept the rotatable value. (Note: you won’t get the “center points in each axial block” option until you change to 2 blocks in this design, as below). Using the information provided in the table on page 1 of this tutorial (or on the screen capture below), type in the details for factor Name (A, B, C), Units, and Low and High levels.

Enter the Response Data – Create Simple Scatter Plots

Assume that the experiment is now completed.
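The alpha choices discussed here are simple to compute directly. Below is a short Python sketch (the helper name is ours, not part of Design-Expert), assuming a full two-level factorial core:

```python
def ccd_alphas(k: int) -> dict:
    """Common axial-distance (alpha) choices for a CCD with k factors
    and a full two-level factorial core."""
    return {
        "rotatable": (2 ** k) ** 0.25,   # fourth root of the number of factorial points
        "practical": k ** 0.25,          # fourth root of the number of factors
        "face_centered": 1.0,            # axial points sit on the faces of the cube
    }

for name, alpha in ccd_alphas(3).items():
    print(f"{name}: {alpha:.5f}")
```

For three factors, the practical value is 3^(1/4) = 1.31607, matching the text, while the rotatable value is larger (8^(1/4) = 1.68179), which is what makes its factor ranges more extreme.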
At this stage, the responses must be entered into Design-Expert. We see no benefit to making you type all the numbers, particularly with the potential confusion due to differences in randomized run orders. Therefore, use the Help, Tutorial Data menu and select Chemical Conversion from the list. Let’s examine the data! Click on the Design node on the left to view the design spreadsheet. Move your cursor to Std column header and right-click to bring up a menu from which to select Sort Ascending (this can also be done via a double-click on the header).
Displaying the Point Type Notice the new column identifying points as “Factorial,” “Center” (for center point), and so on. Notice how the factorial points align only to the Day 1 block. Then in Day 2 the axial points are run. Center points are divided between the two blocks.
Unless you change the default setting for the Select option, do not expect the Type column to appear the next time you run Design-Expert. It is only on temporarily at this stage for your information. Before focusing on modeling the response as a function of the factors varied in this RSM experiment, it will be good to assess the impact of the blocking via a simple scatter plot. Click the Graph Columns node branching from the design ‘root’ at the upper left of your screen. You should see a scatter plot with factor A:Time on the X-axis and the Conversion response on the Y-axis.
Note The correlation grid that pops up with the Graph Columns can be very interesting. First off, observe that it exhibits red along the diagonal—indicating the complete (r=1) correlation of any variable with itself (Run vs Run, etc). Block versus run (or, conversely, run vs block) is also highly correlated due to this restriction in randomization (runs having to be done for day 1 before day 2). It is good to see so many white squares because these indicate little or no correlation between factors, thus they can be estimated independently. For now, it is most useful to produce a plot showing the impact of blocks because this will be literally blocked out in the analysis.
Therefore, on the floating Graph Columns tool click the button where Conversion intersects with Block as shown below. Begin analysis of Conversion Design-Expert provides a full array of response transformations via the Transform option. Click Tips for details. For now, accept the default transformation selection of None. Now click the Fit Summary tab.
At this point Design-Expert fits linear, two-factor interaction (2FI), quadratic, and cubic polynomials to the response. At the top is the response identification, immediately followed below, in this case, by a warning: “The Cubic Model is aliased.” Do not be alarmed. By design, the central composite matrix provides too few unique design points to determine all the terms in the cubic model. It’s set up only for the quadratic model (or some subset).
Next you will see several extremely useful tables for model selection. Each table is discussed briefly via sidebars in this tutorial on RSM.

Note The Sequential Model Sum of Squares table: The model hierarchy is described below:
- “Linear vs Block”: the significance of adding the linear terms to the mean and blocks,
- “2FI vs Linear”: the significance of adding the two-factor interaction terms to the mean, block, and linear terms already in the model,
- “Quadratic vs 2FI”: the significance of adding the quadratic (squared) terms to the mean, block, linear, and two-factor interaction terms already in the model,
- “Cubic vs Quadratic”: the significance of the cubic terms beyond all other terms.

Fit Summary tab

For each source of terms (linear, etc.), examine the probability (“Prob > F”) to see if it falls below 0.05 (or whatever statistical significance level you choose). So far, Design-Expert is indicating (via bold highlighting) that the quadratic model looks best – these terms are significant, but adding the cubic-order terms will not significantly improve the fit. (Even if they were significant, the cubic terms would be aliased, so they wouldn’t be useful for modeling purposes.) Move down to the Lack of Fit Tests pane for Lack of Fit tests on the various model orders. The “Lack of Fit Tests” pane compares residual error with “Pure Error” from replicated design points.
If there is significant lack of fit, as shown by a low probability value (“Prob > F”), then be careful about using the model as a response predictor. In this case, the linear model definitely can be ruled out, because its Prob > F falls below 0.05. The quadratic model, identified earlier as the likely model, does not show significant lack of fit. Remember that the cubic model is aliased, so it should not be chosen. Look over the last pane in the Fit Summary report, which provides “Model Summary Statistics” for the ‘bottom line’ on comparing the options. The quadratic model comes out best: It exhibits low standard deviation (“Std. Dev.”), high “R-Squared” values, and a low “PRESS.” The program automatically underlines at least one “Suggested” model. Always confirm this suggestion by viewing these tables.
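The fitting that Fit Summary automates is ordinary least squares on polynomial models of increasing order. The sketch below fits a full quadratic model to synthetic data (these are NOT the tutorial's Conversion values) and reports R-squared:

```python
import numpy as np

# Illustrative only: fit y = b0 + (linear) + (2FI) + (squared) terms by least
# squares on coded factor levels. The data below are made up for this sketch.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 3))                       # coded levels of A, B, C
y = 80 + 2 * X[:, 0] + 5 * X[:, 1] - 3 * X[:, 2] ** 2 + rng.normal(0, 0.5, 20)

def quadratic_design_matrix(X):
    """Columns: intercept, A, B, C, AB, AC, BC, A^2, B^2, C^2."""
    A, B, C = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), A, B, C,
                            A * B, A * C, B * C, A ** 2, B ** 2, C ** 2])

M = quadratic_design_matrix(X)
beta, *_ = np.linalg.lstsq(M, y, rcond=None)               # fitted coefficients
resid = y - M @ beta
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(f"R-squared: {r2:.3f}")
```

A CCD supplies just enough unique points to estimate these ten quadratic coefficients, which is why the aliased cubic model cannot be fit from the same runs.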
The options for process order Also, you could now manually reduce the model by clicking off insignificant effects. For example, you will see in a moment that several terms in this case are marginally significant at best.
Design-Expert provides several automatic reduction algorithms as alternatives to the “Manual” method: “Backward,” “Forward,” and “Stepwise.” Click the “Auto Select” button to see these. For more details, try Screen Tips and/or search Help. Click the ANOVA tab to produce the analysis of variance for the selected model.

Statistics for selected model: ANOVA table

The ANOVA in this case confirms the adequacy of the quadratic model (the Model Prob > F is less than 0.05). You can also see probability values for each individual term in the model.
You may want to consider removing terms with probability values greater than 0.10. Use process knowledge to guide your decisions.
Next, move over to the Fit Statistics pane to see that Design-Expert presents various statistics to augment the ANOVA. The R-Squared statistics are very good, near 1.

Cook’s Distance — the first of the Influence diagnostics

Nothing stands out here. Move on to the Leverage tab. Leverage is best explained in the previous One-Factor RSM tutorial, so go back to that if you have not already gone through it.
Then skip ahead to DFBETAS, which breaks down the changes in the model into each coefficient, which statisticians symbolize with the Greek letter β, hence the acronym DFBETAS — the difference in betas. For the Term click the down-list arrow and select A as shown in the following screen shot. Note Click outside the Term field, then reposition your mouse over it and simply scroll your mouse wheel to quickly move up and down the list. In a similar experiment to this one, where the chemist changed catalyst, the DFBETAS plot for that factor exhibited an outlier for the one run where its level went below a minimal level needed to initiate the reaction. Thus, this diagnostic proved to be very helpful in seeing where things went wrong in the experiment. Now move on to the Report tab in the bottom-right pane to bring up detailed case-by-case diagnostic statistics, many of which have already been shown graphically.
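Conceptually, DFBETAS refits the model with each run deleted and reports the scaled change in every coefficient. The sketch below illustrates this on invented data for a simple one-factor fit (not the tutorial's model), with an outlier planted at run 3:

```python
import numpy as np

# Minimal DFBETAS sketch for an ordinary least-squares fit. Data are invented;
# run 3 (0-based) is deliberately made an outlier so it stands out.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(12), rng.uniform(-1, 1, 12)])  # intercept + one factor
y = 70 + 4 * X[:, 1] + rng.normal(0, 1, 12)
y[3] += 8                                                   # plant the outlier

beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
scale = np.sqrt(np.diag(np.linalg.inv(X.T @ X)))            # per-coefficient scaling
dfbetas = np.zeros_like(X)
for i in range(len(y)):
    keep = np.arange(len(y)) != i
    beta_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    resid = y[keep] - X[keep] @ beta_i
    s_i = np.sqrt(resid @ resid / (keep.sum() - X.shape[1]))  # sigma with run i deleted
    dfbetas[i] = (beta_full - beta_i) / (s_i * scale)

flagged = int(np.argmax(np.abs(dfbetas[:, 0])))
print("run with largest intercept DFBETAS:", flagged)
```

Just as in the catalyst anecdote above, the run whose deletion shifts the coefficients most is the one this diagnostic flags.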
Note Design-Expert displays any actual point included in the design space shown. In this case you see a plot of conversion as a function of time and temperature at a mid-level slice of catalyst. This slice includes six center points as indicated by the dot at the middle of the contour plot. By replicating center points, you get a very good power of prediction at the middle of your experimental region. The Factors Tool appears on the right with the default plot. Move this around as needed by clicking and dragging the top blue border (drag it back to the right side of the screen to “pin” it back in place). The tool controls which factor(s) are plotted on the graph.
Note Each factor listed in the Factors Tool has either an axis label, indicating that it is currently shown on the graph, or a slider bar, which allows you to choose specific settings for the factors that are not currently plotted. All slider bars default to midpoint levels of those factors not currently assigned to axes.
You can change factor levels by dragging their slider bars or by left-clicking factor names to make them active (they become highlighted) and then typing desired levels into the numeric space near the bottom of the tool. Give this a try. Click the C: Catalyst bar to see its value.
Don’t worry if the slider bar shifts a bit — we will instruct you how to re-set it in a moment. Note To enable a handy tool for reading coordinates off contour plots, go to View, Show Crosshairs Window (click and drag the title bar if you’d like to unpin it from the left of your screen). Now move your mouse over the contour plot and notice that Design-Expert generates the predicted response for specific factor values corresponding to that point. If you place the crosshair over an actual point, for example the one at the far upper left corner of the graph now on screen, you also see that observed value (in this case: 66).

Factors sheet

In the columns labeled Axis and Value you can change the axes settings by right-clicking, or type in specific values for factors.
Give this a try. Then close the window and press the Default button. The Terms list on the Factors Tool is a drop-down menu from which you can also select the factors to plot. Only the terms that are in the model are included in this list.
At this point in the tutorial this should be set at AB. If you select a single factor (such as A) the graph changes to a One-Factor Plot. Try this if you like, but notice how Design-Expert warns if you plot a main effect that’s involved in an interaction.

The Perturbation plot with factor A clicked to highlight it

For response surface designs, the perturbation plot shows how the response changes as each factor moves from the chosen reference point, with all other factors held constant at the reference value. Design-Expert sets the reference point default at the middle of the design space (the coded zero level of each factor). Click the curve for factor A to see it better. The software highlights it in a different color as shown above. It also highlights the legend. (You can click that too – it is interactive!) In this case, at the center point, you see that factor A (time) produces a relatively small effect as it changes from the reference point. Therefore, because you can only plot contours for two factors at a time, it makes sense to choose B and C – and slice on A.
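What the perturbation plot computes can be sketched numerically. The quadratic coefficients below are invented for illustration (they are not the fitted Conversion model), but they show the idea: trace each factor across its coded range with the others fixed at zero, then compare the swings:

```python
def predict(a, b, c):
    """Assumed quadratic model in coded units; coefficients invented for illustration."""
    return 80 + 1.5 * a + 4.0 * b - 2.0 * c + 2.5 * b * c - 3.0 * c ** 2

# Perturbation trace: vary one coded factor from -1 to +1 while holding the
# other two at the reference point (the coded zero level).
traces = {
    "A": [predict(t, 0, 0) for t in (-1, 0, 1)],
    "B": [predict(0, t, 0) for t in (-1, 0, 1)],
    "C": [predict(0, 0, t) for t in (-1, 0, 1)],
}
for name, trace in traces.items():
    print(name, trace, "swing:", max(trace) - min(trace))
```

In this made-up model, factor A shows the smallest swing, which is exactly the kind of evidence used above to justify slicing on A and plotting contours for B and C.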
Expert Choice Solutions combine collaborative team tools and proven mathematical techniques to enable your team to reach the best decision for a goal. The Expert Choice process lets you:
- structure complexity,
- measure the importance of competing objectives and alternatives,
- synthesize information, expertise, and judgments,
- conduct what-if and sensitivity analyses,
- clearly communicate and share results, iterating parts of the decision process when necessary, and
- allocate resources (if desired).

Upon completion of an Expert Choice evaluation, you and your colleagues will have a thorough, rational, and understandable decision that is intuitively appealing and that can be communicated and justified.

A simple AHP Hierarchy

This might take the form of (a) choosing an alternative course of action, or (b) allocating resources to a combination (portfolio) of alternatives. The process is iterative and not necessarily one pass through a fixed number of steps. The process involves combining logic and intuition with data and judgment based on knowledge and experience. Structuring is the first step in both making a choice of the 'best' (or most preferred) alternative as well as in optimally allocating resources to a combination of alternatives.
Structuring involves identifying alternative courses of action, organizing objectives (sometimes called criteria) into a hierarchy, determining which objectives each of the alternatives contributes to, and identifying participants and their roles (based on governance considerations where appropriate).
After structuring a hierarchy of objectives and identifying alternatives, priorities are derived for the relative importance of the objectives as well as the relative preference of the alternatives with respect to the objectives. Originally, all measurement with AHP was performed by pairwise relative comparisons of the elements in each cluster of the hierarchy, taken two elements at a time. For example, if a cluster of objectives consisted of Cost, Performance, Reliability, and Maintenance, judgments would be elicited for the relative importance of each possible pair: Cost vs. Performance, Cost vs. Reliability, Cost vs. Maintenance, Performance vs. Reliability, Performance vs. Maintenance, and Reliability vs. Maintenance. While pairwise relative comparisons are used in AHP and Expert Choice to derive the priorities of the objectives in the objectives hierarchy, AHP and Expert Choice were subsequently modified to incorporate absolute as well as relative measurement for deriving priorities of the alternatives with respect to the objectives. All measures derived with Expert Choice possess the ratio scale levels of measure. If priorities do not possess the ratio level property, as often occurs with other decision methodologies, such as weights and scores in a spreadsheet, the results are likely to be mathematically meaningless. Measurement can be performed by making pairwise relative comparisons, or by using absolute rating scales.
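The standard AHP way of turning such pairwise judgments into priorities is the principal-eigenvector method. Here is a short Python sketch with a hypothetical comparison matrix for the four objectives above (the judgments are made up, not from any Expert Choice model):

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for (Cost, Performance, Reliability,
# Maintenance) on Saaty's 1-9 scale. A[i, j] says how much more important
# objective i is than objective j, so the matrix is reciprocal: A[j, i] = 1/A[i, j].
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

# Principal-eigenvector method: priorities are the normalized dominant eigenvector.
eigvals, eigvecs = np.linalg.eig(A)
dominant = np.argmax(np.real(eigvals))
w = np.real(eigvecs[:, dominant])
priorities = w / w.sum()
print(np.round(priorities, 3))

# Consistency ratio: below about 0.10 is conventionally acceptable.
lam = np.real(eigvals[dominant])
ci = (lam - len(A)) / (len(A) - 1)
cr = ci / 0.90  # 0.90 is the standard random index for n = 4
print(f"CR = {cr:.3f}")
```

The resulting priorities sum to one and are on a ratio scale, which is the property the surrounding text emphasizes; the consistency ratio flags judgment sets that contradict themselves.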

Expert Choice allows for subjective as well as objective measurement. This capability makes it somewhat unique in that most mathematical models don't allow for human judgment to the extent possible with AHP and Expert Choice. Furthermore, all measures derived with Expert Choice are 'ratio level measures', an important property that avoids computations that lead to mathematically meaningless results.
A synthesis (combining) of the measures according to the objectives hierarchy follows the structuring and measurement steps. This is done automatically by Expert Choice. This synthesis is quite unique (as far as models go) since it includes both objective information (based on whatever hard data is available) as well as subjectivity in the form of knowledge, experience, and judgment of the participants. The synthesis results include priorities for the competing objectives as well as overall priorities for the alternatives.
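The additive synthesis that Expert Choice performs automatically can be sketched in a few lines. The objective weights and local priorities below are invented for illustration; in a real model they would come from the measurement step:

```python
# Hypothetical objective weights and local alternative priorities.
objective_weights = {"Cost": 0.56, "Performance": 0.26, "Reliability": 0.18}
local_priorities = {
    "Alt X": {"Cost": 0.5, "Performance": 0.2, "Reliability": 0.4},
    "Alt Y": {"Cost": 0.3, "Performance": 0.5, "Reliability": 0.3},
    "Alt Z": {"Cost": 0.2, "Performance": 0.3, "Reliability": 0.3},
}

# Additive synthesis: each alternative's overall priority is the weighted sum of
# its local priorities under the objectives.
overall = {alt: sum(objective_weights[obj] * p for obj, p in scores.items())
           for alt, scores in local_priorities.items()}
for alt, score in sorted(overall.items(), key=lambda kv: -kv[1]):
    print(f"{alt}: {score:.3f}")
```

With a multi-level hierarchy the same weighted sum is applied recursively from the bottom up, objective weights at each level multiplying the priorities beneath them.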
Because of the structuring and measurement methods used by Expert Choice, the results are mathematically sound, unlike many traditional approaches such as using spreadsheets to rate alternatives. But having mathematically sound results is not enough.
The results must be intuitively appealing as well. The synthesis workflow step provides tools (such as sensitivity analysis and consensus measures) to allow you and your colleagues to examine the results from numerous perspectives. Using these tools, you can ask and answer questions such as: What might be wrong with this conclusion? Why is Alternative Y not more preferable than Alternative X? If we were to increase the priority of the financial objective, why does Alternative Z become more preferable? Why might others in the organization feel that Alternative W should have a higher priority than Alternative X? The answers to one or more of these questions might signal the need for iteration.
If, for example, you feel that Alternative Y might be more preferable than Alternative X because of its style, and style is not one of the objectives in the model, iteration is necessary. If style is already in the model, does increasing the importance of style shift the priorities such that Alternative Y becomes more preferable than Alternative X? If not, then perhaps the judgments were entered incorrectly, and iteration to re-examine the judgments is called for. If style is already in the model and the judgments are reasonable, how much would the importance of style have to be changed before the decision were reversed?
If it is just a little bit, then you might reconvene those whose role it was to prioritize style and ask that they discuss their judgments and confirm that they are reasonable. If you are deciding on a combination of actions to take, such as a portfolio of capital investments or projects, the Comparion Resource Aligner allocation step is a powerful way to determine an optimal combination of actions or investments subject to constraints such as budget, personnel, space, materials, coverage, balance, and dependencies.
Using Expert Choice Comparion Resource Aligner you will be able to enter additional information pertaining to costs, risks, funding pools, dependencies and other constraints that will enable you to determine an optimal combination of alternatives under different scenarios. The following information is an overview of the resource allocation process as it is typically applied in applications such as project portfolio management.
We won't discuss details of the Resource Aligner application here but will give an overview instead. The figure above shows the optimum combination of projects given a budget of $11 million. Projects in yellow are in the optimal portfolio. This is typically only the first 'iteration' for an optimum portfolio. Subsequent iterations will evolve as decision makers discuss dependencies, balance and coverage, musts and must nots, and other resource constraints such as specific type of personnel, building space, etc. The Benefits column shows the relative benefits of each of the alternatives (often projects in a project portfolio application) that were derived in Expert Choice Comparion. Costs are typically dollar costs, but can be any constraint (such as Full Time Equivalents - FTE's) that places a limitation on selecting all of the alternatives.
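The underlying optimization is a constrained selection problem: maximize total benefit subject to a cost limit. The toy sketch below uses brute-force search over subsets (this is NOT the Resource Aligner's actual algorithm, which is not documented here, and the benefits and costs are invented):

```python
from itertools import combinations

# Toy portfolio selection: choose the subset of projects that maximizes total
# benefit without exceeding the budget. Numbers are invented for illustration.
projects = {"P1": (0.30, 5.0), "P2": (0.25, 4.0), "P3": (0.20, 3.0),
            "P4": (0.15, 2.0), "P5": (0.10, 1.0)}  # name: (benefit, cost in $M)
budget = 11.0

best, best_benefit = (), 0.0
for r in range(len(projects) + 1):
    for subset in combinations(projects, r):
        cost = sum(projects[p][1] for p in subset)
        benefit = sum(projects[p][0] for p in subset)
        if cost <= budget and benefit > best_benefit:
            best, best_benefit = subset, benefit

print("funded:", sorted(best), "total benefit:", round(best_benefit, 2))
```

Notice that the optimum here funds several cheaper projects instead of the two highest-benefit ones; this is why an optimizer can beat hand-ranking projects by benefit alone. Real solvers use integer programming rather than enumeration so that dependencies, musts, and must-nots can be expressed as additional constraints.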
You can specify many different constraints from the Resource Aligner tab in addition to the 'Cost' constraint on the data grid tab. The 'Cost' constraint, however, has a special function in that it can be used to generate a 'Pareto' curve that shows different optimum portfolios (combinations of alternatives) at different maximum cost values. Project risks can be entered directly into the risk column. The risks should be estimates of the relative risk of the alternatives or projects. If you need to derive such estimates, you can create and evaluate an 'associated' risk model using the Risks pull-down menu. You can set up 'Strategic Buckets' by designating categories for the alternatives/projects. Typical categories might include type of project, business area addressed, time frame, level of risk, geographical region, etc. The figure below shows the result of adding a 'Time frame' attribute with Short, Medium, and Long term category items.

Benefits vs. Costs 2D Plots

The figure below shows a 2D plot of benefits vs. costs for a set of projects, color coded by the Time frame category. Some organizations that don't have access to a resource aligner optimizer might use such a plot to hand select a 'balance' of projects that tend to be in the upper left of the plot (high benefit and low cost). They might also try to get some 'balance and coverage' for different strategic buckets such as Time frame. We can create a plot as above, but show only the funded projects or other plot view options. We could do this for different strategic bucket categories and look for cases where there may be an imbalance or lack of coverage.
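The 'upper left' hand-selection idea amounts to keeping only non-dominated projects: those for which no other project offers both higher benefit and lower cost. A small sketch with invented numbers:

```python
# Hypothetical projects: name -> (benefit, cost). A project is dominated if some
# other project has benefit at least as high AND cost at least as low.
projects = {"P1": (0.30, 5.0), "P2": (0.25, 4.0), "P3": (0.10, 4.5),
            "P4": (0.15, 2.0), "P5": (0.05, 3.0)}

def pareto_front(projects):
    """Return the names of non-dominated (upper-left) projects, sorted."""
    front = []
    for name, (b, c) in projects.items():
        dominated = any(b2 >= b and c2 <= c and (b2, c2) != (b, c)
                        for n2, (b2, c2) in projects.items() if n2 != name)
        if not dominated:
            front.append(name)
    return sorted(front)

print(pareto_front(projects))
```

Dominated projects (here P3 and P5) are never worth funding on benefit and cost alone, though balance, coverage, or dependency constraints may still bring them back into a portfolio.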
If we observe any, we could then add constraints to the resource aligner to constrain the projects in a category to some maximum or minimum number. There are many other capabilities available so that the decision makers can easily mold the optimum portfolio to their needs and wants. The decisions and resource allocations that Expert Choice is applied to are almost always complex, important (even crucial) and thus require iteration.
One pass through a series of steps is hardly ever enough. There are many reasons for this, including but not limited to:
- Someone in the group feels that Alternative Y is really more preferred than Alternative X because of its style, and you realize that style was not included as an objective.
- The most preferred alternative is likely to cause political problems with another part of the organization, resulting in delays in getting approval. An objective of time to get approval should be added to the model.
- The combination of the two most preferred alternatives suggests another alternative that was not considered, and should now be added to the model.
- A sensitivity analysis graph shows that the most costly alternative increases in priority as the importance of cost increases, indicating that judgments were entered incorrectly, e.g., in making a pairwise comparison, answering which alternative is more costly rather than which is more preferred with respect to cost.
- A performance analysis graph shows that one alternative is best on every objective and every sub-objective! This is a sign of either a trivial decision or, more likely, a sign that something was overlooked, because if an alternative is best on every objective except cost, for example, it is most likely going to cost more. If this occurs, look at the pros and cons of the alternatives, especially the cons of the alternative that is best on every objective.
The cons of this alternative should 'point to' objectives for which this alternative is NOT best! If you then add one or two objectives or sub-objectives based on the cons of the best alternative, you can send a link to participants to re-evaluate the project and click on 'next unassessed' to enter the few additional judgments necessary. Perhaps the most important reason is that your intuition does not agree with the results. AHP and Expert Choice models are unlike any other economic or management models in that they can and should incorporate any and all considerations - qualitative as well as quantitative, subjective as well as objective.
With adequate iteration, the results obtained with Expert Choice will always make sense. In some cases, the iteration will change the priorities of the alternatives to match one's intuition, while in other cases one's intuition will change due to insights gained from the model. There is no fixed sequence for iteration. In some cases, it might be obvious that additional alternatives should be identified or designed. In other cases, additional objectives might be added to the objectives hierarchy. Participant roles and judgments might need to be reexamined and discussed.
For either of the above cases you can return to the Structuring step at the top-level workflow, or click on the link to Structuring in the Structuring tab under this Iteration step, which will take you there as well. In some cases you might want to review the judgments made in the Measurement step. You can return to the Measurement step at the top-level workflow, or click on the link to Measurement in the Measurement tab under this Iteration step, which will take you there as well. Hint: Before iterating, you might want to make a copy of the project using the Save As option found in the Home Projects page. Regardless of what steps might be involved in iteration, the time required for iteration should be included in planning the decision process, and never as an afterthought.
If iteration is required but not performed, two important benefits of the decision process are jeopardized: The ability to justify the decision to others who might object or delay, and the ability to track the success of the decision process over time. The Analytic Hierarchy Process (AHP) is a powerful and flexible decision making process to help people set priorities and make the best decision when both qualitative and quantitative aspects of a decision need to be considered. By reducing complex decisions to a series of one-on-one comparisons, then synthesizing the results, AHP not only helps decision makers arrive at the best decision, but also provides a clear rationale that it is the best.
Designed to reflect the way people actually think, AHP was developed in the 1970s by Dr. Thomas Saaty, while he was a professor at the Wharton School of Business, and continues to be the most highly regarded and widely used decision-making theory. Saaty joined Dr.
Ernest Forman, a professor of management science at George Washington University, to co-found Expert Choice.

Structured Decision Making

The AHP and Expert Choice software engage decision makers in structuring a decision into smaller parts, proceeding from the goal to objectives to sub-objectives down to the alternative courses of action. Decision makers then make simple pairwise comparison judgments throughout the hierarchy to arrive at overall priorities for the alternatives. The decision problem may involve social, political, technical, and economic factors.

Analysis of Objective and Subjective Data

The AHP helps people cope with the intuitive, the rational and the irrational, and with risk and uncertainty in complex settings. It can be used to:
- predict likely outcomes,
- plan projected and desired futures,
- facilitate group decision making,
- exercise control over changes in the decision making system,
- allocate resources,
- select alternatives, and
- do cost/benefit comparisons.

Expert Choice is intuitive, graphically based and structured in a user-friendly fashion so as to be valuable for conceptual and analytical thinkers, novices and category experts. Because the criteria are presented in a hierarchical structure, decision makers are able to drill down to their level of expertise, and apply judgments to the objectives deemed important to achieving their goals. At the end of the process, decision makers are fully cognizant of how and why the decision was made, with results that are meaningful, easy to communicate, and actionable.