During its initial meetings and training in May 2013, the AcTF was introduced to the so-called Dickeson Model for Program Prioritization. The model described in Dickeson's book is deliberately vague, allowing each institution fairly free rein to implement it as it sees fit. However, the suggestions from the facilitator at the training sessions were rather more definite.
The AcTF was asked to determine weights for the various criteria against which programs were to be assessed. These weights were designed to sum to 1.0 (or 100%), and the suggestion was that they would be used to support a linear combination of scores across the criteria. The resulting weighted score would then allow easy grouping of programs into quintiles. The quintiles thus amount to a linear ranking, albeit clustered into five equally sized groups.
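The suggested model can be sketched in a few lines. This is an illustrative reconstruction, not the AcTF's actual implementation; the program names and scores below are invented.

```python
# Sketch of the facilitator's suggested grouping model: per-criterion
# scores are combined linearly with weights summing to 1.0, programs
# are ranked by the weighted score, and the ranking is cut into five
# equally sized quintiles.

def weighted_score(scores, weights):
    """Linear combination of per-criterion scores; weights sum to 1.0."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1.0"
    return sum(s * w for s, w in zip(scores, weights))

def quintiles(programs, weights):
    """Rank programs by weighted score and split into five equal groups."""
    ranked = sorted(programs,
                    key=lambda p: weighted_score(p["scores"], weights),
                    reverse=True)
    size = len(ranked) // 5  # assumes the count divides evenly, for simplicity
    return [ranked[i * size:(i + 1) * size] for i in range(5)]
```

The appeal is obvious: the whole placement decision reduces to one sort and one slice. The weakness, as discussed below, is that the single linear score carries all the judgment.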
The facilitator assured us that this approach was the easiest way to place programs into quintiles, and that it allowed quick grouping. This suggested grouping model was more definite than that in the book, and was apparently based on implementation experience elsewhere.
Recalling that the Dickeson Model is primarily designed for a triage situation, these attributes of speed and simplicity are indeed an asset. An institution in financial crisis must move quickly to resolve its issues, and a quick resolution of the question of how to reallocate resources to ensure survival for the largest part of the institution is critical. Whether this is the best way is another question and, as it turns out, is not relevant to UAA’s situation.
The AcTF recognized very early the strongly non-linear nature of the problem of determining the grouping model. This militated against a simple linear combination supported by predetermined weights, which was incompatible with the situation as we saw it. This was a significant factor in the AcTF moving away from the Dickeson Model and its suggested grouping model.
Why does the grouping process (how we go from an assessment of a program against the criteria to a program’s placement in a category, as set out in what we are calling the grouping model) have this non-linearity? Perhaps this is easier to show with some examples.
Suppose we have a program that scores very well against nine of the ten criteria. It is well aligned with UAA’s mission, is efficient and effective, does all the right things within and outside the university, and is clearly important to the state and region. In a strictly linear scoring system, such a program will rank at or very close to the top, and so would qualify for additional funding. But what if the one criterion where it scored poorly was concerned with its opportunity for development with additional funds? This program could be doing everything right, but be at capacity, either internally (faculty, space, equipment), or in terms of capturing the potential student pool. If the program would require major additional funding in order to increase its numbers by, say, 10%, because it needed to build new facilities or attract significant numbers from out of state, it would not benefit from the smaller amounts of incremental funding available through reallocation via PBAC. So it should not be placed in the ‘top category,’ even though the numbers would require that outcome in a strictly linear grouping model.
Program Prioritization is supposed to deal with resources that PBAC can manage. This means relatively small amounts. Whatever the merits of the case, Program Prioritization cannot deal with situations where funding has to come from major capital allocations or grants. A recent example has been the School of Engineering. Program Prioritization could not have been used to secure funding for the new Engineering Building, beyond providing supporting information; gaining such funding is necessarily a very different process.
Similarly, consider a program in dire straits. Its finances and management are in a mess, for whatever reason, and it scores poorly against every criterion except one. That one criterion is essentiality, as this program is critically important for the state. While a strictly linear model would probably place it squarely in the 'bottom' category, it really belongs in the Transform category, and would probably benefit significantly from reorganization and some additional resources.
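The two examples can be made concrete with some invented numbers. Assume ten criteria scored 1 to 5 and, for simplicity, equal weights; the criterion names here are illustrative, not the AcTF's actual criteria.

```python
# Invented scores showing how a strictly linear model misplaces both
# example programs. Ten criteria, each scored 1-5, equal weights of 0.1.

at_capacity = {"mission": 5, "efficiency": 5, "effectiveness": 5,
               "outreach": 5, "demand": 5, "quality": 5, "revenue": 5,
               "size": 5, "essentiality": 5, "growth_capacity": 1}

essential_but_troubled = {k: 1 for k in at_capacity}
essential_but_troubled["essentiality"] = 5

def linear_score(scores, weights=None):
    """Weighted linear combination; defaults to equal weights summing to 1.0."""
    w = weights or {k: 1.0 / len(scores) for k in scores}
    return sum(scores[k] * w[k] for k in scores)

linear_score(at_capacity)             # 4.6: 'top' of the ranking, yet extra
                                      # incremental funds cannot help it
linear_score(essential_but_troubled)  # 1.4: 'bottom' of the ranking, yet it
                                      # belongs in Transform
```

In both cases a single criterion carries information that should override the aggregate, which is exactly what a linear combination cannot express.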
It can be seen that the linear approach has very limited flexibility as a grouping model. If the outcomes have to be modified based on a more realistic assessment of the situation, clearly the grouping model is of limited utility, and probably should not be used. (In Geomatics-speak, there are systematic errors introduced by a model that does not match reality in some important way(s).) This was the conclusion of the AcTF, reached after much consideration, discussion and soul-searching. One of the consequences of this conclusion is that the original weights were changed to be guides to the relative importance of criteria, not the basis of linear combinations.
Similarly, the non-linearity extended to the categories in which programs are placed. We have been at pains to convey the idea that the categories are not linear, and that there is not a 'top' and a 'bottom.' This is a significant departure from the Dickeson Model, which has a very definite ordering to the categories, and matching placement methodologies. It is also important to realize that what the AcTF does cannot be converted into any kind of action without a subsequent process. PBAC determines additional funding, not the AcTF. There is a lot of input to the processes for programs in the Further Review category, in which the AcTF will play no part. The AcTF is at least as much about information creation and dissemination as it is about program assessment and categorization.
Looking at a more complex combination-type approach, such as that used in the Analytic Hierarchy Process (AHP), pairwise comparisons of criteria are used to derive what is still a set of weights for linear combination. While such an approach has the potential to provide finer discrimination, especially when weights must be elicited from non-expert judgments, the combination is still linear.
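To show what AHP adds, and what it does not, here is a minimal sketch of deriving weights from a pairwise comparison matrix using the standard geometric-mean approximation of the principal eigenvector. The comparison values are invented.

```python
import math

def ahp_weights(M):
    """Derive criterion weights from a reciprocal pairwise comparison
    matrix M, where M[i][j] says how much more important criterion i
    is than criterion j. Geometric-mean approximation."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]  # normalized so weights sum to 1.0

# Three criteria: A judged twice as important as B, four times as
# important as C; B twice as important as C (a consistent matrix).
M = [[1,   2,   4],
     [1/2, 1,   2],
     [1/4, 1/2, 1]]
w = ahp_weights(M)  # [4/7, 2/7, 1/7]
```

The pairwise elicitation may yield better weights than assigning them directly, but the output is still a weight vector feeding a linear combination, so the non-linearity problem above remains untouched.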
An alternative, rather different, approach is that adopted in expert systems. Here we arrange a number of “IF … THEN … ELSE” combinations to provide the grouping model. The problem with this is twofold. First is that it is necessary to define ahead of time all possible combinations and their outcomes, and this can be very difficult if qualitative assessment is used for each criterion. The second problem is that we are doing this for the first time at UAA, and we have no experts from whom to extract the knowledge of how it should be done. All we have are faculty with experience of academic programs at UAA, who are going to have to assess these combinations as they arise. As it is difficult to create an expert system before there are any experts in that particular field, the expert system style of approach has serious implementation problems in our current circumstances.
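The expert-system style can be sketched as an ordered list of IF ... THEN rules, each a condition over the criterion assessments. The rules and most category names below are invented for illustration (only Transform and Further Review appear in our actual category scheme), which is precisely the first problem noted above: someone must write all of these rules in advance.

```python
# A minimal rule-based grouping sketch: rules are tried in order, and
# the first matching rule determines the category. "Invest" and
# "Maintain" are placeholder category names; the thresholds are invented.

RULES = [
    # Essential to the state but performing badly: reorganize and resource.
    (lambda s: s["essentiality"] >= 5 and s["overall"] <= 2, "Transform"),
    # Strong program already at capacity: incremental funds won't help.
    (lambda s: s["overall"] >= 4 and s["growth_capacity"] <= 2, "Maintain"),
    # Strong program with room to grow.
    (lambda s: s["overall"] >= 4, "Invest"),
]

def classify(scores, default="Further Review"):
    """Return the category of the first rule whose condition matches."""
    for condition, category in RULES:
        if condition(scores):
            return category
    return default
```

Note that the rule list handles the two earlier examples correctly, but only because we wrote rules for them; every combination not anticipated falls through to the default, and without prior experts there is no principled way to enumerate the rules in advance.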
Our best approach to creating the grouping model for the UAA Program Prioritization Process is to rely on the expertise we have, and allow the AcTF faculty to develop (in effect) the ‘expert system’ as we work our way through the process. Here we rely on Professional Judgment, and after many months of considering this problem, it looks to be the best solution available. While it may not have the superficial certainty of a more mathematical approach, it has flexibility and a capacity for rapid learning. It also has the ability to self-assess its own processes as they are being developed. These are major advantages.
The primary advantage to this approach is that it is based on humans and their wetware, rather than being based in deterministic machine software. The main disadvantage of this approach is that it is based on humans and their wetware. Because we are dealing with humans, a great deal comes down to trust.
The AcTF is currently wrestling with the problem of ensuring that this trust is maintained and assured to the university community. What would constitute a conflict of interest, and when does it move from being merely potential to being real? When should members recuse themselves, and does it really matter in a group of 18 voting members that has to achieve an 80% majority of votes cast for all decisions? How effective is any 'code of conduct' likely to be, and how much of it would be merely symbolic rather than actually addressing real conflicts of interest?
Self-assessment and self-monitoring are potential solutions, but they rely on trust. A comprehensive process of exclusion for potential conflicts may render the process unworkable, since, if this is a competition for finite resources (a zero-sum game), all members are potentially biased against every program but their own.
The AcTF brings a wide range of viewpoints to this issue, and is very aware of concerns across the university community. Our discussions on the subject are wide-ranging, varied and informative, but we have not yet reached consensus on a workable solution, despite devoting a lot of time to this issue.
This is a very difficult problem, and we are trying to apply Professional Judgment to its resolution. It has to be resolved for a reliable and acceptable outcome to be achieved.