Models Considered by the Academic Task Force

Introduction

At its meeting on Friday, October 4, 2013, Faculty Senate approved the following motion, which was subsequently approved by the Provost:

“The Faculty Senate recommends that the quintile system be reconsidered.”

The Academic Task Force is pleased to report that it has been reconsidering the quintile system since May 2013. After six months of considering various grouping and connection models, it has adopted its own Five Groups with Professional Judgment (FGPJ) Model. While this model has similarities to the original quintile system, it was adopted only after consideration of no fewer than seven different models, and for reasons not connected to the original adoption of the quintile system. The reasons for adopting this system are:

  • it has a reasonable degree of simplicity;
  • it is easy to implement and understand;
  • it includes a connection model that takes advantage of existing professional expertise within the university;
  • it has some degree of flexibility in terms of program placement in categories while avoiding the potential risk of having empty groups; and
  • it can be subjected to cross-checking and self-assessment during the operation of the process.

A further advantage of this model is that it can be used to help the development of future models for later iterations of the prioritization process.

The Academic Task Force therefore considers that it has met both the spirit and the letter of the Faculty Senate motion, and wishes to report a successful completion of the reconsideration of the quintile system.

 

Models Considered

In order to take a large number of diverse and multi-faceted academic programs and reduce them to a number of groups of similar programs, two models are required. The first is concerned with how the programs will be grouped: the number and sizes of the final groups, together with any constraints on how many programs each group may contain. We term this the grouping model. The second connects the facets of each program to membership in the various groups, by explaining how information about each program, such as that collected using the template, is used to determine its placement. This we call the connection model.

The so-called Dickeson Model is vague on both counts. It suggests five groups with equal numbers in each, but cites examples of institutions that used three groups, as well as other numbers of groups. No strong case is made for any particular number of groups, so its grouping model is weakly specified. The Dickeson Model does even less to specify the connection model, leaving it largely to the individual institution’s judgment.

Developing a pair of suitable models has been one of the AcTF’s primary activities over the last six months. The following models were considered.

Dickeson Model           This was too vague for our use, and is also predicated on being a triage model. We are more concerned with information flows, and so we effectively abandoned this model very early in the process, around late May. Added to the difficulty of determining exactly what the Dickeson Model consists of is its very open and flexible nature. Essentially, the Dickeson Model can be reduced to “cluster programs into meaningful groups, based on some meaningful connection model.” Whether this is a distinct model, or just a statement of the patently obvious, is open to debate. Given that Dickeson’s objective in the book is to urge universities and colleges to go through the process, rather than to sell them a ‘one-size-fits-all’ process, the specifications of the two critical models are necessarily general and vague.

Simple Ranking           This only works when the various facets of all the programs can be mapped onto a meaningful numerical scale, which can then be divided at suitable points to form groups. The only definite grouping model here is to use groups of one program each. The problem we faced was that no single meaningful scale could represent the full range of program diversity we were trying to capture, so after considerable discussion over six months we finally abandoned this model. Add to that the difficulty of developing a meaningful connection model for almost 400 very diverse programs, and this approach becomes very complex, and possibly unworkable if truly meaningful results are required.
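To make the mechanics (and the limitation) concrete, here is a minimal sketch of the simple ranking approach, with invented program names and composite scores; it assumes the single meaningful scale that, in practice, we could not construct:

```python
# A purely illustrative sketch of the Simple Ranking model: every program
# is reduced to a single composite score (values invented), the list is
# sorted, and the sorted list is cut into equal-sized groups.
programs = {"Prog A": 71.2, "Prog B": 64.8, "Prog C": 88.0,
            "Prog D": 55.1, "Prog E": 90.3, "Prog F": 47.9}

ranked = sorted(programs, key=programs.get, reverse=True)

n_groups = 3
size = -(-len(ranked) // n_groups)  # ceiling division
for g in range(n_groups):
    print(f"Group {g + 1}: {ranked[g * size:(g + 1) * size]}")
```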

To give a basic example of the ranking model and to understand its limitations in our circumstances, recall that IQ is a simple linear scale representing some aspect of human intelligence. However, it has no provision for dealing with social skills, creativity, insight, determination, perseverance or leadership, all of which are important for success in many domains of endeavor. Also, IQ scores carry a standard error of measurement of around 3 points (in certain tests), so to determine whether a difference of, say, 5 IQ points is actually meaningful, we have to resort to statistical testing, something much of the general population finds hard to grasp. IQ scores thus give a false sense of precision by the very fact of assigning numerical values on what appears to be a real number line, while in fact they are fairly fuzzy values that cannot be readily compared by simple differences.
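A short worked check makes the point, assuming a standard error of measurement of about 3 points per score (a figure for certain tests, used here purely for illustration):

```python
import math

# Illustrative significance check, assuming each IQ score carries a
# standard error of measurement (SEM) of about 3 points.
sem = 3.0
diff = 5.0                              # observed difference between two scores
se_diff = math.sqrt(sem**2 + sem**2)    # SEM of the difference, ~4.24 points
z = diff / se_diff                      # ~1.18 standard errors
print(f"z = {z:.2f}")                   # well below 1.96, the usual 5% threshold
```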

Analytical Hierarchy Process (AHP)             This method, backed by a large body of literature and examples, including software (ExpertChoice) and decades of consulting services, was considered quite seriously, especially since CTC had already used it for a similar activity within the college. It was abandoned because it is far better at making fine discriminations than at grouping like with like, which was the AcTF’s primary objective. It is also highly mathematical, and so difficult to explain to a significant body of the faculty, and it needs considerable preliminary work that presumes a substantial body of information is already on hand.

AHP requires that every pair-wise combination of components for each criterion be compared, assessed for consistency, and then combined to give a set of weights, based on the principal eigenvector of the pair-wise comparison matrix. All of the criteria then undergo a similar series of pair-wise comparisons, and the resulting comparison matrix is likewise transformed into a set of weights. While this gives a mathematically sound way to make fine distinctions between individual items, it downplays professional judgment by seeking to reduce it to the finest levels. AHP is far more useful for applying non-professional experience to the determination of the weights, as it can then bring together a lot of disparate experience and knowledge. At UAA we have the advantage of being able to apply professional judgment directly to the problem.
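For readers curious about the mathematics being described, here is a minimal sketch of the AHP weight calculation for an invented 3×3 comparison matrix of three hypothetical criteria; it is not part of our process, merely an illustration of what AHP does:

```python
import numpy as np

# Illustrative AHP calculation for three hypothetical criteria. Entry [i, j]
# records how much more important criterion i is judged to be than j; the
# matrix is reciprocal by construction.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # index of the principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                          # normalized criterion weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)     # consistency index; near 0 is coherent
print("weights:", np.round(w, 3), "consistency index:", round(ci, 4))
```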

Analytical Network Process (ANP)   This is a development of the AHP model, but includes feedback loops in the hierarchies. It was abandoned with relatively little consideration after AHP was abandoned, primarily because of its complexity and uncertain utility for UAA.

Quadrants       This model was developed as a home-grown compromise among the various connection models being discussed at the time, and as a means of clarifying the discussion. It was discussed in depth for almost two months, but was abandoned because of the difficulty of translating the multiple facets of programs onto two numerical scales, and because the scales proposed were vague and difficult to measure in terms of outcomes.

Clustering        This model allows the data to form their own clusters, being a form of (initially) unsupervised classification. It has several advantages: it can be applied to data whose structure is unknown (the situation at UAA), and it can be applied iteratively to refine the clusters. By using the initial clusters to refine our understanding of the data, a more supervised classification can then be undertaken to produce more meaningful groupings, based on the actual data rather than preconceptions. It can also perform a form of data mining, helping to find things in the data that we were not expecting. However, it too is highly mathematical and statistical, and so difficult to explain fully to the wider audience of the university community; it was abandoned primarily for that reason.
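Again purely for illustration, here is a minimal sketch of what unsupervised clustering looks like, using invented two-dimensional feature vectors in place of real template data, and the common k-means algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative unsupervised clustering of programs. The two-dimensional
# feature vectors are invented stand-ins for whatever template data the
# process actually collects.
rng = np.random.default_rng(0)
features = rng.random((30, 2))           # 30 hypothetical programs

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)
for label in range(5):
    members = np.where(km.labels_ == label)[0].tolist()
    print(f"Cluster {label}: programs {members}")
```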

Five Groups with Professional Judgment (FGPJ)     The general objective of prioritization is to collect programs into groups whose members are reasonably alike, i.e., share many characteristics. One can have very many groups or very few, depending upon the degree of discrimination required. After considerable discussion over six months, the AcTF decided to adopt five groups, as that was a convenient number over which to distribute 330+ programs. More groups would have provided finer discrimination in the middle of the distribution, where it makes no significant difference to resource allocation decisions; those decisions happen near the ends. Fewer groups would have lumped too many programs together at the edges, where we wished to have greater discrimination. Five groups is thus a compromise between the most basic number of groups (three: more resources, stay the same, fewer resources) and a multiplicity of groups that would mostly subdivide the ‘stay the same’ group. As we had the flexibility to decide, we chose a minimum distribution of 15% of programs into each group, which leaves 25% of programs to fall as they may. This constitutes our adopted grouping model, and the short sketch below checks its arithmetic.
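Five groups at a guaranteed 15% each commits 75% of programs, leaving 25% to professional judgment; the 330-program figure is the approximate count cited above:

```python
# Checking the arithmetic of the adopted grouping model: five groups, each
# guaranteed at least 15% of programs, with the remainder placed at the
# task force's discretion.
n_programs = 330
n_groups = 5
min_share = 0.15

min_per_group = round(min_share * n_programs)     # about 50 programs
committed = n_groups * min_share                  # 0.75 of all programs
discretionary = (1 - committed) * n_programs      # about 82 programs

print(f"at least {min_per_group} programs per group; "
      f"about {discretionary:.0f} placed by professional judgment")
```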

Future iterations of the process may have rather different goals, such as finer discrimination of programs, especially in the ‘stay the same’ groups. In that case, more groups can be used, with different distributions. That decision will be made to suit the circumstances and needs of that time. For this, the first iteration, the selected grouping model seems the best all-round choice.

The connection model to be used is based on assessment by the AcTF as a whole of every program against 10 criteria, leading to a professional judgment about the category to which each program is best suited. While the criteria are weighted, these weights are a guide to the importance to be given to each criterion in the overall professional judgment, not weights to be applied to numerical scores for each criterion.

It could be argued that numerical scores could be given for each criterion and then combined according to the weights, removing the need for professional judgment in the process. In fact, all this does is shift the professional judgment down to the criteria-scoring level, while giving it a false patina of deterministic mathematical calculation. Creating a rubric for scoring the criteria, so as to limit the input of professional judgment, is a similarly false effort, as professional judgment is used to determine the rubric in the first place. The AcTF feels that the judgment of experienced professional faculty is best applied at the highest and most general level of program assessment, allowing the AcTF to look at each program holistically, rather than being forced to consider each facet in complete isolation. This places reliance upon professional judgment at a very high level, but that is why the AcTF was selected from experienced and dedicated faculty. To remove professional judgment from the process and make it a simple counting exercise would mean that it could be done just as easily by a group of junior administrative staff, and there would be no consideration of the synergistic interactions among the various facets of each program.

An important point to recognize is that while there are five groups to be used, the AcTF is trying hard to avoid a linear interpretation of these groups. We suspect that we will see groups that differ on several diverse criteria while being homogeneous within each group, and that this multi-dimensional separation will not readily allow a linear interpretation. While it is easy to place five groups into a strictly linear arrangement, as we do with the A to F grading scheme for coursework, it is not the only possible arrangement. Five groups can be arranged in a circle or a pentagram, each of which has different meanings and overtones, even while the groups remain in exactly the same relative positions. While there will be one group that can be interpreted as a ‘winner’ (the one getting access to additional resources through PBAC), and another that can be interpreted as a ‘loser’ (the one to be subject to further consideration), even this interpretation is incomplete, and the arrangement of the other groups may not be anything like linear.

(It should also be noted that getting into the ‘winner’ group does not guarantee success: it may lead to fighting within the program over the additional resources, an inability to operate effectively, and so a decline in the program. Meanwhile, a program in the ‘loser’ group may find itself able to restructure and reinvent itself in ways it could not in normal circumstances, and so become a long-term winner. Nothing is guaranteed from the outcomes of the prioritization process, especially winning or losing in the long term, and PBAC is not the only source of funding open to programs. The AcTF will provide information about UAA’s academic programs to the Provost, to each program, and to the UAA community. That is as far as our mandate goes. Beyond that point, it is a completely different game with different players and very different rules.)

Discussion

While this choice of grouping and connection models may make it seem like we have not come very far in six months, the AcTF has covered nearly all of the grouping and connection models that are reasonably available, straightforward to implement, and widely comprehensible. Remember also that the Dickeson Model isn’t really a pair of models, just general guidance. We have tried for something reasonably definite that could be explained to a wide audience. We avoided establishing a connection model that might force the data into preconceived patterns, rather than reflecting how the data actually fall. And we have focused on a connection model that takes maximum advantage of the experienced and professional group of faculty on the AcTF, rather than passing everything off to a mathematical approach based on simplified scoring and aggregation of scores by simple weights.

Professional judgment may not seem like an ‘objective’ system, in the way a purely mathematical system may seem, but such a mathematical system simply hides the judgment behind a fog of numbers. Professional judgment, being based on personal experience and knowledge, is necessarily subjective, but ‘subjective’ does not mean biased or unfair. Professional judgment offers the best way to ensure that the full diversity of programs at the university is acknowledged and judged from the viewpoint of people who work in academic programs at UAA, rather than outsiders. These people are our colleagues, people who have been through many of the problems, issues and tribulations that we all face. They represent and advocate for no program, but bring their professional expertise to the task of looking at all of UAA’s programs. They are every program’s best assurance of a fair, knowledgeable and understanding assessment, and the combination of eighteen diverse people working as a team strengthens this.

The AcTF intends to run a number of the abandoned models in the background as the primary grouping process unfolds. This will be done as a comparison check on the Five Groups with Professional Judgment (FGPJ) model we have adopted: do we get roughly similar results, regardless of the model used, particularly in the more critical groups? A sketch of such a check appears below. The AcTF feels that it is important that we assess our own process, especially since there is significant concern about that process in the UAA community. The AcTF reserves the right to adopt a different grouping model for the final report, e.g., fewer or more groups and a different distribution, should the data show that a different approach is clearly better, but we won’t know that without parallel analysis of the process and the data.
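As a hypothetical illustration of what such a comparison check could look like (all placements invented), one simple measure is the rate of agreement between two models’ group assignments, overall and in the end groups:

```python
# Hypothetical cross-check between FGPJ placements and one of the abandoned
# models run in the background: how often do the two agree, overall and in
# the critical end groups? All program placements here are invented.
fgpj   = ["A", "B", "E", "C", "A", "D", "E", "B", "C", "E"]
backup = ["A", "B", "E", "B", "A", "D", "D", "B", "C", "E"]

pairs = list(zip(fgpj, backup))
overall = sum(f == b for f, b in pairs) / len(pairs)

ends = [(f, b) for f, b in pairs if f in ("A", "E") or b in ("A", "E")]
end_agreement = sum(f == b for f, b in ends) / len(ends)

print(f"overall agreement: {overall:.0%}; end-group agreement: {end_agreement:.0%}")
```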

Parallel analysis would be done as a means of analyzing how the data actually fall, both as a guide to the current process and as a guide to future iterations. We intend to look at how effectively the different criteria influence final placement: are some criteria less helpful than others, and did we miss some important criteria? Are there criteria whose outcomes cannot be differentiated from others, i.e., that are very highly correlated? Are there apparent groupings of programs that, while not affecting the current outcomes of prioritization, tell us something about the nature of programs at UAA that was not understood before? Would such groupings cause us to rethink aspects of the prioritization process itself?
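A minimal sketch of one such analysis, with invented assessment values standing in for real criterion data, flags criterion pairs so highly correlated that one adds little beyond the other:

```python
import numpy as np

# Illustrative sketch of the planned parallel analysis: given per-program
# assessments on each criterion (all values invented), flag criterion pairs
# so highly correlated that one adds little independent signal.
rng = np.random.default_rng(1)
scores = rng.random((330, 10))             # 330 programs x 10 criteria
scores[:, 9] = 0.9 * scores[:, 8] + 0.05   # plant one nearly redundant pair

corr = np.corrcoef(scores, rowvar=False)   # 10 x 10 correlation matrix
rows, cols = np.triu_indices(10, k=1)
redundant = [(int(i), int(j), round(float(corr[i, j]), 2))
             for i, j in zip(rows, cols) if corr[i, j] > 0.9]
print("highly correlated criterion pairs:", redundant)
```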

Conclusions

The AcTF has explored the strengths and weaknesses of the original model suggested to us (the so-called Dickeson Model) and, using it as a starting point, has explored a range of options both for the creation of groups of programs and for the means of placing programs into those groups. The emphasis has been on flexibility and meaningfulness, as well as comprehensibility across the wide university community. We have defined the grouping and connection models that we intend to use for this process, while reserving the right to alter them if the data suggest a better solution.

We have tried to use the strengths we possess, which include a diverse group of experienced faculty from across the university, and the professional expertise of those faculty. We have discussed the various options at considerable length, and while many different models were advanced, we have collectively developed and adopted our own FGPJ Model as being workable, comprehensible and meaningful for the process.

3 comments on “Models Considered by the Academic Task Force”
  1. I have decided to re-post this comment here as it is a more appropriate forum. It was originally posted in response to the generally negative discussion surrounding the effort at program prioritization. I made one small factual correction: I have known Bear Baker for 11 years, not 6.

    Hi All,
    I have resisted commenting until now, but it is time for a counter-point to the discussion that has been going on. I do not agree with the criticism of the program prioritization effort or the action that is being taken to delay or prevent it. My thoughts are as follows:

    The prevailing tone of the conversation to this point is one of fear and defensiveness. That seems odd, since this is an opportunity for all of us to toot our own horns and inform the administration of the value added by our respective programs. The opportunity comes at a critical point in time: if we accept this task and do it well, we will equip the chancellor and the provost with exactly the information (ammunition?) they need to make a compelling argument to the regents and the president to fully fund UAA.

    I understand that there are faculty who are critical of the instrument proposed for use as a template for program prioritization. What seems to be missing from the debate is the realization that the institutions that made all the cuts after using this tool were in deep financial trouble before prioritization even started. Blaming the template for the cuts made at other institutions is like blaming a saw for cutting wood to the wrong length. We are not in the same position. I believe that the provost and chancellor are attempting to be proactive and prevent the very thing so many seem afraid of: the dreaded program cut.

    How long has it been since any of our programs were assessed relative to the rest, to see which are maintaining their relevance to the mission of the university and which are not? Perhaps my memory is faulty, but I do not recall any previous prioritization effort since I came here in 1994. Unless education is unlike every other human endeavor, the environment changes, technology changes, demand for skill sets changes, and as a result some programs lose their relevance and, yes, priorities change. This exercise will provide the information necessary to identify programs that need redirection, additional resources, or revision; some may even benefit from being discontinued. The idea that once a program is started it will last forever is not realistic, because we live in a dynamic environment and things change.

    Another thing to consider is this: in the past 20 years we have seen numerous “across the board” cuts. That seems fair, since it spreads the pain across the entire institution. In reality it harms growing programs that need funding to support the demand for them. It is far better to make reductions in targeted areas than to cripple all programs. The across-the-board method of reducing budgets is a sure way to slide slowly into mediocrity.

    Finally, a word about the chancellor and the provost. I know both of these men quite well professionally. I served as associate dean for academics and as department chair under Chancellor Case for four years when he was dean of CBPP. His mantra is “service before self,” and he exemplifies it. I have known Bear Baker for over 11 years, four of them while he was dean of CBPP. He, too, is driven by the desire to make a positive contribution and to serve. Neither of these men is serving in his current capacity as a stepping stone to the next higher academic position, as so many of our past leaders have. Both are men of unimpeachable integrity with the best interests of this institution at heart.

    Stonewalling or delaying the program prioritization effort because of ungrounded suspicions will deny the chancellor and provost the information they need to make a rational, data-driven appeal for the support that UAA needs and deserves. If that is what you want to do, that is your choice. As for me, I am rolling up my sleeves and doing whatever I can to get the template completed accurately, fully and quickly. The legislature is not interested in increasing the university budget, as indicated by the letter circulated last week over the signatures of six key legislators. If we don’t equip our chancellor and provost with the data to present compelling arguments in support of funding our institution, we will deserve what we get (or don’t get).

    Sincerest Regards,

    Frank Jeffries, Ph.D.
    Professor
    UAA College of Business and Public Policy
    (907) 786-4162

  2. N.Bhattacharyya says:

    It is the same old wine in a new bottle. It is the same five quintiles of Dickeson, now called FGPJ. That P should stand for Political, not Professional. How can you rank-order different branches of knowledge? Indeed, the only academic study of prioritization I know of finds that the programs that are hit are those where the decision makers think they can get away with it.
    See Peter D. Eckel, “Decision Rules Used in Academic Program Closure: Where the Rubber Meets the Road,” The Journal of Higher Education, Vol. 73, No. 2 (Mar.–Apr. 2002), pp. 237–262. Published by Ohio State University Press. Stable URL: http://www.jstor.org/stable/1558411

    Abstract: By what criteria do institutions decide which academic programs to terminate? This study adopts a decision/action rationality framework to explore the criteria used to close programs at four universities. Findings suggest that decisions are based upon criteria other than cost, quality, and centrality and that process leads to criteria generation.

    I quote from this paper:
    “This study suggests that the majority of decision criteria identified are not used to select programs for closure. Rather, institutional decision makers use alternative criteria, ones that lead to action. Simply having stated criteria, and possibly a process to develop those criteria, may be more important to the discontinuance process than serving a utilitarian choice purpose. Stated criteria most likely fulfill a symbolic role needed to generate commitment and action.

    Because decisions are not based on cost, centrality, or quality, decision makers might reconsider readily adopting a strategy of program closure in the first place. By closing the programs “they could get away with,” decision makers neither meet their goals of saving money, improving quality, or streamlining focus, nor do they follow efficient processes. Closure processes might cause pain, disruption, and hard feelings for possibly little actual return. Decision makers thus are caught between reaching their intended goals – which in the case of program discontinuance tend to be reducing costs, enhancing quality, or realigning institutional strategy – and closing politically vulnerable programs that may do little to increase quality, save money, or create new strategy.” (pages 257–258)

  3. Tracey Burke says:

    I am also on the AcTF. I support everything Bill has said as accurate; however, I would highlight other aspects of our discussions. Perhaps Bill’s and my approaches to reconsidering the quintiles reflect our professional and disciplinary backgrounds. (Note to those worried about the AcTF not having sufficient representation: while we are not “representing” any programs, insofar as we are not advocating for or against any, we do bring different ways of thinking that I believe encompass the many approaches of UAA faculty.)
    I originally favored the quintiles – five equally populated categories – as a fair compromise between simple and meaningful. Our task was to evaluate programs comparatively: not simply to rate the intrinsic value or quality of any program in isolation, but to identify which were most important, most central, to who we (UAA) think we are and who we want to become. The Further Review quintile was never meant as a statement that “20% of our programs are lousy,” but that “these 20% are relatively less important, and freeing up some of these resources may allow us to grow other, more central programs.” The Further Review quintile has caused a lot of angst, not only because freeing up resources probably means reducing or eliminating some programs, but because the forced 20% seemed harsh and artificial.
    However, it’s really the Transformation category that made the forced 20% artificial. This is the category that keeps the categories from being linear; this is the wildcard. This is the one I think of as “these are programs that we must have – and that we must dramatically re-imagine.” One can’t have a university without basket weaving, right? Yet, if we on the AcTF discern that UAA has a weak basket-weaving program, we would place it here so it can get the attention (devoted thinking, restructuring, resources?) it needs. I quite like this category. Yet to say a priori that 20% of programs belong here? Hmm. Nonetheless, the quintile system was good enough in other ways that I was okay with 20% in Transformation.
    Now we’ve moved away from forced equal distribution, though after much discussion we chose to retain a forced minimal distribution (15% of programs per category). The prioritization endeavor is still fundamentally about identifying what is most central, so that if and when it becomes necessary, the provost and PBAC can free up resources – and we need something in Further Review for that. An entirely free distribution would reduce the comparative function of the endeavor; Further Review would wind up holding only a very few “lousy programs”: much more stigma for those programs, and much less flexibility for those who need to make the resource numbers come out right. With forced minimal distribution, every category (no longer quintiles) will still be populated, but the AcTF has discretion over the placement of 25% of programs.
    Given the continuing need to evaluate the relative importance of our many programs, I think this is fair.

