One of the big lessons from the fall was that UAA just doesn’t have a good handle on a lot of what’s going on. The initial data for the academic templates were a mess, one that took us by surprise. We as faculty put considerable effort into thinking about what would be useful to know in order to compare academic programs across the university. Yes, we’re comparing apples to oranges to kumquats to mangos. How do you reasonably compare everything in the fruit basket? Simply letting programs (fruits) choose the data that benefit them – and that they happen to have available – doesn’t make for good comparison. This program tells us how tasty it is, this program how nutritious it is, this program how appealing it is to children, this program that it’s sustainably grown. All good stuff, and we need to know all those things about 300+ programs.
We didn’t know most of those things for most programs, and we couldn’t easily access existing data sets to tell us. So some of the reason for the data mess and delay was that we were asking for new information, and there was a lag in being able to provide it. There were other reasons for the problems, and I don’t pretend to know all the details. One disturbing reason is that some data were just wrong; items had been entered into Banner incorrectly. Some faculty were listed in the wrong department or college, and some financial coding was sloppy, for example; those erroneous inputs affected some reports, which in turn affected individual program templates. One benefit to the university is that our bookkeeping should be better because of this process.
Better data will help us make better decisions, and the data are getting better. Some of the input-level messes have been cleaned up, and the reconfigured data team (now led by a sub-committee of the AcTF) is asking questions of the data somewhat differently than it did a few months ago. The sub-committee reports that the new data look good, but it is taking extra pains to check them. At this writing, the sub-committee members have signed off on the data for the programs they are directly involved in, those being the ones they know most about. Now volunteers from some programs not directly represented on the task force are checking their data, and the rest of the task force members will review the data for programs they are familiar with. Once the team is confident of the data quality, everyone’s reports can be run and the data officially released.
All of this will take some time, and as a task force, we are intentionally refraining from announcing new deadlines for template submission until confidence in the data is solid. Of course, not all sections of the template include data, and some templates will have no centrally provided data at all (e.g., sponsored programs).
People writing the templates are urged to think hard about their “storylines” and to start drafting what they can. We are committed to giving programs adequate time to complete the templates once the data are released; we are also committed to having the review complete and the report written by June 30. We know it is frustrating to be given non-answers about data and deadlines, but we want the next answer to be the right answer. “The final answer.”