SPA Process
My Mentoring Experience and "Expert" Opinion about the UC Davis Student Progress Assessment (SPA) Process
Disclaimer: The content of this page is 100% my own, written in my official capacity as a professor at UC Davis and thereby afforded protection under APM 010 and APM 015 as part of my correlative duties of professional care (APM 010) and my policy-given right to communicate about institutional matters (APM 015). It ought also to be protected as free speech under the US Constitution. In no way do I represent that this content reflects anyone else's viewpoint, nor that of UC Davis or any unit within or beyond it, or any other institution. I do believe this viewpoint counts as "expert" scholarship, because it is rooted in education on the subject matter, analysis of the scientific literature on the subject matter, decades of professional experience with this subject, and scholarly activity on my part in writing this essay, all within the scope of my terminal degree of Doctor of Philosophy, commonly called a PhD, and within the scope of my work duties as an I&R and AES professor at UC Davis.
Onward!
In recent years, UC Davis instituted a new, formal framework for graduate student performance assessment called Student Progress Assessment (SPA). This is done annually in spring quarter. SPA is a form of performance evaluation, and based on the content of what it involves, it fits more in the tradition of employee evaluation than in the tradition of student learning assessment.
I have been a professor at UC Davis for >25 years. In that time, I have not only served as a "major professor" mentoring many graduate students, but I have also served as a graduate group chair for 5 years and as a graduate advisor across 2 graduate groups for even longer. I also ran an undergraduate major for 11 years. Thus, in all of these formal UC capacities, in addition to many more formal and informal capacities over a lifetime of work, I have extensive experience with supervising and mentoring people as well as with performance evaluation.
After my first year as a professor, I studied the topic of performance evaluations academically by reading a textbook on the topic and considering best practices in industries that use such mechanisms. Based on everything I read and learned, I wrote up documents that seemed to capture what those proponents of this method said works. Of course, at the time, UCD had nothing like that going on, so I went ahead with implementation at my own initiative to try to do the best job I could as a professor and mentor.
What I found when I implemented a high-quality performance evaluation framework was that it did not go as I had hoped. Specifically, instead of helping grad students identify ways to improve and helping them implement practices toward effective work to make scientific progress, I felt that it did not affect their behavior and meanwhile it did do something to their attitudes. Specifically, it put up a big blockage in our professional relationship as mentor-mentee, it caused students emotional distress, and it did not actually improve anything about their performance.
One of the especially ineffective strategies, which came as a surprise to me because it was and remains highly touted as beneficial, was setting "guidelines and deadlines". I taught students that guidelines were longer-term dates that they should establish and keep in mind to track their whole degree and research program, even if the exact dates were not going to be met; guidelines simply got the student thinking about how long things take to achieve, and they set shared expectations between us. Meanwhile, I taught students that deadlines were short-term, upcoming dates by which they were committing to have something done. Deadlines created immediate, actionable expectations that we both understood. I implemented this approach for a few years.
I found the systemic use of guidelines and deadlines to be a total failure and completely ineffective.
First, even when students are working hard, science just isn't the kind of "work" that can easily be scheduled out, so deadlines are often blown and guidelines don't hold up. Sometimes a person can work hard for months and in the end be farther away from the product or outcome they were working toward than when they started. Why? Because they learn through trial and error that their assumptions, hypotheses, and/or methods were wrong, and they have to start over in a new direction. Major professors aim to help students avoid such an outcome, but even for professors, science is highly uncertain if people are doing novel, bold research. That is just part of science. It is also part of education- the "freedom to fail", as they call it in the pedagogy literature. Some scholars believe people learn more through failure than success, but that must be weighed against the reality that in graduate school, a student on a research track cannot earn a degree without successfully writing an MS thesis or PhD dissertation.
Second, in my experience, people at this stage of education and life typically have a lot of things going on that are real and meaningful, but that are also "distractions" from focused academic and work progress, especially for work that is so long-term. We do not just make 100 widgets a day, with each day like the last. It takes weeks, months, even years to work through complex studies to get to the final outcome of a written document. As a result, I found no relationship between setting goals, deadlines, and the like and achieving productive outcomes. Again, I found far more harm to individual student mental health, "spirit", and progress than benefit from such oversight.
In fact, I myself abandoned the use of specific deadlines and guidelines in my own career, instead using a totally different framework for successfully accomplishing my goals, which involved simply doing the activities that excited me in the moment. By always doing what I like, I find that I am inspired to do far more work than through the punitive framework inherent in the deadline mindset. A deadline is effectively a future punishment, because one feels minimal reward for meeting it (maybe relief), but intense discomfort for missing it. When work involves so many tasks, and such complex ones, I have found that working toward what inspires me is effective.
Based on my experience at the time with trying to implement best practices with performance evaluation theory, I went to the primary scientific literature to look for papers that were not proponents of it but evaluated it objectively and independently to understand why the best practices of industry did not work for us. Recently, I went back to the literature to see if the findings still hold after >25 more years of research. I found (and still find) that, in fact, the literature has extensive evidence and conclusions that performance reviews are harmful and ineffective, if the goal is to actually help people do better. What is the specific evidence?
(1) Studies in psychology and neuroscience (e.g., by David Rock and others in the field of neuroleadership) show that formal performance reviews activate the brain’s threat response, triggering stress and defensiveness. When people feel judged, cortisol spikes, impairing memory, creativity, and decision-making—exactly what you don’t want when assessing or improving performance. As a result, employees become more focused on self-protection than growth, leading to disengagement or even sabotage of feedback.
(2) Ratings undermine motivation and learning. A meta-analysis published in the Journal of Applied Psychology found that more than one-third of performance reviews result in decreased performance. Performance reviews shift the focus from intrinsic motivation (growth, mastery) to extrinsic control (judgment, comparison)—a dynamic proven to kill motivation (Deci & Ryan, Self-Determination Theory). Wow! That’s exactly what I am talking about in my approach of focusing on inspiration rather than deadlines. Interesting to find that the science affirms my empirical experience.
(3) Performance reviews reinforce bias and inequity. Studies have shown that performance reviews often reflect implicit bias more than actual performance. Women and people of color tend to receive more vague and personality-based feedback (“you’re aggressive”) than their white male peers (“meet more key performance indicators”). This perpetuates inequity, hurts morale, and undermines retention of diverse talent.
Summing up the scholarly literature, traditional performance reviews- especially those done annually, like UCD’s SPA process- are more about risk management than real development. This final point from the literature, about the ulterior motive, is really pertinent to what is going on right now at UC Davis, in my opinion. It’s about leaders covering themselves and the institution, and seeking to place blame on others, more than anything else.
Still, we have to be equally honest that among faculty, faculty committees, and Graduate Studies, some students are not getting the advising and care they should be getting, even when they actively seek it out. We know from the literature that SPAs cannot solve that problem, so then we have to ask what can be done? I have already mentioned my method of focusing on what I like about my work, what inspires me, so that is something for the "worker" to consider. But what about from the advisor/mentor side?
Regardless of what level of oversight is at issue, the evidence shows that the best practice is frequent, informal, coaching-style feedback. This applies not just between a student and the major professor, but between the student and all levels of oversight. And this is a key problem, because the oversight framework at UCD is hierarchical in design and does not enable and support frequent, informal, coaching-style feedback from multiple levels. Faculty are instead positioned as cannon fodder to be thrown under the bus. So let us look at all three levels of oversight.
When it comes to the individual student-major professor relationship, this ideal of frequent, informal, coaching-style feedback is exactly what I aim to achieve. I work hard and really care to personalize my guidance for the people I supervise. While there are some general tendencies I see in how different students can go down a similar path in their education, and I do give advice based on those past lessons, I have also found that there is no cookbook for what works in any situation. My expertise from 25+ years of experience is knowing how to do my part in the relationship. Some students do great with deadlines and guidelines from day 1, typically because they are motivated to win adulation in a structured reward system. In such cases, students and workers will often make their own deadlines and guidelines themselves. Without that self-direction, some students will simply ignore imposed structures, while still others will get offended or go out of their way to miss them- call these the "rebels" if you like. Overall, my experience and the scientific literature align that the use of deadlines and guidelines is not a uniformly appropriate or successful strategy. It is especially bad at resolving a situation once someone is identified as having problems. The more a situation goes off the rails, the more "risk management" wants faculty to install structure, because that is not about student success; it is about legal protections and such. Student success would instead drive a solution toward helping a student find their inner love for their efforts. If no such love and inspiration can be found within, then it should not be imposed from outside. Most likely, the student should drop out and do something else with their lives, because they are not motivated and inspired to overcome the difficult challenges posed in a graduate education- and from there, work often only gets more challenging in a career. They should adapt to some other career path that does inspire them.
Instead of forced structure, the personalization I use comes in both the frequency and in content of the engagement, and is mindful of what level of engagement the supervisee/mentee wants, for their part, up to a limit. I tell folks in my group to come to me when they want and need engagement, but at a minimum, provide me with a weekly update by email for asynchronous engagement, and then let’s meet when we have things to address. I often use email to the group as a basis for offering individual zoom and in-person meetings, and then see who takes that up. If people are not coming to me, then eventually I’m going to step in and contact them for an update and possibly for a meeting.
But let's also be honest that there are major problems with graduate student advising at all levels above the student-major professor level, and these are not being addressed at UCD, in my opinion. Grad advisors are not trained, empowered, and overseen to engage with their advisees enough, in my opinion, other than reviewing coursework annually and signing forms. Students in their first few years do not have thesis committees, so they really lack a wider faculty support network. During this time, the grad advisor is critical. I will also point out, from feedback from a colleague, that when a grad advisor does take the initiative to be more involved, outside of any structure set up by UCD, some major professors may feel offended and respond negatively, feeling that the grad advisor is butting in where they don't belong. This may especially be a risk when the grad advisor is younger or earlier in their career than the major professor. This feedback resonates with me, because I started as a professor at age 27, behind the Boomer generation, and I had the experience several times of more senior colleagues expressing offense if I ever tried to point out problems or give advice- as in, how could a younger person such as myself think I have anything to offer them individually? It just wasn't accepted. Thus, grad advisors should be more empowered, and structures should be set up to support them playing a more active role with students. Eventually, when a student does have a thesis committee, grad advisors should still play a role, but faculty thesis committees can also help students out, typically on an annual basis or during active research engagement.
Grad groups as a whole, and especially chairs, typically do not meet individually with students in the program at least annually, nor do they take responsibility to know what is going on within their grad groups down to the individual level. When I was grad-group chair, I did do that, and I found it very useful for building relationships with people, which enabled us to avoid problems in the first place and then address them sooner when they arose. I also met with the staff program coordinator annually to go down the list of graduate students and discuss their individual status. When I was chair, I found that many major professors needed the grad advisor or grad-group chair to step in and direct the student in ways that the major professor was unsuccessful in achieving. Grad advisors are not really set up to serve as this intermediary, because they typically do not meet regularly with students and keep track of what the students are doing, and they don't have the aid of the program coordinator in such an effort. Grad chairs can be better informed, because they are typically interacting with a staff program coordinator almost daily, and those coordinators are often solicited by students for help when they have problems. But without doing an annual meeting with individual students, grad-group chairs, in my opinion, are allowing problems to grow and are not mindfully connecting students and faculty in difficulty with potential solutions. In mentioning solutions, I especially do not mean spamming generic emails of "resources" (often with several useless links that do not go to an actual resource a student can use), as has become all too common at UCD. No, I mean doing meaningful engagement.
On the SPA form itself, if a student is judged as having made marginal progress, then the major professor gets only 2000 characters to explain why the progress is deemed marginal. That's absurd. The motto on the University of California shield exclaims "Let There Be Light." It's not "Let there be a few lumens of light." Light means truth and knowledge. Setting a character limit on such a serious matter is ridiculous. When a student is in difficulty and performing at a marginal level, a thorough accounting is warranted. Why would there be a 2000-character limit? Most likely because administrators do not wish to actually know the full story or spend the time reading it to gain a rich understanding, which is what it would take to actually serve as a coach, per the scientific guidance on what is needed. Instead, the evidence suggests the character limit is about a "check-the-box" exercise to cover liabilities. I do not agree with that. I also think it mindfully creates a void that administrators can fill with whatever fantasy narrative they want in order to place blame on a professor for what is going on.
Similarly, when marginal progress is indicated, the major professor is directed to explain the conditions the student must fulfill to achieve satisfactory progress: "Be precise and include all required tasks and deadlines. (2000 character limit)". How is one to enumerate all of that within another 2000-character limit? It's absurd. And then here we are again, with UCD directing faculty to harm students by taking actions that the scientific literature demonstrates harm them. That's immoral.
Beyond these examples of bad practices instituted by UCD, we must consider the even higher levels of oversight beyond a graduate program or group. Based on my >25 years of experience, I have the evidence to conclude that Grad Studies is fraught with problems. Sadly, it's not safe for rank-and-file faculty at UC Davis to address and publicize such things, so I will forgo enumerating the crisis that I think the evidence shows at that level, but I will simply point it out. If you are a student, ask yourself: what has Grad Studies done to help you, whether you have sought such help or not? A lot of my graduate students have had negative experiences interacting with Grad Studies and with other "resources" beyond the program. Students have expressed to me a strong sense of being neglected and of not getting what they need from UC Davis, broadly. The SPA process does not address that problem.
In conclusion, the scientific literature on pedagogy and performance evaluation is clear and highly certain. The best practice involves frequent, informal, coaching-style feedback and helping students find their inspiration to achieve success. Approaches that can lead to negative emotions typically backfire. What is really needed is not an online form process, but a lot more effort at all levels toward genuine engagement. UCD needs to actually function like a school at all levels, more than like a business or an all-things-for-all-people framework. Processes like SPA are very poor substitutes for genuine engagement at all levels. Not only are they poor, but we know they are often harmful. If I had a choice, I would not use the SPA system, but of course it is required, so I must.
Nevertheless, I provide this informal opinion essay as my viewpoint on the situation, which I am entitled to share publicly per APM 015, which gives all UC professors the freedom to communicate freely on institutional matters. Others do not have to agree with me, and I welcome alternative viewpoints, but this is the viewpoint I hold right now based on my education, experience, and professional efforts.
-Prof. Gregory Pasternack