Pulled Pork, The Remix

We are finally, finally, in the thick of spring — the sun is out, at least some of the time, and the windows are open, at least part of the day. And the ability to stand being outside for more than ten minutes at a time has me pondering the things that sustained me through this miserable winter.

In a word: Pork, and lots of it.

One of the best things I did this winter was develop a variation on the famous ProfHacker pulled pork recipe, engineered away from barbecue and towards carnitas. In the spirit of the commons, I now want to pass this on for further experimentation and remix.

One note of not-exactly caution: I invariably eyeball the spices, so the rub is very much a YMMV thing. That said, I have yet to have this turn out anything less than awesome.

* * *


2 large or 3 medium yellow onions
6ish cloves minced/crushed garlic
coarse kosher salt
Morton & Bassett Mexican Blend spice mix
dried oregano
paprika
olive oil
5ish-lbs bone-in pork shoulder or pork shanks
2 fresh jalapenos
1 tub hot salsa (of the pico de gallo sort, usually found in the produce section)
1/2 cup chicken stock

Very coarsely chop the onions and cover the bottom of a large slow cooker with them. Mix the garlic, salt, spice mix, oregano, paprika, and olive oil, which together should form a nice thick rust-colored paste.

Wash the pork and pat dry. Rub it thoroughly on all sides with the spice mix, and place on top of the onions. (If you use a pork shoulder, place it fat side up.)

Clean and chop the jalapenos, and scatter them on top of the pork roast. Pour the tub of hot salsa over the roast, and then add the chicken stock.

Turn the slow cooker on low, and… let it do its thing, for something on the order of 10 hours. I sometimes start this early in the morning, so that it’s ready for dinner. But even more often I start it after dinner and let it cook overnight. (The house fills with the smell of amazingly good pork, which produces really interesting dreams.)

In any case, the roast should be utterly falling apart by the time it’s done. Pull it out of the slow cooker a chunk at a time, cleaning away the fat and shredding the meat.

Because we tend to use the meat over the course of several days in things like tacos (soft corn tortillas, cheese, good salsa and guacamole), I store the meat in a large container, adding a bit of the cooking liquid that the roast leaves behind to keep it moist. I also totally recommend straining the rest of the cooking liquid (and removing the thick layer of fat from it), which leaves behind a super-rich gelatinous broth excellent for doing things like cooking greens.

And that’s how I made it through the winter. That and a series of chicken roasting experiments. But that’s another post entirely.

Evolving Standards and Practices in Tenure and Promotion Reviews

The following is the text of a talk I gave last week at the University of North Texas’s Academic Leadership Workshop. I’m hoping to develop this further, and so would love any thoughts or responses.

I’m happy to be here with you today, to talk a bit about evolving standards and practices in promotion and tenure reviews. Or, perhaps, about the need to place pressure on those standards and practices in order to get them to evolve. Change comes slowly to the academy, and often for good reason, but we find ourselves at a moment in which uneven development has become a bit of a problem. Some faculty practices with respect to scholarly work have in recent years changed faster than have the ways that work gets evaluated. If we don’t make a considered effort to catch our review processes up to our research and communication practices, we run the risk of stifling innovation in the places we need it most.

I want, however, to start by noting that most of what I am proposing here is intended to open a series of issues for discussion, rather than presenting a set of answers to the problems. Every university, every field, indeed, every tenure case brings different needs and expectations to the review process; it’s only in teasing out those needs and expectations that you can begin to craft a set of guidelines that will adequately represent your campus’s values and yet be supple enough to continue to represent those values in the years to come.

So first, a bit of recent history, before I move to the kinds of issues I believe we need to be considering with respect to tenure processes. In 2002, then-president Stephen Greenblatt sent a letter to the 30,000 members of the Modern Language Association, alerting them to a coming crisis in tenure review processes. The failing fiscal model under which university presses operate, he noted, was resulting in the publication of fewer and fewer scholarly monographs, a reduction being felt most acutely in the area of first books; as a result, work that was of perfectly high quality but that did not present an obvious market value was in danger of not finding a publisher. Unless departments were willing to think differently about their review processes, recognizing the “systemic” obstacles facing all of scholarly communication, Greenblatt argued,

people who have spent years of professional training — our students, our colleagues — are at risk. Their careers are in jeopardy, and higher education stands to lose, or at least severely to damage, a generation of young scholars.

In considering what might be done, Greenblatt noted that

books are not the only way of judging scholarly achievement. Should our departments continue to insist that only books and more books will do? We could try to persuade departments and universities to change their expectations for tenure reviews: after all, these expectations are, for the most part, set by us and not by administrators. The book has only fairly recently emerged as the sine qua non and even now is not uniformly the requirement in all academic fields. We could rethink what we need to conduct responsible evaluations of junior faculty members.

There are some things that might bring one up short here: for instance, Greenblatt’s acknowledgement that the book is not “uniformly the requirement in all academic fields” of course masks the degree to which the book-based fields are outliers in the academy today. But nonetheless, those fields’ reliance on the book as the gold standard for tenure was becoming, for a host of reasons, problematic, and thus Greenblatt urged departments to reconsider their review practices, as they

can no longer routinely expect that the task of scholarly evaluation will be undertaken by the readers for university presses and that a published book will be the essential stamp of a young scholar’s authenticity and promise.

Departments, in other words, must step forward and establish means of determining for themselves where appropriate evidence of a young scholar’s “authenticity and promise” lies.

In the years following this letter, the MLA created a task force charged with examining the current state of tenure standards and practices and making recommendations for their future. That task force issued its final report in December 2006, presenting a list of 20 recommendations, supported by nearly 60 pages of data and analysis. Their recommendations included things like:

The profession as a whole should develop a more capacious conception of scholarship by rethinking the dominance of the monograph, promoting the scholarly essay, establishing multiple pathways to tenure, and using scholarly portfolios.


Departments and institutions should recognize the legitimacy of scholarship produced in new media, whether by individuals or in collaboration, and create procedures for evaluating these forms of scholarship.


Departments should conduct an in-depth evaluation of candidates’ dossiers for tenure and promotion at the departmental level. Presses or outside referees should not be the main arbiters in tenure cases.


Departments and institutions should facilitate collaboration among scholars and evaluate it fairly.

None of these recommendations seem terribly controversial, and yet here we find ourselves. Over seven years have passed since the task force report; nearly twelve have passed since the Greenblatt letter. And by and large, standards and practices in tenure and promotion reviews have changed but little. By and large, the book remains the gold standard in what are still referred to as “book-based fields,” and departments still find themselves stymied when it comes to evaluating digital work, collaborative work, public work, and the like.

It would not be unreasonable to ask whether Stephen Greenblatt was simply being a bit alarmist in the sense he conveyed of an impending crisis for the faculty. We do not appear to be surrounded by a lost generation of scholars whose prospects were damaged by our continued adherence to the book standard in the face of university press cutbacks — and yet, given that faculty who are not awarded tenure leave our midst, and that we are not haunted by the lingering specters of unpublished books, it’s possible that the damage is nonetheless being inflicted, just in a way that we are able to keep outside our awareness.

One clear, if anecdotal, effect of our refusal to change, however, may well be precisely how much we have stayed the same. I have been told by several junior faculty members, and have heard the same thing secondhand from many others, about having been counseled by senior colleagues to put aside their more experimental projects and focus on the traditional monograph until tenure is assured. The counselor is generally well-intentioned, wanting to help his or her junior colleague have as frictionless an experience of the review process as possible. But the outcome, too often, is that the junior faculty member is either made risk-averse or, put more positively, ushered into the more reliable reward channels of the way things are usually done, and as a result never returns to the transformative work originally imagined. And that disciplinary lockdown then gets transmitted to the next round of junior colleagues. And so we continue, as a field, to rely on the monograph as the gold standard for tenure, and we continue to find ourselves baffled by the prospect of evaluating anything else.

This is not to say, however, that there has not been change — even in the most hidebound of book-based fields — over the last twelve years. Scholars today are communicating with one another and making their work public in a range of ways that were only beginning to flicker into being in 2002. Many faculty maintain rich scholarly blogs, either on their own or as part of larger collectives, through which they are publishing their work; others are working on a range of small- and large-scale corpus-building, data-mining, mapping, and visualization projects, all of which seek to present the results of scholarly research and engagement in rich interactive formats. Projects in a wide range of digitally-inflected fields across the humanities, sciences, and social sciences are both using and developing a host of new methodologies, both for research and for the communication of the results of that research. And these projects are not just transforming their fields, but also creating a great deal of interest in scholarly work amongst the broader public.

And yet I visited a university last fall whose form for the annual professional activities report asks faculty members to list their (1) book publications, (2) peer-reviewed journal articles, (3) major conference presentations, and so on, finally getting to “web-based projects” somewhere just above volunteer service in the community. It’s just a form, of course, but in that form is inscribed the hierarchy of what we value, as evidenced by what we actually reward in our evaluation and merit review processes. And if we are going to take web-based work as seriously as traditionally published work, we need to manifest that in those reward systems.

However, I do want to be clear about something: What I am arguing here today is not that digital projects of whatever variety should be treated as the equivalent of a book or a journal article. In fact, attempting to draw those equivalences can get us into trouble, as digital work demands its own medium-specific modes of assessment. Digital projects are often radically open, both in their mode of publication and their mode of peer review; they are often process-oriented, without a clear moment of “publication” or a clear completion date; they are very frequently code-based, and often non-linear, in ways that require that they be experienced rather than simply read. And too often review processes eliminate that possibility; not only do our forms rank web-based work as unimportant, but our processes require that such work be printed out and stuck in a binder. This is clearly counter-productive; we cannot continue evaluating new kinds of work as if it has been produced and can be read just like the print-based work we’re accustomed to.

But what I’m after here is not a new set of equally rigid processes that better accommodate the particularity of the digital. Rather, our review processes need to develop a new kind of flexibility — in no small part because developing a set of criteria that perfectly deals with all of today’s forms of scholarly communication will in no way prepare you for tomorrow, or next year. The fact of the matter is that scholarly communication itself is in a period of profound change, profound enough that change itself is the only certainty. And so we need guidelines that will enable the faculty and the administration together to locate the core values that we share and to establish processes that will take each case on its own terms, while nonetheless proceeding in ways that can be fairly applied to all cases.

In considering such a transformation, I believe that we need to begin by thinking differently about what it is we’re doing in the tenure review process in the first place. We have long treated the tenure review, and to a lesser extent the review for promotion to full, as a threshold exercise: an assessment of whether the candidate has done enough to qualify. The result, I believe, is burnout and disgruntlement in the associate rank. There’s a reason, after all, why The Onion found this funny, and it’s not just about the privileges of lifetime tenure producing entitled slackers.


Assistant professors run the pre-tenure period as a race and, making it over the final hurdle, too often collapse, finding themselves exhausted, without focus or direction, depressed to discover that what is ahead of them is only more of the same. The problem is not the height of the hurdles or the length of the track; it’s the notion that the pre-tenure period should be thought of as a race at all, something with a finish line at which one will either have won or lost, but will in any case be done. I believe that we can find a better means of supporting and assessing the careers of junior faculty if we start by approaching the tenure review in a different way entirely, thinking of it not as a threshold exercise but instead as a milestone, a moment of checking in with the progress of a much longer, more sustained and sustainable career.

Here’s the thing: We hire candidates with promise, expecting that their careers will be productive over the long term, that they will engage with their material and their colleagues, and that they will come to some kind of prominence in their fields. The tenure review, at the end of the first six years of those careers, should ideally not be a moment of determining whether those candidates have thus far done X quantity of work (that is, that they have done enough to earn tenure, and can safely rest), but rather of asking whether the promise with which those candidates arrived is beginning to bear out. Let me say that again: beginning to bear out. The question we are asking, at tenure, should not be whether the full potential of a candidate has been achieved, but whether what has been done to this early point in a career gives us sufficient confidence in what will happen over the long haul that we want the candidate to remain a colleague for as long as possible. In order to figure that out, the questions we ask about the work itself should not — or at least should not only — be about its quantity; rather, we should focus on its quality. And there are a couple of different ways of thinking about and assessing that quality: first, through the careful evaluation by experts in the candidate’s field, and second, through an exploration of the evidence of the impact the candidate’s work is having in his or her field.

Such a focus on impact might help us more fairly evaluate the new kinds of digital projects that many scholars today are engaged in. But it might also encourage us to reassess a range of forms of work that have not been adequately credited in recent years. In fact, I would argue that the reforms we need in our tenure review processes are not only about accommodating the digital. We also need to acknowledge and properly value forms of intellectual labor that have long been done by the faculty but that for whatever reason have gone undervalued. In my own area of the humanities, such work includes translation, or the production of scholarly editions, or the editing of scholarly journals. These are forms of work that have long been part of academic production, but that have by and large been treated as “service to the field.” And yet — just to pick up one of those examples — what more powerful position in shaping the direction of a field is there than that of the journal editor? The impact of such an editor across his or her term is likely to have a far greater and far longer-lasting influence on his or her area of study than any monograph might produce — and yet only the monograph, in most institutions, will get you promoted.

This is just one of the kinds of problems that we need to confront. But again, I want to emphasize that it’s not enough simply to add “digital work” or “journal editing” to the list of kinds of work that we accept for tenure and promotion, not least because the impulse then is to apply current standards to those objects: are there kinds of journals that “count,” and kinds that don’t? Does the journal have to have a specified impact factor? I’m sure you can imagine more such questions — questions that I’m convinced lead us in the wrong directions, toward increasing rigidity rather than flexibility. Instead, I want to head off in a different direction. In the rest of my time this morning, I want to sketch out a few of the ways that our thinking about the review process might change in order to help produce the results we’re actually aiming for. The new ways of thinking that I’m urging today may require us to give up our reliance on some relatively easy, objective, quantitative measures, in favor of seeking out more complex, more subjective qualitative judgments — but I would suggest that these kinds of complex judgments about research in our fields are the core of our job as scholars, and that we have a particular ethical obligation to take our responsibility for such judgments seriously when they determine the future of our colleagues’ careers. This different direction will also require us to think as flexibly as we can about how our practices should not only change now, but continue to evolve as the work that junior scholars produce changes.

So, I want to float a number of principles meant to instigate some new ways of thinking about the tenure standards and processes of the future. Though these are pitched as imperatives, they are not specific practices, but rather considerations for the creation of practices. First:

(1) Do not let “but we don’t know how to evaluate this kind of work” stand as a reason not to evaluate it.

Many disciplinary organizations have been hard at work developing criteria for evaluating new kinds of scholarly work. For instance, the MLA’s Committee on Information Technology developed such a set of best practices for the evaluation of digital work in MLA fields back in 2000, and has recently updated those guidelines. The CIT has further created an evaluation wiki, which includes information such as a breakdown of types of digital work. And, perhaps most importantly, the CIT has led a series of workshops before the annual convention designed to give department and campus leaders direct experience of the kinds of questions that need to be asked about digital work, and the ways that such evaluation might proceed. In conjunction with that workshop, the CIT has produced a toolkit. And the MLA also has guidelines for the evaluation of translations, and guidelines for the evaluation of scholarly editions, among other such guidelines.

And this is just the MLA. Other scholarly organizations have done similar work on the sorts of nontraditional projects that are appearing in their own fields. And several universities have developed their own policies for how such work should be evaluated, including Emory University and the University of Nebraska at Lincoln.

So there are excellent criteria out there that can be used in evaluating many non-standard kinds of scholarly work. Review bodies, from the department level up to the university level, must familiarize themselves with those criteria and put them to use in their evaluations.

(2) Support evaluator learning.

Despite the existence of these excellent criteria for evaluating new work, however, many faculty, especially those who have long worked in exclusively traditional forms, need support in beginning to read, interpret, and engage with digital projects and other new forms of scholarly project. This need is of course what led the MLA’s Committee on Information Technology to hold its pre-convention workshops; similar kinds of workshops have been held at the summer seminars of the Association of Departments of English and the Association of Departments of Foreign Languages, and at NEH-funded summer workshops. On the local level, you might enlist the scholars on your campus who are doing digital work or other forms of nontraditional scholarship in leading similar workshops for the faculty and administrators who play key roles in the tenure review process.

(3) Engage with the work on its own terms, and in its own medium.

Supporting evaluators in the process of learning how to engage with new kinds of work is crucial precisely because the work under review must be dealt with as it is, as itself. If I could wave my magic wand and eliminate one bit of practice in tenure and promotion evaluations, it would probably be the binder. More or less every year I hear reports from scholars whose work is web-based but who have been asked to print out and three-hole-punch that work in order to have it considered as part of their dossiers. Needless to say, eliminating the interaction involved in web-based projects undermines the very thing that makes them work. As the MLA guidelines frame it, “respect medium specificity” — engage with new work in the ways its form requires.

(4) Dance with the one you brought.

In the same way that the work demands to be dealt with on its own terms, it’s crucial that tenure review processes engage with the candidates we’ve actually hired, rather than trying to transform them into someone else. While it’s tempting to advise junior scholars to take the safer road to tenure by adhering to traditional standards and practices in their work, such advice runs the risk of derailing genuinely transformative projects. Particularly when candidates have been hired into positions focused on new forms of research and teaching, or when they have been hired because of the exciting new paths they’re creating, those candidates must be supported in their experimentation. In creating that support, it’s particularly important to guard against doubling the workload on the candidate by requiring them both to complete the project and to publish about the project, or worse, to complete the project and do traditional work as well. This is a recipe for exhaustion and frustration; candidates should be encouraged to focus on the forms of their work that present the greatest promise for impact in their fields.

(5) Prepare and support junior faculty as they “mentor up.”

My emphasis on supporting the candidates that you have doesn’t mean those candidates won’t need to persuade their senior colleagues of the importance of their work. Scholars working in innovative modes and formats must be able to articulate the reasons for and the significance of their work to a range of traditional audiences — and not least, their own campus mentors. In theory, at least, this is the case for all scholars; it’s the purpose that the “personal statement” in the tenure dossier is meant to serve. For scholars working in non-traditional formats, however, there is additional need to explain the work to others, and to give them the context for understanding it. That process cannot begin with, but rather must culminate in, the personal statement. Throughout the pre-tenure period, candidates should be given opportunities to present their work to their colleagues, such that they have lots of experience explaining their work — and ample responses to their work — by the time the tenure review begins. They also need champions — mentors who, having examined the work and come to understand its value, will help them continue to “mentor up” by arguing on behalf of that work amongst their colleagues.

(6) Use field-appropriate metrics.

Every field has its own ways of measuring impact, and the measures used in one field will not automatically translate to another. A colleague of mine whose PhD is in literature, and who began her career as a digital humanist, now holds a position that is half situated in an English department and half in an information science department. Her information science colleagues, in beginning her tenure review, calculated her h-index — and it was abysmal. The good news is that her colleagues then went on to calculate the h-indexes of the top figures in the digital humanities, and discovered that they were all equally terrible. Metrics like the h-index, or citation counts, or impact factors simply do not apply across all fields. It’s absolutely necessary that we recognize the distinctive measures of impact used in specific fields and assess work in those fields accordingly.
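For readers unfamiliar with the metric in that anecdote: an author’s h-index is the largest number h such that h of their publications have been cited at least h times each. A minimal sketch in Python (the function name and sample data are hypothetical, for illustration only):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([2, 1, 0]))         # 1
```

The arithmetic makes the problem plain: the index counts only the citations a given database can see, and in fields whose conversations happen in monographs, edited collections, and book reviews rather than indexed journals, most of the actual engagement with a scholar’s work is invisible to it.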

(7) Maybe be a little suspicious of counting as an evaluation method.

We tend to like numbers in our assessment processes. They feel concrete and objective, and some of them are demonstrably bigger than others. The problem is that we tend only to count those things that are countable, and too often, if it can’t be counted, it doesn’t count. But as qualitative social scientists — much less humanists — would insist, there is an enormous range of significant data that cannot be captured or understood quantitatively. Citation counts, for instance: such metrics can tell us how often an article has been referred to in the subsequent literature, but they can’t tell us whether the article is being praised or buried through those citations, whether it’s being built upon or whether it’s being debunked. So while I’m glad that problematic metrics like journal impact factor are gradually being replaced with a more sophisticated range of article-level metrics, I still want us to be a bit cautious about how we use those numbers. This includes web-based metrics: hits and downloads can be really affirming for scholars, but they don’t necessarily indicate how closely the work is being attended to, and they aren’t comparable across fields and subfields of different sizes. If we’re going to use quantitative metrics in the review process, they need careful interpretation and analysis — and even better, should be accompanied by a range of qualitative data that captures the reception and engagement with the candidate’s work.

(8) Engage appropriate experts in the field to evaluate the work.

We have, by and large, relied upon external reviewers to produce the qualitative assessment of the tenure dossier. These experts are generally well-placed, more senior members of the candidate’s subfield who are asked to evaluate the quality of the work on its own terms, as well as the place that work has within the current discourses of the subfield. Where candidates present dossiers that include non-traditional work, however, we must seek out external reviewers who are able to evaluate not just the work’s content — as if it were the equivalent of a series of journal articles or a monograph — but also its formal aspects. These experts can and should also uncover and evaluate the specific evidence of the work’s impact within the field. In the last couple of years, several colleagues and I have each had the experience of being asked to undertake a review of a tenure candidate’s digital work, and have been specifically asked by those campuses to account for the technical value of that work and the significance that it has for the field. This kind of medium-specific review is, I would argue, necessary for all forms of nontraditional work: a candidate whose dossier includes translation should have at least one qualified external reviewer asked to focus on the significance of the translation; a candidate whose dossier includes journal editing should have at least one qualified external reviewer asked to focus on the significance of that editorial work for the field.

(9) But do not overvalue the judgments of those experts.

The external reviewers that are engaged by a department or a college to assess the work of a candidate are often the best placed to evaluate the quality of that work, its place within the subfield, its significance and reception, and the like. But all too often these reviewers are called upon — or take it upon themselves — to make judgments that are outside the scope of their expertise. We would do best to refrain from asking reviewers to indicate whether a candidate’s work would merit tenure at their institution, or whether a candidate is among the “top” scholars in their field, and perhaps even specifically to enjoin them from doing so. Such comparisons rely on false equivalences among institutions and among scholars, and they are invidious at best.

Even more, departments must use the judgments of those experts to inform their own judgment, not to supplant it. Departments know the internal circumstances and values of the institution in ways that external reviewers cannot. And while the members of a departmental tenure review body might not be experts in a candidate’s specific area of interest, bringing in such experts cannot be used to absolve them of responsibility for exercising their own judgments, including engaging directly with the candidate’s work themselves.

(10) Avoid (or at least beware) the false flag of “objectivity.”

The desire to externalize judgment — whether by relying upon quantitative metrics or on the assessments of external reviewers — is understandable: we want our processes to be as uncontroversial, as scrupulous, and therefore as objective as possible. And there are certain subjective judgments — such as those around questions of “collegiality” or “fit” — that should not have any place in our review processes. But aside from those issues, we must recognize that all judgment is inherently subjective. It is only by surfacing, acknowledging, and questioning our own presuppositions that we can find our way to a position that is both subjective and fair. This is a kind of work that scholars — especially those in the qualitative social sciences and the humanities — should be well equipped to do, as it’s precisely the kind of inquiry that we bring to our own subject matter. And in this line, I want to note that the external judgments that we seek from outside reviewers are no more objective than are our own. If anything, external reviewer testimony itself requires the same kind of judgment from us as does the rest of the dossier.

Moreover — and I have a whole other 45-minute talk focusing on this issue — we need to acknowledge that “peer review” is not itself an objective practice, nor therefore an objective marker of quality research. And there isn’t just one appropriate way for peer review to be conducted. Many publications and projects are experimenting with modes of review that provide richer feedback and interaction than can the standard double-blind process; it’s crucial that those new modes of review be assessed on their own merits, according to the evidence of quality work that they produce, and not dismissed as providing insufficiently objective criteria for evaluation.

(11) Reward — or at least don’t punish — collaboration.

Along those lines: I have been told by members of university promotion and tenure committees that an open peer review process, or other forms of openly commentable work, would doom a tenure candidate because anyone who participated in that process would be excluded as a potential external reviewer. The intent again is objectivity: any scholar who has had any contact with the candidate’s work, or has engaged in any communication with the candidate, or has participated in any projects with the candidate, could not possibly be “objective” enough to evaluate the work.

This is on the one hand the kind of adherence to the false flag of objectivity that I think we need to get away from, and on the other a highly destructive misunderstanding of the nature of collaboration in highly networked fields today. I understand the impulse, to ensure that the judgment provided by an external reviewer is as focused on the work as possible, without being colored by a personal relationship. But there are degrees, and we need to be able to make distinctions among them. At my own prior institution, the line was one about personal benefit: if potential external reviewers stand to gain directly in their own careers from a positive outcome in the review process — a dissertation director who becomes more highly esteemed the more highly placed his former advisees are; a co-author whose work gains greater visibility the more her partner’s career advances, and so forth — such reviewers should obviously not be engaged. But other levels of interaction should not disqualify reviewers, including co-participants in conference sessions, commenters on online projects, and so forth. We need to recognize that a key component of impact on a field is about those kinds of connection: we should want tenure candidates to be developing active relationships with other important members of their fields, to be working with them in a wide variety of ways. Such relationships should be disclosed in the review process, but they should not be used to eliminate the reviewers who might in fact be the best placed to assess the candidate’s work.

The key thing, again, is that the tenure review should be focused on assessing the impact that the candidate’s work is beginning to have on its field, and the confidence that impact to this point gives you about the importance of the work to come. Each aspect of the standards and processes that you bring to the tenure review process should be reconsidered in that light: are the measures you are using, the evaluators you are engaging, the ways the work is being read or experienced — are all of these aspects producing the best possible way of assessing a career in process, and the most responsible way of considering its future?

I want to close with one crucial question that remains, however, and it’s a big one. This process of change is huge, and wide-ranging, and it strikes at the heart of academic values. Who will lead it? I do not know the situation at your institution well enough to say that this is definitively true here, but I will say that I know of institutions where administrative initiatives to reform processes like tenure and promotion reviews are met with faculty resistance to having standards imposed on them, and yet when faculty are tasked with beginning such reform they often disbelieve that the administration will listen to them. Which is to say that among the things that have to be done in this process is creating an atmosphere of trust and collaboration between faculty and administration, such that the work can — and will — be done together.

This is not an easy process. It’s not just a matter of changing a few phrases in the current guidelines to permit consideration of new kinds of work. But I firmly believe that a real investment in envisioning a new set of tenure standards and practices can have a transformative effect on our campuses, opening discussions about scholarly values, promoting innovations in both research and teaching, and supporting the new ways that scholars are connecting not just with one another but with the broader public as well. I look forward to hearing your thoughts about how such a process might go forward.

Being Wrong

Intermittently over the last year, I’ve found myself fumbling around an idea about critical temporalities. That is: ideas keep moving, keep developing, even after you’ve locked them down in print or pixels. You continue developing your own ideas, one hopes, but the others who encounter your ideas also develop them as well, often in very new directions. And given how much critical development takes place in the negative (demonstrating the fundamental incorrectness of previously held ideas, as opposed to building beside or on top of those ideas), the conclusion I keep being drawn back to is that everything that we are today arguing will someday be wrong. (1)

On the one hand, there’s a bit of a lament in this: the half-life of an idea seems desperately short today; the gap between “that’s just crazy talk” and “that’s a form of received wisdom that must be interrogated” feels vanishingly small. How nice it would be for us to linger in that gap a little longer, to find there some comfortable space between Radical Young Turk and Reactionary Old Guard. To get to be right, just a little bit longer, before those future generations discover to a certainty just how wrong we were.

On the other hand, there’s a perverse freedom in it, and the possibility of an interesting kind of growth. If everything you write today already bears within it a future anterior in which it will have been demonstrated to be wrong, there opens up the possibility of exploring a new path, one along which we develop not just our critical audacity but also a kind of critical humility.

The use of this critical humility, in which we acknowledge the mere possibility that we might not always be right, is in no small part the space it creates for genuinely listening to the ideas that others present, really considering their possibilities even when they contradict our own thoughts on the matter.

Critical humility, however, is neither selected for nor encouraged in grad school. Quite the opposite, at least in my experience: everything in the environment of, e.g., the seminar room made being wrong impossible. Wrongness was to be avoided at all costs; ideas had to be bulletproof. And the only way to ensure one’s own fundamental rightness was to demonstrate the flaws in all the alternatives.

As a result, we were too often trained (if only unconsciously) in a method that encouraged a leap from encountering an idea to dismissing it, without taking the time in between to really engage with it. It’s that engagement that a real critical humility can open up: the time to discover what we might learn if we are allowed to let go, just a tiny bit, of our investment in being right.

If time inevitably makes us all wrong, maybe slowing down enough to accept our future wrongness now can help us avoid feeling embittered later on. The position of critical humility is a generous one — not just generous to those other critics whose ideas we encounter (and want to contradict) today, but to our selves both present and future as well.

It’s no accident that I’m thinking about this today, on the cusp of a new year, as I try to imagine what’s ahead and look back on what’s gone by. It’s a moment of letting go of what’s already done and cannot be changed, and of opening up to new, as yet unimagined possibilities ahead. I wish for all of us the space and the willingness to linger in that moment, even knowing how wrong we will someday inevitably have been.

  1. There is of course the possibility of tomorrow’s wrong idea being critically recuperated the day after, when those arguing its wrongness are themselves demonstrated to be wrong. But this happens much less frequently than does utter dismissal, alas.

The Tree

Mom & me, next to the tree (December 1967).

I hunted through the cabinets where I’ve stored the old family photos to find this one this morning. It’s probably my favorite Christmas picture.

There are so many things about this picture that I’m haunted by, my mother chief among them. She’s barely 23 here — quite mature, by the standards of the time, to have had her first baby, and yet I can never see this picture without focusing on how unbelievably young she is. I want so badly to reach back through the image and help.

I also can’t help but focus on how tired she looks: I’m about to be four months old, and it looks like it’s been a pretty eventful four months. Her wrists are so delicate, and her skin is so pale. And yet for all that superficial fragility, she would hold everything together a few years down the road, when it all must have seemed like it was falling apart.

Youth aside, exhaustion aside, in this picture is my most intense connection to my mother. But for a slightly different nose, the girl holding the baby could perfectly well be me. My life, just starting in this picture, could have circled around to this point with no effort at all.

So much of the path I’ve taken — that she helped me take — has been different, and yet it all for me starts here, in the open-mouthed wonder of it all. How did they get this thing in here? And what for?

Merry Christmas, and happy holidays.

Tools and Values

I’ve been writing a bit about peer review and its potential futures of late, an essay that’s been solicited for a forthcoming edited volume. Needless to say, this is a subject I’ve spent a lot of time considering, between the research I did for Planned Obsolescence, the year-long study I worked on with my MediaCommons and NYU Press colleagues, and the various other bits of speaking and writing I’ve done on the topic.

A recent exchange, though, has changed my thinking about the subject in some interesting ways, ways that I’m not sure that the essay I’m working on can quite capture. I had just given a talk about some of the potential futures for digital scholarship in the humanities, which included a bit on open peer review, and was getting pretty intensively questioned by an attendee who felt that I was being naively utopian in my rendering of its potential. Why on earth would I want to do away with a peer review system that more or less works in favor of a new open system that brings with it all the problematic power dynamics that manifest in networked spaces?

In responding, I tried to suggest, first, that I wasn’t trying to do away with anything, but rather to open us to the possibility that open review might be beneficial, especially for scholarship that’s being published online. And second, that yes, scholarly engagements in social networks do often play out a range of problematic behaviors, but that at least those behaviors get flushed out into the open, where they’re visible and can be called out; those same behaviors can and do take place in conventional review practices under cover of various kinds of protection.

It was at this point that my colleague Dan O’Donnell intervened; by way of more or less agreeing with me, Dan said that the problem with most thinking about peer review began with considering it to be a system (and thus singular, complex, and difficult to change), when in fact peer review is a tool. Just a tool. “Sometimes you need a screwdriver,” he said, “and when you do, a hammer isn’t going to help.”

Something in the simplicity of that analogy caught me up short. I have been told, in ways both positive and negative, that I am a systems-builder at heart, and so to hear that I might be making things unnecessarily complicated didn’t come as a great shock. But it became clear in that moment that the unnecessary complications might be preventing me from seeing something extremely useful: if we want to transform peer review into something that works reliably, on a wide variety of kinds of scholarship, for an array of different scholarly communities, within a broad range of networks and platforms, we need a greatly expanded toolkit.

This is a much cleaner, clearer way of framing the conclusions to which the MediaCommons/NYU Press study came: each publication, and each community of practice, is likely to have different purposes and expectations for peer review, and so each must develop a mode of conducting review that best serves those purposes and expectations. The key thing is the right tool for the right purpose.

This exchange, though, has affected my thinking in areas far beyond the future of peer review. In order to select the right tool, after all, we really have to be able to articulate our purposes, which first requires understanding them — and understanding them in a way that goes deeper than the surface-level outcomes we’re seeking. In the case of peer review, this means thinking beyond the goal of producing good work; it means considering the kind of community we want to build and support around the work, as well as the things we hope the work might bring to the community and beyond.

In other words, it’s not just about purposes, but also about values: not just reaching a goal, but creating the best conditions for everyone engaged in the process. It’s both simpler and more complex, and it requires really stopping to think not just about what we’re doing, but what’s important to us, and why.

If you’ll forgive a bit of a tangent: I mentioned in my last post that I’d been reading Jim Loehr and Tony Schwartz’s The Power of Full Engagement, which focuses on developing practices for renewing one’s energy in order to be able to focus on and genuinely be present for the important stuff in life. I only posted to Twitter, however, the line from the book that most haunted me: “Is the life I am living worth what I am giving up to have it?”

At first blush, the line produces something not too far off from despair: we are always giving up something, and we frequently find ourselves where we are, having given up way too much, without any sense of how we got there or whether it’s even possible to get back to where we’d hoped to be.

But I’ve been working on thinking of that line in a more positive way, understanding that each choice that I make — to work on this rather than that; to work here rather than there; whathaveyou — entails not just giving up the path not taken, but the opportunity to consider why I’m choosing what I’m choosing, and to try to align the choice as closely as possible with what’s most important.

In the crush of the day-to-day, with a stack of work that’s got to be done RIGHT NOW, it can be hard to put an ideal like that into practice. And needless to say, the opportunity to stop and make such choices is an extraordinary privilege; thinking about “values” in the airy sense that I’m using it here becomes a lot easier once things like comfort, much less survival, are already ensured.

But this is precisely why, I think, those of us in the position to do things like create new programs, or publications, or processes, need to take the time to consider what it is we’re doing and why. To think about the full range of tools at our disposal, and to select — or even design — the ones that best suit the work that is actually at hand, rather than reflexively grabbing for the hammer because everything in front of us has always looked like a nail.

So, an open question: if peer review is genuinely to work toward supporting our deeper goals — not just getting the work done, but building the future for scholarship we want to see — what tools do we need to have at our disposal? What of those tools do we already have available, even if we’ve never used them for this purpose before, and what new tools might we need to imagine?

Engage. Disengage. Repeat.

I believe that I have caught myself just this side of a major case of burnout.

If that sentence is an exaggeration, it’s not by much. A few friends who had the dubious pleasure of talking with me just after I arrived at THATCamp Leadership last week can attest that I showed up with an attitude that was in need of a little adjustment. Whenever I was asked how I was, I’d find myself starting out by saying “things are great,” which I meant, but which gradually gave way to a Five-Minute Complaint. I kept trying to stop myself, but it kept bubbling over. I’d hit some kind of limit, and my self-censor was just gone.

It wasn’t that I was unhappy about being where I was; I was very pleased to be back at George Mason, to be seeing my friends, to be participating in an event that promised to be both important and energizing.

It wasn’t that I was unhappy about where I’d just come from; I’d had an excellent, if action-packed, visit to talk with faculty and administrators at an institution thinking seriously about its digital initiatives in the humanities.

It was more that where I was and where I’d just come from were on the tail end of five solid weeks of travel and committee meetings, involving eight cities (not counting New York) and more planes, trains, and automobiles (and one unexpected van) than I can count.

It was thirteen nights in eight hotels over a five-week period, capped off with a musty room with two double beds (rather than one king) on a low floor (rather than a high one) with an industrial rooftop right outside my window (rather than pretty much any other view possible from that building).

Something about that room was the last straw, the thing that sent me right over the edge into a bitter litany of complaint aimed at anyone who would listen. But it wasn’t the room, and it wasn’t the trip: it was everything I’d gotten myself into over the previous month and a half, and — especially — knowing full well that I’d done it to myself. That no one was responsible for where I was, or for the mood I was in, except me.

I’ve spent the week-plus since trying to figure out how to rectify this situation, how to pull myself back from the edge of complete flaming disaster. (1) Because, of course, my major projects did not grind to a halt in the office while I was traveling. Nor did the deadlines for the writing I’ve promised people this fall get any further away. It has become painfully clear that something has got to give — or that something will be me. And so, after a lot of thought, I think I’ve figured out what I need to do in order to make things better.


I need to do less.

* * *

You would be fully justified in rolling your eyes at this point. Because, yeah, duh. But this is a lesson that I have had to teach myself over and over.

I can read about the importance of significant downtime and totally get it. I can even go so far as to write about the degree to which stress has become the contemporary sign of our salvation or about the role of goofing off in the most important, most creative work that I do.

But I somehow cannot internalize it all enough to refrain from over-scheduling myself. Or at least I have not done so. And even when I think I’ve done a good job of protecting myself, of determining what’s enough and trying not to go beyond it, I manage to cram enough tiny things in around the edges that I end up just as over-scheduled and exhausted as ever.

* * *

If I’m going to be completely honest with myself — and this is hard — a huge percentage of this over-scheduling is about ego. People like my work enough to want me to come talk to them, and they’re nice to me when I get there, and that feels awfully, awfully good. (2) There’s of course also a general people-pleasing aspect to the difficulties I have turning down requests. And as long as I’m at it I’ll acknowledge that I’ve also fallen under the spell of competitive busyness; every time somebody says “I don’t know how you do it” about my travel schedule I get a sad little boost.

Ha, I don’t know how I do it either.

I feel as though I’ve been able to do some good out there in my travels — as though I’ve been able to help some departments and institutions jumpstart some much-needed conversations, and as though I’ve been able to help demonstrate some of the possibilities for the academy’s future. But I also know, when I’m willing to look at it squarely, that I’ve gotten a lot out of just feeling important. But that’s finally wearing thin, and the toll is beginning to make itself known.

* * *

It’s perhaps not a coincidence that during this same period I’ve found myself withdrawing from the various venues where I engage with colleagues and other folks online. I haven’t been very present on Twitter, and I certainly haven’t posted here. Some of that withdrawal has been about not having enough time or space or whatever to devote to figuring out whether I had anything worth saying. Some of it has been about a level of conflict of late that I haven’t had the energy to face.

In any case, for someone whose job is focused on fostering productive online engagements, this withdrawal has not seemed to me a Good Sign, and it’s been one more thing that’s had me worried.

But I’m now thinking that the withdrawal is in part about the conservation of energy, and as such may not have been such a bad thing after all. Total disengagement would be a problem. But disengaging enough to restore oneself, in order to be better prepared to re-engage, is utterly, utterly necessary.

It’s like sleep. It’s cyclical. And you’ll go crazy without it.

* * *

I’ve been reading a fair bit of self-help type stuff of late, in part (3) because I’m interested in the genre, in how it can describe and shape lived experience, and in the purposes it might serve in a scholarly context, and in part because I have felt myself in need of something that might help me personally figure out a better path. A more manageable way of being in the world.

Among the things I’ve read lately is Jim Loehr and Tony Schwartz’s The Power of Full Engagement, which, if they’ll forgive me, is a rotten title for a very important book. (4) The key lesson in the book — heck, it’s in the subtitle, but if you’re interested, read farther than that — is that we are dead wrong in believing that time is the resource we are shortest on, the thing that, if only we had more of it, would let us do what we need to do. In fact, the resource we are shortest on is energy, and we resist many of the things we need to do in order to conserve and restore our energy because they look to us like enormous wastes of time.

However, it’s clear that those wastes of time are precisely the things that allow us to step out of the barrage of the urgent long enough to discover, focus on, and make room for the important. In order to be genuinely engaged where it most matters, in other words, you have to find regular, routine ways to disengage. And to somebody as completely inculcated into our always-on, more more more culture as I am, that disengagement does not come easily.

Or at least it doesn’t come easily in a productive form. But it’s becoming clear that if I don’t figure out some better strategies for managing productive disengagement, a few much more damaging modes of disengagement are lurking just around the corner.

* * *

So, doing less. It’s not just a matter of saying no to more things. I keep trying to find some quantitative limit for how much I can do — no more than one trip every two weeks! no more than three major service commitments! — and yet it keeps not working. The over-extendedness just gets worse.

I finally realized something about why last week. In talking with my coach (5) about the issue, it suddenly became clear that the problem is the nature of the quantitative itself. If I set a limit of four trips per semester, it becomes very hard to distinguish between four trips and four with one little add-on. Or five, for that matter. With maybe one small side thing tucked in there too. And something local, because that’s not really a trip. And next thing you know, I have a calendar filled with five solid weeks of three-city trips and am railing at my friends over cocktails.

It’s the nature of the more more more culture: if you can run two miles, isn’t it better to run five? If you can write an article about something, isn’t it better to turn it into a book? If you can speak in four places this semester, isn’t it better to add on just… one… more…?

The quantitative will do you in every time, precisely because so much of how we operate is all about finding our limits and pushing past them. So it’s becoming clear to me that I’ve got to turn my attention to the qualitative, if I’m going to change anything, even if it’s not entirely clear what in this context the qualitative might mean.

* * *

One key to the qualitative, I think, is figuring out how to determine what’s important, and how to separate it from what’s just nice, or ego-gratifying, or adding to the frequent-flyer record. But the real challenge in that is that I don’t mean “important” in some externally-defined sense: what’s best going to further my career goals, or promote my organization, or what have you. I mean what is most important in a very personal sense: what’s most in line with the things I value, the things I want to be, the ways I want to live. What’s going to support me not just in getting more done, but in doing what I most want to do, and doing it better.

What am I doing it all for, is the question I keep asking myself.

* * *

As I’ve been working on this post, I’ve been hoping that some conclusion would present itself to me, some anecdote that would cheerily illustrate everything I’m pondering here. I’m not sure that anything can; I’m not sure that concluding, in fact, is the right way to end this line of thought. As the links above might suggest, I’ve written too many times before about the need to recalibrate and reshape the way I’m living, and yet. Here I am. Again.

I had, however, a near-perfect day yesterday. I did a bit of work in the morning, and then went and got a fantastic haircut, and had a great lunch with a friend I haven’t seen in eons, and then headed back home. And on a whim, I told R. that I wanted to take a walk in the park. Rather than push it, though, in the ways that I usually do (surely you can go a little faster!), I let myself just… walk. A bit faster than a stroll. Kind of an amble. It only took about five minutes longer than usual to make the loop of the park, and in the process, I got to do two really important things. I got to spend the hour really talking with R., and I got to look around.

And the trees. If it’s not peak leaf around here yet, at least a few of the trees are there: flaming reds and yellows mixed in amongst the still-rich greens. It was absolutely gorgeous, the best moment of my favorite season.

It’s uncomfortably obvious (see footnote 5 above) to point out that it will all be gone in the blink of an eye. But it will be. And I’m grateful, really really grateful, not to have missed it.

That’s what I’m doing it for. That’s what I want to keep my eye on. How the things I elect to do can better contribute to my ability to engage with the here and now, and, when I need to recover, can let me gently disengage.

I do not know how. But I do know why. And that’s at least a start.

  1. Okay, that one’s an exaggeration.
  2. I’ll just go ahead and admit here tucked away at the bottom of this post how much I identified with Sally Field’s “you really like me” moment, and how personally I took the grief she was given over it. Because, seriously, if you’re not a little bit shocked every time somebody likes something you do, the inside of your head is a very different place than the inside of mine.
  3. Says the scholar desperate to justify her more pedestrian engagements with the world.
  4. Schwartz is quoted in the “downtime” story I linked to a ways back.
  5. Yes, I have a coach. And she’s awesome. She’s helped me sort out a whole series of issues related to my new career path in ways that have been productive for me. But there’s still something in the phrase “my coach” that makes me just as uncomfortable as the statement that I’ve been reading a bunch of self-help literature, as if it were some kind of admission that I am underneath the scholarly veneer so simple that my issues can be understood and helped in the most facile ways possible. It is not at all unlike the discomfort with AA evidenced by so many of the characters in Infinite Jest, who are at great pains not to admit that, as Don Gately notes, “It starts to turn out that the vapider the AA cliché, the sharper the canines of the real truth it covers.” It’s me, whether I like it or not.

I Am Not Blogging

This post is likely little more than a bit of ritual throat-clearing, designed to help me get past a stage in the trying-to-write-again process in which I simply cannot get myself to focus on what it is that I need to write (promised articles coming due in very rapid succession) and yet cannot find a way to noodle around with something new, either. The result is that I find myself looking guiltily at this space, thinking I should be writing something here, that it might help get me going again, but finding myself with nothing much worth writing about.

It’s not as though I’m not writing, though, all-day-every-day: memos and reports and email messages and proposals and even one very big important project for the day job. It’s just that all of that has taken a tremendous amount of energy off the top of the thing I persist in thinking of as “my own writing.” But deadlines are pressing, and I find myself flailing around a bit, looking for that magical point of entry into these articles.

And so, I’m back into my too frequently forgotten strategies: sitting down at the computer first thing in the morning, before the day’s demands get the opportunity to make themselves known; doing whatever freewriting I need to do to get myself loosened up; consulting the notes I’ve made about the projects in front of me.

This post is a moment of knuckle-cracking before I set fingers to keyboard, hoping that the loosened-up hands will magically tap out the answers. Wish me luck.