The American Association of University Professors (AAUP) might have become something it should have become. Instead, the organization opted to shore up the shortcomings of institutional employers, first as a professional association and later as an employee union. The AAUP has made itself an authoritative functionary of the inherited institutional model. But 110 years ago when the organization was founded, 55 years ago when it morphed into a labor union, or at any point along the way, why did its members never wonder whether there is a different way to serve and steward the social good of higher education?
In the history of the AAUP you will find no ad hoc committee or research report of any kind that poses this question. As with the rest of the academe, exclusive institutional employment and enrollment remains the unexamined assumption of its members. As such, all AAUP action amounts to a (defensive) reaction to the dynamics of this unchallenged, inherited, monopolistic mode of higher education earning and learning. This makes everything the AAUP says on the question of AI in the academe both irresponsible and predictable, as evidenced in its July 2025 report, "Artificial Intelligence and Academic Professions."
Though the title indicates that the AAUP's report has in mind only a narrow band of dues-defined concerns, the PSA response to AI takes a more inclusive, direct, and explicit concern for the entire social good and all those who depend upon it. In comparing the ad hoc committee and PSA responses, I find myself increasingly concerned about the future of higher education, and so this post addresses each of the report's findings and recommendations on AI in the academe.

From tech to tenet, the alternative Professional Society of Academics (PSA) model for higher education offers a far superior treatment of newness in the academe. One reason is that PSA does not use the institutional employer-enroller model into which the AAUP was born and raised. Instead of the inherited absurdity of individuals trying to secure faculty employment while also trying to exercise shared governance over their institutional employers, PSA eliminates the middleman employers and vests the power to make service and stewardship decisions in the hands of independently practicing academics who are licensed and supported by a legislated professional society of peers. To quickly grasp this arrangement, think of academics who, like attorneys, can provide their service and stewardship in the employ of others or as independent, licensed, solo or partnered practitioners. From this alternative, coherent point of view, the AAUP response to much of the turmoil in higher education is both comic and tragic.
To show that this is the case with respect to AI, each of the AAUP's findings and recommendations for this crucial technology of future higher education is addressed in turn:
1) On the Finding and Recommendation Regarding "Lack of Shared Governance"
The AAUP's Finding: The report correctly identifies a core pathology of the modern university: administrations unilaterally procure and deploy AI and other educational technologies with little to no meaningful input from faculty, staff, or students. This violates the AAUP's own long-standing principles of shared governance; other examples of this inherent institutional weakness include the introduction of Learning Management Systems in the late 1990s and MOOCs in the early 2010s.
The AAUP's Recommendation: The proposed solution is to create elected ed-tech oversight committees with real power to challenge procurement, review contracts, and enforce policies.

The PSA Response:
I wonder: how successfully did the AAUP manage the introduction of LMS and MOOC education technologies on behalf of academics? From the point of view of PSA, all such efforts are shameful and avoidable, and the AAUP's diagnosis of the problem is a classic example of what happens when academics fail to question the foundational assumptions of their own thinking.
The problem is not that shared governance is being poorly implemented (though obviously and chronically it is); the problem is that shared governance is itself a logical and logistical error that masks the fundamental power imbalance of the employer-employee relationship. A committee, no matter how well funded or empowered, that must "challenge," "push for," and "monitor" the administration is not a governing body. It is, at best, a well-organized lobby group of employees petitioning their managers, in circumstances where failure leads to labor, legal, and lobbying actions that undermine the social good and society.
The very language of the recommendation ("meaningful input," "the ability to meaningfully challenge decisions") reveals the weakness of the position. It is a plea for a seat at a table that is not one's own, where terms like "meaningful" are defined through fractured faculty negotiations that eventually lead to fixed-term, mutable labor agreements with exclusive and often adversarial institutional employers and their government funders. What could possibly go wrong that hasn't already repeatedly gone wrong in this assumed monopoly on higher education?
The Professional Society of Academics model addresses this problem at its root: the problem of shared governance is not solved, it is dissolved.
In the PSA model, there is no "administration" to negotiate with over the procurement of AI. The peer-governed Professional Society, the body of licensed, autonomous professionals, is the governing body of higher education facilitation. It is the Society, through its democratically established committees, that would set the standards, vet the technologies, and create the ethical guardrails for the entire sector: a profession that exercises prerogative over AI use in the operation of independent, solo or partnered frontline practices in higher education.
The PSA model does not ask for a "meaningful voice" for faculty. It argues that the faculty is the voice. It replaces the fiction of shared governance with the reality of professional self-governance. Until the AAUP is willing to confront this more fundamental, structural problem, its recommendations will remain a perpetual and ultimately futile attempt to democratize fundamentally undemocratic institutions.
To repeat, the PSA model dissolves this problem by eliminating the administration as a separate managerial class. The peer-governed Professional Society does not "challenge" decisions; it makes them, according to the collective and individual prerogative of a legislated academic profession that does not employ academics but enables them to earn in higher education, just as individuals earn in the social goods of law and medicine.
2) On the Finding and Recommendation Regarding "Work Intensification and Devaluation"
The AAUP's Finding: The report correctly identifies a vicious cycle. Pre-existing work intensification and the institutional devaluing of core pedagogical tasks (like grading and writing recommendation letters) are pushing faculty to use AI as a desperate shortcut. At the same time, the introduction of AI is adding new, uncompensated labor (plagiarism detection, mandatory training), further intensifying the work.

The AAUP's Recommendation: The proposed solution is to "maintain protections" against work intensification, deskilling, and job loss, and to ensure that any productivity gains are shared.
The PSA Response:
Once again, the AAUP's diagnosis of the symptom is correct, but its prescription is tragically inadequate because it fails to address the underlying disease. The problem is not that core academic work is incidentally "undervalued"; the problem is that the unchallenged, exclusive use of the higher education institution model makes all employees susceptible to devaluation and exploitation in the form of underpaid or unpaid labor across the academe frontline.
Once again, the AAUP fails to ask the key questions. Based on its findings, the question should be, "Why is the use of AI by employers, employees, and enrollees so desperate across the academe?" On the recommendation side, the question should be, "If AI improves higher education for society, why do work intensification, deskilling, and job loss matter?"
An academic's career advancement is overwhelmingly tied to research output, grant acquisition, and administrative service, not to the quality of their teaching or mentorship. Grading and writing letters of recommendation are treated as costs to be minimized, not as central professional acts to be celebrated. The "work intensification" that the AAUP laments is a direct and predictable consequence of this flawed incentive structure. Recommending "protections" is like recommending a better bilge pump for a ship that was designed to leak.
The PSA model does not try to patch the leaks; it proposes a new kind of ship.
In PSA, the problem of "undervalued" labor is dissolved. An academic is an autonomous professional practitioner whose reputation and livelihood are directly tied to their demonstrated competence and the success of their students, as measured by the transparent Public Performance Record and the Objective Crowd Peer Evaluation and Assessment mechanisms of the model.
- A practitioner who becomes known for writing powerful, effective letters of recommendation will be sought out by ambitious students.
- A practitioner who develops a reputation for providing rigorous, transformative feedback will see their practice flourish.
- A practitioner who dedicates their career to pedagogical excellence can achieve the highest levels of success and respect without ever having to publish a single research article.
- A practitioner who successfully prepares students for objective evaluation can now write letters of reference that are actually meaningful to those who rely on such professional judgements.

The PSA model solves the problem of work intensification not by adding "protections," but by re-aligning the incentives. It makes the core acts of teaching and mentorship the primary drivers of professional success. It re-values the work by making it valuable again. Until the AAUP is willing to confront this deeper, structural problem, it will be forever trapped in a defensive crouch, fighting a losing battle to protect the dignity of a profession whose own institutional working conditions are designed to undermine it.
As one serious form of undermining, consider how much higher education work is done for nothing. Every industry has an element of uncompensated labor, but what does the chronically underfunded yet monopolistic institutional means of earning and learning in higher education cost individuals in uncompensated labor?
I ask this question because the AAUP report on AI in the academic professions fails to consider the use of ed-tech to reduce this unacceptable cost of an ignorant assumption. This sort of myopic, one-sided analysis is hardly what is needed to navigate the future of higher education, with or without ed-tech. One need only look to the AAUP's Academe Blog to see how hostile this organization is to an AI technology that is here to stay as a tool of higher education.
With its focus on protections for academic workers, the AAUP recommends using strong contractual and handbook language to protect against AI-driven deskilling, job loss, and wage decreases. In doing so, it reveals its function as a mere faculty labor union whose members are jointly and severally party to the social contract for higher education, offering no discussion of the students or societies that rely upon the social good. The AAUP is a union, and unions are part and parcel of the institutional model that is the fucking problem. Hello!?

The AAUP recommendation for negotiated employment contracts is a call to build a stronger defensive wall around a besieged tower. It accepts the premise that academics are "workers" whose rights must be protected from their employers.
PSA argues that this entire framework is the source of the problem. In a professional model, academics do not have to be employees; they are autonomous professional practitioners who own their labor. There is no employer to deskill them, decrease their wages, exploit their unpaid labor, or otherwise determine working conditions for academics who exercise professional prerogative in practices whose success is determined by demonstrated competence and the value they provide to their students and colleagues, as documented in a transparent, objective Public Performance Record. The PSA model doesn't seek better protections for workers; it seeks to abolish the category of "academic worker" altogether and replace it with "academic professional." Only, this academic professional is not the faculty-employee version that the AAUP has assumed for over a century now.
Here are a couple more questions for the AAUP and all other unionphiles in the academe: What resources are spent on "battles," "fights," and "wars" between labor union faculty employees and their institutional employers? And how much of this non-academic work to protect academic work invites the unexamined or unethical use of AI in the academe?
I suspect the retort will be something moronic like, "Well, if the employer would just act properly, according to AAUP recommendation, censure, and censorship, there'd be no problem."
3) On the Finding and Recommendation Regarding "Uncritical Adoption and Harm"
The AAUP's Finding: The report correctly identifies that university administrations are uncritically adopting untested, unproven, and potentially harmful AI technologies. These tools can be biased against marginalized groups, can be used for surveillance, and may actually hinder the core goals of education.
The AAUP's Recommendation: The proposed solution is to require administrations to conduct impact assessments, test for bias, ensure accessibility, and include indemnity clauses in contracts that hold vendors liable for harms.
The PSA Response:
The AAUP's diagnosis of the symptom is, once again, correct. The uncritical adoption of potentially harmful technology is a clear and present danger to the academic community. However, the proposed solution (a set of rules and procedures to better manage the procurement process) fails to grasp why this problem exists in the first place. It is an attempt to regulate the behavior of an institution whose very structure incentivizes that behavior. The irony is that the AAUP charges its institutional employers with the harm of uncritical adoption while the organization does precisely that with its ignorant assumption of the exclusive earning and learning model of universities and colleges.

These institutions are large, bureaucratic, market-driven corporations. From the perspective of a managerial class focused on efficiency, risk management, and scalable solutions, the sales pitches of ed-tech vendors are incredibly seductive. They promise data-driven control, automated efficiency, and the appearance of innovation. The "harm" that the AAUP rightly identifies is a predictable and, from a purely managerial standpoint, often acceptable byproduct of this logic. Asking an administration to regulate itself more effectively is like asking a factory to voluntarily reduce its own efficiency.
Does the AAUP have similar concerns about the work of administrative and other staff employees, where, for instance, the human resources department is replaced by AI? I suppose it might if these employees paid AAUP membership dues. But now ask yourself: why doesn't the AAUP represent all workers in the academe, all workers throughout the institutional model it assumes and with which it uncritically attempts to service and steward higher education? Explore this and once again you'll find the root absurdity in the AAUP and its ignorant inheritance.
The Professional Society of Academics (PSA) model does not try to regulate the factory; it replaces it with a different mode of production.
In the PSA model, the problem of the uncritical, top-down adoption of technology is dissolved, because the top-down assumption of the institutional model of employer-enrollers is dissolved. There is no central administration to make a campus-wide procurement deal with a vendor. Decisions about technology use rest with the autonomous licensed practitioners of the professional society. An individual historian, physicist, or sociologist would choose the pedagogical tools that best suit their specific discipline and the needs of their students. This creates a market in which technology must prove its value to the actual end-users, academics and students, not just to administrators and accountants. A tool that is biased, ineffective, or harmful would be quickly abandoned by the community in favor of better alternatives. The peer-governed Professional Society would not be a purchaser but a curator and ethical guide. It could vet technologies, provide research on their effectiveness, and issue standards or warnings, empowering its members to make informed choices for themselves.
The PSA model solves the problem by removing the centralized power structure of scattered institutions that is susceptible to the sales pitches of vendors and the logic of managerialism. It trusts the collective, distributed intelligence of the professionals themselves to choose the right tools for the job. It replaces a system of top-down mandates and committees that issue policy and practice with one of bottom-up, evidence-based adoption in a system of performance transparency and evaluation objectivity that is atomized across the frontline of the academe.
Speaking of transparency and accountability, while the AAUP recommends a requirement that administrations and vendors be transparent about data collection and hold tech companies liable for harms, PSA points out that the need to demand transparency from an administration is an admission that the system is, by default, opaque. It is a plea for the powerful to reveal their workings to the powerless. In the PSA model, transparency is the default setting. The Public Performance Record is a radical commitment to transparency, and the operations of the peer-governed Professional Society are transparent to its members. Accountability is not a feature to be demanded; it is the core mechanic of the system, flowing directly from the practitioner to their students and peers.
4) On the Finding and Recommendation Regarding "Lack of Transparency and Choice"
The AAUP's Finding: The report correctly identifies that faculty, staff, and students are often mandated to use AI and other data-intensive technologies without meaningful avenues to opt out. There is a profound lack of transparency regarding how these tools work, what data they collect, how that data is used, and who profits from it.
The AAUP's Recommendation: The proposed solution is to create and enforce meaningful opt-out policies, protect intellectual property, and demand greater transparency from both administrations and vendors.

The PSA Response:
The AAUP has, once again, perfectly described the cage, but it has failed to question why we are in a cage in the first place. The call for "transparency" and the "ability to opt-out" is a plea for more humane treatment from a fundamentally paternalistic and opaque system. It is the request of a subject, not the demand of a sovereign professional.
The lack of transparency and choice is not an incidental flaw of the higher education institution model; it is a necessary feature of its design. The university and college are managerial, bureaucratic corporations. In such a system, faculty are employees and students are enrollees. Information is managed, not shared. "Choice" is a privilege to be granted or withheld by the administration, not an inherent right. The very need to fight for an "opt-out" policy is a concession that the default is mandatory compliance, as a condition of employment and enrollment in the only game in town.
The Professional Society of Academics model does not seek better rules for the cage; it dismantles the cage entirely.
In the PSA model, the problems of transparency and choice are dissolved because the system is built on a different foundation. First, transparency is the default of the model, with the Public Performance Record as a commitment to radical transparency. All relevant information about a practitioner's qualifications, methods, outcomes, and professional standing is public by default. The operations of the peer-governed Professional Society are transparent to its members and to the society that entrusts the model with provision and protection of the social good. There are no secret vendor contracts or hidden data collection policies to demand access to, because the professionals themselves, through their society and frontline practices, are the ones who set and enforce the standards.
Second, choice is the default, with the entire model built on personal choice. Students choose their practitioners based on the transparent data in the PPR. Practitioners choose their own pedagogical tools and methods, accountable to their students and their peers. The concept of an "opt-out" becomes irrelevant because the entire system is an "opt-in." No one is forced to use a technology they believe is harmful or ineffective. No one is forced to attend an institution they believe is harmful or ineffective. No one is forced to work for an institution they believe is harmful or ineffective. No one is forced to work or study with an individual they believe is harmful or ineffective.
The AAUP is fighting for the right to refuse a tool handed down from on high. PSA argues for a system in which the professionals are the ones who choose the tools in the first place. It is a fundamental shift from a culture of managed compliance to a culture of empowered professional and personal choice. PSA, for its part, refuses a toll handed down from on high over centuries: I disclaim my inheritance of the exclusive institutional employer-enroller model, and I suggest the AAUP do the same.

The AAUP Might Have Represented Professionals
AI is impacting the work of the legal and medical professions, and the professional social contract is producing responses that are similar to, yet tellingly different from, the AAUP's. In law and medicine, the professions themselves are actively working to shape the integration of AI, framing it as a tool to augment the autonomous professional. In academia, as the AAUP report shows, AI is often being deployed by a separate managerial class (the administration) as a tool to manage, surveil, or replace academic labor.
The PSA model is the only framework that aligns the academic profession with the more robust and forward-looking approach of doctors and lawyers. In a PSA world, there is no administration to impose a one-size-fits-all AI solution in a perpetual state of desperation. The autonomous professional practitioner is the one who chooses and deploys AI tools to enhance their own work, like the doctor using an AI diagnostic tool or the lawyer using AI for research. They are the human-in-the-loop by default. The peer-governed Professional Society would function like the State Bar of California, the University of the State of New York, the American Medical Association, or the American Bar Association. It would be the body that sets the ethical standards, vets the technologies, and provides guidance on best practices for the use of AI in higher education.
The AAUP's recommendations are an attempt to win a better contract for the employees in a higher education factory. The PSA model is a proposal to get rid of the factory and turn the workers into the owners of the workshop, with full control over their own tools. It is the only model that offers a path for academics to become true "AI-augmented" professionals, rather than "AI-managed" employees.
What the AAUP and the rest of the academe fail to appreciate is that the only way to earn a living in higher education as an academic is as an employee. A law firm might use AI to eliminate the expense of paralegals or junior associates, but the attorney can open an independent, solo legal practice, using that same AI or paralegal for assistance. No such option is available in the institutional employer-enroller monopoly that has been inherited without challenge. If you want to earn in higher education, you must be a faculty employee in a model where the AAUP must recommend (not demand, never mind decide) that the employee be placed on a shared governance committee overseeing the introduction of ed-tech to higher education via the only legal, gainful path available: the university or college employer-enroller.

If it isn't happening already, one day soon the AAUP will be forced to ask its dues-paying members to sacrifice their individual interests for the welfare of the institutions that have come to embody the social good. For a modern society and its members to flourish, we need broad, equitable, affordable access to law, medicine, and education. But social goods are not make-work schemes to boost employment and consumption statistics. As such, if replacing much of the academic workforce with AI would better serve and steward the social good, then it must be done, and, particularly in its public provision, individual employment is not a principal concern. Certainly, earning a living is a concern of the employee and their labor union representative, in an absurd model where individuals protect themselves against institution-driven redundancy by compromising their social contract obligations to individuals and society for the sake of a paycheck. The AAUP would do well to remember that an attorney, academic, or physician is not an actor, advertiser, or footballer with no special social contract obligations.
Academics are not inherently employees; they were made that way, and the AAUP keeps them that way. These academe leaders are highly educated and deeply obliged, yet they still fail to see that in a professional model for higher education like PSA, the academic, like the attorney or the physician, does not have to sacrifice themselves on the altar of (AAUP) assumption. As in other professions, academics can earn from their contribution to a social good as self-employed individuals in solo or partnered higher education practice.
Where stewardship of newness in the academe is concerned, this distinction alone makes PSA an obvious victor over the ignorance of academics who can't or won't perform the most basic of their social contract functions: to question the assumptions of their service and stewardship to society.