January 2026 Issue

New Year’s Resolution #1: Don’t underwrite lobotomies

By David Snowball

Ninety years ago, emergent science and a self-assured entrepreneur came together to offer a quick and cheap solution to an intractable problem.

The press loved it. The public became enamored, and demanded more and more of it. The Nobel committee awarded a Prize for it.

It seemed like a good idea at the time.

It always does.

The magic wand in 1936 was the lobotomy, executed with an ice pick thrust into the eye socket, in front of adoring crowds, by Dr. Walter Freeman.

The story of Walter Freeman’s “ice pick” lobotomy tour is horrifying in its own right, but it’s also an interesting reminder of why we can become enamored with a new technology even when the evidence of its effects is literally staring us in the face.

Understanding loboto-mania in 1936 perhaps offers a window on Chat-mania in 2026.

The lobotomist as miracle worker

In the decade after 1936, Walter Freeman turned lobotomy from an exotic neurosurgical experiment into a mass‑market “solution” for overcrowded asylums and distressed families. He championed a simplified transorbital technique that used an ice‑pick‑like orbitoclast hammered through the eye socket, often with only electroshock to knock patients out, so that he and other psychiatrists could operate outside surgical theaters and without neurosurgical training. (Right: he was not a surgeon; his training was in neurology and psychiatry.)

The effects of gouging a hole in the front of the brain were, let’s say, reasonably predictable. Freeman’s lobotomies sometimes reduced severe agitation but frequently caused profound emotional numbing, cognitive damage, lifelong disability, and a notable risk of death (10-15% by some estimates), making the overall effect devastating for many patients.

You would think the bloody horror of it would dent the procedure’s popularity. You would be incorrect. The fact that the procedure was crude, irreversible, and only loosely tied to scientific understanding of the brain did not stop its spread; if anything, the simplicity of the story, one quick intervention to quiet disorder, was part of the appeal.

Freeman became a showman and salesman. He crisscrossed the United States in a van named the Lobotomobile, barnstorming more than 50 state hospitals, performing or supervising thousands of lobotomies—sometimes dozens in a single day—under headlines that celebrated a miracle cure for intractable mental illness. Media coverage highlighted grateful families and quick discharges. He continued operating into the late 1960s until a patient died during her third lobotomy.

Lobotomy flourished because it promised to solve real institutional crises: underfunded, overcrowded mental hospitals, limited therapies, and public pressure for visible action. The procedure was cheap, fast, and scalable; administrators could “empty the back wards” far more easily than they could fund staff, long‑term psychotherapy, or community care. Professional legitimacy followed: Egas Moniz received the 1949 Nobel Prize for the leucotomy, lending an aura of scientific inevitability to a technique that many neurosurgeons already viewed as reckless.

The rhetoric around lobotomy echoed that aura of inevitability. Advocates described it as cutting‑edge psychosurgery, a humane, scientifically grounded intervention that would free patients from torment and relieve overburdened families and institutions. Its harms were reframed as acceptable side effects: people reduced to near-catatonic dependency were cheerily described as calmer, more manageable, less anxious.

We offer this historical vignette because we’re yet again being offered the opportunity to deal with serious challenges via lobotomy, though this time the procedure involves algorithms rather than an ice pick. They are being sold, with familiar confidence, as solutions to everything from student success to research productivity to investment performance. Let’s check in quickly on three possible sites for modern lobotomies: your house, your alma mater, and your portfolio. We’ll offer the humble suggestion that perhaps, just perhaps, Resolution #1 for 2026 should be: do not underwrite lobotomies, even the frictionless digital kind.

Your home lobotomy kit

Human beings are cognitive misers. Thinking hard is metabolically expensive (even at idle, our brains consume 20% of all the energy we use), effortful, and slow, so we habitually reach for shortcuts whenever we can get away with them. Rules of thumb substitute for calculation, to-do lists or written notes substitute for memorization, and purchased flowers substitute for expressed emotion.

Psychologists call this “cognitive offloading”: the use of external tools and actions to reduce the load on internal memory and attention. Offloading is not inherently bad; writing a shopping list is better than trying to juggle 17 items in working memory, and structured note‑taking can deepen learning rather than weaken it. But the same mechanisms that make offloading efficient also make it dangerous: when a tool makes it easy to skip the work of understanding, the brain, true to form, will often take the deal.

The healthy question “what options should I consider?” is supplanted by “what should I do?” Students, especially in courses they’ve been “forced” to take, can upload a course reading (or direct AI to an online version) plus a course assignment and hit “complete this assignment for me.” Lawyers, including those representing the federal Department of Justice, have repeatedly been caught submitting arguments they’ve neither written nor read, citing court decisions that never existed. Scientists are submitting AI-written research to AI-edited journals read mostly by AI bots, citing findings that never occurred in the physical world … but that become part of the next generation of AI training data.

A growing body of work shows the trade‑offs. Experiments in which people can store information externally find that they perform better in the moment, yet remember less later if they treat the tool as a substitute for learning rather than as a support. Reviews of digital technology use link heavy, habitual reliance on devices to reductions in sustained attention and self‑initiated effort, raising concerns about a slow drift from “I don’t need to think here” to “I’m not sure I can think here without my tools.” When generative AI is used as an answer machine, it accelerates that drift by offloading not only memory but whole sequences of reasoning, evaluation, and revision.

If the first lobotomy was physical and irreversible, the new risk is a kind of voluntary, distributed cognitive atrophy: a gradual outsourcing of curiosity, interpretation, and judgment until our own muscles for those tasks weaken from disuse. That is not inevitable, but it is the path of least resistance.

Three New Year’s rules for using AI

To resist that path, you might adopt three simple rules for the year ahead:

  1. Never accept the first answer: Treat every AI output as a draft or provocation, not a conclusion. Ask for counterarguments (“I believe the US stock market is historically overvalued and prone to catastrophic collapse; what are the contrary arguments best supported by credible sources?”), alternative framings, and missing objections, and then decide which are actually persuasive.
  2. Use it to amplify, not replace, your reasoning. Ask a dumb question, get a dumb answer, so don’t ask dumb questions. Instead, imagine yourself working with a partner (I tell my students to picture an over-eager intern on their first day) who does good work if and only if they fully understand what’s up and what you need. Use a three-part approach to AI collaboration: (1) Explain what challenge you’re facing – whether it’s a vegan friend coming to a barbecue or a family budget that’s crushed under the weight of energy costs. Don’t just name the challenge; explain it. The richer the explanation, the better the prospect of a sensible rejoinder. (2) Explain what help you’re looking for – “I need three options, rank-ordered from least to most costly, with verifiable sourcing that you share” – rather than vaguely requesting “a fix.” (3) Empower rejoinder: ask the AI “What else do you need to know? What questions do you have? What factors haven’t I considered?” rather than assuming that you’ve been clear, complete, encyclopedic. (A rough sketch of this three-part prompt follows the list.)
  3. Force yourself to write and revise without it, at least some of the time: deliberately set aside projects or phases (first passes at an argument, key analytic moves) where you think on paper without machine assistance, so that the skills of structuring, connecting, and clarifying ideas remain practiced rather than vestigial.
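
For readers who want to see the skeleton of that three-part prompt, here is a rough sketch in Python. It assumes the openai package and an API key in your environment; the model name and the energy-budget details are illustrative stand-ins rather than recommendations.

    # A rough, illustrative sketch of the three-part prompt.
    # Assumes the "openai" package is installed and OPENAI_API_KEY is set;
    # the model name and the budget details below are hypothetical examples.
    from openai import OpenAI

    client = OpenAI()

    # (1) Explain the challenge, not just its name.
    challenge = (
        "Our family budget is being crushed by energy costs: electric heat, "
        "a drafty 1960s house, and a utility bill that has doubled in two years."
    )

    # (2) Explain exactly what help you want.
    ask = (
        "Give me three options, rank-ordered from least to most costly, "
        "with verifiable sourcing that you share."
    )

    # (3) Empower rejoinder before the model answers.
    rejoinder = (
        "Before you answer: what else do you need to know? What questions do "
        "you have? What factors haven't I considered?"
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute whatever model you actually use
        messages=[{"role": "user", "content": f"{challenge}\n\n{ask}\n\n{rejoinder}"}],
    )
    print(response.choices[0].message.content)

The code only makes the structure visible; the same three paragraphs, pasted by hand into any chat window, do the same work.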

These are not purity tests. They are reminders that your mind, like your body, adapts to the demands placed on it; tools that constantly invite under-use will reshape the user.

Your alma mater’s lobotomy kit: branding the “AI-ready” graduate

If individual users are tempted to let AI think for them, institutions may be even more tempted to let AI solve problems they have never been willing to face directly. Universities embrace fads with a ferocious passion that would appall most 13-year-old girls. In the 30 years since I acquired the sweater I’m wearing as I write this, universities have embraced more “revolutions” cooked up by marketers than I can count:

“Flipped classrooms” promised, in the early 2000s, to turn professors from “the sage on the stage” into “the guide on the side.”

An embrace of “learning styles,” as in “I’m a visual learner,” became common just after the start of the 21st century despite the utter absence of evidence that they, well, exist. (Here’s one hint about the evidence: researchers have identified no fewer than 71 different “learning styles” in the higher ed literature.)

MOOCs, massive open online courses, were heralded in the early 2010s as an existential disruptor of higher education, with universities racing to launch courses and partnerships that promised global reach but ultimately produced modest completion rates, limited revenue, and a quiet collapse into “the institutional sediment.”

“Green campus” campaigns packaged sustainability as a marketing asset long before many institutions made costly, less visible changes in energy systems or land use.

Laptop‑for‑all programs and “classroom of the future” initiatives similarly promised transformation, yet observational work has found that students spend a substantial share of in‑class device time on non‑course websites, with corresponding hits to attention and performance.

Universal design for learning, initially framed as a deep pedagogical shift “that anticipates student differences rather than reacting to them,” is supported by … hmm, let’s call it “modest” evidence of effectiveness, and is often implemented at the level of slogan and compliance checklist rather than as a thoroughly resourced redesign of curricula and assessment.

Almost all of these initiatives share two characteristics: the commitment to them maxed out at an inch deep, and the evidence for them was mostly marketing hype. (To be clear, environmentally sustainable practices: good; big green banners declaring The Center for Sustainability Initiatives: bad.)

Universities are rushing to brand themselves as “AI‑forward,” “AI‑enhanced,” or “AI‑powered,” often with vague promises of personalization, efficiency, and “innovation at scale.” The University of Florida brands itself as “the nation’s first AI university” and promises “AI everywhere”; the Cal State system bills itself as the “Nation’s first and largest AI-empowered university system”; the Ohio State University promises to make every student “bilingual” in their major and in AI applications; the Council of Independent Colleges runs a program literally titled “AI Ready” to boost AI adoption; and a bunch of schools have taken to building and deploying their own chatbots.

In many cases, the technology is a thin layer over unchanged structures: the same classes, the same incentive systems, the same support gaps, now described in more futuristic language.

The risk in the current wave is that AI is framed as a painless cure‑all for deep structural issues: chronic underfunding, over-reliance on contingent labor, escalating student needs, political interference, and weak advising and mentoring systems. The more AI is sold as a substitute for time, attention, and genuine human relationships, the closer it comes to a digital lobotomy: a way to quiet disorder without addressing its sources.

Three rules for intelligent giving

If you are a donor, alumnus, or foundation considering support for “AI in education” this year, a few tests may help:

  1. Follow the labor: Prefer projects that invest in faculty time, advising capacity, and student support – using AI as a tool inside those relationships – over projects that promise savings by replacing human contact with automated nudges and chatbots.
  2. Demand real evaluation, not dashboards: Insist on evidence that proposed AI initiatives actually improve learning, retention, or equity, and treat glossy dashboards and prediction scores as marketing, not proof.
  3. Ask what happens if the fad fades – the sustainability challenge: Support efforts that build durable skills and infrastructure – data literacy, transparent pedagogy, open materials – rather than brittle dependency on some quickly assembled office or newly hired administrator, both of which will be quietly abandoned when the next wave of hype arrives. (If they say “that will never happen!” ask them for a report on the state of diversity and inclusion efforts on campus. An awkward silence will follow.)

The question, always, is whether AI is being used to deepen the slow work of teaching and learning, or to avoid it.

Your portfolio’s lobotomy: the invisible intrusion

The pattern is not confined to campuses. Recent industry surveys suggest that a large majority of investment advisors—on the order of 90 percent—are adopting AI tools for research, portfolio construction, and client communication. Some of this use is healthy: automating routine screening, flagging anomalies, and stress‑testing portfolios under different scenarios. But the same temptations that haunt individuals and universities appear here as well: overconfidence in opaque models, pressure to appear “cutting‑edge,” and a willingness to let black‑box systems propose or even select portfolios that few humans fully understand.

If you want to avoid underwriting lobotomies in your own financial life, it is worth asking your advisors simple, concrete questions:

  1. Which parts of your process are automated, and why? Where does human judgment overrule the model, and on what basis?
  2. How will you know if the AI‑driven system is failing, and who is accountable when it does?

The goal is not to reject algorithmic help, but to refuse the story that it absolves anyone of thinking. MFO conducted a test of AI-generated investment advice. The results are instructive, and we report them separately in “What five AIs told me about 2026’s best investment.”

Technologies that deserve trust tend to make experts more careful, not less. They reward patience. They sharpen judgment. They do not promise to remove the hard parts of thinking, only to support them.

Lobotomy promised relief by subtraction. AI sometimes makes the same offer.

So a modest resolution for the year ahead: don’t underwrite lobotomies. Resist tools—personal or institutional—that trade understanding for efficiency, judgment for fluency, or struggle for speed. The hardest intellectual work has always been inefficient. That inefficiency is not a flaw. It is the price of having a mind worth trusting.


About David Snowball

David Snowball, PhD (Massachusetts). Cofounder, lead writer. David is a Professor of Communication Studies at Augustana College, Rock Island, Illinois, a nationally-recognized college of the liberal arts and sciences, founded in 1860. For a quarter century, David competed in academic debate and coached college debate teams to over 1500 individual victories and 50 tournament championships. When he retired from that research-intensive endeavor, his interest turned to researching fund investing and fund communication strategies. He served as the closing moderator of Brill’s Mutual Funds Interactive (a Forbes “Best of the Web” site), was the Senior Fund Analyst at FundAlarm and author of over 120 fund profiles.