Now that NaNo is dead, can we admit it should never have lived?

For the uninitiated (and I am sorry to be the one to initiate you), NaNoWriMo, or National Novel Writing Month, is an online competition festival community event in which people who want to be people who have written a novel put fifty thousand words into a file, then have that file word-counted in order to unlock a few free trials and brand collaborations. Lately, the real event of NaNoWriMo has been a series of annual scandals: most recently, a brand deal with a scam "publisher", a truly bewildering diaper fetishist grooming scandal, and now a widely panned (non-)"position on Artificial Intelligence" that informs us that the organization's leadership figures "absolutely do not condemn AI."

This position statement, which begins by insisting that it does not take a position before proceeding to take a position, contains all the usual 2014 tumblr shibboleths, decrying the "categorical condemnation of Artificial Intelligence" as "classist and ableist," and "[tied] to questions about privilege." Twitter's remaining literate denizens have spent the last little while hollowing out these arguments, but while there is no point beating a dead horse, there is knowledge to be gained in dissecting one. With that in mind, I offer a reading of this post and its companion.

What is "What is NaNoWriMo's position on Artificial Intelligence (AI)?"?

NaNoWriMo does not explicitly support any specific approach to writing, nor does it explicitly condemn any approach, including the use of AI.

The authors ask us to accept our first undefended premise: that "the use of AI" systems (in general, unconditioned by use case) is an "approach to writing." Here is a critique of that assumption, argued beautifully by someone who has an approach to writing that does not (unless Mr. Chiang is cruelly deceiving us) include the use of AI systems. Under that premise are a couple of key assumptions that should be highlighted here: first, that what a large language model does should be considered writing; second, that a writer who "approaches" writing with words not their own should be considered to be writing. Dr. Bender gave the critique of AI a priceless gift by coining the phrase "stochastic parrot," but it is unfortunate that the success of this delightful turn of phrase has overshadowed what I consider to be the key position in response to the presumption that a large language model "writes":

Text generated by an LM is not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind. It can’t have been, because the training data never included sharing thoughts with a listener, nor does the machine have the ability to do that [...] Contrary to how it may seem when we observe its output, an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot (Bender et al., 2021, p. 616-7).

Consider a writer who, having kept a zettelkasten her whole life, decides to overturn the box onto the table, decreeing that any cards left facing up will be arranged into her magnum opus, and any cards facing down will be cast into a fire. Did the box do the writing? Did the cards? Did gravity, or the circulation of air in the room, or the height of the table? Of course not. The writer did. The difference here is that the writer prepared every word on every card of that zettelkasten, whether her own words or (properly attributed) quotations from others. The "behavior," and I use that word recklessly here, of an LLM is closer to that of a burglar who specializes in robbing writers of their notes, then sells back access to a pseudorandomly selected subset of those notes at a price, per paragraph, of one bottle of water snatched out of the hands of a person dying of thirst.

I am, however, fucking up my categories.

Nobody here is talking about making art, not really. NaNoWriMo is a pedagogical exercise; it is (or was intended to be) a course in the self-discipline required for sustained drafting. I am not the only critic to find Chiang stronger on pedagogy than on art, and this frame helps to display the sinews of NaNo at the greatest tension possible, because what NaNo teaches is extremely silly (but more on this later).

NaNoWriMo's mission is to "provide the structure, community, and encouragement to help people use their voices, achieve creative goals, and build new worlds—on and off the page." We fulfill our mission by supporting the humans doing the writing. Please see this related post that speaks to our overall position on nondiscrimination with respect to approaches to creativity, writer's resources, and personal choice.

"Provide the structure, community, and encouragement to help people use their voices" &c. is a phrase which here means "offer an incentive to produce fifty thousand words of text." Historically, the incentive has been a large stack of coupons for various items that writers of fiction allegedly need (see the brand deal controversy linked above). Other than that, the most consistent reward seems to be the smug satisfaction of calling oneself a novelist. That the reward is one of satisfaction rather than accomplishment is illustrated by the existence of NaNoGenMo, National Novel Generation Month, a decade-old event showcasing the kind of texts that, as of three days ago, NaNoWriMo appears (despite insisting otherwise) to endorse. "The 'novel'," per the organizer, "is defined however you want. It could be 50,000 repetitions of the word "meow". It could literally grab a random novel from Project Gutenberg. It doesn't matter, as long as it's 50k+ words." This is a criterion shared by the process used by NaNoWriMo to screen for winning submissions. This is the only criterion they could use; anything more stringent would make NaNoWriMo a different exercise altogether (perhaps even - horror of horrors - a writing contest) and anything less stringent, well, that's Camp NaNoWriMo.

We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege.

This is here because the contemporary argument must always be an identity-political argument. The defense of large language models, like the defense of food delivery gig work, must wear the armor of progress. This is vital in the present case, because the left (the audience to whom these sorts of appeals are always addressed, and with whom they so frequently fall flat) has broadly adopted a coherent view on AI systems as they relate to various forms of oppression, namely that it's bad. The well-meaning liberals at NaNoWriMo offer a brief bulleted list of responses; we can take them in turn.

Classism. Not all writers have the financial ability to hire humans to help at certain phases of their writing. For some writers, the decision to use AI is a practical, not an ideological, one. The financial ability to engage a human for feedback and review assumes a level of privilege that not all community members possess.

We can reasonably tell we are not dealing with Marxists whenever we see "classism" in place of "class." We can be certain of it when class is framed as a function of "financial ability," a curious turn of phrase that suggests that the proletarian is oppressed not because he does not own the means of production, but because he cannot afford to buy them. This is the argument, not of someone who believes AI (the generic concept, not any one AI system) should be "democratized," but of one who believes it to be "democratizing" writing out from under the oppressive yoke of having to write, or edit, or typeset. This operates from the assumption that "to hire humans to help" is a necessary precondition, not of publishing, but of writing itself. This position only makes sense when one remembers one of the rewards (perhaps the key reward) of NaNoWriMo: the satisfaction of becoming and naming oneself as a novelist. Writing doesn't involve hiring a coterie of help, but being a novelist, of the sort one imagines when one fantasizes about becoming one, demands it.

Now, instead of paying one of the dwindling population of proofreaders to scour your manuscript for the sort of glaring errors that creep in after spending a month maniacally spitting text into a file, you can pay OpenAI to rent a few moments of computing time to have a machine spit back what looks enough like edited text to prolong the novelist fantasy, which, because of the manner in which LLMs work, will no longer be your text. If "the financial ability" to "engage a human" editor "assumes a level of privilege" (a phrase that appears in this sentence because we've already read "financial ability" twice in this paragraph, and goodness forbid we rewrite the sentence to better say what it means), it could be because editing is labor and "humans" have this annoying quirk where we like to be paid enough for our labor to survive.

Ableism. Not all brains have same abilities and not all writers function at the same level of education or proficiency in the language in which they are writing. Some brains and ability levels require outside help or accommodations to achieve certain goals. The notion that all writers "should" be able to perform certain functions independently or is a position that we disagree with wholeheartedly. There is a wealth of reasons why individuals can't "see" the issues in their writing without help.

(Readers expect a writer to outline the standpoint from which she writes on questions of identity politics, so for the sake of avoiding getting called "abled" for holding the views I'm about to express, please note that I'm autistic. I'm the kind of person about (and without) whom a paragraph like this is written, and my response below is indelibly colored by that. I hope this is sufficient; I know from experience that it isn't.)

I tire of ableism's double life as a concept - by day, a useful term for understanding the oppression of disabled people; by night, a universal tool for escaping one's duty of solidarity toward others. It is in the cynical latter sense that we see this concept deployed here. The very idea that "some brains and ability levels require outside help" - crucially, not some people - in order to "achieve certain goals" is deployed as a justification to recontextualize opposition to AI systems as something as heinously and absurdly bigoted as opposition to curb cuts or automatic doors. "Can't you see, some people need-" need what? To write, almost certainly, and there exist accessible tools for disabled people of many varieties to do that. To be a novelist? No. The need for an elevated status should not be indulged, but addressed as a psychological wound.

If AI systems are capable of writing, it might make sense to consider whether they can be used to help others to write. I reach the conclusion that LLMs are not an accessibility tool because I start by accepting the position of the AI critics that, based on an analysis of the operations of an LLM, what it does is not writing as people are understood to do it. However, even if I assume that LLMs write, that is to say that they not only produce something that resembles a comprehensible text but do so in a way that can be generally agreed to be called "writing," that does not satisfactorily warrant the claim that these systems address the ableist underpinnings of writing; it would just mean that instead of a person writing, a machine does it. Accepting the position that an AI system can write - the position necessary to adopt in order for its output to be considered writing - causes the argument that LLM text must be acceptable to NaNoWriMo as an answer to ableism to fall apart; if the software can write, then it is writing, and not the disabled person presumably being "helped" by it. If it can't write, then what it produces isn't writing, and the user has not written a draft of a novel, but has caused a piece of software to produce an output that superficially resembles one. In either case, a disabled person is not being helped to access the process of writing, but being asked to accept the appearance of having written as a substitute. I am content to say I would find this unsatisfying; if, as argued above, NaNo is about the satisfaction of having written, then disabled people are presumed to accept dissatisfaction.

I keep using the phrase "disabled people," and this is because the authors of this statement chose not to. I am not a "brain," or an "ability level," or some abstract concept of a "writer." I am a person, but in this paragraph we're denied even the consolation prize of being called "human." This is a problem that arises in arguments about ableism, like this one, that address ability in the abstract without acknowledging that ableism is something that is done to, not simply done. (This is perhaps a cousin of my problem with the word "dehumanization," which I may address another time.)

General Access Issues. All of these considerations exist within a larger system in which writers don't always have equal access to resources along the chain. For example, underrepresented minorities are less likely to be offered traditional publishing contracts, which places some, by default, into the indie author space, which inequitably creates upfront cost burdens that authors who do not suffer from systemic discrimination may have to incur.

This is here because the kind of person who writes a post like this believes lists need to have three items as some kind of iron law of list-making. You can tell that this is an afterthought because what little specificity existed at the beginning of the list is long gone. "Underrepresented minorities" (again, not people!) get siloed into self-publishing, because of a lack of "equal access to resources," whichever resources those are. "All" (two) "of these considerations exist within a larger system," but that system remains anonymous for its protection. The "indie author space... inequitably creates upfront cost burdens that authors who do not suffer from systemic discrimination may have to incur," but what are... actually hang on a second, "authors who do not suffer" "may have to incur upfront cost burdens"? Let's speculate about this clause for a moment, because it has either been negated too much or not enough. I am struck with a hunch, one that I cannot prove, that whoever wrote this wrote the clause with two negatives (enough to make it mean something that coheres with the rest of the sentence) but responded to an in-editor grammar warning about double negatives by dropping one of them but not both. The influence of software tools, not unlike the ones being defended here, has caused someone to pen a sentence that contradicts itself, in a post that just made the claim that "there is a wealth of reasons why individuals can't 'see' the issues in their writing without help"! In sum, "marginalized people need help getting their writing taken seriously, which is why they should use tools that will, as we illustrate below, actively undermine them."

Beyond that, we see value in sharing resources and information about AI and any emerging technology, issue, or discussion that is relevant to the writing community as a whole.

"We're just sharing information," they say, after saying all those mean AI ethicists just hate the poor and the disabled!

It's healthy for writers to be curious about what's new and forthcoming, and what might impact their career space or their pursuit of the craft.

Impact, perhaps, in the way that a jet impacts a skyscraper, or a cruise ship an iceberg.

Our events with a connection to AI have been extremely well-attended, further proof that this programming is serving Wrimos who want to know more.

Setting aside the bizarre demonym the community seems to have accepted, attendance figures at events are the sort of evidence one uses to craft a successful grant application (maybe!) but probably aren't, themselves, proof of anything. (I'm reminded of a question I asked a presenter at PCA in 2015, wanting to know why she took, as her object of analysis to make a claim about "nerd culture," The Big Bang Theory, only to hear that "it's the most viewed show on network television," as if I were deciding whether to give Jim Parsons a raise.) But again, perhaps "Wrimos who want to know more" want that because large language model vendors tend not to explain themselves to the laity.

For all of those reasons, we absolutely do not condemn AI, and we recognize and respect writers who believe that AI tools are right for them. We recognize that some members of our community stand staunchly against AI for themselves, and that's perfectly fine. As individuals, we have the freedom to make our own decisions.

"We can understand why some writers are elitist, ableist scumbags; they're free individuals who possess the right to be elitist, ableist scumbags! However, we will take the position we just outlined as being the counter-scumbag position, while also claiming to be utterly agnostic on the matter."

I have left off a portion of the post, because it did not exist when I started writing. I'm reproducing it here; in the post itself it appears after the first paragraph, and the rest of the post is unaltered:

Note: we have edited this post by adding this paragraph to reflect our acknowledgment that there are bad actors in the AI space who are doing harm to writers and who are acting unethically. We want to make clear that, though we find the categorical condemnation for AI to be problematic for the reasons stated below, we are troubled by situational abuse of AI, and that certain situational abuses clearly conflict with our values. We also want to make clear that AI is a large umbrella technology and that the size and complexity of that category (which includes both non-generative and generative AI, among other uses) contributes to our belief that it is simply too big to categorically endorse or not endorse.

This is the committee who wrote this deciding who is allowed to have a problem with them (published authors who have intellectual property qualms about LLMs, which can be siloed off and treated as "situational") and who isn't (the AI ethics crowd who see those qualms as expressions of the immanent plagiarism of LLMs, and besides, categorically reject the notion that LLMs "write"). This isn't a reevaluation; it's damage control.

"this related post" and the fannish disposition

"I can't believe NaNoWriMo is endorsing a person/company who does [blank]!" is the sort of title one gives to a FAQ that is about to be linked under a million WONTFIX bugs. It is a cry of contemptuous dismay that people would dare question why an organization would make a brand deal with a predatory "publisher," or let a forum moderator funnel teens into their kink discord, or endorse the very class of software that stands to endanger the livelihoods of people who may otherwise have had reasonable writing careers - careers that could have started with their NaNo manuscript, but will now be crowded out by a sea of $3.99 synthesized fairy smut. This post merits more contempt than analysis.

NaNoWriMo is not in the business of telling writers how to (or how not to) write, taking a position on what approaches to writing are legitimate vs. illegitimate, or placing value judgments on personal decisions that are a matter of free choice.

The libertarians called and want their position on weed back. This is the sort of refusal to argue that, as we've seen in the last post, can only signal that we're about to encounter some value judgments on personal decisions, including those that are a matter of free choice!

Opinions about "correct" ways to write or "right" vs. "wrong" kinds of writers should not be brought into our spaces.

This is an opinion about correct ways to write, and about which writers are the right or wrong kind of writers to be brought into a given space. This is the rhetoric of fandom, of "don't like? don't read," of non-judgmental spaces that become courtrooms the moment some brave souls can no longer restrain themselves when confronted with slop.

Our priority is creating a welcoming environment for all writers. There is no place for that kind of virtue signaling within NaNoWriMo.

But there is a place for this kind!

This position extends to our partnerships with sponsors and affiliates, with authors who we invite to write pep talks or serve as camp counselors, and to people who we invite to participate in events.

Including scam artists and sex creeps!

NaNoWriMo is a global community of more than 550,000 writers who we fully expect to have different values, different needs, different preferences, and different curiosities.

Wait, they've been around for how long and have only cracked half a million? Since 1999? Is that cumulative or concurrent? Never mind that - if we've got that many people with 'different values,' &c., it's possible that some of those people value things like human creativity, or need things like a forum that's not a hunting ground for teens, or have preferences for good writing over bad, or curiosities about why an organization that just had a scandal where it partnered with a publishing scam is really bullish about AI?

Because Wrimos are not a monolith, we don't cater to a specific author archetyope or ideology.

Except individualism, and the belief that having strong opinions that disagree with ours, such as "it's spelled 'archetype'," is bad and must be avoided in the name of inclusion!

We take this position firmly, and we take it seriously. NaNoWriMo is a 25-year-old organization with staff that has been in the writing community for a very long time.

I'm not sure why this sentence exists and we should behave as if it doesn't.

We've seen tremendous harm done over the years by writers who choose to pick at others' methods.

These people would last half an hour in a workshop. To be fair, so would I, which is why I majored in lit and not creative writing! One would think a "staff that has been in the writing community for a very long time" might have observed that writers, as a population, tend to value and even enjoy critique? (Fan communities, on the other hand...)

We've seen indie authors delegitimized by traditionally published authors, highbrow literary types look down their noses at romance authors, fanfiction writers shamed for everything from plagiarism to lack of originality; the list goes on.

There it is. There's the gripe. What do "indie authors" (they mean "self-published"), "romance authors," and "fanfiction writers" have in common? A widespread perception that they're looked down upon by "highbrow literary types," despite all being fields subject to a great deal of scholarly attention! (That PCA anecdote from earlier happened because I was there to give a paper on Sherlock Holmes fanfiction and its depictions of autism, back when I was a hack.) This isn't tilting at a windmill; it's tilting at a Dunkin Donuts that was built where a windmill last stood in 2006! Like other right-wing ideologues, the fandom fandom cannot accept that they've won, that the whole world exists to cater to them with endless plastic tchotchkes and feature-length commercials for same, because sometimes some fucking snob makes fun of their mediocre writing and that means that actually the literature elitists are still exercising their hegemony over university course catalogs and the "writing community" on Twitter!

Not only is this sort of shaming unnecessary and often mean. It's proven itself to be short-sighted.

Hey, uh, pal? I think you dropped a comma. I'm sorry for sounding like one of those "highbrow literary types" who, when she opens up a blog post, expects to be able to, y'know, parse it, but you're really not helping yourself here.

Some of the most shamed groups within the writing community are also the most successful (e.g., Romance is one of the highest-grossing genres; an increasing body of data shows that indie authors do better than trad-pub authors, and some of the biggest names in publishing started out in fanfic).

"Successful" at what; "do better" how; "biggest names" why? See, success is when you make money. You should aspire to having so much money that you can spend the rest of your life in a castle complaining about transsexuals! Craft? Fuck your lecture on craft; The Big Bang Theory is on!

NaNoWriMo's mission is to "provide the structure, community, and encouragement to help people use their voices, achieve creative goals, and build new worlds—on and off the page." We fulfill our mission by supporting the humans doing the writing. That means not judging them and not allowing judgmental dynamics to enter into our spaces.

We've already been here, we've already done this, and we've already bought the year's subscription to some Grammarly knockoff that's a couple weeks away from being announced as a 2024 NaNoWriMo brand partner.

What do Wrimos learn about writing?

Fuck all. Try again.

What do Wrimos learn about being writers?

There we go. Before we went on this masochistic hike together, I said NaNoWriMo is pedagogical, that it's meant to (or heavily insists that it's meant to) teach the kind of discipline necessary to quickly produce a draft that can be edited and published. It already excludes any other goals; it's actively hostile to anything that can impede the production of the draft, the extraction of each of those fifty thousand words from every spare second in November. After all, if people don't finish, they can't access all those coupons! They'll have to pay full price for the accouterments of being-an-author, and if they have to pay full price, they might reconsider!

NaNoWriMo is a course in writing the way a Tom Vu seminar is a course in real estate. The object of NaNoWriMo is to unlock deals on products that, when purchased, will mark the participant as an author, a novelist, even! The writing is there to provide some kind of basis in reality for this act of self-fashioning; the draft is as much a part of the personal front as the products NaNo is about to sell you, as the meet-ups with other people who can help you reinforce your belief, as the laptop open in public conspicuously displaying Scrivener, because you're too serious to be using Word. If success as a writer is about making money, as the people who run this operation seem to believe, then success at running this operation is about making customers who, unfortunately for the rest of us, will now forever be "authors." Of course they're okay with synthesized text, and of course they are going to argue that your having a problem with it makes you a classist, an ableist, and a snob! This isn't for you. You'd write anyway, without buying anything besides - how pretentious of you - paper and pens.

NaNoWriMo does not teach anything about how to write that can't (or for that matter, can) be imparted in a tweet about disciplined creative work, like this one right here. He just freed up all your nights in November. Now you can spend them writing.