<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[c(dot)tel]]></title><description><![CDATA[our machines, ourselves, and our problems]]></description><link>https://cariglino.tel/</link><image><url>https://cariglino.tel/favicon.png</url><title>c(dot)tel</title><link>https://cariglino.tel/</link></image><generator>Ghost 5.88</generator><lastBuildDate>Fri, 03 Apr 2026 15:01:06 GMT</lastBuildDate><atom:link href="https://cariglino.tel/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[A Dehuman Catechism]]></title><description><![CDATA[<p><em>The following is a transcript of an interview conducted by Bek Hamelin, a particularly talented young student sent my way by one of my treasured fellow travelers, Dr. Rua Williams. I haven&apos;t edited these terribly much, and as I post this, I&apos;m running on about two</em></p>]]></description><link>https://cariglino.tel/a-dehuman-catechism/</link><guid isPermaLink="false">67871cf5f7f68c46c21f83c2</guid><dc:creator><![CDATA[Em Cariglino]]></dc:creator><pubDate>Wed, 15 Jan 2025 02:45:23 GMT</pubDate><content:encoded><![CDATA[<p><em>The following is a transcript of an interview conducted by Bek Hamelin, a particularly talented young student sent my way by one of my treasured fellow travelers, Dr. Rua Williams. I haven&apos;t edited these terribly much, and as I post this, I&apos;m running on about two hours of sleep, so please bear with me if it takes a couple of days for me to iron out some errors/omissions. My many thanks to Bek for the opportunity to think with them about these matters!</em></p><p>BH: What is a cyborg to you? 
</p><p>EC: We start from the lay understanding of the cyborg, namely the idea of an organic-technological being (note that I&#x2019;m not limiting this to humans; something I&#x2019;ve been rolling around in my head lately is that those videos of dogs communicating by using a button board express some kind of cyborgish caninity). I construe &#x201C;technological&#x201D; in the broadest possible sense, the sense familiar to, say, readers of Foucault (etc.): to refer to <em>techniques</em> by which power is exercised <em>upon</em> the body. Language is a technology, urban planning is a technology, laws are technologies, and I won&#x2019;t belabor the point further. All of these things <em>relate</em> to our organic bodies (such as they are) and other such bodies.&#xA0;</p><p><em>But aren&#x2019;t we all cyborgs, then? </em>Oh, good heavens, no. I&#x2019;m unprepared to go so far as to echo Cy. Jillian Weise, and make &#x201C;depend[ing] on machines to breathe, stay alive, talk, walk, hear or hold a magazine&#x201D; a necessary criterion to differentiate between cyborg and tryborg (Weise, 2018), but I agree with the basic need to differentiate. Indeed, an earlier intervention by Weise comes closer to my own approach:&#xA0;</p><blockquote>Tryborgs <em>want to be cyborgs</em>. This is why they go to bed with Fitbit, brag about gigabit and buy kit with Bitcoin. They have an affinity for the it or the Id. But even when they find a mate by swiping right, and then tell that mate how many steps they walked since Sunday, still they are not cyborgs. To mistake them for cyborgs is to confuse the figurative with the literal. [...] Tryborgs <em>rely on the nonexistence of actual cyborgs for their bread and butter</em>. If cyborgs exist, how will the tryborg remain relevant? Wouldn&#x2019;t we just ask the cyborg for her opinion? <em>The opinions of cyborgs are conspicuously absent from the expert panels, the tech leadership conferences and the advisory boards</em>. 
The erasure is not news to us. We have been deleted for centuries, and in the movies, you will often see us go on a long, fruitful journey, only to delete ourselves in the end. (Weise, 2016, emphasis mine).&#xA0;&#xA0;</blockquote><p>The difference between a cyborg and a tryborg/other non-cyborg is, in my analysis, a matter of political commitments and marginalization. A strictly lay understanding of cyborghood (I&#x2019;ve been using the German coinage <em>cyborgkeit</em> in my own notes to differentiate my understanding from others; more on that later) could encompass the billionaire who has used his son as a &#x201C;blood boy,&#x201D; only to replace him later with an exogenous supply of albumin, or the attorney supplementing his perfectly capable paralegal with a large language model (to his detriment), or the seller of AI-generated images as works of art. All of these people could easily argue that they&#x2019;re organic-technological hybrids, to varying degrees (see AI-enhanced &#x201C;second brain&#x201D; notetaking techniques). What makes these people tryborgs is not that none of them would die without their dubious augmentations, but that, for all of them, their engagement with the organo-technological unity is devastatingly earnest. This sort of tryborg can only start from a naively positive understanding of their relationship to technology. They are operating in nauseous genuineness; the cyborg is doing something else.</p><p>At the very beginning (the first line!) of &#x201C;A Cyborg Manifesto,&#x201D; Haraway sets out the project of constructing &#x201C;an ironic dream of a common language for women in the integrated circuit,&#x201D; &#x201C;an ironic political myth faithful to feminism, socialism, and materialism,&#x201D; constructed around the &#x201C;image of the cyborg&#x201D; (Haraway, 1985). It is a shame she doesn&#x2019;t spend more time on irony as a component of her cyborg; I find it useful as a differentiator between cy- and tryborgs. 
The cyborg differentiates themself from the tryborg insofar as their identification is not cloyingly techno-optimistic, or bloodlessly depoliticized. Indeed, the cyborg is often <em>vexed</em> by their augmentations (note, I&#x2019;ve not been saying &#x201C;enhancements&#x201D;!). &#x201C;I&#x2019;m a cyborg,&#x201D; is a statement that is most true when it drips with a hint of dissatisfaction, because that dissatisfaction is an index of the necessary political commitments that divide the cyborgs from the tryborgs.&#xA0;&#xA0;<br></p><p>BH: Do you feel like a cyborg?</p><p>EC: No? Yes? I think &#x201C;like&#x201D; is a useful modifier here. I don&#x2019;t believe myself to <em>be </em>a cyborg, in either the traditional sense or the sense elaborated above. I&#x2019;ll develop that more when we get to the dehuman.</p><p>BH: How are you a cyborg? Can you describe it?</p><p>EC: I am not. See below for more.<br><br>BH: Is your cyborg-ness for surviving or thriving? Functionality or living better? Or something else?<br><br>EC: Something else, easily. Again, we&#x2019;ll get to that when we get to the dehuman.</p><p>BH: Do you know of some ways that cyborgs use technology in language? Or ways you have encountered or talked about? What are your thoughts on this arena?<br><br>EC: I&#x2019;ll admit this one is a touch out of my wheelhouse; I&#x2019;m not paying terribly close attention to the conduct of others who have adopted the cyborg label. Here, then, and only here, I follow Wittgenstein: &#x201C;Whereof one cannot speak, thereof one must be silent.&#x201D;</p><p>BH: Is there a connection between &#x2018;the dehuman&#x2019; and &#x2018;the cyborg&#x2019;? How are they related? How are they different? 
(I moved this one up because my answer here will help with my answer to the call center question -e.c.)<br><br>EC: I&#x2019;ve been rolling this one around in my head a bit, because up to this point I haven&#x2019;t come up with a good way of describing this difference, much to my embarrassment. Let&#x2019;s try this: the dehuman is to the cyborg what forced rhubarb is to ordinary rhubarb. Dehumans are produced when the oppressor needs the cyborg&#x2019;s aptitude for being-HCI, without the political commitments, the irony, or the autonomy the cyborg typically enjoys. Dehumans are cyborgs (or non-cyborgs) who are enframed within a productive situation; their habitus is one oriented around productivity (in the normative sense, under capital, in the cases I examine, under anglosphere service industry conditions). I think, vain as it must seem, my own words from earlier might be useful here:</p><p>The &quot;dehuman&quot; is the product of the process (the <em>flow</em> if you&apos;re a schizoanalyst or a business automation developer) of dehumanization carried to its climax. It is a figure acted upon until it cannot act, or until any action it can take is rendered always and already futile. The dehuman is utilized, in the proper sense that it has been made useful, and <em>then</em> it is used. When a dehuman is used, it is often to bridge the chasm between the properly human and the properly mechanical; it is the material to build a bridge over the uncanny valley. The dehuman is the customer service representative <em>par excellence</em>, the perfect interface of flesh-thought and machine-computation, torn down and rebuilt to fit any purpose, any discipline, any market. 
Most crucially, the dehuman is <em>made</em>, not simply emergent; to name oneself dehuman is to say &quot;somebody <em>did</em> this <em>to</em> me, and now, like Iago, &apos;I am not what I am&apos;&quot; (I.i.65) (Cariglino, 2024).&#xA0;</p><p>In my answer to your next question, I&#x2019;d like to illustrate some of these processes at work.</p><p>BH:&#xA0;In your piece &#x201C;pinning the dehuman&#x201D;, you discuss this concept of becoming dehuman through talking about being a call center employee. You touch on the gendered and disabled dynamics at play, how the dehuman is &#x201C;the product of the process&#x201D; and about the machine human interface. Can you tell me a bit more about this? And this relationship of exploitation, the character built from it, and what it says about body-minds and machines?<br><br>EC: Some personal context: from 2016-2020, and again briefly in early 2024, I worked as a call center agent, first for a large telecommunications company, then for a much smaller one. These experiences were not uniformly unbearable (the stretch in 2024 was, but there are other reasons for that which I cannot yet work into this theoretical framework.) but they shared key features of dehuman-production (we could say &#x201C;dehumanization,&#x201D; sure, but I want to highlight the deliberate, rational <em>process</em> by which dehumans are <em>produced</em> as a resource).&#xA0;</p><p>Perhaps the key element here is a general awareness that one&#x2019;s role is to function as an interface between various systems, both computational and social, and a caller who does not have access to those systems. 
There&#x2019;s a fascinating old article in <em>Mother Jones</em> about an early iteration of the job I used to do, entitled &#x201C;Drugged, Bugged, and Coming Unplugged,&#x201D; that I like to quote when talking to people about this subject position:</p><blockquote>In the brand-new Centralized Repair Service Bureaus that are sprouting up throughout the Bell System, the techniques of work-force control are considerably more refined. Nearly 50 repair service clerks sit in groups of four, their eyes glued to the cathode flicker of jet-black video display terminals. Gone is the howler of the Washington office. In its place, a color television displays a bar graph with the office&apos;s &quot;speed of answer&quot; record for the day. Here, the clerks have only eight seconds to answer the electronic beep signaling another customer on the line&#x2014;a 12-second speed-up over the Washington office. A supervisor rushes forward to explain the new system. What he says holds true for workers throughout the Bell System. &quot;These girls are merely an interface between the customer and the computer&quot; (Howard, 1981).&#xA0;</blockquote><p>The first of the two companies I worked for is what is called an &#x201C;incumbent local exchange carrier,&#x201D; in other words, the local wireline telephone service monopoly. Most of the ILECs in the United States (indeed, definitionally, <em>all</em> of them) started out as Bell Operating Companies, local subsidiaries of the original AT&amp;T. What Howard describes here is a working situation that, save for the introduction of computer-based phone systems, modern PCs at each workstation, and a policy that every inbound call is answered <em>automatically</em>, without the agent accepting it or not, is largely unchanged. The key portion I was struck by is the quote from the repair service supervisor, &#x201C;these girls are merely an interface,&#x201D; because it <em>drips</em> with unintended meaning. 
Let&#x2019;s close read it:</p><p>&#x201C;<em>Girls&#x201D; </em>is doing a lot of work here; from the late 19th century (after a brief period during which teenage boys were considered for the role) to the end of widespread operator assisted calling, the vast majority of telephone operators employed by local telephone companies were women (one of my managers, while I worked at the larger telco, started out as an operator in 1997; the gender dynamics were largely unaltered even then.) I return, briefly, to Haraway&#x2019;s initial statement that the cyborg ought to be a &#x201C;political myth faithful to feminism,&#x201D; to note that Haraway&#x2019;s cyborg came about with women/womanhood squarely in mind.</p><p>Of <em>course</em> this supervisor (a man, mind you) sees these girls as almost like an acoustic coupler joining the subscriber&#x2019;s landline phone to the computer running the trouble ticketing system - not just an interface in the generic sense, but a human modem! This is the relevance of &#x201C;between,&#x201D; because the role of this particular form of dehuman is to bridge a gap between the fully human telephone subscriber and the fully mechanical trouble reporting system. Traversing that gap on one&#x2019;s own, at least at the time, was still seen as a dehumanizing process (paranoia about widespread adoption of computers in government and industry was a common trope among late 20th century civil libertarians; there are some <a href="https://www.youtube.com/watch?v=yah54al6Cks" rel="noreferrer">fascinating PSAs directed by Godfrey Reggio</a> you should look up) to which full humans ought not subject themselves, and that computers were insufficiently humanistic to cross on their own. 
(One could, if one wanted, argue that one way of stating the aim of HCI research is &#x201C;narrowing the gap until the dehuman is no longer necessary, at which time, well, y&#x2019;know&#x2026;&#x201D; This is where the troubling consequences of dehuman-production start to emerge, the sort that end in ashes. We must remember there are things at stake here, perhaps more in the medium-term future than I might have expected the last time I wrote on this topic.)</p><p>&#x201C;Merely&#x201D; is also significant, in part in relation to my parenthetical immediately above; the goal state is to not <em>need</em> the dehuman at all, so her role must be reduced, made <em>mere</em>, until it is entirely unnecessary. This has been a goal, to lean on our present example, since the dawn of telephony - human operators, &#x201C;telephone girls,&#x201D; were thought of from the outset as something that should be mechanized, automated away (I can&#x2019;t place the source since I&#x2019;m away from my library at the moment, but something to the effect of &#x201C;the girlless, cussless telephone&#x201D;).</p><p>I focus primarily on contact center work in my writing, because it&#x2019;s familiar enough to, more or less, autoethnographize from memory, but I arrived at this idea of the dehuman when trying to solidify my critique of other autistic scholars&#x2019; engagements with the presumed affinity between autistic people and machines/the mechanical.&#xA0;&#xA0;</p><p>BH: &#xA0;Moving more towards cyborgs again, how do you see autistic communication as different from that of non-autistics?</p><p>EC: Everything I can think of to say at the moment is better discussed in Yergeau&#x2019;s <em>Authoring Autism</em>. I will go so far as to order you a copy when I get paid if you can&#x2019;t get it otherwise. <br><br>BH: As we are talking about larger social models and theoretical frameworks, could you elaborate on glitched, cyborg, dehuman, etc. 
You could also talk about the literal communication differences you or others experience. Or whatever you&#x2019;d like really, I&apos;m interested in it all.<br><br>Following that, in what ways does tech change or play into autistic communication in your life and/or in your studies?<br><br>[Did not answer]</p><p>BH: So I had a previous interview with Oswin Latimer, who is an autistic consultant and friend of Rua&#x2019;s as well, and they have strong convictions against behaviorism and are working to build neurodiverse language outside of it. While it is not something I initially planned to cover in my paper, I find that behaviorism is something important to talk about when looking at man-machine relationships, and I saw in your draft &#x201C;Talking Typewriters Talk Back&#x201D; that you also are strongly against behaviorism. Can you elaborate a bit on how you see behaviorism fitting into this larger conversation about technology and people?</p><p>EC: This is simple: behaviorism is a philosophical framework that has been used to justify the industrial-scale abuse of autistic people (applied <em>behavior </em>analysis), and even if it weren&#x2019;t, it&#x2019;s a reductive and dehumanizing analytic of human conduct. With no apologies to Margaret Thatcher, the behaviorist&#x2019;s world is one in which &#x201C;there is no such thing as society, only individuals&#x201D; and their behaviors.&#xA0;</p><p>Put somewhat more directly: behaviorism is the theoretical basis that makes dehuman-production possible. Whether we do it to young (but not only young!) autistic people in the so-called therapeutic process of ABA, or to the worker in the form of workplace training, the philosophy at work is behaviorism. 
It&#x2019;s important to consider it in the context of HCI because (and this is an unresearched generalization, don&#x2019;t take this as gospel) mainstream HCI development/research sees the user as a container full of behaviors to influence through the design of user interfaces (think &#x201C;nudge theory&#x201D;).&#xA0;</p><p>BH: In my readings on cyborgs in class, Joshua Earle discusses care and maintenance and what that looks like for cyborgs, especially for physical disabilities like prosthetics. Earle dives into how maintenance should be the upkeep of current tech enmeshment, rather than upgrades to new things, and how this upkeep should be accessible by the user rather than just done by the company. Do you see any sort of resemblance in your own experiences and understandings of cyborg tech?</p><p>EC: Up to this point, I have been struggling with how to integrate the ideas of care and maintenance into what I&#x2019;ve heretofore considered a pessimistic, and essentially <em>fatalistic</em>, concept of the dehuman. 
That is to say, I&#x2019;ve not yet had a chance to account for what it would mean to reverse the flow of dehuman-production, or even if such a reversal would have a result we&#x2019;d expect it to have (namely, rehumanization) or some other outcome. I&#x2019;ve been musing on this during the long stretches of off time at my new job, which itself is the nearest thing I&#x2019;ve encountered so far to a rehuman-production flow for call center dehumans in particular.&#xA0;</p><p>BH: What sorts of care or maintenance are required for the machine connections that you know?<br><br>EC: If such a thing exists in the general sense, if there&#x2019;s some kind of ur-rehuman-production flow, it likely entails a long rest period (I took most of this year off, at incredible personal cost) and reintroduction to a kind of therapeutic habitat that shares superficial features with the dehuman-production flow (similar work, using similar tools, but at a radically slower pace and under far less emotional tension). I&#x2019;m not an occupational therapist, and I&#x2019;m under the impression I&#x2019;m getting close to reinventing their wheel (or stretching it into a triangle), so I&#x2019;ll leave this thread here for now.</p>]]></content:encoded></item><item><title><![CDATA[Pinning the dehuman]]></title><description><![CDATA[<p>I&apos;m sure it&apos;s not the done thing to preemptively call dibs on concepts that I&apos;d <em>like</em> to write about, so as to preserve their novelty in amber until I can get around to them. So I&apos;m not doing that. 
What this <em>is</em></p>]]></description><link>https://cariglino.tel/pinning-the-dehuman/</link><guid isPermaLink="false">66f951c8f7f68c46c21f8187</guid><dc:creator><![CDATA[Em Cariglino]]></dc:creator><pubDate>Sun, 29 Sep 2024 20:39:30 GMT</pubDate><content:encoded><![CDATA[<p>I&apos;m sure it&apos;s not the done thing to preemptively call dibs on concepts that I&apos;d <em>like</em> to write about, so as to preserve their novelty in amber until I can get around to them. So I&apos;m not doing that. What this <em>is</em>, however, is a rough plan of work for the next several years (without dates attached) and a sketchy summary of my &quot;project&quot; such that I have one. Part of why I&apos;m writing this is that, recently, a couple of people whose work I take somewhat seriously have reached out to ask what I&apos;m up to, and I&apos;m somewhat embarrassed by my paltry answer. I&apos;d like to take another crack at this below.</p><h2 id="unfinished-business">Unfinished Business</h2><h5 id="returning-to-the-work-of-ok-moore">Returning to the work of O.K. Moore</h5><p>Almost two years ago, I presented a paper at SIGCIS, in which I made a second attempt (the first being an undergraduate term paper from <em>ten</em> years ago) to make sense of the relationship between the research done by Omar K. Moore and others using the Edison Responsive Environment, and the development of the diagnostic criteria for autism. I&apos;m not terribly thrilled with how it came out, and I owe that primarily to my desperation to get as much as possible into what I thought could be my last opportunity to discuss this work in public for some time. As I&apos;ll discuss below, I see this as part of a larger theoretical contribution that needed far longer than the twenty minutes of exposure it could get from one conference paper delivered two weeks after my father died. 
</p><h5 id="telephone-girl"><em>Telephone Girl</em></h5><p>I have neglected <em>Telephone Girl</em> for some time now, since publishing <a href="https://anewsession.com/issue2/meditations_on_a_telephone_girlhood/1" rel="noreferrer">excerpts of its introduction</a> in Cara Esten and Lo Ferris&apos;s <em>A New Session. </em>Writing about telephony has been a mildly emotionally fraught task since my rushed departure from the telecommunications industry; until recently, I&apos;ve not felt stable enough to touch this work without considering the tense year in which it was written. With enough distance from the telecom period of my life, however, I&apos;m beginning to see this work with new eyes. There is, especially in this moment of data worker exploitation and the gradual encroachment of call center management tactics on all forms of office labor, a need to take a fresh look at the historical influences on this oppression and the gender dynamics in which it takes place.</p><h2 id="the-development-of-the-dehuman">The development of the dehuman</h2><p>Both of the above projects, to varying degrees, deal with the process by which the dehumanization of already marginalized subjects (autists and women, respectively, but not exclusively!) occurs, and, as a result, both of these treat the <em>product</em> of that process, the output for which I argue we lack a correct name. &quot;Subhuman&quot; concedes too much to intractable scholarly anthropocentrism; &quot;subaltern&quot; addresses the <em>colonized</em> rather than the dehumanized; &quot;dehumanized person/subject&quot; is insufficiently pessimistic and suggests the eventual creation of &quot;persons with dehumanization&quot; to even further dilute the absolute horror one should experience upon encountering dehumanization itself. Other terms get closer, but have specific excesses or lacks that render them unsuitable for my particular use case. 
&quot;Cyborg&quot; comes to mind, especially insofar as both of my existing projects consider the relationships between certain kinds of people and their machines, but I cannot go so far as to say that there remains enough human within the subjects under consideration to preserve the &quot;org&quot; while describing the effects of the &quot;cyb.&quot; &quot;Inhuman&quot; is another option that is unsuitable; it suggests a kind of non-human being that simply <em>is</em>, without allowing us to name the process that makes it so, while also suggesting the &quot;inhumane,&quot; lending a moral dimension to the product of dehumanization that I find myself unable to accept.</p><p>Thus, &quot;dehuman.&quot; The &quot;dehuman&quot; is the product of the process (the <em>flow</em> if you&apos;re a schizoanalyst or a business automation developer) of dehumanization carried to its climax. It is a figure acted upon until it cannot act, or until any action it can take is rendered always and already futile. The dehuman is utilized, in the proper sense that it has been made useful, and <em>then</em> it is used. When a dehuman is used, it is often to bridge the chasm between the properly human and the properly mechanical; it is the material to build a bridge over the uncanny valley. The dehuman is the customer service representative <em>par excellence</em>, the perfect interface of flesh-thought and machine-computation, torn down and rebuilt to fit any purpose, any discipline, any market. Most crucially, the dehuman is <em>made</em>, not simply emergent; to name oneself dehuman is to say &quot;somebody <em>did</em> this <em>to</em> me, and now, like Iago, &apos;I am not what I am&apos;&quot; (I.i.65).</p><h5 id="a-grim-look-at-the-state-of-play">A grim look at the state of play</h5><p>Disability studies in general, and critical autism studies in particular, are in the process of figuring out how to be properly <em>negative</em>, properly <em>pessimistic</em>. J. 
Logan Smilges, offering <em>Crip Negativity</em>, reframes disability itself as &quot;a regulatory mechanism by which humanity can be distributed and withheld&quot; (2023, 9) and further admits that it is possible to &quot;contest the value of that designation&quot; (ibid., 33). My question, of course, is &quot;which designation?&quot; Disabled, or human? The project of constructing this disquieting term &quot;dehuman&quot; is my answer: it is not only possible to &quot;contest the value&quot; of the human, it is a productive method by which to understand what comes of <em>dehuman</em>ization. </p><p>It is not by accident that my interest in developing such a term emerges from watching a fascinating conversation play out at the corner of critical autism studies and human-computer interaction, and trying myself to enter the conversation at the wrong time and from the wrong room. Some of us, having read Remi Yergeau&apos;s <em>Authoring Autism</em>, and resonating with the position that &quot;we, the autistic, are a peopleless people,&quot; that &quot;we embody not a counter-rhetoric but an anti-rhetoric, a kind of being and moving that exists tragically at the folds of involuntary automation&quot; (2018, 11), have started to ask a peculiar question. 
Really, we&apos;ve started to ask <em>two</em> questions, which I&apos;ll call the &quot;indirect&quot; and &quot;direct&quot; forms:<br></p><ol><li>The indirect form, synthesized from the autistic HCI work of Rua Williams (taken broadly, but in particular &quot;I, Misfit: Empty Fortresses, Social Robots, and Peculiar Relations in Autism Research&quot;), as well as Josh Guberman and Oliver Haimson&apos;s &quot;Not robots; Cyborgs &#x2014; Furthering anti-ableist research in human-computer interaction&quot;: is there liberatory, or at least humanizing, potential in working with the perceived &quot;natural affinity&quot; of the autist for &quot;technology?&quot;</li><li>The direct form, so-called both for its sharpness and for its being <s>shamelessly</s> lifted verbatim from Os Keyes in &quot;Automating Autism,&quot; which quote takes on new resonance having been re-read after J. Logan Smilges finally provided a negativity upon which to hang it, upon which my own hat now rests: &quot;are autists, really, human?&quot;</li></ol><p>From the sidewalk, I have watched a collision developing, or perhaps a series of near-misses, and have witnessed governments, and societies, and friends surrender to COVID-19&apos;s siege war against the paltry shreds of inter-ability solidarity I may have foolishly thought we would have built. After all of this, another question: what good, if this is what it is, does it do to be human, to assert our agency, or humanity (or even just our <em>desire</em> for humanity) to the allists, only to appeal to the lion after Caesar has lowered his thumb? It may very well be an accurate claim, but it bears little utility while we are being mauled to death. 
<em>We are fighting a battle from which there is nothing to gain in victory</em>.</p><h5 id="the-dehuman-and-telephone-girl">The Dehuman and <em>Telephone Girl</em></h5><p>Much of our consideration of the dehuman so far has centered on its particular amenities for discussing the oppression of disabled, mainly autistic, people and the result of said oppression. Accepting it as my frame of reference, however, also bears upon how I might return to <em>Telephone Girl,</em> and what might change about my approach, should it properly take into account the dehuman (as opposed to, at the moment, the cyborg). One place where this adjustment is necessary is in my reading of an excerpt from an issue of the <em>Montreal Witness</em> used by Mich&#xE8;le Martin in <em>&quot;Hello, Central?&quot;</em> to illustrate &quot;the change that occurred in the operator&apos;s labour toward the end of the 1890s&quot;: &quot;<em>The girls, then, are automata</em> ... they looked as cold and passionless as icebergs. <em>But that is only discipline</em>&quot; (unk., qtd. in Martin, 1991, 70; emphasis hers). Crucially, despite writing at a point when the term &quot;cyborg&quot; would have been ready to hand, Martin does not use it. Instead, the operators, the telephone girls, were faced with working arrangements which &quot;[subjected] them more and more to the machine&quot; (Martin, ibid.). </p><p>Decades after the newspaper clipping, but a decade before Martin, Robert Howard, writing for <em>Mother Jones</em>, pens &quot;Drugged, Bugged &amp; Coming Unplugged,&quot; a scathing indictment of work culture in the late Bell System, particularly its operating companies. One paragraph of this essay lends itself to a dehuman reading:</p><blockquote>The Washington Service Center is a small, relatively backward facility. In the brand-new Centralized Repair Service Bureaus that are sprouting up throughout the Bell System, the techniques of work-force control are considerably more refined. 
Nearly 50 repair service clerks sit in groups of four, their eyes glued to the cathode flicker of jet-black video display terminals. Gone is the howler of the Washington office. In its place, a color television displays a bar graph with the office&apos;s &quot;speed of answer&quot; record for the day. Here, the clerks have only eight seconds to answer the electronic beep signaling another customer on the line&#x2014;a 12-second speed-up over the Washington office. A supervisor rushes forward to explain the new system. What he says holds true for workers throughout the Bell System. &quot;<strong>These girls are merely an interface between the customer and the computer</strong>&quot; (Howard, 1981, 44; emphasis mine).</blockquote><p>Having myself been such a &quot;repair service clerk&quot; (the title ends in <em>attendant</em>, thank you) during the latter half of the 2010s, I am confident in saying that very little about these conditions had changed, save for the equipment (and the answering standard &#x2014; we were expected to set our phones to automatically accept new calls, no eight-second delay). Minutiae aside, the phrase &quot;merely an interface&quot; is deceptively plain; every word screams its slights. &quot;Merely,&quot; because dehumanity <em>is</em> mere, is only as much as, is the minimum viable being capable of those things not yet automated but too mechanical for real humans; &quot;interface,&quot; insofar as these dehumans are conduits, pipes &#x2014; surfaces, perhaps, of <em>natural affinity &#x2014; </em>taking and rerouting the output of a real human into the input of a real computer, but never acting on that output or input (or, at least, not doing so with the sanction of the humans managing the dehumans). The telephone girl does not <em>get</em> to be human in a sense anyone could recognize, even as her &quot;human touch&quot; is thought necessary or desirable to advance the goals of the telephone company. 
In the era of the IVR, the chatbot, the automated workflow, we might be at the end of the road for the telephone girl. At the same time, the ongoing process of ecological and social collapse (both metaphorical and actual) could, paradoxically, save her from total absorption into the machine, by way of carrying becoming-dehuman to its extrema.</p><h3 id="from-here-where">From here, where?</h3><p>There are philosophical and practical questions on the matter of the dehuman that should be asked here, but will take far longer to answer. First, are the dehuman autist and the dehuman telephone girl isomorphic to one another? That is to say, am I excessive in attempting to argue that there are commonalities between the manner in which the autist and the call center worker are each dehumanized? I am not yet as confident as I would need to be in order to conclusively argue that the same processes are at work, or that the end results of those processes are subtypes of a unifying dehuman type. Another concern is that, despite my best hopes, I am not alone in my efforts to construct something called the dehuman; in particular, Timothy Luke (1996, 2000) engages with the term in a manner that, in some respects, resembles my own thoughts above, but at least at first glance depends, understandably, on the cyborg as a load-bearing modular component connected to his dehuman in a manner my own project (at least for now) intends to avoid. There is also the matter of whether Pierre F&#xE9;dida means by <em>d&#xE9;shumain </em>something akin to what I mean by dehuman, <s>as strong a motivation as any to learn to read French, as his works have not yet been translated</s>. 
Those aside, my own understanding of dehumanization is insufficient to make the above claims as anything more than tentative; it will be necessary to develop the deepest possible understanding of the term&apos;s use in all domains that speak of dehumanization in order to properly answer the question of whether dehumanization can truly be said to produce the dehuman.</p><h3 id="one-last-thing">One last thing</h3><p>I have never really been that good at judging the level of effort a task requires, and this is no less true of the present plan of work. It&apos;s entirely conceivable that all of the above is already more effort than is necessary to approach this problem, and that I&apos;ll only know this after spending five years reinventing the very wheel under which I find myself crushed, out of a theorist&apos;s vain certainty that I could have chosen a better name. I am, at the same time, concerned that I have taken upon myself a whole field&apos;s worth of work, which, compared to the previous concern, is far more soluble: I should hope I&apos;m not the only one who might want to think like this, and that potential fellow travelers might drop me a line so that we could work out how to go from here. Finally, it is certainly possible that one reason why the term &quot;dehuman&quot; has not been taken up in the way I outline is that doing so necessarily involves saying something about a large population (which includes many friends and colleagues) that sounds, well, <em>dehumanizing</em>. On this point I can only ask everyone&apos;s indulgence for however long it takes me to follow this thought where it leads.</p><p>Whatever the case, my hope is that this is as much a defense of my radio silence since &apos;22 as it is a provocation. 
Again, if you&apos;re at all interested in this work, please send a message my way, using the email address on my About page; I have very little going on at the moment and would welcome advice, reading suggestions, cease-and-desist notices, fan letters, challenges to either debates or duels, offers to collaborate, unfounded accusations, and other ideas you think I&apos;d rather pursue.</p><h3 id="references">References</h3><p>Eyal, Gil, ed. <em>The Autism Matrix: The Social Origins of the Autism Epidemic</em>. Cambridge, UK&#x202F;; Malden, MA: Polity, 2010.</p><p>F&#xE9;dida, Pierre, ed. <em>Humain-D&#xE9;shumain</em>. Petite Biblioth&#xE8;que de Psychanalyse. Paris: Presses universitaires de France, 2007.</p><p>Guberman, Josh, and Oliver Haimson. &#x201C;Not Robots; Cyborgs &#x2014; Furthering Anti-Ableist Research in Human-Computer Interaction.&#x201D; <em>First Monday</em>, February 7, 2023. <a href="https://doi.org/10.5210/fm.v28i1.12910">https://doi.org/10.5210/fm.v28i1.12910</a>.</p><p>Keyes, Os. &#x201C;Automating Autism: Disability, Discourse, and Artificial Intelligence.&#x201D; <em>The Journal of Sociotechnical Critique</em> 1, no. 1 (December 4, 2020). <a href="https://doi.org/10.25779/89bj-j396">https://doi.org/10.25779/89bj-j396</a>.</p><p>Luke, Timothy W. &#x201C;Cyberspace as Meta-Nation: The Net Effects of Online E-Publicanism.&#x201D; <em>Alternatives: Global, Local, Political</em> 26, no. 2 (April 2001): 113&#x2013;42. <a href="https://doi.org/10.1177/030437540102600202">https://doi.org/10.1177/030437540102600202</a>.</p><p>&#x2014;&#x2014;&#x2014;. &#x201C;Cyborg Enchantments: Commodity Fetishism and Human/Machine Interactions.&#x201D; <em>Strategies: Journal of Theory, Culture &amp; Politics</em> 13, no. 1 (May 2000): 39&#x2013;62. <a href="https://doi.org/10.1080/10402130050007511">https://doi.org/10.1080/10402130050007511</a>.</p><p>&#x2014;&#x2014;&#x2014;. 
&#x201C;Liberal Society and Cyborg Subjectivity: The Politics of Environments, Bodies, and Nature.&#x201D; <em>Alternatives: Global, Local, Political</em> 21, no. 1 (January 1996): 1&#x2013;30. <a href="https://doi.org/10.1177/030437549602100101">https://doi.org/10.1177/030437549602100101</a>.</p><p>Martin, Mich&#xE8;le. <em>&#x201C;Hello, Central?&#x201D;: Gender, Technology, and Culture in the Formation of Telephone Systems</em>. Montreal: McGill-Queen&#x2019;s Univ. Press, 1991.</p><p>Smilges, J. Logan. <em>Crip Negativity</em>. Forerunners. Minneapolis, MN: University of Minnesota Press, 2023.</p><p>Williams, Rua M. &#x201C;I, Misfit: Empty Fortresses, Social Robots, and Peculiar Relations in Autism Research.&#x201D; <em>Techn&#xE9;: Research in Philosophy and Technology</em> 25, no. 3 (November 1, 2021): 451&#x2013;78. <a href="https://doi.org/10.5840/techne20211019147">https://doi.org/10.5840/techne20211019147</a>.</p>]]></content:encoded></item><item><title><![CDATA[Now that NaNo is dead, can we admit it should never have lived?]]></title><description><![CDATA[<p>For the uninitiated (and I am sorry to be the one to initiate you) NaNoWriMo, or National Novel Writing Month, is an online <s>competition</s> <s>festival</s> <s>community</s> event in which people who want to be people who have written a novel put fifty thousand words into a file, then have that</p>]]></description><link>https://cariglino.tel/now-that-nano-is-dead-can-we-admit-it-should-never-have-lived/</link><guid isPermaLink="false">66d640bff7f68c46c21f8135</guid><dc:creator><![CDATA[Em Cariglino]]></dc:creator><pubDate>Mon, 02 Sep 2024 23:01:16 GMT</pubDate><content:encoded><![CDATA[<p>For the uninitiated (and I am sorry to be the one to initiate you) NaNoWriMo, or National Novel Writing Month, is an online <s>competition</s> <s>festival</s> <s>community</s> event in which people who want to be people who have written a novel put fifty thousand words into a file, then have that file 
word-counted in order to unlock a few free trials and brand collaborations. Lately, the <em>real</em> event of NaNoWriMo has been a series of annual scandals: most recently, a <a href="https://wrrrdnrrrdgrrrl.com/2022/11/30/on-nanowrimo-inkitt-and-being-an-author/" rel="noreferrer">brand deal with a scam &quot;publisher&quot;</a>, a <a href="https://www.ravenoak.net/the-fall-of-nanowrimo/" rel="noreferrer">truly bewildering diaper fetishist grooming scandal</a>, and now <a href="https://nanowrimo.zendesk.com/hc/en-us/articles/29933455931412-What-is-NaNoWriMo-s-position-on-Artificial-Intelligence-AI" rel="noreferrer">a widely panned (non-)&quot;position on Artificial Intelligence&quot;</a> that informs us that the organization&apos;s leadership figures &quot;absolutely do not condemn AI.&quot;</p><p>This position statement, which begins by insisting that it does not take a position before proceeding to take a position, contains all the usual 2014 tumblr shibboleths, decrying the &quot;categorical condemnation of Artificial Intelligence&quot; as &quot;classist and ableist,&quot; and &quot;[tied] to questions about privilege.&quot; Twitter&apos;s remaining literate denizens have spent the last little while hollowing out these arguments, but while there is no point beating a dead horse, there is knowledge to be gained in dissecting one. 
With that in mind, I offer a reading of this post and its <a href="https://nanowrimo.zendesk.com/hc/en-us/articles/29929627478804--I-can-t-believe-NaNoWriMo-is-endorsing-a-person-company-who-does">companion</a>.</p><h1 id="what-is-what-is-nanowrimos-position-on-artificial-intelligence-ai">What <em>is</em> &quot;What is NaNoWriMo&apos;s position on Artificial Intelligence (AI)?&quot;?</h1><blockquote>NaNoWriMo does not explicitly support any specific approach to writing, nor does it explicitly condemn any approach, including the use of AI.</blockquote><p>The authors ask us to accept our first undefended premise: that &quot;the use of AI&quot; systems (in general, unconditioned by use case) is an &quot;approach to writing.&quot; <a href="https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art">Here</a> is a critique of that assumption, argued beautifully by someone who has an approach to writing that does not (unless Mr. Chiang is cruelly deceiving us) include the use of AI systems. Under that premise are a couple of key assumptions that should be highlighted here: first, that what a large language model does should be considered writing; second, that a writer who &quot;approaches&quot; writing with words not their own should be considered to be writing. Dr. Bender gave the critique of AI a priceless gift by coining the phrase &quot;stochastic parrot,&quot; but it is unfortunate that the success of this delightful turn of phrase has overshadowed <a href="https://dl.acm.org/doi/pdf/10.1145/3442188.3445922#subsection.6.1" rel="noreferrer">what I consider to be the key position</a> in response to the presumption that a large language model &quot;writes&quot;:
It can&#x2019;t have been, because the training data never included sharing thoughts with a listener, nor does the machine <em>have the ability to do that</em> [...] Contrary to how it may seem when we observe its output, an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: <em>a stochastic parrot</em> (Bender et al., 2021, p. 616-7).</blockquote><p>Consider a writer who, having kept a zettelkasten her whole life, decides to overturn the box onto the table, decreeing that any cards left facing up will be arranged into her magnum opus, and any cards facing down will be cast into a fire. Did the box do the writing? Did the cards? Did gravity, or the circulation of air in the room, or the height of the table? Of course not. The writer did. The difference here is that the writer prepared every word on every card of that zettelkasten, whether her own words or (properly attributed) quotations from others. The &quot;behavior,&quot; and I use that word recklessly here, of an LLM is closer to that of a burglar who specializes in robbing writers of their notes, then sells back access to a pseudorandomly selected subset of those notes for the price of <a href="https://arxiv.org/pdf/2304.03271#subsection.3.3" rel="noreferrer">one bottle of water snatched out of the hands of a person dying of thirst per paragraph</a>.</p><p>I am, however, fucking up my categories.</p><p>Nobody here is talking about making art, not really. NaNoWriMo is a pedagogical exercise; it is (or was intended to be) a course in the self-discipline required for sustained drafting. 
I am <a href="https://fivegoodhours.substack.com/p/escaping-a-hostage-situation" rel="noreferrer">not the only critic</a> to find Chiang stronger on pedagogy than on art, and this frame helps to display the sinews of NaNo at the greatest tension possible, because <em>what NaNo teaches is extremely silly</em> (but more on this later).</p><blockquote>NaNoWriMo&apos;s mission is to &quot;provide the structure, community, and encouragement to help people use their voices, achieve creative goals, and build new worlds&#x2014;on and off the page.&quot; We fulfill our mission by supporting the humans doing the writing.&#xA0;Please see <a href="https://nanowrimo.zendesk.com/hc/en-us/articles/29929627478804">this related post</a> that speaks to our overall position on nondiscrimination with respect to approaches to creativity, writer&apos;s resources, and personal choice.</blockquote><p>&quot;Provide the structure, community, and encouragement to help people use their voices&quot; &amp;c. is a phrase which here means &quot;offer an incentive to produce fifty thousand words of text.&quot; Historically, the incentive has been <a href="https://nanowrimo.org/winner-goodies-2021" rel="noreferrer">a large stack of coupons</a> for various items that writers of fiction allegedly need (see the brand deal controversy linked above). Other than that, the most consistent reward seems to be <a href="https://qr.ae/p22vHB" rel="noreferrer">the smug satisfaction of calling oneself a novelist</a>. That the reward is one of satisfaction rather than accomplishment is illustrated by the existence of <a href="https://nanogenmo.github.io/" rel="noreferrer">NaNoGenMo</a>, National Novel Generation Month, a decade-old event showcasing the kind of texts that, as of three days ago, NaNoWriMo appears (despite insisting otherwise) to endorse. &quot;The &apos;novel&apos;,&quot; per the organizer, &quot;is defined however you want. It could be 50,000 repetitions of the word &quot;meow&quot;. 
It could literally grab a random novel from Project Gutenberg. It doesn&apos;t matter, as long as it&apos;s 50k+ words.&quot; This is a criterion shared by the process used by NaNoWriMo to screen for winning submissions. This is the <em>only</em> criterion they <em>could</em> use; anything more stringent would make NaNoWriMo a different exercise altogether (perhaps even - horror of horrors - a <em>writing contest</em>) and anything less stringent, well, that&apos;s Camp NaNoWriMo.</p><blockquote>We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege.</blockquote><p>This is here because the contemporary argument must always be an identity-political argument. The defense of large language models, like the defense of food delivery gig work, must wear the armor of progress. This is vital in the present case, because the left (the audience to whom these sorts of appeals are always addressed, and with whom they so frequently fall flat) has broadly adopted a coherent view on AI systems as they relate to various forms of oppression, namely that it&apos;s <em>bad</em>. The well-meaning liberals at NaNoWriMo offer a brief bulleted list of responses; we can take them in turn.</p><blockquote><strong>Classism.</strong> Not all writers have the financial ability to hire humans to help at certain phases of their writing. For some writers, the decision to use AI is a practical, not an ideological, one. 
The financial ability to engage a human for feedback and review assumes a level of privilege that not all community members possess.</blockquote><p>We can reasonably tell we are not dealing with Marxists whenever we see &quot;classism&quot; in place of &quot;class.&quot; We can be certain of it when class is framed as a function of &quot;financial ability,&quot; a curious turn of phrase that suggests that the proletarian is oppressed not because he does not own the means of production, but because he cannot afford to buy them. This is the argument, not of someone who believes AI (the generic concept, not any one AI system) should be &quot;democratized,&quot; but of one who believes it to be &quot;democratizing&quot; writing out from under the oppressive yoke of having to write, or edit, or typeset. This operates from the assumption that &quot;to hire humans to help&quot; is a necessary precondition, not of <em>publishing</em>, but of writing itself. This position only makes sense when one remembers one of the rewards (perhaps the <em>key</em> reward) of NaNoWriMo: the satisfaction of becoming and naming oneself as a novelist. <em>Writing</em> doesn&apos;t involve hiring a coterie of help, but <em>being a novelist</em>, of the sort one imagines when one fantasizes about becoming one, demands it.</p><p>Now, instead of paying one of the dwindling population of proofreaders to scour your manuscript for the sort of glaring errors that creep in after spending a month maniacally spitting text into a file, you can pay OpenAI to rent a few moments of computing time to have a machine spit back what looks enough like edited text to prolong the novelist fantasy, which because of the manner in which LLMs work, will no longer be <em>your</em> text. 
If &quot;the financial ability&quot; to &quot;engage a human&quot; editor &quot;assumes a level of privilege,&quot; (<s>which is in this sentence because we&apos;ve already read the phrase &quot;financial ability&quot; twice in this paragraph and goodness forbid we rewrite this sentence to better say what it means</s>) it could be because editing is labor and &quot;humans&quot; have this annoying quirk where we like to be paid enough for our labor to survive.</p><blockquote><strong>Ableism.</strong> Not all brains have same abilities and not all writers function at the same level of education or proficiency in the language in which they are writing. Some brains and ability levels require outside help or accommodations to achieve certain goals. The notion that all writers &#x201C;should&#x201C; be able to perform certain functions independently or is a position that we disagree with wholeheartedly. There is a wealth of reasons why individuals can&apos;t &quot;see&quot; the issues in their writing without help.</blockquote><p>(Readers expect a writer to outline the standpoint from which she writes on questions of identity politics, so for the sake of avoiding getting called &quot;abled&quot; for holding the views I&apos;m about to express, please note that I&apos;m autistic. I&apos;m the kind of person about (and without) whom a paragraph like this is written, and my response below is indelibly colored by that. I hope this is sufficient; I know from experience that it isn&apos;t.)</p><p>I tire of ableism&apos;s double life as a concept - by day, a useful term for understanding the oppression of disabled people; by night, a universal tool for escaping one&apos;s duty of solidarity toward others. It is in the cynical latter sense that we see this concept deployed here. 
The very idea that &quot;some brains and ability levels require outside help&quot; - crucially, not some <em>people</em> - in order to &quot;achieve certain goals&quot; is deployed as a justification to recontextualize opposition to AI systems as something as heinously and absurdly bigoted as opposition to curb cuts or automatic doors. &quot;Can&apos;t you see, some people need-&quot; need <em>what</em>? To write, almost certainly, and there exist accessible tools for disabled people of many varieties to do that. To <em>be a novelist</em>? No. The need for an elevated status should not be indulged, but addressed as a psychological wound.</p><p>If AI systems are capable of writing, it might make sense to consider whether they can be used to help others to write. I reach the conclusion that LLMs are not an accessibility tool because I start by accepting the position of the AI critics that, based on an analysis of the operations of an LLM, what it does is not writing as people are understood to do it. However, even if I assume that LLMs write, that is to say that they not only produce something that resembles a comprehensible text but do so in a way that can be generally agreed to be called &quot;writing,&quot; that does not satisfactorily warrant the claim that these systems address the ableist underpinnings of writing; it would just mean that instead of a person writing, a machine does it. Accepting the position that an AI system can write - the position necessary to adopt in order for their output to be considered writing - causes the argument that LLM text must be acceptable to NaNoWriMo as an answer to ableism to fall apart; if the software can write, then <em>it</em> is writing, and not the disabled person presumably being &quot;helped&quot; by it. If it can&apos;t write, then what it produces isn&apos;t writing, and the user has not <em>written</em> a draft of a novel, but has caused a piece of software to produce an output that superficially resembles one. 
In either case, a disabled person is not being helped to access the process of writing, but being asked to accept the appearance of having written as a substitute. I am content to say I would find this unsatisfying; if, as argued above, NaNo is about the satisfaction of having written, then disabled people are presumed to accept dissatisfaction.</p><p>I keep using the phrase &quot;disabled people,&quot; and this is because the authors of this statement chose not to. I am not a &quot;brain,&quot; or an &quot;ability level,&quot; or some abstract concept of a &quot;writer.&quot; I am a <em>person</em>, but in this paragraph we&apos;re denied even the consolation prize of being called &quot;human.&quot; This is a problem that arises in arguments about ableism, like this one, that address ability in the abstract without acknowledging that ableism is something that is <em>done to,</em> not simply <em>done</em>. (This is perhaps a cousin of my problem with the word &quot;dehumanization,&quot; which I may address another time.)</p><blockquote><strong>General Access Issues.</strong> All of these considerations exist within a larger system in which writers don&apos;t always have equal access to resources along the chain. For example, underrepresented minorities are less likely to be offered traditional publishing contracts, which places some, by default, into the indie author space, which inequitably creates upfront cost burdens that authors who do not suffer from systemic discrimination may have to incur.</blockquote><p>This is here because the kind of person who writes a post like this believes lists need to have three items as some kind of iron law of list making. You can tell that this is an afterthought because what little specificity existed at the beginning of the list is long gone. &quot;Underrepresented minorities&quot; (again, not people!) get siloed into self-publishing, because of a lack of &quot;equal access to resources,&quot; whichever resources those are. 
&quot;All&quot; (two) &quot;of these considerations exist within a larger system,&quot; but that system remains anonymous for its protection. The &quot;indie author space... inequitably creates upfront cost burdens that authors who do not suffer from systemic discrimination may have to incur,&quot; but what are... actually hang on a second, &quot;authors who do not suffer&quot; &quot;may have to incur upfront cost burdens&quot;? Let&apos;s speculate about this clause for a moment, because it has either been negated too much or not enough. I am struck with a hunch, one that I cannot prove, that whoever wrote this wrote the clause with two negatives (enough to make it mean something that coheres with the rest of the sentence) but responded to an in-editor grammar warning about double negatives by <em>dropping one of them</em> but not both. The influence of software tools, not unlike the ones being defended here, has caused someone to pen a sentence that contradicts itself, in a post that just made the claim that &quot;there is a wealth of reasons why individuals can&apos;t &apos;see&apos; the issues in their writing without help!&quot; In sum, &quot;marginalized people need help getting their writing taken seriously, which is why they should use tools that will, as we illustrate below, actively undermine them.&quot;</p><blockquote>Beyond that, we see value in sharing resources and information about AI and any emerging technology, issue, or discussion that is relevant to the writing community as a whole.</blockquote><p>&quot;We&apos;re just sharing information,&quot; they say, after saying all those mean AI ethicists just hate the poor and the disabled!</p><blockquote>It&apos;s healthy for writers to be curious about what&apos;s new and forthcoming, and what might impact their career space or their pursuit of the craft.</blockquote><p>Impact, perhaps, in the way that a jet impacts a skyscraper, or a cruise ship an iceberg.</p><blockquote>Our events with a connection 
to AI have been extremely well-attended, further-proof that this programming is serving Wrimos who want to know more.</blockquote><p>Setting aside the bizarre demonym the community seems to have accepted, attendance figures at events are the sort of evidence one uses to craft a successful grant application (maybe!) but probably aren&apos;t, themselves, proof of anything. (I&apos;m reminded of a question I asked a presenter at PCA in 2015, wanting to know why she took, as her object of analysis to make a claim about &quot;nerd culture,&quot; <em>The Big Bang Theory</em>, only to hear that &quot;it&apos;s the most viewed show on network television,&quot; as if I was deciding whether to give Jim Parsons a raise.) But again, perhaps &quot;Wrimos who want to know more,&quot; want that because large language model vendors tend not to explain themselves to the laity.</p><blockquote>For all of those reasons, we absolutely do not condemn AI, and we recognize and respect writers who believe that AI tools are right for them. We recognize that some members of our community stand staunchly against AI for themselves, and that&apos;s perfectly fine. As individuals, we have the freedom to make our own decisions.</blockquote><p>&quot;We can understand why some writers are elitist, ableist scumbags; they&apos;re free individuals who possess the right to be elitist, ableist scumbags! However, <em>we</em> will take the position we just outlined as being the counter-scumbag position, while also claiming to be utterly agnostic on the matter.&quot;</p><p>I have left off a portion of the post, because it did not exist when I started writing. I&apos;m reproducing it here, but it comes after the first paragraph, and the rest of the post is unaltered:</p><blockquote><em>Note: we have edited this post by adding this paragraph to reflect our acknowledgment that there are bad actors in the AI space who are doing harm to writers and who are acting unethically. 
We want to make clear that, though we find the <strong>categorical</strong> condemnation for AI to be problematic for the reasons stated below, we are troubled by <strong>situational</strong> abuse of AI, and that certain situational abuses clearly conflict with our values. We also want to make clear that AI is a large umbrella technology and that the size and complexity of that category (which includes both non-generative and generative AI, among other uses) contributes to our belief that it is simply too big to <strong>categorically</strong> endorse or not endorse.</em></blockquote><p>This is the committee who wrote this deciding who is allowed to have a problem with them (published authors who have intellectual property qualms about LLMs, which can be siloed off and treated as &quot;situational&quot;) and who isn&apos;t (the AI ethics crowd who see those qualms as expressions of the immanent plagiarism of LLMs, and besides, categorically reject the notion that LLMs &quot;write&quot;). This isn&apos;t a reevaluation; it&apos;s damage control.</p><h1 id="this-related-post-and-the-fannish-disposition">&quot;this related post&quot; and the fannish disposition</h1><p>&quot;I can&apos;t believe NaNoWriMo is endorsing a person/company who does [blank]!&quot; is the sort of title one gives to a FAQ that is about to be linked under a million WONTFIX bugs. It is a cry of contemptuous dismay that people would dare question why an organization would make a brand deal with a predatory &quot;publisher,&quot; or let a forum moderator funnel teens into their kink discord, or endorse the very class of software that stands to endanger the livelihoods of people who may otherwise have had reasonable writing careers - careers that could have started with their NaNo manuscript, but will now be crowded out by a sea of $3.99 synthesized fairy smut. 
This post merits more contempt than analysis.</p><blockquote>NaNoWriMo is not in the business of telling writers how to (or how not to) write, taking a position on what approaches to writing are legitimate vs. illegitimate, or placing value judgments on personal decisions that are a matter of free choice.</blockquote><p>The libertarians called and want their position on weed back. This is the sort of refusal to argue that, as we&apos;ve seen in the last post, can only signal that we&apos;re about to encounter some value judgments on personal decisions, including those that are a matter of free choice!</p><blockquote>Opinions about &quot;correct&quot; ways to write or &quot;right&quot; vs. &quot;wrong&quot; kinds of writers should not be brought into our spaces.</blockquote><p>This is an opinion about correct ways to write, and about which writers are the right or wrong kind of writers to be brought into a given space. This is the rhetoric of fandom, of &quot;don&apos;t like? don&apos;t read,&quot; of non-judgmental spaces that become courtrooms the moment some brave souls can no longer restrain themselves when confronted with slop.</p><blockquote>Our priority is creating a welcoming environment for all writers. There is no place for that kind of virtue signaling within NaNoWriMo.</blockquote><p>But there is a place for this kind!</p><blockquote>This position extends to our partnerships with sponsors and affiliates, with authors who we invite to write pep talks or serve as camp counselors, and to people who we invite to participate in events.</blockquote><p>Including scam artists and sex creeps!</p><blockquote>NaNoWriMo is a global community of more than 550,000 writers who we fully expect to have different values, different needs, different preferences, and different curiosities.</blockquote><p>Wait, they&apos;ve been around for how long and have only cracked half a million? Since 1999? Is that cumulative or concurrent? 
Never mind that - if we&apos;ve got that many people with &apos;different values,&apos; &amp;c., it&apos;s possible that <em>some</em> of those people value things like human creativity, or need things like a forum that&apos;s not a hunting ground for teens, or preferences for good writing over bad, or curiosities about why an organization that just had a scandal where it partnered with a publishing scam is really bullish about AI?</p><blockquote>Because Wrimos are not a monolith, we don&apos;t cater to a specific author archetyope or ideology.</blockquote><p>Except individualism, and the belief that having strong opinions that disagree with ours, such as &quot;it&apos;s spelled &apos;archetype&apos;,&quot; is bad and must be avoided in the name of inclusion!</p><blockquote>We take this position firmly, and we take it seriously. NaNoWriMo is a 25-year-old organization with staff that has been in the writing community for a very long time.</blockquote><p>I&apos;m not sure why this sentence exists and we should behave as if it doesn&apos;t.</p><blockquote>We&apos;ve seen tremendous harm done over the years by writers who choose to pick at others&apos; methods.</blockquote><p>These people would last half an hour in a workshop. To be fair, <em>so would I</em>, which is why I majored in lit and not creative writing! One would think a &quot;staff that has been in the writing community for a very long time&quot; might have observed that writers, as a population, tend to value and even enjoy critique? (Fan communities, on the other hand...)</p><blockquote>We&apos;ve seen indie authors delegitimized by traditionally published authors, highbrow literary types look down their noses at romance authors, fanfiction writers shamed for everything from plagiarism to lack of originality; the list goes on.</blockquote><p>There it is. There&apos;s the gripe. 
What do &quot;indie authors&quot; (they mean &quot;self published&quot;), &quot;romance authors,&quot; and &quot;fanfiction writers&quot; have in common? A widespread perception that they&apos;re looked down upon by &quot;highbrow literary types,&quot; despite all being fields subject to a great deal of scholarly attention! (That PCA anecdote from earlier happened because I was there to give a paper on Sherlock Holmes fanfiction and its depictions of autism, <s>back when I was a hack.</s>) This isn&apos;t tilting at a windmill, it&apos;s tilting at a Dunkin Donuts that was built where a windmill last stood in 2006! Like other right-wing ideologues, the fandom fandom cannot accept that they&apos;ve <em>won</em>, that the whole world exists to cater to them with endless plastic tchotchkes and feature length commercials for same, because sometimes some <em>fucking snob</em> makes fun of their mediocre writing and that means that actually the literature elitists are still exercising their hegemony over university course catalogs and the &quot;writing community&quot; on Twitter!</p><blockquote>Not only is this sort of shaming unnecessary and often mean. It&apos;s proven itself to be short-sighted.</blockquote><p>Hey, uh, pal? I think you dropped a comma. 
I&apos;m sorry for sounding like one of those &quot;highbrow literary types&quot; who, when she opens up a blog post, expects to be able to, y&apos;know, <em>parse it</em>, but you&apos;re really not helping yourself here.</p><blockquote>Some of the most shamed groups within the writing community are also the most successful (e.g., Romance is one of <a href="https://www.literatureandlatte.com/blog/why-are-romance-novels-so-popular#:~:text=Romance%20novels%20are%20the%20%E2%80%9Chighest,that%20flourished%20in%20the%201970s.">the highest-grossing genres</a>; an <a href="https://www.publishersweekly.com/pw/by-topic/industry-news/publisher-news/article/92003-survey-finds-self-published-authors-making-gains.html#:~:text=A%20survey%20commissioned%20by%20the,authors%20published%20by%20traditional%20houses.">increasing body of data</a> shows that indie authors do better than trad-pub authors, and some of the biggest names in publishing <a href="https://lplks.org/blogs/post/21-published-authors-who-write-fanfiction/">started out in fanfic</a>).</blockquote><p>&quot;Successful&quot; at what; &quot;do better&quot; how; &quot;biggest names&quot; why? See, success is when you <em>make money</em>. You should aspire to having so much money that you can spend the rest of your life in a castle complaining about transsexuals! Craft? <em>Fuck</em> your lecture on craft; <em>The Big Bang Theory</em> is on!</p><blockquote>NaNoWriMo&apos;s mission is to &quot;provide the structure, community, and encouragement to help people use their voices, achieve creative goals, and build new worlds&#x2014;on and off the page.&quot; We fulfill our mission by supporting the humans doing the writing. 
That means not judging them and not allowing judgmental dynamics to enter into our spaces.</blockquote><p>We&apos;ve already been here, we&apos;ve already done this, and we&apos;ve already bought the year&apos;s subscription to some Grammarly knockoff that&apos;s a couple weeks away from being announced as a 2024 NaNoWriMo brand partner.</p><h1 id="what-do-wrimos-learn-about-writing">What do Wrimos learn about writing?</h1><p>Fuck all. Try again.</p><h1 id="what-do-wrimos-learn-about-being-writers">What do Wrimos learn about being writers?</h1><p>There we go. Before we went on this masochistic hike together, I said NaNoWriMo is pedagogical, that it&apos;s meant to (or heavily insists that it&apos;s meant to) teach the kind of discipline necessary to quickly produce a draft that can be edited and published. It already excludes any other goals; it&apos;s <em>actively hostile</em> to anything that can impede the production of the draft, the extraction of each of those fifty thousand words from every spare second in November. After all, if people don&apos;t <em>finish,</em> they can&apos;t access all those coupons! They&apos;ll have to pay <em>full price</em> for the accouterments of being-an-author, and if they have to pay full price, they might reconsider!</p><p>NaNoWriMo is a course in writing the way a <a href="https://www.youtube.com/watch?v=c5OOHotxAYk&amp;pp=ygUGdG9tIHZ1" rel="noreferrer">Tom Vu seminar</a> is a course in real estate. The object of NaNoWriMo is to unlock deals on products that, when purchased, will mark the participant as an <em>author</em>, a <em>novelist,</em> even! 
The writing is there to provide some kind of basis in reality for this act of self-fashioning; the draft is as much a part of the personal front as the products NaNo is about to sell you, as the meet-ups with other people who can help you reinforce your belief, as the laptop open in public conspicuously displaying Scrivener, because <em>you&apos;re too serious to be using Word</em>. If success as a writer is about making money, as the people who run this operation seem to believe, then success at running this operation is about making customers who, unfortunately for the rest of us, will now forever be &quot;authors.&quot; Of <em>course</em> they&apos;re okay with synthesized text, and of course they are going to argue that your having a problem with it makes you a classist, an ableist, and a snob! This isn&apos;t <em>for</em> you. You&apos;d write <em>anyway</em>, without buying anything besides - how <em>pretentious of you</em> - paper and pens.</p><p>NaNoWriMo does not teach anything about how to write that can&apos;t (or for that matter, <em>can</em>) be imparted in a tweet about disciplined creative work, <a href="https://x.com/pourfairelevide/status/1830574945199395113" rel="noreferrer">like this one right here</a>. He just freed up all your nights in November. Now you can spend them <em>writing</em>.</p>]]></content:encoded></item></channel></rss>