AI has been a subject of many conversations with my friends lately, and I’ve decided to share some of my thoughts about it in writing. Quick summary: I never use any LLM-based AI tools for any tasks, ever.
An important piece of context for this statement is that my last role at JetBrains, roughly between March and August of last year, was leading the development of AI features in IntelliJ-based IDEs. I volunteered for the role because I believed (and still do) that JetBrains needs to be active in the space, and the efforts by that point were insufficient. However, back then I also didn’t use AI for real – I tested the features, of course, but I never used them for actual work, and nothing that I saw while working on them made me change my mind. One could argue that this should have disqualified me from the role, and I sure hope that the current AI leads at the company don’t share my attitude, but nevertheless at that time the decision made sense.
A lot has been written about the impact of AI on society and the climate, and I don’t have anything to add here. I do find it frustrating, though, that the use of books for training human minds is much more restricted and controlled than using books for training AI models. The AI companies claim that it’s not possible to train the models without getting access to all the books, so they’re entitled to have the access, but we humans still have to pay for each individual book or use variously limited ways (such as libraries) to read it.
However, my primary motivation for not using AI tools is emotional, not ideological. The thing is, I enjoy knowledge work – reading, writing, researching, coding – a lot, and I believe that I’m reasonably good at it. I’m also not under any pressure to deliver results faster, or beyond my capability. So I simply don’t see why I would want to outsource any of the things that I love so much to a machine. If I find pleasure in coming up with an elegant way to express my ideas, why ask an AI to do that for me? If I like reading academic papers, why use an LLM to summarize them (which it doesn’t do anyway)? If discussing my work with my peers gives me a chance to establish a human connection, why replace it with a soulless chatbot? And if coding gets tedious and repetitive, I believe that the right thing to do is create better abstractions, not reach for a slop shovel to churn out the repetitive code faster.
In a small way, part of my motivation to leave the software development industry and study historical linguistics was to move away from AI. The space of developer tools has been pretty much entirely consumed by AI, and judging by the number of AI assistants clamoring for my attention in my PDF reader, note-taking tool and many websites, not much other software is immune to it.
Linguistics as a whole has also been heavily impacted by LLMs, and this is quite understandable given that the biggest questions of the discipline have been related to human cognition. Once we get access to an entity that can use language without involving human cognition, we can get an entirely new perspective on the properties of language and conduct experiments to understand how those properties came to be.
However, historical linguistics specifically is still mostly about understanding the relationships between data from different languages and reconstructing their history, and LLMs have only limited application in that space. And as my work on Etymograph attests, I love using digital tools for linguistics research, but my preferences lean strongly towards structured information and rule-based tools, not LLMs.
As an aside, I’m sad that people who do use AI often don’t understand it. One interaction at the university was very representative: a teacher asked us to share our opinions about AI, I mentioned that LLMs can hallucinate, and the teacher’s reaction was “oh, you’ve come up with such a cool term for that” (except that, of course, I hadn’t). AI is here to stay, and we all need much better awareness of what it can and cannot do.
Having said all of that, I do see value in some applications of AI. I’m looking forward to the results of the Herculaneum papyri project as much as everyone else in the field. I’d love to see mass use of software like Transkribus for better digitization of old books – I’ve worked with it a lot for my Paternosters research, and the quality of OCR in Google Books and Internet Archive ranges from mediocre to abysmal. I can see myself using LLMs for specific tasks when I need to identify or analyze all instances of a specific linguistic phenomenon in a large corpus. And even while coding, I do use the new full-line completion in IntelliJ (it uses a small local model, not a cloud LLM), and it does make me more productive.
Of course, all of this is just my personal opinion at this point in my life – if AI works for you and helps you, that’s great, and it may well be that my position will change too. But for now, I’m doing without, and I’m happy about it.