Right, which brings me to another important point.
I think the co-opting of the language in the 2022-to-present AI hype boom is an enormous red flag. Prior to 2022, the term AI simply meant “artificial intelligence”: any computerized system that served as a replacement for organic (non-artificial) intelligence. It could refer to general problem solvers (GPSs), programs that, given a complete set of rules and constraints, can find solutions to logical problems. Or it could refer to neural networks that classify images and sounds. Or it could refer to support vector machines (SVMs), which serve a similar purpose to neural networks but accomplish it in a completely different way.
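To make that last contrast concrete, here is a minimal sketch of two of those pre-LLM approaches solving the same classification task. It assumes Python and scikit-learn, neither of which the original mentions; the dataset and hyperparameters are illustrative choices, not anything from the text above.

```python
# Two pre-LLM "AI" techniques on the same toy problem: a support vector
# machine and a small neural network. Assumes scikit-learn is installed.
from sklearn.datasets import make_moons
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Toy 2-D dataset: two interleaved half-moon classes.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# SVM: finds a maximum-margin decision boundary, here via an RBF kernel.
svm = SVC(kernel="rbf").fit(X, y)

# Neural network: learns a boundary by gradient descent on layered weights.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

svm_acc = svm.score(X, y)
mlp_acc = mlp.score(X, y)
print(f"SVM accuracy: {svm_acc:.2f}, MLP accuracy: {mlp_acc:.2f}")
```

Both models end up separating the classes well, but by entirely different mechanisms, which is the point: “AI” used to name a family of distinct techniques like these, not one product category.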
Now, colloquially, the term “AI” has become synonymous with large language models (LLMs), and especially with the popular assistants built on those models. It has become a synecdoche, leaving the hundreds of other research areas under the umbrella of “artificial intelligence” hanging out to dry.
It reminds me of the changing of the definition of the word “vaccine” during Covid. The definition in the 2008 Compact Oxford English Dictionary sitting on my desk is “a substance made from the microorganisms that cause a disease, injected into the body to make it produce antibodies and so provide immunity against the disease”. Now, compare this to the hilarity that is Merriam-Webster’s current entry for “vaccine”. I think any top-down change in common language like this signifies that a brainwashing campaign is taking place. In the case of AI, various corporate and government entities want us to forget about the other fields of AI research and to associate these BS-generating assistants with sophistication. It also helps that “AI” is a snappy two-syllable term, which makes it easier for the masses to accept.
I’m aware there are LLMs that you can modify and run on your own (last I checked, there was one called Llama/Llamas), and there are also tools that allow you to build your own. Even so, my reasons for wanting to avoid this trend go beyond the fact that the popular assistants are created, marketed, and controlled by malicious big-tech actors. I hesitate to get into them here, though, for the sake of brevity, and because I’m used (from real-life interactions) to people just “not getting it”. What I will say, however, is that this hype boom helped me realize that a lot of technologies and “conveniences” in my modern lifestyle were actually harming me in a myriad of hidden, subtle, but not insignificant ways.
I can understand others coming to different conclusions for themselves. But when in doubt, and when I have a choice, I prefer to err on the side of eliminating novel behaviors, lifestyle choices, and technologies that humans simply aren’t accustomed to, and seeing whether I can get the same thing in a different way.