Coding: 'why i don't use AI'

this is from ‘unixsheik’ - that’s the title he used prior to ‘unixdigest’ and the one i prefer - i’ve communicated with him and he seems to be a very capable, clean coder and sys admin - he is one of the reasons why i refer to the ‘modern web’ as being a garbage dump

The reason why I don’t use AI or even code completion

Let’s begin with AI. The reason why I don’t use AI, besides what I will explain in a moment about code completion, is that I have yet to see useful results beyond what is comparable to ripping off code snippets from Stack Overflow. Yes, AI can certainly produce a lot of code, more than snippets, but what it produces is ripped-off snippets of code from here and there. Often the code doesn’t work, or it contains security problems.

AI is hyped up to be something it isn’t, and its “stupidity” (you can’t really call it stupid, because it cannot be smart to begin with) is concealed behind what appear to be solid and authoritative answers. Still, it doesn’t know crap about what it is doing.

Microsoft CEO says up to 30% of the company’s code is written by AI - no surprise there

It seems like the tech world is falling completely apart. With the exception of a few good projects and products, everything is getting so crappy that it has long since passed what is fathomable.

I’ve used AI sparingly, but I have colleagues who are more invested in it.

Briefly tried Cursor and it did pretty well, albeit very similar to examples of code I’ve already come across. My test case used ANSI C, and the generated code had a lot of unused variables. Personally, I’d allow it to generate code to evaluate as a simple MVP, but nothing else.

I read that entire blog post (‘modern web’ as being a garbage dump) before looking at the individual points I thought I’d like to address. I do appreciate their general frame of reference for how things should be done, but it sort of feels like a server-vs-client-side argument about what should do the processing, rather than anything about AI.

On code completion: surely it just needs someone to review the output. It can save keystrokes (IP issues aside).

There will definitely be some leaky software based on people prompting for code, but there’s always the case for saving time (again, IP issues aside).

The list of reasons why I do not use LLMs in my personal life or for programming/development is not short. And yet, whenever I’ve tried to explain these reasons to people, they get incredibly defensive.

I don’t know much about brainwashing/mind control/propaganda, but whatever the corporate/government authorities have been up to in this realm has been incredibly effective. My armchair estimate is that at least a third to half of the developed world is already hooked on AI assistants in one way or another, and the people who are falling in love with AI assistants or using them as emotional support seem to be just the tip of the iceberg. I don’t really care what kind of slop the next generation of proprietary OSes and/or proprietary hardware has baked into it, because I’m going to avoid it like the plague, but what’s sad is that I imagine it’s all going to make it harder for regular people to distance themselves from it all should they ever realize what a trap it is.

i kind of agree - at least i’m certainly not in the ‘don’t use AI’ camp - technology isn’t the problem, it’s how it’s used and we humans (business) are simply not capable of using it in a purely beneficial way

i don’t think anyone should be stuck in a windowless cubicle hammering away on a keyboard if they don’t want to, and i think AI could become a vastly superior tool for writing code, but it’s got a long, long way to go - right now it’s more hype than reality

that may be a conservative estimate, and if we widen the scope beyond ‘assistants’, i’d wager that the number is close to 100% - for example, a lot of medical technology depends on AI

try taking a cell phone away from a teenager … people are sold on the tech under the guise of it being fun, useful and convenient, and it can be all of these things - they don’t see the dark side, or don’t care even if they do

a big caveat i neglected to mention - does using AI to assist with code make people lazy?

we’re already in a terrible spot - i think it was Stallman or the FSF or someone that said that ~90% of the code out there is crap (inefficient, insecure, written in the wrong language, etc.), so the last thing we need is lazier coders (look at the atrocity the web has become)

Right, which brings me to another important point.

I think the co-opting of the language in the 2022-present AI hype-boom is an enormous red flag. Prior to 2022, the term AI simply meant “artificial intelligence”, or any computerized system that served as a replacement for organic (non-artificial) intelligence. It could refer to general problem solvers (GPSs), which are programs that, given a complete set of rules and constraints, can find solutions to logical problems. Or, it could refer to neural networks that classify images and sounds. Or, it could refer to support vector machines (SVMs), which serve a similar purpose to neural networks, but do so in a completely different way.
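
(To make that older, broader sense concrete, here is a toy sketch of my own - nothing from the original post - of the “rules and constraints” kind of AI: a tiny backtracking solver that colours a small map so that no two neighbouring regions share a colour. There is no training data anywhere; the “intelligence” is just a systematic search over explicit rules.)

```python
# Toy illustration of the pre-2022 sense of "AI": a solver that, given explicit
# rules and constraints, searches for a solution to a logical problem.
# No training data is involved. (My own sketch, not from the original post.)

# The "rules" of the problem: which regions border which.
NEIGHBOURS = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}
COLOURS = ["red", "green", "blue"]

def solve(assignment=None):
    """Backtracking search: pick a region, try a colour, check constraints, recurse."""
    assignment = assignment or {}
    if len(assignment) == len(NEIGHBOURS):          # every region is coloured
        return assignment
    region = next(r for r in NEIGHBOURS if r not in assignment)
    for colour in COLOURS:
        # Constraint: no already-coloured neighbour may have this colour.
        if all(assignment.get(n) != colour for n in NEIGHBOURS[region]):
            result = solve({**assignment, region: colour})
            if result:
                return result
    return None                                     # dead end, backtrack

print(solve())  # -> {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}
```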

Now, colloquially, the term “AI” has become synonymous with LLMs (large language models), and especially the popular assistants built using those models. It has become a synecdoche, leaving the hundreds of other fields of research under the umbrella of “artificial intelligence” hanging out to dry.

It reminds me of the changing of the definition of the word “vaccine” during Covid. The definition according to the 2008 Compact Oxford English dictionary I have sitting on my desk is “a substance made from the microorganisms that cause a disease, injected into the body to make it produce antibodies and so provide immunity against the disease”. Now, compare this to the hilarity that is VACCINE Definition & Meaning - Merriam-Webster. I think any top-down change in common language like this signifies a brainwashing campaign is taking place. In the case of AI, various corporate and government entities want us to forget about the other fields of research in AI and associate these BS-generating assistants with sophistication. It’s also a nice, two-syllable term, which helps the masses accept it more easily.

I’m aware there are LLMs that you can modify on your own (last I checked, there was one called Llama/Llamas). There are also tools that allow you to build your own. Even so, my reasons for wanting to avoid this trend go beyond the fact that the popular assistants are being created, marketed and controlled by malicious big-tech actors. I hesitate to get into them here, though, for the sake of brevity, and because I’m used (based on real-life interactions) to people just “not getting it”. What I will say, however, is that this hype/boom has helped me realize there were a lot of technologies and “conveniences” in my modern lifestyle that were actually harming me in a myriad of hidden, subtle, but not insignificant ways.

I can understand others coming to different conclusions for themselves. But when in doubt, and when I have a choice, I prefer to err on the side of eliminating novel behaviors/lifestyle choices/technologies that humans simply aren’t accustomed to and seeing if I can get the same thing in a different way.

is there a fundamental difference between neural networks, artificial intelligence and machine learning? i’ve been treating them as pretty much the same

don’t get me started on the pseudo-science of “virology” and the misuse of PCR

100%, and this brainwashing, or maybe more accurately, behavior modification, has permeated the social fabric, but this is of course nothing remotely new - i think we’re just seeing a steeper ramping up in the last 25 years, give or take, and the role that “AI” is playing is particularly sinister

… if they’re even necessary

i don’t know what the future holds, but one possibility may be a greater divide between those that succumb to the UN-globalist-technocratic-smart city-AI agendas and the rest of us that choose to avoid cities like the plague and live a life much closer to nature as was intended

From a technical standpoint there is, but I totally get why the distinctions would be lost on most people outside the field. Artificial intelligence is an umbrella term for the entire field of research. Machine learning is a general term for an artificially intelligent system that learns progressively using training data. (Some artificially intelligent systems, such as GPSs (general problem solvers), don’t require training data to work.) A neural network is just a specific way of implementing artificial intelligence, one that simulates the structure of biological brains, which contain neurons and synapses. Some artificially intelligent systems use neural networks (e.g. LLMs, facial recognition, some chess-playing systems), while others don’t (e.g. other chess-playing systems, SVMs, and GPSs).
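
To make the distinction a bit more concrete, here is a rough Python sketch (my own toy example, assuming scikit-learn is installed; it isn’t anything from this thread). Both classifiers below count as machine learning because they are fitted to training data, but only one of them is a neural network, and neither is anywhere near an LLM.

```python
# Two different kinds of "AI", both under the machine-learning umbrella,
# trained on the same toy data. (My own sketch; assumes scikit-learn.)

from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier   # a small neural network
from sklearn.svm import SVC                        # a support vector machine

# Toy training data: two interleaving half-moons to classify.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "neural network (MLP)": MLPClassifier(hidden_layer_sizes=(16,),
                                          max_iter=2000, random_state=0),
    "support vector machine": SVC(kernel="rbf"),
}

for name, model in models.items():
    model.fit(X_train, y_train)                     # "learning" = fitting to the data
    print(f"{name}: test accuracy {model.score(X_test, y_test):.2f}")
```

Both should handle the toy task just fine; they simply get there through very different internal machinery, which is really all the terminology is pointing at.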

Yeah, I was tempted to mention that there are multiple layers of misinformation here

Yep, I’ve thought about this too, and I’d rather be free but uncomfortable in the wilderness somewhere than corralled and mind controlled in a techno-pseudo-utopia.

…and scientifically provable disinformation, but that’s a topic for another day