AI hype and the missing intelligence in 'artificial intelligence'

my opinion regarding AI is that it doesn’t exist, at least not in the public space - despite the hype, there’s nothing out there i’m aware of that can pass the Turing test and this is trivial to demonstrate

the massive amount of energy being consumed by AI is another problem (perhaps that can be solved with “green energy”, like cutting and burning trees ← yes, that’s a thing)

recently i had an interesting conversation with a guy who works for a very prominent robotics company and he seems to know a lot about AI and how companies invested in it are losing millions, even tens of millions, of dollars on their AI projects - he cited a few examples and provided links to financial reports

one of the points he made is that AI is surrounded by a lot of hype - the same was said by a guy i worked for who wrote neural networks to enable unmanned helicopters to fly themselves - he also stated that the major players in the AI game are doing it wrong because they’re building upon very flawed fundamentals

the tech is interesting to play with, but its uses are somewhat limited because there is no intelligence in artificial intelligence

today i saw an article over at unixdigest.com regarding the hype surrounding AI…

I passionately hate hype, especially the AI hype

it’s a very short article, but the links he provides are pretty interesting…

Has Generative AI Already Peaked? - Computerphile - YouTube

Beyond the Hype: A Realistic Look at Large Language Models • Jodie Burchell • GOTO 2024 - YouTube

Decoding AI: A Go Programmer’s Perspective - Beth Anderson, BBC - YouTube

Jon Stewart On The False Promises of AI | The Daily Show - YouTube


A fairly familiar tech tale this one, although it’s happening on a pretty grand scale in comparison and without the extremely loose credit that gave birth to your Netflix, Uber, Snap etc.

We’re currently sat at the Peak of Inflated Expectations getting ready to roll downhill

There will be some stuff which is kicking around currently which will survive, but yeah, everyone is putting AI into everything and I’m not hearing much in the way of utility, especially in work contexts.

Some of my friends who write simple code are using it to first draft stuff, those of my friends whose coding is more involved don’t use it at all and find it to be kind of more of a barrier to doing good work than anything. For writing I find similar.


hey Josh - the article you linked leads to another that crystallizes the AI situation…

AI winter - Wikipedia

In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.[1] The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.

as i learned from the links i provided earlier, apparently the history of AI extends back to the 40s

is there a difference between the previous and current AI hype? … i dunno, but i have the feeling that, this time, they aren’t going to let it go back into hibernation


It has also come back in many different forms, generative being the bulk of what we’re currently looking at.

is there a difference between the previous and current AI hype?

I’d say this cycle involves way more money and is way more totalising in terms of what it is seeking to do, but I wasn’t alive/watching on through other cycles so that could just be a case of availability bias.

Probably not going to die though, yes, I’d agree there.


Some good commentary here mainly from the market PoV https://www.youtube.com/watch?v=huu_9rAEiQU


There’s a fairly decent sci-fi exploration of this mechanical Turk-ism in the film Sleep Dealer.

As Boyle says, a lot of what you’re seeing in some sectors is a play to investors and not an example of something real (or useful). Great watch.

also :joy: :point_right: Artificial Canadian


I’ll likely watch that. Cheers

Real AI will need biology: Computers powered by human brain cells (2023)

They’re starting by making small clusters of 50,000 brain cells grown from stem cells and known as organoids. That’s about a third the size of a fruit fly brain. They’re aiming for 10 million neurons which would be about the number of neurons in a tortoise brain. By comparison, the average human brain has more than 80 billion neurons.

Artificial intelligence is the simulation of cognitive processes by machines, a concept coined by John McCarthy in 1956 to define a new field aimed at replicating human-like intelligence through artificial means. Intelligence in the conventional sense is the cognitive ability of humans and animals to learn, understand, and solve problems.

If you take the view that a machine can “think” and perform tasks that would require intelligence if done by humans, artificial intelligence has long been around. Personally I think (pun intended) that when we talk about human intelligence we shouldn’t drop the word human. Intelligence is a much broader field that encapsulates human, animal, artificial and other intelligences. We are again making the mistake of viewing ourselves as the centre of things, just as we did when we thought the Sun revolved around the Earth. All the references to AGI (Artificial General Intelligence) are unhelpful, and often used by hype-masters, so let’s not get into that. Instead I think it will be helpful to look at how we got to where we are. We cannot predict where we are headed and neither can AI, thankfully, but historical context can help us look at challenges in a less tribal way. What follows is my own context and perspective.

Much of the earlier development of AI, notably through to the 1980s, was characterised by usage of symbolic reasoning, logic-based methods, and rule-based systems to simulate aspects of human intelligence. It’s known as GOFAI (Good Old-Fashioned Artificial Intelligence). I dabbled in that myself back in the day, but in the 80s and 90s my early career focussed on HPC (High-Performance Computing). HPC started with supercomputers and often used parallel processing techniques to solve complex computational problems that require a vast amount of processing power. HPC has been used in fields such as scientific simulations, weather forecasting, molecular modelling, and big data analysis for decades. I suppose I have a reasonable claim to be one of the main people adapting and developing HPC algorithms, for solid and fluid dynamics simulations, and porting them to PCs. Such computations are (still) difficult to port to GPUs, for several technical reasons, so my own experience (outside developing visualisation tools) was largely with CPUs.

The rise of AI we see today refers to ML (Machine Learning) and its subset Deep Learning. The difference here from GOFAI, and indeed HPC, is that the learning algorithms are derived from data. And today, of course, there is a lot more data around! In the 1990s it was realised that ML models, particularly neural networks, improved significantly as the size of the training data increased. And in 1995, Vapnik’s Statistical Learning Theory provided a theoretical framework for understanding the relationship between data size, model complexity, and generalization performance. His theory highlighted that with more data, a model could achieve better generalization, assuming the model is complex enough to capture the underlying patterns in the data.
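
To put a number on that relationship, here is one textbook form of the Vapnik-Chervonenkis generalisation bound, written out from memory rather than quoted from his book, so treat it as indicative rather than definitive:

```latex
% One standard form of the VC bound (binary classification, 0-1 loss):
% with probability at least 1 - \delta over n i.i.d. training samples,
R(f) \;\le\; R_{\mathrm{emp}}(f) \;+\;
  \sqrt{\frac{h\left(\ln\frac{2n}{h} + 1\right) - \ln\frac{\delta}{4}}{n}}
% R(f): true risk;  R_emp(f): empirical (training) risk;
% h: VC dimension of the model class (its complexity);  n: sample size.
% As n grows relative to h the square-root term shrinks, i.e. more data gives
% better generalisation, provided h is large enough to capture the patterns.
```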

In the 2010s deep learning models like convolutional neural networks (CNNs) and recurrent neural networks (RNNs) demonstrated remarkable performance on tasks such as image recognition, speech recognition, and language translation, but this performance was heavily dependent on the availability of large-scale data. The breakthrough moment came with the AlexNet model, which won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2012. AlexNet, a deep convolutional neural network, was trained on millions of labeled images from the ImageNet dataset. The success of AlexNet was a clear demonstration that large-scale data, combined with powerful models and computational resources (GPUs), could significantly improve the performance of machine learning systems. AlexNet was developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton; you might have heard of them! This above all was the “Aha moment” for AI researchers and indeed numerous investors and entrepreneurs like myself.

Personally speaking I got deeply involved in AI during 2014 to 2019 at the University of Sussex supporting researchers and developing projects in ML based on the vast amounts of data coming from astronomy and particle physics experiments. I also worked on quantum computers there, but that is another story. More relevantly these groups were also developing techniques in Bayesian inference, which is a statistical method that has played a significant role in the development of various machine learning techniques. In Bayesian inference theoretical and empirical models can be used and then combined with machine learning to handle uncertainties probabilistically. Bayesian methods don’t require anything like as much data as deep learning, so in 2019 we spun out a company to apply these techniques to challenges for SMEs with smaller and more specialised datasets.
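
To give a flavour of why Bayesian methods can get by with far less data, here is a minimal, self-contained sketch of a conjugate Bayesian update in Python; it is purely illustrative and has nothing to do with the spin-out’s actual code:

```python
# Minimal sketch of a conjugate Bayesian update (Beta-Binomial).
# Purely illustrative; not the actual methods used in the work described above.

# Prior belief about an unknown success rate: Beta(a, b)
a, b = 2.0, 2.0                      # weakly informative prior centred on 0.5

# A small dataset: 7 successes out of 10 trials
successes, trials = 7, 10

# Conjugate update: the posterior is Beta(a + successes, b + failures)
a_post = a + successes
b_post = b + (trials - successes)

post_mean = a_post / (a_post + b_post)
post_var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))

# The posterior variance is an explicit, honest error bar: small data still
# yields an answer, but one that carries its uncertainty with it.
print(f"posterior mean {post_mean:.3f}, posterior sd {post_var ** 0.5:.3f}")
```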

Meanwhile the next big breakthrough in AI was the Transformers paper of 2017, called “Attention is All You Need”, published by Vaswani et al. at Google. The Transformer architecture has since revolutionized the field of natural language processing (NLP) and has kickstarted the current hype-wave seen with LLMs (Large Language Models) and other forms of Generative AI. Anyone like me who knew about that paper was confident that AI would see a big surge; the only uncertainty was when. It came earlier than almost anyone expected. And whilst OpenAI dominated the mainstream press, with the breakout of ChatGPT in late 2022, there were others already with projects shipped, and/or shelved, well before that. We became aware of, and/or involved in, these at Mojeek during 2021, after I joined in 2020.
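
For anyone curious, the core of that paper is scaled dot-product attention. Below is a minimal single-head sketch in plain NumPy; the shapes and names are my own, and it deliberately omits the learned projections, multiple heads, masking and everything else a real Transformer needs:

```python
# Toy single-head scaled dot-product attention, in the spirit of
# "Attention is All You Need". Illustration only, not a real implementation.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K: (seq_len, d_k); V: (seq_len, d_v). Returns (seq_len, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # pairwise similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V                                # weighted mix of the values

rng = np.random.default_rng(0)
Q, K = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
V = rng.normal(size=(4, 16))
print(scaled_dot_product_attention(Q, K, V).shape)    # -> (4, 16)
```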

At Mojeek we have been following these recent developments in AI closely ever since. We use and develop typically small and efficient ML models for various tasks. And as you may know we use an open source LLM (Mistral) for Mojeek summaries, applying RAG (Retrieval-Augmented Generation) using the Mojeek search results. As it happens the Mojeek API was used by FAIR, who pioneered RAG, back in 2021, so we are well aware of how this is used, and how it plays a massive if largely unseen part in Generative AI products nowadays.
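
Schematically, a RAG pipeline is straightforward: retrieve relevant text, pack it into the prompt, then generate. The sketch below shows that shape only; the function names search_web() and generate() are hypothetical placeholders, not the Mojeek API and not any particular LLM client:

```python
# Schematic RAG (Retrieval-Augmented Generation) flow. search_web() and
# generate() are hypothetical placeholders, not real APIs.

def search_web(query: str, count: int = 5) -> list[str]:
    """Placeholder: return the top `count` result snippets for `query`."""
    raise NotImplementedError("wire this up to a real search backend")

def generate(prompt: str) -> str:
    """Placeholder: call an LLM (for example an open model such as Mistral)."""
    raise NotImplementedError("wire this up to a real model")

def answer_with_rag(question: str) -> str:
    snippets = search_web(question)                   # 1. retrieve grounding text
    context = "\n".join(f"- {s}" for s in snippets)   # 2. pack it into the prompt
    prompt = (
        "Answer the question using only the sources below.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return generate(prompt)                           # 3. generate a grounded answer
```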

Where are we heading? Yes, there is a hype wave, and we are now past the peak. And in my opinion there is plenty of utility already showing, particularly in enterprise products, so we won’t see the bubble burst. It will more likely deflate, and arguably it already is deflating. And sadly some of the “alternative” AI companies are being swallowed up by Big Tech; Inflection into Microsoft and Character.AI into Google. Yes, there are all sorts of issues that should concern us: hallucinations, copyright, data “theft”, rampant data harvesting, and further privacy erosions.

AI “progress” is best seen as a long curve of development, and so I hope this longer than expected post helps with that. Thanks for reading this far, if you did.


I think it is that “various tasks” that us plain folk don’t understand.

I was reading this article about Jim Balsillie and there was a part about machine learning, intellectual property, and Canada:

Balsillie points to the Toronto-Waterloo corridor, often referred to as Silicon Valley North. The area is globally renowned as a producer of cutting-edge technology. World-changing tech companies? Not so much. For example, Geoffrey Hinton, the University of Toronto professor known as the godfather of artificial intelligence, started a company in 2012 to develop his revolutionary approach to machine learning but sold it to Google a short time later. Hinton’s discovery transformed Google. Eric Schmidt, the company’s former CEO, publicly thanked Trudeau at a 2017 event in Toronto for the gift of Canada’s AI innovation. “We now use it throughout our entire business, and it’s a major driver of our corporate success,” he said.

That’s the same exact hint: We now use [machine learning] throughout our entire business, and it’s a major driver of our corporate success.

That lack of understanding outside industry was also reflected in this article:

So, I’ll say that it is not clear how machine learning is being used in new ways beyond the code assistance and voice recognition that we’re all familiar with.

is that how AI should be measured though? maybe my criteria is too narrow, but i keep falling back to the Turing test and, as i mentioned earlier, there is nothing out there that i’m aware of that comes even close to passing it

i don’t know a lot about AI, but it seems that the fundamental difference between human intelligence vs. AI, regarding what we are being inundated with today (LLM’s), is that the AI can only make “intelligent” (arguably) choices based on already existing data, whereas the human can factor in scenarios and variables that aren’t expressed in the models - we can ‘imagine’

this gets us back to the video i linked earlier, ‘Has Generative AI Already Peaked?’, where more doesn’t necessarily equal better and, in fact, may result in the opposite

science can tell us that the cube is of ‘x’ dimensions, density and composition and that’s all that’s needed to reproduce the thing exactly - adding the opinions of 100m people, in this most simplistic example, is pointless, yet this is how AI is trained (LLM’s)

one of the most basic signs of true intelligence isn’t realized when an AI tells you in one sentence that the earth is spherical, then completely contradicts itself in the next by insisting it’s flat and that’s where “AI” seems to be at and i’m not sure how any amount of training using LLM’s, or the size of the data sets, will change that - i think the ‘truth’ will only be diluted by more data, at least in many cases

as i said, i don’t know a lot about neural networks, but i know and worked with someone who does and who has been in the game for a very long time at a high level - he insists that the basic foundations upon which AI is being built are fundamentally flawed

so yes, it depends on what criteria we set in order to be considered intelligent, but the way i see it, the ‘I’ is currently very absent in ‘AI’ - now what exists in SAP’s is perhaps a totally different matter, suffice to say that, one way or another, the future can be accurately predicted

To cite two examples: 1) as part of safe search, for categorisation; 2) for embeddings, which underpin semantic search scoring.
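
On the embeddings point, here is a toy illustration of how embedding similarity can score documents against a query; the embed() function is a made-up stand-in for the example, not how we actually compute embeddings:

```python
# Toy embedding-based semantic scoring: rank documents by the cosine
# similarity of their embeddings to the query's embedding.
# embed() is a made-up stand-in, not a real embedding model.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in embedding: a pseudo-random unit vector derived from the text."""
    rng = np.random.default_rng(sum(ord(c) for c in text))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def rank(query: str, docs: list[str]) -> list[tuple[str, float]]:
    q = embed(query)
    # Dot product of unit vectors equals cosine similarity
    scored = [(d, float(embed(d) @ q)) for d in docs]
    return sorted(scored, key=lambda t: t[1], reverse=True)

print(rank("open source search", ["web search engines", "gardening tips", "llm chatbots"]))
```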

That’s hyperbole. The acquisition of DeepMind has been far more significant to Google. And let’s not forget that Hinton’s student Sutskever co-founded OpenAI.

To list some of the usage in consumer internet applications, never mind the vast swath of enterprise applications:

  1. Recommendation in eCommerce
  2. Recommendation for streaming services
  3. Content personalisation for feeds
  4. Ranking for search engines
  5. Ad targeting
  6. Image recognition
  7. Spam filtering
  8. Content moderation
  9. Autocorrect and predictive text
  10. Translation


11. manipulating public opinion

Whether the Turing Test has already been passed is being debated and researched.

I agree and this is not often mentioned.

Indeed, who realises that the word intelligence derives from the Latin “intelligentia”, which comes from two Latin roots: “inter” meaning “between/among” and “legere” meaning “to choose/read/pick out”? “Intelligentia” thus roughly translates to “the ability to choose between”, so that relates to your point about making choices.

Personally I always hated the term AI, and until 2022 always tried to refer to machine learning and other more accurate terminology. There was a joke circa 2015 to 2020 that “AI is for pitch decks, ML is for real work”. Sadly the MSM took the generative AI hype bait, and it will be hard to reverse the trend. It would be far more productive for everyone if we discussed machine learning instead of artificial intelligence. And don’t get me started on AGI.

We should certainly be wary of the generative AI hype, but those in the field realise it’s about more than that. I blame the MSM again, and the folks they listen to, who are typically promoting either their investment or their safetyist agendas. Who among them is writing much nowadays about other ML techniques?

The problems of generative AI are well-known and being worked upon. The hallucination problem of autoregressive LLMs is inherent; you can call it fundamentally flawed if you like. Solutions lie in other techniques and developments, some we can imagine and some we cannot yet.

If you want to understand where AI :pensive: is heading, read the direct words of a variety of experts, and not the MSM, politicians or hype-masters. If I had to pick just three I would recommend the following:

  • Yann LeCun
  • Andrew Ng
  • Jack Clark

220 seconds of this podcast with LeCun starting at this point is the best short take I have heard yet, on where we are at. If you listen on you will hear some of the approaches that are being worked on to address the limitations that (autoregressive) LLMs have.

i agree with MIT …

All these claims are moot, however, since to date no reported test has conformed to Turing’s specified parameters.

then there’s the question of whether the specified parameters are sufficient to conclude, yes, it ‘thinks’…

Turing’s test is qualitative and discursive, and it disallows the tricky questions that computer scientists have typically used to unmask chatbots …

true AI, in my opinion, cannot imitate - it must originate (must have original thought) and all i’m seeing is nothing more than ‘intelligent’ imitation; a sometimes coherent assembly of previously published content … which is then brought to its knees with the right questions

while that process is not so dissimilar from human thought much (much=most i think) of the time, we have the ability to inject original thought if we choose

here’s what i think is a simple example of how GPT spectacularly fails the Turing test - ask ‘it’ a question, then repeat the same question

the answers may be the same, or similar, or different, but it won’t be ‘i just gave you the answer’ which is what a logical human response might be - it doesn’t realize that it already answered the question

it’s kind of like the chess example; sure, you can teach a computer to play chess, but does it know it’s playing chess?

so, ok, we can say that whether or not it passes the test depends on who’s asking the questions and what questions they ask and how they judge the answers, but that lacks a standard criteria and with the right criteria, all of the AI the public has access to, that i’m aware of anyway, fails miserably

i mean, they’re calling it specifically “artificial intelligence”, the insinuation being artificial human intelligence, when they should, as you suggested, be calling it machine learning or something along those lines

…some theorists have argued that it is mathematically impossible for a vast lookup-table mechanism to pass Turing’s test in the actual world–and Turing focused on real-world machines.

mmmhmmm!

regarding the other article, the numbers are interesting…

Scientists decided to replicate this test by asking 500 people to speak with four respondents, including a human and the 1960s-era AI program ELIZA as well as both GPT-3.5 and GPT-4, the AI that powers ChatGPT. The conversations lasted five minutes – after which participants had to say whether they believed they were talking to a human or an AI. In the study, published May 9 to the pre-print arXiv server, the scientists found that participants judged GPT-4 to be human 54% of the time,

ELIZA, a system pre-programmed with responses but with no large language model (LLM) or neural network architecture, was judged to be human just 22% of the time. GPT-3.5 scored 50% while the human participant scored 67%.

assuming some amount of comprehensive criteria was established for the test, if people are fooled into thinking a human is the machine 33% of the time … then is the Turing test even relevant? (yeah, i’m contradicting myself here, but there it is)

Something missing from artificial intelligence is the ability to value one choice over another.

There is a passage in David Eagleman’s The Brain which illustrates this point. In the book, he describes a patient with a traumatic brain injury. And, while she can analyze her options, as a practical issue, she cannot make decisions. And Eagleman draws the conclusion that she can no longer value one choice over another because she is no longer integrating the feelings coming from her body into her thoughts.

One way of expressing this idea is that artificial intelligence does not have a physical body. So, it does not have a set of instincts guiding a (potential) decision-making process.

As a further extension of this metaphor, AI doesn’t relate to or depend on other people for survival. And, so, it lacks an inherent motivation to act like people.

So, one argument might be that artificial intelligence would need some way to emulate these human characteristics before it could start to match the decision-making performance of humans. In other words, it would need some kind of evolutionary history and survival instinct in order to have more human behaviors.

Or, that criticism might be a limit of my imagination. In that Patrick Boyle video, he makes a historical comparison. Humanoid robots are like mechanical horses: they make sense to people at the beginning of a technological change but not to people living after a century of development. And my point is that artificial intelligence will develop in a direction that we can’t imagine today. And its lack of human limitations will probably be the most valuable aspect of the technology. After all, I can’t run as fast as a car can drive.


Man, I have not been in this forum in forever. Thank you for all this shared knowledge. Personally, I think my brother-in-law says it best: « AI stands for artificially intelligent. » Thank you for all the shared thoughts and links.


Talk show host Oprah is having a special television presentation titled “AI and the Future of Us: An Oprah Winfrey Special”.

The problem is that none of Oprah’s guests are credible critics of machine learning (ML). And this is set to basically be a giant advertisement for ML products from Microsoft (via Bill Gates and OpenAI CEO Sam Altman).

On the one hand, this show is completely legitimate because Oprah is an entertainer. And her job is to attract an audience and sell advertising. She has been doing that for forty years.

But this show will not be a good source of information. Gates, Altman, and Brownlee will promote “AI” products. Chris Wray will tell us about how America’s enemies are already using AI. And it looks like we’ll also hear something about how AI threatens humanity.

The American skeptical movement has been critical of Oprah for years for promoting unscientific ideas that harm her audience. This special appears to be more of the same. For millions of Americans, what they see next Thursday will be all they know about AI. And the predictable pro‑industry views and scare tactics will only misdirect the audience and undermine their ability to make informed choices.


Guest list:

  • Sam Altman, CEO of Open AI, will explain how AI works in layman’s terms and discusses the immense personal responsibility that must be borne by the executives of AI companies.
  • Microsoft Co-Founder and Chair of the Gates Foundation Bill Gates will lay out the AI revolution coming in science, health and education, and warns of the once-in-a-century type of impact AI may have on the job market.
  • YouTube creator and technologist Marques Brownlee will walk Winfrey through mind-blowing demonstrations of AI’s capabilities.
  • Tristan Harris and Aza Raskin, co-founders of Center for Humane Technology, walk Winfrey through the emerging risks posed by powerful and superintelligent AI — sounding the alarm about the need to confront those risks now.
  • FBI Director Christopher Wray reveals the terrifying ways criminals and foreign adversaries are using AI.
  • Pulitzer Prize-winning author Marilynne Robinson reflects on AI’s threat to human values and the ways in which humans might resist the convenience of AI.

Whitney Alyse Webb of Unlimited Hangout is one of the greatest independent investigative journalists and researchers of our time

Whitney Webb has been a professional writer, researcher and journalist since 2016. She has written for several websites and, from 2017 to 2020, was a staff writer and senior investigative reporter for Mint Press News. She is contributing editor of Unlimited Hangout and author of the book One Nation Under Blackmail.

here’s a clip regarding AI from an interview she did with Neil Oliver…

Whitney Webb: The Global Elite’s Authoritarian Plan - Videos - sovren.media

the full interview is here…

Neil Oliver Interviews Whitney Webb - It’s us versus them!

‘…organised crime, secret services, corporate power, the deep state…they’re investing massive amounts of money in manipulating us…’ This week Neil talks power & corruption with the brilliant and forensically detailed investigative journalist Whitney Webb.