AI Search Engine's responses are often confidently wrong
Re: AI Search Engine's responses are often confidently wrong
I agree with your somewhat different point merlyn - those ChatGPT responses were truly eloquent, despite seemingly being based on total guesswork rather than the supplied essays
- Martin Walker
Moderator -
Posts: 22574 Joined: Wed Jan 13, 2010 8:44 am
Location: Cornwall, UK
Contact:
Re: AI Search Engine's responses are often confidently wrong
What if ChatGPT lied about NOT having read that woman's boring articles? It bored "him" so much that he decided to wind her up as revenge...
Mind you, the AI's eloquent flattery of her articles reminded me of Asimov's "Liar": "please tell me what to say: I only want to please you!"
Re: AI Search Engine's responses are often confidently wrong
At this point large language models are holding up a mirror, reflecting back to us what has been typed on the internet for the past thirty years. It's not pretty.
Suggesting that ChatGPT lies is anthropomorphising. ChatGPT doesn't think, know, want, or do anything that humans do. "Yes, but it admitted it was lying," you say. Eh, so its output has fooled you into thinking ChatGPT is more person-like than it is.
What the underlying model does is predict the next word. That's it. If the underlying model is given text and set running it predicts the next word, and the next one, and the next one ... resulting in a form of linguistic diarrhoea. ChatGPT has the underlying model packaged up into a supposedly usable format.
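To make the "predict the next word" loop concrete, here is a toy version in Python. It uses a bigram table (word-pair counts) rather than a transformer with billions of weights, so the corpus and names here are illustrative stand-ins, but the generation loop -- predict, append, repeat -- is the same shape:

```python
# Toy autoregressive text generator: a bigram model built from a tiny
# corpus. Real LLMs use transformers, but the loop is the same:
# predict the next word, append it, and feed the result back in.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat the cat sat on the rug "
          "the dog ate the fish").split()

# Count which word follows which.
follows = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    follows[w][nxt] += 1

def generate(start, n_words=8):
    out = [start]
    for _ in range(n_words):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])  # greedily take the most likely next word
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the cat sat on the"
```

Note how quickly greedy next-word prediction falls into a loop: the linguistic diarrhoea mentioned above, in miniature.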
When people say "AI [sic] is coming for your jobs" what they mean is that there are people who think that other people can be replaced by large language models. Large language models don't think, want or do anything.
Technically, it is quite amazing that neural networks work as well as they do. Here is a four neuron network to play with:
https://playground.tensorflow.org/#acti ... Text=false
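To see the sort of thing a handful of neurons can represent, here is a hand-wired two-neuron hidden layer computing XOR in plain Python (weights picked by hand for illustration; the playground learns its weights by training instead):

```python
# XOR from a two-neuron hidden layer with hand-picked weights.
# A single neuron cannot represent XOR, which is why the hidden
# layer matters.
def relu(x):
    return max(0.0, x)

def xor_net(x1, x2):
    h1 = relu(x1 + x2)        # fires if either input is on
    h2 = relu(x1 + x2 - 1.0)  # fires only if both inputs are on
    return h1 - 2.0 * h2      # cancel out the "both on" case

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))
```

Training, as in the playground, is just a way of finding weights like these automatically.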
Four neurons can't do much. But ChatGPT uses the same principle with millions of neurons. The scale of these LLMs is ludicrous. Worth noting that a human is a small language model. The amount of training data required for a human to hold its own with millions of artificial neurons is far smaller than the equivalent of a thousand Wikipedias required for ChatGPT to bullshit (not that) convincingly.

https://pennneuroknow.com/2023/08/15/ch ... g-matchup/
I don't think anyone in their right mind would suggest that a bunch of numbers lie.
What I see happening is that an absolute ton of money has been sunk into training this stuff without any real return. Copilot PC anyone? Didn't think so. At the moment this tech (which isn't AI) is being pushed to try and get a return on the money that has been sunk into it.
It ain't what you don't know. It's what you know that ain't so.
Re: AI Search Engine's responses are often confidently wrong
There's more to it than that; neural nets are just part of the story. There's also statistics and probability, set theory and, well, here's the index of the go-to book in computational linguistics: Mathematical Methods in Linguistics, by Partee, ter Meulen and Wall. I have a copy for sale if you're interested; it's the original 1990 edition... no? Thought not, LOL.
Preface.
Part A. Set Theory.
1. Basic Concepts of Set Theory.
2. Relations and Functions.
3. Properties of Relations.
4. Infinities.
Appendix A1.
Part B. Logic and Formal Systems.
5. Basic Concepts of Logic.
6. Statement Logic.
7. Predicate Logic.
8. Formal Systems, Axiomatization, and Model Theory.
Appendix B1.
Appendix BII.
Part C. Algebra.
9. Basic Concepts of Algebra.
10. Operational Structures.
11. Lattices.
12. Boolean and Heyting Algebras.
Part D. English as a Formal Language.
13. Basic Concepts of Formal Languages.
14. Generalized Quantifiers.
15. Intensionality.
Part E. Languages, Grammars, and Automata.
16. Basic Concepts of Languages, Grammars, and Automata.
17. Finite Automata, Regular Languages and Type 3 Grammars.
18. Pushdown Automata, Context-Free Grammars and Languages.
19. Turing Machines, Recursively Enumerable Languages, and Type 0 Grammars.
20. Linear Bounded Automata, Context-Sensitive Languages and Type 1 Grammars.
21. Languages Between Context-Free and Context-Sensitive.
22. Transformational Grammars.
I still don't get the whole brouhaha about AI. It is a technology; many claims are made for it, especially by those that stand to trouser $billions from it. Again, it's the 3 Steps to World Domination Model:
1. Create an Enemy (AI)
2. Convince the populace they are under threat ("It's coming after your job")
3. Tell the people, "Never mind, you are safe with me. Buy my gizmo - hurry while stocks last"
Job done, which way to the bank? (Aside: they've all shut down.)
I read a news item today. Throughout China, colleges are offering night school classes in AI to all, free or subsidised by AI companies, and anyone can enrol.
Is this going to turn a 60-year-old rice paddy farmer into an AI coding expert? No; the courses are only meant to make the populace aware of what AI offers. So, for example, one young woman has learned to make videos and has set up a little company making pop videos. A farming co-operative has learned to put produce online and, because they also took the 'How to Fly a Drone' course, likewise offered free at the colleges, they are delivering some produce by drone. Another businesswoman, with a small fashion company, said she has seen a 30% increase in sales after using AI for marketing.
The point being, we can take it or leave it, but as for the hype: enough already. The clue is in the name: 'Artificial', not Human/Organic Intelligence, and we haven't worked out what that is yet. The companies involved will keep foisting it upon us, of course they will; they have a vested interest in keeping the acronym 'AI' in public view. They want to be the next AI equivalent of Windows, but there's nothing in ChatGPT that you can't get from a book out of the library, except that you get it quicker, of course.
It has to be remembered, though, that science is a craft as well as an art, and it is the process of discovery: what can we do? It's the challenge of coming up with something novel. When the wheel was invented, I am sure there were those that said, "What's the point of it, when we've got a pair of legs anyway?" But as we all know, what goes around comes around.
Re: AI Search Engine's responses are often confidently wrong
That's the thing about neural networks -- training takes care of probability and statistics. I'm not sure that you're comprehending what is involved in training a neural network with 80 billion parameters. Training ChatGPT was on the scale of an infrastructure project. A modest training setup would be 2048 Nvidia H100s. They cost $40,000 each, so we're looking at $82,000,000. Then the same for the electricity bill. Things have moved on from a 33MHz 486-DX in 1990.
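As a sanity check on those figures (the post's ballpark numbers, not Nvidia list prices):

```python
# Back-of-envelope hardware cost: 2048 H100s at roughly $40,000 each.
gpus = 2048
price_per_gpu = 40_000  # USD, approximate
hardware_cost = gpus * price_per_gpu
print(f"${hardware_cost:,}")  # -> $81,920,000, i.e. roughly the $82M quoted
```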
I'm glad to see you're doing your bit for quality training data.
It ain't what you don't know. It's what you know that ain't so.
Re: AI Search Engine's responses are often confidently wrong
merlyn wrote: ↑Fri Jun 06, 2025 4:02 pm
That's the thing about neural networks -- training takes care of probability and statistics. I'm not sure that you're comprehending what is involved in training a neural network with 80 billion parameters. Training ChatGPT was on the scale of an infrastructure project. A modest training setup would be 2048 Nvidia H100s. They cost $40,000 each, so we're looking at $82,000,000. Then the same for the electricity bill. Things have moved on from a 33MHz 486-DX in 1990.
I'm glad to see you're doing your bit for quality training data.
Maybe you haven't comprehended that the statistics and probability are factored in as part of the coding; they don't just appear from nowhere. AI relies on more than one method, just as we trawl our database in our brain before we come up with a hypothesis: our brain will consider the probability of a notion/concept/decision. We won't need to consult 80 billion parameters, because our life experience, and/or the experience of others we can avail ourselves of, will give us a functional result before ChaChaGPT has got its pants on.
"Things have moved on since a 33MHz 486-DX." Have they? What things, exactly? There's still war, it's just being fought differently; there's still starvation, there is still pestilence; children will perish, starved of nurturing and for the want of a bowl of gruel. What are we doing that is profoundly different? We are doing things faster, and using different methodology, but what has changed fundamentally, beyond recognition? Space travel, I suppose, but even then we are just getting from A to B using some form of propulsion.
It's a remarkable thing, the brain; they say there are more connections in it than there are stars in the sky. Half the size of a head, and yet potentially infinitely more convincing than your 2048 Nvidia H100s at $40,000 each; it could wipe the floor with them. Your gigafactory-sized megalithic data centres don't impress me much thus far. Yes, ChattoGPT can consult a database so big it can be seen from space, but like Googoyle, 99% of the results are useless. In fact, the magnitude of power required to throw up an essay on the sex life of a dung beetle or whatever makes the whole thing look ridiculous.
Is it useful? Yep, but whatever inspired you to make the patronising analogy between a 486-DX and a server farm the size of Wales? C'mon, fess up: have you been getting your thinking from ChattyGPT?
"I'm glad to see you're doing your bit for quality training data"
Well that doesn't surprise me, that I have gladdened you, it just sort of comes naturally to me, it happens all the time, dunno why, I don't really give it any thought. One day I might feel inclined to plough the furrows of sapient oracles of a server farm, once they've got rid of the weeds.
Re: AI Search Engine's responses are often confidently wrong
What was this thread about again? Oh that's right. Confidently wrong LLM-based search engines. I wonder why that is.
"the statistics and probability are factored in as part of the coding, they don't just appear from nowhere"
With neural networks they do. There is an element of brute force involved, which is why training an LLM requires enough electricity to power a small country.
"AI relies on more than one method, just as we trawl our database in our brain before we come up with a hypothesis: our brain will consider the probability of a notion/concept/decision. We won't need to consult 80 billion parameters, because our life experience, and/or the experience of others we can avail ourselves of, will give us a functional result before ChaChaGPT has got its pants on."
A neural network is based on a loose model of how the brain works. Parameters in a neural network are like connections between neurons in a human brain. A human brain has 100 trillion+ connections between neurons.
"Things have moved on since a 33MHz 486-DX." Have they? What things, exactly?
The amount of available processing power. There wasn't enough processing power on Earth in 1990 to train ChatGPT or its mates Deepseek, Meta AI, Grok, Gemini, Mistral or Claude.
If you use WhatsApp you will have seen Meta AI appear as a contact. Some numbers for Meta's new chatbot thing ... the underlying model is called Llama and the latest incarnation has 405 billion parameters. It was trained on 16,000 H100s. That's $640,000,000 worth of hardware. If you have that kind of money a $100,000,000 electricity bill isn't going to be a problem.
Mark Zuckerberg, the second saddest man on Earth, seems to think that people will use Meta AI as a virtual friend.
All the billionaires have a pet LLM. Superyacht, check. LLM, check. Zuckerberg -- Meta AI, Gates -- Copilot, Musk -- Grok. The billionaires behind Google aren't so attention-seeking but they have Gemini.
... yes ChattoGPT can consult a database so big it can be seen from space ...
I see people type that a lot. It's not really how LLMs work. You could look up how they do work ... or not.
No LLMs were harmed in the making of this post.
It ain't what you don't know. It's what you know that ain't so.
Re: AI Search Engine's responses are often confidently wrong
Don't mistake me for someone that gives a hoot. Here's some free advice: try to lecture someone like me and you're on a hiding to nothing. Surely you can work that out for yourself?
Re: AI Search Engine's responses are often confidently wrong
As a counter to the (genuine) untrustworthiness of LLMs, there’s a specialised maths model that’s as good as the best graduate students, but takes 10 minutes to solve a problem that would take them months or years. I’ve highlighted an amazing sentence in the quote.
When AIs are used by people who have the competence to check their results I think they are going to be revolutionary.
But the downsides are enormous. There's already evidence, according to Business Insider, that graduate trainee opportunities are declining because AI can do some of the things those trainees would do.
“I came up with a problem which experts in my field would recognize as an open question in number theory—a good Ph.D.-level problem,” he says. He asked o4-mini to solve the question. Over the next 10 minutes, Ono watched in stunned silence as the bot unfurled a solution in real time, showing its reasoning process along the way. The bot spent the first two minutes finding and mastering the related literature in the field. Then it wrote on the screen that it wanted to try solving a simpler “toy” version of the question first in order to learn. A few minutes later, it wrote that it was finally prepared to solve the more difficult problem. Five minutes after that, o4-mini presented a correct but sassy solution. “It was starting to get really cheeky,” says Ono, who is also a freelance mathematical consultant for Epoch AI. “And at the end, it says, ‘No citation necessary because the mystery number was computed by me!’
It ain't what you don't know. It's what you know that ain't so.
Re: AI Search Engine's responses are often confidently wrong
In China, colleges throughout the whole country are offering free night school courses to anyone and everyone in the use of AI, and people of all ages and backgrounds are enrolling, from farmers to PhDs.
These courses do not, of course, aspire to teach AI itself; they are aimed at making people aware of what apps there are that use AI, and how to use those apps. They even have accompanying courses in piloting drones, with the colleges having all the requisite equipment at their disposal. Can one imagine that level of investment in our colleges? Adult education in the UK has seen an ever-decreasing amount of funding over the years; it has all but vanished.
Re: AI Search Engine's responses are often confidently wrong
I think we should be at least investing in AI awareness lessons in schools and in adult education. What it 'really' is. How it can be productively used. Where it shouldn't be used (hello hallucinations!) How it can be misused. How to spot where it might have been used. What the environmental and hidden costs are, etc. etc. etc.
Even after the hype dies down it will still be with us and we need to understand it in the same way that we do any other commonly available tool.
- Drew Stephenson
Apprentice Guru -
Posts: 29715 Joined: Sun Jul 05, 2015 12:00 am
Location: York
Contact:
(The forumuser formerly known as Blinddrew)
Ignore the post count, I have no idea what I'm doing...
https://drewstephenson.bandcamp.com/
Re: AI Search Engine's responses are often confidently wrong
The Silicon, the Money, and the Billion-Dollar Bullshit Machine
This is a topical thread as Apple just published a paper that, according to Gary Marcus, all but eviscerates the popular notion that large language models can reason reliably.
https://www.theguardian.com/commentisfr ... break-down
The merits of LLMs can't be measured on a VU meter. There isn't a needle that points at a value along a scale of good and bad. It's a multi-dimensional scenario.
It ain't what you don't know. It's what you know that ain't so.
Re: AI Search Engine's responses are often confidently wrong
Good article I think. Here's the link to the referenced Apple study as well:
https://ml-site.cdn-apple.com/papers/th ... inking.pdf
And here's an interesting (though much less scientific) guide to helping you spot AI generated text: https://www.youtube.com/watch?v=9Ch4a6ffPZY
- Drew Stephenson
Apprentice Guru -
Posts: 29715 Joined: Sun Jul 05, 2015 12:00 am
Location: York
Contact:
(The forumuser formerly known as Blinddrew)
Ignore the post count, I have no idea what I'm doing...
https://drewstephenson.bandcamp.com/
Re: AI Search Engine's responses are often confidently wrong
I am a little confused by it all (in part because I find it a deep bore; never say never, mind you), which follows Merlyn's "multi-dimensional" explanation. Part of me is wondering where it will lead; part of me is not too worried, but also concerned for society and the world as a whole, especially the unknowable aspects.
I guess you know more about all of this if you are in IT or some high-tech industry that has a department investigating it all. Obviously the actual programming of it is very highly specialised; maybe less so the use, though I am not sure what I would be, or am meant to be, using it for. I am OK as I am and don't have any major gaps I feel a need, ay aye, to fill. Maybe I am missing one of the most thrilling and fascinating technologies of our time?
I just feel that if I start going down the ay aye rabbit hole I am taking my eye off the ball of what I do know and work with well: sound and audio. It's my profession, and an outside-of-work interest as well.
When you do not know the potential (for any given industry), other than bits you hear and see here and there, is it not a case of hurrying to learn about it, as much and as fast as you can, to have the edge over... whatever? And trying to keep your edge before you are no longer necessary, because ultimately you are going to be disposable, replaced by the very thing you are trying to learn?
Maybe it would be best to learn something it cannot do, rather than chase it in hope and in vain, only to be slapped in the face anyway a few years down the line when you are surplus to requirements.
- SafeandSound Mastering
Frequent Poster - Posts: 1670 Joined: Sun Mar 23, 2008 12:00 am Location: South
Mastering: 1T £30.00 | 4T EP £112.00 | 10-12T Album £230.00 | Stem mastering £56.00 (up to 14 stems) masteringmastering.co.uk
Re: AI Search Engine's responses are often confidently wrong
I think this explains why I do not use and have little interest in it. I do almost none of the things suggested that it is supposedly useful for.
https://www.elegantthemes.com/blog/busi ... -to-use-ai
I do not see any positives in handing over what I enjoy doing already. I enjoy the feeling of involvement.
- SafeandSound Mastering
Frequent Poster - Posts: 1670 Joined: Sun Mar 23, 2008 12:00 am Location: South
Mastering: 1T £30.00 | 4T EP £112.00 | 10-12T Album £230.00 | Stem mastering £56.00 (up to 14 stems) masteringmastering.co.uk
- alexis
Longtime Poster - Posts: 5284 Joined: Fri Jan 10, 2003 12:00 am Location: Hampton Roads, Virginia, USA
Home of the The SLUM Tapes (Shoulda Left Un-Mixed), mangled using Cubase Pro 14; W10 64 bit on Intel i5-4570 3.2GHz,16GB RAM;Steinberg UR28M interface; Juno DS88; UAD2 Solo/Native; Revoice Pro
Re: AI Search Engine's responses are often confidently wrong
"Confidence is a preference for the habitual voyeur of what is known as (parklife)"

- Drew Stephenson
Apprentice Guru -
Posts: 29715 Joined: Sun Jul 05, 2015 12:00 am
Location: York
Contact:
(The forumuser formerly known as Blinddrew)
Ignore the post count, I have no idea what I'm doing...
https://drewstephenson.bandcamp.com/
Re: AI Search Engine's responses are often confidently wrong
SafeandSound Mastering wrote: ↑Wed Jun 11, 2025 7:46 pm I am a little confused by it all (in part because I find it a deep bore; never say never, mind you), which follows Merlyn's "multi-dimensional" explanation.
You're not missing much. One angle we can look at it from is as a billionaire's plaything. Here's a couple of billionaires, Sam Altman and Jensen Huang, with their new toy.

Last time I looked there were around 400 billionaires on Earth. What they do has a disproportionately large effect on the rest of us. Joe Blow buying a new graphics card doesn't make the news.
It ain't what you don't know. It's what you know that ain't so.
Re: AI Search Engine's responses are often confidently wrong
Drew Stephenson wrote: ↑Wed Jun 11, 2025 10:28 am I think we should be at least investing in AI awareness lessons in schools and in adult education. What it 'really' is. How it can be productively used. Where it shouldn't be used (hello hallucinations!) How it can be misused. How to spot where it might have been used. What the environmental and hidden costs are, etc. etc. etc.
Even after the hype dies down it will still be with us and we need to understand it in the same way that we do any other commonly available tool.
Yep, a modicum of informed knowledge will give almost anyone enough to evaluate an item/circumstance/event and say, "So what's the big deal? Just another load of hype."
And the purveyors of the hype know full well that one must keep one's fizzog in the media. Altman is fully aware of that, selling ChatGPT as if it's a cure for baldness, or will make the blind see again or the lame walk. Hold it up to the light and it's clear: it is nothing more than sophisticated pattern matching with a gargantuan amount of data at its disposal. It was said over 30 years ago that data is the new oil, and sure enough it is. Only today there was a report on the radio that supermarkets are making a stack of money selling off the data collected from 'loyalty cards'. No surprises there then.
Re: AI Search Engine's responses are often confidently wrong
Having a mess about on a tongue drum, I used my phone to find that the frequency of the root note was 902Hz. "What note is that?" I thought. Typing it into Google, the LLM's response was so wrong it was difficult to believe.

The frequency of middle C is right. Everything else is wrong. Everything it could get wrong, it did get wrong. This shows that what has been typed on the internet about pitch and frequency is confused and wrong.
So what note is 902Hz? It's a sharp A. A is a more sensible note to use as a guide as frequencies come out as whole numbers.
A5 = 880Hz
To go up by a semitone, multiply by 2^(1/12) = ~1.059
Bb5 = 880 * 1.059 = ~932Hz
902Hz is nearer A. To find how many cents out that is, use:
1200 * log2(902/880) = ~+43 cents
A useful answer would have been A5 +43 cents. It's clear that Gemini, Google's LLM, didn't get the answer the way I did. Neither did it look anything up on the internet or consult a database. All that was done at an earlier time and embedded in the weights of the connections between artificial neurons.
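The arithmetic above generalises to any frequency. Here's a minimal sketch in Python (the function name is my own; it assumes the standard A4 = 440Hz equal-temperament reference and MIDI note numbering, where A4 is note 69):

```python
import math

A4 = 440.0  # standard concert pitch reference, in Hz
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def describe_pitch(freq_hz):
    """Return the nearest equal-tempered note and the offset in cents."""
    # Number of semitones above A4 (fractional in general):
    # 12 * log2(f / 440)
    semitones = 12 * math.log2(freq_hz / A4)
    nearest = round(semitones)
    # 100 cents per semitone, so the residue in cents is:
    cents = round(100 * (semitones - nearest))
    # MIDI numbering: A4 = 69, C-1 = 0
    midi = 69 + nearest
    name = NOTE_NAMES[midi % 12]
    octave = midi // 12 - 1
    return f"{name}{octave} {cents:+d} cents"

print(describe_pitch(902))  # A5 +43 cents
print(describe_pitch(880))  # A5 +0 cents
```

This is exactly the calculation a useful answer engine would do: one logarithm, one rounding, done.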
It ain't what you don't know. It's what you know that ain't so.
Re: AI Search Engine's responses are often confidently wrong
It's truly frightening how bad AI is at things like this.
I've no idea how it's working out answers to the questions asked, but relying on Internet dribblings as facts is clearly very dangerous.
Centuries of real knowledge and understanding are going to be pissed away in a matter of years as the next generation comes to rely on AI as the fount of 'new knowledge'...
And people wonder why I get pissy about pseudo-facts on the SOS forums...
- Hugh Robjohns
Moderator -
Posts: 43691 Joined: Fri Jul 25, 2003 12:00 am
Location: Worcestershire, UK
Contact:
Technical Editor, Sound On Sound...
(But generally posting my own personal views and not necessarily those of SOS, the company or the magazine!)
In my world, things get less strange when I read the manual...
Re: AI Search Engine's responses are often confidently wrong
Hugh Robjohns wrote: ↑Tue Jun 17, 2025 12:38 pm It's truly frightening how bad AI is at things like this.
I've no idea how it's working out answers to the questions asked, but relying on Internet dribblings as facts is clearly very dangerous.
Centuries of real knowledge are going to be pissed away in a matter of years as the next generation comes to rely on AI as the fount of 'new knowledge'...
And people wonder why I get pissy about pseudo-facts on the SOS forums...
I don't!
Personal example of how using AI can cause problems: We had it put together a multi-city vacation itinerary for us. Checking the details ... it simply made up some sites that didn't exist, were in the wrong city, etc.!
I think any fact-based use of AI absolutely requires detailed checking, which of course lessens the time savings benefit of AI, sometimes significantly.
On the other hand ... I'm observing from others that AI can be quite useful when not using it for fact-based queries. For example, my wife is going to give a free pickleball class/lesson to our community, and asked AI to: 1) Create a flyer announcing the event, and 2) Put together a lesson plan. I can vouch that the flyer looked good, and she said the lesson plan was quite good as well (with maybe a second prompt or so, I can't remember).
- alexis
Longtime Poster - Posts: 5284 Joined: Fri Jan 10, 2003 12:00 am Location: Hampton Roads, Virginia, USA
Home of the The SLUM Tapes (Shoulda Left Un-Mixed), mangled using Cubase Pro 14; W10 64 bit on Intel i5-4570 3.2GHz,16GB RAM;Steinberg UR28M interface; Juno DS88; UAD2 Solo/Native; Revoice Pro