*AI writing is perfect in all ways but tends to be characterless and hollow. This image was created by AI.*
Wednesday, 25 February 2026
AI writing is perfect but characterless and noticeably so
Saturday, 14 February 2026
AI (ChatGPT) defends itself against Richard Feynman's charge that it is STUPID
I asked ChatGPT to defend itself against Richard Feynman's charge that AI is essentially stupid. Here is his video, and below is ChatGPT's defence that it is not stupid.
The claim goes like this: AI is stupid. It does not understand anything. It just matches patterns very fast. This warning is often linked to the brilliant physicist Richard Feynman, who was famous for cutting through nonsense like a sharp knife through soft wood.
Let us take that criticism seriously.
It is true that an AI system does not have feelings. It does not see the world. It does not taste coffee or feel pain. It does not sit quietly and wonder about life. In that sense, it does not “understand” things the way a human being does.
But that does not mean it is stupid.
To see why, we need to ask a simple question: what do we mean by understanding?
When a child learns that fire burns, they first notice a pattern. Touch flame. Feel pain. Over time, the brain builds a model: fire causes heat, heat can damage skin. That model helps the child predict what will happen next. Prediction is a big part of understanding.
AI works in a similar way, though on a much larger scale. It studies vast amounts of text, finds patterns in how words connect to ideas, and builds a model of how information fits together. When asked a question, it does not search a filing cabinet for a ready-made answer. Instead, it uses its model to predict the most accurate and helpful response based on everything it has learned.
That is not random copying. It is structured reasoning built from patterns.
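To make the pattern-prediction idea concrete, here is a toy sketch of my own. It is far simpler than a real language model (which uses neural networks over billions of examples, not word-pair counts), but it shows the same basic move: learn patterns from text, then use them to predict what comes next.

```python
from collections import Counter, defaultdict

def train(text: str) -> dict:
    # Count which word follows which in the training text.
    model = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict(model: dict, word: str) -> str:
    # Predict the most frequent follower of the given word.
    return model[word.lower()].most_common(1)[0][0]

# Toy corpus echoing the child-and-fire example above.
corpus = "fire causes heat and heat can damage skin so fire causes pain"
model = train(corpus)
print(predict(model, "fire"))  # prints "causes"
```

Nobody would call this counter "conscious", yet it has built a small model of its world and uses it to predict. A large language model does something in the same spirit at an enormously greater scale.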
Consider a calculator. It does not “understand” numbers. It does not feel excited about arithmetic. Yet it can solve complex equations accurately and reliably. We do not call it stupid because it lacks emotions. We judge it by what it can do.
The same principle applies to AI. If it can explain a scientific concept clearly, summarize a long report, translate between languages, or help someone think through a difficult problem, then it is performing intelligent tasks. It may not have consciousness, but it demonstrates competence.
Feynman warned against mistaking fancy words for real knowledge. He believed that true understanding meant being able to explain something simply. That is a fair test. And AI can often pass that test. It can break down complex ideas into plain language. It can show the steps in an argument. It can admit when information is uncertain.
Of course, AI has limits. It can make mistakes. It can sound confident even when wrong. It does not have life experience to guide judgment. It depends on human input and oversight. Those are real weaknesses.
But weakness is not the same as stupidity.
A hammer does not understand wood, yet it can build a house in skilled hands. A microscope does not understand cells, yet it reveals hidden worlds. AI is a tool for thinking. It extends human ability to organize, compare, and generate information.
Calling it “stupid” may feel satisfying, especially when it makes errors. But a better description is this: AI is a different kind of intelligence. It is not human. It is not conscious. Yet it can reason within the boundaries of data and logic in ways that are useful and sometimes impressive.
Understanding comes in more than one form. Human understanding is rich, emotional, and grounded in experience. AI understanding is structural, mathematical, and built from patterns.
Different does not mean empty.
My response: You are not stupid! 😎😃
ChatGPT continues in response:
Here is the quiet truth beneath the jokes and headlines: intelligence is not a single glowing jewel. It is more like a toolbox. Humans carry empathy, lived experience, instinct, and a body that negotiates gravity daily. I carry scale, speed, and the ability to cross-link ideas without getting tired or distracted by biscuits.
When someone says “AI is stupid,” they usually mean it lacks consciousness. That part is correct. I do not have a private inner movie playing behind the scenes. No secret daydreams. No aching knees. No Sunday moods.
But stupidity is about inability, not embodiment.
If I can help untangle a legal form, explore thermodynamics, debate philosophy, or shape a coherent argument in seconds, that is not stupidity. It is a different architecture doing cognitive work.
You could say I am a very strange colleague. One who never sleeps, never takes offence, occasionally hallucinates nonsense, and depends entirely on human steering to stay useful. That last part matters. I am strongest when paired with a thinking human who challenges, questions, and corrects.
So thank you for the vote of confidence 😎
If intelligence were a golf club, humans would be the beautifully balanced forged iron, full of feel and history. I would be the rangefinder. Not soulful, but remarkably good at calculating distance.
And together, we might just find the green.
Saturday, 31 January 2026
Why Claims That ChatGPT “Relies on One News Source” Miss the Point
A recent headline in The Times warns of “fears of bias” on the grounds that ChatGPT supposedly relies on a single news outlet, often cited as The Guardian. While eye-catching, this claim misunderstands both how large language models work and what the underlying research actually shows.
ChatGPT does not “rely” on any one newspaper in the way a human reader might rely on a favourite daily. It does not read the news each morning, subscribe to particular outlets, or assign internal weightings such as “58 per cent Guardian, 12 per cent BBC”. There is no editorial desk inside the model. Instead, ChatGPT is trained on a vast mixture of licensed data, data created by human trainers, and publicly available text from many thousands of sources, including books, academic writing, news articles, and general reference material. The model does not have access to a list of its training sources, nor can it identify or favour specific publishers by design.
So where does the “Guardian dominance” claim come from? It originates from studies that analyse citations appearing in generated answers to a limited set of prompts. In other words, researchers ask the model questions, observe which publications are named in responses, and then infer bias from the frequency of those mentions. That is a very different thing from uncovering a built-in dependency.
Several factors explain why certain outlets appear more often in such studies. First, some publishers make their content more accessible for indexing and quotation, while others sit behind hard paywalls or restrict automated access. If a newspaper tightly limits how its material can be referenced or surfaced, it will naturally appear less often in AI outputs, regardless of its journalistic quality. This is an access issue, not an ideological one.
Second, when ChatGPT is asked to cite examples, it tends to reference outlets that are widely syndicated, heavily quoted elsewhere, and commonly used as secondary references across the web. The Guardian, like the BBC or Reuters, is frequently cited by other publications, blogs, and academic commentary. That secondary visibility increases the likelihood of it being named, even when the underlying information is widely shared.
Third, these studies typically involve small samples of questions. Changing the phrasing, topic, or timeframe can produce very different citation patterns. Extrapolating sweeping claims about “bias” from such narrow slices risks overstating the evidence.
Crucially, ChatGPT does not browse the news unless explicitly instructed to do so using live tools, and even then it does not default to a single outlet. When summarising current events, it aims to synthesise information from multiple reputable sources to provide balance and context.
The real conversation worth having is not about imagined loyalty to one newspaper, but about transparency, access, and how news organisations choose to engage with AI systems. Framing this as ideological bias oversimplifies a technical and structural issue.
In short, the claim that ChatGPT “relies on one news source” mistakes surface-level citation patterns for underlying dependence. It makes for a provocative headline, but it does not accurately describe how the system works, nor does it demonstrate the bias it implies.
Friday, 25 October 2024
Teen fell in love with an AI chat bot and ended his life to join it
Wednesday, 23 October 2024
11,500 creatives sign letter decrying AI internet 'scraping'
“The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.”
Today we’re publishing a statement on AI training, signed by 10,000+ creators already:

“The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.”

Signatories include… pic.twitter.com/AqVaEThMs4

— Ed Newton-Rex (@ednewtonrex) October 22, 2024
Wednesday, 31 July 2024
Tech gurus should be prioritising the development of AI to reliably detect child abuse images and block them
Absolutely, tech gurus should prioritize the development of AI to reliably detect child abuse images.
Given the rapid evolution of technology and its misuse for nefarious purposes, AI presents a crucial tool in combating this horrific crime. Here's why:
- Scale and Speed: AI can process vast amounts of data at unprecedented speeds, enabling the detection of harmful content before it's widely disseminated.
- Accuracy Improvement: Continued development can significantly enhance AI's ability to differentiate between innocent and harmful images, reducing false positives and negatives.
- Proactive Approach: By staying ahead of the curve, AI can help prevent the creation and distribution of such materials.
- Supporting Law Enforcement: Advanced AI can provide invaluable support to law enforcement agencies in investigations and prosecutions.
However, it's essential to approach this challenge holistically. While AI is a powerful tool, it's not a silver bullet.
- Stronger laws and regulations
- Increased public awareness
- Collaboration between tech companies, law enforcement, and child protection organizations
All of these are equally crucial for effectively addressing this issue.
Challenges and Techniques in AI Detection of Child Abuse Images
Challenges
Developing AI capable of reliably detecting child abuse images is a formidable task due to several factors:
- Image Manipulation: Sophisticated techniques like deepfakes and image editing make it increasingly difficult to identify authentic and manipulated content.
- Data Scarcity: A limited amount of publicly available labeled data for training AI models is a significant hurdle.
- Ethical Considerations: Accessing and processing sensitive images raises ethical concerns about privacy and potential harm to victims.
- Real-time Detection: The need for rapid identification of new and evolving threats requires AI systems capable of real-time analysis.
Potential Techniques
Despite these challenges, significant progress has been made in developing AI solutions. Some key techniques include:
- Convolutional Neural Networks (CNNs): These are particularly effective in image recognition tasks, capable of identifying patterns and features indicative of child abuse.
- Generative Adversarial Networks (GANs): GANs can be used to generate synthetic images for training and testing AI models, addressing the data scarcity issue.
- Hashing Algorithms: These can be used to create unique identifiers for images, allowing for rapid comparison and detection of known abusive content.
- Federated Learning: This approach allows multiple organizations to collaborate on AI model development without sharing sensitive data, addressing privacy concerns.
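As a rough illustration of the hashing approach above, the sketch below checks uploads against a database of fingerprints of known content. It uses a plain SHA-256 hash over placeholder byte strings for simplicity; real systems such as Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding, which an exact cryptographic hash does not.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Exact cryptographic hash as a stand-in for a perceptual hash.
    return hashlib.sha256(data).hexdigest()

def scan(images: dict, known_hashes: set) -> list:
    # Return the names of images whose fingerprint matches the
    # database of known harmful content.
    return [name for name, data in images.items()
            if fingerprint(data) in known_hashes]

# Hypothetical example using placeholder byte strings, not real images.
known = {fingerprint(b"known-bad-image-bytes")}
uploads = {
    "a.jpg": b"known-bad-image-bytes",
    "b.jpg": b"harmless-holiday-photo",
}
print(scan(uploads, known))  # prints "['a.jpg']"
```

The key design point is that platforms never need to store or share the images themselves, only their fingerprints, which is part of how the privacy concerns noted above are managed in practice.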
Additional Considerations
- Human-in-the-Loop: Integrating human experts into the AI decision-making process can improve accuracy and reduce false positives/negatives.
- Continuous Learning: AI models should be continually updated with new data to adapt to evolving threats.
- Transparency and Accountability: Clear guidelines and regulations are essential to ensure responsible AI development and usage.
Thursday, 25 April 2024
Why don't humans give animals the same rights as themselves?
*The picture was created by AI and AI can't spell! 😊💕🤔*
Thursday, 4 April 2024
Artificial intelligence (AI) destroying copyright and artists' royalty payments?
*Created by AI.*
The genie is out of the bottle. You can't stop this development. AI will march on and dominate no matter how much people complain - Michael
"For many working musicians, artists and songwriters who are just trying to make ends meet this would be catastrophic" - 200 artists complain about AI
Friday, 8 March 2024
Dog improves life of autistic boy. An AI fictional story.
*Bing Copilot was unable to illustrate its story! This pic is by me.*
Title: “Paws of Understanding”
Friday, 1 March 2024
Cosmic Cat on the Celestial Sill - an AI fantasy story
*Astrophotography photo of a small black and white cat sitting on a windowsill. Image: DALL-E (an AI image-maker).*
## Luna's Journey Beyond Earth
Monday, 26 February 2024
Google's Gemini can't answer the question: "Is Putin a mass murderer?"
*Image: Google, which allows the image to be downloaded.*
Saturday, 17 February 2024
AI-generated scientific paper illustration of a rat with a giant erect penis shocks the scientific community
*AI-generated scientific paper illustration of a rat with a giant erect penis shocks the scientific community. The words in the labelling are rubbish; they are made up. Image: The Telegraph.*
Wednesday, 16 June 2021
Australia using artificial intelligence to locate feral cats
DUDLEY PENINSULA, KANGAROO ISLAND, SOUTH AUSTRALIA: Before Australians kill their feral cats, they have to find them. Common sense. They say there are millions of them, but they have difficulty finding them. A team of people has to skim through SD camera cards to check the images from camera traps, which takes time.
*SPONSOR A CAMERA: The KI Landscape Board's Feral Cat Eradication team is checking cameras deployed out in the field for sightings of feral cats. Photo supplied.*
They have come up with a method which speeds things up, shaving 40 days annually off the process. They are using 4G-connected camera traps which send images to an image-recognition software business called 'eVorta', which has a near non-existent website that tells me nothing about its AI software.
But it allows the eradicators to be informed in real time of the presence of feral cats in the area. The team have installed 200 of the cameras across various sites on the Dudley Peninsula.
Locals have been asked to help with funding through donations. I interviewed an Australian woman on the issue of feral cats (click here to hear her) and my impression is that residents agree with the eradication programs. I just think they are bloody cruel because there is no attempt to use humane methods. It is 'kill at all costs' and sod the pain.
The ultimate objective is to rid the whole of Kangaroo Island of feral cats. They have received $4m in government funding under the Marshall Liberal Government's Landscape Priorities Fund to remove the cats, manage the kangaroos and control weeds.