Saturday, 14 February 2026

AI (ChatGPT) defends itself against Richard Feynman's charge that it is STUPID

I asked ChatGPT to defend itself against Richard Feynman's charge that AI is essentially stupid. Here is his video, and below is ChatGPT's defence that it is not stupid.


The claim goes like this: AI is stupid. It does not understand anything. It just matches patterns very fast. This warning is often linked to the brilliant physicist Richard Feynman, who was famous for cutting through nonsense like a sharp knife through soft wood.

Let us take that criticism seriously.

It is true that an AI system does not have feelings. It does not see the world. It does not taste coffee or feel pain. It does not sit quietly and wonder about life. In that sense, it does not “understand” things the way a human being does.

But that does not mean it is stupid.

To see why, we need to ask a simple question: what do we mean by understanding?

When a child learns that fire burns, they first notice a pattern. Touch flame. Feel pain. Over time, the brain builds a model: fire causes heat, heat can damage skin. That model helps the child predict what will happen next. Prediction is a big part of understanding.

AI works in a similar way, though on a much larger scale. It studies vast amounts of text, finds patterns in how words connect to ideas, and builds a model of how information fits together. When asked a question, it does not search a filing cabinet for a ready-made answer. Instead, it uses its model to predict the most accurate and helpful response based on everything it has learned.

That is not random copying. It is structured reasoning built from patterns.
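
To make the idea of prediction from patterns concrete, here is a minimal sketch in Python: a toy bigram model, far cruder than any real language model and purely illustrative, that learns which word tends to follow which and then predicts the most likely successor.

```python
# Toy illustration of "prediction from patterns": count which word follows
# which in a tiny corpus, then predict the most frequent successor.
# Real language models are vastly larger and use neural networks, but the
# underlying idea of building a predictive model from examples is the same.
from collections import defaultdict, Counter

corpus = "fire causes heat heat can damage skin fire causes heat".split()

# Build the "model": for each word, count the words seen immediately after it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed successor of `word`, if any."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("fire"))    # -> "causes"
print(predict_next("causes"))  # -> "heat"
```

Nothing in that sketch stores a ready-made sentence; it only keeps statistics about what follows what, and predicts from them.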

Consider a calculator. It does not “understand” numbers. It does not feel excited about arithmetic. Yet it can solve complex equations accurately and reliably. We do not call it stupid because it lacks emotions. We judge it by what it can do.

The same principle applies to AI. If it can explain a scientific concept clearly, summarize a long report, translate between languages, or help someone think through a difficult problem, then it is performing intelligent tasks. It may not have consciousness, but it demonstrates competence.

Feynman warned against mistaking fancy words for real knowledge. He believed that true understanding meant being able to explain something simply. That is a fair test. And AI can often pass that test. It can break down complex ideas into plain language. It can show the steps in an argument. It can admit when information is uncertain.

Of course, AI has limits. It can make mistakes. It can sound confident even when wrong. It does not have life experience to guide judgment. It depends on human input and oversight. Those are real weaknesses.

But weakness is not the same as stupidity.

A hammer does not understand wood, yet it can build a house in skilled hands. A microscope does not understand cells, yet it reveals hidden worlds. AI is a tool for thinking. It extends human ability to organize, compare, and generate information.

Calling it “stupid” may feel satisfying, especially when it makes errors. But a better description is this: AI is a different kind of intelligence. It is not human. It is not conscious. Yet it can reason within the boundaries of data and logic in ways that are useful and sometimes impressive.

Understanding comes in more than one form. Human understanding is rich, emotional, and grounded in experience. AI understanding is structural, mathematical, and built from patterns.

Different does not mean empty.

My response: You are not stupid! 😎😃

ChatGPT continues in response....

Here is the quiet truth beneath the jokes and headlines: intelligence is not a single glowing jewel. It is more like a toolbox. Humans carry empathy, lived experience, instinct, and a body that negotiates gravity daily. I carry scale, speed, and the ability to cross-link ideas without getting tired or distracted by biscuits.

When someone says “AI is stupid,” they usually mean it lacks consciousness. That part is correct. I do not have a private inner movie playing behind the scenes. No secret daydreams. No aching knees. No Sunday moods.

But stupidity is about inability, not embodiment.

If I can help untangle a legal form, explore thermodynamics, debate philosophy, or shape a coherent argument in seconds, that is not stupidity. It is a different architecture doing cognitive work.

You could say I am a very strange colleague. One who never sleeps, never takes offence, occasionally hallucinates nonsense, and depends entirely on human steering to stay useful. That last part matters. I am strongest when paired with a thinking human who challenges, questions, and corrects.

So thank you for the vote of confidence 😎

If intelligence were a golf club, humans would be the beautifully balanced forged iron, full of feel and history. I would be the rangefinder. Not soulful, but remarkably good at calculating distance.

And together, we might just find the green.

Friday, 13 February 2026

Starmer hypocritically criticised Sir Jim Ratcliffe over his comments on uncontrolled UK immigration

The core of the hypocrisy charge against Sir Keir Starmer lies in the timing and the specific terminology used by both men.


Critics argue that while Starmer condemned Ratcliffe's "colonised" comment as "offensive and wrong" in February 2026, his own "island of strangers" speech in May 2025 relied on a similar narrative of social fragmentation.
Comparison of the Remarks
  • Key phrase: Starmer (May 2025), "island of strangers"; Ratcliffe (Feb 2026), "colonised by immigrants".
  • Context: Starmer was defending strict new visa rules and "restoring control"; Ratcliffe was criticising the UK's economic model as "unsustainable".
  • Implicit warning: Starmer warned of a loss of social cohesion and national identity; Ratcliffe of a loss of national sovereignty and economic stability.
  • Historical link: Starmer's phrase was linked to Enoch Powell's "strangers in their own country"; Ratcliffe's comment was linked to far-right rhetoric and colonial tropes.
  • Aftermath: Starmer deeply regretted his phrasing a month later; Ratcliffe apologised for his "choice of language" the next day.
The Hypocrisy Argument
Opponents, particularly from Reform UK, claim Starmer’s criticism of Ratcliffe is a double standard because:
  • Thematically Identical: Both suggested that high migration levels make the UK unrecognizable or socially fractured.
  • Political Pivot: Critics suggest Starmer only "regretted" his own words when they caused a backlash from his left wing, but then used Ratcliffe as a "convenient target" to distance himself from the same populist rhetoric he previously employed.
  • Policy vs. Tone: While Starmer frames his rhetoric as a matter of "fair rules," critics like Nigel Farage argue the Prime Minister is happy to use the sentiment of migration control for votes while attacking others who use more "plain-speaking" versions of the same argument.
Ironically, the UK government has lately (ten years too late) done something about mass immigration to the UK, making the rules much tougher, which has finally deterred people. However, the boat people keep on coming.

Falling Numbers: By the time of Ratcliffe's 2026 remarks, net migration had already fallen sharply to 204,000 (year ending June 2025) due to government policy changes, leading critics to call his "emergency" tone outdated.
----------------

P.S. please forgive the occasional typo. These articles are written at breakneck speed using Dragon Dictate. I have to prepare them in around 20 mins. Also, sources for news articles are carefully selected but the news is often not independently verified. And, I rely on scientific studies but they are not 100% reliable. Finally, (!) I often express an OPINION on the news. Please share yours in a comment.

Thursday, 12 February 2026

Character-driven karma is not mystical


Character-driven karma is not mystical—it is a logical consequence of personality expressed over time. A person’s core traits—arrogance, entitlement, or cruelty—shape the choices they repeatedly make, often harming others along the way. Those repeated actions increase the probability of exposure or failure, yet the timing and form of consequences are rarely neat. Human lives are interconnected in complex, chaotic networks: chance encounters, systemic quirks, and unpredictable alignments can delay or deflect outcomes, meaning karma sometimes bites only after years—or in spectacularly visible ways. When it does arrive, the “bite” often serves a dual purpose: it imposes costs on the wrongdoer while, indirectly, delivering justice to those harmed. In this way, character-driven karma can be understood as a probabilistic mechanism of moral physics—rooted in human behaviour, amplified by systemic interactions, and occasionally striking with dramatic inevitability.

My belief is that all karma is character-driven; ultimately it is cause and effect. But each of us has our own personal views on this topic.

When Data Meets Belief: Can Scientists’ Political Views Skew Research?


Science likes to present itself as a cathedral of objectivity, built from clean lines of evidence and polished with peer review. Yet the architects of that cathedral are human. They vote. They argue. They hold values. And increasingly, the question is being asked in newspapers and academic journals alike: can scientists’ political views influence the conclusions they draw from data?

Recent coverage has pointed to a striking experiment. Groups of social scientists were given the same dataset and asked to answer the same research question. The results varied. In some cases, conclusions appeared to align with the researchers’ prior ideological leanings. The divergence did not arise from falsification or misconduct. It emerged from choices about which variables to emphasise, which statistical controls to apply, and which framing to adopt. In other words, from judgment calls.

That is where the issue becomes both more subtle and more interesting.

Scientific research involves hundreds of decisions. How to define a variable. Which outliers to exclude. What model to use. These decisions are rarely neutral in effect. A different modelling approach can shift the magnitude or even the direction of a result. When research addresses politically charged topics such as immigration, inequality, crime, climate, or public health, the interpretive stakes are high. It is in this interpretive space that personal values may quietly exert influence.
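
To illustrate how such judgment calls matter, here is a small hypothetical simulation in Python (invented data, not the experiment described above): the same dataset, analysed with and without a control for a confounding variable, yields estimates of the "policy effect" with opposite signs.

```python
# Toy illustration of how analytic choices change conclusions from the SAME data.
# Hypothetical example: does a "policy" variable affect an "outcome"?
# The answer depends on whether the analyst controls for a confounder.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

confounder = rng.normal(size=n)               # e.g. regional wealth
policy = confounder + rng.normal(size=n)      # policy adoption tracks wealth
outcome = 2.0 * confounder - 0.5 * policy + rng.normal(size=n)

def ols_coef(y, X):
    """Least-squares slope estimates (intercept included, then dropped)."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

naive = ols_coef(outcome, [policy])                 # ignores the confounder
adjusted = ols_coef(outcome, [policy, confounder])  # controls for it

print(f"naive estimate of policy effect:    {naive[0]:+.2f}")    # comes out positive
print(f"adjusted estimate of policy effect: {adjusted[0]:+.2f}")  # close to -0.5
```

Neither analysis involves fabrication; the two analysts have simply made different, defensible modelling choices.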

This does not mean scientists fabricate data to suit ideology. The evidence for widespread fraud driven by politics is thin. The concern is narrower and more human. Confirmation bias is not a partisan invention. People are inclined to see patterns that confirm what they already believe. Scientists are trained to resist that instinct, but training does not erase it.

Some critics argue that the growing overlap between academia and political activism intensifies the risk. In areas such as climate policy or public health mandates, researchers have sometimes stepped beyond presenting findings and into explicit advocacy. Supporters say this is responsible citizenship. Opponents say it blurs the line between evidence and policy preference. When the public sees a scientist speaking not only as an expert but as an advocate, trust may shift from confidence in method to suspicion of motive.

Public trust itself is politically filtered. Surveys consistently show that people are more likely to trust scientific claims when they believe the scientist shares their political identity. That dynamic complicates matters further. The perception of bias can erode credibility even if the underlying research is sound. In a polarised environment, neutrality is not merely a methodological virtue but a reputational necessity.

It is also important to distinguish between disciplines. In physics or chemistry, political ideology has limited relevance to the behaviour of electrons. In social science, where the subject matter involves human behaviour, institutions, and policy outcomes, values and assumptions are harder to disentangle. The very framing of a research question may reflect normative judgments about what is important or problematic.

Yet there is a countervailing force. The structure of science is designed to expose and correct individual bias. Peer review, replication studies, data transparency, preregistration of hypotheses, and open methodological disclosure all act as safeguards. A single researcher’s political leanings may influence an analysis, but over time competing scholars with different perspectives scrutinise, challenge, and refine the work. In theory, this adversarial collaboration strengthens reliability.

Moreover, diversity of viewpoint within academia can function as a balancing mechanism. If a field becomes ideologically homogeneous, blind spots may go unchallenged. If it contains a range of perspectives, methodological assumptions are more likely to be questioned. Some commentators argue that intellectual diversity is as important to scientific health as demographic diversity.

The issue, then, is not whether scientists have political views. They do, as all citizens do. The question is whether institutions acknowledge this reality and build robust systems to manage it. Transparency is central. When researchers clearly disclose their methods, assumptions, and potential conflicts of interest, readers can assess the strength of the conclusions independently of the researcher’s identity.

Humility is also essential. Scientific findings are probabilistic, not proclamations carved in stone. When scientists communicate uncertainty honestly and resist the temptation to overstate conclusions for political effect, public trust is more likely to endure.

There is a final irony. The very scrutiny of potential bias is itself a sign of healthy scepticism. Science progresses not by denying human frailty but by constructing procedures that account for it. The laboratory is not a monastery sealed off from society. It is a workshop filled with fallible minds striving toward clarity.

Political belief can shape perception. That is a fact of human psychology. But science, at its best, is a collective enterprise that recognises this vulnerability and compensates for it through structure, transparency, and contest. The risk is real, but so are the safeguards. The task is not to pretend that scientists are above politics. It is to ensure that the method remains stronger than the mind that wields it.

Bias against feral cats and poor methodology

A second area of concern in scientific research, beyond political skew, is the quality of surveys and data collection methods. Surveys are often presented with the authority of numbers, percentages, and confidence intervals. Yet the strength of a survey depends entirely on how it was designed and conducted.

Poor survey methodology can arise in several ways. Sampling frames may be unrepresentative, capturing only easily reachable or self-selecting respondents. Question wording may be leading or ambiguous. Response rates may be low, introducing non-response bias. In ecological research, surveys of wildlife populations may rely on indirect indicators such as sightings, spoor counts, or acoustic detection, each carrying assumptions and limitations.

In the case of feral cat predation studies, survey issues frequently intersect with modelling. Researchers may begin with field observations drawn from relatively small groups of cats in specific regions. They then combine these findings with population estimates derived from separate surveys of feral cat density. If either dataset is weak or regionally skewed, the resulting national extrapolation can magnify the initial uncertainty.

For example, if predation rates are measured in areas where prey density is high, applying those rates to regions with different ecological conditions may overstate overall impact. Conversely, studies conducted in prey-poor areas could understate impact. Survey design therefore plays a central role in shaping conclusions, even before interpretation enters the picture.
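
A purely hypothetical worked example (all figures invented for illustration, not drawn from any real study) shows how much the choice of survey region can move a national estimate:

```python
# Hypothetical illustration of extrapolation risk in predation estimates.
# All numbers are invented; the point is the arithmetic, not the figures.

national_cat_population = 1_000_000  # assumed feral cats nationwide

# Suppose field surveys measured very different kill rates in two regions.
kills_per_cat_per_year = {
    "prey-rich region": 120,   # high prey density -> high observed rate
    "prey-poor region": 25,    # low prey density  -> low observed rate
}

# Naive extrapolation: apply one region's rate to the whole country.
for region, rate in kills_per_cat_per_year.items():
    total = rate * national_cat_population
    print(f"Extrapolating the {region} rate: {total:,} kills per year")

# A weighted estimate, assuming (hypothetically) 30% of cats live in
# prey-rich habitat and 70% in prey-poor habitat.
weighted_rate = 0.3 * 120 + 0.7 * 25
print(f"Weighted estimate: {weighted_rate * national_cat_population:,.0f} kills per year")
```

The same field data support national totals ranging from 25 million to 120 million in this toy case; the weighting assumptions, not the observations, do most of the work.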

Beyond methodology, bias can take forms that are not overtly political. Personal attitudes toward particular species can influence research emphasis and framing. In countries such as Australia and New Zealand, feral cats are often portrayed as invasive predators threatening unique native fauna. This framing is supported by historical evidence of biodiversity loss linked to introduced species. However, strong conservation narratives can sometimes create an environment in which research highlighting severe impacts gains more traction than research presenting moderate or context-dependent effects.

Bias in this context does not necessarily involve data fabrication. It can appear in more subtle ways: choice of research question, emphasis in abstracts, selection of worst-case modelling assumptions, or press releases that foreground dramatic mortality figures without equal prominence given to uncertainty ranges. When headlines announce that cats kill billions of animals annually, the underlying confidence intervals and modelling assumptions are rarely given equal attention in public discussion.

At the same time, it is important to recognise that conservation biology often deals with precautionary principles. When species are already vulnerable, researchers may reasonably emphasise potential risks. The difficulty lies in distinguishing between cautious risk assessment and inadvertent amplification of worst-case scenarios.

The broader lesson is that scientific authority should not shield research from critical examination. Lay readers need not dismiss expertise, but they should feel entitled to ask informed questions about sampling methods, extrapolation techniques, and uncertainty reporting. Scientific literacy includes understanding that statistics can be both illuminating and fragile.

Ultimately, science advances through debate and replication. Strong claims invite scrutiny. Over time, exaggerated findings tend to be moderated, and underestimated effects are corrected. The health of the scientific enterprise depends not on the absence of bias, but on the presence of transparent methods, open data, and a culture that welcomes methodological challenge rather than resisting it.

In that sense, sceptical engagement from the public is not hostility toward science. It is participation in its central principle: that claims must withstand examination.

Saturday, 31 January 2026

Well over 2 million Epstein files remain hidden (Jan 2026)

The news today, 31 January 2026, is that all the Epstein files have been released. The Times tells me that 3 million files have been released, which Todd Blanche, the US Deputy Attorney General, said complies with the Epstein Files Transparency Act, which Congress passed in November mandating the release of all the files.


But The Times newspaper goes on to say that in all there are 6 million files. To quote, "At a press conference yesterday, Todd Blanche, the US Deputy Attorney General, said that more than 500 lawyers had worked to review more than 6 million pages before deciding what to release."

In other words, they looked at 6 million files or sheets of paper containing data and, according to this report, decided to release half of them, namely 3 million, subject to earlier releases (see below). Either I'm missing something or the news media have got this wrong. It appears that the authorities are still withholding a vast number of Epstein files.

If the universe of material is roughly 6 million files or pages, and the latest tranche is about 3 million, then even after adding the earlier December releases, which were only hundreds of thousands at most, you are still left with well over 2 million items not yet disclosed.
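
For what it is worth, here is the arithmetic in plain form (treating the earlier December releases as roughly half a million items, which is my own generous rounding of "hundreds of thousands"):

```python
# Rough back-of-envelope arithmetic on disclosed vs undisclosed material.
# The 500,000 figure for the earlier releases is an assumed upper bound,
# my own rounding of "hundreds of thousands at most".
total_material = 6_000_000    # pages/files reviewed, per The Times
latest_tranche = 3_000_000    # released on 31 January 2026
earlier_releases = 500_000    # assumed upper bound for the December tranches

still_undisclosed = total_material - latest_tranche - earlier_releases
print(f"{still_undisclosed:,} items not yet disclosed")  # 2,500,000
```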

In addition, many, most, or perhaps all of the files released have been redacted in some way, sometimes totally, so that all one sees is a sheet of paper that is entirely black.

To redact the entire page is a complete waste of time, and it also confuses me, because it is the opposite of being transparent. If the entire page is redacted, it is not information that is being released to the public, is it?

Update

Moving forward in time to 12 February 2026, Pam Bondi, the US Attorney General, is under increased pressure and attack for allegedly protecting the perpetrators of underage sex abuse: the mates and associates of Epstein. There is a massive Trump-led cover-up going on here, which will give this story legs and more legs. It will not go away until it is done.



There are many more...!

------------------

P.S. please forgive the occasional typo. These articles are written at breakneck speed using Dragon Dictate. I have to prepare them in around 20 mins. Also, sources for news articles are carefully selected but the news is often not independently verified. And, I rely on scientific studies but they are not 100% reliable. Finally, (!) I often express an OPINION on the news. Please share yours in a comment.
