Thursday, 12 February 2026

Character-driven karma is not mystical


Character-driven karma is not mystical—it is a logical consequence of personality expressed over time. A person’s core traits—arrogance, entitlement, or cruelty—shape the choices they repeatedly make, often harming others along the way. Those repeated actions increase the probability of exposure or failure, yet the timing and form of consequences are rarely neat. Human lives are interconnected in complex, chaotic networks: chance encounters, systemic quirks, and unpredictable alignments can delay or deflect outcomes, meaning karma sometimes bites only after years—or in spectacularly visible ways. When it does arrive, the “bite” often serves a dual purpose: it imposes costs on the wrongdoer while, indirectly, delivering justice to those harmed. In this way, character-driven karma can be understood as a probabilistic mechanism of moral physics—rooted in human behaviour, amplified by systemic interactions, and occasionally striking with dramatic inevitability.

My belief is that all karma is character-driven. Ultimately, it is cause and effect. But each of us has our own personal views on this topic.

When Data Meets Belief: Can Scientists’ Political Views Skew Research?


Science likes to present itself as a cathedral of objectivity, built from clean lines of evidence and polished with peer review. Yet the architects of that cathedral are human. They vote. They argue. They hold values. And increasingly, the question is being asked in newspapers and academic journals alike: can scientists’ political views influence the conclusions they draw from data?

Recent coverage has pointed to a striking experiment. Groups of social scientists were given the same dataset and asked to answer the same research question. The results varied. In some cases, conclusions appeared to align with the researchers’ prior ideological leanings. The divergence did not arise from falsification or misconduct. It emerged from choices about which variables to emphasise, which statistical controls to apply, and which framing to adopt. In other words, from judgment calls.

That is where the issue becomes both more subtle and more interesting.

Scientific research involves hundreds of decisions. How to define a variable. Which outliers to exclude. What model to use. These decisions are rarely neutral in effect. A different modelling approach can shift the magnitude or even the direction of a result. When research addresses politically charged topics such as immigration, inequality, crime, climate, or public health, the interpretive stakes are high. It is in this interpretive space that personal values may quietly exert influence.
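To make that point concrete, here is a minimal sketch in Python, using entirely invented numbers, of how a single modelling decision, in this case whether or not to adjust for a grouping variable, can reverse the apparent direction of an effect. It illustrates the general mechanism only; it is not a reconstruction of any particular study.

```python
# Toy sketch (invented numbers) of how one modelling choice can flip a result.
# Two hypothetical subgroups, each storing (sample size, mean outcome) per arm.
groups = {
    "group_a": {"treated": (10, 8.0), "untreated": (90, 7.0)},
    "group_b": {"treated": (90, 3.0), "untreated": (10, 2.0)},
}

def pooled_mean(arm):
    """Overall mean outcome for one arm, ignoring the group structure."""
    total_n = sum(groups[g][arm][0] for g in groups)
    weighted = sum(groups[g][arm][0] * groups[g][arm][1] for g in groups)
    return weighted / total_n

# Analyst 1: no adjustment. The treated arm looks worse overall.
unadjusted_effect = pooled_mean("treated") - pooled_mean("untreated")

# Analyst 2: compare within each group, then average the within-group gaps.
within_group_gaps = [groups[g]["treated"][1] - groups[g]["untreated"][1] for g in groups]
adjusted_effect = sum(within_group_gaps) / len(within_group_gaps)

print(f"Unadjusted effect: {unadjusted_effect:+.2f}")  # comes out negative
print(f"Adjusted effect:   {adjusted_effect:+.2f}")    # comes out positive
```

Both analysts use the same data and neither has fabricated anything; the sign of the headline result simply depends on a defensible judgment call about adjustment.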

This does not mean scientists fabricate data to suit ideology. The evidence for widespread fraud driven by politics is thin. The concern is narrower and more human. Confirmation bias is not a partisan invention. People are inclined to see patterns that confirm what they already believe. Scientists are trained to resist that instinct, but training does not erase it.

Some critics argue that the growing overlap between academia and political activism intensifies the risk. In areas such as climate policy or public health mandates, researchers have sometimes stepped beyond presenting findings and into explicit advocacy. Supporters say this is responsible citizenship. Opponents say it blurs the line between evidence and policy preference. When the public sees a scientist speaking not only as an expert but as an advocate, trust may shift from confidence in method to suspicion of motive.

Public trust itself is politically filtered. Surveys consistently show that people are more likely to trust scientific claims when they believe the scientist shares their political identity. That dynamic complicates matters further. The perception of bias can erode credibility even if the underlying research is sound. In a polarised environment, neutrality is not merely a methodological virtue but a reputational necessity.

It is also important to distinguish between disciplines. In physics or chemistry, political ideology has limited relevance to the behaviour of electrons. In social science, where the subject matter involves human behaviour, institutions, and policy outcomes, values and assumptions are harder to disentangle. The very framing of a research question may reflect normative judgments about what is important or problematic.

Yet there is a countervailing force. The structure of science is designed to expose and correct individual bias. Peer review, replication studies, data transparency, preregistration of hypotheses, and open methodological disclosure all act as safeguards. A single researcher’s political leanings may influence an analysis, but over time competing scholars with different perspectives scrutinise, challenge, and refine the work. In theory, this adversarial collaboration strengthens reliability.

Moreover, diversity of viewpoint within academia can function as a balancing mechanism. If a field becomes ideologically homogeneous, blind spots may go unchallenged. If it contains a range of perspectives, methodological assumptions are more likely to be questioned. Some commentators argue that intellectual diversity is as important to scientific health as demographic diversity.

The issue, then, is not whether scientists have political views. They do, as all citizens do. The question is whether institutions acknowledge this reality and build robust systems to manage it. Transparency is central. When researchers clearly disclose their methods, assumptions, and potential conflicts of interest, readers can assess the strength of the conclusions independently of the researcher’s identity.

Humility is also essential. Scientific findings are probabilistic, not proclamations carved in stone. When scientists communicate uncertainty honestly and resist the temptation to overstate conclusions for political effect, public trust is more likely to endure.

There is a final irony. The very scrutiny of potential bias is itself a sign of healthy scepticism. Science progresses not by denying human frailty but by constructing procedures that account for it. The laboratory is not a monastery sealed off from society. It is a workshop filled with fallible minds striving toward clarity.

Political belief can shape perception. That is a fact of human psychology. But science, at its best, is a collective enterprise that recognises this vulnerability and compensates for it through structure, transparency, and contest. The risk is real, but so are the safeguards. The task is not to pretend that scientists are above politics. It is to ensure that the method remains stronger than the mind that wields it.

Bias against feral cats and poor methodology

A second area of concern in scientific research, beyond political skew, is the quality of surveys and data collection methods. Surveys are often presented with the authority of numbers, percentages, and confidence intervals. Yet the strength of a survey depends entirely on how it was designed and conducted.

Poor survey methodology can arise in several ways. Sampling frames may be unrepresentative, capturing only easily reachable or self-selecting respondents. Question wording may be leading or ambiguous. Response rates may be low, introducing non-response bias. In ecological research, surveys of wildlife populations may rely on indirect indicators such as sightings, spoor counts, or acoustic detection, each carrying assumptions and limitations.

In the case of feral cat predation studies, survey issues frequently intersect with modelling. Researchers may begin with field observations drawn from relatively small groups of cats in specific regions. They then combine these findings with population estimates derived from separate surveys of feral cat density. If either dataset is weak or regionally skewed, the resulting national extrapolation can magnify the initial uncertainty.

For example, if predation rates are measured in areas where prey density is high, applying those rates to regions with different ecological conditions may overstate overall impact. Conversely, studies conducted in prey-poor areas could understate impact. Survey design therefore plays a central role in shaping conclusions, even before interpretation enters the picture.
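A back-of-the-envelope sketch shows how much the choice of study region can move a national extrapolation. The cat population figure, the regional kill rates and the region names below are all invented for illustration and are not drawn from any real survey.

```python
# Hypothetical illustration of regional extrapolation, not real survey data.
national_cat_population = 5_000_000  # invented national feral cat count

# Invented annual kills per cat, as might be measured in different study regions.
regional_kill_rates = {
    "prey_rich_region": 300,
    "average_region": 120,
    "prey_poor_region": 40,
}

for region, kills_per_cat in regional_kill_rates.items():
    national_estimate = kills_per_cat * national_cat_population
    print(f"Extrapolating from {region}: {national_estimate:,} kills per year")

# The same population estimate yields anything from 200 million to 1.5 billion
# kills per year, purely depending on which region supplied the per-cat rate.
```

The spread in that toy output comes entirely from where the fieldwork happened, before any question of interpretation or bias even arises.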

Beyond methodology, bias can take forms that are not overtly political. Personal attitudes toward particular species can influence research emphasis and framing. In countries such as Australia and New Zealand, feral cats are often portrayed as invasive predators threatening unique native fauna. This framing is supported by historical evidence of biodiversity loss linked to introduced species. However, strong conservation narratives can sometimes create an environment in which research highlighting severe impacts gains more traction than research presenting moderate or context-dependent effects.

Bias in this context does not necessarily involve data fabrication. It can appear in more subtle ways: choice of research question, emphasis in abstracts, selection of worst-case modelling assumptions, or press releases that foreground dramatic mortality figures without equal prominence given to uncertainty ranges. When headlines announce that cats kill billions of animals annually, the underlying confidence intervals and modelling assumptions are rarely given equal attention in public discussion.

At the same time, it is important to recognise that conservation biology often deals with precautionary principles. When species are already vulnerable, researchers may reasonably emphasise potential risks. The difficulty lies in distinguishing between cautious risk assessment and inadvertent amplification of worst-case scenarios.

The broader lesson is that scientific authority should not shield research from critical examination. Lay readers need not dismiss expertise, but they should feel entitled to ask informed questions about sampling methods, extrapolation techniques, and uncertainty reporting. Scientific literacy includes understanding that statistics can be both illuminating and fragile.

Ultimately, science advances through debate and replication. Strong claims invite scrutiny. Over time, exaggerated findings tend to be moderated, and underestimated effects are corrected. The health of the scientific enterprise depends not on the absence of bias, but on the presence of transparent methods, open data, and a culture that welcomes methodological challenge rather than resisting it.

In that sense, sceptical engagement from the public is not hostility toward science. It is participation in its central principle: that claims must withstand examination.

Saturday, 31 January 2026

Well over 2 million Epstein files remain hidden (Jan 2026)

The news today, 31 January 2026, is that all the Epstein files have been released. The Times tells me that 3 million files have been released, which Todd Blanche, the US Deputy Attorney General, said complies with the Epstein Files Transparency Act, which Congress passed in November mandating the release of all the files.


But The Times newspaper goes on to say that in all there are 6 million files. To quote, "At a press conference yesterday, Todd Blanche, the US Deputy Attorney General, said that more than 500 lawyers had worked to review more than 6 million pages before deciding what to release."

In other words, they looked at 6 million files, or sheets of paper containing data, and decided, according to this report, to release half of them, namely 3 million, subject to earlier releases (see below). Either I'm missing something or the news media have got this wrong. It appears that the authorities are still withholding a vast number of Epstein files.

If the universe of material is roughly 6 million files or pages, and the latest tranche is about 3 million, then even after adding the earlier December releases, which amounted to hundreds of thousands at most, you are still left with well over 2 million items not yet disclosed.
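For what it is worth, the arithmetic is simple. Treating the December tranches generously as half a million pages (the coverage suggests they were smaller), the rough sum looks like this:

```python
# Back-of-the-envelope check of the approximate figures quoted above.
total_reviewed = 6_000_000    # pages reviewed, per The Times
latest_release = 3_000_000    # pages in the January 2026 tranche
earlier_releases = 500_000    # generous upper bound for the earlier tranches

still_withheld = total_reviewed - latest_release - earlier_releases
print(f"Roughly {still_withheld:,} pages apparently remain undisclosed")
# => Roughly 2,500,000 pages apparently remain undisclosed
```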

In addition, many if not most of the files released have been redacted in some way, sometimes totally, so that all one sees is a sheet of paper that is entirely black.

Redacting the entire page is a complete waste of time and also confuses me, because it is the opposite of transparency. If the entire page is redacted, it is not information that's being released to the public, is it?

Update

To move forward in time to 12 February 2026: Pam Bondi, the US Attorney General, is under increasing pressure and attack for allegedly protecting the perpetrators of underage sex abuse, the mates and associates of Epstein. There is a massive Trump-led cover-up going on here, which will give this story legs and more legs. It will not go away until it is done.



There are many more...!

------------------

P.S. please forgive the occasional typo. These articles are written at breakneck speed using Dragon Dictate. I have to prepare them in around 20 mins. Also, sources for news articles are carefully selected but the news is often not independently verified. And, I rely on scientific studies but they are not 100% reliable. Finally, (!) I often express an OPINION on the news. Please share yours in a comment.

Why Claims That ChatGPT “Relies on One News Source” Miss the Point

A recent headline in The Times warns of “fears of bias” on the grounds that ChatGPT supposedly relies on a single news outlet, often cited as The Guardian. While eye-catching, this claim misunderstands both how large language models work and what the underlying research actually shows.

ChatGPT does not “rely” on any one newspaper in the way a human reader might rely on a favourite daily. It does not read the news each morning, subscribe to particular outlets, or assign internal weightings such as “58 per cent Guardian, 12 per cent BBC”. There is no editorial desk inside the model. Instead, ChatGPT is trained on a vast mixture of licensed data, data created by human trainers, and publicly available text from many thousands of sources, including books, academic writing, news articles, and general reference material. The model does not have access to a list of its training sources, nor can it identify or favour specific publishers by design.

So where does the “Guardian dominance” claim come from? It originates from studies that analyse citations appearing in generated answers to a limited set of prompts. In other words, researchers ask the model questions, observe which publications are named in responses, and then infer bias from the frequency of those mentions. That is a very different thing from uncovering a built-in dependency.

Several factors explain why certain outlets appear more often in such studies. First, some publishers make their content more accessible for indexing and quotation, while others sit behind hard paywalls or restrict automated access. If a newspaper tightly limits how its material can be referenced or surfaced, it will naturally appear less often in AI outputs, regardless of its journalistic quality. This is an access issue, not an ideological one.

Second, when ChatGPT is asked to cite examples, it tends to reference outlets that are widely syndicated, heavily quoted elsewhere, and commonly used as secondary references across the web. The Guardian, like the BBC or Reuters, is frequently cited by other publications, blogs, and academic commentary. That secondary visibility increases the likelihood of it being named, even when the underlying information is widely shared.

Third, these studies typically involve small samples of questions. Changing the phrasing, topic, or timeframe can produce very different citation patterns. Extrapolating sweeping claims about “bias” from such narrow slices risks overstating the evidence.
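To see how such studies produce their headline percentages, here is a toy sketch. The answers, outlets and counts below are all hypothetical; the point is only that tallying citations across a small batch of prompts yields a league table that says more about the prompts than about any built-in weighting inside the model.

```python
# Hypothetical illustration of citation counting, not data from any real study.
from collections import Counter

# Imagined lists of outlets named in ten generated answers.
answers = [
    ["guardian", "bbc"], ["guardian"], ["reuters"], ["guardian", "reuters"],
    ["bbc"], ["guardian"], ["bbc", "guardian"], ["reuters"], ["guardian"], ["bbc"],
]

counts = Counter(outlet for citations in answers for outlet in citations)
total = sum(counts.values())

for outlet, n in counts.most_common():
    print(f"{outlet}: {n}/{total} citations ({100 * n / total:.0f}%)")
# One outlet tops this toy table, but the ranking reflects ten arbitrary
# prompts, not a weighting anyone built into the system.
```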

Crucially, ChatGPT does not browse the news unless explicitly instructed to do so using live tools, and even then it does not default to a single outlet. When summarising current events, it aims to synthesise information from multiple reputable sources to provide balance and context.

The real conversation worth having is not about imagined loyalty to one newspaper, but about transparency, access, and how news organisations choose to engage with AI systems. Framing this as ideological bias oversimplifies a technical and structural issue.

In short, the claim that ChatGPT “relies on one news source” mistakes surface-level citation patterns for underlying dependence. It makes for a provocative headline, but it does not accurately describe how the system works, nor does it demonstrate the bias it implies.

---------------------

P.S. please forgive the occasional typo. These articles are written at breakneck speed using Dragon Dictate. I have to prepare them in around 20 mins. Also, sources for news articles are carefully selected but the news is often not independently verified. And, I rely on scientific studies but they are not 100% reliable. Finally, (!) I often express an OPINION on the news. Please share yours in a comment.

POINTLESS UK EV grant of £3,750


This UK Labour government is as pointless and misguided as the £3,750 EV grant it has introduced for brand-new electric vehicles.

It has to be a new vehicle. I'll tell you why it's a pointless grant and quite hopelessly misconceived. Take an EV that apparently holds its value quite well: the Ford Puma GEN-E. Brand new, it costs £29,995 (as of today).

After the first year it'll be worth about £7,000 less than that, at around £23,000. So the purchaser loses about £7,000 after 12 months, a point at which the car is almost new. It's as good as new.

So if the buyer buys a nearly new, i.e. one-year-old, Ford Puma GEN-E, they will pay £23,000 for it and thereby save themselves £7,000. But if they buy a new one, they will save only £3,750 under the UK government grant.

It's pretty obvious that the wise choice is to buy a one-year-old version of this car, because you save about twice as much money as you would if you bought a new one.
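The arithmetic, using the approximate figures quoted above for this car, is straightforward:

```python
# Simple comparison using the approximate figures quoted in this post.
new_price = 29_995           # brand-new list price, GBP
one_year_old_price = 23_000  # rough value after 12 months, GBP
government_grant = 3_750     # UK EV grant on new purchases, GBP

saving_nearly_new = new_price - one_year_old_price
print(f"Saving from buying nearly new: £{saving_nearly_new:,}")
print(f"Saving from the grant on a new car: £{government_grant:,}")
print(f"Nearly new saves roughly {saving_nearly_new / government_grant:.1f} times as much")
# => about £6,995 versus £3,750, i.e. roughly 1.9 times as much
```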

Other cars will depreciate faster. Many electric vehicles actually depreciate very rapidly, more so than the car mentioned in this article, and therefore the losses will be greater. As soon as the car is driven out of the showroom, the buyer loses around £10,000 on many high-end EVs. They're paying £10,000 for the pleasure of smelling a new car!

This government's EV grant scheme is hopeless. It is hopelessly misconceived and is just a PR exercise. Anybody with a bit of common sense will not go down the route of seeking that grant.

In practice, the smart money is almost always on a nearly new car. You might like the dealer perks, the brand-new experience and the maximum warranty, but nowadays many cars have very long warranties, up to seven years, so losing one year of cover is neither here nor there.

To be fair, the grant is not absolutely useless. It does reduce the entry price for new buyers, and some people really like to be new-car buyers. But in real cash terms, its benefit is offset by the rapid drop in value of all new cars.

--------------------

P.S. please forgive the occasional typo. These articles are written at breakneck speed using Dragon Dictate. I have to prepare them in around 20 mins. Also, sources for news articles are carefully selected but the news is often not independently verified. And, I rely on scientific studies but they are not 100% reliable. Finally, (!) I often express an OPINION on the news. Please share yours in a comment.
