
An inferior U.S. health care system made the COVID-19 pandemic worse

By Kent R. Kroeger (Source:; February 19, 2021)

A computer generated representation of COVID-19 virions (SARS-CoV-2) under electron microscope (Image by Felipe Esquivel Reed; Used under the CCA-Share Alike 4.0 International license.)

The U.S. may have experienced 7.7 million additional COVID-19 cases and 155 thousand additional COVID-19 deaths due to its subpar health care system.

This finding is based on a cross-national statistical analysis of 20 West European and West European-heritage countries using aggregate, country-level data: COVID-19 cases and deaths per 1 million people from Johns Hopkins University, the Policy Stringency Index from Oxford University’s COVID-19 Government Response Tracker, and the Healthcare Access and Quality Index from the 2016 Global Burden of Disease Study. The analysis covers the period from January 1, 2020 to February 5, 2021.

Figure 1 (below) shows the bivariate relationship between the number of COVID-19 cases (per 1 million people) and the quality of a country’s health care system, as measured by the Healthcare Access and Quality Index (HAQ Index) compiled during the 2016 Global Burden of Disease Study.

Countries where health care access is universal and of high quality have performed much better on COVID-19 cases per capita. New Zealand, Australia, Iceland, Finland and Norway are positive exemplars in this regard. Israel, Portugal, the U.S., and the U.K., in comparison, are not.

Figure 1: Health care access/quality (HAQ) and its relationship to COVID-19 cases per 1 million people


More interestingly, if over the study period we control for a country’s average level of COVID-19 policy actions (as measured by Oxford University’s COVID-19 Government Response Tracker (OxCGRT)) and whether or not the country is an island, the quality of a country’s health care system remains a significant predictor.

As seen in this simple linear regression model, three variables — the HAQ Index, COVID-19 Policy Stringency, and whether or not the country is an island — account for about 60 percent of the variance in COVID-19 cases per capita for these 20 countries.


Using this model, we can estimate the number of COVID-19 cases (per 1 million people) the U.S. would have experienced if its health care system was as good as the countries rated as having the best health care systems in the world (Iceland and Norway — HAQ Index = 97).

(2,946.23 fewer cases per 1 million people per HAQ point × 8 points) × 330 ≈ 7,778,000 additional COVID-19 cases

[Note: U.S. has approximately 330 million people and its HAQ Index = 89]

Additionally, as there is a strong relationship between the number of COVID-19 cases per capita and the number of COVID-19 deaths per capita (roughly 0.02 deaths per case; see Appendix), we can estimate that the U.S. has experienced 155,560 additional deaths as a result of the inadequacies of its health care system.
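The arithmetic behind these estimates can be made explicit. The following sketch (in Python, purely for illustration) uses the coefficient, index values, and population figure quoted above, along with the roughly 0.02 deaths-per-case ratio implied by the Appendix model:

```python
# Counterfactual: cases and deaths the model attributes to the gap between
# the U.S. HAQ Index (89) and the best-rated health systems (97).

HAQ_COEF = -2946.23      # model estimate: change in cases per 1M people per HAQ point
US_HAQ, BEST_HAQ = 89, 97
US_POP_MILLIONS = 330    # approximate U.S. population, in millions
DEATHS_PER_CASE = 0.02   # approximate deaths-per-case ratio (see Appendix)

haq_gap = BEST_HAQ - US_HAQ                    # 8 index points
excess_per_million = -HAQ_COEF * haq_gap       # cases per 1M attributable to the gap
excess_cases = excess_per_million * US_POP_MILLIONS
excess_deaths = excess_cases * DEATHS_PER_CASE

print(f"{excess_cases:,.0f} additional cases")    # 7,778,047 additional cases
print(f"{excess_deaths:,.0f} additional deaths")  # 155,561 additional deaths
```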

The U.S. does not have the best health care system in the world

Remarkably little of the discussion within the national news media has been about the systemic problems within the U.S. health care system and how those problems contributed to the tragic COVID-19 numbers witnessed by this country during the pandemic. While most of the media attention has focused on political failures associated with the COVID-19 pandemic, most of it directed at the Trump administration, the hard evidence continues to suggest systemic factors, such as racial disparities in socioeconomics and health, are driving U.S. COVID-19 case and death rates above those of other developed countries.

“Socioeconomically and racially segregated neighborhoods are more vulnerable and are more likely to be disproportionately impacted by the adverse effects of COVID-19,” conclude health analysts Ahmad Khanijahani and Larisa Tomassoni. As for why this is the case, Khanijahani and Tomassoni offer this explanation:

“Black and low-SES individuals in the US are more likely to be employed as essential workers in occupations such as food distribution, truckers, and janitors. Most of these jobs cannot be fulfilled remotely and usually do not offer adequate sick leaves. Additionally, individuals of low-SES and Black minority are disproportionately impacted by homelessness or reside in housing units with limited space that makes the practice of isolating infected family members challenging or impossible. Moreover, limited or no child/elderly care and higher uninsurance rates impose an additional financial burden on low-SES families (emphasis mine) making it challenging to stop working.”

The racial and ethnic disparities in COVID-19 deaths are shockingly apparent in Figure 2, a graphic produced by the Centers for Disease Control and Prevention (CDC).

Figure 2: Racial/ethnic disparities in COVID-19 deaths in the U.S. (Source: CDC)


The gray bars in Figure 2 show how non-Hispanic Whites across all age categories have experienced fewer deaths than expected relative to their prevalence in the total U.S. population. In stark contrast, across all age groups, Hispanic and non-Hispanic Blacks account for a significantly higher percentage of COVID-19 deaths than expected based on their population numbers.

Figure 2 is what a broken health care system interacting with systemic racial and ethnic inequalities looks like in a chart.

Final thoughts

Citing the negative role of the U.S. health care system on COVID-19 outcomes is not an indictment of U.S. health care workers. To the contrary, because Americans live in a country where health care is significantly rationed by market forces — e.g., a relatively high rate of uninsured, people delaying preventative care, diagnoses and treatments due to financial concerns, etc. — our health care workers are forced to work harder as a high number of COVID-19 patients enter the health care system only after their symptoms have already become severe.

The awful impact of the COVID-19 pandemic on Americans is less a political failure than it is a systemic failure —though we cannot dismiss the ineptitude of politicians like New York Governor Andrew Cuomo who, presumably at the behest of his health policy experts, authorized sending critically ill seniors from hospitals to nursing homes in order to free up hospital beds. New York’s elderly paid a steep price so Governor Cuomo could learn that nursing homes are not hospitals.

More broadly, the COVID-19 pandemic exposed a broken and inadequate U.S. health care system that is better designed to protect high profit margins for insurance and pharmaceutical companies and the billing rates of physicians than it is to ensure the health of the American people.

Sadly, with the physician, health insurance and pharmaceutical lobbyists firmly entrenched in the policymaking machinery of the Biden administration, don’t expect the types of health care system reforms needed to fix our systemic problems any time soon.

  • K.R.K.

Send comments to:

Methodological Note:

The decision to analyze just 20 West European and West European-heritage countries (i.e., U.S., Israel, Canada, Australia, and New Zealand) was prompted by a desire to control for two factors that are significantly related to country-level COVID-19 outcomes: (1) cultural norms, and (2) economic development.

Controlling for cultural norms, particularly the differences between countries with individualistic cultures (i.e., Western European nations) and those with collectivist cultures (i.e., East Asian nations), facilitated a clearer look at the impact of health care systems on COVID-19 outcomes.

As for the exclusion of lesser-developed countries from this analysis, I simply don’t trust the accuracy or completeness of their COVID-19 data.

APPENDIX — A Linear Model of COVID-19 Deaths (Per 1 Million People)


Were proactive COVID-19 policies the key to success?

By Kent R. Kroeger (Source:; February 11, 2021)

There is no weenier way of copping out in data journalism (and social science more broadly) than posing a question in an article’s headline.

This intellectual timidity probably stems from the fact that most peer-reviewed, published social science research findings are irreproducible. In other words, social science research findings are more likely a function of bias and methodology than a function of reality itself.

As my father, a mechanical engineer, would often say: “Social science is not science.”

The consequence is that social science findings are too often artifacts of their methods and temporal context — so much so that a field like psychology has become a graveyard for old, discredited theories: Physiognomy (i.e., assessing personality traits through facial characteristics), graphology (i.e., assessing personality traits through handwriting), primal therapy (i.e., psychotherapy method where patients re-experience childhood pains), and neuro-linguistic programming (i.e., reprogramming behavioral patterns through language) are just a few embarrassing examples.

Indeed, established psychological theories such as cognitive dissonance have proven to be such over-simplifications of behavioral reality that their practical and academic utility is debatable.

And what does this have to do with COVID-19? Not much. I’m just venting.

Well, not exactly.

The COVID-19 pandemic has unleashed a torrent of speculation and research about what COVID-19 policies work and don’t work.

For example, how effective are masks and mandatory mask policies?

Masks work, conclude most studies.

Another cross-national study, however, found that it is cultural norms that drive the effectiveness of mandatory mask policies.

And there are credible scientific studies that show the effectiveness of masks can be seriously compromised without other factors in place (namely, social distancing).

In part, the variation in findings on mask-wearing is a product of a healthy application of the scientific method. No single study can address every aspect of mask effectiveness.

Research is like a gestalt painting where a single study represents but a tiny part of the total picture. One must step back from the specific research findings of one study in order to understand the essence of all the research together.

In other words, masks work, but with some important caveats.

Some countries have done a better job than others at containing COVID-19

Among the largest, most developed economies, it is increasingly apparent which countries have been most effective at minimizing the impact of COVID-19.

According to the cumulative tally reported at, these 10 developed countries have the lowest COVID-19 deaths rates (per 1 million people) as of February 10, 2021:

Taiwan — 0 (deaths per 1 million)
China — 3
Singapore — 5
New Zealand — 5
Iceland — 6
South Korea — 29
Australia — 36
Japan — 52
UAE — 99
Norway — 111

On the other end of the scale, these 10 developed countries have the highest COVID-19 deaths rates (per 1 million people) as of February 10, 2021:

Belgium — 1,880 (deaths per 1 million)
UK — 1,712
Italy — 1,522
U.S. — 1,467
Portugal — 1,431
Spain — 1,350
Mexico — 1,335
Sweden — 1,210
France — 1,196
Switzerland — 1,139

If there are complaints about the validity of the data from China or Taiwan, I am all ears. However, regardless of their inclusion, an informal review of the other advanced countries suggests some patterns in their COVID-19 outcomes over time.

First, in fighting COVID-19, it helps to be on an island (Taiwan, Singapore, New Zealand, Iceland, Australia, and Japan). And for all practical purposes, I consider South Korea to have near-island status as few people enter that country by way of a land border.

Second, it appears European romance language countries such as Italy, Portugal, Spain, France (tourism perhaps?) and countries with subpar health care systems (such as the U.S. and Switzerland which rely disproportionately on insurance-based health care access) have not fared well in fighting COVID-19.

On an anecdotal level, at least, I’ve explained most of the high- and low-performing countries with respect to COVID-19 without even referencing their COVID-19 policies.

So what impact have COVID-19 policies had on containing this pandemic?

Though a definitive answer at the level of specific policies is lacking, in the aggregate there are strong indications that changes in national COVID-19 policies have had discernible relationships with weekly variation in new COVID-19 cases. A small number of countries have been proactive in their COVID-19 policies, and the result has been relatively few COVID-19 deaths per capita. Another small set of countries have been reactive in their COVID-19 policies, and their COVID-19 deaths per capita have been relatively higher in comparison to the proactive countries. Most countries fall somewhere in the middle, as they have been both proactive and reactive.

Before we look at the data, here is a conceptual perspective.

Figure 1 (below) visualizes a theoretical framework for how national policies may relate to the spread of COVID-19. The chart has two axes: the vertical axis represents the correlation between changes in COVID-19 policies and changes in weekly new cases of COVID-19, while the horizontal axis represents the time lag at which that correlation is measured.

In this construct, consider the weekly number of new COVID-19 cases to be our outcome variable (Y) and the stringency of national COVID-19 policies to be our input variable (X).

The intersection of the two axes represents the contemporaneous relationship between COVID-19 policies (X) and new COVID-19 cases (Y). Moving left of center on the horizontal axis represents the relationship between prior values of COVID-19 policy and future numbers of new COVID-19 cases (i.e., X causes Y). Moving right of center represents the relationship between prior numbers of new COVID-19 cases and future values of COVID-19 policy (Y causes X).
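To make the framework concrete, here is a minimal Python sketch (with synthetic series, not the actual OxCGRT or Johns Hopkins data) of the cross-correlation at a lag k: k > 0 correlates past values of X with later values of Y (X leading Y), and k < 0 the reverse:

```python
import numpy as np

def cross_corr(x, y, k):
    """Correlation of x[t] with y[t+k]; k > 0 means x leads y by k periods."""
    if k > 0:
        return float(np.corrcoef(x[:-k], y[k:])[0, 1])
    if k < 0:
        return float(np.corrcoef(x[-k:], y[:k])[0, 1])
    return float(np.corrcoef(x, y)[0, 1])

rng = np.random.default_rng(0)
x = rng.normal(size=200)                        # stand-in for policy changes
y = np.roll(x, 3) + 0.1 * rng.normal(size=200)  # cases echo policy 3 periods later

# The lag-3 correlation dwarfs the contemporaneous one: X leads Y here
print(cross_corr(x, y, 3) > abs(cross_corr(x, y, 0)))  # True
```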

Figure 1: A Theoretical Framework for Understanding COVID-19 Policies

Figure 2 shows the practical implication of Figure 1: COVID-19 policies in some countries will generally follow the red line over time (reactive), while others the green line (proactive), and still others — probably the majority of countries — will follow the black line (a combination of reactive and proactive policy changes).

Figure 2: Proactive Policies versus Reactive Policies

In fact, these patterns did emerge across the 30 countries I analyzed when I plotted the cross correlations over time between changes in their COVID-19 policies and changes in the weekly number of new COVID-19 cases.

Country-level patterns in COVID-19 policy effectiveness

As demonstrably important as COVID-19 policies such as mask mandates or business lockdowns are to containing COVID-19, my curiosity is with an aggregate measure of those policies, as any single policy will not be enough to address something as pervasive as COVID-19.

As a result, I used country-level COVID-19 policy data from the Coronavirus Government Response Tracker (OxCGRT), compiled by researchers at the Blavatnik School of Government at the University of Oxford. OxCGRT aggregates 17 policy measures, ranging from containment and closure policies (such as school closures and restrictions on movement) to economic policies and health system policies (such as testing regimes), into one summary measure: the COVID-19 Policy Stringency Index (PSI). Details on how OxCGRT collects and summarizes their policy data can be found in a working paper.

The outcome measure used here is the number of new COVID-19 cases reported by Johns Hopkins University every week from January 20, 2020 to February 5, 2021 for each of the following countries: Australia, Austria, Belgium, Brazil, Canada, Denmark, Finland, France, Germany, Greece, Iceland, India, Indonesia, Ireland, Israel, Italy, Japan, Mexico, Netherlands, New Zealand, Norway, Portugal, Russia, South Africa, South Korea, Spain, Sweden, Switzerland, UK, and US.

It should be noted that the raw data from OxCGRT and Johns Hopkins were at the daily level but were aggregated to the weekly level for data smoothing purposes.
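That aggregation step is a simple rollup; a sketch with made-up daily counts (not the actual series):

```python
import numpy as np

daily = np.arange(1, 15)                        # 14 days of hypothetical new-case counts
full_weeks = daily[: len(daily) // 7 * 7]       # drop any partial trailing week
weekly = full_weeks.reshape(-1, 7).sum(axis=1)  # one total per 7-day block
print(weekly)                                   # [28 77]
```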

In total, there were 50 data points for each of the 30 countries, and to address the relevance of the theoretical framework in Figure 2, a bivariate Granger causality test was employed for each country (an example of the R code used to generate this analysis is in the appendix).
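The appendix shows the R version for Iceland. Conceptually, a bivariate Granger test compares a regression of (differenced) y on its own lags against one that also includes lags of x, with an F-test on the improvement. Below is a self-contained Python sketch of that idea, run on synthetic data (not the actual country series):

```python
import numpy as np

def granger_f(y, x, order):
    """F-statistic: do lags of x help predict y beyond y's own lags?"""
    n = len(y)
    rows = n - order
    Y = y[order:]
    # Column j of each lag matrix holds the series lagged by j+1 periods
    ylags = np.column_stack([y[order - j - 1 : n - j - 1] for j in range(order)])
    xlags = np.column_stack([x[order - j - 1 : n - j - 1] for j in range(order)])
    ones = np.ones((rows, 1))
    restricted = np.hstack([ones, ylags])   # y's own lags only
    full = np.hstack([ones, ylags, xlags])  # plus lags of x

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return float(resid @ resid)

    rss_r, rss_f = rss(restricted), rss(full)
    df_num = order                    # number of excluded x lags
    df_den = rows - full.shape[1]     # residual df of the full model
    return (rss_r - rss_f) / df_num / (rss_f / df_den)

rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(2, 300):               # y responds to x two periods earlier
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 2] + 0.2 * rng.normal()

# Large F in the x-causes-y direction, small F in reverse
print(granger_f(y, x, order=4) > granger_f(x, y, order=4))  # True
```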

The Results

Of the 30 countries in this analysis, bivariate Granger causality tests found only three in which prior increases in the stringency of COVID-19 policies (PSI) were significantly associated with decreases in the weekly numbers of new COVID-19 cases (Iceland, New Zealand, and Norway). Figures 3 through 5 show the cross correlation functions (CCF) for those countries in which their COVID-19 policies shaped events, instead of merely reacting to them. Hence, I label their COVID-19 policies as proactive.

In another six countries it was found that increases in the stringency of COVID-19 policies tended to follow increases in the weekly numbers of new COVID-19 cases (see Figures 6 through 9). In other words, their COVID-19 policies tended to follow events instead of shaping them. The policies in these countries are therefore labeled as reactive.

For the remaining 21 countries, the Granger causality tests revealed no significant relationships in either causal direction, though their CCF patterns tended to follow the S-curve shape posited in Figures 1 and 2. The lack of statistical significance in those cases could be a function of the limited sample sizes, which were a product of aggregating the data to the weekly level.


Figure 3: Cross Correlation Function — Iceland

Figure 4: Cross Correlation Function — New Zealand

Figure 5: Cross Correlation Function — Norway


Figure 6: Cross Correlation Function — Germany

Figure 7: Cross Correlation Function — Israel

Figure 8: Cross Correlation Function — Switzerland

Figure 9: Cross Correlation Function — Austria

Proactive countries have had better COVID-19 outcomes

Figures 10 and 11 reveal that COVID-19 outcomes in the proactive countries were significantly better than in the other countries. Countries that kept ahead of the health crisis did a better job of controlling it.

Figure 10: Confirmed COVID-19 Cases (per 1 million) by Policy Group

Figure 11: Confirmed COVID-19 Deaths (per 1 million) by Policy Group

Not coincidentally, many of the qualitative and quantitative analyses of worldwide COVID-19 policies have found Iceland, New Zealand, and Norway among the highest performers according to their metrics (examples are here, here and here).

Final Thoughts

In no way does my analysis suggest that COVID-19 policies in only three countries (Iceland, New Zealand, and Norway) were effective and the policies in the remaining countries were mere reactions to an ongoing health crisis they could not control.

Undoubtedly, there are well-documented examples of policy impotence across this worldwide pandemic. The lack of consistent mask mandates in U.S. states like Arizona, North Dakota and South Dakota may help explain why those states have among the highest COVID-19 infection rates in the country. Sweden’s initial decision to keep its economy open during the early stages of the pandemic most likely explains its relatively high case and fatality rates relative to its Scandinavian neighbors.

But, in fairness, not every policy (or lack thereof) is going to work against a pathogen that has proven to be so pernicious. At the same time, as this pandemic winds down with the rollout of vaccinations, we are now seeing evidence in retrospect that a relatively small number of countries did do a better job than others in managing this pandemic. For the majority of countries, however, their policy leaders may have had frighteningly little impact on the ultimate course of this virus. Their citizens would have been better off moving to an island.

  • K.R.K.

Send comments to:


R code used to generate the bivariate Granger causality test for Iceland:

# Weekly new COVID-19 cases (y) and OxCGRT Policy Stringency Index (x) for Iceland
library(lmtest)  # provides grangertest()

y <- c(49, 106, 317, 490, 454, 272, 71, 30, 8, 3, 1, 2, 2, 0, 2, 7, 5, 10, 3, 5, 5, 50, 62, 44, 59, 42, 36, 26, 145, 294, 271, 588, 538, 396, 471, 198, 123, 83, 102, 105, 76, 69, 62, 71, 126, 76, 25, 21)
x <- c(16.67, 16.67, 48.02, 53.70, 53.70, 53.70, 53.70, 53.70, 53.70, 50.53, 50.00, 50.00, 41.27, 39.81, 39.81, 39.81, 39.81, 39.81, 39.81, 39.81, 39.81, 41.66, 46.30, 46.30, 46.30, 46.30, 46.30, 39.15, 37.96, 37.96, 37.96, 40.34, 39.15, 42.99, 46.43, 52.78, 52.78, 52.78, 52.78, 52.78, 52.78, 52.78, 52.78, 52.78, 50.40, 46.82, 44.44, 44.44)

# Analysis settings: Box-Cox lambda = 1 (no transformation), first differences
# for both series, no seasonal differencing, Granger test order = 4
lambda_x <- 1
lambda_y <- 1
diff_x <- 1
diff_y <- 1
order <- 4

ox <- x  # keep the raw series for the raw-data plots
oy <- y

# Box-Cox transform each series (lambda = 0 would mean a log transform)
if (lambda_x == 0) x <- log(x) else x <- (x^lambda_x - 1) / lambda_x
if (lambda_y == 0) y <- log(y) else y <- (y^lambda_y - 1) / lambda_y

# First-difference both series to remove trend
if (diff_x > 0) x <- diff(x, lag = 1, differences = diff_x)
if (diff_y > 0) y <- diff(y, lag = 1, differences = diff_y)

# Bivariate Granger causality tests in both directions
(gyx <- grangertest(y ~ x, order = order))  # does x Granger-cause y?
(gxy <- grangertest(x ~ y, order = order))  # does y Granger-cause x?

# Cross correlation functions, raw and transformed/differenced
op <- par(mfrow = c(2, 1))
ccf(ox, oy, main = 'Cross Correlation Function (raw data)', ylab = 'CCF', xlab = 'Lag (k)')
ccf(x, y, main = 'Cross Correlation Function (transformed and differenced)', ylab = 'CCF', xlab = 'Lag (k)')

# Autocorrelation functions for each series
op <- par(mfrow = c(2, 1))
acf(ox, lag.max = round(length(ox) / 2), main = 'ACF of x (raw)')
acf(x, lag.max = round(length(x) / 2), main = 'ACF of x (transformed and differenced)')
op <- par(mfrow = c(2, 1))
acf(oy, lag.max = round(length(oy) / 2), main = 'ACF of y (raw)')
acf(y, lag.max = round(length(y) / 2), main = 'ACF of y (transformed and differenced)')


Why is Hollywood failing with its re-branded science fiction and superhero franchises?

Photo by Tomas Castelazo — Own work, CC BY-SA 3.0

By Kent R. Kroeger (Source:; February 4, 2021)

In April 1985, the Coca-Cola Company, the largest beverage company in the world, replaced their flagship beverage, Coca-Cola, with New Coke — a soda drink designed to match the sugary sweetness of Coca-Cola’s biggest competitor, Pepsi.

At the time, Pepsi was riding a surge in sales, fueled by two marketing campaigns: The first was a clever use of blind taste tests, called the “Pepsi Challenge,” through which Pepsi claimed most consumers preferred the taste of Pepsi over Coca-Cola. The second, the “Pepsi Generation” campaign, featured the most popular show business personality of the time, Michael Jackson. Pepsi’s message to consumers was clear: Pepsi is young and cool, and Coca-Cola isn’t.

Hence, the launch of New Coke — which, to this day, is considered one of the great marketing and re-branding failures of all time. Within weeks of New Coke’s launch it was clear to Coca-Cola’s senior management that their loyal customer base — raised on the original Coca-Cola formula — was not going to accept the New Coke formula. Hastily, the company would re-brand their original Coca-Cola formula as Coca-Cola Classic.

Coca-Cola’s public relations managers tried to retcon the whole New Coke-Coca-Cola Classic story to make it appear the company had planned to launch Coca-Cola Classic all along, but most business historians continue to describe New Coke as an epic re-branding failure. New Coke was discontinued in July 2002.

What did Coca-Cola do wrong? First, it never looks good for a leader to appear too reactive to a rising competitor. On a practical level, brands that lead over long periods must adapt to changing consumer tastes, but there is a difference between ‘adapting’ and ‘panicking.’ Coca-Cola panicked.

But, in what may have been Coca-Cola’s biggest mistake, they failed to understand the emotional importance to their loyal customers of the original Coca-Cola formula.

“New Coke left a bitter taste in the mouths of the company’s loyal customers,” according to the History Channel’s Christopher Klein. “Within weeks of the announcement, the company was fielding 5,000 angry phone calls a day. By June, that number grew to 8,000 calls a day, a volume that forced the company to hire extra operators. ‘I don’t think I’d be more upset if you were to burn the flag in our front yard,’ one disgruntled drinker wrote to company headquarters.”

Prior to New Coke’s roll out, Coca-Cola did the taste-test research (which showed New Coke was favored over Pepsi), but they didn’t understand the psychology of Coca-Cola’s most loyal customers.

“The company had underestimated loyal drinkers’ emotional attachments to the brand. Never did its market research testers ask subjects how they would feel if the new formula replaced the old one,” according to Klein.

Is Hollywood Making the Same Mistake as Coca-Cola?

Another term for ‘loyal customer’ is ‘fan.’ In the entertainment industry, fans represent a franchise’s core audience. They are the first to line up at a movie premiere or stream a TV show when it becomes available. They’ll forgive an occasional plot convenience or questionable acting performance, as long as they can still recognize the characters, mood and narratives that make up the franchise they love.

Star Trek fans showed up in command- and science-officer-colored swarms for 1979’s Star Trek: The Motion Picture, an extremely boring, almost unwatchable film according to many Trek fans. Yet they still showed up for 1982’s Star Trek: The Wrath of Khan (a far better film) even when the casual Trek audience didn’t, helping make the Star Trek “trilogy” films (The Wrath of Khan, The Search for Spock, The Voyage Home) among the franchise’s most successful.

No superhero franchise has endured as many peaks and valleys in quality as Batman: a campy TV show in my youth, then a major box office event with Tim Burton’s Batman (1989). Unfortunately, the franchise descended into numbing mediocrity after Burton, reaching its creative depths with 1997’s Batman & Robin, only to exceed the critical acclaim of the Burton-era films with Christopher Nolan’s Batman trilogy in the 2000s. Through all of this, Batman movies have made money…most of the time.

This phenomenon is common to a lot of science fiction and superhero franchises: Star Wars, Superman, Spider-Man, Doctor Who, Alien, and The Terminator, among others. They are not consistently great, but they almost always bring out a faithful fan base.

That is, until they don’t.

Three major science fiction franchises have undergone significant re-branding efforts in the past five years, in the understandable hope of building a new, younger, and more diverse fan base for these long-time, successful franchises — not too dissimilar from what Coca-Cola was trying to do in the mid-1980s:

Star Wars

Now owned by Disney, Star Wars had its canon significantly altered from the original George Lucas-led movies in the three Disney trilogy films, which unceremoniously diminished the heroic stature of the saga’s two most iconic male characters, Luke Skywalker and Han Solo, in favor of new characters (Rey, Finn, and Poe Dameron). If Disney had a customer complaint line, it would have been overwhelmed after the first trilogy movie, The Force Awakens, and shut down after The Last Jedi.

Result: Disney made billions in box office receipts from the trilogy movies, but it is hard to declare these movies an unqualified success. Yes, the movies made money, but Disney designs movies as devices for generating stable (and profitable) revenue streams across a variety of platforms (amusement park attendance, spin-off videos, toy sales, etc.). The Disney trilogy has generated little apparent interest in sequel films. At the Walt Disney Company’s most recent Investor Day last December, which featured announcements for future Star Wars TV and movie projects, not one of the new projects involved characters or story lines emanating from the trilogy movies. More telling, the new Star Wars-themed Galaxy’s Edge parks at Disney World and Disneyland saw smaller-than-expected crowds pre-pandemic, and to make matters worse, Star Wars merchandise sales have been soft since the trilogy rollout. Be assured, these outcomes are not part of Disney’s Star Wars strategy.

Star Trek

The Star Trek franchise has launched two new TV shows through Paramount/CBS in the past three years: Star Trek: Discovery and Star Trek: Picard. Through three seasons of Discovery and one of Picard, the re-branded Star Trek has turned the inherent optimism of Gene Roddenberry’s original Star Trek vision into a depressing, dystopian future. Starfleet, once an intergalactic beacon for inclusiveness, integrity and justice, is now a bureaucratic leviathan filled with corruption and incompetence. To further distance the new Star Trek from the original Star Trek series (TOS), Discovery’s writers concocted an incomprehensible plot twist (the Seven Red Signals) in order to send the Discovery’s crew 900 years into the future, past the TOS and Star Trek: The Next Generation timelines, thereby freeing the new series from the shackles of previous Star Trek canon.

Result: As Picard has had only one season, I will focus on Discovery, which has had three seasons on CBS’s All Access streaming platform (though only one on the broadcast network). In its 2017 series premiere, Discovery reportedly attracted 9.6 million viewers on the CBS broadcast network before the show transitioned to the streaming service. Parrot Analytics subsequently reported that Discovery was the second most streamed TV show in Summer 2017 (after Netflix’s Ozark), with 12.6 million average demand expressions, and was first for the week of October 5 through 11, with over 53 million average demand expressions.

Not a bad start. But by moving Discovery to the broadcast side this past year, CBS apparently signaled that the show wasn’t building enough audience interest on the streaming service to offset the ad revenue losses from keeping it off the broadcast network (or, at least, that is how some TV insiders are interpreting the move). Given Discovery’s broadcast ratings for the first season, however, it is unlikely the show is inundating the network with increased ad revenue either. Its “linear” premiere broadcast on September 24, 2020 attracted 1.7 million viewers, placing it 8th out of the 12 broadcast network shows that night, a bad start that has not improved over the subsequent 13 episodes. [The most recent episode, broadcast on January 28, 2021, brought in 1.8 million viewers.]

Nonetheless, perhaps Discovery is at least attracting a new, younger audience for Star Trek? Uh, nope. Consistently, the show has achieved around a 0.2 rating within the 18–49 demo, which translates to about 280,000 people out of the 139 million Americans in that age group. That means the remaining 1.4 million Discovery viewers are aged 50 or older — in other words, old Star Trek nerds like me. How ironic would it be if it were the franchise’s original series fans that saved Discovery from cancellation, despite the show’s apparent attempts to distance itself from those same fans?

Doctor Who

No re-branding effort has broken my heart more than the decline of the BBC’s Doctor Who under showrunner Chris Chibnall’s leadership. The oldest science fiction series still on television feels irreparably damaged with its underdeveloped companion characters, generally poor scripts, and grade-school-level political sermons. The net result? The last two seasons featuring the 13th Doctor, played gamely by Jodie Whittaker, are more often boring than entertaining or thought-provoking.

But most regrettably, to lifelong fans who have loved the show since its first Doctor (played by William Hartnell), the BBC and Chibnall have taken the show’s long-established canon, stuffed it in a British Army duffel bag, and thrown it in the River Thames to drown. And how did they do that? By rewriting the Doctor’s origin story — a Time Lord exiled from his home world of Gallifrey — in the fifth (“Fugitive of the Judoon”) and twelfth (“The Timeless Children”) episodes of the 13th Doctor’s second season, such that the first Doctor is now a previously unknown woman named Ruth Clayton, and the Doctors’ (Time Lords’) ability to regenerate is derived from a sadistic experiment on a small child, the first living being found to have regenerative powers. If this story wasn’t so stupid, it would be sick.

Chibnall’s re-telling of the Doctor’s origin story was a WTF! moment for a lot of Whovians (the name often given to Doctor Who fans). But it was not a WTF! moment in the entertaining sense (like when half of The Avengers dissolved at the end of Infinity War); it was one in the bad sense.

Chibnall would have inspired no more controversy if he had gone back and rewritten Genesis 1:1 to read: “In the beginning Hillary Clinton created the heaven and the earth.” Such rewrites have only one purpose: to piss off people emotionally attached to the original story.

And that is exactly what the BBC and Chibnall have done — and many Doctor Who fans (though, as yet, not me) have responded by abandoning the franchise.

Result: The TV ratings history for the 13th Doctor’s two seasons reveals the damage done, though there was hope at the beginning. The 13th Doctor’s first episode on October 7, 2018, pulled in 10.96 million viewers — a significant improvement over the previous Doctor’s final season ratings, which never exceeded 7.3 million viewers for an individual episode. However, in a near monotonic decline, the 13th Doctor’s latest episode (and last of the 2020 season) could only generate 4.69 million viewers, an all-time low since the series reboot in 2005.

And why did Doctor Who lose 6.3 million viewers? Because the BBC (through Chibnall) wanted Doctor Who to be more tuned to the times. They wanted a younger, more diverse, more socially enlightened audience for their show. Doctor Who was never cool enough. In fact, the original Doctor Who series was always kind of silly and escapist — a condition completely unacceptable in today’s political climate, according to the big heads at the BBC. Doctor Who needed to be relevant, so it became the BBC’s version of New Coke.

Except the BBC’s new Doctor Who is New Coke only if New Coke had tasted like windshield wiper fluid. From Chibnall’s pointed pen, the show has aggressively (and I would add, vindictively) alienated fans by marginalizing its original story.

Perhaps the Chibnall narrative is objectively a better one. Who am I to say it isn’t? But the answer to that question doesn’t matter to Whovians who are deeply connected to the pre-Chibnall series. Whovians have left the franchise in the millions, and unless the BBC has already concocted a Coca-Cola Classic-like response, I don’t see why they would come back.

Have TV and Movie Studios Forgotten How to Do Market Research?

The lesson from Star Wars, Star Trek and Doctor Who is NOT that brands shouldn’t change over time or that canon is sacrosanct and any deviations are unacceptable. Brands must adapt to survive.

All three of these franchises need a more diverse fan base to stay relevant and that starts with attracting more women and minorities into the fold. But how these franchises tried to evolve has been a textbook example on how not to do it.

In my opinion, it starts with solid writing and good storytelling, which requires better developed characters and more compelling narratives. Harry Potter is the contemporary gold standard. My personal favorite, however, is Guardians of the Galaxy — a comic book series I ignored as a kid, but in cinema form I love. Director/Writer James Gunn has offered us a master class in creating memorable characters, such as Nebula, Gamora, Drax, Peter Quill, Mantis, Rocket, and Groot. So much so that a few plot holes now and then are quickly forgiven — not so with the re-branded Star Wars, Star Trek and Doctor Who.

Along with better writing, these three franchises have been poorly managed at the business level — and that starts with market research. Disney, Paramount and the BBC have demonstrated through their actions that they do not know their existing customers, much less how to attract new ones.

As a 2020 American Marketing Association study warns businesses:

“Any standout customer experience starts with figuring out the ‘what’ and working backwards to design, develop and deliver products and services that customers use and recommend to others. But how effective are marketing organizations at understanding “what” customers are looking for and ‘why’?”

The AMA’s answer to that last question was that most businesses — 80 percent by their estimate — do not understand the ‘what’ and ‘why’ behind their current and potential customers’ motivations.

Does that mean these franchises would have been better off just engaging in slavish fan service? Absolutely not.

Fans are good at spending their money to watch their favorite movies and TV shows. They are not creative writers. Few people in contemporary marketing still reference the old trope of the “customer always being right,” as experience has taught companies and organizations that customers don’t always know what they want, much less what they need. As Henry Ford is quoted as saying, “If I’d asked people what they wanted, they’d have asked for faster horses” [though it is not clear that he ever said that or would have believed it].

Instead, modern marketers tend to focus on understanding the customer experience and mindset in an effort to strategically differentiate their brands. Central to that process, the best organizations depend heavily on sound, objective research to answer key questions about their current and prospective customers.

There was a time when the entertainment industry was no different in its reliance on consumer data and feedback to shape product and distribution. The power of one particular market research firm, National Research Group (NRG), to determine movie release dates or whether a movie even gets released in theaters is legend in Hollywood. Though now a part of Nielsen Global Media (another research behemoth that has probably done as much to shape what we watch as NBC or CBS ever have), early in its existence the NRG was able to get the six major movie studios to sign exclusivity agreements granting NRG an effective monopoly on consumer-level information regarding upcoming movies. If you can control information, you can control people (including studio executives).

But something has happened in Hollywood in the past few years with respect to science fiction and superhero audiences (i.e., customers), who are perceived by many in Hollywood — wrongfully, I might add — as being predominantly white males.

While men have long been over-represented among science fiction and fantasy writers — and that is a problem — the audience for these genres is more evenly divided demographically than commonly assumed.

For certain, the research says science fiction moviegoers skew young and male, but that is a crude understanding of science fiction fandom. Go to a Comic-Con convention and one will see crowds almost evenly divided between men and women and drawn from most races, ethnicities and age groups — though my experience has been that African-Americans are noticeably underrepresented (see photo below).

Comic-Con 2010 — the Hall H crowd (Photo by The Conmunity — Pop Culture Geek; used under CCA 2.0 Generic license.)

Similarly, a 2018 online survey of U.S. adults found that, while roughly three-quarters of men like science fiction and fantasy movies (76% and 71%, respectively), roughly two-thirds of women also like those movie genres (62% and 70%, respectively). The same survey also found that white Americans are no more likely to prefer these movie genres than Hispanic, African-American or other race/ethnicities.

Methodological Note:

Total sample size for this survey of U.S. moviegoers was 2,200 and the favorite genre results are based on the percentage of respondents who had a ‘very’ or ‘somewhat favorable’ impression of the movie genre.

The ‘white-male’ stereotyping of science fiction fans, so common within entertainment news stories about ‘toxic’ fans, also permeates descriptions of the gaming community, an entertainment subculture that shares many of the same franchises (Star Wars, The Avengers, The Witcher, Halo, etc.) popular within the science fiction and fantasy communities. Despite knowing better, when I think of a ‘gamer,’ I, too, think of people like my teenage son and his male cohorts.

Yet, in a 2008 study, Pew Research found that self-described “gamers” were 65 percent male and 35 percent female, and in 2013 Nintendo reported that its game-system users were evenly divided between men and women.

Conflating “white male” stereotypes of science fiction fans with “toxic” fans serves a dark purpose within Hollywood: The industry believes it can’t re-brand some of its most successful franchises without first destroying the foundations upon which those franchises were built.

In the process of doing that, Hollywood has ignited a war with some of its most loyal customers, who have been labeled within the industry as “toxic fans.”

A Cold War between Science Fiction Fans and Hollywood Continues

The digital cesspool — otherwise known as social media — can’t stop filling my inbox with stories about how “woke” Hollywood is vandalizing our most cherished science fiction and superhero franchises (Star Wars, Star Trek, Batman and Doctor Who, etc.) or how supposedly malevolent slash racist/sexist/homophobic/transphobic fans are bullying those who enjoy the recent “re-imaginings” of these franchises. All sides are producing more noise than insight.

In the midst of these unproductive, online shouting matches, there is real data to suggest much of the criticism of Disney’s Star Wars trilogy (The Force Awakens, The Last Jedi, and The Rise of Skywalker), CBS’ new Star Trek shows and the BBC’s 13th iteration of Doctor Who is rooted in genuine popularity declines within those franchises.

I produced the following chart in a previous post about the impact of The Force Awakens on interest in the Star Wars franchise:

Figure 1: Worldwide Google searches on ‘Star Wars’ from January 2004 to May 2020

Source: Google Trends

The finding was that The Force Awakens failed to maintain the interest in Star Wars it had initially generated, as evidenced by the declining “trend in peaks” with subsequent Disney Star Wars films.

A similar story can be told for some of TV’s most prominent science fiction and superhero franchises.

The broadcast TV audience numbers are well-summarized at these links: Star Trek: Discovery, Doctor Who, and Batwoman (Season 1 & Season 2).

Bottom line: Our most enshrined science fiction and superhero franchises are losing audiences fast.

‘The Mandalorian’ Offers Hope on How To Re-Brand a Franchise

Whether these audience problems are due to bad writing, bad marketing, negative publicity caused by a small core of “toxic” fans, or just unsatisfied fan bases is an open dispute. What can be said with some certitude is that these franchises have underwhelmed audiences in their latest incarnations, with one exception: Disney’s Star Wars streaming series, The Mandalorian.

Methodological Note:

In a previous post I’ve shown that Google search trends strongly correlate with TV streaming viewership: TV shows that generate large numbers of Google searches tend to be TV shows people watch. A similar relationship has been shown to exist between movie box office figures and Google search trends.

Figure 2 shows the Google search trends since September 2019 for Disney’s The Mandalorian. Over the two seasons the show has been available on Disney+ (Season 1: Nov. 12 — Dec. 27, 2019; Season 2: Oct. 30 — Dec. 18, 2020), intraseasonal interest in the show has generally gone up with each successive episode, with the most interest occurring for the season’s final episode. This “rising peaks” phenomenon — indicative of a well-received and successful TV or movie series — was particularly evident in The Mandalorian’s second season, where characters very popular among long-time fans periodically emerged over the course of the season: Boba Fett, Bo-Katan, Ahsoka Tano, and (of course) Luke Skywalker.

Figure 2: Google search trends for Disney’s The Mandalorian (Sept. 2019 to Feb. 2021)

Source: Google Trends

It has only been two seasons, but The Mandalorian’s creative leaders — Jon Favreau and Dave Filoni — have been able to maintain steady audience interest, though they run the risk of eating their seed corn with the frequent fan-favorite character roll-outs. It will not take long for them to run out of cherished and widely known Star Wars characters. [Jon/Dave, I love Shaak Ti, but what are the chances she will ever come back?]

Nonetheless, The Mandalorian stands in stark contrast to some other science fiction and superhero franchises that have struggled to build their audiences in the past five years.

Figure 3 shows four TV shows with declining intraseasonal and/or interseasonal peaks. We’ve discussed Discovery and Doctor Who’s audience problems above, but Supergirl deserves some particular attention as my son and I watched the show faithfully through the first four seasons.

Figure 3: Google search trends for Star Trek: Discovery, Doctor Who, Batwoman, and Supergirl (Feb. 2016 to Feb. 2021)

Source: Google Trends

Supergirl, whose title character is played by a most charming Melissa Benoist, debuted on October 26, 2015 on CBS and averaged 9.8 million viewers per episode in its freshman season, making it the 8th most watched TV show for the year — a solid start.

Melissa Benoist speaking at the 2019 San Diego Comic Con International (Photo by Gage Skidmore; Used under CCA-Share Alike 2.0 Generic license.)

Regrettably, CBS moved the show to its sister network, The CW, where it experienced an immediate drop of 6.5 million viewers in its first season there and another 1.5 million viewers over the next three seasons. [Supergirl has since been cancelled.]

How did this happen?

Before blaming the show’s overt ‘wokeness’ — women were always competent, while white men were either evil (Jon Cryer’s Lex Luthor was magnificent, though) or lovesick puppies — the spotlight must be turned on the CBS executives who decided to anchor the show to The CW’s other superhero shows (The Flash and Arrow) in an effort to help the flagging sister network. Their attempt failed, and Supergirl paid the price.

At the same time, Supergirl didn’t build on its smaller CW audience and that problem rests on the shoulders of the show’s creative minds, particularly Jessica Queller and Robert Rovner, who were the showrunners after the second season.

First, what happened to Superman? Supergirl’s cousin, Kal-El, was prominent in the second season, but then disappeared in Season 3 (apparently, he had a problem to solve in Madagascar caused by Reign, an existential threat to our planet who Supergirl happened to be fighting at the time). Superman couldn’t break away to help his cousin?

I watched the Supergirl TV show because I was a fan of her DC comics in childhood and few realize that Supergirl comics, at least among boys, were more popular than Wonder Woman’s in the 1960s, according to comics historian Peter Sanderson. And central to her story was always her cousin, Superman. But for reasons seemingly unrelated to the sentiments of the show’s fans, Superman’s appearances after Season 2 were largely limited to the annual Arrowverse crossover episodes.

Fine, Supergirl’s showrunners wanted the show to live or die based on the Supergirl character, not Superman’s. I get it. But it was a waste of one of the franchise’s greatest assets.

Second, the script writing on Supergirl changed noticeably after Season 2, with story lines mired in overly convenient plot twists (M’yrnn, the Green Martian, gives up his life and merges with the earth to stop Reign’s terraforming of the planet? How does that work? We’ll never know.) and clunky teaching moments on topics ranging from gun control to homosexuality. Instead of being a lighthearted diversion from the real world, as it mostly was in its first two seasons, the show’s writers thought it necessary to repurpose MSNBC content. Supergirl stopped being fun.

What Lessons are Learned?

The most important lesson from Coca-Cola’s New Coke blunder was that mistakes can be rectified, if dealt with promptly and earnestly. It is OK to make mistakes. You don’t even have to admit them. But you do have to address them.

In 1985, the year of New Coke’s introduction, Coca-Cola’s beverage lines owned 32.3 percent of the U.S. market to Pepsi’s 24.8 percent. Today, Coca-Cola owns 43.7 percent of the non-alcoholic beverage market in the U.S., compared to Pepsi’s 24.1 percent.

With The Mandalorian’s success, Hollywood may still realize that burning down decades of brand equity earned by franchises such as Star Wars, Star Trek and Doctor Who is not a sound business plan. The good news is, it is not too late to make amends with the millions of ardent fans who have supported these franchises through the good, the bad and the Jar Jar Binks. That Star Wars fans can now laugh about Jar Jar is proof of that.

  • K.R.K.

Send comments to:

The most important moment in human history may have passed without much notice

Parkes Radio Telescope in Australia which detected possible extraterrestrial signals from Proxima Centauri last year (Photo by Maksym Kozlenko; Used under CCA-Share Alike 4.0 International license.)

By Kent R. Kroeger (Source:; January 25, 2021)

Some background music while you read ==> Undiscovered Moon (by Miguel Johnson)

Shane Smith, an intern in the University of California at Berkeley’s Search for Extraterrestrial Intelligence (SETI) program, was the first to see the anomaly buried in petabytes of Parkes Radio Observatory data.

It was sometime in October of last year, the start of Australia’s spring, when Smith found a strange, unmodulated narrowband emission at 982.002 megahertz, seemingly from Proxima Centauri, our Sun’s closest stellar neighbor.

While there have been other intriguing radio emissions — 1977’s “Wow” signal being the most famous — none have offered conclusive evidence of alien civilizations. Similarly, the odds are in favor of the Parkes signal being explained by something less dramatic than extraterrestrial life; but, as of now, that has not happened.

“It has some particular properties that caused it to pass many of our checks, and we cannot yet explain it,” Dr. Andrew Siemion, director of the University of California, Berkeley’s SETI Research Center, told Scientific American recently. “We don’t know of any natural way to compress electromagnetic energy into a single bin in frequency,” Siemion says. “For the moment, the only source that we know of is technological.”

Proof of an extraterrestrial intelligence? No, but initial evidence offering the intriguing possibility? Why not. And if another radio telescope were to also detect this tone at 982.002 megahertz coming from Proxima Centauri, a cattle stampede of conjecture would likely erupt.

As yet, however, the scientists behind the Parkes Radio Telescope observations have not published the details of their potentially momentous discovery, as they still contend, publicly, that the most likely explanation for their data is human-sourced.

“The chances against this being an artificial signal from Proxima Centauri seem staggering,” says Dr. Lewis Dartnell, an astrobiologist and professor of science communication at the University of Westminster (UK).

Is there room for “wild speculation” in science?

We live in a time when being called a “conspiracy theorist” is among the worst smears possible, no matter how dishonest or unproductive the charge. How dare you not agree with consensus opinion!

However, science presumably operates above the daily machinations of us peons. How could any scientist make a revolutionary discovery if not by tearing down consensus opinion? Do you think when Albert Einstein published his relativity papers he was universally embraced by the scientific community? Of course not.

“Sometimes scientists have too much invested in the status quo to accept a new way of looking at things,” says writer Matthew Wills, who studied how the scientific establishment in Einstein’s time reacted to his relativity theories.

But just because scientists cannot yet explain the Parkes signal doesn’t mean the most logical conclusion should be “aliens.” There are many less dramatic explanations that also remain under consideration.

At the same time, we need to prepare ourselves for the possibility the Parkes signal cannot be explained as a human-created or natural phenomenon.

“Extraordinary claims require extraordinary evidence.” — Astrophysicist Carl Sagan’s rewording of Laplace’s principle, which says that “the weight of evidence for an extraordinary claim must be proportioned to its strangeness”

“When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” — Sir Arthur Conan Doyle, stated by Sherlock Holmes.

The late Carl Sagan was a scientist but became famous as the host of the PBS show “Cosmos” in the 1980s. Sherlock Holmes, of course, is a fictional character conceived by Sir Arthur Conan Doyle. It should surprise few then that Sagan’s quote about ‘extraordinary claims’ aligns comfortably with mainstream scientific thinking, while the Holmes quote is referred to among logicians and scientific philosophers as the process of elimination fallacy — when an explanation is asserted as true on the belief that all alternate explanations have been eliminated when, in truth, not all alternate explanations have been considered.

If you are a scientist wanting tenure at a major research university, you hold the Sagan (Laplace) quote in high regard, not the Holmesian one.

The two quotes encourage very different scientific outcomes: Sagan’s biases science towards status quo thinking (not always a bad thing), while Holmes’ aggressively pushes outward the envelope of the possible (not always a good thing).

Both serve an important role in scientific inquiry.

Oumuamua’s 2017 pass-by

Harvard astronomer Dr. Abraham (“Avi”) Loeb, the Frank B. Baird Jr. Professor of Science at Harvard University, consciously uses both quotes when discussing his upcoming book, “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth,” as he recently did on the YouTube podcast “Event Horizon,” hosted by John Michael Godier.

His book details why he believes that the first observed interstellar object in our solar system, first sighted in 2017 and nicknamed Oumuamua (the Hawaiian term for ‘scout’), might have been created by an alien civilization and could be either some of their space junk or a space probe designed to observe our solar system, particularly the third planet from the Sun. For his assertion about Oumuamua, Dr. Loeb has faced significant resistance (even ridicule) from many in the scientific community.

Canadian astronomer Dr. Robert Weryk calls Dr. Loeb’s alien conclusion “wild speculation.” But even if Dr. Weryk is correct, what is wrong with some analytic provocation now and then? Can heretical scientific discoveries advance without it?

Dr. Loeb, in turn, chides his critics right back for their lack of intellectual flexibility: “Suppose you took a cell phone and showed it to a cave person. The cave person would say it was a nice rock. The cave person is used to rocks.”

Hyperbolic trajectory of ʻOumuamua through the inner Solar System, with the planet positions fixed at the perihelion on September 9, 2017 (Image by nagualdesign — Tomruen; Used under the CCA-Share Alike 4.0 International license.)

Dr. Loeb’s controversial conclusion about Oumuamua formed soon after it became apparent that Oumuamua’s original home was outside our solar system and that its physical characteristics are unlike anything we’ve observed before. Two characteristics specifically encourage speculation about Oumuamua’s possible artificial origins: First, it is highly elongated, with perhaps a 10-to-1 aspect ratio. If confirmed, that is unlike any asteroid or comet ever observed, according to NASA. And, second, it was observed accelerating as it started to exit our solar system without showing large amounts of dust and gas being ejected as it passed near our Sun, as is the case with comets.

In stark contrast to comets and other natural objects in our solar system, Oumuamua is very dry and unusually shiny (for an asteroid). Furthermore, according to Dr. Loeb, the current data on its shape cannot rule out the possibility that it is flat — like a sail — though the consensus view remains that Oumuamua is long, rounded (not flat) and possibly the remnants of a planet that was shredded by a distant star.

I should point out that other scientists have responded in detail to Dr. Loeb’s reasons for suggesting Oumuamua might be alien technology and an excellent summary of those responses can be found here.

Place your bets on whether the Parkes signal and/or Oumuamua are signs of alien intelligence

What are the chances Oumuamua or the Parkes signal are evidence of extraterrestrial intelligent life?

If one asks mainstream scientists, the answers would cluster near ‘zero.’ Even the scientists involved in discovering the Parkes signal will say that. “The most likely thing is that it’s some human cause,” says Pete Worden, executive director of the Breakthrough Initiatives, the project responsible for detecting the Parkes signal. “And when I say most likely, it’s like 99.9 [percent].”

In 2011, radiation from a microwave oven in the lunchroom at Parkes Observatory was, at first, mistakenly confused with an interstellar radio signal. These things happen when you put radiation sources near radio telescopes looking for radiation.

Possibly the most discouraging news for those of us who believe advanced extraterrestrial intelligence commonly exists in our galaxy is a recent statistical analysis published in May 2020 in the Proceedings of the National Academy of Sciences by Dr. David Kipping, an assistant professor in Columbia University’s Department of Astronomy.

In his paper, Dr. Kipping employs an objective Bayesian analysis to estimate the odds ratios for the early emergence of life on an alien planet and for the subsequent development of intelligent life. Since he only had a sample size of 1 — Earth — he used Earth’s timeline for the emergence of early life (which occurred about 500 million years after Earth’s formation) and intelligent life (which took another 4 billion years) to run a Monte Carlo simulation. In other words, he estimated how often elementary life forms and then intelligent life would emerge if we repeated Earth’s history many times over.
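A toy version of that kind of simulation can make the idea concrete. This is not Dr. Kipping’s actual model (his paper uses an objective Bayesian treatment); the rate parameters below are illustrative assumptions loosely inspired by the Earth timeline described above:

```python
import random

# Toy Monte Carlo in the spirit of "rerunning Earth's history many times."
# NOT Dr. Kipping's model: the habitable window and mean waiting times
# below are illustrative assumptions, not his fitted values.
TRIALS = 100_000
HABITABLE_WINDOW = 5.0   # billions of years the planet stays habitable (assumed)
MEAN_ABIOGENESIS = 0.5   # assumed mean wait for first life (Gyr)
MEAN_INTELLIGENCE = 4.0  # assumed mean wait from first life to intelligence (Gyr)

random.seed(42)
life, intelligence = 0, 0
for _ in range(TRIALS):
    # Draw the waiting time for abiogenesis from an exponential distribution.
    t_life = random.expovariate(1 / MEAN_ABIOGENESIS)
    if t_life < HABITABLE_WINDOW:
        life += 1
        # Intelligence requires a second, much longer waiting time on top.
        t_smart = t_life + random.expovariate(1 / MEAN_INTELLIGENCE)
        if t_smart < HABITABLE_WINDOW:
            intelligence += 1

print(f"Reruns with life:         {life / TRIALS:.1%}")
print(f"Reruns with intelligence: {intelligence / TRIALS:.1%}")
```

Under these assumed rates, nearly every rerun produces early life, but a substantial fraction never gets to intelligence before the habitable window closes — the qualitative pattern Dr. Kipping reports.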

Dr. Kipping’s answer? “Our results find betting odds of >3:1 that abiogenesis (the first emergence of life) is indeed a rapid process versus a slow and rare scenario, but 3:2 odds that intelligence may be rare,” concludes Dr. Kipping.

Put differently, there is a 75 percent chance our galaxy is full of low-level life forms that formed early in a planet’s history, but a 60 percent chance that human-like intelligence is quite rare.
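The translation from betting odds to percentages is just odds / (odds + 1), as a two-line sketch shows:

```python
def odds_to_probability(numerator: float, denominator: float) -> float:
    """Convert betting odds (e.g. 3:1) to an implied probability."""
    return numerator / (numerator + denominator)

# >3:1 odds that abiogenesis is a rapid process
print(odds_to_probability(3, 1))  # 0.75 -> "75 percent chance"

# 3:2 odds that intelligence is rare
print(odds_to_probability(3, 2))  # 0.6 -> "60 percent chance"
```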

Dr. Kipping is not suggesting humans are alone in the galaxy, but his results suggest we are rare enough that to have a similarly intelligent life form living in our nearest neighboring solar system, Proxima Centauri, is unlikely.

What a killjoy.

My POOMA-estimate of Extraterrestrial Intelligent Life

I want to believe aliens living in the Proxima Centauri system are broadcasting a beacon towards Earth saying, in effect, “We are over here!” I also want to believe Oumuamua is an alien probe (akin to our Voyager probes now leaving the confines of our solar system).

If either is true (and we may never know), it would be the biggest event in human history…at least until the alien invasion that will follow happens.

Both events leave me with questions: If Oumuamua is a reconnaissance probe, shouldn’t we have detected electromagnetic signatures suggesting such a mission? [It could be a dead probe.] And in the case of the Parkes signal, if a civilization is going to go to the trouble of creating a beacon signal (which requires a lot of energy directed at a specific target in order to be detectable at great distances), why not throw some information into the signal? Something like, “We are Vulcans. What is your name?” or “When is a good time for us to visit?” And why do these signals never reappear? [At this writing, there have been no additional narrowband signals detected from Proxima Centauri subsequent to the ones found last year.]

Given the partisan insanity that grips our nation and the fear-mongering meat heads that overpopulate our two political parties, we would be well-served by a genuine planetary menace. We would all gain some perspective. And I’ll say it now, in case they are listening: I, for one, welcome our new intergalactic overlords (Yes, I stole that line from “The Simpsons”).

In the face of an intergalactic invasion force, we may look back and realize that a lot of the circumstantial evidence of extraterrestrial life we had previously dismissed as foil-hat-level speculation was, in reality, part of a bread crumb trail to a clearer understanding of our place in the galactic expanse.

So, are Oumuamua and the possible radio signal from the Proxima Centauri system evidence of advanced extraterrestrial life? In isolation, probably not. But I wonder what evidence we have overlooked because our best scientific minds are too career conscious to risk their professional reputations.

I don’t have a professional reputation to protect, so here is my guess as to whether Oumuamua and/or the possible Proxima Centauri radio signal are actual evidence of advanced extraterrestrial life: A solid 5 percent probability.

The chance that we’ve overlooked other confirmatory evidence already captured by our scientists? A much higher chance…say, a plucky 20 percent probability.

Turning the question around, given our time on Earth, our technology, and the amount of time we’ve been broadcasting towards the stars, what are the chances an alien civilization living nearby would detect our civilization? Probably a rather good chance, but not in the way they did in the 1997 movie Contact. There is no way the 1936 Berlin Olympics broadcast would be detectable and recoverable even a few light-years away from Earth. Instead, aliens are more likely to see evidence of life in the composition of our atmosphere.

And what is my estimated probability that advanced extraterrestrial life (of the space-traveling kind) lives in our tiny corner of the Milky Way — say, within 50 light years of our Sun? Given there are at least 133 Sun-like stars within this distance (many with planets in the organic-life-friendly Goldilocks zone) and probably 1,000 more planetary systems orbiting red dwarf stars, I give it an optimistic 90 percent chance that intelligent life lives nearby.

We are not likely to be alone. In fact, we probably don’t have the biggest house or the smartest kids in our own cul-de-sac. We are probably average.

I am even more convinced that we live in a galaxy densely-populated with life on every point of the advancement scale, a galactic menagerie of life that has more in common with Gene Roddenberry’s Star Trek than Dr. Kippling’s 3:2 odds estimate against such intelligent life abundance.

So, it won’t surprise me if someday we learn that aliens in the Proxima Centauri system were trying to contact us or that Oumuamua was a reconnaissance mission of our solar system by aliens looking for a hospitable place to explore (and perhaps spend holidays if the climate permits). I’m not saying that is what happened, I’m just saying I would not be surprised.

  • K.R.K.

Send comments to:

Be part of the solution, not the problem

By Kent R. Kroeger (Source:; January 16, 2021)

Could Donald Trump’s presidency have ended any other way?

What happened at — and, more importantly, in — the U.S. Capitol on January 6th was tragic. People died because an uncontrollable mob formed outside the U.S. Capitol to support a president who, at best, was recklessly naive about what a mass rally like that could turn into; and, at worst, deliberately ignited those flames.

If only Trump instead of me had gotten this fortune cookie and taken it to heart:

“If you win, act like you are used to it. If you lose, act like you love it.” — A fortune cookie

To my Biden-supporting readers, concerned that I am going to defend Trump’s actions leading up to the storming of the U.S. Capitol on January 6th, rest easy. I am not.

Now is not the time to discover the mental gymnastics necessary to excuse a political act — Trump’s rally to “Stop the Steal” — that a child would have realized had the potential to provoke significant violence.

To my Trump-supporting readers, already practicing levels of emotional isolation and self-censorship that can’t possibly be good for your long-term health, you will be spared any self-important, virtue-signaling lecture about the moral righteousness of Republicans “brave” enough to disown Trump or how the GOP’s many latent malignancies were exposed (and exploited) by the Trump presidency.

No, instead, I will use the January 6th debacle to share what I am telling myself so I can help make sure something like that sh*t-carnival never happens again.

For starters…

Now is NOT the time to say, 'They started it.'

I will not, for partisan purposes, compare or equate the Capitol riot to last year's George Floyd/Black Lives Matter protests, in which at least 19 people died and which caused approximately $1.5 billion in property damage.

Protests turning deadly are not that uncommon in U.S. history, and they've been instigated from both the left and the right. We've even seen gun violence directed at U.S. House members within the Capitol building itself (the 1954 Capitol shooting).

But to use the 2021 Capitol riot tragedy to propel the narrative that violence is primarily the domain of the political right is to willfully ignore instances such as the 12 people who died of Legionnaires' disease in Flint, Michigan, after a Democrat mayor, a Republican governor, and an oddly passive Environmental Protection Agency under Barack Obama carelessly switched Flint's water supply in order to save money.

One might say that Flint is a different kind of violence, and they'd be right. I think it's worse. It's silent. Its perpetrators are hard to identify. And justice and restitution are even harder to secure.

Or how about the hundreds of mostly brown people U.S. drones and airstrikes kill every year? These military and intelligence actions, uniformly funded by bipartisan votes since the 9/11 attacks, have arguably accomplished little except make the U.S. the world’s most prolific killer of pine nut farmers in Afghanistan.

Whether or not we acknowledge it, deadly violence is a central part of our culture, and no political party, ideology, race or ethnicity is immune from being complicit in it.

Now is NOT the time to call other people conspiracy theorists — especially since we are all inclined to be one now and then.

While I emphatically oppose the overuse of mail-in voting (particularly when third parties are allowed to collect and deliver large numbers of completed ballots) on the grounds that it compromises two core principles of sound election system design — timeliness and integrity — it is regrettable that Trump and his subordinates have encouraged his voters to believe the three-headed chimera that the 2020 presidential election was stolen. The evidence simply isn’t there, as hard as they try to find it.

That said, for Democrats or anyone else to call Trump voters "conspiracy theorists" is to turn a blind eye to a four-year Democratic Party and news media project called Russiagate that, in the brutal end, found no evidence of a conspiracy between the 2016 Trump campaign and the Russians to influence the 2016 election. At this point my Democrat friends usually lean in and say something like, "The Mueller investigation found insufficient evidence to indict Trump and his associates on conspiracy charges — read the Mueller report!" At which time I lean in and say, "Read the Mueller report!" There was no evidence of a conspiracy, a term with a distinct legal definition: an agreement between two or more people to commit an illegal act, along with an intent to achieve the agreement's goal.

What the Mueller report did do was document: (1) the Trump campaign's clumsy quest to find Hillary Clinton's 30,000 deleted emails (George Papadopoulos and Roger Stone), (2) the incoming Trump administration's opening of a dialogue with a Russian diplomat (Sergey Kislyak) through a Trump administration representative (General Michael Flynn), and (3) the Trump organization's effort to build a Trump Tower in Moscow. All of those actions were legal — as they should be.

And, yes, I am skeptical that Lee Harvey Oswald acted alone — even as I believe he was the lone gunman. If that makes me a conspiracy theorist, so be it.

Now is NOT the time to shame people for believing that most of our political elites work more for the political donor class than the average American (whoever that is).

I do not believe the data supports the thesis that economic grievances are the primary factor behind Trump’s popularity within the Republican Party. Instead, the evidence says something deeper drives Trump support, more rooted in race, social status, and culture than economics.

Still, a stark realization binds many progressive Democrats and Trump supporters, and it has been continually buried over the past four-plus years of anti-Trump media coverage: this country has a political-economic system primarily designed to fulfill the interests of a relatively small number of Americans.

In Democracy in America?: What Has Gone Wrong and What We Can Do About It (University of Chicago Press, 2017), perhaps the most important political science book of the past thirty years, political scientists Benjamin Page and Martin Gilens offer compelling evidence that public policy in the U.S. is best explained by the interests of elites, not those of the average American. In fact, this disconnect is so bad in their view that it is fair to ask whether Americans even live in a democracy.

“Our analysis of some 2,000 federal government policy decisions indicates that when you take account of what affluent Americans, corporations and organized interest groups want, ordinary citizens have little or no independent influence at all,” Page and Gilens said in a Washington Post interview while promoting their book. “The wealthy, corporations and organized interest groups have substantial influence. But the estimated influence of the public is statistically indistinguishable from zero.”

“This has real consequences. Millions of Americans are denied government help with jobs, incomes, health care or retirement pensions. They do not get action against climate change or stricter regulation of the financial sector or a tax system that asks the wealthy to pay a fair share. On all these issues, wealthy Americans tend to want very different things than average Americans do. And the wealthy usually win.”

And while Page and Gilens's research rightfully has methodological detractors, the most direct statistical indicator of its validity — wealth inequality — has been growing steadily in the U.S. since 1990, with a few temporary pauses during the Clinton administration, the 2008 worldwide financial crisis, and the Trump administration (yes, you read that right).

Image for post
Source: St. Louis Federal Reserve

Only the disproportionate amount of the coronavirus pandemic relief money going to corporate bank accounts has put the wealthiest 1-percent back near their Obama administration highs.

So while Trump supporters don’t always marshal the best evidence-based critiques of the American political system, with a little more effort and the help of better leaders it wouldn’t be hard for them to do so.

Now is NOT the time to reduce three-fifths of our population down to words like ‘fascist’ and ‘racist.’

Are there racist Republicans? Of course there are — around 45 percent among white Republican voters, according to my analysis of the 2018 American National Election Study (Pilot). That same analysis, which used a measure of racial bias common in social science literature, found 20 percent of white Democrat voters have a more favorable view of their race relative to African-Americans and/or Hispanics. Any assumption that racism is unique or in a more toxic form among Trump supporters is challenged by the evidence.

Now IS the time for cooler heads to prevail, which eliminates almost anyone appearing on the major cable news networks in the past two weeks.

The national news media profits from the use of exaggeration and hyperbole. That can never be discounted when talking about events such as what happened January 6th.

Here is how Google searches on the term 'coup d'état' were affected by the Capitol riot:

Image for post
Source: Google Trends

I confess I was not horrified watching live on social media as Trump supporters forced their way into the Capitol. I was shocked, but not horrified. A small semantic difference, but an important one. At no point did I think I was watching an ongoing coup d’état.

But my family and friends who watched the mob on the major cable news networks thought an actual coup d'état was in motion — that this mob was viably attempting to stop the electoral college vote, overturn the 2020 election, and keep Trump in the presidency.

Where the news media had an obligation to discern fact from fantasy, they did the exact opposite on January 6th. They, in fact, helped fan the spread of disinformation coming out of news reports from inside the Capitol.

As disconcerting as the scene was on January 6th, there is a chasm-sized difference between Facebook chuckle heads causing a deadly riot and a credible attempt to take over the U.S. government.

This is how journalist Michael Tracey described the Capitol riot and the media’s predilection for hyperbole while reporting on it:

“Is it unusual for a mob to breach the Capitol Building — ransacking offices, taking goofy selfies, and disrupting the proceedings of Congress for a few hours? Yes, that’s unusual. But the idea that this was a real attempt at a “coup” — meaning an attempt to seize by force the reins of the most powerful state in world history — is so preposterous that you really have to be a special kind of deluded in order to believe it. Or if not deluded, you have to believe that using such terminology serves some other political purpose. Such as, perhaps, imposing even more stringent censorship on social media, where the “coup” is reported to have been organized. Or inflicting punishment on the man who is accused of “inciting” the coup, which you’ve spent four years desperately craving to do anyway.

Journalists and pundits, glorying in their natural state — which is to peddle as much free-flowing hysteria as possible — eagerly invoke all the same rhetoric that they’d abhor in other circumstances on civil libertarian grounds. “Domestic terrorism,” “insurrection,” and other such terms now being promoted by the corporate media will nicely advance the upcoming project of “making sure something like this never happens again.” Use your imagination as to what kind of remedial measures that will entail.

Trump’s promotion of election fraud fantasies has been a disaster not just for him, but for his “movement” — such as it exists — and it’s obvious that a large segment of the population actively wants to be deceived about such matters. But the notion that Trump has “incited” a violent insurrection is laughable. His speech Monday afternoon that preceded the march to the Capitol was another standard-fare Trump grievance fest, except without the humor that used to make them kind of entertaining.”

This is not a semantic debate. What happened on January 6th was not a credible coup attempt, despite verbal goading from many in the mob suggesting as much, and notwithstanding Senator Ted Cruz's poorly timed fundraising tweet that some construed (falsely) as his attempt to lead the nascent rebellion.

Still, do not confuse my words with an exoneration of Trump's role in the Capitol riot. To the contrary, time and contemplation have led me to conclude that Trump is wholly responsible for the deadly acts conducted (literally) under banners displaying his name, regardless of the fact that his speech that morning did not directly call for a violent insurrection. In truth, he explicitly said the opposite: "I know that everyone here will soon be marching over to the Capitol building to peacefully and patriotically make your voices heard."

Nonetheless, he had to know the potential was there and it was his job to lead at that moment. He didn’t.

Now IS the time to encourage more dialogue, not less — and that means fewer “Hitler” and “Communist” references (my subsequent references notwithstanding).

Along with Page and Gilens's book on our democracy's policy dysfunction, another influential book for me has been Yale historian Timothy Snyder's On Tyranny: Twenty Lessons from the Twentieth Century (Tim Duggan Books, 2017). In it, he uses historical examples to explain how governments use tragedies and crises to increase their control over society (and not usually for the common good).

For example, weeks after Adolf Hitler was made Chancellor of Germany, he used the Reichstag fire on February 27, 1933, to issue The Reichstag Fire Decree which suspended most civil liberties in Germany, including freedom of the press and the right of public assembly.

“A week later, the Nazi party, having claimed that the fire was the beginning of a major terror campaign by the Left, won a decisive victory in parliamentary elections,” says Snyder. “The Reichstag fire shows how quickly a modern republic can be transformed into an authoritarian regime. There is nothing new, to be sure, in the politics of exception.”

It would be a reductio ad absurdum to treat Hitler's shutting down of Communist newspapers as the forewarning of a future U.S. dictatorship brought on by Twitter banning Trump. Our democracy can survive Trump's Twitter ban. At the same time, our democracy isn't stronger for it.

Conservative voices are now systematically targeted for censorship, as described in journalist Glenn Greenwald’s (not a conservative) recent Twitter salvo:

Final Thoughts

Today, because of what happened on January 6th, the U.S. is not as free as it was even a month ago, and it is fruitless to blame one person, a group of people, the news media or a political party for this outcome. We have all contributed in a tiny way by isolating ourselves in self-selected information bubbles that keep us as far away as humanly possible from challenging and unpleasant thoughts. [For example, I spend 99 percent of my social media time watching Nerdrotic and Doomcock torch Disney, CBS and the BBC for destroying my favorite science fiction franchises: Star Wars, Star Trek and Doctor Who.]

A few days ago I chatted with a neighbor who continues to keep his badly dog-eared, F-150-sized Trump sign in his front yard. He talked weather, sports, and movies. Not a word on politics. I wanted to, but knew not to push it. If he had mentioned the current political situation, I would have offered this observation:

Political parties on the rise always overplay their hand. How else can you explain how the Democrats, facing an historically unpopular incumbent president during a deep, pandemic-caused recession, could still lose seats in U.S. House elections? Republicans are one midterm election away from regaining the House of Representatives, and the two years until the next congressional election are a political eternity.

The Republicans will learn from the 2021 Capitol riot.

As for the Democrats, I would just suggest this fortune cookie wisdom:

Image for post

Actually, that is wisdom for all of us.

  • K.R.K.

Send comments to:

The status quo is back — expect them to cry about the budget deficit

By Kent R. Kroeger (January 21, 2021)

Political scientist Harold Lasswell (1902–1978) said politics is about ‘who gets what, when and how.’

He wrote it in 1936, but his words are more relevant than ever.

In the U.S., his definition is actualized in Article I, Section 8 of the U.S. Constitution:

The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defense and general Welfare of the United States; but all Duties, Imposts and Excises shall be uniform throughout the United States;

To borrow Money on the credit of the United States;

To regulate Commerce with foreign Nations, and among the several states, and with the Indian Tribes;

To establish an uniform Rule of Naturalization, and uniform Laws on the subject of Bankruptcies throughout the United States;

To coin Money, regulate the Value thereof, and of foreign Coin, and fix the Standard of Weights and Measures.

In short, the U.S. Congress has the authority to create money — which they’ve done in ex cathedra abundance in the post-World War II era.

According to the U.S. Federal Reserve, the U.S. total public debt is 127 percent of gross domestic product (or roughly $27 trillion) — a level unseen in U.S. history (see Figure 1).

Figure 1: Total U.S. public debt as a percent of gross domestic product (GDP)

Image for post
Source: St. Louis Federal Reserve

And who owns most of the U.S. debt? Not China. Not Germany. Not Japan. Not the U.K. It is Americans who own roughly 70 percent of the U.S. federal debt.

It's like owing money to your family — and if you've ever had that weight hanging over your head, you might prefer owing the money to the Chinese.

When it comes to dishing out goodies, the U.S. Congress makes Santa Claus look like a hack.

But, unlike Saint Nick, Congress doesn't print and give money to just anyone who's been good — Congress plays favorites. About 70 percent goes to mandatory spending, composed of interest payments on the debt (10%), Social Security (23%), Medicare/Medicaid (23%), and other social programs (14%). As for the other 30 percent of government spending, called discretionary spending, 51 percent goes to the Department of Defense.

That leaves about three trillion dollars annually to allocate for the remaining discretionary expenditures. To that end, the Congress could just hand each of us (including children) $9,000, but that is crazy talk. Instead, we have federal spending targeted towards education, training, transportation, veteran benefits, health, income security, and the basic maintenance of government.
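The back-of-envelope arithmetic behind that $9,000 figure is easy to check. The sketch below assumes a U.S. population of roughly 333 million (a number not given in the text):

```python
# Rough check of the "$9,000 per person" figure: about $3 trillion in
# discretionary spending divided across the U.S. population.
discretionary_spending = 3.0e12  # dollars (figure from the text)
population = 333e6               # assumed U.S. population, incl. children

per_capita = discretionary_spending / population
print(f"${per_capita:,.0f} per person")  # roughly $9,009
```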

There was a time when three trillion dollars was a lot of money — and maybe it still is — but it is amazing how quickly that amount of money can be spent with the drop of a House gavel and a presidential signature.

The Coronavirus Aid, Relief, and Economic Security Act, passed by the U.S. Congress and signed into law by President Donald Trump on March 27, 2020, carried a price tag of $2.2 trillion, with about $560 billion going to individual Americans and the remainder to businesses and state or local governments.

That is a lot of money…all of it debt-financed. And the largest share of it went directly to the bank accounts of corporate America.

And what do traditional economists tell us about the potential impact of this new (and old) federal debt? Their collective warning goes something like this:

U.S. deficits are partially financed through the sale of government securities (such as T-bonds) to individuals, businesses and other governments. The practical impact is that this money is drawn from financial reserves that could have been used for business investment, thereby reducing the potential capital stock in the economy.

Furthermore, due to their reputation as safe investments, the sale of government securities can impact interest rates when they force other types of financial assets to pay interest rates high enough to attract investors away from government securities.

Finally, the Federal Reserve can inject money into the economy either by directly printing money or through central bank purchases of government bonds, such as the quantitative easing (QE) policies implemented in response to the 2008 worldwide financial crisis. The economic danger in these cases, according to economists, is inflation (i.e., too much money chasing too few goods).

How does reality match with economic theory?

I am not an economist and don’t pretend to have mastered all of the quantitative literature surrounding the relationship between federal debt, inflation and interest rates, but here is what the raw data tells me: If there is a relationship, it is far from obvious (see Figure 2).

Figure 2: The Relationship between Federal Debt, Inflation and Interest Rates

Image for post
Source: St. Louis Federal Reserve

Despite a growing federal debt, which has gone from just 35 percent of GDP in the mid-1970s to over 100 percent of GDP following the 2008 worldwide financial crisis (blue line), interest rates and annual inflation rates have fallen over that same period. Unless there is a 30-year lag, there is no clear long-term relationship between federal deficits and interest rates or inflation. If anything, the post-World War II relationship has been negative.

Given mainstream economic theory, how is that possible?

The possible explanations are varied and complex, but among the reasons for continued low inflation and interest rates, despite large and ongoing federal deficits, are an abundant labor supply, premature monetary tightening by the Federal Reserve (keeping the U.S. below full employment), globalization, and technological (productivity) advances.

Nonetheless, the longer interest rates and inflation stay subdued amid a fast-growing federal debt, the more likely it becomes that heterodox macroeconomic theories — such as Modern Monetary Theory (MMT) — will grow in popularity among economists. At some point, consensus economic theory must catch up to the facts on the ground.

What is MMT?

Investopedia’s Deborah D’Souza offers a concise explanation:

Modern Monetary Theory says monetarily sovereign countries like the U.S., U.K., Japan, and Canada, which spend, tax, and borrow in a fiat currency they fully control, are not operationally constrained by revenues when it comes to federal government spending.

Put simply, such governments do not rely on taxes or borrowing for spending since they can print as much as they need and are the monopoly issuers of the currency. Since their budgets aren’t like a regular household’s, their policies should not be shaped by fears of rising national debt.

MMT challenges conventional beliefs about the way the government interacts with the economy, the nature of money, the use of taxes, and the significance of budget deficits. These beliefs, MMT advocates say, are a hangover from the gold standard era and are no longer accurate, useful, or necessary.

More importantly, these old Keynesian arguments — empirically tenuous, in my opinion — needlessly restrict the range of policy ideas considered to address national problems such as universal access to health care, growing student debt and climate change. [Thank God we didn’t get overly worried about the federal debt when we were fighting the Axis in World War II!]

Progressive New York congresswoman Alexandria Ocasio-Cortez has consistently shown an understanding of MMT's key tenets. When asked by CNN's Chris Cuomo how she would pay for the social programs she wants to pass, her answer was simple (and I paraphrase): the federal government can pay for Medicare-for-All, student debt forgiveness, and the Green New Deal the same way it pays for a nearly trillion-dollar annual defense budget: just print the money.

In fact, that is essentially what this country has done since President Lyndon Johnson decided to prosecute a war in Southeast Asia at the same time he launched the largest set of new social programs since the New Deal.

Such assertions, however, generate scorn from status quo-anchored political and media elites, who are now telling the incoming Biden administration that the money isn’t there to offer Americans the $2,000 coronavirus relief checks promised by Joe Biden as recently as January 14th. [I’ll bet the farm I don’t own that these $2,000 relief checks will never happen.]

Cue the journalistic beacon of the economic status quo — The Wall Street Journal — which plastered this headline above the front page fold in its January 19th edition: Janet Yellen’s Debt Burden: $21.6 Trillion and Growing

WSJ writers Kate Davidson and Jon Hilsenrath correctly point out that the incoming U.S. Treasury secretary, Yellen, was the Chairwoman of the Clinton administration’s White House Council of Economic Advisers and among its most prominent budget deficit hawks, and offer this warning: “The Biden administration will now contend with progressives who want even more spending, and conservatives who say the government is tempting fate by adding to its swollen balance sheet.”

This misrepresentation of the federal debt's true nature is precisely what MMT advocates are trying to fight. They note that when Congress spends money, the U.S. Treasury debits its operating account (through the Federal Reserve) and deposits this Congress-sanctioned new money into private bank accounts in the commercial banking sector. In other words, the federal debt boosts private savings — which, according to MMT advocates, is a good thing when the "debt" takes up slack (i.e., unused economic resources) in the economy.
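The accounting claim in that paragraph reduces to a two-line ledger. This is a toy sketch of the MMT bookkeeping argument, not a model of actual Fed or Treasury operations, and all names and figures are illustrative:

```python
# Toy double-entry sketch of the MMT claim: a deficit-financed payment
# simultaneously creates a government liability and an equal-sized
# private-sector deposit (i.e., private saving).
def deficit_spend(gov_liabilities: float, private_deposits: float,
                  amount: float) -> tuple[float, float]:
    """Record a deficit-financed payment to the private sector."""
    return gov_liabilities + amount, private_deposits + amount

gov, private = deficit_spend(0.0, 0.0, 2.2e12)  # e.g., a CARES Act-sized outlay
assert gov == private == 2.2e12  # the "debt" equals new private saving
```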

Regardless of MMT’s validity, this heterodox theory reminds us of how poorly mainstream economic thinking describes the relationship between federal spending and the economy. From what I’ve seen after 40 years of watching politicians warn about the impending ‘economic meltdown’ caused by our growing national debt, consensus economic theory seems more a tool for politicians to scold each other (and their constituents) about the importance of the government paying its bills than it is a genuine way to understand how the U.S. economy works.

Yet, I think everyone can agree on this: Money doesn’t grow on trees, it grows on Capitol Hill. And as the U.S. total public debt has grown, so have the U.S. economy and wealth inequality — which are intricately interconnected through, as Lasswell described 85 years ago, a Congress (and president) who decide ‘who gets what, when and how.’

  • K.R.K.

Send comments and your economic theories to:

Beadle (the Data Crunching Robot) Predicts the NFL Playoffs

By Kent R. Kroeger (Source:; January 15, 2021)

Beadle (the Data Crunching Robot); Photo by Hello Robotics (Used under the Creative Commons Attribution-Share Alike 4.0 International license)

Since we are a mere 24 hours away from the start of the NFL Divisional Round playoffs, I will dispense with any long-winded explanation of how my data loving robot (Beadle) came up with her predictions for those games.

Suffice it to say, despite her Bayesian roots, Beadle is a rather lazy statistician who typically eschews the rigors and challenges of building statistical models from scratch for the convenience of cribbing off the work of others.

Why do all that work when you can have others do it for you?

There is no better arena in which to reward Beadle's sluggardness than predicting NFL football games, as there are literally hundreds of statisticians, data modelers and highly motivated gamblers who publicly share their methodologies and resultant game predictions for all to see.

Why reinvent the wheel?

With this frame-of-mind, Beadle has all season long been scanning the Web for these game predictions and quietly noting those data analysts with the best prediction track records. Oh, heck, who am I kidding? Beadle stopped doing that about four weeks into the season.

What was the point? It was obvious from the beginning that all, not most, but ALL of these prediction models use mostly the same variables and statistical modeling techniques and, voilà, come up with mostly the same predictions.

FiveThirtyEight’s prediction model predicted back in September that the Kansas City Chiefs would win this year’s Super Bowl over the New Orleans Saints. And so did about 538 other prediction models.

Why? Because they are all using the same data inputs and whatever variation in methods they employ to crunch that data (e.g., Bayesians versus Frequentists) is not different enough to substantively change model predictions.

But what if the Chiefs are that good? Shouldn’t the models reflect that reality?

And it can never be forgotten that these NFL prediction models face a highly dynamic environment where quarterbacks and other key players can get injured over the course of a season, fundamentally changing a team's prospects — a fact FiveThirtyEight's model accounts for with respect to QBs — and the reason preseason model predictions (and Vegas betting lines) need to be updated from week to week.

Beadle and I are not negative towards statistical prediction models. To the contrary, given the infinitely complex contexts in which they are asked to make judgments, we couldn’t be more in awe of the fact that many of them are very predictive.

Before I share Beadle’s predictions for the NFL Divisional Round, I should extend thanks to these eight analytic websites that shared their data and methodologies:, ESPN’s Football Power Index,,,,,, and

It is from these prediction models that Beadle aggregated their NFL team scores to generate her own game predictions.

Beadle’s Predictions for the NFL Divisional Playoffs

Without any further ado, here is how Beadle ranks the remaining NFL playoff teams on her Average Power Index (API), which is merely each team's standardized score (z-score) after averaging the index scores across the eight prediction models:

Analysis by Kent R. Kroeger (
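A minimal sketch of how such an Average Power Index could be computed, using made-up index scores for four of the remaining teams (the actual model names and scores are not reproduced here):

```python
import statistics

# Hypothetical index scores from eight prediction models (these numbers
# are invented for illustration; rows are teams, columns are models).
scores = {
    "Chiefs": [92, 90, 95, 93, 91, 94, 92, 90],
    "Saints": [88, 87, 90, 89, 86, 88, 87, 89],
    "Bills":  [85, 84, 86, 83, 85, 84, 86, 85],
    "Rams":   [78, 80, 79, 77, 78, 80, 79, 78],
}

# Step 1: average each team's index scores across the eight models.
averages = {team: statistics.mean(s) for team, s in scores.items()}

# Step 2: standardize the averages into z-scores -- the "API".
mu = statistics.mean(averages.values())
sigma = statistics.pstdev(averages.values())
api = {team: (avg - mu) / sigma for team, avg in averages.items()}

for team, z in sorted(api.items(), key=lambda kv: -kv[1]):
    print(f"{team:8s} API = {z:+.2f}")  # Chiefs land on top for this toy data
```

Because the API is a z-score, the values are centered on zero: a positive API means a team is above the average of the remaining playoff field.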

And from those API values, Beadle makes the following game predictions (including point spreads and scores) through the Super Bowl:

No surprise: Beadle predicts the Kansas City Chiefs will win the Super Bowl in a close game with the New Orleans Saints.

But you didn’t need Beadle to tell you that. made that similar prediction five months ago.

  • K.R.K.

Send comments to:

My popular vote forecast for the 2020 presidential election

By Kent R. Kroeger (, January 8, 2020)

Occasionally, I pocket my ego and read comments left by readers of my various essays and articles.

Two essays in particular, Trump’s market share problem and Trump should deny his enemies their ultimate victory, attracted an unusually high percentage of negative responses, as both essays seemed to annoy strong partisans on both ends of the political spectrum.

In both essays I argued Trump’s probability of reelection is significantly reduced for three reasons: (1) A shrinking Republican base, (2) a shrinking pool of swing voters who could potentially alter the outcome on Election Day, and (3) Trump’s stubbornly low (almost invariant) job approval rating.

Many readers vehemently disagreed with my conclusion.

Donald Trump supporters disagreed with my pessimism about his chances of being reelected, while Trump-haters suggested I am a Republican troll trying to promote complacency among Democrats and thereby ensure his reelection.

However, as those who know me can attest, I possess no strong affinity towards any party or ideology.

For me, Trump is a market researcher’s dream challenge. How does a president so deeply disliked by nearly half of the U.S. adult population get reelected?

It is not going to be easy.

In the polling era, only one president, Barack Obama, has been reelected with a job approval rating as low as Trump's less than one year out from Election Day. At this juncture in Obama's first term, his Gallup job approval stood at 42 percent (see Figure 1 below), compared to 45 percent for Trump in Gallup's December 2–15, 2019 survey.

Figure 1: Comparing 1st-Term Job Approval for Trump, Obama & G. W. Bush

Source: The Gallup Organization

However, there is a major difference in Trump's situation. Obama was above or near 50 percent job approval for the first year of his presidency. Trump has never sniffed 50 percent, his highest approval being 46 percent, according to Gallup. That is strong, though not conclusive, evidence that Trump may have a maximum approval threshold around 46 percent.

Is 46 percent approval enough to win reelection?

According to one of my anonymous readers, 46 percent approval would make Trump very competitive in the next election. For evidence, he/she directed me to a popular vote econometric model created by political scientists Michael Lewis-Beck and Charles Tien (see Figure 2). Their two-variable model predicted Hillary Clinton would win 51.1 percent of the two-party popular vote in 2016 — which was exactly her two-party vote share.

Figure 2: The Lewis-Beck-Tien Presidential Vote Share Model

Source: Lewis-Beck, M. S. & Tien, C. (2016). The political economy model: 2016 US election forecasts. PS: Political Science & Politics, 49(4), 661–663.

There is no guarantee the Lewis-Beck-Tien (LBT) model's bull's-eye accuracy in 2016 will be repeated in 2020. In every election, one model will always have the most accurate prediction, whether by chance or by model validity.

Nonetheless, the LBT model has correctly predicted the winner of every presidential election since 1948, with the exceptions of 1960, 1968, and 1976.

The LBT model also has the virtue of simplicity — just two variables: Presidential approval in July of the election year and GNP growth in the first two quarters of the election year.

Assuming current conditions remain constant through Election Day, what does the LBT model predict for 2020? It predicts Trump will win 49.4 percent of the two-party presidential vote. Even assuming no economic growth in the first two quarters of 2020, the LBT model predicts Trump will receive 49.2 percent of the two-party vote if he maintains 45 percent job approval through Election Day.
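The arithmetic behind those two predictions can be sketched as a simple linear equation. The coefficients below are illustrative placeholders chosen only so the sketch reproduces the two figures quoted above (49.4 and 49.2 percent); they are not the published Lewis-Beck-Tien estimates:

```python
def lbt_forecast(july_approval, h1_gnp_growth,
                 intercept=40.2, b_approval=0.2, b_growth=0.1):
    """Incumbent two-party vote share (percent) as a linear function of
    July presidential approval and first-half GNP growth.
    Coefficients are illustrative, not the published LBT estimates."""
    return intercept + b_approval * july_approval + b_growth * h1_gnp_growth

print(lbt_forecast(45, 2.0))  # modest growth scenario
print(lbt_forecast(45, 0.0))  # zero-growth scenario
```

With 45 percent approval, the sketch returns roughly 49.4 percent under modest assumed growth and 49.2 percent under zero growth, matching the scenario numbers in the text.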

Accepting the predictive validity of the LBT model (though its failure to predict the 1976 winner, Jimmy Carter, gives me significant pause), Trump will be competitive in 2020, all else equal.


Given this evidence, therefore, do I retract my previous conclusion that Trump has little chance of winning reelection in 2020?

Yes and no.

Even before considering the possibility of another Electoral College victory and popular vote loss by Trump, there are other econometric models using presidential approval and the economy — besides the LBT model — to predict presidential popular vote outcomes (descriptions of the most widely cited models are found here). And, currently, these models predict Trump will receive between 45 and 50 percent of the two-party popular vote.

So, yes, I was wrong to assert Trump has a meager chance of winning reelection. In reality, he has more than a puncher's chance.

Yet, I stand by my other conclusion that Trump has little room for popularity growth. The Donald Trump we see today — in terms of job approval — is likely the same one we will see in November 2020.

As I stated in my February 2019 essay, “Donald Trump, the candidate, has a market share problem. His support has very little upside and — as of now — has a maximum potential market (vote) share of just 45 percent among eligible voters.”

Given a shrinking political center amidst a highly partisan electorate, combined with a Republican base in demographic decline, Trump has a market share growth problem that is only getting worse. There simply are not enough likely and potential Trump voters to ensure his reelection. That is why many pundits and forecasters, not just me, have questioned Trump's base-focused electoral strategy.

Among them is political blogger Kevin Drum, who wonders if the Republican Party is prepared for the day of reckoning should it continue Trump's base-focused strategy in 2020.

In 2012 I thought that the Republican Party had gone as far as it could to get votes from the white working class. There was just no more blood to be squeezed from that particular turnip. But I was wrong — barely. In 2016 they did what no one could have predicted by nominating a guy who was an all but open racist. And it worked, buying them just a few more votes than it lost them. Once again, they gained a few years.

But it can’t last forever. The Republican Party has already gone much further down the road of lashing itself to the cause of white racial resentment than I would have guessed possible. How much longer can it last? Like most bubbles, longer than you think. But someday the bubble will burst. The big question is, what will happen next?

Trump's limited market growth potential is evident in the December 2019 Economist/YouGov poll, which found 47 percent of U.S. adults possess a 'very unfavorable' opinion of Trump (see Figure 3). No other major politician comes close to the intensity of Trump's unfavorability. The next closest is House Speaker Nancy Pelosi, with 38 percent of U.S. adults having a 'very unfavorable' view of her.

Conversely, 30 percent of U.S. adults view Trump ‘very favorably’ — more than any other major U.S. politician. The next closest political figure in this regard is Vice President Mike Pence at 22 percent.

It is this large, intense pro-Trump base that fills his pep rallies and gives the media pundits the impression that the incumbent president will be hard to beat in 2020.

“Look at crowd sizes and enthusiasm as a better metric than opinion polls which oversample Democrats,” writes Denver physician and conservative writer Brian Joondeph, MD.

Unfortunately for Dr. Joondeph, the evidence does not support using crowd size as a proxy for electoral viability. And the YouGov survey plainly shows why: Trump's loyal base (30% of the population) is swamped by the number of Americans who can't stand him (47%).

What the YouGov survey also tells us is that most Americans have strong feelings about Trump, whether they be positive or negative. Only 7 percent of U.S. adults lack an opinion on Trump.

Figure 3: Favorability Ratings of Selected Political Figures (December 2019)

Source: The Economist/YouGov Poll; Conducted Dec. 28–31, 2019 with 1,500 US adults; Margin of Error = ±2.7%

At first blush, the 47%-Very-Unfavorable figure does not bode well for Trump’s reelection chances. My analysis of the 2016 American National Election Study found that three percent of eligible voters with a ‘very unfavorable’ view of Trump in October 2016 voted for him anyway (see Figure 4). That some Trump-dislikers did this was apparently driven by their even greater distaste for Hillary Clinton, particularly on their perception of each candidate’s honesty.

Figure 4: 2016 Vote Choice by Trump Favorability (October 2016)

Data source: American National Election Study (2016); analysis by Kent R. Kroeger; data are weighted

Even if we subtract the 1 to 2 percent of Trump-dislikers who will still vote for him (3% of 47%), this puts him in a deep hole as he tries to pull off another Election Day miracle: Trump starts with 45 percent of eligible voters most likely unwilling to consider his candidacy. In other words, Trump has little margin for error to work with heading into the 2020 campaign.

But virtually the same argument could be made from the perspective of the eventual Democratic Party nominee. In a highly-polarized electorate where almost 80 percent of adults have already made up their minds about Trump — and half of all eligible voters find our political system so irrelevant to their lives they don’t even bother to vote — a small percentage of swing voters can tip the balance.

Figure 5 (below) is a back-of-the-envelope exercise that shows how close the presidential popular vote could easily be in 2020 — despite the substantial favorability disadvantage Trump takes into the contest.

Using Trump’s favorability distribution from the December 2019 Economist/YouGov Survey, along with voting ratios estimated from the 2016 American National Election Study (ANES), I generated a two-party popular vote forecast for 2020.

And the winner will be: the eventual Democratic nominee, 51% to 49%. This forecast is almost identical to the LBT econometric model forecast detailed earlier in this essay.
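The back-of-the-envelope logic can be sketched like this. All segment shares, turnout rates, and vote propensities below are hypothetical stand-ins: only the 30/47/7 favorability splits and the 3-percent crossover figure appear in the text, so the output illustrates the method rather than reproduces the actual Figure 5 numbers:

```python
# Each segment: (share of adults, assumed turnout rate,
#                Trump share of that segment's two-party vote).
# Shares for 'very favorable' (0.30), 'very unfavorable' (0.47) and
# 'no opinion' (0.07) come from the Economist/YouGov poll cited above;
# the crossover rate of 0.03 comes from the ANES analysis; the rest
# are illustrative assumptions.
segments = {
    "very favorable":       (0.30, 0.85, 0.95),
    "somewhat favorable":   (0.10, 0.70, 0.85),
    "no opinion":           (0.07, 0.40, 0.50),
    "somewhat unfavorable": (0.06, 0.60, 0.25),
    "very unfavorable":     (0.47, 0.65, 0.03),
}

# Weight each segment's vote by its size and turnout, then take
# Trump's share of the resulting two-party total.
trump = sum(share * turnout * p for share, turnout, p in segments.values())
dem = sum(share * turnout * (1 - p) for share, turnout, p in segments.values())
trump_two_party = trump / (trump + dem)
print(f"Trump two-party share: {trump_two_party:.1%}")
```

Even with a large turnout advantage assumed for Trump's base, the segment weights leave him just under 50 percent of the two-party vote, which is the same direction as the 51-49 result above.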

Figure 5: 2020 Presidential Vote Forecast

This forecast should not be surprising. American presidential elections tend to be relatively close. And despite my earlier musings on the dire situation facing Trump and the Republicans, I am now convinced Trump has a legitimate chance of getting reelected.

  • K.R.K.

As always, please send suggestions and complaints to:

The Democratic Party’s impeachment strategy isn’t as futile as it appears

By Kent R. Kroeger (, January 6, 2020)


Without consequences, there are no lessons learned: How the news media rationalizes botched reporting

By Kent R. Kroeger (, December 30, 2019)

Once the news media publishes allegations that you are the ‘Centennial Olympic Park bomber’ or ‘a Russian agent/stooge/tool,’ regardless of your innocence, the scar is permanent.

But that is exactly what happened to security guard Richard Jewell after the 1996 bombing at the Atlanta Olympics and to a host of individuals in the aftermath of the 2016 election. These two independent events illustrate how the U.S. news media can fail at covering important events and how they rationalize their botched reporting by assigning blame to their sources, not reporting standards.

Worse yet, there is no consistent mechanism holding journalists accountable for their botched reporting. To the contrary, if their reporting feeds a compelling and profitable narrative (i.e., attracts audiences), it is handsomely rewarded.

Rachel Maddow today makes around $7 million a year for MSNBC and, in covering the Russia-Trump collusion (Russiagate) story, was directly responsible for promoting many of the most baseless rumors about Donald Trump and his campaign.

In a scathing indictment of Maddow’s journalistic integrity, Washington Post media critic Erik Wemple recently laid out this example of her deceptive technique:

When small bits of news arose in favor of the dossier, the franchise MSNBC host pumped air into them. At least some of her many fans surely came away from her broadcasts thinking the dossier was a serious piece of investigative research, not the flimflam, quick-twitch game of telephone outlined in the Horowitz report. She seemed to be rooting for the document.

“Seemed to be rooting for the document”? More like cheerleading for its authenticity, I would say.

When Michael Isikoff of Yahoo News (one of the first journalists to report on the infamous Steele Dossier and, to my knowledge, the only major reporter to retrospectively apologize for his mistakes in covering Russiagate) tried to confront Maddow about her reporting errors during the three-year media frenzy, she promptly brushed him aside, in effect telling him she had always said the Steele Dossier was unverified and would not be held accountable for failing to prove or disprove its accuracy.

When the Robert Mueller investigation thoroughly destroyed the entire Russiagate narrative along with the Steele Dossier, Maddow could barely find the energy to devote even one full show to Mueller’s final report.


The problem, personified by Maddow but far from exclusive to her, is systemic and rooted in what I call the spot news standard endemic to corporate-controlled news organizations.

Just as Maddow refuses to be held accountable for not substantiating the Steele Dossier, the national news media embraced the same cop-out, essentially saying, “What? You expected us to determine the veracity of the Dossier? We just report what our sources tell us.”

Hence, the spot news standard: the type of reporting, typically done in the first hours or days of a newsworthy event, that focuses on information provided by official sources (e.g., police, government officials), victims or eyewitnesses. In contrast, in-depth investigative reporting aspires not only to report the 'facts' (often as told by sources) but also to independently confirm the quality of that information.

When the Atlanta Journal-Constitution (AJC) first reported that Jewell was the FBI’s prime suspect in the Olympic Park bombing, they were merely repeating what an anonymous FBI source had told one of their journalists. By the spot news standard, the AJC did its job. And a federal court would eventually exonerate AJC for that very reason from a civil lawsuit over its Olympic Park bombing coverage.

But the AJC got the Jewell story wrong, as did most of the national news media in their Russiagate coverage. In both events, the news organizations fed their audiences little truths (e.g., quotes from official sources), but botched the bigger, more important, truths.

And, yet, some people still wring their hands wondering why Americans are increasingly distrustful of the news media.

The answer is obvious. Much of what the news media reports today just isn’t true — and this problem cuts across all news media outlets and ideological points of view.

How did the Fourth Estate regress to this lousy state?

Tennessean columnist Saritha Prabhu lays the blame at the feet of partisanship:

In the Trump era, American national media (CNN, MSNBC, ABC, CBS, Fox News, The Washington Post, The New York Times) have self-divided into openly pro- or anti-Trump factions.

It isn’t covert anymore, and many media outlets wear it as a badge of honor.

But when a media outlet is strongly for or against a leader or party, the first casualties are fairness, honesty and accuracy.

Prabhu further notes that partisan-biased news reporting is profitable, in essence, creating a reinforcing feedback loop.

But the problem with today’s journalism may be more systemic than just the economic incentives behind partisan news coverage.

The privately-controlled media serve a different master than what we are taught in high school civics. They serve the sensational to the extent it is marketable and profitable; they serve social conflict for the same reason; and they serve the interests of political and economic elites because it is from this orbit most journalists and media pundits originate.

And the news media make no apologies for any of this — and why should they? They are rarely held accountable for even their biggest mistakes. If they had to answer for their mischievous disregard for the bigger truths, MSNBC’s Lawrence O’Donnell would have lost his prime time slot after his entirely false report that a Russian oligarch co-signed one of Donald Trump’s business loans with Deutsche Bank.

To his credit, O’Donnell apologized on-air for his amateur-hour mistake. But was his job ever on the line over this incident? I seriously doubt it.


The deep flaws of our national news organizations were exposed during Russiagate, which is one reason the lessons from Clint Eastwood’s newest movie, Richard Jewell, go beyond just reminding us that the AJC (and other news outlets) did a grave disservice to Jewell during their Olympic Park bombing coverage.

The immediate reaction to Eastwood’s movie is particularly revealing. Before Eastwood’s movie had even been released, lawyers were exchanging letters.

On December 9th, the Monday before the movie's release, the Los Angeles-based law firm Lavely & Singer sent a letter to director Clint Eastwood, screenwriter Billy Ray, Warner Bros. and other parties, on behalf of The Atlanta Journal-Constitution (AJC) and Cox Enterprises, its parent corporation. The letter demands that Warner Bros. publicly acknowledge "that some events were imagined for dramatic purposes and artistic license and dramatization were used in the film's portrayal of events and characters."

The central issue is the movie's portrayal of AJC reporter Kathy Scruggs, who broke the story, sourced from an anonymous FBI informant, that Richard Jewell was the FBI's prime suspect in the July 1996 deadly bombing at Atlanta's Centennial Olympic Park.

We know today that Jewell was completely innocent.

It would take the FBI three months before they would publicly acknowledge Jewell was no longer a suspect. Eventually, confessed serial bomber Eric Robert Rudolph would be convicted for the Atlanta bombing and sentenced to life in prison.

In response to the release of Eastwood’s movie, the AJC editorial board published this defense of their original Jewell story:

“Far from acting recklessly, the AJC actually held that story for a day to develop additional independent corroboration of key facts prior to publication,” wrote the AJC editorial board. “Law enforcement sources confirmed to the AJC their focus on Mr. Jewell, and FBI activity had been visible at the Jewells’ apartment. The accuracy of the story had also been confirmed with an FBI spokesperson to whom the entire story was read before publication…

…Within days of the July 1996 bombing, investigators came to focus on Jewell. The AJC was first to report, accurately, that the FBI considered him a suspect. Authorities questioned Jewell, searched his and his mother’s belongings and kept him under round-the-clock surveillance before publicly clearing him about three months later…

…The AJC was among numerous entities sued after Jewell was cleared, and the only one that didn’t settle. The litigation naming the AJC was dismissed in 2011, with the Court of Appeals concluding that the coverage was substantially true at the time of publication.”

The AJC defense that their coverage was “substantially true at the time of publication” disregards the damage their reporting did to an innocent man (i.e., based on the spot news accuracy standard, the AJC did their job).

Yet Jewell saved hundreds of lives by his quick response to an imminent threat, and his initial thanks was a three-month horsewhipping by the national media, which portrayed him as a law-enforcement wannabe who lived with his mother. The AJC, as much as any other news outlet, built and cultivated that narrative through the use of anonymous FBI sources.

[In AJC’s defense, they were the first news outlet to report timeline evidence that Jewell did not have time to both plant the bomb and reach a pay phone used by the bomber to make a warning call.]

AJC’s apparent insensitivity to Jewell’s interests must be understood in the context of journalism’s fundamental reliance on the spot news standard: If the facts are mostly correct at the time of publication, journalists have nothing for which to apologize.

This low test for journalists is predicated on our shared First Amendment speech rights which protect American journalists from legal prosecution for their reporting, even if it is later proven to be substantively flawed. Absent malice or the ire of the American security state, U.S. journalists do not face legal jeopardy for inaccurate reporting. And that is as it should be. It never should be easy to jail journalists and news publishers.


The low-bar threshold for our Constitution-backed press freedoms is sufficient to protect MSNBC, CNN, the Washington Post, the New York Times, and other news outlets from any legal consequences for their three-year crusade of careless reporting on the Russiagate myth — a story propelled almost entirely in the mainstream media by anonymous, often government, sources.

We can draw parallels from the Jewell case to the more recent Russiagate coverage. As in the Jewell story, Russiagate coverage was constructed around faulty, uncorroborated intelligence passed on by anonymous government sources to the press. Whether the anonymous sources knew they were passing along flawed or irrelevant information may never be known, as we are unlikely to ever know the identities of those sources (again, as it should be).

But we know from both the Robert Mueller report and the Department of Justice Inspector General report that there is no compelling evidence to suggest the Trump presidential campaign conspired with the Russians to defeat Hillary Clinton. This narrative was a complete fiction from the very beginning, first promulgated by Clinton's bitter minions, spread by her equally bitter collaborators in the national news media and *verified* by a vast, nameless infantry of government bureaucrats and intelligence officers.

Russiagate mirrored a standard intelligence community disinformation campaign, its true origins still largely unknown as the national news media has purposefully avoided seeking such answers.

“A baseless conspiracy theory” is how Rachel Maddow describes the question of whether Trump campaign operative George Papadopoulos' chance meeting with Malta professor Joseph Mifsud and subsequent bean-spilling conversation with Australian diplomat Alexander Downer were, in fact, a U.S. intelligence community attempt to frame and perhaps turn the former low-level Trump flunky into a government informant.

Or, as still reported in the mainstream news media, was Mifsud a Russian agent tasked with compromising the Trump campaign through Papadopoulos?

Both theories are unproven (though most of the hard evidence resides with the former) and likely will be addressed in Connecticut U.S. Attorney John Durham’s special investigation into the origins of Russiagate.

Either way, the answers are not coming from the anti-Trump press corps because these answers could undermine the media’s ongoing narrative that the FBI and U.S. intelligence community were objective, upright actors in the Russiagate drama.

Which only further reinforces the bigger question: Will the national news media ever be forced to answer for their botched Russiagate reporting?

Just as Kathy Scruggs did in the Jewell story, the national news media did the minimum in covering Russiagate (apart from a few adversarial journalists, like The Intercept's Glenn Greenwald and The Nation's Aaron Maté, who have based their careers on challenging official sources, regardless of partisan affiliation).

The problem is the minimum too often means getting the bigger story wrong (and never needing to say you are sorry).

But if news organizations aren’t going to hold themselves accountable for botching the big stories, who will? It most certainly can’t be the government or an “independent” ombudsman. Putting so much power in so few hands feels like an invitation to unintended consequences.

There is only one irrefutable way to hold news organizations and news entertainers like Rachel Maddow accountable: educated, discerning news consumers empowered to think critically and objectively (to the best extent possible) when consuming news. More importantly, consumers’ judgments will result in palpable carrot-and-stick economic consequences for news organizations.

We, in fact, may be seeing evidence of this dynamic already with the most recent year-to-year decline in TV audiences for both CNN and MSNBC (the most prominent cable news curators of the Russiagate myth).

For the sake of our Fourth Estate, we can hope accountability is making a comeback.

  • K.R.K.

Postscript: Clint Eastwood's movie about Richard Jewell and the 1996 Olympic Park bombing is trying to serve the truth. Unfortunately, he should have first gotten his own truthiness house in order.

Why Eastwood and screenwriter Billy Ray decided to imply Atlanta Journal-Constitution reporter Kathy Scruggs slept with an FBI source in order to extract crucial information is incomprehensible. The specific scene in dispute — featuring Scruggs (played by Olivia Wilde) and Tom Shaw (an FBI agent played by Jon Hamm) — could have easily been replaced with something less sexually overt and the movie would have lost nothing.

It just confirms that Hollywood, like the corporate news organizations, is not a dependable guardian of the truth.

If all you see are hijabs, you are missing the fast rise of Jordan’s women

By Kent R. Kroeger (, December 20, 2019)


This essay documents my challenges and observations during my family’s recent travels through Oman and Jordan.

This is the fourth essay in a series. The previous essays can be found here, here and here.


“Zach, you know what a hijab is, right?” I asked my teenage son, Zach, a few days prior to a recent family trip to Oman and Jordan.

“Yes, Dad. I do,” he responded in a reflexive, annoyed-teenager tone. “We have girls in school who wear them. I see them all the time.”

“Yes, but, especially in Oman, some will cover their entire face,” I said. “I don’t want it to…”

“Dad, I know. I’ve seen that before too,” Zach interrupted me mid-sentence.

“It will be more common.”

“Dad. Stop it. I don’t care.”

For some reason, I always find Zach’s thorough indifference about everything refreshing. I’m even a little jealous. He doesn’t waste a lot of energy worrying about the future or other people’s lives. Greta Thunberg he is not.

And after thinking about my non-conversation with Zach, I realized the problem was more mine than his.

Prior to this most recent trip, I had been to Middle Eastern countries where the burka (a common hijab in Afghanistan) or the niqaab (a usually black hijab that covers the entire face except for the eyes) were common. I found those particular versions of the hijab unsettling and I just wanted to prepare my son. Is that so wrong?

My wife, Christa, and I try to take advantage of any opportunity we can to engage our son in conversations about politics and current issues — in this case, the status of women in the Islamic world (and the world, in general). It is obviously not a subject self-absorbed, 13-year-old boys want to talk about, but it is precisely at that age where, as parents, my wife and I feel we can have the most lasting impact. [We admit we may be deceiving ourselves on that one; but, as progressive Democrats and regular-attending Unitarians, self-delusion is one of our most highly-developed skills.]

We just didn't want our son to judge the women we'd be meeting on our trip by their attire. We believe what a person wears (and other daily practices) can only be understood through a deep knowledge of that person's culture. In theory, we try not to use American cultural norms and values as criteria for judging other people and cultures. In other words, we are the people Rush Limbaugh started warning everyone about thirty years ago: we are practicing cultural relativists.

Poorly executed, cultural relativism often comes across as condescending. Employed with the proper balance of humility and objective judgment, however, cultural relativism opens doors to understanding other places on a level you can never attain when you restrict your analytic lens to the one provided by your own culture.

I tried to explain ‘cultural relativism’ to Zach and wasn’t sure if it really sank in.

“Dad, don’t worry. I won’t say or do anything to embarrass you and Mom on the trip,” was his curt reaction.

He can be charmless sometimes, but he knows how to cut to the chase.


I’m a political scientist and statistician by training and to me, Islam, as with any cultural attribute, is but one variable in a larger explanatory model. In the aggregate, all else equal, does it explain anything substantive in the corporeal world?

Western academics and thought leaders periodically trot out research and essays broadly condemning Islam and the Qur’an for the mistreatment of Muslim women.

One of the most provocative denunciations of Islam's view of women has come from Mona Eltahawy, the renowned Egyptian writer and journalist who now lives in Cairo and New York. In her 2015 book, Headscarves and Hymens: Why the Middle East Needs a Sexual Revolution, she repeatedly whomps Muslim men by referring to them merely as 'they' as she summarizes what life was like living in Egypt and Saudi Arabia:

“They hate us because they need us, they fear us, they understand how much control it takes to keep us in line, to keep us good girls with our hymens intact until it’s time for them to fuck us into mothers who raise future generations of misogynists to forever fuel their patriarchy. They hate us because we are at once their temptation and their salvation from that patriarchy, which they must sooner or later realize hurts them, too. They hate us because they know that once we rid ourselves of the alliance of State and Street that works in tandem to control us, we will demand a reckoning.”

Eltahawy has an uncompromising view of the inveterate misogyny prevalent in most Muslim-majority countries today. And backing her perspective are objective data that overwhelmingly and unforgivingly categorize Muslim-majority countries as brutal toward women (see Appendix B for the Freedom House scoring of selected countries on gender-related freedoms).

Few disagree on the problem. In most Muslim-majority countries the oppression of women is pervasive, persistent, codified and institutionalized. Gender bias runs deep in a country like Saudi Arabia where women only recently have been granted the right to drive and to travel outside the country without a male guardian’s permission.

Yet, there is significant variation in gender equality within Muslim-majority countries that should give us pause before accepting any reductive explanation ascribing the oppression of women almost solely to Islam and the Qur’an. If Muslim-majority countries like Tunisia and Senegal can be ranked similarly to the United Kingdom, Israel, Denmark and France in terms of gender-related freedoms, perhaps Islam and gender equality are compatible under the right conditions. If how Islam is practiced can adapt in some countries to the liberation of women, why not in all countries?


Eltahawy puts ‘headscarves’ front and center among the symbols of women’s suffering in the Islamic countries. But there is so much variation in hijab styles, it seems unnecessary to lump the overt symbolism of the burka or the niqaab — which cover a woman’s face — with the more common shayla or al-amira styles which do not cover the face.

Two Iranian women wearing an al-amira hijab (Photo by Gabriel White, distributed under a CC-BY 2.0 license.)

Is the hijab really a visible sign of oppression or is it merely an article of clothing motivated by social norms, personal preferences, and/or a desire to demonstrate one’s faith?

“Sometimes a hijab is just a hijab,” a Muslim (Malaysian) friend once explained to us — a group of her graduate school colleagues — when someone suggested her hijab symbolized oppression.

At the same time, stories today of Muslim women and girls being forced to wear the hijab or enter into arranged marriages are real and include examples from Western countries. In her book, Unveiled: How Western Liberals Empower Radical Islam, Yasmine Mohamed, an author and civil rights activist, recounts her own experience growing up in Canada in a household where the rules closely resembled Sharia law and where the hijab was mandatory starting at age nine.


While I prepared for our family trip, I reacquainted myself with the often contentious debate between religious scholar, Reza Aslan, and neuroscientist, Sam Harris, over the roles of reason (science) and religion in explaining various issues in modern society. Inevitably, their discussion miscarries into a dispute over Islam and terrorism.

Where Aslan argues that Islam and its guiding text, the Qur’an, are not the cause of terrorist violence, but rather that such violence is a product of long-standing social conditions and cultural norms (which can change), Harris views the Qur’an itself as a contributing factor to violence against apostates from Islam and to the oppression of women in Islamic societies. “On almost every page, the Qur’an instructs observant Muslims to despise non-believers,” HBO’s Real Time host Bill Maher once said in defending Harris.

While Harris dismisses as false equivalencies any comparisons of the Qur’an to Judeo-Christianity’s Old Testament — which is comparably soaked in violence and misogyny — more disappointing is his reluctance to consider the organized violence perpetrated by Western societies against Islamic populations in distant and recent history.

Still, Harris is right when he says ‘the words matter’ and I do consider elements of his thesis as useful, particularly when considering the status of women in Islamic societies. The words we write are important. Major religions codify their rules for a reason — to maintain religious discipline and rules of engagement among their followers.

Nonetheless, my overall impulse is to side with Aslan’s multiculturalist perspective which emphasizes the central role of evolving cultural norms and social conditions in explaining current society.

The hijab is a visible manifestation of the Aslan-Harris debate. Is it a reflection of culture and, therefore, subject to change as cultural norms and rules change? Or is the garment’s centuries-old persistence into the present further evidence that the Qur’an’s text reinforces and preserves the oppression of women?

In a 2014 Huffington Post article by psychologist Valerie Tarico, Faisal Saeed Al Mutar, a Washington D.C.-based writer and the founder of Global Secular Humanist Movement, tells her why he rejects multiculturalism’s instinct to not judge the hijab:

“I understand the liberal impulse to respect multiculturalism, but aren’t human rights more important than cultures? Humans have rights, cultures don’t, cultures evolve and reform. Liberal friends and allies ask churches and pastors to accept gay rights and women’s rights. It is disrespectful and even racist to ask any less of mosques and Muslim leaders.”

Yet, other Muslim women say the hijab is a symbol of faith, not oppression, and resent the assumption that their personal decision to wear it is forced on them by the men in their lives.

There are no right or wrong answers here. Just a lot of opinions.


Like many of the world’s older cities, Amman, Jordan is very walkable. Every daily need is within a short distance of where you might be standing, as self-contained neighborhoods are one of Amman’s defining features.

Leveraging this aspect of Amman, Zach and I decided to make our last day there as simple as possible: walk to a nearby restaurant, wander through a few shops for last-minute mementos and gifts, then use the Lyft taxi service to get to the University of Jordan, where my wife and her colleagues were wrapping up their business trip.

After picking us up at our hotel, the Lyft driver snaked at varied, erratic speeds through Amman’s near-constant heavy traffic. On the approach to the University of Jordan’s (UJ) campus, my son spotted a McDonald’s near our drop-off point — UJ’s main gate along Queen Rania Street. Annoying our driver with a u-turn request that put us in front of the McDonald’s, we stumbled out of the Hyundai sedan (which appeared to be every other car in Amman) and texted my wife to announce our arrival.

As we waited for my wife and her colleagues, we felt like pinballs as the high volume of foot traffic passing near the UJ main gate bounced us around. At 31,000 students, UJ is Jordan’s largest university. It is also the oldest and most prestigious.

The University of Jordan’s Main Library ( Photo courtesy of the Univ. of Jordan Library)

While being absorbed into this crowd, I was immediately struck by the high percentage of women compared to men heading in and out of the UJ campus. It wasn’t even close. Doing a quick headcount (as best I could), I estimated three female students for every male student. Just a guesstimate. And, as it turns out, not too far off the actual gender ratio reported by UJ’s statistical office: Female students at UJ outnumber male students two-to-one. In comparison, females represent 49 percent of Harvard University’s undergraduate population.


Based on their student populations, Jordan’s most elite university is more female-dominated than Harvard. Such is the world we live in today.

More importantly, we are seeing this trend throughout the Middle East and North Africa, not just with respect to tertiary institutions (i.e., higher education), but also with primary and secondary education (grades K-through-12).

UNESCO has created indexes monitoring the status of women in the world’s education system. Since 1995, the evidence is overwhelming: Women in Islamic countries are increasingly being educated at rates comparable to women in economically advanced countries (e.g., the U.S. and Europe).

This fact will fundamentally change the Middle East.

One UNESCO measure is particularly informative: the Gender Parity Index for School Life Expectancy (GPI-SLE). The SLE component is defined as the number of years of schooling a student can expect to receive, from primary through tertiary levels of education. The GPI-SLE is then calculated as the ratio of female school life expectancy to male school life expectancy.

A GPI-SLE equal to 1 indicates parity between females and males. In general, a value less than 1 indicates disparity in favor of males and a value greater than 1 indicates disparity in favor of females.
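The calculation itself is simple enough to sketch in a few lines of Python. The school life expectancy values below are hypothetical, purely for illustration; the 0.97–1.03 parity band reflects a common UNESCO convention but is an assumption here, not a figure from this article:

```python
def gpi_sle(female_sle: float, male_sle: float) -> float:
    """Gender Parity Index: ratio of female to male school life expectancy."""
    return female_sle / male_sle

def interpret(gpi: float, tolerance: float = 0.03) -> str:
    """Classify a GPI value; UNESCO often treats 0.97-1.03 as parity."""
    if abs(gpi - 1.0) <= tolerance:
        return "parity"
    return "favors females" if gpi > 1.0 else "favors males"

# Hypothetical example: girls average 13.5 expected years of schooling,
# boys 12.9 — numbers chosen only to demonstrate the arithmetic.
ratio = gpi_sle(13.5, 12.9)
print(round(ratio, 2), interpret(ratio))  # prints: 1.05 favors females
```

A MENA-wide average of 1.05, as reported below, therefore indicates a slight disparity in favor of females.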

On this UNESCO measure, women in the Middle East may have already equaled their sisters in the advanced economic world (see table below). For 82 countries with consistent data from 1995 to 2018, each was categorized into one of three categories: Middle East/North Africa, Advanced Economies, Rest of World (see Appendix A for the list of countries and categorization).

The Middle East/North Africa (MENA) countries with reliable data include: Tunisia, Morocco, Palestine, Turkey, Kuwait, Oman, Iran, Jordan and Bahrain.

According to the most recent GPI-SLE, MENA countries have an average GPI-SLE of 1.05, indicating a slight bias in favor of females. This MENA index score is the same as for the advanced economies and slightly higher than all remaining countries (GPI-SLE = 1.02).

It may surprise some that the education systems of Kuwait, Palestine, Tunisia and Oman are more gender biased in favor of females than the United States, Norway, Israel, Denmark, and Finland (see Appendix A). The educational advancement of women in MENA countries defies an arrogant and common assumption about the West’s advantage over Islamic countries in terms of gender equity.

And while the GPI-SLE is just one measure of gender equality in education, other UN-monitored gender equity measures show a similar worldwide pattern to the GPI-SLE (e.g., gross enrollment ratios, adjusted net enrollment rates, adjusted net intake rate to Grade 1, and the percentage of female graduates by level of tertiary education).

In terms of education, women living in MENA are rapidly catching up to their counterparts in the advanced economies. And it’s happened relatively fast. In the above table (2nd column) we see that, since 1995, the increase in the GPI-SLE (i.e., education systems becoming more favorable to females) has been greater among MENA countries (Δ GPI-SLE = +0.11) than among the advanced economies (Δ GPI-SLE = +0.03) or the remaining countries (Δ GPI-SLE = +0.08). The higher-magnitude improvements in MENA countries are due, in part, to those countries having started their gender equity efforts with females significantly more disadvantaged at the beginning of the process.

It should be noted that the educational rise of women in MENA countries mirrors similar increases in other parts of the world. It is not an Islamic-thing, it is a world-thing. And this trend is particularly evident in higher education.

“Despite what history across the globe has told us, women now outnumber men at universities — and it is a trend which is accelerating year upon year in the majority of countries,” Isabelle Bilton wrote last year for Study International News, an independent news service monitoring education trends for international students.

Many theories have emerged explaining this worldwide, systemic shift in education attendance and outcomes favoring females. In summarizing the research on this question, Bilton offered this: “There is no answer to why the gender shift occurred but many researchers have speculated problems occur when students are in school. Boys tend to be less interested and less focused on schoolwork, leading to lower grades at all levels of study. As a result, fewer of them choose — or are able — to enroll in universities.”

But something deeper is going on in Jordan and the Middle East more generally. As girls are rising, boys are falling precipitously. As noted by education researcher and author Amanda Ripley in a September 2017 article for The Atlantic, “In school, Jordanian girls are crushing their male peers. The nation’s girls outperform its boys in just about every subject and at every age level. At the University of Jordan, the country’s largest university, women outnumber men by a ratio of two to one — and earn higher grades in math, engineering, computer-information systems, and a range of other subjects.”

This relative trend between males and females in Jordan is not entirely dissimilar from trends in the U.S. where girls do better than boys on standard tests, are more likely to take Advanced Placement tests, more likely to go to college, and will spend more than a year longer in school over their lifetimes compared to their male counterparts.

But, as Ripley’s research shows, something more pernicious may be happening in the Middle East that is (literally) hitting boys hard. In her article, Ripley shared an interview she conducted with a group of teenage girls at an elite Jordanian secondary school. When the girls were asked why they thought girls do better than boys in school, one 16-year-old student, Nawar Mousa, was quick with an answer: “I do my homework, and I read books. My brother, what does he do? He goes with his friends. He plays PlayStation.”

During her research, Ripley also interviewed a Jordanian family with a 15-year-old son attending an all-boys high school. In the course of the interview, the son acknowledged that it is common in his boys-only school for teachers to hit students. “Girls’ schools are better,” the boy’s mother added, “less dangerous.”

Whether we can generalize from Ripley’s research to the Jordanian education system writ large is debatable, but her work does conform to other research in Jordan showing academic achievement is affected by violence in the learning environment. Students don’t learn in threatening environments and gender-segregated schools may amplify those threats for boys.

Half of Jordanian primary and secondary public schools are gender-segregated. While there has long been evidence in the U.S. that girls learn better in all-female environments, boys clearly don’t flourish in all-male environments. Quite the opposite. Boys achieve more academically when more girls are in the environment, according to recent research.

In practice, educational strategies that work for girls also work for boys, and vice versa. The research is unequivocal on this point: What drives academic performance are the expectations placed on the individual student, regardless of whether that student is a boy or a girl.

“Our research shows that boys will compete for good grades and often achieve them in schools where academic effort is expected and valued,” says Claudia Buchmann, a sociology professor at Ohio State University and co-author (with Thomas A. DiPrete) of the book, The Rise of Women: The Growing Gender Gap in Education and What It Means for American Schools.


The progress Jordan has made in the past two decades in building its higher education infrastructure can be seen in the chart below. Where in 2000 few Jordanians aged 25 or older held a higher education degree, just 10 years later 19 percent of men and 13 percent of women did. While these rates are significantly lower than those observed in the U.S., they are comparable to other Middle Eastern countries, and over the past two decades Jordan has had among the region’s highest increases in educational attainment for women.

Some observers express caution about the educational advancements of Jordanian women given this fact: women may be getting higher education degrees, but it is not translating into comparable employment opportunities.

In Jordan, nearly 27 percent of unemployed university graduates are male, while almost 68 percent are female. In 2013, the World Bank reported a 35 percent unemployment rate among college-educated women, considerably higher than the rate for college-educated men. In fact, the unemployment rate for men goes down with higher levels of education, but for women the exact opposite is true: As women become more educated, the unemployment rate goes up.

In Jordan’s highly patriarchal society where job opportunities for women have been historically limited, the employment rate disparity between men and women is not surprising. However, as more Jordanian women earn college degrees, many will choose to leave Jordan for other countries, such as the United Arab Emirates, where more jobs (and higher paying ones) exist for women.

This brain drain is a critical issue in Jordan today.


Our last activity in Jordan before flying home was to visit with the Chair of UJ’s Political Science department, Dr. Amir Salameh Al-Qaralleh, an imposing figure when standing, but reclined in his office chair with a cigarette in his hand, he was disarming.

Over the course of an hour-long conversation, much of it about Syria and the U.S. withdrawal from the Turkish-Syrian border, a number of Dr. Al-Qaralleh’s students popped their heads into his office. One student had forgotten her notebook for a class and asked if Dr. Al-Qaralleh had paper he could spare. Another asked about a class assignment. And still another asked about an exam.

“Students today,” he said, slightly shaking his head. “They are not self-sufficient.”

But I had a different reaction, first noticing the obvious comfort these students had walking into Dr. Al-Qaralleh’s office, even as he was in a conversation. Where I went to school (University of Iowa, Columbia University), it is unimaginable that I would have walked in on a Department Chair in such a circumstance.

I also noticed that all of the students that asked to speak to Dr. Al-Qaralleh during our meeting were women. That was not random chance, as I’ve seen this pattern before as a college instructor. Of the students that would visit me during office hours, easily three-quarters were women.

When I shared this observation with Dr. Al-Qaralleh, he paused, choosing his words carefully. “Women aren’t afraid to ask for help,” he replied.

Apparently, the reluctance of men to ask for help is a universal phenomenon.

But this anecdotal evidence may also reflect a higher level of engagement found among female students pursuing a higher education in Jordan.

Dr. Al-Qaralleh seemed to agree with this broad generalization.

What is apparent is that Jordanian women are getting educated at increasingly higher rates, and it is not a transitory phenomenon. Instead, it is proving to be broad-based and sustained, despite the strains put on the Jordanian education system by the influx of over 600,000 registered Syrian refugees since the start of the Syrian Civil War.

The rise of women in Jordanian education mirrors similar changes around the globe, but as I’ve documented here, this trend has been more pronounced in MENA countries than in other regions.

In a patriarchal society like Jordan’s, don’t be surprised if in the next decade the question policymakers ask is focused less on helping Jordanian women achieve educational and economic parity with Jordanian men and more on how to help men keep up with their female counterparts.


Computer engineering graduates Arwa Tarawneh (left) and Suha Habashneh are learning how to work with Intel Galileo (Photo provided by Intel Free Press)

Appendix A:

Appendix B:

A call for a moratorium on ‘debunking conspiracy theories’

By Kent R. Kroeger (, November 22, 2019)

The Democrats now use the terms ‘debunked’ & ‘conspiracy theory’ the way the Republicans and President Donald Trump co-opted the use of ‘fake news.’

All three of these terms should be purged from our working vocabulary until we can demonstrate their proper use.

Perhaps the saddest result of Trump’s three years in office is that today’s Democratic Party shares his disregard for facts and honesty. Some would argue the Clintons pioneered the modern art of dissembling and deception, but that conclusion is indeed unfair to the thousands of deceitful politicians that preceded them.


The evidence is vast about the poor state of our nation’s intellectual curiosity:

CNN (November 20, 2019): “No longer is Volker claiming that the idea of investigations — into the Bidens and into a debunked conspiracy theory that Ukraine somehow had meddled in the 2016 election to help Hillary Clinton and might be in possession of the hacked Democratic National Committee server — ever came up in a White House meeting two weeks before the July 25 call between Trump and Ukrainian President Volodymyr Zelensky.”

The Intercept (November 20, 2019): “…the president continued his unlikely attempt to reinvent himself as an anti-corruption crusader, by stating flatly that “Joe Biden and his son are corrupt.” Without offering any evidence to support that debunked claim, the president who has used his office to enrich himself added, “If a Republican ever did what Joe Biden did, if a Republican ever said what Joe Biden said, they’d be getting the electric chair by right now.”

Business Insider (November 19, 2019): “Despite Trump and (Rudy) Giuliani’s allegations, both US and Ukrainian government officials have confirmed there’s no evidence that the Bidens did anything improper. Former Ukrainian prosecutor general Yuriy Lutsenko clearly said he had no evidence of wrongdoing by Joe or Hunter Biden.

“I do not want Ukraine to again be the subject of US presidential elections,” Lutsenko said. “Hunter Biden did not violate any Ukrainian laws … at least as of now, we do not see any wrongdoing.”


On this last example, there is this indisputable point: If your wrongdoing defense ever includes these words or equivalent — “I did not violate any Ukrainian laws” — you probably have an ethics problem.

Together, these false ‘debunked conspiracy theory’ narratives fall into one of two categories: (1) the guilt-by-association debunking, and (2) the straw man debunking.

An example of a guilt-by-association debunk is dismissing the idea of Ukrainian interference in the 2016 election by using one of its genuinely debunked variants — that Ukrainians hacked the DNC and Podesta emails, which are now stored on a server somewhere in Ukraine — to justify ignoring more fact-based lines of inquiry.

Until Giuliani and Trump — the GOP’s Hardy Boys — mentioned CrowdStrike and the alleged Ukrainian server, I didn’t even know about that conspiracy theory, much less that there is no substantive evidence to support it.

To the people who closely followed the Russiagate story, the Ukrainian-interference allegation was always about the 2016 election interactions of the Hillary Clinton campaign and a Democratic National Committee (DNC) consultant with Ukrainians linked to their government.

The Ukraine-interference story is not a conspiracy theory and it certainly hasn’t been debunked. It is rooted in original reporting by Politico’s Kenneth P. Vogel (now with The New York Times) and David Stern from their story published on January 11, 2017 (“Ukrainian efforts to sabotage Trump backfire”).

Their key findings were:

Ukrainian government officials tried to help Hillary Clinton and undermine Trump by publicly questioning his fitness for office. They also disseminated documents implicating a top Trump aide in corruption and suggested they were investigating the matter, only to back away after the election. And they helped Clinton’s allies research damaging information on Trump and his advisers, a Politico investigation found.

A Ukrainian-American operative (Alexandra Chalupa) who was consulting for the Democratic National Committee met with top officials in the Ukrainian Embassy in Washington in an effort to expose ties between Trump, top campaign aide Paul Manafort and Russia, according to people with direct knowledge of the situation.

Is that electoral interference? It is nation-state politics on its most fundamental, self-interested level. It’s not like the Ukrainians stole private emails (a crime). They merely assisted Hillary Clinton’s campaign in associating one of her political rivals (Trump) with unlawful behavior (Manafort). [It wasn’t hard to do.]

The Republicans will rightfully note, however, that information provided by the Ukrainians about Manafort helped cultivate the false media narrative (‘fake news’) that the Trump campaign conspired with the Russians to win the 2016 election.

Foreign influence in American elections is not restricted to 2016. In fact, it has been a near constant in our elections since 1968 and likely before that.

Foreign governments, acting purely in self-interest, have every incentive to nudge American elections in their favor. Nothing is going to stop foreign influence of American elections. Nothing. And it is naive (and potentially harmful given some of the solutions offered) to think it can be stopped. Minimized? Fine, we should always try to do that. Stopped? Never.

The Ukrainian-interference story is based on real, documented evidence. Do we have a complete understanding of the story? Probably not. Does it warrant Trump’s personal lawyer, Rudy Giuliani, sent on a fool’s errand to find evidence of a mythical Ukrainian server containing the DNC/Podesta emails? Absolutely not.

Why is anyone surprised that the Trump/Giuliani brain trust would subsequently confound the Ukrainian server theory with the more substantive Vogel-Stern reporting on Ukrainian assistance to the Clinton campaign?


The second form of debunking is more pernicious: Using a straw man conspiracy theory to debunk legitimate lines of inquiry.

Deemed ‘debunked’ by the mainstream media, the Biden-Burisma corruption allegation is a classic straw man debunk.

First, a little Burisma/Biden background:

Burisma is a holding company for a group of energy exploration and production companies and is controlled by Mykola Zlochevsky through his company Brociti Investments Limited. Charges of corruption by Western governments have followed the pro-Russia Zlochevsky for most of his career — which included a stint as Ukraine’s Minister of Ecology and Natural Resources from 2010 to 2012 under the pro-Russia government of President Viktor Yanukovych. Burisma Holdings is estimated to have had revenues around $400 million in 2018.

A Ukrainian revolution removed Yanukovych in February 2014; he was replaced by a more Western-leaning government eventually led by Petro Poroshenko from 2014 to 2019.

Hunter Biden, along with his legal partner Devon Archer, joined the Burisma board in April 2014, as part of Zlochevsky’s attempt to fend off corruption charges that were likely to be pursued more aggressively under the new political order in Ukraine.

Hunter’s father, Joe Biden, was the U.S. Vice President at the time and was the Obama administration’s point-man on Ukraine policy.

The facts and timeline alone should raise eyebrows among oxygen-breathing journalists. Had the Obama administration been as clean as Barack Obama claims, Hunter’s Ukrainian adventure would have been chop-blocked immediately by the White House ethics office.

And don’t believe the mainstream media when it suggests Hunter’s hiring was a non-issue within Washington, D.C.

Contradicting that assertion is the Washington Post’s reporting that Chris Heinz, Secretary of State John Kerry’s stepson and a business partner of Devon Archer and Hunter Biden, opposed Archer and Biden joining the board in 2014 due to the reputational risk. Heinz opposed the Biden-Archer deal because it was unethical, not because it was illegal. Even Vice President Biden asked his son if “he knew what he was doing.”

And this is the essence of the straw man debunk of the Burisma-Biden story: There was nothing illegal, so there is nothing to look at, we are repeatedly told by the mainstream media.

But legality was never the central question surrounding Hunter’s Burisma deal. Private citizens, even those related to vice presidents, are not generally prevented from joining the boards of foreign companies — even those with business interests directly involving the U.S. government. [There are exceptions, such as when the business’ home country is the target of U.S. sanctions, ex. Iran or Venezuela.]

Never mind that recent reporting has shown Burisma lobbied the U.S. State Department regarding the corruption investigations against it, at a time when Joe was still Vice President and Hunter was still on the Burisma board.

Never mind that the business arrangement Hunter Biden crafted with Burisma, where he sat on the board while his law firm was also hired as a consultant to Burisma, would be illegal in this country.

Nothing to look at here, we are told.

“A totally debunked conspiracy theory,” MSNBC’s Joy Reid continues to say about the Burisma-Biden story.

No aid money went from the U.S. government to Burisma while Hunter Biden was on the board, we are proudly assured. Again, another straw man debunk, since few have seriously argued that aid money went directly to Burisma from the U.S. Treasury. The fact that the U.S.-picked Ukrainian prosecutor let Burisma walk away from corruption charges with only $9.5 million in fines (despite evidence Burisma and its subsidiaries failed to pay around $70 million in taxes from 2014–15 alone) is all the payback Burisma ever wanted.

This is what international financial corruption looks like with an assist from the U.S. news media.

And, no, getting Donald Trump out of office isn’t worth condoning the incurious, elite-friendly journalism that now dominates our national news organizations.


Another conspiracy theory debunking does not fit as neatly into the two categories I’ve created.

The chemical (chlorine) weapons attack on Douma, Syria on April 7, 2018 has seen a virtual news blackout as information has slowly emerged suggesting the culpability of the Bashar al-Assad regime is not clear — despite a near universal assumption in the Western media that Assad’s forces did it.

MSNBC, CNN, Fox News, NPR, The New York Times, and The Washington Post have all definitively declared the Douma attack an Assad crime. Any suggestion otherwise has been declared — you guessed it — a debunked conspiracy theory.

In the meantime, the Organisation for the Prohibition of Chemical Weapons (OPCW), which investigated the Douma attack and concluded it was most likely conducted by a helicopter drop (i.e., the Assad regime), has seen the release of a dissenting report by one of its staff engineers suggesting the attack was more likely ground-based, not air-based.

Recently, more potential evidence has been reported suggesting the OPCW may not be the independent watchdog of chemical weapons use it claims to be. [Rogue reporter (her words) Caitlin Johnstone has been reporting on Douma while the mainstream media has been silent. Her latest article on Douma is a must read.]

I wrote about Douma last year and concluded (as if my opinion matters): While the OPCW report (and dissenting report) still leave it unclear who conducted the attack, to state without qualification that the Assad regime did it is a willful disregard for the available evidence.

That is the problem when you let governments and their surrogates control information. They purposely ignore relevant facts so they can, technically, still claim they are not lying.

When the U.S. and its allies attacked Syria with Storm Shadow, MdCN, and Tomahawk missiles on April 14, 2018, the U.S. Department of Defense and its media outlets were still referring to the chemical attack of April 7th as a “suspected” chemical attack. French intelligence would claim it had definitive information that Assad was responsible for the April 7th attack. Judge for yourself the quality of the French information.

That, of course, is the defense and intelligence community’s ultimate fallback position should the full details ever become public: Battlefield information is frequently imperfect and we often need to act on the best information available at the time.

That is especially true when you don’t demand better information. And, naturally, MSNBC, Fox News and CNN didn’t ask for it when reporting on the April 14th missile attack by the U.S. and its allies.

That is how the game is played today and the score is — World elites: 23,265,784 and The Rest of Us: 12. [Glenn Greenwald, our leading scorer, is now mostly playing for the Brazilians.]


At the end of the day, conspiracy debunking is a variation on the Simon Says game we played as children.

If, through their congenital ignorance, Trump and Giuliani misstate a valid point of contention regarding Hunter Biden’s Ukrainian (and Chinese) deals or the 2016 Ukrainian activities of the Hillary Clinton campaign, should these topics be removed from the public debate or no longer the subject of further investigation by an independent press?

Of course not.

But that is exactly the chill the mainstream news media and the Democratic Party have cast over our national dialogue. Instead of arguing the facts, they hide behind partisan talking points.

This need to kill intellectual inquiry by the U.S. political parties and the national news media is now hard-coded into the system and will be hard to excise from the body politic. It long ago killed from within the integrity of the Republican and Democratic parties and is swiftly killing objective journalism (if that patient isn’t dead as well).

The cure? We all need to stop getting our news and information from a short list of sources. It’s OK to watch MSNBC, NPR, Fox News or CNN. It is not OK to get the vast majority of your information from these sources.

Nothing ignites a Democrat faster than saying you watch comedian Lee Camp’s Redacted Tonight on the RT network (a news network partially funded by the Russian government). Or, worse yet, tell them you love jagoff nightclub comedian Jimmy Dore’s YouTube channel. [There should be a required journalism course at Northwestern University titled, “How could Jimmy Dore get Russiagate right and the mainstream media totally f**ked it up?”]

Mention those shows and “Russian propaganda!” is a typical response. Besides the fact it isn’t true in the case of Redacted Tonight or The Jimmy Dore Show, why would that rule out watching something?

Hell yes! I want to know what the Russian government thinks. I want the Chinese Communist Party perspective too. And I want to know what the Iranian Supreme Leader is saying on current events. And, to be fair, I want the U.S. political and economic establishment’s latest propaganda spiel on my subscriber list (the hard part there is choosing between CNN, Fox News, MSNBC, The Washington Post or The New York Times. Usually any one of these sources will do, but, occasionally, genuine internal divisions within the establishment do emerge across those news outlets.).

That is what informed people do. They take in as much diverse information as possible and use their education and experience-informed intuition to sort out fact from fiction, as best as possible.

It's not easy. If it were, MSNBC's Rachel Maddow and CNN's Chris Cuomo would be doing it.

  • K.R.K.

Send comments and non-physical threats to:

On the road to Petra

By Kent R. Kroeger (, November 18, 2019)


This essay documents my challenges and observations during my family’s recent travels through Oman and Jordan.

This is the third essay in a series. The previous essays can be found here and here.


The last destination in our journey through southern Jordan was Petra, an archaeological city that was the capital for the Nabataean Kingdom from about the 4th century BC until a major earthquake in 363 AD led to its eventual abandonment.

One of the best preserved structures in Petra, Al-Khazneh, also known as “The Treasury,” is its most famous, having served as the fictional resting place of the Holy Grail in Steven Spielberg’s 1989 movie, Indiana Jones and the Last Crusade.

Indiana Jones and the Last Crusade (Paramount Pictures and LucasFilm, 1989)

Of all the touristy things we planned, for Zach, visiting “The Treasury” at Petra was the trip’s most anticipated event. The drive would be a couple of hours from Wadi Rum and the views along the way spectacular, according to Abo, our Jordanian guide.

He wasn’t kidding. The Jordan-Arabah Valley near Petra is deep and wide, its limited color palette ranging from grey pewter to buttermilk tan giving it a grim starkness no less stunning than the Grand Canyon or Bryce Canyon. My son described the Valley and the Edomite Mountains that define its texture as South Dakota’s “Badlands on steroids.”

Looking west (from Jordan Hwy. 35) towards the Edomite Mountains in the Jordan-Arabah Valley near Petra (Photo by Kent R. Kroeger)

At one point, Abo pulled the car over so we could take a few photos. To get a cleaner picture without the intrusion of cars, homes or highways, I wandered as far from the parking lot as possible (without falling into the Jordan-Arabah Valley). From my vantage point there was no evidence of modern civilization, just the vastness of the Valley and its grey mountains. For a brief moment, I was Christopher Reeve in Somewhere in Time, a motivated time traveler. Moses himself, standing on that same spot, would have seen exactly what I saw on that day.

And, not coincidentally, the tallest mountain in my sight was Jebel Nebi Harun (“Mountain of the Prophet Aaron” in Arabic), one of the holiest sites in Jordan, venerated by Muslims, Christians and Jews as the resting place of Prophet Haroun, Moses’ brother.


But, as spectacular as the scenery was as we resumed our drive to Petra, my attention was drawn to Abo as he told us a story about another entourage he once escorted to Petra on the same highway.

No more than 30 kilometers outside Wadi Musa, the gate city to Petra, we passed a black Bedouin tent, one of many that we had seen in Jordan.

A Bedouin tent (Bait al Sha’er — House of Hair) in Wadi Rum (Photo courtesy of

Bedouin tents, called Bait al Sha’er (‘House of Hair’) in Arabic, are made of goat hair, typically black, and are typically divided into two sections separated by a curtain known as a ma’nad. One side is reserved for the men and male guests and is called the mag’ad (‘sitting place’). The other half of the tent, called the maharama (‘place of the women’), is for cooking and female guests.

As we passed this particular Bedouin tent, Abo asked an unexpected question: “Have you heard of Muammar al-Gaddafi?”

In hesitating unison, we all gave a nod, or some indication in the affirmative. Undeniably, I had few positive images of Gaddafi. If we had done a free association exercise, my thoughts regarding Gaddafi would have been: Libyan dictator, Colonel, bombing of a Pan Am flight over Lockerbie (Scotland), gave up Libyan nuclear program, female guards (some taken from their homes as young girls and raped before becoming guards), his execution by rebels during the Libyan Civil War.

I do not have a positive view of Gaddafi…though I do have one quirky remembrance of him: In September 2009, he was scheduled to attend a United Nations General Assembly meeting in New York City. As he often did when he traveled abroad, he took with him a large Bedouin tent that he would set up to receive guests and dignitaries. What I did not know, until Abo told me, was that Gaddafi was born a Bedouin and carried its core tradition of exceptional hospitality with him when he became the Libyan leader in 1969 following a military coup.

In Bedouin culture, said Abo, a family will take in guests — who may be complete strangers — to stay in their home for as long as they need, without question. In fact, the host will not ask any questions of the visitor — even their name — until the fourth day of the stay. [I don’t even let immediate family stay more than three days in my home.]

During Gaddafi’s 2009 visit to New York City, he wanted to set up his tent in Central Park, but objections by the U.S. government stopped that idea. So, instead, somewhat ironically, Gaddafi had to rely on the hospitality of a private American citizen with a soft spot for aggrieved dictators — Donald Trump — to locate a large tract of private land in suburban New York City where the tent could be erected.

That is my Gaddafi memory. Abo had his own.

A little background first.

In October 2000, Gaddafi made his first visit to Jordan since King Abdullah II had been crowned the country’s ruling monarch in 1999.

Gaddafi had a famously icy relationship with other Arab leaders, including Jordan’s King Hussein, King Abdullah’s father. Observers would remark that the only thing Arab leaders ever reached a consensus agreement on was their collective dislike of Gaddafi.

Part of their annoyance with Gaddafi was his uncompromising support of the Palestinian cause and his willingness to expose Arab leaders when he felt they betrayed the Palestinian people. For that, Gaddafi is still admired by many Palestinians today.

But Gaddafi could sometimes just be rude and, especially when directed towards Arab monarchs not accustomed to such treatment, it caused deep-seated resentment.

At the 1988 Arab League meeting in Algiers, Algeria, Gaddafi invited the other attending Arab leaders to “go to hell,” and, at one point, pulled a white hood over his head when Jordan’s King Hussein was giving a speech.

Gaddafi was the Kanye West of Arab leaders.

The President of the Government of Spain, José Luis Rodríguez Zapatero, and the President of Libya, Muammar al-Gaddafi, in Tripoli (Libya), November 2010. (Photo by:


Abo’s Gaddafi story started as we passed a black Bedouin tent just outside of Wadi Musa. As Abo tells it, Gaddafi’s entourage passed a similar Bedouin tent in this same area back in 2000, on the way to a luncheon with King Abdullah in the ancient city of Petra. However, after spotting the Bedouin tent, Gaddafi had the entourage’s fleet of cars pull over and stop.

“Gaddafi exited his car and asked everyone to place their cellphones in the trunk of one of the cars,” recalled Abo. “Nobody would be able to contact King Abdullah’s staff to notify them of Gaddafi’s likely late arrival at the luncheon.”

The male head of household greeted Gaddafi and invited him for food and coffee, as would be the Bedouin custom.

“Three hours later,” laughed Abo.

Gaddafi was so moved by the hospitality of this Bedouin family he would later send them money to build homes for the parents and their children, according to Abo, who spoke passionately about Gaddafi’s desire to be “with the people, no matter their status.”

Back in Petra, King Abdullah, who waited for Gaddafi’s arrival at the luncheon, was furious. When Gaddafi finally did arrive, King Abdullah let the Libyan leader know how profoundly insulted he was by the tardiness, softening his admonishment somewhat by calling Gaddafi a “mentor.”

As well-known as Gaddafi was for his legendary rudeness, he was equally well-known for his ability to charm enemies and critics (when motivated to do so), and he did exactly that with the young Jordanian King.

Years later, King Abdullah would authorize Jordanian special forces to assist in the overthrow of the Gaddafi regime in 2011.

Eventually, we did make it to Petra, and it was spectacular.

My son (Zach) and two Bedouin children selling postcards at Petra (Photo by Kent R. Kroeger)
The author (who does not know how to wrap a keffiyeh) and his son (Zach) in front of “The Treasury” in Petra (Photo by Christa Olson)

– K.R.K.

All comments and questions can be sent to:

On the road to Wadi Rum

By Kent R. Kroeger (, November 16, 2019)


This essay documents my challenges and observations during my family’s recent travels through Oman and Jordan.

This is the second essay in a series. The first essay can be found here.


Our Jordanian guide, Abo, a middle-aged man of Palestinian descent, greeted us outside our hotel in central Amman.

“Good morning!” he said with a wide grin across his handsome, sun-weathered face. “We have a long drive to Wadi Rum.” A five-hour drive, according to Siri.

As my wife, two of her work colleagues, and my son piled into a white Dodge Caravan, I hurriedly finished a Boston creme I had purchased from a Dunkin Donuts next to the hotel. My wife glared back at me, knowing I would mess up Abo’s nicely detailed van if I tried to finish my feeble breakfast there.

“I’m sorry, Christa. I was hungry.” Her glare softened somewhat.

“Just finish it outside.”

My teenage son, Zach, in a constant grump since arriving in Amman, especially after our room’s satellite TV feed went dark, jumped into the far back rumble seat with me and put in his earbuds, immediately losing himself in a YouTube video.

Abo gave us a quick description of the route we’d be taking through Amman’s southern suburbs and then south on Jordan’s Route 15.

“Can we stop at a QuikTrip?” Zach barked up to his mother, sounding like a military officer’s command to an XO. Abo, thinking the question was for him, seemed puzzled.

“I don’t think they call them QuikTrips here,” I told Zach.

“A 7–Eleven?” Zach clarified, as if that would help Abo understand. “WaWa?” was Zach’s last halfhearted attempt at generating mutual understanding, as his attention returned to his video.

This was going to be a long ride.


I pulled out my cell phone to plot our trip on Google Earth and wasn’t encouraged when the virtual terrain looked like a mix of Utah’s Salt Flats and Nevada’s Mojave Desert, with a few small villages sprinkled here and there.

I tried to ask Abo about any sights of interest along the drive, but the combination of road noise and my wife’s shoptalk with her colleagues left my question unheard and unanswered.

Perhaps I’ll grab some additional sleep, I thought to myself, but then remembered my wife was in the van. There would be no sleeping.

Among my wife’s best qualities is her limitless curiosity, as she will ask questions and pester hosts right up to their breaking point — and everyone has a breaking point (except for Abo, thankfully).

We were on the road for no more than an hour when we came to an isolated, but sizable town.

“What city is this?” Christa asked.

“Talbieh Camp,” replied Abo. “A Palestinian camp created after the 1967 War.”

Consisting mostly of internally displaced Palestinian Jordanians when it opened, as opposed to Palestinian refugees that crossed into Jordan during the war, the camp today is home to about 9,000 Palestinians living in an area covering roughly 0.13 square kilometers — about three times the population density of Guttenberg, New Jersey (the most densely-populated incorporated municipality in the U.S.).

Entry road to Talbieh Camp near Amman, Jordan (Photo by Kent R. Kroeger)

As we passed the camp, Abo related his own story in which, as a young boy, he was forced to move with his family from Jerusalem to Jordan as a consequence of the ’67 War.

“My family still has a home and property in Israel,” said Abo. “We could sell it, but the Israelis will only let us sell it to them.”

My follow-up question about whether the property is earning income for his family went unheard as my wife asked why many of the homes in Talbieh had concrete columns with rebar protruding from the top of the dwellings. It was a better question anyway, as it was striking how many homes and buildings in these camps had an unfinished look.

“So they can add a new floor,” replied Abo.

Concrete columns and protruding rebar in Uum Sayoun (Bedouin Village), Jordan (Photo by Joseph Redwood-Martinez)

“Say again?” asked Christa.

Abo explained how the Jordanian government limits the number of floors a home can have, but, under current law, the government won’t make a family tear it down if the additional floor is complete. So, when a family can afford it, the family hires building contractors to come in at night and complete the new floor before morning.

Whether Abo’s story is a complete explanation is less important than the symbolism of these concrete columns. “Necessary incompleteness” is how writer and filmmaker Joseph Redwood-Martinez characterizes this architectural phenomenon, which exists in other parts of the world as well, such as Haiti, the Dominican Republic, Ecuador, Mexico, the United States, Jordan, Israel, Palestine and Turkey.

Redwood-Martinez quotes a Palestinian artist in Ramallah (West Bank) as saying the concrete columns represent Palestinians’ “optimism for the future.”

Optimism is a word not often associated with the Palestinians in the Western media. But I heard it many times while in Jordan. Abo’s own story is an exemplar of this outlook. A refugee of the Israeli-Palestinian conflict as a child, as an adult he helped build a successful Jordanian tourism company and, today, his children are college-educated or bound, including a son studying at a U.S. university. Abo and his family’s story is not uncommon in Jordan.

Just over 2.5 percent of Jordan’s total population is enrolled at a university, a proportion comparable to the United Kingdom, and the percentage of Jordanians 25 years old and older with at least a college degree has risen from 9 percent in 2001 to 12 percent in 2014, according to the World Values Survey. In comparison, 36 percent of U.S. adults in this age bracket have at least a college degree.


Abo spoke with obvious pride about his family, particularly with respect to their educational accomplishments, and about Jordan, in general. But his contagious enthusiasm was also laced with a keen understanding of the country’s realities.

Jordan has a lot of refugees, he would remind us as we drove past smaller, more recently opened refugee camps, usually housing Syrians who had fled the Syrian Civil War.

Jordan hosts over 670,000 Syrian refugees, but has not permitted these refugees to seek asylum, according to Human Rights Watch. The Syrian refugees tend to be isolated in remote border areas within Jordan with limited access to humanitarian aid.

In addition to the existing refugees who are fully or partially integrated into Jordanian society, the Syrian refugees will further increase the financial commitment required by the Jordanian government and the international community to handle Jordan’s refugee population.

And the Jordanian education system is already under pressure.

The recent influx of Syrian refugees (in addition to a large number of existing refugees from the West Bank, Gaza, Iraq, Egypt and other parts of the Middle East) has put tremendous pressure on the capacity of Jordan to handle more students. Adding to the stress, the unilateral decision by the U.S. in August 2018 to stop its contribution to the United Nations Relief and Works Agency (UNRWA), which amounted to about one-third of the Agency’s $1.1 billion 2017 budget, has had a devastating impact on Palestinian refugees in Jordan, particularly children.

UNRWA services to the 2.3 million Palestinian refugees registered in Jordan include aid for education, health care, food security and other essential day-to-day services. UNRWA also employs thousands of Jordanians.

“These funds go toward a modern, secular education for 500,000 boys and girls; vaccinations and health clinics that provide services to over three million refugees and a basic level of dignity for millions who otherwise would lead lives of despair,” said Hady Amr, a Nonresident Senior Fellow at the Brookings Institution’s Center for Middle East Policy, soon after the U.S. decision was announced. “So when UNRWA cuts back services in the impoverished refugee camps in Lebanon, Jordan, Syria, the West Bank, and Gaza, what forces on the ground will fill the void? Whoever it is, they are unlikely to be America’s friends. Even the Israeli military knows that cutting funding for basic services to refugees is a recipe for disaster for Israel.”

While not necessarily linked to the U.S. action against UNRWA, a November 5th knife attack on tourists in Jerash, Jordan by a young, unemployed Palestinian refugee from the UNRWA-run Jerash Camp highlights a subgroup in Jordan susceptible to radicalization and to committing acts of violence.

According to the assailant’s family, the young man grew more radicalized after being unable to attend a higher education institution due to its cost and, subsequently, finding it difficult to find regular employment.

That is a story that may become even more common in Jordan if it cannot adequately support the growing demand for educational and other social services among its most economically vulnerable populations.


The author, his son, a local Bedouin host, and our guide, Abo Yazan Mahfoze

We eventually made it to Wadi Rum, one of God’s geologic masterworks in southern Jordan, where a number of well-known American movies have been filmed, including David Lean’s Lawrence of Arabia and, more recently, The Martian, starring Matt Damon.

Wadi Rum’s wind-eroded, sun-drenched sandstone and granite rock formations are beautiful and humbling.

“The Seven Pillars” rock formation in Wadi Rum (Photo by Kent R. Kroeger)

Among the region’s many striking geologic features are The Seven Pillars, which inspired the title of T. E. Lawrence’s autobiography, Seven Pillars of Wisdom, an account of his experiences during the Arab Revolt of 1916–18, when Lawrence was based in Wadi Rum as a member of the British forces in the region.

Lawrence grew to respect the harshness of the desert environment surrounding and encompassing Wadi Rum, also known as the Valley of the Moon. “By day the hot sun fermented us; and we were dizzied by the beating wind. At night we were stained by dew, and shamed into pettiness by the innumerable silences of stars,” he wrote.

The author, trying to keep his hat on (which he eventually lost to a camel), and his son, in the back of an ATV in Wadi Rum (Photo by Christa Olson)

All the same, from the vantage point of a sporty all-terrain vehicle, Wadi Rum seemed quite hospitable to the human desire for play, almost calling out to have you run up one of its limitless number of sand dunes or scramble to the top of one of its towering, craggy rock faces.

We learned quickly, however. Don’t do that.

While it’s sad to think mass tourism might someday harm the natural beauty of Wadi Rum, the region has many assets working in its favor to mitigate that problem. For one, its relentless, sand-saturated winds can wipe out evidence of human activity in just minutes. Furthermore, if its hair-roasting, bone-dry heat doesn’t convince you of its hostility towards humans, a presumptuous dash up one of its sand dunes will more than break whatever remaining spirit you still possess.

“Try running up this sand dune,” challenged Abo, as he stopped the ATV in the midst of a relatively narrow passage bordered by two parallel rock formations, each about 200 feet in height with walls of sand drifting up perhaps 20 to 30 feet at their base.

“How hard can this be?” I shouted towards Zach, who was 10 feet up the sand drift before he had a chance to reply.

Zach at the top of a sand drift in Wadi Rum (Photo by Kent R. Kroeger)

How hard could it be? Really? Zach was already a few feet from the top, talking smack back in my direction.

“Dad, you’re too fat,” was his encouragement.

It did seem like Zach’s 90-pound frame coasted up the drift, whereas my 185-pound heft seemed to get increasingly swallowed by the sand with every step. And when the sand wasn’t chewing me up, it was pushing me back down the dune.

What took Zach maybe five minutes to accomplish, I needed 20, taking a brief respite at the three-quarter mark, a demarcation I dubbed the ‘death zone,’ seriously wondering if my heart might stop if I took one more step. Combined with the energy-sapping heat, I genuinely entertained the possibility I could die right there, on a nameless sand dune in a place that was beginning to look more like Mars than Earth. The warning that someone lost in this desert on an especially hot day could die within hours was becoming palpable.

A month removed from my attempt to scale what was, objectively, a small sand dune, my back still hasn’t forgiven me and I’m still digging sand out of my fingernails.

Wadi Rum’s natural environment is beautiful but not made with humans in mind. Its uncharitable ecosystem makes it even more remarkable that a people like the Bedouins — nomads who live in the desert — have survived off this land since the time of the Old Testament. Jordan alone has 360,000 Bedouins living within its borders, more than any other country except Syria and Saudi Arabia.

A Bedouin Shepherd (Photo by Ed Brambley)

Where Westerners prefer occupying and terraforming hostile environments to fit their wants, the Bedouins have, out of necessity, adapted to the challenges of the Arabian desert. And, to a similar extent, the Arabs throughout this region share this accommodative trait with the Bedouins. The Arabic term Naseeb — meaning fate or destiny — is part of the cultural bedrock for Muslims, reinforcing their deference to Allah’s will. In American colloquial language, we’d say the Jordanians and the Bedouins know how to go with the flow.

Has this traditional, almost fatalistic attitude held back economic development and social progress in the Middle East? I’m skeptical, though Western social scientists love using culture-based, Clash of Civilizations-type theories to explain variation in economic and social development across world cultures.

But as I lay on the side of that sand dune in Wadi Rum wondering how long it would take for the wind to completely cover my lifeless body in sand, the last thing on my mind was whether cultural norms centered on self-expression and secular-rational values are best at promoting economic and social development.

I just wanted to get off that sand dune and to our lodging for the night — a tent. A very nice, air-conditioned “tent”…with satellite TV.

The fat author and his son, happy with their “tent” in Wadi Rum (Photo by Christa Olson)